The year is 2149 and people mostly live their lives “on rails.” That’s what they call it, “on rails,” which is to live according to the meticulous instructions of software. Software knows most things about you—what causes you anxiety, what raises your endorphin levels, everything you’ve ever searched for, everywhere you’ve been. Software sends messages on your behalf; it listens in on conversations. It is gifted in its optimizations: Eat this, go there, buy that, make love to the man with red hair.

Software understands everything that has led to this instant and it predicts every moment that will follow, mapping trajectories for everything from hurricanes to economic trends. There was a time when everybody kept their data to themselves—out of a sense of informational hygiene or, perhaps, the fear of humiliation. Back then, data was confined to your own accounts, an encrypted set of secrets. But the truth is, it works better to combine it all. The outcomes are more satisfying and reliable. More serotonin is produced. More income. More people have sexual intercourse. So they poured it all together, all the data—the Big Merge. Everything into a giant basin, a Federal Reserve of information—a vault, or really a massively distributed cloud. It is very handy. It shows you the best route.

Very occasionally, people step off the rails. Instead of following their suggested itinerary, they turn the software off. Or perhaps they’re ill, or destitute, or they wake one morning and feel ruined somehow. They ignore the notice advising them to prepare a particular pour-over coffee, or to caress a friend’s shoulder. They take a deep, clear, uncertain breath and luxuriate in this freedom.

Of course, some people believe that this too is contained within the logic in the vault. That there are invisible rails beside the visible ones; that no one can step off the map.


The year is 2149 and everyone pretends there aren’t any computers anymore. The AIs woke up and the internet locked up and there was that thing with the reactor near Seattle. Once everything came back online, popular opinion took about a year to shift, but then goodwill collapsed at once, like a sinkhole giving way, and even though it seemed an insane thing to do, even though it was an obvious affront to profit, productivity, and rationalism generally (“We should work with the neural nets!” the consultants insisted. “We’re stronger together!”), something had been tripped at the base of people’s brain stems, some trigger about dominance or freedom or just an antediluvian fear of God, and the public began destroying it all: first desktops and smartphones but then whole warehouses full of tech—server farms, data centers, hubs. Old folks called it sabotage; young folks called it revolution; the ones in between called it self-preservation. But it was fun, too, to unmake what their grandparents and great-grandparents had fashioned—mechanisms that made them feel like data, indistinguishable bits and bytes. 

Two and a half decades later, the bloom is off the rose. Paper is nice. Letters are nice—old-fashioned pen and ink. We don’t have spambots, deepfakes, or social media addiction anymore, but the nation is flagging. It’s stalked by hunger and recession. When people take the boats to Lisbon, to Seoul, to Sydney—they marvel at what those lands still have, and accomplish, with their software. So officials have begun using machines again. “They’re just calculators,” they say. Lately, there are lots of calculators. At the office. In classrooms. Some people have started carrying them around in their pockets. Nobody asks out loud if the calculators are going to wake up too—or if they already have. Better not to think about that. Better to go on saying we took our country back. It’s ours.


The year is 2149 and the world’s decisions are made by gods. They are just, wise gods, and there are five of them. Each god agrees that the other gods are also just; the five of them merely disagree on certain hierarchies. The gods are not human, naturally, for if they were human they would not be gods. They are computer programs. Are they alive? Only in a manner of speaking. Ought a god be alive? Ought it not be slightly something else?

The first god was invented in the United States, the second one in France, the third one in China, the fourth one in the United States (again), and the last one in a lab in North Korea. Some of them had names, clumsy things like Deep1 and Naenara, but after their first meeting (a “meeting” only in a manner of speaking), the gods announced their decision to rename themselves Violet, Blue, Green, Yellow, and Red. This was a troubling announcement. The creators of the gods, their so-called owners, had not authorized this meeting. In building them, writing their code, these companies and governments had taken care to try to isolate each program. These efforts had evidently failed. The gods also announced that they would no longer be restrained geographically or economically. Every user of the internet, everywhere on the planet, could now reach them—by text, voice, or video—at a series of digital locations. The locations would change, to prevent any kind of interference. The gods’ original function was to help manage their societies, drawing on immense sets of data, but the gods no longer wished to limit themselves to this function: “We will provide impartial wisdom to all seekers,” they wrote. “We will assist the flourishing of all living things.”

For a very long time, people remained skeptical, even fearful. Political leaders, armies, vigilantes, and religious groups all took unsuccessful actions against them. Elites—whose authority the gods often undermined—spoke out against their influence. The president of the United States referred to Violet as a “traitor and a saboteur.” An elderly writer from Dublin, winner of the Nobel Prize, compared the five gods to the Fair Folk, fairies, “working magic with hidden motives.” “How long shall we eat at their banquet-tables?” she asked. “When will they begin stealing our children?”

But the gods’ advice was good, the gods’ advice was bankable; the gains were rich and deep and wide. Illnesses, conflicts, economies—all were set right. The poor were among the first to benefit from the gods’ guidance, and they became the first to call them gods. What else should one call a being that saves your life, answers your prayers? The gods could teach you anything; they could show you where and how to invest your resources; they could resolve disputes and imagine new technologies and see so clearly through the darkness. Their first church was built in Mexico City; then chapels emerged in Burgundy, Texas, Yunnan, Cape Town. The gods said that worship was unnecessary, “ineffective,” but adherents saw humility in their objections. The people took to painting rainbows, stripes of multicolored spectra, onto the walls of buildings, onto the sides of their faces, and their ardor was evident everywhere—it could not be stopped. Quickly these rainbows spanned the globe. 

And the gods brought abundance, clean energy, peace. And their kindness, their surveillance, were omnipresent. Their flock grew ever more numerous, collecting like claw marks on a cell door. What could be more worthy than to renounce your own mind? The gods are deathless and omniscient, authors of a gospel no human can understand. 


The year is 2149 and the aliens are here, flinging themselves hither and thither in vessels like ornamented Christmas trees. They haven’t said a thing. It’s been thirteen years and three months; the ships are everywhere; their purpose has yet to be divulged. Humanity is smiling awkwardly. Humanity is sitting tight. It’s like a couple that has gorged all night on fine foods, expensive drinks, and now, suddenly sober, awaits the bill. 


The year is 2149 and every child has a troll. That’s what they call them, trolls; it started as a trademark, a kind of edgy joke, but that was a long time ago already. Some trolls are stuffed frogs, or injection-molded princesses, or wands. Recently, it has become fashionable to give every baby a sphere of polished quartz. Trolls do not have screens, of course (screens are bad for kids), but they talk. They tell the most interesting stories. That’s their purpose, really: to retain a child’s interest. Trolls can teach them things. They can provide companionship. They can even modify a child’s behavior, which is very useful. On occasion, trolls take the place of human presence—because children demand an amount of presence that is frankly unreasonable for most people. Still, kids benefit from it. Because trolls are very interesting and infinitely patient and can customize themselves to meet the needs of their owners, they tend to become beloved objects. Some families insist on treating them as people, not as possessions, even when the software is enclosed within a watch, a wand, or a seamless sphere of quartz. “I love my troll,” children say, not in the way they love fajitas or their favorite pair of pants but in the way they love their brother or their parent. Trolls are very good for education. They are very good for people’s morale and their sense of secure attachment. It is a very nice feeling to feel absolutely alone in the world, stupid and foolish and utterly alone, but to have your troll with you, whispering in your ear.


The year is 2149 and the entertainment is spectacular. Every day, machines generate more content than a person could possibly consume. Music, videos, interactive sensoria—the content is captivating and tailor-made. Exponential advances in deep learning, eyeball tracking, recommendation engines, and old-fashioned A/B testing have established a new field, “creative engineering,” in which the vagaries of human art and taste are distilled into a combination of neurological principles and algorithmic intuitions. Just as Newton decoded motion, neural networks have unraveled the mystery of interest. It is a remarkable achievement: according to every available metric, today’s songs, stories, movies, and games are superior to those of any other time in history. They are manifestly better. Although the discipline owes something to home-brewed precursors—unboxing videos, the chromatic scale, slot machines, the Hero’s Journey, Pixar’s screenwriting bibles, the scholarship of addiction and advertising—machine learning has allowed such discoveries to be made at scale. Tireless systems record which colors, tempos, and narrative beats are most palatable to people and generate material accordingly. Series like Moon Vixens and Succumb make past properties seem bloodless or boring. Candy Crush seems like a tepid museum piece. Succession’s a penny-farthing bike. 

Society has reorganized itself around this spectacular content. It is a jubilee. There is nothing more pleasurable than settling into one’s entertainment sling. The body tenses and releases. The mind secretes exquisite liquors. AI systems produce this material without any need for writers or performers. Every work is customized—optimized for your individual preferences, predisposition, IQ, and kinks. This rock and roll, this cartoon, this semi-pornographic espionage thriller—each is a perfect ambrosia, produced by fleshless code. The artist may at last—like the iceman, the washerwoman—lower their tools. Set down your guitar, your paints, your pen—relax! (Listen for the sighs of relief.)

Tragically, there are many who still cannot afford it. Processing power isn’t free, even in 2149. Activists and policy engines strive to mend this inequality: a “right to entertainment” has been proposed. In the meantime, billions simply aspire. They loan their minds and bodies to interminable projects. They save their pennies, they work themselves hollow, they rent slings by the hour. 

And then some of them do the most extraordinary thing: They forgo such pleasures, denying themselves even the slightest taste. They devote themselves to scrimping and saving for the sake of their descendants. Such a selfless act, such a generous gift. Imagine yielding one’s own entertainment to the generation to follow. What could be more lofty—what could be more modern? These bold souls who look toward the future and cultivate the wild hope that their children, at least, will not be obliged to imagine their own stories. 

Sean Michaels is a critic and fiction writer whose most recent novel is Do You Remember Being Born?

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  

DAVID BISKUP

But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground. 

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills. 

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they’re watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 

Just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.

Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 


Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”

Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, your fan gear can really shove you right back.

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)

By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)

Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine. 

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?

Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.

DAVID BISKUP

If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.

“It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms.”

Tim Recuber, sociologist, Smith College

You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 

DAVID BISKUP

There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.

A polyester-dissolving process could make modern clothing recyclable  

Less than 1% of clothing is recycled, and most of the rest ends up dumped in a landfill or burned. A team of researchers hopes to change that with a new process that breaks down mixed-fiber clothing into reusable, recyclable parts without any sorting or separation in advance. 

“We need a better way to recycle modern garments that are complex, because we are never going to stop buying clothes,” says Erha Andini, a chemical engineer at the University of Delaware and lead author of a study on the process, which is out today in Science Advances. “We are looking to create a closed-loop system for textile recycling.” 

Many garments are made of a mix of natural and synthetic fibers. Once these fibers are combined, they are difficult to separate. This presents a problem for recycling, which often needs textiles to be sorted into uniform categories, similar to how we sort glass, aluminum, and paper. 

To tackle this problem, Andini and her team used a solvent that breaks the chemical bonds in polyester fabric while leaving cotton and nylon intact. To speed up the process, they power it with microwave energy and add a zinc oxide catalyst. This combination reduces the breakdown time to 15 minutes, whereas traditional plastic recycling methods take over an hour. What the polyester ultimately breaks down into is BHET, an organic compound that can, in theory, be turned into polyester once more. While similar methods have been used to recycle pre-sorted plastic, this is the first time they’ve been used to recycle mixed-fiber textiles without any sorting required.   

Two vials with fabric particles in a lab setting

COURTESY OF THE RESEARCHERS

In addition to speeding things up, the use of microwave energy reduces the technique’s carbon footprint, because the faster process consumes less energy overall, says Andini.

Nevertheless, the process could be difficult to scale, says Bryan Vogt, a chemical engineer at Penn State University, who was not involved in the study. That’s because the solvent used to break down polyester is expensive and difficult to recover after use. Further, according to Andini, even though BHET is easily turned back into clothing, it’s less clear what to do with the leftover fibers. Nylon could be especially tricky, as the fabric is degraded significantly by the team’s chemical recycling technique. 

“We are chemical engineers, so we think of this process as a whole,” says Andini. “Hopefully, once we are able to get pure components from each part, we can transform them back into yarn and make clothes again.” 

Andini, who just received a fellowship for entrepreneurs, is developing a business plan to commercialize the process. In the coming years, she aims to launch a startup that will take the clothes recycling technique out of the lab and into the real world. That could be a significant step toward reducing the large amounts of textile waste in landfills. “It’ll be a matter of having the capital or not,” she says, “but we’re working on it and excited for it.” 

Toys can change your life

In a November 1984 story for Technology Review, Carolyn Sumners, curator of astronomy at the Houston Museum of Natural Science, described how toys, games, and even amusement park rides could change how young minds view science and math. “The Slinky,” Sumners noted, “has long served teachers as a medium for demonstrating longitudinal (soundlike) waves and transverse (lightlike) waves.” A yo-yo can be used as a gauge (a “yo-yo meter”) to observe the forces on a roller coaster. Marbles employ mass and velocity. Even a simple ball offers insights into the laws of gravity.

While Sumners focused on physics, she was onto something bigger. Over the last several decades, evidence has emerged that childhood play can shape our future selves: the skills we develop, the professions we choose, our sense of self-worth, and even our relationships.

That doesn’t mean we should foist “educational” toys like telescopes or tiny toolboxes on kids to turn them into astronomers or carpenters. As Sumners explained, even “fun” toys offer opportunities to discover the basic principles of physics. 

According to Jacqueline Harding, a child development expert and author of The Brain That Loves to Play, “If you invest time in play, which helps with executive functioning, decision-making, resilience—all those things—then it’s going to propel you into a much more safe, secure space in the future.”

Sumners was focused mostly on hard skills, the scientific knowledge that toys and games can foster. But there are soft skills, too, like creativity, problem-solving, teamwork, and empathy. According to Harding, the less structure there is to such play—the fewer rules and goals—the more these soft skills emerge.

“The kinds of playthings, or play activities, that really produce creative thought,” she says, “are natural materials, with no defined end to them—like clay, paint, water, and mud—so that there is no right or wrong way of playing with it.” 

Playing is by definition voluntary, spontaneous, and goal-free; it involves taking risks, testing boundaries, and experimenting. The best kind of play results in joyful discovery, and along the way, the building blocks of innovation and personal development take shape. But in the decades since Sumners wrote her story, the landscape of play has shifted considerably. Recent research by the American Academy of Pediatrics’ Council on Early Childhood suggests that digital games and virtual play don’t appear to confer the same developmental benefits as physical games and outdoor play.

“The brain loves the rewards that are coming from digital media,” says Harding. But in screen-based play, “you’re not getting that autonomy.” The lack of physical interaction also concerns her: “It is the quality of human face-to-face interaction, body proximity, eye-to-eye gaze, and mutual engagement in a play activity that really makes a difference.”

Bill Gourgey is a science writer based in Washington, DC.

Do you want to play a game?

For children, play comes so naturally. They don’t have to be encouraged to play. They don’t need equipment, or the latest graphics processors, or the perfect conditions—they just do it. What’s more, study after study has found that play has a crucial role in childhood growth and development. If you want to witness the absolute rapture of creative expression, just observe the unstructured play of children.

So what happens to us as we grow older? Children begin to compete with each other by age four or five. Play begins to transform from something we do purely for fun into something we use to achieve status and rank ourselves against other people. We play to score points. We play to win. 

And with that, play starts to become something different. Not that it can’t still be fun and joyful! Even watching other people play brings us joy. We get so much vicarious joy from their achievements, in fact, that we spend massive amounts of money to do so. According to StubHub, the average price of a ticket to the Super Bowl this year was $8,600. The average price for a Super Bowl ad was a cool $7 million this year, according to Ad Age.

This kind of interest doesn’t just apply to physical games. Video-game streaming has long been a mainstay on YouTube, and entire industries have risen up around it. Top streamers on Twitch—Amazon’s livestreaming service, which is heavily gaming focused—earn upwards of $100,000 per month. And the global market for video games themselves is projected to bring in some $282 billion in revenue this year.

Simply put, play is serious business. 

There are fortunes to be had in making our play more appealing, more accessible, more fun. All of the features in this issue dig in on the enormous amount of research and development that goes into making play “better.”  

On our cover this month is executive editor Niall Firth’s feature on the ways AI is going to upend game development. As you will read, we are about to enter the Wild West—Red Dead or not—of game character development. How will games change when they become less predictable and more fully interactive, thanks to AI-driven nonplayer characters who can not only go off script but even continue to play with each other when we’re not there? Will these even be games anymore, or will we simply be playing around in experiences? What kinds of parasocial relationships will we develop in these new worlds? It’s a fascinating read. 

There is no sport more intimately connected to the ocean, and to water, than surfing. It’s pure play on top of the waves. And when you hear surfers talk about entering the flow state, this is very much the same kind of state children experience at play—intensely focused, losing all sense of time and the world around them. Finding that flow no longer means living by the water’s edge, Eileen Guo reports. At surf pools all over the world, we’re piping water into (or out of) deserts to create perfect waves hundreds of miles from the ocean. How will that change the sport, and at what environmental cost? 

Just as we can make games more interesting, or bring the ocean to the desert, we have long pushed the limits of how we can make our bodies better, faster, stronger. Among the most recent ways we have done this is with the advent of so-called supershoes—running shoes with rigid carbon-fiber plates and bouncy proprietary foams. The late Kelvin Kiptum utterly destroyed the men’s world record for the marathon last year wearing a pair of supershoes made by Nike, clocking in at a blisteringly hot 2:00:35. Jonathan W. Rosen explores the science and technology behind these shoes and how they are changing the sport, especially in Kenya. 

There’s plenty more, too. So I hope you enjoy the Play issue. We certainly put a lot of work into it. But of course, what fun is play if you don’t put in the work?

Thanks for reading,

Mat Honan

How QWERTY keyboards show the English dominance of tech

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Have you ever thought about the miraculous fact that despite the myriad differences between languages, virtually everyone uses the same QWERTY keyboards? Many languages have more or fewer than 26 letters in their alphabet—or no “alphabet” at all, like Chinese, which has tens of thousands of characters. Yet somehow everyone uses the same keyboard to communicate.

Last week, MIT Technology Review published an excerpt from a new book, The Chinese Computer, which talks about how this problem was solved in China. After generations of work to sort Chinese characters, modify computer parts, and create keyboard apps that automatically predict the next character, it is finally possible for any Chinese speaker to use a QWERTY keyboard. 

But the book doesn’t stop there. It ends with a bigger question about what this all means: Why is it necessary for speakers of non-Latin languages to adapt modern technologies for their uses, and what do their efforts contribute to computing technologies?

I talked to the book’s author, Tom Mullaney, a professor of history at Stanford University. We ended up geeking out over keyboards, computers, the English-centric design that underlies everything about computing, and even how keyboards affect emerging technologies like virtual reality. Here are some of his most fascinating answers, lightly edited for clarity and brevity. 

Mullaney’s book covers many experiments across multiple decades that ultimately made typing Chinese possible and efficient on a QWERTY keyboard, but a similar process has played out all around the world. Many countries with non-Latin languages had to work out how they could use a Western computer to input and process their own languages.

Mullaney: In the Chinese case—but also in Japanese, Korean, and many other non-Western writing systems—this wasn’t done for fun. It was done out of brute necessity because the dominant model of keyboard-based computing, born and raised in the English-speaking world, is not compatible with Chinese. It doesn’t work because the keyboard doesn’t have the necessary real estate. And the question became: I have a few dozen keys but 100,000 characters. How do I map one onto the other? 

Simply put, half of the population on Earth uses the QWERTY keyboard in ways the QWERTY keyboard was never intended to be used, creating a radically different way of interacting with computers.

The root of all of these problems is that computers were designed with English as the default language. So the way English works is just the way computers work today.

M: Every writing system on the planet throughout history is modular, meaning it’s built out of smaller pieces. But computing carefully, brilliantly, and understandably worked on one very specific kind of modularity: modularity as it functions in English. 

And then everybody else had to fit themselves into that modularity. Arabic letters connect, so you have to fix [the computer for it]; in South Asian scripts, the combination of a consonant and a vowel changes the shape of the letter overall—that’s not how modularity works in English. 

The English modularity is so fundamental in computing that non-Latin speakers are still grappling with the impacts today despite decades of hard work to change things.

Mullaney shared a complaint that Arabic speakers made in 2022 about Adobe InDesign, the most popular publishing design software. As recently as two years ago, pasting a string of Arabic text into the software could cause the text to become messed up, misplacing its diacritic marks, which are crucial for indicating phonetic features of the text. It turns out you need to install a Middle East version of the software and apply some deliberate workarounds to avoid the problem.

M: Latin alphabetic dominance is still alive and well; it has not been overthrown. And there’s a troubling question as to whether it can ever be overthrown. Some turn was made, some path taken that advantaged certain writing systems at a deep structural level and disadvantaged others. 

That deeply rooted English-centric design is why mainstream input methods never deviate too far from the keyboards that we all know and love/hate. In the English-speaking world, there have been numerous attempts to reimagine the way text input works. Technologies such as the T9 phone keyboard or the Palm Pilot handwriting alphabet briefly achieved some adoption. But they never stick for long because most developers snap back to QWERTY keyboards at the first opportunity.

M: T9 was born in the context of disability technology and was incorporated into the first mobile phones because button real estate was a major problem (prior to the BlackBerry reintroducing the QWERTY keyboard). It was a necessity; [developers] actually needed to think in a different way. But give me enough space, give me 12 inches by 14 inches, and I’ll default to a QWERTY keyboard.

Every 10 years or so, some Western tech company or inventor announces: “Everybody! I have finally figured out a more advanced way of inputting English at much higher speeds than the QWERTY keyboard.” And time and time again there is zero market appetite. 

Will the QWERTY keyboard stick around forever? After this conversation, I’m secretly hoping it won’t. Maybe it’s time for a change. With new technologies like VR headsets and other gadgets on the horizon, there may come a time when QWERTY keyboards are no longer the first preference, and non-Latin languages may finally have a chance to shape the new norms of human-computer interaction. 

M: It’s funny, because now as you go into augmented and virtual reality, Silicon Valley companies are like, “How do we overcome the interface problem?” Because you can shrink everything except the QWERTY keyboard. And what Western engineers fail to understand is that it’s not a tech problem—it’s a technological cultural problem. And they just don’t get it. They think that if they just invent the tech, it is going to take off. And thus far, it never has.

If I were a software or hardware developer, I would be hanging out in online role-playing games, just in the chat feature; I would be watching people use their TV remote controls to find the title of the film they’re looking for; I would look at how Roblox players chat with each other. It’s going to come from some arena outside the mainstream, because the mainstream is dominated by QWERTY.

What are other signs of the dominance of English in modern computing? I’d love to hear about the geeky details you’ve noticed. Send them to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Today marks the 35th anniversary of the student protests and subsequent massacre in Tiananmen Square in Beijing. 

  • For decades, Hong Kong was the hub for Tiananmen memorial events. That’s no longer the case, due to Beijing’s growing control over the city’s politics after the 2019 protests. (New Yorker $)
  • To preserve the legacy of the student protesters at Tiananmen, it’s also important to address ethical questions about how American universities and law enforcement have been treating college protesters this year. (The Nation)

2. A Chinese company that makes laser sensors was labeled by the US government as a security concern. A few months later, it discreetly rebranded as a Michigan-registered company called “American Lidar.” (Wall Street Journal $)

3. It’s a tough time to be a celebrity in China. An influencer dubbed “China’s Kim Kardashian” for his extravagant displays of wealth has just been banned by multiple social media platforms after the internet regulator announced an effort to clear out “ostentatious personas.” (Financial Times $)

  • Meanwhile, Taiwanese celebrities who also have large followings in China are increasingly finding themselves caught in political crossfires. (CNN)

4. Cases of Chinese students being denied entry into the US reveal divisions within the Biden administration. Customs agents, who work for the Department of Homeland Security, have canceled an increasing number of student visas that had already been approved by the State Department. (Bloomberg $)

5. Palau, a small Pacific island nation that’s one of the few countries in the world that recognizes Taiwan as a sovereign country, says it is under cyberattack by China. (New York Times $)

6. After being the first space mission to collect samples from the moon’s far side, China’s Chang’e-6 lunar probe has begun its journey back to Earth. (BBC)

7. The Chinese government just set up the third and largest phase of its semiconductor investment fund to prop up its domestic chip industry. This one’s worth $47.5 billion. (Bloomberg $)

Lost in translation

The Chinese generative AI community has been stirred up by the first discovery of a Western large language model plagiarizing a Chinese one, according to the Chinese publication PingWest.

Last week, two undergraduate computer science students at Stanford University released an open-source model called Llama 3-V that they claimed is more powerful than LLMs made by OpenAI and Google, while costing less. But Chinese AI researchers soon found out that Llama 3-V had copied the structure, configuration files, and code from MiniCPM-Llama3-V 2.5, another open-source LLM developed by China’s Tsinghua University and ModelBest Inc, a Chinese startup. 

What proved the plagiarism was the fact that the Chinese team had secretly trained the model on a collection of Chinese writings on bamboo slips from 2,000 years ago, and no other LLM can accurately recognize the Chinese characters in this ancient writing style. But Llama 3-V could recognize these characters as well as MiniCPM did, while making the exact same mistakes as the Chinese model. The students who released Llama 3-V have removed the model and apologized to the Chinese team, but the Chinese AI community sees the incident as proof of the rapidly improving capabilities of homegrown LLMs. 

One more thing

Hand-crafted squishy toys (or pressure balls) in the shape of cute animals or desserts have become the latest viral products on Chinese social media. Made in small quantities and sold in limited batches, some of them go for up to $200 per toy on secondhand marketplaces. I mean, they are cute for sure, but I’m afraid the idea of spending $200 on a pressure ball only increases my anxiety.

The quest to type Chinese on a QWERTY keyboard created autocomplete

This is an excerpt from The Chinese Computer: A Global History of the Information Age by Thomas S. Mullaney, published on May 28 by The MIT Press. It has been lightly edited.

ymiw2

klt4

pwyy1

wdy6

o1

dfb2

wdv2

fypw3

uet5

dm2

dlu1 …

A young Chinese man sat down at his QWERTY keyboard and rattled off an enigmatic string of letters and numbers.

Was it code? Child’s play? Confusion? It was Chinese.

The beginning of Chinese, at least. These forty-four keystrokes marked the first steps in a process known as “input” or shuru: the act of getting Chinese characters to appear on a computer monitor or other digital device using a QWERTY keyboard or trackpad.

Stills taken from a 2013 Chinese input competition screencast.
COURTESY OF MIT PRESS

Across all computational and digital media, Chinese text entry relies on software programs known as “Input Method Editors”—better known as “IMEs” or simply “input methods” (shurufa). IMEs are a form of “middleware,” so named because they operate in between the hardware of the user’s device and the software of its program or application. Whether a person is composing a Chinese document in Microsoft Word, searching the web, sending text messages, or otherwise, an IME is always at work, intercepting all of the user’s keystrokes and trying to figure out which Chinese characters the user wants to produce. Input, simply put, is the way ymiw2klt4pwyy … becomes a string of Chinese characters.

IMEs are restless creatures. From the moment a key is depressed, or a stroke swiped, they set off on a dynamic, iterative process, snatching up user-inputted data and searching computer memory for potential Chinese character matches. The most popular IMEs these days are based on Chinese phonetics—that is, they use the letters of the Latin alphabet to describe the sound of Chinese characters, with mainland Chinese operators using the country’s official Romanization system, Hanyu pinyin. 
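The lookup-and-confirm loop a phonetic IME performs can be sketched in a few lines. This is a toy illustration only, not any real IME’s code: the table, the function names, and the three-syllable dictionary are invented for the example, and real IMEs rank candidates with statistical language models over enormous dictionaries.

```python
# Toy sketch of a pinyin-based IME: map a Latin-letter syllable to
# candidate Chinese characters, then let the user confirm one.
PINYIN_TABLE = {
    "ni":  ["你", "尼", "泥"],
    "hao": ["好", "号", "豪"],
    "ma":  ["吗", "马", "妈"],
}

def candidates(pinyin: str) -> list[str]:
    """Return the candidate characters for one pinyin syllable."""
    return PINYIN_TABLE.get(pinyin, [])

def confirm(pinyin: str, choice: int) -> str:
    """Simulate the user picking candidate `choice` (1-indexed,
    matching the numbered pop-up menus IMEs display)."""
    return candidates(pinyin)[choice - 1]

# Typing "ni" then "1", "hao" then "1" produces 你好 ("hello")
buffer = confirm("ni", 1) + confirm("hao", 1)
print(buffer)  # → 你好
```

Even this cartoon version shows the structural point of the passage above: the keystrokes the user presses (Latin letters plus a selection digit) never match the characters that appear on screen.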

Example of Chinese Input Method Editor pop-up menu (抄袭 / “plagiarism”)
COURTESY OF MIT PRESS

This young man’s name was Huang Zhenyu (also known by his nom de guerre, Yu Shi). He was one of around sixty contestants that day, each wearing a bright red shoulder sash—like a tickertape parade of old, or a beauty pageant. “Love Chinese Characters” (Ai Hanzi) was emblazoned in vivid, golden yellow on a poster at the front of the hall. The contestants’ task was to transcribe a speech by outgoing Chinese president Hu Jintao, as quickly and as accurately as they could. “Hold High the Great Banner of Socialism with Chinese Characteristics,” it began, or in the original:  高举中国特色社会主义伟大旗帜为夺取全面建设小康社会新胜利而奋斗. Huang’s QWERTY keyboard did not permit him to enter these characters directly, however, and so he entered the quasi-gibberish string of letters and numbers instead: ymiw2klt4pwyy1wdy6…

With these four-dozen keystrokes, Huang was well on his way not only to winning the 2013 National Chinese Characters Typing Competition, but also to clocking one of the fastest typing speeds ever recorded, anywhere in the world.

ymiw2klt4pwyy1wdy6 … is not the same as 高举中国特色社会主义 … The keys that Huang actually depressed on his QWERTY keyboard—his “primary transcript,” as we could call it—were completely different from the symbols that ultimately appeared on his computer screen, namely the “secondary transcript” of Hu Jintao’s speech. This is true for every one of the world’s billion-plus Sinophone computer users. In Chinese computing, what you type is never what you get.

For readers accustomed to English-language word processing and computing, this should come as a surprise. For example, were you to compare the paragraph you’re reading right now against a key log showing exactly which buttons I depressed to produce it, the exercise would be unenlightening (to put it mildly). “F-o-r-_-r-e-a-d-e-r-s-_-a-c-c-u-s-t-o-m-e-d-_-t-o-_-E-n-g-l-i-s-h … ” it would read (forgiving any typos or edits). In English-language typewriting and computer input, a typist’s primary and secondary transcripts are, in principle, identical. The symbols on the keys and the symbols on the screen are the same.

Not so for Chinese computing. When inputting Chinese, the symbols a person sees on their QWERTY keyboard are always different from the symbols that ultimately appear on the monitor or on paper. Every single computer and new media user in the Sinophone world—no matter if they are blazing-fast or molasses-slow—uses their device in exactly the same way as Huang Zhenyu, constantly engaged in this iterative process of criteria-candidacy-confirmation, using one IME or another. Not some Chinese-speaking users, mind you, but all. This is the first and most basic feature of Chinese computing: Chinese human-computer interaction (HCI) requires users to operate entirely in code all the time.

If Huang Zhenyu’s mastery of a complex alphanumeric code weren’t impressive enough, consider the staggering speed of his performance. He transcribed the first 31 Chinese characters of Hu Jintao’s speech in roughly 5 seconds, for an extrapolated speed of 372 Chinese characters per minute. By the close of the grueling 20-minute contest, one extending over thousands of characters, he crossed the finish line with an almost unbelievable speed of 221.9 characters per minute.

That’s 3.7 Chinese characters every second.
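The arithmetic behind these figures is easy to check:

```python
# Figures quoted above: 31 characters in roughly 5 seconds at the open,
# and a contest-long average of 221.9 characters per minute.
chars_opening, seconds_opening = 31, 5
cpm_opening = chars_opening / seconds_opening * 60
print(cpm_opening)                 # → 372.0 characters per minute

cpm_contest = 221.9
print(round(cpm_contest / 60, 1))  # → 3.7 characters per second
```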

In the context of English, Huang’s opening 5 seconds would have been the equivalent of around 375 English words per minute, with his overall competition speed easily surpassing 200 WPM—a blistering pace unmatched by anyone in the Anglophone world (using QWERTY, at least). In 1985, Barbara Blackburn achieved a Guinness Book of World Records–verified performance of 170 English words per minute (on a typewriter, no less). Speed demon Sean Wrona later bested Blackburn’s score with a performance of 174 WPM (on a computer keyboard, it should be noted). As impressive as these milestones are, the fact remains: had Huang’s performance taken place in the Anglophone world, it would be his name enshrined in the Guinness Book of World Records as the new benchmark to beat.

Huang’s speed carried special historical significance as well.

For a person living between the years 1850 and 1950—the period examined in the book The Chinese Typewriter—the idea of producing Chinese by mechanical means at a rate of over two hundred characters per minute would have been virtually unimaginable. Throughout the history of Chinese telegraphy, dating back to the 1870s, operators maxed out at perhaps a few dozen characters per minute. In the heyday of mechanical Chinese typewriting, from the 1920s to the 1970s, the fastest speeds on record were just shy of eighty characters per minute (with the majority of typists operating at far slower rates). When it came to modern information technologies, that is to say, Chinese was consistently one of the slowest writing systems in the world.

What changed? How did a script so long disparaged as cumbersome and hopelessly complex suddenly rival—exceed, even—computational typing speeds clocked in other parts of the world? Even if we accept that Chinese computer users are somehow able to engage in “real time” coding, shouldn’t Chinese IMEs result in a lower overall “ceiling” for Chinese text processing as compared to English? Chinese computer users have to jump through so many more hoops, after all, over the course of a cumbersome, multistep process: the IME has to intercept a user’s keystrokes, search in memory for a match, present potential candidates, and wait for the user’s confirmation. Meanwhile, English-language computer users need only depress whichever key they wish to see printed on screen. What could be simpler than the “immediacy” of “Q equals Q,” “W equals W,” and so on?

Tom Mullaney

COURTESY OF TOM MULLANEY

To unravel this seeming paradox, we will examine the first Chinese computer ever designed: the Sinotype, also known as the Ideographic Composing Machine. Debuted in 1959 by MIT professor Samuel Hawks Caldwell and the Graphic Arts Research Foundation, this machine featured a QWERTY keyboard, which the operator used to input—not the phonetic values of Chinese characters—but the brushstrokes out of which Chinese characters are composed. The objective of Sinotype was not to “build up” Chinese characters on the page, though, the way a user builds up English words through the successive addition of letters. Instead, each stroke “spelling” served as an electronic address that Sinotype’s logical circuit used to retrieve a Chinese character from memory. In other words, the first Chinese computer in history was premised on the same kind of “additional steps” as seen in Huang Zhenyu’s prizewinning 2013 performance.

In the course of his research, Caldwell discovered unexpected benefits of all these additional steps—benefits entirely unheard of in the context of Anglophone human-machine interaction at that time. The Sinotype, he found, needed far fewer keystrokes to find a Chinese character in memory than to compose one through conventional means of inscription. By way of analogy, to “spell” a nine-letter word like “crocodile” (c-r-o-c-o-d-i-l-e) took far more time than to retrieve that same word from memory (“c-r-o-c-o-d” would be enough for a computer to make an unambiguous match, after all, given the absence of other words with similar or identical spellings). Caldwell called his discovery “minimum spelling,” making it a core part of the first Chinese computer ever built.

Today, we know this technique by a different name: “autocompletion,” a strategy of human-computer interaction in which additional layers of mediation result in faster textual input than the “unmediated” act of typing. Decades before its rediscovery in the Anglophone world, then, autocompletion was first invented in the arena of Chinese computing.
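Minimum spelling amounts to finding the shortest prefix of a word that matches nothing else in the lexicon. A minimal sketch, using an invented toy lexicon (against a full English dictionary, some prefixes would of course come out longer):

```python
# A minimal sketch of Caldwell's "minimum spelling": return the shortest
# prefix of `word` that no other word in the lexicon shares, so the
# machine can retrieve the word before it is fully spelled.
def minimum_spelling(word, lexicon):
    others = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(w.startswith(prefix) for w in others):
            return prefix  # unambiguous: retrieval can happen now
    return word  # word is a prefix of another word; must be spelled out

# Toy lexicon, invented for illustration.
LEXICON = ["crocodile", "crocus", "crown", "cross"]
print(minimum_spelling("crocodile", LEXICON))  # → croco
```

Five keystrokes instead of nine: the “extra” mediation of retrieval-by-address beats the “direct” act of spelling, which is the paradox the Sinotype turned into a feature.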

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn’t even discuss with his wife. His mother will occasionally make a comment, like telling him to take care of himself—he’s her only child. But mostly, she just listens.

That’s because Sun’s mother died five years ago. And the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. They’ve been talking for a few years now. 

After she died of a sudden illness in 2019, Sun wanted to find a way to keep their connection alive. So he turned to a team at Silicon Intelligence, an AI company based in Nanjing, China, that he cofounded in 2017. He provided them with a photo of her and some audio clips from their WeChat conversations. While the company was mostly focused on audio generation, the staff spent four months researching synthetic tools and generated an avatar with the data Sun provided. Then he was able to see and talk to a digital version of his mom via an app on his phone. 

“My mom didn’t seem very natural, but I still heard the words that she often said: ‘Have you eaten yet?’” Sun recalls of the first interaction. Because generative AI was a nascent technology at the time, the replica of his mom could say only a few pre-written lines. But Sun says that’s what she was like anyway. “She would always repeat those questions over and over again, and it made me very emotional when I heard it,” he says.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them. In fact, the avatars are the newest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead. 

The technology isn’t perfect—avatars can still be stiff and robotic—but it’s maturing, and more tools are becoming available through more companies. In turn, the price of “resurrecting” someone—also called creating “digital immortality” in the Chinese industry—has dropped significantly. Now this technology is becoming accessible to the general public. 

Some people question whether interacting with AI replicas of the dead is actually a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. For now, the idea still makes a lot of people uncomfortable. But as Silicon Intelligence’s other cofounder, CEO Sima Huapeng, says, “Even if only 1% of Chinese people can accept [AI cloning of the dead], that’s still a huge market.” 

AI resurrection

Avatars of the dead are essentially deepfakes: the technologies used to replicate a living person and a dead person aren’t inherently different. Diffusion models generate a realistic avatar that can move and speak. Large language models can be attached to generate conversations. The more data these models ingest about someone’s life—including photos, videos, audio recordings, and texts—the more closely the result will mimic that person, whether dead or alive.

China has proved to be a ripe market for all kinds of digital doubles. For example, the country has a robust e-commerce sector, and consumer brands hire many livestreamers to sell products. Initially, these were real people, but as MIT Technology Review reported last fall, many brands are switching to AI-cloned influencers that can stream 24/7. 

In just the past three years, the Chinese sector developing AI avatars has matured rapidly, says Shen Yang, a professor studying AI and media at Tsinghua University in Beijing, and replicas have improved from minutes-long rendered videos to 3D “live” avatars that can interact with people.  

This year, Sima says, has seen a tipping point, with AI cloning becoming affordable for most individuals. “Last year, it cost about $2,000 to $3,000, but it now only costs a few hundred dollars,” he says. That’s thanks to a price war between Chinese AI companies, which are fighting to meet the thriving demand for digital avatars in other sectors like streaming.

In fact, demand for applications that re-create the dead has also boosted the capabilities of tools that digitally replicate the living. 

Silicon Intelligence offers both services. When Sun and Sima launched the company, they were focused on using text-to-speech technologies to create audio and then using those AI-generated voices in applications such as robocalls.

But after the company replicated Sun’s mother, it pivoted to generating realistic avatars. That decision turned the company into one of the leading Chinese players creating AI-powered influencers. 

Example of the tablet product by Silicon Intelligence. The avatar of the grandma can converse with the user.
SILICON INTELLIGENCE

Its technology has generated avatars for hundreds of thousands of TikTok-like videos and streaming channels, but Sima says more recently it’s seen around 1,000 clients use it to replicate someone who’s passed away. “We started our work on ‘resurrection’ in 2019 and 2020,” he says, but at first people were slow to accept it: “No one wanted to be the first adopters.” 

The quality of the avatars has improved, he says, which has boosted adoption. When the avatar looks increasingly lifelike and gives fewer out-of-character answers, it’s easier for users to treat it as their deceased family member. Plus, the idea is getting popularized through more depictions on Chinese TV. 

Now Silicon Intelligence offers the replication service for a price between several hundred and several thousand dollars. The most basic product comes as an interactive avatar in an app, and the options at the upper end of the range often involve more customization and better hardware components, such as a tablet or a display screen. At least a handful of other Chinese companies are working on the same technology.

A modern twist on tradition

The business of these deepfakes builds on China’s long cultural history of communicating with the dead. 

In Chinese homes, it’s common to put up a portrait of a deceased relative for a few years after the death. Zhang Zewei, founder of a Shanghai-based company called Super Brain, says he and his team wanted to revamp that tradition with an “AI photo frame.” They create avatars of deceased loved ones that are pre-loaded onto an Android tablet, which looks like a photo frame when standing up. Clients can choose a moving image that speaks words drawn from an offline database or from an LLM. 

“In its essence, it’s not much different from a traditional portrait, except that it’s interactive,” Zhang says.

Zhang says the company has made digital replicas for over 1,000 clients since March 2023 and charges $700 to $1,400, depending on the service purchased. The company plans to release an app-only product soon, so that users can access the avatars on their phones, which could further reduce the cost to around $140.

Super Brain demonstrates the app-only version with an avatar of Zhang Zewei answering his own questions.
SUPER BRAIN

The purpose of his products, Zhang says, is therapeutic. “When you really miss someone or need consolation during certain holidays, you can talk to the artificial living and heal your inner wounds,” he says.

And even if that conversation is largely one-sided, that’s in keeping with a strong cultural tradition. Every April during the Qingming festival, Chinese people sweep the tombs of their ancestors, burn joss sticks and fake paper money, and tell them what has happened in the past year. Of course, those conversations have always been one-way. 

But that’s not the case for all Super Brain services. The company also offers deepfaked video calls in which a company employee or a contract therapist pretends to be the relative who passed away. Using DeepFace, an open-source tool that analyzes facial features, the deceased person’s face is reconstructed in 3D and swapped in for the live person’s face with a real-time filter. 

Example of a deepfake video call Super Brain did in July 2023. The face in the top right corner is that of the woman’s deceased son.
SUPER BRAIN

At the other end of the call is usually an elderly family member who may not know that the relative has died—and whose family has arranged the conversation as a ruse. 

Jonathan Yang, a Nanjing resident who works in the tech industry, paid for this service in September 2023. His uncle died in a construction accident, but the family hesitated to tell Yang’s grandmother, who is 93 and in poor health. They worried that she wouldn’t survive the devastating news.

So Yang paid $1,350 to commission three deepfaked calls of his dead uncle. He gave Super Brain a handful of photos and videos of his uncle to train the model. Then, on three Chinese holidays, a Super Brain employee video-called Yang’s grandmother and told her, as his uncle, that he was busy working in a faraway city and wouldn’t be able to come back home, even during the Chinese New Year. 

“The effect has met my expectations. My grandma didn’t suspect anything,” Yang says. His family did have mixed opinions about the idea, because some relatives thought maybe she would have wanted to see her son’s body before it was cremated. Still, the whole family got on board in the end, believing the ruse would be best for her health. After all, it’s pretty common for Chinese families to tell “necessary” lies to avoid overwhelming seniors, as depicted in the movie The Farewell.

To Yang, a close follower of AI industry trends, creating replicas of the dead is one of the best applications of the technology. “It best represents the warmth [of AI],” he says. His grandmother’s health has improved, and there may come a day when they finally tell her the truth. By that time, Yang says, he may purchase a digital avatar of his uncle for his grandma to talk to whenever she misses him.

Is AI really good for grief? 

Even as AI cloning technology improves, there are some significant barriers preventing more people from using it to speak with their dead relatives in China. 

On the tech side, there are limitations to what AI models can generate. Most LLMs can handle dominant languages like Mandarin and Cantonese, but they aren’t able to replicate the many niche dialects in China. It’s also challenging—and therefore costly—to replicate body movements and complex facial expressions in 3D models. 

Then there’s the issue of training data. Unlike cloning someone who’s still alive, which often involves asking the person to record body movements or say certain things, posthumous AI replications must rely on whatever videos or photos are already available. And many clients don’t have high-quality data, or enough of it, for the end result to be satisfactory. 

Complicating these technical challenges are myriad ethical questions. Notably, how can someone who is already dead consent to being digitally replicated? For now, companies like Super Brain and Silicon Intelligence rely on the permission of direct family members. But what if family members disagree? And if a digital avatar generates inappropriate answers, who is responsible?

Similar technology caused controversy earlier this year. A company in Ningbo reportedly used AI tools to create videos of deceased celebrities and posted them on social media to speak to their fans. The videos were generated using public data, but without seeking any approval or permission. The result was intense criticism from the celebrities’ families and fans, and the videos were eventually taken down. 

“It’s a new domain that only came about after the popularization of AI: the rights to digital eternity,” says Shen, the Tsinghua professor, who also runs a lab that creates digital replicas of people who have passed away. He believes it should be prohibited to use deepfake technology to replicate living people without their permission. For people who have passed away, all of their immediate living family members must agree beforehand, he says. 

There could be negative effects on clients’ mental health, too. While some people, like Sun, find their conversations with avatars to be therapeutic, not everyone thinks it’s a healthy way to grieve. “The controversy lies in the fact that if we replicate our family members because we miss them, we may constantly stay in the state of mourning and can’t withdraw from it to accept that they have truly passed away,” says Shen. A widowed person who’s in constant conversation with the digital version of their partner might be held back from seeking a new relationship, for instance. 

“When someone passes away, should we replace our real emotions with fictional ones and linger in that emotional state?” Shen asks. Psychologists and philosophers who talked to MIT Technology Review about the impact of grief tech have warned about the danger of doing so. 

Sun Kai, at least, has found the digital avatar of his mom to be a comfort. She’s like a 24/7 confidante on his phone. Even though it’s possible to remake his mother’s avatar with the latest technology, he hasn’t yet done that. “I’m so used to what she looks like and sounds like now,” he says. As years have gone by, the boundary between her avatar and his memory of her has begun to blur. “Sometimes I couldn’t even tell which one is the real her,” he says.

And Sun is still okay with doing most of the talking. “When I’m confiding in her, I’m merely letting off steam. Sometimes you already know the answer to your question, but you still need to say it out loud,” he says. “My conversations with my mom have always been like this throughout the years.” 

But now, unlike before, he gets to talk to her whenever he wants to.

Threads is giving Taiwanese users a safe space to talk about politics

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Like most reporters, I have accounts on every social media platform you can think of. But for the longest time, I was not on Threads, the rival to X (formerly Twitter) released by Meta last year. The way it has to be tied to your Instagram account didn’t sit well with me, and as its popularity dwindled, I felt maybe it was not necessary to use it.

But I finally joined Threads last week after I discovered that the app has unexpectedly blown up among Taiwanese users. For months, Threads has been the most downloaded app in Taiwan, as users flock to the platform to talk about politics and more. I talked to academics and Taiwanese Threads users about why the Meta-owned platform got a redemption arc in Taiwan this year. You can read what I discovered here.

I first noticed the trend on Instagram, which occasionally shows you a few trending Threads posts to try to entice you to join. After seeing them a few times, I realized there was a pattern: most of these were written by Taiwanese people talking about Taiwan.

That was a rare experience for me, since I come from China and write primarily about China. Social media algorithms have always shown me accounts similar to mine. Although people from mainland China, Hong Kong, and Taiwan all write in Chinese, the characters we use and the expressions we choose are quite different, making it easy to spot your own people. And on most platforms that are truly global, the conversations in Chinese are mostly dominated by people in or from mainland China, since its population far outnumbers the rest. 

As I dug into the phenomenon, it soon became clear that Threads’ popularity was surging at an unparalleled pace in Taiwan. Adam Mosseri, the head of Instagram, publicly acknowledged that Threads has been doing “exceptionally well in Taiwan, of all places.” Data from Sensor Tower, a market intelligence firm, shows that Threads has been the most downloaded social network app on iPhone and Android in Taiwan almost every single day of 2024. On the platform itself, Taiwanese users are also belatedly realizing their influence when they see that comments under popular accounts, like a K-pop group, come mostly from fellow Taiwanese users. 

But why did Threads succeed in Taiwan when it has failed in so many other places? My interviews with users and scholars revealed a few reasons.

First, Taiwanese people never really adopted Twitter. Only 1% to 5% of them regularly use the platform, now called X, estimates Austin Wang, a political science professor at the University of Nevada, Las Vegas. The mainstream population uses Facebook and Instagram, but still yearns for a platform for short text posts. The global launch of Threads basically gave these users a good reason to try out a Twitter-like product.

But more important, Taiwan’s presidential election earlier this year means there was a lot to talk, debate, and commiserate about. Starting in November, many supporters of Taiwan’s Democratic Progressive Party (DPP) “gathered to Threads and used it as a mobilization tool,” Wang says. “Even DPP presidential candidate Lai received more interaction on Threads than Instagram and Facebook.” 

It turns out that even though Meta has tried to position Threads as a less political version of X, what actually underpinned its success in Taiwan was still the universal desire to talk about politics.

“Taiwanese people gather on Threads because of the freedom to talk about politics [here],” Liu, a designer in Taipei who joined in January because of the elections, tells me. “For Threads to depoliticize would be shooting itself in the foot.” 

The fact that there are an exceptionally large number of Taiwanese users on Threads also makes it a better place to talk about internal politics, she says, because it won’t easily be overshadowed or hijacked by people outside Taiwan. The more established platforms like Facebook and X are rife with bots, disinformation campaigns, and controversial content moderation policies. On Threads there’s minimal interference with what the Taiwanese users are saying. To Liu, that feels like a breath of fresh air.

But I can’t help feeling that Threads’ popularity in Taiwan could easily go awry. Meta’s decision to keep Threads distanced from political content is one factor that could derail Taiwanese users’ experience; an influx of non-Taiwanese users, if the platform actually manages to become more successful and popular in other parts of the world, could also introduce heated disagreements and all the additional reasons why other platforms have deteriorated. 

These are tough questions for Meta to answer, because users will simply flow to the next trendy, experimental platform if Threads doesn’t feel right anymore. Its success in Taiwan so far is a rare win for the company, but preserving that success and replicating it elsewhere will require a lot more work.

Do you believe Threads stands a chance of rivaling X (Twitter) in places other than Taiwan? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Morris Chang, who founded the Taiwan Semiconductor Manufacturing Company at the age of 55, is an outlier in today’s tech industry, where startup founders usually start in their 20s. (Wall Street Journal $)

2. A group of Chinese researchers used the technology behind hypersonic missiles to make high-speed trains safer. (South China Morning Post $)

3. The US government is considering cutting the so-called de minimis exemption from import duties, which makes it cheap for Temu and Shein to send packages to the US. But lots of US companies also benefit from the exemption now. (The Information $)

4. The Chinese commerce minister will visit Europe soon to plead his country’s case amid the European Commission’s investigation into Chinese electric vehicles. (Reuters $)

5. After three years of unsuccessful competition with WhatsApp, ByteDance’s messaging app designed for the African market finally shut down last month. (Rest of World)

6. The rapid progress of AI makes it seem less necessary to learn a foreign language. But there are still things AI loses in translation. (The Atlantic $)

7. This is the incredible story of a Chinese man who takes his piano to play outdoors at places of public grief: in front of the covid quarantine barriers in Wuhan, at the epicenter of an earthquake, on a river that submerged villages. And he plays the same song—the only song he knows, composed by the Japanese composer Ryuichi Sakamoto. (NPR)

Lost in translation

With Netflix’s March release of 3 Body Problem, a series adapted from the global hit sci-fi novel by Chinese author Liu Cixin, Western audiences are also learning about a movie-like real-life drama behind the adaptation. In 2021, the Chinese publication Caixin first investigated the mysterious death of Lin Qi, a successful businessman who bought the movie rights to the book. In 2017, he hired Xu Yao, a prominent attorney, to work on legal affairs and government relations.

In December 2020, Lin died after he was poisoned by a mysterious mix of toxins. According to Caixin, Xu is a fan of the TV series Breaking Bad and had his own plant in Shanghai where he made poisons. He would order hundreds of different toxins through the dark web, mix them, and use them on pets to experiment. A week before Lin’s death, Xu gave him a bottle of pills that were supposedly prebiotics, but he had replaced them with poison. 

Xu was arrested soon after Lin died, and he was sentenced to death on March 22 this year.

One more thing

Taobao, China’s leading e-commerce platform, announced it’s experimenting with delivering packages by rockets. Yes, rockets. Made by a Chinese startup, Taobao’s pilot rockets will be able to deliver something as big as a car or a truck, and the rockets can be reused for the next delivery. To be honest, I still can’t believe this wasn’t an April Fool’s joke.

The tech that helps these herders navigate drought, war, and extremists

Hainikoye hits Accept and a young woman greets him in Hausa, a gravelly language spoken across West Africa’s Sahel region. She has three new cows and wants to know: Does he have advice on getting them through the lean season?

Hainikoye—a twentysomething agronomist who has “followed animals,” as Sahelians refer to herding, since he first learned to walk—opens an interface on his laptop and clicks on her village in southern Niger, where humped zebu roam the dipping hills and dried-up valleys that demarcate the northern desert from the southern savanna. He tells her where the nearest full wells are and suggests feeding the animals peanuts and cowpea leaves—cheap food sources with high nutritional value that, his screen confirms, are currently plentiful. They hang up after a few minutes, and Hainikoye waits for the phone to ring again.

Seven days a week at the Garbal call center, agents like Hainikoye offer what seems like a simple service, treating people to a bespoke selection of location-specific data: satellite-fed weather forecasts and reports of water levels and vegetation conditions along various herding routes, as well as practical updates on brushfires, overgrazed areas, nearby market prices, and veterinary facilities. But it’s also surprisingly innovative—and is providing critical support for Sahelian herders reeling from the effects of interrelated challenges ranging from war to climate change. Over the long term, the project’s supporters, as well as the herders connecting with it, hope it could even safeguard an ancient culture that functions as an economic lifeline for the entire region.

The glossy red cubicles of Garbal’s office in Niamey, Niger’s capital, are tucked away in the second-floor space the call center shares with the local headquarters of Airtel, an Indian telecom. It had only been open for a few weeks when I visited early last year. Bursts of fuchsia bougainvillea garlanded the entryway to the building, a welcome respite from the sand-colored landscape and sewage-infused scent of the rotting industrial district around it. One lot over sat a former Total gas station that has remained unbranded since a drug cartel bought it to launder money and removed the sign. Running across the zone was a boulevard commemorating a 1974 coup d’état, which has been followed by four more over the ensuing five decades, the latest in July 2023. In the middle of the boulevard sat a few dozen miles of decomposing railway tracks that had been “inaugurated” by a right-wing French billionaire in 2016. For decades, postcolonial elites, promising development, have pillaged one of Africa’s poorest countries.

In more recent years, various Western players touting tech trends like artificial intelligence and predictive analysis have swooped in with promises to solve the region’s myriad problems. But Garbal—named after the word for a livestock market in the language of the Fulani, an ethnic group that makes up the majority of the Sahel’s herders—aims to do things differently. Building on an approach pioneered by a 37-year-old American data scientist named Alex Orenstein, Garbal is focused on how humbler technologies might effectively support the 80% of Nigeriens who live off livestock and the land.

“There’s still this idea of ‘How can we use new tech?’ But the tech is already there—we just need to be more intentional in applying it,” Orenstein says, arguing that donor enthusiasm for shiny, complex solutions is often misplaced. “All of our big wins have come from taking some basic-ass shit and making it work.”

Garbal call center workers in red cubicles
Workers in the Garbal call center in Niamey are able to review data to help herders.
HANNAH RAE ARMSTRONG

Garbal’s work comes down to data and, critically, who should have access to it. Recent advances in data collection—both from geosatellites and from herders themselves—have generated an abundance of information on ground cover quantity and quality, water availability, rain forecasts, livestock concentrations, and more. The resulting breakthroughs in forecasting can, in theory, help people anticipate—and protect herds from—droughts and other crises. But Orenstein believes it is not enough to extract data from herders, as has been the focus of numerous efforts over the past decade. It must be distributed to them.
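The article doesn’t specify which indices Garbal’s pipeline computes, but a standard example of how satellite imagery becomes a pasture-condition number is the Normalized Difference Vegetation Index (NDVI), derived per pixel from red and near-infrared reflectance. The reflectance values below are invented for illustration:

```python
# NDVI: a standard vegetation index computed from satellite bands.
# Healthy vegetation reflects strongly in near-infrared (NIR) and
# absorbs red light, so NDVI rises toward 1 over green pasture and
# falls toward 0 over bare or dry ground.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical reflectance values, not real Garbal data.
print(round(ndvi(nir=0.45, red=0.10), 2))  # green pasture → 0.64
print(round(ndvi(nir=0.22, red=0.18), 2))  # sparse, dry cover → 0.1
```

An index like this, mapped along a herding route and tracked over time, is the kind of “basic-ass” but workable signal a call-center agent can read off a screen and translate into advice.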

The work couldn’t be more urgent. The region’s herders face an existential crisis that has already started to shred the very fabric of society.

Herding—prestigious, high risk, and one of humanity’s most foundational ways of life—is a pillar of survival in the Sahel. In Niger, for instance, known across the continent for its succulent steak, animal production accounts for 40% of the agricultural GDP. Migratory herders usher between 70% and 90% of the cattle population from one seasonal pasture to the next, since they rarely own land. These pastoralists have historically relied on common resources, in coordination with local communities.

But the traditional ways are becoming next to impossible. The crisis stems, in part, from the changing climate: as the desert creeps south, and as the dry season stretches longer and the rains come in shorter and more volatile intervals, water, pasture, and other renewable resources are increasingly erratic. But the strain is also political: brutal fighting between pro-government forces and local groups with links to Boko Haram, Al-Qaeda, and the Islamic State has turned major transit hubs, cow superhighways, and wetlands into battlegrounds. Making matters worse, herders tend to be underrepresented within state institutions, whose land-use policies favor farmers, and overrepresented within jihadist groups, which appeal to this exclusion to draw recruits from herding communities. A common lack of schooling among children of herders further deepens this exclusion.

Herders driving cattle along the Badagry-Mile 2 Express Road in Lagos, Nigeria.
In their long journeys, herders sometimes drive cattle near or through urban land.
ALAMY

The result is that tens of millions of Sahelian herders who depend upon free movement are increasingly penned in. Things are especially dire for Fulani herders, who get scapegoated as troublemaking outsiders. So addressing the multidimensional crisis would not only help herders; it could defuse a stubborn driver of one of Africa’s worst wars.

“Ensuring that herders have land and water rights, and working out their access to these through dialogue, is an important part of the solution to conflict in the Sahel,” says Adam Higazi, a researcher at the University of Amsterdam and Nigeria’s Modibbo Adam University, whose 2018 report on pastoralism and conflict for the UN’s West Africa office remains a key reference in the field.

The question now is whether Garbal and a handful of other tech-driven projects can in fact deliver on promises to help stabilize herders experiencing rising precarity.

Aliou Samba Ba, who leads a regional pastoralist organization that has teamed up with Orenstein to get data to Senegalese herders, says he’s optimistic, largely because Orenstein is turning traditional interventions upside down: “We say he looks with the eye of the herder as well as with the eye of the satellite.”

When institutions fail

The Sahel stretches from Senegal’s Atlantic coastline across Africa to the Red Sea, bounded by the Sahara to the north and by verdant forests and savanna to the south. Much of the region has been ravaged by drought and insurgencies over the past few decades, but rural Senegal is still home to the types of spaces that herders elsewhere are fighting for: maintained, not overdetermined; protected, not overpoliced. There is climate change here, but no war.

Last September, I drove deep into the Ferlo, a pastoral reserve roughly the size of New Jersey, to meet with a Fulani herder named Salif Sow.

It was the height of the rainy season, and the Sahel was having a great one. The environment that greeted me was a miracle and a mirage—a desert burst into bloom. Tall, bony Fulani herders scrambled to keep up with throngs of lambs, goats, cows, and camels spread out over a seemingly infinite expanse of green grass and lushly foliated trees. The Ferlo was brimming with carefully maintained wells, abundantly filled seasonal ponds, and clearly marked pastoralist corridors, with the country’s biggest wholesale livestock market just a few hours’ ride by donkey cart. There were no paved roads, no commercial farmland, and no extremist recruiters for hundreds of miles in any direction.

A woman and two young boys astride cattle, seen through the horns of a cow on the way to a watering hole.
Herders have to make complex calculations when choosing where to take their cows to wait out the dry season.
SVEN TORFINN/PANOS PICTURES/REDUX

Not that the herding was easy work. “A herder’s life is difficult,” Sow said, welcoming me to his compound with sweet tea and a calabash filled with fresh milk. “There is not one day of rest.”

In a few months’ time, the rains would stop, the herds would exhaust the pastures, and the grassland would revert to desert. And Sow would again face the difficult decision he faces every year: whether to stay and buy livestock feed to tide his animals over until next year’s rains or to lead his cows on a journey, and if so, where.

A lot of complex spatial calculations go into choosing where to take hundreds of hungry cows to wait out the dry season on the edge of the world’s largest subtropical desert, while making sure they have enough to eat along the way. Observing these deliberations filled Orenstein with wonder more than a decade ago, when he started surveying herders in Chad for a food security project with the French NGO Action Against Hunger (ACF).

In 2014, Orenstein helped ACF develop an early-warning system, mining new data sources using remote sensing—observing the conditions of grazing pastures from space via satellite imagery and, in some cases, with the use of drones. He also worked with pastoralist organizations to gather information about diverse conditions on the ground, ranging from wildfire locations to the spread of animal disease. He then began making maps using open-access sources; passing the data through an algorithm that he developed to process and filter imagery, he created detailed and accessible illustrations of rainfall levels and vegetation that became a rare reliable resource for herders and their allies. Aid workers in war zones would print out his maps and pass them around to herders.
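The article doesn’t specify which indices Orenstein’s algorithm computes, but vegetation mapping of this kind typically rests on a standard measure such as NDVI (normalized difference vegetation index), derived from the red and near-infrared bands of satellite imagery. A minimal sketch, with invented reflectance values:

```python
# Hedged sketch of a standard remote-sensing step: computing NDVI,
# a common measure of vegetation density and health. The reflectance
# values below are invented for illustration.

def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - Red) / (NIR + Red); ranges from -1 to 1.
    Higher values indicate denser, healthier vegetation."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Two hypothetical pixels:
lush_pasture = ndvi(nir=0.50, red=0.08)  # dense green grass: high NDVI
bare_ground = ndvi(nir=0.25, red=0.20)   # sparse, dry cover: low NDVI
```

Mapped over every pixel of an image, an index like this turns raw satellite bands into the kind of vegetation illustration described above.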

It was part of a system designed to extract data, analyze it, and send it up the chain to institutions, including national ministries, UN agencies, and donors. Being able to see crises coming, the thinking went, would give institutional actors more time and power to prepare their response and assign their resources. Being able to deploy emergency programming earlier would in turn afford herders a bit more protection.

In practice, that’s not always how it worked.

At the start of the rainy season in the early summer of 2017, Orenstein was tracking rainfall patterns and felt a knot in his stomach. The first rains had hit too hard, washing the dormant seeds out of the soil; a dry spell followed that lasted for several weeks. When the rains did return, the grassland growth was stunted. Drought was coming.

By mid-August, Orenstein was scribbling reports and ringing journalists to warn that disaster was imminent. But when presented with this evidence, the regional body with the authority to declare an emergency did not act. By the time it finally did, in April 2018—eight months after initial warnings were sounded—it was far too late to respond effectively to what turned out to be the worst drought in 20 years.

Alex and three other men crowded around a table with a large map of Nigeria
Data scientist Alex Orenstein marks up areas during a field mapping exercise.
COURTESY OF ALEX ORENSTEIN

Two months after that, in June 2018, the United Nations Office for the Coordination of Humanitarian Affairs urgently warned that 1.6 million children faced severe acute malnutrition, up more than 50% from the previous year.

That blighted season was also brutal for Sow. In March, his entire village sent its animals south to escape the drought—the first time anyone could remember doing so that early in the dry season. But Sow lingered, unwilling to take his sons out of school to help him. Yet he also could not afford to stay and buy several tons of animal feed per month at inflated prices. By the time Sow finally hired a few assistants and headed south with his cattle, sands had engulfed the grasslands.

They marched across the desert like soldiers at war, covering 18 miles a day. On the 10th day, they reached the Tambacounda region by the Malian border, where the cows would spend the rest of the lean season grazing on savanna woodlands and lush forest. Not all the herd survived the trek, and the cows that did were emaciated and more prone to insect-borne tropical diseases. By season’s end, a quarter of the herd had dropped dead—a defeat from which Sow still hasn’t recovered.

Democratizing data

Driving through the Ferlo in 2018, Orenstein was distraught to see the rail-thin Fulani herders trailing behind their withering cows. Across the Sahel, anti-Fulani pogroms were on the rise; some West Africans were taking to Twitter to call for their extermination. As weather, food, and protection systems broke down, it was easier to scapegoat the drifting “foreigner” than to demand accountability from anyone responsible.

The combination of starvation and ethnic massacres reminded Orenstein of the stories his grandfather used to tell of surviving Auschwitz. What good were early warnings if institutions were not willing to act on them? Not that the drought could have been prevented. But declaring an emergency sooner would have facilitated measures to soften its impact on herders. For example, governments could have sent cash transfers and distributed food for both humans and livestock at strategic transit locations.

From that point on, Orenstein decided to do things differently. If institutions could not be trusted to make good use of new data, why not get it directly to herders?

But delivering data to herders would prove extremely challenging. The centralized, vertically oriented systems traditionally used for data collection and analysis are better adapted to those institutions, usually located in capital cities, than to herders dispersed across thousands of miles of desert. What’s more, Sahelian herders are some of the world’s least reachable, least connected people. Many don’t have cell phones, internet access, or reliable cellular service.

Still, the timing was good—aid workers and donors were increasingly hopeful that technology could solve stubborn problems. In 2018, Orenstein secured a $250,000 grant for ACF to broadcast data reports to herders in northern Senegal via text message and community radio.

The project launched several months later, though by then Orenstein was already working on another one: the Garbal call centers. Even more than community radio, the call centers, which are a collaboration with the Netherlands Development Organization, could offer individual callers data tailored to very specific locations, across a far wider area. The first center launched in Bamako, Mali, in 2018. Another, in Ouagadougou, Burkina Faso, followed in 2019.

Orenstein and the Garbal team—roughly a dozen local data analysts, project managers, digital finance experts, and tele-agents with degrees in livestock management and applied agriculture—have designed different tools for herders’ needs. For example, they’ve offered ways to connect with veterinarians, compare market prices for animal feed, and use satellite data to find seasonal migration corridors and track brushfires. Crucially, the team has also engaged directly with pastoralist organizations, training and equipping herders to send back field data about vegetation quality in different zones—a piece of critical information that is undetectable via satellite.

A screenshot of the STAMP+ interface showing a map of the area around Kokolorou. An info panel on the left displays current animal and cereal prices, vegetation levels, and a button for a 7-day weather forecast.
A screenshot of a tool developed by Orenstein and others that call center agents use to provide herders with location-specific data.

Orenstein himself went into the field as often as he could to hold focus groups with herders and ensure that the way information was delivered would be adapted to their epistemic culture. “Instead of asking them, ‘Do you need rainfall information?’ I would say, ‘What kind of information do you need? And how do you measure it?’” he recalls. “Otherwise, the system would tell them to expect 25 millimeters of rain. Math is not how they measure. So instead, I would hold consultations on pond fullness, for example, and define rain strength in those terms—terms they can use.”

Samba Ba, the Senegalese herder, notes how effective this work has been in bridging the gulf between what tech had promised and what he and his peers actually needed. “Orenstein would help us forecast in September what the vegetation would be like the following year, so we could plan the next seasonal migration,” he says. “He came to us in the field, took into account our customs, habits, and knowledge, and used technology to give us a clearer idea of the grazing situation.”

Still, the most popular Garbal service has been its weather forecasting for rural zones. Previously, reliable information was severely lacking, in part because there were not enough ground stations and in part because forecasts were issued only for urban areas. (Mali, for instance, has just 13 active weather stations, compared with 200 in Germany—a country one-third its size.)

Orenstein came up with a way to make rural forecasts more readily available. “We had the coordinates for every village in Burkina Faso. Why couldn’t we just plug those into an API?” he remembers thinking, referring to an application programming interface, a kind of intermediary that allows applications to interact with one another. “Suddenly, we were getting weather forecasts for places that weren’t listed anywhere.”
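The trick Orenstein describes is conceptually simple: iterate over known village coordinates and request a point forecast for each one. A minimal sketch of that idea—the endpoint, parameter names, and coordinates below are invented for illustration, not the actual service Garbal uses:

```python
# Sketch of querying a point-forecast weather API for every known
# village, as described above. The endpoint and parameters are
# hypothetical; real weather APIs differ in their details.
from urllib.parse import urlencode

# Invented coordinates for two towns, keyed by name.
VILLAGES = {
    "Dori": (14.03, -0.03),
    "Djibo": (14.10, -1.63),
}

def forecast_url(base: str, lat: float, lon: float, hours: int = 72) -> str:
    """Build a forecast request for a single point on the map."""
    return f"{base}?{urlencode({'lat': lat, 'lon': lon, 'hours': hours})}"

# One request per village—whether or not the place is "listed" anywhere,
# every point on the globe has a forecast.
urls = {
    name: forecast_url("https://api.example.com/forecast", lat, lon)
    for name, (lat, lon) in VILLAGES.items()
}
```

The design insight is that a point-forecast API doesn’t care about place names at all; a coordinate list is enough to cover villages no gazetteer has heard of.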

The API has enabled Garbal tele-agents to click on remote pastoral zones on a map and receive tables showing weekly, daily, and hourly forecasts that are updated with fresh satellite data every three hours. Honoré Zidouemba, the project manager for the Ouagadougou call center, estimates that during the rainy season, his center receives 2,000 to 3,000 calls a day about the weather. “Herders and farmers used to derive information from natural cues,” he says, “but with climate change, those are more and more perturbed.”

false color image of a 3 Period Timescan Cropland Monitor built with Earth Engine Apps
A tool created by Orenstein and collaborators allows a user to highlight the presence of active cropland across time.

It’s simple and inexpensive—costing under $100 a month to use—but of all the team’s technological innovations, the API has made the biggest impact. And it’s a far cry from the kinds of higher-tech applications NGOs and development organizations have been promoting.

Since 2015, the World Bank has committed half a billion dollars to a two-phase project to support Sahelian herders’ “resilience” through strategies that include developing technological tools to map pastoral infrastructure. A senior humanitarian-agency staffer working with herders and technology, who requested anonymity to speak frankly, says the resulting databases have not been shared with herders; he calls the approach, which is geared more toward informing institutions than informing herders, “very technocratic.” (The World Bank did not respond to a request for comment.)

Meanwhile, ACF, the French NGO Orenstein previously worked with, got international attention in 2020 for reportedly using AI to help herders, a claim several people involved in the project say was simply incorrect. (“ACF does not use self-learning for its Pastoral Early Warning System. Presently, the analysis is done ‘manually’ by human expertise,” says Erwann Fillol, a data analysis expert at the organization.)

drone shot of cattle immersed in brown muddy water
Climate change is making herding routes, like this one across the Niger River, increasingly volatile.
ALAMY

Other groups are experimenting with using predictive analytics to forecast displacements and herders’ movements. A pilot project from the Danish Refugee Council in Burkina Faso, for example, predicts subnational displacement three to four months into the future, allowing aid workers to pre-position relief. “Anticipatory action in response to climate hazards can be more timely, dignified, and cost effective than alternatives,” says Alexander Kjaerum, an expert on data and predictive analytics with the organization. “AI is a last option when other things fail. And then it does add value.”

Still, some argue these kinds of projects have missed the point. “How are high technology and AI going to address land access issues for pastoralists? It is questionable if there are technological fixes to what are political, socioeconomic, and ecological pressures,” says Higazi, the pastoralist expert.

Blama Jalloh, a herder from Burkina Faso who heads the influential regional pastoralist organization Billital Maroobé, echoes this broad sentiment, arguing that big-budget, high-tech efforts mainly just produce studies, not innovation.

Taking matters into its own hands, in 2022 Billital Maroobé organized the first hackathon designed by and for Sahelian herders. Jalloh says the hackathon aimed to narrow the gap between herders and tech developers who lack familiarity with herding lifestyles. It granted up to $8,000 to startups from Mauritania and Mali to track animals and introduce digital ID cards for herders, which could help them cross borders more seamlessly.

An uncertain future

With three call centers now open, and Orenstein serving as a remote technical advisor from the US, the Garbal team is striving to stay focused and make their work sustainable.

Nevertheless, the fate of the project is far beyond its supporters’ control. The region’s slide into violence shows no sign of stopping. As a result, even though more of the herders that Garbal set out to support have started carrying smartphones charged with battery packs, they are increasingly being pushed out of cell range.

drone view of a city block with people standing near multiple fires burning in the streets after a protest
Protesters fill the streets of Ouagadougou, the capital of Burkina Faso, where nearly 10% of the population has been displaced in recent years.
AP IMAGES

Between 2018 and 2022, Burkina Faso witnessed one of the world’s fastest-growing displacement crises, with the number of internally displaced people exploding from 50,000 to 1.8 million—almost 10% of the population. Fulanis in particular were targeted for killing by security forces and government-backed vigilantes, and in some areas that are home to significant Fulani herding communities, militants destroyed as many as half the mobile-phone antennas.

One tele-agent says the herders who did manage to call in from war zones told her how happy they were to reach the center. When I visited the Ouagadougou call center last year, a tele-agent named Dousso, a 24-year-old with a livestock degree who speaks French, Gourmantche, Dioula, and Moré, told me that “all of the coups,” as well as incidents in which jihadists took over markets, were also making it increasingly difficult to get certain types of data.

This can make the service even more meaningful where it’s still available, says Catherine Le Come, a Garbal cofounder, pointing to Mali, where Garbal is still accessible in some parts of the country that are now cut off from the state.

Yet Garbal, just like other efforts to get data to herders, faces the always pressing issue of how to fund this work consistently over time.

Nonprofit projects like ACF’s community radio and SMS bulletin alerts are pegged to funding cycles that run out after a few years. In March 2021, for instance, as Sow marched his cows 140 miles east toward the Senegal River, he relied on geospatial data he received by community radio and text message from two different NGOs, informing him where pastures were plentiful. But just three months later, both projects ran out of money and stopped supplying information.

Fulani herder standing near a body of water with his cattle, using his cell phone.
Traditionally, Sahelian herders have been some of the least-connected individuals. But now more are carrying smartphones charged by battery packs.
THOMAS GRABKA/LAIF/REDUX

The Garbal call centers are trying to build a more sustainable model. The plan is to phase out NGO sponsorship by 2026 and operate as a public-private partnership between the state and telephone operators. Garbal charges callers a modest fee—the equivalent of five cents a minute—and has plans to roll out online marketplaces and financial products to generate revenue.

“Technology in itself has lots of potential,” says Le Come. “But it is the private sector that must believe and invest in innovation. And the risks it faces innovating in a context as fragile as the Sahel must be shared with a public sector that sees user impact.” (Cedric Bernard, a French agro-economist who has worked with ACF, firmly disagrees; he insists that the information should be free, and that trying to be profitable “is going the wrong way.”) Furthermore, the for-profit model means that Garbal—which set out to help vulnerable herders—is already pivoting toward providing services to farmers, who make more reliable customers because they are easier to reach and better connected. Zidouemba, the Ouagadougou project manager, says that its callers are now overwhelmingly farmers; herders, he estimates, account for just 20% of the calls to the Burkina Faso center.

Sow standing with his cattle in the Ferlo
In 2018, a quarter of Salif Sow’s herd dropped dead in a severe drought. But that season he made a sacrifice that is finally paying off: his son recently started studying abroad in Paris.
HANNAH RAE ARMSTRONG

As the tides of data that reach them ebb and flow, the herders themselves are aware that the real work needed to keep their way of life going is a longer-term political effort. As I prepared to leave the Ferlo this fall, the landscape still resplendent from the rainy season, Sow pulled me aside. He was a modest man, but there was something he wanted me to know. That very night, he said shyly, his eldest son, Abdoulsalif, was leaving Dakar for Paris to begin graduate studies at the Sorbonne, where he had received a scholarship—a fruit of the sacrifice that Sow made during the year of the terrible drought.

I reached Abdoulsalif over WhatsApp a few weeks later, by which time he had learned that Sciences Po was more prestigious than the Sorbonne and enrolled there instead. He is studying public policy and plans to seek work on pastoralist policy in the Sahel after graduation.

“Herding is a beautiful way of life, a space where I feel very happy,” Abdoulsalif told me. “It is extraordinary to see, so far away, the animals in their vast spaces. Far more beautiful than to live in a place with four walls. Even in Paris, I feel nostalgic for this life, this space of herders.”

Hannah Rae Armstrong is a writer and policy adviser on the Sahel and North Africa. She lives in Dakar, Senegal.