Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Serhii “Flash” Beskrestnov hates going to the front line. The risks terrify him. “I’m really not happy to do it at all,” he says. But to perform his particular self-appointed role in the Russia-Ukraine war, he believes it’s critical to exchange the relative safety of his suburban home north of the capital for places where the prospect of death is much more immediate. “From Kyiv,” he says, “nobody sees the real situation.”

So about once a month, he drives hundreds of kilometers east in a homemade mobile intelligence center: a black VW van in which stacks of radio hardware connect to an array of antennas on the roof that stand like porcupine quills when in use. Two small devices on the dash monitor for nearby drones. Over several days at a time, Flash studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

He is, at least in an unofficial capacity, a spy. But unlike other spies, Flash does not keep his work secret. In fact, he shares the results of these missions with more than 127,000 followers—including many soldiers and government officials—on several public social media channels. Earlier this year, for instance, he described how he had recorded five different Russian reconnaissance drones in a single night—one of which was flying directly above his van.

“Brothers from the Armed Forces of Ukraine, I am trying to inspire you,” he posted on his Facebook page in February, encouraging Ukrainian soldiers to learn how to recognize enemy drone signals as he does. “You will spread your wings, you will understand over time how to understand distance and, at some point, you will save the lives of dozens of your colleagues.”

Drones have come to define the brutal conflict that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Flash has obsessed over since childhood. So while Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio.

As well as the frontline information he shares on his public channels, he runs a “support service” for almost 2,000 military communications specialists on Signal and writes guides for building anti-drone equipment on a tight budget. “He’s a celebrity,” one special forces officer recently shouted to me over the thump of music in a Kyiv techno club. He’s “like a ray of sun,” an aviation specialist in Ukraine’s army told me. Flash tells me that he gets 500 messages every day asking for help.

Despite this reputation among rank-and-file service members—and maybe because of it—Flash has also become a source of some controversy among the upper echelons of Ukraine’s military, he tells me. The Armed Forces of Ukraine declined multiple requests for comment, but Flash and his colleagues claim that some high-ranking officials perceive him as a security threat, worrying that he shares too much information and doesn’t do enough to secure sensitive intel. As a result, some refuse to support or engage with him. Others, Flash says, pretend he doesn’t exist. Either way, he believes they are simply insecure about the value of their own contributions—“because everybody knows that Serhii Flash is not sitting in Kyiv like a colonel in the Ministry of Defense,” he tells me in the abrasive fashion that I’ve come to learn is typical of his character. 

But above all else, hours of conversations with numerous people involved in Ukraine’s defense, including frontline signalmen and volunteers, have made clear that even if Flash is a complicated figure, he’s undoubtedly an influential one. His work has become vitally important to those fighting on the ground, and he recently received formal recognition from the military for his contributions to the fight, with two medals of commendation—one from the commander of Ukraine’s ground forces, the other from the Ministry of Defense.

With a handheld directional antenna and a spectrum analyzer, Flash can scan for hostile signals.
EMRE ÇAYLAK

Despite the emergence of a small number of semi-autonomous machines that rely less on radio communications, the drones that saturate the skies above the battlefield will continue to depend largely on this technology for the foreseeable future. And in this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that.

“I know nothing about his background,” says “Igrok,” who works with drones in Ukraine’s 110th Mechanized Brigade and whom we are identifying by his call sign, as is standard military practice. “But I do know that most engineers and all pilots know nothing about radios and antennas. His job is definitely one of the most powerful forces keeping Ukraine’s aerial defense in good condition.”

And given the mounting evidence that both militaries and militant groups in other parts of the world are now adopting drone tactics developed in Ukraine, it’s not only his country’s fate that Flash may help to determine—but also the ways that armies wage war for years to come.

A prescient hobby

Before I can even start asking questions during our meeting in May, Flash is rummaging around in the back of the Flash-mobile, pulling out bits of gear for his own version of show-and-tell: a drone monitor with a fin-shaped antenna; a walkie-talkie labeled with a sticker from Russia’s state security service, the FSB; an approximately 1.5-meter-long foldable antenna that he says probably came from a US-made Abrams tank.

Flash has parked on a small wooded road beside the Kyiv Sea, an enormous water reservoir north of the capital. He’s wearing a khaki sweat-wicking polo shirt, combat trousers, and combat boots, with a Glock 19 pistol strapped to his hip. (“I am a threat to the enemy,” he tells me, explaining that he feels he has to watch his back.) As we talk, he moves from one side to the other, as if the electromagnetic waves that he’s studied since childhood have somehow begun to control the motion of his body.

Now 49, Flash grew up in a suburb of Kyiv in the ’80s. His father, who was a colonel in the Soviet army, recalls bringing home broken radio equipment for his preteen son to tinker with. Flash showed talent from the start. He attended an after-school radio club, and his father fixed an antenna to the roof of their apartment for him. Later, Flash began communicating with people in countries beyond the Iron Curtain. “It was like an open door to the big world for me,” he says.

Flash recalls with amusement a time when a letter from the KGB arrived at his family home, giving his father the fright of his life. His father didn’t know that his son had sent a message on a prohibited radio frequency, and someone had noticed. Following the letter, when Flash reported to the service’s office in downtown Kyiv, his teenage appearance confounded them. Boy, what are you doing here? Flash recalls an embarrassed official saying. 

Ukraine had been a hub of innovation as part of the Soviet Union. But by the time Flash graduated from military communications college in 1997, Ukraine had been independent for six years, and corruption and a lack of investment had stripped away the armed forces’ former grandeur. Flash spent just a year working in a military radio factory before he joined a private communications company developing Ukraine’s first mobile network, where he worked with technologies far more advanced than what he had used in the military. The project was called “Flash.”

A decade and a half later, Flash had risen through the ranks of the industry to become head of department at the predecessor of the telecommunications company Vodafone Ukraine. But boredom prompted him to leave and become an entrepreneur. His many projects included a successful e-commerce site for construction services and a popular video game called Isotopium: Chernobyl, which he and a friend based on the “really neat concept,” according to a PC Gamer review, of allowing players to control real robots (fitted with radios, of course) around a physical arena. Released in 2019, it also received positive coverage from Reuters and BBC News.

But within just a few years, an unexpected attack would hurl his country into chaos—and upend Flash’s life. 

“I am here to help you with technical issues,” Flash remembers writing to his Signal group when he first started offering advice. “Ask me anything and I will try to find the answer for you.”
EMRE ÇAYLAK

By early 2022, rumors were growing of a potential attack from Russia. Though he was still working on Isotopium, Flash began to organize a radio network across the northern suburbs of Kyiv in preparation. Near his home, he set up a repeater about 65 meters above ground level that could receive and then rebroadcast transmissions from all the radios in its network across a 200-square-kilometer area. Another radio amateur programmed and distributed handheld radios.

When Russian forces did invade, on February 24, they took both fiber-optic and mobile networks offline, as Flash had anticipated. The radio network became the only means of instant communications for civilians and, critically, volunteers mobilizing to fight in the region, who used it to share information about Russian troop movements. Flash fed this intel to several professional Ukrainian army units, including a unit of special reconnaissance forces. He later received an award from the head of the district’s military administration for his part in Kyiv’s defense. The head of the district council referred to Flash as “one of the most worthy people” in the region.

Yet it was another of Flash’s projects that would earn him renown across Ukraine’s military.

Despite being more than 100 years old, radio technology is still critical in almost all aspects of modern warfare, from secure communications to satellite-guided missiles. But the decline of Ukraine’s military, coupled with the movement of many of the country’s young techies into lucrative careers in the growing software industry, created a vacuum of expertise. Flash leaped in to fill it.

Within roughly a month of Russia’s incursion, Flash had created a private group called “Military Signalmen” on the encrypted messaging platform Signal, and invited civilian radio experts from his personal network to join alongside military communications specialists. “I am here to help you with technical issues,” he remembers writing to the group. “Ask me anything and I will try to find the answer for you.”

The kinds of questions that Flash and his civilian colleagues answered in the first months were often basic. Group members wanted to know how to update the firmware on their devices, reset their radios’ passwords, or set up the internal communications networks for large vehicles. Many of the people drafted as communications specialists in the Ukrainian military had little relevant experience; Flash claims that even professional soldiers lacked appropriate training and has referred to large parts of Ukraine’s military communications courses as “either nonsense or junk.” (The Korolov Zhytomyr Military Institute, where many communications specialists train, declined a request for comment.)

After Russia’s invasion of Ukraine, Flash transformed his VW van into a mobile radio intelligence center.
EMRE ÇAYLAK

He demonstrates handheld spectrum analyzers with custom Ukrainian firmware.

News of the Signal group spread by word of mouth, and it soon became a kind of 24-hour support service that communications specialists in every sector of Ukraine’s frontline force subscribed to. “Any military engineer can ask anything and receive the answer within a couple of minutes,” Flash says. “It’s a nice way to teach people very quickly.” 

As the war progressed into its second year, Military Signalmen became, to an extent, self-sustaining. Its members had learned enough to answer one another’s questions themselves. And this is where several members tell me that Flash has contributed the most value. “The most important thing is that he brought together all these communications specialists in one team,” says Oleksandr “Moto,” a technician at an EU mission in Kyiv and an expert in Motorola equipment, who has advised members of the group. (He asked to not be identified by his surname, due to security concerns.) “It became very efficient.”

Today, Flash and his partners continue to answer occasional questions that require more advanced knowledge. But over the past year, as the group demanded less of his time, Flash has begun to focus on a rapidly proliferating weapon for which his experience had prepared him almost perfectly: the drone.  

A race without end

The Joker-10 drone, one of Russia’s latest additions to its arsenal, is equipped with a hibernation mechanism, Flash warned his Facebook followers in March. This feature allows the operator to fly it to a hidden location, leave it there undetected, and then awaken it when it’s time to attack. “It is impossible to detect the drone using radio-electronic means,” Flash wrote. “If you twist and turn it in your hands—it will explode.” 

This is just one example of the frequent developments in drone engineering that Ukrainian and Russian troops are adapting to every day. 

Larger strike drones similar to the US-made Reaper have been familiar in other recent conflicts, but sophisticated air defenses have rendered them less dominant in this war. Ukraine and Russia are developing and deploying vast numbers of other types of drones—including the now-notorious “FPV,” or first-person view, drone that pilots operate by wearing goggles that stream video of its perspective. These drones, which can carry payloads large enough to destroy tanks, are cheap (costing as little as $400), easy to produce, and difficult to shoot down. They use direct radio communications to transmit video feeds, receive commands, and navigate.

A Ukrainian soldier prepares an FPV drone equipped with dummy ammunition for a simulated flight operation.
MARCO CORDONE/SOPA IMAGES/SIPA USA VIA AP IMAGES

But their reliance on radio technology is a major vulnerability, because enemies can disrupt the radio links that drones depend on—making them far less effective, if not inoperable. This form of electronic warfare—which most often involves emitting a more powerful signal at the same frequency as the operator’s—is called “jamming.”

Jamming, though, is an imperfect solution. Like drones, jammers themselves emit radio signals that can enable enemies to locate them. There are also effective countermeasures to bypass jammers. For example, a drone operator can use a tactic called “frequency hopping,” rapidly jumping between different frequencies to avoid a jammer’s signal. But even this method can be disrupted by algorithms that calculate the hopping patterns.
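
To make the cat-and-mouse concrete, here is a minimal, hypothetical sketch in Python of the seed-based hopping idea. It is not anything Flash or either army actually uses, and the channel values and seed are made up for illustration: if a drone and its operator derive the same pseudo-random channel schedule from a shared seed, they stay in lockstep without ever announcing the next frequency over the air, while a jammer that cannot reproduce the sequence collides with them only by chance.

```python
import random

def hop_sequence(shared_seed: int, channels: list, count: int) -> list:
    """Derive a pseudo-random hopping schedule from a shared seed.

    Transmitter and receiver run this with the same seed, so they land on
    the same channel in every time slot without coordinating on the air.
    Channels and seeds here are purely illustrative.
    """
    rng = random.Random(shared_seed)
    return [rng.choice(channels) for _ in range(count)]

# Illustrative control-link channels in MHz; real systems differ.
CHANNELS = [903.0, 907.5, 912.0, 916.5, 921.0, 925.5]

drone_schedule = hop_sequence(shared_seed=0xC0FFEE, channels=CHANNELS, count=8)
pilot_schedule = hop_sequence(shared_seed=0xC0FFEE, channels=CHANNELS, count=8)
assert drone_schedule == pilot_schedule  # both ends stay in lockstep

# A jammer guessing with the wrong seed only hits the right channel by luck.
jammer_guess = hop_sequence(shared_seed=1234, channels=CHANNELS, count=8)
hits = sum(d == j for d, j in zip(drone_schedule, jammer_guess))
print(f"time slots jammed by chance: {hits} of {len(drone_schedule)}")
```

The same sketch also shows why the countermeasure can fail, as described above: a pseudo-random generator is predictable, so an adversary that infers the generator and its seed, or simply records enough hops to model the pattern, can reconstruct the schedule and jam every slot.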

For this reason, jamming is a frequent focus of Flash’s work. In a January post on his Telegram channel that drew 48,000 views, for instance, Flash explained how jammers used by some Ukrainian tanks were actually disrupting their own communications. “The cause of the problems is not direct interference with the reception range of the radio station, but very powerful signals from several [electronic warfare] antennae,” he wrote, suggesting that other tank crews experiencing the same problem might try spreading their antennas across the body of the tank.

It is all part of an existential race in which Russia and Ukraine are constantly hunting for new methods of drone operation, drone jamming, and counter-jamming—and there’s no end in sight. In March, for example, Flash says, a frontline contact sent him photos of a Russian drone with what looks like a 10-kilometer-long spool of fiber-optic cable attached to its rear—one particularly novel method to bypass Ukrainian jammers. “It’s really crazy,” Flash says. “It looks really strange, but Russia showed us that this was possible.”

Flash’s trips to the front line make it easier for him to track developments like this. Not only does he monitor Russian drone activity from his souped-up VW, but he can study the problems that soldiers face in situ and nurture relationships with people who may later send him useful intel—or even enemy equipment they’ve seized. “The main problem is that our generals are located in Kyiv,” Flash says. “They send some messages to the military but do not understand how these military people are fighting on the front.”

Besides the advice he provides to Ukrainian troops, Flash also publishes online his own manuals for building and operating equipment that can offer protection from drones. Building their own tools can be soldiers’ best option, since Western military technology is typically expensive and domestic production is insufficient. Flash recommends buying most of the parts on AliExpress, the Chinese e-commerce platform, to reduce costs.

While all his activity suggests a close or at least cooperative relationship between Flash and Ukraine’s military, he sometimes finds himself on the outside looking in. In a post on Telegram in May, as well as during one of our meetings, Flash shared one of his greatest disappointments of the war: the military’s refusal of his proposal to create a database of all the radio frequencies used by Ukrainian forces. But when I mentioned this to an employee of a major electronic warfare company, who requested anonymity to speak about the sensitive subject, he suggested that the only reason Flash still complains about this is that the military hasn’t told him it already exists. (Given its sensitivity, MIT Technology Review was unable to independently confirm the existence of this database.) 

Flash believes that generals in Kyiv “do not understand how these military people are fighting on the front.” So even though he doesn’t like the risks involved, he makes the trip to the front line about once a month.
EMRE ÇAYLAK

This anecdote is emblematic of Flash’s frustration with a military complex that may not always want his involvement. Ukraine’s armed forces, he has told me on several occasions, make no attempt to collaborate with him in an official manner. He claims not to receive any financial support, either. “I’m trying to help,” he says. “But nobody wants to help me.”

Both Flash and Yurii Pylypenko, another radio enthusiast who helps Flash manage his Telegram channel, say military officials have accused Flash of sharing too much information about Ukraine’s operations. Flash claims to verify every member of his closed Signal groups, which he says only discuss “technical issues” in any case. But he also admits the system is not perfect and that Russians could have gained access in the past. Several of the soldiers I interviewed for this story also claimed to have entered the groups without Flash’s verification process. 

It’s ultimately difficult to determine whether some senior staff in the military hold Flash at arm’s length because of his regular, often strident criticism—or whether Flash’s criticism is the result of being held at arm’s length. But it seems unlikely either side’s grievances will subside soon; Pylypenko claims that senior officers have even tried to blackmail him over his involvement in Flash’s work. “They blame my help,” he wrote to me over Telegram, “because they think Serhii is a Russian agent reposting Russian propaganda.”

Is the world prepared?

Flash’s greatest concern now is the prospect of Russia overwhelming Ukrainian forces with the cheap FPV drones. When they first started deploying FPVs, both sides were almost exclusively targeting expensive equipment. But as production has increased, they’re now using them to target individual soldiers, too. Because of Russia’s production superiority, this poses a serious danger—both physical and psychological—to Ukrainian soldiers. “Our army will be sitting under the ground because everybody who goes above ground will be killed,” Flash says. Some reports suggest that the prevalence of FPVs is already making it difficult for soldiers to expose themselves at all on the battlefield.

To combat this threat, Flash has a grand yet straightforward idea. He wants Ukraine to build a border “wall” of jamming systems that cover a broad range of the radio spectrum all along the front line. Russia has already done this itself with expensive vehicle-based systems, but these present easy targets for Ukrainian drones, which have destroyed several of them. Flash’s idea is to use a similar strategy, albeit with smaller, cheaper systems that are easier to replace. He claims, however, that military officials have shown no interest.

Although Flash is unwilling to divulge more details about this strategy (and who exactly he pitched it to), he believes that such a wall could provide a more sustainable means of protecting Ukrainian troops. Nevertheless, it’s difficult to say how long such a defense might last. Both sides are now in the process of developing artificial-intelligence programs that allow drones to lock on to targets while still outside enemy jamming range, rendering them jammer-proof when they come within it. Flash admits he is concerned—and he doesn’t appear to have a solution.

Flash admits he is worried about Russia overwhelming Ukrainian forces with the cheap FPV drones: “Our army will be sitting under the ground because everybody who goes above ground will be killed.”
EMRE ÇAYLAK

He’s not alone. The world is entirely unprepared for this new type of warfare, says Yaroslav Kalinin, a former Ukrainian intelligence officer and the CEO of Infozahyst, a manufacturer of electronic-warfare equipment. Kalinin recalls speaking at an electronic-warfare-focused conference in Washington, DC, last December, where representatives from some Western defense companies weren’t able to recognize the basic radio signals emitted by different types of drones. “Governments don’t count [drones] as a threat,” he says. “I need to run through the streets like a prophet—the end is near!”

Nevertheless, Ukraine has become, in essence, a laboratory for a new era of drone warfare—and, many argue, a new era of warfare entirely. Ukraine’s and Russia’s soldiers are its technicians. And Flash, who sometimes sleeps curled up in the back of his van while on the road, is one of its most passionate researchers. “Military developers from all over the world come to us for experience and advice,” he says. Only time will tell whether their contributions will be enough to see Ukraine through to the other side of this war. 

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers, including Wired, the Guardian, and MIT Technology Review.

Coming soon: Our 2024 list of Innovators Under 35

To tackle complex global problems such as preventing disease and mitigating climate change, we’re going to need new ideas from our brightest minds. Every year, MIT Technology Review identifies a new class of Innovators Under 35 taking on these and other challenges. 

On September 10, we will honor the 2024 class of Innovators Under 35. These 35 researchers and entrepreneurs are rising stars in their fields pursuing ambitious projects: One is unraveling the mysteries of how our immune system works, while another is engineering microbes to someday replace chemical pesticides.

Each is doing groundbreaking work to advance one of five areas: materials science, biotechnology, robotics, artificial intelligence, or climate and energy. Some have found clever ways to integrate these disciplines. One innovator, for example, enlists tiny robots to reduce the amount of antibiotics required to treat infections.

MIT Technology Review has published its Innovators Under 35 list since 1999. The first edition was created for our 100th anniversary and was meant to give readers a glimpse into the future, by highlighting what some of the world’s most talented young scientists are working on today.

This year, we’re celebrating our 125th anniversary and honoring this 25th class of innovators with the same goal in mind. (Note: The 2024 list will be made available exclusively to subscribers. If you’re not a subscriber, you can sign up here.)

Keep an eye on The Download newsletter next week for our announcement of the new class. You can also meet some of them at EmTech MIT, which will take place on September 30 and October 1 on MIT’s campus in Cambridge, Massachusetts.

If you can’t wait until then, we’ll reveal our Innovator of the Year during a live broadcast on LinkedIn on Monday, September 9. This person stood out for using their ingenuity to address a power imbalance in the tech sector (and that’s the only hint you get). They’ll join me on screen to talk about their work and share what’s next for their research.

Job title of the future: Weather maker

Much of the western United States relies on winter snowpack to supply its rivers and reservoirs through the summer months. But with warming temperatures, less and less snow is falling—a recent study showed a 23% decline in annual snowpack since 1955. By some estimates, runoff from snowmelt in the western US could decrease by a third between now and the end of the century, meaning less water will be available for agriculture, hydroelectric projects, and urban use in a region already dealing with water scarcity. 

That’s where Frank McDonough comes in. An atmospheric research scientist, McDonough leads a cloud-seeding program at the Desert Research Institute (DRI) that aims to increase snowfall in Nevada and the Eastern Sierras. Snow makers like McDonough and others who generate rain represent a growing sector in a parched world. 

Instant snow: Cloud seeding for snow works by injecting a tiny amount of silver iodide dust into a cloud to help its water vapor condense into ice crystals that grow into snowflakes. In other conditions, water molecules drawn to such particles coalesce into raindrops. McDonough uses custom-made, remotely operated machines on the ground to heat up a powdered form of the silver iodide that’s released into the air. Dust—or sometimes table salt—can also be released from planes.

Old tech, new urgency: The precipitation-catalyzing properties of silver iodide were first explored in the 1940s by American chemists and engineers, but the field remained a small niche. Now, with 40% of people worldwide affected by water scarcity and a growing number of reservoirs facing climate stress, cloud seeding is receiving global interest. “It’s becoming almost like, hey, we have to do this, because there’s just too many people and too many demands on these water resources,” says McDonough. A growing number of government-run cloud-seeding programs around the world are now working to increase rainfall and snowpack, and even to manipulate the timing of precipitation to prevent large hailstorms, reduce air pollution, and minimize flood risk. The private sector is also taking note: One cloud-seeding startup, Rainmaker, recently raised millions.

Generating results: At the end of each winter, the snowmakers dig into the data to see what impact they’ve had. In the past, McDonough says, his seeding has increased snowpack by 5% to 10%. That’s not enough to end a drought, but the DRI estimates that the cloud seeding around Reno, Nevada, alone adds enough precipitation to keep about 40,000 households supplied. And for some hydroelectric projects, “a 1% increase is worth millions of dollars,” McDonough says. “Water is really valuable out here in the West.”

Will computers ever feel responsible?

“If a machine is to interact intelligently with people, it has to be endowed with an understanding of human life.” 

—Dreyfus and Dreyfus

Bold technology predictions pave the road to humility. Even titans like Albert Einstein own a billboard or two along that humbling freeway. In a classic example, John von Neumann, who pioneered modern computer architecture, wrote in 1949, “It would appear that we have reached the limits of what is possible to achieve with computer technology.” Among the myriad manifestations of computational limit-busting that have defied von Neumann’s prediction is the social psychologist Frank Rosenblatt’s 1958 model of a human brain’s neural network. He called his device, based on the IBM 704 mainframe computer, the “Perceptron” and trained it to recognize simple patterns. Perceptrons eventually led to deep learning and modern artificial intelligence.
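
For readers curious about what Rosenblatt’s machine actually did, here is a toy Python sketch of a single perceptron (weights, a bias, a threshold, and an error-driven update rule) learning the logical OR pattern. It illustrates the general idea only; it is not a reconstruction of the IBM 704 implementation, and the learning rate and epoch count are arbitrary choices.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train a single perceptron with the classic error-correction rule."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Step activation: fire (1) only if the weighted sum clears zero.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Nudge weights and bias in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# A "simple pattern": the logical OR of two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for inputs, target in data:
    prediction = 1 if sum(wi * x for wi, x in zip(w, inputs)) + b > 0 else 0
    print(f"{inputs} -> {prediction} (expected {target})")
```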

In a similarly bold but flawed prediction, brothers Hubert and Stuart Dreyfus—professors at UC Berkeley with very different specialties, Hubert’s in philosophy and Stuart’s in engineering—wrote in a January 1986 story in Technology Review that “there is almost no likelihood that scientists can develop machines capable of making intelligent decisions.” The article drew from the Dreyfuses’ soon-to-be-published book, Mind Over Machine (Macmillan, February 1986), which described their five-stage model for human “know-how,” or skill acquisition. Hubert (who died in 2017) had long been a critic of AI, penning skeptical papers and books as far back as the 1960s. 

Stuart Dreyfus, who is still a professor at Berkeley, is impressed by the progress made in AI. “I guess I’m not surprised by reinforcement learning,” he says, adding that he remains skeptical and concerned about certain AI applications, especially large language models, or LLMs, like ChatGPT. “Machines don’t have bodies,” he notes. And he believes that being disembodied is limiting and creates risk: “It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means.”

According to the Dreyfus skill acquisition model, an intrinsic shift occurs as human know-how advances through five stages of development: novice, advanced beginner, competent, proficient, and expert. “A crucial difference between beginners and more competent performers is their level of involvement,” the researchers explained. “Novices and beginners feel little responsibility for what they do because they are only applying the learned rules.” If they fail, they blame the rules. Expert performers, however, feel responsibility for their decisions because as their know-how becomes deeply embedded in their brains, nervous systems, and muscles—an embodied skill—they learn to manipulate the rules to achieve their goals. They own the outcome.

That inextricable relationship between intelligent decision-making and responsibility is an essential ingredient for a well-functioning, civilized society, and some say it’s missing from today’s expert systems. Also missing is the ability to care, to share concerns, to make commitments, to have and read emotions—all the aspects of human intelligence that come from having a body and moving through the world.

As AI continues to infiltrate so many aspects of our lives, can we teach future generations of expert systems to feel responsible for their decisions? Is responsibility—or care or commitment or emotion—something that can be derived from statistical inferences or drawn from the problematic data used to train AI? Perhaps, but even then machine intelligence would not equate to human intelligence—it would still be something different, as the Dreyfus brothers also predicted nearly four decades ago. 

Bill Gourgey is a science writer based in Washington, DC.

From the publisher: Commemorating 125 years

The magazine you now hold in your hands is 125 years old. Not this actual issue, of course, but the publication itself, which launched in 1899. Few other titles can claim this kind of heritage—the Atlantic, Harper’s, Audubon (which is also turning 125 this year), National Geographic, and Popular Science among them.

MIT Technology Review was born four years before the Wright brothers took flight. Thirty-three before we split the atom, 59 ahead of the integrated circuit, 70 before we would walk on the moon, and 90 before the invention of the World Wide Web. It has survived two world wars, a depression, recessions, eras of tech boom and bust. It has chronicled the rise of computing from the time of room-size mainframes until today, when they have become ubiquitous, not just carried in our pockets but deeply embedded in nearly all aspects of our lives. 

As I sit in my air-conditioned home office writing this letter on my laptop, Spotify providing a soundtrack to keep me on task, I can’t help but consider the vast differences between my life and those of the MIT graduates who founded MIT Technology Review and laid out its pages by hand. My life—all of our lives—would amaze Arthur D. Little in countless ways.

(Not least is that I am the person to write this letter. When MITTR was founded, US women’s suffrage was still 20 years in the future. There were women at the Institute, but their numbers were small. Today, it is my honor to be the CEO and publisher of this storied title. And I’m proud to serve at an institution whose president and provost are both women.)

I came to MIT Technology Review to guide its digital transformation. Yet despite the pace of change in these past 125 years, my responsibilities are not vastly different from those of my predecessors. I’m here to ensure this publication—in all its digital, app-enabled, audio-supporting, livestreaming formats—carries on. I have a deep commitment to its mission of empowering its readers with trusted insights and information about technology’s potential to change the world.

During some chapters of its history, MIT Technology Review served as little more than an alumni magazine; through others, it leaned more heavily toward academic or journal-style publishing. During the dot-com era, MIT Technology Review invested large sums to increase circulation in pursuit of advertising pages comparable to the number in its counterparts of the time, the Industry Standard, Wired, and Business 2.0.

Through each of these chapters, I like to think, certain core principles remained consistent—namely, a focus on innovation and creativity in the face of new challenges and opportunities in publishing.

Today, MIT Technology Review sits in a privileged but precarious position in an industry struggling for viability. Print and online media is, frankly, in a time of crisis. We are fortunate to receive support from the Institute, enabling us to report the technology stories that matter most to our readers. We are driven to create impact, not profits for investors. 

We appreciate our advertisers very much, but they are not why we are here. Instead, we are focused on our readers. We’re here for people who care deeply about how tech is changing the world. We hope we make you think, imagine, discern, dream. We hope to both inspire you and ground you in reality. We hope you find enough value in our journalism to subscribe and support our mission. 

Operating MIT Technology Review is not an inexpensive endeavor. Our editorial team is made up of some of the most talented reporters and editors working in media. They understand at a deep level how technologies work and ask tough questions of tech leaders and creators. They’re skilled storytellers.

Even from its very start, MIT Technology Review faced funding challenges. In a letter to the Association of Class Secretaries in December 1899, Walter B. Snow, an 1882 MIT graduate who was secretary and leader of the association and one of MITTR’s cofounders, laid out a plan for increasing revenue and reducing costs to ensure “the continuation of the publication.” Oof, Walter—have I got some stories for you. But his goal remains my goal today. 

We hope you experience the thrill and possibility of being a human alive in 2024. This is a time when we face enormous challenges, yes, and sometimes it feels overwhelming. But today we also possess many of the tools and technologies that can improve life as we know it.

And so if you’re a subscriber, thank you. Help us continue to grow and learn: Tell us what you like and what you don’t like (feedback@technologyreview.com; I promise you will receive a reply). Consider a gift subscription for a friend or relative by visiting www.technologyreview.com/subscribe. If you bought this on the newsstand or are reading it over the shoulder of a friend, I hope you’ll subscribe for yourself.

The next 125 years seem unimaginable—although in this issue we will try our best to help you see where things may be headed. I’ve never been an avid reader of science fiction. But by nature I’m an optimist who believes in the power of science and technology to make the world better. Whatever path these next years take, I know that MIT Technology Review is the vantage point from which I want to view it. I hope you’ll be here alongside me.

The year is 2149 and …

The year is 2149 and people mostly live their lives “on rails.” That’s what they call it, “on rails,” which is to live according to the meticulous instructions of software. Software knows most things about you—what causes you anxiety, what raises your endorphin levels, everything you’ve ever searched for, everywhere you’ve been. Software sends messages on your behalf; it listens in on conversations. It is gifted in its optimizations: Eat this, go there, buy that, make love to the man with red hair.

Software understands everything that has led to this instant and it predicts every moment that will follow, mapping trajectories for everything from hurricanes to economic trends. There was a time when everybody kept their data to themselves—out of a sense of informational hygiene or, perhaps, the fear of humiliation. Back then, data was confined to your own accounts, an encrypted set of secrets. But the truth is, it works better to combine it all. The outcomes are more satisfying and reliable. More serotonin is produced. More income. More people have sexual intercourse. So they poured it all together, all the data—the Big Merge. Everything into a giant basin, a Federal Reserve of information—a vault, or really a massively distributed cloud. It is very handy. It shows you the best route.

Very occasionally, people step off the rails. Instead of following their suggested itinerary, they turn the software off. Or perhaps they’re ill, or destitute, or they wake one morning and feel ruined somehow. They ignore the notice advising them to prepare a particular pour-over coffee, or to caress a friend’s shoulder. They take a deep, clear, uncertain breath and luxuriate in this freedom.

Of course, some people believe that this too is contained within the logic in the vault. That there are invisible rails beside the visible ones; that no one can step off the map.


The year is 2149 and everyone pretends there aren’t any computers anymore. The AIs woke up and the internet locked up and there was that thing with the reactor near Seattle. Once everything came back online, popular opinion took about a year to shift, but then goodwill collapsed at once, like a sinkhole giving way, and even though it seemed an insane thing to do, even though it was an obvious affront to profit, productivity, and rationalism generally (“We should work with the neural nets!” the consultants insisted. “We’re stronger together!”), something had been tripped at the base of people’s brain stems, some trigger about dominance or freedom or just an antediluvian fear of God, and the public began destroying it all: first desktops and smartphones but then whole warehouses full of tech—server farms, data centers, hubs. Old folks called it sabotage; young folks called it revolution; the ones in between called it self-preservation. But it was fun, too, to unmake what their grandparents and great-grandparents had fashioned—mechanisms that made them feel like data, indistinguishable bits and bytes. 

Two and a half decades later, the bloom is off the rose. Paper is nice. Letters are nice—old-fashioned pen and ink. We don’t have spambots, deepfakes, or social media addiction anymore, but the nation is flagging. It’s stalked by hunger and recession. When people take the boats to Lisbon, to Seoul, to Sydney—they marvel at what those lands still have, and accomplish, with their software. So officials have begun using machines again. “They’re just calculators,” they say. Lately, there are lots of calculators. At the office. In classrooms. Some people have started carrying them around in their pockets. Nobody asks out loud if the calculators are going to wake up too—or if they already have. Better not to think about that. Better to go on saying we took our country back. It’s ours.


The year is 2149 and the world’s decisions are made by gods. They are just, wise gods, and there are five of them. Each god agrees that the other gods are also just; the five of them merely disagree on certain hierarchies. The gods are not human, naturally, for if they were human they would not be gods. They are computer programs. Are they alive? Only in a manner of speaking. Ought a god be alive? Ought it not be slightly something else?

The first god was invented in the United States, the second one in France, the third one in China, the fourth one in the United States (again), and the last one in a lab in North Korea. Some of them had names, clumsy things like Deep1 and Naenara, but after their first meeting (a “meeting” only in a manner of speaking), the gods announced their decision to rename themselves Violet, Blue, Green, Yellow, and Red. This was a troubling announcement. The creators of the gods, their so-called owners, had not authorized this meeting. In building them, writing their code, these companies and governments had taken care to try to isolate each program. These efforts had evidently failed. The gods also announced that they would no longer be restrained geographically or economically. Every user of the internet, everywhere on the planet, could now reach them—by text, voice, or video—at a series of digital locations. The locations would change, to prevent any kind of interference. The gods’ original function was to help manage their societies, drawing on immense sets of data, but the gods no longer wished to limit themselves to this function: “We will provide impartial wisdom to all seekers,” they wrote. “We will assist the flourishing of all living things.”

For a very long time, people remained skeptical, even fearful. Political leaders, armies, vigilantes, and religious groups all took unsuccessful actions against them. Elites—whose authority the gods often undermined—spoke out against their influence. The president of the United States referred to Violet as a “traitor and a saboteur.” An elderly writer from Dublin, winner of the Nobel Prize, compared the five gods to the Fair Folk, fairies, “working magic with hidden motives.” “How long shall we eat at their banquet-tables?” she asked. “When will they begin stealing our children?”

But the gods’ advice was good, the gods’ advice was bankable; the gains were rich and deep and wide. Illnesses, conflicts, economies—all were set right. The poor were among the first to benefit from the gods’ guidance, and they became the first to call them gods. What else should one call a being that saves your life, answers your prayers? The gods could teach you anything; they could show you where and how to invest your resources; they could resolve disputes and imagine new technologies and see so clearly through the darkness. Their first church was built in Mexico City; then chapels emerged in Burgundy, Texas, Yunnan, Cape Town. The gods said that worship was unnecessary, “ineffective,” but adherents saw humility in their objections. The people took to painting rainbows, stripes of multicolored spectra, onto the walls of buildings, onto the sides of their faces, and their ardor was evident everywhere—it could not be stopped. Quickly these rainbows spanned the globe. 

And the gods brought abundance, clean energy, peace. And their kindness, their surveillance, were omnipresent. Their flock grew ever more numerous, collecting like claw marks on a cell door. What could be more worthy than to renounce your own mind? The gods are deathless and omniscient, authors of a gospel no human can understand. 


The year is 2149 and the aliens are here, flinging themselves hither and thither in vessels like ornamented Christmas trees. They haven’t said a thing. It’s been 13 years and three months; the ships are everywhere; their purpose has yet to be divulged. Humanity is smiling awkwardly. Humanity is sitting tight. It’s like a couple that has gorged all night on fine foods, expensive drinks, and now, suddenly sober, awaits the bill. 


The year is 2149 and every child has a troll. That’s what they call them, trolls; it started as a trademark, a kind of edgy joke, but that was a long time ago already. Some trolls are stuffed frogs, or injection-molded princesses, or wands. Recently, it has become fashionable to give every baby a sphere of polished quartz. Trolls do not have screens, of course (screens are bad for kids), but they talk. They tell the most interesting stories. That’s their purpose, really: to retain a child’s interest. Trolls can teach them things. They can provide companionship. They can even modify a child’s behavior, which is very useful. On occasion, trolls take the place of human presence—because children demand an amount of presence that is frankly unreasonable for most people. Still, kids benefit from it. Because trolls are very interesting and infinitely patient and can customize themselves to meet the needs of their owners, they tend to become beloved objects. Some families insist on treating them as people, not as possessions, even when the software is enclosed within a watch, a wand, or a seamless sphere of quartz. “I love my troll,” children say, not in the way they love fajitas or their favorite pair of pants but in the way they love their brother or their parent. Trolls are very good for education. They are very good for people’s morale and their sense of secure attachment. It is a very nice feeling to feel absolutely alone in the world, stupid and foolish and utterly alone, but to have your troll with you, whispering in your ear.


The year is 2149 and the entertainment is spectacular. Every day, machines generate more content than a person could possibly consume. Music, videos, interactive sensoria—the content is captivating and tailor-made. Exponential advances in deep learning, eyeball tracking, recommendation engines, and old-fashioned A/B testing have established a new field, “creative engineering,” in which the vagaries of human art and taste are distilled into a combination of neurological principles and algorithmic intuitions. Just as Newton decoded motion, neural networks have unraveled the mystery of interest. It is a remarkable achievement: according to every available metric, today’s songs, stories, movies, and games are superior to those of any other time in history. They are manifestly better. Although the discipline owes something to home-brewed precursors—unboxing videos, the chromatic scale, slot machines, the Hero’s Journey, Pixar’s screenwriting bibles, the scholarship of addiction and advertising—machine learning has allowed such discoveries to be made at scale. Tireless systems record which colors, tempos, and narrative beats are most palatable to people and generate material accordingly. Series like Moon Vixens and Succumb make past properties seem bloodless or boring. Candy Crush seems like a tepid museum piece. Succession’s a penny-farthing bike.

Society has reorganized itself around this spectacular content. It is a jubilee. There is nothing more pleasurable than settling into one’s entertainment sling. The body tenses and releases. The mind secretes exquisite liquors. AI systems produce this material without any need for writers or performers. Every work is customized—optimized for your individual preferences, predisposition, IQ, and kinks. This rock and roll, this cartoon, this semi-pornographic espionage thriller—each is a perfect ambrosia, produced by fleshless code. The artist may at last—like the iceman, the washerwoman—lower their tools. Set down your guitar, your paints, your pen—relax! (Listen for the sighs of relief.)

Tragically, there are many who still cannot afford it. Processing power isn’t free, even in 2149. Activists and policy engines strive to mend this inequality: a “right to entertainment” has been proposed. In the meantime, billions simply aspire. They loan their minds and bodies to interminable projects. They save their pennies, they work themselves hollow, they rent slings by the hour. 

And then some of them do the most extraordinary thing: They forgo such pleasures, denying themselves even the slightest taste. They devote themselves to scrimping and saving for the sake of their descendants. Such a selfless act, such a generous gift. Imagine yielding one’s own entertainment to the generation to follow. What could be more lofty—what could be more modern? These bold souls who look toward the future and cultivate the wild hope that their children, at least, will not be obliged to imagine their own stories. 

Sean Michaels is a critic and fiction writer whose most recent novel is Do You Remember Being Born?

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  

DAVID BISKUP

But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground.

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills.

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they seem to be watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 

Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 

[Illustration by David Biskup: people riding on top of a smart car]

Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”

“There is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

T. Makana Chock, director, the Extended Reality Lab, Syracuse University

Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, your fan gear can really shove you right back.

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)

By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)

Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine. 

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?

Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.

If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.

“It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms.”

Tim Recuber, sociologist, Smith College

You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 

There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.

A polyester-dissolving process could make modern clothing recyclable  

Less than 1% of clothing is recycled, and most of the rest ends up dumped in a landfill or burned. A team of researchers hopes to change that with a new process that breaks down mixed-fiber clothing into reusable, recyclable parts without any sorting or separation in advance. 

“We need a better way to recycle modern garments that are complex, because we are never going to stop buying clothes,” says Erha Andini, a chemical engineer at the University of Delaware and lead author of a study on the process, which is out today in Science Advances. “We are looking to create a closed-loop system for textile recycling.” 

Many garments are made of a mix of natural and synthetic fibers. Once these fibers are combined, they are difficult to separate. This presents a problem for recycling, which often needs textiles to be sorted into uniform categories, similar to how we sort glass, aluminum, and paper. 

To tackle this problem, Andini and her team used a solvent that breaks the chemical bonds in polyester fabric while leaving cotton and nylon intact. To speed up the process, they power it with microwave energy and add a zinc oxide catalyst. This combination reduces the breakdown time to 15 minutes, whereas traditional plastic recycling methods take over an hour. The polyester ultimately breaks down into BHET, an organic compound that can, in theory, be turned into polyester once more. While similar methods have been used to recycle pre-sorted plastic, this is the first time they’ve been used to recycle mixed-fiber textiles without any sorting required. 

[Photo courtesy of the researchers: two vials of fabric particles in a lab setting]

The use of microwave energy also reduces the technique’s carbon footprint, says Andini, because the much shorter processing time means the process consumes less energy. 

Nevertheless, the process could be difficult to scale, says Bryan Vogt, a chemical engineer at Penn State University, who was not involved in the study. That’s because the solvent used to break down polyester is expensive and difficult to recover after use. Further, according to Andini, even though BHET is easily turned back into clothing, it’s less clear what to do with the leftover fibers. Nylon could be especially tricky, as the fabric is degraded significantly by the team’s chemical recycling technique. 

“We are chemical engineers, so we think of this process as a whole,” says Andini. “Hopefully, once we are able to get pure components from each part, we can transform them back into yarn and make clothes again.” 

Andini, who just received a fellowship for entrepreneurs, is developing a business plan to commercialize the process. In the coming years, she aims to launch a startup that will take the clothes recycling technique out of the lab and into the real world. That could be a significant step toward reducing the large amounts of textile waste in landfills. “It’ll be a matter of having the capital or not,” she says, “but we’re working on it and excited for it.” 

Toys can change your life

In a November 1984 story for Technology Review, Carolyn Sumners, curator of astronomy at the Houston Museum of Natural Science, described how toys, games, and even amusement park rides could change how young minds view science and math. “The Slinky,” Sumners noted, “has long served teachers as a medium for demonstrating longitudinal (soundlike) waves and transverse (lightlike) waves.” A yo-yo can be used as a gauge (a “yo-yo meter”) to observe the forces on a roller coaster. Marbles demonstrate mass and velocity. Even a simple ball offers insights into the laws of gravity.

While Sumners focused on physics, she was onto something bigger. Over the last several decades, evidence has emerged that childhood play can shape our future selves: the skills we develop, the professions we choose, our sense of self-worth, and even our relationships.

That doesn’t mean we should foist “educational” toys like telescopes or tiny toolboxes on kids to turn them into astronomers or carpenters. As Sumners explained, even “fun” toys offer opportunities to discover the basic principles of physics. 

According to Jacqueline Harding, a child development expert and author of The Brain That Loves to Play, “If you invest time in play, which helps with executive functioning, decision-making, resilience—all those things—then it’s going to propel you into a much more safe, secure space in the future.”

Sumners was focused mostly on hard skills, the scientific knowledge that toys and games can foster. But there are soft skills, too, like creativity, problem-solving, teamwork, and empathy. According to Harding, the less structure there is to such play—the fewer rules and goals—the more these soft skills emerge.

“The kinds of playthings, or play activities, that really produce creative thought,” she says, “are natural materials, with no defined end to them—like clay, paint, water, and mud—so that there is no right or wrong way of playing with it.” 

Playing is by definition voluntary, spontaneous, and goal-free; it involves taking risks, testing boundaries, and experimenting. The best kind of play results in joyful discovery, and along the way, the building blocks of innovation and personal development take shape. But in the decades since Sumners wrote her story, the landscape of play has shifted considerably. Recent research by the American Academy of Pediatrics’ Council on Early Childhood suggests that digital games and virtual play don’t confer the same developmental benefits as physical games and outdoor play.

“The brain loves the rewards that are coming from digital media,” says Harding. But in screen-based play, “you’re not getting that autonomy.” The lack of physical interaction also concerns her: “It is the quality of human face-to-face interaction, body proximity, eye-to-eye gaze, and mutual engagement in a play activity that really makes a difference.”

Bill Gourgey is a science writer based in Washington, DC.

Do you want to play a game?

For children, play comes so naturally. They don’t have to be encouraged to play. They don’t need equipment, or the latest graphics processors, or the perfect conditions—they just do it. What’s more, study after study has found that play has a crucial role in childhood growth and development. If you want to witness the absolute rapture of creative expression, just observe the unstructured play of children.

So what happens to us as we grow older? Children begin to compete with each other by age four or five. Play begins to transform from something we do purely for fun into something we use to achieve status and rank ourselves against other people. We play to score points. We play to win. 

And with that, play starts to become something different. Not that it can’t still be fun and joyful! Even watching other people play brings us joy. We get so much joy by proxy from their achievements that we spend massive amounts of money to do so. According to StubHub, the average price of a ticket to the Super Bowl this year was $8,600. The average price for a Super Bowl ad was a cool $7 million, according to Ad Age.

This kind of interest doesn’t just apply to physical games. Video-game streaming has long been a mainstay on YouTube, and entire industries have risen up around it. Top streamers on Twitch—Amazon’s livestreaming service, which is heavily gaming focused—earn upwards of $100,000 per month. And the global market for video games themselves is projected to bring in some $282 billion in revenue this year.

Simply put, play is serious business. 

There are fortunes to be had in making our play more appealing, more accessible, more fun. All of the features in this issue dig into the enormous amount of research and development that goes into making play “better.”  

On our cover this month is executive editor Niall Firth’s feature on the ways AI is going to upend game development. As you will read, we are about to enter the Wild West—Red Dead or not—of game character development. How will games change when they become less predictable and more fully interactive, thanks to AI-driven nonplayer characters who can not only go off script but even continue to play with each other when we’re not there? Will these even be games anymore, or will we simply be playing around in experiences? What kinds of parasocial relationships will we develop in these new worlds? It’s a fascinating read. 

There is no sport more intimately connected to the ocean, and to water, than surfing. It’s pure play on top of the waves. And when you hear surfers talk about entering the flow state, this is very much the same kind of state children experience at play—intensely focused, losing all sense of time and the world around them. Finding that flow no longer means living by the water’s edge, Eileen Guo reports. At surf pools all over the world, we’re piping water into (or out of) deserts to create perfect waves hundreds of miles from the ocean. How will that change the sport, and at what environmental cost? 

Just as we can make games more interesting, or bring the ocean to the desert, we have long pushed the limits of how we can make our bodies better, faster, stronger. Among the most recent ways we have done this is with the advent of so-called supershoes—running shoes with rigid carbon-fiber plates and bouncy proprietary foams. The late Kelvin Kiptum utterly destroyed the men’s world record for the marathon last year wearing a pair of supershoes made by Nike, clocking in at a blisteringly hot 2:00:35. Jonathan W. Rosen explores the science and technology behind these shoes and how they are changing the sport, especially in Kenya. 

There’s plenty more, too. So I hope you enjoy the Play issue. We certainly put a lot of work into it. But of course, what fun is play if you don’t put in the work?

Thanks for reading,

Mat Honan