In a November 1984 story for Technology Review, Carolyn Sumners, curator of astronomy at the Houston Museum of Natural Science, described how toys, games, and even amusement park rides could change how young minds view science and math. “The Slinky,” Sumners noted, “has long served teachers as a medium for demonstrating longitudinal (soundlike) waves and transverse (lightlike) waves.” A yo-yo can be used as a gauge (a “yo-yo meter”) to observe the forces on a roller coaster. Marbles demonstrate the interplay of mass and velocity. Even a simple ball offers insights into the laws of gravity.
While Sumners focused on physics, she was onto something bigger. Over the last several decades, evidence has emerged that childhood play can shape our future selves: the skills we develop, the professions we choose, our sense of self-worth, and even our relationships.
That doesn’t mean we should foist “educational” toys like telescopes or tiny toolboxes on kids to turn them into astronomers or carpenters. As Sumners explained, even “fun” toys offer opportunities to discover the basic principles of physics.
According to Jacqueline Harding, a child development expert and author of The Brain That Loves to Play, “If you invest time in play, which helps with executive functioning, decision-making, resilience—all those things—then it’s going to propel you into a much more safe, secure space in the future.”
Sumners was focused mostly on hard skills, the scientific knowledge that toys and games can foster. But there are soft skills, too, like creativity, problem-solving, teamwork, and empathy. According to Harding, the less structure there is to such play—the fewer rules and goals—the more these soft skills emerge.
“The kinds of playthings, or play activities, that really produce creative thought,” she says, “are natural materials, with no defined end to them—like clay, paint, water, and mud—so that there is no right or wrong way of playing with it.”
Playing is by definition voluntary, spontaneous, and goal-free; it involves taking risks, testing boundaries, and experimenting. The best kind of play results in joyful discovery, and along the way, the building blocks of innovation and personal development take shape. But in the decades since Sumners wrote her story, the landscape of play has shifted considerably. Recent research by the American Academy of Pediatrics’ Council on Early Childhood suggests that digital games and virtual play don’t confer the same developmental benefits as physical games and outdoor play.
“The brain loves the rewards that are coming from digital media,” says Harding. But in screen-based play, “you’re not getting that autonomy.” The lack of physical interaction also concerns her: “It is the quality of human face-to-face interaction, body proximity, eye-to-eye gaze, and mutual engagement in a play activity that really makes a difference.”
Bill Gourgey is a science writer based in Washington, DC.
For children, play comes so naturally. They don’t have to be encouraged to play. They don’t need equipment, or the latest graphics processors, or the perfect conditions—they just do it. What’s more, study after study has found that play has a crucial role in childhood growth and development. If you want to witness the absolute rapture of creative expression, just observe the unstructured play of children.
So what happens to us as we grow older? Children begin to compete with each other by age four or five. Play begins to transform from something we do purely for fun into something we use to achieve status and rank ourselves against other people. We play to score points. We play to win.
And with that, play starts to become something different. Not that it can’t still be fun and joyful! We get so much joy by proxy from watching other people play and achieve that we spend massive amounts of money to do so. According to StubHub, the average price of a ticket to this year’s Super Bowl was $8,600. And the average price of a Super Bowl ad, according to Ad Age, was a cool $7 million.
This kind of interest doesn’t just apply to physical games. Video-game streaming has long been a mainstay on YouTube, and entire industries have risen up around it. Top streamers on Twitch—Amazon’s livestreaming service, which is heavily gaming-focused—earn upwards of $100,000 per month. And the global market for video games themselves is projected to bring in some $282 billion in revenue this year.
Simply put, play is serious business.
There are fortunes to be had in making our play more appealing, more accessible, more fun. All of the features in this issue dig into the enormous amount of research and development that goes into making play “better.”
On our cover this month is executive editor Niall Firth’s feature on the ways AI is going to upend game development. As you will read, we are about to enter the Wild West—Red Dead or not—of game character development. How will games change when they become less predictable and more fully interactive, thanks to AI-driven nonplayer characters who can not only go off script but even continue to play with each other when we’re not there? Will these even be games anymore, or will we simply be playing around in experiences? What kinds of parasocial relationships will we develop in these new worlds? It’s a fascinating read.
There is no sport more intimately connected to the ocean, and to water, than surfing. It’s pure play on top of the waves. And when you hear surfers talk about entering the flow state, this is very much the same kind of state children experience at play—intensely focused, losing all sense of time and the world around them. Finding that flow no longer means living by the water’s edge, Eileen Guo reports. At surf pools all over the world, we’re piping water into (or out of) deserts to create perfect waves hundreds of miles from the ocean. How will that change the sport, and at what environmental cost?
Just as we can make games more interesting, or bring the ocean to the desert, we have long pushed the limits of how we can make our bodies better, faster, stronger. Among the most recent ways we have done this is with the advent of so-called supershoes—running shoes with rigid carbon-fiber plates and bouncy proprietary foams. The late Kelvin Kiptum utterly destroyed the men’s world record for the marathon last year wearing a pair of supershoes made by Nike, clocking in at a blistering 2:00:35. Jonathan W. Rosen explores the science and technology behind these shoes and how they are changing the sport, especially in Kenya.
There’s plenty more, too. So I hope you enjoy the Play issue. We certainly put a lot of work into it. But of course, what fun is play if you don’t put in the work?
This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.
Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for photography and surveillance, as well as for spraying pesticides, moving parcels, and many other purposes around the world.
But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations.
The reason? While its market dominance has attracted scrutiny for years, it’s increasingly clear that DJI’s commercial products are so good and affordable that they are also being used on active battlefields to scout out the enemy or carry bombs. As the US worries about the potential for conflict between China and Taiwan, the military implications of DJI’s commercial drones are becoming a top policy concern.
DJI has managed to set the gold standard for commercial drones because it is built on decades of electronic manufacturing prowess and policy support in Shenzhen. It is an example of how China’s manufacturing advantage can turn into a technological one.
“I’ve been to the DJI factory many times … and mainly, China’s industrial base is so deep that every component ends up being a fraction of the cost,” Sam Schmitz, the mechanical engineering lead at Neuralink, wrote on X. Shenzhen and surrounding towns have had a robust factory scene for decades, providing an indispensable supply chain for a hardware industry like drones. “This factory made almost everything, and it’s surrounded by thousands of factories that make everything else … nowhere else in the world can you run out of some weird screw and just walk down the street until you find someone selling thousands of them,” he wrote.
But Shenzhen’s municipal government has also significantly contributed to the industry. For example, it has granted companies more permission for potentially risky experiments and set up subsidies and policy support. Last year, I visited Shenzhen to experience how it’s already incorporating drones in everyday food delivery, but the city is also working with companies to use drones for bigger and bigger jobs—carrying everything from packages to passengers. All of this feeds into a plan to build up the “low-altitude economy” in Shenzhen that keeps the city on the leading edge of drone technology.
As a result, the supply chain in Shenzhen has become so competitive that the world can’t really use drones without it. Chinese drones are simply the most accessible and affordable out there.
This reliance on one Chinese company and the supply chain behind it is what worries US politicians, but the danger would be more pronounced in any conflict between China and Taiwan, a prospect that is a huge security concern in the US and globally.
Last week, my colleague James O’Donnell wrote about a report by the think tank Center for a New American Security (CNAS) that analyzed the role of drones in a potential war in the Taiwan Strait. Right now, both Ukraine and Russia are still finding ways to source drones or drone parts from Chinese companies, but it’d be much harder for Taiwan to do so, since it would be in China’s interest to block its opponent’s supply. “So Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US,” James wrote.
If the ban on DJI sales in the US is eventually passed, it will certainly hit the company hard: the US drone market is currently worth an estimated $6 billion, most of which goes to DJI. But undercutting DJI’s advantage won’t magically grow an alternative drone industry outside China.
“The actions taken against DJI suggest protectionism and undermine the principles of fair competition and an open market. The Countering CCP Drones Act risks setting a dangerous precedent, where unfounded allegations dictate public policy, potentially jeopardizing the economic well-being of the US,” DJI told MIT Technology Review in an emailed statement.
The Taiwanese government is aware of the risks of relying too much on China’s drone industry, and it’s looking to change. In March, Taiwan’s newly elected president, Lai Ching-te, said that Taiwan wants to become the “Asian center for the democratic drone supply chain.”
Already the hub of global semiconductor production, Taiwan seems well positioned to grow another hardware industry like drones, but it will probably still take years or even decades to build the economies of scale seen in Shenzhen. With support from the US, can Taiwanese companies really grow fast enough to meaningfully sway China’s control of the industry? That’s a very open question.
A housekeeping note: I’m currently visiting London, and the newsletter will take a break next week. If you are based in the UK and would like to meet up, let me know by writing to zeyi@technologyreview.com.
Now read the rest of China Report
Catch up with China
1. ByteDance is working with the US chip design company Broadcom to develop a five-nanometer AI chip. This US-China collaboration, which should be compliant with US export restrictions, is rare these days given the political climate. (Reuters $)
2. After both the European Union and China announced new tariffs against each other, the two sides agreed to chat about how to resolve the dispute. (New York Times $)
Canada is preparing to announce its own tariffs on Chinese-made electric vehicles. (Bloomberg $)
3. A NASA leader says the US is “on schedule” to send astronauts to the moon within a few years. There’s currently a heated race between the US and China on moon exploration. (Washington Post $)
4. A new cybersecurity report says RedJuliett, a China-backed hacker group, has intensified attacks on Taiwanese organizations this year. (Al Jazeera $)
5. The Canadian government is blocking a rare earth mine from being sold to a Chinese company. Instead, the government will buy the stockpiled rare earth materials for $2.2 million. (Bloomberg $)
6. Economic hardship at home has pushed some Chinese small investors to enter the US marijuana industry. They have been buying land in the States, setting up marijuana farms, and hiring other new Chinese immigrants. (NPR)
Lost in translation
In the past week, the most talked-about person in China has been a 17-year-old girl named Jiang Ping, according to the Chinese publication Southern Metropolis Daily. Every year since 2018, the Chinese company Alibaba has hosted a global mathematics contest that attracts students from prestigious universities around the world to compete for a generous prize. But to everyone’s surprise, Jiang, who’s studying fashion design at a vocational high school in a poor town in eastern China, ended up ranking 12th in the qualifying round this year, beating scores of undergraduates and even master’s students. Other than reading college mathematics textbooks under her math teacher’s guidance, Jiang has received no professional training, as many of her competitors have.
Jiang’s story, highlighted by Alibaba following the announcement of the first-round results, immediately went viral in China. While some saw it as a tale of buried talents and how personal endeavor can overcome unfavorable circumstances, others questioned the legitimacy of her results. She became so famous that people, including social media influencers, kept visiting her home, turning her hometown into an unlikely tourist destination. The town had to hide Jiang from public attention while she prepared for the final round of the competition.
One more thing
After I wrote about the new Chinese generative video model Kling last week, the AI tool added a new feature that can turn a static photo into a short video clip. Well, what better way to test its performance than feeding it the iconic “distracted boyfriend” meme and watching what the model predicts will happen after that moment?
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Supershoes are reshaping distance running
Since 2016, when Nike introduced the Vaporfly, a paradigm-shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport.
“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.
To some, this is a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.
Still, some argue that they’ve changed the sport too quickly. Read the full story.
—Jonathan W. Rosen
This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to launch tomorrow, so if you don’t already, subscribe now to get a copy when it lands.
Why China’s dominance in commercial drones has become a global security issue
Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for everything from photography to spraying pesticides to moving parcels.
But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations.
To understand why, you need to consider the potential for conflict between China and Taiwan, and the fact that the military implications of DJI’s commercial drones have become a top policy concern for US lawmakers. Read the full story.
—Zeyi Yang
This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The EU has issued antitrust charges against Microsoft
For bundling Teams with Office—just a day after it announced similar charges against Apple. (WSJ $)
+ It seems likely it’ll be hit with a gigantic fine. (Ars Technica)
+ The EU has new powers to regulate the tech sector, and it’s clearly not afraid to use them. (FT $)
2 OpenAI is delaying launching its voice assistant (WP $)
+ It’s also planning to block access in China—but plenty of Chinese companies stand ready to fill the void. (Mashable)
3 Deepfake creators are re-victimizing sex trafficking survivors
Non-consensual deepfake porn is proliferating at a terrifying pace—but this is the grimmest example I’ve seen. (Wired $)
+ Three ways we can fight deepfake porn. (MIT Technology Review)
4 Chinese tech company IPOs are a rarity these days
It’s becoming very hard to avoid the risk of it all being derailed by political scrutiny, whether at home or abroad. (NYT $)
+ Global chip company stock prices have been on a rollercoaster ride recently, thanks to Nvidia. (CNBC)
5 Why AI is not about to replace journalism
It can crank out content, sure—but it’s incredibly boring to read. (404 Media)
+ After all the hype, it’s no wonder lots of us feel ever-so-slightly disappointed by AI. (WP $)
+ Despite a troubled launch, Google’s already extending AI Summaries to Gmail as well as Search. (CNET)
6 This week of extreme weather is a sign of things to come
Summers come with a side-serving of existential dread now, as we all feel the effects of climate change. (NBC)
+ Scientists have spotted a worrying new tipping point for the loss of ice sheets in Antarctica. (The Guardian)
7 Inside the fight over lithium mine expansion in Argentina
Indigenous communities had been united in opposition—but as the cash started flowing, cracks started appearing. (The Guardian)
+ Lithium battery fires are a growing concern for firefighters worldwide. (WSJ $)
8 What even is intelligent life?
We value it, but it’s a slippery concept that’s almost impossible to define. (Aeon)
+ What an octopus’s mind can teach us about AI’s ultimate mystery. (MIT Technology Review)
9 Tesla is recalling most Cybertrucks… for the fourth time
You have to laugh, really. (The Verge)
+ Luckily, it’s not sold that many of them anyway. (Quartz $)
10 The trouble with Meta’s “smart” Ray Bans
Well… basically they’re just not very smart. At all. (Wired $)
Quote of the day
“We’re making the biggest bet in AI. If transformers go away, we’ll die. But if they stick around, we’re the biggest company of all time.”
—Fighting talk to CNBC from Gavin Uberti, cofounder and CEO of a two-year-old startup called Etched, which believes its AI-optimized chips could take on Nvidia’s near-monopoly.
The big story
This nanoparticle could be the key to a universal covid vaccine
September 2022
Long before Alexander Cohen—or anyone else—had heard of the alpha, delta, or omicron variants of covid-19, he and his graduate school advisor Pamela Bjorkman were doing the research that might soon make it possible for a single vaccine to defeat the rapidly evolving virus—along with any other covid-19 variant that might arise in the future.
The pair and their collaborators are now tantalizingly close to achieving their goal of manufacturing a vaccine that broadly triggers an immune response not just to covid and its variants but to a wider variety of coronaviruses. Read the full story.
—Adam Piore
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Happy 80th Birthday to much beloved Muswell Hillbilly Ray Davies, frontman of the Kinks.
+ Need to cool your home down? Plants can help!
+ Well, uh, that’s certainly one way to cope with a long-haul flight.
+ Glad to know I’m not the only person obsessed with Nongshim instant noodles.
Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.
The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year.
“It’s very impressive. No one else is able to do that,” says Jack Saunders, a researcher at the University of Bath, who was not involved in Synthesia’s work.
The full-body avatars he previewed are very good, he says, despite small errors such as hands “slicing” into each other at times. But “chances are you’re not really going to be looking that close to notice it,” Saunders says.
Synthesia launched its first version of hyperrealistic AI avatars, also known as deepfakes, in April. These avatars use large language models to match expressions and tone of voice to the sentiment of spoken text. Diffusion models, as used in image- and video-generating AI systems, create the avatar’s look. However, the avatars in this generation appear only from the torso up, which can detract from the otherwise impressive realism.
To create the full-body avatars, Synthesia is building an even bigger AI model. Users will have to go into a studio to record their body movements.
But before these full-body avatars become available, the company is launching another version of AI avatars that have hands and can be filmed from multiple angles. Their predecessors were only available in portrait mode and were just visible from the front.
Other startups, such as Hour One, have launched similar avatars with hands. Synthesia’s version, which I got to test in a research preview and will be launched in late July, has slightly more realistic hand movements and lip-synching.
Crucially, the coming update also makes it far easier to create your own personalized avatar. The company’s previous custom AI avatars required users to go into a studio to record their face and voice over the span of a couple of hours, as I reported in April.
This time, I recorded the material needed in just 10 minutes in the Synthesia office, using a digital camera, a lapel mike, and a laptop. But an even more basic setup, such as a laptop camera, would do. And while previously I had to record my facial movements and voice separately, this time the data was collected at the same time. The process also includes reading a script expressing consent to being recorded in this way, and reading out a randomly generated security passcode.
These changes allow more scale and give the AI models powering the avatars more capabilities with less data, says Riparbelli. The results are also much faster. While I had to wait a few weeks to get my studio-made avatar, the new homemade ones were available the next day.
Below, you can see my test of the new homemade avatars with hands.
The homemade avatars aren’t as expressive as the studio-made ones yet, and users can’t change the backgrounds of their avatars, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. The hands are animated using an advanced form of looping technology, which repeats the same hand movements in a way that is responsive to the content of the script.
Hands are tricky for AI to do well—even more so than faces, Vittorio Ferrari, Synthesia’s director of science, told me in March. That’s because our mouths move in relatively small and predictable ways while we talk, making it possible to sync the deepfake version up with speech, but we move our hands in lots of different ways. On the flip side, while faces require close attention to detail because we tend to focus on them, hands can be less precise, Ferrari says.
Even if they’re imperfect, AI-generated hands and bodies add a lot to the illusion of realism, which poses serious risks at a time when deepfakes and online misinformation are proliferating. Synthesia has strict content moderation policies, carefully vetting both its customers and the sort of content they’re able to generate. For example, only accredited news outlets can generate content on news.
These new advancements in avatar technologies are another hammer blow to our ability to believe what we see online, says Saunders.
“People need to know you can’t trust anything,” he says. “Synthesia is doing this now, and another year down the line it will be better and other companies will be doing it.”
First, a confession. I only got into playing video games a little over a year ago (I know, I know). A Christmas gift of an Xbox Series S “for the kids” dragged me—pretty easily, it turns out—into the world of late-night gaming sessions. I was immediately attracted to open-world games, in which you’re free to explore a vast simulated world and choose what challenges to accept. Red Dead Redemption 2 (RDR2), an open-world game set in the Wild West, blew my mind. I rode my horse through sleepy towns, drank in the saloon, visited a vaudeville theater, and fought off bounty hunters. One day I simply set up camp on a remote hilltop to make coffee and gaze down at the misty valley below me.
To make them feel alive, open-world games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or space ports of games. They make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.
In open-world games like Red Dead Redemption 2, players can choose diverse interactions within the same simulated experience.
After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun—I had a whale of a time, honestly, looting stagecoaches, fighting in bar brawls, and stalking deer through rainy woods—but the illusion starts to weaken when you poke at it. It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you loot a few stagecoaches, you finish, you move on.
It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.
Startups employing generative-AI models, like ChatGPT, are using them to create characters that don’t rely on scripts but, instead, converse with you freely. Others are experimenting with NPCs who appear to have entire interior worlds, and who can continue to play even when you, the player, are not around to watch. Eventually, generative AI could create game experiences that are infinitely detailed, twisting and changing every time you experience them.
The field is still very new, but it’s extremely hot. In 2022 the venture firm Andreessen Horowitz launched Games Fund, a $600 million fund dedicated to gaming startups. A huge number of these are planning to use AI in gaming. And the firm, also known as A16Z, has now invested in two studios that are aiming to create their own versions of AI NPCs. A second $600 million round was announced in April 2024.
Early experimental demos of these experiences are already popping up, and it may not be long before they appear in full games like RDR2. But some in the industry believe this development will not just make future open-world games incredibly immersive; it could change what kinds of game worlds or experiences are even possible. Ultimately, it could change what it means to play.
“What comes after the video game? You know what I mean?” says Frank Lantz, a game designer and director of the NYU Game Center. “Maybe we’re on the threshold of a new kind of game.”
These guys just won’t shut up
The way video games are made hasn’t changed much over the years. Graphics are incredibly realistic. Games are bigger. But the way in which you interact with characters, and the game world around you, uses many of the same decades-old conventions.
“In mainstream games, we’re still looking at variations of the formula we’ve had since the 1980s,” says Julian Togelius, a computer science professor at New York University who has a startup called Modl.ai that does in-game testing. Part of that tried-and-tested formula is a technique called a dialogue tree, in which all of an NPC’s possible responses are mapped out. Which one you get depends on which branch of the dialogue tree you have chosen. For example, say something rude about a passing NPC in RDR2 and the character will probably lash out—you have to quickly apologize to avoid a shootout (unless that’s what you want).
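The branching structure Togelius describes can be sketched in a few lines of code. This is a hypothetical minimal example for illustration only, not code from any actual game engine; the node layout and function names are my own:

```python
# Hypothetical dialogue tree: each node holds an NPC line plus the
# player choices that branch from it. Empty "choices" ends the exchange.
dialogue_tree = {
    "line": "You lookin' for trouble, stranger?",
    "choices": {
        "apologize": {
            "line": "Hmph. Watch yourself next time.",
            "choices": {},  # conversation ends peacefully
        },
        "insult": {
            "line": "Them's fightin' words!",
            "choices": {},  # the game logic would trigger a shootout here
        },
    },
}

def run_dialogue(node, picks):
    """Walk the tree along a list of player choices; return the NPC lines heard."""
    heard = [node["line"]]
    for pick in picks:
        node = node["choices"][pick]
        heard.append(node["line"])
    return heard
```

The key limitation is visible in the data itself: every possible response must be written out in advance, so the NPC can never say anything its authors didn’t anticipate.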
In the most expensive, high-profile games, the so-called AAA games like Elden Ring or Starfield, a deeper sense of immersion is created by using brute force to build out deep and vast dialogue trees. The biggest studios employ teams of hundreds of game developers who work for many years on a single game in which every line of dialogue is plotted and planned, and software is written so the in-game engine knows when to deploy that particular line. RDR2 reportedly contains an estimated 500,000 lines of dialogue, voiced by around 700 actors.
“You get around the fact that you can [only] do so much in the world by, like, insane amounts of writing, an insane amount of designing,” says Togelius.
Generative AI is already helping take some of that drudgery out of making new games. Jonathan Lai, a general partner at A16Z and one of Games Fund’s managers, says that most studios are using image-generating tools like Midjourney to enhance or streamline their work. And in a 2023 survey by A16Z, 87% of game studios said they were already using AI in their workflow in some way—and 99% planned to do so in the future. Many use AI agents to replace the human testers who look for bugs, such as places where a game might crash. In recent months, the CEO of the gaming giant EA said generative AI could be used in more than 50% of its game development processes.
Ubisoft, one of the biggest game developers, famous for AAA open-world games such as Assassin’s Creed, has been using a large-language-model-based AI tool called Ghostwriter to do some of the grunt work for its developers in writing basic dialogue for its NPCs. Ghostwriter generates loads of options for background crowd chatter, which the human writer can pick from or tweak. The idea is to free the humans up so they can spend that time on more plot-focused writing.
GEORGE WYLESOL
Ultimately, though, everything is scripted. Once you spend a certain number of hours on a game, you will have seen everything there is to see, and completed every interaction. Time to buy a new one.
But for startups like Inworld AI, this situation is an opportunity. Inworld, based in California, is building tools to make in-game NPCs that respond to a player with dynamic, unscripted dialogue and actions—so they never repeat themselves. The company, now valued at $500 million, is the best-funded AI gaming startup around thanks to backing from former Google CEO Eric Schmidt and other high-profile investors.
Role-playing games give us a unique way to experience different realities, explains Kylan Gibbs, Inworld’s CEO and founder. But something has always been missing. “Basically, the characters within there are dead,” he says.
“When you think about media at large, be it movies or TV or books, characters are really what drive our ability to empathize with the world,” Gibbs says. “So the fact that games, which are arguably the most advanced version of storytelling that we have, are lacking these live characters—it felt to us like a pretty major issue.”
Gamers themselves were pretty quick to realize that LLMs could help fill this gap. Last year, some came up with ChatGPT mods (a way to alter an existing game) for the popular role-playing game Skyrim. The mods let players interact with the game’s vast cast of characters using LLM-powered free chat. One mod even included OpenAI’s speech recognition software, Whisper, so that players could speak to the characters with their own voices, saying whatever they wanted, and have full conversations that were no longer restricted by dialogue trees.
The results gave gamers a glimpse of what might be possible but were ultimately a little disappointing. Though the conversations were open-ended, the character interactions were stilted, with delays while ChatGPT processed each request.
Inworld wants to make this type of interaction more polished. It’s offering a product for AAA game studios with which developers can create the brains of an AI NPC that can then be imported into their game. Developers use the company’s “Inworld Studio” to generate their NPC. For example, they can fill out a core description that sketches the character’s personality, including likes and dislikes, motivations, or useful backstory. Sliders let you set levels of traits such as introversion or extroversion, insecurity or confidence. And you can also use free text to make the character drunk, aggressive, prone to exaggeration—pretty much anything.
Developers can also add descriptions of how their character speaks, including examples of commonly used phrases that Inworld’s various AI models, including LLMs, then spin into dialogue in keeping with the character.
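Sketched in code, a character sheet like the one described might compile down to a system prompt for the underlying LLM. The `NPCSpec` class and its fields below are invented for illustration; Inworld's actual tooling and API are not public in this form.

```python
# Hypothetical sketch: a character sheet (backstory, trait sliders,
# free-text quirks, example phrases) compiled into an LLM system prompt.
from dataclasses import dataclass, field

@dataclass
class NPCSpec:
    name: str
    backstory: str
    traits: dict = field(default_factory=dict)   # slider name -> 0.0..1.0
    quirks: list = field(default_factory=list)   # free-text modifiers
    phrases: list = field(default_factory=list)  # example catchphrases

    def to_system_prompt(self):
        sliders = ", ".join(f"{k}: {v:.0%}" for k, v in self.traits.items())
        return (
            f"You are {self.name}. {self.backstory}\n"
            f"Personality sliders: {sliders}.\n"
            f"Quirks: {'; '.join(self.quirks)}.\n"
            f"Stay in character; speak in keeping with phrases like: "
            f"{', '.join(self.phrases)}"
        )

bartender = NPCSpec(
    name="Mabel",
    backstory="Runs the saloon; distrusts strangers since the flood.",
    traits={"extroversion": 0.8, "confidence": 0.4},
    quirks=["prone to exaggeration"],
    phrases=["Well, I'll be.", "That'll cost ya."],
)
print(bartender.to_system_prompt())
```

The point is that the designer authors a compact, structured description once, and the model improvises every actual line of dialogue from it.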
“Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended.”
Jeff Orkin, founder, Bitpart
Game designers can also plug other information into the system: what the character knows and doesn’t know about the world (no Taylor Swift references in a medieval battle game, ideally) and any relevant safety guardrails (does your character curse or not?). Narrative controls will let the developers make sure the NPC is sticking to the story and isn’t wandering wildly off-base in its conversation. The idea is that the characters can then be imported into video-game graphics engines like Unity or Unreal Engine to add a body and features. Inworld is collaborating with the text-to-voice startup ElevenLabs to add natural-sounding voices.
Inworld’s tech hasn’t appeared in any AAA games yet, but at the Game Developers Conference (GDC) in San Francisco in March 2024, the firm unveiled an early demo with Nvidia that showcased some of what will be possible. In Covert Protocol, each player operates as a private detective who must solve a case using input from the various in-game NPCs. Also at the GDC, Inworld unveiled a demo called NEO NPC that it had worked on with Ubisoft. In NEO NPC, a player could freely interact with NPCs using voice-to-text software and use conversation to develop a deeper relationship with them.
LLMs give us the chance to make games more dynamic, says Jeff Orkin, founder of Bitpart, a new startup that also aims to create entire casts of LLM-powered NPCs that can be imported into games. “Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended,” he says.
Bitpart’s approach is in part inspired by Orkin’s PhD research at MIT’s Media Lab. There, he trained AIs to role-play social situations using game-play logs of humans doing the same things with each other in multiplayer games.
Bitpart’s casts of characters are built with a large language model and then fine-tuned so that the in-game interactions are not entirely open-ended and infinite. Instead, the company uses an LLM and other tools to generate a script covering a range of possible interactions, and then a human game designer selects some. Orkin describes the process as authoring the Lego bricks of the interaction. An in-game algorithm then searches out specific bricks and strings them together at the appropriate time.
Bitpart’s approach could create some delightful in-game moments. In a restaurant, for example, you might ask a waiter for something, but the bartender might overhear and join in. Bitpart’s AI currently works with Roblox. Orkin says the company is now running trials with AAA game studios, although he won’t yet say which ones.
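The restaurant moment above hints at how the "Lego brick" assembly might work: candidate lines are generated and human-approved offline, then matched against the live situation at runtime. The toy sketch below is purely illustrative (Bitpart's real system is not public), but it shows the two-stage shape of the idea.

```python
# Stage 1 (offline): an LLM generates many candidate lines; a designer keeps
# some, each tagged with the conditions under which it fits.
APPROVED_BRICKS = [
    # (tags the brick requires, the approved line)
    ({"role:waiter", "topic:order"}, "What can I get you?"),
    ({"role:waiter", "topic:complaint"}, "I'm so sorry, I'll fix that."),
    ({"role:bartender", "topic:order"}, "Couldn't help overhearing -- drink with that?"),
]

# Stage 2 (runtime): a simple matcher strings together every brick whose
# required tags are all present in the current scene.
def matching_bricks(situation_tags):
    return [line for tags, line in APPROVED_BRICKS if tags <= situation_tags]

# A player orders food; the waiter responds, and the eavesdropping
# bartender's brick also matches, so she joins in.
scene = {"role:waiter", "role:bartender", "topic:order"}
for line in matching_bricks(scene):
    print(line)
```

Because the bricks are authored and vetted in advance, the runtime stays predictable even though the original material came from a model.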
But generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.
Making the impossible possible
When I asked Frank Lantz about how AI could change gaming, he talked for 26 minutes straight. His initial reaction to generative AI had been visceral: “I was like, oh my God, this is my destiny and is what I was put on the planet for.”
Lantz has been in and around the cutting edge of the game industry and AI for decades but received a cult level of acclaim a few years ago when he created the Universal Paperclips game. The simple in-browser game gives the player the job of producing as many paper clips as possible. It’s a riff on the famous thought experiment by the philosopher Nick Bostrom, which imagines an AI that is given the same task and optimizes against humanity’s interest by turning all the matter in the known universe into paper clips.
Lantz is bursting with ideas for ways to use generative AI. One is to experience a new work of art as it is being created, with the player participating in its creation. “You’re inside of something like Lord of the Rings as it’s being written. You’re inside a piece of literature that is unfolding around you in real time,” he says. He also imagines strategy games where the players and the AI work together to reinvent what kind of game it is and what the rules are, so it is never the same twice.
For Orkin, LLM-powered NPCs can make games unpredictable—and that’s exciting. “It introduces a lot of open questions, like what you do when a character answers you but that sends a story in a direction that nobody planned for,” he says.
Generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.
It might mean games that are unlike anything we’ve seen thus far. Gaming experiences that unspool as the characters’ relationships shift and change, as friendships start and end, could unlock entirely new narrative experiences that are less about action and more about conversation and personalities.
Togelius imagines new worlds built to react to the player’s own wants and needs, populated with NPCs that the player must teach or influence as the game progresses. Imagine interacting with characters whose opinions can change, whom you could persuade or motivate to act in a certain way—say, to go to battle with you. “A thoroughly generative game could be really, really good,” he says. “But you really have to change your whole expectation of what a game is.”
Lantz is currently working on a prototype of a game in which the premise is that you—the player—wake up dead, and the afterlife you are in is a low-rent, cheap version of a synthetic world. The game plays out like a noir in which you must explore a city full of thousands of NPCs powered by a version of ChatGPT, whom you must interact with to work out how you ended up there.
His early experiments gave him some eerie moments when he felt that the characters seemed to know more than they should, a sensation recognizable to people who have played with LLMs before. Even though you know they’re not alive, they can still freak you out a bit.
“If you run electricity through a frog’s corpse, the frog will move,” he says. “And if you run $10 million worth of computation through the internet … it moves like a frog, you know.”
But these early forays into generative-AI gaming have given him a real sense of excitement for what’s next: “I felt like, okay, this is a thread. There really is a new kind of artwork here.”
If an AI NPC talks and no one is around to listen, is there a sound?
AI NPCs won’t just enhance player interactions—they might interact with one another in weird ways. Red Dead Redemption 2’s NPCs each have long, detailed scripts that spell out exactly where they should go, what work they must complete, and how they’d react if anything unexpected occurred. If you want, you can follow an NPC and watch it go about its day. It’s fun, but ultimately it’s hard-coded.
NPCs built with generative AI could have a lot more leeway—even interacting with one another when the player isn’t there to watch. Just as people have been fooled into thinking LLMs are sentient, watching a city of generated NPCs might feel like peering over the top of a toy box that has somehow magically come alive.
We’re already getting a sense of what this might look like. At Stanford University, Joon Sung Park has been experimenting with AI-generated characters and watching to see how their behavior changes and gains complexity as they encounter one another.
Because large language models have sucked up the internet and social media, they actually contain a lot of detail about how we behave and interact, he says.
In Park’s recent research, he and colleagues set up a Sims-like game, called Smallville, with 25 simulated characters that had been trained using generative AI. Each was given a name and a simple biography before being set in motion. When left to interact with each other for two days, they began to exhibit humanlike conversations and behavior, including remembering each other and being able to talk about their past interactions.
For example, the researchers prompted one character to organize a Valentine’s Day party—and then let the simulation run. That character sent invitations around town, while other members of the community asked each other on dates to go to the party, and all turned up at the venue at the correct time. All of this was carried out through conversations, and past interactions between characters were stored in their “memories” as natural language.
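A heavily simplified sketch of that memory mechanism: each agent keeps past interactions as plain-language sentences and retrieves the most relevant ones before composing its next reply. The published system also weighs recency and importance and uses an LLM to score relevance; this toy version ranks only by word overlap.

```python
# Toy "memory stream": natural-language memories plus crude retrieval.

class Agent:
    def __init__(self, name):
        self.name = name
        self.memories = []  # plain-language records, oldest first

    def remember(self, text):
        self.memories.append(text)

    def recall(self, query, k=2):
        """Rank memories by how many words they share with the query."""
        qwords = set(query.lower().split())
        ranked = sorted(self.memories,
                        key=lambda m: len(qwords & set(m.lower().split())),
                        reverse=True)
        return ranked[:k]

isabella = Agent("Isabella")
isabella.remember("Isabella is planning a Valentine's Day party at Hobbs Cafe.")
isabella.remember("Klaus asked Maria to the party yesterday.")
isabella.remember("The cafe opens at 8am.")

# The top memories would be stuffed into the LLM prompt for her next line.
print(isabella.recall("Valentine's party plans"))
```

Because the memories are ordinary sentences, they can be pasted straight into a prompt, which is what lets the characters "remember" each other across days of simulated time.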
For Park, the implications for gaming are huge. “This is exactly the sort of tech that the gaming community for their NPCs have been waiting for,” he says.
His research has inspired games like AI Town, an open-source interactive experience on GitHub that lets human players interact with AI NPCs in a simple top-down game. You can leave the NPCs to get along for a few days and check in on them, reading the transcripts of the interactions they had while you were away. Anyone is free to take AI Town’s code to build new NPC experiences through AI.
For Daniel De Freitas, cofounder of the startup Character AI, which lets users generate and interact with their own LLM-powered characters, the generative-AI revolution will allow new types of games to emerge—ones in which the NPCs don’t even need human players.
The player is “joining an adventure that is always happening, that the AIs are playing,” he imagines. “It’s the equivalent of joining a theme park full of actors, but unlike the actors, they truly ‘believe’ that they are in those roles.”
If you’re getting Westworld vibes right about now, you’re not alone. There are plenty of stories about people torturing or killing their simple Sims characters in the game for fun. Would mistreating NPCs that pass for real humans cross some sort of new ethical boundary? What if, Lantz asks, an AI NPC that appeared conscious begged for its life when you simulated torturing it?
It raises complex questions, he adds. “One is: What are the ethical dimensions of pretend violence? And the other is: At what point do AIs become moral agents to which harm can be done?”
There are other potential issues too. An immersive world that feels real, and never ends, could be dangerously addictive. Some users of AI chatbots have already reported losing hours and even days in conversation with their creations. Are there dangers that the same parasocial relationships could emerge with AI NPCs?
“We may need to worry about people forming unhealthy relationships with game characters at some point,” says Togelius. Until now, players have been able to differentiate pretty easily between game play and real life. But AI NPCs might change that, he says: “If at some point what we now call ‘video games’ morph into some all-encompassing virtual reality, we will probably need to worry about the effect of NPCs being too good, in some sense.”
A portrait of the artist as a young bot
Not everyone is convinced that never-ending open-ended conversations between the player and NPCs are what we really want for the future of games.
“I think we have to be cautious about connecting our imaginations with reality,” says Mike Cook, an AI researcher and game designer. “The idea of a game where you can go anywhere, talk to anyone, and do anything has always been a dream of a certain kind of player. But in practice, this freedom is often at odds with what we want from a story.”
In other words, having to generate a lot of the dialogue yourself might actually get kind of … well, boring. “If you can’t think of interesting or dramatic things to say, or are simply too tired or bored to do it, then you’re going to basically be reading your own very bad creative fiction,” says Cook.
Orkin likewise doesn’t think conversations that could go anywhere are actually what most gamers want. “I want to play a game that a bunch of very talented, creative people have really thought through and created an engaging story and world,” he says.
This idea of authorship is an important part of game play, agrees Togelius. “You can generate as much as you want,” he says. “But that doesn’t guarantee that anything is interesting and worth keeping. In fact, the more content you generate, the more boring it might be.”
Sometimes, the possibility of everything is too much to cope with. No Man’s Sky, a hugely hyped space game launched in 2016 that used algorithms to generate endless planets to explore, was seen by many players as a bit of a letdown when it finally arrived. Players quickly discovered that being able to explore a universe that never ended, with worlds that were endlessly different, actually fell a little flat. (A series of updates over subsequent years has made No Man’s Sky a little more structured, and it’s now generally well thought of.)
One approach might be to keep AI gaming experiences tight and focused.
Hilary Mason, CEO at the gaming startup Hidden Door, likes to joke that her work is “artisanal AI.” She is from Brooklyn, after all, says her colleague Chris Foster, the firm’s game director, laughing.
Hidden Door, which has not yet released any products, is making role-playing text adventures based on classic stories that the user can steer. It’s like Dungeons & Dragons for the generative AI era. It combines classic tropes from well-known adventure worlds with an annotated database of thousands of words and phrases, then uses a variety of machine-learning tools, including LLMs, to make each story unique. Players walk through a semi-structured storytelling experience, free-typing into text boxes to control their character.
The result feels a bit like hand-annotating an AI-generated novel with Post-it notes.
In a demo with Mason, I got to watch as her character infiltrated a hospital and attempted to hack into the server. Each suggestion prompted the system to spin up the next part of the story, with the large language model creating new descriptions and in-game objects on the fly.
Each experience lasts between 20 and 40 minutes, and for Foster, it creates an “expressive canvas” that people can play with. The fixed length and the added human touch—Mason’s artisanal approach—give players “something really new and magical,” he says.
There’s more to life than games
Park thinks generative AI that makes NPCs feel alive in games will have other, more fundamental implications further down the line.
“This can, I think, also change the meaning of what games are,” he says.
For example, he’s excited about using generative-AI agents to simulate how real people act. He thinks AI agents could one day be used as proxies for real people to, for example, test out the likely reaction to a new economic policy. Counterfactual scenarios could be plugged in that would let policymakers run time backwards to try to see what would have happened if a different path had been taken.
“You want to learn that if you implement this social policy or economic policy, what is going to be the impact that it’s going to have on the target population?” he suggests. “Will there be unexpected side effects that we’re not going to be able to foresee on day one?”
And while Inworld is focused on adding immersion to video games, it has also worked with LG in South Korea to make characters that kids can chat with to improve their English language skills. Others are using Inworld’s tech to create interactive experiences. One of these, called Moment in Manzanar, was created to help players empathize with the Japanese-Americans the US government detained in internment camps during World War II. It allows the user to speak to a fictional character called Ichiro who talks about what it was like to be held in the Manzanar camp in California.
Inworld’s NPC ambitions might be exciting for gamers (my future excursions as a cowboy could be even more immersive!), but there are some who believe using AI to enhance existing games is thinking too small. Instead, we should be leaning into the weirdness of LLMs to create entirely new kinds of experiences that were never possible before, says Togelius. The shortcomings of LLMs “are not bugs—they’re features,” he says.
Lantz agrees. “You have to start with the reality of what these things are and what they do—this kind of latent space of possibilities that you’re surfing and exploring,” he says. “These engines already have that kind of a psychedelic quality to them. There’s something trippy about them. Unlocking that is the thing that I’m interested in.”
Whatever is next, we probably haven’t even imagined it yet, Lantz thinks.
“And maybe it’s not about a simulated world with pretend characters in it at all,” he says. “Maybe it’s something totally different. I don’t know. But I’m excited to find out.”
In a clean room in his lab, Sean Moore peers through a microscope at a bit of intestine, its dark squiggles and rounded structures standing out against a light gray background. This sample is not part of an actual intestine; rather, it’s human intestinal cells on a tiny plastic rectangle, one of 24 so-called “organs on chips” his lab bought three years ago.
Moore, a pediatric gastroenterologist at the University of Virginia School of Medicine, hopes the chips will offer answers to a particularly thorny research problem. He studies rotavirus, a common infection that causes severe diarrhea, vomiting, dehydration, and even death in young children. In the US and other rich nations, up to 98% of the children who are vaccinated against rotavirus develop lifelong immunity. But in low-income countries, only about a third of vaccinated children become immune. Moore wants to know why.
His lab uses mice for some protocols, but animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. Researchers have documented this translation gap since at least 1962. “All these pharmaceutical companies know the animal models stink,” says Don Ingber, founder of the Wyss Institute for Biologically Inspired Engineering at Harvard and a leading advocate for organs on chips. “The FDA knows they stink.”
But until recently there was no other option. Research questions like Moore’s can’t ethically or practically be addressed with a randomized, double-blinded study in humans. Now these organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. These tissues expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.
More than 60 companies now produce organs on chips commercially, focusing on five major organs: liver, kidney, lung, intestines, and brain. They’re already being used to understand diseases, discover and test new drugs, and explore personalized approaches to treatment.
As they continue to be refined, they could solve one of the biggest problems in medicine today. “You need to do three things when you’re making a drug,” says Lorna Ewart, a pharmacologist and chief scientific officer of Emulate, a biotech company based in Boston. “You need to show it’s safe. You need to show it works. You need to be able to make it.”
All new compounds have to pass through a preclinical phase, where they’re tested for safety and effectiveness before moving to clinical trials in humans. Until recently, those tests had to run in at least two animal species—usually rats and dogs—before the drugs were tried on people.
But in December 2022, President Biden signed the FDA Modernization Act, which amended the original FDA Act of 1938. With a few small word changes, the act opened the door for non-animal-based testing in preclinical trials. Anything that makes it faster and easier for pharmaceutical companies to identify safe and effective drugs means better, potentially cheaper treatments for all of us.
Moore, for one, is banking on it, hoping the chips help him and his colleagues shed light on the rotavirus vaccine responses that confound them. “If you could figure out the answer,” he says, “you could save a lot of kids’ lives.”
While many teams have worked on organ chips over the last 30 years, the OG in the field is generally acknowledged to be Michael Shuler, a professor emeritus of chemical engineering at Cornell. In the 1980s, Shuler was a math and engineering guy who imagined an “animal on a chip,” a cell culture base seeded with a variety of human cells that could be used for testing drugs. He wanted to position a handful of different organ cells on the same chip, linked to one another, which could mimic the chemical communication between organs and the way drugs move through the body. “This was science fiction,” says Gordana Vunjak-Novakovic, a professor of biomedical engineering at Columbia University whose lab works with cardiac tissue on chips. “There was no body on a chip. There is still no body on a chip. God knows if there will ever be a body on a chip.”
Shuler had hoped to develop a computer model of a multi-organ system, but there were too many unknowns. The living cell culture system he dreamed up was his bid to fill in the blanks. For a while he played with the concept, but the materials simply weren’t good enough to build what he imagined.
“You can force mice to menstruate, but it’s not really menstruation. You need the human being.”
Linda Griffith, founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant”
He wasn’t the only one working on the problem. Linda Griffith, a founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant,” designed a crude early version of a liver chip in the late 1990s: a flat silicon chip, just a few hundred micrometers tall, with endothelial cells, oxygen and liquid flowing in and out via pumps, silicone tubing, and a polymer membrane with microscopic holes. She put liver cells from rats on the chip, and those cells organized themselves into three-dimensional tissue. It wasn’t a liver, but it modeled a few of the things a functioning human liver could do. It was a start.
Griffith, who rides a motorcycle for fun and speaks with a soft Southern accent, suffers from endometriosis, an inflammatory condition where cells from the lining of the uterus grow throughout the abdomen. She’s endured decades of nausea, pain, blood loss, and repeated surgeries. She never took medical leaves, instead loading up on Percocet, Advil, and margaritas, keeping a heating pad and couch in her office—a strategy of necessity, as she saw no other choice for a working scientist. Especially a woman.
And as a scientist, Griffith understood that the chronic diseases affecting women tend to be under-researched, underfunded, and poorly treated. She realized that decades of work with animals hadn’t done a damn thing to make life better for women like her. “We’ve got all this data, but most of that data does not lead to treatments for human diseases,” she says. “You can force mice to menstruate, but it’s not really menstruation. You need the human being.”
Or, at least, the human cells. Shuler and Griffith, and other scientists in Europe, worked on some of those early chips, but things really kicked off around 2009, when Don Ingber’s lab in Cambridge, Massachusetts, created the first fully functioning organ on a chip. That “lung on a chip” was made from flexible silicone rubber, lined with human lung cells and capillary blood vessel cells that “breathed” like the alveoli—tiny air sacs—in a human lung. A few years later Ingber, an MD-PhD with the tidy good looks of a younger Michael Douglas, founded Emulate, one of the earliest biotech companies making microphysiological systems. Since then he’s become a kind of unofficial ambassador for in vitro technologies in general and organs on chips in particular, giving hundreds of talks, scoring millions in grant money, repping the field with scientists and laypeople. Stephen Colbert once ragged on him after the New York Times quoted him as describing a chip that “walks, talks, and quacks like a human vagina,” a quote Ingber says was taken out of context.
Ingber began his career working on cancer. But he struggled with the required animal research. “I really didn’t want to work with them anymore, because I love animals,” he says. “It was a conscious decision to focus on in vitro models.” He’s not alone; a growing number of young scientists are speaking up about the distress they feel when research protocols cause pain, trauma, injury, and death to lab animals. “I’m a master’s degree student in neuroscience and I think about this constantly. I’ve done such unspeakable, horrible things to mice all in the name of scientific progress, and I feel guilty about this every day,” wrote one anonymous student on Reddit. (Full disclosure: I switched out of a psychology major in college because I didn’t want to cause harm to animals.)
Emulate is one of the companies building organ-on-a-chip technology. The devices combine live human cells with a microenvironment designed to emulate specific tissues.
EMULATE
Taking an undergraduate art class led Ingber to an epiphany: mechanical forces are just as important as chemicals and genes in determining the way living creatures work. On a shelf in his office he still displays a model he built in that art class, a simple construction of sticks and fishing line, which helped him realize that cells pull and twist against each other. That realization foreshadowed his current work and helped him design dynamic microfluidic devices that incorporated shear and flow.
Ingber coauthored a 2022 paper that’s sometimes cited as a watershed in the world of organs on chips. Researchers used Emulate’s liver chips to reevaluate 27 drugs that had previously made it through animal testing and had then gone on to kill 242 people and necessitate more than 60 liver transplants. The liver chips correctly flagged problems with 22 of the 27 drugs, an 87% success rate compared with a 0% success rate for animal testing. It was the first time organs on chips had been directly pitted against animal models, and the results got a lot of attention from the pharmaceutical industry. Dan Tagle, director of the Office of Special Initiatives for the National Center for Advancing Translational Sciences (NCATS), estimates that drug failures cost around $2.6 billion globally each year. The earlier in the process failing compounds can be weeded out, the more room there is for other drugs to succeed.
“The capacity we have to test drugs is more or less fixed in this country,” says Shuler, whose company, Hesperos, also manufactures organs on chips. “There are only so many clinical trials you can do. So if you put a loser into the system, that means something that could have won didn’t get into the system. We want to change the success rate from clinical trials to a much higher number.”
In 2011, the National Institutes of Health established NCATS and started investing in organs on chips and other in vitro technologies. Other government funders, like the Defense Advanced Research Projects Agency and the Food and Drug Administration, have followed suit. For instance, NIH recently funded NASA scientists to send heart tissue on chips into space. Six months in low gravity ages the cardiovascular system 10 years, so this experiment lets researchers study some of the effects of aging without harming animals or humans.
Scientists have made liver chips, brain chips, heart chips, kidney chips, intestine chips, and even a female reproductive system on a chip (with cells from ovaries, fallopian tubes, and uteruses that release hormones and mimic an actual 28-day menstrual cycle). Each of these chips exhibits some of the specific functions of the organs in question. Cardiac chips, for instance, contain heart cells that beat just like heart muscle, making it possible for researchers to model disorders like cardiomyopathy.
Shuler thinks organs on chips will revolutionize the world of research for rare diseases. “It is a very good model when you don’t have enough patients for normal clinical trials and you don’t have a good animal model,” he says. “So it’s a way to get drugs to people that couldn’t be developed in our current pharmaceutical model.” Shuler’s own biotech company used organs on chips to test a potential drug for myasthenia gravis, a rare neurological disorder. In 2022, the FDA approved the drug for clinical trials based on that data—one of six Hesperos drugs that have so far made it to that stage.
Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body. “We try and build a physical replica of the mathematical model of what really occurs in the body,” explains Shuler. That model guides the way the chip is designed, re-creating the amount of time a fluid or chemical stays in that particular organ—what’s known as the residence time. “As long as you have the same residence time, you should get the same response in terms of chemical conversion,” he says.
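Shuler’s residence-time rule can be boiled down to a one-line formula: residence time equals chamber volume divided by volumetric flow rate. The sketch below uses made-up, illustrative numbers (not Hesperos design parameters) to show how a designer might pick a chip’s flow rate so that its tiny chamber matches an organ’s residence time.

```python
# Residence time tau = V / Q: the mean time a fluid element spends in a chamber.
# A chip chamber is far smaller than the organ it mimics, so the designer
# matches tau by scaling the flow rate down, not by matching V or Q directly.

def residence_time(volume_ul: float, flow_ul_per_min: float) -> float:
    """Residence time in minutes for a chamber of the given volume and flow."""
    return volume_ul / flow_ul_per_min

def flow_for_target(volume_ul: float, target_tau_min: float) -> float:
    """Flow rate (uL/min) that gives a chip chamber the organ's residence time."""
    return volume_ul / target_tau_min

# Illustrative numbers: a hypothetical organ compartment with a 30-second
# (0.5-minute) residence time, re-created in a 2-microliter chip chamber.
chip_volume = 2.0                         # microliters
target_tau = 0.5                          # minutes
q = flow_for_target(chip_volume, target_tau)
print(q)                                  # 4.0 uL/min
print(residence_time(chip_volume, q))     # 0.5 min, matching the organ
```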
Tiny channels on each chip, between 10 and 100 microns in diameter, help bring fluids and oxygen to the cells. “When you get down to less than one micron, you can’t use normal fluid dynamics,” says Shuler. And fluid dynamics matters, because if the fluid moves through the device too quickly, the cells might die; too slowly, and the cells won’t react normally.
Chip technology, while sophisticated, has some downsides. One of them is user friendliness. “We need to get rid of all this tubing and pumps and make something that’s as simple as a well plate for culturing cells,” says Vunjak-Novakovic. Her lab and others are working on simplifying the design and function of such chips so they’re easier to operate and are compatible with robots, which do repetitive tasks like pipetting in many labs.
Cost and sourcing can also be challenging. Emulate’s base model, which looks like a simple rectangular box from the outside, starts at around $100,000 and rises steeply from there. Most human cells come from commercial suppliers that arrange for donations from hospital patients. During the pandemic, when people had fewer elective surgeries, many of those sources dried up. As microphysiological systems become more mainstream, finding reliable sources of human cells will be critical.
Another challenge is that every company producing organs on chips uses its own proprietary methods and technologies. Ingber compares the landscape to the early days of personal computing, when every company developed its own hardware and software, and none of them meshed well. For instance, the microfluidic systems in Emulate’s intestine chips are fueled by micropumps, while those made by Mimetas, another biotech company, use an electronic rocker and gravity to circulate fluids and air. “This is not an academic lab type of challenge,” emphasizes Ingber. “It’s a commercial challenge. There’s no way you can get the same results anywhere in the world with individual academics making [organs on chips], so you have to have commercialization.”
Namandje Bumpus, the FDA’s chief scientist, agrees. “You can find differences [in outcomes] depending even on what types of reagents you’re using,” she says. Those differences mean research can’t be easily reproduced, which diminishes its validity and usefulness. “It would be great to have some standardization,” she adds.
On the plus side, the chip technology could help researchers address some of the most deeply entrenched health inequities in science. Clinical trials have historically recruited white men, underrepresenting people of color, women (especially pregnant and lactating women), the elderly, and other groups. And treatments derived from those trials all too often fail in members of those underrepresented groups, as in Moore’s rotavirus vaccine mystery. “With organs on a chip, you may be able to create systems by which you are very, very thoughtful—where you spread the net wider than has ever been done before,” says Moore.
This microfluidic platform, designed by MIT engineers, connects engineered tissue from up to 10 organs.
FELICE FRANKEL
Another advantage is that chips will eventually reduce the need for animals in the lab even as they lead to better human outcomes. “There are aspects of animal research that make all of us uncomfortable, even people that do it,” acknowledges Moore. “The same values that make us uncomfortable about animal research are also the same values that make us uncomfortable with seeing human beings suffer with diseases that we don’t have cures for yet. So we always sort of balance that desire to reduce suffering in all the forms that we see it.”
Lorna Ewart, who spent 20 years at the pharma giant AstraZeneca before joining Emulate, thinks we’re entering a kind of transition time in research, in which scientists use in vitro technologies like organs on chips alongside traditional cell culture methods and animals. “As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal,” she says.
In the meantime, Sean Moore is excited about incorporating intestine chips more and more deeply into his research. His lab has been funded by the Gates Foundation to do what he laughingly describes as a bake-off between intestine chips made by Emulate and Mimetas. They’re infecting the chips with different strains of rotavirus to try to identify the pros and cons of each company’s design. It’s too early for any substantive results, but Moore says he does have data showing that organ chips are a viable model for studying rotavirus infection. That could ultimately be a real game-changer in his lab and in labs around the world.
“There’s more players in the space right now,” says Moore. “And that competition is going to be a healthy thing.”
Harriet Brown writes about health, medicine, and science. Her most recent book is Shadow Daughter: A Memoir of Estrangement. She’s a professor of magazine, news, and digital journalism at Syracuse University’s Newhouse School.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”
His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.
But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.
Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.
More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.
Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.
Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear-cut.
“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.
The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.
Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.
Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.
Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.
“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”
In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”
But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.
Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.
I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive:
Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.
Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.
China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administration issued detailed guidelines that include, for example, a rule to limit use of smart devices to 40 minutes a day for children under the age of eight. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.
Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.
One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”
From around the web
Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)
“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)
Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)
Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)
A potential future conflict between Taiwan and China would be shaped by novel methods of drone warfare involving advanced underwater drones and increased levels of autonomy, according to a new war-gaming experiment by the think tank Center for a New American Security (CNAS).
The report comes as concerns about Beijing’s aggression toward Taiwan have been rising: China sent dozens of surveillance balloons over the Taiwan Strait in January during Taiwan’s elections, and in May, two Chinese naval ships entered Taiwan’s restricted waters. The US Department of Defense has said that preparing for potential hostilities is an “absolute priority,” though no such conflict is immediately expected.
The report’s authors detail a number of ways that use of drones in any South China Sea conflict would differ starkly from current practices, most notably in the war in Ukraine, often called the first full-scale drone war.
Differences from the Ukrainian battlefield
Since Russia invaded Ukraine in 2022, drones have been aiding in what military experts describe as the first three steps of the “kill chain”—finding, targeting, and tracking a target—as well as in delivering explosives. The drones have a short life span, since they are often shot down or made useless by frequency jamming devices that prevent pilots from controlling them. Quadcopters—the commercially available drones often used in the war—last just three flights on average, according to the report.
Drones like these would be far less useful in a possible invasion of Taiwan. “Ukraine-Russia has been a heavily land conflict, whereas conflict between the US and China would be heavily air and sea,” says Zak Kallenborn, a drone analyst and adjunct fellow with the Center for Strategic and International Studies, who was not involved in the report but agrees broadly with its projections. The small, off-the-shelf drones popularized in Ukraine have flight times too short for them to be used effectively in the South China Sea.
An underwater war
Instead, a conflict with Taiwan would likely make use of undersea and maritime drones. With Taiwan just 100 miles away from China’s mainland, the report’s authors say, the Taiwan Strait is where the first days of such a conflict would likely play out. The Zhu Hai Yun, China’s high-tech autonomous carrier, might send its autonomous underwater drones to scout for US submarines. The drones could launch attacks that, even if they did not sink the submarines, might divert the attention and resources of the US and Taiwan.
It’s also possible China would flood the South China Sea with decoy drone boats to “make it difficult for American missiles and submarines to distinguish between high-value ships and worthless uncrewed commercial vessels,” the authors write.
Though most drone innovation is not focused on maritime applications, these uses are not without precedent: Ukrainian forces drew attention for modifying jet skis to operate via remote control and using them to intimidate and even sink Russian vessels in the Black Sea.
More autonomy
Drones currently have very little autonomy. They’re typically human-piloted, and though some are capable of autopiloting to a fixed GPS point, that’s generally not very useful in a war scenario, where targets are on the move. But, the report’s authors say, autonomous technology is developing rapidly, and whichever nation possesses a more sophisticated fleet of autonomous drones will hold a significant edge.
What would that look like? Millions of defense research dollars are being spent in the US and China alike on swarming, a strategy where drones navigate autonomously in groups and accomplish tasks. The technology isn’t deployed yet, but if successful, it could be a game-changer in any potential conflict.
A sea-based conflict might also offer an easier starting ground for AI-driven navigation, because object recognition is easier on the “relatively uncluttered surface of the ocean” than on the ground, the authors write.
China’s advantages
A chief advantage for China in a potential conflict is its proximity to Taiwan; it has more than three dozen air bases within 500 miles, while the closest US base is 478 miles away in Okinawa. But an even bigger advantage is that it produces more drones than any other nation.
“China dominates the commercial drone market, absolutely,” says Stacie Pettyjohn, coauthor of the report and director of the defense program at CNAS. That includes drones of the type used in Ukraine.
For Taiwan to use these Chinese drones for its own defense, it would first have to make the purchase, which could be difficult because the Chinese government might move to block it. Then it would need to hack them and disconnect them from the companies that made them, or else those Chinese manufacturers could turn them off remotely or launch cyberattacks. That sort of hacking is unfeasible at scale, so Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US. On Wednesday, June 19, the US approved a $360 million sale of 1,000 military-grade drones to Taiwan.
For now, experts can only speculate about how those drones might be used. Though preparing for a conflict in the South China Sea is a priority for the DOD, it’s one of many, says Kallenborn. “The sensible approach, in my opinion, is recognizing that you’re going to potentially have to deal with all of these different things,” he says. “But we don’t know the particular details of how it will work out.”
Pneumatic tubes were touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where the main character, Winston Smith, sits in a room peppered with pneumatic tubes that spit out orders for him to alter previously published news stories and historical records to fit the ruling party’s changing narrative.
Abandoned by most industries at midcentury, pneumatic tube systems have become ubiquitous in hospitals.
ALAMY
In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. “The possibilities of compressed air are not fully realized in this country,” declared an 1890 article in the New York Tribune. “The pneumatic tube system of communication is, of course, in use in many of the downtown stores, in newspaper offices […] but there exists a great deal of ignorance about the use of compressed air, even among engineering experts.”
Pneumatic tube technology involves moving a cylindrical carrier or capsule through a series of tubes with the aid of a blower that pushes or pulls it into motion. For a while, the United States took up the systems with gusto. Retail stores and banks were especially interested in their potential to move money more efficiently: “Besides this saving of time to the customer the store is relieved of all the annoying bustle and confusion of boys running for cash on the various retail floors,” one 1882 article in the Boston Globe reported. The benefit to the owner, of course, was reduced labor costs, with tube manufacturers claiming that stores would see a return on their investment within a year.
“The motto of the company is to substitute machines for men and for children as carriers, in every possible way,” a 1914 Boston Globe article said about Lamson Service, one of the largest proprietors of tubes at the time, adding, “[President] Emeritus Charles W. Eliot of Harvard says: ‘No man should be employed at a task which a machine can perform,’ and the Lamson Company supplements that statement by this: ‘Because it doesn’t pay.’”
By 1912, Lamson had over 60,000 customers globally, including retail stores, banks, insurance offices, courtrooms, libraries, hotels, and industrial plants. The postal service in cities such as Boston, Philadelphia, Chicago, and New York also used tubes to deliver the mail, with at least 45 miles of Lamson tubing in place.
On the transportation front, New York City’s first attempt at a subway system, in 1870, also ran on a pneumatic system, and the idea of using tubes to move people continues to beguile innovators to this day. (See Elon Musk’s largely abandoned Hyperloop concept of the 2010s.)
But by the mid to late 20th century, use of the technology had largely fallen by the wayside. It had become cheaper to transport mail by truck than by tube, and as transactions moved to credit cards, there was less demand to make change for cash payments. Electrical rail won out over compressed air, paper records and files disappeared in the wake of digitization, and tubes at bank drive-throughs started being replaced by ATMs, with only a fraction of pharmacies adopting them for their own drive-through service. Pneumatic tube technology became virtually obsolete.
Except in hospitals.
“A pneumatic tube system today for a new hospital that’s being built is ubiquitous. It’s like putting a washing machine or a central AC system in a new home. It just makes too much sense to not do it,” says Cory Kwarta, CEO of Swisslog Healthcare, a corporation that—under its TransLogic company—has provided pneumatic tube systems in health-care facilities for over 50 years. And while the sophistication of these systems has changed over time, the fundamental technology of using pneumatic force to move a capsule from one destination to another has remained the same.
By the turn of the 20th century, health care had become a more scientific endeavor, and different spaces within a hospital were designated for new technologies (like x-rays) or specific procedures (like surgeries). “Instead of having patients in one place, with the doctors and the nurses and everything coming to them, and it’s all happening in the ward, [hospitals] became a bunch of different parts that each had a role,” explains Jeanne Kisacky, an architectural historian who wrote Rise of the Modern Hospital: An Architectural History of Health and Healing, 1870–1940.
Designating different parts of a building for different medical specialties and services, like specimen analysis, also increased the physical footprint of health-care facilities. The result was that nurses and doctors had to spend much of their days moving from one department to another, which was an inefficient use of their time. Pneumatic tube technology provided a solution.
By the 1920s, more and more hospitals started installing tube systems. At first, the capsules primarily moved medical records, prescription orders, and items like money and receipts—similar cargo to what was moved around in banks and retail stores at the time. As early as 1927, however, the systems were also marketed to hospitals as a way to transfer specimens to a central laboratory for analysis.
Two men stand among the 2,000 pneumatic tube canisters in the basement of the Lexington Avenue Post Office in New York City, circa 1915.
In 1955, clubbers at the Reni Ballroom in Berlin exchanged requests for dances via pneumatic tube in a sort of precursor to texting.
In the late 1940s and ’50s, canisters like this one, traveling at around 35 miles an hour, carried as many as 600 letters daily throughout New York City.
The Hospital of the University of Pennsylvania traffics nearly 4,000 specimens daily through its pneumatic tubes.
By the 1960s, pneumatic tubes were becoming standard in health care. As a hospital administrator explained in the January 1960 issue of Modern Hospital, “We are now getting eight hours’ worth of service per day from each nurse, where previously we had been getting about six hours of nursing plus two hours of errand running.”
As computers and credit cards started to become more prevalent in the 1980s, reducing paperwork significantly, the systems shifted to mostly carrying lab specimens, pharmaceuticals, and blood products. Today, lab specimens are roughly 60% of what hospital tube systems carry; pharmaceuticals account for 30%, and blood products for phlebotomy make up 5%.
The carriers or capsules, which can hold up to five pounds, move through piping six inches in diameter—just big enough to hold a 2,000-milliliter IV bag—at speeds of 18 to 24 feet per second, or roughly 12 to 16 miles per hour. The carriers are limited to those speeds to maintain specimen integrity. If blood samples move faster, for example, blood cells can be destroyed.
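The quoted figures are consistent: at 5,280 feet per mile, 18 to 24 feet per second converts to roughly 12 to 16 miles per hour. A quick sanity check of the arithmetic (mine, not the manufacturers’):

```python
# Convert carrier speed from feet per second to miles per hour.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def fps_to_mph(feet_per_second: float) -> float:
    """Miles per hour for a given speed in feet per second."""
    return feet_per_second * SECONDS_PER_HOUR / FEET_PER_MILE

print(round(fps_to_mph(18), 1))  # 12.3 mph at the slow end
print(round(fps_to_mph(24), 1))  # 16.4 mph at the fast end
```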
The pneumatic systems have also gone through major changes in structure in recent years, evolving from fixed routes to networked systems. “It’s like a train system, and you’re on one track and now you have to go to another track,” says Steve Dahl, an executive vice president at Pevco, a manufacturer of these systems.
Exhibition-goers wait to ride the first pneumatic passenger railway in the US at the Exhibition of the American Institute at the New York City Armory in 1867.
GETTY IMAGES
Manufacturers try to get involved early in the hospital design process, says Swisslog’s Kwarta, so “we can talk to the clinical users and say, ‘Hey, what kind of contents do you anticipate sending through this pneumatic tube system, based on your bed count, based on your patient census, and from where and to where do these specimens or materials need to go?’”
Penn Medicine’s University City Medical District in Philadelphia opened up the state-of-the-art Pavilion in 2021. It has three pneumatic systems: the main one is for items directly related to health care, like specimens, and two separate ones handle linen and trash. The main system runs over 12 miles of pipe and completes more than 6,000 transactions on an average day. Sending a capsule between the two farthest points of the system—a distance of multiple city blocks—takes just under five minutes. Walking that distance would take around 20 minutes, not including getting to the floor where the item needs to go.
Michigan Medicine has a system dedicated solely to nuclear medicine, which relies on radioactive materials for treatment. Getting the materials where they need to go is a five- to eight-minute walk—too long given their short shelf life. With the tubes, a dose gets there—in a lead-lined capsule—in less than a minute.
Steven Fox, who leads the electrical engineering team for the pneumatic tubes at Michigan Medicine, describes the scale of the materials his system moves in terms of African elephants, which weigh about six tons. “We try to keep [a carrier’s] load to five pounds apiece,” he says. “So we could probably transport about 30,000 pounds per day. That’s two and a half African elephants that we transport from one side of the hospital to the other every day.”
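Fox’s elephant arithmetic holds up. At five pounds per carrier, moving 30,000 pounds a day means about 6,000 carrier trips, and at roughly 12,000 pounds per six-ton elephant, that daily load is indeed two and a half elephants’ worth. A back-of-the-envelope check using the figures quoted above:

```python
# Back-of-the-envelope check of the "two and a half elephants" figure.
POUNDS_PER_TON = 2000
elephant_lb = 6 * POUNDS_PER_TON      # about 12,000 lb per African elephant
daily_load_lb = 30_000                # quoted daily throughput
carrier_limit_lb = 5                  # per-carrier load limit

print(daily_load_lb / carrier_limit_lb)  # 6000.0 carrier trips per day
print(daily_load_lb / elephant_lb)       # 2.5 elephants' worth of cargo
```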
The equipment to maintain these labyrinthine highways is vast. Michigan and Penn have between 150 and 200 stations where doctors, nurses, and technicians can pick up a capsule or send one off. Keeping those systems moving also requires around 30 blowers and over 150 transfer units to shift carriers to different tube lines as needed. At Michigan Medicine, moving an item from one end of the system to another requires 20 to 25 pieces of equipment.
Before the turn of the century, triggering the blower to move a capsule from point A to point B was accomplished by someone turning or pressing an electronic or magnetic switch. In the 2000s, technicians managed the systems on DOS; these days, the latest systems run on programs that monitor every capsule in real time and allow adjustments based on the level of traffic, the priority level of a capsule, and the demand for additional carriers. The systems run 24 hours a day, every day.
“We treat [the tube system] no different than electricity, steam, water, gas. It’s a utility,” says Frank Connelly, an assistant hospital director at Penn. “Without that, you can’t provide services to people that need it in a hospital.”
“You’re nervous—you just got blood taken,” he continues. “‘How long is it going to be before I get my results back?’ Imagine if they had to wait all that extra time because you’re not sending one person for every vial—they’re going to wait awhile until they get a basket full and then walk to the lab. Nowadays they fill up the tube and send it to the lab. And I think that helps patient care.”
Vanessa Armstrong is a freelance writer whose work has appeared in the New York Times, Atlas Obscura, Travel + Leisure, and elsewhere.