Inside the US government’s brilliantly boring websites

The United States has an official web design system and a custom typeface. This public design system aims to make government websites not only good-looking but accessible and functional for all.

Before the internet, interacting with the federal government often meant stepping into grand buildings adorned with impressive stone columns and gleaming marble floors. Today, the neoclassical architecture of those physical spaces has been (at least partially) replaced by the digital architecture of website design—HTML code, tables, forms, and buttons.

While people visiting a government website to apply for student loans, research veterans’ benefits, or enroll in Medicare might not notice these digital elements, they play a crucial role. If a website is buggy or doesn’t work on a phone, taxpayers may not be able to access the services they have paid for—which can create a negative impression of the government itself.  

There are about 26,000 federal websites in the US. Early on, each site had its own designs, fonts, and log-in systems, creating frustration for the public and wasting government resources. The troubled launch of Healthcare.gov in 2013 highlighted the need for a better way to build government digital services. In 2014, President Obama created two new teams to help improve government tech.

Within the General Services Administration (GSA), a new team called 18F (named for its office at 1800 F Street in Washington, DC) was created to “collaborate with other agencies to fix technical problems, build products, and improve public service through technology.” The team was built to move at the speed of tech startups rather than lumbering bureaucratic agencies. 

The US Digital Service (USDS) was set up “to deliver better government services to the American people through technology and design.” In 2015, the two teams collaborated to build the US Web Design System (USWDS), a style guide and collection of user interface components and design patterns intended to ensure accessibility and a consistent user experience across government websites. “Inconsistency is felt, even if not always precisely articulated in usability research findings,” Dan Williams, the USWDS program lead, said in an email. 

Today, the system defines 47 user interface components such as buttons, alerts, search boxes, and forms, each with design examples, sample code, and guidelines such as “Be polite” and “Don’t overdo it.” Now in its third iteration, it is used on 160 government websites. “As of September 2023, 94 agencies use USWDS code, and it powers about 1.1 billion page views on federal websites,” says Williams.
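In practice, those components are ordinary HTML styled by the design system’s CSS classes. As an illustrative sketch only (class names follow the USWDS 3.x naming conventions; the exact markup varies by version and component), a button and an informational alert look roughly like this:

```html
<!-- A standard USWDS button: all styling comes from the usa-button class -->
<button class="usa-button" type="button">Sign in</button>

<!-- An informational alert, composed of nested usa-alert elements -->
<div class="usa-alert usa-alert--info">
  <div class="usa-alert__body">
    <h4 class="usa-alert__heading">Heads up</h4>
    <p class="usa-alert__text">Applications close on September 30.</p>
  </div>
</div>
```

Because agencies copy this shared markup and CSS rather than inventing their own, a button on one federal site looks and behaves like a button on any other.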

To ensure clear and consistent typography, the free and open-source typeface Public Sans was created for the US government in 2019. “It started as a design experiment,” says Williams, who designed the typeface. “We were interested in trying to establish an open-source solution space for a typeface, just like we had for the other design elements in the design system.”

The teams behind Public Sans and the USWDS embrace transparency and collaboration with government agencies and the public.

And to ensure that the hard-learned lessons aren’t forgotten, the projects embrace continuous improvement. One of the design principles behind Public Sans offers key guidance in this area: “Strive to be better, not necessarily perfect.”

Jon Keegan writes Beautiful Public Data (beautifulpublicdata.com), a newsletter that curates visually interesting data sets collected by local, state, and federal government agencies.

Learning from catastrophe

The philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the metaphor suggests, clock problems obey a certain logic. They are orderly and can be broken down and analyzed piece by piece. When a clock stops working, you’re able to take it apart, look for what’s wrong, and fix it. The fix may not be easy, but it’s achievable. Crucially, you know when you’ve solved the issue because the clock starts telling the time again. 

Wicked Problems: How to Engineer a Better World
Guru Madhavan
W.W. NORTON, 2024

Cloud problems offer no such assurances. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, trying to “fix” a cloud problem often ends up creating several new problems. For this reason, they don’t have a definitive “solved” state—only good and bad (or better and worse) outcomes. Trying to repair a broken-down car is a clock problem. Trying to solve traffic is a cloud problem.  

Engineers are renowned clock-problem solvers. They’re also notorious for treating every problem like a clock. Increasing specialization and cultural expectations play a role in this tendency. But so do engineers themselves, who are typically the ones who get to frame the problems they’re trying to solve in the first place. 

In his latest book, Wicked Problems, Guru Madhavan argues that the growing number of cloudy problems in our world demands a broader, more civic-minded approach to engineering. “Wickedness” is Madhavan’s way of characterizing what he calls “the cloudiest of problems.” It’s a nod to a now-famous coinage by Horst Rittel and Melvin Webber, professors at the University of California, Berkeley, who used the term “wicked” to describe complex social problems that resisted the rote scientific and engineering-based (i.e., clock-like) approaches that were invading their fields of design and urban planning back in the 1970s. 

Madhavan, who’s the senior director of programs at the National Academy of Engineering, is no stranger to wicked problems himself. He’s tackled such daunting examples as trying to make prescription drugs more affordable in the US and prioritizing development of new vaccines. But the book isn’t about his own work. Instead, Wicked Problems weaves together the story of a largely forgotten aviation engineer and inventor, Edwin A. Link, with case studies of man-made and natural disasters that Madhavan uses to explain how wicked problems take shape in society and how they might be tamed.

Link’s story, for those who don’t know it, is fascinating—he was responsible for building the first mechanical flight trainer, using parts from his family’s organ factory—and Madhavan gives a rich and detailed accounting. The challenges this inventor faced in the 1920s and ’30s—which included figuring out how tens of thousands of pilots could quickly and effectively be trained to fly without putting all of them up in the air (and in danger), as well as how to instill trust in “instrument flying” when pilots’ instincts frequently told them their instruments were wrong—were among the quintessential wicked problems of his time. 


Unfortunately, while Link’s biography and many of the interstitial chapters on disasters, like Boston’s Great Molasses Flood of 1919, are interesting and deeply researched, Wicked Problems suffers from some wicked structural choices. 

The book’s elaborate conceptual framework and hodgepodge of narratives feel both fussy and unnecessary, making a complex and nuanced topic even more difficult to grasp at times. In the prologue alone, readers must bounce from the concept of cloud problems to that of wicked problems, which get broken down into hard, soft, and messy problems, which are then reconstituted in different ways and linked to six attributes—efficiency, vagueness, vulnerability, safety, maintenance, and resilience—that, together, form what Madhavan calls a “concept of operations,” which is the primary organizational tool he uses to examine wicked problems.

It’s a lot—or at least enough to make you wonder whether a “systems engineering” approach was the correct lens through which to examine wickedness. It’s also unfortunate because Madhavan’s ultimate argument is an important one, particularly in an age of rampant solutionism and “one neat trick” approaches to complex problems. To effectively address a world full of wicked problems, he says, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.  

Rational Accidents: Reckoning with Catastrophic Technologies
John Downer
MIT PRESS, 2024

While John Downer would likely agree with that sentiment, his new book, Rational Accidents, makes a strong argument that there are hard limits to even the best and broadest engineering approaches. Similarly set in the world of aviation, Downer’s book explores a fundamental paradox at the heart of today’s civil aviation industry: the fact that flying is safer and more reliable than should technically be possible.

Jetliners are an example of what Downer calls a “catastrophic technology.” These are “complex technological systems that require extraordinary, and historically unprecedented, failure rates—of the order of hundreds of millions, or even billions, of operational hours between catastrophic failures.”

Take the average modern jetliner, with its 7 million components and 170 miles’ worth of wiring—an immensely complex system in and of itself. There were over 25,000 jetliners in regular service in 2014, according to Downer. Together, they averaged 100,000 flights every single day. Now consider that in 2017, no passenger-carrying commercial jetliner was involved in a fatal accident. Zero. That year, passenger totals reached 4 billion on close to 37 million flights. Yes, it was a record-setting year for the airline industry, safety-wise, but flying remains an almost unfathomably safe and reliable mode of transportation—even with Boeing’s deadly 737 Max crashes in 2018 and 2019 and the company’s ongoing troubles.

Downer, a professor of science and technology studies at the University of Bristol, does an excellent job in the first half of the book dismantling the idea that we can objectively recognize, understand, and therefore control all risk involved in such complex technologies. Using examples from well-known jetliner crashes, as well as from the Fukushima nuclear plant meltdown, he shows why there are simply too many scenarios and permutations of failure for us to assess or foresee such risks, even with today’s sophisticated modeling techniques and algorithmic assistance.

So how does the airline industry achieve its seemingly unachievable record of safety and reliability? It’s not regulation, Downer says. Instead, he points to three unique factors. First is the massive service experience the industry has amassed. Over the course of 70 years, manufacturers have built tens of thousands of jetliners, which have failed (and continue to fail) in all sorts of unpredictable ways. 

This deep and constantly growing data set, combined with the industry’s commitment to thoroughly investigating each and every failure, lets it generalize the lessons learned across the entire industry—the second key to understanding jetliner reliability. 

Finally, there’s what might be the most interesting and counterintuitive factor: Downer argues that the lack of innovation in jetliner design is an essential but overlooked part of the reliability record. The fact that the industry has been building what are essentially iterations of the same jetliner for 70 years ensures that lessons learned from failures are perpetually relevant as well as generalizable, he says. 

That extremely cautious relationship to change flies in the face of the innovate-or-die ethos that drives most technology companies today. And yet it allows the airline industry to learn from decades of failures and continue to chip away at the future “failure performance” of jetliners.

The bad news is that the lessons in jetliner reliability aren’t transferable to other catastrophic technologies. “It is an irony of modernity that the only catastrophic technology with which we have real experience, the jetliner, is highly unrepresentative, and yet it reifies a misleading perception of mastery over catastrophic technologies in general,” writes Downer.

For instance, to make nuclear reactors as reliable as jetliners, that industry would need to commit to one common reactor design, build tens of thousands of reactors, operate them for decades, suffer through thousands of catastrophes, slowly accumulate lessons and insights from those catastrophes, and then use them to refine that common reactor design.  

This obviously won’t happen. And yet “because we remain entranced by the promise of implausible reliability, and implausible certainty about that reliability, our appetite for innovation has outpaced our insight and humility,” writes Downer. With the age of catastrophic technologies still in its infancy, our continued survival may very well hinge not on innovating our way out of cloudy or wicked problems, but rather on recognizing, and respecting, what we don’t know and can probably never understand.  

If Wicked Problems and Rational Accidents are about the challenges and limits of trying to understand complex systems using objective science- and engineering-based methods, Georgina Voss’s new book, Systems Ultra, provides a refreshing alternative. Rather than dispassionately trying to map out or make sense of complex systems from the outside, Voss—a writer, artist, and researcher—uses her book to grapple with what they feel like, and ultimately what they mean, from the inside.

Systems Ultra: Making Sense of Technology in a Complex World
Georgina Voss
VERSO, 2024

“There is something rather wonderful about simply feeling our way through these enormous structures,” she writes before taking readers on a whirlwind tour of systems visible and unseen, corrupt and benign, ancient and new. Stops include the halls of hype at Las Vegas’s annual Consumer Electronics Show (“a hot mess of a Friday casual hellscape”), the “memetic gold mine” that was the container ship Ever Given and the global supply chain it broke when it got stuck in the Suez Canal, and the payment systems that undergird the porn industry. 

For Voss, systems are both structure and behavior. They are relational technologies that are “defined by their ability to scale and, perhaps more importantly, their peculiar relationship to scale.” She’s also keenly aware of the pitfalls of using an “experiential” approach to make sense of these large-scale systems. “Verbal attempts to neatly encapsulate what a system is can feel like a stoner monologue with pointed hand gestures (‘Have you ever thought about how electricity is, like, really big?’),” she writes. 

Nevertheless, her written attempts are a delight to read. Voss manages to skillfully unpack the power structures that make up, and reinforce, the large-scale systems we live in. Along the way, she also dispels many of the stories we’re told about their inscrutability and inevitability. That she does all this with humor, intelligence, and a boundless sense of curiosity makes Systems Ultra both a shining example of the “civic engagement as engineering” approach that Madhavan argues for in Wicked Problems, and proof that his argument is spot on. 

Bryan Gardiner is a writer based in Oakland, California.

Toys can change your life

In a November 1984 story for Technology Review, Carolyn Sumners, curator of astronomy at the Houston Museum of Natural Science, described how toys, games, and even amusement park rides could change how young minds view science and math. “The Slinky,” Sumners noted, “has long served teachers as a medium for demonstrating longitudinal (soundlike) waves and transverse (lightlike) waves.” A yo-yo can be used as a gauge (a “yo-yo meter”) to observe the forces on a roller coaster. Marbles employ mass and velocity. Even a simple ball offers insights into the laws of gravity.

While Sumners focused on physics, she was onto something bigger. Over the last several decades, evidence has emerged that childhood play can shape our future selves: the skills we develop, the professions we choose, our sense of self-worth, and even our relationships.

That doesn’t mean we should foist “educational” toys like telescopes or tiny toolboxes on kids to turn them into astronomers or carpenters. As Sumners explained, even “fun” toys offer opportunities to discover the basic principles of physics. 

According to Jacqueline Harding, a child development expert and author of The Brain That Loves to Play, “If you invest time in play, which helps with executive functioning, decision-making, resilience—all those things—then it’s going to propel you into a much more safe, secure space in the future.”

Sumners was focused mostly on hard skills, the scientific knowledge that toys and games can foster. But there are soft skills, too, like creativity, problem-solving, teamwork, and empathy. According to Harding, the less structure there is to such play—the fewer rules and goals—the more these soft skills emerge.

“The kinds of playthings, or play activities, that really produce creative thought,” she says, “are natural materials, with no defined end to them—like clay, paint, water, and mud—so that there is no right or wrong way of playing with it.” 

Playing is by definition voluntary, spontaneous, and goal-free; it involves taking risks, testing boundaries, and experimenting. The best kind of play results in joyful discovery, and along the way, the building blocks of innovation and personal development take shape. But in the decades since Sumners wrote her story, the landscape of play has shifted considerably. Recent research by the American Academy of Pediatrics’ Council on Early Childhood suggests that digital games and virtual play don’t confer the same developmental benefits as physical games and outdoor play.

“The brain loves the rewards that are coming from digital media,” says Harding. But in screen-based play, “you’re not getting that autonomy.” The lack of physical interaction also concerns her: “It is the quality of human face-to-face interaction, body proximity, eye-to-eye gaze, and mutual engagement in a play activity that really makes a difference.”

Bill Gourgey is a science writer based in Washington, DC.

Do you want to play a game?

For children, play comes so naturally. They don’t have to be encouraged to play. They don’t need equipment, or the latest graphics processors, or the perfect conditions—they just do it. What’s more, study after study has found that play has a crucial role in childhood growth and development. If you want to witness the absolute rapture of creative expression, just observe the unstructured play of children.

So what happens to us as we grow older? Children begin to compete with each other by age four or five. Play begins to transform from something we do purely for fun into something we use to achieve status and rank ourselves against other people. We play to score points. We play to win. 

And with that, play starts to become something different. Not that it can’t still be fun and joyful! We get so much joy by proxy from watching other people play, and from watching their achievements, that we spend massive amounts of money to do so. According to StubHub, the average price of a ticket to the Super Bowl this year was $8,600. The average price for a Super Bowl ad was a cool $7 million, according to Ad Age.

This kind of interest doesn’t just apply to physical games. Video-game streaming has long been a mainstay on YouTube, and entire industries have risen up around it. Top streamers on Twitch—Amazon’s livestreaming service, which is heavily gaming focused—earn upwards of $100,000 per month. And the global market for video games themselves is projected to bring in some $282 billion in revenue this year.

Simply put, play is serious business. 

There are fortunes to be had in making our play more appealing, more accessible, more fun. All of the features in this issue dig in on the enormous amount of research and development that goes into making play “better.”  

On our cover this month is executive editor Niall Firth’s feature on the ways AI is going to upend game development. As you will read, we are about to enter the Wild West—Red Dead or not—of game character development. How will games change when they become less predictable and more fully interactive, thanks to AI-driven nonplayer characters who can not only go off script but even continue to play with each other when we’re not there? Will these even be games anymore, or will we simply be playing around in experiences? What kinds of parasocial relationships will we develop in these new worlds? It’s a fascinating read. 

There is no sport more intimately connected to the ocean, and to water, than surfing. It’s pure play on top of the waves. And when you hear surfers talk about entering the flow state, this is very much the same kind of state children experience at play—intensely focused, losing all sense of time and the world around them. Finding that flow no longer means living by the water’s edge, Eileen Guo reports. At surf pools all over the world, we’re piping water into (or out of) deserts to create perfect waves hundreds of miles from the ocean. How will that change the sport, and at what environmental cost? 

Just as we can make games more interesting, or bring the ocean to the desert, we have long pushed the limits of how we can make our bodies better, faster, stronger. Among the most recent ways we have done this is with the advent of so-called supershoes—running shoes with rigid carbon-fiber plates and bouncy proprietary foams. The late Kelvin Kiptum utterly destroyed the men’s world record for the marathon last year wearing a pair of supershoes made by Nike, clocking in at a blisteringly hot 2:00:35. Jonathan W. Rosen explores the science and technology behind these shoes and how they are changing the sport, especially in Kenya. 

There’s plenty more, too. So I hope you enjoy the Play issue. We certainly put a lot of work into it. But of course, what fun is play if you don’t put in the work?

Thanks for reading,

Mat Honan

Why China’s dominance in commercial drones has become a global security matter

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for photography and surveillance, as well as for spraying pesticides, moving parcels, and many other purposes around the world.  

But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations. 

The reason? While its market dominance has attracted scrutiny for years, it’s increasingly clear that DJI’s commercial products are so good and affordable that they are also being used on active battlefields to scout out the enemy or carry bombs. As the US worries about the potential for conflict between China and Taiwan, the military implications of DJI’s commercial drones are becoming a top policy concern.

DJI has managed to set the gold standard for commercial drones because it is built on decades of electronic manufacturing prowess and policy support in Shenzhen. It is an example of how China’s manufacturing advantage can turn into a technological one.

“I’ve been to the DJI factory many times … and mainly, China’s industrial base is so deep that every component ends up being a fraction of the cost,” Sam Schmitz, the mechanical engineering lead at Neuralink, wrote on X. Shenzhen and surrounding towns have had a robust factory scene for decades, providing an indispensable supply chain for a hardware industry like drones. “This factory made almost everything, and it’s surrounded by thousands of factories that make everything else … nowhere else in the world can you run out of some weird screw and just walk down the street until you find someone selling thousands of them,” he wrote.

But Shenzhen’s municipal government has also significantly contributed to the industry. For example, it has granted companies more permission for potentially risky experiments and set up subsidies and policy support. Last year, I visited Shenzhen to experience how it’s already incorporating drones in everyday food delivery, but the city is also working with companies to use drones for bigger and bigger jobs—carrying everything from packages to passengers. All of this feeds into a plan to build up the “low-altitude economy” that keeps Shenzhen on the leading edge of drone technology.

As a result, the supply chain in Shenzhen has become so competitive that the world can’t really use drones without it. Chinese drones are simply the most accessible and affordable out there. 

Most recently, DJI’s drones have been used by both sides in the Ukraine-Russia conflict for reconnaissance and bombing. Some American companies tried to replace DJI’s role, but their drones were more expensive and their performance unsatisfactory. And even as DJI publicly suspended its businesses in Russia and Ukraine and said it would terminate any reseller relationship if its products were found to be used for military purposes, the Ukrainian army is still assembling its own drones with parts sourced from China.

This reliance on one Chinese company and the supply chain behind it is what worries US politicians, but the danger would be more pronounced in any conflict between China and Taiwan, a prospect that is a huge security concern in the US and globally.

Last week, my colleague James O’Donnell wrote about a report by the think tank Center for a New American Security (CNAS) that analyzed the role of drones in a potential war in the Taiwan Strait. Right now, both Ukraine and Russia are still finding ways to source drones or drone parts from Chinese companies, but it’d be much harder for Taiwan to do so, since it would be in China’s interest to block its opponent’s supply. “So Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US,” James wrote.

If the ban on DJI sales in the US is eventually passed, it will certainly hit the company hard, as the US drone market is currently worth an estimated $6 billion, most of which goes to DJI. But undercutting DJI’s advantage won’t magically grow an alternative drone industry outside China. 

“The actions taken against DJI suggest protectionism and undermine the principles of fair competition and an open market. The Countering CCP Drones Act risks setting a dangerous precedent, where unfounded allegations dictate public policy, potentially jeopardizing the economic well-being of the US,” DJI told MIT Technology Review in an emailed statement.

The Taiwanese government is aware of the risks of relying too much on China’s drone industry, and it’s looking to change. In March, Taiwan’s newly elected president, Lai Ching-te, said that Taiwan wants to become the “Asian center for the democratic drone supply chain.” 

Already the hub of global semiconductor production, Taiwan seems well positioned to grow another hardware industry like drones, but it will probably still take years or even decades to build the economies of scale seen in Shenzhen. With support from the US, can Taiwanese companies really grow fast enough to meaningfully sway China’s control of the industry? That’s a very open question.

A housekeeping note: I’m currently visiting London, and the newsletter will take a break next week. If you are based in the UK and would like to meet up, let me know by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. ByteDance is working with the US chip design company Broadcom to develop a five-nanometer AI chip. This US-China collaboration, which should be compliant with US export restrictions, is rare these days given the political climate. (Reuters $)

2. After both the European Union and China announced new tariffs against each other, the two sides agreed to chat about how to resolve the dispute. (New York Times $)

  • Canada is preparing to announce its own tariffs on Chinese-made electric vehicles. (Bloomberg $)

3. A NASA leader says the US is “on schedule” to send astronauts to the moon within a few years. There’s currently a heated race between the US and China on moon exploration. (Washington Post $)

4. A new cybersecurity report says RedJuliett, a China-backed hacker group, has intensified attacks on Taiwanese organizations this year. (Al Jazeera $)

5. The Canadian government is blocking a rare earth mine from being sold to a Chinese company. Instead, the government will buy the stockpiled rare earth materials for $2.2 million. (Bloomberg $)

6. Economic hardship at home has pushed some Chinese small investors to enter the US marijuana industry. They have been buying land in the States, setting up marijuana farms, and hiring other new Chinese immigrants. (NPR)

Lost in translation

In the past week, the most talked-about person in China has been a 17-year-old girl named Jiang Ping, according to the Chinese publication Southern Metropolis Daily. Every year since 2018, the Chinese company Alibaba has been hosting a global mathematics contest that attracts students from prestigious universities around the world to compete for a generous prize. But to everyone’s surprise, Jiang, who’s studying fashion design at a vocational high school in a poor town in eastern China, ended up ranking 12th in the qualifying round this year, beating scores of college undergraduates and even master’s students. Other than reading college mathematics textbooks under her math teacher’s guidance, Jiang has received none of the professional training that many of her competitors have.

Jiang’s story, highlighted by Alibaba following the announcement of the first-round results, immediately went viral in China. While some saw it as a tale of buried talents and how personal endeavor can overcome unfavorable circumstances, others questioned the legitimacy of her results. She became so famous that people, including social media influencers, kept visiting her home, turning her hometown into an unlikely tourist destination. The town had to hide Jiang from public attention while she prepared for the final round of the competition.

One more thing

After I wrote about the new Chinese generative video model Kling last week, the AI tool added a new feature that can turn a static photo into a short video clip. Well, what better way to test its performance than feeding it the iconic “distracted boyfriend” meme and watching what the model predicts will happen after that moment?

Update: The story has been updated to include a statement from DJI.

The Download: Introducing the Play issue

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Supershoes are reshaping distance running

Since 2016, when Nike introduced the Vaporfly, a paradigm-shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport.

“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.

Still, some argue that they’ve changed the sport too quickly. Read the full story. 

—Jonathan W. Rosen

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to launch tomorrow, so if you don’t already, subscribe now to get a copy when it lands.

Why China’s dominance in commercial drones has become a global security issue

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for everything from photography to spraying pesticides to moving parcels.

But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations. 

To understand why, you need to consider the potential for conflict between China and Taiwan, and the fact that the military implications of DJI’s commercial drones have become a top policy concern for US lawmakers. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The EU has issued antitrust charges against Microsoft 
For bundling Teams with Office—just a day after it announced similar charges against Apple. (WSJ $) 
+ It seems likely it’ll be hit with a gigantic fine. (Ars Technica)
+ The EU has new powers to regulate the tech sector, and it’s clearly not afraid to use them. (FT $)

2 OpenAI is delaying launching its voice assistant (WP $)
+ It’s also planning to block access in China—but plenty of Chinese companies stand ready to fill the void. (Mashable)

3 Deepfake creators are re-victimizing sex trafficking survivors
Non-consensual deepfake porn is proliferating at a terrifying pace—but this is the grimmest example I’ve seen. (Wired $)
+ Three ways we can fight deepfake porn. (MIT Technology Review)

4 Chinese tech company IPOs are a rarity these days
It’s becoming very hard to avoid the risk of it all being derailed by political scrutiny, whether at home or abroad. (NYT $)
+ Global chip company stock prices have been on a rollercoaster ride recently, thanks to Nvidia. (CNBC)

5 Why AI is not about to replace journalism
It can crank out content, sure—but it’s incredibly boring to read. (404 Media)
+ After all the hype, it’s no wonder lots of us feel ever-so-slightly disappointed by AI. (WP $)
+ Despite a troubled launch, Google’s already extending AI Summaries to Gmail as well as Search. (CNET)

6 This week of extreme weather is a sign of things to come
Summers come with a side-serving of existential dread now, as we all feel the effects of climate change. (NBC)
+ Scientists have spotted a worrying new tipping point for the loss of ice sheets in Antarctica. (The Guardian)

7 Inside the fight over lithium mine expansion in Argentina 
Indigenous communities had been united in opposition—but as the cash started flowing, cracks started appearing. (The Guardian)
+ Lithium battery fires are a growing concern for firefighters worldwide. (WSJ $)

8 What even is intelligent life?
We value it, but it’s a slippery concept that’s almost impossible to define. (Aeon)
+ What an octopus’s mind can teach us about AI’s ultimate mystery. (MIT Technology Review)

9 Tesla is recalling most Cybertrucks… for the fourth time 
You have to laugh, really. (The Verge)
+ Luckily, it’s not sold that many of them anyway. (Quartz $)

10 The trouble with Meta’s “smart” Ray Bans 
Well… basically they’re just not very smart. At all. (Wired $)

Quote of the day

“We’re making the biggest bet in AI. If transformers go away, we’ll die. But if they stick around, we’re the biggest company of all time.”

—Fighting talk to CNBC from Gavin Uberti, cofounder and CEO of a two-year-old startup called Etched, which believes its AI-optimized chips could take on Nvidia’s near-monopoly.

The big story

This nanoparticle could be the key to a universal covid vaccine

3D model of the mosaic nanoparticle vaccine

COURTESY OF WELLCOME LEAP, CALTECH, AND MERKIN INSTITUTE

September 2022
Long before Alexander Cohen—or anyone else—had heard of the alpha, delta, or omicron variants of covid-19, he and his graduate school advisor Pamela Bjorkman were doing the research that might soon make it possible for a single vaccine to defeat the rapidly evolving virus—along with any other covid-19 variant that might arise in the future.

The pair and their collaborators are now tantalizingly close to achieving their goal of manufacturing a vaccine that broadly triggers an immune response not just to covid and its variants but to a wider variety of coronaviruses. Read the full story.

—Adam Piore

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Happy 80th Birthday to much beloved Muswell Hillbilly Ray Davies, frontman of the Kinks.
+ Need to cool your home down? Plants can help!
+ Well, uh, that’s certainly one way to cope with a long-haul flight. 
+ Glad to know I’m not the only person obsessed with Nongshim instant noodles.

Synthesia’s hyperrealistic deepfakes will soon have full bodies

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year. 

“It’s very impressive. No one else is able to do that,” says Jack Saunders, a researcher at the University of Bath, who was not involved in Synthesia’s work. 

The full-body avatars he previewed are very good, he says, despite small errors such as hands “slicing” into each other at times. But “chances are you’re not really going to be looking that close to notice it,” Saunders says. 

Synthesia launched its first version of hyperrealistic AI avatars, also known as deepfakes, in April. These avatars use large language models to match expressions and tone of voice to the sentiment of spoken text. Diffusion models, as used in image- and video-generating AI systems, create the avatar’s look. However, the avatars in this generation appear only from the torso up, which can detract from the otherwise impressive realism. 

To create the full-body avatars, Synthesia is building an even bigger AI model. Users will have to go into a studio to record their body movements.

COURTESY SYNTHESIA

But before these full-body avatars become available, the company is launching another version of AI avatars that have hands and can be filmed from multiple angles. Their predecessors were only available in portrait mode and were just visible from the front. 

Other startups, such as Hour One, have launched similar avatars with hands. Synthesia’s version, which I got to test in a research preview and will be launched in late July, has slightly more realistic hand movements and lip-synching. 

Crucially, the coming update also makes it far easier to create your own personalized avatar. The company’s previous custom AI avatars required users to go into a studio to record their face and voice over the span of a couple of hours, as I reported in April.

This time, I recorded the material needed in just 10 minutes in the Synthesia office, using a digital camera, a lapel mike, and a laptop. But an even more basic setup, such as a laptop camera, would do. And while previously I had to record my facial movements and voice separately, this time the data was collected at the same time. The process also includes reading a script expressing consent to being recorded in this way, and reading out a randomly generated security passcode. 

These changes allow more scale and give the AI models powering the avatars more capabilities with less data, says Riparbelli. The results are also much faster. While I had to wait a few weeks to get my studio-made avatar, the new homemade ones were available the next day. 

Below, you can see my test of the new homemade avatars with hands. 

COURTESY SYNTHESIA

The homemade avatars aren’t as expressive as the studio-made ones yet, and users can’t change the backgrounds of their avatars, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. The hands are animated using an advanced form of looping technology, which repeats the same hand movements in a way that is responsive to the content of the script. 

Hands are tricky for AI to do well—even more so than faces, Vittorio Ferrari, Synthesia’s director of science, told me in March. That’s because our mouths move in relatively small and predictable ways while we talk, making it possible to sync the deepfake version up with speech, but we move our hands in lots of different ways. On the flip side, while faces require close attention to detail because we tend to focus on them, hands can be less precise, Ferrari says. 

Even if they’re imperfect, AI-generated hands and bodies add a lot to the illusion of realism, which poses serious risks at a time when deepfakes and online misinformation are proliferating. Synthesia has strict content moderation policies, carefully vetting both its customers and the sort of content they’re able to generate. For example, only accredited news outlets can generate content on news.  

These new advancements in avatar technologies are another hammer blow to our ability to believe what we see online, says Saunders. 

“People need to know you can’t trust anything,” he says. “Synthesia is doing this now, and another year down the line it will be better and other companies will be doing it.” 

How generative AI could reinvent what it means to play

First, a confession. I only got into playing video games a little over a year ago (I know, I know). A Christmas gift of an Xbox Series S “for the kids” dragged me—pretty easily, it turns out—into the world of late-night gaming sessions. I was immediately attracted to open-world games, in which you’re free to explore a vast simulated world and choose what challenges to accept. Red Dead Redemption 2 (RDR2), an open-world game set in the Wild West, blew my mind. I rode my horse through sleepy towns, drank in the saloon, visited a vaudeville theater, and fought off bounty hunters. One day I simply set up camp on a remote hilltop to make coffee and gaze down at the misty valley below me.

To make them feel alive, open-world games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or space ports of games. They make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.

a man leads his horse through mountainous terrain toward a sunrise in Red Dead Redemption 2
a scene of gunfighters in Red Dead Redemption 2

In open-world games like Red Dead Redemption 2, players can choose diverse interactions within the same simulated experience.

After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun—I had a whale of a time, honestly, looting stagecoaches, fighting in bar brawls, and stalking deer through rainy woods—but the illusion starts to weaken when you poke at it. It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you loot a few stagecoaches, you finish, you move on. 

It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.

Startups employing generative-AI models, like ChatGPT, are using them to create characters that don’t rely on scripts but, instead, converse with you freely. Others are experimenting with NPCs who appear to have entire interior worlds, and who can continue to play even when you, the player, are not around to watch. Eventually, generative AI could create game experiences that are infinitely detailed, twisting and changing every time you experience them. 

The field is still very new, but it’s extremely hot. In 2022 the venture firm Andreessen Horowitz launched Games Fund, a $600 million fund dedicated to gaming startups. A huge number of these are planning to use AI in gaming. And the firm, also known as A16Z, has now invested in two studios that are aiming to create their own versions of AI NPCs. A second $600 million round was announced in April 2024.

Early experimental demos of these experiences are already popping up, and it may not be long before they appear in full games like RDR2. But some in the industry believe this development will not just make future open-world games incredibly immersive; it could change what kinds of game worlds or experiences are even possible. Ultimately, it could change what it means to play.

“What comes after the video game? You know what I mean?” says Frank Lantz, a game designer and director of the NYU Game Center. “Maybe we’re on the threshold of a new kind of game.”

These guys just won’t shut up

The way video games are made hasn’t changed much over the years. Graphics have grown incredibly realistic. Games have gotten bigger. But the way you interact with characters, and with the game world around you, still relies on many of the same decades-old conventions.

“In mainstream games, we’re still looking at variations of the formula we’ve had since the 1980s,” says Julian Togelius, a computer science professor at New York University who has a startup called Modl.ai that does in-game testing. Part of that tried-and-tested formula is a technique called a dialogue tree, in which all of an NPC’s possible responses are mapped out. Which one you get depends on which branch of the dialogue tree you have chosen. For example, say something rude about a passing NPC in RDR2 and the character will probably lash out—you have to quickly apologize to avoid a shootout (unless that’s what you want).
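A dialogue tree of the kind Togelius describes can be sketched as a simple branching data structure: every possible NPC response is written in advance, and the player’s choice selects which branch plays next. The node names and lines below are hypothetical examples, not taken from any actual game.

```python
# Minimal sketch of a dialogue tree: every NPC response is authored in
# advance, and the player's choice selects which pre-written branch plays.
# All node names and lines here are made-up examples.

DIALOGUE_TREE = {
    "greet": {
        "line": "What do you want, stranger?",
        "choices": {
            "apologize": "Sorry, friend. Didn't mean anything by it.",
            "insult": "Out of my way.",
        },
    },
    "apologize": {"line": "Hmph. Watch yourself.", "choices": {}},
    "insult": {"line": "Them's fightin' words!", "choices": {}},  # triggers a shootout
}

def respond(node_id: str, choice: str) -> str:
    """Return the NPC's next line for a given player choice."""
    node = DIALOGUE_TREE[node_id]
    if choice not in node["choices"]:
        raise ValueError(f"No such branch from {node_id!r}: {choice!r}")
    return DIALOGUE_TREE[choice]["line"]

print(respond("greet", "apologize"))  # -> Hmph. Watch yourself.
```

The scaling problem is visible even in this toy: every branch a player might take has to be written, voiced, and wired up by hand, which is why AAA studios resort to the “insane amounts of writing” Togelius describes.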

In the most expensive, high-profile games, the so-called AAA games like Elden Ring or Starfield, a deeper sense of immersion is created by using brute force to build out deep and vast dialogue trees. The biggest studios employ teams of hundreds of game developers who work for many years on a single game in which every line of dialogue is plotted and planned, and software is written so the in-game engine knows when to deploy that particular line. RDR2 reportedly contains some 500,000 lines of dialogue, voiced by around 700 actors. 

“You get around the fact that you can [only] do so much in the world by, like, insane amounts of writing, an insane amount of designing,” says Togelius. 

Generative AI is already helping take some of that drudgery out of making new games. Jonathan Lai, a general partner at A16Z and one of Games Fund’s managers, says that most studios are using image-generating tools like Midjourney to enhance or streamline their work. And in a 2023 survey by A16Z, 87% of game studios said they were already using AI in their workflow in some way—and 99% planned to do so in the future. Many use AI agents to replace the human testers who look for bugs, such as places where a game might crash. In recent months, the CEO of the gaming giant EA said generative AI could be used in more than 50% of its game development processes.

Ubisoft, one of the biggest game developers, famous for AAA open-world games such as Assassin’s Creed, has been using a large-language-model-based AI tool called Ghostwriter to do some of the grunt work for its developers in writing basic dialogue for its NPCs. Ghostwriter generates loads of options for background crowd chatter, which the human writer can pick from or tweak. The idea is to free the humans up so they can spend that time on more plot-focused writing.

GEORGE WYLESOL

Ultimately, though, everything is scripted. Once you spend a certain number of hours on a game, you will have seen everything there is to see, and completed every interaction. Time to buy a new one.

But for startups like Inworld AI, this situation is an opportunity. Inworld, based in California, is building tools to make in-game NPCs that respond to a player with dynamic, unscripted dialogue and actions—so they never repeat themselves. The company, now valued at $500 million, is the best-funded AI gaming startup around thanks to backing from former Google CEO Eric Schmidt and other high-profile investors. 

Role-playing games give us a unique way to experience different realities, explains Kylan Gibbs, Inworld’s CEO and founder. But something has always been missing. “Basically, the characters within there are dead,” he says. 

“When you think about media at large, be it movies or TV or books, characters are really what drive our ability to empathize with the world,” Gibbs says. “So the fact that games, which are arguably the most advanced version of storytelling that we have, are lacking these live characters—it felt to us like a pretty major issue.”

Gamers themselves were pretty quick to realize that LLMs could help fill this gap. Last year, some came up with ChatGPT mods (a way to alter an existing game) for the popular role-playing game Skyrim. The mods let players interact with the game’s vast cast of characters using LLM-powered free chat. One mod even included OpenAI’s speech recognition software Whisper AI so that players could speak to the characters with their own voices, saying whatever they wanted, and have full conversations that were no longer restricted by dialogue trees. 

The results gave gamers a glimpse of what might be possible but were ultimately a little disappointing. Though the conversations were open-ended, the character interactions were stilted, with delays while ChatGPT processed each request. 

Inworld wants to make this type of interaction more polished. It’s offering a product for AAA game studios in which developers can create the brains of an AI NPC that can be then imported into their game. Developers use the company’s “Inworld Studio” to generate their NPC. For example, they can fill out a core description that sketches the character’s personality, including likes and dislikes, motivations, or useful backstory. Sliders let you set levels of traits such as introversion or extroversion, insecurity or confidence. And you can also use free text to make the character drunk, aggressive, prone to exaggeration—pretty much anything.

Developers can also add descriptions of how their character speaks, including examples of commonly used phrases that Inworld’s various AI models, including LLMs, then spin into dialogue in keeping with the character. 

“Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended.”

Jeff Orkin, founder, Bitpart

Game designers can also plug other information into the system: what the character knows and doesn’t know about the world (no Taylor Swift references in a medieval battle game, ideally) and any relevant safety guardrails (does your character curse or not?). Narrative controls will let the developers make sure the NPC is sticking to the story and isn’t wandering wildly off-base in its conversation. The idea is that the characters can then be imported into video-game graphics engines like Unity or Unreal Engine to add a body and features. Inworld is collaborating with the text-to-voice startup ElevenLabs to add natural-sounding voices.

Inworld’s tech hasn’t appeared in any AAA games yet, but at the Game Developers Conference (GDC) in San Francisco in March 2024, the firm unveiled an early demo with Nvidia that showcased some of what will be possible. In Covert Protocol, each player operates as a private detective who must solve a case using input from the various in-game NPCs. Also at the GDC, Inworld unveiled a demo called NEO NPC that it had worked on with Ubisoft. In NEO NPC, a player could freely interact with NPCs using voice-to-text software and use conversation to develop a deeper relationship with them.

LLMs give us the chance to make games more dynamic, says Jeff Orkin, founder of Bitpart, a new startup that also aims to create entire casts of LLM-powered NPCs that can be imported into games. “Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended,” he says.

Bitpart’s approach is in part inspired by Orkin’s PhD research at MIT’s Media Lab. There, he trained AIs to role-play social situations using game-play logs of humans doing the same things with each other in multiplayer games.

Bitpart’s casts of characters are trained using a large language model and then fine-tuned in a way that means the in-game interactions are not entirely open-ended and infinite. Instead, the company uses an LLM and other tools to generate a script covering a range of possible interactions, and then a human game designer will select some. Orkin describes the process as authoring the Lego bricks of the interaction. An in-game algorithm searches out specific bricks to string them together at the appropriate time.
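Orkin’s Lego-brick metaphor suggests a hybrid pipeline: fragments of dialogue are generated offline, curated by a human designer, tagged with the situations they fit, and then strung together at runtime by a simple matcher. The sketch below is an illustrative guess at that idea, not Bitpart’s actual system; all tags, speakers, and lines are invented.

```python
# Illustrative sketch of the "Lego brick" approach: dialogue fragments are
# generated offline (e.g. by an LLM), hand-curated, tagged with the
# situations they fit, and matched against the game state at runtime.
# Every tag, speaker, and line here is a hypothetical example.

BRICKS = [
    {"tags": {"restaurant", "order"}, "speaker": "waiter",
     "line": "Of course -- anything to drink with that?"},
    {"tags": {"restaurant", "order"}, "speaker": "bartender",
     "line": "Couldn't help overhearing -- we just got a new cider in."},
    {"tags": {"street", "greeting"}, "speaker": "stranger",
     "line": "Fine morning, isn't it?"},
]

def bricks_for(situation: set[str]) -> list[dict]:
    """Return every curated brick whose tags are all present in the situation."""
    return [b for b in BRICKS if b["tags"] <= situation]

# A player ordering in a restaurant can pull in the eavesdropping
# bartender as well as the waiter:
for brick in bricks_for({"restaurant", "order", "evening"}):
    print(f'{brick["speaker"]}: {brick["line"]}')
```

Because the bricks are authored (or at least approved) ahead of time, the designer keeps control over tone and story while the runtime matcher provides the situational variety.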

Bitpart’s approach could create some delightful in-game moments. In a restaurant, for example, you might ask a waiter for something, but the bartender might overhear and join in. Bitpart’s AI currently works with Roblox. Orkin says the company is now running trials with AAA game studios, although he won’t yet say which ones.

But generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.

Making the impossible possible

When I asked Frank Lantz about how AI could change gaming, he talked for 26 minutes straight. His initial reaction to generative AI had been visceral: “I was like, oh my God, this is my destiny and is what I was put on the planet for.” 

Lantz has been in and around the cutting edge of the game industry and AI for decades but received a cult level of acclaim a few years ago when he created the Universal Paperclips game. The simple in-browser game gives the player the job of producing as many paper clips as possible. It’s a riff on the famous thought experiment by the philosopher Nick Bostrom, which imagines an AI that is given the same task and optimizes against humanity’s interest by turning all the matter in the known universe into paper clips.

Lantz is bursting with ideas for ways to use generative AI. One is to experience a new work of art as it is being created, with the player participating in its creation. “You’re inside of something like Lord of the Rings as it’s being written. You’re inside a piece of literature that is unfolding around you in real time,” he says. He also imagines strategy games where the players and the AI work together to reinvent what kind of game it is and what the rules are, so it is never the same twice.

For Orkin, LLM-powered NPCs can make games unpredictable—and that’s exciting. “It introduces a lot of open questions, like what you do when a character answers you but that sends a story in a direction that nobody planned for,” he says. 

Generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.

It might mean games that are unlike anything we’ve seen thus far. Gaming experiences that unspool as the characters’ relationships shift and change, as friendships start and end, could unlock entirely new narrative experiences that are less about action and more about conversation and personalities. 

Togelius imagines new worlds built to react to the player’s own wants and needs, populated with NPCs that the player must teach or influence as the game progresses. Imagine interacting with characters whose opinions can change, whom you could persuade or motivate to act in a certain way—say, to go to battle with you. “A thoroughly generative game could be really, really good,” he says. “But you really have to change your whole expectation of what a game is.”

Lantz is currently working on a prototype of a game in which the premise is that you—the player—wake up dead, and the afterlife you are in is a low-rent, cheap version of a synthetic world. The game plays out like a noir in which you must explore a city full of thousands of NPCs powered by a version of ChatGPT, whom you must interact with to work out how you ended up there. 

His early experiments gave him some eerie moments when he felt that the characters seemed to know more than they should, a sensation recognizable to people who have played with LLMs before. Even though you know they’re not alive, they can still freak you out a bit.

“If you run electricity through a frog’s corpse, the frog will move,” he says. “And if you run $10 million worth of computation through the internet … it moves like a frog, you know.” 

But these early forays into generative-­­AI gaming have given him a real sense of excitement for what’s next: “I felt like, okay, this is a thread. There really is a new kind of artwork here.”

If an AI NPC talks and no one is around to listen, is there a sound?

AI NPCs won’t just enhance player interactions—they might interact with one another in weird ways. Red Dead Redemption 2’s NPCs each have long, detailed scripts that spell out exactly where they should go, what work they must complete, and how they’d react if anything unexpected occurred. If you want, you can follow an NPC and watch it go about its day. It’s fun, but ultimately it’s hard-coded.

NPCs built with generative AI could have a lot more leeway—even interacting with one another when the player isn’t there to watch. Just as people have been fooled into thinking LLMs are sentient, watching a city of generated NPCs might feel like peering over the top of a toy box that has somehow magically come alive.

We’re already getting a sense of what this might look like. At Stanford University, Joon Sung Park has been experimenting with AI-generated characters and watching to see how their behavior changes and gains complexity as they encounter one another. 

Because large language models have sucked up the internet and social media, they actually contain a lot of detail about how we behave and interact, he says.

a character from Skyrim
Gamers came up with ChatGPT mods for the popular role-playing game Skyrim.
creatures walking in a verdant landscape
Although 2016’s hugely hyped No Man’s Sky used procedural generation to create endless planets to explore, many saw it as a letdown.
a player interacting with an NPC behind a service desk
In Covert Protocol, players operate as private detectives who must solve the case using input from various in-game NPCs.

In Park’s recent research, he and colleagues set up a Sims-like game, called Smallville, with 25 simulated characters that had been trained using generative AI. Each was given a name and a simple biography before being set in motion. When left to interact with each other for two days, they began to exhibit humanlike conversations and behavior, including remembering each other and being able to talk about their past interactions. 

For example, the researchers prompted one character to organize a Valentine’s Day party—and then let the simulation run. That character sent invitations around town, while other members of the community asked each other on dates to go to the party, and all turned up at the venue at the correct time. All of this was carried out through conversations, and past interactions between characters were stored in their “memories” as natural language.
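The mechanism described above—interactions stored as natural-language “memories” that feed back into the model’s next prompt—can be sketched in a few lines. The agent names, memory format, and the `call_llm` stub below are illustrative assumptions, not the actual Smallville code.

```python
# Simplified sketch of a generative-agent memory loop: each interaction is
# stored as a natural-language memory, and recent memories are prepended to
# the prompt for the agent's next turn. The names and the call_llm stub are
# illustrative stand-ins, not the real Smallville implementation.

class Agent:
    def __init__(self, name: str, biography: str):
        self.name = name
        self.biography = biography
        self.memories: list[str] = []  # past interactions, in plain language

    def remember(self, event: str) -> None:
        self.memories.append(event)

    def build_prompt(self, heard: str, recent: int = 5) -> str:
        """Assemble a prompt from biography, recent memories, and the new utterance."""
        memory_block = "\n".join(self.memories[-recent:])
        return (
            f"You are {self.name}. {self.biography}\n"
            f"Recent memories:\n{memory_block}\n"
            f"Someone says to you: {heard}\nYour reply:"
        )

def call_llm(prompt: str) -> str:
    # Stand-in for a real language-model call.
    return "That sounds fun -- see you at the party!"

def converse(agent: Agent, speaker: str, utterance: str) -> str:
    reply = call_llm(agent.build_prompt(utterance))
    agent.remember(f"{speaker} said: {utterance}")
    agent.remember(f"I replied: {reply}")
    return reply

isabella = Agent("Isabella", "You are organizing a Valentine's Day party.")
converse(isabella, "Klaus", "Are you going to the party on Friday?")
```

Because the memories are plain language, they can be summarized, searched, or shared between agents—which is what lets a single prompted goal (“organize a party”) propagate through a whole town of characters.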

For Park, the implications for gaming are huge. “This is exactly the sort of tech that the gaming community for their NPCs have been waiting for,” he says. 

His research has inspired games like AI Town, an open-source interactive experience on GitHub that lets human players interact with AI NPCs in a simple top-down game. You can leave the NPCs to get along for a few days and check in on them, reading the transcripts of the interactions they had while you were away. Anyone is free to take AI Town’s code to build new NPC experiences through AI. 

For Daniel De Freitas, cofounder of the startup Character AI, which lets users generate and interact with their own LLM-powered characters, the generative-AI revolution will allow new types of games to emerge—ones in which the NPCs don’t even need human players. 

The player is “joining an adventure that is always happening, that the AIs are playing,” he imagines. “It’s the equivalent of joining a theme park full of actors, but unlike the actors, they truly ‘believe’ that they are in those roles.”

If you’re getting Westworld vibes right about now, you’re not alone. There are plenty of stories about people torturing or killing their simple Sims characters in the game for fun. Would mistreating NPCs that pass for real humans cross some sort of new ethical boundary? What if, Lantz asks, an AI NPC that appeared conscious begged for its life when you simulated torturing it?

It raises complex questions, he adds. “One is: What are the ethical dimensions of pretend violence? And the other is: At what point do AIs become moral agents to which harm can be done?”

There are other potential issues too. An immersive world that feels real, and never ends, could be dangerously addictive. Some users of AI chatbots have already reported losing hours and even days in conversation with their creations. Are there dangers that the same parasocial relationships could emerge with AI NPCs? 

“We may need to worry about people forming unhealthy relationships with game characters at some point,” says Togelius. Until now, players have been able to differentiate pretty easily between game play and real life. But AI NPCs might change that, he says: “If at some point what we now call ‘video games’ morph into some all-encompassing virtual reality, we will probably need to worry about the effect of NPCs being too good, in some sense.”

A portrait of the artist as a young bot

Not everyone is convinced that never-ending open-ended conversations between the player and NPCs are what we really want for the future of games. 

“I think we have to be cautious about connecting our imaginations with reality,” says Mike Cook, an AI researcher and game designer. “The idea of a game where you can go anywhere, talk to anyone, and do anything has always been a dream of a certain kind of player. But in practice, this freedom is often at odds with what we want from a story.”

In other words, having to generate a lot of the dialogue yourself might actually get kind of … well, boring. “If you can’t think of interesting or dramatic things to say, or are simply too tired or bored to do it, then you’re going to basically be reading your own very bad creative fiction,” says Cook. 

Orkin likewise doesn’t think conversations that could go anywhere are actually what most gamers want. “I want to play a game that a bunch of very talented, creative people have really thought through and created an engaging story and world,” he says.

This idea of authorship is an important part of game play, agrees Togelius. “You can generate as much as you want,” he says. “But that doesn’t guarantee that anything is interesting and worth keeping. In fact, the more content you generate, the more boring it might be.”

[Illustration: George Wylesol]

Sometimes, the possibility of everything is too much to cope with. No Man’s Sky, a hugely hyped space game launched in 2016 that used algorithms to generate endless planets to explore, was seen by many players as a bit of a letdown when it finally arrived. Players quickly discovered that being able to explore a universe that never ended, with worlds that were endlessly different, actually fell a little flat. (A series of updates over subsequent years has made No Man’s Sky a little more structured, and it’s now generally well thought of.)

One approach might be to keep AI gaming experiences tight and focused.

Hilary Mason, CEO at the gaming startup Hidden Door, likes to joke that her work is “artisanal AI.” She is from Brooklyn, after all, says her colleague Chris Foster, the firm’s game director, laughing.

Hidden Door, which has not yet released any products, is making role-playing text adventures based on classic stories that the user can steer. It’s like Dungeons & Dragons for the generative AI era. The system stitches together classic tropes for particular adventure worlds with an annotated database of thousands of words and phrases, then uses a variety of machine-learning tools, including LLMs, to make each story unique. Players walk through a semi-unstructured storytelling experience, free-typing into text boxes to control their character. 

The result feels a bit like hand-annotating an AI-generated novel with Post-it notes.

In a demo with Mason, I got to watch as her character infiltrated a hospital and attempted to hack into the server. Each suggestion prompted the system to spin up the next part of the story, with the large language model creating new descriptions and in-game objects on the fly.

Each experience lasts between 20 and 40 minutes, and for Foster, it creates an “expressive canvas” that people can play with. The fixed length and the added human touch—Mason’s artisanal approach—give players “something really new and magical,” he says.

There’s more to life than games

Park thinks generative AI that makes NPCs feel alive in games will have other, more fundamental implications further down the line.

“This can, I think, also change the meaning of what games are,” he says. 

For example, he’s excited about using generative-AI agents to simulate how real people act. He thinks AI agents could one day be used as proxies for real people to, for example, test out the likely reaction to a new economic policy. Counterfactual scenarios could be plugged in that would let policymakers run time backwards to try to see what would have happened if a different path had been taken. 

“You want to learn that if you implement this social policy or economic policy, what is going to be the impact that it’s going to have on the target population?” he suggests. “Will there be unexpected side effects that we’re not going to be able to foresee on day one?”

And while Inworld is focused on adding immersion to video games, it has also worked with LG in South Korea to make characters that kids can chat with to improve their English language skills. Others are using Inworld’s tech to create interactive experiences. One of these, called Moment in Manzanar, was created to help players empathize with the Japanese-Americans the US government detained in internment camps during World War II. It allows the user to speak to a fictional character called Ichiro who talks about what it was like to be held in the Manzanar camp in California. 

Inworld’s NPC ambitions might be exciting for gamers (my future excursions as a cowboy could be even more immersive!), but there are some who believe using AI to enhance existing games is thinking too small. Instead, we should be leaning into the weirdness of LLMs to create entirely new kinds of experiences that were never possible before, says Togelius. The shortcomings of LLMs “are not bugs—they’re features,” he says. 

Lantz agrees. “You have to start with the reality of what these things are and what they do—this kind of latent space of possibilities that you’re surfing and exploring,” he says. “These engines already have that kind of a psychedelic quality to them. There’s something trippy about them. Unlocking that is the thing that I’m interested in.”

Whatever is next, we probably haven’t even imagined it yet, Lantz thinks. 

“And maybe it’s not about a simulated world with pretend characters in it at all,” he says. “Maybe it’s something totally different. I don’t know. But I’m excited to find out.”

Is this the end of animal testing?

In a clean room in his lab, Sean Moore peers through a microscope at a bit of intestine, its dark squiggles and rounded structures standing out against a light gray background. This sample is not part of an actual intestine; rather, it’s human intestinal cells on a tiny plastic rectangle, one of 24 so-called “organs on chips” his lab bought three years ago.

Moore, a pediatric gastroenterologist at the University of Virginia School of Medicine, hopes the chips will offer answers to a particularly thorny research problem. He studies rotavirus, a common infection that causes severe diarrhea, vomiting, dehydration, and even death in young children. In the US and other rich nations, up to 98% of the children who are vaccinated against rotavirus develop lifelong immunity. But in low-income countries, only about a third of vaccinated children become immune. Moore wants to know why.

His lab uses mice for some protocols, but animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. Researchers have documented this translation gap since at least 1962. “All these pharmaceutical companies know the animal models stink,” says Don Ingber, founder of the Wyss Institute for Biologically Inspired Engineering at Harvard and a leading advocate for organs on chips. “The FDA knows they stink.” 

But until recently there was no other option. Research questions like Moore’s can’t ethically or practically be addressed with a randomized, double-blinded study in humans. Now these organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. These tissues expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.

More than 60 companies now produce organs on chips commercially, focusing on five major organs: liver, kidney, lung, intestines, and brain. They’re already being used to understand diseases, discover and test new drugs, and explore personalized approaches to treatment.

As they continue to be refined, they could solve one of the biggest problems in medicine today. “You need to do three things when you’re making a drug,” says Lorna Ewart, a pharmacologist and chief scientific officer of Emulate, a biotech company based in Boston. “You need to show it’s safe. You need to show it works. You need to be able to make it.” 

All new compounds have to pass through a preclinical phase, where they’re tested for safety and effectiveness before moving to clinical trials in humans. Until recently, those tests had to run in at least two animal species—usually rats and dogs—before the drugs were tried on people. 

But in December 2022, President Biden signed the FDA Modernization Act, which amended the original FDA Act of 1938. With a few small word changes, the act opened the door for non-animal-based testing in preclinical trials. Anything that makes it faster and easier for pharmaceutical companies to identify safe and effective drugs means better, potentially cheaper treatments for all of us. 

Moore, for one, is banking on it, hoping the chips help him and his colleagues shed light on the rotavirus vaccine responses that confound them. “If you could figure out the answer,” he says, “you could save a lot of kids’ lives.”


While many teams have worked on organ chips over the last 30 years, the OG in the field is generally acknowledged to be Michael Shuler, a professor emeritus of chemical engineering at Cornell. In the 1980s, Shuler was a math and engineering guy who imagined an “animal on a chip,” a cell culture base seeded with a variety of human cells that could be used for testing drugs. He wanted to position a handful of different organ cells on the same chip, linked to one another, which could mimic the chemical communication between organs and the way drugs move through the body. “This was science fiction,” says Gordana Vunjak-Novakovic, a professor of biomedical engineering at Columbia University whose lab works with cardiac tissue on chips. “There was no body on a chip. There is still no body on a chip. God knows if there will ever be a body on a chip.”

Shuler had hoped to develop a computer model of a multi-organ system, but there were too many unknowns. The living cell culture system he dreamed up was his bid to fill in the blanks. For a while he played with the concept, but the materials simply weren’t good enough to build what he imagined. 


He wasn’t the only one working on the problem. Linda Griffith, a founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant,” designed a crude early version of a liver chip in the late 1990s: a flat silicon chip, just a few hundred micrometers tall, with endothelial cells, oxygen and liquid flowing in and out via pumps, silicone tubing, and a polymer membrane with microscopic holes. She put liver cells from rats on the chip, and those cells organized themselves into three-dimensional tissue. It wasn’t a liver, but it modeled a few of the things a functioning human liver could do. It was a start.

Griffith, who rides a motorcycle for fun and speaks with a soft Southern accent, suffers from endometriosis, an inflammatory condition where cells from the lining of the uterus grow throughout the abdomen. She’s endured decades of nausea, pain, blood loss, and repeated surgeries. She never took medical leaves, instead loading up on Percocet, Advil, and margaritas, keeping a heating pad and couch in her office—a strategy of necessity, as she saw no other choice for a working scientist. Especially a woman. 

And as a scientist, Griffith understood that the chronic diseases affecting women tend to be under-researched, underfunded, and poorly treated. She realized that decades of work with animals hadn’t done a damn thing to make life better for women like her. “We’ve got all this data, but most of that data does not lead to treatments for human diseases,” she says. “You can force mice to menstruate, but it’s not really menstruation. You need the human being.” 

Or, at least, the human cells. Shuler and Griffith, and other scientists in Europe, worked on some of those early chips, but things really kicked off around 2009, when Don Ingber’s lab in Cambridge, Massachusetts, created the first fully functioning organ on a chip. That “lung on a chip” was made from flexible silicone rubber, lined with human lung cells and capillary blood vessel cells that “breathed” like the alveoli—tiny air sacs—in a human lung. A few years later Ingber, an MD-PhD with the tidy good looks of a younger Michael Douglas, founded Emulate, one of the earliest biotech companies making microphysiological systems. Since then he’s become a kind of unofficial ambassador for in vitro technologies in general and organs on chips in particular, giving hundreds of talks, scoring millions in grant money, repping the field with scientists and laypeople. Stephen Colbert once ragged on him after the New York Times quoted him as describing a chip that “walks, talks, and quacks like a human vagina,” a quote Ingber says was taken out of context.

Ingber began his career working on cancer. But he struggled with the required animal research. “I really didn’t want to work with them anymore, because I love animals,” he says. “It was a conscious decision to focus on in vitro models.” He’s not alone; a growing number of young scientists are speaking up about the distress they feel when research protocols cause pain, trauma, injury, and death to lab animals. “I’m a master’s degree student in neuroscience and I think about this constantly. I’ve done such unspeakable, horrible things to mice all in the name of scientific progress, and I feel guilty about this every day,” wrote one anonymous student on Reddit. (Full disclosure: I switched out of a psychology major in college because I didn’t want to cause harm to animals.)

[Figure: Cross-section of a microfluidic chip, with the top channel, epithelial cells, vacuum channel, porous membrane, endothelial cells, and bottom channel indicated. Emulate is one of the companies building organ-on-a-chip technology; the devices combine live human cells with a microenvironment designed to emulate specific tissues. Credit: Emulate]

Taking an undergraduate art class led Ingber to an epiphany: mechanical forces are just as important as chemicals and genes in determining the way living creatures work. On a shelf in his office he still displays a model he built in that art class, a simple construction of sticks and fishing line, which helped him realize that cells pull and twist against each other. That realization foreshadowed his current work and helped him design dynamic microfluidic devices that incorporated shear and flow. 

Ingber coauthored a 2022 paper that’s sometimes cited as a watershed in the world of organs on chips. Researchers used Emulate’s liver chips to reevaluate 27 drugs that had previously made it through animal testing and had then gone on to kill 242 people and necessitate more than 60 liver transplants. The liver chips correctly flagged problems with 22 of the 27 drugs, an 87% success rate compared with a 0% success rate for animal testing. It was the first time organs on chips had been directly pitted against animal models, and the results got a lot of attention from the pharmaceutical industry. Dan Tagle, director of the Office of Special Initiatives for the National Center for Advancing Translational Sciences (NCATS), estimates that drug failures cost around $2.6 billion globally each year. The earlier in the process failing compounds can be weeded out, the more room there is for other drugs to succeed.

“The capacity we have to test drugs is more or less fixed in this country,” says Shuler, whose company, Hesperos, also manufactures organs on chips. “There are only so many clinical trials you can do. So if you put a loser into the system, that means something that could have won didn’t get into the system. We want to change the success rate from clinical trials to a much higher number.”

In 2011, the National Institutes of Health established NCATS and started investing in organs on chips and other in vitro technologies. Other government funders, like the Defense Advanced Research Projects Agency and the Food and Drug Administration, have followed suit. For instance, NIH recently funded NASA scientists to send heart tissue on chips into space. Six months in low gravity ages the cardiovascular system 10 years, so this experiment lets researchers study some of the effects of aging without harming animals or humans. 

Scientists have made liver chips, brain chips, heart chips, kidney chips, intestine chips, and even a female reproductive system on a chip (with cells from ovaries, fallopian tubes, and uteruses that release hormones and mimic an actual 28-day menstrual cycle). Each of these chips exhibits some of the specific functions of the organs in question. Cardiac chips, for instance, contain heart cells that beat just like heart muscle, making it possible for researchers to model disorders like cardiomyopathy. 

Shuler thinks organs on chips will revolutionize the world of research for rare diseases. “It is a very good model when you don’t have enough patients for normal clinical trials and you don’t have a good animal model,” he says. “So it’s a way to get drugs to people that couldn’t be developed in our current pharmaceutical model.” Shuler’s own biotech company used organs on chips to test a potential drug for myasthenia gravis, a rare neurological disorder. In 2022, the FDA approved the drug for clinical trials based on that data—one of six Hesperos drugs that have so far made it to that stage. 


Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body. “We try and build a physical replica of the mathematical model of what really occurs in the body,” explains Shuler. That model guides the way the chip is designed, re-creating the amount of time a fluid or chemical stays in that particular organ—what’s known as the residence time. “As long as you have the same residence time, you should get the same response in terms of chemical conversion,” he says.

Tiny channels on each chip, between 10 and 100 microns in diameter, bring fluids and oxygen to the cells. “When you get down to less than one micron, you can’t use normal fluid dynamics,” says Shuler. And fluid dynamics matters, because if the fluid moves through the device too quickly, the cells might die; too slowly, and the cells won’t react normally. 
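To make Shuler’s point about geometry and flow concrete, here is a back-of-the-envelope sketch. The numbers below—a hypothetical 100-micron square channel, a 1-microliter-per-minute flow of a water-like fluid—are illustrative assumptions, not any company’s actual design parameters. They show how residence time falls directly out of channel volume and flow rate, and why flow at these scales is smoothly laminar rather than turbulent.

```python
# Illustrative calculation (hypothetical channel dimensions and flow rate,
# not taken from any real chip design) of two quantities the article
# describes: residence time and the Reynolds number of the flow.

def residence_time_s(volume_m3: float, flow_m3_per_s: float) -> float:
    """Mean residence time: how long fluid spends inside the channel."""
    return volume_m3 / flow_m3_per_s

def reynolds_number(density: float, velocity: float,
                    hydraulic_diameter: float, viscosity: float) -> float:
    """Re = rho * v * D_h / mu; values far below ~2000 mean laminar flow."""
    return density * velocity * hydraulic_diameter / viscosity

# Hypothetical channel: 100 um x 100 um cross-section, 1 cm long
width = depth = 100e-6                   # meters
length = 0.01                            # meters
area = width * depth                     # 1e-8 m^2
volume = area * length                   # 1e-10 m^3, i.e. 0.1 microliters

flow = 1e-9 / 60                         # 1 uL/min expressed in m^3/s
velocity = flow / area                   # mean flow speed through the channel
d_h = 4 * area / (2 * (width + depth))   # hydraulic diameter of a square duct

tau = residence_time_s(volume, flow)
re = reynolds_number(1000.0, velocity, d_h, 1e-3)  # water-like fluid

print(f"residence time: {tau:.1f} s")    # ~6 s for these numbers
print(f"Reynolds number: {re:.3f}")      # far below the turbulence threshold
```

Halving the flow rate doubles the residence time, which is why chip designers tune channel dimensions and pump speed together: the goal is to match the residence time the PBPK model predicts for the real organ while keeping the shear on the cells gentle.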

Chip technology, while sophisticated, has some downsides. One of them is user friendliness. “We need to get rid of all this tubing and pumps and make something that’s as simple as a well plate for culturing cells,” says Vunjak-Novakovic. Her lab and others are working on simplifying the design and function of such chips so they’re easier to operate and are compatible with robots, which do repetitive tasks like pipetting in many labs. 

Cost and sourcing can also be challenging. Emulate’s base model, which looks like a simple rectangular box from the outside, starts at around $100,000 and rises steeply from there. Most human cells come from commercial suppliers that arrange for donations from hospital patients. During the pandemic, when people had fewer elective surgeries, many of those sources dried up. As microphysiological systems become more mainstream, finding reliable sources of human cells will be critical.


Another challenge is that every company producing organs on chips uses its own proprietary methods and technologies. Ingber compares the landscape to the early days of personal computing, when every company developed its own hardware and software, and none of them meshed well. For instance, the microfluidic systems in Emulate’s intestine chips are fueled by micropumps, while those made by Mimetas, another biotech company, use an electronic rocker and gravity to circulate fluids and air. “This is not an academic lab type of challenge,” emphasizes Ingber. “It’s a commercial challenge. There’s no way you can get the same results anywhere in the world with individual academics making [organs on chips], so you have to have commercialization.”

Namandje Bumpus, the FDA’s chief scientist, agrees. “You can find differences [in outcomes] depending even on what types of reagents you’re using,” she says. Those differences mean research can’t be easily reproduced, which diminishes its validity and usefulness. “It would be great to have some standardization,” she adds.

On the plus side, the chip technology could help researchers address some of the most deeply entrenched health inequities in science. Clinical trials have historically recruited white men, underrepresenting people of color, women (especially pregnant and lactating women), the elderly, and other groups. And treatments derived from those trials all too often fail in members of those underrepresented groups, as in Moore’s rotavirus vaccine mystery. “With organs on a chip, you may be able to create systems by which you are very, very thoughtful—where you spread the net wider than has ever been done before,” says Moore.

[Figure: This microfluidic platform, designed by MIT engineers, connects engineered tissue from up to 10 organs. Credit: Felice Frankel]

Another advantage is that chips will eventually reduce the need for animals in the lab even as they lead to better human outcomes. “There are aspects of animal research that make all of us uncomfortable, even people that do it,” acknowledges Moore. “The same values that make us uncomfortable about animal research are also the same values that make us uncomfortable with seeing human beings suffer with diseases that we don’t have cures for yet. So we always sort of balance that desire to reduce suffering in all the forms that we see it.”

Lorna Ewart, who spent 20 years at the pharma giant AstraZeneca before joining Emulate, thinks we’re entering a kind of transition time in research, in which scientists use in vitro technologies like organs on chips alongside traditional cell culture methods and animals. “As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal,” she says. 

In the meantime, Sean Moore is excited about incorporating intestine chips more and more deeply into his research. His lab has been funded by the Gates Foundation to do what he laughingly describes as a bake-off between intestine chips made by Emulate and Mimetas. They’re infecting the chips with different strains of rotavirus to try to identify the pros and cons of each company’s design. It’s too early for any substantive results, but Moore says he does have data showing that organ chips are a viable model for studying rotavirus infection. That could ultimately be a real game-changer in his lab and in labs around the world.

“There’s more players in the space right now,” says Moore. “And that competition is going to be a healthy thing.” 

Harriet Brown writes about health, medicine, and science. Her most recent book is Shadow Daughter: A Memoir of Estrangement. She’s a professor of magazine, news, and digital journalism at Syracuse University’s Newhouse School. 

Should social media come with a health warning?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”

His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.

But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.

Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.

More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.

Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.

Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear-cut.

“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.

The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.

Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.

Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.

Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.

“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”

In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”

But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.

Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.

I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.

Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.  

China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administration issued detailed guidelines that include, for example, a rule limiting children under the age of eight to 40 minutes a day on smart devices. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.

Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.

One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”

From around the web

Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)

“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)

Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)

Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)