What is AI?

Internet nastiness, name-calling, and other not-so-petty, world-altering disagreements

AI is sexy, AI is cool. AI is entrenching inequality, upending the job market, and wrecking education. AI is a theme-park ride, AI is a magic trick. AI is our final invention, AI is a moral obligation. AI is the buzzword of the decade, AI is marketing jargon from 1955. AI is humanlike, AI is alien. AI is super-smart and as dumb as dirt. The AI boom will boost the economy, the AI bubble is about to burst. AI will increase abundance and empower humanity to maximally flourish in the universe. AI will kill us all.

What the hell is everybody talking about?

Artificial intelligence is the hottest technology of our time. But what is it? It sounds like a stupid question, but it’s one that’s never been more urgent. Here’s the short answer: AI is a catchall term for a set of technologies that make computers do things that are thought to require intelligence when done by people. Think of recognizing faces, understanding speech, driving cars, writing sentences, answering questions, creating pictures. But even that definition contains multitudes.

And that right there is the problem. What does it mean for machines to understand speech or write a sentence? What kinds of tasks could we ask such machines to do? And how much should we trust the machines to do them?

As this technology moves from prototype to product faster and faster, these have become questions for all of us. But (spoilers!) I don’t have the answers. I can’t even tell you what AI is. The people making it don’t know what AI is either. Not really. “These are the kinds of questions that are important enough that everyone feels like they can have an opinion,” says Chris Olah, a cofounder of the San Francisco–based AI lab Anthropic. “I also think you can argue about this as much as you want and there’s no evidence that’s going to contradict you right now.”

But if you’re willing to buckle up and come for a ride, I can tell you why nobody really knows, why everybody seems to disagree, and why you’re right to care about it.

Let’s start with an offhand joke.

Back in 2022, partway through the first episode of Mystery AI Hype Theater 3000, a party-pooping podcast in which the irascible cohosts Alex Hanna and Emily Bender have a lot of fun sticking “the sharpest needles” into some of Silicon Valley’s most inflated sacred cows, they make a ridiculous suggestion. They’re hate-reading aloud from a 12,500-word Medium post by a Google VP of engineering, Blaise Agüera y Arcas, titled “Can machines learn how to behave?” Agüera y Arcas makes a case that AI can understand concepts in a way that’s somehow analogous to the way humans understand concepts—concepts such as moral values. In short, perhaps machines can be taught to behave.

Hanna and Bender are having none of it. They decide to replace the term “AI” with “mathy math”—you know, just lots and lots of math.

The irreverent phrase is meant to collapse what they see as bombast and anthropomorphism in the sentences being quoted. Pretty soon Hanna, a sociologist and director of research at the Distributed AI Research Institute, and Bender, a computational linguist at the University of Washington (and internet-famous critic of tech industry hype), open a gulf between what Agüera y Arcas wants to say and how they choose to hear it.

“How should AIs, their creators, and their users be held morally accountable?” asks Agüera y Arcas.

How should mathy math be held morally accountable? asks Bender.

“There’s a category error here,” she says. Hanna and Bender don’t just reject what Agüera y Arcas says; they claim it makes no sense. “Can we please stop it with the ‘an AI’ or ‘the AIs’ as if they are, like, individuals in the world?” Bender says.

It might sound as if they’re talking about different things, but they’re not. Both sides are talking about large language models, the technology behind the current AI boom. It’s just that the way we talk about AI is more polarized than ever. In May, OpenAI CEO Sam Altman teased the latest update to GPT-4, his company’s flagship model, by tweeting, “Feels like magic to me.”

There’s a lot of road between math and magic.

AI has acolytes, with a faith-like belief in the technology’s current power and inevitable future improvement. Artificial general intelligence is in sight, they say; superintelligence is coming behind it. And it has heretics, who pooh-pooh such claims as mystical mumbo-jumbo.

The buzzy popular narrative is shaped by a pantheon of big-name players, from Big Tech marketers in chief like Sundar Pichai and Satya Nadella to edgelords of industry like Elon Musk and Altman to celebrity computer scientists like Geoffrey Hinton. Sometimes these boosters and doomers are one and the same, telling us that the technology is so good it’s bad.

As AI hype has ballooned, a vocal anti-hype lobby has risen in opposition, ready to smack down its ambitious, often wild claims. Pulling in this direction are a raft of researchers, including Hanna and Bender, and also outspoken industry critics like influential computer scientist and former Googler Timnit Gebru and NYU cognitive scientist Gary Marcus. All have a chorus of followers bickering in their replies.

In short, AI has come to mean all things to all people, splitting the field into fandoms. It can feel as if different camps are talking past one another, not always in good faith.

Maybe you find all this silly or tiresome. But given the power and complexity of these technologies—which are already used to determine how much we pay for insurance, how we look up information, how we do our jobs, etc. etc. etc.—it’s about time we at least agreed on what it is we’re even talking about.

Yet in all the conversations I’ve had with people at the cutting edge of this technology, no one has given a straight answer about exactly what it is they’re building. (A quick side note: This piece focuses on the AI debate in the US and Europe, largely because many of the best-funded, most cutting-edge AI labs are there. But of course there’s important research happening elsewhere, too, in countries with their own varying perspectives on AI, particularly China.) Partly, it’s the pace of development. But the science is also wide open. Today’s large language models can do amazing things. The field just can’t find common ground on what’s really going on under the hood.

These models are trained to complete sentences. They appear to be able to do a lot more—from solving high school math problems to writing computer code to passing law exams to composing poems. When a person does these things, we take it as a sign of intelligence. What about when a computer does it? Is the appearance of intelligence enough?

These questions go to the heart of what we mean by “artificial intelligence,” a term people have actually been arguing about for decades. But the discourse around AI has become more acrimonious with the rise of large language models that can mimic the way we talk and write with thrilling/chilling (delete as applicable) realism.

We have built machines with humanlike behavior but haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to over-egged evaluations of what AI can do, hardens gut reactions into dogmatic positions, and plays into the wider culture wars between techno-optimists and techno-skeptics.

Add to this stew of uncertainty a truckload of cultural baggage, from the science fiction that I’d bet many in the industry were raised on, to far more malign ideologies that influence the way we think about the future. Given this heady mix, arguments about AI are no longer simply academic (and perhaps never were). AI inflames people’s passions and makes grownups call each other names.

“It’s not in an intellectually healthy place right now,” Marcus says of the debate. For years Marcus has pointed out the flaws and limitations of deep learning, the tech that launched AI into the mainstream, powering everything from LLMs to image recognition to self-driving cars. His 2001 book The Algebraic Mind argued that neural networks, the foundation on which deep learning is built, are incapable of reasoning by themselves. (We’ll skip over it for now, but I’ll come back to it later and we’ll see just how much a word like “reasoning” matters in a sentence like this.)

Marcus says that he has tried to engage Hinton—who last year went public with existential fears about the technology he helped invent—in a proper debate about how good large language models really are. “He just won’t do it,” says Marcus. “He calls me a twit.” (Having talked to Hinton about Marcus in the past, I can confirm that. “ChatGPT clearly understands neural networks better than he does,” Hinton told me last year.) Marcus also drew ire when he wrote an essay titled “Deep learning is hitting a wall.” Altman responded to it with a tweet: “Give me the confidence of a mediocre deep learning skeptic.”

At the same time, banging his drum has made Marcus a one-man brand and earned him an invitation to sit next to Altman and give testimony last year before the US Senate’s AI oversight committee.

And that’s why all these fights matter more than your average internet nastiness. Sure, there are big egos and vast sums of money at stake. But more than that, these disputes matter when industry leaders and opinionated scientists are summoned by heads of state and lawmakers to explain what this technology is and what it can do (and how scared we should be). They matter when this technology is being built into software we use every day, from search engines to word-processing apps to assistants on your phone. AI is not going away. But if we don’t know what we’re being sold, who’s the dupe?

“It is hard to think of another technology in history about which such a debate could be had—a debate about whether it is everywhere, or nowhere at all,” Stephen Cave and Kanta Dihal write in Imagining AI, a 2023 collection of essays about how different cultural beliefs shape people’s views of artificial intelligence. “That it can be held about AI is a testament to its mythic quality.”

Above all else, AI is an idea—an ideal—shaped by worldviews and sci-fi tropes as much as by math and computer science. Figuring out what we are talking about when we talk about AI will clarify many things. We won’t agree on them, but common ground on what AI is would be a great place to start talking about what AI should be.

What is everyone really fighting about, anyway?

In late 2022, soon after OpenAI released ChatGPT, a new meme started circulating online that captured the weirdness of this technology better than anything else. In most versions, a Lovecraftian monster called the Shoggoth, all tentacles and eyeballs, holds up a bland smiley-face emoji as if to disguise its true nature. ChatGPT presents as humanlike and accessible in its conversational wordplay, but behind that façade lie unfathomable complexities—and horrors. (“It was a terrible, indescribable thing vaster than any subway train—a shapeless congeries of protoplasmic bubbles,” H.P. Lovecraft wrote of the Shoggoth in his 1936 novella At the Mountains of Madness.)  

For years one of the best-known touchstones for AI in pop culture was The Terminator, says Dihal. But by putting ChatGPT online for free, OpenAI gave millions of people firsthand experience of something different. “AI has always been a sort of really vague concept that can expand endlessly to encompass all kinds of ideas,” she says. But ChatGPT made those ideas tangible: “Suddenly, everybody has a concrete thing to refer to.” What is AI? For millions of people the answer was now: ChatGPT.

The AI industry is selling that smiley face hard. Consider how The Daily Show recently skewered the hype, as expressed by industry leaders. Silicon Valley’s VC in chief, Marc Andreessen: “This has the potential to make life much better … I think it’s honestly a layup.” Altman: “I hate to sound like a utopic tech bro here, but the increase in quality of life that AI can deliver is extraordinary.” Pichai: “AI is the most profound technology that humanity is working on. More profound than fire.”

Jon Stewart: “Yeah, suck a dick, fire!”

But as the meme points out, ChatGPT is a friendly mask. Behind it is a monster called GPT-4, a large language model built from a vast neural network that has ingested more words than most of us could read in a thousand lifetimes. During training, which can last months and cost tens of millions of dollars, such models are given the task of filling in blanks in sentences taken from millions of books and a significant fraction of the internet. They do this task over and over again. In a sense, they are trained to be supercharged autocomplete machines. The result is a model that has turned much of the world’s written information into a statistical representation of which words are most likely to follow other words, captured across billions and billions of numerical values.
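To make the “supercharged autocomplete” idea concrete, here is a toy sketch of the same statistical principle, counting which words follow which and then sampling, in a few lines of Python. This is an illustration only: real LLMs learn billions of parameters in a neural network rather than a simple frequency table, but the core task of predicting the next word is the same.

```python
import random
from collections import Counter, defaultdict

# Toy autocomplete: count which words follow which in a tiny "training set,"
# then generate text by repeatedly sampling a likely next word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # how often nxt appears right after prev

def complete(word, length=8):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])  # frequency-weighted pick
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the rug . the dog"
```

Scale the frequency table up to billions of learned parameters and much of the written internet, and you have the statistical heart of a large language model.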

It’s math—a hell of a lot of math. Nobody disputes that. But is it just that, or does this complex math encode algorithms capable of something akin to human reasoning or the formation of concepts?

Many of the people who answer yes to that question believe we’re close to unlocking something called artificial general intelligence, or AGI, a hypothetical future technology that can do a wide range of tasks as well as humans can. A few of them have even set their sights on what they call superintelligence, sci-fi technology that can do things far better than humans. This cohort believes AGI will drastically change the world—but to what end? That’s yet another point of tension. It could fix all the world’s problems—or bring about its doom. 

Today AGI appears in the mission statements of the world’s top AI labs. But the term was invented in 2007 as a niche attempt to inject some pizzazz into a field that was then best known for applications that read handwriting on bank deposit slips or recommended your next book to buy. The idea was to reclaim the original vision of an artificial intelligence that could do humanlike things (more on that soon).

It was really an aspiration more than anything else, Google DeepMind cofounder Shane Legg, who coined the term, told me last year: “I didn’t have an especially clear definition.”

AGI became the most controversial idea in AI. Some talked it up as the next big thing: AGI was AI but, you know, much better. Others claimed the term was so vague that it was meaningless.

“AGI used to be a dirty word,” Ilya Sutskever told me, before he resigned as chief scientist at OpenAI.

But large language models, and ChatGPT in particular, changed everything. AGI went from dirty word to marketing dream.

Which brings us to what I think is one of the most illustrative disputes of the moment—one that sets up the sides of the argument and the stakes in play. 

Seeing magic in the machine

A few months before the public launch of OpenAI’s large language model GPT-4 in March 2023, the company shared a prerelease version with Microsoft, which wanted to use the new model to revamp its search engine Bing.

At the time, Sébastien Bubeck was studying the limitations of LLMs and was somewhat skeptical of their abilities. In particular, Bubeck—the vice president of generative AI research at Microsoft Research in Redmond, Washington—had been trying and failing to get the technology to solve middle school math problems. Things like: x − y = 0; what are x and y? “My belief was that reasoning was a bottleneck, an obstacle,” he says. “I thought that you would have to do something really fundamentally different to get over that obstacle.”

Then he got his hands on GPT-4. The first thing he did was try those math problems. “The model nailed it,” he says. “Sitting here in 2024, of course GPT-4 can solve linear equations. But back then, this was crazy. GPT-3 cannot do that.”

But Bubeck’s real road-to-Damascus moment came when he pushed it to do something new.

The thing about middle school math problems is that they are all over the internet, and GPT-4 may simply have memorized them. “How do you study a model that may have seen everything that human beings have written?” asks Bubeck. His answer was to test GPT-4 on a range of problems that he and his colleagues believed to be novel.

Playing around with Ronen Eldan, a mathematician at Microsoft Research, Bubeck asked GPT-4 to give, in verse, a mathematical proof that there are an infinite number of primes.

Here’s a snippet of GPT-4’s response: “If we take the smallest number in S that is not in P / And call it p, we can add it to our set, don’t you see? / But this process can be repeated indefinitely. / Thus, our set P must also be infinite, you’ll agree.”

Cute, right? But Bubeck and Eldan thought it was much more than that. “We were in this office,” says Bubeck, waving at the room behind him via Zoom. “Both of us fell from our chairs. We couldn’t believe what we were seeing. It was just so creative and so, like, you know, different.” 
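For readers who want the mathematics behind the rhymes: the verse compresses a proof by contradiction that has been standard since Euclid. Here is a plain LaTeX statement of the classical argument (my paraphrase for reference, not taken from the paper):

```latex
% The classical infinitude-of-primes argument that GPT-4's verse riffs on.
\documentclass{article}
\begin{document}
\textbf{Claim.} There are infinitely many primes.

\textbf{Proof.} Suppose the primes formed a finite set $P = \{p_1, \dots, p_n\}$.
Let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any $p_i$ leaves remainder $1$,
so no $p_i$ divides $N$. But $N > 1$, so some prime divides $N$, and that prime
cannot be in $P$. This contradicts the assumption that $P$ contained every
prime, so the primes are infinite. \hfill$\square$
\end{document}
```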

The Microsoft team also got GPT-4 to generate the code to add a horn to a cartoon picture of a unicorn drawn in LaTeX, a document-preparation system. Bubeck thinks this shows that the model could read the existing LaTeX code, understand what it depicted, and identify where the horn should go.

“There are many examples, but a few of them are smoking guns of reasoning,” he says—reasoning being a crucial building block of human intelligence.

Three sets of shapes vaguely in the form of unicorns, generated by GPT-4 (Bubeck et al.)

Bubeck, Eldan, and a team of other Microsoft researchers described their findings in a paper that they called “Sparks of artificial general intelligence”: “We believe that GPT-4’s intelligence signals a true paradigm shift in the field of computer science and beyond.” When Bubeck shared the paper online, he tweeted: “time to face it, the sparks of #AGI have been ignited.”

The Sparks paper quickly became infamous—and a touchstone for AI boosters. Agüera y Arcas and Peter Norvig, a former director of research at Google and coauthor of Artificial Intelligence: A Modern Approach, perhaps the most popular AI textbook in the world, cowrote an article called “Artificial General Intelligence Is Already Here.” Published in Noema, a magazine backed by an LA think tank called the Berggruen Institute, their argument uses the Sparks paper as a jumping-off point: “Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the current generation of advanced AI large language models,” they wrote. “Decades from now, they will be recognized as the first true examples of AGI.”

Since then, the hype has continued to balloon. Leopold Aschenbrenner, who at the time was a researcher at OpenAI focusing on superintelligence, told me last year: “AI progress in the last few years has been just extraordinarily rapid. We’ve been crushing all the benchmarks, and that progress is continuing unabated. But it won’t stop there. We’re going to have superhuman models, models that are much smarter than us.” (He was fired from OpenAI in April because, he claims, he raised security concerns about the tech he was building and “ruffled some feathers.” He has since set up a Silicon Valley investment fund.)

In June, Aschenbrenner put out a 165-page manifesto arguing that AI will outpace college graduates by “2025/2026” and that “we will have superintelligence, in the true sense of the word” by the end of the decade. But others in the industry scoff at such claims. When Aschenbrenner tweeted a chart to show how fast he thought AI would continue to improve given how fast it had improved in the last few years, the tech investor Christian Keil replied that by the same logic, his baby son, who had doubled in size since he was born, would weigh 7.5 trillion tons by the time he was 10.

It’s no surprise that “sparks of AGI” has also become a byword for over-the-top buzz. “I think they got carried away,” says Marcus, speaking about the Microsoft team. “They got excited, like ‘Hey, we found something! This is amazing!’ They didn’t vet it with the scientific community.” Bender refers to the Sparks paper as a “fan fiction novella.”

Not only was it provocative to claim that GPT-4’s behavior showed signs of AGI, but Microsoft, which uses GPT-4 in its own products, has a clear interest in promoting the capabilities of the technology. “This document is marketing fluff masquerading as research,” one tech COO posted on LinkedIn.

Some also felt the paper’s methodology was flawed. Its evidence is hard to verify because it comes from interactions with a version of GPT-4 that was not made available outside OpenAI and Microsoft. The public version has guardrails that restrict the model’s capabilities, admits Bubeck. This made it impossible for other researchers to re-create his experiments.

One group tried to re-create the unicorn example with a coding language called Processing, which GPT-4 can also use to generate images. They found that the public version of GPT-4 could produce a passable unicorn but not flip or rotate that image by 90 degrees. It may seem like a small difference, but such things really matter when you’re claiming that the ability to draw a unicorn is a sign of AGI.

The key thing about the examples in the Sparks paper, including the unicorn, is that Bubeck and his colleagues believe they are genuine examples of creative reasoning. This means the team had to be certain that examples of these tasks, or ones very like them, were not included anywhere in the vast data sets that OpenAI amassed to train its model. Otherwise, the results could be interpreted instead as instances where GPT-4 reproduced patterns it had already seen.

Bubeck insists that they set the model only tasks that would not be found on the internet. Drawing a cartoon unicorn in LaTeX was surely one such task. But the internet is a big place. Other researchers soon pointed out that there are indeed online forums dedicated to drawing animals in LaTeX. “Just fyi we knew about this,” Bubeck replied on X. “Every single query of the Sparks paper was thoroughly looked for on the internet.”

(This didn’t stop the name-calling: “I’m asking you to stop being a charlatan,” Ben Recht, a computer scientist at the University of California, Berkeley, tweeted back before accusing Bubeck of “being caught flat-out lying.”)

Bubeck insists the work was done in good faith, but he and his coauthors admit in the paper itself that their approach was not rigorous—notebook observations rather than foolproof experiments. 

Still, he has no regrets: “The paper has been out for more than a year and I have yet to see anyone give me a convincing argument that the unicorn, for example, is not a real example of reasoning.”

That’s not to say he can give me a straight answer to the big question—though his response reveals what kind of answer he’d like to give. “What is AI?” Bubeck repeats back to me. “I want to be clear with you. The question can be simple, but the answer can be complex.”

“There are many simple questions out there to which we still don’t know the answer. And some of those simple questions are the most profound ones,” he says. “I’m putting this on the same footing as, you know, What is the origin of life? What is the origin of the universe? Where did we come from? Big, big questions like this.”

Seeing only math in the machine

Before Bender became one of the chief antagonists of AI’s boosters, she made her mark on the AI world as a coauthor on two influential papers. (Both peer-reviewed, she likes to point out—unlike the Sparks paper and many of the others that get much of the attention.) The first, written with Alexander Koller, a fellow computational linguist at Saarland University in Germany, and published in 2020, was called “Climbing towards NLU” (NLU is natural-language understanding).

“The start of all this for me was arguing with other people in computational linguistics whether or not language models understand anything,” she says. (Understanding, like reasoning, is typically taken to be a basic ingredient of human intelligence.)

Bender and Koller argue that a model trained exclusively on text will only ever learn the form of a language, not its meaning. Meaning, they argue, consists of two parts: the words (which could be marks or sounds) plus the reason those words were uttered. People use language for many reasons, such as sharing information, telling jokes, flirting, warning somebody to back off, and so on. Stripped of that context, the text used to train LLMs like GPT-4 lets them mimic the patterns of language well enough for many sentences generated by the LLM to look exactly like sentences written by a human. But there’s no meaning behind them, no spark. It’s a remarkable statistical trick, but completely mindless.

They illustrate their point with a thought experiment. Imagine two English-speaking people stranded on neighboring deserted islands. There is an underwater cable that lets them send text messages to each other. Now imagine that an octopus, which knows nothing about English but is a whiz at statistical pattern matching, wraps its suckers around the cable and starts listening in to the messages. The octopus gets really good at guessing what words follow other words. So good that when it breaks the cable and starts replying to messages from one of the islanders, she believes that she is still chatting with her neighbor. (In case you missed it, the octopus in this story is a chatbot.)

The person talking to the octopus would stay fooled for a reasonable amount of time, but could that last? Does the octopus understand what comes down the wire? 

Imagine that the islander now says she has built a coconut catapult and asks the octopus to build one too and tell her what it thinks. The octopus cannot do this. Without knowing what the words in the messages refer to in the world, it cannot follow the islander’s instructions. Perhaps it guesses a reply: “Okay, cool idea!” The islander will probably take this to mean that the person she is speaking to understands her message. But if so, she is seeing meaning where there is none. Finally, imagine that the islander gets attacked by a bear and sends calls for help down the line. What is the octopus to do with these words?

Bender and Koller believe that this is how large language models learn and why they are limited. “The thought experiment shows why this path is not going to lead us to a machine that understands anything,” says Bender. “The deal with the octopus is that we have given it its training data, the conversations between those two people, and that’s it. But then here’s something that comes out of the blue and it won’t be able to deal with it because it hasn’t understood.”

The other paper Bender is known for, “On the Dangers of Stochastic Parrots,” highlights a series of harms that she and her coauthors believe the companies making large language models are ignoring. These include the huge computational costs of making the models and their environmental impact; the racist, sexist, and other abusive language the models entrench; and the dangers of building a system that could fool people by “haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”

Google senior management wasn’t happy with the paper, and the resulting conflict led two of Bender’s coauthors, Timnit Gebru and Margaret Mitchell, to be forced out of the company, where they had led the AI Ethics team. It also made “stochastic parrot” a popular put-down for large language models—and landed Bender right in the middle of the name-calling merry-go-round.

The bottom line for Bender and for many like-minded researchers is that the field has been taken in by smoke and mirrors: “I think that they are led to imagine autonomous thinking entities that can make decisions for themselves and ultimately be the kind of thing that could actually be accountable for those decisions.”

Always the linguist, Bender is now at the point where she won’t even use the term AI “without scare quotes,” she tells me. Ultimately, for her, it’s a Big Tech buzzword that distracts from the many associated harms. “I’ve got skin in the game now,” she says. “I care about these issues, and the hype is getting in the way.”

Extraordinary evidence?

Agüera y Arcas calls people like Bender “AI denialists”—the implication being that they won’t ever accept what he takes for granted. Bender’s position is that extraordinary claims require extraordinary evidence, which we do not have.

But there are people looking for it, and until they find something clear-cut—sparks or stochastic parrots or something in between—they’d prefer to sit out the fight. Call this the wait-and-see camp.

As Ellie Pavlick, who studies neural networks at Brown University, tells me: “It’s offensive to some people to suggest that human intelligence could be re-created through these kinds of mechanisms.”

She adds, “People have strong-held beliefs about this issue—it almost feels religious. On the other hand, there’s people who have a little bit of a God complex. So it’s also offensive to them to suggest that they just can’t do it.”

Pavlick is ultimately agnostic. She’s a scientist, she insists, and will follow wherever the science leads. She rolls her eyes at the wilder claims, but she believes there’s something exciting going on. “That’s where I would disagree with Bender and Koller,” she tells me. “I think there’s actually some sparks—maybe not of AGI, but like, there’s some things in there that we didn’t expect to find.”

The problem is finding agreement on what those exciting things are and why they’re exciting. With so much hype, it’s easy to be cynical.

Researchers like Bubeck seem a lot more cool-headed when you hear them out. He thinks the infighting misses the nuance in his work. “I don’t see any problem in holding simultaneous views,” he says. “There is stochastic parroting; there is reasoning—it’s a spectrum. It’s very complex. We don’t have all the answers.”

“We need a completely new vocabulary to describe what’s going on,” he says. “One reason why people push back when I talk about reasoning in large language models is because it’s not the same reasoning as in human beings. But I think there is no way we can not call it reasoning. It is reasoning.”

Anthropic’s Olah plays it safe when pushed on what we’re seeing in LLMs, though his company, one of the hottest AI labs in the world right now, built Claude 3, an LLM that has received just as much hyperbolic praise as GPT-4 (if not more) since its release earlier this year.

“I feel like a lot of these conversations about the capabilities of these models are very tribal,” he says. “People have preexisting opinions, and it’s not very informed by evidence on any side. Then it just becomes kind of vibes-based, and I think vibes-based arguments on the internet tend to go in a bad direction.”

Olah tells me he has hunches of his own. “My subjective impression is that these things are tracking pretty sophisticated ideas,” he says. “We don’t have a comprehensive story of how very large models work, but I think it’s hard to reconcile what we’re seeing with the extreme ‘stochastic parrots’ picture.”

That’s as far as he’ll go: “I don’t want to go too much beyond what can be really strongly inferred from the evidence that we have.”

Last month, Anthropic released results from a study in which researchers gave Claude 3 the neural network equivalent of an MRI. By monitoring which bits of the model turned on and off as they ran it, they identified specific patterns of neurons that activated when the model was shown specific inputs. One pattern, for example, lit up whenever Claude encountered mentions or images of the Golden Gate Bridge; when the researchers amplified that feature, Claude fixated on the bridge and even identified itself as the bridge.
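Anthropic’s actual technique involved training a separate dictionary-learning model over Claude’s internal activations, but the underlying move, recording which internal units fire for a given input, can be sketched in a few lines. Here is a minimal, hypothetical illustration using a toy PyTorch network (not Anthropic’s code or Claude itself):

```python
import torch
import torch.nn as nn

# Toy illustration of activation monitoring: run an input through a small
# network and record which hidden units "turn on." Anthropic's study did
# something far more sophisticated over Claude 3's internals.
torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),  # hidden layer we want to watch
    nn.Linear(16, 4),
)

captured = {}

def record(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # stash this layer's activations
    return hook

model[1].register_forward_hook(record("hidden"))  # attach to the ReLU layer

x = torch.randn(1, 8)  # stand-in for "a specific input"
model(x)

active_units = (captured["hidden"][0] > 0).nonzero().flatten().tolist()
print(f"hidden units that activated: {active_units}")
```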

Anthropic also reported patterns that it says correlate with inputs that attempt to describe or show abstract concepts. “We see features related to deception and honesty, to sycophancy, to security vulnerabilities, to bias,” says Olah. “We find features related to power seeking and manipulation and betrayal.”

These results give one of the clearest looks yet at what’s inside a large language model. It’s a tantalizing glimpse at what look like elusive humanlike traits. But what does it really tell us? As Olah admits, they do not know what the model does with these patterns. “It’s a relatively limited picture, and the analysis is pretty hard,” he says.

Even if Olah won’t spell out exactly what he thinks goes on inside a large language model like Claude 3, it’s clear why the question matters to him. Anthropic is known for its work on AI safety—making sure that powerful future models will behave in ways we want them to and not in ways we don’t (known as “alignment” in industry jargon). Figuring out how today’s models work is not only a necessary first step if you want to control future ones; it also tells you how much you need to worry about doomer scenarios in the first place. “If you don’t think that models are going to be very capable,” says Olah, “then they’re probably not going to be very dangerous.”

Why we all can’t get along

In a 2014 interview with the BBC that looked back on her career, the influential cognitive scientist Margaret Boden, now 87, was asked if she thought there were any limits that would prevent computers (or “tin cans,” as she called them) from doing what humans can do.

“I certainly don’t think there’s anything in principle,” she said. “Because to deny that is to say that [human thinking] happens by magic, and I don’t believe that it happens by magic.”

But, she cautioned, powerful computers won’t be enough to get us there: the AI field will also need “powerful ideas”—new theories of how thinking happens, new algorithms that might reproduce it. “But these things are very, very difficult and I see no reason to assume that we will one of these days be able to answer all of those questions. Maybe we will; maybe we won’t.” 

Boden was reflecting on the early days of the current boom, but this will-we-or-won’t-we teetering speaks to decades in which she and her peers grappled with the same hard questions that researchers struggle with today. AI began as an ambitious aspiration 70-odd years ago and we are still disagreeing about what is and isn’t achievable, and how we’ll even know if we have achieved it. Most—if not all—of these disputes come down to this: We don’t have a good grasp on what intelligence is or how to recognize it. The field is full of hunches, but no one can say for sure.

We’ve been stuck on this point ever since people started taking the idea of AI seriously. Or even before that, when the stories we consumed started planting the idea of humanlike machines deep in our collective imagination. The long history of these disputes means that today’s fights often reinforce rifts that have been around since the beginning, making it even more difficult for people to find common ground.

To understand how we got here, we need to understand where we’ve been. So let’s dive into AI’s origin story—one that also played up the hype in a bid for cash.

A brief history of AI spin

The computer scientist John McCarthy is credited with coming up with the term “artificial intelligence” in 1955 when writing a funding application for a summer research program at Dartmouth College in New Hampshire.

The plan was for McCarthy and a small group of fellow researchers, a who’s-who of postwar US mathematicians and computer scientists—or “John McCarthy and the boys,” as Harry Law, a researcher who studies the history of AI at the University of Cambridge and ethics and policy at Google DeepMind, puts it—to get together for two months (not a typo) and make some serious headway on this new research challenge they’d set themselves.

From left to right: Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Peter Milner, John McCarthy, and Claude Shannon on the lawn at the 1956 Dartmouth conference (courtesy of the Minsky family).

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it,” McCarthy and his coauthors wrote. “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

That list of things they wanted to make machines do—what Bender calls “the starry-eyed dream”—hasn’t changed much. Using language, forming concepts, and solving problems are defining goals for AI today. The hubris hasn’t changed much either: “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer,” they wrote. That summer, of course, has stretched to seven decades. And the extent to which these problems are in fact now solved is something that people still shout about on the internet. 

But what’s often left out of this canonical history is that artificial intelligence almost wasn’t called “artificial intelligence” at all.

More than one of McCarthy’s colleagues hated the term he had come up with. “The word ‘artificial’ makes you think there’s something kind of phony about this,” Arthur Samuel, a Dartmouth participant and creator of the first checkers-playing computer, is quoted as saying in historian Pamela McCorduck’s 2004 book Machines Who Think. The mathematician Claude Shannon, a coauthor of the Dartmouth proposal who is sometimes billed as “the father of the information age,” preferred the term “automata studies.” Herbert Simon and Allen Newell, two other AI pioneers, continued to call their own work “complex information processing” for years afterwards.

In fact, “artificial intelligence” was just one of several labels that might have captured the hodgepodge of ideas that the Dartmouth group was drawing on. The historian Jonnie Penn has identified possible alternatives that were in play at the time, including “engineering psychology,” “applied epistemology,” “neural cybernetics,” “non-numerical computing,” “neuraldynamics,” “advanced automatic programming,” and “hypothetical automata.” This list of names reveals how diverse the inspiration for their new field was, pulling from biology, neuroscience, statistics, and more. Marvin Minsky, another Dartmouth participant, has described AI as a “suitcase word” because it can hold so many divergent interpretations.

But McCarthy wanted a name that captured the ambitious scope of his vision. Calling this new field “artificial intelligence” grabbed people’s attention—and money. Don’t forget: AI is sexy, AI is cool.

In addition to terminology, the Dartmouth proposal codified a split between rival approaches to artificial intelligence that has divided the field ever since—a divide Law calls the “core tension in AI.”

McCarthy and his colleagues wanted to describe in computer code “every aspect of learning or any other feature of intelligence” so that machines could mimic them. In other words, if they could just figure out how thinking worked—the rules of reasoning—and write down the recipe, they could program computers to follow it. This laid the foundation of what came to be known as rule-based or symbolic AI (sometimes referred to now as GOFAI, “good old-fashioned AI”). But coming up with hard-coded rules that captured the processes of problem-solving for actual, nontrivial problems proved too hard.

The other path favored neural networks, computer programs that would try to learn those rules by themselves in the form of statistical patterns. The Dartmouth proposal mentions it almost as an aside (referring variously to “neuron nets” and “nerve nets”). Though the idea seemed less promising at first, some researchers nevertheless continued to work on versions of neural networks alongside symbolic AI. But it would take decades—plus vast amounts of computing power and much of the data on the internet—before they really took off. Fast-forward to today and this approach underpins the entire AI boom.
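The difference between the two camps is easiest to see in miniature. Below is a toy contrast (my illustration, not an actual GOFAI system or neural network): a classifier whose rule a human wrote down by hand, next to one that derives its rule from labeled examples.

```python
from collections import Counter

# Symbolic route: a human writes the rule explicitly.
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def rule_based(sentence: str) -> str:
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

# Learned route: derive word weights from examples instead of writing the rule.
examples = [
    ("i love this", "positive"), ("this is awful", "negative"),
    ("great stuff", "positive"), ("really bad", "negative"),
]

weights = Counter()
for text, label in examples:
    for w in text.split():
        weights[w] += 1 if label == "positive" else -1  # learn a crude weight

def learned(sentence: str) -> str:
    score = sum(weights[w] for w in sentence.lower().split())
    return "positive" if score >= 0 else "negative"

print(rule_based("i love this"), learned("this is awful"))  # positive negative
```

Swap the hand-counted word weights for billions of parameters adjusted automatically during training, and you have the spirit of the approach that now underpins the AI boom.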

The big takeaway here is that, just like today’s researchers, AI’s innovators fought about foundational concepts and got caught up in their own promotional spin. Even team GOFAI was plagued by squabbles. Aaron Sloman, a philosopher and fellow AI pioneer now in his late 80s, recalls how “old friends” Minsky and McCarthy “disagreed strongly” when he got to know them in the ’70s: “Minsky thought McCarthy’s claims about logic could not work, and McCarthy thought Minsky’s mechanisms could not do what could be done using logic. I got on well with both of them, but I was saying, ‘Neither of you have got it right.’” (Sloman still thinks no one can account for the way human reasoning uses intuition as much as logic, but that’s yet another tangent!)

As the fortunes of the technology waxed and waned, the term “AI” went in and out of fashion. In the early ’70s, both research tracks were effectively put on ice after the UK government published a report arguing that the AI dream had gone nowhere and wasn’t worth funding. All that hype, effectively, had led to nothing. Research projects were shuttered, and computer scientists scrubbed the words “artificial intelligence” from their grant proposals.

When I was finishing a computer science PhD in 2008, only one person in the department was working on neural networks. Bender has a similar recollection: “When I was in college, a running joke was that AI is anything that we haven’t figured out how to do with computers yet. Like, as soon as you figure out how to do it, it wasn’t magic anymore, so it wasn’t AI.”

But that magic—the grand vision laid out in the Dartmouth proposal—remained alive and, as we can now see, laid the foundations for the AGI dream.

Good and bad behavior

In 1950, five years before McCarthy started talking about artificial intelligence, Alan Turing had published a paper that asked: Can machines think? To address that question, the famous mathematician proposed a hypothetical test, which he called the imitation game. The setup imagines a human and a computer behind a screen and a second human who types questions to each. If the questioner cannot tell which answers come from the human and which come from the computer, Turing claimed, the computer may as well be said to think.

What Turing saw—unlike McCarthy’s crew—was that thinking is a really difficult thing to describe. The Turing test was a way to sidestep that problem. “He basically said: Instead of focusing on the nature of intelligence itself, I’m going to look for its manifestation in the world. I’m going to look for its shadow,” says Law.

In 1952, BBC Radio convened a panel to explore Turing’s ideas further. Turing was joined in the studio by two of his Manchester University colleagues—professor of mathematics Maxwell Newman and professor of neurosurgery Geoffrey Jefferson—and Richard Braithwaite, a philosopher of science, ethics, and religion at the University of Cambridge.

Braithwaite kicked things off: “Thinking is ordinarily regarded as so much the specialty of man, and perhaps of other higher animals, the question may seem too absurd to be discussed. But of course, it all depends on what is to be included in ‘thinking.’”

The panelists circled Turing’s question but never quite pinned it down.

When they tried to define what thinking involved, what its mechanisms were, the goalposts moved. “As soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking but a sort of unimaginative donkey work,” said Turing.

Here was the problem: When one panelist proposed some behavior that might be taken as evidence of thought—reacting to a new idea with outrage, say—another would point out that a computer could be made to do it.

As Newman said, it would be easy enough to program a computer to print “I don’t like this new program.” But he admitted that this would be a trick.

Exactly, Jefferson said: He wanted a computer that would print “I don’t like this new program” because it didn’t like the new program. In other words, for Jefferson, behavior was not enough. It was the process leading to the behavior that mattered.

But Turing disagreed. As he had noted, uncovering a specific process—the donkey work, to use his phrase—did not pinpoint what thinking was either. So what was left?

“From this point of view, one might be tempted to define thinking as consisting of those mental processes that we don’t understand,” said Turing. “If this is right, then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done.”

It is strange to hear people grapple with these ideas for the first time. “The debate is prescient,” says Tomer Ullman, a cognitive scientist at Harvard University. “Some of the points are still alive—perhaps even more so. What they seem to be going round and round on is that the Turing test is first and foremost a behaviorist test.”

For Turing, intelligence was hard to define but easy to recognize. He proposed that the appearance of intelligence was enough—and said nothing about how that behavior should come about.

And yet most people, when pushed, will have a gut instinct about what is and isn’t intelligent. There are dumb ways and clever ways to come across as intelligent. In 1981, Ned Block, a philosopher at New York University, showed that Turing’s proposal fell short of those gut instincts. Because it said nothing of what caused the behavior, the Turing test can be beaten through trickery (as Newman had noted in the BBC broadcast).

“Could the issue of whether a machine in fact thinks or is intelligent depend on how gullible human interrogators tend to be?” asked Block. (Or as computer scientist Mark Riedl has remarked: “The Turing test is not for AI to pass but for humans to fail.”)

Imagine, Block said, a vast look-up table in which human programmers had entered all possible answers to all possible questions. Type a question into this machine, and it would look up a matching answer in its database and send it back. Block argued that anyone using this machine would judge its behavior to be intelligent: “But actually, the machine has the intelligence of a toaster,” he wrote. “All the intelligence it exhibits is that of its programmers.”
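Block’s machine is easy to parody in code. A minimal sketch of a “Blockhead” (my toy version of the thought experiment, not Block’s own formulation) is just a dictionary lookup:

```python
# Every reply was pre-written by humans; the machine computes nothing beyond
# a lookup. Any wit it displays belongs to its programmers.
answers = {
    "what is the capital of france?": "Paris, of course.",
    "do you like poetry?": "I adore it. Shall I compare thee to a summer's day?",
    "are you intelligent?": "Judge me by my answers, not my innards.",
}

def blockhead(question: str) -> str:
    return answers.get(question.strip().lower(), "Hmm, tell me more.")

print(blockhead("Are you intelligent?"))
```

Scale the table up to all possible questions and, to an interrogator, the behavior could pass for intelligence, even though the machine itself remains as smart as a toaster.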

Block concluded that whether behavior is intelligent behavior is a matter of how it is produced, not how it appears. Block’s toasters, which became known as Blockheads, are one of the strongest counterexamples to the assumptions behind Turing’s proposal.

Looking under the hood

The Turing test is not meant to be a practical metric, but its implications are deeply ingrained in the way we think about artificial intelligence today. This has become particularly relevant as LLMs have exploded in the past several years. These models get ranked by their outward behaviors, specifically how well they do on a range of tests. When OpenAI announced GPT-4, it published an impressive-looking scorecard that detailed the model’s performance on multiple high school and professional exams. Almost nobody talks about how these models get those results.

That’s because we don’t know. Today’s large language models are too complex for anybody to say exactly how their behavior is produced. Researchers outside the small handful of companies making those models don’t know what’s in their training data; none of the model makers have shared details. That makes it hard to say what is and isn’t a kind of memorization—a stochastic parroting. But even researchers on the inside, like Olah, don’t know what’s really going on when faced with a bridge-obsessed bot.

This leaves the question wide open: Yes, large language models are built on math—but are they doing something intelligent with it?

And the arguments begin again.

“Most people are trying to armchair through it,” says Brown University’s Pavlick, meaning that they are arguing about theories without looking at what’s really happening. “Some people are like, ‘I think it’s this way,’ and some people are like, ‘Well, I don’t.’ We’re kind of stuck and everyone’s unsatisfied.”

Bender thinks that this sense of mystery plays into the mythmaking. (“Magicians do not explain their tricks,” she says.) Without a proper appreciation of where the LLM’s words come from, we fall back on familiar assumptions about humans, since that is our only real point of reference. When we talk to another person, we try to make sense of what that person is trying to tell us. “That process necessarily entails imagining a life behind the words,” says Bender. That’s how language works.

“The parlor trick of ChatGPT is so impressive that when we see these words coming out of it, we do the same thing instinctively,” she says. “It’s very good at mimicking the form of language. The problem is that we are not at all good at encountering the form of language and not imagining the rest of it.”

For some researchers, it doesn’t really matter if we can’t understand the how. Bubeck used to study large language models to try to figure out how they worked, but GPT-4 changed the way he thought about them. “It seems like these questions are not so relevant anymore,” he says. “The model is so big, so complex, that we can’t hope to open it up and understand what’s really happening.”

But Pavlick, like Olah, is trying to do just that. Her team has found that models seem to encode abstract relationships between objects, such as that between a country and its capital. Studying one large language model, Pavlick and her colleagues found that it used the same encoding to map France to Paris and Poland to Warsaw. That almost sounds smart, I tell her. “No, it’s literally a lookup table,” she says.
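What might “the same encoding” look like? One common picture (a hypothetical sketch of the general idea, not Pavlick’s actual probe) is a single shared vector offset that carries any country’s embedding to its capital’s:

```python
import numpy as np

# Toy embeddings: one shared "capital-of" direction maps every country
# vector to its capital's vector. Real models learn this; here we construct it.
rng = np.random.default_rng(0)

countries = {"France": rng.normal(size=4), "Poland": rng.normal(size=4)}
offset = rng.normal(size=4)  # the single shared mapping
capitals = {
    "Paris": countries["France"] + offset,
    "Warsaw": countries["Poland"] + offset,
}

def capital_of(country: str) -> str:
    query = countries[country] + offset  # apply the shared offset
    # nearest-neighbor lookup among capital embeddings
    return min(capitals, key=lambda c: np.linalg.norm(capitals[c] - query))

print(capital_of("France"), capital_of("Poland"))  # Paris Warsaw
```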

But what struck Pavlick was that, unlike a Blockhead, the model had learned this lookup table on its own. In other words, the LLM figured out itself that Paris is to France as Warsaw is to Poland. But what does this show? Is encoding its own lookup table instead of using a hard-coded one a sign of intelligence? Where do you draw the line?

“Basically, the problem is that behavior is the only thing we know how to measure reliably,” says Pavlick. “Anything else requires a theoretical commitment, and people don’t like having to make a theoretical commitment because it’s so loaded.”

Not all people. A lot of influential scientists are just fine with theoretical commitment. Hinton, for example, insists that neural networks are all you need to re-create humanlike intelligence. “Deep learning is going to be able to do everything,” he told MIT Technology Review in 2020.

It’s a commitment that Hinton seems to have held onto from the start. Sloman, who recalls the two of them arguing when Hinton was a graduate student in his lab, remembers being unable to persuade him that neural networks cannot learn certain crucial abstract concepts that humans and some other animals seem to have an intuitive grasp of, such as whether something is impossible. We can just see when something’s ruled out, Sloman says. “Despite Hinton’s outstanding intelligence, he never seemed to understand that point. I don’t know why, but there are large numbers of researchers in neural networks who share that failing.”

And then there’s Marcus, whose view of neural networks is the exact opposite of Hinton’s. His case draws on what he says scientists have discovered about brains.

Brains, Marcus points out, are not blank slates that learn fully from scratch—they come ready-made with innate structures and processes that guide learning. It’s how babies can learn things that the best neural networks still can’t, he argues.

“Neural network people have this hammer, and now everything is a nail,” says Marcus. “They want to do all of it with learning, which many cognitive scientists would find unrealistic and silly. You’re not going to learn everything from scratch.”

Not that Marcus—a cognitive scientist—is any less sure of himself. “If one really looked at who’s predicted the current situation well, I think I would have to be at the top of anybody’s list,” he tells me from the back of an Uber on his way to catch a flight to a speaking gig in Europe. “I know that doesn’t sound very modest, but I do have this perspective that turns out to be very important if what you’re trying to study is artificial intelligence.”

Given his well-publicized attacks on the field, it might surprise you that Marcus still believes AGI is on the horizon. It’s just that he thinks today’s fixation on neural networks is a mistake. “We probably need a breakthrough or two or four,” he says. “You and I might not live that long, I’m sorry to say. But I think it’ll happen this century. Maybe we’ve got a shot at it.”

The power of a technicolor dream

Over Dor Skuler’s shoulder on the Zoom call from his home in Ramat Gan, Israel, a little lamp-like robot is winking on and off while we talk about it. “You can see ElliQ behind me here,” he says. Skuler’s company, Intuition Robotics, develops these devices for older people, and the design—part Amazon Alexa, part R2-D2—must make it very clear that ElliQ is a computer. If any of his customers show signs of being confused about that, Intuition Robotics takes the device back, says Skuler.

ElliQ has no face, no humanlike shape at all. Ask it about sports, and it will crack a joke about having no hand-eye coordination because it has no hands and no eyes. “For the life of me, I don’t understand why the industry is trying to fulfill the Turing test,” Skuler says. “Why is it in the best interest of humanity for us to develop technology whose goal is to dupe us?”

Instead, Skuler’s firm is betting that people can form relationships with machines that present as machines. “Just like we have the ability to build a real relationship with a dog,” he says. “Dogs provide a lot of joy for people. They provide companionship. People love their dog—but they never confuse it to be a human.”

ElliQ’s users, many in their 80s and 90s, refer to the robot as an entity or a presence—sometimes a roommate. “They’re able to create a space for this in-between relationship, something between a device or a computer and something that’s alive,” says Skuler.

But no matter how hard ElliQ’s designers try to control the way people view the device, they are competing with decades of pop culture that have shaped our expectations. Why are we so fixated on AI that’s humanlike? “Because it’s hard for us to imagine something else,” says Skuler (who indeed refers to ElliQ as “she” throughout our conversation). “And because so many people in the tech industry are fans of science fiction. They try to make their dream come true.”

How many of today’s developers grew up thinking that building a smart machine was seriously the coolest thing—if not the most important thing—they could possibly do?

It was not long ago that OpenAI launched its new voice-controlled version of ChatGPT with a voice that sounded like Scarlett Johansson, after which many people—including Altman—flagged the connection to Spike Jonze’s 2013 movie Her.

Science fiction co-invents what AI is understood to be. As Cave and Dihal write in Imagining AI: “AI was a cultural phenomenon long before it was a technological one.”

Stories and myths about remaking humans as machines have been around for centuries. People have been dreaming of artificial humans for probably as long as they have dreamed of flight, says Dihal. She notes that Daedalus, the figure in Greek mythology famous for building a pair of wings for himself and his son, Icarus, also built what was effectively a giant bronze robot called Talos that threw rocks at passing pirates.

The word robot comes from robota, a Czech term for forced labor, from which the playwright Karel Čapek coined the name in his 1920 play Rossum’s Universal Robots. The “laws of robotics” outlined in Isaac Asimov’s science fiction, forbidding machines from harming humans, are inverted by movies like The Terminator, which is an iconic reference point for popular fears about real-world technology. The 2014 film Ex Machina is a dramatic riff on the Turing test. Last year’s blockbuster The Creator imagines a future world in which AI has been outlawed because it set off a nuclear bomb, an event that some doomers consider at least an outside possibility.

Cave and Dihal relate how another movie, 2014’s Transcendence, in which an AI expert played by Johnny Depp gets his mind uploaded to a computer, served a narrative pushed by ur-doomers Stephen Hawking, fellow physicist Max Tegmark, and AI researcher Stuart Russell. In an article published in the Huffington Post on the movie’s opening weekend, the trio wrote: “As the Hollywood blockbuster Transcendence debuts this weekend with … clashing visions for the future of humanity, it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake ever.”

Right around the same time, Tegmark founded the Future of Life Institute, with a remit to study and promote AI safety. Depp’s costar in the movie, Morgan Freeman, was on the institute’s board, and Elon Musk, who had a cameo in the film, donated $10 million in its first year. For Cave and Dihal, Transcendence is a perfect example of the multiple entanglements between popular culture, academic research, industrial production, and “the billionaire-funded fight to shape the future.”

On the London leg of his world tour last year, Altman was asked what he’d meant when he tweeted: “AI is the tech the world has always wanted.” Standing at the back of the room that day, behind an audience of hundreds, I listened to him offer his own kind of origin story: “I was, like, a very nervous kid. I read a lot of sci-fi. I spent a lot of Friday nights home, playing on the computer. But I was always really interested in AI and I thought it’d be very cool.” He went to college, got rich, and watched as neural networks became better and better. “This can be tremendously good but also could be really bad. What are we going to do about that?” he recalled thinking in 2015. “I ended up starting OpenAI.”

Why you should care that a bunch of nerds are fighting about AI

Okay, you get it: No one can agree on what AI is. But what everyone does seem to agree on is that the current debate around AI has moved far beyond the academic and the scientific. There are political and moral components in play—which doesn’t help when everyone already thinks everyone else is wrong.

Untangling this is hard. It can be difficult to see what’s going on when some of those moral views take in the entire future of humanity and anchor it in a technology that nobody can quite define.

But we can’t just throw our hands up and walk away. Because no matter what this technology is, it’s coming, and unless you live under a rock, you’ll use it in one form or another. And the form that technology takes—and the problems it both solves and creates—will be shaped by the thinking and the motivations of people like the ones you just read about. In particular, by the people with the most power, the most cash, and the biggest megaphones.

Which leads me to the TESCREALists. Wait, come back! I realize it’s unfair to introduce yet another new concept so late in the game. But to understand how the people in power may mold the technologies they build, and how they explain them to the world’s regulators and lawmakers, you need to really understand their mindset.

Timnit Gebru
WIKIMEDIA

Gebru, who founded the Distributed AI Research Institute after leaving Google, and Émile Torres, a philosopher and historian at Case Western Reserve University, have traced the influence of several techno-utopian belief systems on Silicon Valley. The pair argue that to understand what’s going on with AI right now—both why companies such as Google DeepMind and OpenAI are in a race to build AGI and why doomers like Tegmark and Hinton warn of a coming catastrophe—the field must be seen through the lens of what Torres has dubbed the TESCREAL framework.

The clunky acronym (pronounced tes-cree-all) replaces an even clunkier list of labels: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. A lot has been written (and will be written) about each of these worldviews, so I’ll spare you here. (There are rabbit holes within rabbit holes for anyone wanting to dive deeper. Pick your forum and pack your spelunking gear.)

Émile Torres
COURTESY PHOTO

This constellation of overlapping ideologies is attractive to a certain kind of galaxy-brain mindset common in the Western tech world. Some anticipate human immortality; others predict humanity’s colonization of the stars. The common tenet is that an all-powerful technology—AGI or superintelligence, choose your team—is not only within reach but inevitable. You can see this in the do-or-die attitude that’s ubiquitous inside cutting-edge labs like OpenAI: If we don’t make AGI, someone else will.

What’s more, TESCREALists believe that AGI could not only fix the world’s problems but level up humanity. “The development and proliferation of AI—far from a risk that we should fear—is a moral obligation that we have to ourselves, to our children and to our future,” Andreessen wrote in a much-dissected manifesto last year. I have been told many times over that AGI is the way to make the world a better place—by Demis Hassabis, CEO and cofounder of Google DeepMind; by Mustafa Suleyman, CEO of the newly minted Microsoft AI and another cofounder of DeepMind; by Sutskever, Altman, and more.

But as Andreessen noted, it’s a yin-yang mindset. The flip side of techno-utopia is techno-hell. If you believe that you are building a technology so powerful that it will solve all the world’s problems, you probably also believe there’s a non-zero chance it will all go very wrong. When asked at the World Government Summit in February what keeps him up at night, Altman replied: “It’s all the sci-fi stuff.”

It’s a tension that Hinton has been talking up for the last year. It’s what companies like Anthropic claim to address. It’s what Sutskever is focusing on in his new lab, and what he wanted a special in-house team at OpenAI to focus on last year before disagreements over the way the company balanced risk and reward led most members of that team to leave.

Sure, doomerism is part of the spin. (“Claiming that you have created something that is super-intelligent is good for sales figures,” says Dihal. “It’s like, ‘Please, someone stop me from being so good and so powerful.’”) But boom or doom, exactly what (and whose) problems are these guys supposedly solving? Are we really expected to trust what they build and what they tell our leaders?

Spinning blue and pink version of a yin-yang symbol, with the circles replaced by a magic star and a mechanical cog

Gebru and Torres (and others) are adamant: No, we should not. They are highly critical of these ideologies and how they may influence the development of future technology, especially AI. Fundamentally, they link several of these worldviews—with their common focus on “improving” humanity—to the racist eugenics movements of the 20th century.

One danger, they argue, is that a shift of resources toward the kind of technological innovations that these ideologies demand, from building AGI to extending life spans to colonizing other planets, will ultimately benefit people who are Western and white at the cost of billions of people who aren’t. If your sights are set on fantastical futures, it’s easy to overlook the present-day costs of innovation, such as labor exploitation, the entrenchment of racist and sexist bias, and environmental damage.

“Are we trying to build a tool that’s useful to us in some way?” asks Bender, reflecting on the casualties of this race to AGI. If so, who’s it for, how do we test it, how well does it work? “But if what we’re building it for is just so that we can say that we’ve done it, that’s not a goal that I can get behind. That’s not a goal that’s worth billions of dollars.”

Bender says that seeing the connections between the TESCREAL ideologies is what made her realize there was something more to these debates. “Tangling with those people was—” she stops. “Okay, there’s more here than just academic ideas. There’s a moral code tied up in it as well.”

Of course, laid out like this without nuance, it doesn’t sound as if we—as a society, as individuals—are getting the best deal. It also all sounds rather silly. When Gebru described parts of the TESCREAL bundle in a talk last year, her audience laughed. It’s also true that few people would identify themselves as card-carrying students of these schools of thought, at least in their extremes.

But if we don’t understand how those building this tech approach it, how can we decide what deals we want to make? What apps we decide to use, what chatbots we want to give personal information to, what data centers we support in our neighborhoods, what politicians we want to vote for?

It used to be like this: There was a problem in the world, and we built something to fix it. Here, everything is backward: The goal seems to be to build a machine that can do everything, and to skip the slow, hard work that goes into figuring out what the problem is before building the solution.

And as Gebru said in that same talk, “A machine that solves all problems: if that’s not magic, what is it?”

Semantics, semantics … semantics?

When asked outright what AI is, a lot of people dodge the question. Not Suleyman. In April, the CEO of Microsoft AI stood on the TED stage and told the audience what he’d told his six-year-old nephew in response to that question. The best answer he could give, Suleyman explained, was that AI was “a new kind of digital species”—a technology so universal, so powerful, that calling it a tool no longer captured what it could do for us.

“On our current trajectory, we are heading toward the emergence of something we are all struggling to describe, and yet we cannot control what we don’t understand,” he said. “And so the metaphors, the mental models, the names—these all matter if we are to get the most out of AI whilst limiting its potential downsides.”

Language matters! I hope that’s clear from the twists and turns and tantrums we’ve been through to get to this point. But I also hope you’re asking: Whose language? And whose downsides? Suleyman is an industry leader at a technology giant that stands to make billions from its AI products. Describing the technology behind those products as a new kind of species conjures something wholly unprecedented, something with agency and capabilities that we have never seen before. That makes my spidey sense tingle. You?

I can’t tell you if there’s magic here (ironically or not). And I can’t tell you how math can realize what Bubeck and many others see in this technology (no one can yet). You’ll have to make up your own mind. But I can pull back the curtain on my own point of view.

Writing about GPT-3 back in 2020, I said that the greatest trick AI ever pulled was convincing the world it exists. I still think that: We are hardwired to see intelligence in things that behave in certain ways, whether it’s there or not. In the last few years, the tech industry has found reasons of its own to convince us that AI exists, too. This makes me skeptical of many of the claims made for this technology.

With large language models—via their smiley-face masks—we are confronted by something we’ve never had to think about before. “It’s taking this hypothetical thing and making it really concrete,” says Pavlick. “I’ve never had to think about whether a piece of language required intelligence to generate because I’ve just never dealt with language that didn’t.”

AI is many things. But I don’t think it’s humanlike. I don’t think it’s the solution to all (or even most) of our problems. It isn’t ChatGPT or Gemini or Copilot. It isn’t neural networks. It’s an idea, a vision, a kind of wish fulfillment. And ideas get shaped by other ideas, by morals, by quasi-religious convictions, by worldviews, by politics, and by gut instinct. “Artificial intelligence” is a helpful shorthand to describe a raft of different technologies. But AI is not one thing; it never has been, no matter how often the branding gets seared into the outside of the box. 

“The truth is these words”—intelligence, reasoning, understanding, and more—“were defined before there was a need to be really precise about it,” says Pavlick. “I don’t really like when the question becomes ‘Does the model understand—yes or no?’ because, well, I don’t know. Words get redefined and concepts evolve all the time.”

I think that’s right. And the sooner we can all take a step back, agree on what we don’t know, and accept that none of this is yet a done deal, the sooner we can—I don’t know, I guess not all hold hands and sing kumbaya. But we can stop calling each other names.

The Chinese government is going all-in on autonomous vehicles

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

There’s been so much news coming out of China’s autonomous-vehicle industry lately that it’s hard to keep track.

The government is finally allowing Tesla to bring its Full Self-Driving (FSD) feature to China. New government permits let companies test driverless cars on the road and allow cities to build smart road infrastructure that will tell these cars where to go. In short, there are a lot of changes taking place. And they all point in the same direction: There’s an immense appetite to make autonomous cars a reality soon. And the Chinese government, on both the central and local levels, has been a major force pushing for it.

So what’s happened lately?

First of all, Tesla got approval for its FSD feature (rather misleadingly named, since it still has lots of restrictions) after it entered into a deal with the Chinese AI company Baidu to map the country.

As I reported last summer, in the absence of Tesla FSD, Chinese EV makers were already starting to offer their own driver-assistance programs to help the cars navigate in cities, but they still often look to Tesla for assurance on what technology or strategy to use. The official entry of FSD, which is reportedly set to be rolled out in China sometime later this year, will surely bring another round of competition to the country’s auto market.

Then, the government also handed out the permits that allow companies to test and experiment with driverless cars. On June 4, the Chinese Ministry of Industry and Information Technology issued nine permits for testing a more advanced version of autonomous driving technologies on the road. 

The companies that got the permits include prominent EV makers like BYD and NIO, but they also had to collaborate with companies that sell services like ride-hailing, freight trucks, or public buses to test the technology. The idea seems to be to test autonomous vehicles in realistic use cases to see how the technology performs.

And most recently, on July 3, the central government announced a list of 20 cities that will pilot the building of smart, connected roads. The idea is that if a road has various kinds of sensors, cameras, and data transmitters built into it, it can communicate with self-driving vehicles in real time and help them make better decisions. 

A few of the cities on the list have already commenced the work. Wuhan recently budgeted 17 billion RMB ($2.3 billion) for an infrastructure project that will involve building 15,000 smart parking spots, transforming three miles of roads, and building an industrial park for making autonomous-vehicle chips.

To be honest, I’m starting to go numb at news of yet another policy or permit. The list could go on even longer if I included actions taken at the municipal level to give companies more approvals to test on the roads or to expand their services.

To China, autonomous cars represent one potential way to combine its new advantages in car-making and AI and take the lead in a cutting-edge field. And while some foreign companies like Tesla are partaking in the campaign, there are lots of Chinese companies responding to the government’s call, and they’re eager to show off their technological prowess.

But I think the takeaway here is clear: The Chinese government is willing to pour its support into the autonomous-vehicle industry and is eager to come out on top while other countries take a more cautious approach. 

The industry has not agreed on the best way to approach fully self-driving cars, and that’s why there are so many different forms of the technology in experiments: autopilot functions, Tesla FSD, robotaxis, smart connected roads. But it’s telling that all of them are receiving so much regulatory and policy support. 

It’s possible that this could all change overnight in the event of an incident like Cruise’s accident in the US last year, but for now, it feels as if China is opening up its roads to make way for more driverless cars, and gearing up to lead the industry.

What are your takes on the endless stream of news from China about self-driving cars? Let me know by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. An underground network is paying travelers to smuggle Nvidia chips into China, where they are in high demand because of the US export ban. (Wall Street Journal $)

2. The Chinese EV leader BYD will spend $1 billion to build a factory in Turkey. (Financial Times $)

  • This will probably help BYD avoid some of the tariffs that Europe has imposed on made-in-China EVs. (MIT Technology Review)

3. OpenAI’s recent decision to cut off access to its services in China won’t affect its business clients much, because they can still access ChatGPT through Microsoft’s cloud service. (The Information $)

4. Can you imagine Microsoft telling its employees to use only iPhones for work? Yep, that just happened in China as part of the effort to improve cybersecurity defenses. (Bloomberg $)

5. A Chinese academic recently revealed that the country currently has over 8.1 million data-center racks and a combined processing power of 230 exaflops—as much as 200 of the most advanced supercomputers today. And China wants to increase the total by 30% next year. (The Register)

6. Chinese factory owners are turning into TikTok comedians to find new business partners overseas. (Rest of World)

7. China is spending twice as much as the US on researching fusion energy. (Wall Street Journal $)

Lost in translation

At the 2024 World Artificial Intelligence Conference (WAIC), held last week in Shanghai, humanoid robots were the star of the show, according to the Chinese publication Huxiu. The event saw a significant increase in exhibitors and panels, driven by a surge of new AI startups. But it was the concept of “embodied AI,” which integrates deep learning with robotics, that received the most attention from the audience.

At the conference, one company presented Galbot, a humanoid robot capable of performing complex tasks like opening drawers and hanging clothes, aiming for applications in elder care and household chores. Another introduced AnyFold, which can fold a blanket. Other robots can perform gymnastics, help users lift heavy objects, or just show very nuanced facial expressions.

One more thing

Who says multimodal AI has no real uses? People have figured out that ChatGPT’s image interpretation function is surprisingly good at one task: picking the ripest, tastiest watermelon out of a bunch. Now I’m hopeful about AI again.

AI lie detectors are better than humans at spotting lies

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Can you spot a liar? It’s a question I imagine has been on a lot of minds lately, in the wake of various televised political debates. Research has shown that we’re generally pretty bad at telling a truth from a lie.
 
Some believe that AI could help improve our odds, and do better than dodgy, old-fashioned techniques like polygraph tests. AI-based lie detection systems could one day be used to help us sift fact from fake news, evaluate claims, and potentially even spot fibs and exaggerations in job applications. The question is whether we will trust them. And if we should.

In a recent study, Alicia von Schenk and her colleagues developed a tool that was significantly better than people at spotting lies. Von Schenk, an economist at the University of Würzburg in Germany, and her team then ran some experiments to find out how people used it. In some ways, the tool was helpful—the people who made use of it were better at spotting lies. But it also led people to make a lot more accusations.

In their study published in the journal iScience, von Schenk and her colleagues asked volunteers to write statements about their weekend plans. Half the time, people were incentivized to lie; a believable yet untrue statement was rewarded with a small financial payout. In total, the team collected 1,536 statements from 768 people.
 
They then used 80% of these statements to train an algorithm on lies and truths, using Google’s AI language model BERT. When they tested the resulting tool on the final 20% of statements, they found it could successfully tell whether a statement was true or false 67% of the time. That’s significantly better than a typical human; we usually only get it right around half the time.
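
For readers who want a feel for what that involves, here is a minimal sketch of the recipe the paper describes—fine-tuning BERT as a binary truth/lie classifier on an 80/20 split—assuming the Hugging Face transformers and datasets libraries. The statements, labels, and training settings below are illustrative placeholders, not the study’s actual code or data.

```python
# A minimal sketch of the study's recipe: fine-tune BERT as a binary
# truth/lie classifier on an 80/20 split. Placeholder data and settings,
# not the authors' actual pipeline.

import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-ins for the collected statements (1 = lie, 0 = truth).
data = Dataset.from_dict({
    "text": ["I spent the weekend hiking.", "I stayed home all weekend."] * 100,
    "label": [1, 0] * 100,
}).train_test_split(test_size=0.2)  # the paper's 80/20 train/test split

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
data = data.map(lambda batch: tok(batch["text"], truncation=True,
                                  padding="max_length"), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def accuracy(eval_pred):
    # Fraction of held-out statements classified correctly.
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())  # held-out accuracy, analogous to the study's 67%
```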
 
To find out how people might make use of AI to help them spot lies, von Schenk and her colleagues split 2,040 other volunteers into smaller groups and ran a series of tests.
 
One test revealed that when people are given the option to pay a small fee to use an AI tool that can help them detect lies—and earn financial rewards—they still aren’t all that keen on using it. Only a third of the volunteers given that option decided to use the AI tool, possibly because they’re skeptical of the technology, says von Schenk. (They might also be overly optimistic about their own lie-detection skills, she adds.)
 
But that one-third of people really put their trust in the technology. “When you make the active choice to rely on the technology, we see that people almost always follow the prediction of the AI… they rely very much on its predictions,” says von Schenk.

This reliance can shape our behavior. Normally, people tend to assume others are telling the truth. That was borne out in this study—even though the volunteers knew half of the statements were lies, they only marked out 19% of them as such. But that changed when people chose to make use of the AI tool: the accusation rate rose to 58%.
 
In some ways, this is a good thing—these tools can help us spot more of the lies we come across in our lives, like the misinformation on social media.
 
But it’s not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgements is the deterioration of social bonds, is it worth it?
 
And then there’s the question of accuracy. In their study, von Schenk and her colleagues were only interested in creating a tool that was better than humans at lie detection. That isn’t too difficult, given how terrible we are at it. But she also imagines a tool like hers being used to routinely assess the truthfulness of social media posts, or hunt for fake details in a job hunter’s resume or interview responses. In cases like these, it’s not enough for a technology to just be “better than human” if it’s going to be making more accusations. 
 
Would we be willing to accept an accuracy rate of 80%, where only four out of every five assessed statements would be correctly interpreted as true or false? Would even 99% accuracy suffice? I’m not sure.
 
It’s worth remembering the fallibility of historical lie detection techniques. The polygraph was designed to measure heart rate and other signs of “arousal” because it was thought some signs of stress were unique to liars. They’re not. And we’ve known that for a long time. That’s why lie detector results are generally not admissible in US court cases. Despite that, polygraph lie detector tests have endured in some settings, and have caused plenty of harm when they’ve been used to hurl accusations at people who fail them on reality TV shows.
 
Imperfect AI tools stand to have an even greater impact because they are so easy to scale, says von Schenk. You can only polygraph so many people in a day. The scope for AI lie detection is almost limitless by comparison.
 
“Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies,” says von Schenk. “However, you really need to test them—you need to make sure they are substantially better than humans.” If an AI lie detector is generating a lot of accusations, we might be better off not using it at all, she says.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

AI lie detectors have also been developed to look for facial patterns of movement and “microgestures” associated with deception. As Jake Bittle puts it: “the dream of a perfect lie detector just won’t die, especially when glossed over with the sheen of AI.”
 
On the other hand, AI is also being used to generate plenty of disinformation. As of October last year, generative AI was already being used in at least 16 countries to “sow doubt, smear opponents, or influence public debate,” as Tate Ryan-Mosley reported.
 
The way AI language models are developed can heavily influence the way that they work. As a result, these models have picked up different political biases, as my colleague Melissa Heikkilä covered last year.
 
AI, like social media, has the potential for good or ill. In both cases, the regulatory limits we place on these technologies will determine which way the sword falls, argue Nathan E. Sanders and Bruce Schneier.
 
Chatbot answers are all made up. But there’s a tool that can give a reliability score to large language model outputs, helping users work out how trustworthy they are. Or, as Will Douglas Heaven put it in an article published a few months ago, a BS-o-meter for chatbots.

From around the web

Scientists, ethicists and legal experts in the UK have published a new set of guidelines for research on synthetic embryos, or, as they call them, “stem cell-based embryo models (SCBEMs).” There should be limits on how long they are grown in labs, and they should not be transferred into the uterus of a human or animal, the guidelines state. They also note that if, in the future, these structures look like they might have the potential to develop into a fetus, we should stop calling them “models” and instead refer to them as “embryos.”

Antimicrobial resistance is already responsible for 700,000 deaths every year, and could claim 10 million lives per year by 2050. Overuse of broad-spectrum antibiotics is partly to blame. Is it time to tax these drugs to limit demand? (International Journal of Industrial Organization)

Spaceflight can alter the human brain, reorganizing gray and white matter and causing the brain to shift upwards in the skull. We need to better understand these effects, and the impact of cosmic radiation on our brains, before we send people to Mars. (The Lancet Neurology)

The vagus nerve has become an unlikely star of social media, thanks to influencers who drum up the benefits of stimulating it. Unfortunately, the science doesn’t stack up. (New Scientist)

A hospital in Texas is set to become the first in the country to enable doctors to see their patients via hologram. Crescent Regional Hospital in Lancaster has installed Holobox—a system that projects a life-sized hologram of a doctor for patient consultations. (ABC News)

What are AI agents? 

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

When ChatGPT was first released, everyone in AI was talking about the new generation of AI assistants. But over the past year, that excitement has turned to a new target: AI agents. 

Agents featured prominently at Google’s annual I/O conference in May, when the company unveiled its new AI agent called Astra, which allows users to interact with it using audio and video. OpenAI’s new GPT-4o model has also been called an AI agent.  

And it’s not just hype, although there is definitely some of that too. Tech companies are plowing vast sums into creating AI agents, and their research efforts could usher in the kind of useful AI we have been dreaming about for decades. Many experts, including Sam Altman, say they are the next big thing.   

But what are they? And how can we use them? 

How are they defined? 

It is still early days for research into AI agents, and the field does not yet have a settled definition of them. But put simply, agents are AI models and algorithms that can autonomously make decisions in a dynamic world, says Jim Fan, a senior research scientist at Nvidia who leads the company’s AI agents initiative. 

The grand vision for AI agents is a system that can execute a vast range of tasks, much like a human assistant. In the future, it could help you book your vacation, but it will also remember if you prefer swanky hotels, so it will only suggest hotels that have four stars or more and then go ahead and book the one you pick from the range of options it offers you. It will then also suggest flights that work best with your calendar, and plan the itinerary for your trip according to your preferences. It could make a list of things to pack based on that plan and the weather forecast. It might even send your itinerary to any friends it knows live in your destination and invite them along. In the workplace, it could analyze your to-do list and execute tasks from it, such as sending calendar invites, memos, or emails. 

One vision for agents is that they are multimodal, meaning they can process language, audio, and video. For example, in Google’s Astra demo, users could point a smartphone camera at things and ask the agent questions. The agent could respond to text, audio, and video inputs. 

These agents could also make processes smoother for businesses and public organizations, says David Barber, the director of the University College London Centre for Artificial Intelligence. For example, an AI agent might be able to function as a more sophisticated customer service bot. The current generation of language-model-based assistants can only generate the next likely word in a sentence. But an AI agent would have the ability to act on natural-language commands autonomously and process customer service tasks without supervision. For example, the agent would be able to analyze customer complaint emails and then know to check the customer’s reference number, access databases such as customer relationship management and delivery systems to see whether the complaint is legitimate, and process it according to the company’s policies, Barber says. 
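
In pseudocode terms, the loop Barber describes—read the complaint, consult company systems, act on the result—might look something like the sketch below. Everything here is hypothetical: call_llm stands in for any language-model API, and lookup_order is a toy substitute for a real CRM or delivery database.

```python
# A hypothetical sketch of the customer-service agent loop described above.
# `call_llm` stands in for any language-model API; `lookup_order` is a toy
# stand-in for a company's CRM or delivery system.

import json

def call_llm(prompt: str) -> str:
    """Stub for a model call. A real agent would send the prompt to an LLM
    and get back either a tool call or a final reply, e.g. as JSON."""
    if "Tool result" in prompt:  # the order has already been looked up
        return json.dumps({"final": "Complaint verified. Refund issued."})
    return json.dumps({"tool": "lookup_order", "args": {"ref": "A1234"}})

def lookup_order(ref: str) -> dict:
    # Pretend database query.
    return {"ref": ref, "status": "lost in transit", "refund_eligible": True}

TOOLS = {"lookup_order": lookup_order}

def run_agent(complaint: str, max_steps: int = 5) -> str:
    """Ask the model what to do, run the chosen tool, feed the result back
    in, and repeat until the model produces a final answer."""
    context = complaint
    for _ in range(max_steps):
        action = json.loads(call_llm(context))
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])
        context += "\nTool result: " + json.dumps(result)
    return "Escalating to a human agent."  # safety valve if steps run out

print(run_agent("My order A1234 never arrived. I'd like a refund."))
```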

Broadly speaking, there are two different categories of agents, says Fan: software agents and embodied agents. 

Software agents run on computers or mobile phones and use apps, much as in the travel agent example above. “Those agents are very useful for office work or sending emails or having this chain of events going on,” he says. 

Embodied agents are agents that are situated in a 3D world such as a video game, or in a robot. These kinds of agents might make video games more engaging by letting people play with nonplayer characters controlled by AI. These sorts of agents could also help build more useful robots that could help us with everyday tasks at home, such as folding laundry and cooking meals. 

Fan was part of a team that built an embodied AI agent called MineDojo in the popular computer game Minecraft. Using a vast trove of data collected from the internet, Fan’s AI agent was able to learn new skills and tasks that allowed it to freely explore the virtual 3D world and complete complex tasks such as encircling llamas with fences or scooping lava into a bucket. Video games are good proxies for the real world, because they require agents to understand physics, reasoning, and common sense. 

In a new paper, which has not yet been peer-reviewed, researchers at Princeton say that AI agents tend to have three different characteristics. AI systems are considered “agentic” if they can pursue difficult goals without being instructed in complex environments. They also qualify if they can be instructed in natural language and act autonomously without supervision. And finally, the term “agent” can also apply to systems that are able to use tools, such as web search or programming, or are capable of planning. 

Are they a new thing?

The term “AI agents” has been around for years and has meant different things at different times, says Chirag Shah, a computer science professor at the University of Washington. 

There have been two waves of agents, says Fan. The current wave is thanks to the language model boom and the rise of systems such as ChatGPT. 

The previous wave was in 2016, when Google DeepMind introduced AlphaGo, its AI system that can play—and win—the game Go. AlphaGo was able to make decisions and plan strategies. This relied on reinforcement learning, a technique that rewards AI algorithms for desirable behaviors. 

“But these agents were not general,” says Oriol Vinyals, vice president of research at Google DeepMind. They were created for very specific tasks—in this case, playing Go. The new generation of foundation-model-based AI makes agents more universal, as they can learn from the world humans interact with. 

“You feel much more that the model is interacting with the world and then giving back to you better answers or better assisted assistance or whatnot,” says Vinyals. 

What are the limitations? 

There are still many open questions that need to be answered. Kanjun Qiu, CEO and founder of the AI startup Imbue, which is working on agents that can reason and code, likens the state of agents to where self-driving cars were just over a decade ago. They can do stuff, but they’re unreliable and still not really autonomous. For example, a coding agent can generate code, but it sometimes gets it wrong, and it doesn’t know how to test the code it’s creating, says Qiu. So humans still need to be actively involved in the process. AI systems still can’t fully reason, which is a critical step in operating in a complex and ambiguous human world. 

“We’re nowhere close to having an agent that can just automate all of these chores for us,” says Fan. Current systems “hallucinate and they also don’t always follow instructions closely,” Fan says. “And that becomes annoying.”  

Another limitation is that after a while, AI agents lose track of what they are working on. AI systems are limited by their context windows, meaning the amount of data they can take into account at any given time. 

“ChatGPT can do coding, but it’s not able to do long-form content well. But for human developers, we look at an entire GitHub repository that has tens if not hundreds of lines of code, and we have no trouble navigating it,” says Fan. 

To tackle this problem, Google has increased its models’ capacity to process data, which allows users to have longer interactions with them in which they remember more about past interactions. The company said it is working on making its context windows infinite in the future.
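
For intuition, the mechanics of that limit can be caricatured in a few lines: an application has to trim conversation history to whatever fits in the model’s window, so the oldest material is the first to go. The token counting and budget below are toy assumptions, not how any particular product works.

```python
# A toy caricature of the context-window limit described above: a model can
# only attend to a fixed token budget, so older turns fall out of view.

def fit_to_context(turns, max_tokens=8):
    """Keep only as many of the most recent turns as fit in the budget."""
    kept, used = [], 0
    for turn in reversed(turns):       # newest first
        cost = len(turn.split())       # crude stand-in for a token count
        if used + cost > max_tokens:
            break                      # budget exhausted; drop older turns
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["plan my trip", "book the hotel", "now compare flight prices"]
print(fit_to_context(history))  # the earliest turn is dropped first
```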

For embodied agents such as robots, there are even more limitations. There is not enough training data to teach them, and researchers are only just starting to harness the power of foundation models in robotics. 

So amid all the hype and excitement, it’s worth bearing in mind that research into AI agents is still in its very early stages, and it will likely take years until we can experience their full potential. 

That sounds cool. Can I try an AI agent now? 

Sort of. You’ve most likely tried their early prototypes, such as OpenAI’s ChatGPT and GPT-4. “If you’re interacting with software that feels smart, that is kind of an agent,” says Qiu. 

Right now the best agents we have are systems with very narrow and specific use cases, such as coding assistants, customer service bots, or workflow automation software like Zapier, she says. But these are a far cry from a universal AI agent that can do complex tasks. 

“Today we have these computers and they’re really powerful, but we have to micromanage them,” says Qiu. 

OpenAI’s ChatGPT plug-ins, which allow people to create AI-powered assistants for web browsers, were an attempt at agents, says Qiu. But these systems are still clumsy, unreliable, and not capable of reasoning, she says. 

Despite that, these systems will one day change the way we interact with technology, Qiu believes, and it is a trend people need to pay attention to. 

“It’s not like, ‘Oh my God, all of a sudden we have AGI’ … but more like ‘Oh my God, my computer can do way more than it did five years ago,’” she says.

AI companies are finally being forced to cough up for training data

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The generative AI boom is built on scale. The more training data, the more powerful the model. 

But there’s a problem. AI companies have pillaged the internet for training data, and many websites and data set owners have started restricting the ability to scrape their websites. We’ve also seen a backlash against the AI sector’s practice of indiscriminately scraping online data, in the form of users opting out of making their data available for training and lawsuits from artists, writers, and the New York Times, claiming that AI companies have taken their intellectual property without consent or compensation. 

Last week three major record labels—Sony Music, Warner Music Group, and Universal Music Group—announced they were suing the AI music companies Suno and Udio over alleged copyright infringement. The music labels claim the companies made use of copyrighted music in their training data “at an almost unimaginable scale,” allowing the AI models to generate songs that “imitate the qualities of genuine human sound recordings.” My colleague James O’Donnell dissects the lawsuits in his story and points out that these lawsuits could determine the future of AI music. Read it here.

But this moment also sets an interesting precedent for all of generative AI development. Thanks to the scarcity of high-quality data and the immense pressure and demand to build even bigger and better models, we’re in a rare moment where data owners actually have some leverage. The music industry’s lawsuit sends the loudest message yet: High-quality training data is not free. 

It will likely take a few years at least before we have legal clarity around copyright law, fair use, and AI training data. But the cases are already ushering in changes. OpenAI has been striking deals with news publishers such as Politico, the Atlantic, Time, the Financial Times, and others, trading money and citations for access to publishers’ news archives. And YouTube announced in late June that it will offer licensing deals to top record labels in exchange for music for training. 

These changes are a mixed bag. On one hand, I’m concerned that news publishers are making a Faustian bargain with AI. For example, most of the media houses that have made deals with OpenAI say the deal stipulates that OpenAI cite its sources. But language models have no built-in notion of factual accuracy and are prone to making things up. Reports have shown that ChatGPT and the AI-powered search engine Perplexity frequently hallucinate citations, which makes it hard for OpenAI to honor its promises.   

It’s tricky for AI companies too. This shift could lead them to build smaller, more efficient models, which are far less polluting. Or they may fork out a fortune to access data at the scale they need to build the next big one. Only the companies most flush with cash, and/or with large existing data sets of their own (such as Meta, with its two decades of social media data), can afford to do that. So the latest developments risk concentrating power even further into the hands of the biggest players. 

On the other hand, the idea of introducing consent into this process is a good one—not just for rights holders, who can benefit from the AI boom, but for all of us. We should all have the agency to decide how our data is used, and a fairer data economy would mean we could all benefit. 


Now read the rest of The Algorithm

Deeper Learning

How AI video games can help reveal the mysteries of the human mind

Neuroscientists and psychologists have long been using games as research tools to learn about the human mind. Video games have been either co-opted or specially designed to study how people learn, navigate, and cooperate with others, for example. AI video games—where characters don’t need scripts and appear to play when you’re not watching—could allow us to probe more deeply and unravel enduring mysteries about our brains and behavior, suggests my colleague Jessica Hamzelou in our weekly biotech newsletter, The Checkup.

Ready, set, go: Scientists who have done this type of study were able to observe and study how players behaved in these games: how they explored their virtual environment, how they sought rewards, how they made decisions. And research volunteers didn’t need to travel to a lab—their gaming behavior could be observed from wherever they happened to be playing, whether that was at home, at a library, or even inside an MRI scanner. Read more from Jessica.

Bits and Bytes

AI is already wreaking havoc on global power systems
A really well-done data visualization of the insane amount of electricity AI requires and how it is transforming our energy grid. A startling statistic: Data centers use more electricity than most countries. (Bloomberg)

The AI boom has an unlikely early winner: wonky consultants
It seems every company out there is thinking about how to use AI. But the problem is that nobody is sure exactly how to do that. And so in come consultants, who are profiting from AI FOMO. Work related to generative AI will make up about 40% of McKinsey’s business this year. (The New York Times)

Deepfake creators are revictimizing sex trafficking survivors
A new low: For the past few months, the largest deepfake sexual abuse website has posted deepfake videos based on footage from GirlsDoPorn, a now-defunct sex trafficking operation. (Wired)

I paid $365.63 to replace 404 Media with AI
A journalist paid gig workers to use ChatGPT to plagiarize news. The result: grammatically correct nonsense. (404 Media)

A way to let robots learn by listening will make them more useful

Most AI-powered robots today use cameras to understand their surroundings and learn new tasks, but it’s becoming easier to train robots with sound too, helping them adapt to tasks and environments where visibility is limited. 

Though sight is important, there are daily tasks where sound is actually more helpful, like listening to onions sizzling on the stove to tell whether the pan is at the right temperature. Training robots with audio has only been done in highly controlled lab settings, however, and the techniques have lagged behind other fast robot-teaching methods.

Researchers at the Robotics and Embodied AI Lab at Stanford University set out to change that. They first built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone designed to filter out background noise. Human demonstrators used the gripper for a variety of household tasks and then used this data to teach robotic arms how to execute the task on their own. The team’s new training algorithms help robots gather clues from audio signals to perform more effectively. 
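
To give a flavor of the general approach (this is an illustrative sketch, not the Stanford team’s actual architecture), a behavior-cloning policy that fuses camera and microphone features might look like the following; every dimension, encoder, and name here is an assumption.

```python
# An illustrative sketch of audio-conditioned imitation learning, not the
# Stanford team's actual model: fuse vision and audio features, then regress
# the demonstrator's actions. All dimensions and encoders are assumptions.

import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    def __init__(self, img_dim=512, audio_dim=128, action_dim=7):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, 256)     # stand-in for a CNN/ViT
        self.audio_enc = nn.Linear(audio_dim, 64)  # stand-in for a spectrogram encoder
        self.head = nn.Sequential(                 # fused features -> action
            nn.Linear(256 + 64, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, img_feat, audio_feat):
        fused = torch.cat([self.img_enc(img_feat),
                           self.audio_enc(audio_feat)], dim=-1)
        return self.head(fused)

# Behavior cloning: match the human demonstrator's recorded actions from
# synchronized camera and microphone features.
policy = AudioVisualPolicy()
img = torch.randn(8, 512)          # batch of visual features
audio = torch.randn(8, 128)        # batch of audio features
demo_actions = torch.randn(8, 7)   # e.g., 7-DoF gripper commands
loss = nn.functional.mse_loss(policy(img, audio), demo_actions)
loss.backward()
print(loss.item())
```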

“Thus far, robots have been training on videos that are muted,” says Zeyi Liu, a PhD student at Stanford and lead author of the study. “But there is so much helpful data in audio.”

To test how much more successful a robot can be if it’s capable of “listening,” the researchers chose four tasks: flipping a bagel in a pan, erasing a whiteboard, putting two Velcro strips together, and pouring dice out of a cup. In each task, sounds provide clues that cameras or tactile sensors struggle with, like knowing if the eraser is properly contacting the whiteboard or whether the cup contains dice. 

After demonstrating each task a couple of hundred times, the team compared the success rates of training with audio and training only with vision. The results, published in a paper on arXiv that has not been peer-reviewed, were promising. When using vision alone in the dice test, the robot could tell 27% of the time if there were dice in the cup, but that rose to 94% when sound was included.

It isn’t the first time audio has been used to train robots, says Shuran Song, the head of the lab that produced the study, but it’s a big step toward doing so at scale: “We are making it easier to use audio collected ‘in the wild,’ rather than being restricted to collecting it in the lab, which is more time consuming.” 

The research signals that audio might become a more sought-after data source in the race to train robots with AI. Researchers are teaching robots faster than ever before using imitation learning, showing them hundreds of examples of tasks being done instead of hand-coding each one. If audio could be collected at scale using devices like the one in the study, it could give them an entirely new “sense,” helping them more quickly adapt to environments where visibility is limited or not useful.

“It’s safe to say that audio is the most understudied modality for sensing [in robots],” says Dmitry Berenson, associate professor of robotics at the University of Michigan, who was not involved in the study. That’s because the bulk of research on training robots to manipulate objects has been for industrial pick-and-place tasks, like sorting objects into bins. Those tasks don’t benefit much from sound, instead relying on tactile or visual sensors. But as robots broaden into tasks in homes, kitchens, and other environments, audio will become increasingly useful, Berenson says.

Consider a robot trying to find which bag or pocket contains a set of keys, all with limited visibility. “Maybe even before you touch the keys, you hear them kind of jangling,” Berenson says. “That’s a cue that the keys are in that pocket instead of others.”

Still, audio has limits. The team points out sound won’t be as useful with so-called soft or flexible objects like clothes, which don’t create as much usable audio. The robots also struggled with filtering out the audio of their own motor noises during tasks, since that noise was not present in the training data produced by humans. To fix it, the researchers needed to add robot sounds—whirs, hums, and actuator noises—into the training sets so the robots could learn to tune them out. 

The next step, Liu says, is to see how much better the models can get with more data, which could mean adding more microphones, collecting spatial audio, and incorporating microphones into other types of data-collection devices. 

Training AI music models is about to get very expensive

AI music is suddenly in a make-or-break moment. On June 24, Suno and Udio, two leading AI music startups that make tools to generate complete songs from a prompt in seconds, were sued by major record labels. Sony Music, Warner Music Group, and Universal Music Group claim the companies made use of copyrighted music in their training data “at an almost unimaginable scale,” allowing the AI models to generate songs that “imitate the qualities of genuine human sound recordings.”

Two days later, the Financial Times reported that YouTube is pursuing a comparatively aboveboard approach. Rather than training AI music models on secret data sets, the company is reportedly offering unspecified lump sums to top record labels in exchange for licenses to use their catalogues for training. 

In response to the lawsuits, both Suno and Udio released statements mentioning efforts to ensure that their models don’t imitate copyrighted works, but neither company has specified whether their training sets contain them. Udio said its model “has ‘listened’ to and learned from a large collection of recorded music,” and two weeks before the lawsuits, Suno CEO Mikey Shulman told me its training set is “both industry standard and legal” but the exact recipe is proprietary.

While the ground here is changing fast, none of these moves should be all that surprising: litigious training-data battles have become something like a rite of passage for generative AI companies. The trend has led many of those companies, including OpenAI, to pay for licensing deals while the cases unfold. 

However, the stakes are higher for AI music than for image generators or chatbots. Generative AI companies working in text or photos have options to work around lawsuits; for example, they can cobble together open-source corpuses to train models. In contrast, music in the public domain is much more limited (and not exactly what most people want to listen to). 

Other AI companies can also more easily cut licensing deals with interested publishers and creators, of which there are many; but rights in music are far more concentrated than those in film, images, or text, industry experts say. They’re largely managed by the three biggest record labels—the new plaintiffs—whose publishing arms collectively own more than 10 million songs and much of the music that has defined the last century. (The filing names a long list of artists who the labels allege were wrongfully included in training data, ranging from ABBA to those on the Hamilton soundtrack.) 

On top of all this, it’s also just more difficult to create music worth listening to—generating a readable poem or passable illustration with AI is one technical challenge, but infusing a model with the taste required to create music we like is another. 

It’s of course possible that the AI companies will win the case, and none of this will matter; they would have carte blanche to train on a century of copyrighted music. But experts say the case from the record labels is strong, and it’s more likely that AI companies will soon have to pay up—and pay a lot—if they want to survive. If a court were to rule that AI music companies could not train for free on these labels’ catalogues, then expensive licensing deals, like the one YouTube is reportedly pursuing, would seem to be the only path forward. This would effectively ensure that the company with the deepest pockets ends up on top.

More than any training-data case yet, the outcome of this one will determine the shape of a big slice of AI—and whether there is a future for it at all. 

Merits of the case

Suno’s music generator has been public for less than a year, but the company has already garnered 12 million users, a $125 million funding round last month, and a partnership with Microsoft Copilot. Udio is even newer to the scene, having launched in April with $10 million in seed funding from musician-investors like will.i.am and Common. 

The record labels allege that both of the startups are engaging in copyright infringement on the training and the output sides of their models.

“The plaintiffs here have the best odds of almost anyone suing an AI company,” says James Grimmelmann, a professor of digital and information law at Cornell Law School. He draws comparisons to the ongoing New York Times case against OpenAI, which he says offered, until now, the best example of a rights holder with a strong case against an AI company. But the suit against Suno and Udio “is worse for a bunch of reasons.”

The Times has accused OpenAI of copyright infringement in its model training by using the publication’s articles without consent. Grimmelmann says OpenAI has a bit of plausible deniability in this accusation, because the company could say that it scraped much of the internet for a training corpus and copies of New York Times articles appeared in places without the company’s knowledge. 

For Suno and Udio, that defense is far less believable. “This is not like, ‘We scraped the web for all audio and we couldn’t tell the commercially produced songs apart from everything else,’” Grimmelmann says. “It’s pretty clear that they had to have been pulling in large databases of commercial recordings.” 

In addition to complaints about training, the new case alleges that tools like Suno and Udio are more imitative than generative AI, meaning that their output mimics the style of artists and songs protected by copyright. 

While Grimmelmann notes that the Times cited examples in which ChatGPT reproduced entire copies of its articles, record labels claim they were able to generate problematic responses from the AI music models with much simpler prompts. For instance, prompting Udio with “my tempting 1964 girl smokey sing hitsville soul pop,” the plaintiffs say, yielded a song that “any listener familiar with the Temptations would instantly recognize as resembling the copyrighted sound recording ‘My Girl.’” (The court documents include links to examples on Udio, but the songs appear to have been removed.) The plaintiffs mention similar examples from Suno, including an ABBA-adjacent song called “Prancing Queen” that was generated with the prompt “70s pop” and the lyrics for “Dancing Queen.”

What’s more, Grimmelmann explains, there is more copyrightable information in a song than a news article. “There’s just a lot more information density in capturing the way that Mariah Carey’s voice works than there is in words,” he says, which is perhaps part of the reason past lawsuits navigating music copyright have sometimes been so drawn-out and complex. 

In a statement, Shulman wrote that Suno prioritizes originality and that the model is “designed to generate completely new outputs, not to memorize and regurgitate preexisting content.” He added, “That is why we don’t allow user prompts that reference specific artists.” Udio’s statement similarly mentioned “state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.”

Indeed, the tools will block a request if it names an artist. But the record labels allege that the safeguards have significant loopholes. Following the news of the lawsuits, for instance, social media users shared examples suggesting that if users separate an artist’s name with spaces, the request may go through. My own request for “a song like Kendrick” was blocked by Suno, citing an artist’s name, but “a song like k e n d r i c k” resulted in a “hip-hop rhythmic beat-driven” track and “a song like k o r n” resulted in “nu-metal heavy aggressive.” (To be fair, they didn’t resemble the respective artists’ unique styles, but to even respond in the right tightly defined genre seems to suggest that the model is in fact familiar with each artist’s work.) Similar workarounds were blocked on Udio. 

Possible outcomes

There are three ways the case could go, Grimmelmann says. One is wholly in favor of the AI startups: the lawsuits fail and the court determines that companies did not violate fair use or imitate copyrighted works too closely in their outputs. If the models are found to fall under fair use, it would mean songwriters and rights holders would need to find a different legal mechanism to pursue compensation. 

Another possibility is a mixed bag: the court finds the AI companies did not violate fair use in their training but must better control their models’ output to make sure it does not improperly imitate copyrighted works. Grimmelmann says this would be similar to one of the initial rulings against Napster, in which the company was forced to ban searches for copyrighted works in its libraries (though users quickly found workarounds). 

The third and essentially nuclear option is that the court finds fault on both the training and the output sides of the AI models. This would mean the companies could not train on copyrighted works without licenses, and also could not allow outputs that closely imitate copyrighted works. The companies could be ordered to pay damages for infringement, which could run into the hundreds of millions for each company. If they aren’t bankrupted by such a ruling, it would force them to completely restructure their training through licensing deals, which could also be cost-prohibitive. 

COURTESY SUNO.AI

To license or not to license

Though the immediate goals of the plaintiffs are to get the AI companies to cease training and pay damages, the chairman of the Recording Industry Association of America, Mitch Glazier, is already looking ahead toward a future of licensing. “As in the past, music creators will enforce their rights to protect the creative engine of human artistry and enable the development of a healthy and sustainable licensed market that recognizes the value of both creativity and technology,” he wrote in a recent op-ed in Billboard.

Such a market for licenses could mirror what has already unfolded for text generators. OpenAI has struck licensing deals with a number of news publishers, including Politico, the Atlantic, and the Wall Street Journal. The deals promise to make content from the publishers discoverable in OpenAI’s products, though the models’ ability to transparently cite where they get their information remains limited at best.

If AI music companies follow that pattern, the only ones with the means to create powerful music models might be those with the most cash. That’s perhaps exactly what YouTube is thinking. The company did not immediately respond to questions from MIT Technology Review about the details of its negotiations, but given the massive amount of data required to train AI models and the concentration of rights owners in music, it’s fair to assume the price of deals with record labels would be eye-popping. 

In theory, an AI company could bypass the licensing process altogether by building its model exclusively on music in the public domain, but it would be a Herculean task. There have been similar efforts in the realm of text and image generation, including a legal consultancy in Chicago that created a model trained on dense regulatory documents, and a model from Hugging Face that trained on images of Mickey Mouse from the 1920s. But the models are small and unremarkable. If Suno or Udio is forced to train on only what’s in the public domain—think military march music and the royalty-free songs found in corporate videos—the resulting model would be a far cry from what they have today.

If AI companies do move forward with licensing agreements, negotiations may be tricky, says Grimmelmann. Music licensing is complicated by the fact that two different copyrights are at play: one for the song, which generally covers the composition, like the music and lyrics, and one for the master, which covers the recording—like what you’d hear if you streamed the song. 

Some artists, like Taylor Swift and Frank Ocean, have come to own the masters of their catalogues after drawn-out legal battles, and would therefore be in the driver’s seat for any potential licensing deal. Many others, though, retain only the song copyright, while the record labels retain the masters. In these cases, the record label might theoretically be able to grant AI companies a license to use the music without an artist’s permission—but at the risk of burning relationships with artists and sparking more legal battles. 

The question of whether to license their music to such companies has divided musician groups. In contract rules adopted in April by SAG-AFTRA, which represents recording artists as well as actors, AI clones of member voices are allowed, though there are minimum rates for compensation. Back in December, a group called the Indie Musicians Caucus expressed frustration that the leading instrumental musicians’ union, the 70,000-member American Federation of Musicians (AFM), was not doing enough to protect its rank and file against AI companies in contracts. The caucus wrote that it would vote against any agreement “obligating AFM members to dig [their] own graves by participating—without a right to consent, compensation, or credit—in the training of our permanent Generative AI replacements.”

But at this point, AFM does not appear eager to facilitate any deals. I asked Kenneth Shirk, international secretary-treasurer at AFM, whether he thought musicians should engage with AI companies and push to be fairly compensated, whatever that means, or instead resist licensing deals completely. 

“Looking at those questions makes me think, would you rather have a swarm of fire ants crawling all over you, or roll around in a bed of broken glass?” he told me. “We want musicians to get paid. But we also want to ensure that there’s a career in music to be had for those that are going to come after us.”

Synthesia’s hyperrealistic deepfakes will soon have full bodies

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year. 

“It’s very impressive. No one else is able to do that,” says Jack Saunders, a researcher at the University of Bath, who was not involved in Synthesia’s work. 

The full-body avatars he previewed are very good, he says, despite small errors such as hands “slicing” into each other at times. But “chances are you’re not really going to be looking that close to notice it,” Saunders says. 

Synthesia launched its first version of hyperrealistic AI avatars, also known as deepfakes, in April. These avatars use large language models to match expressions and tone of voice to the sentiment of spoken text. Diffusion models, as used in image- and video-generating AI systems, create the avatar’s look. However, the avatars in this generation appear only from the torso up, which can detract from the otherwise impressive realism. 
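Based on that description, the generation pipeline can be pictured in three stages: estimate the sentiment of the script, map it to expression controls, and use those controls to condition the video model. The toy sketch below follows that shape only; every function and parameter name is a hypothetical stand-in, not Synthesia’s actual API.

```python
# Toy sketch of a sentiment-to-expression pipeline. All names here are
# hypothetical; this is not Synthesia's system, just its rough shape.

POSITIVE = {"great", "excited", "happy", "love"}
NEGATIVE = {"afraid", "nervous", "sad", "worried"}

def sentiment(text: str) -> str:
    """Crude keyword-based sentiment label for a line of script."""
    words = set(text.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def expression_params(label: str) -> dict:
    """Map a sentiment label to facial-expression control values."""
    return {
        "positive": {"smile": 0.8, "brow_raise": 0.4},
        "negative": {"smile": 0.1, "brow_furrow": 0.6},
        "neutral":  {"smile": 0.3},
    }[label]

script = "I'm excited to show you our results."
params = expression_params(sentiment(script))
# In a real system these controls would condition a diffusion-based
# video model; here we just print them.
print(params)  # {'smile': 0.8, 'brow_raise': 0.4}
```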

To create the full-body avatars, Synthesia is building an even bigger AI model. Users will have to go into a studio to record their body movements.

COURTESY SYNTHESIA

But before these full-body avatars become available, the company is launching another version of AI avatars that have hands and can be filmed from multiple angles. Their predecessors were available only in portrait mode and could be seen only from the front.

Other startups, such as Hour One, have launched similar avatars with hands. Synthesia’s version, which I tested in a research preview and which will launch in late July, has slightly more realistic hand movements and lip-synching.

Crucially, the coming update also makes it far easier to create your own personalized avatar. The company’s previous custom AI avatars required users to go into a studio to record their face and voice over the span of a couple of hours, as I reported in April.

This time, I recorded the material needed in just 10 minutes in the Synthesia office, using a digital camera, a lapel mike, and a laptop. But an even more basic setup, such as a laptop camera, would do. And while previously I had to record my facial movements and voice separately, this time the data was collected at the same time. The process also includes reading a script expressing consent to being recorded in this way, and reading out a randomly generated security passcode. 

These changes allow more scale and give the AI models powering the avatars more capabilities with less data, says Riparbelli. The results are also much faster. While I had to wait a few weeks to get my studio-made avatar, the new homemade ones were available the next day. 

Below, you can see my test of the new homemade avatars with hands. 

COURTESY SYNTHESIA

The homemade avatars aren’t as expressive as the studio-made ones yet, and users can’t change the backgrounds of their avatars, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. The hands are animated using an advanced form of looping technology, which repeats the same hand movements in a way that is responsive to the content of the script. 

Hands are tricky for AI to do well—even more so than faces, Vittorio Ferrari, Synthesia’s director of science, told me in March. That’s because our mouths move in relatively small and predictable ways while we talk, making it possible to sync the deepfake version up with speech, but we move our hands in lots of different ways. On the flip side, while faces require close attention to detail because we tend to focus on them, hands can be less precise, Ferrari says.

Even if they’re imperfect, AI-generated hands and bodies add a lot to the illusion of realism, which poses serious risks at a time when deepfakes and online misinformation are proliferating. Synthesia has strict content moderation policies, carefully vetting both its customers and the sort of content they’re able to generate. For example, only accredited news outlets can generate content on news.  

These new advancements in avatar technologies are another hammer blow to our ability to believe what we see online, says Saunders. 

“People need to know you can’t trust anything,” he says. “Synthesia is doing this now, and another year down the line it will be better and other companies will be doing it.” 

How generative AI could reinvent what it means to play

First, a confession. I only got into playing video games a little over a year ago (I know, I know). A Christmas gift of an Xbox Series S “for the kids” dragged me—pretty easily, it turns out—into the world of late-night gaming sessions. I was immediately attracted to open-world games, in which you’re free to explore a vast simulated world and choose what challenges to accept. Red Dead Redemption 2 (RDR2), an open-world game set in the Wild West, blew my mind. I rode my horse through sleepy towns, drank in the saloon, visited a vaudeville theater, and fought off bounty hunters. One day I simply set up camp on a remote hilltop to make coffee and gaze down at the misty valley below me.

To make them feel alive, open-world games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or space ports of games. They make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.

In open-world games like Red Dead Redemption 2, players can choose diverse interactions within the same simulated experience.

After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun—I had a whale of a time, honestly, looting stagecoaches, fighting in bar brawls, and stalking deer through rainy woods—but the illusion starts to weaken when you poke at it. It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you loot a few stagecoaches, you finish, you move on. 

It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.

Startups employing generative-AI models like ChatGPT are using them to create characters that don’t rely on scripts but instead converse with you freely. Others are experimenting with NPCs who appear to have entire interior worlds, and who can continue to play even when you, the player, are not around to watch. Eventually, generative AI could create game experiences that are infinitely detailed, twisting and changing every time you experience them.

The field is still very new, but it’s extremely hot. In 2022 the venture firm Andreessen Horowitz launched Games Fund, a $600 million fund dedicated to gaming startups, a huge number of which plan to use AI in their games. The firm, also known as A16Z, has now invested in two studios that are aiming to create their own versions of AI NPCs, and it announced a second $600 million fund in April 2024.

Early experimental demos of these experiences are already popping up, and it may not be long before they appear in full games like RDR2. But some in the industry believe this development will not just make future open-world games incredibly immersive; it could change what kinds of game worlds or experiences are even possible. Ultimately, it could change what it means to play.

“What comes after the video game? You know what I mean?” says Frank Lantz, a game designer and director of the NYU Game Center. “Maybe we’re on the threshold of a new kind of game.”

These guys just won’t shut up

The way video games are made hasn’t changed much over the years. Graphics are incredibly realistic. Games are bigger. But the way in which you interact with characters, and the game world around you, uses many of the same decades-old conventions.

“In mainstream games, we’re still looking at variations of the formula we’ve had since the 1980s,” says Julian Togelius, a computer science professor at New York University who has a startup called Modl.ai that does in-game testing. Part of that tried-and-tested formula is a technique called a dialogue tree, in which all of an NPC’s possible responses are mapped out. Which one you get depends on which branch of the dialogue tree you have chosen. For example, say something rude about a passing NPC in RDR2 and the character will probably lash out—you have to quickly apologize to avoid a shootout (unless that’s what you want).
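Concretely, a dialogue tree is just a set of pre-authored branches. Here is a minimal sketch in Python, with invented lines; real engines use vastly larger, tool-authored trees, but the principle of selecting a pre-written branch is the same.

```python
# Minimal dialogue-tree sketch: every NPC response is pre-authored, and the
# player's choice selects the branch. Lines are invented for illustration.

DIALOGUE_TREE = {
    "start": {
        "npc": "You lookin' at me, stranger?",
        "choices": {
            "apologize": ("Didn't mean nothin' by it.", "defused"),
            "insult":    ("Maybe I am. What of it?", "shootout"),
        },
    },
    "defused":  {"npc": "Alright then. Move along.", "choices": {}},
    "shootout": {"npc": "Them's fightin' words!", "choices": {}},
}

def play(node_id: str = "start") -> None:
    node = DIALOGUE_TREE[node_id]
    print("NPC:", node["npc"])
    for key, (line, _next_id) in node["choices"].items():
        print(f"  [{key}] {line}")
    if node["choices"]:
        # A real game would read the player's pick; we hard-code an apology.
        _line, next_id = node["choices"]["apologize"]
        play(next_id)

play()
```

However a studio dresses it up, every path through such a tree has to be written, voiced, and tested by hand, which is what makes the approach so labor-intensive at scale.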

In the most expensive, high-profile games, the so-called AAA games like Elden Ring or Starfield, a deeper sense of immersion is created by using brute force to build out deep and vast dialogue trees. The biggest studios employ teams of hundreds of game developers who work for many years on a single game in which every line of dialogue is plotted and planned, and software is written so the in-game engine knows when to deploy that particular line. RDR2 reportedly contains some 500,000 lines of dialogue, voiced by around 700 actors.

“You get around the fact that you can [only] do so much in the world by, like, insane amounts of writing, an insane amount of designing,” says Togelius. 

Generative AI is already helping take some of that drudgery out of making new games. Jonathan Lai, a general partner at A16Z and one of Games Fund’s managers, says that most studios are using image-generating tools like Midjourney to enhance or streamline their work. And in a 2023 survey by A16Z, 87% of game studios said they were already using AI in their workflow in some way—and 99% planned to do so in the future. Many use AI agents to replace the human testers who look for bugs, such as places where a game might crash. In recent months, the CEO of the gaming giant EA said generative AI could be used in more than 50% of its game development processes.

Ubisoft, one of the biggest game developers, famous for AAA open-world games such as Assassin’s Creed, has been using a large-language-model-based AI tool called Ghostwriter to do some of the grunt work for its developers in writing basic dialogue for its NPCs. Ghostwriter generates loads of options for background crowd chatter, which the human writer can pick from or tweak. The idea is to free the humans up so they can spend that time on more plot-focused writing.

GEORGE WYLESOL

Ultimately, though, everything is scripted. Once you spend a certain number of hours on a game, you will have seen everything there is to see, and completed every interaction. Time to buy a new one.

But for startups like Inworld AI, this situation is an opportunity. Inworld, based in California, is building tools to make in-game NPCs that respond to a player with dynamic, unscripted dialogue and actions—so they never repeat themselves. The company, now valued at $500 million, is the best-funded AI gaming startup around thanks to backing from former Google CEO Eric Schmidt and other high-profile investors. 

Role-playing games give us a unique way to experience different realities, explains Kylan Gibbs, Inworld’s CEO and founder. But something has always been missing. “Basically, the characters within there are dead,” he says. 

“When you think about media at large, be it movies or TV or books, characters are really what drive our ability to empathize with the world,” Gibbs says. “So the fact that games, which are arguably the most advanced version of storytelling that we have, are lacking these live characters—it felt to us like a pretty major issue.”

Gamers themselves were pretty quick to realize that LLMs could help fill this gap. Last year, some came up with ChatGPT mods (a way to alter an existing game) for the popular role-playing game Skyrim. The mods let players interact with the game’s vast cast of characters using LLM-powered free chat. One mod even included OpenAI’s speech recognition software, Whisper, so that players could speak to the characters with their own voices, saying whatever they wanted, and have full conversations that were no longer restricted by dialogue trees.

The results gave gamers a glimpse of what might be possible but were ultimately a little disappointing. Though the conversations were open-ended, the character interactions were stilted, with delays while ChatGPT processed each request. 

Inworld wants to make this type of interaction more polished. It’s offering a product for AAA game studios in which developers can create the brains of an AI NPC that can then be imported into their game. Developers use the company’s “Inworld Studio” to generate their NPC. For example, they can fill out a core description that sketches the character’s personality, including likes and dislikes, motivations, or useful backstory. Sliders let you set levels of traits such as introversion or extroversion, insecurity or confidence. And you can also use free text to make the character drunk, aggressive, prone to exaggeration—pretty much anything.

Developers can also add descriptions of how their character speaks, including examples of commonly used phrases that Inworld’s various AI models, including LLMs, then spin into dialogue in keeping with the character. 
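Put together, a character definition of the kind Inworld describes might boil down to something like the sheet below. The field names are assumptions made for illustration, not Inworld’s actual schema; in a real pipeline, a sheet like this would be compiled into the prompts and settings that drive the underlying models.

```python
# Hypothetical NPC character sheet in the spirit of the description above.
# Field names are invented for illustration, not Inworld's real schema.

npc_config = {
    "core_description": (
        "A retired sea captain who runs the harbor tavern; distrusts "
        "outsiders but softens when asked about her old ship."
    ),
    "traits": {                # slider values in [0.0, 1.0]
        "extroversion": 0.3,
        "confidence": 0.8,
        "insecurity": 0.1,
    },
    "free_text_modifiers": ["slightly drunk", "prone to exaggeration"],
    "example_phrases": [
        "Back in my sailing days...",
        "You didn't hear it from me, but...",
    ],
    "world_knowledge": "medieval-fantasy setting only",  # no modern references
    "safety": {"profanity": False},
    "narrative_anchor": "steer conversation toward the missing cargo",
}
```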


Game designers can also plug other information into the system: what the character knows and doesn’t know about the world (no Taylor Swift references in a medieval battle game, ideally) and any relevant safety guardrails (does your character curse or not?). Narrative controls will let the developers make sure the NPC is sticking to the story and isn’t wandering wildly off-base in its conversation. The idea is that the characters can then be imported into video-game graphics engines like Unity or Unreal Engine to add a body and features. Inworld is collaborating with the text-to-voice startup ElevenLabs to add natural-sounding voices.

Inworld’s tech hasn’t appeared in any AAA games yet, but at the Game Developers Conference (GDC) in San Francisco in March 2024, the firm unveiled an early demo with Nvidia that showcased some of what will be possible. In Covert Protocol, each player operates as a private detective who must solve a case using input from the various in-game NPCs. Also at the GDC, Inworld unveiled a demo called NEO NPC that it had worked on with Ubisoft. In NEO NPC, a player could freely interact with NPCs using voice-to-text software and use conversation to develop a deeper relationship with them.

LLMs give us the chance to make games more dynamic, says Jeff Orkin, founder of Bitpart, a new startup that also aims to create entire casts of LLM-powered NPCs that can be imported into games. “Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended,” he says.

Bitpart’s approach is in part inspired by Orkin’s PhD research at MIT’s Media Lab. There, he trained AIs to role-play social situations using game-play logs of humans doing the same things with each other in multiplayer games.

Bitpart’s casts of characters are built with a large language model that is then fine-tuned in a way that keeps the in-game interactions from being entirely open-ended and infinite. Instead, the company uses an LLM and other tools to generate a script covering a range of possible interactions, from which a human game designer selects some. Orkin describes the process as authoring the Lego bricks of the interaction. An in-game algorithm searches out specific bricks and strings them together at the appropriate time.

Bitpart’s approach could create some delightful in-game moments. In a restaurant, for example, you might ask a waiter for something, but the bartender might overhear and join in. Bitpart’s AI currently works with Roblox. Orkin says the company is now running trials with AAA game studios, although he won’t yet say which ones.
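That eavesdropping bartender could fall out of a brick-matching scheme quite naturally. Here is a minimal sketch of the idea, with invented tags and lines; it is a guess at the shape of the approach, not Bitpart’s implementation.

```python
# Sketch of the "Lego bricks" idea: pre-authored, human-curated interaction
# snippets tagged by context, matched at runtime. Not Bitpart's actual code.

BRICKS = [
    {"tags": {"restaurant", "order"},    "speaker": "waiter",
     "line": "What can I get you?"},
    {"tags": {"restaurant", "order"},    "speaker": "bartender",
     "line": "If it's whiskey you're after, I keep the good stuff back here."},
    {"tags": {"restaurant", "farewell"}, "speaker": "waiter",
     "line": "Come back soon."},
]

def find_bricks(context: set[str]) -> list[dict]:
    """Return every brick whose tags are all satisfied by the context."""
    return [b for b in BRICKS if b["tags"] <= context]

# The player orders a drink; both the waiter and an eavesdropping
# bartender have matching bricks, so both can join the scene.
for brick in find_bricks({"restaurant", "order", "evening"}):
    print(f"{brick['speaker']}: {brick['line']}")
```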

But generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.

Making the impossible possible

When I asked Frank Lantz about how AI could change gaming, he talked for 26 minutes straight. His initial reaction to generative AI had been visceral: “I was like, oh my God, this is my destiny and is what I was put on the planet for.” 

Lantz has been in and around the cutting edge of the game industry and AI for decades but received a cult level of acclaim a few years ago when he created the Universal Paperclips game. The simple in-browser game gives the player the job of producing as many paper clips as possible. It’s a riff on the famous thought experiment by the philosopher Nick Bostrom, which imagines an AI that is given the same task and optimizes against humanity’s interest by turning all the matter in the known universe into paper clips.

Lantz is bursting with ideas for ways to use generative AI. One is to experience a new work of art as it is being created, with the player participating in its creation. “You’re inside of something like Lord of the Rings as it’s being written. You’re inside a piece of literature that is unfolding around you in real time,” he says. He also imagines strategy games where the players and the AI work together to reinvent what kind of game it is and what the rules are, so it is never the same twice.

For Orkin, LLM-powered NPCs can make games unpredictable—and that’s exciting. “It introduces a lot of open questions, like what you do when a character answers you but that sends a story in a direction that nobody planned for,” he says. 

It might mean games that are unlike anything we’ve seen thus far. Gaming experiences that unspool as the characters’ relationships shift and change, as friendships start and end, could unlock entirely new narrative experiences that are less about action and more about conversation and personalities. 

Togelius imagines new worlds built to react to the player’s own wants and needs, populated with NPCs that the player must teach or influence as the game progresses. Imagine interacting with characters whose opinions can change, whom you could persuade or motivate to act in a certain way—say, to go to battle with you. “A thoroughly generative game could be really, really good,” he says. “But you really have to change your whole expectation of what a game is.”

Lantz is currently working on a prototype of a game in which the premise is that you—the player—wake up dead, and the afterlife you find yourself in is a low-rent, cheap version of a synthetic world. The game plays out like a noir: you explore a city full of thousands of NPCs powered by a version of ChatGPT, interacting with them to work out how you ended up there.

His early experiments gave him some eerie moments when he felt that the characters seemed to know more than they should, a sensation recognizable to people who have played with LLMs before. Even though you know they’re not alive, they can still freak you out a bit.

“If you run electricity through a frog’s corpse, the frog will move,” he says. “And if you run $10 million worth of computation through the internet … it moves like a frog, you know.” 

But these early forays into generative-AI gaming have given him a real sense of excitement for what’s next: “I felt like, okay, this is a thread. There really is a new kind of artwork here.”

If an AI NPC talks and no one is around to listen, is there a sound?

AI NPCs won’t just enhance player interactions—they might interact with one another in weird ways. Red Dead Redemption 2’s NPCs each have long, detailed scripts that spell out exactly where they should go, what work they must complete, and how they’d react if anything unexpected occurred. If you want, you can follow an NPC and watch it go about its day. It’s fun, but ultimately it’s hard-coded.

NPCs built with generative AI could have a lot more leeway—even interacting with one another when the player isn’t there to watch. Just as people have been fooled into thinking LLMs are sentient, watching a city of generated NPCs might feel like peering over the top of a toy box that has somehow magically come alive.

We’re already getting a sense of what this might look like. At Stanford University, Joon Sung Park has been experimenting with AI-generated characters and watching to see how their behavior changes and gains complexity as they encounter one another. 

Because large language models have sucked up the internet and social media, they actually contain a lot of detail about how we behave and interact, he says.

Gamers came up with ChatGPT mods for the popular role-playing game Skyrim.
Although 2016’s hugely hyped No Man’s Sky used procedural generation to create endless planets to explore, many saw it as a letdown.
In Covert Protocol, players operate as private detectives who must solve a case using input from various in-game NPCs.

In Park’s recent research, he and colleagues set up a Sims-like game, called Smallville, with 25 simulated characters powered by generative AI. Each was given a name and a simple biography before being set in motion. When left to interact with each other for two days, they began to exhibit humanlike conversations and behavior, including remembering each other and being able to talk about their past interactions.

For example, the researchers prompted one character to organize a Valentine’s Day party—and then let the simulation run. That character sent invitations around town, while other members of the community asked each other on dates to go to the party, and all turned up at the venue at the correct time. All of this was carried out through conversations, and past interactions between characters were stored in their “memories” as natural language.
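Park and his colleagues’ paper describes each agent as keeping a “memory stream” of natural-language records that are retrieved to condition what the agent does next. The sketch below is a drastic simplification (the real system scores memories by recency, importance, and relevance rather than simple word overlap), but it shows the loop: observe, remember, retrieve, respond.

```python
# Drastically simplified sketch of a generative-agent "memory stream,"
# after Park et al.'s Smallville. The real system scores memories by
# recency, importance, and relevance; here retrieval is word overlap only.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memories: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.memories.append(event)

    def recall(self, topic: str, k: int = 2) -> list[str]:
        """Return up to k memories sharing the most words with the topic."""
        topic_words = set(topic.lower().split())
        return sorted(
            self.memories,
            key=lambda m: len(topic_words & set(m.lower().split())),
            reverse=True,
        )[:k]

isabella = Agent("Isabella")
isabella.remember("I am planning a Valentine's Day party at Hobbs Cafe.")
isabella.remember("Klaus asked me about my favorite books yesterday.")

# Before replying to a question about the party, the agent retrieves
# relevant memories to fold into the LLM prompt.
print(isabella.recall("Valentine's Day party"))
```

In Park’s system, the retrieved memories are combined with the agent’s current situation in the prompt to the language model, which is how a plan like the party invitation can propagate through the town.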

For Park, the implications for gaming are huge. “This is exactly the sort of tech that the gaming community for their NPCs have been waiting for,” he says. 

His research has inspired games like AI Town, an open-source interactive experience on GitHub that lets human players interact with AI NPCs in a simple top-down game. You can leave the NPCs to get along for a few days and check in on them, reading the transcripts of the interactions they had while you were away. Anyone is free to take AI Town’s code to build new NPC experiences through AI. 

For Daniel De Freitas, cofounder of the startup Character AI, which lets users generate and interact with their own LLM-powered characters, the generative-AI revolution will allow new types of games to emerge—ones in which the NPCs don’t even need human players. 

The player is “joining an adventure that is always happening, that the AIs are playing,” he imagines. “It’s the equivalent of joining a theme park full of actors, but unlike the actors, they truly ‘believe’ that they are in those roles.”

If you’re getting Westworld vibes right about now, you’re not alone. There are plenty of stories about people torturing or killing their simple Sims characters in the game for fun. Would mistreating NPCs that pass for real humans cross some sort of new ethical boundary? What if, Lantz asks, an AI NPC that appeared conscious begged for its life when you simulated torturing it?

It raises complex questions, he adds. “One is: What are the ethical dimensions of pretend violence? And the other is: At what point do AIs become moral agents to which harm can be done?”

There are other potential issues too. An immersive world that feels real, and never ends, could be dangerously addictive. Some users of AI chatbots have already reported losing hours and even days in conversation with their creations. Are there dangers that the same parasocial relationships could emerge with AI NPCs? 

“We may need to worry about people forming unhealthy relationships with game characters at some point,” says Togelius. Until now, players have been able to differentiate pretty easily between game play and real life. But AI NPCs might change that, he says: “If at some point what we now call ‘video games’ morph into some all-encompassing virtual reality, we will probably need to worry about the effect of NPCs being too good, in some sense.”

A portrait of the artist as a young bot

Not everyone is convinced that never-ending open-ended conversations between the player and NPCs are what we really want for the future of games. 

“I think we have to be cautious about connecting our imaginations with reality,” says Mike Cook, an AI researcher and game designer. “The idea of a game where you can go anywhere, talk to anyone, and do anything has always been a dream of a certain kind of player. But in practice, this freedom is often at odds with what we want from a story.”

In other words, having to generate a lot of the dialogue yourself might actually get kind of … well, boring. “If you can’t think of interesting or dramatic things to say, or are simply too tired or bored to do it, then you’re going to basically be reading your own very bad creative fiction,” says Cook. 

Orkin likewise doesn’t think conversations that could go anywhere are actually what most gamers want. “I want to play a game that a bunch of very talented, creative people have really thought through and created an engaging story and world,” he says.

This idea of authorship is an important part of game play, agrees Togelius. “You can generate as much as you want,” he says. “But that doesn’t guarantee that anything is interesting and worth keeping. In fact, the more content you generate, the more boring it might be.”

GEORGE WYLESOL

Sometimes, the possibility of everything is too much to cope with. No Man’s Sky, a hugely hyped space game launched in 2016 that used algorithms to generate endless planets to explore, was seen by many players as a bit of a letdown when it finally arrived. Players quickly discovered that being able to explore a universe that never ended, with worlds that were endlessly different, actually fell a little flat. (A series of updates over subsequent years has made No Man’s Sky a little more structured, and it’s now generally well thought of.)

One approach might be to keep AI gaming experiences tight and focused.

Hilary Mason, CEO at the gaming startup Hidden Door, likes to joke that her work is “artisanal AI.” She is from Brooklyn, after all, says her colleague Chris Foster, the firm’s game director, laughing.

Hidden Door, which has not yet released any products, is making role-playing text adventures based on classic stories that the user can steer. It’s like Dungeons & Dragons for the generative AI era. It stitches together classic tropes from particular adventure worlds with an annotated database of thousands of words and phrases, then uses a variety of machine-learning tools, including LLMs, to make each story unique. Players walk through a semi-unstructured storytelling experience, free-typing into text boxes to control their character.

The result feels a bit like hand-annotating an AI-generated novel with Post-it notes.

In a demo with Mason, I got to watch as her character infiltrated a hospital and attempted to hack into the server. Each suggestion prompted the system to spin up the next part of the story, with the large language model creating new descriptions and in-game objects on the fly.

Each experience lasts between 20 and 40 minutes, and for Foster, it creates an “expressive canvas” that people can play with. The fixed length and the added human touch—Mason’s artisanal approach—give players “something really new and magical,” he says.

There’s more to life than games

Park thinks generative AI that makes NPCs feel alive in games will have other, more fundamental implications further down the line.

“This can, I think, also change the meaning of what games are,” he says. 

For example, he’s excited about using generative-AI agents to simulate how real people act. He thinks AI agents could one day be used as proxies for real people to, for example, test out the likely reaction to a new economic policy. Counterfactual scenarios could be plugged in, letting policymakers run time backward to see what would have happened had a different path been taken.

“You want to learn that if you implement this social policy or economic policy, what is going to be the impact that it’s going to have on the target population?” he suggests. “Will there be unexpected side effects that we’re not going to be able to foresee on day one?”

And while Inworld is focused on adding immersion to video games, it has also worked with LG in South Korea to make characters that kids can chat with to improve their English language skills. Others are using Inworld’s tech to create interactive experiences. One of these, called Moment in Manzanar, was created to help players empathize with the Japanese-Americans the US government detained in internment camps during World War II. It allows the user to speak to a fictional character called Ichiro who talks about what it was like to be held in the Manzanar camp in California. 

Inworld’s NPC ambitions might be exciting for gamers (my future excursions as a cowboy could be even more immersive!), but there are some who believe using AI to enhance existing games is thinking too small. Instead, we should be leaning into the weirdness of LLMs to create entirely new kinds of experiences that were never possible before, says Togelius. The shortcomings of LLMs “are not bugs—they’re features,” he says. 

Lantz agrees. “You have to start with the reality of what these things are and what they do—this kind of latent space of possibilities that you’re surfing and exploring,” he says. “These engines already have that kind of a psychedelic quality to them. There’s something trippy about them. Unlocking that is the thing that I’m interested in.”

Whatever is next, we probably haven’t even imagined it yet, Lantz thinks. 

“And maybe it’s not about a simulated world with pretend characters in it at all,” he says. “Maybe it’s something totally different. I don’t know. But I’m excited to find out.”

How underwater drones could shape a potential Taiwan-China conflict

A potential future conflict between Taiwan and China would be shaped by novel methods of drone warfare involving advanced underwater drones and increased levels of autonomy, according to a new war-gaming experiment by the think tank Center for a New American Security (CNAS). 

The report comes as concerns about Beijing’s aggression toward Taiwan have been rising: China sent dozens of surveillance balloons over the Taiwan Strait in January during Taiwan’s elections, and in May, two Chinese naval ships entered Taiwan’s restricted waters. The US Department of Defense has said that preparing for potential hostilities is an “absolute priority,” though no such conflict is immediately expected. 

The report’s authors detail a number of ways that the use of drones in any South China Sea conflict would differ starkly from current practice, most notably that seen in the war in Ukraine, often called the first full-scale drone war.

Differences from the Ukrainian battlefield

Since Russia invaded Ukraine in 2022, drones have been aiding in what military experts describe as the first three steps of the “kill chain”—finding, targeting, and tracking a target—as well as in delivering explosives. The drones have a short life span, since they are often shot down or made useless by frequency jamming devices that prevent pilots from controlling them. Quadcopters—the commercially available drones often used in the war—last just three flights on average, according to the report. 

Drones like these would be far less useful in a possible invasion of Taiwan. “Ukraine-Russia has been a heavily land conflict, whereas conflict between the US and China would be heavily air and sea,” says Zak Kallenborn, a drone analyst and adjunct fellow with the Center for Strategic and International Studies, who was not involved in the report but agrees broadly with its projections. The small, off-the-shelf drones popularized in Ukraine have flight times too short for them to be used effectively in the South China Sea. 

An underwater war

Instead, a conflict with Taiwan would likely make use of undersea and maritime drones. With Taiwan just 100 miles away from China’s mainland, the report’s authors say, the Taiwan Strait is where the first days of such a conflict would likely play out. The Zhu Hai Yun, China’s high-tech autonomous carrier, might send its autonomous underwater drones to scout for US submarines. The drones could launch attacks that, even if they did not sink the submarines, might divert the attention and resources of the US and Taiwan. 

It’s also possible China would flood the South China Sea with decoy drone boats to “make it difficult for American missiles and submarines to distinguish between high-value ships and worthless uncrewed commercial vessels,” the authors write.

Though most drone innovation is not focused on maritime applications, these uses are not without precedent: Ukrainian forces drew attention for modifying jet skis to operate via remote control and using them to intimidate and even sink Russian vessels in the Black Sea. 

More autonomy

Drones currently have very little autonomy. They’re typically human-piloted, and though some are capable of autopiloting to a fixed GPS point, that’s generally not very useful in a war scenario, where targets are on the move. But, the report’s authors say, autonomous technology is developing rapidly, and whichever nation possesses a more sophisticated fleet of autonomous drones will hold a significant edge.

What would that look like? Millions of defense research dollars are being spent in the US and China alike on swarming, a strategy where drones navigate autonomously in groups and accomplish tasks. The technology isn’t deployed yet, but if successful, it could be a game-changer in any potential conflict.  

A sea-based conflict might also offer an easier starting ground for AI-driven navigation, because object recognition is easier on the “relatively uncluttered surface of the ocean” than on the ground, the authors write.

China’s advantages

A chief advantage for China in a potential conflict is its proximity to Taiwan; it has more than three dozen air bases within 500 miles, while the closest US base is 478 miles away in Okinawa. But an even bigger advantage is that it produces more drones than any other nation.

“China dominates the commercial drone market, absolutely,” says Stacie Pettyjohn, coauthor of the report and director of the defense program at CNAS. That includes drones of the type used in Ukraine.

For Taiwan to use these Chinese drones for its own defense, it would first have to make the purchase, which could be difficult because the Chinese government might move to block it. Then it would need to hack them and disconnect them from the companies that made them, or else those Chinese manufacturers could turn them off remotely or launch cyberattacks. That sort of hacking is unfeasible at scale, so Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US. On Wednesday, June 19, the US approved a $360 million sale of 1,000 military-grade drones to Taiwan.

For now, experts can only speculate about how those drones might be used. Though preparing for a conflict in the South China Sea is a priority for the DOD, it’s one of many, says Kallenborn. “The sensible approach, in my opinion, is recognizing that you’re going to potentially have to deal with all of these different things,” he says. “But we don’t know the particular details of how it will work out.”