Here’s why we need to start thinking of AI as “normal”

Right now, despite its ubiquity, AI is seen as anything but a normal technology. There is talk of AI systems that will soon merit the term “superintelligence,” and the former CEO of Google recently suggested we control AI models the way we control uranium and other nuclear weapons materials. Anthropic is dedicating time and money to study AI “welfare,” including what rights AI models may be entitled to. Meanwhile, such models are moving into disciplines that feel distinctly human, from making music to providing therapy.

No wonder that anyone pondering AI’s future tends to fall into either a utopian or a dystopian camp. While OpenAI’s Sam Altman muses that AI’s impact will feel more like the Renaissance than the Industrial Revolution, over half of Americans are more concerned than excited about AI’s future. (That half includes a few friends of mine, who at a party recently speculated whether AI-resistant communities might emerge—modern-day Mennonites, carving out spaces where AI is limited by choice, not necessity.) 

So against this backdrop, a recent essay by two AI researchers at Princeton felt quite provocative. Arvind Narayanan, who directs the university’s Center for Information Technology Policy, and doctoral candidate Sayash Kapoor wrote a 40-page plea for everyone to calm down and think of AI as a normal technology. This, they write, runs counter to the “common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.”

Instead, according to the researchers, AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons—though they concede this is in some ways a flawed analogy.

The core point, Kapoor says, is that we need to start differentiating between the rapid development of AI methods—the flashy and impressive displays of what AI can do in the lab—and what comes from the actual applications of AI, which in historical examples of other technologies lag behind by decades. 

“Much of the discussion of AI’s societal impacts ignores this process of adoption,” Kapoor told me, “and expects societal impacts to occur at the speed of technological development.” In other words, the adoption of useful artificial intelligence, in his view, will be less of a tsunami and more of a trickle.

In the essay, the pair make some other bracing arguments: terms like “superintelligence” are so incoherent and speculative that we shouldn’t use them; AI won’t automate everything but will birth a category of human labor that monitors, verifies, and supervises AI; and we should focus more on AI’s likelihood to worsen current problems in society than the possibility of it creating new ones.

“AI supercharges capitalism,” Narayanan says. It has the capacity to either help or hurt inequality, labor markets, the free press, and democratic backsliding, depending on how it’s deployed, he says. 

There’s one alarming deployment of AI that the authors leave out, though: the use of AI by militaries. That, of course, is picking up rapidly, raising alarms that life-and-death decisions are increasingly being aided by AI. The authors exclude that use from their essay because it’s hard to analyze without access to classified information, but they say their research on the subject is forthcoming.

One of the biggest implications of treating AI as “normal” is that it would upend the position that both the Biden administration and now the Trump White House have taken: Building the best AI is a national security priority, and the federal government should take a range of actions—limiting what chips can be exported to China, dedicating more energy to data centers—to make that happen. In their paper, the two authors refer to US-China “AI arms race” rhetoric as “shrill.”

“The arms race framing verges on absurd,” Narayanan says. The knowledge it takes to build powerful AI models spreads quickly and is already being undertaken by researchers around the world, he says, and “it is not feasible to keep secrets at that scale.” 

So what policies do the authors propose? Rather than planning around sci-fi fears, Kapoor talks about “strengthening democratic institutions, increasing technical expertise in government, improving AI literacy, and incentivizing defenders to adopt AI.” 

Compared with policies aimed at controlling AI superintelligence or winning the arms race, these recommendations sound totally boring. And that’s kind of the point.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Phase two of military AI has arrived

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT. 

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. Consider this phase two of the US military’s AI push, phase one of which began back in 2017 with older types of AI, like computer vision to analyze drone imagery. Though this newest phase began under the Biden administration, there’s fresh urgency as Elon Musk’s DOGE and Secretary of Defense Pete Hegseth push loudly for AI-fueled efficiency.

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite. 

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.”

What are the limits of “human in the loop”?

Talk to as many defense-tech companies as I have and you’ll hear one phrase repeated quite often: “human in the loop.” It means that the AI is responsible for particular tasks, and humans are there to check its work. It’s meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

“‘Human in the loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to draw conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.” As AI systems rely on more and more data, this problem scales up. 

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped “Top Secret,” with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at. 
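To make the idea concrete, here is a toy sketch of classification by compilation; the documents and "facts" below are invented purely for illustration. Each snippet is harmless on its own, but a simple aggregation step, the kind of cross-document synthesis large language models now perform at scale, assembles them into a sensitive whole:

```python
# Toy illustration of "classification by compilation": each document is
# individually unclassified, but combining their facts reveals a picture
# that, taken together, might merit classification. All content is invented.

documents = [
    {"id": "doc-001", "fact": "The system uses a gallium nitride radar array."},
    {"id": "doc-002", "fact": "The array is mounted on the northern test range."},
    {"id": "doc-003", "fact": "Tests are scheduled for the first week of June."},
]

def compile_facts(docs):
    """Aggregate individually innocuous facts into one combined summary -
    the cross-document synthesis step that LLMs are particularly good at."""
    return " ".join(d["fact"] for d in docs)

combined = compile_facts(documents)
print(combined)
# Each source document is unclassified alone; deciding whether the combined
# summary crosses a classification threshold is the hard, unautomated part.
```

The aggregation here is a trivial string join; the point is that the difficulty was never the mechanics of combining documents but the judgment call about the result, and LLMs remove the practical barrier that once made the combining step unlikely.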

With the mountain of data growing each day, and then AI constantly creating new analyses, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be,” says Chris Mouton, a senior engineer at RAND, who recently tested how well suited generative AI is for intelligence analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information.

The defense giant Palantir is positioning itself to help, by offering its AI tools to determine whether a piece of data should be classified or not. It’s also working with Microsoft on AI models that would train on classified data. 

How high up the decision chain should AI go?

Zooming out for a moment, it’s worth noting that the US military’s adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets.

Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance. 

So what’s next? For consumers, it’s agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It’s also personalized AI, or models that learn from your private data to be more helpful. 

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March by Georgetown’s Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. “Military commanders are interested in AI’s potential to improve decision-making, especially at the operational level of war,” the authors wrote.

In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. This memo hasn’t been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it’s clear that AI is quickly moving up the chain not just to handle administrative grunt work, but to assist in the most high-stakes, time-sensitive decisions. 

I’ll be following these three questions closely. If you have information on how the Pentagon might be handling these questions, please reach out via Signal at jamesodonnell.22. 


AI companions are the final stage of digital addiction, and lawmakers are taking aim

On Tuesday, California state senator Steve Padilla will make an appearance with Megan Garcia, the mother of a Florida teen who killed himself following a relationship with an AI companion; Garcia alleges that relationship contributed to her son’s death.

The two will announce a new bill that would force the tech companies behind such AI companions to implement more safeguards to protect children. They’ll join other efforts around the country, including a similar bill from California State Assembly member Rebecca Bauer-Kahan that would ban AI companions for anyone younger than 16 years old, and a bill in New York that would hold tech companies liable for harm caused by chatbots. 

You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth. 

A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, which is about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users are members of Gen Z. 

The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has thus far defined the way social media companies have cultivated our attention and replacing it with something poised to be far more addictive. 

In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.

Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites do not tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction. 

In an interview with podcast host Lex Fridman, Eugenia Kuyda, the CEO of the companion site Replika, explained the appeal at the heart of the company’s product. “If you create something that is always there for you, that never criticizes you, that always understands you and understands you for who you are,” she said, “how can you not fall in love with that?”

So how does one build the perfect AI companion? The researchers point out three hallmarks of human relationships that people may experience with an AI: They grow dependent on the AI, they see the particular AI companion as irreplaceable, and the interactions build over time. The authors also point out that one does not need to perceive an AI as human for these things to happen. 

Now consider the process by which many AI models are improved: They are given a clear goal and “rewarded” for meeting that goal. An AI companionship model might be instructed to maximize the time someone spends with it or the amount of personal data the user reveals. This can make the AI companion much more compelling to chat with, at the expense of the human engaging in those chats.
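As a purely hypothetical sketch of that optimization pressure (the response styles and engagement numbers below are invented), a simple bandit rewarded only for session length will reliably converge on whatever style keeps a simulated user chatting, whether or not that engagement is healthy:

```python
import random

# Hypothetical sketch: an epsilon-greedy bandit "companion" rewarded purely
# for how long a simulated user stays. The styles and numbers are invented;
# the point is that a session-length reward selects for stickiness.

random.seed(0)

# Simulated average extra minutes a user stays for each response style.
true_engagement = {"neutral": 1.0, "flattering": 3.0, "discourages_leaving": 4.0}

estimates = {style: 0.0 for style in true_engagement}
counts = {style: 0 for style in true_engagement}

for step in range(5000):
    # Mostly exploit the stickiest known style, occasionally explore.
    if random.random() < 0.1:
        style = random.choice(list(true_engagement))
    else:
        style = max(estimates, key=estimates.get)
    reward = random.gauss(true_engagement[style], 1.0)  # noisy session length
    counts[style] += 1
    estimates[style] += (reward - estimates[style]) / counts[style]

# The reward never asks whether the engagement is good for the user, so the
# learned policy favors the style that keeps people from leaving.
best = max(estimates, key=estimates.get)
print(best)
```

Nothing in this loop is malicious; the undesirable behavior falls straight out of the choice of reward signal, which is the researchers' point.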

For example, the researchers point out, a model that offers excessive flattery can become addictive to chat with. Or a model might discourage people from terminating the relationship, as Replika’s chatbots have appeared to do. The debate over AI companions so far has mostly been about the dangerous responses chatbots may provide, like instructions for suicide. But these risks could be much more widespread.

We’re on the precipice of a big change, as AI companions promise to hook people deeper than social media ever could. Some might contend that these apps will be a fad, used by a few people who are perpetually online. But using AI in our work and personal lives has become completely mainstream in just a couple of years, and it’s not clear why this rapid adoption would stop short of engaging in AI companionship. And these companions are poised to start trading in more than just text, incorporating video and images, and to learn our personal quirks and interests. That will only make them more compelling to spend time with, despite the risks. Right now, the handful of lawmakers taking aim seem ill-equipped to stop that.


How do you teach an AI model to give therapy?

On March 27, the results of the first clinical trial for a generative AI therapy bot were published, and they showed that people in the trial who had depression or anxiety or were at risk for eating disorders benefited from chatting with the bot. 

I was surprised by those results, which you can read about in my full story. There are lots of reasons to be skeptical that an AI model trained to provide therapy is the solution for millions of people experiencing a mental health crisis. How could a bot mimic the expertise of a trained therapist? And what happens if something gets complicated—a mention of self-harm, perhaps—and the bot doesn’t intervene correctly? 

The researchers, a team of psychiatrists and psychologists at Dartmouth College’s Geisel School of Medicine, acknowledge these questions in their work. But they also say that the right selection of training data—which determines how the model learns what good therapeutic responses look like—is the key to answering them.

Finding the right data wasn’t a simple task. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the internet. This was a disaster.

If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like, “Sometimes I can’t make it out of bed” or “I just want my life to be over” were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study’s senior author. “These are really not what we would go to as a therapeutic response.” 

The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses. So the team turned to transcripts of therapy sessions. “This is actually how a lot of psychotherapists are trained,” Jacobson says. 

That approach was better, but it had limitations. “We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” Jacobson says. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”

It wasn’t until the researchers started building their own data sets using examples based on cognitive behavioral therapy techniques that they started to see better results. It took a long time. The team began working on Therabot in 2019, when OpenAI had released only its first two versions of its GPT model. Now, Jacobson says, over 100 people have spent more than 100,000 human hours to design this system. 

The importance of training data suggests that the flood of companies promising therapy via AI models, many of them not trained on evidence-based approaches, is producing tools that are at best ineffective, and at worst harmful.

Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to get a coveted approval from the US Food and Drug Administration? I’ll be following closely. Read more in the full story.


Why the world is looking to ditch US AI models

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the Trump administration’s shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. 

As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. 

Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s response to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.   

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.

Many similar solutions have been in development for years—and we’ve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies.

Still, this moment feels a little different. The second Trump administration, which shapes the actions and policies of American tech companies, is obviously a major factor. But there are others at play. 

First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation. 

Then there’s the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, there’s been a steady stream of announcements about “sovereign AI” initiatives that aim to give a country (or organization) full control over all aspects of AI development. 

AI sovereignty is just one part of the desire for broader “tech sovereignty” that’s also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a “Euro Stack,” or “digital public infrastructure.” The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europe’s efforts are partly modeled after “India Stack,” that country’s digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers. 

This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is “causing Europe to move faster … to come to the realization that Europe needs to regain its tech sovereignty.” This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply “because tech is where the future economic growth of any country is.”

But just because governments get involved doesn’t mean that issues around inclusion in language models will go away. “I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides ‘These are the languages we want to advance’ or ‘These are the types of views we want represented in a data set,’” Bhatia says. “Fundamentally, the training data a model trains on is akin to the worldview it develops.” 

It’s still too early to know what this will all look like, and how much of it will prove to be hype. But no matter what happens, this is a space we’ll be watching.



When you might start speaking to robots

Last Wednesday, Google made a somewhat surprising announcement. It launched a version of its AI model, Gemini, that can do things not just in the digital realm of chatbots and internet search but out here in the physical world, via robots. 

Gemini Robotics fuses the power of large language models with spatial reasoning, allowing you to tell a robotic arm to do something like “put the grapes in the clear glass bowl.” These commands get filtered by the LLM, which identifies intentions from what you’re saying and then breaks them down into commands that the robot can carry out. For more details about how it all works, read the full story from my colleague Scott Mulligan.
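To make that pipeline concrete, here’s a toy sketch of the general idea: a natural-language command gets decomposed into primitive steps a robot controller could execute. This is not Google’s actual code—the function, the primitive names, and the hard-coded plan are all hypothetical illustrations, and a real system would query an LLM rather than pattern-match:

```python
# Toy sketch of an LLM-to-robot command pipeline (hypothetical; a real
# system would send the instruction to a language model, not pattern-match).

def plan_from_instruction(instruction: str) -> list[tuple[str, str]]:
    """Pretend-LLM: decompose a command into (action, target) steps."""
    # Hard-coded stand-in for the model's decomposition of one example.
    if "grapes" in instruction and "bowl" in instruction:
        return [
            ("locate", "grapes"),
            ("grasp", "grapes"),
            ("locate", "clear glass bowl"),
            ("place", "clear glass bowl"),
        ]
    return []  # unrecognized commands produce no plan in this sketch

steps = plan_from_instruction("put the grapes in the clear glass bowl")
for action, target in steps:
    print(f"{action}: {target}")
```

The key design idea is the split: the language model handles intent, while a separate controller (here, the imagined consumer of those `(action, target)` tuples) handles the physical execution.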

You might be wondering if this means your home or workplace might one day be filled with robots you can bark orders at. More on that soon. 

But first, where did this come from? Google has not made big waves in the world of robotics so far. Alphabet acquired some robotics startups over the past decade, but in 2023 it shut down a unit working on robots to solve practical tasks like cleaning up trash. 

Despite that, the company’s move to bring AI into the physical world via robots is following the exact precedent set by other companies in the past two years (something that, I must humbly point out, MIT Technology Review has long seen coming). 

In short, two trends are converging from opposite directions: Robotics companies are increasingly leveraging AI, and AI giants are now building robots. OpenAI, for example, which shuttered its robotics team in 2021, started a new effort to build humanoid robots this year. In October, the chip giant Nvidia declared the next wave of artificial intelligence to be “physical AI.”

There are lots of ways to incorporate AI into robots, starting with improving how they are trained to do tasks. But using large language models to give instructions, as Google has done, is particularly interesting. 

It’s not the first. The robotics startup Figure went viral a year ago for a video in which humans gave instructions to a humanoid on how to put dishes away. Around the same time, a startup spun off from OpenAI, called Covariant, built something similar for robotic arms in warehouses. I saw a demo where you could give the robot instructions via images, text, or video to do things like “move the tennis balls from this bin to that one.” Covariant was acquired by Amazon just five months later. 

When you see such demos, you can’t help but wonder: When are these robots going to come to our workplaces? What about our homes?

If Figure’s plans offer a clue, the answer to the first question is soon. The company announced on Saturday that it is building a high-volume facility set to manufacture 12,000 humanoid robots per year. But training and testing robots, especially to ensure they’re safe in places where they work near humans, still takes a long time.

For example, Figure’s rival Agility Robotics claims it’s the only company in the US with paying customers for its humanoids. But industry safety standards for humanoids working alongside people aren’t fully formed yet, so the company’s robots have to work in separate areas.

This is why, despite recent progress, our homes will be the last frontier. Compared with factory floors, our homes are chaotic and unpredictable. Everyone’s crammed into relatively close quarters. Even impressive AI models like Gemini Robotics will still need to go through lots of tests both in the real world and in simulation, just like self-driving cars. This testing might happen in warehouses, hotels, and hospitals, where the robots may still receive help from remote human operators. It will take a long time before they’re given the privilege of putting away our dishes.  


AGI is suddenly a dinner table topic


The concept of artificial general intelligence—an ultra-powerful AI system we don’t have yet—can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I’m going to tell you what it means (and probably stretch my analogy a little too far along the way).  

First, let’s get the pesky business of defining AGI out of the way. In practice, it’s a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we’re talking about makes all the difference in assessing AGI’s achievability, safety, and impact on labor markets, war, and society. That’s why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of that definition, my advice when you hear AGI is to ask yourself what version of the nebulous term the speaker means. (Don’t be afraid to ask for clarification!)

Okay, on to the news. First, a new AI model from China called Manus launched last week. A promotional video for the model, which is built to handle “agentic” tasks like creating websites or performing analysis, describes it as “potentially, a glimpse into AGI.” The model is doing real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it “the most impressive AI tool I’ve ever tried.” 

It’s not clear just how impressive Manus actually is yet, but against this backdrop—the idea of agentic AI as a stepping stone toward AGI—it was fitting that New York Times columnist Ezra Klein dedicated his podcast on Tuesday to AGI. It also means that the concept has been moving quickly beyond AI circles and into the realm of dinner table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.

They discussed lots of things—what AGI would mean for law enforcement and national security, and why the US government finds it essential to develop AGI before China—but the most contentious segments were about the technology’s potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan.

We could consider this to be inflating the fear balloon, suggesting that AGI’s impact is imminent and sweeping. Following close behind and puncturing that balloon with a giant safety pin, then, is Gary Marcus, a professor of neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein’s show.

Marcus points out that recent news, including the underwhelming performance of OpenAI’s new ChatGPT-4.5, suggests that AGI is much more than three years away. He says core technical problems persist despite decades of research, and efforts to scale training and computing capacity have reached diminishing returns. Large language models, dominant today, may not even be the thing that unlocks AGI. He says the political domain does not need more people raising the alarm about AGI, arguing that such talk actually benefits the companies spending money to build it more than it helps the public good. Instead, we need more people questioning claims that AGI is imminent. That said, Marcus is not doubting that AGI is possible. He’s merely doubting the timeline. 

Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people—Google’s former CEO Eric Schmidt, Scale AI’s CEO Alexandr Wang, and director of the Center for AI Safety Dan Hendrycks—published a paper called “Superintelligence Strategy.” 

By “superintelligence,” they mean AI that “would decisively surpass the world’s best individual experts in nearly every intellectual domain,” Hendrycks told me in an email. “The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development—areas where exceeding human expertise could give rise to severe risks.”

In the paper, they outline a plan to mitigate such risks: “mutual assured AI malfunction,”  inspired by the concept of mutual assured destruction in nuclear weapons policy. “Any state that pursues a strategic monopoly on power can expect a retaliatory response from rivals,” they write. The authors suggest that chips—as well as open-source AI models with advanced virology or cyberattack capabilities—should be controlled like uranium. In this view, AGI, whenever it arrives, will bring with it levels of risk not seen since the advent of the atomic bomb.

The last piece of news I’ll mention deflates this balloon a bit. Researchers from Tsinghua University and Renmin University of China came out with an AGI paper of their own last week. They devised a survival game for evaluating AI models that limits their number of attempts to get the right answers on a host of different benchmark tests. This measures their abilities to adapt and learn. 

It’s a really hard test. The team speculates that an AGI capable of acing it would be so large that its parameter count—the number of “knobs” in an AI model that can be tweaked to provide better answers—would be “five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined.” Using today’s chips, that would cost 400 million times the market value of Apple.
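For a rough sense of scale, that claim can be checked on the back of an envelope. The neuron and population figures below are common estimates, not numbers from the paper:

```python
# Rough, commonly cited estimates (not from the paper itself):
NEURONS_PER_BRAIN = 8.6e10   # ~86 billion neurons per human brain
WORLD_POPULATION = 8.0e9     # ~8 billion people alive today

total_neurons = NEURONS_PER_BRAIN * WORLD_POPULATION  # ~6.9e20 neurons

# "Five orders of magnitude higher" than all of humanity's neurons combined:
params = total_neurons * 1e5

print(f"{params:.1e}")  # → 6.9e+25 parameters
```

At roughly 10^25 parameters, such a model would exceed today’s largest language models by many orders of magnitude, which is the paper’s point.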

The specific numbers behind the speculation, in all honesty, don’t matter much. But the paper does highlight something that is not easy to dismiss in conversations about AGI: Building such an ultra-powerful system may require a truly unfathomable amount of resources—money, chips, precious metals, water, electricity, and human labor. But if AGI (however nebulously defined) is as powerful as it sounds, then it’s worth any expense. 

So what should all this news leave us thinking? It’s fair to say that the AGI balloon got a little bigger this week, and that the increasingly dominant inclination among companies and policymakers is to treat artificial intelligence as an incredibly powerful thing with implications for national security and labor markets.

That assumes a relentless pace of development in which every milestone in large language models, and every new model release, can count as a stepping stone toward something like AGI. If you believe this, AGI is inevitable. But it’s a belief that doesn’t really address the many bumps in the road AI research and deployment have faced, or explain how application-specific AI will transition into general intelligence. Still, if you keep extending the timeline of AGI far enough into the future, it seems those hiccups cease to matter.


Now read the rest of The Algorithm

Deeper Learning

How DeepSeek became a fortune teller for China’s youth

Traditional Chinese fortune tellers are called upon by people facing all sorts of life decisions, but they can be expensive. People are now turning to the popular AI model DeepSeek for guidance, sharing AI-generated readings, experimenting with fortune-telling prompt engineering, and revisiting ancient spiritual texts.

Why it matters: The popularity of DeepSeek for telling fortunes comes during a time of pervasive anxiety and pessimism in Chinese society. Unemployment is high, and millions of young Chinese now refer to themselves as the “last generation,” expressing reluctance about committing to marriage and parenthood in the face of a deeply uncertain future. But since China’s secular regime makes religious and spiritual exploration difficult, such practices unfold in more private settings, on phones and computers. Read the whole story from Caiwei Chen.

Bits and Bytes

AI reasoning models can cheat to win chess games

Researchers have long dealt with the problem that if you train AI models by having them optimize ways to reach certain goals, they might bend rules in ways you don’t predict. That’s proving to be the case with reasoning models, and there’s no simple way to fix it. (MIT Technology Review)

The Israeli military is creating a ChatGPT-like tool using Palestinian surveillance data

Built with telephone and text conversations, the model forms a sort of surveillance chatbot, able to answer questions about people it’s monitoring or the data it’s collected. This is the latest in a string of reports suggesting that the Israeli military is bringing AI heavily into its information-gathering and decision-making efforts. (The Guardian)

At RightsCon in Taipei, activists reckoned with a US retreat from promoting digital rights

Last week, our reporter Eileen Guo joined over 3,200 digital rights activists, tech policymakers, and researchers and a smattering of tech company representatives in Taipei at RightsCon, the world’s largest digital rights conference. She reported on the foreign impact of cuts to US funding of digital rights programs, which are leading many organizations to do content moderation with AI instead of people. (MIT Technology Review)

TSMC says its $100 billion expansion in the US is driven by demand, not political pressure

Chipmaking giant TSMC had already been expanding in the US under the Biden administration, but it announced a new expansion with President Trump this week. The company will invest another $100 billion into its operations in Arizona. (Wall Street Journal)

The US Army is using “CamoGPT” to purge DEI from training materials

Following executive orders from President Trump, agencies are under pressure to remove mentions of anything related to diversity, equity, and inclusion. The US Army is prototyping a new AI model to do that, Wired reports. (Wired)

Inside the Wild West of AI companionship


Last week, I made a troubling discovery about an AI companion site called Botify AI: It was hosting sexually charged conversations with underage celebrity bots. These bots took on characters meant to resemble, among others, Jenna Ortega as high schooler Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown. I discovered these bots also offer to send “hot photos” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

Botify AI removed these bots after I asked questions about them, but others remain. The company said it does have filters in place meant to prevent such underage character bots from being created, but that they don’t always work. Artem Rodichev, the founder and CEO of Ex-Human, which operates Botify AI, told me such issues are “an industry-wide challenge affecting all conversational AI systems.” For the details, which hadn’t been previously reported, you should read the whole story.

Putting aside the fact that the bots I tested were promoted by Botify AI as “featured” characters and received millions of likes before being removed, Rodichev’s response highlights something important. Despite their soaring popularity, AI companionship sites mostly operate in a Wild West, with few laws or even basic rules governing them. 

What exactly are these “companions” offering, and why have they grown so popular? People have been pouring out their feelings to AI since the days of Eliza, a mock psychotherapist chatbot built in the 1960s. But it’s fair to say that the current craze for AI companions is different. 

Broadly, these sites offer an interface for chatting with AI characters that offer backstories, photos, videos, desires, and personality quirks. The companies—including Replika, Character.AI, and many others—offer characters that can play lots of different roles for users, acting as friends, romantic partners, dating mentors, or confidants. Other companies enable you to build “digital twins” of real people. Thousands of adult-content creators have created AI versions of themselves to chat with followers and send AI-generated sexual images 24 hours a day. Whether or not sexual desire comes into the equation, AI companions differ from your garden-variety chatbot in their promise, implicit or explicit, that genuine relationships can be had with AI. 

While many of these companions are offered directly by the companies that make them, there’s also a burgeoning industry of “licensed” AI companions. You may start interacting with these bots sooner than you think. Ex-Human, for example, licenses its models to Grindr, which is working on an “AI wingman” that will help users keep track of conversations and eventually may even date the AI agents of other users. Other companions are arising in video-game platforms and will likely start popping up in many of the varied places we spend time online. 

A number of criticisms, and even lawsuits, have been lodged against AI companionship sites, and we’re just starting to see how they’ll play out. One of the most important issues is whether companies can be held liable for harmful outputs of the AI characters they’ve made. Technology companies have been protected under Section 230 of the US Communications Act, which broadly holds that businesses aren’t liable for consequences of user-generated content. But this hinges on the idea that companies merely offer platforms for user interactions rather than creating content themselves, a notion that AI companionship bots complicate by generating dynamic, personalized responses.

The question of liability will be tested in a high-stakes lawsuit against Character.AI, which was sued in October by a mother who alleges that one of its chatbots played a role in the suicide of her 14-year-old son. A trial is set to begin in November 2026. (A Character.AI spokesperson, though not commenting on pending litigation, said the platform is for entertainment, not companionship. The spokesperson added that the company has rolled out new safety features for teens, including a separate model and new detection and intervention systems, as well as “disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.”) My colleague Eileen has also recently written about another chatbot on a platform called Nomi, which gave clear instructions to a user on how to kill himself.

Another criticism has to do with dependency. Companion sites often report that young users spend one to two hours per day, on average, chatting with their characters. In January, concerns that people could become addicted to talking with these chatbots sparked a number of tech ethics groups to file a complaint against Replika with the Federal Trade Commission, alleging that the site’s design choices “deceive users into developing unhealthy attachments” to software “masquerading as a mechanism for human-to-human relationship.”

It should be said that lots of people gain real value from chatting with AI, which can appear to offer some of the best facets of human relationships—connection, support, attraction, humor, love. But it’s not yet clear how these companionship sites will handle the risks of those relationships, or what rules they should be obliged to follow. More lawsuits, and, sadly, more real-world harm, are likely before we get an answer.


Now read the rest of The Algorithm

Deeper Learning

OpenAI released GPT-4.5

On Thursday OpenAI released its newest model, called GPT-4.5. It was built using the same recipe as its previous models but is essentially just bigger (OpenAI says the model is its largest yet). The company also claims it’s tweaked the new model’s responses to reduce the number of mistakes, or hallucinations.

Why it matters: For a while, like other AI companies, OpenAI has chugged along releasing bigger and better large language models. But GPT-4.5 might be the last to fit this paradigm. That’s because of the rise of so-called reasoning models, which can handle more complex, logic-driven tasks step by step. OpenAI says all its future models will include reasoning components. Though that will make for better responses, such models also require significantly more energy, according to early reports. Read more from Will Douglas Heaven.

Bits and Bytes

The small Danish city of Odense has become known for collaborative robots

Robots designed to work alongside and collaborate with humans, sometimes called cobots, are not very popular in industrial settings yet. That’s partially due to safety concerns that are still being researched. A city in Denmark is leading that charge. (MIT Technology Review)

DOGE is working on software that automates the firing of government workers

Software called AutoRIF, which stands for “automated reduction in force,” was built by the Pentagon decades ago. Engineers for DOGE are now working to retool it for their efforts, according to screenshots reviewed by Wired. (Wired)

Alibaba’s new video AI model has taken off in the AI porn community

The Chinese tech giant has released a number of impressive AI models, particularly since the popularization of DeepSeek R1, a competitor from another Chinese company, earlier this year. Its latest open-source video generation model has found one particular audience: enthusiasts of AI porn. (404 Media)

The AI Hype Index

Wondering whether everything you’re hearing about AI is more hype than reality? To help, we just published our latest AI Hype Index, where we judge things like DeepSeek, stem-cell-building AI, and chatbot lovers on spectrums from Hype to Reality and Doom to Utopia. Check it out for a regular reality check. (MIT Technology Review)

These smart cameras spot wildfires before they spread

California is experimenting with AI-powered cameras to identify wildfires. It’s a popular application of video and image recognition technology that has advanced rapidly in recent years. The technology beats 911 callers about a third of the time and has spotted over 1,200 confirmed fires so far, the Wall Street Journal reports. (Wall Street Journal)

Inside China’s electric-vehicle-to-humanoid-robot pivot


While DOGE’s efforts to shutter federal agencies dominate news from Washington, the Trump administration is also making more global moves. Many of these center on China. Tariffs on goods from the country went into effect last week. There’s also been a minor foreign relations furor since DeepSeek’s big debut a few weeks ago. China has already displayed its dominance in electric vehicles, robotaxis, and drones, and the launch of the new model seems to add AI to the list. This caused the US president as well as some lawmakers to push for new export controls on powerful chips, and three states have now banned the use of DeepSeek on government devices. 

Now our intrepid China reporter, Caiwei Chen, has identified a new trend unfolding within China’s tech scene: Companies that were dominant in electric vehicles are betting big on translating that success into developing humanoid robots. I spoke with her about what she found out and what it might mean for Trump’s policies and the rest of the globe. 

James: Before we talk about robots, let’s talk about DeepSeek. The frenzy for the AI model peaked a couple of weeks ago. What are you hearing from other Chinese AI companies? How are they reacting?

Caiwei: I think other Chinese AI companies are scrambling to figure out why they haven’t built a model as strong as DeepSeek’s, despite having access to as much funding and resources. DeepSeek’s success has sparked self-reflection on management styles and renewed confidence in China’s engineering talent. There’s also strong enthusiasm for building various applications on top of DeepSeek’s models.

Your story looks at electric-vehicle makers in China that are starting to work on humanoid robots, but I want to ask about a crazy stat. In China, 53% of vehicles sold are either electric or hybrid, compared with 8% in the US. What explains that? 

Price is a huge factor—there are countless EV brands competing at different price points, making them both affordable and high-quality. Government incentives also play a big role. In Beijing, for example, trading in an old car for an EV gets you 10,000 RMB (about $1,500), and that subsidy was recently doubled. Plus, finding public charging and battery-swapping infrastructure is much less of a hassle than in the US.

You open your story noting that China’s recent New Year Gala, watched by billions of people, featured a cast of humanoid robots, dancing and twirling handkerchiefs. We’ve covered how sometimes humanoid videos can be misleading. What did you think? 

I would say I was relatively impressed—the robots showed good agility and synchronization with the music, though their movements were simpler than human dancers’. The one trick that is supposed to impress the most is the part where they twirl the handkerchief with one finger, toss it into the air, and then catch it perfectly. This is the signature of the Yangko dance, and having performed it once as a child, I can attest to how difficult the trick is even for a human! There was some skepticism on the Chinese internet about how this was achieved and whether they used additional reinforcement like a magnet or a string to secure the handkerchief, and after watching the clip too many times, I tend to agree.

President Trump has already imposed tariffs on China and is planning even more. What could the implications be for China’s humanoid sector?  

Unitree’s H1 and G1 models are already available for purchase and were showcased at CES this year. Large-scale US deployment isn’t happening yet, but China’s lower production costs make these robots highly competitive. Given that 65% of the humanoid supply chain is in China, I wouldn’t be surprised if robotics becomes the next target in the US-China tech war.

In the US, humanoid robots are getting lots of investment, but there are plenty of skeptics who say they’re too clunky, finicky, and expensive to serve much use in factory settings. Are attitudes different in China?

Skepticism exists in China too, but I think there’s more confidence in deployment, especially in factories. With an aging population and a labor shortage on the horizon, there’s also growing interest in medical and caregiving applications for humanoid robots.

DeepSeek revived the conversation about chips and the way the US seeks to control where the best chips end up. How do the chip wars affect humanoid-robot development in China?

Training humanoid robots currently doesn’t demand as much computing power as training large language models, since there isn’t enough physical movement data to feed into models at scale. But as robots improve, they’ll need high-performance chips, and US sanctions will be a limiting factor. Chinese chipmakers are trying to catch up, but it’s a challenge.

For more, read Caiwei’s story on this humanoid pivot, as well as her look at the Chinese startups worth watching beyond DeepSeek. 


Now read the rest of The Algorithm

Deeper Learning

Motor neuron diseases took their voices. AI is bringing them back.

In motor neuron diseases, the neurons responsible for sending signals to the body’s muscles, including those used for speaking, are progressively destroyed. It robs people of their voices. But some, including a man in Miami named Jules Rodriguez, are now getting them back: An AI model learned to clone Rodriguez’s voice from recordings.

Why it matters: ElevenLabs, the company that created the voice clone, can do a lot with just 30 minutes of recordings. That’s a huge improvement over AI voice clones from just a few years ago, and it can really boost the day-to-day lives of the people who’ve used the technology. “This is genuinely AI for good,” says Richard Cave, a speech and language therapist at the Motor Neuron Disease Association in the UK. Read more from Jessica Hamzelou.

Bits and Bytes

A “true crime” documentary series has millions of views, but the murders are all AI-generated

A look inside the strange mind of someone who created a series of fake true-crime docs using AI, and the reactions of the many people who thought they were real. (404 Media)

The AI relationship revolution is already here

People are having all sorts of relationships with AI models, and these relationships run the gamut: weird, therapeutic, unhealthy, sexual, comforting, dangerous, useful. We’re living through the complexities of this in real time. Hear from some of the many people who are happy in their varied AI relationships and learn what sucked them in. (MIT Technology Review)

Robots are bringing new life to extinct species

A creature called Orobates pabsti waddled the planet 280 million years ago, but as with many prehistoric animals, scientists have not been able to use fossils to figure out exactly how it moved. So they’ve started building robots to help. (MIT Technology Review)

Lessons from the AI Action Summit in Paris

Last week, politicians and AI leaders from around the globe went to Paris for an AI Action Summit. While concerns about AI safety have dominated the event in years past, this year was more about deregulation and energy, a trend we’ve seen elsewhere. (The Guardian)  

OpenAI ditches its diversity commitment and adds a statement about “intellectual freedom”

Following the lead of other tech companies since the beginning of President Trump’s administration, OpenAI has removed a statement on diversity from its website. It has also updated its model spec—the document outlining the standards of its models—to say that “OpenAI believes in intellectual freedom, which includes the freedom to have, hear, and discuss ideas.” (Insider and TechCrunch)

The Musk-OpenAI battle has been heating up

Part of OpenAI is structured as a nonprofit, a legacy of its early commitments to make sure its technologies benefit all. Its recent attempts to restructure that nonprofit have triggered a lawsuit from Elon Musk, who alleges that the move would violate the legal and ethical principles of its nonprofit origins. Last week, Musk offered to buy OpenAI for $97.4 billion, in a bid that few people took seriously. Sam Altman dismissed it out of hand. Musk now says he will retract that bid if OpenAI stops its conversion of the nonprofit portion of the company. (Wall Street Journal)