This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Lots of influential people in tech last week were describing Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them (one person used the platform to help him negotiate a deal on a new car). Sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?
The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon.
Back in 2014, someone set up a game of Pokémon in which the main character could be controlled by anyone on the internet via the streaming platform Twitch. Playing was as clunky as it sounds, but it was incredibly popular: at one point, a million people were playing the game at the same time.
“It was yet another weird online social experiment that got picked up by the mainstream media: What did this mean for the future?” Will says. “Not a lot, it turned out.”
The frenzy around Moltbook struck Will as similar, and it turned out that one of the sources he spoke to had been thinking about Pokémon too. Jason Schloetzer, at the Georgetown Psaros Center for Financial Markets and Policy, saw the whole thing as a sort of Pokémon battle for AI enthusiasts, in which they created AI agents and deployed them to interact with other agents. In this light, the news that many AI agents were actually being instructed by people to say things that made them sound sentient or intelligent makes a whole lot more sense.
“It’s basically a spectator sport,” he told Will, “but for language models.”
Will wrote an excellent piece about why Moltbook was not the glimpse into the future that it was said to be. Even if you are excited about a future of agentic AI, he points out, there are some key pieces that Moltbook made clear are still missing. It was a forum of chaos, but a genuinely helpful hive mind would require more coordination, shared objectives, and shared memory.
“More than anything else, I think Moltbook was the internet having fun,” Will says. “The biggest question that now leaves me with is: How far will people push AI just for the laughs?”
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably.
On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI (like a video about “Christmas after mass deportations”).
But I received two types of reactions from readers that may explain just as much about the epistemic crisis we’re in.
One was from people who weren’t surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, one that made her appear hysterical and in tears. Kaelan Dorr, the White House’s deputy communications director, did not respond to questions about whether the White House altered the photo but wrote, “The memes will continue.”
The second was from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, because news outlets were apparently doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) shared an image of Alex Pretti that was AI-edited and appeared to make him look more handsome, a fact that led to many viral clips this week, including one from Joe Rogan’s podcast. Fight fire with fire, in other words? A spokesperson for MS Now told Snopes that the news outlet aired the image without knowing it was edited.
There is no reason to collapse these two cases of altered content into the same category, or to read them as evidence that truth no longer matters. One involved the US government sharing a clearly altered photo with the public and declining to answer whether it was intentionally manipulated; the other involved a news outlet airing a photo it should have known was altered but taking some steps to disclose the mistake.
What these reactions reveal instead is a flaw in how we were collectively preparing for this moment. Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what is real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that these tools are failing, and that while vetting the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.
For example, there was plenty of hype in 2024 about the Content Authenticity Initiative, cofounded by Adobe and adopted by major tech companies, which would attach labels to content disclosing when it was made, by whom, and whether AI was involved. But Adobe applies automatic labels only when the content is wholly AI-generated. Otherwise the labels are opt-in on the part of the creator.
And platforms like X, where the altered arrest photo was posted, can strip content of such labels anyway (a note that the photo was altered was added by users). Platforms can also simply choose not to show the label; indeed, when Adobe launched the initiative, it noted that the Pentagon’s website for sharing official images, DVIDS, would display the labels to prove authenticity, but a review of the website today shows no such labels.
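To make the idea of these labels concrete, here is a rough sketch of how a provenance check works in principle. The JSON below is not the real C2PA schema; the field names and values are simplified stand-ins invented for illustration. But it captures the basic mechanism: a signed manifest travels with the file, records who made it and whether AI was involved, and becomes useless the moment a platform or a screenshot strips it away.

```python
# Illustrative sketch only: a simplified stand-in for a content-provenance
# manifest. Field names are invented for clarity and do not match the real
# C2PA schema used by the Content Authenticity Initiative.
import json

manifest_json = """
{
  "claim_generator": "ExampleEditor 1.0",
  "created": "2026-01-22T14:03:00Z",
  "assertions": [
    {"action": "created", "source_type": "ai_generated"},
    {"action": "edited", "tool": "generative_fill"}
  ],
  "signature_valid": true
}
"""

def summarize_provenance(raw: str) -> str:
    """Report whether a (simplified) manifest discloses AI involvement."""
    manifest = json.loads(raw)
    if not manifest.get("signature_valid"):
        return "Manifest present but signature invalid: treat provenance as unknown."
    ai_assertions = [
        a for a in manifest.get("assertions", [])
        if "ai" in a.get("source_type", "") or "generative" in a.get("tool", "")
    ]
    if ai_assertions:
        return (f"AI involvement disclosed ({len(ai_assertions)} assertion(s)); "
                f"created with {manifest['claim_generator']}.")
    return f"No AI involvement disclosed; created with {manifest['claim_generator']}."

print(summarize_provenance(manifest_json))
# If a platform strips the metadata entirely, there is nothing left to check.
```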
Noticing how much traction the White House’s photo got even after it was shown to be AI-altered, I was struck by the findings of a very relevant new paper published in the journal Communications Psychology. In the study, participants watched a deepfake “confession” to a crime, and the researchers found that even when they were told explicitly that the evidence was fake, participants relied on it when judging an individual’s guilt. In other words, even when people learn that the content they’re looking at is entirely fake, they remain emotionally swayed by it.
“Transparency helps, but it isn’t enough on its own,” the disinformation expert Christopher Nehring wrote recently about the study’s findings. “We have to develop a new masterplan of what to do about deepfakes.”
AI tools to generate and edit content are getting more advanced, easier to operate, and cheaper to run—all reasons why the US government is increasingly paying to use them. We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.
Update: This story was updated on February 2 with details about how Adobe applies its content authenticity labels.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
How do tech companies check if their users are kids?
This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years, Big Tech asked for birthdays (which users could simply make up) to avoid violating child privacy laws, but companies weren’t required to moderate content accordingly. Two developments over the last week show how quickly things are changing in the US and how this issue is becoming a new battleground, even among parents and child-safety advocates.
In one corner is the Republican Party, which has supported laws passed in several states that require sites with adult content to verify users’ ages. Critics say this provides cover to block anything deemed “harmful to minors,” which could include sex education. Other states, like California, are coming after AI companies with laws to protect kids who talk to chatbots (by requiring the companies to verify which users are kids). Meanwhile, President Trump is attempting to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.
So what might happen? The debate is quickly moving away from whether age verification is necessary and toward who will be responsible for it. This responsibility is a hot potato that no company wants to hold.
In a blog post last Tuesday, OpenAI revealed that it plans to roll out automatic age prediction. In short, the company will apply a model that uses signals such as the time of day a person is chatting to predict whether that person is under 18. For those identified as teens or children, ChatGPT will apply filters to “reduce exposure” to content like graphic violence or sexual role-play. YouTube launched something similar last year.
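OpenAI hasn’t published how its predictor works, so take the sketch below as nothing more than a toy illustration of the general approach: score a handful of behavioral signals and fall back to identity verification when the answer matters. Every feature, weight, and threshold here is an assumption of mine, not anything disclosed by the company.

```python
# Toy sketch of a signal-based age predictor. None of these features, weights,
# or thresholds come from OpenAI; they only illustrate the general idea of
# combining behavioral signals into an under-18 guess.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    local_hour: int            # hour of day the user is chatting (0-23)
    account_age_days: int      # how long the account has existed
    school_topic_ratio: float  # share of recent chats on homework-style topics (0-1)

def likely_minor(signals: SessionSignals, threshold: float = 0.5) -> bool:
    """Return True if the combined score suggests the user may be under 18."""
    score = 0.0
    if 15 <= signals.local_hour <= 21:    # after-school hours (assumed signal)
        score += 0.3
    if signals.account_age_days < 90:     # newer accounts skew younger (assumed)
        score += 0.2
    score += 0.5 * signals.school_topic_ratio
    return score >= threshold

# Example: a new account chatting about homework at 4 p.m. gets the teen
# experience unless the user proves otherwise via ID or selfie verification.
print(likely_minor(SessionSignals(local_hour=16, account_age_days=10, school_topic_ratio=0.8)))
```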
If you support age verification but are concerned about privacy, this might sound like a win. But there’s a catch. The system is not perfect, of course, so it could classify a child as an adult or vice versa. People who are wrongly labeled under 18 can verify their identity by submitting a selfie or government ID to a company called Persona.
Selfie verifications have issues: They fail more often for people of color and those with certain disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says the fact that Persona will need to hold millions of government IDs and masses of biometric data is another weak point. “When those get breached, we’ve exposed massive populations all at once,” he says.
Hinduja instead advocates for device-level verification, where a parent specifies a child’s age when setting up the child’s phone for the first time. This information is then kept on the device and shared securely with apps and websites.
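To picture how that could work, here is a minimal sketch. The API below is entirely hypothetical; no operating system ships it under these names. The point is simply that an app receives only a coarse age bracket, while the birthday, the ID, and the parent’s declaration never leave the device.

```python
# Hypothetical device-level age signal: the OS stores the bracket a parent
# declared at setup, and apps only ever see a coarse answer, never an ID.
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN = "13_17"
    ADULT = "18_plus"

class Device:
    def __init__(self, parent_declared_bracket: AgeBracket):
        self._bracket = parent_declared_bracket  # kept on the device

    def age_bracket(self, requesting_app: str) -> AgeBracket:
        # A real system would record consent and check that the requesting
        # app is entitled to ask; here we just return the coarse bracket.
        return self._bracket

phone = Device(AgeBracket.TEEN)
if phone.age_bracket("example.chat.app") is not AgeBracket.ADULT:
    print("Apply teen content filters; no ID or selfie ever leaves the device.")
```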
That’s more or less what Tim Cook, the CEO of Apple, recently lobbied US lawmakers to call for. Cook was fighting lawmakers who wanted to require app stores to verify ages, which would saddle Apple with lots of liability.
More signals of where this is all headed will come on Wednesday, when the Federal Trade Commission—the agency that would be responsible for enforcing these new laws—is holding an all-day workshop on age verification. Apple’s head of government affairs, Nick Rossi, will be there. He’ll be joined by higher-ups in child safety at Google and Meta, as well as a company that specializes in marketing to children.
The FTC has become increasingly politicized under President Trump (his firing of the sole Democratic commissioner was struck down by a federal court, a decision that is now pending review by the US Supreme Court). In July, I wrote about signals that the agency is softening its stance toward AI companies. Indeed, in December, the FTC overturned a Biden-era ruling against an AI company that allowed people to flood the internet with fake product reviews, writing that it clashed with President Trump’s AI Action Plan.
Wednesday’s workshop may shed light on how partisan the FTC’s approach to age verification will be. Red states favor laws that require porn websites to verify ages (but critics warn this could be used to block a much wider range of content). Bethany Soye, a Republican state representative who is leading an effort to pass such a bill in her state of South Dakota, is scheduled to speak at the FTC meeting. The ACLU generally opposes laws requiring IDs to visit websites and has instead advocated for an expansion of existing parental controls.
While all this gets debated, though, AI has set the world of child safety on fire. We’re dealing with increased generation of child sexual abuse material, concerns (and lawsuits) about suicides and self-harm following chatbot conversations, and troubling evidence of kids’ forming attachments to AI companions. Colliding stances on privacy, politics, free expression, and surveillance will complicate any effort to find a solution. Write to me with your thoughts.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
I decided to go to CES kind of at the last minute. Over the holiday break, contacts from China kept messaging me about their travel plans. After the umpteenth “See you in Vegas?” I caved. As a China tech writer based in the US, I have one week a year when my entire beat seems to come to me—no 20-hour flights required.
CES, the Consumer Electronics Show, is the world’s biggest tech show, where companies launch new gadgets and announce new developments, and it happens every January. This year, it attracted over 148,000 attendees and over 4,100 exhibitors. It sprawls across the Las Vegas Convention Center, the city’s biggest exhibition space, and spills over into adjacent hotels.
China has long had a presence at CES, but this year it showed up in a big way. Chinese exhibitors accounted for nearly a quarter of all companies at the show, and in pockets like AI hardware and robotics, China’s presence felt especially dominant. On the floor, I saw tons of Chinese industry attendees roaming around, plus a notable number of Chinese VCs. Multiple experienced CES attendees told me this is the first post-covid CES where China was present in a way you couldn’t miss. Last year might have been trending that way too, but a lot of Chinese attendees reportedly ran into visa denials. Now AI has become the universal excuse, and reason, to make the trip.
As expected, AI was the biggest theme this year, seen on every booth wall. It’s both the biggest thing everyone is talking about and a deeply confusing marketing gimmick. “We added AI” is slapped onto everything from the reasonable (PCs, phones, TVs, security systems) to the deranged (slippers, hair dryers, bed frames).
Consumer AI gadgets still feel early and of very uneven quality. The most common categories are educational devices and emotional support toys—which, as I’ve written about recently, are all the rage in China. There are some memorable ones: Luka AI makes a robotic panda that scuttles around and keeps a watchful eye on your baby. Fuzozo, a fluffy keychain-size AI robot, is basically a digital pet in physical form. It comes with a built-in personality and reacts to how you treat it. The companies selling these just hope you won’t think too hard about the privacy implications.
Ian Goh, an investor at 01.VC, told me China’s manufacturing advantage gives it a unique edge in AI consumer electronics, because a lot of Western companies feel they simply cannot fight and win in the arena of hardware.
Another area where Chinese companies seem to be at the head of the pack is household electronics. The products they make are becoming impressively sophisticated. Home robots, 360 cams, security systems, drones, lawn-mowing machines, pool heat pumps … Did you know two Chinese brands basically dominate the market for home cleaning robots in the US and are eating the lunch of Dyson and Shark? Did you know almost all the suburban yard tech you can buy in the West comes from Shenzhen, even though that whole backyard-obsessed lifestyle barely exists in China? This stuff is so sleek that you wouldn’t clock it as Chinese unless you went looking. The old “cheap and repetitive” stereotype doesn’t explain what I saw. I walked away from CES feeling that I needed a major home appliance upgrade.
Of course, appliances are a safe, mature market. On the more experiential front, humanoid robots were a giant magnet for crowds, and Chinese companies put on a great show. Every robot seemed to be dancing, in styles from Michael Jackson to K-pop to lion dancing, some even doing back flips. Hangzhou-based Unitree even set up a boxing ring where people could “challenge” its robots. The robot fighters were about half the size of an adult human and the matches often ended in a robot knockout, but that’s not really the point. What Unitree was actually showing off was its robots’ stability and balance: they got shoved, stumbled across the ring, and stayed upright, recovering mid-motion. Beyond flexing dynamic movements like these there were also impressive showcases of dexterity: Robots could be seen folding paper pinwheels, doing laundry, playing piano, and even making latte art.
However, most of these robots, even the good ones, are one-trick ponies. They’re optimized for a specific task on the show floor. I tried to make one fold a T-shirt after I’d flipped the garment around, and it got confused very quickly.
Still, they’re getting a lot of hype as an important next frontier because they could help drag AI out of text boxes and into the physical world. As LLMs mature, vision-language models feel like the logical next step. But then you run into the big problem: There’s far less physical-world data than text data to train AI on. Humanoid robots become both applications and roaming data-collection terminals. China is uniquely positioned here because of supply chains, manufacturing depth, and spillover from adjacent industries (EVs, batteries, motors, sensors), and it’s already developing a humanoid training industry, as Rest of World reported recently.
Most Chinese companies believe that if you can manufacture at scale, you can innovate, and they’re not wrong. A lot of the confidence in China’s nascent humanoid robot industry and beyond is less about a single breakthrough and more about “We can iterate faster than the West.”
Chinese companies are not just selling gadgets, though—they’re working on every layer of the tech stack: not just end products but frameworks, tooling, IoT enablement, and spatial data. Open-source culture feels deeply embedded; engineers from Hangzhou, China’s new “little Silicon Valley,” tell me there are AI hackathons in the city every week.
Indeed, the headline innovations at CES 2026 were not in devices but in the cloud: platforms, ecosystems, enterprise deployments, and “hybrid AI” (cloud + on-device) applications. Lenovo threw the buzziest main-stage events this year, and yes, there were PCs—but the core story was its cross-device AI agent system, Qira, and a partnership pitch with Nvidia aimed at AI cloud providers. Nvidia’s CEO, Jensen Huang, launched Vera Rubin, a new data-center platform, claiming it would dramatically lower the cost of training and running AI. AMD’s CEO, Lisa Su, introduced Helios, another data-center system built to run huge AI workloads. These announcements point to the ballooning AI computing workload at data centers, and to the real race: making cloud services cheap and powerful enough to keep up.
As I spoke with attendees connected to China’s tech scene, the overall mood was one of cautious optimism. At a house party I went to, VCs and founders from China were mingling effortlessly with Bay Area transplants. Everyone is building something. Almost no one wants to just make money from Chinese consumers anymore. The new default is: Build in China, sell to the world, and treat the US market like the proving ground.
Demis Hassabis, the CEO of Google DeepMind, was replying on X to an overexcited post by Sébastien Bubeck, a research scientist at the rival firm OpenAI, announcing that two mathematicians had used OpenAI’s latest large language model, GPT-5, to find solutions to 10 unsolved problems in mathematics. “Science acceleration via AI has officially begun,” Bubeck crowed.
Put your math hats on for a minute, and let’s take a look at what this beef from mid-October was about. It’s a perfect example of what’s wrong with AI right now.
Bubeck was excited that GPT-5 seemed to have somehow solved a number of puzzles known as Erdős problems.
Paul Erdős, one of the most prolific mathematicians of the 20th century, left behind hundreds of puzzles when he died. To help keep track of which ones have been solved, Thomas Bloom, a mathematician at the University of Manchester, UK, set up erdosproblems.com, which lists more than 1,100 problems and notes that around 430 of them come with solutions.
When Bubeck celebrated GPT-5’s breakthrough, Bloom was quick to call him out. “This is a dramatic misrepresentation,” he wrote on X. Bloom explained that a problem isn’t necessarily unsolved if this website does not list a solution. That simply means Bloom wasn’t aware of one. There are millions of mathematics papers out there, and nobody has read all of them. But GPT-5 probably has.
It turned out that instead of coming up with new solutions to 10 unsolved problems, GPT-5 had scoured the internet for 10 existing solutions that Bloom hadn’t seen before. Oops!
There are two takeaways here. One is that breathless claims about big breakthroughs shouldn’t be made via social media: Less knee jerk and more gut check.
The second is that GPT-5’s ability to dig up references to previous work that Bloom wasn’t aware of is genuinely impressive. The hype overshadowed something that should have been pretty cool in itself.
Mathematicians are very interested in using LLMs to trawl through vast numbers of existing results, François Charton, a research scientist who studies the application of LLMs to mathematics at the AI startup Axiom Math, told me when I talked to him about this Erdős gotcha.
But literature search is dull compared with genuine discovery, especially to AI’s fervent boosters on social media. Bubeck’s blunder isn’t the only example.
In August, a pair of mathematicians showed that no LLM at the time was able to solve a math puzzle known as Yu Tsumura’s 554th Problem. Two months later, social media erupted with evidence that GPT-5 now could. “Lee Sedol moment is coming for many,” one observer commented, referring to the Go master who lost to DeepMind’s AI AlphaGo in 2016.
But Charton pointed out that solving Yu Tsumura’s 554th Problem isn’t a big deal to mathematicians. “It’s a question you would give an undergrad,” he said. “There is this tendency to overdo everything.”
Meanwhile, more sober assessments of what LLMs may or may not be good at are coming in. At the same time that mathematicians were fighting on the internet about GPT-5, two new studies came out that looked in depth at the use of LLMs in medicine and law (two fields that model makers have claimed their tech excels at).
But that’s not the kind of message that goes down well on X. “You’ve got that excitement because everybody is communicating like crazy—nobody wants to be left behind,” Charton said. X is where a lot of AI news drops first, it’s where new results are trumpeted, and it’s where key players like Sam Altman, Yann LeCun, and Gary Marcus slug it out in public. It’s hard to keep up—and harder to look away.
Bubeck’s post was only embarrassing because his mistake was caught. Not all errors are. Unless something changes, researchers, investors, and boosters of all stripes will keep teeing each other up. “Some of them are scientists, many are not, but they are all nerds,” Charton told me. “Huge claims work very well on these networks.”
*****
There’s a coda! I wrote everything you’ve just read above for the Algorithm column in the January/February 2026 issue of MIT Technology Review magazine (out very soon). Two days after that went to press, Axiom told me its own math model, AxiomProver, had solved two open Erdős problems (#124 and #481, for the math fans in the room). That’s impressive stuff for a small startup founded just a few months ago. Yup—AI moves fast!
But that’s not all. Five days later the company announced that AxiomProver had solved nine out of 12 problems in this year’s Putnam competition, a college-level math challenge that some people consider harder than the better-known International Math Olympiad (which LLMs from both Google DeepMind and OpenAI aced a few months back).
The Putnam result was lauded on X by big names in the field, including Jeff Dean, chief scientist at Google DeepMind, and Thomas Wolf, cofounder at the AI firm Hugging Face. Once again familiar debates played out in the replies. A few researchers pointed out that while the International Math Olympiad demands more creative problem-solving, the Putnam competition tests math knowledge—which makes it notoriously hard for undergrads, but easier, in theory, for LLMs that have ingested the internet.
How should we judge Axiom’s achievements? Not on social media, at least. And the eye-catching competition wins are just a starting point. Determining just how good LLMs are at math will require a deeper dive into exactly what these models are doing when they solve hard (read: hard for humans) math problems.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Can I ask you a question: How do you feel about AI right now? Are you still excited? When you hear that OpenAI or Google just dropped a new model, do you still get that buzz? Or has the shine come off it, maybe just a teeny bit? Come on, you can be honest with me.
Truly, I feel kind of stupid even asking the question, like a spoiled brat who has too many toys at Christmas. AI is mind-blowing. It’s one of the most important technologies to have emerged in decades (despite all its many many drawbacks and flaws and, well, issues).
At the same time I can’t help feeling a little bit: Is that it?
If you feel the same way, there’s good reason for it: The hype we have been sold for the past few years has been overwhelming. We were told that AI would solve climate change. That it would reach human-level intelligence. That it would mean we no longer had to work!
Instead we got AI slop, chatbot psychosis, and tools that urgently prompt you to write better email newsletters. Maybe we got what we deserved. Or maybe we need to reevaluate what AI is for.
As my colleague Will Douglas Heaven puts it in the package’s intro essay, “You can’t help but wonder: When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental?”
Elsewhere in the package, James O’Donnell looks at Sam Altman, the ultimate AI hype man, through the medium of his own words. And Alex Heath explains the AI bubble, laying out for us what it all means and what we should look out for.
Michelle Kim analyzes one of the biggest claims in the AI hype cycle: that AI would completely eliminate the need for certain classes of jobs. If ChatGPT can pass the bar, surely that means it will replace lawyers? Well, not yet, and maybe not ever.
Similarly, Edd Gent tackles the big question around AI coding. Is it as good as it sounds? Turns out the jury is still out. And elsewhere David Rotman looks at the real-world work that needs to be done before AI materials discovery has its breakthrough ChatGPT moment.
Meanwhile, Garrison Lovely spends time with some of the biggest names in the AI safety world and asks: Are the doomers still okay? I mean, now that people are feeling a bit less scared about their impending demise at the hands of superintelligent AI? And Margaret Mitchell reminds us that hype around generative AI can blind us to the AI breakthroughs we should really celebrate.
Let’s remember: AI was here before ChatGPT and it will be here after. This hype cycle has been wild, and we don’t know what its lasting impact will be. But AI isn’t going anywhere. We shouldn’t be so surprised that those dreams we were sold haven’t come true—yet.
The more likely story is that the real winners, the killer apps, are still to come. And a lot of money is being bet on that prospect. So yes: The hype could never sustain itself over the short term. Where we’re at now is maybe the start of a post-hype phase. In an ideal world, this hype correction will reset expectations.
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next two weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.
This week, Richard Waters, FT columnist and former West Coast editor, talks with MIT Technology Review’s editor at large David Rotman about the true impact of AI on the job market.
Bonus: If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside MIT Technology Review’s editor in chief, Mat Honan, for an exclusive conversation live on Tuesday, December 9 at 1pm ET about this topic. Sign up to be a part here.
Richard Waters writes:
Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.
At one extreme, AI coding assistants have revolutionized the work of software developers. Mark Zuckerberg recently predicted that half of Meta’s code would be written by AI within a year. At the other extreme, most companies are seeing little if any benefit from their initial investments. A widely cited study from MIT found that so far, 95% of gen AI projects produce zero return.
That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business.
To many students of tech history, though, the lack of immediate impact is just the normal lag associated with transformative new technologies. Erik Brynjolfsson, then an assistant professor at MIT, first described what he called the “productivity paradox of IT” in the early 1990s. Despite plenty of anecdotal evidence that technology was changing the way people worked, it wasn’t showing up in the aggregate data in the form of higher productivity growth. Brynjolfsson’s conclusion was that it just took time for businesses to adapt.
Big investments in IT finally showed through with a notable rebound in US productivity growth starting in the mid-1990s. But that tailed off a decade later and was followed by a second lull.
In the case of AI, companies need to build new infrastructure (particularly data platforms), redesign core business processes, and retrain workers before they can expect to see results. If a lag effect explains the slow results, there may at least be reasons for optimism: Much of the cloud computing infrastructure needed to bring generative AI to a wider business audience is already in place.
The opportunities and the challenges are both enormous. An executive at one Fortune 500 company says his organization has carried out a comprehensive review of its use of analytics and concluded that its workers, overall, add little or no value. Rooting out the old software and replacing that inefficient human labor with AI might yield significant results. But, as this person says, such an overhaul would require big changes to existing processes and take years to carry out.
There are some early encouraging signs. US productivity growth, stuck at 1% to 1.5% for more than a decade and a half, rebounded to more than 2% last year. It probably hit the same level in the first nine months of this year, though the lack of official data due to the recent US government shutdown makes this impossible to confirm.
It is impossible to tell, though, how durable this rebound will be or how much can be attributed to AI. The effects of new technologies are seldom felt in isolation. Instead, the benefits compound. AI is riding earlier investments in cloud and mobile computing. In the same way, the latest AI boom may only be the precursor to breakthroughs in fields that have a wider impact on the economy, such as robotics. ChatGPT might have caught the popular imagination, but OpenAI’s chatbot is unlikely to have the final word.
David Rotman replies:
This is my favorite discussion these days when it comes to artificial intelligence. How will AI affect overall economic productivity? Forget about the mesmerizing videos, the promise of companionship, and the prospect of agents to do tedious everyday tasks—the bottom line will be whether AI can grow the economy, and that means increasing productivity.
But, as you say, it’s hard to pin down just how AI is affecting such growth or how it will do so in the future. Erik Brynjolfsson predicts that, like other so-called general purpose technologies, AI will follow a J curve in which initially there is a slow, even negative, effect on productivity as companies invest heavily in the technology before finally reaping the rewards. And then the boom.
But there is a counterexample undermining the just-be-patient argument. Productivity growth from IT picked up in the mid-1990s but since the mid-2000s has been relatively dismal. Despite smartphones and social media and apps like Slack and Uber, digital technologies have done little to produce robust economic growth. A strong productivity boost never came.
Daron Acemoglu, an economist at MIT and a 2024 Nobel Prize winner, argues that the productivity gains from generative AI will be far smaller and take far longer than AI optimists think. The reason is that though the technology is impressive in many ways, the field is too narrowly focused on products that have little relevance to the largest business sectors.
The statistic you cite that 95% of AI projects lack business benefits is telling.
Take manufacturing. No question, some version of AI could help; imagine a worker on the factory floor snapping a picture of a problem and asking an AI agent for advice. The problem is that the big tech companies creating AI aren’t really interested in solving such mundane tasks, and their large foundation models, mostly trained on the internet, aren’t all that helpful.
It’s easy to blame the lack of productivity impact from AI so far on business practices and poorly trained workers. Your example of the executive of the Fortune 500 company sounds all too familiar. But it’s more useful to ask how AI can be trained and fine-tuned to give workers, like nurses and teachers and those on the factory floor, more capabilities and make them more productive at their jobs.
The distinction matters. Some companies announcing large layoffs recently cited AI as the reason. The worry, however, is that it’s just a short-term cost-saving scheme. As economists like Brynjolfsson and Acemoglu agree, the productivity boost from AI will come when it’s used to create new types of jobs and augment the abilities of workers, not when it is used just to slash jobs to reduce costs.
Richard Waters responds:
I see we’re both feeling pretty cautious, David, so I’ll try to end on a positive note.
Some analyses assume that a much greater share of existing work is within the reach of today’s AI. McKinsey reckons 60% (versus 20% for Acemoglu) and puts annual productivity gains across the economy at as much as 3.4%. Also, calculations like these are based on automation of existing tasks; any new uses of AI that enhance existing jobs would, as you suggest, be a bonus (and not just in economic terms).
Cost-cutting always seems to be the first order of business with any new technology. But we’re still in the early stages and AI is moving fast, so we can always hope.
Further reading
FT chief economics commentator Martin Wolf has been skeptical about whether tech investment boosts productivity but says AI might prove him wrong. The downside: Job losses and wealth concentration might lead to “techno-feudalism.”
The FT’s Robert Armstrong argues that the boom in data center investment need not turn to bust. The biggest risk is that debt financing will come to play too big a role in the buildout.
Last year, David Rotman wrote for MIT Technology Review about how we can make sure AI works for us in boosting productivity, and what course corrections will be required.
David also wrote this piece about how we can best measure the impact of basic R&D funding on economic growth, and why it can often be bigger than you might think.
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.
In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.
Helen Warrell, FT investigations reporter
It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.
Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.
Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.
Anthony King, Director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”
Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.
There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.
Tech optimists designing AI weapons even deny that specific new controls are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”
It is an intriguing thought that some of the fear and shock about use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and really an argument against war itself?
James O’Donnell replies:
Hi Helen,
One thing I’ve noticed is that there’s been a drastic shift in attitudes of AI companies regarding military applications of their products. In the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield.
This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense.
What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who, while patrolling the South Pacific, tested a type of AI that was advertised as able to analyze foreign intelligence faster than a human could.
Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. And few have deeper pockets than the Pentagon. And Europe’s defense heads seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups.
I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?
But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.
The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?
Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less.
Helen responds:
Hi James,
We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield.
There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.
A few weeks ago, I set out on what I thought would be a straightforward reporting journey.
After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?
I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised.
But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives?
There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.
But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to.
“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it.
The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.
Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.”
Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise.
So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)
Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a “cameo” of yourself—a hyperrealistic avatar that mimics your appearance and voice—and insert other peoples’ cameos into your own videos (depending on what permissions they set).
To some people who believed earnestly in OpenAI’s promise to build AI that benefits all of humanity, the app is a punchline. A former OpenAI researcher who left to build an AI-for-science startup referred to Sora as an “infinite AI tiktok slop machine.”
That hasn’t stopped it from soaring to the top spot on Apple’s US App Store. After I downloaded the app, I quickly learned what types of videos are, at least currently, performing well: bodycam-style footage of police pulling over pets or various trademarked characters, including SpongeBob and Scooby Doo; deepfake memes of Martin Luther King Jr. talking about Xbox; and endless variations of Jesus Christ navigating our modern world.
Just as quickly, I had a bunch of questions about what’s coming next for Sora. Here’s what I’ve learned so far.
Can it last?
OpenAI is betting that a sizable number of people will want to spend time on an app in which you can suspend your concerns about whether what you’re looking at is fake and indulge in a stream of raw AI. One reviewer put it this way: “It’s comforting because you know that everything you’re scrolling through isn’t real, where other platforms you sometimes have to guess if it’s real or fake. Here, there is no guessing, it’s all AI, all the time.”
This may sound like hell to some. But judging by Sora’s popularity, lots of people want it.
So what’s drawing these people in? There are two explanations. One is that Sora is a flash-in-the-pan gimmick, with people lining up to gawk at what cutting-edge AI can create now (in my experience, this is interesting for about five minutes). The second, which OpenAI is betting on, is that we’re witnessing a genuine shift in what type of content can draw eyeballs, and that users will stay with Sora because it allows a level of fantastical creativity not possible in any other app.
There are a few decisions down the pike that may shape how many people stick around: how OpenAI decides to implement ads, what limits it sets for copyrighted content (see below), and what algorithms it cooks up to decide who sees what.
Can OpenAI afford it?
OpenAI is not profitable, but that’s not particularly strange given how Silicon Valley operates. What is peculiar, though, is that the company is investing in a platform for generating video, which is the most energy-intensive (and therefore expensive) form of AI we have. The energy it takes dwarfs the amount required to create images or answer text questions via ChatGPT.
This isn’t news to OpenAI, which has joined a half-trillion-dollar project to build data centers and new power plants. But Sora—which currently allows you to generate AI videos, for free, without limits—raises the stakes: How much will it cost the company?
OpenAI is making moves toward monetizing things (you can now buy products directly through ChatGPT, for example). On October 3, its CEO, Sam Altman, wrote in a blog post that “we are going to have to somehow make money for video generation,” but he didn’t get into specifics. One can imagine personalized ads and more in-app purchases.
Still, it’s concerning to imagine the mountain of emissions that might result if Sora becomes popular. Altman has accurately described the emissions burden of one query to ChatGPT as impossibly small. What he has not quantified is what that figure is for a 10-second video generated by Sora. It’s only a matter of time until AI and climate researchers start demanding it.
How many lawsuits are coming?
Sora is awash in copyrighted and trademarked characters. It allows you to easily deepfake deceased celebrities. Its videos use copyrighted music.
Last week, the Wall Street Journal reported that OpenAI has sent letters to copyright holders notifying them that they’ll have to opt out of the Sora platform if they don’t want their material included, which is not how these things usually work. The law on how AI companies should handle copyrighted material is far from settled, and it’d be reasonable to expect lawsuits challenging this.
In last week’s blog post, Altman wrote that OpenAI is “hearing from a lot of rightsholders” who want more control over how their characters are used in Sora. He says that the company plans to give those parties more “granular control” over their characters. Still, “there may be some edge cases of generations that get through that shouldn’t,” he wrote.
But another issue is the ease with which you can use the cameos of real people. People can restrict who can use their cameo, but what limits will there be for what these cameos can be made to do in Sora videos?
This is apparently already an issue OpenAI is being forced to respond to. The head of Sora, Bill Peebles, posted on October 5 that users can now restrict how their cameo can be used—preventing it from appearing in political videos or saying certain words, for example. How well will this work? Is it only a matter of time until someone’s cameo is used for something nefarious, explicit, illegal, or at least creepy, sparking a lawsuit alleging that OpenAI is responsible?
Overall, we haven’t seen what full-scale Sora looks like yet (OpenAI is still doling out access to the app via invite codes). When we do, I think it will serve as a grim test: Can AI create videos so fine-tuned for endless engagement that they’ll outcompete “real” videos for our attention? In the end, Sora isn’t just testing OpenAI’s technology—it’s testing us, and how much of our reality we’re willing to trade for an infinite scroll of simulation.