The State of AI: Welcome to the economic singularity

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next two weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.

This week, Richard Waters, FT columnist and former West Coast editor, talks with MIT Technology Review’s editor at large David Rotman about the true impact of AI on the job market.

Bonus: If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside MIT Technology Review’s editor in chief, Mat Honan, for an exclusive live conversation about this topic on Tuesday, December 9, at 1pm ET. Sign up to take part here.

Richard Waters writes:

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. Mark Zuckerberg recently predicted that half of Meta’s code would be written by AI within a year. At the other extreme, most companies are seeing little if any benefit from their initial investments. A widely cited study from MIT found that so far, 95% of gen AI projects produce zero return.

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business.

To many students of tech history, though, the lack of immediate impact is just the normal lag associated with transformative new technologies. Erik Brynjolfsson, then an assistant professor at MIT, first described what he called the “productivity paradox of IT” in the early 1990s. Despite plenty of anecdotal evidence that technology was changing the way people worked, it wasn’t showing up in the aggregate data in the form of higher productivity growth. Brynjolfsson’s conclusion was that it just took time for businesses to adapt.

Big investments in IT finally showed up in the data, with a notable rebound in US productivity growth starting in the mid-1990s. But that tailed off a decade later and was followed by a second lull.


In the case of AI, companies need to build new infrastructure (particularly data platforms), redesign core business processes, and retrain workers before they can expect to see results. If a lag effect explains the slow results, there may at least be reasons for optimism: Much of the cloud computing infrastructure needed to bring generative AI to a wider business audience is already in place.

The opportunities and the challenges are both enormous. An executive at one Fortune 500 company says his organization has carried out a comprehensive review of its use of analytics and concluded that its workers, overall, add little or no value. Rooting out the old software and replacing that inefficient human labor with AI might yield significant results. But, as this person says, such an overhaul would require big changes to existing processes and take years to carry out.

There are some early encouraging signs. US productivity growth, stuck at 1% to 1.5% for more than a decade and a half, rebounded to more than 2% last year. It probably hit the same level in the first nine months of this year, though the lack of official data due to the recent US government shutdown makes this impossible to confirm.

It is impossible to tell, though, how durable this rebound will be or how much can be attributed to AI. The effects of new technologies are seldom felt in isolation. Instead, the benefits compound. AI is riding earlier investments in cloud and mobile computing. In the same way, the latest AI boom may only be the precursor to breakthroughs in fields that have a wider impact on the economy, such as robotics. ChatGPT might have caught the popular imagination, but OpenAI’s chatbot is unlikely to have the final word.

David Rotman replies: 

This is my favorite discussion these days when it comes to artificial intelligence. How will AI affect overall economic productivity? Forget about the mesmerizing videos, the promise of companionship, and the prospect of agents to do tedious everyday tasks—the bottom line will be whether AI can grow the economy, and that means increasing productivity. 

But, as you say, it’s hard to pin down just how AI is affecting such growth or how it will do so in the future. Erik Brynjolfsson predicts that, like other so-called general purpose technologies, AI will follow a J curve in which initially there is a slow, even negative, effect on productivity as companies invest heavily in the technology before finally reaping the rewards. And then the boom. 
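As a back-of-envelope illustration of that J curve (all numbers here are made up for the sketch, not estimates from Brynjolfsson’s work):

```python
import math

# Stylized J curve: early investment costs (new infrastructure,
# retraining) drag measured productivity down before adoption pays off.
for year in range(15):
    investment_drag = math.exp(-0.3 * year)             # up-front costs fade
    diffusion_gain = 2.5 / (1 + math.exp(-(year - 7)))  # S-curve adoption
    print(year, round(diffusion_gain - investment_drag, 2))  # negative, then boom
```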

But there is a counterexample undermining the just-be-patient argument. Productivity growth from IT picked up in the mid-1990s but since the mid-2000s has been relatively dismal. Despite smartphones and social media and apps like Slack and Uber, digital technologies have done little to produce robust economic growth. A strong productivity boost never came.

Daron Acemoglu, an economist at MIT and a 2024 Nobel Prize winner, argues that the productivity gains from generative AI will be far smaller and take far longer than AI optimists think. The reason is that though the technology is impressive in many ways, the field is too narrowly focused on products that have little relevance to the largest business sectors.

The statistic you cite that 95% of AI projects lack business benefits is telling. 

Take manufacturing. No question, some version of AI could help; imagine a worker on the factory floor snapping a picture of a problem and asking an AI agent for advice. The problem is that the big tech companies creating AI aren’t really interested in solving such mundane tasks, and their large foundation models, mostly trained on the internet, aren’t all that helpful. 

It’s easy to blame the lack of productivity impact from AI so far on business practices and poorly trained workers. Your example of the executive of the Fortune 500 company sounds all too familiar. But it’s more useful to ask how AI can be trained and fine-tuned to give workers, like nurses and teachers and those on the factory floor, more capabilities and make them more productive at their jobs. 

The distinction matters. Some companies announcing large layoffs recently cited AI as the reason. The worry, however, is that it’s just a short-term cost-saving scheme. As economists like Brynjolfsson and Acemoglu agree, the productivity boost from AI will come when it’s used to create new types of jobs and augment the abilities of workers, not when it is used just to slash jobs to reduce costs. 

Richard Waters responds:

I see we’re both feeling pretty cautious, David, so I’ll try to end on a positive note. 

Some analyses assume that a much greater share of existing work is within the reach of today’s AI. McKinsey reckons 60% (versus 20% for Acemoglu) and puts annual productivity gains across the economy at as much as 3.4%. Also, calculations like these are based on automation of existing tasks; any new uses of AI that enhance existing jobs would, as you suggest, be a bonus (and not just in economic terms).

Cost-cutting always seems to be the first order of business with any new technology. But we’re still in the early stages and AI is moving fast, so we can always hope.

Further reading

FT chief economics commentator Martin Wolf has been skeptical about whether tech investment boosts productivity but says AI might prove him wrong. The downside: Job losses and wealth concentration might lead to “techno-feudalism.”

The FT’s Robert Armstrong argues that the boom in data center investment need not turn to bust. The biggest risk is that debt financing will come to play too big a role in the buildout.

Last year, David Rotman wrote for MIT Technology Review about how we can make sure AI works for us in boosting productivity, and what course corrections will be required.

David also wrote this piece about how we can best measure the impact of basic R&D funding on economic growth, and why it can often be bigger than you might think.

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this week’s conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

Eileen Guo writes:

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 

It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide. 

Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 

But tellingly, one area the laws fail to address is user privacy.

This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information: their day-to-day routines, their innermost thoughts, and the questions they might not feel comfortable asking real people.

After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.” 

Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023: 

“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)

All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.

So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe? 

Melissa Heikkilä replies:

Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids. 

In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything. 

Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable. 

This feature stems from the way the language model behind a chatbot is trained using reinforcement learning from human feedback: Human data labelers rate the answers generated by the model as either acceptable or not, and that teaches the model how to behave.

Because people generally like answers that are agreeable, such responses are weighted more heavily in training. 

AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 
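For readers who want to see the mechanism, here is a minimal sketch of the reward-modeling step behind that incentive, using the pairwise-comparison loss common in RLHF. The toy model and random “embeddings” are illustrative assumptions, not any company’s actual training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Scores an answer embedding; real reward models score full transcripts."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of answer pairs where a labeler preferred one
# answer over the other. If labelers systematically favor agreeable
# answers, agreeableness is what the reward model learns to score highly.
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

# Pairwise (Bradley-Terry) loss: push reward(chosen) above reward(rejected).
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The chatbot is then tuned to maximize this learned reward, which is how a preference for agreeable answers gets baked in.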

After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet its $1 trillion spending pledges, including advertising and shopping features.

AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 

This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before. 

By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed. 

We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models. 

Companies are sitting on treasure troves of our most intimate thoughts and preferences. And language models are very good at picking up on subtle hints in language that could help advertisers profile us by inferring our age, location, gender, and income level.

We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.

Eileen responds:

I think the comparison between AI companions and social media is both apt and concerning. 

As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.

Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.

And without regulation, the companies themselves are not following privacy best practices either. One recent study found that major AI companies train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.

In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening. 

Further reading 

FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.

Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy.

In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.

Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell writes:

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.


There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new rules are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about the use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less an objection to autonomous systems than an argument against war itself?

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in AI companies’ attitudes toward military applications of their products. At the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield.

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke, for example, with US Marines patrolling the South Pacific who tested a type of AI advertised as analyzing foreign intelligence faster than a human could.

Second, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models, and few have deeper pockets than the Pentagon. Europe’s defense chiefs seem keen to splash the cash too. Meanwhile, venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups.

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to carry out, can we really say they reduced carnage? Or did they merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. But I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield.

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare.

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.

The State of AI: Energy is king, and the US is falling behind

Welcome to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.

This week, Casey Crownhart, senior reporter for energy at MIT Technology Review, and FT columnist Pilita Clark consider how China’s rapid renewables buildout could help it leapfrog the US on AI progress.

Casey Crownhart writes:

In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying here in the US, where massive data centers are waiting to come online, and it doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.

It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace. With too little new power capacity coming online, the strain is starting to show: Electricity bills are ballooning for people who live in places where data centers place a growing load on the grid.

If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China.

China installed 429 GW of new power generation capacity in 2024, more than six times the net capacity added in the US during that time.

China still generates much of its electricity with coal, but coal makes up a declining share of the mix. Instead, the country is focused on installing solar, wind, nuclear, and gas at record rates.

The US, meanwhile, is focused on reviving its ailing coal industry. Coal-fired power plants are polluting and, crucially, expensive to run. Aging plants in the US are also less reliable than they used to be, generating electricity just 42% of the time, compared with a 61% capacity factor in 2014.
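For anyone unfamiliar with the term: capacity factor is simply actual output divided by what a plant could produce running flat-out. A quick sketch using the figures above and a hypothetical 1,000 MW plant:

```python
HOURS_PER_YEAR = 8760

def annual_output_twh(nameplate_mw: float, capacity_factor: float) -> float:
    """Energy actually produced in a year, in terawatt-hours."""
    return nameplate_mw * HOURS_PER_YEAR * capacity_factor / 1e6

# A hypothetical 1,000 MW coal plant at the two capacity factors cited:
print(annual_output_twh(1000, 0.61))  # ~5.3 TWh per year, the 2014 rate
print(annual_output_twh(1000, 0.42))  # ~3.7 TWh per year, the rate today
```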

It’s not a great situation. And unless the US changes something, we risk becoming consumers as opposed to innovators in both energy and AI tech. Already, China earns more from exporting renewables than the US does from oil and gas exports. 

Building and permitting new renewable power plants would certainly help, since they’re currently the cheapest and fastest to bring online. But wind and solar are politically unpopular with the current administration. Natural gas is an obvious candidate, though there are concerns about delays with key equipment.

One quick fix would be for data centers to be more flexible. If they agreed not to suck electricity from the grid during times of stress, new AI infrastructure might be able to come online without any new energy infrastructure.

One study from Duke University found that if data centers agree to curtail their consumption just 0.25% of the time (roughly 22 hours over the course of the year), the grid could provide power for about 76 GW of new demand. That’s like adding about 5% of the entire grid’s capacity without needing to build anything new.
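The arithmetic behind those figures is easy to check; the total grid capacity below is a rough assumption for illustration, not a number from the study.

```python
HOURS_PER_YEAR = 8760

curtail_share = 0.0025                         # 0.25% of the time
print(curtail_share * HOURS_PER_YEAR)          # ~21.9 hours per year

new_demand_gw = 76                             # headroom the study estimates
assumed_us_capacity_gw = 1300                  # rough US generating capacity
print(new_demand_gw / assumed_us_capacity_gw)  # ~0.06, near the "about 5%" cited
```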

But flexibility wouldn’t be enough to truly meet the swell in AI electricity demand. What do you think, Pilita? What would get the US out of these energy constraints? Is there anything else we should be thinking about when it comes to AI and its energy use? 

Pilita Clark responds:

I agree. Data centers that can cut their power use at times of grid stress should be the norm, not the exception. Likewise, we need more deals like those giving cheaper electricity to data centers that let power utilities access their backup generators. Both reduce the need to build more power plants, which makes sense regardless of how much electricity AI ends up using.

This is a critical point for countries across the world, because we still don’t know exactly how much power AI is going to consume. 

Forecasts for what data centers will need in as little as five years’ time vary wildly, from less than twice today’s rates to four times as much.

This is partly because there’s a lack of public data about AI systems’ energy needs. It’s also because we don’t know how much more efficient these systems will become. The US chip designer Nvidia said last year that its specialized chips had become 45,000 times more energy efficient over the previous eight years. 

Moreover, we have been very wrong about tech energy needs before. At the height of the dot-com boom in 1999, it was erroneously claimed that the internet would need half the US’s electricity within a decade—necessitating a lot more coal power.

Still, some countries are clearly feeling the pressure already. In Ireland, data centers chew up so much power that new connections have been restricted around Dublin to avoid straining the grid.

Some regulators are eyeing new rules forcing tech companies to provide enough power generation to match their demand. I hope such efforts grow. I also hope AI itself helps boost power abundance and, crucially, accelerates the global energy transition needed to combat climate change. OpenAI’s Sam Altman said in 2023 that “once we have a really powerful superintelligence, addressing climate change will not be particularly difficult.”

The evidence so far is not promising, especially in the US, where renewable projects are being axed. Still, the US may end up being an outlier in a world where ever cheaper renewables made up more than 90% of new power capacity added globally last year. 

Europe is aiming to power one of its biggest data centers predominantly with renewables and batteries. But the country leading the green energy expansion is clearly China.

The 20th century was dominated by countries rich in the fossil fuels whose reign the US now wants to prolong. China, in contrast, may become the world’s first green electrostate. If it does this in a way that helps it win an AI race the US has so far controlled, it will mark a striking chapter in economic, technological, and geopolitical history.

Casey Crownhart replies:

I share your skepticism of tech executives’ claims that AI will be a groundbreaking help in the race to address climate change. To be fair, AI is progressing rapidly. But we don’t have time to wait for technologies standing on big claims with nothing to back them up. 

When it comes to the grid, for example, experts say there’s potential for AI to help with planning and even operations, but these efforts are still experimental.

Meanwhile, much of the world is making measurable progress on transitioning to newer, greener forms of energy. How that will affect the AI boom remains to be seen. What is clear is that AI is changing our grid and our world, and we need to be clear-eyed about the consequences. 

Further reading 

MIT Technology Review reporters did the math on the energy needs of an AI query.

There are still a few reasons to be optimistic about AI’s energy demands.  

The FT’s visual data team take a look inside the relentless race for AI capacity.

And global FT reporters ask whether data centers can ever truly be green.

The State of AI: Is China about to win the race? 

The State of AI is a collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution.

In this conversation, the FT’s tech columnist and Innovation Editor John Thornhill and MIT Technology Review’s Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy.

John Thornhill writes:

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

Here in the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: “Never bet against America.” He is right that for more than two centuries, no other “incubator for unleashing human potential” has matched the US.

Today, however, China has the means, motive, and opportunity to commit the equivalent of technological murder. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be just as rash to bet against China.

The data highlights the trends. In AI publications and patents, China leads. By 2023, China accounted for 22.6% of all citations, compared with 20.9% from Europe and 13% from the US, according to Stanford University’s Artificial Intelligence Index Report 2025. As of 2023, China also accounted for 69.7% of all AI patents. True, the US maintains a strong lead in the top 100 most cited publications (50 versus 34 in 2023), but its share has been steadily declining. 

Similarly, the US outdoes China in top AI research talent, but the gap is narrowing. According to a report from the US Council of Economic Advisers, 59% of the world’s top AI researchers worked in the US in 2019, compared with 11% in China. But by 2022 those figures were 42% and 28%. 

The Trump administration’s tightening of restrictions on H-1B visa holders may well lead more Chinese AI researchers in the US to return home. The talent ratio could move further in China’s favor.

Regarding the technology itself, US-based institutions produced 40 of the world’s most notable AI models in 2024, compared with 15 from China. But Chinese researchers have learned to do more with less, and their strongest large language models—including the open-source DeepSeek-V3 and Alibaba’s Qwen 2.5-Max—surpass the best US models in terms of algorithmic efficiency.

Where China is really likely to excel in future is in applying these open-source models. The latest report from Air Street Capital shows that China has now overtaken the US in terms of monthly downloads of AI models. In AI-enabled fintech, e-commerce, and logistics, China already outstrips the US. 

Perhaps the most intriguing—and potentially the most productive—applications of AI may yet come in hardware, particularly in drones and industrial robotics. With the research field evolving toward embodied AI, China’s advantage in advanced manufacturing will shine through.

Dan Wang, the tech analyst and author of Breakneck, has rightly highlighted the strengths of China’s engineering state in developing manufacturing process knowledge—even if he has also shown the damaging effects of applying that engineering mentality in the social sphere. “China has been growing technologically stronger and economically more dynamic in all sorts of ways,” he told me. “But repression is very real. And it is getting worse in all sorts of ways as well.”

I’d be fascinated to hear from you, Caiwei, about your take on the strengths and weaknesses of China’s AI dream. To what extent will China’s engineered social control hamper its technological ambitions? 

Caiwei Chen responds:

Hi, John!

You’re right that the US still holds a clear lead in frontier research and infrastructure. But “winning” AI can mean many different things. Jeffrey Ding, in his book Technology and the Rise of Great Powers, makes a counterintuitive point: For a general-purpose technology like AI, long-term advantage often comes down to how widely and deeply technologies spread across society. And China is in a good position to win that race (although “murder” might be pushing it a bit!).

Chips will remain China’s biggest bottleneck. Export restrictions have throttled access to top GPUs, pushing buyers into gray markets and forcing labs to recycle or repair banned Nvidia stock. Even as domestic chip programs expand, the performance gap at the very top still stands.

Yet those same constraints have pushed Chinese companies toward a different playbook: pooling compute, optimizing efficiency, and releasing open-weight models. DeepSeek-V3’s training run, for example, used just 2.6 million GPU-hours—far below the scale of US counterparts. Alibaba’s Qwen models now rank among the most downloaded open-weight models globally, and companies like Zhipu and MiniMax are building competitive multimodal and video models.
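To put that GPU-hours figure in wall-clock terms, divide by cluster size; the cluster below is the roughly 2,048 H800 GPUs DeepSeek has reported, so treat the result as a ballpark rather than a confirmed schedule.

```python
gpu_hours = 2.6e6    # the training budget cited above
cluster_gpus = 2048  # DeepSeek's reported H800 cluster size
days = gpu_hours / cluster_gpus / 24
print(round(days))   # ~53 days of continuous training
```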

China’s industrial policy means new models can move from lab to implementation fast. Local governments and major enterprises are already rolling out reasoning models in administration, logistics, and finance. 

Education is another advantage. Major Chinese universities are adding AI literacy programs to their curricula, embedding skills before the labor market demands them. The Ministry of Education has also announced plans to integrate AI instruction across all school ages. I’m not sure the phrase “engineering state” fully captures China’s relationship with new technologies, but decades of infrastructure building and top-down coordination have made the system unusually effective at pushing large-scale adoption, often with far less social resistance than you’d see elsewhere. Use at scale, naturally, allows for faster iterative improvement.

Meanwhile, Stanford HAI’s 2025 AI Index found Chinese respondents to be the most optimistic in the world about AI’s future—far more optimistic than populations in the US or the UK. That’s striking, given that since the pandemic China’s economy has slowed for the first time in more than two decades. Many in government and industry now see AI as a much-needed spark. Optimism can be powerful fuel, but whether it can persist through slower growth is still an open question.

Social control remains part of the picture, but a different kind of ambition is taking shape. The Chinese AI founders in this new generation are the most globally minded I’ve seen, moving fluidly between Silicon Valley hackathons and pitch meetings in Dubai. Many are fluent in English and in the rhythms of global venture capital. Having watched the last generation wrestle with the burden of a Chinese label, they now build companies that are quietly transnational from the start.

The US may still lead in speed and experimentation, but China could shape how AI becomes part of daily life, both at home and abroad. Speed matters, but speed isn’t the same thing as supremacy.

John Thornhill replies:

You’re right, Caiwei, that speed is not the same as supremacy (and “murder” may be too strong a word). And you’re also right to amplify the point about China’s strength in open-weight models and the US preference for proprietary models. This is not just a struggle between two different countries’ economic models but also between two different ways of deploying technology.  

Even OpenAI’s chief executive, Sam Altman, admitted earlier this year: “We have been on the wrong side of history here and need to figure out a different open-source strategy.” That’s going to be a very interesting subplot to follow. Who’s called that one right?

Further reading on the US-China competition

There’s been a lot of talk about how people may be using generative AI in their daily lives. This story from the FT’s visual story team explores the reality.

From China, FT reporters ask how long Nvidia can maintain its dominance over Chinese rivals.

When it comes to real-world uses, AI toys and companion devices are a novel but fast-emerging application that is gaining traction in China—but is also heading to the US. This MIT Technology Review story explored the trend.

The once-frantic data center buildout in China has hit walls, and as sanctions and AI demand shift, this MIT Technology Review story took an on-the-ground look at how stakeholders are adapting.