Why 2026 is a hot year for lithium

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now.

The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. Prices have been on quite the roller coaster over the last few years, and they’re ticking up again after a low period. What happens next could have big implications for mining and battery technology.

Before we look ahead, let’s take a quick trip down memory lane. In 2020, global EV sales started to really take off, driving up demand for the lithium used in their batteries. Because of that growing demand and a limited supply, prices shot up dramatically, with lithium carbonate going from under $10 per kilogram to a high of roughly $70 per kilogram in just two years.

And the tech world took notice. During those high points, there was a ton of interest in developing alternative batteries that didn’t rely on lithium. I was writing about sodium-based batteries, iron-air batteries, and even experimental ones that were made with plastic.

Researchers and startups were also hunting for alternative ways to get lithium, including battery recycling and processing methods like direct lithium extraction (more on this in a moment).

But soon, prices crashed back down to earth. We saw lower-than-expected demand for EVs in the US, and developers ramped up mining and processing to meet demand. Through late 2024 and 2025, lithium carbonate was back around $10 a kilogram again. Avoiding lithium or finding new ways to get it suddenly looked a lot less crucial.

That brings us to today: lithium prices are ticking up again. So far, it’s nowhere close to the dramatic rise we saw a few years ago, but analysts are watching closely. Strong EV growth in China is playing a major role—EVs still make up about 75% of battery demand today. But growth in stationary storage, batteries for the grid, is also contributing to rising demand for lithium in both China and the US.

Higher prices could create new opportunities. The possibilities include alternative battery chemistries, specifically sodium-ion batteries, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF. (I’ll note here that we recently named sodium-ion batteries to our 2026 list of 10 Breakthrough Technologies.)

It’s not just batteries, though. Another industry that could see big changes from a lithium price swing: extraction.

Today, most lithium is mined from rocks, largely in Australia, before being shipped to China for processing. There’s a growing effort to process the mineral in other places, though, as countries try to create their own lithium supply chains. Tesla recently confirmed that it’s started production at its lithium refinery in Texas, which broke ground in 2023. We could see more investment in processing plants outside China if prices continue to climb.

This could also be a key year for direct lithium extraction, as Katie Brigham wrote in a recent story for Heatmap. That technology uses chemical or electrochemical processes to extract lithium from brine (salty water that’s usually sourced from salt lakes or underground reservoirs), quickly and cheaply. Companies including Lilac Solutions, Standard Lithium, and Rio Tinto are all making plans or starting construction on commercial facilities this year in the US and Argentina. 

If there’s anything I’ve learned about following batteries and minerals over the past few years, it’s that predicting the future is impossible. But if you’re looking for tea leaves to read, lithium prices deserve a look. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Dispatch from Davos: hot air, big egos and cold flexes

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

It’s supposed to be frigid in Davos this time of year. Part of the charm is seeing the world’s elite tromp through the streets in respectable suits and snow boots. But this year it’s positively balmy, with highs in the mid-30s Fahrenheit, or a little over 1°C. The current conditions when I flew out of New York were colder, and definitely snowier. I’m told this is due to something called a föhn, a dry, warm wind that’s been blowing across the Alps.

I’m no meteorologist, but it’s true that there is a lot of hot air here. 

On Wednesday, President Donald Trump arrived in Davos to address the assembly, and held forth for more than 90 minutes, weaving his way through remarks about the economy, Greenland, windmills, Switzerland, Rolexes, Venezuela, and drug prices. It was a talk lousy with gripes, grievances and outright falsehoods. 

One small example: Trump made a big deal of claiming that China, despite being the world leader in manufacturing windmill componentry, doesn’t actually use windmills for energy generation itself. In fact, China is the world leader in wind power generation as well.

I did not get to watch this spectacle from the room itself. Sad! 

By the time I got to the Congress Hall where the address was taking place, there was already a massive scrum of people jostling to get in. 

I had just wrapped up moderating a panel on “the intelligent co-worker,” i.e., AI agents in the workplace. I was really excited for this one, as the speakers represented a diverse cross-section of the AI ecosystem. Christoph Schweizer, CEO of BCG, had the macro strategic view; Enrique Lores, HP’s CEO, could speak to both hardware and large enterprises; Workera CEO Kian Katanforoosh had the inside view on workforce training and transformation; Munjal Shah, CEO of Hippocratic AI, addressed working in the high-stakes field of healthcare; and Kate Kallot, CEO of Amini AI, gave perspective on the global south and Africa in particular.

Interestingly, most of the panel shied away from using the term co-worker, and some even rejected the term agent. But the view they painted was definitely one of humans working alongside AI and augmenting what’s possible. Shah, for example, talked about having agents call 16,000 people in Texas during a heat wave to perform a health and safety check. It was a great discussion. You can watch the whole thing here.

But by the time it let out, the push of people outside the Congress Hall was already too thick for me to get in. In fact, I couldn’t even get into a nearby overflow room. I did make it into a third overflow room, but getting in meant navigating my way through a mass of people jammed in so tightly together that it reminded me of being at a Turnstile concert.

The speech blew way past its allotted time, and I had to step out early to get to yet another discussion. Walking through the halls while Trump spoke was a truly surreal experience. He had captured the attention of the gathered global elite. I don’t think I saw a single person not staring at a laptop, phone, or iPad, all watching the same video.

Trump is speaking again on Thursday in a previously unscheduled address to announce his Board of Peace. As is (I heard) Elon Musk. So it’s shaping up to be another big day for elite attention capture. 

I should say, though, there are elites, and then there are elites. And there are all sorts of ways of sorting out who is who. Your badge color is one of them. I have a white participant badge, because I was moderating panels. This gets you in pretty much anywhere and therefore is its own sort of status symbol. Where you are staying is another. I’m in Klosters, a neighboring town that’s a 40-minute train ride away from the Congress Center. Not so elite.

There are more subtle ways of status sorting, too. Yesterday I learned that when people ask if this is your first time at Davos, it’s sometimes meant as a way of trying to figure out how important you are. If you’re any kind of big deal, you’ve probably been coming for years. 

But the best one I’ve yet encountered happened when I made small talk with the woman sitting next to me as I changed back into my snow boots. It turned out that, like me, she lived in California, at least part time. “But I don’t think I’ll stay there much longer,” she said, “due to the new tax law.” This was just an ice-cold flex.

Because California’s newly proposed tax legislation? It only targets billionaires. 

Welcome to Davos.

“Dr. Google” had its issues. Can ChatGPT Health do better?


    For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week. 

    That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. It landed at an inauspicious time: Two days earlier, the news website SFGate had broken the story of Sam Nelson, a teenager who died of an overdose last year after extensive conversations with ChatGPT about how best to combine various drugs. In the wake of both pieces of news, multiple journalists questioned the wisdom of relying for medical advice on a tool that could cause such extreme harm.

    Though ChatGPT Health lives in a separate sidebar tab from the rest of ChatGPT, it isn’t a new model. It’s more like a wrapper that provides one of OpenAI’s preexisting models with guidance and tools it can use to provide health advice—including some that allow it to access a user’s electronic medical records and fitness app data, if granted permission. There’s no doubt that ChatGPT and other large language models can make medical mistakes, and OpenAI emphasizes that ChatGPT Health is intended as an additional support, rather than a replacement for one’s doctor. But when doctors are unavailable or unable to help, people will turn to alternatives. 

    Some doctors see LLMs as a boon for medical literacy. The average patient might struggle to navigate the vast landscape of online medical information—and, in particular, to distinguish high-quality sources from polished but factually dubious websites—but LLMs can do that job for them, at least in theory. Treating patients who had searched for their symptoms on Google required “a lot of attacking patient anxiety [and] reducing misinformation,” says Marc Succi, an associate professor at Harvard Medical School and a practicing radiologist. But now, he says, “you see patients with a college education, a high school education, asking questions at the level of something an early med student might ask.”

    The release of ChatGPT Health, and Anthropic’s subsequent announcement of new health integrations for Claude, indicate that the AI giants are increasingly willing to acknowledge and encourage health-related uses of their models. Such uses certainly come with risks, given LLMs’ well-documented tendencies to agree with users and make up information rather than admit ignorance. 

    But those risks also have to be weighed against potential benefits. There’s an analogy here to autonomous vehicles: When policymakers consider whether to allow Waymo in their city, the key metric is not whether its cars are ever involved in accidents but whether they cause less harm than the status quo of relying on human drivers. If Dr. ChatGPT is an improvement over Dr. Google—and early evidence suggests it may be—it could potentially lessen the enormous burden of medical misinformation and unnecessary health anxiety that the internet has created.

    Pinning down the effectiveness of a chatbot such as ChatGPT or Claude for consumer health, however, is tricky. “It’s exceedingly difficult to evaluate an open-ended chatbot,” says Danielle Bitterman, the clinical lead for data science and AI at the Mass General Brigham health-care system. Large language models score well on medical licensing examinations, but those exams use multiple-choice questions that don’t reflect how people use chatbots to look up medical information.

Sirisha Rambhatla, an assistant professor of management science and engineering at the University of Waterloo, attempted to close that gap by evaluating how GPT-4o responded to licensing exam questions when it did not have access to a list of possible answers. Medical experts who evaluated the responses scored only about half of them as entirely correct. But multiple-choice exam questions are designed to be tricky enough that the answer options don’t give the answer away entirely, and they’re still a pretty distant approximation of the sort of thing a user would type into ChatGPT.

    A different study, which tested GPT-4o on more realistic prompts submitted by human volunteers, found that it answered medical questions correctly about 85% of the time. When I spoke with Amulya Yadav, an associate professor at Pennsylvania State University who runs the Responsible AI for Social Emancipation Lab and led the study, he made it clear that he wasn’t personally a fan of patient-facing medical LLMs. But he freely admits that, technically speaking, they seem up to the task—after all, he says, human doctors misdiagnose patients 10% to 15% of the time. “If I look at it dispassionately, it seems that the world is gonna change, whether I like it or not,” he says.

    For people seeking medical information online, Yadav says, LLMs do seem to be a better choice than Google. Succi, the radiologist, also concluded that LLMs can be a better alternative to web search when he compared GPT-4’s responses to questions about common chronic medical conditions with the information presented in Google’s knowledge panel, the information box that sometimes appears on the right side of the search results.

    Since Yadav’s and Succi’s studies appeared online, in the first half of 2025, OpenAI has released multiple new versions of GPT, and it’s reasonable to expect that GPT-5.2 would perform even better than its predecessors. But the studies do have important limitations: They focus on straightforward, factual questions, and they examine only brief interactions between users and chatbots or web search tools. Some of the weaknesses of LLMs—most notably their sycophancy and tendency to hallucinate—might be more likely to rear their heads in more extensive conversations and with people who are dealing with more complex problems. Reeva Lederman, a professor at the University of Melbourne who studies technology and health, notes that patients who don’t like the diagnosis or treatment recommendations that they receive from a doctor might seek out another opinion from an LLM—and the LLM, if it’s sycophantic, might encourage them to reject their doctor’s advice.

    Some studies have found that LLMs will hallucinate and exhibit sycophancy in response to health-related prompts. For example, one study showed that GPT-4 and GPT-4o will happily accept and run with incorrect drug information included in a user’s question. In another, GPT-4o frequently concocted definitions for fake syndromes and lab tests mentioned in the user’s prompt. Given the abundance of medically dubious diagnoses and treatments floating around the internet, these patterns of LLM behavior could contribute to the spread of medical misinformation, particularly if people see LLMs as trustworthy.

OpenAI has reported that the GPT-5 series of models is markedly less sycophantic and prone to hallucination than its predecessors, so the results of these studies might not apply to ChatGPT Health. The company also evaluated the model that powers ChatGPT Health on its responses to health-specific questions, using its publicly available HealthBench benchmark. HealthBench rewards models that express uncertainty when appropriate, recommend that users seek medical attention when necessary, and refrain from causing users unnecessary stress by telling them their condition is more serious than it truly is. It’s reasonable to assume that the model underlying ChatGPT Health exhibited those behaviors in testing, though Bitterman notes that some of the prompts in HealthBench were generated by LLMs, not users, which could limit how well the benchmark translates into the real world.

    An LLM that avoids alarmism seems like a clear improvement over systems that have people convincing themselves they have cancer after a few minutes of browsing. And as large language models, and the products built around them, continue to develop, whatever advantage Dr. ChatGPT has over Dr. Google will likely grow. The introduction of ChatGPT Health is certainly a move in that direction: By looking through your medical records, ChatGPT can potentially gain far more context about your specific health situation than could be included in any Google search, although numerous experts have cautioned against giving ChatGPT that access for privacy reasons.

    Even if ChatGPT Health and other new tools do represent a meaningful improvement over Google searches, they could still conceivably have a negative effect on health overall. Much as automated vehicles, even if they are safer than human-driven cars, might still prove a net negative if they encourage people to use public transit less, LLMs could undermine users’ health if they induce people to rely on the internet instead of human doctors, even if they do increase the quality of health information available online.

    Lederman says that this outcome is plausible. In her research, she has found that members of online communities centered on health tend to put their trust in users who express themselves well, regardless of the validity of the information they are sharing. Because ChatGPT communicates like an articulate person, some people might trust it too much, potentially to the exclusion of their doctor. But LLMs are certainly no replacement for a human doctor—at least not yet.

    All anyone wants to talk about at Davos is AI and Donald Trump

    This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

    Hello from the World Economic Forum annual meeting in Davos, Switzerland. I’ve been here for two days now, attending meetings, speaking on panels, and basically trying to talk to anyone I can. And as far as I can tell, the only things anyone wants to talk about are AI and Trump. 

Davos is physically defined by the Congress Center, where the official WEF sessions take place, and the Promenade, a street running through the center of the town lined with various “houses”—mostly retailers that are temporarily converted into meeting hubs for various corporate or national sponsors. So there is a Ukraine House, a Brazil House, a Saudi House, and yes, a USA House (more on that tomorrow). There are a handful of media houses from the likes of CNBC and the Wall Street Journal. Some houses are devoted to specific topics; for example, there’s one for science and another for AI.

    But like everything else in 2026, the Promenade is dominated by tech companies. At one point I realized that literally everything I could see, in a spot where the road bends a bit, was a tech company house. Palantir, Workday, Infosys, Cloudflare, C3.ai. Maybe this should go without saying, but their presence, both in the houses and on the various stages and parties and platforms here at the World Economic Forum, really drove home to me how utterly and completely tech has captured the global economy. 

    While the houses host events and serve as networking hubs, the big show is inside the Congress Center. On Tuesday morning, I kicked off my official Davos experience there by moderating a panel with the CEOs of Accenture, Aramco, Royal Philips, and Visa. The topic was scaling up AI within organizations. All of these leaders represented companies that have gone from pilot projects to large internal implementations. It was, for me, a fascinating conversation. You can watch the whole thing here, but my takeaway was that while there are plenty of stories about AI being overhyped (including from us), it is certainly having substantive effects at large companies.  

Aramco CEO Amin Nasser, for example, described how that company has found $3 billion to $5 billion in cost savings by improving the efficiency of its operations. Royal Philips CEO Roy Jakobs described how AI was allowing health-care practitioners to spend more time with patients by handling tasks such as automated note-taking. (This really resonated with me, as my wife is a pediatrics nurse, and for decades now I’ve heard her talk about how much of her time is devoted to charting.) And Visa CEO Ryan McInerney talked about his company’s push into agentic commerce and the way that will play out for consumers, small businesses, and the global payments industry.

    To elaborate a little on that point, McInerney painted a picture of commerce where agents won’t just shop for things you ask them to, which will be basically step one, but will eventually be able to shop for things based on your preferences and previous spending patterns. This could be your regular grocery shopping, or even a vacation getaway. That’s going to require a lot of trust and authentication to protect both merchants and consumers, but it is clear that the steps into agentic commerce we saw in 2025 were just baby ones. There are much bigger ones coming for 2026. (Coincidentally, I had a discussion with a senior executive from Mastercard on Monday, who made several of the same points.) 

    But the thing that really resonated with me from the panel was a comment from Accenture CEO Julie Sweet, who has a view not only of her own large org but across a spectrum of companies: “It’s hard to trust something until you understand it.” 

    I felt that neatly summed up where we are as a society with AI. 

    Clearly, other people feel the same. Before the official start of the conference I was at AI House for a panel. The place was packed. There was a consistent, massive line to get in, and once inside, I literally had to muscle my way through the crowd. Everyone wanted to get in. Everyone wanted to talk about AI. 

    (A quick aside on what I was doing there: I sat on a panel called “Creativity and Identity in the Age of Memes and Deepfakes,” led by Atlantic CEO Nicholas Thompson; it featured the artist Emi Kusano, who works with AI, and Duncan Crabtree-Ireland, the chief negotiator for SAG-AFTRA, who has been at the center of a lot of the debates about AI in the film and gaming industries. I’m not going to spend much time describing it because I’m already running long, but it was a rip-roarer of a panel. Check it out.)

    And, okay. Sigh. Donald Trump. 

    The president is due here Wednesday, amid threats of seizing Greenland and fears that he’s about to permanently fracture the NATO alliance. While AI is all over the stages, Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild. 

    These conversations are also starting to spill out into the public. Just after my panel on Tuesday, I headed to a pavilion outside the main hall in the Congress Center. I saw someone coming down the stairs with a small entourage, who was suddenly mobbed by cameras and phones. 

    Moments earlier in the same spot, the press had been surrounding David Beckham, shouting questions at him. So I was primed for it to be another celebrity—after all, captains of industry were everywhere you looked. I mean, I had just bumped into Eric Schmidt, who was literally standing in line in front of me at the coffee bar. Davos is weird. 

    But in fact, it was Gavin Newsom, the governor of California, who is increasingly seen as the leading voice of the Democratic opposition to President Trump, and a likely contender, or even front-runner, in the race to replace him. Because I live in San Francisco I’ve encountered Newsom many times, dating back to his early days as a city supervisor before he was even mayor. I’ve rarely, rarely, seen him quite so worked up as he was on Tuesday. 

    Among other things, he called Trump a narcissist who follows “the law of the jungle, the rule of Don” and compared him to a T-Rex, saying, “You mate with him or he devours you.” And he was just as harsh on the world leaders, many of whom are gathered in Davos, calling them “pathetic” and saying he should have brought knee pads for them. 

    Yikes.

    There was more of this sentiment, if in more measured tones, from Canadian prime minister Mark Carney during his address at Davos. While I missed his remarks, they had people talking. “If we’re not at the table, we’re on the menu,” he argued. 

    Everyone wants AI sovereignty. No one can truly have it.

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should be in control of their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.

    But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.  

    If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to a vision that emphasizes the concept of orchestration, balancing national autonomy with strategic partnership. 

    Why infrastructure-first strategies hit walls 

    A November survey by Accenture found that 62% of European organizations are now seeking sovereign AI solutions, driven primarily by geopolitical anxiety rather than technical necessity. That figure rises to 80% in Denmark and 72% in Germany. The European Union has appointed its first Commissioner for Tech Sovereignty. 

    This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays. 

    And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.  

What works: Orchestrated sovereignty

    What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape. 

    The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them. 

    Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete. 

    Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size. 

    South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.  

Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks, on foreign GPU architectures, and on foreign lithography equipment, such as the extreme ultraviolet systems needed to manufacture advanced chips, shows the limits of techno-nationalism.

    The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone. 

    Three ways to align ambition with reality 

    1.  Measure added value, not inputs.  

    Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.  

    Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.  

    2. Cultivate a strong AI innovation ecosystem. 

    Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.   

    3. Build global partnerships.  

    Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.  

    What’s at stake 

    Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete. 

    The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter. 

    The winners will be those who define sovereignty not as separation, but as participation plus leadership—choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it is achievable, and it will separate the leaders from the followers over the next decade. 

    The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing. 

    Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.

    The UK government is backing AI that can run its own lab experiments

    A number of startups and universities that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work.

    ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.

    “There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer. 

    ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.

    Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.

    “We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”

    Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.

    And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. The experiments will then be run in an automated lab by Sandia National Laboratories in the US.

    Taking the temperature

    Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to determine how the way science is done is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.   

    Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”

    For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”

    Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.  

AI scientists sit in a layer above those human-made tools and call on those tools as needed, says Rowstron. “But there’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”

    That’s still some way off, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.

There are also unsolved problems with agentic systems in general, which limit how long they can run by themselves without going off track or making errors. For example, a study titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included changes in the initial specifications and “overexcitement that declares success despite obvious failures.”

    “Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”

    “But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”

    What it’s like to be banned from the US for fighting online hate

    It was early evening in Berlin, just a day before Christmas Eve, when Josephine Ballon got an unexpected email from US Customs and Border Protection. The status of her ability to travel to the United States had changed—she’d no longer be able to enter the country. 

    At first, she couldn’t find any information online as to why, though she had her suspicions. She was one of the directors of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. As the organization has become a strong advocate of EU tech regulations, it has increasingly found itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship. 

It was only later that she saw what US Secretary of State Marco Rubio had posted on X.

    Rubio was promoting a conspiracy theory about what he has called the “censorship-industrial complex,” which alleges widespread collusion between the US government, tech companies, and civil society organizations to silence conservative voices—the very conspiracy theory HateAid has recently been caught up in. 

    Then Undersecretary of State Sarah B. Rogers posted on X the names of the people targeted by travel bans. The list included Ballon, as well as her HateAid co-director, Anna Lena von Hodenberg. Also named were three others doing similar or related work: former EU commissioner Thierry Breton, who had helped author Europe’s Digital Services Act (DSA); Imran Ahmed of the Center for Countering Digital Hate, which documents hate speech on social media platforms; and Clare Melford of the Global Disinformation Index, which provides risk ratings warning advertisers about placing ads on websites promoting hate speech and disinformation. 

    It was an escalation in the Trump administration’s war on digital rights—fought in the name of free speech. But EU officials, freedom of speech experts, and the five people targeted all flatly reject the accusations of censorship. Ballon, von Hodenberg, and some of their clients tell me that their work is fundamentally about making people feel safer online. And their experiences over the past few weeks show just how politicized and besieged their work in online safety has become. They almost certainly won’t be the last people targeted in this way. 

    Ballon was the one to tell von Hodenberg that both their names were on the list. “We kind of felt a chill in our bones,” von Hodenberg told me when I caught up with the pair in early January. 

    But she added that they also quickly realized, “Okay, it’s the old playbook to silence us.” So they got to work—starting with challenging the narrative the US government was pushing about them.

    Within a few hours, Ballon and von Hodenberg had issued a strongly worded statement refuting the allegations: “We will not be intimidated by a government that uses accusations of censorship to silence those who stand up for human rights and freedom of expression,” they wrote. “We demand a clear signal from the German government and the European Commission that this is unacceptable. Otherwise, no civil society organisation, no politician, no researcher, and certainly no individual will dare to denounce abuses by US tech companies in the future.” 

    Those signals came swiftly. On X, Johann Wadephul, the German foreign minister, called the entry bans “not acceptable,” adding that “the DSA was democratically adopted by the EU, for the EU—it does not have extraterritorial effect.” Also on X, French president Emmanuel Macron wrote that “these measures amount to intimidation and coercion aimed at undermining European digital sovereignty.” The European Commission issued a statement that it “strongly condemns” the Trump administration’s actions and reaffirmed its “sovereign right to regulate economic activity in line with our democratic values.” 

    Ahmed, Melford, Breton, and their respective organizations also made their own statements denouncing the entry bans. Ahmed, the only one of the five based in the United States, also successfully filed suit to preempt any attempts to detain him, which the State Department had indicated it would consider doing.  

    But alongside the statements of solidarity, Ballon and von Hodenberg said, they also received more practical advice: Assume the travel ban was just the start and that more consequences could be coming. Service providers might preemptively revoke access to their online accounts; banks might restrict their access to money or the global payment system; they might see malicious attempts to get hold of their personal data or that of their clients. Perhaps, allies told them, they should even consider moving their money into friends’ accounts or keeping cash on hand so that they could pay their team’s salaries—and buy their families’ groceries. 

    These warnings felt particularly urgent given that just days before, the Trump administration had sanctioned two International Criminal Court judges for “illegitimate targeting of Israel.” As a result, they had lost access to many American tech platforms, including Microsoft, Amazon, and Gmail. 

    “If Microsoft does that to someone who is a lot more important than we are,” Ballon told me, “they will not even blink to shut down the email accounts from some random human rights organization in Germany.”   

    “We have now this dark cloud over us that any minute, something can happen,” von Hodenberg added. “We’re running against time to take the appropriate measures.”

    Helping navigate “a lawless place”

    Founded in 2018 to support people experiencing digital violence, HateAid has since evolved to defend digital rights more broadly. It provides ways for people to report illegal online content and offers victims advice, digital security, emotional support, and help with evidence preservation. It also educates German police, prosecutors, and politicians about how to handle online hate crimes. 

    Once the group is contacted for help, and if its lawyers determine that the type of harassment has likely violated the law, the organization connects victims with legal counsel who can help them file civil and criminal lawsuits against perpetrators, and if necessary, helps finance the cases. (HateAid itself does not file cases against individuals.) Ballon and von Hodenberg estimate that HateAid has worked with around 7,500 victims and helped them file 700 criminal cases and 300 civil cases, mostly against individual offenders.

    For 23-year-old German law student and outspoken political activist Theresia Crone, HateAid’s support has meant that she has been able to regain some sense of agency in her life, both on and offline. She had reached out after she discovered entire online forums dedicated to making deepfakes of her. Without HateAid, she told me, “I would have had to either put my faith into the police and the public prosecutor to prosecute this properly, or I would have had to foot the bill of an attorney myself”—a huge financial burden for “a student with basically no fixed income.” 

    In addition, working alone would have been retraumatizing: “I would have had to document everything by myself,” she said—meaning “I would have had to see all of these pictures again and again.” 

    “The internet is a lawless place,” Ballon told me when we first spoke, back in mid-December, a few weeks before the travel ban was announced. In a conference room at the HateAid office in Berlin, she said there are many cases that “cannot even be prosecuted, because no perpetrator is identified.” That’s why the nonprofit also advocates for better laws and regulations governing technology companies in Germany and across the European Union. 

    On occasion, they have also engaged in strategic litigation against the platforms themselves. In 2023, for example, HateAid and the European Union of Jewish Students sued X for failing to enforce its terms of service against posts that were antisemitic or that denied the Holocaust, which is illegal in Germany. 

This almost certainly put the organization in the crosshairs of X owner Elon Musk; it also made HateAid a frequent target of Germany’s far-right party, the Alternative für Deutschland, which Musk has called “the only hope for Germany.” (X did not respond to a request to comment on this lawsuit.)

    HateAid gets caught in Trump World’s dragnet

    For better and worse, HateAid’s profile grew further when it took on another critical job in online safety. In June 2024, it was named as a trusted flagger organization under the Digital Services Act, a 2022 EU law that requires social media companies to remove certain content (including hate speech and violence) that violates national laws, and to provide more transparency to the public, in part by allowing more appeals on platforms’ moderation decisions. 

    Trusted flaggers are entities designated by individual EU countries to point out illegal content, and they are a key part of DSA enforcement. While anyone can report such content, trusted flaggers’ reports are prioritized and legally require a response from the platforms. 

    The Trump administration has loudly argued that the trusted flagger program and the DSA more broadly are examples of censorship that disproportionately affect voices on the right and American technology companies, like X. 

    When we first spoke in December, Ballon said these claims of censorship simply don’t hold water: “We don’t delete content, and we also don’t, like, flag content publicly for everyone to see and to shame people. The only thing that we do: We use the same notification channels that everyone can use, and the only thing that is in the Digital Services Act is that platforms should prioritize our reporting.” Then it is on the platforms to decide what to do. 

Nevertheless, the idea that HateAid and like-minded organizations are censoring the right has become a powerful conspiracy theory with real-world consequences. (Last year, MIT Technology Review covered the closure of a small State Department office following allegations that it had conducted “censorship,” as well as an unusual attempt by State leadership to access internal records related to supposed censorship—including information about two of the people who have now been banned, Melford and Ahmed, and both of their organizations.)

    HateAid saw a fresh wave of harassment starting last February, when 60 Minutes aired a documentary on hate speech laws in Germany; it featured a quote from Ballon that “free speech needs boundaries,” which, she added, “are part of our constitution.” The interview happened to air just days before Vice President JD Vance attended the Munich Security Conference; there he warned that “across Europe, free speech … is in retreat.” This, Ballon told me, led to heightened hostility toward her and her organization. 

    Fast-forward to July, when a report by Republicans in the US House of Representatives claimed that the DSA “compels censorship and infringes on American free speech.” HateAid was explicitly named in the report. 

    All of this has made its work “more dangerous,” Ballon told me in December. Before the 60 Minutes interview, “maybe one and a half years ago, as an organization, there were attacks against us, but mostly against our clients, because they were the activists, the journalists, the politicians at the forefront. But now … we see them becoming more personal.” 

As a result, over the last year, HateAid has taken more steps to protect its reputation and get ahead of the damaging narratives. Ballon has reported the hate speech targeted at her—“More [complaints] than in all the years I did this job before,” she said—and has filed defamation lawsuits on behalf of HateAid.

    All these tensions finally came to a head in December. At the start of the month, the European Commission fined X $140 million for DSA violations. This set off yet another round of recriminations about supposed censorship of the right, with Trump calling the fine “a nasty one” and warning: “Europe has to be very careful.”

    Just a few weeks later, the day before Christmas Eve, retaliation against individuals finally arrived. 

    Who gets to define—and experience—free speech

    Digital rights groups are pushing back against the Trump administration’s narrow view of what constitutes free speech and censorship.

    “What we see from this administration is a conception of freedom of expression that is not a human-rights-based conception where this is an inalienable, indelible right that’s held by every person,” says David Greene, the civil liberties director of the Electronic Frontier Foundation, a US-based digital rights group. Rather, he sees an “expectation that… [if] anybody else’s speech is challenged, there’s a good reason for it, but it should never happen to them.” 

    Since Trump won his second term, social media platforms have walked back their commitments to trust and safety. Meta, for example, ended fact-checking on Facebook and adopted much of the administration’s censorship language, with CEO Mark Zuckerberg telling the podcaster Joe Rogan that it would “work with President Trump to push back on governments around the world” if they are seen as “going after American companies and pushing to censor more.”

    Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

    And as the recent fines on X show, Musk’s platform has gone even further in flouting European law—and, ultimately, ignoring the user rights that the DSA was written to protect. In perhaps one of the most egregious examples yet, in recent weeks X allowed people to use Grok, its AI generator, to create nonconsensual nude images of women and children, with few limits—and, so far at least, few consequences. (Last week, X released a statement that it would start limiting users’ ability to create explicit images with Grok; in response to a number of questions, X representative Rosemarie Esposito pointed me to that statement.) 

    For Ballon, it makes perfect sense: “You can better make money if you don’t have to implement safety measures and don’t have to invest money in making your platform the safest place,” she told me.

    “It goes both ways,” von Hodenberg added. “It’s not only the platforms who profit from the US administration undermining European laws … but also, obviously, the US administration also has a huge interest in not regulating the platforms … because who is amplified right now? It’s the extreme right.”

    She believes this explains why HateAid—and Ahmed’s Center for Countering Digital Hate and Melford’s Global Disinformation Index, as well as Breton and the DSA—have been targeted: They are working to disrupt this “unholy deal where the platforms profit economically and the US administration is profiting in dividing the European Union,” she said. 

    The travel restrictions intentionally send a strong message to all groups that work to hold tech companies accountable. “It’s purely vindictive,” Greene says. “It’s designed to punish people from pursuing further work on disinformation or anti-hate work.” (The State Department did not respond to a request for comment.)

    And ultimately, this has a broad effect on who feels safe enough to participate online. 

    Ballon pointed to research that shows the “silencing effect” of harassment and hate speech, not only for “those who have been attacked,” but also for those who witness such attacks. This is particularly true for women, who tend to face more online hate that is also more sexualized and violent. It’ll only be worse if groups like HateAid get deplatformed or lose funding. 

    Von Hodenberg put it more bluntly: “They reclaim freedom of speech for themselves when they want to say whatever they want, but they silence and censor the ones that criticize them.”

    Still, the HateAid directors insist they’re not backing down. They say they’re taking “all advice” they have received seriously, especially with regard to “becoming more independent from service providers,” Ballon told me.

    “Part of the reason that they don’t like us is because we are strengthening our clients and empowering them,” said von Hodenberg. “We are making sure that they are not succeeding, and not withdrawing from the public debate.” 

    “So when they think they can silence us by attacking us? That is just a very wrong perception.”

    Martin Sona contributed reporting.

    Correction: This article originally misstated the name of Germany’s far right party.

    Three technologies that will shape biotech in 2026

    Earlier this week, MIT Technology Review published its annual list of Ten Breakthrough Technologies. As always, it features technologies that made the news last year, and which—for better or worse—stand to make waves in the coming years. They’re the technologies you should really be paying attention to.

    This year’s list includes tech that’s set to transform the energy industry, artificial intelligence, space travel—and of course biotech and health. Our breakthrough biotechnologies for 2026 involve editing a baby’s genes and, separately, resurrecting genes from ancient species. We also included a controversial technology that offers parents the chance to screen their embryos for characteristics like height and intelligence. Here’s the story behind our biotech choices.

    A base-edited baby!

    In August 2024, KJ Muldoon was born with a rare genetic disorder that allowed toxic ammonia to build up in his blood. The disease can be fatal, and KJ was at risk of developing neurological disorders. At the time, his best bet for survival involved waiting for a liver transplant.

    Then he was offered an experimental gene therapy—a personalized “base editing” treatment designed to correct the specific genetic “misspellings” responsible for his disease. It seems to have worked! Three doses later, KJ is doing well. He took his first steps in December, shortly before spending his first Christmas at home.

    KJ’s story is hugely encouraging. The team behind his treatment is planning a clinical trial for infants with similar disorders caused by different genetic mutations. The team members hope to win regulatory approval on the back of a small trial, a move that could make the expensive treatment (KJ’s cost around $1 million) more accessible, potentially within a few years.

    Others are getting in on the action, too. Fyodor Urnov, a gene-editing scientist at the University of California, Berkeley, assisted the team that developed KJ’s treatment. He recently cofounded Aurora Therapeutics, a startup that hopes to develop gene-editing drugs for another disorder called phenylketonuria (PKU). The goal is to obtain regulatory approval for a single drug that can then be adjusted or personalized for individuals without having to go through more clinical trials.

    US regulators seem to be amenable to the idea and have described a potential approval pathway for such “bespoke, personalized therapies.” Watch this space.

    Gene resurrection

    It was a big year for Colossal Biosciences, the biotech company hoping to “de-extinct” animals like the woolly mammoth and the dodo. In March, the company created what it called “woolly mice”—rodents with furry coats and curly whiskers akin to those of woolly mammoths.

    The company made an even more dramatic claim the following month, when it announced it had created three dire wolves. These striking snow-white animals were created by making 20 genetic changes to the DNA of gray wolves based on genetic research on ancient dire wolf bones, the company said at the time.

    Whether these animals can really be called dire wolves is debatable, to say the least. But the technology behind their creation is undeniably fascinating. We’re talking about the extraction and analysis of ancient DNA, which can then be introduced into cells from other, modern-day species.

    Analysis of ancient DNA can reveal all sorts of fascinating insights into human ancestors and other animals. And cloning, another genetic tool used here, has applications not only in attempts to re-create dead pets but also in wildlife conservation efforts. Read more here.

    Embryo scoring

    IVF involves creating embryos in a lab and, typically, “scoring” them on their likelihood of successful growth before they are transferred to a person’s uterus. So far, so uncontroversial.

    Recently, embryo scoring has evolved. Labs can pinch off a couple of cells from an embryo, look at its DNA, and screen for some genetic diseases. That list of diseases is increasing. And now some companies are taking things even further, offering prospective parents the opportunity to select embryos for features like height, eye color, and even IQ.

    This is controversial for lots of reasons. For a start, there are many, many factors that contribute to complex traits like IQ (a score that doesn’t capture all aspects of intelligence at any rate). We don’t have a perfect understanding of those factors, or how selecting for one trait might influence another.

    Some critics warn of eugenics. And others note that whichever embryo you end up choosing, you can’t control exactly how your baby will turn out (and why should you?!). Still, that hasn’t stopped Nucleus, one of the companies offering these services, from inviting potential customers to have their “best baby.” Read more here.

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    Three climate technologies breaking through in 2026

    Happy New Year! I know it’s a bit late to say, but it never quite feels like the year has started until the new edition of our 10 Breakthrough Technologies list comes out. 

    For 25 years, MIT Technology Review has put together this package, which highlights the technologies that we think are going to matter in the future. This year’s version has some stars, including gene resurrection (remember all the dire wolf hype last year?) and commercial space stations.

    And of course, the world of climate and energy is represented with sodium-ion batteries, next-generation nuclear, and hyperscale AI data centers. Let’s take a look at what ended up on the list, and what it says about this moment for climate tech. 

    Sodium-ion batteries

    I’ve been covering sodium-ion batteries for years, but this moment feels like a breakout one for the technology. 

    Today, lithium-ion cells power everything from EVs, phones, and computers to huge stationary storage arrays that help support the grid. But researchers and battery companies have been racing to develop an alternative, driven by the relative scarcity of lithium and the metal’s volatile price in recent years. 

    Sodium-ion batteries could be that alternative. Sodium is much more abundant than lithium, and it could unlock cheaper batteries that carry a lower fire risk.

    There are limitations here: Sodium-ion batteries won’t be able to pack as much energy into cells as their lithium counterparts. But that might not matter, especially for grid storage and smaller EVs. 

    In recent years, we’ve seen a ton of interest in sodium-based batteries, particularly from major companies in China. Now the new technology is starting to make its way into the world—CATL says it started manufacturing these batteries at scale in 2025. 

    Next-generation nuclear

    Nuclear reactors are an important part of grids around the world today—massive workhorse reactors generate reliable, consistent electricity. But the countries with the oldest and most built-out fleets have struggled to add to them in recent years, since reactors are massive and cost billions. Recent high-profile projects have gone way over budget and faced serious delays. 

    Next-generation reactor designs could help the industry break out of the old blueprint and get more nuclear power online more quickly, and they’re starting to get closer to becoming reality. 

    There’s a huge variety of proposals when it comes to what’s next for nuclear. Some companies are building smaller reactors, which they say could make it easier to finance new projects and get them done on time. 

    Other companies are focusing on tweaking key technical bits of reactors, using alternative fuels or coolants that help ferry heat out of the reactor core. These changes could help reactors generate electricity more efficiently and safely. 

    Kairos Power was the first US company to receive approval to begin construction on a next-generation reactor to produce electricity. China is emerging as a major center of nuclear development, with the country’s national nuclear company reportedly working on several next-gen reactors. 

    Hyperscale data centers

    This one isn’t quite what I would call a climate technology, but I spent most of last year reporting on the climate and environmental impacts of AI, and the AI boom is deeply intertwined with climate and energy. 

    Data centers aren’t new, but we’re seeing a wave of larger centers being proposed and built to support the rise of AI. Some of these facilities require a gigawatt or more of power—that’s like the output of an entire conventional nuclear power plant, just for one data center. 

    (This feels like a good time to mention that our Breakthrough Technologies list doesn’t just highlight tech that we think will have a straightforwardly positive influence on the world. I think back to our 2023 list, which included mass-market military drones.)

    There’s no denying that new, supersize data centers are an important force driving electricity demand, sparking major public pushback, and emerging as a key bit of our new global infrastructure. 

    This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

    Data centers are amazing. Everyone hates them.

    Behold, the hyperscale data center! 

    Massive structures, with thousands of specialized computer chips running in parallel to perform the complex calculations required by advanced AI models. A single facility can cover millions of square feet, built with millions of pounds of steel, aluminum, and concrete; it can contain hundreds of miles of wiring connecting some hundreds of thousands of high-end GPU chips, and chew through hundreds of megawatts of electricity. These facilities run so hot from all that computing power that their cooling systems are triumphs of engineering complexity in themselves. But the stars of the show are those chips with their advanced processors. A single chip in these vast arrays can cost upwards of $30,000. Racked together and working in concert, they process hundreds of thousands of tokens (the basic units of text an AI model reads and writes) per second. Ooooomph. 
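
    To put those superlatives into rough numbers, here’s a quick back-of-the-envelope sketch. The chip count, per-chip price, and per-chip power draw below are illustrative assumptions chosen to match the scale described above (hundreds of thousands of chips at upwards of $30,000 apiece), not specs for any particular facility.

    ```python
    # Rough back-of-the-envelope math for a hypothetical hyperscale AI data center.
    # Every input here is an illustrative assumption, not a figure for any real facility.

    num_gpus = 200_000             # "hundreds of thousands" of high-end GPU chips
    price_per_gpu_usd = 30_000     # upwards of $30,000 per chip
    power_per_gpu_kw = 1.0         # assumed draw per chip, including cooling overhead

    chip_cost_usd = num_gpus * price_per_gpu_usd           # hardware spend on chips alone
    total_power_mw = num_gpus * power_per_gpu_kw / 1_000   # kilowatts -> megawatts

    print(f"Chips alone: roughly ${chip_cost_usd / 1e9:.0f} billion")
    print(f"Continuous power draw: roughly {total_power_mw:.0f} MW")
    ```

    Even with these rough guesses, the chips alone run into the billions of dollars, and the continuous power draw lands in the hundreds of megawatts.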

    Given the incredible amounts of capital that the world’s biggest companies have been pouring into building data centers, you can make the case (and many people have) that their construction is single-handedly propping up the US stock market and the economy. 

    So important are they to our way of life that none other than the President of the United States himself, on his very first full day in office, stood side by side with the CEO of OpenAI to announce a $500 billion private investment in data center construction.

    Truly, the hyperscale data center is a marvel of our age. A masterstroke of engineering across multiple disciplines. It is nothing short of a technological wonder. 

    People hate them. 

    People hate them in Virginia, which leads the nation in their construction. They hate them in Nevada, where they slurp up the state’s precious water. They hate them in Michigan, and Arizona, and South Dakota, where the good citizens of Sioux Falls hurled obscenities at their city councilmembers following a vote to permit a data center on the city’s northeastern side. They hate them all around the world, it’s true. But they really hate them in Georgia. 

    So, let’s go to Georgia. The purplest of purple states. A state with both woke liberal cities and MAGA-magnified suburbs and rural areas. The state of Stacey Abrams and Newt Gingrich. If there is one thing just about everyone there seemingly agrees on, it’s that they’ve had it with data centers. 

    Last year, the state’s Public Service Commission election became unexpectedly tight, and wound up delivering a stunning upset to incumbent Republican commissioners. Although there were likely shades of national politics at play (voters favored Democrats in an election cycle where many things went that party’s way), the central issue was skyrocketing power bills. And that power bill inflation was oft-attributed to a data center building boom rivaled only by Virginia’s. 

    This boom did not come out of the blue. At one point, Georgia wanted data centers. Or at least, its political leadership did. In 2018 the state’s General Assembly passed legislation that provided data centers with tax breaks for their computer systems and cooling infrastructure, more tax breaks for job creation, and even more tax breaks for property taxes. And then… boom!   

    But things have not played out the way the Assembly and other elected officials may have expected. 

    Journey with me now to Bolingbroke, Georgia. Not far outside of Atlanta, in Monroe County (population 27,954), county commissioners were considering rezoning 900 acres of land to make room for a new data center near the town of Bolingbroke (population 492). Data centers have been popping up all across the state, but especially in areas close to Atlanta. Public opinion is, often enough, irrelevant. In nearby Twiggs County, despite strong and organized opposition, officials decided to allow a 300-acre data center to move forward. But at a packed meeting to discuss the Bolingbroke plans, some 900 people showed up to voice near-unanimous opposition to the proposed data center, according to The Telegraph of Macon, Georgia. Seeing which way the wind was blowing, the Monroe County commission shot the plan down in August last year. 

    The would-be developers of the proposed site had claimed it would bring in millions of dollars for the county. That it would be hidden from view. That it would “uphold the highest environmental standards.” That it would bring jobs and prosperity. Yet still, people came gunning for it. 

    Why!? Data centers have been around for years. So why does everyone hate them all of a sudden? 

    What is it about these engineering marvels, which will supposedly let us build AI that cures all diseases, brings unprecedented prosperity, and even cheats death (if you believe what the AI sellers are selling), that so infuriates their prospective neighbors? 

    There are some obvious reasons. First is just the speed and scale of their construction, which is straining power grids. No one likes to see their power bills go up. The rate hikes that so incensed Georgians come as monthly reminders that the eyesore in your backyard profits California billionaires at your expense, on your grid. In Wyoming, for example, a planned Meta data center will require more electricity than every household in the state, combined. To meet demand from power-hungry data centers, utilities are adding capacity to the grid. But although that added capacity may benefit tech companies, the cost is shared by local consumers.
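
    That Wyoming comparison is easier to believe once you run rough numbers. The sketch below leans on ballpark assumptions (roughly 240,000 households in the state, average US residential use of about 10,500 kilowatt-hours a year, and a facility drawing about a gigawatt around the clock); none of these are official figures.

    ```python
    # Rough comparison: a ~1 GW data center vs. every household in Wyoming.
    # All figures are ballpark assumptions used only for illustration.

    wyoming_households = 240_000          # approximate number of households in the state
    kwh_per_household_per_year = 10_500   # rough US-average residential electricity use
    datacenter_power_gw = 1.0             # assumed round-the-clock load for the facility
    hours_per_year = 8_760

    household_total_twh = wyoming_households * kwh_per_household_per_year / 1e9  # kWh -> TWh
    datacenter_total_twh = datacenter_power_gw * hours_per_year / 1_000          # GWh -> TWh

    print(f"All Wyoming households: ~{household_total_twh:.1f} TWh per year")
    print(f"One 1 GW data center:   ~{datacenter_total_twh:.1f} TWh per year")
    ```

    Under those assumptions, a single gigawatt-scale facility would use more than three times as much electricity in a year as every household in Wyoming combined.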

    Similarly, there are environmental concerns. To meet their electricity needs, data centers often turn to dirty forms of energy. xAI, for example, famously threw a bunch of polluting methane-powered generators at its data center in Memphis. While nuclear energy is oft-bandied about as a greener solution, traditional plants can take a decade or more to build; even new and more nimble reactors will take years to come online. In addition, data centers often require massive amounts of water. But the amount can vary widely depending on the facility, and is often shrouded in secrecy. (A number of states are attempting to require facilities to disclose water usage.) 

    A different type of environmental consequence of data centers is that they are noisy. A low, constant, machine hum. Not just sometimes, but always. 24 hours a day. 365 days a year. “A highway that never stops.” 

    And as to the jobs they bring to communities. Well, I have some bad news there too. Once construction ends, they tend to employ very few people, especially for such resource-intensive facilities. 

    These are all logical reasons to oppose data centers. But I suspect there is an additional, emotional one. And it echoes one we’ve heard before. 

    More than a decade ago, the large tech firms of Silicon Valley began operating buses to ferry workers to their campuses from San Francisco and other Bay Area cities. Like data centers, these buses used shared resources such as public roads without, people felt, paying their fair share. Protests erupted. But while the protests were certainly about shared resource use, they were also about something much bigger. 

    Tech companies, big and small, were transforming San Francisco. The early 2010s were a time of rapid gentrification in the city. And what’s more, the tech industry itself was transforming society. Smartphones were newly ubiquitous. The way we interacted with the world was fundamentally changing, and people were, for the most part, powerless to do anything about it. You couldn’t stop Google. 

    But you could stop a Google bus. 

    You could stand in front of it and block its path. You could yell at the people getting on it. You could yell at your elected officials and tell them to do something. And in San Francisco, people did. The buses were eventually regulated. 

    The data center pushback has a similar vibe. AI, we are told, is transforming society. It is suddenly everywhere. Even if you opt not to use ChatGPT or Claude or Gemini, generative AI is increasingly built into just about every app and service you likely use. People are worried AI will take their jobs in the coming years. Or even kill us all. And for what? So far, the returns have certainly not lived up to the hype.

    You can’t stop Google. But maybe, just maybe, you can stop a Google data center. 

    Then again, maybe not. The tech buses in San Francisco, though regulated, remain commonplace. And the city is more gentrified than ever. Meanwhile, in Monroe County, life goes on. In October, Google confirmed it had purchased 950 acres of land just off the interstate. It plans to build a data center there.