What it’s like to be in the middle of a conspiracy theory (according to a conspiracy theory expert)

On a gloomy Saturday morning this past May, a few months after entire blocks of Altadena, California, were destroyed by wildfires, several dozen survivors met at a local church to vent their built-up frustration, anger, blame, and anguish. As I sat there listening to one horror story after another, I almost felt sorry for the very polite consultants who were being paid to sit there, and who couldn’t do a thing about what they were hearing.

Hosted by a third-party arbiter at the behest of Los Angeles County, the gathering was a listening session in which survivors could “share their experiences with emergency alerts and evacuations” for a report on how the response to the Eaton Fire months earlier had succeeded and failed. 

It didn’t take long to see just how much failure there had been.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


After a small fire started in the bone-dry brush of Pasadena’s Eaton Canyon early in the evening of Tuesday, January 7, 2025, the raging Santa Ana winds blew its embers into nearby Altadena, the historically Black and middle-class town just to the north. By Wednesday morning, much of it was burning. Its residents spent the night making frantic, desperate scrambles to grab whatever they could and get to safety. 

In the aftermath, many claimed that they received no warning to evacuate, saw no first responders battling the blazes, and had little interaction with official personnel. Most were simply left to fend for themselves. 

Making matters worse, while no place is “good” for a wildfire, Altadena was especially vulnerable. It was densely packed with 100-year-old wooden homes, many of which were decades behind on the code upgrades that would have better protected them. It was full of trees and other plants that had dried out during the rain-free winter. Few residents or officials were prepared for the seemingly remote possibility that the fires that often broke out in the mountains nearby would jump into town. As a result, resources were strained to the breaking point, and many homes simply burned freely.

So the people packed into the room that morning had a lot to be angry about. They unloaded their own personal ordeals, the traumas their community had experienced, and even catastrophes they’d heard about secondhand. Each was like a dagger to the heart, met with head-nods and “uh-huhs” from people all going through the same thing.

LA County left us to die because we couldn’t get alerts!

I’m sleeping in my car because I was a renter and have no insurance coverage!

Millions of dollars in aid were raised for us, and we haven’t gotten anything!

Developers are buying up Altadena and pricing out the Black families who made this place!

The firefighting planes were grounded on purpose by Joe Biden so he could fly around LA!

One of these things was definitely not like the others. And I knew why.

Two trains collide

It’s something of a familiar cycle by now: Tragedy hits; rampant misinformation and conspiracy theories follow. Think of the deluge of “false flag” and “staged gun grab” conspiracy theories after mass shootings, or the rampant disinformation around covid-19 and the 2020 election. It’s often even more acute in the case of a natural disaster, when conspiracy theories about what “really” caused the calamity run right into culture-war-driven climate change denialism. Put together, these theories obscure real causes while elevating fake ones, with both sides battling it out on social media and TV. 

I’ve studied these ideas extensively, having spent the last 10 years writing about conspiracy theories and disinformation as a journalist and researcher. I’ve covered everything from the rise of QAnon to whether Donald Trump faked his assassination attempt to the alarming rises in antisemitism, antivaccine conspiracism, and obsession with human trafficking. I’ve written three books, testified to Congress, and even written a report for the January 6th Committee. So this has been my life for quite a while. 

Still, I’d never lived it. Not until the Eaton Fire.

My house, a cottage built in 1925, was one of those that burned back in January. Our only official notification to flee had come at 3:25 a.m., nine hours after the fires started. We grabbed what we could in 10 minutes, I locked our front door, and six hours later, it was all gone. We could have died. Eighteen Altadena residents did die—and all but one were in the area that was warned too late.

Previously in my professional life, I’d always been able to look at the survivors of a tragedy, crying on TV about how they’d lost everything, and think sympathetically but distantly, Oh, those poor people. And soon enough, the conspiracy theories I was following about the incident for work would die down, and then it was no longer in my official purview—I could move on to the next disaster and whatever mess came with it. 

Now I was one of those poor people. The Eaton Fire had changed everything about my life. Would it change everything about my work as well? It felt as though two trains I’d managed to keep on parallel tracks had collided.

For a long time, I’d been able to talk about the conspiracy theories without letting them in. Now the disinformation was in the room with me, and it was about my life. And I wondered: Did I have a duty to journalism to push back on the wild thinking—or on this particular idea that Biden was responsible? 

Or did I have a duty to myself and my sanity to just stay quiet?

Just true enough

In the days following the Eaton Fire, which coincided with another devastating fire in Los Angeles’ Pacific Palisades neighborhood, the Biden plane storyline was just one of countless rumors, false claims, hoaxes, and accusations about what had happened and who was behind them.

Most were culture-war nonsense or political fodder. I also saw clearly fake AI slop (no, the Hollywood sign was not on fire) and bits of TikTok ephemera that could largely be ignored. 

They were from something like an alternate world, one where forest floors hadn’t been “raked” and where incompetent “DEI firefighters” let houses burn while water waited in a giant spigot that California’s governor, Gavin Newsom, refused to “turn on” because he preferred to protect an endangered fish. There were claims that the fires were set on purpose to clear land for the Olympics, or to cover up evidence of human trafficking. Rumors flew that LA had donated all its firefighting money and gear to Ukraine. Some speculated that the fires were started by undocumented immigrants (one was suspected of causing one of the fires but never charged) or “antifa” or Black Lives Matter activists—never mind that one of the most demographically Black areas in the city was wiped out. Or, as always, it was the Jews. In this case, blame fell on a “wealthy Jewish couple” who supposedly owned most of LA’s water and wouldn’t let it go.

These claims originated from the same “just asking questions” influencers who run the same playbook for every disaster. And they spread rapidly through X, a platform where breaking news had been drowned out by hysterical conspiracism. 

But many did have elements of truth to them, surrounded by layers of lies and accusations. A few were just true enough to be impossible to dismiss out of hand, but also not actually true.

So, for the record: Biden did not ground firefighting aircraft in Los Angeles. 

According to fact-checking by both USA Today and Reuters, Biden flew into Los Angeles the day before the Eaton Fire broke out (which was also the same day that the Palisades Fire started, roughly 30 miles to the west), to dedicate two new national monuments. He left two days later. And while there were security measures in place, including flight restrictions over the area where he was staying, firefighting planes simply had to coordinate with air traffic controllers to cross into the closed-off space. 

But when my sort-of neighbor brought up this particular theory that day in May, I wasn’t able to debunk it. For one thing, this was my first time hearing the rumor. But more than that, what could I say that would assuage this man’s anger? And if he wanted to blame Biden for his house burning down, was it really my place to tell him he was wrong—even if he was? 

It’s common for survivors of a disaster to be aware of only parts of the story, struggle to understand the full picture, or fail to fully recollect what happened to them in the moment of survival. Once the trauma ebbs, we’re left looking for answers and clarity and someone who knows what’s going on, because we certainly don’t have a clue. Hoaxes and misinformation stem from anger, confusion, and a lack of clear answers to rapidly evolving questions.  

I can confirm that it was dizzying. Rumors and hoaxes were going around in my personal circles too, even if they weren’t so lurid and even if we didn’t really believe them. Bits of half-heard news circulated constantly in our group texts, WhatsApp chains, Facebook groups, and in-person gatherings. 

There was confusion over who was responsible for the extent of the devastation, genuine anger about purported LA Fire Department budget cuts (though the cuts were far smaller than conspiracists claimed), and fears that a Trump-controlled federal government would abandon California. 

Many of the homes and businesses that we heard had burned down hadn’t, and others that we heard had survived were gone. In an especially heartbreaking early bit of misinformation, a local child-care facility shared a Facebook post stating that FEMA was handing out vouchers to pay 90% of your rent for the next three years—except FEMA doesn’t hand out rent vouchers without an application process. I quietly reached out to the source, who took it down. 

In this information vacuum, and given my work, friends started asking me questions, and answering them took energy and time I didn’t have. Honestly, the “disinformation researcher” was largely just as clueless as everyone else. 

Some of the questions were harmless enough. At one point a friend texted me about a picture from Facebook of a burned Bible page that survived the fire when everything else had turned to ash. It looked too corny and convenient to be real. But I had also found a burned page of Psalms that had survived. I kept it in a ziplock bag because it seemed like the right thing to do. So I told my friend I didn’t know if it was real. I still don’t—but I also still have that ziplock somewhere.

Under attack

As weeks passed, we began to deal with another major issue where truth and misinformation walked together: the reasonable worry that a new president who constantly belittled California would not be willing to provide relief funds.

Recovery depended on FEMA to distribute grants, on the EPA to clear toxic debris, on the Small Business Administration to make loans for rebuilding or repairing homes, on the Army Corps of Engineers to remove the detritus of burned structures, and so much more. How would this square with the new “government efficiency” mandate touting the trillions of dollars and tens of thousands of jobs to be cut from the federal budget? 

Nobody knew—including the many kind government employees who spent months in Altadena helping us recover while silently wondering if they were about to be fired.

Many Altadena residents grew wary of accepting government assistance, particularly in the Black community, which already had a well-earned, deep distrust of the federal government. Many Black residents felt that their needs and stories were being left behind in the recovery, and feared they would be the first to be priced out of whatever Altadena would become.

Outreach in person became critical. I happened to meet the two-star general in charge of the Army Corps’ effort at lunch one day, as he and his team tried to find outside-the-box ways to engage with exhausted and wary residents. He told me they had tried to use technology—texts, emails, clips designed to go viral—but it was too much information, all apparently delivered in the wrong way. Many of the people they needed to reach, particularly older residents, didn’t use social media, weren’t able to communicate well via text, and were easy prey for sophisticated scammers. It was also easy for the real information to get lost as we got bombarded with communications, including many from hoaxers and frauds.

This, too, wasn’t new to me. Many of the movements I’ve covered are awash in grift and worthless wellness products. I know the signs of a scam and a snake-oil salesman. Still, I watched helplessly as my friends and my community, desperate for help, were turned into chum for cash-hungry sharks opening their jaws wide. 

The community was hammered by dodgy contractors and fly-by-night debris removal companies, relief scams and phony grants, and spam calls from “repair companies” and builders. We dealt with scammers, grifters, squatters, thieves, and even tow truck companies that simply stole cars parked outside burned lots and held them for ransom. We were also victimized by looting: Abandoned wires on our lot were stripped for copper, and our neighbor’s unlocked garage was ransacked. After a decade of helping people recognize scams and frauds, there was little I could do when they came for us.

The fear of being conned was easily transmittable, even to me personally. After hearing of friends who couldn’t get a FEMA grant because a previous owner of their home had fraudulently filed an application, we delayed our own appointment with FEMA for weeks. The agency’s call had come so out of the blue that we were convinced it was fake. Maybe my job made me overcautious, or maybe we were just paralyzed by the sheer tonnage of decisions and calls that needed to be handled. Whatever the reason, the fear meant we later had to make multiple calls just to get our meeting rescheduled. It’s a small thing, but when you’re as exhausted and dispirited as we were, there are no small things. 

Contractors for the US Army Corps of Engineers remove hazardous materials from a home destroyed in the Eaton Fire, near a burned-out car.
STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | GETTY IMAGES

Making all this even more frustrating was that the scammers, the people spinning tales of lasers and endangered fish and antifa, were very much ignoring the reality: that our planet is trying to kill us. While federal officials recently made an arrest in the Palisades Fire, the direct causes of that fire and the nearby Eaton Fire may take years of investigation and litigation to be fully known. But even now, it can't reasonably be denied that climate change worsened the winds that made the fires spread so quickly.

The Santa Ana winds bombarding Southern California were among the worst ever to hit the region. Their ferocity drove the embers well beyond the nominal fire danger line, particularly in Altadena. Many landed in brush left brittle and dead by the decades-long drought plaguing California. And there was even more fuel to burn, because the previous two winters had been among the wettest in the region's recent history, spurring growth that then dried out. Such rapid swings between wet and dry or cold and hot have become so common around the world that they even have a name: climate whiplash.

Then there are the conspiracy theory gurus who see all this and make money off it, peddling disinformation on their podcasts and livestreams while blaming everyone and everything but the real causes. Many of these figures have spent decades railing against the very idea that the climate could change. And if it is changing, they claimed, human consumption and urbanization have nothing to do with it. When faced with a disaster that undeniably reflected climate change at work, their business models, which rely on sales of subscriptions and merchandise, demanded that they simply keep denying it.

As more cities and countries deal with “once in a century” climate disasters, I have no doubt that these figures will continue to deflect attention away from human activity. They will use crackpot science, conspiracy theories, politics, and—increasingly—fake videos depicting whatever AI can generate. They will prey on their audiences’ limited understanding of basic science, their inability to perceive how climate and weather differ, and their fears that globalist power brokers will somehow use the weather against them. And their message will spread with little pushback from social media platforms more concerned with virality and shareholder value than truth.

Resisting the temptation

When you cover disinformation and live through an event creating a massive volume of disinformation, it’s like floating outside your body on an operating table as your heart is being worked on, while also being a heart surgeon. I knew I should be trying to help. But I did not have the mental capacity, the time, or, to be honest, the interest in covering what the worst people on the internet were saying about the worst time of my life. I had very real questions about where my family would live. Thinking about my career was not a priority. 

But of course, these experiences cannot now be excised from my career. I've spent a lot of time talking about how trauma influences conspiracism; consider how the isolation and boredom of covid created a new generation of conspiracy theory believers. And now I had my own trauma, and it has been a test of my abilities as a journalist and a thinker to avoid falling into the pit of despair.

At the same time, I have a much deeper understanding of the psychology at work in conspiracy belief. One of the biggest reasons conspiracy theories take off after a disaster is that they serve to make sense out of something that makes no sense. Neighborhoods aren’t supposed to burn down in an era of highly trained firefighters and seemingly fireproof materials. They especially aren’t supposed to burn down in Los Angeles, one of the wealthiest cities on the planet. These were seven- and eight-figure homes going up like matches. There must be a reason, people figured. Someone, or something, must be responsible.

So, as I emerge from the haze to something resembling “normal,” I feel more compassion and understanding for trauma victims who turn to conspiracy theories. Having faced the literal burning down of my life, I get the urge to assign meaning to such a calamity and point a finger at whoever we think did it to us. 

Meanwhile, the people of Altadena and Pacific Palisades continue to slowly put our lives and communities back together. The effects of both our warming planet and our disinformation crisis continue to assert themselves every day. It’s still alluring to look for easy answers in outrageous conspiracy theories, but such answers are not real and offer no actual help—only the illusion of help.

It’s equally tempting for someone who researches and debunks conspiracy theories to mock or belittle the people who believe these ideas. How could anyone be so dumb as to think Joe Biden caused the fire that burned down my home?

I kept my mouth shut that day at the meeting in the church, though, again, I can now sympathize much more deeply with something I’d otherwise think completely inane. 

But even a journalist who lost his house is still a journalist. So I decided early on that what I really needed to do was keep Altadena in the news. I went on TV and radio, blogged, and happily told our story to anyone who asked. I focused on the community, the impact, the people who would be working to recover long after the national spotlight moved to the next shiny object.

If there is a professional lesson to be taken from this nightmare, it might be that the people caught up in tragedies are exactly that: caught up. And those who believe this nonsense find something of value in it. They find hope and comfort and the reassurance that whoever did this to them will get what they deserve. 

I could have done it too, throwing away years of experience to embrace conspiracist nihilism in the face of unspeakable trauma. After all, those poor people going through this weren’t just on my TV. 

They were my friends. They were me. They could be anyone.

Mike Rothschild is a journalist and an expert on the growth and impact of conspiracy theories and disinformation. He has written three books, including The Storm Is Upon Us, about the QAnon conspiracy movement, and Jewish Space Lasers, about the myths around the Rothschild banking family. He also is a frequent expert witness in legal cases involving conspiracy theories and has spoken at colleges and conferences around the country. He lives in Southern California.

Four thoughts from Bill Gates on climate tech

Bill Gates doesn't shy away from his stature in the climate world today, or pretend modesty about it. "Well, who's the biggest funder of climate innovation companies?" he asked a handful of journalists at a media roundtable event last week. "If there's someone else, I've never met them."

The former Microsoft CEO has spent the last decade investing in climate technology through Breakthrough Energy, which he founded in 2015. Ahead of the UN climate meetings kicking off next week, Gates published a memo outlining what he thinks activists and negotiators should focus on and how he’s thinking about the state of climate tech right now. Let’s get into it. 

Are we too focused on near-term climate goals?

One of the central points Gates made in his new memo is that he thinks the world is too focused on near-term emissions goals and national emissions reporting.

So in parallel with the national accounting structure for emissions, Gates argues, we should have high-level climate discussions at events like the UN climate conference. Those discussions should take a global view on how to reduce emissions in key sectors like energy and heavy industry.

“The way everybody makes steel, it’s the same. The way everybody makes cement, it’s the same. The way we make fertilizer, it’s all the same,” he says.

As he noted in one recent essay for MIT Technology Review, he sees innovation as the key to cutting the cost of clean versions of energy, cement, vehicles, and so on. And once products get cheaper, they can see wider adoption.

What’s most likely to power our grid in the future?

“In the long run, probably either fission or fusion will be the cheapest way to make electricity,” he says. (It should be noted that, as with most climate technologies, Gates has investments in both fission and fusion companies through Breakthrough Energy Ventures, so he has a vested interest here.)

He acknowledges, though, that reactors likely won’t come online quickly enough to meet rising electricity demand in the US: “I wish I could deliver nuclear fusion, like, three years earlier than I can.”

He also spoke to China’s leadership in both nuclear fission and fusion energy. “The amount of money they’re putting [into] fusion is more than the rest of the world put together times two. I mean, it’s not guaranteed to work. But name your favorite fusion approach here in the US—there’s a Chinese project.”

Can carbon removal be part of the solution?

I had my colleague James Temple’s recent story on what’s next for carbon removal at the top of my mind, so I asked Gates if he saw carbon credits or carbon removal as part of the problematic near-term thinking he wrote about in the memo.

Gates buys offsets to cancel out his own personal emissions, to the tune of about $9 million a year, he said at the roundtable, but doesn’t expect many of those offsets to make a significant dent in climate progress on a broader scale: “That stuff, most of those technologies, are a complete dead end. They don’t get you cheap enough to be meaningful.

“Carbon sequestration at $400, $200, $100, can never be a meaningful part of this game. If you have a technology that starts at $400 and can get to $4, then hallelujah, let’s go. I haven’t seen that one. There are some now that look like they can get to $40 or $50, and that can play somewhat of a role.”

Will AI be good news for innovation?

During the discussion, I started a tally in the corner of my notebook, adding a tick every time Gates mentioned AI. Over the course of about an hour, I got to six tally marks, and I definitely missed making a few.

Gates acknowledged that AI is going to add electricity demand, a challenge for a US grid that hasn’t seen net demand go up for decades. But so too will electric cars and heat pumps. 

I was surprised at just how positively he spoke about AI’s potential, though:

“AI will accelerate every innovation pipeline you can name: cancer, Alzheimer’s, catalysts in material science, you name it. And we’re all trying to figure out what that means. That is the biggest change agent in the world today, moving at a pace that is very, very rapid … every breakthrough energy company will be able to move faster because of using those tools, some very dramatically.”

I’ll add that, as I’ve noted here before, I’m skeptical of big claims about AI’s potential to be a silver bullet across industries, including climate tech. (If you missed it, check out this story about AI and the grid from earlier this year.) 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

DeepSeek may have found a new way to improve AI’s ability to remember


An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. 

OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek’s new model performs on par with top models on key benchmarks.

But researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves memories. Improving how AI models “remember” information could reduce the computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. 

Currently, most large language models break text down into thousands of tiny units called tokens. This turns the text into representations that models can understand. However, these tokens quickly become expensive to store and compute with as conversations with end users grow longer. When a user chats with an AI for lengthy periods, this challenge can cause the AI to forget things it’s been told and get information muddled, a problem some call “context rot.”

The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found. 
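
To make the trade-off concrete, here is a toy sketch (my own illustration, with assumed numbers; not DeepSeek's actual figures or code) of why rendering text as an image can cost fewer tokens: a vision encoder spends one token per fixed-size patch of the page, however much text that patch contains, while a text tokenizer spends roughly one token per few characters.

```python
# Toy illustration of the text-token vs. vision-token trade-off.
# All numbers here are illustrative assumptions, not DeepSeek's real figures.

def text_token_count(text: str, chars_per_token: float = 4.0) -> int:
    """Rough rule of thumb: ~4 characters per text token in English."""
    return max(1, round(len(text) / chars_per_token))

def vision_token_count(img_width: int, img_height: int, patch: int = 16) -> int:
    """A vision encoder splits the rendered page into fixed-size patches;
    each patch becomes one token, regardless of how much text it holds."""
    return (img_width // patch) * (img_height // patch)

page = "Lorem ipsum dolor sit amet. " * 150   # ~4,200 characters of text
print(text_token_count(page))                  # ~1,050 text tokens
print(vision_token_count(256, 256))            # 256 patch tokens at 256x256
```

Under these assumptions, a page of a few thousand characters costs roughly a thousand text tokens but only a few hundred patch tokens; the real savings depend on the encoder, patch size, and rendering resolution.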

Essentially, the OCR model is a test bed for these new methods that permit more information to be packed into AI models more efficiently. 

Besides using visual tokens instead of just text tokens, the model is built on a type of tiered compression that is not unlike how human memories fade: Older or less critical content is stored in a slightly more blurry form in order to save space. Despite that, the paper’s authors argue, this compressed content can still remain accessible in the background while maintaining a high level of system efficiency.
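
The tiering scheme isn't spelled out here, but the idea can be sketched as follows. The tier thresholds and resolutions below are hypothetical, chosen only to show how token cost shrinks as content ages: recent pages stay at full resolution, while older ones are re-rendered smaller and blurrier, so they occupy fewer patch tokens.

```python
# Hypothetical sketch of tiered "memory fading": older pages are kept at
# lower resolution, so they cost fewer patch tokens. The tiers and sizes
# are invented for illustration, not DeepSeek's published settings.

def patch_tokens(side: int, patch: int = 16) -> int:
    """Token cost of a square page rendered at the given side length."""
    return (side // patch) ** 2

def tiered_cost(page_ages: list[int]) -> int:
    """Total token cost for a context of pages, given each page's age."""
    total = 0
    for age in page_ages:
        if age < 5:            # recent: full resolution
            side = 512
        elif age < 20:         # older: downsampled, blurrier
            side = 256
        else:                  # oldest: heavily compressed
            side = 128
        total += patch_tokens(side)
    return total

print(tiered_cost([25, 25, 10, 10, 3, 3]))   # 2688 tokens with fading
print(6 * patch_tokens(512))                  # 6144 tokens at full resolution
```

The design choice mirrors the blurring metaphor in the paragraph above: the model trades fidelity on old content for room to keep more of it in context.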

Text tokens have long been the default building block in AI systems. Using visual tokens instead is unconventional, and as a result, DeepSeek’s model is quickly capturing researchers’ attention. Andrej Karpathy, the former Tesla AI chief and a founding member of OpenAI, praised the paper on X, saying that images may ultimately be better than text as inputs for LLMs. Text tokens might be “wasteful and just terrible at the input,” he wrote. 

Manling Li, an assistant professor of computer science at Northwestern University, says the paper offers a new framework for addressing the existing challenges in AI memory. “While the idea of using image-based tokens for context storage isn’t entirely new, this is the first study I’ve seen that takes it this far and shows it might actually work,” Li says.

The method could open up new possibilities in AI research and applications, especially in creating more useful AI agents, says Zihan Wang, a PhD candidate at Northwestern University. He believes that since conversations with AI are continuous, this approach could help models remember more and assist users more effectively.

The technique can also be used to produce more training data for AI models. Model developers are currently grappling with a severe shortage of quality text to train systems on. But the DeepSeek paper says that the company’s OCR system can generate over 200,000 pages of training data a day on a single GPU.

The model and paper, however, are only an early exploration of using image tokens rather than text tokens for AI memorization. Li says she hopes to see visual tokens applied not just to memory storage but also to reasoning. Future work, she says, should explore how to make AI’s memory fade in a more dynamic way, akin to how we can recall a life-changing moment from years ago but forget what we ate for lunch last week. Currently, even with DeepSeek’s methods, AI tends to forget and remember in a very linear way—recalling whatever was most recent, but not necessarily what was most important, she says. 

Despite its attempts to keep a low profile, DeepSeek, based in Hangzhou, China, has built a reputation for pushing the frontier in AI research. The company shocked the industry at the start of this year with the release of DeepSeek-R1, an open-source reasoning model that rivaled leading Western systems in performance despite using far fewer computing resources. 

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Just about all businesses these days seem to be pivoting to AI, even when they don’t seem to know exactly why they’re investing in it—or even what it really does. “Optimization,” “scaling,” and “maximizing efficiency” are convenient buzzwords bandied about to describe what AI can achieve in theory, but for most of AI companies’ eager customers, the hundreds of billions of dollars they’re pumping into the industry aren’t adding up. And maybe they never will.

This month’s news doesn’t exactly cast the technology in a glowing light either. A bunch of NGOs and aid agencies are using AI models to generate images of fake suffering people to guilt their Instagram followers. AI translators are pumping out low-quality Wikipedia pages in the languages most vulnerable to going extinct. And thanks to the construction of new AI data centers, people in the neighborhoods that sit in their shadows are being forced into pivots of their own—fighting back against the power blackouts and water shortages the data centers cause. How’s that for optimization?

An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, about layoffs, and about companies’ inability to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

“We will never build a sex robot,” says Mustafa Suleyman

• Balancing humanlike interaction with safety concerns: Suleyman emphasizes that Microsoft’s new Copilot features—including group chat and the “Real Talk” personality—are designed to keep AI as a tool serving humanity rather than a replacement for human connection. The company deliberately avoids building chatbots that encourage romantic or sexual relationships, drawing clear boundaries where others in the industry see market opportunity.
• Personality as craft, not deception: While acknowledging that engaging personalities make AI more useful, Suleyman argues the industry must learn to “sculpt” emotional intelligence carefully.
• Reframing the “digital species” metaphor: Suleyman clarifies that describing AI as a new digital species isn’t endorsing consciousness or rights for machines; rather, it’s a warning about what’s coming that demands proper containment. He insists the goal is keeping AI subordinate to human interests, not granting it autonomy or moral consideration that would distract from protecting actual human rights.

Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called “seemingly conscious artificial intelligence,” or SCAI.

On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose between a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.

I talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be.

One key Copilot update is a group-chat feature that lets multiple people talk to the chatbot at the same time. A big part of the idea seems to be to stop people from falling down a rabbit hole in a one-on-one conversation with a yes-man bot. Another feature, called Real Talk, lets people tailor how much Copilot pushes back on you, dialing down the sycophancy so that the chatbot challenges what you say more often.

Copilot also got a memory upgrade, so that it can now remember your upcoming events or long-term goals and bring up things that you told it in past conversations. And then there’s Mico, an animated yellow blob—a kind of Chatbot Clippy—that Microsoft hopes will make Copilot more accessible and engaging for new and younger users.  

Microsoft says the updates were designed to make Copilot more expressive, engaging, and helpful. But I’m curious how far those features can be pushed without starting down the SCAI path that Suleyman has warned about.  

Suleyman’s concerns about SCAI come at a time when we are starting to hear more and more stories about people being led astray by chatbots that are too engaging, too expressive, too helpful. OpenAI is being sued by the parents of a teenager who they allege was talked into killing himself by ChatGPT. There’s even a growing scene that celebrates romantic relationships with chatbots.

With all that in mind, I wanted to dig a bit deeper into Suleyman’s views. Because a couple of years ago he gave a TED Talk in which he told us that the best way to think about AI is as a new kind of digital species. Doesn’t that kind of hype feed the misperceptions Suleyman is now concerned about?  

In our conversation, Suleyman told me what he was trying to get across in that TED Talk, why he really believes SCAI is a problem, and why Microsoft would never build sex robots (his words). He had a lot of answers, but he left me with more questions.

Our conversation has been edited for length and clarity.

In an ideal world, what kind of chatbot do you want to build? You’ve just launched a bunch of updates to Copilot. How do you get the balance right when you’re building a chatbot that has to compete in a market in which people seem to value humanlike interaction, but you also say you want to avoid seemingly conscious AI?

It’s a good question. With group chat, this will be the first time that a large group of people will be able to speak to an AI at the same time. It really is a way of emphasizing that AIs shouldn’t be drawing you out of the real world. They should be helping you to connect, to bring in your family, your friends, to have community groups, and so on.

That is going to become a very significant differentiator over the next few years. My vision of AI has always been one where an AI is on your team, in your corner.

This is a very simple, obvious statement, but it isn’t about exceeding and replacing humanity—it’s about serving us. That should be the test of technology at every step. Does it actually, you know, deliver on the quest of civilization, which is to make us smarter and happier and more productive and healthier and stuff like that?

So we’re just trying to build features that constantly remind us to ask that question, and remind our users to push us on that issue.

Last time we spoke, you told me that you weren’t interested in making a chatbot that would role-play personalities. That’s not true of the wider industry. Elon Musk’s Grok is selling that kind of flirty experience. OpenAI has said it’s interested in exploring new adult interactions with ChatGPT. There’s a market for that. And yet this is something you’ll just stay clear of?

Yeah, we will never build sex robots. Sad in a way that we have to be so clear about that, but that’s just not our mission as a company. The joy of being at Microsoft is that for 50 years, the company has built, you know, software to empower people, to put people first.

Sometimes, as a result, that means the company moves slower than other startups and is more deliberate and more careful. But I think that’s a feature, not a bug, in this age, when being attentive to potential side effects and longer-term consequences is really important.

And that means what, exactly?

We’re very clear on, you know, trying to create an AI that fosters a meaningful relationship. It’s not that it’s trying to be cold and anodyne—it cares about being fluid and lucid and kind. It definitely has some emotional intelligence.

So where does it—where do you—draw those boundaries?

Our newest chat model, which is called Real Talk, is a little bit more sassy. It’s a bit more cheeky, it’s a bit more fun, it’s quite philosophical. It’ll happily talk about the big-picture questions, the meaning of life, and so on. But if you try and flirt with it, it’ll push back and it’ll be very clear—not in a judgmental way, but just, like: “Look, that’s not for me.”

There are other places where you can go to get that kind of experience, right? And I think that’s just a decision we’ve made as a company.

Is a no-flirting policy enough? Because if the idea is to stop people even imagining an entity, a consciousness, behind the interactions, you could still get that with a chatbot that wanted to keep things SFW. You know, I can imagine some people seeing something that’s not there even with a personality that’s saying, hey, let’s keep this professional.

Here’s a metaphor to try to make sense of it. We hold each other accountable in the workplace. There’s an entire architecture of boundary management, which essentially sculpts human behavior to fit a mold that’s functional and not irritating.

The same is true in our personal lives. The way that you interact with your third cousin is very different to the way you interact with your sibling. There’s a lot to learn from how we manage boundaries in real human interactions.

It doesn’t have to be either a complete open book of emotional sensuality or availability—drawing people into a spiraled rabbit hole of intensity—or, like, a cold dry thing. There’s a huge spectrum in between, and the craft that we’re learning as an industry and as a species is to sculpt these attributes.

And those attributes obviously reflect the values of the companies that design them. And I think that’s where Microsoft has a lot of strengths, because our values are pretty clear, and that’s what we’re standing behind.

A lot of people seem to like personalities. Some of the backlash to GPT-5, for example, was because the previous model’s personality had been taken away. Was it a mistake for OpenAI to have put a strong personality there in the first place, to give people something that they then missed?

No, personality is great. My point is that we’re trying to sculpt personality attributes in a more fine-grained way, right?

Like I said, Real Talk is a cool personality. It’s quite different to normal Copilot. We are also experimenting with Mico, which is this visual character, that, you know, people—some people—really love. It’s much more engaging. It’s easier to talk to about all kinds of emotional questions and stuff.

I guess this is what I’m trying to get straight. Features like Mico are meant to make Copilot more engaging and nicer to use, but it seems to go against the idea of doing whatever you can to stop people thinking there’s something there that you are actually having a friendship with.

Yeah. I mean, it doesn’t stop you necessarily. People want to talk to somebody, or something, that they like. And we know that if your teacher is nice to you at school, you’re going to be more engaged. The same with your manager, the same with your loved ones. And so emotional intelligence has always been a critical part of the puzzle, so it’s not to say that we don’t want to pursue it.

It’s just that the craft is in trying to find that boundary. And there are some things which we’re saying are just off the table, and there are other things which we’re going to be more experimental with. Like, certain people have complained that they don’t get enough pushback from Copilot—they want it to be more challenging. Other people aren’t looking for that kind of experience—they want it to be a basic information provider. The task for us is just learning to disentangle what type of experience to give to different people.

I know you’ve been thinking about how people engage with AI for some time. Was there an inciting incident that made you want to start this conversation in the industry about seemingly conscious AI?

I could see that there was a group of people emerging in the academic literature who were taking the question of moral consideration for artificial entities very seriously. And I think it’s very clear that if we start to do that, it would detract from the urgent need to protect the rights of many humans that already exist, let alone animals.

If you grant AI rights, that implies—you know—fundamental autonomy, and it implies that it might have free will to make its own decisions about things. So I’m really trying to frame a counter to that, which is that it won’t ever have free will. It won’t ever have complete autonomy like another human being.

AI will be able to take actions on our behalf. But these models are working for us. You wouldn’t want a pack of, you know, wolves wandering around that weren’t tame and that had complete freedom to go and compete with us for resources and weren’t accountable to humans. I mean, most people would think that was a bad idea and that you would want to go and kill the wolves.

Okay. So the idea is to stop some movement that’s calling for AI welfare or rights before it even gets going, by making sure that we don’t build AI that appears to be conscious? What about not building that kind of AI because certain vulnerable people may be tricked by it in a way that may be harmful? I mean, those seem to be two different concerns.

I think the test is going to be in the kinds of features the different labs put out and in the types of personalities that they create. Then we’ll be able to see how that’s affecting human behavior.

But is it a concern of yours that we are building a technology that might trick people into seeing something that isn’t there? I mean, people have claimed they’ve seen sentience inside far less sophisticated models than we have now. Or is that just something that some people will always do?

It’s possible. But my point is that a responsible developer has to do our best to try and detect these patterns emerging in people as quickly as possible and not take it for granted that people are going to be able to disentangle those kinds of experiences themselves.

When I read your post about seemingly conscious AI, I was struck by a line that says: “We must build AI for people; not to be a digital person.” It made me think of a TED Talk you gave last year where you say that the best way to think about AI is as a new kind of digital species. Can you help me understand why talking about this technology as a digital species isn’t a step down the path of thinking about AI models as digital persons or conscious entities?

I think the difference is that I’m trying to offer metaphors that make it easier for people to understand where things might be headed, and therefore how to avert that and how to control it.

Okay.

It’s not to say that we should do those things. It’s just pointing out that this is the emergence of a technology which is unique in human history. And if you just assume that it’s a tool or just a chatbot or a dumb— you know, I kind of wrote that TED Talk in the context of a lot of skepticism. And I think it’s important to be clear-eyed about what’s coming so that one can think about the right guardrails.

And yet, if you’re telling me this technology is a new digital species, I have some sympathy for the people who say, well, then we need to consider welfare.

I wouldn’t. [He starts laughing.] Just not in the slightest. No way. It’s not a direction that any of us want to go in.

No, that’s not what I meant. I don’t think chatbots should have welfare. I’m saying I’d have some sympathy for where such people were coming from when they hear, you know, Mustafa Suleyman tell them that this thing he’s building was a new digital species. I’d understand why they might then say that they wanted to stand up for it. I’m saying the words we use matter, I guess.

The rest of the TED Talk was all about how to contain AI and how not to let this species take over, right? That was the whole point of setting it up as, like, this is what’s coming. I mean, that’s what my whole book [The Coming Wave, published in 2023] was about—containment and alignment and stuff like that. There’s no point in pretending that it’s something that it’s not and then building guardrails and boundaries that don’t apply because you think it’s just a tool.

Honestly, it does have the potential to recursively self-improve. It does have the potential to set its own goals. Those are quite profound things. No other technology we’ve ever invented has that. And so, yeah, I think that it is accurate to say that it’s like a digital species, a new digital species. That’s what we’re trying to restrict to make sure it’s always in service of people. That’s the target for containment.

I tried OpenAI’s new Atlas browser but I still don’t know what it’s for

OpenAI rolled out a new web browser last week called Atlas. It comes with ChatGPT built in, along with an agent, so that you can browse, get direct answers, and have automated tasks performed on your behalf all at the same time. 

I’ve spent the past several days tinkering with Atlas. I’ve used it to do all my normal web browsing, and also tried to take advantage of the ChatGPT functions—plus I threw some weird agentic tasks its way to see how it did with those. And my impression is that Atlas is…  fine? But my big takeaway is that it’s pretty pointless for anyone not employed by OpenAI, and that Atlas is little more than cynicism masquerading as software. 

If you want to know why, let’s start by looking at its agentic capabilities—which are really what differentiate it.

When I was browsing Amazon, I asked the Atlas agent to do some shopping for me, using a pre-set prompt of its own suggestion. (“Start a cart with items I’m likely to want based on my browsing here and highlight any active promo codes. Let me review before checkout.”) It picked out a notebook that I’d recently purchased and no longer needed, some deodorant I’d recently purchased and no longer needed, and a vacuum cleaner that I’d considered but decided was too expensive and no longer needed because I bought a cheaper one. 

I would guess that it took 10 minutes or so for it to do all that. I cleaned out my cart and considered myself lucky that it didn’t buy anything.  

When I logged onto Facebook, which is already lousy with all sorts of AI slop, I asked it to create a status update for me. So it dug through my browser history and came back with an incredibly long status. I won’t bore you with all of it (and there was a lot), but here are the highlights from what it suggested: “I dipped into Smartsheet and TeamSnap (because editors juggle rosters too!), flirted with Shopify and Amazon (holiday gift‑shopping? side hustle? you decide), and kept tabs on the news … . Somewhere in there I even remembered to log into Slack, schedule Zoom meetings, and read a few NYTimes and Technology Review pieces. Who says an editor’s life isn’t glamorous? 😊” 

Uh. Okay. I decided against posting that. There were some other equally uninspiring examples as well, but you get the picture. 

Aside from the agent, the other unique feature is having ChatGPT built right into the browser. Notice I said “unique,” not “useful.” I struggled to find any obvious utility in having it right there, versus just going to chatgpt dot com. In some cases, the built-in chatbot was worse and dumber. 

For example, I asked the built-in ChatGPT to summarize an MIT Technology Review article I was reading. Yet instead of answering the question about the page I was on, it referred back to the page I had previously been on when I started the session. Which is to say, it spit back some useless nonsense. Thanks, AI. 

OpenAI is marketing Atlas pretty aggressively when you come to ChatGPT now, suggesting people download it. And it may in fact score a lot of downloads because of that. But without giving people more of a reason to actually switch from more entrenched browsers, like Chrome or Safari, this feels like a real empty salvo in the new browser wars. 

It’s been hard for me to understand why Atlas exists. Who is this browser for, exactly? Who is its customer? The answer I’ve come to is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites; it is the company collecting data about what and how that person is browsing.

This review first appeared in The Debrief, Mat Honan’s weekly subscriber-only newsletter.

An AI app to measure pain is here

How are you feeling?

I’m genuinely interested in the well-being of all my treasured Checkup readers, of course. But this week I’ve also been wondering how science and technology can help answer that question—especially when it comes to pain. 

In the latest issue of MIT Technology Review magazine, Deena Mousa describes how an AI-powered smartphone app is being used to assess how much pain a person is in.

The app, and other tools like it, could help doctors and caregivers. They could be especially useful in the care of people who aren’t able to tell others how they are feeling.

But they are far from perfect. And they open up all kinds of thorny questions about how we experience, communicate, and even treat pain.

Pain can be notoriously difficult to describe, as almost everyone who has ever been asked to will know. At a recent medical visit, my doctor asked me to rank my pain on a scale from 1 to 10. I found it incredibly difficult to do. A 10, she said, meant “the worst pain imaginable,” which brought back unpleasant memories of having appendicitis.

A short while before the problem that brought me in, I’d broken my toe in two places, which had hurt like a mother—but less than appendicitis. If appendicitis was a 10, breaking a toe was an 8, I figured. If that was the case, maybe my current pain was a 6. As a pain score, it didn’t sound as bad as I actually felt. I couldn’t help wondering if I might have given a higher score if my appendix were still intact. I wondered, too, how someone else with my medical issue might score their pain.

In truth, we all experience pain in our own unique ways. Pain is subjective, and it is influenced by our past experiences, our moods, and our expectations. The way people describe their pain can vary tremendously, too.

We’ve known this for ages. In the 1940s, the anesthesiologist Henry Beecher noted that wounded soldiers were much less likely to ask for pain relief than similarly injured people in civilian hospitals. Perhaps they were putting on a brave face, or maybe they just felt lucky to be alive, given their circumstances. We have no way of knowing how much pain they were really feeling.

Given this messy picture, I can see the appeal of a simple test that can score pain and help medical professionals understand how best to treat their patients. That’s what is being offered by PainChek, the smartphone app Deena wrote about. The app works by assessing small facial movements, such as lip raises or brow pinches. A user is then required to fill out a separate checklist of other signs of pain the patient might be displaying. It seems to work well, and it is already being used in hospitals and care settings.

But the app is judged against subjective reports of pain. It might be useful for assessing the pain of people who can’t describe it themselves—perhaps because they have dementia, for example—but it won’t add much to assessments from people who can already communicate their pain levels.

There are other complications. Say a test could spot that a person was experiencing pain. What can a doctor do with that information? Perhaps prescribe pain relief—but most of the pain-relieving drugs we have were designed to treat acute, short-term pain. If a person is grimacing from a chronic pain condition, the treatment options are more limited, says Stuart Derbyshire, a pain neuroscientist at the National University of Singapore.

The last time I spoke to Derbyshire was back in 2010, when I covered work by researchers in London who were using brain scans to measure pain. Fifteen years later, pain-measuring brain scanners have yet to become a routine part of clinical care.

That scoring system was also built on subjective pain reports. Those reports are, as Derbyshire puts it, “baked into the system.” It’s not ideal, but when it comes down to it, we must rely on these wobbly, malleable, and sometimes incoherent self-descriptions of pain. It’s the best we have.

Derbyshire says he doesn’t think we’ll ever have a “pain-o-meter” that can tell you what a person is truly experiencing. “Subjective report is the gold standard, and I think it always will be,” he says.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

What’s next for carbon removal?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the early 2020s, a little-known aquaculture company in Portland, Maine, snagged more than $50 million by pitching a plan to harness nature to fight back against climate change. The company, Running Tide, said it could sink enough kelp to the seafloor to sequester a billion tons of carbon dioxide by this year, according to one of its early customers.

Instead, the business shut down its operations last summer, marking the biggest bust to date in the nascent carbon removal sector.

Its demise was the most obvious sign of growing troubles and dimming expectations for a space that has spawned hundreds of startups over the last few years. A handful of other companies have shuttered, downsized, or pivoted in recent months as well. Venture investments have flagged. And the collective industry hasn’t made a whole lot more progress toward that billion-ton benchmark.

The hype phase is over and the sector is sliding into the turbulent business trough that follows, warns Robert Höglund, cofounder of CDR.fyi, a public-benefit corporation that provides data and analysis on the carbon removal industry.

“We’re past the peak of expectations,” he says. “And with that, we could see a lot of companies go out of business, which is natural for any industry.”

The open question is: If the carbon removal sector is heading into a painful if inevitable clearing-out cycle, where will it go from there? 

The odd quirk of carbon removal is that it never made a lot of sense as a business proposition: It’s an atmospheric cleanup job, necessary for the collective societal good of curbing climate change. But it doesn’t produce a service or product that any individual or organization strictly needs—or is especially eager to pay for.

To date, a number of businesses have voluntarily agreed to buy tons of carbon dioxide that removal companies intend to eventually suck out of the air. But whether those buyers are motivated by sincere climate concerns or by pressure from investors, employees, or customers, corporate do-goodism will only scale any industry so far. 

Most observers argue that whether carbon removal continues to bobble along or transforms into something big enough to make a dent in climate change will depend largely on whether governments around the world decide to pay for a whole, whole lot of it—or force polluters to. 

“Private-sector purchases will never get us there,” says Erin Burns, executive director of Carbon180, a nonprofit that advocates for the removal and reuse of carbon dioxide. “We need policy; it has to be policy.”

What’s the problem?

The carbon removal sector began to scale up in the early part of this decade, as increasingly grave climate studies revealed the need to dramatically cut emissions and suck down vast amounts of carbon dioxide to keep global warming in check.

Specifically, nations may have to continually remove as much as 11 billion tons of carbon dioxide per year by around midcentury to have a solid chance of keeping the planet from warming past 2 °C over preindustrial levels, according to a UN climate panel report in 2022.

A number of startups sprang up to begin developing the technology and building the infrastructure that would be needed, trying out a variety of approaches like sinking seaweed or building carbon-dioxide-sucking factories.

And they soon attracted customers. Companies including Stripe, Google, Shopify, Microsoft, and others began agreeing to pre-purchase tons of carbon removal, hoping to stand up the nascent industry and help offset their own climate emissions. Venture investments also flooded into the space, peaking in 2023 at nearly $1 billion, according to data provided by PitchBook.

From early on, players in the emerging sector sought to draw a sharp distinction between conventional carbon offset projects, which studies have shown frequently exaggerate climate benefits, and “durable” carbon removal that could be relied upon to suck down and store away the greenhouse gas for decades to centuries. There’s certainly a big difference in the price: While buying carbon offsets through projects that promise to preserve forests or plant trees might cost a few dollars per ton, a ton of carbon removal can run hundreds to thousands of dollars, depending on the approach. 

That high price, however, brings big challenges. Removing 10 billion tons of carbon dioxide a year at, say, $300 a ton adds up to a global price tag of $3 trillion—a year. 
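As a sanity check on that figure, the arithmetic is simple. The sketch below uses the round numbers above as illustrative assumptions, not forecasts:

```python
# Back-of-envelope cost of large-scale carbon removal, using the
# round figures cited above (illustrative assumptions, not forecasts).
tons_per_year = 10e9   # 10 billion tons of CO2 removed annually
price_per_ton = 300    # an assumed mid-range price, in US dollars per ton

annual_cost = tons_per_year * price_per_ton
print(f"${annual_cost / 1e12:.1f} trillion per year")  # → $3.0 trillion per year
```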

Which brings us back to the fundamental question: Who should or would foot the bill to develop and operate all the factories, pipelines, and wells needed to capture, move, and bury billions upon billions of tons of carbon dioxide?

The state of the market

The market is still growing, as companies voluntarily purchase tons of carbon removal to make strides toward their climate goals. In fact, sales reached an all-time high in the second quarter of this year, mostly thanks to several massive purchases by Microsoft.

But industry sources fear that demand isn’t growing fast enough to support a significant share of the startups that have formed or even the projects being built, undermining the momentum required to scale the sector up to the size needed by midcentury.

To date, all those hundreds of companies that have spun up in recent years have disclosed deals to sell some 38 million tons of carbon dioxide pulled from the air, according to CDR.fyi. That’s roughly the amount the US pumps out in energy-related emissions every three days. 

And they’ve only delivered around 940,000 tons of carbon removal. The US emits that much carbon dioxide in less than two hours. (Not every transaction is publicly announced or revealed to CDR.fyi, so the actual figures could run a bit higher.)
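For a sense of scale, those comparisons can be checked roughly, assuming US energy-related emissions of about 4.8 billion tons of carbon dioxide per year (an approximate recent figure, not taken from CDR.fyi):

```python
# Rough scale comparison of carbon removal sold and delivered against
# assumed US energy-related CO2 emissions of ~4.8 billion tons per year.
us_annual_tons = 4.8e9
tons_sold = 38e6        # carbon removal under contract, per CDR.fyi
tons_delivered = 940e3  # carbon removal actually delivered

days_of_emissions = tons_sold / (us_annual_tons / 365)
hours_of_emissions = tons_delivered / (us_annual_tons / (365 * 24))
print(f"{days_of_emissions:.1f} days, {hours_of_emissions:.1f} hours")
# → roughly 2.9 days of emissions sold, 1.7 hours delivered
```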

Another concern is that the same handful of big players continue to account for the vast majority of the overall purchases, leaving the health and direction of the market dependent on their whims and fortunes. 

Most glaringly, Microsoft has agreed to buy 80% of all the carbon removal purchased to date, according to CDR.fyi. The second-biggest buyer is Frontier, a coalition of companies that includes Google, Meta, Stripe, and Shopify, which has committed to spend $1 billion.

If you strip out those two buyers, the market shrinks from 16 million tons under contract during the first half of this year to just 1.2 million, according to data provided to MIT Technology Review by CDR.fyi. 

Signs of trouble

Meanwhile, the investor appetite for carbon removal is cooling. For the 12-month period ending in the second quarter of 2025, venture capital investments in the sector fell more than 13% from the same period last year, according to data provided by PitchBook. That tightening funding will make it harder and harder for companies that aren’t bringing in revenue to stay afloat.

Some companies have already shut down, including the carbon removal marketplace Nori; the direct air capture company Noya; and Alkali Earth, which was attempting to use industrial by-products to tie up carbon dioxide.

Still other businesses are struggling. Climeworks, one of the first companies to build direct-air-capture (DAC) factories, announced it was laying off 10% of its staff in May, as it grapples with challenges on several fronts.

The company’s plans to collaborate on the development of a major facility in the US have been at least delayed as the Trump administration has held back tens of millions of dollars in funding granted in 2023 under the Department of Energy’s Regional Direct Air Capture Hubs program. It now appears the government could terminate the funding altogether, along with perhaps tens of billions of dollars’ worth of additional grants previously awarded for a variety of other US carbon removal and climate tech projects.

“Market rumors have surfaced, and Climeworks is prepared for all scenarios,” Christoph Gebald, one of the company’s co-CEOs, said in an earlier statement to MIT Technology Review. “The need for DAC is growing as the world falls short of its climate goals and we’re working to achieve the gigaton capacity that will be needed.”

But purchases from direct-air-capture projects fell nearly 16% last year and account for just 8% of all carbon removal transactions to date. Buyers are increasingly looking to categories that promise to deliver tons faster and for less money, notably including burying biochar or installing carbon capture equipment on bioenergy plants. (Read more in my recent story on that method of carbon removal, known as BECCS, here.)

CDR.fyi recently described the climate for direct air capture in grim terms: “The sector has grown rapidly, but the honeymoon is over: Investment and sales are falling, while deployments are delayed across almost every company.”

“Most DAC companies,” the organization added, “will fold or be acquired.”

What’s next?

In the end, most observers believe carbon removal isn’t really going to take off unless governments bring their resources and regulations to bear. That could mean making direct purchases, subsidizing these sectors, or getting polluters to pay the costs to do so—for instance, by folding carbon removal into market-based emissions reductions mechanisms like cap-and-trade systems. 

More government support does appear to be on the way. Notably, the European Commission recently proposed allowing “domestic carbon removal” within its EU Emissions Trading System after 2030, integrating the sector into one of the largest cap-and-trade programs. The system forces power plants and other polluters in member countries to increasingly cut their emissions or pay for them over time, as the cap on pollution tightens and the price on carbon rises. 

That could create incentives for more European companies to pay direct-air-capture or bioenergy facilities to draw down carbon dioxide as a means of helping them meet their climate obligations.

There are also indications that the International Civil Aviation Organization, a UN organization that establishes standards for the aviation industry, is considering incorporating carbon removal into its market-based mechanism for reducing the sector’s emissions. That might take several forms, including allowing airlines to purchase carbon removal to offset their use of traditional jet fuel or requiring the use of carbon dioxide obtained through direct air capture in some share of sustainable aviation fuels.

Meanwhile, Canada has committed to spend $10 million on carbon removal and is developing a protocol to allow direct air capture in its national offsets program. And Japan will begin accepting several categories of carbon removal in its emissions trading system.

Despite the Trump administration’s efforts to claw back funding for the development of carbon-sucking projects, the US does continue to subsidize storage of carbon dioxide, whether it comes from power plants, ethanol refineries, direct-air-capture plants, or other facilities. The so-called 45Q tax credit, which is worth up to $180 a ton, was among the few forms of government support for climate-tech-related sectors that survived in the 2025 budget reconciliation bill. In fact, the subsidies for putting carbon dioxide to other uses increased.

Even in the current US political climate, Burns is hopeful that local or federal legislators will continue to enact policies that support specific categories of carbon removal in the regions where they make the most sense, because the projects can provide economic growth and jobs as well as climate benefits.

“I actually think there are lots of models for what carbon removal policy can look like that aren’t just things like tax incentives,” she says. “And I think that this particular political moment gives us the opportunity in a unique way to start to look at what those regionally specific and pathway specific policies look like.”

The dangers ahead

But even if more nations do provide the money or enact the laws necessary to drive the business of durable carbon removal forward, there are mounting concerns that a sector conceived as an alternative to dubious offset markets could increasingly come to replicate their problems.

Various incentives are pulling in that direction.

Financial pressures are building on suppliers to deliver tons of carbon removal. Corporate buyers are looking for the fastest and most affordable way of hitting their climate goals. And the organizations that set standards and accredit carbon removal projects often earn more money as the volume of purchases rises, creating clear conflicts of interest.

Some of the same carbon registries that have long signed off on carbon offset projects, including Verra and Gold Standard, have begun creating standards or issuing credits for various forms of carbon removal.

“Reliable assurance that a project’s declared ton of carbon savings equates to a real ton of emissions removed, reduced, or avoided is crucial,” Cynthia Giles, a senior EPA advisor under President Biden, and Cary Coglianese, a law professor at the University of Pennsylvania, wrote in a recent editorial in Science. “Yet extensive research from many contexts shows that auditors selected and paid by audited organizations often produce results skewed toward those entities’ interests.”

Noah McQueen, the director of science and innovation at Carbon180, has stressed that the industry must strive to counter the mounting credibility risks, noting in a recent LinkedIn post: “Growth matters, but growth without integrity isn’t growth at all.”

In an interview, McQueen said that heading off the problem will require developing and enforcing standards to truly ensure that carbon removal projects deliver the climate benefits promised. McQueen added that to gain trust, the industry needs to earn buy-in from the communities in which these projects are built and avoid the environmental and health impacts that power plants and heavy industry have historically inflicted on disadvantaged communities.

Getting it right will require governments to take a larger role in the sector than just subsidizing it, argues David Ho, a professor at the University of Hawaiʻi at Mānoa who focuses on ocean-based carbon removal.

He says there should be a massive, multinational research drive to determine the most effective ways of mopping up the atmosphere with minimal environmental or social harm, likening it to a Manhattan Project (minus the whole nuclear bomb bit).

“If we’re serious about doing this, then let’s make it a government effort,” he says, “so that you can try out all the things, determine what works and what doesn’t, and you don’t have to please your VCs or concentrate on developing [intellectual property] so you can sell yourself to a fossil-fuel company.”

Ho adds that there’s a moral imperative for the world’s historically biggest climate polluters to build and pay for the carbon-sucking and storage infrastructure required to draw down billions of tons of greenhouse gas. That’s because the world’s poorest, hottest nations, which have contributed the least to climate change, will nevertheless face the greatest dangers from intensifying heat waves, droughts, famines, and sea-level rise.

“It should be seen as waste management for the waste we’re going to dump on the Global South,” he says, “because they’re the people who will suffer the most from climate change.”

Correction (October 24): An earlier version of this article referred to Noya as a carbon removal marketplace. It was a direct air capture company.

This startup is about to conduct the biggest real-world test of aluminum as a zero-carbon fuel

The crushed-up soda can disappears in a cloud of steam and—though it’s not visible—hydrogen gas. “I can just keep this reaction going by adding more water,” says Peter Godart, squirting some into the steaming beaker. “This is room-temperature water, and it’s immediately boiling. Doing this on your stove would be slower than this.” 

Godart is the founder and CEO of Found Energy, a startup in Boston that aims to harness the energy in scraps of aluminum metal to power industrial processes without fossil fuels. Since 2022, the company has worked to develop ways to rapidly release energy from aluminum on a small scale. Now it’s just switched on a much larger version of its aluminum-powered engine, which Godart claims is the largest aluminum-water reactor ever built. 

Early next year, it will be installed to supply heat and hydrogen to a tool manufacturing facility in the southeastern US, using the aluminum waste produced by the plant itself as fuel. (The manufacturer did not want to be named until the project is formally announced.)

If everything works as planned, this technology, which uses a catalyst to unlock the energy stored within aluminum metal, could transform a growing share of aluminum scrap into a zero-carbon fuel. The high heat generated by the engine could be especially valuable to reduce the substantial greenhouse-gas emissions generated by industrial processes, like cement production and metal refining, that are difficult to power with electricity directly.

“We invented the fuel, which is a blessing and a curse,” says Godart, surrounded by the pipes and wires of the experimental reactor. “It’s a huge opportunity for us, but it also means we do have to develop all of the systems around us. We’re redefining what even is an engine.”

Engineers have long eyed using aluminum as a fuel thanks to its superior energy density. Once it has been refined and smelted from ore, aluminum metal contains more than twice as much energy as diesel fuel by volume and almost eight times as much as hydrogen gas. When it reacts with oxygen in water or air, it forms aluminum oxides. This reaction releases heat and hydrogen gas, which can be tapped for zero-carbon power.
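The underlying chemistry can be sketched with basic stoichiometry. Assuming the reaction runs all the way to aluminum hydroxide, the product described later in this story, each kilogram of aluminum yields a fixed amount of hydrogen; real-world output depends on how completely the catalyst keeps the metal reacting:

```python
# Hydrogen yield from the aluminum-water reaction:
#   2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2
# A stoichiometric sketch: 3 mol of H2 are released per 2 mol of Al.
M_AL = 26.98   # molar mass of aluminum, g/mol
M_H2 = 2.016   # molar mass of hydrogen gas, g/mol

h2_per_kg_al = (3 * M_H2) / (2 * M_AL)  # kg of H2 per kg of Al
print(f"~{h2_per_kg_al * 1000:.0f} g of hydrogen per kg of aluminum")  # → ~112 g
```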

Liquid metal

The trouble with aluminum as a fuel—and the reason your soda can doesn’t spontaneously combust—is that as soon as the metal starts to react, an oxidized layer forms across its surface that prevents the rest of it from reacting. It’s like a fire that puts itself out as it generates ash. “People have tried it and abandoned this idea many, many times,” says Godart.

Some believe using aluminum as a fuel remains a fool’s errand. “This potential use of aluminum crops up every few years and has no possibility of success even if aluminum scrap is used as the fuel source,” says Geoff Scamans, a metallurgist at Brunel University of London who spent a decade working on using aluminum to power vehicles in the 1980s. He says the aluminum-water reaction isn’t efficient enough for the metal to make sense as a fuel given how much energy it takes to refine and smelt aluminum from ore to begin with: “A crazy idea is always a crazy idea.”

But Godart believes he and his company have found a way to make it work. “The real breakthrough was thinking about catalysis in a different way,” he says: Instead of trying to speed up the reaction by bringing water and aluminum together onto a catalyst, they “flipped it around” and “found a material that we could actually dissolve into the aluminum.”

Peter Godart holds up two glass jars, one with metal spheres and the other with flat metal shapes.

JAMES DINNEEN

The liquid metal catalyst at the heart of the company’s approach “permeates the microstructure” of the aluminum, says Godart. As the aluminum reacts with water, the catalyst forces the metal to froth and split open, exposing more unreacted aluminum to the water. 

The composition of the catalyst is proprietary, but Godart says it is a “low-melting-point liquid metal that’s not mercury.” His dissertation research focused on using a liquid mixture of gallium and indium as the catalyst, and he says the principle behind the current material is the same.

During a visit in early October, Godart demonstrated the central reaction in the Found R&D lab, which after the company’s $12 million seed round last year now fills the better part of two floors of an industrial building in Boston’s Charlestown neighborhood. Using a pair of tongs to avoid starting the reaction with the moisture on his fingers, he placed a pellet of aluminum treated with the secret catalyst in a beaker and then added water. Immediately, the metal began to bubble with hydrogen. Then the water steamed away, leaving behind a frothing gray mass of aluminum hydroxide.

“One of the impediments to this technology taking off is that [the aluminum-water reaction] was just too sluggish,” says Godart. “But you can see here we’re making steam. We just made a boiler.”

From Europa to Earth

Godart was a scientist at NASA when he first started thinking about fresh ways to unlock the energy stored in aluminum. He was working on building aluminum robots that could consume themselves for fuel when roving on Jupiter’s icy moon Europa. But that work was cut short when Congress reduced funding for the mission.

“I was sort of having this little mini crisis where I was like, I need to do something about climate change, about Earth problems,” says Godart. “And I was like, you know—I bet this aluminum technology would be even better for Earth applications.” After completing a dissertation on aluminum fuels at MIT, he started Found Energy in his house in Cambridge in 2022 (the next year, he earned a place on MIT Technology Review’s annual 35 Innovators Under 35 list).

Until this year, the company was working at a tiny scale, tweaking the catalyst and testing different conditions within a small 10-kilowatt reactor to make the reaction release more heat and hydrogen more quickly. Then, in January, it began designing an engine that’s 10 times larger, big enough to supply a useful amount of power for industrial processes beyond the lab.

This larger engine took up most of the lab on the second floor. The reactor vessel resembled a water boiler turned on its side, with piping and wires connected to monitoring equipment that took up almost as much space as the engine itself. On one end, there was a pipe to inject water and a piston to deliver pellets of aluminum fuel into the reactor at variable rates. On the other end, outflow pipes carried away the reaction products: steam, hydrogen gas, aluminum hydroxide, and the recovered catalyst. Godart says none of the catalyst is lost in the reaction, so it can be used again to make more fuel.

The company first switched on the engine to begin testing in July. In September, it managed to power it up to its targeted power of 100 kilowatts—roughly as much as can be supplied by the diesel engine in a small pickup truck. In early 2026, it plans to install the 100-kilowatt engine to supply heat and hydrogen to the tool manufacturing facility. This pilot project is meant to serve as the proof of concept needed to raise the money for a 1-megawatt reactor, 10 times larger again.

The initial pilot will use the engine to supply hot steam and hydrogen. But the energy released in the reactor could be put to use in a variety of ways across a range of temperatures, according to Godart. The hot steam could spin a turbine to produce electricity, or the hydrogen could produce electricity in a fuel cell. By burning the hydrogen within the steam, the engine can produce superheated steam as hot as 1,300 °C, which could be used to generate electricity more efficiently or refine chemicals. Burning the hydrogen alone could generate temperatures of 2,400 °C, hot enough to make steel.

Picking up scrap

Godart says he and his colleagues hope the engine will eventually power many different industrial processes, but the initial target is the aluminum refining and recycling industry itself, as it already handles scrap metal and aluminum oxide supply chains. “Aluminum recyclers are coming to us, asking us to take their aluminum waste that’s difficult to recycle and then turn that into clean heat that they can use to re-melt other aluminum,” he says. “They are begging us to implement this for them.”

Citing nondisclosure agreements, he wouldn’t name any of the companies offering up their unrecyclable aluminum, which he says is something of a “dirty secret” for an industry that’s supposed to be recycling all it collects. But estimates from the International Aluminium Institute, an industry group, suggest that globally a little over 3 million metric tons of aluminum collected for recycling currently goes unrecycled each year; another 9 million metric tons isn’t collected for recycling at all or is incinerated with other waste. Together, that’s a little under a third of the estimated 43 million metric tons of aluminum scrap that currently gets recycled each year.
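Those scrap figures can be tallied quickly. Using the International Aluminium Institute estimates cited above, in million metric tons per year:

```python
# Tallying the aluminum scrap figures cited above (International
# Aluminium Institute estimates, million metric tons per year).
collected_but_unrecycled = 3   # collected for recycling, not recycled
never_collected = 9            # not collected, or incinerated with waste
recycled = 43                  # scrap currently recycled each year

unused = collected_but_unrecycled + never_collected
print(f"{unused} Mt unused, or {unused / recycled:.0%} of the {recycled} Mt recycled")
# → 12 Mt unused, or 28% of the 43 Mt recycled
```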

Even if all that unused scrap was recovered for fuel, it would still supply only a fraction of the overall industrial demand for heat, let alone the overall industrial demand for energy. But the plan isn’t to be limited by available scrap. Eventually, Godart says, the hope is to “recharge” the aluminum hydroxide that comes out of the reactor by using clean electricity to convert it back into aluminum metal and react it again. According to the company’s estimates, this “closed loop” approach could supply all global demand for industrial heat by using and reusing a total of around 300 million metric tons of aluminum—around 4% of Earth’s abundant aluminum reserves. 

However, all that recharging would require a lot of energy. “If you’re doing that, [aluminum fuel] is an energy storage technology, not so much an energy providing technology,” says Jeffrey Rissman, who studies industrial decarbonization at Energy Innovation, a think tank in California. As with other forms of energy storage like thermal batteries or green hydrogen, he says, that could still make sense if the fuel can be recharged using low-cost, clean electricity. But that will be increasingly hard to come by amid the scramble for clean power for everything from AI data centers to heat pumps.

Despite these obstacles, Godart is confident his company will find a way to make it work. The existing engine may already be able to squeeze out more power from aluminum than anticipated. “We actually believe this can probably do half a megawatt,” he says. “We haven’t fully throttled it.”

James Dinneen is a science and environmental journalist based in New York City.