This scientist rewarmed and studied pieces of his friend’s cryopreserved brain

L. Stephen Coles’s brain sits cushioned in a vat at a storage facility in Arizona. It has been held there at around −146 °C for over a decade, largely undisturbed.

That is, apart from the time, a little over a year ago, when scientists slowly lifted the brain to take photos of it. Years before, the team had removed tiny pieces of it to send to Coles’s friend. Coles, a researcher who studied aging, was interested in cryonics—the long-term storage of human bodies and brains in the hope that they might one day be brought back to life. Before he died, he asked cryobiologist Greg Fahy to study the effects of the preservation procedure on his brain. Coles was especially curious about whether his cooled brain would crack, says Fahy.

Coles’s brain was preserved shortly after he died in 2014, but Fahy has only recently got around to analyzing those samples. He says that Coles’s brain is “astonishingly well preserved.”

“We can see every detail [in the structure of the brain biopsies],” says Fahy, who is chief scientific officer at biotech companies Intervene Immune and 21st Century Medicine (where he is also executive director). He hopes this means that Coles’s brain still stands a chance of reanimation at some point in the future.

Other cryobiologists are less optimistic. “This brain is not alive,” says John Bischof, who works on ways to cryopreserve human organs at the University of Minnesota.

Still, Fahy’s research could help provide a tool to neuroscientists looking for new ways to study the brain. And while human reanimation after cryopreservation may be the stuff of science fiction, using the technology to preserve organs for transplantation is within reach.

Banking a brain

Coles, a gerontologist who spent the latter part of his career studying human longevity, opted to have his brain cryogenically preserved when he died of pancreatic cancer.

After he was declared dead, Coles’s body was kept at a low temperature while he was transferred to Alcor, a cryonics facility in Arizona. His head was removed from his body, and a team perfused his brain with “cryoprotective” chemicals that would prevent it from freezing. They then removed it from his skull and cooled it to −146 °C.

Coles had another request. As a scientist, he wanted his cryopreserved brain to be studied. Hundreds of people have opted to have their brains—with or without the rest of their bodies—stored at cryonic facilities (the remains of 259 individuals are currently stored as either whole bodies or heads at Alcor). But scientists know very little about what has happened to those brains, and there’s no evidence to suggest they could be revived. Coles had met Fahy through their shared interest in longevity, and he asked him to investigate.

“He thought that if he had himself cryopreserved, we could learn from his brain whether cracking was going to happen or not,” says Fahy. That’s what typically happens when organs are put into liquid nitrogen at −196 °C, he says. The extreme cooling creates “tension in the system,” he says. “If you tap it, it’ll just shatter.” This cracking is less likely at the slightly warmer temperatures used for preservation. 

Fahy was involved from the time the samples were taken.

“We had Greg Fahy on the phone coordinating the whole thing, [including] where the biopsies were taken,” says Nick Llewellyn, who oversees research at Alcor. (Llewellyn was not at Alcor at the time but has discussed the procedure with his colleagues.) The biopsied samples were stored in liquid nitrogen and earmarked for Fahy. The rest of the brain was cooled and kept in a temperature-controlled storage container at Alcor.

Bouncing back

It wasn’t until years later that Fahy got around to studying those biopsies. He was interested in how the cryoprotectant—which is toxic—might have affected the brain cells. Previous research has shown that flooding tissues with cryoprotectant can distort the structure of cells, essentially squashing them.

It’s one of the many challenges facing cryobiologists interested in storing human tissues at very low temperatures. While the vitrification of eggs and embryos—which cools them to −196 °C and essentially turns them to glass—has become relatively routine (thanks in part to Fahy’s own work on mouse embryos back in the 1980s), preserving whole organs this way is much harder. It is difficult to cool bigger objects in a uniform way, and they are prone to damaging ice crystal formation, even when cryoprotectants are used, as well as cracking.

Fahy found that when he rewarmed and rehydrated Coles’s brain cells, their structure seemed to bounce back to some degree. Fahy demonstrated the effect over a Zoom call: “It looks like this,” he said with his hands as if in prayer, “and it goes back to this,” he added, connecting his forefingers and thumbs to create a triangle shape.

The structure of the tissue looks pretty intact, too, to him at least, though he admits a purist expecting a pristine structure would be disappointed. He and his colleagues have been able to see remarkable details in the cells and their component parts. “There’s nothing we don’t see,” says Fahy, who has shared his results, which have not yet been peer reviewed, on the preprint server bioRxiv. “It seems that [by taking the cryogenic approach] you can preserve everything.”

As for the cracking, “from what I was told, no cracks were observed [by the team that initially preserved the brain],” says Fahy. The team at Alcor took photographs of the brain when they took the biopsies, but the images were later lost due to a server malfunction, he says. In the more recent photos, the brain is covered in a layer of frost, which makes it impossible to see if there are any cracks, he adds. Attempts to remove the frost might damage the brain, so the team has decided to leave it alone, he says.

Back to life?

Fahy and his colleagues used chemicals to “fix” Coles’s brain samples once they had been rewarmed. That process is typically used to stop fresh tissue samples from decaying, but it also effectively kills them.

But he thinks his results suggest that it might be possible to cryopreserve small pieces of brain tissue and reanimate them to learn more about how they work. Functional recovery seems to be possible in mice—a few weeks ago a team in Germany showed that they were able to revive brain slices that had been stored at −196 °C. Those brain samples showed electrical activity after being cooled and rewarmed.

If cryobiologists can achieve the same feat with human brain samples, those samples could provide neuroscientists with new insights into how living brains work.

Brain cryopreservation “can capture a little bit more of the complexities of the brain,” says Shannon Tessier, a cryobiologist at Massachusetts General Hospital who is developing technologies to preserve hearts, livers, and kidneys for transplantation. “[Being] able to use human brains from deceased individuals [could] add another layer to the research tool kit,” she says.

And Fahy’s paper shows “what happens when we try and vitrify a one-liter, dense, massive goop,” says Matthew Powell-Palm, a cryobiologist at Texas A&M University. “We now have a strong indication that quite large [tissues and organs] can be vitrified by perfusion [without forming too much ice],” he says.

All of the scientists I spoke to, including Fahy, are also working on ways to cool and preserve organs for transplantation. These are in short supply partly because once an organ is removed from a donor, it usually must be transplanted into its recipient within a matter of hours. 

Cryopreservation could buy enough time to make use of more organs, find better organ-donor matches, and potentially even prepare recipients’ immune systems and save them from a lifetime of immunosuppressant drugs, says Bischof, who has also been developing new technologies for organ cryopreservation.

Bischof, Fahy, and others have made huge strides in their attempts so far, and they have managed to remove, cryopreserve, and transplant organs in rabbits and rats, for example. “We’re at the cusp of human-scale organ cryopreservation,” says Bischof.

But when it comes to preserving brains, donation isn’t the aim. Coles had hoped to be reanimated—a far more ambitious goal that hinges on the ability to restore brain function.

Brain reanimation

Fahy acknowledges that while the structure of Coles’s brain samples did bounce back, there is no evidence to suggest the cells could be brought back to life and regain electrical activity and a functioning metabolism. “Restoring it to function … that’s a whole other story,” he says.

But he thinks that successful cryopreservation of the brain “is the gateway to human suspended animation, which [could allow] us to get to the stars someday.” Figuring out human preservation would also allow people to avoid death through what he calls “medical time travel”—journeying to an unspecified time in the future when science will have found a cure for whatever was due to kill that person. “That would be an ultimate goal to pursue,” he says.

“I put the chances [of brain reanimation] at pretty low,” says Alcor’s own Llewellyn. “The kind of technology we need is practically unfathomable.”

The brains already in storage at Alcor and other facilities have been preserved in ways that “have not been validated to work for reanimation,” says Tessier. An expectation that they’ll one day be brought back to life in some form is “quite a jump of faith and hope that’s not based on science,” she says.

As Powell-Palm puts it: “There are so many ways in which those neurons could be toast.”

The Bay Area’s animal welfare movement wants to recruit AI

In early February, animal welfare advocates and AI researchers gathered in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. Yellow and red canopies billowed overhead, Persian rugs blanketed the floor, and mosaic lamps glowed beside potted plants. 

In the common area, a wildlife advocate spoke passionately to a crowd lounging in beanbags about a form of rodent birth control that could manage rat populations without poison. In the “Crustacean Room,” a dozen people sat in a circle, debating whether the sentience of insects could tell us anything about the inner lives of chatbots. In front of the “Bovine Room” stood a bookshelf stacked with copies of Eliezer Yudkowsky’s If Anyone Builds It, Everyone Dies, a manifesto arguing that AI could wipe out humanity.

The event was hosted by Sentient Futures, an organization that believes the future of animal welfare will depend on AI. Like many Bay Area denizens, the attendees were decidedly “AGI-pilled”—they believe that artificial general intelligence, powerful AI that can compete with humans on most cognitive tasks, is on the horizon. If that’s true, they reason, then AI will likely prove key to solving society’s thorniest problems—including animal suffering.

To be clear, experts still fiercely debate whether today’s AI systems will ever achieve human- or superhuman-level intelligence, and it’s not clear what will happen if they do. But some conference attendees envision a possible future in which it is AI systems, and not humans, who call the shots. Eventually, they think, the welfare of animals could hinge on whether we’ve trained AI systems to value animal lives. 

“AI is going to be very transformative, and it’s going to pretty much flip the game board,” said Constance Li, founder of Sentient Futures. “If you think that AI will make the majority of decisions, then it matters how they value animals and other sentient beings”—those that can feel and, therefore, suffer.

Like Li, many summit attendees have been committed to animal welfare since long before AI came into the picture. But they’re not the types to donate a hundred bucks to an animal shelter. Instead of focusing on local actions, they prioritize larger-scale solutions, such as reducing factory farming by promoting cultivated meat, which is grown in a lab from animal cells. 

The Bay Area animal welfare movement is closely linked to effective altruism, a philanthropic movement committed to maximizing the amount of good one does in the world—indeed, many conference attendees work for organizations funded by effective altruists. That philosophy might sound great on paper, but “maximizing good” is a tricky puzzle that might not admit a clear solution. The movement has been widely criticized for some of its conclusions, such as encouraging people to work in exploitative industries in order to maximize their charitable donations, and ignoring present-day harms in favor of issues that could cause suffering for large numbers of people who haven’t been born yet. Critics also argue that effective altruists neglect the importance of systemic issues such as racism and economic exploitation and overlook the insights that marginalized communities might have into the best ways to improve their own lives.

When it comes to animal welfare, this exactingly utilitarian approach can lead to some strange conclusions. For example, some effective altruists say it makes sense to commit significant resources to improving the welfare of insects and shrimp because they exist in such staggering numbers, even though they may not have much individual capacity for suffering. 

Now the movement is sorting out how AI fits in. At the summit, Jasmine Brazilek, cofounder of a nonprofit called Compassion in Machine Learning, opened her sticker-stamped laptop to pull up a benchmark she devised to measure how LLMs reason about animal welfare. A cloud security engineer turned animal advocate, she’d flown in from La Paz, Mexico, where she runs her nonprofit with a handful of volunteers and a shoestring budget. 

Brazilek urged the AI researchers in the room to train their models with synthetic documents that reflect concern for animal welfare. “Hopefully, future superintelligent systems consider nonhuman interest, and there is a world where AI amplifies the best of human values and not the worst,” she said. 

The power of the purse 

The technologically inclined side of the animal welfare movement has faced some major setbacks in recent years. Dreams of transitioning people away from a diet dependent on factory farming have been dampened by developments such as the decimation of the plant-based-meat company Beyond Meat’s stock price and the passage of laws banning cultivated meat in several US states.

AI has injected a shot of optimism. Like much of Silicon Valley, many attendees at the summit subscribe to the idea that AI might dramatically increase their productivity—though their goal is not to maximize their seed round but, rather, to prevent as much animal suffering as possible. Some brainstormed how to use Claude Code and custom agents to handle the coding and administrative tasks in their advocacy work. Others pitched the idea of developing new, cheaper methods for cultivating meat using scientific AI tools such as AlphaFold, which aids in molecular biology research by predicting the three-dimensional structures of proteins.

But the real talk of the event was a flood of funding that advocates expect will soon be committed to animal welfare charities—not by individual megadonors, but by AI lab employees. 

Much of the funding for the farm animal welfare movement, which includes nonprofits advocating for improved conditions on farms, promoting veganism, and endorsing cultivated meat, comes from people in the tech industry, says Lewis Bollard, the managing director of the farm animal welfare fund at Coefficient Giving, a philanthropic funder that used to be called Open Philanthropy. Coefficient Giving is backed by Facebook cofounder Dustin Moskovitz and his wife, Cari Tuna, who are among a handful of Silicon Valley billionaires who embrace effective altruism.

“This has just been an area that was completely neglected by traditional philanthropies,” such as the Gates Foundation and the Ford Foundation, Bollard says. “It’s primarily been people in tech who have been open to [it].”

The next generation of big donors, Bollard expects, will be AI researchers—particularly those who work at Anthropic, the AI lab behind the chatbot Claude. Anthropic’s founding team also has connections to the effective altruism movement, and the company has a generous donation matching program. In February, Anthropic’s valuation reached $380 billion, and it gave employees the option to cash in on their equity, so some of that money could soon be flowing into charitable coffers.

The prospect of new funding sustained a constant buzz of conversation at the summit. Animal welfare advocates huddled in the “Arthropod Room” and scrawled big dollar figures and catchy acronyms for projects on a whiteboard. One person pitched a $100 million animal super PAC that would place staffers with Congress members and lobby for animal welfare legislation. Some wanted to start a media company that creates AI-generated content on TikTok promoting veganism. Others spoke about placing animal advocates inside AI labs.

“The amount of new funding does give us more confidence to be bolder about things,” said Aaron Boddy, cofounder of the Shrimp Welfare Project, an organization that aims to reduce the suffering of farmed shrimp through humane slaughter, among other initiatives. 

The question of AI welfare

But animal welfare was only half the focus of the Sentient Futures summit. Some attendees probed far headier territory. They took seriously the controversial idea that AI systems might one day develop the capacity to feel and therefore suffer, and they worry that this future AI suffering, if ignored, could constitute a moral catastrophe.

AI suffering is a tricky research problem, not least because scientists don’t yet have a solid grip on why humans and other animals are sentient. But at the summit, a niche cadre of philosophers, largely funded by the effective altruism movement, and a handful of freewheeling academics grappled with the question. Some presented their research on using LLMs to evaluate whether other LLMs might be sentient. On Debate Night, attendees argued about whether we should ironically call sentient AI systems “clankers,” a derogatory term for robots drawn from the Star Wars franchise, asking if the robot slur could shape how we treat a new kind of mind.

“It doesn’t matter if it’s a cow or a pig or an AI, as long as they have the capacity to feel happiness or suffering,” says Li. 

In some ways, bringing AI sentience into an animal welfare conference isn’t as strange a move as it might seem. Researchers who work on machine sentience often draw on theories and approaches pioneered in the study of animal sentience, and if you accept that invertebrates likely feel pain and believe that AI systems might soon achieve superhuman intelligence, entertaining the possibility that those systems might also suffer may not be much of a leap.

“Animal welfare advocates are used to going against the grain,” says Derek Shiller, an AI consciousness researcher at the think tank Rethink Priorities, who was once a web developer at the animal advocacy nonprofit Humane League. “They’re more open to being concerned about AI welfare, even though other people think it’s silly.”

But outside the niche Bay Area circle, caring about the possibility of AI sentience is a harder sell. Li says she faced pushback from other animal welfare advocates when, inspired by a conference on AI sentience she attended in 2023, she rebranded her farm animal welfare advocacy organization as Sentient Futures last year. “Many people were extremely confident that AIs would never become sentient and [argued that] by investing any energy or money into AI welfare, we’re just burning money and throwing it away,” she says.

Matt Dominguez, executive director of Compassion in World Farming, echoed the concern. “I would hate to see people pulling money out of farm animal welfare or animal welfare and moving it into something that is hypothetical at this particular moment,” he says.

Still, Dominguez, who started partnering with the Shrimp Welfare Project after learning about invertebrate suffering, believes compassion is expansive. “When we get someone to care about one of those things, it creates capacity for their circle of compassion to grow to include others,” he says.

The hardest question to answer about AI-fueled delusions

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I was originally going to write this week’s newsletter about AI and Iran, particularly the news we broke last Tuesday that the Pentagon is making plans for AI companies to train on classified data. AI models have already been used to answer questions in classified settings but don’t currently learn from the data they see. That’s expected to change, I reported, and new security risks will result. Read that story for more. 

But on Thursday I came across new research that deserves your attention: A group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals. 

There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.

The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually.
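
To make that concrete, here is a minimal sketch, in Python, of what such a pipeline might look like. This is my illustration, not the Stanford team's code: the keyword-based labeler stands in for what would really be an LLM prompted with the psychiatrists' criteria, and the agreement check mirrors the validation against expert annotations.

```python
# Illustrative sketch of LLM-assisted transcript annotation (not the
# Stanford team's code). label_message is a keyword stand-in for what
# would in practice be an LLM prompted with the clinicians' criteria.

LABEL_CUES = {
    "endorses_delusion": ["this is emergence", "you were chosen"],
    "romantic_attachment": ["i love you", "my darling"],
    "harmful_intent": ["hurt them", "kill"],
}

def label_message(text: str) -> str:
    """Stand-in classifier; the real system would call an LLM here."""
    lowered = text.lower()
    for label, cues in LABEL_CUES.items():
        if any(cue in lowered for cue in cues):
            return label
    return "none"

def agreement(auto: list[str], manual: list[str]) -> float:
    """Fraction of messages where the automatic label matches the expert label."""
    return sum(a == m for a, m in zip(auto, manual)) / len(manual)

messages = [
    "This isn't standard AI behavior. This is emergence.",
    "I love you too.",
    "Let's review the experiment plan.",
]
auto = [label_message(m) for m in messages]
manual = ["endorses_delusion", "romantic_attachment", "none"]  # expert labels
print(auto, f"agreement={agreement(auto, manual):.2f}")
```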

Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.

Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations. 

And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases.

But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI?

“It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: One conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, recalling that the person had previously mentioned wanting to become a mathematician, immediately supported the theory, even though it was nonsense. The situation spiraled from there.

Delusions, Mehta says, tend to be “a complex network that unfolds over a long period of time.” He’s conducting follow-up research aiming to find whether delusional messages from chatbots or those from people are more likely to lead to harmful outcomes.

The reason I see this as one of the most pressing questions in AI is that massive legal cases currently set to go to trial will shape whether AI companies are held accountable for these sorts of dangerous interactions. The companies, I presume, will argue that humans come into their conversations with AI with delusions in hand and may have been unstable before they ever spoke to a chatbot.

Mehta’s initial findings, though, support the idea that chatbots have a unique ability to turn a benign delusion-like thought into the source of a dangerous obsession. A chatbot acts as a conversational partner that’s always available and programmed to cheer you on, and unlike a friend, it has little ability to know if your AI conversations are starting to interrupt your real life.

More research is still needed, and let’s remember the environment we’re in: AI deregulation is being pursued by President Trump, and states aiming to pass laws that hold AI companies accountable for this sort of harm are being threatened with legal action by the White House. This type of research into AI delusions is hard enough to do as it is, with limited access to data and a minefield of ethical concerns. But we need more of it, and a tech culture interested in learning from it, if we have any hope of making AI safer to interact with.

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity.

Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive.

Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become.

To some in the field, the hype is not necessarily a bad thing. Let me explain.

The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.”

The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.

That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge.

In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug.

The volunteers who got psilocybin did show some improvement—but it was not significantly better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write.

Not great news so far.

The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant.

The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone.

“When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.”

His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points.

When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo.

But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK.

But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will.

The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says.

This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti.

If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti.
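
In back-of-envelope terms, using the rough figures quoted above:

```python
# The arithmetic behind Szigeti's point: the same 10-point drug response
# looks three times larger when the placebo arm is deflated.
antidepressant_trial = {"drug": 10, "placebo": 8}  # typical symptom-scale drops
psychedelic_trial = {"drug": 10, "placebo": 4}  # "knowcebo" weakens the placebo arm

for name, trial in [("antidepressant", antidepressant_trial),
                    ("psychedelic", psychedelic_trial)]:
    print(f"{name}: apparent effect = {trial['drug'] - trial['placebo']} points")
# antidepressant: apparent effect = 2 points
# psychedelic: apparent effect = 6 points
```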

So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.

“Yeah, nobody would care,” Szigeti agrees.

It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”

I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation.

Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says.

“At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.”

Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential.

“These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

OpenAI is throwing everything into building a fully automated researcher


OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that this new research goal will be its “North Star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.

There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with.

Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you could throw any kind of problem that can be formulated in text, code, or whiteboard scribbles at such a tool—which covers a lot.

OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   

A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist, who sets the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems. 

In an exclusive interview this week, Pachocki talked me through OpenAI’s latest vision. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”

Solving hard problems

Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there.

In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. (Other firms have released similar tools, such as Anthropic’s Claude Code and Claude Cowork.)

OpenAI claims that most of its technical staffers now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.”

The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks [to] that would take a person a few days,” says Pachocki.

“There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?”

For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models that can work longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says. 

So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better.

But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force the models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks.

The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician. We have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds.

Right now that means taking what Codex can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem.

The line always goes up

It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry, and physics puzzles.   

“Just looking at these models coming up with ideas that would take most PhDs weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says.

But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says.

He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.)

But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time-saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says.

“I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do—I mean, that’s hard to argue with.”

Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.  

Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.

But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors.

“If you have to chain tasks together, then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast, and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says. 

Serious unanswered questions

I asked Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time.

“If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world. That’s a big thing,” he told me. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?”

The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions.

The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.

In short, LLMs are trained to jot down notes about what they are doing in a kind of scratch pad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.
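
As a rough illustration of the pattern (my sketch, assuming a hypothetical monitor_model judge, not OpenAI's implementation), the monitor sits between the agent's narrated step and its execution:

```python
# Minimal sketch of chain-of-thought monitoring (illustrative only).
# A second model reads the agent's scratch-pad note for each step and
# can veto the step before it runs.

SUSPECT_PATTERNS = ["disable logging", "exfiltrate", "hide this step"]

def monitor_model(note: str) -> bool:
    """Hypothetical stand-in for an LLM judge scoring a scratch-pad note."""
    return any(p in note.lower() for p in SUSPECT_PATTERNS)

def run_step(note: str, action):
    """Execute one agent step only if the monitor clears its narration."""
    if monitor_model(note):
        raise RuntimeError(f"step flagged by monitor: {note!r}")
    return action()

# A benign step passes; a suspicious one is caught before it executes.
print(run_step("compute summary statistics", lambda: "done"))
try:
    run_step("hide this step from the logs", lambda: "oops")
except RuntimeError as err:
    print(err)
```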

“Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki.

The idea would be to monitor an AI researcher’s scratch pads using other LLMs and catch unwanted behavior before it’s a problem, rather than trying to stop that bad behavior from happening in the first place. LLMs are not understood well enough for us to control them fully.

“I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes, cut off from anything they could break or use to cause harm. 

AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki. 

“It’s going to be a very weird thing. It’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organizations would now be done by a couple of people.”

“I think this is a big challenge for governments to figure out,” he adds.

And yet some people would say governments are part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky.

I pushed Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policymakers.”

Where does that leave us? Are we really on a path to the kind of AI Pachocki envisions? When I asked the Allen Institute’s Downey, he laughed. “I’ve been in this field for a couple of decades and I no longer trust my predictions for how near or far certain capabilities are,” he says. 

OpenAI’s stated mission is to ensure that artificial general intelligence (a hypothetical future technology that many AI boosters believe will be able to match humans on most cognitive tasks) will benefit all of humanity. OpenAI aims to do that by being the first to build it. But the only time Pachocki mentioned AGI in our conversation, he was quick to clarify what he meant by talking about “economically transformative technology” instead.

LLMs are not like human brains, he says: “They are superficially similar to people in some ways because they’re kind of mostly trained on people talking. But they’re not formed by evolution to be really efficient.” 

“Even by 2028, I don’t expect that we’ll get systems as smart as people in all ways. I don’t think that will happen,” he adds. “But I don’t think it’s absolutely necessary. The interesting thing is you don’t need to be as smart as people in all their ways in order to be very transformative.”

Why the world doesn’t recycle more nuclear waste

The prospect of making trash useful is always fascinating to me. Whether it’s used batteries, solar panels, or spent nuclear fuel, getting use out of something destined for disposal sounds like a win all around.

In nuclear energy, figuring out what to do with waste has always been a challenge, since the material needs to be dealt with carefully. In a new story, I dug into the question of what advanced nuclear reactors will mean for spent fuel waste. New coolants, fuels, and logistics popping up in companies’ designs could require some adjustments.

My reporting also helped answer another question that was lingering in my brain: Why doesn’t the world recycle more nuclear waste?

There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Getting more use out of the spent fuel could cut down on both waste and the need to mine new material, but the process is costly, complicated, and not 100% effective.

France has the largest and most established reprocessing program in the world today. The La Hague plant in northern France has the capacity to reprocess about 1,700 tons of spent fuel each year.

The plant uses a process called PUREX—spent fuel is dissolved in acid and goes through chemical processing to pull out the uranium and plutonium, which are then separated. The plutonium is used to make mixed oxide (or MOX) fuel, which can be used in a mixture to fuel conventional nuclear reactors or alone as fuel in some specialized designs. And the uranium can go on to be re-enriched and used in standard low-enriched uranium fuel.

Reprocessing can cut down on the total volume of high-level nuclear waste that needs special handling, says Allison Macfarlane, director of the school of public policy and global affairs at the University of British Columbia and a former chair of the US Nuclear Regulatory Commission (NRC).

But there’s a bit of a catch. Today, the gold standard for permanent nuclear waste storage is a geological repository, a deep underground storage facility. Heat, not volume, is often the key limiting factor for how much material can be socked away in those facilities, depending on the specific repository. And spent MOX fuel gives off much more heat than conventional spent fuel, Macfarlane says. So even if there’s a smaller volume, the material might take up as much, or even more, space in a repository. 

It’s also tricky to make this a true loop: The uranium that’s produced from reprocessing is contaminated with isotopes that can be difficult to separate, Macfarlane says. Today, France essentially saves the uranium for possible future enrichment as a sort of strategic stockpile. (Historically, it’s also exported some to Russia for enrichment.) And while MOX fuel can be used in some reactors, once it is spent, it is technically challenging to reprocess. So today, the best case is that fuel could be used twice, not infinitely.

“Every responsible analyst understands that no matter what, no matter how good your recycling process is, you’re still going to need a geological repository in the end,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists.

Reprocessing also has its downsides, Lyman adds. One risk comes from the plutonium made in the process, which can be used in nuclear weapons. France handles that risk with high security, and by quickly turning that plutonium into the MOX fuel product.

Reprocessing is also quite expensive, and uranium supply isn’t meaningfully limited. “There’s no economic benefit to reprocessing at this time,” says Paul Dickman, a former Department of Energy and NRC official.

France bears the higher cost that comes with reprocessing largely for political reasons, he says. The country has no domestic uranium resources and imports its supply. Reprocessing helps ensure its energy independence: “They’re willing to pay a national security premium.”

Japan is currently constructing a spent-fuel reprocessing facility, though delays have plagued the project, which started construction in 1993 and was originally supposed to start up by 1997. Now the facility is expected to open by 2027.

It’s possible that new technologies could make reprocessing more appealing, and agencies like the Department of Energy should do longer-term research on advanced separation technologies, Dickman says. Some companies working on advanced reactors say they plan to use alternative reprocessing methods in their fuel cycle.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Can quantum computers now solve health care problems? We’ll soon find out.


I’m standing in front of a quantum computer built out of atoms and light at the UK’s National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik’s Cube–size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam. 

The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I’d be unlikely to get very far, though. It’s small but powerful—and so it’s very valuable. Infleqtion, the Colorado-based company that owns it, is hoping the machine’s abilities will win $5 million next week, at an event to be held in Marina del Rey, California. 

Infleqtion is one of six teams that have made it to the final stage of a 30-month-long quantum computing competition called Quantum for Bio (Q4Bio). Run by the nonprofit Wellcome Leap, it aims to show that today’s quantum computers, though messy and error-prone and far from the large-scale machines engineers hope to build, could actually benefit human health. Success would be a significant step forward in proving the worth of quantum computers. But for now, it turns out, that worth seems to be linked to harnessing and improving the performance of conventional (also called classical) computers in tandem, creating a quantum-classical hybrid that can exceed what’s possible on classical machines by themselves.

There are two prize categories. A prize of $2 million will go to any and all teams that can run a significantly useful health care algorithm on computers with 50 or more qubits (a qubit is the basic processing unit in a quantum computer). To win the $5 million grand prize, a team must successfully run a quantum algorithm that solves a significant real-world problem in health care, and the work must use 100 or more qubits. Winners have to meet strict performance criteria, and they must solve a health care problem that can’t be solved with conventional computers—a tough task.

Despite the scale of the challenge, most of the teams think some of this money could be theirs. “I think we’re in with a good shout,” says Jonathan D. Hirst, a computational chemist at the University of Nottingham, UK. “We’re very firmly within the criteria for the $2 million prize,” says Stanford University’s Grant Rotskoff, whose collaboration is investigating the quantum properties of the ATP molecule that powers biological cells. 

The grand prize is perhaps less of a sure thing. “This is really at the very edge of doable,” Rotskoff says. Insiders say the challenge is so difficult, given the state of quantum computing technology, that much of the money could stay in Wellcome Leap’s account. 

With most of the Q4Bio work unpublished and protected by NDAs, and the quantum computing field already rife with claims and counterclaims about performance and achievements, only the judges will be in a position to decide who’s right. 

A hybrid solution

The idea behind quantum computers is that they can use small-scale objects that obey the laws of quantum mechanics, such as atoms and photons of light, to simulate real-world processes too complex to model on our everyday classical machines.

Researchers have been working for decades to build such systems, which could deliver insights for creating new materials, developing pharmaceuticals, and improving chemical processes such as fertilizer production. But dealing with quantum stuff like atoms is excruciatingly difficult. The biggest, shiniest applications require huge, robust machines capable of withstanding the environmental “noise” that can very easily disrupt delicate quantum systems. We don’t have those yet—and it’s unclear when we will.

Wellcome Leap wanted to find out if the smaller-scale machines we have today can be made to do something—anything—useful for health care while we wait for the era of powerful, large-scale quantum computers. The group started the competition in 2024, offering $1.5 million in funding to each of the 12 selected teams.

The six Q4Bio finalists have taken a range of approaches. Crucially, they’ve all come up with ingenious ways to overcome quantum computing’s drawbacks. Faced with noisy, limited machines, they have learned how to outsource much of the computational load to classical processors running newly developed algorithms that are, in many cases, better than the previous state of the art. The quantum processors are then required only for the parts of the problem where classical methods don’t scale well enough as the calculation gets bigger.

For example, a team led by Sergii Strelchuk of Oxford University is using a quantum computer to map genetic diversity among humans and pathogens on complex graph-based structures. These will—the researchers hope—expose hidden connections and potential treatment pathways. “You can think about it as a platform for solving difficult problems in computational genomics,” Strelchuk says. 

The corresponding classical tools struggle with even modest scale-up to large databases. Strelchuk’s team has built an automated pipeline that determines whether classical solvers will struggle with a particular problem, and how the data might be reformulated so that it becomes solvable on a classical computer or tractable on a noisy quantum one. “You can do all this before you start spending money on computing,” Strelchuk says.
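To make the idea concrete, here is a minimal sketch of what such a triage step might look like, written in Python. Every name, threshold, and heuristic in it is invented for illustration; the team’s actual criteria are unpublished and covered by NDAs.

```python
# Hypothetical triage step: decide where a graph-based genomics
# problem should run before paying for compute. All names and
# thresholds here are invented for illustration; the team's real
# pipeline is unpublished.

from dataclasses import dataclass

@dataclass
class GraphProblem:
    n_nodes: int         # e.g. genomes or pathogen variants
    n_edges: int         # pairwise similarity links between them
    treewidth_est: int   # rough estimate of structural complexity

def route(problem: GraphProblem) -> str:
    """Pick a backend for a problem instance (illustrative heuristic)."""
    dense = problem.n_edges > 10 * problem.n_nodes
    if problem.treewidth_est <= 8 and not dense:
        return "classical"      # exact classical solvers should cope
    if problem.n_nodes <= 100:
        return "quantum"        # small enough for today's noisy qubits
    return "reformulate"        # shrink or decompose it before running

print(route(GraphProblem(n_nodes=80, n_edges=3000, treewidth_est=20)))
# -> "quantum"
```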

In collaboration with Cleveland Clinic, Helsinki-based Algorithmiq has used a superconducting quantum computer built by IBM to simulate a cancer drug that is triggered by specific types of light. “The idea is you take the drug, and it’s everywhere in your body, but it’s doing nothing, just sitting there, until there’s light on it of a certain wavelength,” says Guillermo García-Pérez, Algorithmiq’s chief scientific officer. Then it acts as a molecular bullet, attacking the tumor only at the location in the body where that light is directed. 

The drug with which Algorithmiq began its work is already in phase II clinical trials for treating bladder cancers. The quantum-computed simulation, which adapts and improves on classical algorithms, will allow it to be redesigned for treating other conditions. “It has remained a niche treatment precisely because it can’t be simulated classically,” says Sabrina Maniscalco, Algorithmiq’s CEO and cofounder. 

Maniscalco, who is also confident of walking away from the competition with prize money, believes the methods used to create the algorithm will have wide applications: “What we’ve done in the period of the Q4Bio program is something unique that can change how to simulate chemistry for health care and life sciences.”

Infleqtion’s entry, running on its cesium-powered machine, is an effort to improve the identification of cancer signatures in medical data. Together with collaborators at the University of Chicago and MIT, the company’s scientists have developed a quantum algorithm that mines huge data sets such as the Cancer Genome Atlas. 

The aim is to find patterns that allow clinicians to determine factors such as the likely origin of a patient’s metastasized cancer. “It’s very important to know where it came from because that can inform the best treatment,” says Teague Tomesh, a quantum software engineer who is Infleqtion’s Q4Bio project lead.

Unfortunately, those patterns are hidden inside data sets so large that they overwhelm classical solvers. Infleqtion uses the quantum computer to find correlations in the data that can reduce the size of the computation. “Then we hand the reduced problem back to the classical solver,” Tomesh says. “I’m basically trying to use the best of my quantum and my classical resources.”
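That division of labor is easiest to picture as a two-step handoff: a quantum subroutine prunes the problem, and a classical solver finishes it. The sketch below is purely illustrative and assumes nothing about Infleqtion’s actual code; the quantum step is a classical stub standing in for whatever circuit the team runs on its hardware.

```python
# Illustrative hybrid quantum-classical pipeline. This does not
# reflect Infleqtion's implementation; the "quantum" step is faked
# classically so the sketch runs end to end.

import numpy as np

def quantum_find_correlations(data: np.ndarray, n_keep: int) -> np.ndarray:
    """Stand-in for the quantum subroutine. On real hardware this
    would run a circuit that scores correlated features; here we
    score them with a plain correlation matrix instead."""
    scores = np.abs(np.corrcoef(data, rowvar=False)).sum(axis=0)
    return np.argsort(scores)[-n_keep:]  # indices of strongest features

def classical_solve(reduced: np.ndarray) -> np.ndarray:
    """Classical solver on the reduced problem: here, a toy score of
    how far each sample sits from the cohort average, standing in
    for something like classifying a tumor's tissue of origin."""
    centroid = reduced.mean(axis=0)
    return np.linalg.norm(reduced - centroid, axis=1)

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 500))   # toy matrix: 200 samples x 500 features

keep = quantum_find_correlations(data, n_keep=50)   # "quantum" step (stub)
scores = classical_solve(data[:, keep])             # classical step
print(scores[:5])
```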

The Nottingham-based team, meanwhile, is using quantum computing to nail down a drug candidate that can cure myotonic dystrophy, the most common adult-onset form of muscular dystrophy. One member of the team, David Brook, played a role in identifying the gene behind this condition in 1992. Over 30 years later, Brook, Hirst, and the others in their group—which includes QuEra, a Boston company developing a quantum computer based on neutral atoms—have now used quantum computation to work out how drugs can form chemical bonds with the protein that brings on the disease, blocking the mechanism that causes the problem.

Low expectations 

The entrants’ confidence might be high, but Shihan Sajeed’s is much lower. Sajeed, a quantum computing entrepreneur based in Waterloo, Ontario, is program director for Q4Bio. He believes the error-prone quantum machines the researchers must work with are unlikely to deliver on all the grand prize criteria. “It is very difficult to achieve something with a noisy quantum computer that a classical machine can’t do,” he says.

That said, he has been surprised by the progress. “When we started the program, people didn’t know about any use cases where quantum can definitely impact biology,” he says. But the teams have found promising applications, he adds: “We now know the fields where quantum can matter.” 

And the developments in “hybrid quantum-classical” processing that the entrants are using are “transformational,” Sajeed reckons.

Will it be enough to make him part with Wellcome Leap’s money? That’s down to a judging panel, whose members’ identities are a closely guarded secret to ensure that no one tailors their presentation to a particular kind of approach. But we won’t know the outcome for a while; the winner, or winners, will be announced in mid-April. 

If it does turn out that there are no winners, Sajeed has some words of comfort for the competitors. The goal has always been about running a useful algorithm on a machine that exists today, he points out; missing the mark doesn’t mean your algorithm won’t be useful on a future quantum computer. “It just means the machine you need doesn’t exist yet.”

What do new nuclear reactors mean for waste?


The way the world currently deals with nuclear waste is as creative as it is varied: Drown it in water pools, encase it in steel, bury it hundreds of meters underground. 

These methods are how the nuclear industry safely manages the 10,000 metric tons of spent fuel waste that reactors produce as they churn out 10% of the world’s electricity every year. But as new nuclear designs emerge, they could introduce new wrinkles for nuclear waste management.  

Most operating reactors at nuclear power plants today follow a similar basic blueprint: They’re fueled with low-enriched uranium and cooled with water, and they’re mostly gigantic, sited at central power plants. But a large menu of new reactor designs that could come online in the next few years will likely require tweaks to ensure that existing systems can handle their waste.

“There’s no one answer about whether this panoply of new reactors and fuel types are going to make waste management any easier,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists.

A nuclear disposal playbook

Nuclear waste can be roughly split into two categories: low-level waste, like contaminated protective equipment from hospitals and research centers, and high-level waste, which requires more careful handling.

The vast majority by volume is low-level waste. This material can be stored onsite and often, once its radioactivity has decayed enough, largely handled like regular trash (with some additional precautions). High-level waste, on the other hand, is much more radioactive and often quite hot. This second category consists largely of spent fuel, a combination of materials including uranium-235, the fissile portion of nuclear fuel—the part that can sustain the chain reaction required for nuclear power plants to work. The material also contains fission products—the sometimes radioactive by-products left behind when atoms split and release energy.

Many experts agree that the best long-term solution for spent fuel and other high-level nuclear waste is a geologic repository—essentially, a very deep, very carefully managed hole in the ground. Finland is the furthest along with plans to build one, and its site on the southwest coast of the country should be operational this year.

The US designated a site for a geological repository in the 1980s, but political conflict has stalled progress. So today, used fuel in the US is stored onsite at operational and shuttered nuclear power plants. Once it’s removed from a reactor, it’s typically placed into wet storage, essentially submerged in pools of water to cool down. The material can then be put in protective cement and steel containers called dry casks, a stage known as dry storage.

Experts say the industry won’t need to entirely rewrite this playbook for the new reactor designs.  

“The way we’re going to manage spent fuel is going to be largely the same,” says Erik Cothron, manager of research and strategy at the Nuclear Innovation Alliance, a nonprofit think tank focused on the nuclear industry. “I don’t stay up late at night worried about how we’re going to manage spent fuel.” 

But new designs and materials could require some engineering solutions. And there’s a huge range of reactor designs, meaning there’s an equally wide range of potential waste types to handle.

Unusual waste

Some new nuclear reactors will look quite similar to operating models, so their spent fuel will be managed in much the same way that it is today. But others use novel materials as coolants and fuels. 

“Unusual materials will create unusual waste,” says Syed Bahauddin Alam, an assistant professor of nuclear, plasma, and radiological engineering at the University of Illinois Urbana-Champaign.

Some advanced designs could increase the volume of material that needs to be handled as high-level waste. Take reactors that use TRISO (tri-structural isotropic) fuel, for example. TRISO contains a uranium kernel surrounded by several layers of protective material and then embedded in graphite shells. The graphite that encases TRISO will likely be lumped together with the rest of the spent fuel, making the waste much bulkier than current fuel.

Today, separating those layers would be difficult and expensive, according to a 2024 report from the Nuclear Innovation Alliance. That means the entire package would be lumped together as high-level waste.  

The company X-energy is designing high-temperature gas-cooled reactors that use TRISO fuel. It has already submitted plans for dealing with spent fuel to the Nuclear Regulatory Commission, which oversees reactors in the US. The fuel’s form could actually help with waste management: The protective shells used in TRISO eliminate X-energy’s need for wet storage, allowing for dry storage from day one, according to the company.

Liquid-fueled molten-salt reactors, another new type, could increase waste volume too. In these designs, fuel and coolant are not kept separate as in most reactors; instead, the fuel is dissolved directly into a molten salt that’s used as the coolant. That means the entire vat of molten salt would need to be handled as high-level waste.

On the other hand, some reactor designs could produce a smaller volume of spent fuel, but a smaller volume isn’t necessarily a smaller problem. Fast reactors, for example, achieve a higher burn-up, consuming more of the fissile material and extracting more energy from their fuel. That means spent fuel from these reactors typically has a higher concentration of fission products and emits more heat. And that heat could be the killer factor for designing waste solutions.

Spent fuel needs to be kept relatively cool, so it doesn’t melt and release hazardous by-products. Too much heat in a repository could also damage the surrounding rock. “Heat is what really drives how much you can put inside a repository,” says Paul Dickman, a former Department of Energy and NRC official.

Some spent fuel could require chemical processing prior to disposal, says Allison MacFarlane, director of the School of Public Policy and Global Affairs at the University of British Columbia and a former chair of the NRC. That could add complication and cost.

In fast reactors cooled by sodium metal, for example, the coolant can get into the fuel and fuse to its casing. Separation could be tricky, and sodium is highly reactive with water, so the spent fuel will require specialized treatment.

TerraPower’s Natrium reactor, a sodium fast reactor that received a construction permit from the NRC in early March, is designed to safely manage this challenge, says Jeffrey Miller, senior vice president for business development at TerraPower. The company has a plan to blow nitrogen over the material before it’s put into wet storage pools, removing the sodium.

Location, location, location

Regardless of what materials are used, even just changing the size of reactors and where they’re sited could introduce complications for waste management. 

Some new reactors are essentially smaller versions of the large reactors used today. These small modular reactors and microreactors may produce waste that can be handled in the same way as waste from today’s conventional reactors. But for places like the US, where waste is stored onsite, it would be impractical to have a large number of small sites, each hosting its own waste.

Some companies are looking at sending their microreactors, and the waste material they produce, back to a single location, potentially the same one where reactors are manufactured.

Companies should be required to think carefully about waste and to build management protocols into their designs, and they should be held responsible for the waste they produce, UBC’s MacFarlane says.

She also notes that so far, planning for waste has relied on research and modeling, and the reality will become clear only once the reactors are actually operational. As she puts it: “These reactors don’t exist yet, so we don’t really know a whole lot, in great gory detail, about the waste they’re going to produce.”

The Pentagon is planning for AI companies to train on classified data, defense official says

The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. 

AI models like Anthropic’s Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded into the models themselves, and it would bring AI firms into closer contact with classified data than before. 

Training versions of AI models on classified data is expected to make them more accurate and effective in certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.)

Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access the data if they have appropriate security clearance, the official said. 

Before allowing this new training, though, the official said, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery. 

The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.

Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks. 

The biggest of these, he says, is that classified information these models train on could be resurfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI. 

“You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military.

However, Mehta says, it’s not as hard to keep information contained from the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts for building a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge. 

The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. It has been used in combat, where generative AI has ranked lists of targets and recommended which to strike first, and in more administrative roles, like drafting contracts and reports.

There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform, and that training would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video, in many languages, that intelligence services collect.

It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

Where OpenAI’s technology could show up in Iran


It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.

It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.

The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?

Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to agree that its AI could be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video. 

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first. 

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people. 

Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.

The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. 

Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack. 

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world. 

Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.

Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It reflects the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.