The two people shaping the future of OpenAI’s research

For the past couple of years, OpenAI has felt like a one-man brand. With his showbiz style and fundraising glitz, CEO Sam Altman overshadows all other big names on the firm’s roster. Even his bungled ouster ended with him back on top—and more famous than ever. But look past the charismatic frontman and you get a clearer sense of where this company is going. After all, Altman is not the one building the technology on which its reputation rests. 

That responsibility falls to OpenAI’s twin heads of research—chief research officer Mark Chen and chief scientist Jakub Pachocki. Between them, they share the role of making sure OpenAI stays one step ahead of powerhouse rivals like Google.

I sat down with Chen and Pachocki for an exclusive conversation during a recent trip the pair made to London, where OpenAI set up its first international office in 2023. We talked about how they manage the inherent tension between research and product. We also talked about why they think coding and math are the keys to more capable all-purpose models; what they really mean when they talk about AGI; and what happened to OpenAI’s superalignment team, which the firm’s cofounder and former chief scientist Ilya Sutskever set up to prevent a hypothetical superintelligence from going rogue and which disbanded soon after he quit.

In particular, I wanted to get a sense of where their heads are at in the run-up to OpenAI’s biggest product release in months: GPT-5.

Reports are out that the firm’s next-generation model will be launched in August. OpenAI’s official line—well, Altman’s—is that it will release GPT-5 “soon.” Anticipation is high. The leaps OpenAI made with GPT-3 and then GPT-4 raised the bar of what was thought possible with this technology. And yet delays to the launch of GPT-5 have fueled rumors that OpenAI has struggled to build a model that meets its own—not to mention everyone else’s—expectations.

But expectation management is part of the job for a company that for the last several years has set the agenda for the industry. And Chen and Pachocki set the agenda inside OpenAI.

Twin peaks 

The firm’s main London office is in St James’s Park, a few hundred meters east of Buckingham Palace. But I met Chen and Pachocki in a conference room in a coworking space near King’s Cross, which OpenAI keeps as a kind of pied-à-terre in the heart of London’s tech neighborhood (Google DeepMind and Meta are just around the corner). OpenAI’s head of research communications, Laurance Fauconnet, sat with an open laptop at the end of the table. 

Chen, who was wearing a maroon polo shirt, is clean-cut, almost preppy. He’s media trained and comfortable talking to a reporter. (That’s him flirting with a chatbot in the “Introducing GPT-4o” video.) Pachocki, in a black elephant-logo tee, has more of a TV-movie hacker look. He stares at his hands a lot when he speaks.

But the pair are a tighter double act than they first appear. Pachocki summed up their roles. Chen shapes and manages the research teams, he said. “I am responsible for setting the research roadmap and establishing our long-term technical vision.”

“But there’s fluidity in the roles,” Chen said. “We’re both researchers, we pull on technical threads. Whatever we see that we can pull on and fix, that’s what we do.”

Chen joined the company in 2018 after working as a quantitative trader at the Wall Street firm Jane Street Capital, where he developed machine-learning models for futures trading. At OpenAI he spearheaded the creation of DALL-E, the firm’s breakthrough generative image model. He then worked on adding image recognition to GPT‑4 and led the development of Codex, the generative coding model that powers GitHub Copilot.

Pachocki left an academic career in theoretical computer science to join OpenAI in 2017 and replaced Sutskever as chief scientist in 2024. He is the key architect of OpenAI’s so-called reasoning models—especially o1 and o3—which are designed to tackle complex tasks in science, math, and coding. 

When we met they were buzzing, fresh off the high of two new back-to-back wins for their company’s technology.

On July 16, one of OpenAI’s large language models came in second in the AtCoder World Tour Finals, one of the world’s most hardcore programming competitions. On July 19, OpenAI announced that one of its models had achieved gold-medal-level results on the 2025 International Math Olympiad, one of the world’s most prestigious math contests.

The math result made headlines, not only because of OpenAI’s remarkable achievement, but because rival Google DeepMind revealed two days later that one of its models had achieved the same score in the same competition. Google DeepMind had played by the competition’s rules and waited for its results to be checked by the organizers before making an announcement; OpenAI had in effect marked its own answers.

For Chen and Pachocki, the result speaks for itself. Anyway, it’s the programming win they’re most excited about. “I think that’s quite underrated,” Chen told me. A gold medal result in the International Math Olympiad puts you somewhere in the top 20 to 50 competitors, he said. But in the AtCoder contest OpenAI’s model placed in the top two: “To break into a really different tier of human performance—that’s unprecedented.”

Ship, ship, ship!

People at OpenAI still like to say they work at a research lab. But the company is very different from the one it was before the release of ChatGPT three years ago. The firm is now in a race with the biggest and richest technology companies in the world and valued at $300 billion. Envelope-pushing research and eye-catching demos no longer cut it. It needs to ship products and get them into people’s hands—and boy, it does. 

OpenAI has kept up a run of new releases—putting out major updates to its GPT-4 series, launching a string of generative image and video models, and introducing the ability to talk to ChatGPT with your voice. Six months ago it kicked off a new wave of so-called reasoning models with its o1 release, soon followed by o3. And last week it released its browser-using agent Operator to the public. It now claims that more than 400 million people use its products every week and submit 2.5 billion prompts a day. 

OpenAI’s incoming CEO of applications, Fidji Simo, plans to keep up the momentum. In a memo to the company, she told employees she is looking forward to “helping get OpenAI’s technologies into the hands of more people around the world,” where they will “unlock more opportunities for more people than any other technology in history.” Expect the products to keep coming.

I asked how OpenAI juggles open-ended research and product development. “This is something we have been thinking about for a very long time, long before ChatGPT,” Pachocki said. “If we are actually serious about trying to build artificial general intelligence, clearly there will be so much that you can do with this technology along the way, so many tangents you can go down that will be big products.” In other words, keep shaking the tree and harvest what you can.

A talking point that comes up in conversations with OpenAI folks is that putting experimental models out into the world was a necessary part of research. The goal was to make people aware of how good this technology had become. “We want to educate people about what’s coming so that we can participate in what will be a very hard societal conversation,” Altman told me back in 2022. The makers of this strange new technology were also curious what it might be for: OpenAI was keen to get it into people’s hands to see what they would do with it.

Is that still the case? They answered at the same time. “Yeah!” Chen said. “To some extent,” Pachocki said. Chen laughed: “No, go ahead.” 

“I wouldn’t say research iterates on product,” said Pachocki. “But now that models are at the edge of the capabilities that can be measured by classical benchmarks and a lot of the long-standing challenges that we’ve been thinking about are starting to fall, we’re at the point where it really is about what the models can do in the real world.”

Like taking on humans in coding competitions. The person who beat OpenAI’s model at this year’s AtCoder contest, held in Japan, was a programmer named Przemysław Dębiak, also known as Psyho. The contest was a puzzle-solving marathon in which competitors had 10 hours to find the most efficient way to solve a complex coding problem. After his win, Psyho posted on X: “I’m completely exhausted … I’m barely alive.”  

Chen and Pachocki have strong ties to the world of competitive coding. Both have competed in international coding contests in the past and Chen coaches the USA Computing Olympiad team. I asked whether that personal enthusiasm for competitive coding colors their sense of how big a deal it is for a model to perform well at such a challenge.

They both laughed. “Definitely,” said Pachocki. “So: Psyho is kind of a legend. He’s been the number one competitor for many years. He’s also actually a friend of mine—we used to compete together in these contests.” Dębiak also used to work with Pachocki at OpenAI.

When Pachocki competed in coding contests he favored those that focused on shorter problems with concrete solutions. But Dębiak liked longer, open-ended problems without an obvious correct answer.

“He used to poke fun at me, saying that the kind of contest I was into will be automated long before the ones he liked,” Pachocki recalled. “So I was seriously invested in the performance of this model in this latest competition.”

Pachocki told me he was glued to the late-night livestream from Tokyo, watching his model come in second: “Psyho resists for now.” 

“We’ve tracked the performance of LLMs on coding contests for a while,” said Chen. “We’ve watched them become better than me, better than Jakub. It feels something like Lee Sedol playing Go.”

Lee is the master Go player who lost a series of matches to DeepMind’s game-playing model AlphaGo in 2016. The results stunned the international Go community and led Lee to give up professional play. Last year he told the New York Times: “Losing to AI, in a sense, meant my entire world was collapsing … I could no longer enjoy the game.” And yet, unlike Lee, Chen and Pachocki are thrilled to be surpassed.   

But why should the rest of us care about these niche wins? It’s clear that this technology—designed to mimic and, ultimately, stand in for human intelligence—is being built by people whose idea of peak intelligence is acing a math contest or holding your own against a legendary coder. Is it a problem that this view of intelligence is skewed toward the mathematical, analytical end of the scale?

“I mean, I think you are right that—you know, selfishly, we do want to create models which accelerate ourselves,” Chen told me. “We see that as a very fast factor to progress.”  

The argument researchers like Chen and Pachocki make is that math and coding are the bedrock for a far more general form of intelligence, one that can solve a wide range of problems in ways we might not have thought of ourselves. “We’re talking about programming and math here,” said Pachocki. “But it’s really about creativity, coming up with novel ideas, connecting ideas from different places.”

Look at the two recent competitions: “In both cases, there were problems which required very hard, out-of-the-box thinking. Psyho spent half the programming competition thinking and then came up with a solution that was really novel and quite different from anything that our model looked at.”

“This is really what we’re after,” Pachocki continued. “How do we get models to discover this sort of novel insight? To actually advance our knowledge? I think they are already capable of that in some limited ways. But I think this technology has the potential to really accelerate scientific progress.” 

I returned to the question about whether the focus on math and programming was a problem, conceding that maybe it’s fine if what we’re building are tools to help us do science. We don’t necessarily want large language models to replace politicians and have people skills, I suggested.

Chen pulled a face and looked up at the ceiling: “Why not?”

What’s missing

OpenAI was founded with a level of hubris that stood out even by Silicon Valley standards, boasting about its goal of building AGI back when talk of AGI still sounded kooky. OpenAI remains as gung-ho about AGI as ever, and it has done more than most to make AGI a mainstream multibillion-dollar concern. It’s not there yet, though. I asked Chen and Pachocki what they think is missing.

“I think the way to envision the future is to really, deeply study the technology that we see today,” Pachocki said. “From the beginning, OpenAI has looked at deep learning as this very mysterious and clearly very powerful technology with a lot of potential. We’ve been trying to understand its bottlenecks. What can it do? What can it not do?”  

At the current cutting edge, Chen said, are reasoning models, which break down problems into smaller, more manageable steps, but even they have limits: “You know, you have these models which know a lot of things but can’t chain that knowledge together. Why is that? Why can’t it do that in a way that humans can?”

OpenAI is throwing everything at answering that question.

“We are probably still, like, at the very beginning of this reasoning paradigm,” Pachocki told me. “Really, we are thinking about how to get these models to learn and explore over the long term and actually deliver very new ideas.”

Chen pushed the point home: “I really don’t consider reasoning done. We’ve definitely not solved it. You have to read so much text to get a kind of approximation of what humans know.”

OpenAI won’t say what data it uses to train its models or give details about their size and shape—only that it is working hard to make all stages of the development process more efficient.

Those efforts make them confident that so-called scaling laws—which suggest that models will continue to get better the more compute you throw at them—show no sign of breaking down.
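To make the term concrete: in the public research literature, a scaling law is an empirical fit in which a model’s test loss falls smoothly as a power law of training compute. One commonly cited form (an illustration drawn from published work such as Kaplan et al., 2020, not a statement of OpenAI’s internal results) looks like this:

```latex
% Illustrative only: a commonly cited empirical scaling relation from the
% public literature, not OpenAI's internal formulation.
\[
  L(C) \;\approx\; L_{\infty} + \left(\frac{C_0}{C}\right)^{\alpha}
\]
% L is test loss, C is training compute, L_infty is an irreducible loss
% floor, C_0 is a fitted constant, and alpha is a small positive exponent.
% "Scaling laws breaking down" would mean measured losses stop tracking
% this curve as C keeps growing.
```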

“I don’t think there’s evidence that scaling laws are dead in any sense,” Chen insisted. “There have always been bottlenecks, right? Sometimes they’re to do with the way models are built. Sometimes they’re to do with data. But fundamentally it’s just about finding the research that breaks you through the current bottleneck.” 

The faith in progress is unshakeable. I brought up something Pachocki had said about AGI in an interview with Nature in May: “When I joined OpenAI in 2017, I was still among the biggest skeptics at the company.” He looked doubtful. 

“I’m not sure I was skeptical about the concept,” he said. “But I think I was—” He paused, looking at his hands on the table in front of him. “When I joined OpenAI, I expected the timelines to be longer to get to the point that we are now.”

“There’s a lot of consequences of AI,” he said. “But the one I think the most about is automated research. When we look at human history, a lot of it is about technological progress, about humans building new technologies. The point when computers can develop new technologies themselves seems like a very important, um, inflection point.

“We already see these models assist scientists. But when they are able to work on longer horizons—when they’re able to establish research programs for themselves—the world will feel meaningfully different.”

For Chen, that ability for models to work by themselves for longer is key. “I mean, I do think everyone has their own definitions of AGI,” he said. “But this concept of autonomous time—just the amount of time that the model can spend making productive progress on a difficult problem without hitting a dead end—that’s one of the big things that we’re after.”

It’s a bold vision—and far beyond the capabilities of today’s models. But I was nevertheless struck by how Chen and Pachocki made AGI sound almost mundane. Compare this with how Sutskever responded when I spoke to him 18 months ago. “It’s going to be monumental, earth-shattering,” he told me. “There will be a before and an after.” Faced with the immensity of what he was building, Sutskever switched the focus of his career from designing better and better models to figuring out how to control a technology that he believed would soon be smarter than himself.

Two years ago Sutskever set up what he called a superalignment team that he would co-lead with another OpenAI safety researcher, Jan Leike. The claim was that this team would funnel a full fifth of OpenAI’s resources into figuring out how to control a hypothetical superintelligence. Today, most of the people on the superalignment team, including Sutskever and Leike, have left the company and the team no longer exists.   

When Leike quit, he said it was because the team had not been given the support he felt it deserved. He posted this on X: “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” Other departing researchers shared similar statements.

I asked Chen and Pachocki what they make of such concerns. “A lot of these things are highly personal decisions,” Chen said. “You know, a researcher can kind of, you know—”

He started again. “They might have a belief that the field is going to evolve in a certain way and that their research is going to pan out and is going to bear fruit. And, you know, maybe the company doesn’t reshape in the way that you want it to. It’s a very dynamic field.”

“A lot of these things are personal decisions,” he repeated. “Sometimes the field is just evolving in a way that is less consistent with the way that you’re doing research.”

But alignment, both of them insist, is now part of the core business rather than the concern of one specific team. According to Pachocki, these models don’t work at all unless they work as you expect them to. There’s also little desire to focus on aligning a hypothetical superintelligence with your objectives when doing so with existing models is already enough of a challenge.

“Two years ago the risks that we were imagining were mostly theoretical risks,” Pachocki said. “The world today looks very different, and I think a lot of alignment problems are now very practically motivated.”

Still, experimental technology is being spun into mass-market products faster than ever before. Does that really never lead to disagreements between the two of them?

“I am often afforded the luxury of really kind of thinking about the long term, where the technology is headed,” Pachocki said. “Contending with the reality of the process—both in terms of people and also, like, the broader company needs—falls on Mark. It’s not really a disagreement, but there is a natural tension between these different objectives and the different challenges that the company is facing that materializes between us.”

Chen jumped in: “I think it’s just a very delicate balance.”  

Correction: we have removed a line referring to an Altman message on X about GPT-5.

An EPA rule change threatens to gut US climate regulations

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

The mechanism that allows the US federal government to regulate climate change is on the chopping block.

On Tuesday, US Environmental Protection Agency administrator Lee Zeldin announced that the agency is taking aim at the endangerment finding, a 2009 rule that’s essentially the tentpole supporting federal greenhouse-gas regulations.

This might sound like an obscure legal situation, but it’s a really big deal for climate policy in the US. So buckle up, and let’s look at what this rule says now, what the proposed change looks like, and what it all means.

To set the stage, we have to go back to the Clean Air Act of 1970, the law that essentially gave the EPA the power to regulate air pollution. (Stick with me—I promise I’ll keep this short and not get too into the legal weeds.)

There were some pollutants explicitly called out in this law and its amendments, including lead and sulfur dioxide. But it also required the EPA to regulate new pollutants that were found to be harmful. In the late 1990s and early 2000s, environmental groups and states started asking the agency to include greenhouse-gas pollution.

In 2007, the Supreme Court ruled that greenhouse gases qualify as air pollutants under the Clean Air Act, and that the EPA should study whether they’re a danger to public health. In 2009, the incoming Obama administration looked at the science and determined that greenhouse gases pose a threat to public health because they cause climate change. That’s the endangerment finding, and it’s what allows the agency to pass rules to regulate greenhouse gases.

The original case and argument were specifically about vehicles and the emissions from tailpipes, but this finding was eventually used to allow the agency to set rules around power plants and factories, too. It essentially underpins climate regulations in the US.

Fast-forward to today, and the Trump administration wants to reverse the endangerment finding. In a proposed rule released on Tuesday, the EPA argues that the Clean Air Act does not, in fact, authorize the agency to set emissions standards to address global climate change. Zeldin, in an appearance on the conservative politics and humor podcast Ruthless that preceded the official announcement, called the proposal the “largest deregulatory action in the history of America.”

The administration was already moving to undermine the climate regulations that rely on this rule. But this move directly targets a “fundamental building block of EPA’s climate policy,” says Deborah Sivas, an environmental-law professor at Stanford University.

The proposed rule will go up for public comment, and the agency will then take that feedback and come up with a final version. It’ll almost certainly get hit with legal challenges and will likely wind up in front of the Supreme Court.

One note here is that the EPA makes a mostly legal argument in the proposed rule reversal rather than focusing on going after the science of climate change, says Madison Condon, an associate law professor at Boston University. That could make it easier for the Supreme Court to eventually uphold it, she says, though this whole process is going to take a while. 

If the endangerment finding goes down, it would have wide-reaching ripple effects. “We could find ourselves in a couple years with no legal tools to try and address climate change,” Sivas says.

To take a step back for a moment, it’s wild that we’ve ended up in this place where a single rule is so central to regulating emissions. US climate policy is held up by duct tape and a dream. Congress could have, at some point, passed a law that more directly allows the EPA to regulate greenhouse-gas emissions (the last time we got close was a 2009 bill that passed the House but never made it through the Senate). But here we are.

This move isn’t a surprise, exactly. The Trump administration has made it very clear that it is going after climate policy in every way that it can. But what’s most striking to me is that we’re not operating in a shared reality anymore when it comes to this subject. 

While top officials tend to acknowledge that climate change is real, there’s often a “but” followed by talking points from climate denial’s list of greatest hits. (One of the more ridiculous examples is the statement that carbon dioxide is good, actually, because it helps plants.) 

Climate change is real, and it’s a threat. And the US has emitted more greenhouse gases into the atmosphere than any other country in the world. It shouldn’t be controversial to expect the government to be doing something about it. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The AI Hype Index: The White House’s war on “woke AI”

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

The Trump administration recently declared war on so-called “woke AI,” issuing an executive order aimed at preventing companies whose models exhibit a liberal bias from landing federal contracts. Simultaneously, the Pentagon inked a deal with Elon Musk’s xAI just days after its chatbot, Grok, spouted harmful antisemitic stereotypes on X, while the White House has partnered with an anti-DEI nonprofit to create AI slop videos of the Founding Fathers. What comes next is anyone’s guess.

What you may have missed about Trump’s AI Action Plan

A number of the executive orders and announcements coming from the White House since Donald Trump returned to office have painted an ambitious vision for America’s AI future—crushing competition with China, abolishing “woke” AI models that suppress conservative speech, jump-starting power-hungry AI data centers. But the details have been sparse. 

The White House’s AI Action Plan, released last week, is meant to fix that. Many of the points in the plan won’t come as a surprise, and you’ve probably heard of the big ones by now. Trump wants to boost the buildout of data centers by slashing environmental rules; withhold funding from states that pass “burdensome AI regulations”; and contract only with AI companies whose models are “free from top-down ideological bias.”

But if you dig deeper, certain parts of the plan that didn’t pop up in any headlines reveal more about where the administration’s AI plans are headed. Here are three of the most important issues to watch. 

Trump is escalating his fight with the Federal Trade Commission

When Americans get scammed, they’re supposed to be helped by the Federal Trade Commission. As I wrote last week, the FTC under President Biden increasingly targeted AI companies that overhyped the accuracy of their systems, as well as deployments of AI it found to have harmed consumers. 

The Trump plan vows to take a fresh look at all the FTC actions under the previous administration as part of an effort to get rid of “onerous” regulation that it claims is hampering AI’s development. The administration may even attempt to repeal some of the FTC’s actions entirely. This would weaken a major AI watchdog agency, but it’s just the latest in the Trump administration’s escalating attacks on the FTC. Read more in my story.

The White House is very optimistic about AI for science

The opening to the AI Action Plan describes a future where AI is doing everything from discovering new materials and drugs to “unraveling ancient scrolls once thought unreadable” to making breakthroughs in science and math.

That type of unbounded optimism about AI for scientific discovery echoes what tech companies are promising. Some of that optimism is grounded in reality: AI’s role in predicting protein structures has indeed led to material scientific wins (and just last week, Google DeepMind released a new AI meant to help interpret ancient Latin engravings). But the idea that large language models—essentially very good text prediction machines—will act as scientists in their own right has less merit so far. 

Still, the plan shows that the Trump administration wants to award money to labs trying to make it a reality, even as it has worked to slash the funding the National Science Foundation makes available to human scientists, some of whom are now struggling to complete their research. 

And some of the steps the plan proposes are likely to be welcomed by researchers, like funding to build AI systems that are more transparent and interpretable.

The White House’s messaging on deepfakes is confused

Compared with President Biden’s executive orders on AI, the new action plan is mostly devoid of anything related to making AI safer. 

However, there’s a notable exception: a section in the plan that takes on the harms posed by deepfakes. In May, Trump signed legislation to protect people from nonconsensual sexually explicit deepfakes, a growing concern for celebrities and everyday people alike as generative video gets more advanced and cheaper to use. The law had bipartisan support.

Now, the White House says it’s concerned about the issues deepfakes could pose for the legal system. For example, it says, “fake evidence could be used to attempt to deny justice to both plaintiffs and defendants.” It calls for new standards for deepfake detection and asks the Department of Justice to create rules around it. Legal experts I’ve spoken with are more concerned with a different problem: Lawyers are adopting AI models that make errors such as citing cases that don’t exist, which judges may not catch. This is not addressed in the action plan. 

It’s also worth noting that just days before releasing a plan that targets “malicious deepfakes,” President Trump shared a fake AI-generated video of former president Barack Obama being arrested in the Oval Office.

Overall, the AI Action Plan affirms what President Trump and those in his orbit have long signaled: It’s the defining social and political weapon of our time. They believe that AI, if harnessed correctly, can help them win everything from culture wars to geopolitical conflicts. The right AI, they argue, will help defeat China. Government pressure on leading companies can force them to purge “woke” ideology from their models. 

The plan includes crowd-pleasers—like cracking down on deepfakes—but overall, it reflects how tech giants have cozied up to the Trump administration. The fact that it contains almost no provisions challenging their power shows how their investment in this relationship is paying off.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This startup wants to use the Earth as a massive battery

The Texas-based startup Quidnet Energy just completed a test showing it can store energy for up to six months by pumping water underground.

Using water to store electricity is hardly a new concept—pumped hydropower storage has been around for over a century. But the company hopes its twist on the technology could help bring cheap, long-duration energy storage to new places.

In traditional pumped hydro storage facilities, electric pumps move water uphill, into a natural or manmade body of water. Then, when electricity is needed, that water is released and flows downhill past a turbine, generating electricity. Quidnet’s approach instead pumps water down into impermeable rock formations and keeps it under pressure so it flows up when released. “It’s like pumped hydro, upside down,” says CEO Joe Zhou.

Quidnet started a six-month test of its technology in late 2024, pressurizing the system. In June, the company was able to discharge 35 megawatt-hours of energy from the well. There was virtually no self-discharge, meaning essentially no energy was lost while the system sat pressurized, Zhou says.

Inexpensive forms of energy storage that can store electricity for weeks or months could help inconsistent electricity sources like wind and solar go further for the grid. And Quidnet’s approach, which uses commercially available equipment, could be deployed quickly and qualify for federal tax credits to help make it even cheaper.

However, there’s still a big milestone ahead: turning the pressurized water back into electricity. The company is currently building a facility with the turbines and support equipment to do that—all the components are available to purchase from established companies. “We don’t need to invent new things based on what we’ve already developed today,” Zhou says. “We can now start just deploying at very, very substantial scales.”

That process will come with energy losses. Energy storage systems are typically measured by their round-trip efficiency: how much of the electricity that’s put into the system is returned at the end as electricity. Modeling suggests that Quidnet’s technology could reach a maximum efficiency of about 65%, Zhou says, though some design choices made to optimize for economics will likely cause the system to land at roughly 50%.
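For readers who want the arithmetic spelled out, here is a minimal sketch of the round-trip efficiency calculation; the charging figure is a made-up illustration, not a number Quidnet has reported.

```python
# Minimal sketch of round-trip efficiency: electricity recovered divided by
# electricity used to charge the storage system.
def round_trip_efficiency(energy_in_mwh: float, energy_out_mwh: float) -> float:
    """Fraction of the input electricity returned to the grid."""
    return energy_out_mwh / energy_in_mwh

# Hypothetical example: if pumping and pressurizing a well consumed 70 MWh and
# the turbines later return the 35 MWh discharged in the test, the round-trip
# efficiency is 0.5, i.e. the roughly 50% figure discussed above.
print(round_trip_efficiency(70.0, 35.0))  # 0.5
```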

That’s less efficient than lithium-ion batteries, but long-duration systems, if they’re cheap enough, can operate at low efficiencies and still be useful for the grid, says Paul Denholm, a senior research fellow at the National Renewable Energy Laboratory.

“It’s got to be cost-competitive; it all comes down to that,” Denholm says.

Lithium-ion batteries, the fastest-growing technology in energy storage, are the target that new forms of energy storage, like Quidnet’s, must chase. Lithium-ion batteries are about 90% cheaper today than they were 15 years ago. They’ve become a price-competitive alternative to building new natural-gas plants, Denholm says.

When it comes to competing with batteries, one potential differentiator for Quidnet could be government subsidies. While the Trump administration has clawed back funding for clean energy technologies, there’s still an energy storage tax credit, though recently passed legislation added new supply chain restrictions.

Starting in 2026, new energy storage facilities hoping to qualify for tax credits will need to prove that at least 55% of the value of a project’s materials is not from foreign entities of concern. That rules out sourcing batteries from China, which dominates battery production today. Quidnet has a “high level of domestic content” and expects to qualify for tax credits under the new rules, Zhou says.

The facility Quidnet is building is a project with utility partner CPS Energy, and it should come online in early 2026. 

OpenAI is launching a version of ChatGPT for college students

OpenAI is launching Study Mode, a version of ChatGPT for college students that it promises will act less like a lookup tool and more like a friendly, always-available tutor. It’s part of a wider push by the company to get AI more embedded into classrooms when the new academic year starts in September.

A demonstration for reporters from OpenAI showed what happens when a student asks Study Mode about an academic subject like game theory. The chatbot begins by asking what the student wants to know and then attempts to build an exchange, where the pair work methodically toward the answer together. OpenAI says the tool was built after consulting with pedagogy experts from over 40 institutions.

A handful of college students who were part of OpenAI’s testing cohort—hailing from Princeton, Wharton, and the University of Minnesota—shared positive reviews of Study Mode, saying it did a good job of checking their understanding and adapting to their pace.

The learning approaches that OpenAI has programmed into Study Mode, which are based partially on Socratic methods, appear sound, says Christopher Harris, an educator in New York who has created a curriculum aimed at AI literacy. They might grant educators more confidence about allowing, or even encouraging, their students to use AI. “Professors will see this as working with them in support of learning as opposed to just being a way for students to cheat on assignments,” he says.

But there’s a more ambitious vision behind Study Mode. As demonstrated in OpenAI’s recent partnership with leading teachers’ unions, the company is currently trying to rebrand chatbots as tools for personalized learning rather than cheating. Part of this promise is that AI will act like the expensive human tutors that currently only the most well-off students’ families can typically afford.

“We can begin to close the gap between those with access to learning resources and high-quality education and those who have been historically left behind,” says OpenAI’s head of education, Leah Belsky.

But painting Study Mode as an education equalizer obfuscates one glaring problem. Underneath the hood, it is not a tool trained exclusively on academic textbooks and other approved materials—it’s more like the same old ChatGPT, tuned with a new conversation filter that simply governs how it responds to students, encouraging fewer answers and more explanations. 
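To make that point concrete, here is a minimal sketch of the general technique: wrapping an ordinary chat model in a Socratic system prompt using OpenAI’s public API. This is an illustration of how such a layer can work, not OpenAI’s actual Study Mode implementation; the prompt wording and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Placeholder tutoring instructions, not OpenAI's real Study Mode prompt.
SOCRATIC_PROMPT = (
    "You are a patient tutor. Do not give final answers outright. "
    "Ask what the student already knows, break the problem into steps, "
    "and end each reply with one short question that checks understanding."
)

def study_reply(student_message: str) -> str:
    # Same underlying model; only the system prompt changes its behavior.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(study_reply("Can you explain Nash equilibrium in game theory?"))
```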

This AI tutor, therefore, more resembles what you’d get if you hired a human tutor who has read every required textbook, but also every flawed explanation of the subject ever posted to Reddit, Tumblr, and the farthest reaches of the web. And because of the way AI works, you can’t expect it to distinguish right information from wrong. 

Professors encouraging their students to use it run the risk that it will teach them to approach problems in the wrong way—or worse, teach them material that is fabricated or entirely false.

Given this limitation, I asked OpenAI if Study Mode is limited to particular subjects. The company said no—students will be able to use it to discuss anything they’d normally talk to ChatGPT about. 

It’s true that access to human tutors—which for certain subjects can cost upward of $200 an hour—is typically for the elite few. The notion that AI models can spread the benefits of tutoring to the masses holds an allure. Indeed, it is backed up by at least some early research that shows AI models can adapt to individual learning styles and backgrounds.

But this improvement comes with a hidden cost. Tools like Study Mode, at least for now, take a shortcut by using large language models’ humanlike conversational style without fixing their inherent flaws. 

OpenAI also acknowledges that this tool won’t prevent a student who’s frustrated and wants an answer from simply going back to normal ChatGPT. “If someone wants to subvert learning, and sort of get answers and take the easier route, that is possible,” Belsky says. 

However, one thing going for Study Mode, the students say, is that it’s simply more fun to study with a chatbot that’s always encouraging you along than to stare at a textbook on Bayes’ theorem for the hundredth time. “It’s like the reward signal of like, oh, wait, I can learn this small thing,” says Maggie Wang, a student from Princeton who tested it. The tool is free for now, but Praja Tickoo, a student from Wharton, says it wouldn’t have to be for him to use it. “I think it’s absolutely something I would be willing to pay for,” he says.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

A baby boy born over the weekend holds the new record for the “oldest baby.” Thaddeus Daniel Pierce, who arrived on July 26, developed from an embryo that had been in storage for 30 and a half years.

“We had a rough birth but we are both doing well now,” says Lindsey Pierce, his mother. “He is so chill. We are in awe that we have this precious baby!”

Lindsey and her husband, Tim Pierce, who live in London, Ohio, “adopted” the embryo from a woman who had it created in 1994. She says her family and church family think “it’s like something from a sci-fi movie.” 

“The baby has a 30-year-old sister,” she adds. Tim was a toddler when the embryos were first created.

“It’s been pretty surreal,” says Linda Archerd, 62, who donated the embryo. “It’s hard to even believe.”

Three little hopes

The story starts back in the early 1990s. Archerd had been trying—and failing—to get pregnant for six years. She and her husband decided to try IVF, a fairly new technology at the time. “People were [unfamiliar] with it,” says Archerd. “A lot of people were like, what are you doing?”

They did it anyway, and in May 1994, they managed to create four embryos. One of them was transferred to Linda’s uterus. It resulted in a healthy baby girl. “I was so blessed to have a baby,” Archerd says. The remaining three embryos were cryopreserved and kept in a storage tank.

That was 31 years ago. The healthy baby girl is now a 30-year-old woman who has her own 10-year-old daughter. But the other three embryos remained frozen in time.

Archerd originally planned to use the embryos herself. “I always wanted another baby desperately,” she says. “I called them my three little hopes.” Her then husband felt differently, she says. Archerd went on to divorce him, but she won custody of the embryos and kept them in storage, still hopeful she might use them one day, perhaps with another partner.

That meant paying annual storage fees, which increased over time and ended up costing Archerd around a thousand dollars a year, she says. To her, it was worth it. “I always thought it was the right thing to do,” she says. 

Things changed when she started going through menopause, she says. She considered her options. She didn’t want to discard the embryos or donate them for research. And she didn’t want to donate them to another family anonymously—she wanted to meet the parents and any resulting babies. “It’s my DNA; it came from me … and [it’s] my daughter’s sibling,” she says.

Then she found out about embryo “adoption.” This is a type of embryo donation in which both donors and recipients have a say in whom they “place” their embryos with or “adopt” them from. It is overseen by agencies—usually explicitly religious ones—that believe an embryo is morally equivalent to a born human. Archerd is Christian.

There are several agencies that offer these adoption services in the US, but not all of them accept embryos that have been stored for a very long time. That’s partly because those embryos will have been frozen and stored in unfamiliar, old-fashioned ways, and partly because old embryos are thought to be less likely to survive thawing and transfer and go on to develop into a baby.

“So many places wouldn’t even take my information,” says Archerd. Then she came across the Snowflakes program run by the Nightlight Christian Adoptions agency. The agency was willing to accept her embryos, but it needed Archerd’s medical records from the time the embryos had been created, as well as the embryos’ lab records.

So Archerd called the fertility doctor who had treated her decades before. “I still remembered his phone number by heart,” she says. That doctor, now in his 70s, is still practicing at a clinic in Oregon. He dug Archerd’s records out from his basement, she says. “Some of [them] were handwritten,” she adds. Her embryos entered Nightlight’s “matching pool” in 2022.

Making a match

“Our matching process is really driven by the preferences of the placing family,” says Beth Button, executive director of the Snowflakes program. Archerd’s preference was for a married Caucasian, Christian couple living in the US. “I didn’t want them to go out of the country,” says Archerd. “And being Christian is very important to me, because I am.”

It took a while to find a match. Most of the “adopting parents” signed up for the Snowflakes program were already registered at fertility clinics that wouldn’t have accepted the embryos, says Button. “I would say that over 90% of clinics in the US would not have accepted these embryos,” she says.

Lindsey and Tim Pierce at Rejoice Fertility. Courtesy of Lindsey Pierce.

Archerd’s embryos were assigned to the agency’s Open Hearts program for embryos that are “hard to place,” along with others that have been in storage for a long time or are otherwise thought to be less likely to result in a healthy birth.

Lindsey and Tim Pierce had also signed up for the Open Hearts program. The couple, aged 35 and 34, respectively, had been trying for a baby for seven years and had seen multiple doctors.

Lindsey was researching child adoption when she came across the Snowflakes program. 

When the couple were considering their criteria for embryos they might receive, they decided that they’d be open to any. “We checkmarked anything and everything,” says Tim. That’s how they ended up being matched with Archerd’s embryos. “We thought it was wild,” says Lindsey. “We didn’t know they froze embryos that long ago.”

Lindsey and Tim had registered with Rejoice Fertility, an IVF clinic in Knoxville, Tennessee, run by John Gordon, a reproductive endocrinologist who prides himself on his efforts to reduce the number of embryos in storage. The huge number of embryos left in storage tanks was weighing on his conscience, he says, so around six years ago, he set up Rejoice Fertility with the aim of doing things differently.

“Now we’re here in the belt buckle of the Bible Belt,” says Gordon, who is Reformed Presbyterian. “I’ve changed my mode of practice.” IVF treatments performed at the clinic are designed to create as few excess embryos as possible. The clinic works with multiple embryo adoption agencies and will accept any embryo, no matter how long it has been in storage.

A portrait of Linda Archerd. Courtesy of Linda Archerd.

It was his clinic that treated the parents who previously held the record for the longest-stored embryo—in 2022, Rachel and Philip Ridgeway had twins from embryos created more than 30 years earlier. “They’re such a lovely couple,” says Gordon. When we spoke, he was making plans to meet the family for breakfast. The twins are “growing like weeds,” he says with a laugh.

“We have certain guiding principles, and they’re coming from our faith,” says Gordon, although he adds that he sees patients who hold alternative views. One of those principles is that “every embryo deserves a chance at life and that the only embryo that cannot result in a healthy baby is the embryo not given the opportunity to be transferred into a patient.”

That’s why his team will endeavor to transfer any embryo they receive, no matter the age or conditions. That can be challenging, especially when the embryos have been frozen or stored in unusual or outdated ways. “It’s scary for people who don’t know how to do it,” says Sarah Atkinson, lab supervisor and head embryologist at Rejoice Fertility. “You don’t want to kill someone’s embryos if you don’t know what you’re doing.”

Cumbersome and explosive

In the early days of IVF, embryos earmarked for storage were slow-frozen. This technique involves gradually lowering the temperature of the embryos. But because slow freezing can cause harmful ice crystals to form, clinics switched in the 2000s to a technique called vitrification, in which the embryos are placed in thin plastic tubes called straws and lowered into tanks of liquid nitrogen. This rapidly freezes the embryos and converts them into a glass-like state. 

The embryos can later be thawed by removing them from the tanks and rapidly—within two seconds—plunging them into warm “thaw media,” says Atkinson. Thawing slow-frozen embryos is more complicated. And the exact thawing method required varies, depending on how the embryos were preserved and what they were stored in. Some of the devices need to be opened while they are inside the storage tank, which can involve using forceps, diamond-bladed knives, and other tools in the liquid nitrogen, says Atkinson.

Sarah Atkinson, lab supervisor and head embryologist at Rejoice Fertility, directly injects sperm into two eggs to fertilize them. Courtesy of Sarah Atkinson at Rejoice Fertility.

Recently, she was tasked with retrieving embryos that had been stored inside a glass vial. The vial was made from blown glass and had been heat-sealed with the embryo inside. Atkinson had to use her diamond-bladed knife to snap open the seal inside the nitrogen tank. It was fiddly work, and when the device snapped, a small shard of glass flew out and hit Atkinson’s face. “Hit me on the cheek, cut my cheek, blood running down my face, and I’m like, Oh shit,” she says. Luckily, she had her safety goggles on. And the embryos survived, she adds.

The two embryos that were transferred to Lindsey Pierce.

Atkinson has a folder in her office with notes she’s collected on various devices over the years. She flicks through it over a video call and points to the notes she made about the glass vial. “Might explode; wear face shield and eye protection,” she reads. A few pages later, she points to another embryo-storage device. “You have to thaw this one in your fingers,” she tells me. “I don’t like it.”

The record-breaking embryos had been slow-frozen and stored in a plastic vial, says Atkinson. Thawing them was a cumbersome process. But all three embryos survived it.

The Pierces had to travel from their home in Ohio to the clinic in Tennessee five times over a two-week period. “It was like a five-hour drive,” says Lindsey. One of the three embryos stopped growing. The other two were transferred to Lindsey’s uterus on November 14, she says. And one developed into a fetus.

Now that the baby has arrived, Archerd is keen to meet him. “The first thing that I noticed when Lindsey sent me his pictures is how much he looks like my daughter when she was a baby,” she says. “I pulled out my baby book and compared them side by side, and there is no doubt that they are siblings.”

She doesn’t yet have plans to meet the baby, but doing so would be “a dream come true,” she says. “I wish that they didn’t live so far away from me … He is perfect!”

“We didn’t go into it thinking we would break any records,” says Lindsey. “We just wanted to have a baby.”

Chinese universities want students to use more AI, not less

Just two years ago, Lorraine He, now a 24-year-old law student, was told to avoid using AI for her assignments. At the time, to get around a national block on ChatGPT, students had to buy a mirror-site version from a secondhand marketplace. Its use was common, but it was at best tolerated and more often frowned upon. Now, her professors no longer warn students against using AI. Instead, students are encouraged to use it—as long as they follow best practices.

She is far from alone. Just like those in the West, Chinese universities are going through a quiet revolution. According to a recent survey by the Mycos Institute, a Chinese higher-education research group, the use of generative AI on campus has become nearly universal. The same survey reports that just 1% of university faculty and students in China reported never using AI tools in their studies or work. Nearly 60% said they used them frequently—either multiple times a day or several times a week.

However, there’s a crucial difference. While many educators in the West see AI as a threat they have to manage, more Chinese classrooms are treating it as a skill to be mastered. In fact, as the Chinese-developed model DeepSeek gains in popularity globally, people increasingly see it as a source of national pride. The conversation in Chinese universities has gradually shifted from worrying about the implications for academic integrity to encouraging literacy, productivity, and staying ahead. 

The cultural divide is even more apparent in public sentiment. A report on global AI attitudes from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) found that China leads the world in enthusiasm. About 80% of Chinese respondents said they were “excited” about new AI services—compared with just 35% in the US and 38% in the UK.

“This attitude isn’t surprising,” says Fang Kecheng, a professor in communications at the Chinese University of Hong Kong. “There’s a long tradition in China of believing in technology as a driver of national progress, tracing back to the 1980s, when Deng Xiaoping was already saying that science and technology are primary productive forces.”

From taboo to toolkit

Liu Bingyu, one of He’s professors at the China University of Political Science and Law, says AI can act as “instructor, brainstorm partner, secretary, and devil’s advocate.” She added a full session on AI guidelines to her lecture series this year, after the university encouraged “responsible and confident” use of AI. 

Liu recommends that students use generative AI to write literature reviews, draft abstracts, generate charts, and organize thoughts. She’s created slides that lay out detailed examples of good and bad prompts, along with one core principle: AI can’t replace human judgment. “Only high-quality input and smart prompting can lead to good results,” she says.

“The ability to interact with machines is one of the most important skills in today’s world,” Liu told her class. “And instead of having students do it privately, we should talk about it out in the open.”

This reflects a growing trend across the country. MIT Technology Review reviewed the AI strategies of 46 top Chinese universities and found that almost all of them have added interdisciplinary AI general-education classes, AI-related degree programs, and AI literacy modules in the past year. Tsinghua, for example, is establishing a new undergraduate general-education college to train students in AI plus another traditional discipline, like biology, healthcare, science, or humanities.

Major institutions like Renmin, Nanjing, and Fudan Universities have rolled out general-access AI courses and degree programs that are open to all students, not reserved for computer science majors like the traditional machine-learning classes. At Zhejiang University, an introductory AI class became mandatory for undergraduates in 2024.

Lin Shangxin, president of Renmin University of China, recently told local media that AI was an “unprecedented opportunity” for humanities and social sciences. “Instead of a challenge, I believe AI would empower humanities studies,” Lin told The Paper.

The collective action echoes a central government push. In April 2025, the Ministry of Education released new national guidelines calling for sweeping “AI+ education” reforms, aimed at cultivating critical thinking, digital fluency, and real‐world skills at all education levels. Earlier this year, the Beijing municipal government mandated AI education across all schools in the city—from universities to K–12.

Fang believes that more formal AI literacy education will help bridge an emerging divide between students. “There’s a big gap in digital literacy,” he says. “Some students are fluent in AI tools. Others are lost.”

Building the AI university

In the absence of Western tools like ChatGPT and Claude, many Chinese universities have begun deploying locally hosted versions of DeepSeek on campus servers to support students. These campus-specific AI systems—often referred to as the “full-blood version” of DeepSeek—offer longer context windows, unlimited dialogue rounds, and broader functionality than the public-facing free versions.
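As a rough illustration of what “locally hosted” means in practice, a campus IT team could serve an open-weights DeepSeek model on its own hardware with an open-source inference engine such as vLLM. The sketch below is a guess at a minimal setup, not a description of any particular university’s deployment; the distilled model choice and sampling settings are assumptions.

```python
# pip install vllm
from vllm import LLM, SamplingParams

# Hypothetical choice: a distilled DeepSeek-R1 variant small enough for a
# single GPU server; a real deployment would match the weights to its hardware.
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(
    ["Summarize the main schools of thought on contract interpretation."],
    params,
)
print(outputs[0].outputs[0].text)
```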

This mirrors a broader trend in the West, where companies like OpenAI and Anthropic are rolling out campus-wide education tiers—OpenAI recently offered free ChatGPT Plus to all U.S. and Canadian college students, while Anthropic launched Claude for Education with partners like Northeastern and LSE. But in China, the initiative is typically university-led rather than driven by the companies themselves.

The goal, according to Zhejiang University, is to offer students full access to AI tools so they can stay up to date with the fast-changing technology. Students can use their ID to access the models for free. 

Yanyan Li and Meifang Zhuo, two researchers at Warwick University who have studied students’ use of AI at universities in the UK, believe that AI literacy education has become crucial to students’ success. 

With their colleague Gunisha Aggarwal, they conducted focus groups with college students from a range of backgrounds and levels of study to find out how AI is used in academic work. They found that students’ knowledge of how to use AI comes mainly from personal exploration. “While most students understand that AI output is not always trustworthy, we observed a lot of anxiety on how to use it right,” says Li.

“The goal shouldn’t be preventing students from using AI but guiding them to harness it for effective learning and higher-order thinking,” says Zhuo. 

That lesson has come slowly. A student at Central China Normal University in Wuhan told MIT Technology Review that just a year ago, most of his classmates paid for mirror websites of ChatGPT, using VPNs or semi-legal online marketplaces to access Western models. “Now, everyone just uses DeepSeek and Doubao,” he said. “It’s cheaper, it works in Chinese, and no one’s worried about getting flagged anymore.”

Still, even with increased institutional support, many students feel anxious about whether they’re using AI correctly—or ethically. The use of AI detection tools has created an informal gray economy, where students pay hundreds of yuan to freelancers promising to “AI-detection-proof” their writing, according to a Rest of World report. Three students told MIT Technology Review that this environment has created confusion, stress, and increased anxiety. Across the board, they said they appreciate it when their professor offers clear policies and practical advice, not just warnings.

He, the law student in Beijing, recently joined a career development group to learn more AI skills to prepare for the job market. To many students like her, understanding how to use AI better is not just a study hack but a necessary skill in China’s fragile job market. Eighty percent of job openings available to fresh graduates listed AI-related skills as a plus in 2025, according to a report by the Chinese media outlet YiCai. In a slowing economy, many students see AI as a lifeline.

“We need to rethink what is considered ‘original work’ in the age of AI,” says Zhuo, “and universities are a crucial site of that conversation.”

What role should oil and gas companies play in climate tech?

This week, I have a new story out about Quaise, a geothermal startup that’s trying to commercialize new drilling technology. Using a device called a gyrotron, the company wants to drill deeper, cheaper, in an effort to unlock geothermal power anywhere on the planet. (For all the details, check it out here.) 

For the story, I visited Quaise’s headquarters in Houston. I also took a trip across town to Nabors Industries, Quaise’s investor and tech partner and one of the biggest drilling companies in the world. 

Standing on top of a drilling rig in the backyard of Nabors’s headquarters, I couldn’t stop thinking about the role oil and gas companies are playing in the energy transition. This industry has resources and energy expertise—but also a vested interest in fossil fuels. Can it really be part of addressing climate change?

The relationship between Quaise and Nabors is one that we see increasingly often in climate tech—a startup partnering up with an established company in a similar field. (Another one that comes to mind is in the cement industry, where Sublime Systems has seen a lot of support from legacy players including Holcim, one of the biggest cement companies in the world.) 

Quaise got an early investment from Nabors in 2021, to the tune of $12 million. Now Nabors also serves as a technical partner for the startup.

“We are agnostic to what hole we’re drilling,” says Cameron Maresh, a project engineer on the energy transition team at Nabors Industries. The company is working on other investments and projects in the geothermal industry, Maresh says, and the work with Quaise is the culmination of a yearslong collaboration: “We’re just truly excited to see what Quaise can do.”

From the outside, this sort of partnership makes a lot of sense for Quaise. It gets resources and expertise. Meanwhile, Nabors is getting involved with an innovative company that could represent a new direction for geothermal. And maybe more to the point, if fossil fuels are to be phased out, this deal gives the company a stake in next-generation energy production.

There is so much potential for oil and gas companies to play a productive role in addressing climate change. One report from the International Energy Agency examined the role these legacy players could take: “Energy transitions can happen without the engagement of the oil and gas industry, but the journey to net zero will be more costly and difficult to navigate if they are not on board,” the authors wrote.

In the agency’s blueprint for what a net-zero emissions energy system could look like in 2050, about 30% of energy could come from sources where the oil and gas industry’s knowledge and resources are useful. That includes hydrogen, liquid biofuels, biomethane, carbon capture, and geothermal. 

But so far, the industry has hardly lived up to its potential as a positive force for the climate. Also in that report, the IEA pointed out that oil and gas producers made up only about 1% of global investment in climate tech in 2022. Investment has ticked up a bit since then, but still, it’s tough to argue that the industry is committed. 

And now that climate tech is falling out of fashion with the government in the US, I’d venture to guess that we’re going to see oil and gas companies increasingly pulling back on their investments and promises. 

BP recently backtracked on previous commitments to cut oil and gas production and invest in clean energy. And last year the company announced that it had written off $1.1 billion in offshore wind investments in 2023 and wanted to sell other wind assets. Shell closed down all its hydrogen fueling stations for vehicles in California last year. (This might not be all that big a loss, since EVs are beating hydrogen by a huge margin in the US, but it’s still worth noting.) 

So oil and gas companies are investing what amounts to pennies and often backtrack when the political winds change direction. And, let’s not forget, fossil-fuel companies have a long history of behaving badly. 

In perhaps the most notorious example, scientists at Exxon modeled climate change in the 1970s, and their forecasts turned out to be quite accurate. Rather than publish that research, the company downplayed how climate change might affect the planet. (For what it’s worth, company representatives have argued that this was less of a coverup and more of an internal discussion that wasn’t fit to be shared outside the company.) 

While fossil fuels are still part of our near-term future, oil and gas companies, and particularly producers, would need to make drastic changes to align with climate goals—changes that wouldn’t be in their financial interest. Few seem inclined to make the turn that’s needed.

As the IEA report puts it: “In practice, no one committed to change should wait for someone else to move first.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The deadly saga of the controversial gene therapy Elevidys

It has been a grim few months for the Duchenne muscular dystrophy (DMD) community. There had been some excitement when, a couple of years ago, a gene therapy for the disorder was approved by the US Food and Drug Administration for the first time. That drug, Elevidys, has now been implicated in the deaths of two teenage boys.

The drug’s approval was always controversial—there was a lack of evidence that it actually worked, for starters. But the agency that once rubber-stamped the drug has now turned on its manufacturer, Sarepta Therapeutics. In a remarkable chain of events, the FDA asked the company to stop shipping the drug on July 18. Sarepta refused to comply.

In the days since, the company has acquiesced. But its reputation has already been hit. And the events have dealt a devastating blow to people desperate for treatments that might help them, their children, or other family members with DMD.

DMD is a rare genetic disorder that causes muscles to degenerate over time. It’s caused by a mutation in a gene that codes for a protein called dystrophin. That protein is essential for muscles—without it, muscles weaken and waste away. The disease mostly affects boys, and symptoms usually start in early childhood.

At first, affected children usually start to find it hard to jump or climb stairs. But as the disease progresses, other movements become difficult too. Eventually, the condition might affect the heart and lungs. The life expectancy of a person with DMD has recently improved, but it is still only around 30 or 40 years. There is no cure. It’s a devastating diagnosis.

Elevidys was designed to replace missing dystrophin with a shortened, engineered version of the protein. In June 2023, the FDA approved the therapy for eligible four- and five-year-olds. It came with a $3.2 million price tag.

The approval was celebrated by people affected by DMD, says Debra Miller, founder of CureDuchenne, an organization that funds research into the condition and offers support to those affected by it. “We’ve not had much in the way of meaningful therapies,” she says. “The excitement was great.”

But the approval was controversial. It came under an “accelerated approval” program that essentially lowers the bar of evidence for drugs designed to treat “serious or life-threatening diseases where there is an unmet medical need.”

Elevidys was approved because it appeared to increase levels of the engineered protein in patients’ muscles. But it had not been shown to improve patient outcomes: It had failed a randomized clinical trial.

The FDA approval was granted on the condition that Sarepta complete another clinical trial. The topline results of that trial were described in October 2023 and were published in detail a year later. Again, the drug failed to meet its “primary endpoint”—in other words, it didn’t work as well as hoped.

In June 2024, the FDA expanded the approval of Elevidys. It granted traditional approval for the drug to treat people with DMD who are over the age of four and can walk independently, and another accelerated approval for those who can’t.

Some experts were appalled at the FDA’s decision—even some within the FDA disagreed with it. But things weren’t so simple for people living with DMD. I spoke to some parents of such children a couple of years ago. They pointed out that drug approvals can help bring interest and investment to DMD research. And, above all, they were desperate for any drug that might help their children. They were desperate for hope.

Unfortunately, the treatment does not appear to be delivering on that hope. There have always been questions over whether it works. But now there are serious questions over how safe it is. 

In March 2025, a 16-year-old boy died after being treated with Elevidys. He had developed acute liver failure (ALF) after having the treatment, Sarepta said in a statement. On June 15, the company announced a second death—a 15-year-old who also developed ALF following Elevidys treatment. The company said it would pause shipments of the drug, but only for patients who are not able to walk.

The following day, Sarepta held an online presentation in which CEO Doug Ingram said that the company was exploring ways to make the treatment safer, perhaps by treating recipients with another drug that dampens their immune systems. But that same day, the company announced that it was laying off 500 employees—36% of its workforce. Sarepta did not respond to a request for comment.

On June 24, the FDA announced that it was investigating the risks of serious outcomes “including hospitalization and death” associated with Elevidys, and “evaluating the need for further regulatory action.”

There was more tragic news on July 18, when there were reports that a third patient had died following a Sarepta treatment. This patient, a 51-year-old, hadn’t been taking Elevidys but was enrolled in a clinical trial for a different Sarepta gene therapy designed to treat limb-girdle muscular dystrophy. The same day, the FDA asked Sarepta to voluntarily pause all shipments of Elevidys. Sarepta refused to do so.

The refusal was surprising, says Michael Kelly, chief scientific officer at CureDuchenne: “It was an unusual step to take.”

After significant media coverage, including reporting that the FDA was “deeply troubled” by the decision and would use its “full regulatory authority,” Sarepta backed down a few days later. On July 21, the company announced its decision to “voluntarily and temporarily” pause all shipments of Elevidys in the US.

Sarepta says it will now work with the FDA to address safety and labeling concerns. But in the meantime, the saga has left the DMD community grappling with “a mix of disappointment and concern,” says Kelly. Many are worried about the risks of taking the treatment. Others are devastated that they are no longer able to access it.

Miller says she knows of families who have been working with their insurance providers to get authorization for the drug. “It’s like the rug has been pulled out from under them,” she says. Many families have no other treatment options. “And we know what happens when you do nothing with Duchenne,” she says. Others, particularly those with teenage children with DMD, are deciding against trying the drug, she adds.

The decision over whether to take Elevidys was already a personal one based on several factors, Kelly says. People with DMD and their families deserve clear and transparent information about the treatment in order to make that decision.

The FDA’s decision to approve Elevidys was made on limited data, says Kelly. But as things stand today, over 900 people have been treated with Elevidys. “That gives the FDA… an opportunity to look at real data and make informed decisions,” he says.

“Families facing Duchenne do not have time to waste,” Kelly says. “They must navigate a landscape where hope is tempered by the realities of medical complexity.”

A version of this article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.