How do AI models generate videos?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

It’s been a big year for video generation. In the last nine months OpenAI made Sora public, Google DeepMind launched Veo 3, and the video startup Runway launched Gen-4. All can produce video clips that are (almost) impossible to distinguish from actual filmed footage or CGI animation. This year also saw Netflix debut an AI visual effect in its show The Eternaut, the first time video generation has been used to make mass-market TV.

Sure, the clips you see in demo reels are cherry-picked to showcase a company’s models at the top of their game. But with the technology in the hands of more users than ever before—Sora and Veo 3 are available in the ChatGPT and Gemini apps for paying subscribers—even the most casual filmmaker can now knock out something remarkable. 

The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation. 

With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work.

How do you generate a video?

Let’s assume you’re a casual user. There is now a range of high-end tools that allow pro video makers to insert video generation models into their workflows. But most people will use this technology in an app or via a website. You know the drill: “Hey, Gemini, make me a video of a unicorn eating spaghetti. Now make its horn take off like a rocket.” What you get back will be hit or miss, and you’ll typically need to ask the model to take another pass or 10 before you get more or less what you wanted.

So what’s going on under the hood? Why is it hit or miss—and why does it take so much energy? The latest wave of video generation models are what’s known as latent diffusion transformers. Yes, that’s quite a mouthful. Let’s unpack each part in turn, starting with diffusion. 

What’s a diffusion model?

Imagine taking an image and adding a random spattering of pixels to it. Take that pixel-spattered image and spatter it again and then again. Do that enough times and you will have turned the initial image into a random mess of pixels, like static on an old TV set. 

A diffusion model is a neural network trained to reverse that process, turning random static into images. During training, it gets shown millions of images in various stages of pixelation. It learns how those images change each time new pixels are thrown at them and, thus, how to undo those changes. 

The upshot is that when you ask a diffusion model to generate an image, it will start off with a random mess of pixels and step by step turn that mess into an image that is more or less similar to images in its training set. 
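
To make that concrete, here is a toy sketch of both halves of the process in Python. It is illustrative only: predicted_noise is a random stand-in for the trained network, and real samplers use carefully derived update rules.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 1000  # number of noising steps

    def add_noise(image, t):
        # Forward process: blend the image with Gaussian static.
        # At t=0 the image is untouched; by t=T it is pure noise.
        alpha = 1.0 - t / T  # fraction of the original that survives
        noise = rng.standard_normal(image.shape)
        return np.sqrt(alpha) * image + np.sqrt(1 - alpha) * noise, noise

    def predicted_noise(noisy, t):
        # Stand-in for the trained network, which learns to guess which
        # noise was added at step t so it can be subtracted back out.
        return rng.standard_normal(noisy.shape)

    # Generation: start from a random mess of pixels, clean it up step by step.
    x = rng.standard_normal((64, 64, 3))
    for t in range(T, 0, -1):
        x = x - predicted_noise(x, t) / T  # one small cleanup step per iteration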

But you don’t want any image—you want the image you specified, typically with a text prompt. And so the diffusion model is paired with a second model—such as a large language model (LLM) trained to match images with text descriptions—that guides each step of the cleanup process, pushing the diffusion model toward images that the LLM considers a good match to the prompt.
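
A common recipe for this guidance (one of several in use) is classifier-free guidance: run the denoising network twice per step, with and without the prompt, and exaggerate the difference. A minimal sketch, where denoiser, prompt_embedding, and guidance_scale are hypothetical stand-ins rather than any company’s actual API:

    def guided_noise_estimate(denoiser, noisy_image, prompt_embedding, t, guidance_scale=7.5):
        # denoiser: the trained diffusion network (hypothetical interface).
        # prompt_embedding: the text prompt as encoded by the paired language model.
        eps_uncond = denoiser(noisy_image, t, prompt=None)            # "any plausible image"
        eps_cond = denoiser(noisy_image, t, prompt=prompt_embedding)  # "an image matching the text"
        # Amplifying the difference pushes each cleanup step toward the prompt;
        # larger scales follow the text more literally.
        return eps_uncond + guidance_scale * (eps_cond - eps_uncond)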

An aside: This LLM isn’t pulling the links between text and images out of thin air. Most text-to-image and text-to-video models today are trained on large data sets that contain billions of pairings of text and images or text and video scraped from the internet (a practice many creators are very unhappy about). This means that what you get from such models is a distillation of the world as it’s represented online, distorted by prejudice (and pornography).

It’s easiest to imagine diffusion models working with images. But the technique can be used with many kinds of data, including audio and video. To generate movie clips, a diffusion model must clean up sequences of images—the consecutive frames of a video—instead of just one image. 

What’s a latent diffusion model? 

All this takes a huge amount of compute (read: energy). That’s why most diffusion models used for video generation use a technique called latent diffusion. Instead of processing raw data—the millions of pixels in each video frame—the model works in what’s known as a latent space, in which the video frames (and text prompt) are compressed into a mathematical code that captures just the essential features of the data and throws out the rest. 
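
Some rough arithmetic shows why this helps. The latent shape below is invented for illustration; real models compress by different amounts:

    pixels_per_frame = 1920 * 1080 * 3   # one raw 1080p RGB frame: ~6.2 million numbers
    latents_per_frame = 128 * 72 * 4     # a hypothetical compressed encoding: ~37,000 numbers
    print(pixels_per_frame / latents_per_frame)  # ~169x less data to process per frame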

A similar thing happens whenever you stream a video over the internet: A video is sent from a server to your screen in a compressed format so that it gets to you faster, and when it arrives, your computer or TV converts it back into a watchable video.

And so the final step is to decompress what the latent diffusion process has come up with. Once the compressed frames of random static have been turned into the compressed frames of a video that the LLM guide considers a good match for the user’s prompt, the compressed video gets converted into something you can watch.  

With latent diffusion, the diffusion process works more or less the way it would for an image. The difference is that the pixelated video frames are now mathematical encodings of those frames rather than the frames themselves. This makes latent diffusion far more efficient than a typical diffusion model. (Even so, video generation still uses more energy than image or text generation. There’s just an eye-popping amount of computation involved.) 

What’s a latent diffusion transformer?

Still with me? There’s one more piece to the puzzle—and that’s how to make sure the diffusion process produces a sequence of frames that are consistent, maintaining objects and lighting and so on from one frame to the next. OpenAI did this with Sora by combining its diffusion model with another kind of model called a transformer. This has now become standard in generative video. 

Transformers are great at processing long sequences of data, like words. That has made them the special sauce inside large language models such as OpenAI’s GPT-5 and Google DeepMind’s Gemini, which can generate long sequences of words that make sense, maintaining consistency across many dozens of sentences. 

But videos are not made of words. Instead, videos get cut into chunks that can be treated as if they were. The approach that OpenAI came up with was to dice videos up across both space and time. “It’s like if you were to have a stack of all the video frames and you cut little cubes from it,” says Tim Brooks, a lead researcher on Sora.
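
In code, that dicing is little more than a tensor reshape: the clip becomes a list of small space-time cubes, each of which the transformer treats like a word. A sketch with invented sizes:

    import numpy as np

    # A tiny fake clip: 16 frames of 64x64 RGB video (sizes are illustrative).
    frames, height, width, channels = 16, 64, 64, 3
    video = np.zeros((frames, height, width, channels))

    # Cut "little cubes" 4 frames deep and 16x16 pixels across.
    ft, fh, fw = 4, 16, 16
    cubes = video.reshape(frames // ft, ft, height // fh, fh, width // fw, fw, channels)
    cubes = cubes.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, ft * fh * fw * channels)

    print(cubes.shape)  # (64, 3072): 64 space-time "words," each a vector of 3,072 numbers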

A selection of videos generated with Veo 3 and Midjourney. The clips have been enhanced in postproduction with Topaz, an AI video-editing tool. Credit: VaigueMan

Using transformers alongside diffusion models brings several advantages. Because they are designed to process sequences of data, transformers also help the diffusion model maintain consistency across frames as it generates them. This makes it possible to produce videos in which objects don’t pop in and out of existence, for example. 

And because the videos are diced up, their size and orientation do not matter. This means that the latest wave of video generation models can be trained on a wide range of example videos, from short vertical clips shot with a phone to wide-screen cinematic films. The greater variety of training data has made video generation far better than it was just two years ago. It also means that video generation models can now be asked to produce videos in a variety of formats. 

What about the audio? 

A big advance with Veo 3 is that it generates video with audio, from lip-synched dialogue to sound effects to background noise. That’s a first for video generation models. As Google DeepMind CEO Demis Hassabis put it at this year’s Google I/O: “We’re emerging from the silent era of video generation.” 

The challenge was to find a way to line up video and audio data so that the diffusion process would work on both at the same time. Google DeepMind’s breakthrough was a new way to compress audio and video into a single piece of data inside the diffusion model. When Veo 3 generates a video, its diffusion model produces audio and video together in a lockstep process, ensuring that the sound and images are synched.  
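
Google DeepMind hasn’t published the implementation details, but conceptually the compression step pairs each slice of video with the audio playing under it, so that one diffusion process cleans up both together. A speculative sketch with invented shapes:

    import numpy as np

    # Hypothetical latents: one vector per video frame, one per matching audio slice.
    video_latents = np.zeros((48, 512))  # 2 seconds of video at 24 frames per second
    audio_latents = np.zeros((48, 128))  # audio diced into matching 1/24-second slices

    # Fuse picture and sound into a single piece of data per time step, so the
    # diffusion model denoises them in lockstep and they stay synched.
    fused = np.concatenate([video_latents, audio_latents], axis=1)  # shape: (48, 640)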

You said that diffusion models can generate different kinds of data. Is this how LLMs work too? 

No—or at least not yet. Diffusion models are most often used to generate images, video, and audio. Large language models—which generate text (including computer code)—are built using transformers. But the lines are blurring. We’ve seen how transformers are now being combined with diffusion models to generate videos. And this summer Google DeepMind revealed that it was building an experimental large language model that used a diffusion model instead of a transformer to generate text. 

Here’s where things start to get confusing: Though video generation (which uses diffusion models) consumes a lot of energy, diffusion models themselves are in fact more efficient than transformers. Thus, by using a diffusion model instead of a transformer to generate text, Google DeepMind’s new LLM could be a lot more efficient than existing LLMs. Expect to see more from diffusion models in the near future!

Texas banned lab-grown meat. What’s next for the industry?

Last week, a legal battle over lab-grown meat kicked off in Texas. On September 1, a two-year ban on the technology went into effect across the state; the following day, two companies filed a lawsuit against state officials.

The two companies, Wildtype Foods and Upside Foods, are part of a growing industry that aims to bring new types of food to people’s plates. These products, often called cultivated meat by the industry, take live animal cells and grow them in the lab to make food products without the need to slaughter animals.

Texas joins six other US states and the country of Italy in banning these products. These legal challenges are adding barriers to an industry that’s still in its infancy and already faces plenty of challenges before it can reach consumers in a meaningful way.

The agriculture sector makes up a hefty chunk of global greenhouse-gas emissions, with livestock alone accounting for somewhere between 10% and 20% of climate pollution. Alternative meat products, including those grown in a lab, could help cut the greenhouse gases from agriculture.

The industry is still in its early days, though. In the US, just a handful of companies can legally sell products including cultivated chicken, pork fat, and salmon. Australia, Singapore, and Israel also allow a few companies to sell within their borders.

Upside Foods, which makes cultivated chicken, was one of the first to receive the legal go-ahead to sell its products in the US, in 2022. Wildtype Foods, one of the latest additions to the US market, was able to start selling its cultivated salmon in June.

Upside, Wildtype, and other cultivated-meat companies are still working to scale up production. Products are generally available at pop-up events or on special menus at high-end restaurants. (I visited San Francisco to try Upside’s cultivated chicken at a Michelin-starred restaurant a few years ago.)

Until recently, the only place you could reliably find lab-grown meat in Texas was a sushi restaurant in Austin. Otoko featured Wildtype’s cultivated salmon on a special tasting menu starting in July. (The chef told local publication Culture Map Austin that the cultivated fish tastes like wild salmon, and it was included in a dish with grilled yellowtail to showcase it side-by-side with another type of fish.)

The as-yet-limited reach of lab-grown meat didn’t stop state officials from moving to ban the technology, effective from now until September 2027.

The office of state senator Charles Perry, the author of the bill, didn’t respond to requests for comment. Neither did the Texas and Southwestern Cattle Raisers Association, whose president, Carl Ray Polk Jr., testified in support of the bill in a March committee hearing.

“The introduction of lab-grown meat could disrupt traditional livestock markets, affecting rural communities and family farms,” Perry said during the meeting.

In an interview with the Texas Tribune, Polk said the two-year moratorium would help the industry put checks and balances in place before the products could be sold. He also expressed concern about how clearly cultivated-meat companies will be labeling their products.

“The purpose of these bans is to try to kill the cultivated-meat industry before it gets off the ground,” said Myra Pasek, general counsel of Upside Foods, via email. The company is working to scale up its manufacturing and get the product on the market, she says, “but that can’t happen if we’re not allowed to compete in the marketplace.”

Others in the industry have similar worries. “Moratoriums on sale like this not only deny Texans new choices and economic growth, but they also send chilling signals to researchers and entrepreneurs across the country,” said Pepin Andrew Tuma, the vice president of policy and government relations for the Good Food Institute, a nonprofit think tank focused on alternative proteins, in a statement. (The group isn’t involved in the lawsuit.) 

One day after the moratorium took effect on September 1, Wildtype Foods and Upside Foods filed a lawsuit challenging the ban, naming Jennifer Shuford, commissioner of the Texas Department of State Health Services, among other state officials.

A lawsuit wasn’t necessarily part of the scale-up plan. “This was really a last resort for us,” says Justin Kolbeck, cofounder and CEO of Wildtype.

Growing cells to make meat in the lab isn’t easy—some companies have spent a decade or more trying to make significant amounts of a product that people want to eat. These legal battles certainly aren’t going to help. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

We can’t “make American children healthy again” without tackling the gun crisis

Note for readers: This newsletter discusses gun violence, a raw and tragic issue in America. It was already in progress on Wednesday when a school shooting occurred at Evergreen High School in Colorado and Charlie Kirk was shot and killed at Utah Valley University. 

Earlier this week, the Trump administration’s Make America Healthy Again movement released a strategy for improving the health and well-being of American children. The report was titled—you guessed it—Make Our Children Healthy Again.

Robert F. Kennedy Jr., who leads the Department of Health and Human Services, and his colleagues are focusing on four key aspects of child health: diet, exercise, chemical exposure, and overmedicalization.

Anyone who’s been listening to RFK Jr. posturing on health and wellness won’t be surprised by these priorities. And the first two are pretty obvious. On the whole, American children should be eating more healthily. And they should be getting more exercise.

But there’s a glaring omission. The leading cause of death for American children and teenagers isn’t ultraprocessed food or exposure to some chemical. It’s gun violence.

Yesterday’s news of yet more high-profile shootings at schools in the US throws this disconnect into even sharper relief. Experts believe it is time to treat gun violence in the US as what it is: a public health crisis.

I live in London, UK, with my husband and two young children. We don’t live in a particularly fancy part of the city—in one recent ranking of London boroughs from most to least posh, ours came in at 30th out of 33. I do worry about crime. But I don’t worry about gun violence.

That changed when I temporarily moved my family to the US a couple of years ago. We rented the ground-floor apartment of a lovely home in Cambridge, Massachusetts—a beautiful area with good schools, pastel-colored houses, and fluffy rabbits hopping about. It wasn’t until after we’d moved in that my landlord told me he had guns in the basement.

My daughter joined the kindergarten of a local school that specialized in music, and we took her younger sister along to watch the kids sing songs about friendship. It was all so heartwarming—until we noticed the school security officer at the entrance carrying a gun.

Later in the year, I received an email alert from the superintendent of the Cambridge Public Schools. “At approximately 1:45 this afternoon, a Cambridge Police Department Youth Officer assigned to Cambridge Rindge and Latin School accidentally discharged their firearm while using a staff bathroom inside the school,” the message began. “The school day was not disrupted.”

These experiences, among others, truly brought home to me the cultural differences over firearms between the US and the UK (along with most other countries). For the first time, I worried about my children’s exposure to them. I banned my children from accessing parts of the house. I felt guilty that my four-year-old had to learn what to do if a gunman entered her school. 

But it’s the statistics that are the most upsetting.

In 2023, 46,728 people died from gun violence in the US, according to a report published in June by the Johns Hopkins Bloomberg School of Public Health. That includes both homicides and suicides, and it breaks down to 128 deaths per day, on average. The majority of those who die from gun violence are adults. But the figures for children are sickening, too. In 2023, 2,566 young people died from gun violence. Of those, 234 were under the age of 10.

Gun death rates among children have more than doubled since 2013. Firearms are involved in more child deaths than cancer or car crashes.

Many other children survive gun violence with nonfatal—but often life-changing—injuries. And the impacts are felt beyond those who are physically injured. Witnessing gun violence or hearing gunshots can understandably cause fear, sadness, and distress.  

That’s worth bearing in mind when you consider that there have been 434 school shootings in the US since Columbine in 1999. The Washington Post estimates that 397,000 students have experienced gun violence at school in that period. Another school shooting took place at Evergreen High School in Colorado on Wednesday, adding to that total.

“Being indirectly exposed to gun violence takes its toll on our mental health and children’s ability to learn,” says Daniel Webster, Bloomberg Professor of American Health at the Johns Hopkins Center for Gun Violence Solutions in Baltimore.

The MAHA report states that “American youth face a mental health crisis,” going on to note that “suicide deaths among 10- to 24-year-olds increased by 62% from 2007 to 2021” and that “suicide is now the leading cause of death in teens aged 15-19.” What it doesn’t say is that around half of these suicides involve guns.

“When you add all these dimensions, [gun violence is] a very huge public health problem,” says Webster.

Researchers who study gun violence have been saying the same thing for years. And in 2024, then US Surgeon General Vivek Murthy declared it a public health crisis. “We don’t have to subject our children to the ongoing horror of firearm violence in America,” Murthy said in a statement at the time. Instead, he argued, we should tackle the problem using a public health approach.

Part of that approach involves identifying who is at the greatest risk and offering support to lower that risk, says Webster. Young men who live in poor communities tend to have the highest risk of gun violence, he says, as do those who experience crisis or turmoil. Trying to mediate conflicts or limit access to firearms, even temporarily, can help lower the incidence of gun violence, he says.

There’s an element of social contagion, too, adds Webster. Shooting begets more shooting. He likens it to the outbreak of an infectious disease. “When more people get vaccinated … infection rates go down,” he says. “Almost exactly the same thing happens with gun violence.”

But existing efforts are already under threat. The Trump administration has eliminated hundreds of millions of dollars in grants for organizations working to reduce gun violence.

Webster thinks the MAHA report has “missed the mark” when it comes to the health and well-being of children in the US. “This document is almost the polar opposite to how many people in public health think,” he says. “We have to acknowledge that injuries and deaths from firearms are a big threat to the health and safety of children and adolescents.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Help! My therapist is secretly using ChatGPT

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted. 

It’s my favorite AI story of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found. 

I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?

In all the cases mentioned in the piece, the therapist hadn’t provided prior disclosure of how they were using AI to their patients. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.

In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?

Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.

There is some evidence AI can deliver more standardized, “manualized” therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.

What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?

At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.

OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?

I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things. 

Read the full story from Laurie Clarke

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

AI is changing the grid. Could it help more than it harms?

The rising popularity of AI is driving an increase in electricity demand so significant it has the potential to reshape our grid. Energy consumption by data centers has gone up by 80% from 2020 to 2025 and is likely to keep growing. Electricity prices are already rising, especially in places where data centers are most concentrated. 

Yet many people, especially in Big Tech, argue that AI will be, on balance, a positive force for the grid. They claim that the technology could help get more clean power online faster, run our power system more efficiently, and predict and prevent failures that cause blackouts. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


There are early examples where AI is helping already, including AI tools that utilities are using to help forecast supply and demand. The question is whether these big promises will be realized fast enough to outweigh the negative effects of AI on local grids and communities. 

A delicate balance

One area where AI is already being used for the grid is in forecasting, says Utkarsha Agwan, a member of the nonprofit group Climate Change AI.

Running the grid is a balancing act: Operators have to understand how much electricity demand there is and turn on the right combination of power plants to meet it. They optimize for economics along the way, choosing the sources that will keep prices lowest for the whole system.

That makes it necessary to look ahead hours and in some cases days. Operators consider factors such as historical data (holidays often see higher demand) and the weather (a hot day means more air conditioners sucking up power). These predictions also consider what level of supply is expected from intermittent sources like solar panels.

There’s little risk in using AI tools in forecasting; it’s often not as time sensitive as other applications, which can require reactions within seconds or even milliseconds. A grid operator might use a forecast to determine which plants will need to turn on. Other groups might run their own forecasts as well, using AI tools to decide how to staff a plant, for example. The tools also can’t physically control anything. Rather, they can be used alongside more conventional methods to provide more data.  
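
For a flavor of what even the simplest version of such a forecast looks like, here is a toy day-ahead demand model fit by least squares. Every feature and number is invented; utility-grade models are far richer:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented history: hour of day, temperature (°F), holiday flag -> demand (MW).
    hours = rng.integers(0, 24, 1000)
    temps = rng.uniform(40, 100, 1000)
    holiday = rng.integers(0, 2, 1000)
    demand = (900 + 12 * np.maximum(temps - 70, 0)       # air-conditioning load
              + 50 * np.sin(hours / 24 * 2 * np.pi)      # daily rhythm
              - 80 * holiday + rng.normal(0, 20, 1000))  # holidays run lower, plus noise

    # Fit a linear model: the simplest possible stand-in for a demand forecaster.
    X = np.column_stack([np.ones(1000), np.maximum(temps - 70, 0),
                         np.sin(hours / 24 * 2 * np.pi), holiday])
    coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

    # Forecast tomorrow at 3 p.m., 95 °F, not a holiday.
    x_new = np.array([1.0, 95 - 70, np.sin(15 / 24 * 2 * np.pi), 0.0])
    print(f"predicted demand: {x_new @ coef:.0f} MW")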

Today, grid operators make a lot of approximations to model the grid, because the system is so incredibly complex that it’s impossible to truly know what’s going on in every place at every time. Not only are there a whole host of power plants and consumers to think about, but there are considerations like making sure power lines don’t get overloaded.

Working with those estimates can lead to some inefficiencies, says Kyri Baker, a professor at the University of Colorado Boulder. Operators tend to generate a bit more electricity than the system uses, for example. Using AI to create a better model could reduce some of those losses and allow operators to make decisions about how to control infrastructure in real time to reach a closer match of supply and demand.

She gives the example of a trip to the airport. Imagine there’s a route you know will get you there in about 45 minutes. There might be another, more complicated route that could save you some time in ideal conditions—but you’re not sure whether it’s better on any particular day. What the grid does now is the equivalent of taking the reliable route.

“So that’s the gap that AI can help close. We can solve this more complex problem, fast enough and reliably enough that we can possibly use it and shave off emissions,” Baker says. 

In theory, AI could be used to operate the grid entirely without human intervention. But that work is largely still in the research phase. Grid operators are running some of the most critical infrastructure in this country, and the industry is hesitant to mess with something that’s already working, Baker says. If this sort of technology is ever used in grid operations, there will still be humans in the loop to help make decisions, at least when it’s first deployed.  

Planning ahead

Another fertile area for AI is planning future updates to the grid. Building a power plant can take a very long time—the typical time from an initial request to commercial operation in the US is roughly four years. One reason for the lengthy wait is that new power plants have to demonstrate how they might affect the rest of the grid before they can connect. 

An interconnection study examines whether adding a new power plant of a particular type in a particular place would require upgrades to the grid to prevent problems. After regulators and utilities determine what upgrades might be needed, they estimate the cost, and the energy developer generally foots the bill. 

Today, those studies can take months. They involve trying to understand an incredibly complicated system, and because they rely on estimates of other existing and proposed power plants, only a few can happen in an area at any given time. This has helped create the years-long interconnection queue, a long line of plants waiting for their turn to hook up to the grid in markets like the US and Europe. The vast majority of projects in the queue today are renewables, which means there’s clean power just waiting to come online. 

AI could help speed this process, producing these reports more quickly. The Midcontinent Independent System Operator, a grid operator that covers 15 states in the central US, is currently working with a company called Pearl Street to help automate these reports.

AI won’t be a cure-all for grid planning; there are other steps to clearing the interconnection queue, including securing the necessary permits. But the technology could help move things along. “The sooner we can speed up interconnection, the better off we’ll be,” says Rob Gramlich, president of Grid Strategies, a consultancy specializing in transmission and power markets.

There’s a growing list of other potential uses for AI on the grid and in electricity generation. The technology could monitor and plan ahead for failures in equipment ranging from power lines to gear boxes. Computer vision could help detect everything from wildfires to faulty lines. AI could also help balance supply and demand in virtual power plants, systems of distributed resources like EV chargers or smart water heaters. 

While there are early examples of research and pilot programs for AI from grid planning to operation, some experts are skeptical that the technology will deliver at the level some are hoping for. “It’s not that AI has not had some kind of transformation on power systems,” Climate Change AI’s Agwan says. “It’s that the promise has always been bigger, and the hope has always been bigger.”

Some places are already seeing higher electricity prices because of power needs from data centers. The situation is likely to get worse. Electricity demand from data centers is set to double by the end of the decade, reaching 945 terawatt-hours, roughly the annual demand from the entire country of Japan. 

The infrastructure growth needed to support AI load growth has outpaced the promises of the technology, “by quite a bit,” says Panayiotis Moutis, an assistant professor of electrical engineering at the City College of New York. Higher bills caused by the increasing energy needs of AI aren’t justified by existing ways of using the technology for the grid, he says. 

“At the moment, I am very hesitant to lean on the side of AI being a silver bullet,” Moutis says. 

Correction: This story has been updated to correct Moutis’s affiliation.

Three big things we still don’t know about AI’s energy burden

Earlier this year, when my colleague Casey Crownhart and I spent six months researching the climate and energy burden of AI, we came to see one number in particular as our white whale: how much energy the leading AI models, like ChatGPT or Gemini, use up when generating a single response. 

This fundamental number remained elusive even as the scramble to power AI escalated to the White House and the Pentagon, and as projections showed that in three years AI could use as much electricity as 22% of all US households. 

The problem with finding that number, as we explain in our piece published in May, was that AI companies are the only ones who have it. We pestered Google, OpenAI, and Microsoft, but each company refused to provide its figure. Researchers we spoke to who study AI’s impact on energy grids compared it to trying to measure the fuel efficiency of a car without ever being able to drive it, making guesses based on rumors of its engine size and what it sounds like going down the highway.


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


But then this summer, after we published, a strange thing started to happen. In June, OpenAI’s Sam Altman wrote that an average ChatGPT query uses 0.34 watt-hours of energy. In July, the French AI startup Mistral didn’t publish a number directly but released an estimate of the emissions generated. In August, Google revealed that answering a question to Gemini uses about 0.24 watt-hours of energy. The figures from Google and OpenAI were similar to what Casey and I estimated for medium-size AI models. 

So with this newfound transparency, is our job complete? Did we finally harpoon our white whale, and if so, what happens next for people studying the climate impact of AI? I reached out to some of our old sources, and some new ones, to find out.

The numbers are vague and chat-only

The first thing they told me is that there’s a lot missing from the figures tech companies published this summer. 

OpenAI’s number, for example, did not appear in a detailed technical paper but rather in a blog post by Altman that leaves lots of unanswered questions, such as which model he was referring to, how the energy use was measured, and how much it varies. Google’s figure, as Crownhart points out, refers to the median amount of energy per query, which doesn’t give us a sense of the more energy-demanding Gemini responses, like when it uses a reasoning model to “think” through a hard problem or generates a really long response. 

The numbers also refer only to interactions with chatbots, not the other ways that people are becoming increasingly reliant on generative AI. 

“As video and image becomes more prominent and used by more and more people, we need the numbers from different modalities and how they measure up,” says Sasha Luccioni, AI and climate lead at the AI platform Hugging Face. 

This is also important because the figures for asking a question to a chatbot are, as expected, undoubtedly small—the same amount of electricity used by a microwave in just seconds. That’s part of the reason AI and climate researchers don’t suggest that any one individual’s AI use creates a significant climate burden. 
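
That comparison is easy to check, taking Altman’s figure at face value and assuming a typical 1,000-watt microwave (both assumptions):

    query_wh = 0.34           # OpenAI's stated average energy per ChatGPT query
    microwave_watts = 1000    # a typical countertop microwave (an assumption)
    seconds = query_wh / microwave_watts * 3600
    print(f"{seconds:.1f} seconds")  # about 1.2 seconds of microwave time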

A full accounting of AI’s energy demands—one that goes beyond what’s used to answer an individual query to help us understand its full net impact on the climate—would require application-specific information on how all this AI is being used. Ketan Joshi, an analyst for climate and energy groups, acknowledges that researchers don’t usually get such specific information from other industries but says it might be justified in this case.

“The rate of data center growth is inarguably unusual,” Joshi says. “Companies should be subject to significantly more scrutiny.”

We have questions about energy efficiency

Companies making billion-dollar investments into AI have struggled to square this growth in energy demand with their sustainability goals. In May, Microsoft said that its emissions have soared by over 23% since 2020, owing largely to AI, while the company has promised to be carbon negative by 2030. “It has become clear that our journey towards being carbon negative is a marathon, not a sprint,” Microsoft wrote.

Tech companies often justify this emissions burden by arguing that soon enough, AI itself will unlock efficiencies that will make it a net positive for the climate. Perhaps the right AI system, the thinking goes, could design more efficient heating and cooling systems for a building, or help discover the minerals required for electric-vehicle batteries. 

But there are no signs that AI has been usefully used to do these things yet. Companies have shared anecdotes about using AI to find methane emission hot spots, for example, but they haven’t been transparent enough to help us know if these successes outweigh the surges in electricity demand and emissions that Big Tech has produced in the AI boom. In the meantime, more data centers are planned, and AI’s energy demand continues to rise and rise. 

The ‘bubble’ question

One of the big unknowns in the AI energy equation is whether society will ever adopt AI at the levels that figure into tech companies’ plans. OpenAI has said that ChatGPT receives 2.5 billion prompts per day. It’s possible that this number, and the equivalent numbers for other AI companies, will continue to soar in the coming years. Projections released last year by the Lawrence Berkeley National Laboratory suggest that if they do, AI alone could consume as much electricity annually as 22% of all US households by 2028.
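
A back-of-the-envelope multiplication, with the caveat that it mixes self-reported figures and counts only chat queries, gives a sense of the scale:

    prompts_per_day = 2.5e9   # OpenAI's stated ChatGPT volume
    wh_per_prompt = 0.34      # Altman's per-query figure
    daily_mwh = prompts_per_day * wh_per_prompt / 1e6
    print(f"{daily_mwh:,.0f} MWh per day, ~{daily_mwh * 365 / 1000:,.0f} GWh per year")

That works out to roughly 850 MWh a day, a tiny slice of the projections above, which is one more sign that chat queries alone don’t capture AI’s full energy burden.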

But this summer also saw signs of a slowdown that undercut the industry’s optimism. OpenAI’s launch of GPT-5 was largely considered a flop, even by the company itself, and that flop led critics to wonder if AI may be hitting a wall. When a group at MIT found that 95% of businesses are seeing no return on their massive AI investments, stocks floundered. The expansion of AI-specific data centers might be an investment that’s hard to recoup, especially as revenues for AI companies remain elusive. 

One of the biggest unknowns about AI’s future energy burden isn’t how much a single query consumes, or any other figure that can be disclosed. It’s whether demand will ever reach the scale companies are building for or whether the technology will collapse under its own hype. The answer will determine whether today’s buildout becomes a lasting shift in our energy system or a short-lived spike.

2025 Innovator of the Year: Sneha Goenka for developing an ultra-fast sequencing technology

Sneha Goenka is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

Up to a quarter of children entering intensive care have undiagnosed genetic conditions. To be treated properly, they must first get diagnoses—which means having their genomes sequenced. This process typically takes up to seven weeks. Sadly, that’s often too slow to save a critically ill child.

Hospitals may soon have a faster option, thanks to a groundbreaking system built in part by Sneha Goenka, an assistant professor of electrical and computer engineering at Princeton—and MIT Technology Review’s 2025 Innovator of the Year. 

Five years ago, Goenka and her colleagues designed a rapid-sequencing pipeline that can provide a genetic diagnosis in less than eight hours. Goenka’s software computations and hardware architectures were critical to speeding up each stage of the process. 

“Her work made everyone realize that genome sequencing is not only for research and medical application in the future but can have immediate impact on patient care,” says Jeroen de Ridder, a professor at UMC Utrecht in the Netherlands, who has developed an ultrafast sequencing tool for cancer diagnosis. 

Now, as cofounder and scientific lead of a new company, she is working to make that technology widely available to patients around the world.

Goenka grew up in Mumbai, India. Her mother was an advocate for women’s education, but as a child, Goenka had to fight to persuade other family members to let her continue her studies. She moved away from home at 15 to attend her final two years of school and enroll in a premier test-preparation academy in Kota, Rajasthan. Thanks to that education, she passed what she describes as “one of the most competitive exams in the world,” to get into the Indian Institute of Technology Bombay.

Once admitted to a combined bachelor’s and master’s program in electrical engineering, she found that “it was a real boys’ club.” But Goenka excelled in developing computer architecture systems that accelerate computation. As an undergraduate, she began applying those skills to medicine, driven by her desire to “have real-world impact”—in part because she had seen her family struggle with painful uncertainty after her brother was born prematurely when she was eight years old. 

While working on a PhD in electrical engineering at Stanford, she turned her focus to evolutionary and clinical genomics. One day a senior colleague, Euan Ashley, presented her with a problem. He said, “We want to see how fast we can make a genetic diagnosis. If you had unlimited funds and resources, just how fast do you think you could make the compute?”

Streaming DNA

A genetic diagnosis starts with a blood sample, which is prepped to extract the DNA—a process that takes about three hours. Next that DNA needs to be “read.” One of the world’s leading long-read sequencing technologies, developed by Oxford Nanopore Technologies, can generate highly detailed raw data of an individual’s genetic code in about an hour and a half. Unfortunately, processing all this data to identify mutations can take another 21 hours. Shipping samples to a central lab and figuring out which mutations are of interest often leads the process to stretch out to weeks. 

Goenka saw a better way: Build a real-time system that could “stream” the sequencing data, analyzing it as it was being generated, like streaming a film on Netflix rather than downloading it to watch later.

Sneha Goenka

To do this, she designed a cloud computing architecture to pull in more processing power. Goenka’s first challenge was to increase the speed at which her team could upload the raw data for processing, by streamlining the requests between the sequencer and the cloud to avoid unnecessary “chatter.” She worked out the exact number of communication channels needed—and created algorithms that allowed those channels to be reused in the most efficient way.

The next challenge was “base calling”—converting the raw signal from the sequencing machine into the nucleotide bases A, C, T, and G, the language that makes up our DNA. Rather than using a central node to orchestrate this process, which is an inefficient, error-prone approach, Goenka wrote software to automatically assign dozens of data streams directly from the sequencer to dedicated nodes in the cloud.

Then, to identify mutations, the sequences were aligned for comparison with a reference genome. She coded a custom program that triggers alignment as soon as base calling finishes for one batch of sequences while simultaneously initiating base calling for the next batch, thus ensuring that the system’s computational resources are used efficiently.
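
The trick here is classic pipelining: while one batch of reads is being aligned, the next is already being base-called. A schematic sketch, with invented stage functions standing in for the real pipeline:

    from concurrent.futures import ThreadPoolExecutor

    def base_call(batch):
        # Stand-in: convert raw sequencer signal into A/C/T/G reads.
        return f"reads({batch})"

    def align(reads):
        # Stand-in: compare reads against the reference genome.
        return f"aligned({reads})"

    batches = [f"batch{i}" for i in range(8)]
    aligned = []

    with ThreadPoolExecutor(max_workers=1) as pool:
        calling = None
        for batch in batches:
            next_calling = pool.submit(base_call, batch)  # start base-calling the next batch...
            if calling is not None:
                aligned.append(align(calling.result()))   # ...while aligning the previous one
            calling = next_calling
        aligned.append(align(calling.result()))

Overlapped this way, the total time approaches that of the slowest stage rather than the sum of all the stages.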

Add all these improvements together, and Goenka’s approach reduced the total time required to analyze a genome for mutations from around 20 hours to 1.5 hours. Finally, the team worked with genetic counselors and physicians to create a filter that identified which mutations were most critical to a person’s health, and that set was then given a final manual curation by a genetic specialist. These final stages take up to three hours. The technology was close to being fully operational when, suddenly, the first patient arrived.

A critical test

When 13-year-old Matthew was flown to Stanford’s children’s hospital in 2021, he was struggling to breathe and his heart was failing. Doctors needed to know whether the inflammation in his heart was due to a virus or to a genetic mutation that would necessitate a transplant.  

His blood was drawn on a Thursday. The transplant committee made its decisions on Fridays. “It meant we had a small window of time,” says Goenka.

Goenka was in Mumbai when the sequencing began. She stayed up all night, monitoring the computations. That was when the project stopped being about getting faster for the sake of it, she says: “It became about ‘How fast can we get this result to save this person’s life?’”

The results revealed a genetic mutation that explained Matthew’s condition, and he was placed on the transplant list the next day. Three weeks later, he received a new heart. “He’s doing great now,” Goenka says.

So far, Goenka’s technology has been tested on 26 patients, including Matthew. Her pipeline is “directly affecting the medical care of newborns in the Stanford intensive care units,” Ashley says.

Now she’s aiming for even broader impact—Goenka and her colleagues are laying the groundwork for a startup that they hope will bring the technology to market and make sure it reaches as many patients as possible. Meanwhile, she has been refining the computational pipeline, reducing the time to diagnosis to about six hours.

The demand is clear, she says: “In an in-depth study involving more than a dozen laboratory directors and neonatologists, every respondent stressed urgency. One director put it succinctly: ‘I need this platform today—preferably yesterday.’”

Goenka is also developing software to make the technology more inclusive. The reference genome is skewed toward people of European descent. The Human Pangenome Project is an international collaboration to create reference genomes from more diverse populations, which Goenka aims to use to personalize her team’s filters, allowing them to flag mutations that may be more prevalent in the population to which a patient belongs.

Since seeing her work, Goenka’s extended family has become more appreciative of her education and career. “The entire family is very proud about the impact I’ve made,” she says. 

Helen Thomson is a freelance science journalist based in London.

Meet the Ethiopian entrepreneur who is reinventing ammonia production

Iwnetim Abate is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

“I’m the only one who wears glasses and has eye problems in the family,” Iwnetim Abate says with a smile as sun streams in through the windows of his MIT office. “I think it’s because of the candles.”

In the small town in Ethiopia where he grew up, Abate’s family had electricity, but it was unreliable. So, for several days each week when they were without power, Abate would finish his homework by candlelight.

Today, Abate, 32, is an assistant professor at MIT in the department of materials science and engineering. Part of his research focuses on sodium-ion batteries, which could be cheaper than the lithium-based ones that typically power electric vehicles and grid installations. He’s also pursuing a new research path, examining how to harness the heat and pressure under the Earth’s surface to make ammonia, a chemical used in fertilizer and as a green fuel.

Growing up without the ubiquitous access to electricity that many people take for granted shaped the way Abate thinks about energy issues, he says. He recalls rushing to dry out his school uniform over a fire before he left in the morning. One of his chores was preparing cow dung to burn as fuel—the key is strategically placing holes to ensure proper drying, he says.

Abate’s desire to devote his attention to energy crystallized in a high school chemistry class on fuel cells. “It was like magic,” he says, to learn it’s possible to basically convert water into energy. “Sometimes science is magic, right?”

Abate scored the highest of any student in Ethiopia on the national exam the year he took it, and he knew he wanted to go to the US to further his education. But actually getting there proved to be a challenge. 

Abate applied to US colleges for three years before he was granted admission to Concordia College Moorhead, a small liberal arts college, with a partial scholarship. To raise the remaining money, he reached out to various companies and wealthy people across Ethiopia. He received countless rejections but didn’t let that faze him. He laughs recalling how guards would chase him off when he dropped by prospects’ homes in person. Eventually, a family friend agreed to help.

When Abate finally made it to the Minnesota college, he walked into a room in his dorm building and the lights turned on automatically. “I both felt happy to have all this privilege and I felt guilty at the same time,” he says.

Lab notes

His college wasn’t a research institute, so Abate quickly set out to get into a laboratory. He reached out to Sossina Haile, then at the California Institute of Technology, to ask about a summer research position.

Haile, now at Northwestern University, recalls thinking that Abate was particularly eager. As a visible Ethiopian scientist, she gets a lot of email requests, but his stood out. “No obstacle was going to stand in his way,” she says. It was risky to take on a young student with no research experience who’d only been in the US for a year, but she offered him a spot in her lab.

Abate spent the summer working on materials for use in solid oxide fuel cells. He returned for the following summer, then held a string of positions in energy-materials research, including at IBM and Los Alamos National Lab, before completing his graduate degree at Stanford and postdoctoral work at the University of California, Berkeley.

He joined the MIT faculty in 2023 and set out to build a research group of his own. Today, there are two major focuses of his lab. One is sodium-ion batteries, which are a popular alternative to the lithium-based cells used in EVs and grid storage installations. Sodium-ion batteries don’t require the kinds of critical minerals lithium-ion batteries do, which can be both expensive and tied up by geopolitics.  

One major stumbling block for sodium-ion batteries is their energy density. It’s possible to improve energy density by operating at higher voltages, but some of the materials used tend to degrade quickly at high voltages. That limits the total energy density of the battery, so it’s a problem for applications like electric vehicles, where a low energy density would restrict range.
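
The tradeoff is simple arithmetic: a cell’s energy is roughly its charge capacity times its voltage, so a chemistry that can’t run at high voltage gives up energy density directly. With invented but plausible numbers:

    capacity_ah = 3.0                # hypothetical cell capacity, in amp-hours
    energy_low = capacity_ah * 3.0   # at 3.0 V: 9.0 Wh
    energy_high = capacity_ah * 3.7  # at 3.7 V: 11.1 Wh
    print(energy_high / energy_low)  # ~1.23: ~23% more energy from voltage alone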

Abate’s team is developing materials that could extend the lifetime of sodium-ion batteries while avoiding the need for nickel, which is considered a critical mineral in the US. The team is examining additives and testing materials-engineering techniques to help the batteries compete with lithium-ion cells.

Irons in the fire

Another vein of Abate’s work is in some ways a departure from his history in batteries and fuel cells. In January, his team published research describing a process to make ammonia underground, using naturally occurring heat and pressure to drive the necessary chemical reactions.

Today, making ammonia generates between 1% and 2% of global greenhouse gas emissions. It’s primarily used to fertilize crops, but it’s also being considered as a fuel for sectors like long-distance shipping.

Abate cofounded a company called Addis Energy to commercialize the research, alongside MIT serial entrepreneur Yet-Ming Chiang and a pair of oil industry experts. (Addis means “new” in Amharic, the official language of Ethiopia.) For an upcoming pilot, the company aims to build an underground reactor that can produce ammonia. 

When he’s not tied up in research or the new startup, Abate runs programs for African students. In 2017, he cofounded an organization called Scifro, which runs summer school programs in Ethiopia and plans to expand to other countries, including Rwanda. The programs focus on providing mentorship and educating students about energy and medical devices, which is the specialty of his cofounder. 

While Abate holds a position at one of the world’s most prestigious universities and serves as chief science officer of a buzzy startup, he’s quick to give credit to those around him. “It takes a village to build something, and it’s not just me,” he says.

Abate often thinks about his friends, family, and former neighbors in Ethiopia as he works on new energy solutions. “Of course, science is beautiful, and we want to make an impact,” he says. “Being good at what you do is important, but ultimately, it’s about people.”

How Yichao “Peak” Ji became a global AI app hitmaker

Yichao “Peak” Ji is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production—it was directed by cofounder Zhang Tao and filmed in a corner of their Beijing office. But something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet and then to the rest of the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people.

At first glance, Manus works like most chatbots: Users can ask it questions in a chat window. However, besides providing answers, it can also carry out tasks (for example, finding an apartment that meets specified criteria within a certain budget). It does this by breaking tasks down into steps, then using a cloud-based virtual machine equipped with a browser and other tools to execute them—perusing websites, filling in forms, and so on.
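
Butterfly Effect hasn’t published Manus’s internals, but what’s described here is the now-standard agent loop: plan, act with a tool, observe, repeat. A generic sketch in which every name is a stand-in, not Manus’s actual API:

    def run_agent(task, model, sandbox, max_steps=50):
        # model: an LLM that proposes the next action; sandbox: a cloud VM
        # exposing tools such as a browser. Both are hypothetical interfaces.
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            action = model.next_action(history)  # e.g. {"tool": "browser", "args": {...}}
            if action["tool"] == "done":
                return action["args"]["answer"]  # task finished; hand back the result
            observation = sandbox.run(action["tool"], action["args"])
            history.append(f"Did {action}, saw {observation}")
        return "Stopped after max_steps without finishing."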

Ji is the technical core of the team. Now based in Singapore, he leads product and infrastructure development as the company pushes forward with its global expansion. 

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility among both engineers and investors—and put him at the forefront of a rising class of Chinese technologists with AI products and global ambitions. 

Serial builder

The son of a professor and an IT professional, Ji moved to Boulder, Colorado, at age four for his father’s visiting scholar post, returning to Beijing in second grade.

His fluent English set him apart early on, but it was an elementary school robotics team that sparked his interest in programming. By high school, he was running the computer club, teaching himself how to build operating systems, and drawing inspiration from Bill Gates, Linux, and open-source culture. He describes himself as a lifelong Apple devotee, and it was Apple’s launch of the App Store in 2008 that ignited his passion for development.

In 2010, as a high school sophomore, Ji created the Mammoth browser, a customizable third-party iPhone browser. It quickly became the most-downloaded third-party browser developed by an individual in China and earned him the Macworld Asia Grand Prize in 2011. International tech site AppAdvice called it a product that “redefined the way you browse the internet.” At age 20, he was on the cover of Forbes magazine and made its “30 Under 30” list. 

During his teenage years, Ji developed several other iOS apps, including a budgeting tool designed for Hasbro’s Monopoly game, which sold well—until it attracted a legal notice for using the trademarked name. But Ji wasn’t put off a career in tech by that early brush with a multinational legal team. If anything, he says, it sharpened his instincts for both product and risk. 

In 2012, Ji launched his own company, Peak Labs, and later led the development of Magi, a search engine. The tool extracted information from across the web to answer queries—conceptually similar to today’s AI-powered search, but powered by a custom language model. 

Magi was briefly popular, drawing millions of users in its first month, but consumer adoption didn’t stick. It did, however, attract enterprise interest, and Ji adapted it for B2B use before selling it in 2022. 

AI acumen 

Manus would become his next act—and a more ambitious one. His cofounders, Zhang Tao and Xiao Hong, complement Ji’s technical core with product know-how, storytelling, and organizational savvy. Both Xiao and Ji are serial entrepreneurs who have been backed by venture capital firm ZhenFund multiple times. Together, they represent the kind of long-term collaboration and international ambition that increasingly defines China’s next wave of entrepreneurs.

People who have worked with Ji describe him as a clear thinker, a fast talker, and a tireless, deeply committed builder who thinks in systems, products, and user flows. He represents a new generation of Chinese technologists: equally at home coding or in pitch meetings, fluent in both building and branding. He’s also a product of open-source culture, and remains an active contributor whose projects regularly garner attention—and GitHub stars—across developer communities.

With new funding led by US venture capital firm Benchmark, Ji and his team are taking Manus to the wider world, relocating operations outside of China, to Singapore, and actively targeting consumers around the world. The product is built on US-based infrastructure, drawing on technologies like Claude Sonnet, Microsoft Azure, and open-source tools such as Browser Use. It’s a distinctly global setup: an AI agent developed by a Chinese team, powered by Western platforms, and designed for international users. That isn’t incidental; it reflects the more fluid nature of AI entrepreneurship today, where talent, infrastructure, and ambition move across borders just as quickly as the technology itself.

For Ji, the goal isn’t just building a global company—it’s building a legacy. “I hope Manus is the last product I’ll ever build,” Ji says. “Because if I ever have another wild idea, [I’ll just] leave it to Manus!”

How Trump’s policies are affecting early-career scientists—in their own words

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

Every year MIT Technology Review celebrates accomplished young scientists, entrepreneurs, and inventors from around the world in our Innovators Under 35 list. We’ve just published the 2025 edition. This year, though, the context is pointedly different: The US scientific community finds itself in an unprecedented position, with the very foundation of its work under attack.

Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities individually and academia more broadly, and made substantial funding cuts to the country’s science and technology infrastructure. It has also upended longstanding rights and norms related to free speech, civil rights, and immigration—all of which further affects the overall environment for research and innovation in science and technology. 

We wanted to understand how these changes are affecting the careers and work of our most recent classes of innovators. The US government is the largest source of research funding at US colleges and universities, and many of our honorees are new professors and current or recent graduate or PhD students, while others work with government-funded entities in other ways. Meanwhile, about 16% of those in US graduate programs are international students. 

We sent surveys to the six most recent cohorts, which include 210 people. We asked people about both positive and negative impacts of the administration’s new policies and invited them to tell us more in an optional interview. Thirty-seven completed our survey, and we spoke with 14 of them in follow-up calls. Most respondents are academic researchers (about two-thirds) and are based in the US (81%); 11 work in the private sector (six of whom are entrepreneurs). Their responses provide a glimpse into the complexities of building their labs, companies, and careers in today’s political climate. 

Twenty-six people told us that their work has been affected by the Trump administration’s changes; only one of them described those effects as “mostly positive.” The other 25 reported primarily negative effects. While a few agreed to be named in this story, most asked to be identified only by their job titles and general areas of work, or wished to remain anonymous, for fear of retaliation. “I would not want to flag the ire of the US government,” one interviewee told us. 

Across interviews and surveys, certain themes appeared repeatedly: the loss of jobs, funding, or opportunities; restrictions on speech and research topics; and limits on who can carry out that research. These shifts have left many respondents deeply concerned about the “long-term implications in IP generation, new scientists, and spinout companies in the US,” as one respondent put it. 

One of the things we heard most consistently is that the uncertainty of the current moment is pushing people to take a more risk-averse approach to their scientific work—either by selecting projects that require fewer resources or that seem more in line with the administration’s priorities, or by erring on the side of hiring fewer people. “We’re not thinking so much about building and enabling … we’re thinking about surviving,” said one respondent. 

Ultimately, many are worried that all the lost opportunities will result in less innovation overall—and caution that it will take time to grasp the full impact. 

“We’re not going to feel it right now, but in like two to three years from now, you will feel it,” said one entrepreneur with a PhD who started his company directly from his area of study. “There are just going to be fewer people that should have been inventing things.”

The money: “Folks are definitely feeling the pressure”

The most immediate impact has been financial. Already, the Trump administration has pulled back support for many areas of science—ending more than a thousand awards by the National Institutes of Health and over 100 grants for climate-related projects by the National Science Foundation. The rate of new awards granted by both agencies has slowed, and the NSF has cut the number of graduate fellowships it’s funding by half for this school year. 

The administration has also cut or threatened to cut funding from a growing number of universities, including Harvard, Columbia, Brown, and UCLA, for supposedly not doing enough to combat antisemitism.

As a result, our honorees said that finding funding to support their work has gotten much harder—and it was already a big challenge before. 

A biochemist at a public university told us she’d lost a major NIH grant. Since it was terminated earlier this year, she’s been spending less time in the lab and more on fundraising. 

Others described uncertainty about the status of grants from a wide range of agencies, including NSF, the Advanced Research Projects Agency for Health, the Department of Energy, and the Centers for Disease Control and Prevention, which collectively could pay out more than $44 million to the researchers we’ve recognized. Several had waited months for news on an application’s status or updates on when funds they had already won would be disbursed. One AI researcher who studies climate-related issues is concerned that her multiyear grant may not be renewed, even though renewal would have been “fairly standard” in the past.

Two individuals lamented the cancellation of 24 awards in May by the DOE’s Office of Clean Energy Demonstrations, including grants for carbon capture projects and a clean cement plant. One said the decision had “severely disrupted the funding environment for climate-tech startups” by creating “widespread uncertainty,” “undermining investor confidence,” and “complicating strategic planning.” 

Climate research and technologies have been a favorite target of the Trump administration: The recently passed tax and spending bill put stricter timelines in place that make it harder for wind and solar installations to qualify for tax credits via the Inflation Reduction Act. Already, at least 35 major commercial climate-tech projects have been canceled or downsized this year. 

In response to a detailed list of questions, a DOE spokesperson said, “Secretary [Chris] Wright and President Trump have made it clear that unleashing American scientific innovation is a top priority.” They pointed to “robust investments in science” in the president’s proposed budget and the spending bill and cited special areas of focus “to maintain America’s global competitiveness,” including nuclear fusion, high-performance computing, quantum computing, and AI. 

Other respondents cited tighter budgets brought on by a change in how the government calculates indirect costs, which are funds included in research grants to cover equipment, institutional overhead, and in some cases graduate students’ salaries. In February, the NIH instituted a 15% cap on indirect costs; the rate had run closer to 28% of the research funds the NIH awarded in 2023. In practical terms, a grant with $100,000 in direct costs would now carry at most $15,000 in overhead, down from roughly $28,000 at the 2023 average rate. The Department of Defense, the DOE, and the NSF all soon proposed similar caps. These moves have sparked lawsuits, and indirect costs remain in limbo. (MIT, which owns MIT Technology Review, is involved in several of these lawsuits; MIT Technology Review is editorially independent from the university.) 

Looking ahead, an academic at a public university in Texas, where the money granted for indirect costs funds student salaries, said he plans to hire fewer students for his own lab. “It’s very sad that I cannot promise [positions] at this point because of this,” he told us, adding that the cap could also affect the competitiveness of public universities in Texas, since schools elsewhere may fund their student researchers differently. 

At the same time, two people with funding through the Defense Department—which could see a surge of investment under the president’s proposed budget—said their projects were moving forward as planned. A biomedical engineer at a public university in the Midwest expressed excitement about what he perceives as a fresh surge of federal interest in industrial and defense applications of synthetic biology. Still, he acknowledged colleagues working on different projects don’t feel as optimistic: “Folks are definitely feeling the pressure.”

Many who are affected by cuts or delays are now looking for new funding sources in a bid to become less reliant on the federal government. Eleven people said they are pursuing or plan to pursue philanthropic and foundation funding or to seek out industry support. However, the amount of private funding available can’t begin to make up the difference in federal funds lost, and investors often focus more on low-risk, short-term applications than on open scientific questions. 

The NIH responded to a detailed list of questions with a statement pointing to unspecified investments in early-career researchers. “Recent updates to our priorities and processes are designed to broaden scientific opportunity rather than restrict it, ensuring that taxpayer-funded research is rigorous, reproducible, and relevant to all Americans,” it reads. The NSF declined a request for comment from MIT Technology Review.

Further complicating this financial picture are tariffs—some of which are already in effect, and many more of which have been threatened. Nine people who responded to our survey said their work is already being affected by these taxes imposed on goods imported into the US. For some scientists, this has meant higher operating costs for their labs: An AI researcher said tariffs are making computational equipment more expensive, while the Texas academic said the cost of buying microscopes from a German firm had gone up by thousands of dollars since he first budgeted for them. (Neither the White House press office nor the White House Office of Science and Technology Policy responded to requests for comment.) 

One cleantech entrepreneur saw a positive impact on his business as more US companies reevaluated their supply chains and sought to incorporate more domestic suppliers. The entrepreneur’s firm, which is based in the US, has seen more interest in its services from potential customers seeking “tariff-proof vendors.” 

“Everybody is proactive on tariffs and we’re one of these solutions—we’re made in America,” he said. 

Another person, who works for a European firm, is factoring potential tariffs into decisions about where to open new production facilities. Though the Trump administration has said the taxes are meant to reinvigorate US manufacturing, she’s now less inclined to build out a significant presence in the US because, she said, tariffs may drive up the costs of importing raw materials that are required to make the company’s product. 

What’s more, financial backers have encouraged her company to stay rooted abroad because of the potential impact of tariffs for US-based facilities: “People who invest worldwide—they are saying it’s reassuring for them right now to consider investing in Europe,” she said.

The climate of fear: “It will impact the entire university if there is retaliation” 

Innovators working in both academia and the private sector described new concerns about speech and the politicization of science. Many have changed how they describe their work in order to better align with the administration’s priorities—fearing funding cuts, job terminations, immigration action, and other potential retaliation. 

This is particularly true for those who work at universities. The Trump administration has reached deals with some institutions, including Columbia and Brown, that would restore part of the funding it slashed—but only after the universities agreed to pay hefty fines and abide by terms that, critics say, hand over an unprecedented level of oversight to administration officials. 

Some respondents had received guidance on what they could or couldn’t say from program managers at their funding agencies or their universities or investors; others had not received any official guidance but made personal decisions on what to say and share publicly based on recent news of grant cancellations.

Both on and off campus, there is substantial pressure on diversity, equity, and inclusion (DEI) initiatives, which have been hit particularly hard as the administration seeks to eliminate what it called “illegal and immoral discrimination programs” in one of the first executive orders of President Trump’s second term.  

One respondent, whose work focuses on fighting child sexual abuse materials, recalled rewriting a grant abstract “3x to remove words banned” by Senator Ted Cruz of Texas, an administration ally; back in February, Cruz identified 3,400 NSF grants as “woke DEI” research advancing “neo-Marxist class warfare propaganda.” (His list includes grants to research self-driving cars and solar eclipses. His office did not respond to a request for comment.) 

Many other researchers we spoke with are also taking steps to avoid being put in the DEI bucket. A technologist at a Big Tech firm whose work used to include efforts to provide more opportunities for marginalized communities to get into computing has stopped talking about those recruiting efforts. One biologist described hearing that grant applications for the NIH now have to avoid words like “cell type diversity” for “DEI reasons”—no matter that “cell type diversity” is, she said, a common and “neutral” scientific term in microbiology. (In its statement, the NIH said: “To be clear, no scientific terms are banned, and commonly used terms like ‘cell type diversity’ are fully acceptable in applications and research proposals.”) 

Plenty of other research has also gotten caught up in the storm.

One person who works in climate technology said that she now talks about “critical minerals,” “sovereignty,” and “energy independence” or “dominance” rather than “climate” or “industrial decarbonization.” (Trump’s Energy Department has boosted investment in critical minerals, pledging nearly $1 billion to support related projects.) Another individual working in AI said she has been instructed to talk less about “regulation,” “safety,” or “ethics” as they relate to her work. One survey respondent described the language shift as “definitely more red-themed.”

Some said that shifts in language won’t change the substance of their work, but others feared they will indeed affect the research itself. 

Emma Pierson, an assistant professor of computer science at the University of California, Berkeley, worried that AI companies may kowtow to the administration, which could in turn “influence model development.” While she noted that this fear is speculative, the Trump administration’s AI Action Plan contains language that directs the federal government to purchase large language models that generate “truthful responses” (by the administration’s definition), with a goal of “preventing woke AI in the federal government.” 

And one biomedical researcher fears that the administration’s effective ban on DEI will force an end to outreach “favoring any one community” and hurt efforts to improve the representation of women and people of color in clinical trials. The NIH and the Food and Drug Administration had been working for years to address the historic underrepresentation of these groups through approaches including specific funding opportunities to address health disparities; many of these efforts have recently been cut.

Respondents from both academia and the private sector told us they’re aware of the high stakes of speaking out. 

“As an academic, we have to be very careful about how we voice our personal opinion because it will impact the entire university if there is retaliation,” one engineering professor told us. 

“I don’t want to be a target,” said one cleantech entrepreneur, who worries not only about reprisals from the current administration but also about potential blowback from Democrats if he cooperates with it. 

“I’m not a Trumper!” he said. “I’m just trying not to get fined by the EPA.” 

The people: “The adversarial attitude against immigrants … is posing a brain drain”

Immigrants are crucial to American science, but what one respondent called a broad “persecution of immigrants,” and an increasing climate of racism and xenophobia, are matters of growing concern. 

Some people we spoke with feel vulnerable, particularly those who are immigrants themselves. The Trump administration has revoked 6,000 international student visas (causing federal judges to intervene in some cases) and threatened to “aggressively” revoke the visas of Chinese students in particular. In recent months, the Justice Department has prioritized efforts to denaturalize certain citizens, while similar efforts to revoke green cards granted decades ago were shut down by court order. One entrepreneur who holds a green card told us, “I find myself definitely being more cognizant of what I’m saying in public and certainly try to stay away from anything political as a result of what’s going on, not just in science but in the rest of the administration’s policies.” 

On top of all this, federal immigration raids and other enforcement actions—authorities have turned away foreign academics upon arrival to the US and detained others with valid academic visas, sometimes because of their support for Palestine—have created a broad climate of fear.  

Four respondents said they were worried about their own immigration status, while 16 expressed concerns about their ability to attract or retain talent, including international students. More than a million international students studied in the US last year, with nearly half of those enrolling in graduate programs, according to the Institute of International Education.

“The adversarial attitude against immigrants, especially those from politically sensitive countries, is posing a brain drain,” an AI researcher at a large public university on the West Coast told us. 

This federal attack on immigration is compounded by state-level restrictions. Texas and Florida both restrict international collaborations with and recruitment of scientists from countries including China, even though researchers told us that international collaborations could help mitigate the impacts of decreased domestic funding. “I cannot collaborate at this point because there’s too many restrictions and Texas also can limit us from visiting some countries,” the Texas academic said. “We cannot share results. We cannot visit other institutions … and we cannot give talks.”

All this is leading to more interest in positions outside the United States. One entrepreneur, whose business is multinational, said that their company has received a much higher share of applications from US-based candidates to openings in Europe than it did a year ago, despite the lower salaries offered there. 

“It is becoming easier to hire good people in the UK,” confirmed Karen Sarkisyan, a synthetic biologist based in London. 

At least one US-based respondent, an academic in climate technology, accepted a tenured position in the United Kingdom. Another said that she was looking for positions in other countries, despite her current job security and “very good” salary. “I can tell more layoffs are coming, and the work I do is massively devalued. I can’t stand to be in a country that treats their scientists and researchers and educated people like this,” she told us. 

Some professors reported in our survey and interviews that their current students are less interested in pursuing academic careers because graduate and PhD students are losing offers and opportunities as a result of grant cancellations. So even as the number of international students dwindles, there may also be “shortages in domestic grad students,” one mechanical engineer at a public university said, and “research will fall behind.”  

In the end, this will affect not just academic research but also private-sector innovation. One biomedical entrepreneur told us that academic collaborators frequently help his company generate lots of ideas: “We hope that some of them will pan out and become very compelling areas for us to invest in.” Particularly for small startups without large research budgets, having fewer academics to work with will mean that “we just invest less, we just have fewer options to innovate,” he said. “The level of risk that industry is willing to take is generally lower than academia, and you can’t really bridge that gap.” 

Despite it all, a number of researchers and entrepreneurs who generally expressed frustration about the current political climate said they still consider the US the best place to do science. 

Pierson, the AI researcher at Berkeley, described staying committed to her research into social inequities despite the political backlash: “I’m an optimist. I do believe this will pass, and these problems are not going to pass unless we work on them.” 

And a biotech entrepreneur pointed out that US-based scientists can still command more resources than those in most other countries. “I think the US still has so much going for it. Like, there isn’t a comparable place to be if you’re trying to be on the forefront of innovation—trying to build a company or find opportunities,” he said.

Several academics and founders who came to the US to pursue scientific careers spoke about still being drawn to America’s spirit of invention and the chance to advance on their own merits. “For me, I’ve always been like, the American dream is something real,” said one. They said they’re holding fast to those ideals—for now.