Four reasons to be optimistic about AI’s energy usage

The day after his inauguration in January, President Donald Trump announced Stargate, a $500 billion initiative to build out AI infrastructure, backed by some of the biggest companies in tech. Stargate aims to accelerate the construction of massive data centers and electricity networks across the US to ensure it keeps its edge over China.


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


The whatever-it-takes approach to the race for worldwide AI dominance was the talk of Davos, says Raquel Urtasun, founder and CEO of the Canadian robotruck startup Waabi, referring to the World Economic Forum’s annual January meeting in Switzerland, which was held the same week as Trump’s announcement. “I’m pretty worried about where the industry is going,” Urtasun says. 

She’s not alone. “Dollars are being invested, GPUs are being burned, water is being evaporated—it’s just absolutely the wrong direction,” says Ali Farhadi, CEO of the Seattle-based nonprofit Allen Institute for AI.

But sift through the talk of rocketing costs—and climate impact—and you’ll find reasons to be hopeful. There are innovations underway that could improve the efficiency of the software behind AI models, the computer chips those models run on, and the data centers where those chips hum around the clock.

Here’s what you need to know about how energy use, and therefore carbon emissions, could be cut across all three of those domains, plus an added argument for cautious optimism: There are reasons to believe that the underlying business realities will ultimately bend toward more energy-efficient AI.

1/ More efficient models

The most obvious place to start is with the models themselves—the way they’re created and the way they’re run.

AI models are built by training neural networks on lots and lots of data. Large language models are trained on vast amounts of text, self-driving models are trained on vast amounts of driving data, and so on.

But the way such data is collected is often indiscriminate. Large language models are trained on data sets that include text scraped from most of the internet and huge libraries of scanned books. The practice has been to grab everything that’s not nailed down, throw it into the mix, and see what comes out. This approach has certainly worked, but training a model on a massive data set over and over so it can extract relevant patterns by itself is a waste of time and energy.

There might be a more efficient way. Children aren’t expected to learn just by reading everything that’s ever been written; they are given a focused curriculum. Urtasun thinks we should do something similar with AI, training models with more curated data tailored to specific tasks. (Waabi trains its robotrucks inside a superrealistic simulation that allows fine-grained control of the virtual data its models are presented with.)

It’s not just Waabi. Writer, an AI startup that builds large language models for enterprise customers, claims that its models are cheaper to train and run in part because it trains them using synthetic data. Feeding its models bespoke data sets rather than larger but less curated ones makes the training process quicker (and therefore less expensive). For example, instead of simply downloading Wikipedia, the team at Writer takes individual Wikipedia pages and rewrites their contents in different formats—as a Q&A instead of a block of text, and so on—so that its models can learn more from less.
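Writer hasn't published its pipeline, but the basic move, presenting the same facts in several formats so a model learns more from less text, is easy to sketch. Everything below (the function, the topic, the field names) is illustrative, not Writer's actual code:

```python
# Hypothetical sketch of synthetic-data reformatting: emit one set of
# facts as prose, as Q&A pairs, and as a bullet list, so a model sees
# the same content in multiple presentations.
def reformat_passage(topic: str, facts: dict[str, str]) -> list[str]:
    """Return the same facts rendered in three training formats."""
    prose = f"{topic}: " + " ".join(f"Its {k} is {v}." for k, v in facts.items())
    qa = "\n".join(f"Q: What is the {k} of {topic}?\nA: {v}" for k, v in facts.items())
    bullets = f"{topic}\n" + "\n".join(f"- {k}: {v}" for k, v in facts.items())
    return [prose, qa, bullets]

samples = reformat_passage("Mount Rose", {"elevation": "10,785 feet", "range": "Carson Range"})
```

In practice a language model, not string templates, would do the rewriting, but the payoff is the same: three training examples from one source passage.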

Training is just the start of a model’s life cycle. As models have become bigger, they have become more expensive to run. So-called reasoning models that work through a query step by step before producing a response are especially power-hungry because they compute a series of intermediate subresponses for each response. The price tag of these new capabilities is eye-watering: OpenAI’s o3 reasoning model has been estimated to cost up to $30,000 per task to run.  

But this technology is only a few months old and still experimental. Farhadi expects these costs to come down soon. For example, engineers will figure out how to make reasoning models abandon a dead-end path early, rather than following it to the end before concluding it isn’t viable. “The first time you do something it’s way more expensive, and then you figure out how to make it smaller and more efficient,” says Farhadi. “It’s a fairly consistent trend in technology.”

One way to get performance gains without big jumps in energy consumption is to run inference steps (the computations a model makes to come up with its response) in parallel, he says. Parallel computing underpins much of today’s software, especially large language models (GPUs are parallel by design). Even so, the basic technique could be applied to a wider range of problems. By splitting up a task and running different parts of it at the same time, parallel computing can generate results more quickly. It can also save energy by making more efficient use of available hardware. But it requires clever new algorithms to coordinate the multiple subtasks and pull them together into a single result at the end. 
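The pattern Farhadi describes, split a task, compute the pieces concurrently, then merge the subresults, is the same one that underlies everyday parallel software. Here is a toy sketch of that split/compute/merge shape; nothing in it reflects any particular model's internals, where the parallelism happens at the tensor and GPU level:

```python
# Toy illustration of parallel subtask execution: shard the work,
# fan the shards out to a thread pool, then merge the partial results.
from concurrent.futures import ThreadPoolExecutor

def solve_subtask(chunk: list[int]) -> int:
    # Stand-in for one expensive inference step on a shard of the work.
    return sum(x * x for x in chunk)

def solve_parallel(data: list[int], workers: int = 4) -> int:
    shard = max(1, len(data) // workers)
    chunks = [data[i:i + shard] for i in range(0, len(data), shard)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(solve_subtask, chunks)  # fan out
    return sum(partials)  # merge: combine subresults into one answer

result = solve_parallel(list(range(100)))
```

The hard part Farhadi alludes to is the merge step: for open-ended reasoning tasks, combining partial answers requires cleverer algorithms than a simple sum.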

The largest, most powerful models won’t be used all the time, either. There is a lot of talk about small models, versions of large language models that have been distilled into pocket-size packages. In many cases, these more efficient models perform as well as larger ones, especially for specific use cases.

As businesses figure out how large language models fit their needs (or not), this trend toward more efficient bespoke models is taking off. You don’t need an all-purpose LLM to manage inventory or to respond to niche customer queries. “There’s going to be a really, really large number of specialized models, not one God-given model that solves everything,” says Farhadi.

Christina Shim, chief sustainability officer at IBM, is seeing this trend play out in the way her clients adopt the technology. She works with businesses to make sure they choose the smallest and least power-hungry models possible. “It’s not just the biggest model that will give you a big bang for your buck,” she says. A smaller model that does exactly what you need is a better investment than a larger one that does the same thing: “Let’s not use a sledgehammer to hit a nail.”

2/ More efficient computer chips

As the software becomes more streamlined, the hardware it runs on will become more efficient too. There’s a tension at play here: In the short term, chipmakers like Nvidia are racing to develop increasingly powerful chips to meet demand from companies wanting to run increasingly powerful models. But in the long term, this race isn’t sustainable.

“The models have gotten so big, even running the inference step now starts to become a big challenge,” says Naveen Verma, cofounder and CEO of the upstart microchip maker EnCharge AI.

Companies like Microsoft and OpenAI are losing money running their models inside data centers to meet the demand from millions of people. Smaller models will help. Another option is to move the computing out of the data centers and into people’s own machines.

That’s something that Microsoft tried with its Copilot+ PC initiative, in which it marketed a supercharged PC that would let you run an AI model (and cover the energy bills) yourself. It hasn’t taken off, but Verma thinks the push will continue because companies will want to offload as much of the costs of running a model as they can.

But getting AI models (even small ones) to run reliably on people’s personal devices will require a step change in the chips that typically power those devices. Those chips need to become even more energy efficient, because they have to be able to run on battery power alone, says Verma.

That’s where EnCharge comes in. Its solution is a new kind of chip that ditches digital computation in favor of something called analog in-memory computing. Instead of representing information with binary 0s and 1s, like the electronics inside conventional digital computer chips, the electronics inside analog chips can represent information anywhere along a continuous range of values between 0 and 1. In theory, this lets you do more with the same amount of power. 

EnCharge was spun out from Verma’s research lab at Princeton in 2022. “We’ve known for decades that analog compute can be much more efficient—orders of magnitude more efficient—than digital,” says Verma. But analog computers never worked well in practice because they made lots of errors. Verma and his colleagues have discovered a way to do analog computing that’s precise.

EnCharge is focusing just on the core computation required by AI today. With support from semiconductor giants like TSMC, the startup is developing hardware that performs high-dimensional matrix multiplication (the basic math behind all deep-learning models) in an analog chip and then passes the result back out to the surrounding digital computer.

EnCharge’s hardware is just one of a number of experimental new chip designs on the horizon. IBM and others have been exploring something called neuromorphic computing for years. The idea is to design computers that mimic the brain’s super-efficient processing powers. Another path involves optical chips, which swap out the electrons in a traditional chip for light, again cutting the energy required for computation. None of these designs yet come close to competing with the electronic digital chips made by the likes of Nvidia. But as the demand for efficiency grows, such alternatives will be waiting in the wings. 

It’s not just chips that can be made more efficient, either. A lot of the energy inside computers is spent passing data back and forth. IBM says it has developed a new kind of optical switch (a device that controls digital traffic) that is 80% more efficient than previous switches.

3/ More efficient cooling in data centers

Another huge source of energy demand is the need to manage the waste heat produced by the high-end hardware on which AI models run. Tom Earp, engineering director at the design firm Page, has been building data centers since 2006, including a six-year stint doing so for Meta. Earp looks for efficiencies in everything from the structure of the building to the electrical supply, the cooling systems, and the way data is transferred in and out.

For a decade or more, as Moore’s Law tailed off, data-center designs were pretty stable, says Earp. And then everything changed. With the shift to processors like GPUs, and with even newer chip designs on the horizon, it is hard to predict what kind of hardware a new data center will need to house—and thus what energy demands it will have to support—in a few years’ time. But in the short term the safe bet is that chips will continue getting faster and hotter: “What I see is that the people who have to make these choices are planning for a lot of upside in how much power we’re going to need,” says Earp.

One thing is clear: The chips that run AI models, such as GPUs, require more power per unit of space than previous types of computer chips. And that has big knock-on implications for the cooling infrastructure inside a data center. “When power goes up, heat goes up,” says Earp.

With so many high-powered chips squashed together, air cooling (big fans, in other words) is no longer sufficient. Water has become the go-to coolant because it is better than air at whisking heat away. That’s not great news for local water sources around data centers. But there are ways to make water cooling more efficient.

One option is to use water to send the waste heat from a data center to places where it can be used. In Denmark, water from data centers has been used to heat homes. In Paris, during the Olympics, it was used to heat swimming pools.

Water can also serve as a type of battery. Energy generated from renewable sources, such as wind turbines or solar panels, can be used to chill water that is stored until it is needed to cool computers later, which reduces the power usage at peak times.

But as data centers get hotter, water cooling alone doesn’t cut it, says Tony Atti, CEO of Phononic, a startup that supplies specialist cooling chips. Chipmakers are creating chips that move data around faster and faster. He points to Nvidia, which is about to release a chip that moves 1.6 terabytes of data a second: “At that data rate, all hell breaks loose and the demand for cooling goes up exponentially,” he says.

According to Atti, the chips inside servers suck up around 45% of the power in a data center. But cooling those chips now takes almost as much power, around 40%. “For the first time, thermal management is becoming the gate to the expansion of this AI infrastructure,” he says.

Phononic’s cooling chips are small thermoelectric devices that can be placed on or near the hardware that needs cooling. Power an LED chip and it emits photons; power a thermoelectric chip and it emits phonons (which are to vibrational energy—a.k.a. temperature—as photons are to light). In short, phononic chips push heat from one surface to another.

Squeezed into tight spaces inside and around servers, such chips can detect minute increases in heat and switch on and off to maintain a stable temperature. When they’re on, they push excess heat into a water pipe to be whisked away. Atti says they can also be used to increase the efficiency of existing cooling systems. The faster you can cool water in a data center, the less of it you need.

4/ Cutting costs goes hand in hand with cutting energy use

Despite the explosion in AI’s energy use, there’s reason to be optimistic. Sustainability is often an afterthought or a nice-to-have. But with AI, the best way to reduce overall costs is to cut your energy bill. That’s good news, as it should incentivize companies to increase efficiency. “I think we’ve got an alignment between climate sustainability and cost sustainability,” says Verma. “I think ultimately that will become the big driver that will push the industry to be more energy efficient.”

Shim agrees: “It’s just good business, you know?”

Companies will be forced to think hard about how and when they use AI, choosing smaller, bespoke options whenever they can, she says: “Just look at the world right now. Spending on technology, like everything else, is going to be even more critical going forward.”

Shim thinks the concerns around AI’s energy use are valid. But she points to the rise of the internet and the personal computer boom 25 years ago. As the technology behind those revolutions improved, the energy costs stayed more or less stable even though the number of users skyrocketed, she says.

It’s a general rule Shim thinks will apply this time around as well: When tech matures, it gets more efficient. “I think that’s where we are right now with AI,” she says.

AI is fast becoming a commodity, which means that market competition will drive prices down. To stay in the game, companies will be looking to cut energy use for the sake of their bottom line if nothing else. 

In the end, capitalism may save us after all. 

Can nuclear power really fuel the rise of AI?

In the AI arms race, all the major players say they want to go nuclear.  

Over the past year, the likes of Meta, Amazon, Microsoft, and Google have sent out a flurry of announcements related to nuclear energy. Some are about agreements to purchase power from existing plants, while others are about investments looking to boost unproven advanced technologies.


These somewhat unlikely partnerships could be a win for both the nuclear power industry and large tech companies. Tech giants need guaranteed sources of energy, and many are looking for low-emissions ones to hit their climate goals. For nuclear plant operators and nuclear technology developers, the financial support of massive established customers could help keep old nuclear power plants open and push new technologies forward.

“There [are] a lot of advantages to nuclear,” says Michael Terrell, senior director of clean energy and carbon reduction at Google. Among them, he says, are that it’s “clean, firm, carbon-free, and can be sited just about anywhere.” (Firm energy sources are those that provide constant power.) 

But there’s one glaring potential roadblock: timing. “There are needs on different time scales,” says Patrick White, former research director at the Nuclear Innovation Alliance. Many of these tech companies will require large amounts of power in the next three to five years, White says, but building new nuclear plants can take close to a decade. 

Some next-generation nuclear technologies, especially small modular reactors, could take less time to build, but the companies promising speed have yet to build their first reactors—and in some cases they are still years away from even modestly sized demonstrations. 

This timing mismatch means that even as tech companies tout plans for nuclear power, they’ll actually be relying largely on fossil fuels, keeping coal plants open, and even building new natural gas plants that could stay open for decades. AI and nuclear could genuinely help each other grow, but the reality is that the growth could be much slower than headlines suggest. 

AI’s need for speed

The US alone has roughly 3,000 data centers, and current projections say the AI boom could add thousands more by the end of the decade. The rush could increase global data center power demand by as much as 165% by 2030, according to one recent analysis from Goldman Sachs. In the US, estimates from industry and academia suggest energy demand for data centers could be as high as 400 terawatt-hours by 2030—up from less than 100 terawatt-hours in 2020 and higher than the total electricity demand of the entire country of Mexico.

There are indications that the data center boom might be decelerating, with some companies slowing or pausing some projects in recent weeks. But even the most measured projections, in analyses like one recent report from the International Energy Agency, predict that energy demand will increase. The only question is by how much.  

Many of the same tech giants currently scrambling to build data centers have also set climate goals, vowing to reach net-zero emissions or carbon-free energy within the next couple of decades. So they have a vested interest in where that electricity comes from. 

Nuclear power has emerged as a strong candidate for companies looking to power data centers while cutting emissions. Unlike wind turbines and solar arrays that generate electricity intermittently, nuclear power plants typically put out a constant supply of energy to the grid, which aligns well with what data centers need. “Data center companies pretty much want to run full out, 24/7,” says Rob Gramlich, president of Grid Strategies, a consultancy focused on electricity and transmission.

It also doesn’t hurt that, while renewables are increasingly politicized and under attack by the current administration in the US, nuclear has broad support on both sides of the aisle. 

The problem is how to build up nuclear capacity—existing facilities are limited, and new technologies will take time to build. In 2022, all the nuclear reactors in the US together provided around 800 terawatt-hours of electricity to the power grid, a number that’s been basically steady for the past two decades. To meet electricity demand from data centers expected in 2030 with nuclear power, we’d need to expand the fleet of reactors in the country by half.
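The arithmetic behind that last claim is simple to check using the figures above:

```python
# Back-of-the-envelope check of the fleet-expansion claim, using the
# round numbers cited in the text (both are approximations).
us_nuclear_output_twh = 800   # approximate annual US nuclear generation, 2022
data_center_demand_twh = 400  # high-end estimate of 2030 data-center demand

# Fraction by which the reactor fleet would need to grow to cover
# that demand entirely with new nuclear capacity.
expansion_needed = data_center_demand_twh / us_nuclear_output_twh
```

Adding 400 terawatt-hours on top of an 800-terawatt-hour fleet is a 50% expansion, which is why even the most ambitious tech-company nuclear deals won't close the gap on their own.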

New nuclear news 

Some of the most exciting headlines regarding the burgeoning relationship between AI and nuclear technology involve large, established companies jumping in to support innovations that could bring nuclear power into the 21st century. 

In October 2024, Google signed a deal with Kairos Power, a next-generation nuclear company that recently received construction approval for two demonstration reactors from the US Nuclear Regulatory Commission (NRC). The company is working to build small, molten-salt-cooled reactors, which it says will be safer and more efficient than conventional technology. The Google deal is a long-term power-purchase agreement: The tech giant will buy up to 500 megawatts of electricity by 2035 from whatever plants Kairos manages to build, with the first one scheduled to come online by 2030. 

Amazon is also getting involved with next-generation nuclear technology with a direct investment in Maryland-based X-energy. The startup is among those working to create smaller, more-standardized reactors that can be built more quickly and with less expense.

In October, Amazon signed a deal with Energy Northwest, a utility in Washington state, that will see Amazon fund the initial phase of a planned X-energy small modular reactor project in the state. The tech giant will have a right to buy electricity from one of the modules in the first project, which could generate 320 megawatts of electricity and be expanded to generate as much as 960 megawatts. Many new AI-focused data centers under construction will require 500 megawatts of power or more, so this project might be just large enough to power a single site. 

The project will help meet energy needs “beginning in the early 2030s,” according to Amazon’s website. X-energy is currently in the pre-application process with the NRC, which must grant approval before the Washington project can move forward.

Solid, long-term plans could be a major help in getting next-generation technologies off the ground. “It’s going to be important in the next couple [of] years to see more firm commitments and actual money going out for these projects,” says Jessica Lovering, who cofounded the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy. 

However, these early projects won’t be enough to make a dent in demand. The next-generation reactors Amazon and Google are supporting are modestly sized demonstrations—the first commercial installations of new technologies. They won’t be close to the scale needed to meet the energy demand expected from new data centers by 2030. 

To provide a significant fraction of the terawatt-hours of electricity large tech companies use each year, nuclear companies will likely need to build dozens of new plants, not just a couple of reactors. 

Purchasing power 

One approach to getting around this mismatch is to target existing reactors. 

Microsoft made headlines in this area last year when it signed a long-term power purchase agreement with Constellation, the owner of the Three Mile Island Unit 1 nuclear plant in Pennsylvania. Constellation plans to reopen one of the reactors at that site and rename it the Crane Clean Energy Center. The deal with Microsoft ensures that there will be a customer for the electricity from the plant, if it successfully comes back online. (It’s currently on track to do so in 2028.)

“If you don’t want to wait a decade for new technology, one of the biggest tools that we have in our tool kit today is to support relicensing of operating power plants,” says Urvi Parekh, head of global energy for Meta. Older facilities can apply for 20-year extensions from the NRC, a process that customers buying the energy can help support as it tends to be expensive and lengthy, Parekh says. 

While these existing reactors provide some opportunity for Big Tech to snap up nuclear energy now, a limited number are in good enough shape to extend or reopen. 

In the US, 24 reactors have licenses that will be up for renewal before 2035, roughly a quarter of those in operation today. A handful of plants could potentially be reopened in addition to Three Mile Island, White says. Palisades Nuclear Plant in Michigan has received a $1.52 billion loan guarantee from the US Department of Energy to reopen, and the owner of the Duane Arnold Energy Center in Iowa has filed a request with regulators that could begin the reopening process.

Some sites have reactors that could be upgraded to produce more power without building new infrastructure, adding a total of between two and eight gigawatts, according to a recent report from the Department of Energy. That could power a handful of moderately sized data centers, but power demand is growing for individual projects—OpenAI has suggested the need for data centers that would require at least five gigawatts of power. 

Ultimately, new reactors will be needed to expand capacity significantly, whether they use established technology or next-generation designs. Experts tend to agree that neither would be able to happen at scale until at least the early 2030s. 

In the meantime, decisions made today in response to this energy demand boom will have ripple effects for years. Most power plants can last for several decades or more, so what gets built today will likely stay on the grid through 2040 and beyond. Whether the AI boom will entrench nuclear energy, fossil fuels, or other sources of electricity on the grid will depend on what is introduced to meet demand now. 

No individual technology, including nuclear power, is likely to be the one true solution. As Google’s Terrell puts it, everything from wind and solar to energy storage, geothermal, and, yes, nuclear will be needed to meet both energy demand and climate goals. “I think nuclear gets a lot of love,” he says. “But all of this is equally as important.”

How US research cuts are threatening crucial climate data

Over the last few months, and especially the last few weeks, there’s been an explosion of news about proposed budget cuts to science in the US. One trend I’ve noticed: Researchers and civil servants are sounding the alarm that those cuts mean we might lose key data that helps us understand our world and how climate change is affecting it.

My colleague James Temple has a new story out today about researchers who are attempting to measure the temperature of mountain snowpack across the western US. Snow that melts in the spring is a major water source across the region, and monitoring the temperature far below the top layer of snow could help scientists more accurately predict how fast water will flow down the mountains, allowing farmers, businesses, and residents to plan accordingly.

But long-running government programs that monitor the snowpack across the West are among those being threatened by cuts across the US federal government. Also potentially in trouble: carbon dioxide measurements in Hawaii, hurricane forecasting tools, and a database that tracks the economic impact of natural disasters. It’s all got me thinking: What do we lose when data is in danger?

Take for example the work at Mauna Loa Observatory, which sits on the northern side of the world’s largest active volcano. In this Hawaii facility, researchers have been measuring the concentration of carbon dioxide in the atmosphere since 1958.

The resulting graph, called the Keeling Curve (after Charles David Keeling, the scientist who kicked off the effort), is a pillar of climate research. It shows that carbon dioxide, the main greenhouse gas warming the planet, has increased in the atmosphere from around 313 parts per million in 1958 to over 420 parts per million today.

Proposed cuts to the National Oceanic and Atmospheric Administration (NOAA) jeopardize the Keeling Curve’s future. As Ralph Keeling (current steward of the curve and Keeling’s son) put it in a new piece for Wired, “If successful, this loss will be a nightmare scenario for climate science, not just in the United States, but the world.”

This story has echoes across the climate world right now. A lab at Princeton that produces what some consider the top-of-the-line climate models used to make hurricane forecasts could be in trouble because of NOAA budget cuts. And last week, NOAA announced it would no longer track the economic impact of the biggest natural disasters in the US.

Some of the largest-scale climate efforts will feel the effects of these cuts, and as James’s new story shows, they could also seep into all sorts of specialized fields. Even seemingly niche work can have a huge impact not just on research, but on people.

The frozen reservoir of the Sierra snowpack provides about a third of California’s water, as well as most of the water used by towns and cities in northwestern Nevada. Researchers there are hoping to help officials better forecast the timing of potential water supplies across the region.

This story brought to mind my visit to El Paso, Texas, a few years ago. I spoke with farmers there who rely on water coming down the Rio Grande, alongside dwindling groundwater, to support their crops. There, water comes down from the mountains in Colorado and New Mexico in the spring and is held in the Elephant Butte Reservoir. One farmer I met showed me pages and pages of notes of reservoir records, which he had meticulously copied by hand. Those crinkled pages were a clear sign: Publicly available data was crucial to his work.

The endeavor of scientific research, particularly when it involves patiently gathering data, isn’t always exciting. Its importance is often overlooked. But as cuts continue, we’re keeping a lookout, because losing data could harm our ability to track, address, and adapt to our changing climate. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Why climate researchers are taking the temperature of mountain snow

On a crisp morning in early April, Dan McEvoy and Bjoern Bingham cut clean lines down a wide run at the Heavenly Ski Resort in South Lake Tahoe, then ducked under a rope line cordoning off a patch of untouched snow. 

They side-stepped up a small incline, poled past a row of Jeffrey pines, then dropped their packs. 

The pair of climate researchers from the Desert Research Institute (DRI) in Reno, Nevada, skied down to this research plot in the middle of the resort to test out a new way to take the temperature of the Sierra Nevada snowpack. They were equipped with an experimental infrared device that can take readings as it’s lowered down a hole in the snow to the ground.

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. As it melts through the spring and summer, dam operators, water agencies, and communities have to manage the flow of billions of gallons of runoff, storing up enough to get through the inevitable dry summer months without allowing reservoirs and canals to flood.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods. 

In the past, it has been arduous work to gather such snowpack observations. Now, a new generation of tools, techniques, and models promises to ease that process, improve water forecasts, and help California and other states safely manage one of their largest sources of water in the face of increasingly severe droughts and flooding.

Observers, however, fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies, including the one that oversees federal snowpack monitoring and survey work. That could jeopardize ongoing efforts to produce the water data and forecasts on which Western communities rely.

“If we don’t have those measurements, it’s like driving your car around without a fuel gauge,” says Larry O’Neill, Oregon’s state climatologist. “We won’t know how much water is up in the mountains, and whether there’s enough to last through the summer.”

The birth of snow surveys

The snow survey program in the US was born near Lake Tahoe, the largest alpine lake in North America, around the turn of the 20th century. 

Without any reliable way of knowing how much water would flow down the mountain each spring, lakefront home and business owners, fearing floods, implored dam operators to release water early in the season. Downstream communities and farmers pushed back, however, demanding that the dam be used to hold onto as much water as possible to avoid shortages later in the year. 

In 1908, James Church, a classics professor at the University of Nevada, Reno, whose passion for hiking around the mountains sparked an interest in the science of snow, invented a device that helped resolve the so-called Lake Tahoe Water Wars: the Mt. Rose snow sampler, named after the peak of a Sierra spur that juts into Nevada.

Professor James E. Church wearing goggles and snowshoes, standing on a snowy hillside
James Church, a professor of classics at the University of Nevada, Reno, became a pioneer in the field of snow surveys.
COURTESY OF UNIVERSITY OF NEVADA, RENO

It’s a simple enough device, with sections of tube that screw together, a sharpened end, and measurement ticks along the side. Snow surveyors measure the depth of the snow by plunging the sampler down to the ground. They then weigh the filled tube on a specialized scale to calculate the water content of the snow. 
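The arithmetic behind such a core sample is simple enough to sketch: the snow water equivalent (SWE) is the core's mass spread over the tube's cross-sectional area, expressed as a depth of liquid water. This is only an illustration, with made-up numbers rather than the Mt. Rose sampler's actual dimensions:

```python
# A hypothetical sketch of the snow-core arithmetic: snow water equivalent
# (SWE) is the depth of liquid water the weighed core would melt into.
# The tube diameter and core mass below are illustrative, not the real
# Mt. Rose sampler specs.

import math

def swe_from_core(core_mass_kg: float, tube_inner_diameter_m: float) -> float:
    """Return snow water equivalent in meters of liquid water."""
    area_m2 = math.pi * (tube_inner_diameter_m / 2) ** 2
    water_density = 1000.0  # kg per cubic meter
    return core_mass_kg / (water_density * area_m2)

# A 0.8 kg core pulled through a tube roughly 4 cm across:
swe = swe_from_core(0.8, 0.04)
print(round(swe, 3), "m of water")
```

Because the tube's area is fixed, weighing the core on a calibrated scale converts directly to water depth, which is why surveyors need only a scale and the tube itself in the field.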

Church used the device to take measurements at various points across the range, and calibrated his water forecasts by comparing his readings against the rising and falling levels of Lake Tahoe. 

It worked so well that the US began a federal snow survey program in the mid-1930s, which evolved into the one carried on today by the Department of Agriculture’s Natural Resources Conservation Service (NRCS). Throughout the winter, hundreds of snow surveyors across the American West head up to established locations on snowshoes, backcountry skis, or snowmobiles to deploy their Mt. Rose samplers, which have barely changed over more than a century. 

In the 1960s, the US government also began setting up a network of permanent monitoring sites across the mountains, now known as the SNOTEL network. There are more than 900 stations continuously transmitting readings from across Western states and Alaska. They’re equipped with sensors that measure air temperature, snow depth, and soil moisture, and include pressure-sensitive “snow pillows” that weigh the snow to determine the water content. 

The data from the snow surveys and SNOTEL sites all flows into snow depth and snow water content reports that the NRCS publishes, along with forecasts of the amount of water that will fill the streams and reservoirs through the spring and summer.

Taking the temperature

None of these survey and monitoring programs, however, provide the temperature throughout the snowpack. 

The Sierra Nevada snowpack can reach depths of more than 6 meters (20 feet), and the temperature within it may vary widely, especially toward the top. Readings taken at increments throughout the snowpack can determine what’s known as the cold content: the amount of energy required to bring the snowpack to a uniform temperature of 32˚F. 

Knowing the cold content of the snowpack helps researchers understand the conditions under which it will begin to rapidly melt, particularly as it warms up in the spring or after rain falls on top of the snow.

If the temperature of the snow, for example, is close to 32˚F even at several feet deep, a few warm days could easily set it melting. If, on the other hand, the temperature measurements show a colder profile throughout the middle, the snowpack is more stable and will hold up longer as the weather warms.
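That intuition can be put into a rough calculation: each layer's cold content is its mass times how far below freezing it sits. The layer thicknesses, densities, and temperatures below are made up for illustration; the articles' researchers don't describe this exact computation.

```python
# A toy version of the "cold content" idea: the energy (joules per square
# meter of snowpack) needed to warm every layer up to the melting point.
# Layer values are illustrative assumptions, not field data.

C_ICE = 2100.0   # specific heat of ice, J/(kg*K)
T_MELT = 0.0     # melting point, deg C (32 deg F)

def cold_content(layers):
    """layers: list of (thickness_m, density_kg_m3, temp_c) tuples."""
    total = 0.0
    for thickness, density, temp in layers:
        # Layers already at or above the melting point add nothing.
        total += density * C_ICE * thickness * max(T_MELT - temp, 0.0)
    return total

# A cold mid-pack needs far more warming than one hovering near freezing:
cold_pack = cold_content([(1.0, 300, -8.0), (1.0, 350, -5.0)])
warm_pack = cold_content([(1.0, 300, -1.0), (1.0, 350, -0.5)])
print(cold_pack > warm_pack)
```

The bigger the number, the more warm weather or rain the snowpack can absorb before it starts shedding meltwater in earnest.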

a person raising a snow shovel up at head height
Bjoern Bingham, a research scientist at the Desert Research Institute, digs a snow pit at a research plot within the Heavenly Ski Resort, near South Lake Tahoe, California.
JAMES TEMPLE

The problem is that taking the temperature of the entire snowpack has been, until now, tough and time-consuming work. When researchers do it at all, they mainly do so by digging snow pits down to the ground and then taking readings with probe thermometers along an inside wall.

There have been a variety of efforts to take continuous remote readings from sensors attached to fences, wires, or towers, which the snowpack eventually buries. But the movement and weight of the dense shifting snow tends to break the devices or snap the structures they’re assembled upon.

“They rarely last a season,” says Daniel McEvoy, an associate research professor at DRI.

Anne Heggli, a professor of mountain hydrometeorology at DRI, happened upon the idea of using an infrared device to solve this problem during a tour of the institute’s campus in 2019, when she learned that researchers there were using an infrared meat thermometer to take contactless readings of the snow surface.

In 2021, Heggli began collaborating with RPM Systems, a gadget manufacturing company, to design an infrared device optimized for snowpack field conditions. The resulting snow temperature profiler is skinny enough to fit down a hole dug by snow surveyors and dangles on a cord marked off at 10-centimeter (4-inch) increments.

a researcher stands in a snowy trench taking notes, while a second researcher drops a yellow measure down from the surface level
Bingham and Daniel McEvoy, an associate research professor at the Desert Research Institute, work together to take temperature readings from inside the snowpit as well as from within the hole left behind by a snow sampler.
JAMES TEMPLE

At Heavenly on that April morning, Bingham, a staff scientist at DRI, slowly fed the device down a snow sampler hole, calling out temperature readings at each marking. McEvoy scribbled them down on a worksheet fastened to his clipboard as he used a probe thermometer to take readings of his own from within a snow pit the pair had dug down to the ground.

They were comparing the measurements to assess the reliability of the infrared device in the field, but the eventual aim is to eliminate the need to dig snow pits. The hope is that state and federal surveyors could simply carry along a snow temperature profiler and drop it into the snowpack survey holes they’re creating anyway, to gather regular snowpack temperature readings from across the mountains.

In 2023, the US Bureau of Reclamation, the federal agency that operates many of the nation’s dams, funded a three-year research project to explore the use of the infrared gadgets in determining snowpack temperatures. Through it, the DRI research team has now handed devices out to 20 snow survey teams across California, Colorado, Idaho, Montana, Nevada, and Utah to test their use in the field and supplement the snowpack data they’re collecting.

The Snow Lab

The DRI research project is one piece of a wider effort to obtain snowpack temperature data across the mountains of the West.

By early May, the snow depth had dropped from an April peak of 114 inches to 24 inches (2.9 meters to 0.6 meters) at the UC Berkeley Central Sierra Snow Lab, an aging wooden structure perched in the high mountains northwest of Lake Tahoe.

Megan Mason, a research scientist at the lab, used a backcountry ski shovel to dig out a trio of instruments from what was left of the pitted snowpack behind the building. Each one featured different types of temperature sensors, arrayed along a strong polymer beam meant to hold up under the weight and movement of the Sierra snowpack.  

She was pulling up the devices after running the last set of observations for the season, as part of an effort to develop a resilient system that can survive the winter and transmit hourly temperature readings.

The lab is working on the project, dubbed the California Cold Content Initiative, in collaboration with the state’s Department of Water Resources. California is the only western state that opted to maintain its own snow survey program and run its own permanent monitoring stations, all of which are managed by the water department. 

The plan is to determine which instruments held up and functioned best this winter. Then, they can begin testing the most promising approaches at several additional sites next season. Eventually, the goal is to attach the devices at more than 100 of California’s snow monitoring stations, says Andrew Schwartz, the director of the lab.

The NRCS is conducting a similar research effort at select SNOTEL sites equipped with a beaded temperature cable. One such cable is visible at the Heavenly SNOTEL station, next to where McEvoy and Bingham dug their snow pit, strung vertically between an arm extended from the main tower and the snow-covered ground. 

a gloved hand inserts a probe wire into a hole in the snow
DRI’s Bjoern Bingham feeds the snow temperature profiler, an infrared device, down a hole in the Sierra snowpack.
JAMES TEMPLE

Schwartz says that the different research groups are communicating and collaborating openly on the projects, all of which promise to provide complementary information, expanding the database of snowpack temperature readings across the West.

For decades, agencies and researchers generally produced water forecasts using relatively simple regression models that translated the amount of water in the snowpack into the amount of water that will flow down the mountain, based largely on the historic relationships between those variables. 

But these models are becoming less reliable as climate change alters temperatures, snow levels, melt rates, and evaporation, and otherwise drives alpine weather patterns outside of historic patterns.

“As we have years that scatter further and more frequently from the norm, our models aren’t prepared,” Heggli says.

Plugging direct temperature observations into more sophisticated models that have emerged in recent years, Schwartz says, promises to significantly improve the accuracy of water forecasts. That, in turn, should help communities manage through droughts and prevent dams from overtopping even as climate change fuels alternately wetter, drier, warmer, and weirder weather.

About a quarter of the world’s population relies on water stored in mountain snow and glaciers, and climate change is disrupting the hydrological cycles that sustain these natural frozen reservoirs in many parts of the world. So any advances in observations and modeling could deliver broader global benefits.

Ominous weather

There’s an obvious threat to this progress, though.

Even if these projects work as well as hoped, it’s not clear how widely these tools and techniques will be deployed at a time when the White House is gutting staff across federal agencies, terminating thousands of scientific grants, and striving to eliminate tens of billions of dollars in funding at research departments. 

The Trump administration has fired or put on administrative leave nearly 6,000 employees across the USDA, or 6% of the department’s workforce. Those cutbacks have reached regional NRCS offices, according to reporting by local and trade outlets.

That includes more than half of the roles at the Portland office, according to O’Neill, the state climatologist. Those reductions prompted a bipartisan group of legislators to call on the Secretary of Agriculture to restore the positions, warning the losses could impair water data and analyses that are crucial for the state’s “agriculture, wildland fire, hydropower, timber, and tourism sectors,” as the Statesman Journal reported.

There are more than 80 active SNOTEL stations in Oregon.

The fear is there won’t be enough people left to reach all the sites this summer to replace batteries, solar panels, and drifting or broken sensors, which could quickly undermine the reliability of the data or cut off the flow of information. 

“Staff and budget reductions at NRCS will make it impossible to maintain SNOTEL instruments and conduct routine manual observations, leading to inoperability of the network within a year,” the lawmakers warned.

The USDA and NRCS didn’t respond to inquiries from MIT Technology Review.

looking down at a researcher standing in a snowy trench with a clipboard of notes
DRI’s Daniel McEvoy scribbles down temperature readings at the Heavenly site.
JAMES TEMPLE

If the federal cutbacks deplete the data coming back from SNOTEL stations or federal snow survey work, the DRI infrared method could at least “still offer a simplistic way of measuring the snowpack temperatures” in places where state and regional agencies continue to carry out surveys, McEvoy says.

But most researchers stress that the field needs more surveys, stations, sensors, and readings to understand how the climate and water cycles are changing from month to month and season to season. Heggli argues that there should be broad bipartisan support for programs that collect snowpack data and provide the water forecasts that farmers and communities rely on. 

“This is how we account for one of, if not the, most valuable resource we have,” she says. “In the West, we go into a seasonal drought every summer; our snowpack is what trickles down and gets us through that drought. We need to know how much we have.”

Did solar power cause Spain’s blackout?

At roughly midday on Monday, April 28, the lights went out in Spain. The grid blackout, which extended into parts of Portugal and France, affected tens of millions of people—flights were grounded, cell networks went down, and businesses closed for the day.

Over a week later, officials still aren’t entirely sure what happened, but some (including the US energy secretary, Chris Wright) have suggested that renewables may have played a role, because just before the outage happened, wind and solar accounted for about 70% of electricity generation. Others, including Spanish government officials, insisted that it’s too early to assign blame.

It’ll take weeks to get the full report, but we do know a few things about what happened. And even as we wait for the bigger picture, there are a few takeaways that could help our future grid.

Let’s start with what we know so far about what happened, according to the Spanish grid operator Red Eléctrica:

  • A disruption in electricity generation took place a little after 12:30 p.m. This may have been a power plant flipping off or some transmission equipment going down.
  • A little over a second later, the grid lost another bit of generation.
  • A few seconds after that, the main interconnector between Spain and southwestern France got disconnected as a result of grid instability.
  • Immediately after, virtually all of Spain’s electricity generation tripped offline.

One of the theories floating around is that things went wrong because the grid diverged from its normal frequency. (All power grids have a set frequency: In Europe the standard is 50 hertz, meaning the alternating current completes 50 full cycles per second.) The frequency needs to be constant across the grid to keep things running smoothly.

There are signs that the outage could be frequency-related. Some experts pointed out that strange oscillations in the grid frequency occurred shortly before the blackout.

Normally, our grid can handle small problems like an oscillation in frequency or a drop that comes from a power plant going offline. But some of the grid’s ability to stabilize itself is tied up in old ways of generating electricity.

Power plants like those that run on coal and natural gas have massive rotating generators. If there are brief issues on the grid that upset the balance, those physical bits of equipment have inertia: They’ll keep moving at least for a few seconds, providing some time for other power sources to respond and pick up the slack. (I’m simplifying here—for more details I’d highly recommend this report from the National Renewable Energy Laboratory.)

Solar panels don’t have inertia—they rely on inverters to change electricity into a form that’s compatible with the grid and matches its frequency. Generally, these inverters are “grid-following,” meaning if frequency is dropping, they follow that drop.
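A back-of-the-envelope version of the inertia argument comes from the so-called swing equation: after a sudden loss of generation, frequency initially falls at a rate proportional to the deficit and inversely proportional to the grid's stored rotating inertia. The numbers below are illustrative, not figures from the Spanish grid:

```python
# A toy swing-equation sketch of grid inertia: after losing some fraction
# of generation, a grid with more rotating inertia (higher H) sees its
# frequency fall more slowly, buying time for other sources to respond.
# This ignores any corrective response and uses made-up parameters.

F0 = 50.0  # nominal European grid frequency, Hz

def freq_after(t_s: float, power_deficit_pu: float, inertia_h_s: float) -> float:
    """Frequency t seconds after the disturbance.

    Initial rate of change of frequency: df/dt = -f0 * deficit / (2 * H)
    """
    rocof = F0 * power_deficit_pu / (2.0 * inertia_h_s)
    return F0 - rocof * t_s

low_inertia = freq_after(1.0, 0.1, 2.0)   # inverter-heavy grid
high_inertia = freq_after(1.0, 0.1, 6.0)  # lots of spinning machines
print(low_inertia, high_inertia)
```

In this simplified picture, tripling the inertia constant cuts the initial frequency decline to a third — the few extra tenths of a hertz of headroom that can separate a contained incident from a cascading one.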

In the case of the blackout in Spain, it’s possible that having a lot of power on the grid coming from sources without inertia made it more possible for a small problem to become a much bigger one.

Some key questions here are still unanswered. The order matters, for example. During that drop in generation, did wind and solar plants go offline first? Or did everything go down together?

Whether or not solar and wind contributed to the blackout as a root cause, we do know that wind and solar don’t contribute to grid stability in the same way that some other power sources do, says Seaver Wang, climate lead of the Breakthrough Institute, an environmental research organization. Regardless of whether renewables are to blame, more capability to stabilize the grid would only help, he adds.

It’s not that a renewable-heavy grid is doomed to fail. As Wang put it in an analysis he wrote last week: “This blackout is not the inevitable outcome of running an electricity system with substantial amounts of wind and solar power.”

One solution: We can make sure the grid includes enough equipment that does provide inertia, like nuclear power and hydropower. Reversing a plan to shut down Spain’s nuclear reactors beginning in 2027 would be helpful, Wang says. Other options include building massive machines that lend physical inertia and using inverters that are “grid-forming,” meaning they can actively help regulate frequency and provide a sort of synthetic inertia.

Inertia isn’t everything, though. Grid operators can also rely on installing a lot of batteries that can respond quickly when problems arise. (Spain has much less grid storage than other places with a high level of renewable penetration, like Texas and California.)

Ultimately, if there’s one takeaway here, it’s that as the grid evolves, our methods to keep it reliable and stable will need to evolve too.

If you’re curious to hear more on this story, I’d recommend this Q&A from Carbon Brief about the event and its aftermath and this piece from Heatmap about inertia, renewables, and the blackout.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A long-abandoned US nuclear technology is making a comeback in China

China has once again beat everyone else to a clean energy milestone—its new nuclear reactor is reportedly one of the first to use thorium instead of uranium as a fuel and the first of its kind that can be refueled while it’s running.

It’s an interesting (if decidedly experimental) development out of a country that’s edging toward becoming the world leader in nuclear energy. China has now surpassed France in terms of generation, though not capacity; it still lags behind the US in both categories. But one recurring theme in media coverage about the reactor struck me, because it’s so familiar: This technology was invented decades ago, and then abandoned.

You can basically copy and paste that line into countless stories about today’s advanced reactor technology. Molten-salt cooling systems? Invented in the mid-20th century but never commercialized. Same for several alternative fuels, like TRISO. And, of course, there’s thorium.

This one research reactor in China running with an alternative fuel says a lot about this moment for nuclear energy technology: Many groups are looking into the past for technologies, with a new appetite for building them.

First, it’s important to note that China is the hot spot for nuclear energy right now. While the US still has the most operational reactors in the world, China is catching up quickly. The country is building reactors at a remarkable clip and currently has more reactors under construction than any other country by far. Just this week, China approved 10 new reactors, totaling over $27 billion in investment.

China is also leading the way for some advanced reactor technologies (that category includes basically anything that deviates from the standard blueprint of what’s on the grid today: large reactors that use enriched uranium for fuel and high-pressure water to keep the reactor cool). High-temperature reactors that use gas as a coolant are one major area of focus for China—a few reactors that use this technology have recently started up, and more are in the planning stages or under construction.

Now, Chinese state media is reporting that scientists in the country reached a milestone with a thorium-based reactor. The reactor came online in June 2024, but researchers say it recently went through refueling without shutting down. (Conventional reactors generally need to be stopped to replenish the fuel supply.) The project’s lead scientists shared the results during a closed meeting at the Chinese Academy of Sciences.

I’ll emphasize here that this isn’t some massive power plant: This reactor is tiny. It generates just two megawatts of heat—less than the research reactor on MIT’s campus, which rings in at six megawatts. (To be fair, MIT’s is one of the largest university research reactors in the US, but still … it’s small.)

Regardless, progress is progress for thorium reactors, as the world has been entirely focused on uranium for the last 50 years or so.

Much of the original research on thorium came out of the US, which pumped resources into all sorts of different reactor technologies in the 1950s and ’60s. A reactor at Oak Ridge National Laboratory in Tennessee that ran in the 1960s used uranium-233 fuel (which can be generated when thorium is bombarded with radiation).

Eventually, though, the world more or less settled on a blueprint for nuclear reactors, focusing on those that use enriched uranium as fuel (mostly uranium-238, with the fissile uranium-235 driving the chain reaction) and are cooled by water at a high pressure. One reason for the focus on uranium for energy tech? The research could also be applied to nuclear weapons.

But now there’s a renewed interest in alternative nuclear technologies, and the thorium-fueled reactor is just one of several examples. A prominent one we’ve covered before: Kairos Power is building reactors that use molten salt as a coolant for small nuclear reactors, also a technology invented and developed in the 1950s and ’60s before being abandoned. 

Another old-but-new concept is using high-temperature gas to cool reactors, as X-energy is aiming to do in its proposed power station at a chemical plant in Texas. (That reactor will be able to be refueled while it’s running, like the new thorium reactor.) 

Some problems from decades ago that contributed to technologies being abandoned will still need to be dealt with today. In the case of molten-salt reactors, for example, it can be tricky to find materials that can withstand the corrosive properties of super-hot salt. For thorium reactors, the process of transforming thorium into U-233 fuel has historically been one of the hurdles. 

But as early progress shows, the archives could provide fodder for new commercial reactors, and revisiting these old ideas could give the nuclear industry a much-needed boost. 


The vibes are shifting for US climate tech

The past few years have been an almost nonstop parade of good news for climate tech in the US. Headlines about billion-dollar grants from the government, massive private funding rounds, and labs churning out advance after advance have been routine. Now, though, things are starting to shift.  

About $8 billion worth of US climate tech projects have been canceled or downsized so far in 2025. (You can see a map of those projects in my latest story here.) 

There are still projects moving forward, but these cancellations definitely aren’t a good sign. And now we have tariffs to think about, adding additional layers of expense and, worse, uncertainty. (Businesses, especially those whose plans require gobs of money, really don’t like uncertainty.) Honestly, I’m still getting used to an environment that isn’t such a positive one for climate technology. How worried should we be? Let’s get into the context.

Sometimes, one piece of news can really drive home a much larger trend. For example, I’ve read a bazillion studies about extreme weather and global warming, but every time a hurricane comes close to my mom’s home in Florida, the threat of climate-fueled extreme weather becomes much more real for me. A recent announcement about climate tech hit me in much the same fashion.

In February, Aspen Aerogels announced it was abandoning plans for a Georgia factory that would have made materials that can suppress battery fires. The news struck me, because just a few months before, in October, I had written about the Department of Energy’s $670 million loan commitment for the project. It was a really fun story, both because I found the tech fascinating and because MIT Technology Review got exclusive access to cover it first.

And now, suddenly, that plan is just dead. Aspen said it will shift some of its production to a factory in Rhode Island and send some overseas. (I reached out to the company with questions for my story last week, but they didn’t get back to me.)

One example doesn’t always mean there’s a trend; I got food poisoning at a sushi restaurant once, but I haven’t cut out sashimi permanently. The bad news, though, is that Aspen’s cancellation is just one of many. Over a dozen major projects in climate technology have gotten killed so far this year, as the nonprofit E2 tallied up in a new report last week. That’s far from typical.

I got some additional context from Jay Turner, who runs Big Green Machine, a database that also tracks investments in the climate-tech supply chain. That project includes some data that E2 doesn’t account for: news about when projects are delayed or take steps forward. On Monday, the Big Green Machine team released a new update, one that Turner called “concerning.”

Since Donald Trump took office on January 20, about $10.5 billion worth of investment in climate tech projects has progressed in some way. That basically means 26 projects were announced, secured new funding, increased in scale, or started construction or production.

Meanwhile, $12.2 billion across 14 projects has slowed down in some way. This covers projects that were canceled, were delayed significantly, or lost funding, as well as companies that went bankrupt. So by total investment, there’s been more bad news in climate tech than good news, according to Turner’s tracking.

It’s tempting to look for the silver lining here. The projects still moving forward are certainly positive, and we’ll hopefully continue to see some companies making progress even as we head into even more uncertain times. But the signs don’t look good.

One question that I have going forward is how a seemingly inevitable US slowdown on climate technology will ripple around the rest of the world. Several experts I’ve spoken with seem to agree that this will be a great thing for China, which has aggressively and consistently worked to establish itself as a global superpower in industries like EVs and batteries.

In other words, the energy transition is rolling on. Will the US get left behind? 


The quest to build islands with ocean currents in the Maldives

In satellite images, the 20-odd coral atolls of the Maldives look something like skeletal remains or chalk lines at a crime scene. But these landforms, which circle the peaks of a mountain range that has vanished under the Indian Ocean, are far from inert. They’re the products of living processes—places where coral has grown toward the surface over hundreds of thousands of years. Shifting ocean currents have gradually pushed sand—made from broken-up bits of this same coral—into more than 1,000 other islands that poke above the surface. 

But these currents can also be remarkably transient, constructing new sandbanks or washing them away in a matter of weeks. In the coming decades, the daily lives of the half-million people who live on this archipelago—the world’s lowest-lying nation—will depend on finding ways to keep a solid foothold amid these shifting sands. More than 90% of the islands have experienced severe erosion, and climate change could make much of the country uninhabitable by the middle of the century.

Off one atoll, just south of the Maldives’ capital, Malé, researchers are testing one way to capture sand in strategic locations—to grow islands, rebuild beaches, and protect coastal communities from sea-level rise. Swim 10 minutes out into the En’boodhoofinolhu Lagoon and you’ll find the Ramp Ring, an unusual structure made up of six tough-skinned geotextile bladders. These submerged bags, part of a recent effort called the Growing Islands project, form a pair of parentheses separated by 90 meters (around 300 feet).

The bags, each about two meters tall, were deployed in December 2024, and by February, underwater images showed that sand had climbed about a meter and a half up the surface of each one, demonstrating how passive structures can quickly replenish beaches and, in time, build a solid foundation for new land. “There’s just a ton of sand in there. It’s really looking good,” says Skylar Tibbits, an architect and founder of the MIT Self-Assembly Lab, which is developing the project in partnership with the Malé-based climate tech company Invena.

The Self-Assembly Lab designs material technologies that can be programmed to transform or “self-assemble” in the air or underwater, exploiting natural forces like gravity, wind, waves, and sunlight. Its creations include sheets of wood fiber that form into three-dimensional structures when splashed with water, which the researchers hope could be used for tool-free flat-pack furniture. 

Growing Islands is their largest-scale undertaking yet. Since 2017, the project has deployed 10 experiments in the Maldives, testing different materials, locations, and strategies, including inflatable structures and mesh nets. The Ramp Ring is many times larger than previous deployments and aims to overcome their biggest limitation. 

In the Maldives, the direction of the currents changes with the seasons. Past experiments have been able to capture only one seasonal flow, meaning they lie dormant for months of the year. By contrast, the Ramp Ring is “omnidirectional,” capturing sand year-round. “It’s basically a big ring, a big loop, and no matter which monsoon season and which wave direction, it accumulates sand in the same area,” Tibbits says.

The approach points to a more sustainable way to protect the archipelago, whose growing population is supported by an economy that caters to 2 million annual tourists drawn by its white beaches and teeming coral reefs. Most of the country’s 187 inhabited islands have already had some form of human intervention to reclaim land or defend against erosion, such as concrete blocks, jetties, and breakwaters.

Since the 1990s, dredging has become by far the most significant strategy. Boats equipped with high-power pumping systems vacuum up sand from one part of the seabed and spray it into a pile somewhere else. This temporary process allows resort developers and densely populated islands like Malé to quickly replenish beaches and build limitlessly customizable islands. But it also leaves behind dead zones where sand has been extracted—and plumes of sediment that cloud the water with a sort of choking marine smog. Last year, the government placed a temporary ban on dredging to prevent damage to reef ecosystems, which were already struggling amid spiking ocean temperatures.

Holly East, a geographer at the University of Northumbria, says Growing Islands’ structures offer an exciting alternative to dredging. But East, who is not involved in the project, warns that they must be sited carefully to avoid interrupting sand flows that already build up islands’ coastlines. 

To do this, Tibbits and Invena cofounder Sarah Dole are conducting long-term satellite analysis of the En’boodhoofinolhu Lagoon to understand how sediment flows move around atolls. On the basis of this work, the team is currently spinning out a predictive coastal intelligence platform called Littoral. The aim is for it to be “a global health monitoring system for sediment transport,” Dole says. It’s meant not only to show where beaches are losing sand but to “tell us where erosion is going to happen,” allowing government agencies and developers to know where new structures like Ramp Rings can best be placed.

Growing Islands has been supported by the National Geographic Society, MIT, the Sri Lankan engineering group Sanken, and tourist resort developers. In 2023, it got a big bump from the US Agency for International Development: a $250,000 grant that funded the construction of the Ramp Ring deployment and would have provided opportunities to scale up the approach. But the termination of nearly all USAID contracts following the inauguration of President Trump means the project is looking for new partners.

Matthew Ponsford is a freelance reporter based in London.

$8 billion of US climate tech projects have been canceled so far in 2025

This year has been rough for climate technology: Companies have canceled, downsized, or shut down at least 16 large-scale projects worth $8 billion in total in the first quarter of 2025, according to a new report.

That’s far more cancellations than have typically occurred in recent years. The report, from E2, a nonpartisan policy group, attributes the trend to a variety of factors, including drastically revised federal policies.

In recent months, the White House has worked to claw back federal investments, including some of those promised under the Inflation Reduction Act. New tariffs on imported goods, including those from China (which dominates supply chains for batteries and other energy technologies), are also contributing to the precarious environment. And demand for some technologies, like EVs, is lagging behind expectations. 

E2, which has been tracking new investments in manufacturing and large-scale energy projects, is now expanding its regular reports to include project cancellations, shutdowns, and downsizings as well. From August 2022 to the end of 2024, 18 projects were canceled, closed, or downsized, according to E2’s data. The first three months of 2025 have already seen 16 projects canceled.

“I wasn’t sure it was going to be this clear,” says Michael Timberlake, communications director of E2. “What you’re really seeing is that there’s a lot of market uncertainty.”

Big as that number is, the tally is not comprehensive. The group tracks only large-scale investments, not smaller announcements that can be more difficult to follow. The list also leaves out projects that companies have paused.

“The incredible uncertainty in the clean energy sector is leading to a lot of projects being canceled or downsized, or just slowed down,” says Jay Turner, a professor of environmental studies at Wellesley College. Turner leads a team that also tracks the supply chain for clean energy in the US in a database called the Big Green Machine.

Some turnover is normal, Turner says, and a lot of projects have been announced since the Inflation Reduction Act was passed in 2022, so there are simply more in the pipeline to potentially be canceled. So many battery and EV projects were announced that supply would have exceeded demand “even in a best-case scenario,” he says, meaning some of the cancellations are a result of right-sizing, or getting supply and demand back in sync.

Other projects are still moving forward, with hundreds of manufacturing facilities under construction or operational. But it’s not as many as we’d see in a more stable policy landscape, Turner says.

The cancellations include a factory in Georgia from Aspen Aerogels, which received a $670 million loan commitment from the US Department of Energy in October. The facility would have made materials that can help prevent or slow fires in battery packs. In a February earnings call, executives said the company plans to focus on an existing Rhode Island facility and projects in other countries, including China and Mexico. Aspen Aerogels didn’t respond to a request for further comment. 

Still, the wave of cancellations is an early sign of growing uncertainty for climate technology.

“You’re seeing a business environment that’s just unsure what’s next and is hesitant to commit one way or another,” Timberlake says.

These four charts sum up the state of AI and energy

While it’s rare to look at the news without finding some headline related to AI and energy, a lot of us are stuck waving our hands when it comes to what it all means.

Sure, you’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. That’s true even for people working in the field. 

A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up. Here are four charts from the report that sum up the crucial points about AI and energy demand.

1. AI is power hungry, and the world will need to ramp up electricity supply to meet demand. 

This point is the most obvious, but it bears repeating: AI is exploding, and it’s going to lead to higher energy demand from data centers. “AI has gone from an academic pursuit to an industry with trillions of dollars at stake,” as the IEA report’s executive summary puts it.

Data centers used less than 300 terawatt-hours of electricity in 2020. That could increase to nearly 1,000 terawatt-hours in the next five years, which is more than Japan’s total electricity consumption today.
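To put that projection in perspective, here is a quick back-of-the-envelope calculation (a sketch using the rounded figures cited above; the IEA’s exact projections differ slightly):

```python
# Back-of-the-envelope: how fast data-center electricity use is
# projected to grow, using the rounded figures from the IEA report.
usage_2020_twh = 300    # "less than 300 terawatt-hours" in 2020
usage_2030_twh = 1_000  # "nearly 1,000 terawatt-hours" within five years

growth_factor = usage_2030_twh / usage_2020_twh
added_twh = usage_2030_twh - usage_2020_twh

print(f"Roughly a {growth_factor:.1f}x increase in a decade")
print(f"That's {added_twh} extra terawatt-hours of annual demand to supply")
```

In other words, the grid would need to find roughly 700 additional terawatt-hours a year for data centers alone, which is why the report devotes so much attention to where that supply will come from.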

Today, the US has about 45% of the world’s data center capacity, followed by China. Those two countries will continue to represent the overwhelming majority of capacity through 2035.

2. The electricity needed to power data centers will largely come from fossil fuels like coal and natural gas in the near term, but nuclear and renewables could play a key role, especially after 2030.

The IEA report is relatively optimistic on the potential for renewables to power data centers, projecting that nearly half of global growth by 2035 will be met with renewables like wind and solar. (In Europe, the IEA projects, renewables will meet 85% of new demand.)

In the near term, though, natural gas and coal will also expand. An additional 175 terawatt-hours from gas will help meet demand in the next decade, largely in the US, according to the IEA’s projections. Another report, published this week by the energy consultancy BloombergNEF, suggests that fossil fuels will play an even larger role than the IEA projects, accounting for two-thirds of additional electricity generation between now and 2035.

Nuclear energy, a favorite of big tech companies looking to power operations without generating massive emissions, could start to make a dent after 2030, according to the IEA data.

3. Data centers are just a small piece of expected electricity demand growth this decade.

We should be talking more about appliances, industry, and EVs when we talk about energy! Electricity demand is on the rise from a whole host of sources: Electric vehicles, air-conditioning, and appliances will each drive more new electricity demand than data centers between now and the end of the decade. In total, data centers make up a little over 8% of the electricity demand growth expected between now and 2030.

There are interesting regional effects here, though. Growing economies will see more demand from the likes of air-conditioning than from data centers. On the other hand, the US has seen relatively flat electricity demand from consumers and industry for years, so newly rising demand from high-performance computing will make up a larger chunk. 

4. Data centers tend to be clustered together and close to population centers, making them a unique challenge for the power grid.  

The grid is no stranger to facilities that use huge amounts of energy: Cement plants, aluminum smelters, and coal mines all pull a lot of power in one place. However, data centers are a unique sort of beast.

First, they tend to be closely clustered together. Globally, data centers make up about 1.5% of total electricity demand. However, in Ireland, that number is 20%, and in Virginia, it’s 25%. That trend looks likely to continue, too: Half of data centers under development in the US are in preexisting clusters.

Data centers also tend to be closer to urban areas than other energy-intensive facilities like factories and mines. 

Since data centers are close both to each other and to communities, they could have significant impacts on the regions where they’re situated, whether by bringing on more fossil fuels close to urban centers or by adding strain to the local grid. Or both.

Overall, AI and data centers more broadly are going to be a major driving force for electricity demand. That’s not the whole story, but it’s a distinctive part of our energy picture worth watching closely.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.