The data center boom in the desert

In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city.

Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities within the Tahoe Reno Industrial Center, a business park bigger than the city of Detroit. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


Meanwhile, Microsoft acquired more than 225 acres of undeveloped property within the center and an even larger plot in nearby Silver Springs, Nevada. Apple is expanding its data center, located just across the Truckee River from the industrial park. OpenAI has said it’s considering building a data center in Nevada as well.

The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert—just far enough away from Nevada’s communities to elude wide notice and, some fear, adequate scrutiny. 

Switch, a data center company based in Las Vegas, says the full build-out of its campus at the Tahoe Reno Industrial Center could exceed seven million square feet.
EMILY NAJERA

The full scale and potential environmental impacts of the developments aren’t known, because the footprint, energy needs, and water requirements are often closely guarded corporate secrets. Most of the companies didn’t respond to inquiries from MIT Technology Review, or declined to provide additional information about the projects. 

But there’s “a whole lot of construction going on,” says Kris Thompson, who served as the longtime project manager for the industrial center before stepping down late last year. “The last number I heard was 13 million square feet under construction right now, which is massive.”

Indeed, it’s the equivalent of almost five Empire State Buildings laid out flat. In addition, public filings from NV Energy, the state’s near-monopoly utility, reveal that a dozen data-center projects, mostly in this area, have requested nearly six gigawatts of electricity capacity within the next decade. 

That would make the greater Reno area—the biggest little city in the world—one of the largest data-center markets around the globe.

It would also require expanding the state’s power sector by about 40%, all for a single industry in an explosive growth stage that may, or may not, prove sustainable. The energy needs, in turn, suggest those projects could consume billions of gallons of water per year, according to an analysis conducted for this story. 

Construction crews are busy building data centers throughout the Tahoe Reno Industrial Center.
EMILY NAJERA

The build-out of a dense cluster of energy- and water-hungry data centers in a small stretch of the nation’s driest state, where climate change is driving up temperatures faster than anywhere else in the country, has begun to raise alarms among water experts, environmental groups, and residents. That includes members of the Pyramid Lake Paiute Tribe, whose namesake water body lies within their reservation and marks the end point of the Truckee River, the region’s main source of water.

Much of Nevada has suffered through severe drought conditions for years, farmers and communities are drawing down many of the state’s groundwater reservoirs faster than they can be refilled, and global warming is sucking more and more moisture out of the region’s streams, shrubs, and soils.

“Telling entities that they can come in and stick more straws in the ground for data centers is raising a lot of questions about sound management,” says Kyle Roerink, executive director of the Great Basin Water Network, a nonprofit that works to protect water resources throughout Nevada and Utah. 

“We just don’t want to be in a situation where the tail is wagging the dog,” he later added, “where this demand for data centers is driving water policy.”

Luring data centers

In the late 1850s, the mountains southeast of Reno began enticing prospectors from across the country, who hoped to strike silver or gold in the famed Comstock Lode. But Storey County had few residents or economic prospects by the late 1990s, around the time when Don Roger Norman, a media-shy real estate speculator, spotted a new opportunity in the sagebrush-covered hills. 

He began buying up tens of thousands of acres of land for tens of millions of dollars and lining up development approvals to lure industrial projects to what became the Tahoe Reno Industrial Center. His partners included Lance Gilman, a cowboy-hat-wearing real estate broker, who later bought the nearby Mustang Ranch brothel and won a seat as a county commissioner.

In 1999, the county passed an ordinance that preapproves companies to develop most types of commercial and industrial projects across the business park, cutting months to years off the development process. That helped cinch deals with a flock of tenants looking to build big projects fast, including Walmart, Tesla, and Redwood Materials. Now the promise of fast permits is helping to draw data centers by the gigawatt.

On a clear, cool January afternoon, Brian Armon, a commercial real estate broker who leads the industrial practices group at NAI Alliance, takes me on a tour of the projects around the region, which mostly entails driving around the business center.

Lance Gilman, a local real estate broker, helped to develop the Tahoe Reno Industrial Center and land some of its largest tenants.
GREGG SEGAL

After pulling off Interstate 80 onto USA Parkway, he points out the cranes, earthmovers, and riprap foundations, where a variety of data centers are under construction. Deeper into the industrial park, Armon pulls up near Switch’s long, low, arched-roof facility, which sits on a terrace above cement walls and security gates. The Las Vegas–based company says the first phase of its data center campus encompasses more than a million square feet, and that the full build-out will cover seven times that space. 

Over the next hill, we turn around in Google’s parking lot. Cranes, tents, framing, and construction equipment extend behind the company’s existing data center, filling much of the 1,210-acre lot that the search engine giant acquired in 2017.

Last August, during an event at the University of Nevada, Reno, the company announced it would spend $400 million to expand the data center campus along with another one in Las Vegas.

Thompson says that the development company, Tahoe Reno Industrial LLC, has now sold off every parcel of developable land within the park (although several lots are available for resale following the failed gamble of one crypto tenant).

When I ask Armon what’s attracting all the data centers here, he starts with the fast approvals but cites a list of other lures as well: The inexpensive land. NV Energy’s willingness to strike deals to supply relatively low-cost electricity. Cool nighttime and winter temperatures, as far as American deserts go, which reduce the energy and water needs. The proximity to tech hubs such as Silicon Valley, which cuts latency for applications in which milliseconds matter. And the lack of natural disasters that could shut down the facilities, at least for the most part.

“We are high in seismic activity,” he says. “But everything else is good. We’re not going to have a tornado or flood or a devastating wildfire.”

Then there’s the generous tax policies.

In 2023, Novva, a Utah-based data center company, announced plans to build a 300,000-square-foot facility within the industrial business park.

Nevada doesn’t charge corporate income tax, and it has also enacted deep tax cuts specifically for data centers that set up shop in the state. That includes abatements of up to 75% on property tax for a decade or two—and nearly as much of a bargain on the sales and use taxes applied to equipment purchased for the facilities.

Data centers don’t require many permanent workers to run the operations, but the projects have created thousands of construction jobs. They’re also helping to diversify the region’s economy beyond casinos and generating tax windfalls for the state, counties, and cities, says Jeff Sutich, executive director of the Northern Nevada Development Authority. Indeed, just three data-center projects, developed by Apple, Google, and Vantage, will produce nearly half a billion dollars in tax revenue for Nevada, even with those generous abatements, according to the Nevada Governor’s Office of Economic Development.

The question is whether the benefits of data centers are worth the tradeoffs for Nevadans, given the public health costs, greenhouse-gas emissions, energy demands, and water strains.

The rain shadow

The Sierra Nevada’s granite peaks trace the eastern edge of California, forcing Pacific Ocean winds to rise and cool. That converts water vapor in the air into the rain and snow that fill the range’s tributaries, rivers, and lakes. 

But the same meteorological phenomenon casts a rain shadow over much of neighboring Nevada, forming an arid expanse known as the Great Basin Desert. The state receives about 10 inches of precipitation a year, roughly a third of the national average.

The Truckee River draws from the melting Sierra snowpack at the edge of Lake Tahoe, cascades down the range, and snakes through the flatlands of Reno and Sparks. It forks at the Derby Dam, a Reclamation Act project a few miles from the Tahoe Reno Industrial Center; the dam diverts water to a farming region farther east while allowing the rest to continue north toward Pyramid Lake.

Along the way, an engineered system of reservoirs, canals, and treatment plants diverts, stores, and releases water from the river, supplying businesses, cities, towns, and native tribes across the region. But Nevada’s population and economy are expanding, placing more demands on these resources even as they become more constrained.

The Truckee River, which originates at Lake Tahoe and terminates at Pyramid Lake, is the major water source for cities, towns, and farms across northwestern Nevada.
EMILY NAJERA

Throughout much of the 2020s, the state has suffered through one of the hottest and most widespread droughts on record, extending two decades of abnormally dry conditions across the American West. Some scientists fear it may constitute an emerging megadrought.

About 50% of Nevada currently faces moderate to exceptional drought conditions. In addition, more than half of the state’s hundreds of groundwater basins are already “over-appropriated,” meaning the water rights on paper exceed the levels believed to be underground. 

It’s not clear if climate change will increase or decrease the state’s rainfall levels, on balance. But precipitation patterns are expected to become more erratic, whiplashing between short periods of intense rainfall and more-frequent, extended, or severe droughts. 

In addition, more precipitation will fall as rain rather than snow, shortening the Sierra snow season by weeks to months over the coming decades. 

“In the extreme case, at the end of the century, that’s pretty much all of winter,” says Sean McKenna, executive director of hydrologic sciences at the Desert Research Institute, a research division of the Nevada System of Higher Education.

That loss will undermine an essential function of the Sierra snowpack: reliably delivering water to farmers and cities when it’s most needed in the spring and summer, across both Nevada and California. 

These shifting conditions will require the region to develop better ways to store, preserve, and recycle the water it does get, McKenna says. Northern Nevada’s cities, towns, and agencies will also need to carefully evaluate and plan for the collective impacts of continuing growth and development on the interconnected water system, particularly when it comes to water-hungry projects like data centers, he adds.

“We can’t consider each of these as a one-off, without considering that there may be tens or dozens of these in the next 15 years,” McKenna says.

Thirsty data centers

Data centers suck up water in two main ways.

As giant rooms of server racks process information and consume energy, they generate heat that must be shunted away to prevent malfunctions and damage to the equipment. The processing units optimized for training and running AI models often draw more electricity and, in turn, produce more heat.

To keep things cool, more and more data centers have turned to liquid cooling systems that don’t need as much electricity as fan cooling or air-conditioning.

These often rely on water to absorb heat and transfer it to outdoor cooling towers, where much of the moisture evaporates. Microsoft’s US data centers, for instance, could have directly evaporated nearly 185,000 gallons of “clean freshwater” in the course of training OpenAI’s GPT-3 large language model, according to a 2023 preprint study led by researchers at the University of California, Riverside. (The research has since been peer-reviewed and is awaiting publication.)

What’s less appreciated, however, is that the larger data-center drain on water generally occurs indirectly, at the power plants generating extra electricity for the turbocharged AI sector. These facilities, in turn, require more water to cool down equipment, among other purposes.

You have to add up both uses “to reflect the true water cost of data centers,” says Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside and coauthor of the study.

Ren estimates that the 12 data-center projects listed in NV Energy’s report would directly consume between 860 million gallons and 5.7 billion gallons a year, based on the requested electricity capacity. (“Consumed” here means the water is evaporated, not merely withdrawn and returned to the engineered water system.) The indirect water drain associated with electricity generation for those operations could add up to 15.5 billion gallons, based on the average consumption of the regional grid.
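
To see how figures like these follow from power demand, here is a minimal sketch of the arithmetic in Python. The utilization and water-intensity values below are illustrative assumptions chosen for this example, not parameters from Ren’s analysis; the point is only how quickly gigawatts of round-the-clock demand compound into billions of gallons.

```python
# Rough sketch of how requested data-center capacity translates into annual water use.
# The utilization and water-intensity values are illustrative assumptions, not
# parameters from the study described above.

HOURS_PER_YEAR = 8760
GALLONS_PER_LITER = 0.264172

requested_capacity_gw = 6.0                  # capacity requested from NV Energy
utilization = 0.9                            # assumed average load factor

onsite_liters_per_kwh = {"low": 0.05, "high": 0.45}   # assumed cooling-water intensity
offsite_liters_per_kwh = 1.1                 # assumed water consumed per kWh generated on the grid

annual_kwh = requested_capacity_gw * 1e6 * HOURS_PER_YEAR * utilization

for label, intensity in onsite_liters_per_kwh.items():
    gallons = annual_kwh * intensity * GALLONS_PER_LITER
    print(f"direct use ({label} assumption): {gallons / 1e9:.1f} billion gallons/year")

indirect = annual_kwh * offsite_liters_per_kwh * GALLONS_PER_LITER
print(f"indirect use (power generation): {indirect / 1e9:.1f} billion gallons/year")
```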

The exact water figures would depend on shifting climate conditions, the type of cooling systems each data center uses, and the mix of power sources that supply the facilities.

Solar power, which provides roughly a quarter of Nevada’s power, requires relatively little water to operate, for instance. But natural-gas plants, which generate about 56%, withdraw 2,803 gallons per megawatt-hour on average, according to the Energy Information Administration.

Geothermal plants, which produce about 10% of the state’s electricity by cycling water through hot rocks, generally consume less water than fossil fuel plants do but often require more water than other renewables, according to some research.

But here too, the water usage varies depending on the type of geothermal plant in question. Google has lined up several deals to partially power its data centers through Fervo Energy, which has helped to commercialize an emerging approach that injects water under high pressure to fracture rock and form wells deep below the surface. 

The company stresses that it doesn’t evaporate water for cooling and that it relies on brackish groundwater, not fresh water, to develop and run its plants. In a recent post, Fervo noted that its facilities consume significantly less water per megawatt-hour than coal, nuclear, or natural-gas plants do.

Part of NV Energy’s proposed plan to meet growing electricity demand in Nevada includes developing several natural-gas peaking units, adding more than one gigawatt of solar power, and installing another gigawatt of battery storage. It’s also forging ahead with a transmission project costing more than $4 billion.

But the company didn’t respond to questions concerning how it will supply all of the gigawatts of additional electricity requested by data centers, if the construction of those power plants will increase consumer rates, or how much water those facilities are expected to consume.

NV Energy operates a transmission line, substation, and power plant in or around the Tahoe Reno Industrial Center.
EMILY NAJERA

“NV Energy teams work diligently on our long-term planning to make investments in our infrastructure to serve new customers and the continued growth in the state without putting existing customers at risk,” the company said in a statement.

An added challenge is that data centers need to run around the clock. That will often compel utilities to develop new electricity-generating sources that can run nonstop as well, as natural-gas, geothermal, or nuclear plants do, says Emily Grubert, an associate professor of sustainable energy policy at the University of Notre Dame, who has studied the relative water consumption of electricity sources. 

“You end up with the water-intensive resources looking more important,” she adds.

Even if NV Energy and the companies developing data centers do strive to power them through sources with relatively low water needs, “we only have so much ability to add six gigawatts to Nevada’s grid,” Grubert explains. “What you do will never be system-neutral, because it’s such a big number.”

Securing supplies

On a mid-February morning, I meet TRI’s Thompson and Don Gilman, Lance Gilman’s son, at the Storey County offices, located within the industrial center. 

“I’m just a country boy who sells dirt,” Gilman, also a real estate broker, says by way of introduction. 

We climb into his large SUV and drive to a reservoir in the heart of the industrial park, filled nearly to the lip. 

Thompson explains that much of the water comes from an on-site treatment facility that filters waste fluids from companies in the park. In addition, tens of millions of gallons of treated effluent will also likely flow into the tank this year from the Truckee Meadows Water Authority Reclamation Facility, near the border of Reno and Sparks. That’s thanks to a 16-mile pipeline that the developers, the water authority, several tenants, and various local cities and agencies partnered to build, through a project that began in 2021.

“Our general improvement district is furnishing that water to tech companies here in the park as we speak,” Thompson says. “That helps preserve the precious groundwater, so that is an environmental feather in the cap for these data centers. They are focused on environmental excellence.”

The reservoir within the industrial business park provides water to data centers and other tenants.
EMILY NAJERA

But data centers often need drinking-quality water—not wastewater merely treated to irrigation standards—for evaporative cooling, “to avoid pipe clogs and/or bacterial growth,” the UC Riverside study notes. For instance, Google says its data centers withdrew about 7.7 billion gallons of water in 2023, and nearly 6 billion of those gallons were potable. 

Tenants in the industrial park can potentially gain access to water from the ground and the Truckee River as well. From early on, the master developers worked hard to secure permits for water sources, since water rights are nearly as precious as development entitlements to companies hoping to build projects in the desert.

Initially, the development company controlled a private business, the TRI Water and Sewer Company, that provided those services to the business park’s tenants, according to public documents. The company set up wells, a water tank, distribution lines, and a sewer disposal system. 

But in 2000, the board of county commissioners established a general improvement district, a legal mechanism for providing municipal services in certain parts of the state, to manage electricity and then water within the center. It, in turn, hired TRI Water and Sewer as the operating company.

As of its 2020 service plan, the general improvement district held permits for nearly 5,300 acre-feet of groundwater, “which can be pumped from well fields within the service area and used for new growth as it occurs.” The document lists another 2,000 acre-feet per year available from the on-site treatment facility, 1,000 from the Truckee River, and 4,000 more from the effluent pipeline. 

Those figures haven’t budged much since, according to Shari Whalen, general manager of the TRI General Improvement District. All told, they add up to more than 4 billion gallons of water per year for all the needs of the industrial park and the tenants there, data centers and otherwise.
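
For readers checking the math, the conversion is straightforward; the sketch below simply totals the approximate figures from the 2020 service plan, at about 325,851 gallons per acre-foot.

```python
# Totaling the district's listed supplies (approximate figures from the 2020 service plan).
GALLONS_PER_ACRE_FOOT = 325_851

supplies_acre_feet = {
    "groundwater permits": 5_300,
    "on-site treatment facility": 2_000,
    "Truckee River": 1_000,
    "effluent pipeline": 4_000,
}

total_af = sum(supplies_acre_feet.values())
total_gallons = total_af * GALLONS_PER_ACRE_FOOT
print(f"{total_af:,} acre-feet/year is roughly {total_gallons / 1e9:.1f} billion gallons/year")
```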

Whalen says that the amount and quality of water required for any given data center depends on its design, and that those matters are worked out on a case-by-case basis. 

When asked if the general improvement district is confident that it has adequate water resources to supply the needs of all the data centers under development, as well as other tenants at the industrial center, she says: “They can’t just show up and build unless they have water resources designated for their projects. We wouldn’t approve a project if it didn’t have those water resources.”

Water battles

As the region’s water sources have grown more constrained, lining up supplies has become an increasingly high-stakes and controversial business.

More than a century ago, the US federal government filed a lawsuit against an assortment of parties pulling water from the Truckee River. The suit would eventually establish that the Pyramid Lake Paiute Tribe’s legal rights to water for irrigation superseded other claims. But the tribe has been fighting to protect those rights and increase flows from the river ever since, arguing that increasing strains on the watershed from upstream cities and businesses threaten to draw away water reserved for reservation farming, decrease lake levels, and harm native fish.

The Pyramid Lake Paiute Tribe considers the water body and its fish, including the endangered cui-ui and threatened Lahontan cutthroat trout, to be essential parts of its culture, identity, and way of life. The tribe was originally named Cui-ui Ticutta, which translates to cui-ui eaters. The lake continues to provide sustenance as well as business for the tribe and its members, a number of whom operate boat charters and fishing guide services.

“It’s completely tied into us as a people,” says Steven Wadsworth, chairman of the Pyramid Lake Paiute Tribe.

“That is what has sustained us all this time,” he adds. “It’s just who we are. It’s part of our spiritual well-being.”

Steven Wadsworth, chairman of the Pyramid Lake Paiute Tribe, fears that data centers will divert water that would otherwise reach the tribe’s namesake lake.
EMILY NAJERA

In recent decades, the tribe has sued the Nevada State Engineer, Washoe County, the federal government, and others for overallocating water rights and endangering the lake’s fish. It also protested the TRI General Improvement District’s applications to draw thousands of additional acre‑feet of groundwater from a basin near the business park. In 2019, the State Engineer’s office rejected those requests, concluding that the basin was already fully appropriated. 

More recently, the tribe took issue with the plan to build the pipeline and divert effluent that would otherwise have flowed into the Truckee, securing an agreement that required the Truckee Meadows Water Authority and other parties to add back several thousand acre-feet of water to the river.

Whalen says she’s sensitive to Wadsworth’s concerns. But she says that the pipeline promises to keep a growing amount of treated wastewater out of the river, where it could otherwise contribute to rising salt levels in the lake.

“I think that the pipeline from [the Truckee Meadows Water Authority] to our system is good for water quality in the river,” she says. “I understand philosophically the concerns about data centers, but the general improvement district is dedicated to working with everyone on the river for regional water-resource planning—and the tribe is no exception.”

Water efficiency 

In an email, Thompson added that he has “great respect and admiration” for the tribe and has visited the reservation several times in an effort to help bring industrial or commercial development there.

He stressed that all of the business park’s groundwater was “validated by the State Water Engineer,” and that the rights to surface water and effluent were purchased “for fair market value.”

During the earlier interview at the industrial center, he and Gilman had both expressed confidence that tenants in the park have adequate water supplies, and that the businesses won’t draw water away from other areas. 

“We’re in our own aquifer, our own water basin here,” Thompson said. “You put a straw in the ground here, you’re not going to pull water from Fernley or from Reno or from Silver Springs.”

Gilman also stressed that data-center companies have gotten more water efficient in recent years, echoing a point others made as well.

“With the newer technology, it’s not much of a worry,” says Sutich, of the Northern Nevada Development Authority. “The technology has come a long way in the last 10 years, which is really giving these guys the opportunity to be good stewards of water usage.”

An aerial view of the cooling tower fans at Google’s data center in the Tahoe Reno Industrial Center.
GOOGLE

Indeed, Google’s existing Storey County facility is air-cooled, according to the company’s latest environmental report. The data center withdrew 1.9 million gallons in 2023 but only consumed 200,000 gallons. The rest cycles back into the water system.

Google said all the data centers under construction on its campus will also “utilize air-cooling technology.” The company didn’t respond to a question about the scale of its planned expansion in the Tahoe Reno Industrial Center, and referred a question about indirect water consumption to NV Energy.

The search giant has stressed that it strives to be water efficient across all of its data centers, and decides whether to use air or liquid cooling based on local supply and projected demand, among other variables.

Four years ago, the company set a goal of replenishing more water than it consumes by 2030. Locally, it also committed to provide half a million dollars to the National Forest Foundation to improve the Truckee River watershed and reduce wildfire risks. 

Microsoft clearly suggested in earlier news reports that the Silver Springs land it purchased around the end of 2022 would be used for a data center. NAI Alliance’s market real estate report identifies that lot, as well as the parcel Microsoft purchased within the Tahoe Reno Industrial Center, as data center sites.

But the company now declines to specify what it intends to build in the region. 

“While the land purchase is public knowledge, we have not disclosed specific details [of] our plans for the land or potential development timelines,” wrote Donna Whitehead, a Microsoft spokesperson, in an email. 

Workers have begun grading land inside a fenced-off lot within the Tahoe Reno Industrial Center.
EMILY NAJERA

Microsoft has also scaled down its global data-center ambitions, backing away from several projects in recent months amid shifting economic conditions, according to various reports.

Whatever it ultimately does or doesn’t build, the company stresses that it has made strides to reduce water consumption in its facilities. Late last year, the company announced that it’s using “chip-level cooling solutions” in data centers, which continually circulate water between the servers and chillers through a closed loop that the company claims doesn’t lose any water to evaporation. It says the design requires only a “nominal increase” in energy compared to its data centers that rely on evaporative water cooling.

Others seem to be taking a similar approach. EdgeCore also said its 900,000-square-foot data center at the Tahoe Reno Industrial Center will rely on an “air-cooled closed-loop chiller” that doesn’t require water evaporation for cooling. 

But some of the companies seem to have taken steps to ensure access to significant amounts of water. Switch, for instance, took a lead role in developing the effluent pipeline. In addition, Tract, which develops campuses on which third-party data centers can build their own facilities, has said it lined up more than 1,100 acre-feet of water rights, the equivalent of nearly 360 million gallons a year. 

Apple, Novva, Switch, Tract, and Vantage didn’t respond to inquiries from MIT Technology Review.

Coming conflicts 

The suggestion that companies aren’t straining water supplies when they adopt air cooling is, in many cases, akin to saying they’re not responsible for the greenhouse gases produced through their power use simply because the emissions occur outside their facilities. In fact, the additional water used at a power plant to meet the increased electricity needs of air cooling may exceed any gains at the data center, Ren, of UC Riverside, says.

“That’s actually very likely, because it uses a lot more energy,” he adds.

That means that some of the companies developing data centers in and around Storey County may simply hand off their water challenges to other parts of Nevada or neighboring states across the drying American West, depending on where and how the power is generated, Ren says. 

Google has said its air-cooled facilities require about 10% more electricity, and its environmental report notes that the Storey County facility is one of its two least-energy-efficient data centers. 

Pipes running along Google’s data center campus help the search company cool its servers.
GOOGLE

Some fear there’s also a growing mismatch between what Nevada’s water permits allow, what’s actually in the ground, and what nature will provide as climate conditions shift. Notably, the groundwater committed to all parties from the Tracy Segment basin—a long-fought-over resource that partially supplies the TRI General Improvement District—already exceeds the “perennial yield.” That refers to the maximum amount that can be drawn out every year without depleting the reservoir over the long term.

“If pumping does ultimately exceed the available supply, that means there will be conflict among users,” Roerink, of the Great Basin Water Network, said in an email. “So I have to wonder: Who could be suing whom? Who could be buying out whom? How will the tribe’s rights be defended?”

The Truckee Meadows Water Authority, the community-owned utility that manages the water system for Reno and Sparks, said it is planning carefully for the future and remains confident there will be “sufficient resources for decades to come,” at least within its territory east of the industrial center.

Storey County, the Truckee-Carson Irrigation District, and the State Engineer’s office didn’t respond to questions or accept interview requests. 

Open for business

As data center proposals have begun shifting into Northern Nevada’s cities, more local residents and organizations have begun to take notice and express concerns. The regional division of the Sierra Club, for instance, recently sought to overturn the approval of Reno’s first data center, about 20 miles west of the Tahoe Reno Industrial Center. 

Olivia Tanager, director of the Sierra Club’s Toiyabe Chapter, says the environmental organization was shocked by the projected electricity demands from data centers highlighted in NV Energy’s filings.

Nevada’s wild horses are a common sight along USA Parkway, the highway cutting through the industrial business park. 
EMILY NAJERA

“We have increasing interest in understanding the impact that data centers will have to our climate goals, to our grid as a whole, and certainly to our water resources,” she says. “The demands are extraordinary, and we don’t have that amount of water to toy around with.”

During a city hall hearing in January that stretched late into the evening, she and a line of residents raised concerns about the water, energy, climate, and employment impacts of AI data centers. At the end, though, the city council upheld the planning department’s approval of the project, on a 5-2 vote.

“Welcome to Reno,” Kathleen Taylor, Reno’s vice mayor, said before casting her vote. “We’re open for business.”

Where the river ends

In late March, I walk alongside Chairman Wadsworth, of the Pyramid Lake Paiute Tribe, on the shores of Pyramid Lake, watching a row of fly-fishers in waders cast their lines into the cold waters. 

The lake is the largest remnant of Lake Lahontan, an Ice Age inland sea that once stretched across western Nevada and would have submerged present-day Reno. But as the climate warmed, the lapping waters retreated, etching erosional terraces into the mountainsides and exposing tufa deposits around the lake, large formations of porous rock made of calcium carbonate. That includes the pyramid-shaped island on the eastern shore that inspired the lake’s name.

A lone angler stands along the shores of Pyramid Lake.

In the decades after the US Reclamation Service completed the Derby Dam in 1905, Pyramid Lake declined another 80 feet and nearby Winnemucca Lake dried up entirely.

“We know what happens when water use goes unchecked,” says Wadsworth, gesturing eastward toward the range across the lake, where Winnemucca once filled the next basin over. “Because all we have to do is look over there and see a dry, barren lake bed that used to be full.”

In an earlier interview, Wadsworth acknowledged that the world needs data centers. But he argued they should be spread out across the country, not densely clustered in the middle of the Nevada desert.

Given the fierce competition for resources up to now, he can’t imagine how there could be enough water to meet the demands of data centers, expanding cities, and other growing businesses without straining the limited local supplies that should, by his accounting, flow to Pyramid Lake.

He fears these growing pressures will force the tribe to wage new legal battles to protect their rights and preserve the lake, extending what he refers to as “a century of water wars.”

“We have seen the devastating effects of what happens when you mess with Mother Nature,” Wadsworth says. “Part of our spirit has left us. And that’s why we fight so hard to hold on to what’s left.”

Everything you need to know about estimating AI’s energy and emissions burden

When we set out to write a story on the best available estimates for AI’s energy and emissions burden, we knew there would be caveats and uncertainties to these numbers. But, we quickly discovered, the caveats are the story too. 


Measuring the energy used by an AI model is not like evaluating a car’s fuel economy or an appliance’s energy rating. There’s no agreed-upon method or public database of values. There are no regulators who enforce standards, and consumers don’t get the chance to evaluate one model against another. 

Despite the fact that billions of dollars are being poured into reshaping energy infrastructure around the needs of AI, no one has settled on a way to quantify AI’s energy usage. Worse, companies are generally unwilling to disclose their own piece of the puzzle. There are also limitations to estimating the emissions associated with that energy demand, because the grid hosts a complicated, ever-changing mix of energy sources. 

It’s a big mess, basically. So, that said, here are the many variables, assumptions, and caveats that we used to calculate the consequences of an AI query. (You can see the full results of our investigation here.)

Measuring the energy a model uses

Companies like OpenAI, dealing in “closed-source” models, generally offer access to their systems through an interface where you input a question and receive an answer. What happens in between—which data center in the world processes your request, the energy it takes to do so, and the carbon intensity of the energy sources used—remains a secret, knowable only to the companies. There are few incentives for them to release this information, and so far, most have not.

That’s why, for our analysis, we looked at open-source models. They serve as a very imperfect proxy but the best one we have. (OpenAI, Microsoft, and Google declined to share specifics on how much energy their closed-source models use.) 

The best resources for measuring the energy consumption of open-source AI models are AI Energy Score, ML.Energy, and MLPerf Power. The team behind ML.Energy assisted us with our text and image model calculations, and the team behind AI Energy Score helped with our video model calculations.

Text models

AI models use up energy in two phases: when they initially learn from vast amounts of data, called training, and when they respond to queries, called inference. When ChatGPT was launched a few years ago, training was the focus, as tech companies raced to keep up and build ever-bigger models. But now, inference is where the most energy is used.

The most accurate way to understand how much energy an AI model uses in the inference stage is to directly measure the amount of electricity used by the server handling the request. Servers contain all sorts of components—powerful chips called GPUs that do the bulk of the computing, other chips called CPUs, fans to keep everything cool, and more. Researchers typically measure the amount of power the GPU draws and estimate the rest (more on this shortly). 

To do this, we turned to PhD candidate Jae-Won Chung and associate professor Mosharaf Chowdhury at the University of Michigan, who lead the ML.Energy project. Once we collected figures for different models’ GPU energy use from their team, we had to estimate how much energy is used for other processes, like cooling. We examined research literature, including a 2024 paper from Microsoft, to understand how much of a server’s total energy demand GPUs are responsible for. It turns out to be about half. So we took the team’s GPU energy estimate and doubled it to get a sense of total energy demands. 
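
In code, that adjustment amounts to a single division. The sketch below uses a placeholder GPU figure, not one of ML.Energy’s measurements.

```python
# GPUs account for roughly half of a server's energy during inference, so whole-server
# energy is approximated as the measured GPU energy divided by that share.
GPU_SHARE_OF_SERVER_ENERGY = 0.5

def server_energy_wh(gpu_energy_wh: float) -> float:
    """Estimate total server energy from measured GPU energy."""
    return gpu_energy_wh / GPU_SHARE_OF_SERVER_ENERGY

measured_gpu_wh = 1.2   # placeholder GPU energy for one prompt, in watt-hours
print(f"estimated server energy: {server_energy_wh(measured_gpu_wh):.1f} Wh")   # 2.4 Wh
```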

The ML.Energy team uses a batch of 500 prompts from a larger dataset to test models. The hardware is kept the same throughout; the GPU is a popular Nvidia chip called the H100. We decided to focus on models of three sizes from the Meta Llama family: small (8 billion parameters), medium (70 billion), and large (405 billion). We also identified a selection of prompts to test. We compared these with the averages for the entire batch of 500 prompts. 

Image models

Stable Diffusion 3 from Stability AI is one of the most commonly used open-source image-generating models, so we made it our focus. Though we tested multiple sizes of the text-based Meta Llama model, we focused on one of the most popular sizes of Stable Diffusion 3, with 2 billion parameters. 

The ML.Energy team uses a dataset of example prompts to test a model’s energy requirements. Though the energy used by large language models is determined partially by the prompt, this isn’t true for diffusion models. Diffusion models can be programmed to go through a prescribed number of “denoising steps” when they generate an image or video, with each step being an iteration of the algorithm that adds more detail to the image. For a given step count and model, all images generated have the same energy footprint.

The more steps, the higher quality the end result—but the more energy used. Numbers of steps vary by model and application, but 25 is pretty common, and that’s what we used for our standard quality. For higher quality, we used 50 steps. 

We mentioned that GPUs are usually responsible for about half of the energy demands of large language model requests. There is not sufficient research to know how this changes for diffusion models that generate images and videos. In the absence of a better estimate, and after consulting with researchers, we opted to stick with this 50% rule of thumb for images and videos too.
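
Putting those pieces together, the per-image estimate is just per-step GPU energy times the step count, scaled up for the rest of the server. The sketch below uses a placeholder per-step value, not a measured figure for Stable Diffusion 3.

```python
# Per-image energy: per-step GPU energy x number of denoising steps, scaled up
# because GPUs account for roughly half of server energy. The per-step value is
# a placeholder, not a measurement of Stable Diffusion 3.

def image_energy_wh(gpu_wh_per_step: float, steps: int, gpu_share: float = 0.5) -> float:
    return gpu_wh_per_step * steps / gpu_share

placeholder_wh_per_step = 0.05
print(f"standard quality (25 steps): {image_energy_wh(placeholder_wh_per_step, 25):.1f} Wh")
print(f"higher quality (50 steps):   {image_energy_wh(placeholder_wh_per_step, 50):.1f} Wh")
```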

Video models

Chung and Chowdhury do test video models, but only ones that generate short, low-quality GIFs. We don’t think the videos these models produce mirror the fidelity of the AI-generated video that many people are used to seeing. 

Instead, we turned to Sasha Luccioni, the AI and climate lead at Hugging Face, who directs the AI Energy Score project. She measures the energy used by the GPU during AI requests. We chose two versions of the CogVideoX model to test: an older, lower-quality version and a newer, higher-quality one. 

We asked Luccioni to use her tool, called CodeCarbon, to test both and measure the results of a batch of video prompts we selected, using the same hardware as our text and image tests to keep as many variables as possible the same. She reported the GPU energy demands, which we again doubled to estimate total energy demands. 

Tracing where that energy comes from

After we understand how much energy it takes to respond to a query, we can translate that into the total emissions impact. Doing so requires looking at the power grid from which data centers draw their electricity. 

Nailing down the climate impact of the grid can be complicated, because it’s both interconnected and incredibly local. Imagine the grid as a system of connected canals and pools of water. Power plants add water to the canals, and electricity users, or loads, siphon it out. In the US, grid interconnections stretch all the way across the country. So, in a way, we’re all connected, but we can also break the grid up into its component pieces to get a sense for how energy sources vary across the country. 

Understanding carbon intensity

The key metric to understand here is called carbon intensity, which is basically a measure of how many grams of carbon dioxide pollution are released for every kilowatt-hour of electricity that’s produced. 

To get carbon intensity figures, we reached out to Electricity Maps, a Danish startup that gathers data on grids around the world. The team collects information from sources including governments and utilities and uses it to publish historical and real-time estimates of the carbon intensity of the grid. You can find more about its methodology here.

The company shared with us historical data from 2024, both for the entire US and for a few key balancing authorities (more on this in a moment). After discussions with Electricity Maps founder Olivier Corradi and other experts, we made a few decisions about which figures we would use in our calculations. 

One way to measure carbon intensity is to simply look at all the power plants that are operating on the grid, add up the pollution they’re producing at the moment, and divide that total by the electricity they’re producing. But that doesn’t account for the emissions that are associated with building and tearing down power plants, which can be significant. So we chose to use carbon intensity figures that account for the whole life cycle of a power plant. 

We also chose to use the consumption-based carbon intensity of energy rather than production-based. This figure accounts for imports and exports moving between different parts of the grid and best represents the electricity that’s being used, in real time, within a given region. 
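
A toy, single-region version of that accounting is sketched below: consumption-based intensity works out to a weighted average of local generation and imports. Electricity Maps’ actual methodology traces flows across many interconnected zones, and the numbers here are invented.

```python
# Toy single-zone example of consumption-based carbon intensity: a weighted
# average of local generation and imports. All numbers here are invented;
# Electricity Maps' real methodology traces flows across many zones.

local_generation_mwh = 800.0
local_intensity = 350.0          # gCO2e per kWh, life-cycle, for local generation

imports = [                      # (energy in MWh, life-cycle intensity in gCO2e/kWh)
    (150.0, 600.0),              # e.g. a gas-heavy neighbor
    (50.0, 100.0),               # e.g. a hydro-heavy neighbor
]

total_energy = local_generation_mwh + sum(mwh for mwh, _ in imports)
total_emissions = local_generation_mwh * local_intensity + sum(mwh * ci for mwh, ci in imports)

print(f"consumption-based intensity: {total_emissions / total_energy:.0f} gCO2e/kWh")   # 375
```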

For most of the calculations you see in the story, we used the average carbon intensity for the US for 2024, according to Electricity Maps, which is 402.49 grams of carbon dioxide equivalent per kilowatt-hour. 

Understanding balancing authorities

While understanding the picture across the entire US can be helpful, the grid can look incredibly different in different locations. 

One way we can break things up is by looking at balancing authorities. These are independent bodies responsible for grid balancing in a specific region. They operate mostly independently, though there’s a constant movement of electricity between them as well. There are 66 balancing authorities in the US, and we can calculate a carbon intensity for the part of the grid encompassed by a specific balancing authority.

Electricity Maps provided carbon intensity figures for a few key balancing authorities, and we focused on several that play the largest roles in data center operations. ERCOT (which covers most of Texas) and PJM (a cluster of states on the East Coast, including Virginia, Pennsylvania, and New Jersey) are two of the regions with the largest burden of data centers, according to research from the Harvard School of Public Health.

We added CAISO (in California) because it covers the most populated state in the US. CAISO also manages a grid with a significant number of renewable energy sources, making it a good example of how carbon intensity can change drastically depending on the time of day. (In the middle of the day, solar tends to dominate, while natural gas plays a larger role overnight, for example.)

One key caveat here is that we’re not entirely sure where companies tend to send individual AI inference requests. There are clusters of data centers in the regions we chose as examples, but when you use a tech giant’s AI model, your request could be handled by any number of data centers owned or contracted by the company. One reasonable approximation is location: It’s likely that the data center servicing a request is close to where it’s being made, so a request on the West Coast might be most likely to be routed to a data center on that side of the country. 

Explaining what we found

To better contextualize our calculations, we introduced a few comparisons people might be more familiar with than kilowatt-hours and grams of carbon dioxide. In a few places, we took the amount of electricity estimated to be used by a model and calculated how long that electricity would be able to power a standard microwave, as well as how far it might take someone on an e-bike. 

In the case of the e-bike, we assumed an efficiency of 25 watt-hours per mile, which falls in the range of frequently cited efficiencies for a pedal-assisted bike. For the microwave, we assumed an 800-watt model, which falls within the average range in the US. 

We also introduced a comparison to contextualize greenhouse-gas emissions: miles driven in a gas-powered car. For this, we used data from the US Environmental Protection Agency, which puts the weighted-average emissions of US vehicles in 2022 at 393 grams of carbon dioxide equivalent per mile.
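
Strung together, those conversions are simple unit arithmetic. The sketch below applies them to a hypothetical two-watt-hour query; that input is a placeholder, not one of our measured results.

```python
# Converting a per-query energy estimate into the comparisons used in this story.
# The 2 Wh query figure is a placeholder, not one of our measured results.

US_AVG_INTENSITY_G_PER_KWH = 402.49   # Electricity Maps, US average for 2024
MICROWAVE_WATTS = 800                 # typical US microwave
EBIKE_WH_PER_MILE = 25                # pedal-assisted e-bike
CAR_G_CO2E_PER_MILE = 393             # EPA weighted average for 2022

query_wh = 2.0

microwave_seconds = query_wh / MICROWAVE_WATTS * 3600
ebike_miles = query_wh / EBIKE_WH_PER_MILE
emissions_g = query_wh / 1000 * US_AVG_INTENSITY_G_PER_KWH
car_feet = emissions_g / CAR_G_CO2E_PER_MILE * 5280

print(f"{query_wh} Wh runs an {MICROWAVE_WATTS}-watt microwave for {microwave_seconds:.0f} seconds,")
print(f"powers an e-bike for {ebike_miles:.2f} miles, and emits {emissions_g:.2f} g CO2e,")
print(f"about the same as driving {car_feet:.0f} feet in an average gas car.")
```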

Predicting how much energy AI will use in the future

After measuring the energy demand of an individual query and the emissions it generated, it was time to estimate how all of this added up to national demand. 

There are two ways to do this. In a bottom-up analysis, you estimate how many individual queries there are, calculate the energy demands of each, and add them up to determine the total. For a top-down look, you estimate how much energy all data centers are using by looking at larger trends. 

Bottom-up is particularly difficult, because, once again, closed-source companies do not share such information and declined to talk specifics with us. While we can make some educated guesses to give us a picture of what might be happening right now, looking into the future is perhaps better served by taking a top-down approach.

This data is scarce as well. The most important report was published in December by the Lawrence Berkeley National Laboratory, which is funded by the Department of Energy, and the report authors noted that it’s only the third such report released in the last 20 years. Academic climate and energy researchers we spoke with said it’s a major problem that AI is not considered its own economic sector for emissions measurements, and there aren’t rigorous reporting requirements. As a result, it’s difficult to track AI’s climate toll. 

Still, we examined the report’s results, compared them with other findings and estimates, and consulted independent experts about the data. While much of the report was about data centers more broadly, we drew out data points that were specific to the future of AI. 

Company goals

We wanted to contrast these figures with the amounts of energy that AI companies themselves say they need. To do so, we collected reports by leading tech and AI companies about their plans for energy and data center expansions, as well as the dollar amounts they promised to invest. Where possible, we fact-checked the promises made in these claims. (Meta and Microsoft’s pledges to use more nuclear power, for example, would indeed reduce the carbon emissions of the companies, but it will take years, if not decades, for these additional nuclear plants to come online.) 

Requests to companies

We submitted requests to Microsoft, Google, and OpenAI to have data-driven conversations about their models’ energy demands for AI inference. None of the companies made executives or leadership available for on-the-record interviews about their energy usage.

This story was supported by a grant from the Tarbell Center for AI Journalism.

Four reasons to be optimistic about AI’s energy usage

The day after his inauguration in January, President Donald Trump announced Stargate, a $500 billion initiative to build out AI infrastructure, backed by some of the biggest companies in tech. Stargate aims to accelerate the construction of massive data centers and electricity networks across the US to ensure it keeps its edge over China.


The whatever-it-takes approach to the race for worldwide AI dominance was the talk of Davos, says Raquel Urtasun, founder and CEO of the Canadian robotruck startup Waabi, referring to the World Economic Forum’s annual January meeting in Switzerland, which was held the same week as Trump’s announcement. “I’m pretty worried about where the industry is going,” Urtasun says. 

She’s not alone. “Dollars are being invested, GPUs are being burned, water is being evaporated—it’s just absolutely the wrong direction,” says Ali Farhadi, CEO of the Seattle-based nonprofit Allen Institute for AI.

But sift through the talk of rocketing costs—and climate impact—and you’ll find reasons to be hopeful. There are innovations underway that could improve the efficiency of the software behind AI models, the computer chips those models run on, and the data centers where those chips hum around the clock.

Here’s what you need to know about how energy use, and therefore carbon emissions, could be cut across all three of those domains, plus an added argument for cautious optimism: There are reasons to believe that the underlying business realities will ultimately bend toward more energy-efficient AI.

1/ More efficient models

The most obvious place to start is with the models themselves—the way they’re created and the way they’re run.

AI models are built by training neural networks on lots and lots of data. Large language models are trained on vast amounts of text, self-driving models are trained on vast amounts of driving data, and so on.

But the way such data is collected is often indiscriminate. Large language models are trained on data sets that include text scraped from most of the internet and huge libraries of scanned books. The practice has been to grab everything that’s not nailed down, throw it into the mix, and see what comes out. This approach has certainly worked, but training a model on a massive data set over and over so it can extract relevant patterns by itself is a waste of time and energy.

There might be a more efficient way. Children aren’t expected to learn just by reading everything that’s ever been written; they are given a focused curriculum. Urtasun thinks we should do something similar with AI, training models with more curated data tailored to specific tasks. (Waabi trains its robotrucks inside a superrealistic simulation that allows fine-grained control of the virtual data its models are presented with.)

It’s not just Waabi. Writer, an AI startup that builds large language models for enterprise customers, claims that its models are cheaper to train and run in part because it trains them using synthetic data. Feeding its models bespoke data sets rather than larger but less curated ones makes the training process quicker (and therefore less expensive). For example, instead of simply downloading Wikipedia, the team at Writer takes individual Wikipedia pages and rewrites their contents in different formats—as a Q&A instead of a block of text, and so on—so that its models can learn more from less.

Training is just the start of a model’s life cycle. As models have become bigger, they have become more expensive to run. So-called reasoning models that work through a query step by step before producing a response are especially power-hungry because they compute a series of intermediate subresponses for each response. The price tag of these new capabilities is eye-watering: OpenAI’s o3 reasoning model has been estimated to cost up to $30,000 per task to run.  

But this technology is only a few months old and still experimental. Farhadi expects that these costs will soon come down. For example, engineers will figure out how to stop reasoning models from going too far down a dead-end path before they determine it’s not viable. “The first time you do something it’s way more expensive, and then you figure out how to make it smaller and more efficient,” says Farhadi. “It’s a fairly consistent trend in technology.”

One way to get performance gains without big jumps in energy consumption is to run inference steps (the computations a model makes to come up with its response) in parallel, he says. Parallel computing underpins much of today’s software, especially large language models (GPUs are parallel by design). Even so, the basic technique could be applied to a wider range of problems. By splitting up a task and running different parts of it at the same time, parallel computing can generate results more quickly. It can also save energy by making more efficient use of available hardware. But it requires clever new algorithms to coordinate the multiple subtasks and pull them together into a single result at the end. 
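
A miniature example of that idea: the sketch below fans three independent subqueries out to a thread pool and merges the results. Here answer_subquery is a hypothetical stand-in for a real model call; the coordination is the point.

```python
# Minimal illustration of splitting a request into independent subtasks, running
# them in parallel, and merging the results. answer_subquery stands in for a real
# model call; the coordination logic is the point.
from concurrent.futures import ThreadPoolExecutor

def answer_subquery(subquery: str) -> str:
    # Placeholder for an inference call that could run on its own hardware.
    return f"result for {subquery!r}"

def answer(query: str, subqueries: list[str]) -> str:
    with ThreadPoolExecutor(max_workers=len(subqueries)) as pool:
        partial_results = list(pool.map(answer_subquery, subqueries))
    # The merge step is where the "clever new algorithms" come in.
    return f"{query}: " + "; ".join(partial_results)

print(answer("summarize the report", ["section 1", "section 2", "section 3"]))
```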

The largest, most powerful models won’t be used all the time, either. There is a lot of talk about small models, versions of large language models that have been distilled into pocket-size packages. In many cases, these more efficient models perform as well as larger ones, especially for specific use cases.

As businesses figure out how large language models fit their needs (or not), this trend toward more efficient bespoke models is taking off. You don’t need an all-purpose LLM to manage inventory or to respond to niche customer queries. “There’s going to be a really, really large number of specialized models, not one God-given model that solves everything,” says Farhadi.

Christina Shim, chief sustainability officer at IBM, is seeing this trend play out in the way her clients adopt the technology. She works with businesses to make sure they choose the smallest and least power-hungry models possible. “It’s not just the biggest model that will give you a big bang for your buck,” she says. A smaller model that does exactly what you need is a better investment than a larger one that does the same thing: “Let’s not use a sledgehammer to hit a nail.”

2/ More efficient computer chips

As the software becomes more streamlined, the hardware it runs on will become more efficient too. There’s a tension at play here: In the short term, chipmakers like Nvidia are racing to develop increasingly powerful chips to meet demand from companies wanting to run increasingly powerful models. But in the long term, this race isn’t sustainable.

“The models have gotten so big, even running the inference step now starts to become a big challenge,” says Naveen Verma, cofounder and CEO of the upstart microchip maker EnCharge AI.

Companies like Microsoft and OpenAI are losing money running their models inside data centers to meet the demand from millions of people. Smaller models will help. Another option is to move the computing out of the data centers and into people’s own machines.

That’s something that Microsoft tried with its Copilot+ PC initiative, in which it marketed a supercharged PC that would let you run an AI model (and cover the energy bills) yourself. It hasn’t taken off, but Verma thinks the push will continue because companies will want to offload as much of the costs of running a model as they can.

But getting AI models (even small ones) to run reliably on people’s personal devices will require a step change in the chips that typically power those devices. Those chips have to become even more energy efficient, because they must be able to run on battery power alone, says Verma.

That’s where EnCharge comes in. Its solution is a new kind of chip that ditches digital computation in favor of something called analog in-memory computing. Instead of representing information with binary 0s and 1s, like the electronics inside conventional, digital computer chips, the electronics inside analog chips can represent information along a range of values in between 0 and 1. In theory, this lets you do more with the same amount of power. 

SHIWEN SVEN WANG

EnCharge was spun out from Verma’s research lab at Princeton in 2022. “We’ve known for decades that analog compute can be much more efficient—orders of magnitude more efficient—than digital,” says Verma. But analog computers never worked well in practice because they made lots of errors. Verma and his colleagues have discovered a way to do analog computing that’s precise.

EnCharge is focusing just on the core computation required by AI today. With support from semiconductor giants like TSMC, the startup is developing hardware that performs high-dimensional matrix multiplication (the basic math behind all deep-learning models) in an analog chip and then passes the result back out to the surrounding digital computer.
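
For a sense of what that core computation looks like, here is the same operation in ordinary digital NumPy: a single neural-network layer boils down to one large matrix multiplication. This is purely illustrative and has nothing to do with EnCharge’s analog hardware itself.

```python
# One neural-network layer is, at its core, a big matrix multiplication:
# activations (batch x inputs) times weights (inputs x outputs), plus a bias.
import numpy as np

batch, n_in, n_out = 32, 4096, 4096
activations = np.random.randn(batch, n_in).astype(np.float32)
weights = np.random.randn(n_in, n_out).astype(np.float32)
bias = np.zeros(n_out, dtype=np.float32)

outputs = activations @ weights + bias  # the operation analog chips aim to accelerate
print(outputs.shape)                    # (32, 4096)
```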

EnCharge’s hardware is just one of a number of experimental new chip designs on the horizon. IBM and others have been exploring something called neuromorphic computing for years. The idea is to design computers that mimic the brain’s super-efficient processing powers. Another path involves optical chips, which swap out the electrons in a traditional chip for light, again cutting the energy required for computation. None of these designs yet come close to competing with the electronic digital chips made by the likes of Nvidia. But as the demand for efficiency grows, such alternatives will be waiting in the wings. 

It is also not just chips that can be made more efficient. A lot of the energy inside computers is spent passing data back and forth. IBM says that it has developed a new kind of optical switch, a device that controls digital traffic, that is 80% more efficient than previous switches.   

3/ More efficient cooling in data centers

Another huge source of energy demand is the need to manage the waste heat produced by the high-end hardware on which AI models run. Tom Earp, engineering director at the design firm Page, has been building data centers since 2006, including a six-year stint doing so for Meta. Earp looks for efficiencies in everything from the structure of the building to the electrical supply, the cooling systems, and the way data is transferred in and out.

For a decade or more, as Moore’s Law tailed off, data-center designs were pretty stable, says Earp. And then everything changed. With the shift to processors like GPUs, and with even newer chip designs on the horizon, it is hard to predict what kind of hardware a new data center will need to house—and thus what energy demands it will have to support—in a few years’ time. But in the short term the safe bet is that chips will continue getting faster and hotter: “What I see is that the people who have to make these choices are planning for a lot of upside in how much power we’re going to need,” says Earp.

One thing is clear: The chips that run AI models, such as GPUs, require more power per unit of space than previous types of computer chips. And that has big knock-on implications for the cooling infrastructure inside a data center. “When power goes up, heat goes up,” says Earp.

With so many high-powered chips squashed together, air cooling (big fans, in other words) is no longer sufficient. Water has become the go-to coolant because it is better than air at whisking heat away. That’s not great news for local water sources around data centers. But there are ways to make water cooling more efficient.

One option is to use water to send the waste heat from a data center to places where it can be used. In Denmark water from data centers has been used to heat homes. In Paris, during the Olympics, it was used to heat swimming pools.  

Water can also serve as a type of battery. Energy generated from renewable sources, such as wind turbines or solar panels, can be used to chill water that is stored until it is needed to cool computers later, which reduces the power usage at peak times.

But as data centers get hotter, water cooling alone doesn’t cut it, says Tony Atti, CEO of Phononic, a startup that supplies specialist cooling chips. Chipmakers are creating chips that move data around faster and faster. He points to Nvidia, which is about to release a chip that processes 1.6 terabytes a second: “At that data rate, all hell breaks loose and the demand for cooling goes up exponentially,” he says.

According to Atti, the chips inside servers suck up around 45% of the power in a data center. But cooling those chips now takes almost as much power, around 40%. “For the first time, thermal management is becoming the gate to the expansion of this AI infrastructure,” he says.

Phononic’s cooling chips are small thermoelectric devices that can be placed on or near the hardware that needs cooling. Power an LED chip and it emits photons; power a thermoelectric chip and it emits phonons (which are to vibrational energy—a.k.a. temperature—as photons are to light). In short, phononic chips push heat from one surface to another.

Squeezed into tight spaces inside and around servers, such chips can detect minute increases in heat and switch on and off to maintain a stable temperature. When they’re on, they push excess heat into a water pipe to be whisked away. Atti says they can also be used to increase the efficiency of existing cooling systems. The faster you can cool water in a data center, the less of it you need.
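
The switching behavior Atti describes amounts to a simple thermostat loop. Here is a toy sketch of that on/off logic with made-up thresholds; it is not Phononic’s actual control code.

```python
# Toy on/off (bang-bang) control: switch a thermoelectric cooler on when the
# chip temperature drifts above a setpoint, off once it falls back below.
SETPOINT_C = 70.0    # made-up target temperature
BAND_C = 0.5         # small hysteresis band to avoid rapid toggling

def update_cooler(chip_temp_c: float, cooler_on: bool) -> bool:
    if chip_temp_c > SETPOINT_C + BAND_C:
        return True    # push excess heat toward the water loop
    if chip_temp_c < SETPOINT_C - BAND_C:
        return False   # stop pumping heat; temperature is back in range
    return cooler_on   # inside the band: keep the current state

state = False
for temp in [69.8, 70.2, 70.8, 70.3, 69.3]:  # temperatures drifting up, then recovering
    state = update_cooler(temp, state)
    print(f"{temp:.1f} C -> cooler {'ON' if state else 'OFF'}")
```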

4/ Cutting costs goes hand in hand with cutting energy use

Despite the explosion in AI’s energy use, there’s reason to be optimistic. Sustainability is often an afterthought or a nice-to-have. But with AI, the best way to reduce overall costs is to cut your energy bill. That’s good news, as it should incentivize companies to increase efficiency. “I think we’ve got an alignment between climate sustainability and cost sustainability,” says Verma. “I think ultimately that will become the big driver that will push the industry to be more energy efficient.”

Shim agrees: “It’s just good business, you know?”

Companies will be forced to think hard about how and when they use AI, choosing smaller, bespoke options whenever they can, she says: “Just look at the world right now. Spending on technology, like everything else, is going to be even more critical going forward.”

Shim thinks the concerns around AI’s energy use are valid. But she points to the rise of the internet and the personal computer boom 25 years ago. As the technology behind those revolutions improved, the energy costs stayed more or less stable even though the number of users skyrocketed, she says.

It’s a general rule Shim thinks will apply this time around as well: When tech matures, it gets more efficient. “I think that’s where we are right now with AI,” she says.

AI is fast becoming a commodity, which means that market competition will drive prices down. To stay in the game, companies will be looking to cut energy use for the sake of their bottom line if nothing else. 

In the end, capitalism may save us after all. 

Can nuclear power really fuel the rise of AI?

In the AI arms race, all the major players say they want to go nuclear.  

Over the past year, the likes of Meta, Amazon, Microsoft, and Google have sent out a flurry of announcements related to nuclear energy. Some are about agreements to purchase power from existing plants, while others are about investments looking to boost unproven advanced technologies.


These somewhat unlikely partnerships could be a win for both the nuclear power industry and large tech companies. Tech giants need guaranteed sources of energy, and many are looking for low-emissions ones to hit their climate goals. For nuclear plant operators and nuclear technology developers, the financial support of massive established customers could help keep old nuclear power plants open and push new technologies forward.

“There [are] a lot of advantages to nuclear,” says Michael Terrell, senior director of clean energy and carbon reduction at Google. Among them, he says, are that it’s “clean, firm, carbon-free, and can be sited just about anywhere.” (Firm energy sources are those that provide constant power.) 

But there’s one glaring potential roadblock: timing. “There are needs on different time scales,” says Patrick White, former research director at the Nuclear Innovation Alliance. Many of these tech companies will require large amounts of power in the next three to five years, White says, but building new nuclear plants can take close to a decade. 

Some next-generation nuclear technologies, especially small modular reactors, could take less time to build, but the companies promising speed have yet to build their first reactors—and in some cases they are still years away from even modestly sized demonstrations. 

This timing mismatch means that even as tech companies tout plans for nuclear power, they’ll actually be relying largely on fossil fuels, keeping coal plants open, and even building new natural gas plants that could stay open for decades. AI and nuclear could genuinely help each other grow, but the reality is that the growth could be much slower than headlines suggest. 

AI’s need for speed

The US alone has roughly 3,000 data centers, and current projections say the AI boom could add thousands more by the end of the decade. The rush could increase global data center power demand by as much as 165% by 2030, according to one recent analysis from Goldman Sachs. In the US, estimates from industry and academia suggest energy demand for data centers could be as high as 400 terawatt-hours by 2030—up from fewer than 100 terawatt-hours in 2020 and higher than the total electricity demand from the entire country of Mexico.

There are indications that the data center boom might be decelerating, with some companies slowing or pausing some projects in recent weeks. But even the most measured projections, in analyses like one recent report from the International Energy Agency, predict that energy demand will increase. The only question is by how much.  

Many of the same tech giants currently scrambling to build data centers have also set climate goals, vowing to reach net-zero emissions or carbon-free energy within the next couple of decades. So they have a vested interest in where that electricity comes from. 

Nuclear power has emerged as a strong candidate for companies looking to power data centers while cutting emissions. Unlike wind turbines and solar arrays that generate electricity intermittently, nuclear power plants typically put out a constant supply of energy to the grid, which aligns well with what data centers need. “Data center companies pretty much want to run full out, 24/7,” says Rob Gramlich, president of Grid Strategies, a consultancy focused on electricity and transmission.

It also doesn’t hurt that, while renewables are increasingly politicized and under attack by the current administration in the US, nuclear has broad support on both sides of the aisle. 

The problem is how to build up nuclear capacity—existing facilities are limited, and new technologies will take time to build. In 2022, all the nuclear reactors in the US together provided around 800 terawatt-hours of electricity to the power grid, a number that’s been basically steady for the past two decades. To meet electricity demand from data centers expected in 2030 with nuclear power, we’d need to expand the fleet of reactors in the country by half.
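
That “expand the fleet by half” estimate is back-of-the-envelope arithmetic on the figures above, paired with the roughly 400 terawatt-hours of US data-center demand projected for 2030 earlier in this story. A quick sketch:

```python
# Back-of-the-envelope check using the figures cited in this story.
us_nuclear_output_twh_2022 = 800   # approximate annual US nuclear generation
data_center_demand_twh_2030 = 400  # high-end estimate for US data centers

share = data_center_demand_twh_2030 / us_nuclear_output_twh_2022
print(f"Data centers would need ~{share:.0%} of today's nuclear output")
# ~50%: covering that demand with nuclear alone would mean expanding the
# US reactor fleet by roughly half.
```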

New nuclear news 

Some of the most exciting headlines regarding the burgeoning relationship between AI and nuclear technology involve large, established companies jumping in to support innovations that could bring nuclear power into the 21st century. 

In October 2024, Google signed a deal with Kairos Power, a next-generation nuclear company that recently received construction approval for two demonstration reactors from the US Nuclear Regulatory Commission (NRC). The company is working to build small, molten-salt-cooled reactors, which it says will be safer and more efficient than conventional technology. The Google deal is a long-term power-purchase agreement: The tech giant will buy up to 500 megawatts of electricity by 2035 from whatever plants Kairos manages to build, with the first one scheduled to come online by 2030. 

Amazon is also getting involved with next-generation nuclear technology with a direct investment in Maryland-based X-energy. The startup is among those working to create smaller, more-standardized reactors that can be built more quickly and with less expense.

In October, Amazon signed a deal with Energy Northwest, a utility in Washington state, that will see Amazon fund the initial phase of a planned X-energy small modular reactor project in the state. The tech giant will have a right to buy electricity from one of the modules in the first project, which could generate 320 megawatts of electricity and be expanded to generate as much as 960 megawatts. Many new AI-focused data centers under construction will require 500 megawatts of power or more, so this project might be just large enough to power a single site. 

The project will help meet energy needs “beginning in the early 2030s,” according to Amazon’s website. X-energy is currently in the pre-application process with the NRC, which must grant approval before the Washington project can move forward.

Solid, long-term plans could be a major help in getting next-generation technologies off the ground. “It’s going to be important in the next couple [of] years to see more firm commitments and actual money going out for these projects,” says Jessica Lovering, who cofounded the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy. 

However, these early projects won’t be enough to make a dent in demand. The next-generation reactors Amazon and Google are supporting are modestly sized demonstrations—the first commercial installations of new technologies. They won’t be close to the scale needed to meet the energy demand expected from new data centers by 2030. 

To provide a significant fraction of the terawatt-hours of electricity large tech companies use each year, nuclear companies will likely need to build dozens of new plants, not just a couple of reactors. 

Purchasing power 

One approach to get around this mismatch is to target existing reactors. 

Microsoft made headlines in this area last year when it signed a long-term power purchase agreement with Constellation, the owner of the Three Mile Island Unit 1 nuclear plant in Pennsylvania. Constellation plans to reopen one of the reactors at that site and rename it the Crane Clean Energy Center. The deal with Microsoft ensures that there will be a customer for the electricity from the plant, if it successfully comes back online. (It’s currently on track to do so in 2028.)

“If you don’t want to wait a decade for new technology, one of the biggest tools that we have in our tool kit today is to support relicensing of operating power plants,” says Urvi Parekh, head of global energy for Meta. Older facilities can apply for 20-year extensions from the NRC, a process that customers buying the energy can help support as it tends to be expensive and lengthy, Parekh says. 

While these existing reactors provide some opportunity for Big Tech to snap up nuclear energy now, a limited number are in good enough shape to extend or reopen. 

In the US, 24 reactors have licenses that will be up for renewal before 2035, roughly a quarter of those in operation today. A handful of plants could potentially be reopened in addition to Three Mile Island, White says. Palisades Nuclear Plant in Michigan has received a $1.52 billion loan guarantee from the US Department of Energy to reopen, and the owner of the Duane Arnold Energy Center in Iowa has filed a request with regulators that could begin the reopening process.

Some sites have reactors that could be upgraded to produce more power without building new infrastructure, adding a total of between two and eight gigawatts, according to a recent report from the Department of Energy. That could power a handful of moderately sized data centers, but power demand is growing for individual projects—OpenAI has suggested the need for data centers that would require at least five gigawatts of power. 

Ultimately, new reactors will be needed to expand capacity significantly, whether they use established technology or next-generation designs. Experts tend to agree that neither would be able to happen at scale until at least the early 2030s. 

In the meantime, decisions made today in response to this energy demand boom will have ripple effects for years. Most power plants can last for several decades or more, so what gets built today will likely stay on the grid through 2040 and beyond. Whether the AI boom will entrench nuclear energy, fossil fuels, or other sources of electricity on the grid will depend on what is introduced to meet demand now. 

No individual technology, including nuclear power, is likely to be the one true solution. As Google’s Terrell puts it, everything from wind, solar, and energy storage to geothermal and, yes, nuclear will be needed to meet both energy demand and climate goals. “I think nuclear gets a lot of love,” he says. “But all of this is equally as important.”

How US research cuts are threatening crucial climate data

Over the last few months, and especially the last few weeks, there’s been an explosion of news about proposed budget cuts to science in the US. One trend I’ve noticed: Researchers and civil servants are sounding the alarm that those cuts mean we might lose key data that helps us understand our world and how climate change is affecting it.

My colleague James Temple has a new story out today about researchers who are attempting to measure the temperature of mountain snowpack across the western US. Snow that melts in the spring is a major water source across the region, and monitoring the temperature far below the top layer of snow could help scientists more accurately predict how fast water will flow down the mountains, allowing farmers, businesses, and residents to plan accordingly.

But long-running government programs that monitor the snowpack across the West are among those being threatened by cuts across the US federal government. Also potentially in trouble: carbon dioxide measurements in Hawaii, hurricane forecasting tools, and a database that tracks the economic impact of natural disasters. It’s all got me thinking: What do we lose when data is in danger?

Take for example the work at Mauna Loa Observatory, which sits on the northern side of the world’s largest active volcano. In this Hawaii facility, researchers have been measuring the concentration of carbon dioxide in the atmosphere since 1958.

The resulting graph, called the Keeling Curve (after Charles David Keeling, the scientist who kicked off the effort), is a pillar of climate research. It shows that carbon dioxide, the main greenhouse gas warming the planet, has increased in the atmosphere from around 313 parts per million in 1958 to over 420 parts per million today.

Proposed cuts to the National Oceanic and Atmospheric Administration (NOAA) jeopardize the Keeling Curve’s future. As Ralph Keeling (current steward of the curve and Keeling’s son) put it in a new piece for Wired, “If successful, this loss will be a nightmare scenario for climate science, not just in the United States, but the world.”

This story has echoes across the climate world right now. A lab at Princeton that produces what some consider the top-of-the-line climate models used to make hurricane forecasts could be in trouble because of NOAA budget cuts. And last week, NOAA announced it would no longer track the economic impact of the biggest natural disasters in the US.

Some of the largest-scale climate efforts will feel the effects of these cuts, and as James’s new story shows, they could also seep into all sorts of specialized fields. Even seemingly niche work can have a huge impact not just on research, but on people.

The frozen reservoir of the Sierra snowpack provides about a third of California’s water, as well as most of the water used by towns and cities in northwestern Nevada. Researchers there are hoping to help officials better forecast the timing of potential water supplies across the region.

This story brought to mind my visit to El Paso, Texas, a few years ago. I spoke with farmers there who rely on water coming down the Rio Grande, alongside dwindling groundwater, to support their crops. There, water comes down from the mountains in Colorado and New Mexico in the spring and is held in the Elephant Butte Reservoir. One farmer I met showed me pages and pages of notes of reservoir records, which he had meticulously copied by hand. Those crinkled pages were a clear sign: Publicly available data was crucial to his work.

The endeavor of scientific research, particularly when it involves patiently gathering data, isn’t always exciting. Its importance is often overlooked. But as cuts continue, we’re keeping a lookout, because losing data could harm our ability to track, address, and adapt to our changing climate. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Why climate researchers are taking the temperature of mountain snow

On a crisp morning in early April, Dan McEvoy and Bjoern Bingham cut clean lines down a wide run at the Heavenly Ski Resort in South Lake Tahoe, then ducked under a rope line cordoning off a patch of untouched snow. 

They side-stepped up a small incline, poled past a row of Jeffrey pines, then dropped their packs. 

The pair of climate researchers from the Desert Research Institute (DRI) in Reno, Nevada, skied down to this research plot in the middle of the resort to test out a new way to take the temperature of the Sierra Nevada snowpack. They were equipped with an experimental infrared device that can take readings as it’s lowered down a hole in the snow to the ground.

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. As it melts through the spring and summer, dam operators, water agencies, and communities have to manage the flow of billions of gallons of runoff, storing up enough to get through the inevitable dry summer months without allowing reservoirs and canals to flood.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods. 

In the past, it has been arduous work to gather such snowpack observations. Now, a new generation of tools, techniques, and models promises to ease that process, improve water forecasts, and help California and other states safely manage one of their largest sources of water in the face of increasingly severe droughts and flooding.

Observers, however, fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies, including the one that oversees federal snowpack monitoring and survey work. That could jeopardize ongoing efforts to produce the water data and forecasts on which Western communities rely.

“If we don’t have those measurements, it’s like driving your car around without a fuel gauge,” says Larry O’Neill, Oregon’s state climatologist. “We won’t know how much water is up in the mountains, and whether there’s enough to last through the summer.”

The birth of snow surveys

The snow survey program in the US was born near Lake Tahoe, the largest alpine lake in North America, around the turn of the 20th century. 

Without any reliable way of knowing how much water would flow down the mountain each spring, lakefront home and business owners, fearing floods, implored dam operators to release water early in the spring. Downstream communities and farmers pushed back, however, demanding that the dam be used to hold on to as much water as possible to avoid shortages later in the year.

In 1908, James Church, a classics professor at the University of Nevada, Reno, whose passion for hiking around the mountains sparked an interest in the science of snow, invented a device that helped resolve the so-called Lake Tahoe Water Wars: the Mt. Rose snow sampler, named after the peak of a Sierra spur that juts into Nevada.

Professor James E. Church wearing goggles and snowshoes, standing on a snowy hillside
James Church, a professor of classics at the University of Nevada, Reno, became a pioneer in the field of snow surveys.
COURTESY OF UNIVERSITY OF NEVADA, RENO

It’s a simple enough device, with sections of tube that screw together, a sharpened end, and measurement ticks along the side. Snow surveyors measure the depth of the snow by plunging the sampler down to the ground. They then weigh the filled tube on a specialized scale to calculate the water content of the snow. 
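
The arithmetic behind that measurement is simple: the weight of the snow core, divided by the density of water and the tube’s cross-sectional area, gives the depth of water the snow would yield if it melted, known as the snow water equivalent. Here is a minimal sketch with made-up numbers; real samplers use scales calibrated to read water content directly.

```python
# Convert a weighed snow core into snow water equivalent (SWE): the depth of
# liquid water the core would produce if melted.
#   SWE = core mass / (water density * tube cross-sectional area)
# The tube diameter and field values below are illustrative.
import math

TUBE_DIAMETER_CM = 3.8           # hypothetical sampler bore
WATER_DENSITY_G_PER_CM3 = 1.0

def swe_cm(core_mass_g: float) -> float:
    area_cm2 = math.pi * (TUBE_DIAMETER_CM / 2) ** 2
    return core_mass_g / (WATER_DENSITY_G_PER_CM3 * area_cm2)

snow_depth_cm = 250.0            # depth read off the tube's tick marks
core_mass_g = 850.0              # weight of the snow inside the tube
swe = swe_cm(core_mass_g)
print(f"Snow water equivalent: {swe:.1f} cm of water")
print(f"Snowpack density: {swe / snow_depth_cm:.0%} of water")
```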

Church used the device to take measurements at various points across the range, and calibrated his water forecasts by comparing his readings against the rising and falling levels of Lake Tahoe. 

It worked so well that the US began a federal snow survey program in the mid-1930s, which evolved into the one carried on today by the Department of Agriculture’s Natural Resources Conservation Service (NRCS). Throughout the winter, hundreds of snow surveyors across the American West head up to established locations on snowshoes, backcountry skis, or snowmobiles to deploy their Mt. Rose samplers, which have barely changed over more than a century. 

In the 1960s, the US government also began setting up a network of permanent monitoring sites across the mountains, now known as the SNOTEL network. There are more than 900 stations continuously transmitting readings from across Western states and Alaska. They’re equipped with sensors that measure air temperature, snow depth, and soil moisture, and include pressure-sensitive “snow pillows” that weigh the snow to determine the water content. 

The data from the snow surveys and SNOTEL sites all flows into snow depth and snow water content reports that the NRCS publishes, along with forecasts of the amount of water that will fill the streams and reservoirs through the spring and summer.

Taking the temperature

None of these survey and monitoring programs, however, provide the temperature throughout the snowpack. 

The Sierra Nevada snowpack can reach more than 6 meters (20 feet), and the temperature within it may vary widely, especially toward the top. Readings taken at increments throughout can determine what’s known as the cold content, or the amount of energy required to shift the snowpack to a uniform temperature of 32˚F. 

Knowing the cold content of the snowpack helps researchers understand the conditions under which it will begin to rapidly melt, particularly as it warms up in the spring or after rain falls on top of the snow.

If the temperature of the snow, for example, is close to 32˚F even at several feet deep, a few warm days could easily set it melting. If, on the other hand, the temperature measurements show a colder profile throughout the middle, the snowpack is more stable and will hold up longer as the weather warms.
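
One common way to express cold content is to sum, layer by layer, the energy needed to bring each slice of the snowpack up to the melting point (32˚F, or 0˚C). The densities and temperatures in the sketch below are invented purely to illustrate the calculation.

```python
# Cold content: the energy per square meter of ground needed to warm the whole
# snowpack to the melting point, summed over layers:
#   CC = density * specific_heat_of_ice * layer_thickness * (0 C - layer_temp)
# Layer densities and temperatures below are illustrative, not field data.
C_ICE = 2100.0            # J per kg per degree C, specific heat of ice
LAYER_THICKNESS_M = 0.10  # one reading every 10 cm down the hole

# (density in kg/m^3, temperature in C) for each layer, surface to ground
layers = [
    (150, -8.0), (200, -6.5), (250, -5.0), (300, -3.0),
    (350, -2.0), (380, -1.0), (400, -0.5),
]

cold_content = sum(
    rho * C_ICE * LAYER_THICKNESS_M * (0.0 - temp) for rho, temp in layers
)
print(f"Cold content: {cold_content / 1e6:.2f} MJ per square meter")
# The closer this number is to zero, the closer the snowpack is to melting.
```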

a person raising a snow shovel to head height
Bjoern Bingham, a research scientist at the Desert Research Institute, digs a snow pit at a research plot within the Heavenly Ski Resort, near South Lake Tahoe, California.
JAMES TEMPLE

The problem is that taking the temperature of the entire snowpack has been, until now, tough and time-consuming work. When researchers do it at all, they mainly do so by digging snow pits down to the ground and then taking readings with probe thermometers along an inside wall.

There have been a variety of efforts to take continuous remote readings from sensors attached to fences, wires, or towers, which the snowpack eventually buries. But the movement and weight of the dense shifting snow tends to break the devices or snap the structures they’re assembled upon.

“They rarely last a season,” McEvoy says.

Anne Heggli, a professor of mountain hydrometeorology at DRI, happened upon the idea of using an infrared device to solve this problem during a tour of the institute’s campus in 2019, when she learned that researchers there were using an infrared meat thermometer to take contactless readings of the snow surface.

In 2021, Heggli began collaborating with RPM Systems, a gadget manufacturing company, to design an infrared device optimized for snowpack field conditions. The resulting snow temperature profiler is skinny enough to fit down a hole dug by snow surveyors and dangles on a cord marked off at 10-centimeter (4-inch) increments.

a researcher stands in a snowy trench taking notes, while a second researcher drops a yellow measure down from the surface level
Bingham and Daniel McEvoy, an associate research professor at the Desert Research Institute, work together to take temperature readings from inside the snow pit as well as from within the hole left behind by a snow sampler.
JAMES TEMPLE

At Heavenly on that April morning, Bingham, a staff scientist at DRI, slowly fed the device down a snow sampler hole, calling out temperature readings at each marking. McEvoy scribbled them down on a worksheet fastened to his clipboard as he used a probe thermometer to take readings of his own from within a snow pit the pair had dug down to the ground.

They were comparing the measurements to assess the reliability of the infrared device in the field, but the eventual aim is to eliminate the need to dig snow pits. The hope is that state and federal surveyors could simply carry along a snow temperature profiler and drop it into the snowpack survey holes they’re creating anyway, to gather regular snowpack temperature readings from across the mountains.

In 2023, the US Bureau of Reclamation, the federal agency that operates many of the nation’s dams, funded a three-year research project to explore the use of the infrared gadgets in determining snowpack temperatures. Through it, the DRI research team has now handed devices out to 20 snow survey teams across California, Colorado, Idaho, Montana, Nevada, and Utah to test their use in the field and supplement the snowpack data they’re collecting.

The Snow Lab

The DRI research project is one piece of a wider effort to obtain snowpack temperature data across the mountains of the West.

By early May, the snow depth had dropped from an April peak of 114 inches to 24 inches (2.9 meters to 0.6 meters) at the UC Berkeley Central Sierra Snow Lab, an aging wooden structure perched in the high mountains northwest of Lake Tahoe.

Megan Mason, a research scientist at the lab, used a backcountry ski shovel to dig out a trio of instruments from what was left of the pitted snowpack behind the building. Each one featured different types of temperature sensors, arrayed along a strong polymer beam meant to hold up under the weight and movement of the Sierra snowpack.  

She was pulling up the devices after running the last set of observations for the season, as part of an effort to develop a resilient system that can survive the winter and transmit hourly temperature readings.

The lab is working on the project, dubbed the California Cold Content Initiative, in collaboration with the state’s Department of Water Resources. California is the only western state that opted to maintain its own snow survey program and run its own permanent monitoring stations, all of which are managed by the water department. 

The plan is to determine which instruments held up and functioned best this winter. Then, they can begin testing the most promising approaches at several additional sites next season. Eventually, the goal is to attach the devices at more than 100 of California’s snow monitoring stations, says Andrew Schwartz, the director of the lab.

The NRCS is conducting a similar research effort at select SNOTEL sites equipped with a beaded temperature cable. One such cable is visible at the Heavenly SNOTEL station, next to where McEvoy and Bingham dug their snow pit, strung vertically between an arm extended from the main tower and the snow-covered ground. 

a gloved hand inserts a probe wire into a hole in the snow
DRI’s Bjoern Bingham feeds the snow temperature profiler, an infrared device, down a hole in the Sierra snowpack.
JAMES TEMPLE

Schwartz said that the different research groups are communicating and collaborating openly on the projects, all of which promise to provide complementary information, expanding the database of snowpack temperature readings across the West.

For decades, agencies and researchers generally produced water forecasts using relatively simple regression models that translated the amount of water in the snowpack into the amount of water that will flow down the mountain, based largely on the historic relationships between those variables. 
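
A toy version of such a regression, with invented numbers rather than any agency’s actual data, might look like this: fit past years’ April snow water content against the runoff that followed, then forecast from this year’s measurement.

```python
# Toy water-supply forecast: a linear regression from April 1 snow water
# equivalent (SWE) to spring runoff, fit on past years. Numbers are invented.
import numpy as np

april_swe_inches = np.array([18.0, 25.0, 12.0, 30.0, 22.0, 15.0, 28.0])
spring_runoff_kaf = np.array([410.0, 560.0, 260.0, 690.0, 500.0, 330.0, 640.0])  # thousand acre-feet

slope, intercept = np.polyfit(april_swe_inches, spring_runoff_kaf, deg=1)

this_year_swe = 20.0
forecast = slope * this_year_swe + intercept
print(f"Forecast runoff: ~{forecast:.0f} thousand acre-feet")
```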

But these models are becoming less reliable as climate change alters temperatures, snow levels, melt rates, and evaporation, and otherwise pushes alpine weather outside of historical norms.

“As we have years that scatter further and more frequently from the norm, our models aren’t prepared,” Heggli says.

Plugging direct temperature observations into more sophisticated models that have emerged in recent years, Schwartz says, promises to significantly improve the accuracy of water forecasts. That, in turn, should help communities manage through droughts and prevent dams from overtopping even as climate change fuels alternately wetter, drier, warmer, and weirder weather.

About a quarter of the world’s population relies on water stored in mountain snow and glaciers, and climate change is disrupting the hydrological cycles that sustain these natural frozen reservoirs in many parts of the world. So any advances in observations and modeling could deliver broader global benefits.

Ominous weather

There’s an obvious threat to this progress, though.

Even if these projects work as well as hoped, it’s not clear how widely these tools and techniques will be deployed at a time when the White House is gutting staff across federal agencies, terminating thousands of scientific grants, and striving to eliminate tens of billions of dollars in funding at research departments. 

The Trump administration has fired or put on administrative leave nearly 6,000 employees across the USDA, or 6% of the department’s workforce. Those cutbacks have reached regional NRCS offices, according to reporting by local and trade outlets.

That includes more than half of the roles at the Portland office, according to O’Neill, the state climatologist. Those reductions prompted a bipartisan group of legislators to call on the Secretary of Agriculture to restore the positions, warning the losses could impair water data and analyses that are crucial for the state’s “agriculture, wildland fire, hydropower, timber, and tourism sectors,” as the Statesman Journal reported.

There are more than 80 active SNOTEL stations in Oregon.

The fear is there won’t be enough people left to reach all the sites this summer to replace batteries, solar panels, and drifting or broken sensors, which could quickly undermine the reliability of the data or cut off the flow of information. 

“Staff and budget reductions at NRCS will make it impossible to maintain SNOTEL instruments and conduct routine manual observations, leading to inoperability of the network within a year,” the lawmakers warned.

The USDA and NRCS didn’t respond to inquiries from MIT Technology Review.

looking down at a researcher standing in a snowy trench with a clipboard of notes
DRI’s Daniel McEvoy scribbles down temperature readings at the Heavenly site.
JAMES TEMPLE

If the federal cutbacks deplete the data coming back from SNOTEL stations or federal snow survey work, the DRI infrared method could at least “still offer a simplistic way of measuring the snowpack temperatures” in places where state and regional agencies continue to carry out surveys, McEvoy says.

But most researchers stress the field needs more surveys, stations, sensors, and readings to understand how the climate and water cycles are changing from month to month and season to season. Heggli stresses that there should be broad bipartisan support for programs that collect snowpack data and provide the water forecasts that farmers and communities rely on. 

“This is how we account for one of, if not the, most valuable resource we have,” she says. “In the West, we go into a seasonal drought every summer; our snowpack is what trickles down and gets us through that drought. We need to know how much we have.”

Did solar power cause Spain’s blackout?

At roughly midday on Monday, April 28, the lights went out in Spain. The grid blackout, which extended into parts of Portugal and France, affected tens of millions of people—flights were grounded, cell networks went down, and businesses closed for the day.

Over a week later, officials still aren’t entirely sure what happened, but some (including the US energy secretary, Chris Wright) have suggested that renewables may have played a role, because just before the outage happened, wind and solar accounted for about 70% of electricity generation. Others, including Spanish government officials, insisted that it’s too early to assign blame.

It’ll take weeks to get the full report, but we do know a few things about what happened. And even as we wait for the bigger picture, there are a few takeaways that could help our future grid.

Let’s start with what we know so far about what happened, according to the Spanish grid operator Red Eléctrica:

  • A disruption in electricity generation took place a little after 12:30 p.m. This may have been a power plant flipping off or some transmission equipment going down.
  • A little over a second later, the grid lost another bit of generation.
  • A few seconds after that, the main interconnector between Spain and southwestern France got disconnected as a result of grid instability.
  • Immediately after, virtually all of Spain’s electricity generation tripped offline.

One of the theories floating around is that things went wrong because the grid diverged from its normal frequency. (All power grids have a set frequency: In Europe the standard is 50 hertz, meaning the alternating current completes 50 cycles per second.) The frequency needs to be constant across the grid to keep things running smoothly.

There are signs that the outage could be frequency-related. Some experts pointed out that strange oscillations in the grid frequency occurred shortly before the blackout.

Normally, our grid can handle small problems like an oscillation in frequency or a drop that comes from a power plant going offline. But some of the grid’s ability to stabilize itself is tied up in old ways of generating electricity.

Power plants like those that run on coal and natural gas have massive rotating generators. If there are brief issues on the grid that upset the balance, those physical bits of equipment have inertia: They’ll keep moving at least for a few seconds, providing some time for other power sources to respond and pick up the slack. (I’m simplifying here—for more details I’d highly recommend this report from the National Renewable Energy Laboratory.)

Solar panels don’t have inertia—they rely on inverters to change electricity into a form that’s compatible with the grid and matches its frequency. Generally, these inverters are “grid-following,” meaning if frequency is dropping, they follow that drop.
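
A rough way to see why that matters is the textbook approximation for how quickly frequency falls in the first moments after a chunk of generation is lost: the less rotating inertia on the system, the faster the drop. The numbers below are illustrative and are not drawn from the Spanish event.

```python
# Rough illustration of why inertia matters: the initial rate of change of
# frequency (ROCOF) after suddenly losing generation, from the classic
# swing-equation approximation. All numbers are made up.
F0_HZ = 50.0  # nominal European grid frequency

def rocof_hz_per_s(lost_gw: float, system_gw: float, inertia_h_s: float) -> float:
    # ROCOF = deltaP * f0 / (2 * H * S): a bigger inertia constant H means a slower decline.
    return lost_gw * F0_HZ / (2 * inertia_h_s * system_gw)

lost, system = 2.0, 30.0   # lose 2 GW on a 30 GW system (illustrative)
for h in (6.0, 3.0, 1.0):  # high, medium, and low system inertia constants, in seconds
    print(f"H = {h:.0f} s -> frequency initially falls at "
          f"{rocof_hz_per_s(lost, system, h):.2f} Hz per second")
# With low inertia the frequency falls several times faster, leaving less time
# for other resources or protection schemes to react.
```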

In the case of the blackout in Spain, it’s possible that having a lot of power on the grid coming from sources without inertia made it easier for a small problem to become a much bigger one.

Some key questions here are still unanswered. The order matters, for example. During that drop in generation, did wind and solar plants go offline first? Or did everything go down together?

Whether or not solar and wind contributed to the blackout as a root cause, we do know that wind and solar don’t contribute to grid stability in the same way that some other power sources do, says Seaver Wang, climate lead of the Breakthrough Institute, an environmental research organization. Regardless of whether renewables are to blame, more capability to stabilize the grid would only help, he adds.

It’s not that a renewable-heavy grid is doomed to fail. As Wang put it in an analysis he wrote last week: “This blackout is not the inevitable outcome of running an electricity system with substantial amounts of wind and solar power.”

One solution: We can make sure the grid includes enough equipment that does provide inertia, like nuclear power and hydropower. Reversing a plan to shut down Spain’s nuclear reactors beginning in 2027 would be helpful, Wang says. Other options include building massive machines that lend physical inertia and using inverters that are “grid-forming,” meaning they can actively help regulate frequency and provide a sort of synthetic inertia.

Inertia isn’t everything, though. Grid operators can also rely on installing a lot of batteries that can respond quickly when problems arise. (Spain has much less grid storage than other places with a high level of renewable penetration, like Texas and California.)

Ultimately, if there’s one takeaway here, it’s that as the grid evolves, our methods to keep it reliable and stable will need to evolve too.

If you’re curious to hear more on this story, I’d recommend this Q&A from Carbon Brief about the event and its aftermath and this piece from Heatmap about inertia, renewables, and the blackout.

A long-abandoned US nuclear technology is making a comeback in China

China has once again beat everyone else to a clean energy milestone—its new nuclear reactor is reportedly one of the first to use thorium instead of uranium as a fuel and the first of its kind that can be refueled while it’s running.

It’s an interesting (if decidedly experimental) development out of a country that’s edging toward becoming the world leader in nuclear energy. China has now surpassed France in terms of generation, though not capacity; it still lags behind the US in both categories. But one recurring theme in media coverage about the reactor struck me, because it’s so familiar: This technology was invented decades ago, and then abandoned.

You can basically copy and paste that line into countless stories about today’s advanced reactor technology. Molten-salt cooling systems? Invented in the mid-20th century but never commercialized. Same for several alternative fuels, like TRISO. And, of course, there’s thorium.

This one research reactor in China running with an alternative fuel says a lot about this moment for nuclear energy technology: Many groups are looking into the past for technologies, with a new appetite for building them.

First, it’s important to note that China is the hot spot for nuclear energy right now. While the US still has the most operational reactors in the world, China is catching up quickly. The country is building reactors at a remarkable clip and currently has more reactors under construction than any other country by far. Just this week, China approved 10 new reactors, totaling over $27 billion in investment.

China is also leading the way for some advanced reactor technologies (that category includes basically anything that deviates from the standard blueprint of what’s on the grid today: large reactors that use enriched uranium for fuel and high-pressure water to keep the reactor cool). High-temperature reactors that use gas as a coolant are one major area of focus for China—a few reactors that use this technology have recently started up, and more are in the planning stages or under construction.

Now, Chinese state media is reporting that scientists in the country reached a milestone with a thorium-based reactor. The reactor came online in June 2024, but researchers say it recently went through refueling without shutting down. (Conventional reactors generally need to be stopped to replenish the fuel supply.) The project’s lead scientists shared the results during a closed meeting at the Chinese Academy of Sciences.

I’ll emphasize here that this isn’t some massive power plant: This reactor is tiny. It generates just two megawatts of heat—less than the research reactor on MIT’s campus, which rings in at six megawatts. (To be fair, MIT’s is one of the largest university research reactors in the US, but still … it’s small.)

Regardless, progress is progress for thorium reactors, as the world has been entirely focused on uranium for the last 50 years or so.

Much of the original research on thorium came out of the US, which pumped resources into all sorts of different reactor technologies in the 1950s and ’60s. A reactor at Oak Ridge National Laboratory in Tennessee that ran in the 1960s used uranium-233 fuel (which is bred from thorium when it absorbs neutrons).

Eventually, though, the world more or less settled on a blueprint for nuclear reactors, focusing on those that use low-enriched uranium as fuel and are cooled by water at high pressure. One reason for the focus on uranium for energy tech? The research could also be applied to nuclear weapons.

But now there’s a renewed interest in alternative nuclear technologies, and the thorium-fueled reactor is just one of several examples. A prominent one we’ve covered before: Kairos Power is building reactors that use molten salt as a coolant for small nuclear reactors, also a technology invented and developed in the 1950s and ’60s before being abandoned. 

Another old-but-new concept is using high-temperature gas to cool reactors, as X-energy is aiming to do in its proposed power station at a chemical plant in Texas. (That reactor will be able to be refueled while it’s running, like the new thorium reactor.) 

Some problems from decades ago that contributed to technologies being abandoned will still need to be dealt with today. In the case of molten-salt reactors, for example, it can be tricky to find materials that can withstand the corrosive properties of super-hot salt. For thorium reactors, the process of transforming thorium into U-233 fuel has historically been one of the hurdles. 

But as early progress shows, the archives could provide fodder for new commercial reactors, and revisiting these old ideas could give the nuclear industry a much-needed boost. 

The vibes are shifting for US climate tech

The past few years have been an almost nonstop parade of good news for climate tech in the US. Headlines about billion-dollar grants from the government, massive private funding rounds, and labs churning out advance after advance have been routine. Now, though, things are starting to shift.  

About $8 billion worth of US climate tech projects have been canceled or downsized so far in 2025. (You can see a map of those projects in my latest story here.) 

There are still projects moving forward, but these cancellations definitely aren’t a good sign. And now we have tariffs to think about, adding additional layers of expense and, worse, uncertainty. (Businesses, especially those whose plans require gobs of money, really don’t like uncertainty.) Honestly, I’m still getting used to an environment that isn’t such a positive one for climate technology. How worried should we be? Let’s get into the context.

Sometimes, one piece of news can really drive home a much larger trend. For example, I’ve read a bazillion studies about extreme weather and global warming, but every time a hurricane comes close to my mom’s home in Florida, the threat of climate-fueled extreme weather becomes much more real for me. A recent announcement about climate tech hit me in much the same fashion.

In February, Aspen Aerogels announced it was abandoning plans for a Georgia factory that would have made materials that can suppress battery fires. The news struck me because just a few months before, in October, I had written about the Department of Energy’s $670 million loan commitment for the project. It was a really fun story, both because I found the tech fascinating and because MIT Technology Review got exclusive access to cover it first.

And now, suddenly, that plan is just dead. Aspen said it will shift some of its production to a factory in Rhode Island and send some overseas. (I reached out to the company with questions for my story last week, but they didn’t get back to me.)

One example doesn’t always mean there’s a trend; I got food poisoning at a sushi restaurant once, but I haven’t cut out sashimi permanently. The bad news, though, is that Aspen’s cancellation is just one of many. Over a dozen major projects in climate technology have gotten killed so far this year, as the nonprofit E2 tallied up in a new report last week. That’s far from typical.

I got some additional context from Jay Turner, who runs Big Green Machine, a database that also tracks investments in the climate-tech supply chain. That project includes some data that E2 doesn’t account for: news about when projects are delayed or take steps forward. On Monday, the Big Green Machine team released a new update, one that Turner called “concerning.”

Since Donald Trump took office on January 20, about $10.5 billion worth of investment in climate tech projects has progressed in some way. That basically means 26 projects were announced, secured new funding, increased in scale, or started construction or production.

Meanwhile, $12.2 billion across 14 projects has slowed down in some way. This covers projects that were canceled, were delayed significantly, or lost funding, as well as companies that went bankrupt. So by total investment, there’s been more bad news in climate tech than good news, according to Turner’s tracking.

It’s tempting to look for the silver lining here. The projects still moving forward are certainly positive, and we’ll hopefully continue to see some companies making progress even as we head into even more uncertain times. But the signs don’t look good.

One question that I have going forward is how a seemingly inevitable US slowdown on climate technology will ripple around the rest of the world. Several experts I’ve spoken with seem to agree that this will be a great thing for China, which has aggressively and consistently worked to establish itself as a global superpower in industries like EVs and batteries.

In other words, the energy transition is rolling on. Will the US get left behind? 
