Why recycling isn’t enough to address the plastic problem

I remember using a princess toothbrush when I was little. The handle was purple, teal, and sparkly. Like most of the other pieces of plastic that have ever been made, it’s probably still out there somewhere, languishing in a landfill. (I just hope it’s not in the ocean.)

I’ve been thinking about that toothbrush again this week after UN talks about a plastic treaty broke down on Friday. Nations had gotten together to try to write a binding treaty to address plastic waste, but negotiators left without a deal.

Plastic is widely recognized as a huge source of environmental pollution—again, I’m wondering where that toothbrush is—but the material is also a contributor to climate change. Let’s dig into why talks fell apart and how we might address emissions from plastic.

I’ve defended plastic before in this newsletter (sort of). It’s a wildly useful material, integral to everything from eyeglass lenses to IV bags.

But the pace at which we’re producing and using plastic is absolutely bonkers. Plastic production has increased at an average rate of 9% every year since 1950. Production hit 460 million metric tons in 2019. And an estimated 52 million metric tons are dumped into the environment or burned each year.
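To put that 9% figure in perspective, here’s a quick back-of-envelope sketch (the growth rate is the one cited above; the doubling-time formula is just standard compound-growth math):

```python
import math

# Compound growth: production(t) = production_0 * (1 + r) ** t
r = 0.09  # the average annual growth rate cited above

# Doubling time: solve (1 + r) ** t = 2 for t
doubling_time = math.log(2) / math.log(1 + r)
print(f"At {r:.0%} a year, production doubles roughly every {doubling_time:.0f} years")
# -> roughly every 8 years
```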

So, in March 2022, the UN Environment Assembly set out to develop an international treaty to address plastic pollution. Pretty much everyone should agree that a bunch of plastic waste floating in the ocean is a bad thing. But as these talks have developed over the past few years, it’s become clear that opinions diverge on what to do about it and how any interventions should happen.

One phrase that’s become quite contentious is the “full life cycle” of plastic. Basically, some groups are hoping to go beyond efforts to address just the end of the plastic life cycle (collecting and recycling it) by pushing for limits on plastic production. There was even talk at the Assembly of a ban on single-use plastic.

Petroleum-producing nations strongly opposed production limits in the talks. Representatives from Saudi Arabia and Kuwait told the Guardian that they considered limits to plastic production outside the scope of talks. The US reportedly also slowed down talks and proposed to strike a treaty article that references the full life cycle of plastics.

Petrostates have a vested interest because oil, natural gas, and coal are all burned for energy used to make plastic, and they’re also used as raw materials. This stat surprised me: 12% of global oil demand and over 8% of natural gas demand go to plastic production.

That translates into a lot of greenhouse gas emissions. One report from Lawrence Berkeley National Lab found that plastics production accounted for 2.24 billion metric tons of carbon dioxide emissions in 2019—that’s roughly 5% of the global total.  

And looking into the future, emissions from plastics are only set to grow. Another estimate, from the Organisation for Economic Co-operation and Development, projects that emissions from plastics could swell from about 2 billion metric tons to 4 billion metric tons by 2060.

That projection is what really strikes me and makes the conclusion of the plastic treaty talks such a disappointment.

Recycling is a great tool, and new methods could make it possible to recycle more plastics and make it easier to do so. (I’m particularly interested in efforts to recycle a mix of plastics, cutting down on the slow and costly sorting process.)

But dealing with plastic only at its end of life won’t be enough to address the climate impacts of the material. Most emissions from plastic come from making it. So we need new ways to make plastic, using different ingredients and fuels to take oil and gas out of the equation. And we need to be smarter about the volume of plastic we produce.

One positive note here: The plastic treaty isn’t dead, just on hold for the moment. Officials say that there’s going to be an effort to revive the talks.

Less than 10% of plastic that’s ever been produced has been recycled. Whether it’s a water bottle, a polyester shirt you wore a few times, or a princess toothbrush from when you were a kid, it’s still out there somewhere in a landfill or in the environment. Maybe you already knew that. But also consider this: The greenhouse gases emitted to make the plastic are still in the atmosphere, too, contributing to climate change. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The US could really use an affordable electric truck

On Monday, Ford announced plans for an affordable electric truck with a 2027 delivery date and an expected price tag of about $30,000, thanks in part to a new manufacturing process that it says will help cut costs.

This could be the shot in the arm that the US EV market needs. Sales are slowing, and Ford in particular has struggled recently—the automaker has lost $12 billion over the last two and a half years on its EV division. And the adoption barriers continue to mount, with the Trump administration cutting tax credits as well as rules designed to push automakers toward zero-emissions vehicles. And that’s not to mention tariffs.

But if anything can get Americans excited, it’s a truck, especially an affordable one. (There was a ton of buzz over the announcement of a bare-bones truck from Bezos-backed Slate Auto earlier this year, for example.) The big question is whether the company can deliver in this environment.

One key thing to note here: This is not the first time that there’s been a big splashy truck announcement from Ford that was supposed to change everything. The F-150 Lightning was hailed as a turning point for vehicle electrification, a signal that decarbonization had entered a new era. We cited the truck when we put “The Inevitable EV” on our 10 Breakthrough Technologies list in 2023. 

Things haven’t quite turned out that way. One problem is that the Lightning was supposed to be relatively affordable, with a price tag of about $40,000 when it was first announced in 2021. The starting price inflated to $52,000 when it actually went on sale in 2022.

The truck was initially popular and became quite hard to find at dealerships. But prices climbed and interest leveled off. The base model hit nearly $60,000 by 2023. For the past few years, Ford has cut Lightning production several times and laid off employees who assembled the trucks.

Now, though, Ford is once again promising an affordable truck, and it’s supposed to be even cheaper this time. To help cut costs, the company says it’s simplifying, creating one universal platform for a new set of EVs. Using a common structure and set of components will help produce not only a midsize truck but also other trucks, vans, and SUVs. There are also planned changes to the manufacturing process (rather than one assembly line, multiple lines will join together to form what the company calls an assembly tree).

Another supporting factor for cost savings is the battery. The company plans to use lithium-iron phosphate (or LFP) cells—a type of lithium-ion battery that doesn’t contain nickel or cobalt. Leaving out those relatively pricey metals means lower costs.

Side note here: That battery could be surprisingly small. In a media briefing, a Ford official reportedly said that the truck’s battery would be 15% smaller than the one in the Atto crossover from the Chinese automaker BYD. Since that model has a roughly 60-kilowatt-hour pack, that could put this new battery at 51 kilowatt-hours. That’s only half the capacity of the Ford Lightning’s battery and similar to the smallest pack offered in a Tesla Model 3 today. (This could mean the truck has a relatively limited range, though the company hasn’t shared any details on that front yet.) 
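The arithmetic behind that estimate is simple enough to sketch out. The roughly 60-kilowatt-hour Atto figure and the 15% reduction come from the reporting above; the Lightning’s roughly 100 kWh standard-range pack is my assumed comparison point:

```python
# Back-of-envelope pack estimate from the figures above
atto_pack_kwh = 60                         # approximate BYD Atto pack
new_pack_kwh = atto_pack_kwh * (1 - 0.15)  # "15% smaller," per the briefing
print(f"Estimated pack: {new_pack_kwh:.0f} kWh")  # ~51 kWh

# Assumed comparison: the Lightning's standard-range pack is roughly
# 100 kWh, so the new pack would be about half that capacity.
lightning_pack_kwh = 100
print(f"vs. Lightning: {new_pack_kwh / lightning_pack_kwh:.0%}")  # ~51%
```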

A string of big promises isn’t too unusual for a big company announcement. What was unusual was the tone from officials during the event on Monday.

As Andrew Hawkins pointed out in The Verge this week, “Ford seems to realize its timing is unfortunate.” During the announcement, executives emphasized that this was a bet, one that might not work out.

CEO Jim Farley put it bluntly: “The automotive industry has a graveyard littered with affordable vehicles that were launched in our country with all good intentions, and they fizzled out with idle plants, laid-off workers, and red ink.” Woof.

From where I’m standing, it’s hard to be optimistic that this announcement will turn out differently from all those failed ones, given where the US EV market is right now.   

In a new report published in June, the energy consultancy BNEF slashed its predictions for future EV uptake. Last year, the organization predicted that 48% of new vehicles sold in the US in 2030 would be electric. In this year’s edition, that number got bumped down to just 27%.

To be clear: BNEF and other organizations are still expecting more EVs on the roads in the future than today, since the vehicles make up less than 10% of new sales in the US. But expectations are way down, in part because of a broad cut in public support for EVs. 

The tax credits that gave drivers up to $7,500 off the purchase of a new EV end in just over a month. Tariffs are going to push costs up even for domestic automakers like Ford, which still rely on imported steel and aluminum.

A revamped manufacturing process and a cheaper, desirable vehicle could be exactly the sort of move that automakers need to make for the US EV market. But I’m skeptical that this truck will be able to turn it all around. 

The greenhouse gases we’re not accounting for

In the spring of 2021, climate scientists were stumped. 

The global economy was just emerging from the covid-19 lockdowns, but for some reason atmospheric levels of methane—a greenhouse gas emitted mainly through agriculture and fossil-fuel production—had soared the previous year, rising at the fastest rate on record.

Researchers around the world set to work unraveling the mystery, reviewing readings from satellites, aircraft, and greenhouse-gas monitoring stations. They eventually spotted a clear pattern: Methane emissions had increased sharply across the tropics, where wetlands were growing wetter and warmer. 

That created the ideal conditions for microbes that thrive in anaerobic muck, which gobbled up more of the carbon-rich organic matter and spat out more methane as a by-product. (Reduced pollution from nitrogen oxides, which help to break down methane in the atmosphere, also likely played a substantial role.)

The findings offer one of the clearest cases so far where climate change itself is driving additional greenhouse-gas emissions from natural systems, triggering a feedback effect that threatens to produce more warming, more emissions, and on and on. 
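To see how that kind of loop compounds, here’s a deliberately crude toy model—not anything the researchers used, and every number in it is an illustrative assumption. Warming is treated as proportional to cumulative CO2 emissions, and natural emissions are assumed to scale with the temperature anomaly:

```python
# Toy warming-emissions feedback loop (all numbers are illustrative)
TCRE = 0.00045               # assumed °C of warming per GtCO2 emitted
HUMAN_GT_PER_YEAR = 40       # roughly today's annual CO2 emissions, held flat
FEEDBACK_GT_PER_DEGREE = 5   # assumed extra natural GtCO2/yr per °C of warming

def warming_after(years: int, feedback: bool) -> float:
    temp, cumulative = 1.2, 0.0  # start near today's ~1.2 °C of warming
    for _ in range(years):
        emissions = HUMAN_GT_PER_YEAR
        if feedback:
            emissions += FEEDBACK_GT_PER_DEGREE * temp  # natural-systems kicker
        cumulative += emissions
        temp = 1.2 + TCRE * cumulative
    return temp

print(f"30 years, no feedback:   {warming_after(30, False):.2f} °C")
print(f"30 years, with feedback: {warming_after(30, True):.2f} °C")
```

Even a modest assumed feedback nudges the end-of-run temperature up, and the gap widens the longer the loop runs—which is exactly why modelers want it quantified.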

There are numerous additional ways this is happening or soon could, including wildfires and thawing permafrost. These are major emissions sources that aren’t included in the commitments nations have made under the Paris climate agreement—and climate risks that largely aren’t accounted for in the UN Intergovernmental Panel on Climate Change’s most recent warming scenarios.

Spark Climate Solutions (not to be confused with this newsletter) hopes to change that.

The San Francisco nonprofit is launching what’s known as a model intercomparison project, in which different research teams run the same set of experiments on different models across a variety of emissions scenarios to determine how climate change could play out. This one would specifically explore how a range of climate feedback effects could propel additional warming, additional emissions, and additional types of feedback.

“These increased emissions from natural sources add to human emissions and amplify climate change,” says Phil Duffy, chief scientist at Spark Climate Solutions, who previously served as climate science advisor to President Joe Biden. “And if you don’t look at all of them together, you can’t quantify the strength of that feedback effect.”

Other participants in the effort will include scientists at the Environmental Defense Fund, Stanford University, the Woodwell Climate Research Center, and other institutions in Europe and Australia, according to Spark Climate Solutions.

The nonprofit hopes to publish the findings in time for them to be incorporated into the UN climate panel’s seventh major assessment report, which is just getting underway, to help ensure that these dangers are more fully represented. That, in turn, would give nations a more accurate sense of the world’s carbon budgets, or the quantity of greenhouse gases they can produce before the planet reaches temperatures 1.5 °C or  2 °C over preindustrial levels. 
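The budget arithmetic itself is simple; what this research changes is the inputs. Here’s a minimal sketch using the standard transient-climate-response-to-cumulative-emissions (TCRE) approximation, with rough assumed values for the TCRE and for present-day warming:

```python
# Remaining carbon budget under the linear TCRE approximation
TCRE = 0.45            # assumed °C per 1,000 GtCO2 (near the IPCC best estimate)
CURRENT_WARMING = 1.3  # assumed present-day warming, in °C

for target in (1.5, 2.0):
    budget_gt = (target - CURRENT_WARMING) / TCRE * 1000
    print(f"{target} °C target: ~{budget_gt:.0f} GtCO2 remaining")

# Feedback emissions from natural systems draw down these budgets just
# as human emissions do, so leaving them out makes the budgets look
# roomier than they really are.
```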

But one thing is already clear: Since the current scenarios don’t fully account for these feedback effects, the world will almost certainly warm faster than is now forecast, which underscores the importance of carrying out this exercise. 

In a paper under review, scientists at EDF, Woodwell, and other institutions found that fires in the world’s northernmost forests, thawing permafrost, and warming tropical wetlands could together push the planet beyond 2 °C years faster, eliminating up to a quarter of the time left before the world passes the core goal of the Paris agreement.

Earlier this year, Spark Climate Solutions set up a broader program to advance research and awareness of what’s known as warming-induced emissions, which will launch additional collaborations similar to the modeling intercomparison project.  

The goal of the program and the research project is “to really mainstream the inclusion of this topic in climate science and climate policy, and to drive research around climate solutions,” says Ben Poulter, who leads the program at Spark Climate Solutions and was previously a scientist at the NASA Goddard Space Flight Center.

Spark notes that warming temperatures could also release more carbon dioxide from the oceans, in a process known as outgassing; additional carbon dioxide and nitrous oxide, a potent greenhouse gas that also depletes the protective ozone layer, from farmland; more carbon dioxide and methane from wildfires; and still more of all three of these gases as permafrost thaws.

The ground remains frozen year round across a vast expanse of the Northern Hemisphere, creating a frosty underground storehouse from Alaska to Siberia that’s packed with twice as much carbon as the atmosphere.

But as it thaws, it starts to decompose and release greenhouse gases, says Susan Natali, an Arctic climate scientist focused on permafrost at Woodwell. A study published in Nature in January noted that 30% of the world’s Arctic–Boreal Zone has already flipped from a carbon sink to a carbon source, when wildfires, thawing permafrost, and other factors are taken into account.

Despite these increasing risks, only a minority of the models that fed into the UN climate panel’s last major report incorporated the feedback effects of thawing permafrost. And the emissions risks still weren’t fully accounted for because these ecosystems are difficult to monitor and model, Natali says.

Among the complexities: Wildfires, which are themselves hard to predict, can accelerate thawing. It’s also hard to foresee which regions will grow drier or wetter, which determines whether they release mostly methane or carbon dioxide—and those gases have very different warming effects over different time periods. There are counterbalancing effects that must be taken into account as well—for instance, as carbon-absorbing plants replace ice and snow in certain areas.

Natali says improving our understanding of these complex feedback effects is essential to understanding the dangers we face.

“It’s going to mean additional costs to human health, human life,” she says. “We want people to be safe—and it’s very hard to do that if you don’t know what’s coming and you’re not prepared for it.”

An EPA rule change threatens to gut US climate regulations

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

The mechanism that allows the US federal government to regulate climate change is on the chopping block.

On Tuesday, US Environmental Protection Agency administrator Lee Zeldin announced that the agency is taking aim at the endangerment finding, a 2009 rule that’s essentially the tentpole supporting federal greenhouse-gas regulations.

This might sound like an obscure legal situation, but it’s a really big deal for climate policy in the US. So buckle up, and let’s look at what this rule says now, what the proposed change looks like, and what it all means.

To set the stage, we have to go back to the Clean Air Act of 1970, the law that essentially gave the EPA the power to regulate air pollution. (Stick with me—I promise I’ll keep this short and not get too into the legal weeds.)

There were some pollutants explicitly called out in this law and its amendments, including lead and sulfur dioxide. But it also required the EPA to regulate new pollutants that were found to be harmful. In the late 1990s and early 2000s, environmental groups and states began petitioning the agency to include greenhouse-gas pollution.

In 2007, the Supreme Court ruled that greenhouse gases qualify as air pollutants under the Clean Air Act, and that the EPA should study whether they’re a danger to public health. In 2009, the incoming Obama administration looked at the science and ruled that greenhouse gases pose a threat to public health because they cause climate change. That’s the endangerment finding, and it’s what allows the agency to pass rules to regulate greenhouse gases.  

The original case and argument were specifically about vehicles and the emissions from tailpipes, but this finding was eventually used to allow the agency to set rules around power plants and factories, too. It essentially underpins climate regulations in the US.

Fast-forward to today, and the Trump administration wants to reverse the endangerment finding. In a proposed rule released on Tuesday, the EPA argues that the Clean Air Act does not, in fact, authorize the agency to set emissions standards to address global climate change. Zeldin, in an appearance on the conservative politics and humor podcast Ruthless that preceded the official announcement, called the proposal the “largest deregulatory action in the history of America.”

The administration was already moving to undermine the climate regulations that rely on this rule. But this move directly targets a “fundamental building block of EPA’s climate policy,” says Deborah Sivas, an environmental-law professor at Stanford University.

The proposed rule will go up for public comment, and the agency will then take that feedback and come up with a final version. It’ll almost certainly get hit with legal challenges and will likely wind up in front of the Supreme Court.

One note here is that the EPA makes a mostly legal argument in the proposed rule reversal rather than focusing on going after the science of climate change, says Madison Condon, an associate law professor at Boston University. That could make it easier for the Supreme Court to eventually uphold it, she says, though this whole process is going to take a while. 

If the endangerment finding goes down, it would have wide-reaching ripple effects. “We could find ourselves in a couple years with no legal tools to try and address climate change,” Sivas says.

To take a step back for a moment, it’s wild that we’ve ended up in this place where a single rule is so central to regulating emissions. US climate policy is held up by duct tape and a dream. Congress could have, at some point, passed a law that more directly allows the EPA to regulate greenhouse-gas emissions (the last time we got close was a 2009 bill that passed the House but never made it to the Senate). But here we are.

This move isn’t a surprise, exactly. The Trump administration has made it very clear that it is going after climate policy in every way that it can. But what’s most striking to me is that we’re not operating in a shared reality anymore when it comes to this subject. 

While top officials tend to acknowledge that climate change is real, there’s often a “but” followed by talking points from climate denial’s list of greatest hits. (One of the more ridiculous examples is the statement that carbon dioxide is good, actually, because it helps plants.) 

Climate change is real, and it’s a threat. And the US has emitted more greenhouse gases into the atmosphere than any other country in the world. It shouldn’t be controversial to expect the government to be doing something about it. 

What role should oil and gas companies play in climate tech?

This week, I have a new story out about Quaise, a geothermal startup that’s trying to commercialize new drilling technology. Using a device called a gyrotron, the company wants to drill deeper, cheaper, in an effort to unlock geothermal power anywhere on the planet. (For all the details, check it out here.) 

For the story, I visited Quaise’s headquarters in Houston. I also took a trip across town to Nabors Industries, Quaise’s investor and tech partner and one of the biggest drilling companies in the world. 

Standing on top of a drilling rig in the backyard of Nabors’s headquarters, I couldn’t stop thinking about the role oil and gas companies are playing in the energy transition. This industry has resources and energy expertise—but also a vested interest in fossil fuels. Can it really be part of addressing climate change?

The relationship between Quaise and Nabors is one that we see increasingly often in climate tech—a startup partnering up with an established company in a similar field. (Another one that comes to mind is in the cement industry, where Sublime Systems has seen a lot of support from legacy players including Holcim, one of the biggest cement companies in the world.) 

Quaise got an early investment from Nabors in 2021, to the tune of $12 million. Now the company also serves as a technical partner for the startup. 

“We are agnostic to what hole we’re drilling,” says Cameron Maresh, a project engineer on the energy transition team at Nabors Industries. The company is working on other investments and projects in the geothermal industry, Maresh says, and the work with Quaise is the culmination of a yearslong collaboration: “We’re just truly excited to see what Quaise can do.”

From the outside, this sort of partnership makes a lot of sense for Quaise. It gets resources and expertise. Meanwhile, Nabors is getting involved with an innovative company that could represent a new direction for geothermal. And maybe more to the point, if fossil fuels are to be phased out, this deal gives the company a stake in next-generation energy production.

There is so much potential for oil and gas companies to play a productive role in addressing climate change. One report from the International Energy Agency examined the role these legacy players could take:  “Energy transitions can happen without the engagement of the oil and gas industry, but the journey to net zero will be more costly and difficult to navigate if they are not on board,” the authors wrote. 

In the agency’s blueprint for what a net-zero emissions energy system could look like in 2050, about 30% of energy could come from sources where the oil and gas industry’s knowledge and resources are useful. That includes hydrogen, liquid biofuels, biomethane, carbon capture, and geothermal. 

But so far, the industry has hardly lived up to its potential as a positive force for the climate. Also in that report, the IEA pointed out that oil and gas producers made up only about 1% of global investment in climate tech in 2022. Investment has ticked up a bit since then, but still, it’s tough to argue that the industry is committed. 

And now that climate tech is falling out of fashion with the government in the US, I’d venture to guess that we’re going to see oil and gas companies increasingly pulling back on their investments and promises. 

BP recently backtracked on previous commitments to cut oil and gas production and invest in clean energy. And last year the company announced that it had written off $1.1 billion in offshore wind investments in 2023 and wanted to sell other wind assets. Shell closed down all its hydrogen fueling stations for vehicles in California last year. (This might not be all that big a loss, since EVs are beating hydrogen by a huge margin in the US, but it’s still worth noting.) 

So oil and gas companies are investing what amounts to pennies and often backtrack when the political winds change direction. And, let’s not forget, fossil-fuel companies have a long history of behaving badly. 

In perhaps the most notorious example, scientists at Exxon modeled climate change in the 1970s, and their forecasts turned out to be quite accurate. Rather than publish that research, the company downplayed how climate change might affect the planet. (For what it’s worth, company representatives have argued that this was less of a coverup and more of an internal discussion that wasn’t fit to be shared outside the company.) 

While fossil fuels are still part of our near-term future, oil and gas companies, and particularly producers, would need to make drastic changes to align with climate goals—changes that wouldn’t be in their financial interest. Few seem inclined to really take the turn needed. 

As the IEA report puts it:  “In practice, no one committed to change should wait for someone else to move first.”

In defense of air-conditioning

I’ll admit that I’ve rarely hesitated to point an accusing finger at air-conditioning. I’ve outlined in many stories and newsletters that AC is a significant contributor to global electricity demand, and it’s only going to suck up more power as temperatures rise.

But I’ll also be the first to admit that it can be a life-saving technology, one that may become even more necessary as climate change intensifies. And in the wake of Europe’s recent deadly heat wave, it’s been oddly villainized.

We should all be aware of the growing electricity toll of air-conditioning, but the AC hate is misplaced. Yes, AC is energy intensive, but so is heating our homes, something that’s rarely decried in the same way that cooling is. Both are tools for comfort and, more important, for safety.  So why is air-conditioning cast as such a villain?

In the last days of June and the first few days of July, temperatures hit record highs across Europe. Over 2,300 deaths during that period were attributed to the heat wave, according to early research from World Weather Attribution, an academic collaboration that studies extreme weather. And human-caused climate change accounted for 1,500 of those deaths, the researchers found. (That is, the toll would have been under 800 if not for the higher temperatures caused by climate change.)

We won’t have the official death toll for months, but these early figures show just how deadly heat waves can be. Europe is especially vulnerable, because in many countries, particularly in the northern part of the continent, air-conditioning is not common.

Popping on a fan, drawing the shades, or opening the windows on the hottest days used to cut it in many European countries. Not anymore. The UK was 1.24 °C (2.23 °F) warmer over the past decade than it was between 1961 and 1990, according to the Met Office, the UK’s national climate and weather service. One recent study found that homes across the country are uncomfortably or dangerously warm much more frequently than they used to be.

The reality is, some parts of the world are seeing an upward shift in temperatures that’s not just uncomfortable but dangerous. As a result, air-conditioning usage is going up all over the world, including in countries with historically low rates.

The reaction to this long-term trend, especially in the face of the recent heat wave, has been apoplectic. People are decrying AC across social media and opinion pages, arguing that we need to suck it up and deal with being a little bit uncomfortable.

Now, let me preface this by saying that I do live in the US, where roughly 90% of homes are cooled with air-conditioning today. So perhaps I am a little biased in favor of AC. But it baffles me when people talk about air-conditioning this way.

I spent a good amount of my childhood in the southeastern US, where it’s very obvious that heat can be dangerous. I was used to many days where temperatures were well above 90 °F (32 °C), and the humidity was so high your clothes would stick to you as soon as you stepped outdoors. 

For some people, being active or working in those conditions can lead to heatstroke. Prolonged exposure, even if it’s not immediately harmful, can lead to heart and kidney problems. Older people, children, and those with chronic conditions can be more vulnerable.

In other words, air-conditioning is more than a convenience; in certain conditions, it’s a safety measure. That should be an easy enough concept to grasp. After all, in many parts of the world we expect access to heating in the name of safety. Nobody wants to freeze to death. 

And it’s important to clarify here that while air-conditioning does use a lot of electricity in the US, heating actually has a higher energy footprint. 

In the US, about 19% of residential electricity use goes to air-conditioning. That sounds like a lot, and it’s significantly more than the 12% of electricity that goes to space heating. However, we need to zoom out to get the full picture, because electricity makes up only part of a home’s total energy demand. A lot of homes in the US use natural gas for heating—that’s not counted in the electricity being used, but it’s certainly part of the home’s total energy use.

When we look at the total, space heating accounts for a full 42% of residential energy consumption in the US, while air conditioning accounts for only 9%.
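Here’s the accounting spelled out. The shares come from the figures above; the fraction of home energy delivered as electricity is my own illustrative assumption, chosen to be consistent with those shares:

```python
# Why AC dominates electricity stats but not total-energy stats:
# electricity is only part of a home's energy use, and much heating burns gas.
ELEC_FRACTION = 0.45        # assumption: ~45% of home energy arrives as electricity

ac_of_electricity = 0.19    # from the text
heat_of_electricity = 0.12  # from the text

print(f"AC share of total energy: ~{ac_of_electricity * ELEC_FRACTION:.0%}")  # ~9%

# Heating's 42% of total energy is its electricity slice plus all the
# natural gas, propane, and fuel oil that never show up in electricity stats.
heat_of_total = 0.42        # from the text
non_electric_heating = heat_of_total - heat_of_electricity * ELEC_FRACTION
print(f"Non-electric heating share: ~{non_electric_heating:.0%}")             # ~37%
```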

I’m not letting AC off the hook entirely here. There’s obviously a difference between running air-conditioning (or other, less energy-intensive technologies) when needed to stay safe and blasting systems at max capacity because you prefer it chilly. And there’s a lot of grid planning we’ll need to do to make sure we can handle the expected influx of air-conditioning around the globe. 

But the world is changing, and temperatures are rising. If you’re looking for a villain, look beyond the air conditioner and into the atmosphere.

China’s energy dominance in three charts

China is the dominant force in next-generation energy technologies today. It’s pouring hundreds of billions of dollars into putting renewable sources like wind and solar on its grid, manufacturing millions of electric vehicles, and building out capacity for energy storage, nuclear power, and more. This investment has been transformational for the country’s economy and has contributed to establishing China as a major player in global politics. 

Meanwhile, in the US, a massive new tax and spending bill just cut hundreds of billions in credits, grants, and loans for clean energy technologies. It’s a stark reversal from previous policies, and it could have massive effects at a time when it feels as if everyone is chasing China on energy.

So while we all try to get our heads around what’s next for climate tech in the US and beyond, let’s look at just how dominant China is when it comes to clean energy, as documented in three charts.

China is on an absolute tear installing wind and solar power. The country reached nearly 900 gigawatts of installed solar capacity at the end of 2024, and the rapid pace of building has continued into this year. An additional 198 GW was installed between January and May, with 93 GW coming in May alone.

For context, those additions over the first five months of the year account for more than double the capacity of the grid in California. Not the renewables capacity of that state—the entire grid. 
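As a rough sanity check on that comparison (California’s total generating capacity is an assumed round figure here, on the order of 85 gigawatts):

```python
# Five months of Chinese solar additions vs. California's entire grid
china_solar_jan_may_gw = 198  # from the text
california_grid_gw = 85       # assumed approximate total generating capacity

print(f"{china_solar_jan_may_gw / california_grid_gw:.1f}x California's grid")  # ~2.3x
```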

Meanwhile, the policy shift in the US is projected to slow down new solar and wind additions. With tax credits and other support stripped away, much of the new capacity that was expected to come online by the end of the decade will now face delays or cancellations. 

That’s significant because, of all the new electricity generation capacity that’s come online in the US recently, renewables make up the vast majority. Solar and battery storage alone are expected to make up over 80% of capacity additions in 2025. So slowing down wind and solar basically means slowing down adding new electricity capacity, at a time when demand is very much set to rise. (Hello, AI?)

China’s EV market is also booming—the country is currently flirting with a big symbolic milestone, nearing the point where over half of all new vehicles sold in the country are electric. (It already passed that mark for a single month and could do so on a yearly basis in the next couple of years.)

It’s not just selling those vehicles within China, either: the country exports them globally, with customers including established markets like Europe and growing ones like India and Brazil. As of 2024, more than 70% of electric and plug-in hybrid vehicles on roads around the world were built in China.

Some leaders at legacy automakers are taking notice. Ford CEO Jim Farley shared some striking comments at the Aspen Ideas Festival last month about how far ahead China is on vehicle technology and price. “They have far superior in-vehicle technology,” Farley said. “We are in a global competition with China, and it’s not just EVs. And if we lose this, we do not have a future Ford.”

Looking ahead, China is still pouring money into renewables, storage, grids, and energy efficiency technologies. It’s also outspending the rest of the world on nuclear power. The country tripled its investment in renewable power from 2015 to 2025.

The situation isn’t set in stone, though: The US actually very briefly overtook China on battery investments over the past year, as Cat Clifford at Cipher reported last week. But changes resulting from the new bill could very quickly reverse that progress, cementing China as the place for battery manufacturing and innovation.

In a story earlier this week, the MIT economist David Autor laid out the high stakes for this race. Advanced manufacturing and technology are beneficial for US prosperity, and putting public support and trade protections in place for key industries could be crucial to keeping them going, he says.  

I’d add that this whole discussion shouldn’t be about a zero-sum competition between the US and China. But many experts argue that the US, where I and many readers live, is surrendering its leadership and ability to develop key energy technologies of the future.  

Ultimately, the numbers don’t lie: By a lot of measures, China is the world’s leader in energy. The question is, will that change anytime soon?  

Google’s electricity demand is skyrocketing

We got two big pieces of energy news from Google this week. The company announced that it’s signed an agreement to purchase electricity from a fusion company’s forthcoming first power plant. Google also released its latest environmental report, which shows that its energy use from data centers has doubled since 2020.

Taken together, these two bits of news offer a fascinating look at just how desperately big tech companies are hunting for clean electricity to power their data centers as energy demand and emissions balloon in the age of AI. Of course, we don’t know exactly how much of this pollution is attributable to AI because Google doesn’t break that out. (Also a problem!) So, what’s next and what does this all mean? 

Let’s start with fusion: Google’s deal with Commonwealth Fusion Systems is intended to provide the tech giant with 200 megawatts of power. This will come from Commonwealth’s first commercial plant, a facility planned for Virginia that the company refers to as the Arc power plant. The agreement represents half its capacity.

What’s important to note here is that this power plant doesn’t exist yet. In fact, Commonwealth still needs to get its Sparc demonstration reactor, located outside Boston, up and running. That site, which I visited in the fall, should be completed in 2026.

(An aside: This isn’t the first deal between Big Tech and a fusion company. Microsoft signed an agreement with Helion a couple of years ago to buy 50 megawatts of power from a planned power plant, scheduled to come online in 2028. Experts expressed skepticism in the wake of that deal, as my colleague James Temple reported.)

Nonetheless, Google’s announcement is a big moment for fusion, in part because of the size of the commitment and also because Commonwealth, a spinout company from MIT’s Plasma Science and Fusion Center, is seen by many in the industry as a likely candidate to be the first to get a commercial plant off the ground. (MIT Technology Review is owned by MIT but is editorially independent.)

Google leadership was very up-front about the length of the timeline. “We would certainly put this in the long-term category,” said Michael Terrell, Google’s head of advanced energy, in a press call about the deal.

The news of Google’s foray into fusion comes just days after the tech giant’s release of its latest environmental report. While the company highlighted some wins, some of the numbers in this report are eye-catching, and not in a positive way.

Google’s emissions have increased by over 50% since 2019, rising 6% in the last year alone. That’s decidedly the wrong direction for a company that’s set a goal to reach net-zero greenhouse-gas emissions by the end of the decade.

It’s true that the company has committed billions to clean energy projects, including big investments in next-generation technologies like advanced nuclear and enhanced geothermal systems. Those deals have helped dampen emissions growth, but it’s an arguably impossible task to keep up with the energy demand the company is seeing.

Google’s electricity consumption from data centers was up 27% from the year before. It has doubled since 2020, reaching over 30 terawatt-hours. That’s nearly the annual electricity consumption of the entire country of Ireland.
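Those two figures imply a steep compounding trend. A quick sketch, assuming the doubling spans the four years from 2020 through 2024:

```python
# Implied growth from the report's two headline figures
years = 4                       # assumption: the doubling spans 2020-2024
implied_cagr = 2 ** (1 / years) - 1
print(f"Doubling in {years} years implies ~{implied_cagr:.0%} average annual growth")

latest_twh = 30.0               # "over 30 terawatt-hours," from the report
print(f"Four more years at last year's 27% pace: ~{latest_twh * 1.27 ** 4:.0f} TWh")
```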

As an outsider, it’s tempting to point the finger at AI, since that technology has crashed into the mainstream and percolated into every corner of Google’s products and business. And yet the report downplays the role of AI. Here’s one bit that struck me:

“However, it’s important to note that our growing electricity needs aren’t solely driven by AI. The accelerating growth of Google Cloud, continued investments in Search, the expanding reach of YouTube, and more, have also contributed to this overall growth.”

There is enough wiggle room in that statement to drive a large electric truck through. When I asked about the relative contributions here, company representative Mara Harris said via email that they don’t break out what portion comes from AI. When I followed up asking if the company didn’t have this information or just wouldn’t share it, she said she’d check but didn’t get back to me.

I’ll make the point here that we’ve made before, including in our recent package on AI and energy: Big companies should be disclosing more about the energy demands of AI. We shouldn’t be guessing at this technology’s effects.

Google has put a ton of effort and resources into setting and chasing ambitious climate goals. But as its energy needs and those of the rest of the industry continue to explode, it’s obvious that this problem is getting tougher, and it’s also clear that more transparency is a crucial part of the way forward.

It’s officially summer, and the grid is stressed

It’s crunch time for the grid this week. As I’m writing this newsletter, it’s 100 °F (nearly 38 °C) here in New Jersey, and I’m huddled in the smallest room in my apartment with the shades drawn and a single window air conditioner working overtime.  

Large swaths of the US have seen brutal heat this week, with multiple days in a row nearing or exceeding record-breaking temperatures. Spain recently went through a dramatic heat wave too, as did the UK, which is unfortunately bracing for another one soon. As I’ve been trying to stay cool, I’ve had my eyes on a website tracking electricity demand, which is also hitting record highs. 

We rely on electricity to keep ourselves comfortable, and more to the point, safe. These are the moments we design the grid for: when need is at its very highest. The key to keeping everything running smoothly during these times might be just a little bit of flexibility. 

While heat waves happen all over the world, let’s take my local grid as an example. I’m one of the roughly 65 million people covered by PJM Interconnection, the largest grid operator in the US. PJM covers Virginia, West Virginia, Ohio, Pennsylvania, and New Jersey, as well as bits of a couple of neighboring states.

Earlier this year, PJM forecast that electricity demand would peak at 154 gigawatts (GW) this summer. On Monday, just a few days past the official start of the season, the grid blew past that, averaging over 160 GW between 5 p.m. and 6 p.m. 

The fact that we’ve already passed both last year’s peak and this year’s forecasted one isn’t necessarily a disaster (PJM says the system’s total capacity is over 179 GW this year). But it is a good reason to be a little nervous. Usually, PJM sees its peak in July or August. As a reminder, it’s June. So we shouldn’t be surprised if we see electricity demand creep to even higher levels later in the summer.
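Put in grid-operator terms, the question is headroom. A quick calculation with the numbers above:

```python
# How much margin PJM had during Monday's peak
capacity_gw = 179      # PJM's stated total capacity this summer
monday_peak_gw = 160   # average demand between 5 p.m. and 6 p.m.
forecast_gw = 154      # PJM's pre-season peak forecast

headroom_gw = capacity_gw - monday_peak_gw
print(f"Headroom: {headroom_gw} GW ({headroom_gw / capacity_gw:.0%} of capacity)")  # ~11%
print(f"Demand beat the forecast by {monday_peak_gw - forecast_gw} GW")             # 6 GW
```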

It’s not just PJM, either. MISO, the grid that covers most of the Midwest and part of the US South, put out a notice that it expected to be close to its peak demand this week. And the US Department of Energy released an emergency order for parts of the Southeast, which allows the local utility to boost generation and skirt air pollution limits while demand is high.

This pattern of maxing out the grid is only going to continue. That’s because climate change is pushing temperatures higher, and electricity demand is simultaneously swelling (in part because of data centers like those that power AI). PJM’s forecasts show that the summer peak in 2035 could reach nearly 210 GW, well beyond the 179 GW it can provide today. 

Of course, we need more power plants to be built and connected to the grid in the coming years (at least if we don’t want to keep ancient, inefficient, expensive coal plants running, as we covered last week). But there’s a quiet strategy that could limit the new construction needed: flexibility.

The power grid has to be built for moments of the absolute highest demand we can predict, like this heat wave. But most of the time, a decent chunk of capacity that exists to get us through these peaks sits idle—it only has to come online when demand surges. Another way to look at that, however, is that by shaving off demand during the peak, we can reduce the total infrastructure required to run the grid. 

If you live somewhere that’s seen a demand crunch during a heat wave, you might have gotten an email from your utility asking you to hold off on running the dishwasher in the early evening or to set your air conditioner a few degrees higher. These are called demand response programs. Some utilities run more organized versions, paying customers to ramp down their usage during periods of peak demand.

PJM’s demand response programs add up to almost eight gigawatts of power—enough to power over 6 million homes. With these programs, PJM basically avoids having to fire up the equivalent of multiple massive nuclear power plants. (It did activate these programs on Monday afternoon during the hottest part of the day.)

As electricity demand goes up, building in and automating this sort of flexibility could go a long way to reducing the amount of new generation needed. One report published earlier this year found that if data centers agreed to have their power curtailed for just 0.5% of the time (around 40 hours out of a year of continuous operation), the grid could handle about 18 GW of new power demand in the PJM region without adding generation capacity. 

For the whole US, this level of flexibility would allow the grid to take on an additional 98 gigawatts of new demand without building any new power plants to meet it. To give you a sense of just how significant that would be, all the nuclear reactors in the US add up to 97 gigawatts of capacity.
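The flexibility math is worth spelling out. The hours figure follows directly from the definition; the gigawatt figures are the report’s:

```python
# What "curtailed 0.5% of the time" means, and what it buys
HOURS_PER_YEAR = 24 * 365
curtailable_hours = 0.005 * HOURS_PER_YEAR
print(f"0.5% of a year: ~{curtailable_hours:.0f} hours")  # ~44, i.e. "around 40"

# Headroom unlocked by that flexibility, per the report cited above
pjm_headroom_gw = 18   # PJM region
us_headroom_gw = 98    # whole US
us_nuclear_gw = 97     # entire US nuclear fleet, for scale
print(f"PJM: {pjm_headroom_gw} GW; US: {us_headroom_gw} GW "
      f"(~{us_headroom_gw / us_nuclear_gw:.2f}x the US nuclear fleet)")
```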

Tweaking the thermostat and ramping down data centers during hot summer days won’t solve the demand crunch on their own, but it certainly won’t hurt to have more flexibility.

Inside the US power struggle over coal

Coal power is on life support in the US. It used to carry the grid with cheap electricity, but now plants are closing left and right.

There are a lot of potential reasons to let coal continue its journey to the grave. Carbon emissions from coal plants are a major contributor to climate change. And those facilities are also often linked with health problems in nearby communities, as reporter Alex Kaufman explored in a new feature story on Puerto Rico’s only coal-fired power plant.

But the Trump administration wants to keep coal power alive, and the US Department of Energy recently ordered some plants to stay open past their scheduled closures. Here’s why there’s a power struggle over coal.

Coal used to be king in the US, but the country has dramatically reduced its dependence on the fuel over the past two decades. It accounted for about 20% of the electricity generated in 2024, down from roughly half in 2000.

While the demise of coal has been great for US emissions, the real driver is economics. Coal used to be the cheapest form of electricity generation around, but the fracking boom handed that crown to natural gas over a decade ago. And now, even cheaper wind and solar power is coming online in droves.

Economics was a major factor in the planned retirement of the J.H. Campbell coal plant in Michigan, which was set to close at the end of May, Dan Scripps, chair of the Michigan Public Service Commission, told the Washington Post.

Then, on May 23, US Energy Secretary Chris Wright released an emergency order that requires the plant to remain open. Wright’s order mandates 90 more days of operation, and the order can be extended past that, too. It states that the goal is to minimize the risk of blackouts and address grid security issues before the start of summer.

The DOE’s authority to require power plants to stay open is something that’s typically used in emergencies like hurricanes, rather than in response to something as routine as … seasons changing. 

It’s true that there’s growing concern in the US about meeting demand for electricity, which is rising for the first time after being basically flat for decades. (The recent rise is in large part due to massive data centers, like those needed to run AI. Have I mentioned we have a great package on AI and energy?)

And we are indeed heading toward summer, which is when the grid is stretched to its limits. In the New York area, the forecast high is nearly 100 °F (38 °C) for several days next week—I’ll certainly have my air conditioner on, and I’m sure I’ll soon be getting texts asking me to limit electricity use during times of peak demand.

But is keeping old coal plants open the answer to a stressed grid?

It might not be the most economical way forward. In fact, in almost every case today, it’s actually cheaper to build new renewables capacity than to keep existing coal plants running in the US, according to a 2023 report from Energy Innovation, an energy think tank. And coal is only getting more expensive—in an updated analysis, Energy Innovation found that three-quarters of coal plants saw costs rising faster than inflation between 2021 and 2024.

Granted, solar and wind aren’t always available, while coal plants can be fired up on demand. And getting new projects built and connected to the grid will take time (right now, there’s a huge backlog of renewable projects waiting in the interconnection queue). But some experts say we actually don’t need new generation that urgently anyway, if big electricity users can be flexible with their demand.

And we’re already seeing batteries come to the rescue on the grid at times of stress. Between May 2024 and April 2025, US battery storage capacity increased by about 40%. When Texas faced high temperatures last month, batteries did a lot to help the state make it through without blackouts, as this Bloomberg story points out. Costs are falling, too; prices were about 19% lower in 2024 than they were in 2023.

Even as the Trump administration is raising concerns about grid reliability, it’s moved to gut programs designed to get more electricity generation and storage online, like the tax credits that support wind, solar, and battery production and installation. 
