As many of us ramp up our shopping, baking, and planning for the holiday season, nuclear power plants are also getting ready for one of their busiest seasons of the year.
Here in the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.
This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. It leaves some big shoes to fill for next-generation technology hoping to join the fleet in the next few years.
Generally, nuclear reactors operate at constant levels, as close to full capacity as possible. In 2024, for commercial reactors worldwide, the average capacity factor—the ratio of actual energy output to the theoretical maximum—was 83%. North America rang in at an average of about 90%.
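The capacity-factor arithmetic is simple enough to check yourself. Here’s a minimal sketch in Python for a hypothetical 1,000-megawatt reactor; the plant size and annual output are illustrative assumptions, not figures for any real plant:

```python
# Capacity factor = actual energy output / theoretical maximum output.
# Illustrative numbers only: a hypothetical 1,000 MW reactor over one year.

rated_power_mw = 1_000        # nameplate capacity (assumed)
hours_per_year = 8_760        # 365 days * 24 hours

theoretical_max_mwh = rated_power_mw * hours_per_year  # 8,760,000 MWh
actual_output_mwh = 7_884_000                          # assumed annual output

capacity_factor = actual_output_mwh / theoretical_max_mwh
print(f"Capacity factor: {capacity_factor:.0%}")  # -> 90%, the North American average
```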
(I’ll note here that it’s not always fair to just look at this number to compare different kinds of power plants—natural-gas plants can have lower capacity factors, but it’s mostly because they’re more likely to be intentionally turned on and off to help meet uneven demand.)
Those high capacity factors also undersell the fleet’s true reliability—a lot of the downtime is scheduled. Reactors need to refuel every 18 to 24 months, and operators tend to schedule those outages for the spring and fall, when electricity demand isn’t as high as when we’re all running our air conditioners or heaters at full tilt.
Take a look at this chart of nuclear outages from the US Energy Information Administration. There are some days, especially at the height of summer, when outages are low and nearly all commercial reactors in the US are operating at close to full capacity. On July 28 of this year, the fleet was operating at 99.6% of capacity. Compare that with the 77.6% on October 18, as reactors were taken offline for refueling and maintenance. Now we’re heading into another busy season, when reactors are coming back online and outages are entering another low point.
That’s not to say all outages are planned. At the Sequoyah nuclear power plant in Tennessee, a generator failure in July 2024 took one of two reactors offline, an outage that lasted nearly a year. (The utility also did some maintenance during that time to extend the life of the plant.) Then, just days after that reactor started back up, the entire plant had to shut down because of low water levels.
And who can forget the incident earlier this year when jellyfish wreaked havoc on not one but two nuclear power plants in France? In the second instance, the squishy creatures got into the filters of equipment that sucks water out of the English Channel for cooling at the Paluel nuclear plant. They forced the plant to cut output by nearly half, though it was restored within days.
Barring jellyfish disasters and occasional maintenance, the global nuclear fleet operates quite reliably. That wasn’t always the case, though. In the 1970s, reactors operated at an average capacity factor of just 60%. They were shut down nearly as often as they were running.
The fleet of reactors today has benefited from decades of experience. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry.
Next-generation reactors that use new materials for fuel or cooling will be able to borrow some lessons from the existing fleet, but they’ll also face novel challenges.
That could mean early demonstration reactors aren’t as reliable as the current commercial fleet at first. “First-of-a-kind nuclear, just like with any other first-of-a-kind technologies, is very challenging,” says Koroush Shirvan, a professor of nuclear science and engineering at MIT.
That means it will probably take time for molten-salt reactors, small modular reactors, or any of the other designs out there to overcome technical hurdles and settle into their own rhythm. It’s taken decades to get to a place where we take it for granted that the nuclear fleet can follow a neat seasonal curve based on electricity demand.
There will always be hurricanes and electrical failures and jellyfish invasions that cause some unexpected problems and force nuclear plants (or any power plants, for that matter) to shut down. But overall, the fleet today operates at an extremely high level of consistency. One of the major challenges ahead for next-generation technologies will be proving that they can do the same.
Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on the planet’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.
A startup called Zanskar announced today that it’s used AI and other advanced computational methods to uncover a blind geothermal system—meaning there are no signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system to be identified and confirmed as a commercial prospect in over 30 years.
Historically, finding new sites for geothermal power was a matter of brute force. Companies spent a lot of time and money drilling deep wells, looking for places where it made sense to build a plant.
Zanskar’s approach is more precise. With advancements in AI, the company aims to “solve this problem that had been unsolvable for decades, and go and finally find those resources and prove that they’re way bigger than previously thought,” says Carl Hoiland, the company’s cofounder and CEO.
To support a successful geothermal power plant, a site needs high temperatures at an accessible depth and space for fluid to move through the rock and deliver heat. In the case of the new site, which the company calls Big Blind, the prize is a reservoir that reaches 250 °F at about 2,700 feet below the surface.
As electricity demand rises around the world, geothermal systems like this one could provide a source of constant power without emitting the greenhouse gases that cause climate change.
The company has used its technology to identify many potential hot spots. “We have dozens of sites that look just like this,” says Joel Edwards, Zanskar’s cofounder and CTO. But for Big Blind, the team has done the fieldwork to confirm its model’s predictions.
The first step to identifying a new site is to use regional AI models to search large areas. The team trains models on known hot spots and on simulations it creates. Then it feeds in geological, satellite, and other types of data, including information about fault lines. The models can then predict where potential hot spots might be.
One strength of using AI for this task is that it can handle the immense complexity of the information at hand. “If there’s something learnable in the earth, even if it’s a very complex phenomenon that’s hard for us humans to understand, neural nets are capable of learning that, if given enough data,” Hoiland says.
Once models identify a potential hot spot, a field crew heads to the site, which might span roughly 100 square miles, and collects additional information through techniques that include drilling shallow holes to look for elevated underground temperatures.
In the case of Big Blind, this prospecting information gave the company enough confidence to purchase a federal lease, allowing it to develop a geothermal plant. With that lease secured, the team returned with large drill rigs and drilled thousands of feet down in July and August. The workers found the hot, permeable rock they expected.
Next they must secure permits to build and connect to the grid and line up the investments needed to build the plant. The team will also continue testing at the site, including long-term testing to track heat and water flow.
“There’s a tremendous need for methodology that can look for large-scale features,” says John McLennan, technical lead for resource management at Utah FORGE, a national lab field site for geothermal energy funded by the US Department of Energy. The new discovery is “promising,” McLennan adds.
Big Blind is Zanskar’s first confirmed discovery that wasn’t previously explored or developed, but the company has used its tools for other geothermal exploration projects. Earlier this year, it announced a discovery at a site that had previously been explored by the industry but not developed. The company also purchased and revived a geothermal power plant in New Mexico.
And this could be just the beginning for Zanskar. As Edwards puts it, “This is the start of a wave of new, naturally occurring geothermal systems that will have enough heat in place to support power plants.”
If we didn’t have pictures and videos, I almost wouldn’t believe the imagery that came out of this year’s UN climate talks.
Over the past few weeks in Belém, Brazil, attendees dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.
While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”
As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?
This is the 30th time that leaders have gathered for the Conference of the Parties, or COP, an annual UN conference focused on climate change. COP30 also marks 10 years since the gathering that produced the Paris Agreement, in which world powers committed to limiting global warming to “well below” 2.0 °C above preindustrial levels, with a goal of staying below the 1.5 °C mark. (That’s 3.6 °F and 2.7 °F, respectively, for my fellow Americans.)
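Since those figures are temperature differences rather than absolute temperatures, the conversion uses the 9/5 factor alone, without the usual +32 offset; a quick check:

```python
# Temperature *differences* convert with the 9/5 factor alone;
# the +32 offset applies only to absolute temperatures.
def delta_c_to_f(delta_c):
    return delta_c * 9 / 5

print(delta_c_to_f(2.0))  # 3.6 degrees F
print(delta_c_to_f(1.5))  # 2.7 degrees F
```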
Before the conference kicked off this year, host country Brazil’s president, Luiz Inácio Lula da Silva, cast this as the “implementation COP” and called for negotiators to focus on action, and specifically to deliver a road map for a global transition away from fossil fuels.
The science is clear—burning fossil fuels emits greenhouse gases and drives climate change. Reports have shown that meeting the goal of limiting warming to 1.5 °C would require stopping new fossil-fuel exploration and development.
The problem is, “fossil fuels” might as well be a curse word at global climate negotiations. Two years ago, fights over how to address fossil fuels brought talks at COP28 to a standstill. (It’s worth noting that the conference was hosted in Dubai, in the UAE, and its president was literally the head of the country’s national oil company.)
The agreement in Dubai ended up including a line that called on countries to transition away from fossil fuels in energy systems. It was short of what many advocates wanted, which was a more explicit call to phase out fossil fuels entirely. But it was still hailed as a win. As I wrote at the time: “The bar is truly on the floor.”
And yet this year, it seems we’ve dug into the basement.
At one point about 80 countries, a little under half of those present, demanded a concrete plan to move away from fossil fuels.
But oil producers like Saudi Arabia were insistent that fossil fuels not be singled out. Other countries, including some in Africa and Asia, also made a very fair point: Western nations like the US have burned the most fossil fuels and benefited from it economically. This contingent maintains that legacy polluters have a unique responsibility to finance the transition for less wealthy and developing nations rather than simply barring them from taking the same development route.
The US, by the way, didn’t send a formal delegation to the talks for the first time in 30 years. But the absence spoke volumes. In a statement to the New York Times that sidestepped the COP talks, White House spokesperson Taylor Rogers said that President Trump had “set a strong example for the rest of the world” by pursuing new fossil-fuel development.
To sum up: Some countries are economically dependent on fossil fuels, some don’t want to stop depending on fossil fuels without incentives from other countries, and the current US administration would rather keep using fossil fuels than switch to other energy sources.
All those factors combined help explain why, in its final form, COP30’s agreement doesn’t name fossil fuels at all. Instead, there’s a vague line saying leaders should take into account the decisions made in Dubai, and an acknowledgment that the “global transition towards low greenhouse-gas emissions and climate-resilient development is irreversible and the trend of the future.”
Hopefully, that’s true. But it’s concerning that even on the world’s biggest stage, naming what we’re supposed to be transitioning away from and putting together any sort of plan to actually do it seems to be impossible.
Electricity demand is surging globally and is projected to grow 40% over the next decade. Data center investment hit $580 billion in 2025 alone—surpassing global oil spending. In the US, data centers will account for half of all electricity growth through 2030.
Air-conditioning and emerging economies are reshaping energy consumption. Rising temperatures and growing prosperity in developing nations will add over 500 gigawatts of peak demand by 2035, dwarfing data centers’ contribution to overall electricity growth.
Renewables are finally overtaking coal, but the transition remains too slow. Solar and wind led electricity generation in the first half of 2025, and nuclear capacity is poised to increase by a third this decade. Yet global emissions are likely to hit record highs again this year.
One of the dominant storylines I’ve been following through 2025 is electricity—where and how demand is going up, how much it costs, and how this all intersects with that topic everyone is talking about: AI.
Last week, the International Energy Agency released the latest version of the World Energy Outlook, the annual report that takes stock of the current state of global energy and looks toward the future. It contains some interesting insights and a few surprising figures about electricity, grids, and the state of climate change. So let’s dig into some numbers, shall we?
We’re in the age of electricity
Energy demand in general is going up around the world as populations increase and economies grow. But electricity is the star of the show, with demand projected to grow by 40% in the next 10 years.
China has accounted for the bulk of electricity growth for the past 10 years, and that’s going to continue. But emerging economies outside China will be a much bigger piece of the pie going forward. And while advanced economies, including the US and Europe, have seen flat demand in the past decade, the rise of AI and data centers will cause demand to climb there as well.
Air-conditioning is a major source of rising demand. Growing economies will give more people access to air-conditioning; income-driven AC growth will add about 330 gigawatts to global peak demand by 2035. Rising temperatures will tack on another 170 GW in that time. Together, that’s an increase of over 10% from 2024 levels.
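Those numbers stack up the way you’d expect; here’s a quick back-of-envelope check. Note that the implied 2024 baseline in the last step is my own inference from the report’s “over 10%” framing, not a figure the IEA states:

```python
# Back-of-envelope check on the air-conditioning demand figures above.
income_driven_gw = 330   # added peak demand from rising incomes, by 2035
warming_driven_gw = 170  # added peak demand from rising temperatures, by 2035

total_gw = income_driven_gw + warming_driven_gw
print(total_gw)  # 500 GW of added global peak demand

# "An increase of over 10% from 2024 levels" implies a 2024 global peak
# of under ~5,000 GW. (This baseline is an inference, not an IEA figure.)
print(total_gw / 0.10)  # 5,000 GW upper bound on the implied baseline
```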
AI is a local story
This year, AI has been the story that none of us can get away from. One number that jumped out at me from this report: In 2025, investment in data centers is expected to top $580 billion. That’s more than the $540 billion spent on the global oil supply.
It’s no wonder, then, that the energy demands of AI are in the spotlight. One key takeaway is that these demands are vastly different in different parts of the world.
Data centers still make up less than 10% of the projected increase in total electricity demand between now and 2035. It’s not nothing, but it’s far outweighed by sectors like industry and appliances, including air conditioners. Even electric vehicles will add more demand to the grid than data centers.
But AI will be the dominant factor for the grid in some parts of the world. In the US, data centers will account for half the growth in total electricity demand between now and 2030.
And as we’ve covered in this newsletter before, data centers present a unique challenge: they tend to cluster together, so demand is concentrated in specific communities and on specific grids. Half the data center capacity that’s in the pipeline is close to large cities.
Look out for a coal crossover
As we ask more from our grid, the key factor that’s going to determine what all this means for climate change is what’s supplying the electricity we’re using.
As it stands, the world’s grids still primarily run on fossil fuels, so every bit of electricity growth comes with planet-warming greenhouse-gas emissions attached. That’s slowly changing, though.
Together, solar and wind were the leading source of electricity in the first half of this year, overtaking coal for the first time. Coal use could peak and begin to fall by the end of this decade.
Nuclear could play a role in replacing fossil fuels: After two decades of stagnation, the global nuclear fleet could increase by a third in the next 10 years. Solar is set to continue its meteoric rise, too. Of all the electricity demand growth we’re expecting in the next decade, 80% is in places with high-quality solar irradiation—meaning they’re good spots for solar power.
Ultimately, there are a lot of ways in which the world is moving in the right direction on energy. But we’re far from moving fast enough. Global emissions are, once again, going to hit a record high this year. To limit warming and prevent the worst effects of climate change, we need to remake our energy system, including electricity, and we need to do it faster.
Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. Over the course of three days of main-stage sessions, I learned about innovations in AI, biotech, and robotics.
But as you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google.
They spoke about the tech giant’s growing energy demand and what sort of technologies the company is looking to in order to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise.
I’ve been closely following Google’s work in energy this year. Like the rest of the tech industry, the company is seeing ballooning electricity demand in its data centers. That could get in the way of a major goal that Google has been talking about for years.
See, back in 2020, the company announced an ambitious target: by 2030, it aimed to run on carbon-free energy 24-7. Basically, that means Google would purchase enough renewable energy on the grids where it operates to meet its entire electricity demand—and those purchases would be matched in time, so the electricity is generated when the company is actually using it. (For more on the nuances of Big Tech’s renewable-energy pledges, check out James’s piece from last year.)
Google’s is an ambitious goal, and on stage, Tian said that the company is still aiming for it but acknowledged that it’s looking tough with the rise of AI.
“It was always a moonshot,” she said. “It’s something very, very hard to achieve, and it’s only harder in the face of this growth. But our perspective is, if we don’t move in that direction, we’ll never get there.”
Google’s total electricity demand more than doubled from 2020 to 2024, according to its latest Environmental Report. As for that goal of 24-7 carbon-free energy? The company is basically treading water. While it was at 67% for its data centers in 2020, last year it came in at 66%.
Not going backwards is something of an accomplishment, given the rapid growth in electricity demand. But it still leaves the company some distance away from its finish line.
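Treading water on a percentage while demand doubles still means the absolute numbers moved a lot. A rough illustration, assuming demand exactly doubled (the report says it more than doubled):

```python
# Holding the carbon-free share nearly flat while demand doubles means
# roughly doubling absolute carbon-free energy purchases.
demand_2020 = 1.0                # normalized 2020 data-center demand
demand_2024 = 2.0 * demand_2020  # assumed exact doubling

cfe_2020 = 0.67 * demand_2020    # 67% carbon-free matched in 2020
cfe_2024 = 0.66 * demand_2024    # 66% carbon-free matched in 2024

print(f"Absolute carbon-free energy grew {cfe_2024 / cfe_2020:.2f}x")  # ~1.97x
```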
To close the gap, Google has been signing what feels like constant deals in the energy space. Two recent announcements that Tian talked about on stage were a project involving carbon capture and storage at a natural-gas plant in Illinois and plans to reopen a shuttered nuclear power plant in Iowa.
Let’s start with carbon capture. Google signed an agreement to purchase most of the electricity from a new natural-gas plant, which will capture and store about 90% of its carbon dioxide emissions.
That announcement was controversial, with critics arguing that carbon capture keeps fossil-fuel infrastructure online longer and still releases greenhouse gases and other pollutants into the atmosphere.
One question that James raised on stage: Why build a new natural-gas plant rather than add equipment to an already existing facility? Tacking on equipment to an operational plant would mean cutting emissions from the status quo, rather than adding entirely new fossil-fuel infrastructure.
The company did consider many existing plants, Tian said. But, as she put it, “Retrofits aren’t going to make sense everywhere.” Space can be limited at existing plants, for example, and many may not have the right geology to store carbon dioxide underground.
“We wanted to lead with a project that could prove this technology at scale,” Tian said. This site has an operational Class VI well, the type used for permanent sequestration, she added, and it also doesn’t require a big pipeline buildout.
Tian also touched on the company’s recent announcement that it’s collaborating with NextEra Energy to reopen the Duane Arnold Energy Center, a nuclear power plant in Iowa. The company will purchase electricity from that plant, which is scheduled to reopen in 2029.
As I covered in a story earlier this year, Duane Arnold was basically the final option in the US for companies looking to reopen shuttered nuclear power plants. “Just a few years back, we were still closing down nuclear plants in this country,” Tian said on stage.
While each reopening will look a little different, Tian highlighted the groups working to restart the Palisades plant in Michigan, which last spring became the first such reopening to be announced. “They’re the real heroes of the story,” she said.
I’m always interested to get a peek behind the curtain at how Big Tech is thinking about energy. I’m skeptical but certainly interested to see how Google’s, and the rest of the industry’s, goals shape up over the next few years.
Welcome to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.
This week, Casey Crownhart, senior reporter for energy at MIT Technology Review, and Pilita Clark, a columnist at the FT, consider how China’s rapid renewables buildout could help it leapfrog on AI progress.
Casey Crownhart writes:
In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying here in the US, where massive data centers are waiting to come online, and it doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.
It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace. With too little new power capacity coming online, the strain is starting to show: Electricity bills are ballooning for people who live in places where data centers place a growing load on the grid.
If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China.
China installed 429 GW of new power generation capacity in 2024, more than six times the net capacity added in the US during that time.
China still generates much of its electricity with coal, but that makes up a declining share of the mix. Rather, the country is focused on installing solar, wind, nuclear, and gas at record rates.
The US, meanwhile, is focused on reviving its ailing coal industry. Coal-fired power plants are polluting and, crucially, expensive to run. Aging plants in the US are also less reliable than they used to be, generating electricity just 42% of the time, compared with a 61% capacity factor in 2014.
It’s not a great situation. And unless the US changes something, we risk becoming consumers as opposed to innovators in both energy and AI tech. Already, China earns more from exporting renewables than the US does from oil and gas exports.
Building and permitting new renewable power plants would certainly help, since they’re currently the cheapest and fastest to bring online. But wind and solar are politically unpopular with the current administration. Natural gas is an obvious candidate, though there are concerns about delays with key equipment.
One quick fix would be for data centers to be more flexible. If they agreed not to suck electricity from the grid during times of stress, new AI infrastructure might be able to come online without any new energy infrastructure.
One study from Duke University found that if data centers agree to curtail their consumption just 0.25% of the time (roughly 22 hours over the course of the year), the grid could provide power for about 76 GW of new demand. That’s like adding about 5% of the entire grid’s capacity without needing to build anything new.
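The study’s headline numbers are easy to sanity-check. In the sketch below, the total US grid capacity is an implied figure I back out from the “about 5%” comparison, not a number from the study itself:

```python
# Sanity check on the Duke University flexibility figures cited above.
hours_per_year = 8_760
curtailment_share = 0.0025                 # 0.25% of the time

print(hours_per_year * curtailment_share)  # ~21.9 hours a year, i.e. "roughly 22"

# If 76 GW is "about 5%" of grid capacity, the implied US total is ~1,500 GW.
# (That baseline is an inference, not a figure from the study.)
new_demand_gw = 76
print(new_demand_gw / 0.05)                # 1,520 GW implied capacity
```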
But flexibility wouldn’t be enough to truly meet the swell in AI electricity demand. What do you think, Pilita? What would get the US out of these energy constraints? Is there anything else we should be thinking about when it comes to AI and its energy use?
Pilita Clark responds:
I agree. Data centers that can cut their power use at times of grid stress should be the norm, not the exception. Likewise, we need more deals like those giving cheaper electricity to data centers that let power utilities access their backup generators. Both reduce the need to build more power plants, which makes sense regardless of how much electricity AI ends up using.
This is a critical point for countries across the world, because we still don’t know exactly how much power AI is going to consume.
Forecasts for what data centers will need in as little as five years’ time vary wildly, from less than twice today’s levels to four times as much.
This is partly because there’s a lack of public data about AI systems’ energy needs. It’s also because we don’t know how much more efficient these systems will become. The US chip designer Nvidia said last year that its specialized chips had become 45,000 times more energy efficient over the previous eight years.
Moreover, we have been very wrong about tech energy needs before. At the height of the dot-com boom in 1999, it was erroneously claimed that the internet would need half the US’s electricity within a decade—necessitating a lot more coal power.
Still, some countries are clearly feeling the pressure already. In Ireland, data centers chew up so much power that new connections have been restricted around Dublin to avoid straining the grid.
Some regulators are eyeing new rules forcing tech companies to provide enough power generation to match their demand. I hope such efforts grow. I also hope AI itself helps boost power abundance and, crucially, accelerates the global energy transition needed to combat climate change. OpenAI’s Sam Altman said in 2023 that “once we have a really powerful super intelligence, addressing climate change will not be particularly difficult.”
The evidence so far is not promising, especially in the US, where renewable projects are being axed. Still, the US may end up being an outlier in a world where ever cheaper renewables made up more than 90% of new power capacity added globally last year.
Europe is aiming to power one of its biggest data centers predominantly with renewables and batteries. But the country leading the green energy expansion is clearly China.
The 20th century was dominated by countries rich in the fossil fuels whose reign the US now wants to prolong. China, in contrast, may become the world’s first green electrostate. If it does this in a way that helps it win an AI race the US has so far controlled, it will mark a striking chapter in economic, technological, and geopolitical history.
Casey Crownhart replies:
I share your skepticism of tech executives’ claims that AI will be a groundbreaking help in the race to address climate change. To be fair, AI is progressing rapidly. But we don’t have time to wait for technologies standing on big claims with nothing to back them up.
When it comes to the grid, for example, experts say there’s potential for AI to help with planning and even operating, but these efforts are still experimental.
Meanwhile, much of the world is making measurable progress on transitioning to newer, greener forms of energy. How that will affect the AI boom remains to be seen. What is clear is that AI is changing our grid and our world, and we need to be clear-eyed about the consequences.
Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.”
This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want.
That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much electricity as Japan uses in a year.)
But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?”
That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals.
The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table.
I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently, when we need to focus on the bigger picture.
Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.
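The microwave comparison holds up under simple unit math. A quick check, assuming a typical microwave drawing about 1,100 watts (the wattage is my assumption):

```python
# Unit check on the "microwave for about a second" comparison.
query_wh = 0.3                   # reported energy per chatbot query, in watt-hours
query_joules = query_wh * 3_600  # 1 Wh = 3,600 J -> 1,080 J

microwave_watts = 1_100          # assumed typical microwave power draw
print(query_joules / microwave_watts)  # ~0.98 seconds of microwave time
```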
But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak. (To learn more, read our Power Hungry series online.)
Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online.
Just as with climate change, we need to look at this as a system rather than a series of individual choices.
Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too.
That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact.
Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet.
Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.
The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.
Heat Exchange
MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.
As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings.
Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts. In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.
The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases.
There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.
The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.
In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.
Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.
Many people already distrust the idea of engineering the atmosphere—at whatever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.
Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.
The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.
The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.
Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”
But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First, much of that chemistry depends on highly reactive radicals that react with any solid surface; second, any particle may become coated by the background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area.
(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)
In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. The point of using sulfate for such studies was never that it was perfect but that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.
If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment.
Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms.
The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns.
Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.
We don’t want to claim that every single answer lies in academia. We’d be fools to not be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?
Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.
But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.
The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?
Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture.
Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?
David Keith is a professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.
Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs.
The company uses water-based reactions borrowed from battery chemistry to purify copper in a process that could be less polluting than traditional smelting. The hope is that this alternative will also help ease growing strain on the copper supply chain.
“We’re really focused on addressing the copper supply crisis that’s looming ahead of us,” says Randy Allen, Still Bright’s cofounder and CEO.
Copper is a crucial ingredient in everything from electrical wiring to cookware today. And clean energy technologies like solar panels and electric vehicles are introducing even more demand for the metal. Global copper demand is expected to grow by 40% between now and 2040.
As demand swells, so do the climate and environmental impacts of copper extraction, the process of refining ore into a pure metal. There’s also growing concern about the geographic concentration of the copper supply chain. Copper is mined all over the world, and historically, many of those mines had smelters on-site to process what they extracted. (Smelters form pure copper metal by essentially burning concentrated copper ore at high temperatures.) But today, the smelting industry has consolidated, with many mines shipping copper concentrates to smelters in Asia, particularly China.
That’s partly because smelting uses a lot of energy and chemicals, and it can produce sulfur-containing emissions that can harm air quality. “They shipped the environmental and social problems elsewhere,” says Simon Jowitt, a professor at the University of Nevada, Reno, and director of the Nevada Bureau of Mines and Geology.
It’s possible to scrub pollution out of a smelter’s emissions, and smelters are much cleaner than they used to be, Jowitt says. But overall, smelting centers aren’t exactly known for environmental responsibility.
So even countries like the US, which have plenty of copper reserves and operational mines, largely ship copper concentrates, which contain up to about 30% copper, to China or other countries for smelting. (There are just two operational ore smelters in the US today.)
Still Bright avoids the pyrometallurgical process that smelters use in favor of a chemical approach, partially inspired by devices called vanadium flow batteries.
In the startup’s reactor, vanadium reacts with the copper compounds in copper concentrates. The copper metal remains a solid, leaving many of the impurities behind in the liquid phase. The whole thing takes between 30 and 90 minutes. The solid, which contains roughly 70% copper after this reaction, can then be fed into another, established process in the mining industry, called solvent extraction and electrowinning, to make copper that’s over 99% pure.
This is far from the first attempt to use a water-based, chemical approach to processing copper. Today, some copper ore is processed with acid, for example, and Ceibo, a startup based in Chile, is trying to use a version of that process on the type of copper that’s traditionally smelted. The difference here is the specific chemistry, particularly the choice to use vanadium.
One of Still Bright’s founders, Jon Vardner, was researching copper reactions and vanadium flow batteries when he came up with the idea to marry a copper extraction reaction with an electrical charging step that could recycle the vanadium.
After the vanadium reacts with the copper, the liquid soup can be fed into an electrolyzer, which uses electricity to turn the vanadium back into a form that can react with copper again. It’s basically the same process that vanadium flow batteries use to charge up.
While other chemical processes for copper refining require high temperatures or extremely acidic conditions to get the copper into solution, speed the reaction along, and ensure all the copper reacts, Still Bright’s process can run at ambient temperatures.
One of the major benefits of this approach is cutting the pollution from copper refining. Traditional smelting heats the target material to over 1,200 °C (about 2,200 °F), forming sulfur-containing gases that are released into the atmosphere.
Still Bright’s process produces hydrogen sulfide gas as a by-product instead. It’s still a dangerous material, but one that can be effectively captured and converted into useful side products, Allen says.
Another source of potential pollution is the sulfide minerals left over after the refining process, which can form sulfuric acid when exposed to air and water (a problem known as acid mine drainage, common in mining waste). Still Bright’s process will also produce that material, and the company plans to carefully track it, ensuring that it doesn’t leak into groundwater.
The company is currently testing its process in the lab in New Jersey and designing a pilot facility in Colorado, which will have the capacity to make about two tons of copper per year. Next will be a demonstration-scale reactor, which will have a 500-ton annual capacity and should come online in 2027 or 2028 at a mine site, Allen says. Still Bright recently raised an $18.7 million seed round to help with the scale-up process.
How the scale-up goes will be a crucial test of the technology and of whether the typically conservative mining industry will jump on board, UNR’s Jowitt says: “You want to see what happens on an industrial scale. And I think until that happens, people might be a little reluctant to get into this.”
It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.”
There was no word on who “they” were, but maybe it was better that way.
She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding unknown technology to control the weather and wreak havoc on their supposed enemies. This claim, fundamentally preposterous from a scientific standpoint, has grown louder and more common in recent years. It pops up over and over when extreme weather strikes: in Dubai in April 2024, in Australia in July 2022, in the US after California floods and hurricanes like Helene and Milton. In the UK, conspiracy theorists claimed that the government had fixed the weather to be sunny and rain-free during the first covid lockdown in March 2020. Most recently, the theories spread again when disastrous floods hit central Texas this past July. The idea has even inspired some antigovernment extremists to threaten and try to destroy weather radar towers.
This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.
But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth behind the grandiose claims.
Sure, there is no current way for humans to control the weather. We can’t cause major floods or redirect hurricanes or other powerful storm systems, simply because the energy involved is far too great for humans to alter significantly.
But there are ways we can modify the weather. The key difference is the scale of what is possible.
The most common weather modification practice is called cloud seeding, and it involves injecting small amounts of salts or other materials into clouds with the goal of juicing levels of rain or snow. This is typically done in dry areas that lack regular precipitation. Research shows that it can in fact work, though advances in technology reveal that its impact is modest—coaxing maybe 5% to 10% more moisture out of otherwise stubborn clouds.
But the fact that humans can influence weather at all gives conspiracy theorists a foothold in the truth. Add to this a spotty history of actual efforts by governments and militaries to control major storms, as well as other emerging but not-yet-deployed-at-any-scale technologies that aim to address climate change … and you can see where things get confusing.
So while more sweeping claims of weather control are ultimately ridiculous from a scientific standpoint, they can’t be dismissed as entirely stupid.
This all helped make the conspiracy theories swirling after the recent Texas floods particularly loud and powerful. Just days earlier, 100 miles away from the epicenter of the floods, in a town called Runge, the cloud-seeding company Rainmaker had flown a single-engine plane and released about 70 grams of silver iodide into some clouds; a modest drizzle of less than half a centimeter of rain followed. But once the company saw a storm front in the forecast, it suspended its work; there was no need to seed with rain already on the way.
“We conducted an operation on July 2, totally within the scope of what we were regulatorily permitted to do,” Augustus Doricko, Rainmaker’s founder and CEO, recently told me. Still, when as much as 20 inches of rain fell soon afterward not too far away, and more than 100 people died, the conspiracy theory machine whirred into action.
As Doricko told the Washington Post in the tragedy’s aftermath, he and his company faced “nonstop pandemonium” on social media; eventually someone even posted photos from outside Rainmaker’s office, along with its address. Doricko told me a few factors played into the pile-on, including a lack of familiarity with the specifics of cloud seeding, as well as what he called “deliberately inflammatory messaging from politicians.” Indeed, theories about Rainmaker and cloud seeding spread online via prominent figures including Greene and former national security advisor Mike Flynn.
Unfortunately, all this is happening at the same time as the warming climate is making heavy rainfall and the floods that accompany it more and more likely. “These events will become more frequent,” says Emily Yeh, a professor of geography at the University of Colorado who has examined approaches and reactions to weather modification around the world. “There is a large, vocal group of people who are willing to believe anything but climate change as the reason for Texas floods, or hurricanes.”
Worsening extremes, increasing weather modification activity, improving technology, a sometimes shady track record—the conditions are perfect for an otherwise niche conspiracy theory to spread to anyone desperate for tidy explanations of increasingly disastrous events.
Here, we break down just what’s possible and what isn’t—and address some of the more colorful reasons why people may believe things that go far beyond the facts.
What we can do with the weather—and who is doing it
The basic concepts behind cloud seeding have been around for about 80 years, and government interest in the topic goes back even longer than that.
The primary practice involves using planes, drones, or generators on the ground to inject tiny particles of stuff, usually silver iodide, into existing clouds. The particles act as nuclei around which moisture can build up, forming ice crystals that can get heavy enough to fall out of the cloud as snow or rain.
“Weather modification is an old field; starting in the 1940s there was a lot of excitement,” says David Delene, a research professor of atmospheric sciences at the University of North Dakota and an expert on cloud seeding. In a 1952 US Senate report proposing a committee to study weather modification, the authors noted that a small amount of extra rain could “produce electric power worth hundreds of thousands of dollars” and “greatly increase crop yields.” It also cited potential uses like “reducing soil erosion,” “breaking up hurricanes,” and even “cutting holes in clouds so that aircraft can operate.”
But, as Delene adds, “that excitement … was not realized.”
Through the 1980s, extensive research, often funded or conducted by Washington, yielded a much better understanding of atmospheric science and cloud physics, though it proved extremely difficult to actually demonstrate the efficacy of the technology itself. In other words, scientists learned the basic principles behind cloud seeding and understood on a theoretical level that it should work—but it was hard to tell how big an impact it was having on rainfall.
There is huge variability between one cloud and another, one storm system and another, one mountain or valley and another; for decades, the tools available to researchers did not really allow for firm conclusions on exactly how much extra moisture, if any, they were getting out of any given operation. Interest in the practice died down to a low hum by the 1990s.
But over the past couple of decades, the early excitement has returned.
Cloud seeding can enhance levels of rain and snow
While the core technology has largely stayed the same, several projects launched in the US and abroad starting in the 2000s have combined statistical modeling with new and improved aircraft-based measurements, ground-based radar, and more to provide better answers on what results are actually achievable when seeding clouds.
“I think we’ve identified unequivocally that we can indeed modify the cloud,” says Jeff French, an associate professor and head of the University of Wyoming’s Department of Atmospheric Science, who has worked for years on the topic. But even as scientists have come to largely agree that the practice can have an impact on precipitation, they also largely recognize that the impact probably has some fairly modest upper limits—far short of massive water surges.
“There is absolutely no evidence that cloud seeding can modify a cloud to the extent that would be needed to cause a flood,” French says. Floods require a few factors, he adds—a system with plenty of moisture available that stays localized to a certain spot for an extended period. “All of these things which cloud seeding has zero effect on,” he says.
The technology simply operates on a different level. “Cloud seeding really is looking at making an inefficient system a little bit more efficient,” French says.
As Delene puts it: “Originally [researchers] thought, well, we could, you know, do 50%, 100% increases in precipitation,” but “I think if you do a good program you’re not going to get more than a 10% increase.”
Asked for his take on a theoretical limit, French was hesitant—“I don’t know if I’m ready to stick my neck out”—but agreed on “maybe 10-ish percent” as a reasonable guess.
Another cloud seeding expert, Katja Friedrich from the University of Colorado–Boulder, says that any grander potential would be obvious by this point: We wouldn’t have “spent the last 100 years debating—within the scientific community—if cloud seeding works,” she writes in an email. “It would have been easy to separate the signal (from cloud seeding) from the noise (natural precipitation).”
It can also (probably) suppress precipitation
Sometimes cloud seeding is used not to boost rain and snow but rather to try to reduce their severity—or, more specifically, to change the size of individual rain droplets or hailstones.
One of the most prominent examples has been in parts of Canada, where hailstorms can be devastating; a 2024 event in Calgary, for instance, was the country’s second-most-expensive disaster ever, with over $2 billion in damages.
Insurance companies in Alberta have been working together for nearly three decades on a cloud seeding program that’s aimed at reducing some of that damage. In these cases, the silver iodide or other particles are meant to act essentially as competition for other “embryos” inside the cloud, increasing the total number of hailstones and thus reducing each individual stone’s average size.
Smaller hailstones mean less damage when they reach the ground. The insurance companies—which continue to pay for the program—say losses have been cut by 50% since the program started, though scientists aren’t quite as confident in its overall success. A 2023 study published in Atmospheric Research examined 10 years of cloud seeding efforts in the province and found that the practice did appear to reduce the potential for damage in about 60% of seeded storms—while in others, it had no effect or was even associated with increased hail (though the authors said this could have been due to natural variation).
Similar techniques are also sometimes deployed to try to improve the daily forecast just a bit. During the 2008 Olympics, for instance, China engaged in a form of cloud seeding aimed at reducing rainfall. As MIT Technology Review detailed back then, officials with the Beijing Weather Modification Office planned to use a liquid-nitrogen-based coolant that could increase the number of water droplets in a cloud while reducing their size; this can get droplets to stay aloft a little longer instead of falling out of the cloud. Though it is tough to prove that it definitively would have rained without the effort, the targeted opening ceremony did stay dry.
So, where is this happening?
The United Nations’ World Meteorological Organization says that some form of weather modification is taking place in “more than 50 countries” and that “demand for these weather modification activities is increasing steadily due to the incidence of droughts and other calamities.”
The biggest user of cloud-seeding tech is arguably China. Following the work around the Olympics, the country announced a huge expansion of its weather modification program in 2020, claiming it would eventually run operations for agricultural relief and other functions, including hail suppression, over an area about the size of India and Algeria combined. Since then, China has occasionally announced bits of progress—including updates to weather modification aircraft and the first use of drones for artificial snow enhancement. Overall, it spends billions on the practice, with more to come.
Elsewhere, desert countries have taken an interest. In 2024, Saudi Arabia announced an expanded research program on cloud seeding—Delene, of the University of North Dakota, was part of a team that conducted experiments in various parts of that country in late 2023. Its neighbor the United Arab Emirates began “rain enhancement” activities back in 1990; this program too has faced outcry, especially after more than a typical year’s worth of rain fell in a single day in 2024, causing massive flooding. (Bloomberg recently published a story about persistent questions regarding the country’s cloud seeding program; in response to the story, French wrote in an email that the “best scientific understanding is still that cloud seeding CANNOT lead to these types of events.” Other experts we asked agreed.)
In the US, a 2024 Government Accountability Office report on cloud seeding said that at least nine states have active programs. These are sometimes run directly by the state and sometimes contracted out through nonprofits like the South Texas Weather Modification Association to private companies, including Doricko’s Rainmaker and North Dakota–based Weather Modification. In August, Doricko told me that Rainmaker had grown to 76 employees since it launched in 2023. It now runs cloud seeding operations in Utah, Idaho, Oregon, California, and Texas, as well as forecasting services in New Mexico and Arizona. And in an answer that may further fuel the conspiracy fire, he added they are also operating in one Middle Eastern country; when I asked which one, he’d only say, “Can’t tell you.”
What we cannot do
The versions of weather modification that the conspiracy theorists envision most often—significantly altering monsoons or hurricanes or making the skies clear and sunny for weeks at a time—have so far proved impossible to carry out. But that’s not necessarily for lack of trying.
The US government attempted to alter a hurricane in 1947 as part of a program dubbed Project Cirrus. In collaboration with GE, government scientists seeded clouds with pellets of dry ice, the idea being that the falling pellets could induce supercooled liquid in the clouds to crystallize into ice. After they did this, the storm took a sharp left turn and struck the area around Savannah, Georgia. This was a significant moment for budding conspiracy theories, since a GE scientist who had been working with the government said he was “99% sure” the cyclone swerved because of their work. Other experts disagreed and showed that such storm trajectories are, in reality, perfectly possible without intervention. Perhaps unsurprisingly, public outrage and threats of lawsuits followed.
It took some time for the hubbub to die down, after which several US government agencies continued—unsuccessfully—trying to alter and weaken hurricanes with a long-running cloud seeding program called Project Stormfury. Around the same time, the US military joined the fray with Operation Popeye, essentially trying to harness weather as a weapon in the Vietnam War—engaging in cloud seeding efforts over Vietnam, Cambodia, and Laos in the late 1960s and early 1970s, with an eye toward increasing monsoon rains and bogging down the enemy. Though it was never really clear whether these efforts worked, the Nixon administration tried to deny them, going so far as to lie to the public and even to congressional committees.
More recently and less menacingly, there have been experiments with Dyn-O-Gel—a Florida company’s super-absorbent powder, intended to be dropped into storm clouds to sop up their moisture. In the early 2000s, the company carried out experiments with the stuff in thunderstorms, and it had grand plans to use it to weaken tropical cyclones. But according to one former NOAA scientist, you would need to drop almost 38,000 tons of it, requiring nearly 380 individual plane trips, in and around even a relatively small cyclone’s eyewall to really affect the storm’s strength. And then you would have to do that again an hour and a half later, and so on. Reality tends to get in the way of the biggest weather modification ideas.
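For a sense of scale, here is the back-of-envelope arithmetic implied by those reported figures (a rough sketch, taking both numbers at face value):

$$
\frac{38{,}000 \text{ tons of powder}}{380 \text{ plane trips}} \approx 100 \text{ tons per plane, per trip}
$$

And the whole airlift would need to be repeated roughly every 90 minutes for as long as the storm lasted.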
Beyond trying to control storms, there are some other potential weather modification technologies out there that are either just getting started or have never taken off. Swiss researchers have tried to use powerful lasers to induce cloud formation, for example; in Australia, where climate change is imperiling the Great Barrier Reef, artificial clouds created when ship-based nozzles spray moisture into the sky have been used to try to protect the vital ecosystem. In each case, the efforts remain small, localized, and not remotely close to achieving the kinds of control the conspiracy theorists allege.
What is not weather modification—but gets lumped in with it
Weather control conspiracy theories are further fueled by a tendency to conflate cloud seeding and other legitimate weather modification research with concepts such as chemtrails—a full-on conspiracist fever dream about innocuous condensation trails left by jets—and solar geoengineering, a theoretical stopgap to cool the planet that has been the subject of much discussion and modeling research but has never been deployed at any large scale.
One controversial form of solar geoengineering, known as stratospheric aerosol injection, would involve having high-altitude jets drop tiny aerosol particles—sulfur dioxide, most likely—into the stratosphere to act essentially as tiny mirrors. They would reflect a small amount of sunlight back into space, leaving less energy to reach the ground and contribute to warming. To date, attempts to launch physical experiments in this space have been shouted down, and only tiny—though still controversial—commercial efforts have taken place.
One can see why it gets lumped in with cloud seeding: bits of stuff, dumped into the sky, with the aim of altering what happens down below. But the aims are entirely separate; geoengineering would alter the global average temperature rather than having measurable effects on momentary cloudbursts or hailstorms. Some research has suggested that the practice could alter monsoon patterns, a significant issue given their importance to much of the world’s agriculture, but it remains a fundamentally different practice from cloud seeding.
Still, the political conversation around supposed weather control often reflects this confusion. Greene, for instance, introduced a bill in July called the Clear Skies Act, which would ban all weather modification and geoengineering activities. (Greene’s congressional office did not respond to a request for comment.) And last year, Tennessee became the first state to enact a law to prohibit the “intentional injection, release, or dispersion, by any means, of chemicals, chemical compounds, substances, or apparatus … into the atmosphere with the express purpose of affecting temperature, weather, or the intensity of the sunlight.” Florida followed suit, with Governor Ron DeSantis signing SB 56 into law in June of this year for the same stated purpose.
Also this year, lawmakers in more than 20 other states have proposed some version of a ban on weather modification, often lumping it in with geoengineering, even though caution about the latter is much more widely endorsed. “It’s not a conspiracy theory,” one Pennsylvania lawmaker who cosponsored a similar bill told NBC News. “All you have to do is look up.”
Oddly enough, as Yeh of the University of Colorado points out, the places where bans have passed are states where weather modification isn’t really happening. “In a way, it’s easy for them to ban it, because, you know, nothing actually has to be done,” she says. In general, neither Florida nor Tennessee—nor any other part of the Southeast—needs any help finding rain. Basically, all weather modification activity in the US happens in the drier areas west of the Mississippi.
Finding a culprit
Doricko told me that in the wake of the Texas disaster, he has seen more people become willing to learn about the true capabilities of cloud seeding and move past the more sinister theories about it.
I asked him, though, about some of his company’s flashier branding: Until recently, visitors to the Rainmaker website were greeted right up top with the slogan “Making Earth Habitable.” Might this level of hype contribute to public misunderstanding or fear?
He said he is indeed aware that Earth is, currently, habitable, and called the slogan a “tongue-in-cheek, deliberately provocative statement.” Still, in contrast to the academics who seem more comfortable acknowledging weather modification’s limits, he has continued to tout its revolutionary potential. “If we don’t produce more water, then a lot of the Earth will become less habitable,” he said. “By producing more water via cloud seeding, we’re helping to conserve the ecosystems that do currently exist, that are at risk of collapse.”
While other experts cited that 10% figure as a likely upper limit of cloud seeding’s effectiveness, Doricko said they could eventually approach 20%, though that might be years away. “Is it literally magic? Like, can I snap my fingers and turn the Sahara green? No,” he said. “But can it help make a greener, verdant, and abundant world? Yeah, absolutely.”
It’s not all that hard to see why people still cling to magical thinking here. The changing climate is, after all, offering up what’s essentially weaponized weather, only with a much broader, longer-term mechanism behind it. There is no single sinister agency or company with its finger on the trigger, though it can be tempting to look for one; rather, we just have an atmosphere capable of holding more moisture and dropping it onto ill-prepared communities, and many of the people in power are doing little to mitigate the impacts.
“Governments are not doing a good job of responding to the climate crisis; they are often captured by fossil-fuel interests, which drive policy, and they can be slow and ineffective when responding to disasters,” Naomi Smith, a lecturer in sociology at the University of the Sunshine Coast in Australia who has written about conspiracy theories and weather events, writes in an email. “It’s hard to hold all this complexity, and conspiracy theorizing is one way of making it intelligible and understandable.”
“Conspiracy theories give us a ‘big bad’ to point the finger at, someone to blame and a place to put our feelings of anger, despair, and grief,” she writes. “It’s much less satisfying to yell at the weather, or to engage in the sustained collective action we actually need to tackle climate change.”
The sinister “they” in Greene’s accusations is, in other words, a far easier target than the real culprit.
Dave Levitan is an independent journalist, focused on science, politics, and policy. Find his work at davelevitan.com and subscribe to his newsletter at gravityisgone.com.