The SEC’s new climate rules were a missed opportunity to accelerate corporate action

This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

Companies must also disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat.

But crucially, companies will have to do so only when they determine that the information is financially “material,” giving them considerable latitude over whether to provide those details.

The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  

The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that their emissions are material.

And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

But the SEC could and should have gone much further. 

After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, what we will end up with are corporate reports of the previous year’s partial emissions, without any way to know what a company actually did to reduce its carbon pollution.

Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

All of which is to say that a new set of SEC carbon accounting and reporting rules that largely replicate the problems with voluntary corporate action, by failing to require consistent and actionable disclosures, isn’t going to drive the changes we need, at the speed we need. 

Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

It is now just good business to look for carbon reductions that actually save money.

The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.

Solar geoengineering could start soon if it starts small

For half a century, climate researchers have considered the possibility of injecting small particles into the stratosphere to counteract some aspects of climate change. The idea is that by reflecting a small fraction of sunlight back to space, these particles could partially offset the energy imbalance caused by accumulating carbon dioxide, thereby reducing warming as well as extreme storms and many other climate risks. 

Debates about this idea, a form of solar geoengineering called stratospheric aerosol injection (SAI), commonly focus either on small-scale outdoor research that seeks to understand the physical processes involved or on deployment at a climate-altering scale. The gulf between these is gigantic: an experiment might use mere kilograms of aerosol material whereas deployment that could substantially slow or even reverse warming would involve millions of metric tons per year—a billionfold difference in scale. Appreciably cooling the planet via SAI would also require a purpose-built fleet of high-altitude aircraft, which could take one or two decades to assemble. This long lead time encourages policymakers to ignore the hard decisions about regulating deployment of SAI. 

Such complacency is ill-advised. The barrier between research and deployment may be less distinct than is often assumed. Our analysis suggests a country or group of countries could conceivably start a subscale solar geoengineering deployment in as little as five years, one that would produce unmistakable changes in the composition of the stratosphere. A well-managed subscale deployment would benefit research by reducing important uncertainties about SAI, but it could not be justified as research alone—similar research could be carried out with a much smaller amount of aerosol particles. And it would have a non-negligible impact on the climate, providing as much cooling as sulfur pollution from international shipping did before the recent cleanup of shipping fuels. At the same time, the magnitude of the cooling would be small enough that its effects on climate, on a national or regional scale, would be very difficult to detect in the face of normal variability. 

While the climate impact of such a subscale deployment would be small (and most likely beneficial), the political impact could be profound. It could trigger a backlash that would upend climate geopolitics and threaten international stability. It could be an on-ramp to large-scale deployment. And it could be exploited by fossil fuel interests seeking to slow the essential task of cutting emissions. 

We oppose near-term deployment of solar geoengineering. In accord with the Climate Overshoot Commission, the most senior group of political leaders to examine the topic, we support a moratorium on deployment until the science is internationalized and critically assessed, and until some governance architecture is widely agreed upon. But if we are correct that such subscale deployments are plausible, then policymakers may need to confront solar geoengineering—its promise and disruptive potential, and its profound challenges to global governance—earlier than is now widely assumed. 

Obstacles to early deployment 

Humans already emit a huge quantity of aerosols into the troposphere (the turbulent lowest layer of the atmosphere) from sources such as shipping and heavy industry, but these aerosols fall to Earth or are removed by rainfall and other processes within about a week. Volcanic eruptions can have a more lasting effect. When eruptions are powerful enough to punch through the troposphere into the stratosphere, the aerosols deposited there can endure for roughly a year. SAI would, like the largest volcanic eruptions, inject aerosols or their precursors into the stratosphere. Given their vastly longer atmospheric endurance, aerosols placed there can have a cooling impact 100 times larger than they would if emitted at the surface. 

Getting aerosols to the stratosphere is another matter. Passenger jets routinely reach the lower stratosphere on transpolar flights. But to get efficient global coverage, aerosols are best deployed at low latitudes, where the stratosphere’s natural overturning circulation will carry them poleward and thus distribute them worldwide. The average height of the top of the troposphere is about 17 kilometers in the tropics, and models suggest injection needs to be a few kilometers higher than that to be captured in the upwelling stratospheric circulation. The altitude for efficient deployment is commonly assumed to be at least 20 kilometers, nearly twice the height at which commercial jets or large military aircraft cruise. 

Although small spy planes can cruise in this very thin air, they can carry only one to two metric tons of payload. That would be insufficient except for small-scale tests: offsetting a substantial fraction of global warming—say, 1 °C of cooling—would require platforms that could deliver several million metric tons per year of material to the stratosphere. Neither rockets nor balloons are suitable for hauling such a large mass to this high perch. Consequently, full-scale deployment would require a fleet of novel aircraft—a few hundred in order to achieve a 1 °C cooling target. Procuring just the first aircraft in the manner typical of large commercial or military aircraft development programs might take roughly a decade, and manufacturing the required fleet would take several years more. 

But starting with full-scale deployment is both imprudent and unlikely. Even if we are turning the global thermostat down, the faster we change the climate, the higher the risk of unforeseen impacts. A country or group of countries that wishes to deploy solar geoengineering is likely to appreciate the political and technical benefits of a slower start, one with a gradual reversal of warming that facilitates optimization and “learning by doing,” while minimizing the likelihood and impact of unintended consequences. 

We envision scenarios where, instead of attempting to inject aerosols in the most efficient way near the equator, a country or group of countries attempts to place a smaller amount of material in the lower stratosphere at higher latitudes. They could do this with existing aircraft, because the top of the troposphere slopes sharply downward as you move away from the equator. At 35° north and south, it is found at roughly 12 kilometers. Adding a 3-kilometer margin, an effective deployment altitude at 35° north and south would be 15 kilometers. This remains too high for airliners but is just below the 15.5-kilometer service ceiling of top-of-the-line business jets made by Gulfstream, Bombardier, and Dassault. The list of countries with territory at or near 35° north or south includes not only rich countries such as the US, Australia, Japan, South Korea, Spain, and China, but also poorer ones such as Morocco, Algeria, Iraq, Iran, Pakistan, India, Chile, and Argentina.

Subscale deployment

How might subscale deployment be accomplished? Most stratospheric scientific studies of aerosol injection assume the operative material is sulfur dioxide (SO2) gas, which is 50% sulfur by mass. Another plausible option is hydrogen sulfide (H2S), which cuts the mass requirement almost in half, though it is more hazardous to ground and flight crews than SO2 and thus might be eliminated from consideration. Carbon disulfide (CS2) gas cuts the mass requirement by 40% and is generally less hazardous than SO2. It is also possible to use elemental sulfur, which is the safest and easiest to handle, but this would require a method of combusting it on board before venting or the use of afterburners. No one has yet done the engineering studies required to determine which of these sulfur compounds would be the best choice. 
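The mass comparisons above follow directly from atomic masses. A minimal back-of-the-envelope sketch (our own check, not a calculation from the authors) reproduces the roughly 40% saving for CS2 and the near-halving for H2S:

```python
# Approximate standard atomic masses in g/mol
S, O, H, C = 32.06, 16.00, 1.008, 12.011

# (molar mass, sulfur atoms per molecule) for each candidate compound
compounds = {
    "SO2": (S + 2 * O, 1),  # sulfur dioxide, 50% sulfur by mass
    "H2S": (2 * H + S, 1),  # hydrogen sulfide
    "CS2": (C + 2 * S, 2),  # carbon disulfide
}

# Tons of each compound needed to deliver one ton of sulfur
per_ton_s = {name: mass / (n_s * S) for name, (mass, n_s) in compounds.items()}

for name, tons in per_ton_s.items():
    saving = 1 - tons / per_ton_s["SO2"]
    print(f"{name}: {tons:.2f} t per ton of sulfur ({saving:.0%} less mass than SO2)")
```

Because CS2 carries two sulfur atoms per molecule, it delivers a ton of sulfur with about 41% less mass than SO2, consistent with the figure cited above.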

Using assumptions confirmed with Gulfstream, we estimate that any of its G500/600 aircraft could loft about 10 kilotons of material per year to 15.5 kilometers. If highly mass-efficient CS2 were used, a fleet of no more than 15 aircraft could deliver 100 kilotons of sulfur a year. Aged but operable used G650s cost about $25 million. Adding in the cost of modification, maintenance, spare parts, salaries, fuel, materials, and insurance, we expect the average total cost of a decade-long subscale deployment would be about $500 million a year. Large-scale deployment would cost at least 10 times as much.
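The fleet arithmetic can be sketched from the figures above (10 kilotons of material per aircraft per year, and CS2 at roughly 84% sulfur by mass; the per-aircraft payload is the estimate stated in the text, not a published specification):

```python
import math

PAYLOAD_KT_PER_AIRCRAFT = 10.0   # kt of material lofted per aircraft per year (estimate above)
SULFUR_TARGET_KT = 100.0         # subscale scenario: kt of sulfur per year

# CS2 is about 84% sulfur by mass: 2 * 32.06 g/mol of sulfur per 76.13 g/mol of CS2
CS2_SULFUR_FRACTION = (2 * 32.06) / (12.011 + 2 * 32.06)

cs2_needed_kt = SULFUR_TARGET_KT / CS2_SULFUR_FRACTION          # kt of CS2 per year
aircraft_needed = math.ceil(cs2_needed_kt / PAYLOAD_KT_PER_AIRCRAFT)

print(f"{cs2_needed_kt:.0f} kt of CS2 per year, lofted by {aircraft_needed} aircraft")
```

This works out to roughly 119 kilotons of CS2 and about a dozen aircraft, comfortably within the "no more than 15" fleet described above.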

How much is 100 kilotons of sulfur per year? It is a mere 0.3% of current global annual emissions of sulfur pollution into the atmosphere. Its contribution to the health impact of particulate air pollution would be substantially less than a tenth of what it would be if the same amount were emitted at the surface. As for its impact on climate, it would be about 1% of the sulfur injected in the stratosphere by the 1992 eruption of Mount Pinatubo in the Philippines. That well-studied event supports the assertion that no high-consequence unknown effects would occur. 

At the same time, 100 kilotons of sulfur per year is not insubstantial: it would be more than twice the natural background flux of sulfur from the troposphere into the stratosphere, absent unusual volcanic activity. The cooling effect would be enough to delay the global rise in temperature by about a third of a year, an offset that would last as long as the subscale deployment was maintained. And because solar geoengineering is more effective at countering the rise in extreme precipitation than the rise in temperature, the deployment would delay the increasing intensity of tropical cyclones by more than half a year. These benefits are not negligible to those most at risk from climate impacts (though none of these benefits would necessarily be apparent due to the climate system’s natural variability).

We should mention that our 100 kilotons per year scenario is arbitrary. We define a subscale deployment to mean a deployment large enough to substantially increase the amount of aerosol in the stratosphere while being well below the level that is required to delay warming by a decade. With that definition, such a deployment could be several times larger or smaller than our sample scenario. 

Of course, no amount of solar geoengineering can eliminate the need to reduce the concentration of greenhouse gases in the atmosphere. At best, solar geoengineering is a supplement to emissions cuts. But even the subscale deployment scenario we consider here would be a significant supplement: over a decade, it would have approximately half the cooling effect of eliminating all emissions from the European Union. 

The politics of subscale deployment

The subscale deployment we’ve outlined here could serve several plausible scientific and technological goals. It would demonstrate the storage, lofting, and dispersion technologies for larger-scale deployment. If combined with an observational program, it would assess monitoring capabilities as well. It would directly clarify how sulfate is carried around the stratosphere and how sulfate aerosols interact with the ozone layer. After a few years of such a subscale deployment, we would have a far better understanding of the scientific and technological barriers to large-scale deployment. 

At the same time, subscale deployment would pose risks for the deployer. It could trigger political instability and invite retribution from other countries and international bodies that would not respond well to entities fiddling with the planet’s thermostat without global coordination and oversight. Opposition might stem from a deep-rooted aversion to environmental modification or from more pragmatic concerns that large-scale deployment would be detrimental to some regions. 

Deployers might be motivated by a wide range of considerations. Most obviously, a state or coalition of states might conclude that solar geoengineering could significantly reduce their climate risk, and that such a subscale deployment would strike an effective balance between the goals of pushing the world toward large-scale deployment and minimizing the risk of political backlash. 

The deployers could decide that a subscale project might make bigger interventions possible. While scientists may be comfortable drawing inferences about solar geoengineering from tiny experiments and models, politicians and the public may be very cautious about atmospheric interventions that can alter the climate system and affect all the creatures that dwell within it. A subscale deployment that encountered no major surprises could go a long way toward reducing extreme concerns about full-scale deployment. 

The deployers could also claim some limited benefit from the subscale deployment itself. While the effects would be too small to be readily evident on the ground, the methods used to attribute extreme weather events to climate change could substantiate claims of small reductions in the severity of such events. 

They might also argue that the deployment is simply restoring atmospheric protection that was recently lost. The reduction in sulfur emissions from ships is now saving lives by creating cleaner air, but it is also accelerating warming by thinning the reflective veil that such pollution created. The subscale scenario we sketched out would restore almost half of that sunshade protection, without the countervailing air pollution.  

The deployers might also convince themselves that their action was consistent with international law because they could perform deployment entirely within their domestic airspace and because the effects, while global, would not produce “significant transboundary harm,” the relevant threshold under customary international law. 

The governance implications of such a subscale deployment would depend on the political circumstances. If it were done by a major power without meaningful attempts at multilateral engagement, one would expect dramatic backlash. On the other hand, were deployment undertaken by a coalition that included highly climate-vulnerable states and that invited other states to join the coalition and develop a shared governance architecture, many states might be publicly critical but privately pleased that geoengineering reduced climate risks.   

SAI is sometimes described as an imaginary sociotechnical scenario residing in a distant sci-fi future. But it is technically feasible to start subscale deployments of the kind we describe here in five years. A state or coalition of states that wished to meaningfully test both the science and politics of deployment may consider such subscale or demonstration deployments as climate risks become more salient. 

We are not advocating for such action—in fact, we reiterate our support for a moratorium against deployment until the science is critically assessed and some governance architecture is widely agreed upon. Yet a sound understanding of the interlinked technology and politics of SAI is hampered by the perception that it must start with a significant effort that would substantially slow or even reverse warming. The example we’ve outlined here illustrates that the infrastructural barriers to deployment are more easily overcome than is commonly assumed. Policymakers must take this into account—and soon—as they consider how to develop solar geoengineering in the public interest and what guardrails should be put in place.

David W. Keith is a professor of geophysical sciences and founding faculty director of the Climate Systems Engineering initiative at the University of Chicago. 

Wake Smith is a lecturer at the Yale School of Environment and a research fellow at the Harvard Kennedy School.  

We thank Christian V. Rice of VPE Aerospace for performing the payload calculations herein. Please consult this PDF for more detail on our estimates.

Eric Schmidt has a 6-point plan for fighting election misinformation

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

And election campaigns are using artificial intelligence in novel ways. Earlier this year in the US, the Republican presidential primary campaign of Florida governor Ron DeSantis posted doctored images of Donald Trump; the Republican National Committee released an AI-created ad depicting a dystopian future in response to Joe Biden’s announcing his reelection campaign; and just last month, Argentina’s presidential candidates each created an abundance of AI-generated content portraying the other party in an unflattering light. This surge in deepfakes heralds a new political playing field. Over the past year, AI was used in at least 16 countries to sow doubt, smear opponents, or influence public debate, according to a report released by Freedom House in October. We’ll need to brace ourselves for more chaos as key votes unfold across the world in 2024. 

The year ahead will also bring a paradigm shift for social media platforms. The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds. 

But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens. That’s a far cry from the global, democratized one-world conversation the idealists dreamed of 15 years ago. With many users left adrift and losing faith in these platforms, it’s clear that maximizing revenue has ironically hurt business interests.

Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy. Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year. 

1.     Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.

2.     Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety. As a first step, using a time stamp and an encrypted (and not removable) IP address would guarantee an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPN traffic may deter detection, platforms can step up efforts to improve identification of VPNs. 
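One way to picture the kind of provenance record sketched above is a tuple of content hash, timestamp, and a protected origin identifier, sealed so it cannot be altered after the fact. The sketch below is purely illustrative: the field names, the HMAC-based scheme, and `PLATFORM_KEY` are our assumptions rather than any platform's actual design, and a keyed digest stands in for the "encrypted" IP address.

```python
import hashlib
import hmac
import json
import time

PLATFORM_KEY = b"platform-secret-key"  # hypothetical signing key held by the platform

def provenance_record(content: bytes, origin_ip: str) -> dict:
    """Build a tamper-evident provenance record for a piece of content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
        # The origin IP is stored only as a keyed digest, so the platform can
        # link repeat offenders without publishing the address itself.
        "origin_digest": hmac.new(PLATFORM_KEY, origin_ip.encode(), "sha256").hexdigest(),
    }
    body = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hmac.new(PLATFORM_KEY, body, "sha256").hexdigest()
    return record

def verify(record: dict, content: bytes) -> bool:
    """Check that neither the content nor the record fields were altered."""
    body = {k: v for k, v in record.items() if k != "seal"}
    expected = hmac.new(PLATFORM_KEY, json.dumps(body, sort_keys=True).encode(), "sha256").hexdigest()
    return (hmac.compare_digest(expected, record["seal"])
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())
```

With records like this attached at upload time, a chain of custody for a viral post can be audited later even if the content is re-shared many times.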

3.     Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers.

4.     Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform professionally to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers if synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.  

5.     Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content. 

6.     Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation.

With a concerted effort from companies, regulators, and Congress, we can adopt these proposals in the coming year, in time to make a difference. My worry is that everyone benefits from favorable mis- or disinformation to varying degrees: our citizens are amused by such content, our political leaders may campaign with it, and the media garners traffic by covering sensationalist examples. The existing incentive structures will make misinformation hard to eliminate.  

Social media platforms need to fundamentally rethink their design for the age of AI, especially as democracies face a historic test worldwide. It’s clear to me the future will be one of many decentralized online spaces that cater to every interest, reflect the views of real humans (not bots), and focus on concrete community concerns. But until that day comes, setting these guardrails in place will help ensure that platforms maintain a healthy standard of discourse and do not let opaque, engagement-driven algorithms allow AI-enabled election content to run rampant.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.

The US climate bill has made emission reductions dependent on economic success

In August, President Joe Biden signed the Inflation Reduction Act (IRA) into law, the largest US climate bill in more than a decade. The legislation puts the country back on track to meet its commitments under the 2015 Paris Agreement.

Beyond enacting specific measures to reduce US carbon emissions by more than 40 percent by 2030, the IRA also fundamentally reframes how the government approaches climate change. After decades of understanding climate policy as primarily about cutting emissions, the IRA pitches it as an opportunity to invest in new sources of economic growth.

The IRA does this primarily through a series of updated tax incentives, which require electric vehicle batteries, wind turbines, and solar panels to be manufactured in the United States (or a free trade partner) to qualify. Implicit in the IRA is the notion that taking advantage of the economic opportunities presented by the global energy transition will require new forms of government intervention in the economy. Such direct government policy intervention on behalf of domestic clean-energy manufacturing sectors breaks with Washington’s past approach to industrial policy, which primarily focused on public investments in R&D and support for clean energy markets.

This reframing of climate change as an economic opportunity is overdue. China has long used the tools of the state to secure market share in rapidly growing clean energy industries. That nation now makes more than 85% of photovoltaic cells used in the global production of solar modules. It also produces 78% of the lithium-ion batteries used in the assembly of battery packs for electric vehicles and energy storage. The European Union, too, has not merely set ambitious climate goals—it has used industrial policy to build clean energy sectors and transition domestic industries, such as automakers, to a low-carbon future.

Since it first passed the House and Senate in August, the IRA has been met with much enthusiasm. The White House called it the single most impactful climate legislation ever passed in the US. Scientists see it as a turning point in the climate change battle. Others have emphasized the potential to create a half-million jobs through the industrial policy provisions contained in the bill. Indeed, solar PV, battery, and electric vehicle manufacturers have been quick to announce new investments in domestic production facilities in the weeks since the bill was signed.

Such enthusiasm notwithstanding, the US will still face formidable challenges in building up its domestic clean energy industries.

The nation is entering markets already crowded with international rivals, many of which have been investing billions for decades. China alone has spent more than $50 billion to establish control of virtually every segment of the solar supply chain. To compete with China’s dominance in electric vehicle batteries, the European Union established an alliance in 2017 with the goal of ensuring that European firms are suppliers along the entire battery supply chain. To advance its goal of building domestic clean energy supply chains, the EU also spent more than 40 percent of the economic stimulus funds allotted at the start of the covid-19 pandemic on green industrial policy initiatives.

Establishing US clean energy industries that can replace and compete with global wind, solar, and battery supply chains will be particularly challenging in the timeframe envisioned in the IRA. Many of the content requirements attached to the tax credits take effect almost immediately. But developing domestic manufacturing capacity and opening new mines could take years, not months.

If US supply chains for solar, wind, and batteries take longer to build than expected, clean energy products will fail to qualify for government support, which could in turn slow deployment. Climate policy is now explicitly framed as an economic policy issue, dependent on economic policy success in ways that could complicate efforts to reduce US carbon emissions.

This could be particularly problematic, because the use of so-called local content requirements and other industrial policy tools in the IRA—including loans for retooling and constructing manufacturing plants—is unprecedented in the United States. And if meeting supply chain targets turns out to be unexpectedly difficult, adjusting the bill will be hard. Narrow political margins in the House and Senate offer few prospects for correcting the industrial policy goals and incentives contained in the IRA, even if they threaten to undermine the bill’s climate objectives.

The bill, for all its novel use of industrial policy tools, is also noteworthy for what it does not include. The local content requirements attached to the tax credits set important incentives for firms to establish domestic manufacturing capacity, but they fall short of proactive industrial policies to help firms meet these goals. To be fair, the IRA does include substantial loans and loan guarantees for the establishment of domestic manufacturing plants in clean energy sectors, but such one-time investments are no replacement for long-term fixes to US manufacturing.

The US financial sector has long been unwilling to fund domestic manufacturing, particularly in industries such as clean energy that depend heavily on government regulation. To scale up, these businesses will also need a trained workforce, which will entail new investments in vocational training and coordination with clean energy industries to develop new curricula and identify workforce needs. Meeting the industrial development goals of the IRA will require new kinds of financing and training institutions that are not part of the bill itself.

Reframing climate policy as economic policy is not just important for the future of US competitiveness; it is also politically savvy. Creating jobs in clean energy sectors will help build new coalitions behind climate policy, including in states where climate change has not yet been a priority for voters.

At the same time, the bill is just the starting point of a much broader industrial transformation. To make good on the IRA’s economic development goals, the US will need to fix structural problems that have long caused a decline in US manufacturing and are not addressed in the IRA itself. Because climate and economic outcomes are now so closely linked, failing to do so will jeopardize the growth of clean energy industries and the ability of the United States to meet its Paris Agreement goals.

Jonas Nahm is an assistant professor at the Johns Hopkins School of Advanced International Studies and an expert on green industries.