Trump’s win is a tragic loss for climate progress

Donald Trump’s decisive victory is a stunning setback for climate action.

The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade. 

It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe. Under the policies in place today, the planet is already set to warm by more than 3 °C over preindustrial levels in the coming decades.

Trump could push the globe into even more dangerous terrain by defanging President Joe Biden’s signature climate laws. In fact, a second Trump administration could boost greenhouse-gas emissions by 4 billion tons through 2030 alone, according to an earlier analysis by Carbon Brief, a well-regarded climate news and data site. That would exacerbate the dangers of heat waves, floods, wildfires, droughts, and famine and increase deaths and disease from air pollution, inflicting some $900 billion in climate damages around the world, Carbon Brief found.

I started as the climate editor at MIT Technology Review just as Trump came into office the last time. Much of the early job entailed covering his systematic unraveling of the modest climate policy and progress that President Barack Obama had managed to achieve. I fear it will be far worse this time, as Trump ambles into office feeling empowered and aggrieved, and ready to test the rule of law and crack down on dissent. 

This time he’ll be staffed far more heavily by loyalists and ideologues, who have already made plans to force out civil servants with expertise and experience from federal agencies including the Environmental Protection Agency. He’ll be backed by a Supreme Court that he moved well to the right, and which has already undercut landmark environmental doctrines and weakened federal regulatory agencies. 

This time the setbacks will sting more, too, because the US did finally manage to pass real, substantive climate policy, through the slimmest of congressional margins. The Inflation Reduction Act and Bipartisan Infrastructure Law allocated massive amounts of government funding to accelerating the shift to low-emissions industries and rebuilding the US manufacturing base around a clean-energy economy. 

Trump has made clear he will strive to repeal as many of these provisions as he can, tempered perhaps only by Republicans who recognize that these laws are producing revenue and jobs in their districts. Meanwhile, throughout the prolonged presidential campaign, Trump or his surrogates pledged to boost oil and gas production, eliminate federal support for electric vehicles, end pollution rules for power plants, and remove the US from the Paris climate agreement yet again. Each of those goals stands in direct opposition to the deep, rapid emissions cuts now necessary to prevent the planet from tipping past higher and higher temperature thresholds.

Project 2025, considered a blueprint for the early days of a second Trump administration despite his insistence to the contrary, calls for dismantling or downsizing federal institutions including the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency. That could cripple the nation’s ability to forecast, track, or respond to storms, floods, and fires like those that have devastated communities in recent months.

Observers I’ve spoken to fear that the Trump administration will also return the Department of Energy, which under Biden had evolved its mission toward developing low-emissions technologies, to the primary task of helping companies dig up more fossil fuels.

The US election could create global ripples as well, and very soon. US negotiators will meet with their counterparts at the annual UN climate conference that kicks off next week. With Trump set to move back into the White House in January, they will have little credibility or leverage to nudge other nations to step up their commitments to reducing emissions. 

But those are just some of the direct ways that a second Trump administration will enfeeble the nation’s ability to drive down emissions and counter the growing dangers of climate change. He also has considerable power to stall the economy and sow international chaos amid escalating conflicts in Europe and the Middle East. 

Trump’s eagerness to enact tariffs, slash government spending, and deport major portions of the workforce may stunt growth, drive up inflation, and chill investment. All that would make it far more difficult for companies to raise the capital and purchase the components needed to build anything in the US, whether that means wind turbines, solar farms, and seawalls or buildings, bridges, and data centers. 

President-elect Donald Trump speaks at an election night event in West Palm Beach, Florida.
WIN MCNAMEE/GETTY IMAGES

His clumsy handling of the economy and international affairs may also help China extend its dominance in producing and selling the components that are crucial to the energy transition, including batteries, EVs, and solar panels, to customers around the globe.

If one job of a commentator is to find some perspective in difficult moments, I admit I’m mostly failing in this one.

The best I can do is to say that there will be some meaningful lines of defense. For now, at least, state leaders and legislatures can continue to enact and implement stronger climate rules. Other nations could step up their efforts to cut emissions and assert themselves as global leaders on climate. 

Private industry will likely continue to invest in and build businesses in climate tech and clean energy, since solar, wind, batteries, and EVs have proved themselves as competitive industries. And technological progress can occur no matter who is sitting in the Oval Office, since researchers continue striving to develop cleaner, cheaper ways of producing our energy, food, and goods.

By any measure, the job of addressing climate change is now much harder. Nothing, however, has changed about the stakes. 

Our world doesn’t end if we surpass 2 °C, 2.5 °C, or even 3 °C, but it will steadily become a more dangerous and erratic place. Every tenth of a degree remains worth fighting for—whether two, four, or a dozen years from now—because every bit of warming that nations pull together to prevent eases future suffering somewhere.

So as the shock wears off and the despair begins to lift, the core task before us remains the same: to push for progress, whenever, wherever, and however we can. 

Sorry, AI won’t “fix” climate change

In an essay last week, Sam Altman, the CEO of OpenAI, argued that the accelerating capabilities of AI will usher in an idyllic “Intelligence Age,” unleashing “unimaginable” prosperity and “astounding triumphs” like “fixing the climate.”

It’s a promise that no one is in a position to make—and one that, when it comes to the topic of climate change, fundamentally misunderstands the nature of the problem. 

More maddening, the argument suggests that the technology’s massive consumption of electricity today doesn’t much matter, since it will allow us to generate abundant clean power in the future. That casually waves away growing concerns about a technology that’s already accelerating proposals for natural-gas plants and diverting major tech companies from their corporate climate targets.

By all accounts, AI’s energy demands will only continue to increase, even as the world scrambles to build larger, cleaner power systems to meet the increasing needs of EV charging, green hydrogen production, heat pumps, and other low-carbon technologies. Altman himself reportedly just met with White House officials to make the case for building absolutely massive AI data centers, which could require the equivalent of five dedicated nuclear reactors to run.  

It’s a bedrock perspective of MIT Technology Review that technological advances can deliver real benefits and accelerate societal progress in meaningful ways. But for decades researchers and companies have oversold the potential of AI to deliver blockbuster medicines, achieve superintelligence, and free humanity from the need to work. To be fair, there have been significant advances, but nothing on the order of what’s been hyped.

Given that track record, I’d argue you need to develop a tool that does more than plagiarize journalism and help students cheat on homework before you can credibly assert that it will solve humanity’s thorniest problems, whether the target is rampant poverty or global warming.

To be sure, AI may help the world address the rising dangers of climate change. We have begun to see research groups and startups harness the technology to try to manage power grids more effectively, put out wildfires faster, and discover materials that could create cheaper, better batteries or solar panels.

All those advances are still relatively incremental. But let’s say AI does bring about an energy miracle. Perhaps its pattern-recognition prowess will deliver the key insight that finally cracks fusion—a technology that Altman is betting on heavily as an investor.

That would be fantastic. But technological advances are just the start—necessary but far from sufficient to eliminate the world’s climate emissions.

How do I know?

Because between nuclear fission plants, solar farms, wind turbines, and batteries, we already have every technology we need to clean up the power sector. This should be the low-hanging fruit of the energy transition. Yet in the largest economy on Earth, fossil fuels still generate 60% of the electricity. The fact that so much of our power still comes from coal, petroleum, and natural gas is a regulatory failure as much as a technological one. 

“As long as we effectively subsidize fossil fuels by allowing them to use the atmosphere as a waste dump, we are not allowing clean energy to compete on a level playing field,” Zeke Hausfather, a climate scientist at the independent research organization Berkeley Earth, wrote on X in a response to Altman’s post. “We need policy changes, not just tech breakthroughs, to meet our climate goals.”

That’s not to say there aren’t big technical problems we still need to solve. Just look at the continuing struggles to develop clean, cost-competitive ways of fertilizing crops or flying planes. But the fundamental challenges of climate change are sunk costs, development obstacles, and inertia.

We’ve built and paid for a global economy that spews out planet-warming gases, investing trillions of dollars in power plants, steel mills, factories, jets, boilers, water heaters, stoves, and SUVs that run on fossil fuels. And few people or companies will happily write off those investments so long as those products and plants still work. AI can’t remedy all that just by generating better ideas. 

To raze and replace the machinery of every industry around the world at the speed now required, we will need increasingly aggressive climate policies that incentivize or force everyone to switch to cleaner plants, products, and practices.

But with every proposal for a stricter law or some big new wind or solar farm, forces will push back, because the plan will hit someone’s wallet, block someone’s views, or threaten the areas or traditions someone cherishes. Climate change is an infrastructure problem, and building infrastructure is a messy human endeavor. 

Tech advances can ease some of these issues. Cheaper, better alternatives to legacy industries make hard choices more politically palatable. But there are no improvements to AI algorithms or underlying data sets that solve the challenge of NIMBYism, the conflict between human interests, or the desire to breathe the fresh air in an unsullied wilderness. 

To assert that a single technology—that just happens to be the one your company develops—can miraculously untangle these intractable conflicts of human society is at best self-serving, if not a little naïve. And it’s a troubling idea to proclaim at a point when the growth of that very technology is threatening to undermine the meager progress the world has begun to make on climate change.

As it is, the one thing we can state confidently about generative AI is that it’s making the hardest problem we’ve ever had to solve that much harder to solve.

Why we need an AI safety hotline

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. Evaluations also suffer from limited scope—current tests are unlikely to reliably uncover the full set of risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools. 

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediary, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and with investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government. 

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

Why a ruling against the Internet Archive threatens the future of America’s libraries

I was raised in the 1980s and ’90s, and for my generation and generations before us, the public library was an equalizing force in every town, helping anyone move toward the American dream. In Chantilly, Virginia, where I grew up, it didn’t matter if you didn’t have a computer or your parents lacked infinite money for tutors—you could get a lifetime’s education for free at the public library. A ruling from the US Second Circuit against the Internet Archive and in favor of publisher Hachette has just thrown that promise of equality into doubt by limiting libraries’ access to digital lending.

To understand why this is so important to the future of libraries, you first have to understand the dire state of library e-book lending. 

Libraries have traditionally operated on a basic premise: Once they purchase a book, they can lend it out to patrons as much (or as little) as they like. Library copies often come from publishers, but they can also come from donations, used book sales, or other libraries. However the library obtains the book, once the library legally owns it, it is theirs to lend as they see fit. 

Not so for digital books. To make licensed e-books available to patrons, libraries have to pay publishers multiple times over. First, they must subscribe (for a fee) to aggregator platforms such as Overdrive. Aggregators, like streaming services such as HBO’s Max, have total control over adding or removing content from their catalogue. Content can be removed at any time, for any reason, without input from your local library. The decision happens not at the community level but at the corporate one, thousands of miles from the patrons affected. 

Then libraries must purchase each individual copy of each individual title that they want to offer as an e-book. These e-book copies are not only priced at a steep markup—up to 300% over consumer retail—but are also time- and loan-limited, meaning the files self-destruct after a certain number of loans. The library then needs to repurchase the same book, at a new price, in order to keep it in stock. 
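To see how metered licensing compounds over time, consider an illustrative cost-per-loan comparison. All prices and loan caps below are hypothetical numbers chosen only to show the mechanics, not actual publisher terms:

```python
# Hypothetical comparison of cost per loan: an owned print copy versus a
# metered e-book license that expires after a fixed number of loans.
# All figures are illustrative, not real publisher pricing.

def cost_per_loan_print(price, total_loans):
    # One purchase; the copy can be lent as many times as it survives.
    return price / total_loans

def cost_per_loan_ebook(license_price, loans_per_license, total_loans):
    # Each license "self-destructs" after loans_per_license checkouts,
    # so the library must repurchase to keep the title in stock.
    licenses_needed = -(-total_loans // loans_per_license)  # ceiling division
    return (license_price * licenses_needed) / total_loans

print_cost = cost_per_loan_print(price=30, total_loans=100)
ebook_cost = cost_per_loan_ebook(license_price=60, loans_per_license=26, total_loans=100)
print(f"print: ${print_cost:.2f}/loan, e-book: ${ebook_cost:.2f}/loan")
# → print: $0.30/loan, e-book: $2.40/loan
```

Even with these made-up numbers, the repurchase requirement multiplies the per-loan cost severalfold, which is the strain on library budgets the paragraph above describes.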

This upending of the traditional order puts massive financial strain on libraries and the taxpayers that fund them. It also opens up a world of privacy concerns; while libraries are restricted in the reader data they can collect and share, private companies are under no such obligation.

Some libraries have turned to another solution: controlled digital lending, or CDL, a process by which a library scans the physical books it already has in its collection, makes secure digital copies, and lends those out on a one-to-one “owned to loaned” ratio. The Internet Archive was an early pioneer of this technique.

When the digital copy is loaned, the physical copy is sequestered from borrowing; when the physical copy is checked out, the digital copy becomes unavailable. The benefits to libraries are obvious; delicate books can be circulated without fear of damage, volumes can be moved off-site for facilities work without interrupting patron access, and older and endangered works become searchable and can get a second chance at life. Library patrons, who fund their local library’s purchases with their tax dollars, also benefit from the ability to freely access the books.
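The one-to-one rule described above amounts to a simple mutual-exclusion check between the two formats. Here is a minimal sketch of that logic (a toy model, not how any library’s actual lending system is implemented):

```python
class CDLTitle:
    """Toy model of controlled digital lending's one-to-one
    "owned to loaned" rule: a library that owns one physical copy
    may circulate either the book or its secured scan, never both."""

    def __init__(self, title):
        self.title = title
        self.physical_out = False
        self.digital_out = False

    def checkout(self, fmt):
        # Lending either copy sequesters the other.
        if self.physical_out or self.digital_out:
            return False  # the single owned copy is already on loan
        if fmt == "physical":
            self.physical_out = True
        elif fmt == "digital":
            self.digital_out = True
        else:
            raise ValueError("fmt must be 'physical' or 'digital'")
        return True

    def checkin(self):
        self.physical_out = self.digital_out = False

book = CDLTitle("Example Title")
assert book.checkout("digital") is True
assert book.checkout("physical") is False  # physical copy is sequestered
book.checkin()
assert book.checkout("physical") is True   # scan now unavailable instead
```

The point of the sketch is that CDL never increases the number of copies in circulation beyond what the library already owns.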

Publishers are, unfortunately, not fans of this model, and in 2020 four of them sued the Internet Archive over its CDL program. The suit ultimately focused on the Internet Archive’s lending of 127 books that were already commercially available through licensed aggregators. The publisher plaintiffs accused the Internet Archive of mass copyright infringement, while the Internet Archive argued that its digitization and lending program was a fair use. The trial court sided with the publishers, and on September 4, the Court of Appeals for the Second Circuit affirmed that decision with some alterations to the underlying reasoning. 

This decision harms libraries. It locks them into an e-book ecosystem designed to extract as much money as possible while harvesting (and reselling) reader data en masse. It leaves local communities’ reading habits at the mercy of curatorial decisions made by four dominant publishing companies thousands of miles away. It steers Americans away from one of the few remaining bastions of privacy protection and funnels them into a surveillance ecosystem that, like Big Tech, becomes more dangerous with each passing data breach. And by increasing the price for access to knowledge, it puts up even more barriers between underserved communities and the American dream.

It doesn’t stop there. This decision also renders the fair use doctrine—legally crucial in everything from parody to education to news reporting—almost unusable. And while there were occasional moments of sanity (such as recognizing that a “Donate here” button does not magically turn a nonprofit into a commercial enterprise), this decision fractured, rather than clarified, the law. 

If the courts won’t recognize CDL-based library lending as fair use, then the next step falls to Congress. Libraries are in crisis, caught between shrinking budgets and growing demand for services. Congress must act now to ensure that a pillar of equality in our communities isn’t sacrificed on the altar of profit. 

Chris Lewis is president and CEO of Public Knowledge, a consumer advocacy group that works to shape technology policy in the public interest. Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works.

How the US and its allies can rebuild economic security

A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in it having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this is a principle well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation state’s security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward the maturation of breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment flows either toward the rapid deployment of existing technologies or to scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning by doing, and maturing emerging technologies into a next-generation industrial base is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

Drones have played a central role in the war in Ukraine. First-person view (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine’s frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately enables the country to have a significant influence on the outcome of the war.

When the history of the war in Ukraine is written, it will be taught as the first true “drone war.” But it should also be understood as an industrial wake-up call: a time when the role of a drone’s component parts was laid bare and the supply chains that support this technology—the knowledge, production operations, and manufacturing processes—were found wanting. Heroic stories will be told of Ukrainian ingenuity in building drones with Chinese parts in basements and on kitchen tables, and we will hear of the country’s attempt to rebuild supply chains dominated by China while in the midst of an existential fight for survival. But in the background, we will also need to understand the ways in which other nations, especially China, controlled the war through long-term economic policies focused on capturing industrial capacity that the US and its allies failed to support through to maturity.

Disassemble one of the FPV drones found across the battlefields of Ukraine and you will find about seven critical subsystems: power, propulsion, flight control, navigation and sensors (which gather location data and other information to support flight), compute (the processing and memory capacity needed to analyze the vast array of information and then support operations), communications (to connect the drone to the ground), and—supporting it all—the airframe.

We have created a bill of materials listing the components necessary to build an FPV drone and the common suppliers for those parts.
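A bill of materials of this kind can be sketched as a simple mapping from subsystem to parts. The seven subsystems below are the ones named in the text; the component entries are illustrative placeholders, not the authors’ actual list or suppliers:

```python
# Illustrative sketch of an FPV drone bill of materials, organized by the
# seven critical subsystems described above. Component names are examples
# only; the real BOM would also record suppliers and countries of origin.
fpv_drone_bom = {
    "power":                ["LiPo battery pack", "power distribution board"],
    "propulsion":           ["brushless DC motors", "electronic speed controllers",
                             "propellers"],
    "flight_control":       ["flight controller board", "gyroscope/IMU"],
    "navigation_sensors":   ["GPS module", "barometer"],
    "compute":              ["microcontroller", "onboard memory"],
    "communications":       ["radio receiver", "video transmitter", "camera"],
    "airframe":             ["frame", "mounting hardware"],
}

assert len(fpv_drone_bom) == 7  # the seven critical subsystems
```

Structuring the data this way makes the supply-chain audit the authors describe straightforward: for each component, trace who manufactures it and where.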

China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West. And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

Propulsion is another case in point. The brushless DC motors used to convert electrical energy from batteries into mechanical energy to rotate drone propellers were invented in the US and Germany. The sintered neodymium-iron-boron (NdFeB) permanent magnets used in these motors were invented in Japan and the US. Today, to our knowledge, all brushless DC motors for drones are made in China. Similarly, China dominates all steps in the processing and manufacture of NdFeB magnets, accounting for 92% of global NdFeB magnet and magnet alloy markets.

The missing middle of technology investment—insufficient funding for commercial production—is evident in every one of these failures, but the loss of expertise is an added dimension. For example, lithium polymer (LiPo) batteries are at the heart of every FPV drone. LiPo batteries use a solid or gel polymer electrolyte and achieve higher specific energy (energy per unit of weight)—a feature that is crucial for lightweight drones. Today, you would be hard-pressed to find a LiPo battery that was not manufactured in China. The experienced workforce behind these companies has contributed to learning curves that have led to a 97% drop in the cost of lithium-ion batteries and a simultaneous 300%-plus increase in battery energy density over the past three decades.

China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

Again, this dominance was not inevitable. Most of the critical breakthroughs needed to invent and commercialize Li-ion batteries were made by scientists in North America and Japan. But in comparison to the US and Europe (at least until very recently), China has taken a proactive stance to coordinate, support, and co-invest with strategic industries to commercialize emerging technologies. China’s Ministry of Industry and Information Technology has been at pains to support these domestic industries.

The case of Li-ion batteries is not an isolated one. The shift to Chinese dominance in the underlying electronics for FPV drones coincides with the period beginning in 2000, when Shenzhen started to emerge as a global hub for low-cost electronics. This trend was amplified by US corporations from Apple, for which low-cost production in China has been essential, to General Electric, which also sought low-cost approaches to maintain the competitive edge of its products. The global nature of supply chains was seen as a strength for US companies, whose comparative advantage lay in the design and integration of consumer products (such as smartphones) with little or no relevance for national security. Only a small handful of “exquisite systems” essential for military purposes were carefully developed within the US. And even those have relied upon global supply chains.

While the absence of the high-tech industrial capacity needed for economic security is easy to identify, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted partnerships with allies across both the Atlantic and the Pacific.

The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.

Edlyn V. Levine is CEO and co-founder of a stealth-mode technology start-up and an affiliate at the MIT Sloan School of Management and the Department of Physics at Harvard University. Levine was co-founder and CSO of America’s Frontier Fund, and formerly Chief Technologist for the MITRE Corporation.

Fiona Murray is the William Porter (1967) Professor of Entrepreneurship at the MIT Sloan School of Management, where she works at the intersection of critical technologies, entrepreneurship, and geopolitics. She is the Vice Chair of the NATO Innovation Fund—a multi-sovereign venture fund for defense, security, and resilience—and served for a decade on the UK Prime Minister’s Council on Science and Technology.

The SEC’s new climate rules were a missed opportunity to accelerate corporate action

This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

They also must disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat. 

But crucially, companies will have to do so only when they determine that the information is financially “material,” providing companies considerable latitude over whether they do or don’t provide those details.

The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  

The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that emissions are material.

And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

But the SEC could and should have gone much further. 

After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, what we will end up with are corporate reports of the previous year’s partial emissions, without any way to know what a company actually did to reduce its carbon pollution.

Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

All of which is to say that a new set of SEC carbon accounting and reporting rules that largely replicate the problems with voluntary corporate action, by failing to require consistent and actionable disclosures, isn’t going to drive the changes we need, at the speed we need. 

Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

It is now just good business to look for carbon reductions that actually save money.

The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.

Solar geoengineering could start soon if it starts small

For half a century, climate researchers have considered the possibility of injecting small particles into the stratosphere to counteract some aspects of climate change. The idea is that by reflecting a small fraction of sunlight back to space, these particles could partially offset the energy imbalance caused by accumulating carbon dioxide, thereby reducing warming as well as extreme storms and many other climate risks. 

Debates about this idea, a form of solar geoengineering called stratospheric aerosol injection (SAI), commonly focus either on small-scale outdoor research that seeks to understand the physical processes involved or on deployment at a climate-altering scale. The gulf between these is gigantic: an experiment might use mere kilograms of aerosol material whereas deployment that could substantially slow or even reverse warming would involve millions of metric tons per year—a billionfold difference in scale. Appreciably cooling the planet via SAI would also require a purpose-built fleet of high-altitude aircraft, which could take one or two decades to assemble. This long lead time encourages policymakers to ignore the hard decisions about regulating deployment of SAI. 

Such complacency is ill-advised. The barrier between research and deployment may be less distinct than is often assumed. Our analysis suggests a country or group of countries could conceivably start a subscale solar geoengineering deployment in as little as five years, one that would produce unmistakable changes in the composition of the stratosphere. A well-managed subscale deployment would benefit research by reducing important uncertainties about SAI, but it could not be justified as research alone—similar research could be carried out with a much smaller amount of aerosol particles. And it would have a non-negligible impact on the climate, providing as much cooling as sulfur pollution from international shipping did before the recent cleanup of shipping fuels. At the same time, the magnitude of the cooling would be small enough that its effects on climate, on a national or regional scale, would be very difficult to detect in the face of normal variability. 

While the climate impact of such a subscale deployment would be small (and most likely beneficial), the political impact could be profound. It could trigger a backlash that would upend climate geopolicy and threaten international stability. It could be an on-ramp to large-scale deployment. And it could be exploited by fossil fuel interests seeking to slow the essential task of cutting emissions. 

We oppose near-term deployment of solar geoengineering. In accord with the Climate Overshoot Commission, the most senior group of political leaders to examine the topic, we support a moratorium on deployment until the science is internationalized and critically assessed, and until some governance architecture is widely agreed upon. But if we are correct that such subscale deployments are plausible, then policymakers may need to confront solar geoengineering—its promise and disruptive potential, and its profound challenges to global governance—earlier than is now widely assumed. 

Obstacles to early deployment 

Humans already emit a huge quantity of aerosols into the troposphere (the turbulent lowest layer of the atmosphere) from sources such as shipping and heavy industry, but these aerosols fall to Earth or are removed by rainfall and other processes within about a week. Volcanic eruptions can have a more lasting effect. When eruptions are powerful enough to punch through the troposphere into the stratosphere, the aerosols deposited there can endure for roughly a year. SAI would, like the largest volcanic eruptions, inject aerosols or their precursors into the stratosphere. Given their vastly longer atmospheric endurance, aerosols placed there can have a cooling impact 100 times larger than they would if emitted at the surface. 

Getting aerosols to the stratosphere is another matter. Passenger jets routinely reach the lower stratosphere on transpolar flights. But to get efficient global coverage, aerosols are best deployed at low latitudes, where the stratosphere’s natural overturning circulation will carry them poleward and thus distribute them worldwide. The average height of the top of the troposphere is about 17 kilometers in the tropics, and models suggest injection needs to be a few kilometers higher than that to be captured in the upwelling stratospheric circulation. The altitude for efficient deployment is commonly assumed to be at least 20 kilometers, nearly twice the height at which commercial jets or large military aircraft cruise. 

Although small spy planes can cruise in this very thin air, they can carry only one to two metric tons of payload. That would be insufficient except for small-scale tests: offsetting a substantial fraction of global warming—say, 1 °C of cooling—would require platforms that could deliver several million metric tons per year of material to the stratosphere. Neither rockets nor balloons are suitable for hauling such a large mass to this high perch. Consequently, full-scale deployment would require a fleet of novel aircraft—a few hundred in order to achieve a 1 °C cooling target. Procuring just the first aircraft in the manner typical of large commercial or military aircraft development programs might take roughly a decade, and manufacturing the required fleet would take several years more. 

But starting with full-scale deployment is both imprudent and unlikely. Even if we are turning the global thermostat down, the faster we change the climate, the higher the risk of unforeseen impacts. A country or group of countries that wishes to deploy solar geoengineering is likely to appreciate the political and technical benefits of a slower start, one with a gradual reversal of warming that facilitates optimization and “learning by doing,” while minimizing the likelihood and impact of unintended consequences.

We envision scenarios where, instead of attempting to inject aerosols in the most efficient way near the equator, a country or group of countries attempts to place a smaller amount of material in the lower stratosphere at higher latitudes. They could do this with existing aircraft, because the top of the troposphere slopes sharply downward as you move away from the equator. At 35° north and south, it is found at roughly 12 kilometers. Adding a 3-kilometer margin, an effective deployment altitude at 35° north and south would be 15 kilometers. This remains too high for airliners but is just below the 15.5-kilometer service ceiling of top-of-the-line business jets made by Gulfstream, Bombardier, and Dassault. The list of countries with territory at or near 35° north or south includes not only rich countries such as the US, Australia, Japan, South Korea, Spain, and China, but also poorer ones such as Morocco, Algeria, Iraq, Iran, Pakistan, India, Chile, and Argentina.

Subscale deployment

How might subscale deployment be accomplished? Most stratospheric scientific studies of aerosol injection assume the operative material is sulfur dioxide (SO2) gas, which is 50% sulfur by mass. Another plausible option is hydrogen sulfide (H2S), which cuts the mass requirement almost in half, though it is more hazardous to ground and flight crews than SO2 and thus might be eliminated from consideration. Carbon disulfide (CS2) gas cuts the mass requirement by 40% and is generally less hazardous than SO2. It is also possible to use elemental sulfur, which is the safest and easiest to handle, but this would require a method of combusting it on board before venting or the use of afterburners. No one has yet done the engineering studies required to determine which of these sulfur compounds would be the best choice. 
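The mass arithmetic behind these comparisons is easy to check. The back-of-the-envelope sketch below, using standard atomic masses, reproduces the figures above; it is a sanity check, not an engineering calculation.

```python
# Back-of-the-envelope check of the sulfur mass fractions cited above,
# using standard atomic masses (g/mol).
S, O, H, C = 32.06, 16.00, 1.008, 12.011

# (molar mass, sulfur atoms per molecule)
compounds = {
    "SO2": (S + 2 * O, 1),  # sulfur dioxide
    "H2S": (2 * H + S, 1),  # hydrogen sulfide
    "CS2": (C + 2 * S, 2),  # carbon disulfide
}

for name, (molar_mass, n_sulfur) in compounds.items():
    frac = n_sulfur * S / molar_mass  # sulfur mass fraction
    payload = 1 / frac                # tons of compound per ton of sulfur
    saving = 1 - payload / 2.0        # reduction vs. SO2 (2 t per ton of sulfur)
    print(f"{name}: {frac:.0%} sulfur, {payload:.2f} t/t, {saving:.0%} less mass than SO2")
    # SO2 is ~50% sulfur; H2S is ~94% (cutting mass almost in half);
    # CS2 is ~84% (cutting mass by ~40%), matching the text.
```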

Using assumptions confirmed with Gulfstream, we estimate that any of its G500/600 aircraft could loft about 10 kilotons of material per year to 15.5 kilometers. If highly mass-efficient CS2 were used, a fleet of no more than 15 aircraft could deliver 100 kilotons of sulfur a year. Aged but operable used G650s cost about $25 million. Adding in the cost of modification, maintenance, spare parts, salaries, fuel, materials, and insurance, we expect the average total cost of a decade-long subscale deployment would be about $500 million a year. Large-scale deployment would cost at least 10 times as much.
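The fleet-size claim follows from the same kind of rough arithmetic. The few lines below use the per-aircraft payload and the sulfur fraction of CS2 stated above; the rounded-up aircraft count is our own estimate.

```python
import math

# Rough fleet sizing for the subscale scenario above. Assumptions from
# the text: ~10 kilotons of material lofted per aircraft per year, and
# CS2 at roughly 84% sulfur by mass.
TARGET_SULFUR_KT = 100
PAYLOAD_PER_AIRCRAFT_KT = 10
CS2_SULFUR_FRACTION = 0.84

cs2_needed_kt = TARGET_SULFUR_KT / CS2_SULFUR_FRACTION  # ~119 kt of CS2 per year
aircraft_needed = math.ceil(cs2_needed_kt / PAYLOAD_PER_AIRCRAFT_KT)
print(aircraft_needed)  # 12 — comfortably within the 15-aircraft bound
```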

How much is 100 kilotons of sulfur per year? It is a mere 0.3% of current global annual emissions of sulfur pollution into the atmosphere. Its contribution to the health impact of particulate air pollution would be substantially less than a tenth of what it would be if the same amount were emitted at the surface. As for its impact on climate, it would be about 1% of the sulfur injected into the stratosphere by the 1991 eruption of Mount Pinatubo in the Philippines. That well-studied event supports the assertion that no high-consequence unknown effects would occur.

At the same time, 100 kilotons of sulfur per year is not insubstantial: it would be more than twice the natural background flux of sulfur from the troposphere into the stratosphere, absent unusual volcanic activity. The cooling effect would be enough to delay the global rise in temperature for about a third of a year, an offset that would last as long as the subscale deployment was maintained. And because solar geoengineering is more effective at countering the rise in extreme precipitation than the rise in temperature, the deployment would delay the increasing intensity of tropical cyclones by more than half a year. These benefits are not negligible to those most at risk from climate impacts (though none of these benefits would necessarily be apparent due to the climate system’s natural variability).

We should mention that our 100 kilotons per year scenario is arbitrary. We define a subscale deployment to mean a deployment large enough to substantially increase the amount of aerosol in the stratosphere while being well below the level that is required to delay warming by a decade. With that definition, such a deployment could be several times larger or smaller than our sample scenario. 

Of course no amount of solar geoengineering can eliminate the need to reduce the concentration of greenhouse gases in the atmosphere. At best, solar geoengineering is a supplement to emissions cuts. But even the subscale deployment scenario we consider here would be a significant supplement: over a decade, it would have approximately half the cooling effect of eliminating all emissions from the European Union.

The politics of subscale deployment

The subscale deployment we’ve outlined here could serve several plausible scientific and technological goals. It would demonstrate the storage, lofting, and dispersion technologies for larger-scale deployment. If combined with an observational program, it would assess monitoring capabilities as well. It would directly clarify how sulfate is carried around the stratosphere and how sulfate aerosols interact with the ozone layer. After a few years of such a subscale deployment, we would have a far better understanding of the scientific and technological barriers to large-scale deployment. 

At the same time, subscale deployment would pose risks for the deployer. It could trigger political instability and invite retribution from other countries and international bodies that would not respond well to entities fiddling with the planet’s thermostat without global coordination and oversight. Opposition might stem from a deep-rooted aversion to environmental modification or from more pragmatic concerns that large-scale deployment would be detrimental to some regions. 

Deployers might be motivated by a wide range of considerations. Most obviously, a state or coalition of states might conclude that solar geoengineering could significantly reduce their climate risk, and that such a subscale deployment would strike an effective balance between the goals of pushing the world toward large-scale deployment and minimizing the risk of political backlash. 

The deployers could decide that a subscale project might make bigger interventions possible. While scientists may be comfortable drawing inferences about solar geoengineering from tiny experiments and models, politicians and the public may be very cautious about atmospheric interventions that can alter the climate system and affect all the creatures that dwell within it. A subscale deployment that encountered no major surprises could go a long way toward reducing extreme concerns about full-scale deployment. 

The deployers could also claim some limited benefit from the subscale deployment itself. While the effects would be too small to be readily evident on the ground, the methods used to attribute extreme weather events to climate change could substantiate claims of small reductions in the severity of such events. 

They might also argue that the deployment is simply restoring atmospheric protection that was recently lost. The reduction in sulfur emissions from ships is now saving lives by creating cleaner air, but it is also accelerating warming by thinning the reflective veil that such pollution created. The subscale scenario we sketched out would restore almost half of that sunshade protection, without the countervailing air pollution.  

The deployers might also convince themselves that their action was consistent with international law because they could perform deployment entirely within their domestic airspace and because the effects, while global, would not produce “significant transboundary harm,” the relevant threshold under customary international law. 

The governance implications of such a subscale deployment would depend on the political circumstances. If it were done by a major power without meaningful attempts at multilateral engagement, one would expect dramatic backlash. On the other hand, were deployment undertaken by a coalition that included highly climate-vulnerable states and that invited other states to join the coalition and develop a shared governance architecture, many states might be publicly critical but privately pleased that geoengineering reduced climate risks.   

SAI is sometimes described as an imaginary sociotechnical scenario residing in a distant sci-fi future. But it is technically feasible to start subscale deployments of the kind we describe here in five years. A state or coalition of states that wished to meaningfully test both the science and politics of deployment may consider such subscale or demonstration deployments as climate risks become more salient. 

We are not advocating for such action—in fact, we reiterate our support for a moratorium against deployment until the science is critically assessed and some governance architecture is widely agreed upon. Yet a sound understanding of the interlinked technology and politics of SAI is hampered by the perception that it must start with a significant effort that would substantially slow or even reverse warming. The example we’ve outlined here illustrates that the infrastructural barriers to deployment are more easily overcome than is commonly assumed. Policymakers must take this into account—and soon—as they consider how to develop solar geoengineering in the public interest and what guardrails should be put in place.

David W. Keith is a professor of geophysical sciences and founding faculty director of the Climate Systems Engineering initiative at the University of Chicago. 

Wake Smith is a lecturer at the Yale School of Environment and a research fellow at the Harvard Kennedy School.  

We thank Christian V. Rice of VPE Aerospace for performing the payload calculations herein. Please consult this PDF for more detail on our estimates.

Eric Schmidt has a 6-point plan for fighting election misinformation

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

And election campaigns are using artificial intelligence in novel ways. Earlier this year in the US, the Republican presidential primary campaign of Florida governor Ron DeSantis posted doctored images of Donald Trump; the Republican National Committee released an AI-created ad depicting a dystopian future in response to Joe Biden’s announcement of his reelection campaign; and just last month, Argentina’s presidential candidates each created an abundance of AI-generated content portraying the other party in an unflattering light. This surge in deepfakes heralds a new political playing field. Over the past year, AI was used in at least 16 countries to sow doubt, smear opponents, or influence public debate, according to a report released by Freedom House in October. We’ll need to brace ourselves for more chaos as key votes unfold across the world in 2024.

The year ahead will also bring a paradigm shift for social media platforms. The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds.

But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens. That’s a far cry from the global, democratized one-world conversation the idealists dreamed of 15 years ago. With many users left adrift and losing faith in these platforms, it’s clear that maximizing revenue has ironically hurt business interests.

Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy. Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year. 

1. Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.

2.     Know every source. Knowing the provenance of content and the time it entered the network can improve trust and safety. As a first step, a time stamp and an encrypted (and not removable) record of the originating IP address would establish an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPNs can mask a user’s true origin, platforms can step up their efforts to identify VPN traffic.

3.     Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers.

4.     Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers when synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.

5.     Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content. 

6.     Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation.
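To make proposal 1 concrete, here is a minimal sketch in Python of what reputation-based functionality could look like. The `Account` class, the feedback weights, and the trust threshold are all hypothetical illustrations, not a description of any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """A social media account whose human owner has been verified by the platform."""
    handle: str
    identity_verified: bool = False
    reputation: float = 0.0

    def record_feedback(self, endorsements: int, violations: int) -> None:
        # Endorsements raise reputation slowly; confirmed policy violations
        # cost five times as much, so trust is easier to lose than to earn.
        self.reputation += endorsements * 1.0 - violations * 5.0

    def trusted(self) -> bool:
        # Only verified accounts in good standing earn trusted status.
        return self.identity_verified and self.reputation >= 10.0

acct = Account("sample_user", identity_verified=True)
acct.record_feedback(endorsements=12, violations=0)
print(acct.trusted())  # True: verified, with reputation 12.0 above the threshold
```

The asymmetric weighting is the design point: an account must earn community trust gradually but can forfeit it quickly, which raises the cost of operating bot farms at scale.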
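Proposal 2’s chain of custody could be sketched as follows. This is an illustrative fragment under assumed names (`provenance_record`, `same_origin`, a platform-held `SERVER_KEY`); a keyed hash stands in for the encrypted IP address described above, so the platform can match content to a point of origin without the address being readable from the record.

```python
import hashlib
import hmac
import time

# Hypothetical secret held privately by the platform.
SERVER_KEY = b"platform-secret-key"

def provenance_record(content: bytes, origin_ip: str) -> dict:
    """Record when a piece of content entered the network, plus a
    non-removable, unreadable token for where it came from."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
        # Keyed hash of the origin: the platform can match it against known
        # addresses, but the raw IP cannot be read back out of the record.
        "origin": hmac.new(SERVER_KEY, origin_ip.encode(), hashlib.sha256).hexdigest(),
    }

def same_origin(a: dict, b: dict) -> bool:
    """Two pieces of content trace to the same source if their origin tokens match."""
    return hmac.compare_digest(a["origin"], b["origin"])

first = provenance_record(b"breaking news ...", "203.0.113.7")
second = provenance_record(b"more of the same ...", "203.0.113.7")
print(same_origin(first, second))  # True: same point of origin
```

A feed flooding the network from one origin token becomes visible in aggregate, which is what lets a platform deprioritize or ban the source rather than chase individual posts.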
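And proposal 3’s two signals—an image’s history in an existing database and a trained detector’s verdict—might be combined like this. Here `fake_probability` stands in for the output of a real detection model, and the thresholds and label wording are assumptions for illustration only.

```python
import hashlib

def assess_image(image_bytes: bytes,
                 known_hashes: set,
                 fake_probability: float) -> dict:
    """Combine two checks: does the image have any history in an existing
    database, and how likely does a trained detector think it is synthetic?"""
    digest = hashlib.sha256(image_bytes).hexdigest()
    # Map the detector's probability to a user-facing notice, analogous to
    # the "spam risk" label on an incoming phone call.
    if fake_probability >= 0.9:
        label = "Likely AI-generated"
    elif fake_probability >= 0.5:
        label = "Possibly AI-generated"
    else:
        label = "No manipulation detected"
    return {"has_history": digest in known_hashes, "label": label}

result = assess_image(b"pixel data ...", known_hashes=set(), fake_probability=0.95)
print(result["label"])  # Likely AI-generated
```

An image with no history and a high synthetic score would surface the most prominent warning; an image with a long, verifiable history would surface none.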

With a concerted effort from companies, regulators, and Congress, we can adopt these proposals in the coming year, in time to make a difference. My worry is that everyone benefits from favorable mis- or disinformation to varying degrees: our citizens are amused by such content, our political leaders may campaign with it, and the media garners traffic by covering sensationalist examples. The existing incentive structures will make misinformation hard to eliminate.  

Social media platforms need to fundamentally rethink their design for the age of AI, especially as democracies face a historic test worldwide. It’s clear to me the future will be one of many decentralized online spaces that cater to every interest, reflect the views of real humans (not bots), and focus on concrete community concerns. But until that day comes, setting these guardrails in place will help ensure that platforms maintain a healthy standard of discourse and do not let opaque, engagement-driven algorithms allow AI-enabled election content to run rampant.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.