Ethically sourced “spare” human bodies could revolutionize medicine

Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically sourced human bodies. 

It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress.

This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that can’t replicate major aspects of human physiology and makes it necessary to inflict harm on sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time. 

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines.

Bringing technologies together

Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body. 

Such technologies, together with established genetic techniques to inhibit brain development, make it possible to envision the creation of “bodyoids”—a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.

There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids could radically transform biomedical research by addressing critical limitations in the current models of research, drug development, and medicine. Among many other benefits, they would offer an almost unlimited source of organs, tissues, and cells for use in transplantation.

It could even be possible to generate organs directly from a patient’s own cells, essentially cloning someone’s biological material to ensure that transplanted tissues are a perfect immunological match, thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patient’s cells could also enable personalized drug screening, allowing physicians to directly assess the effects of different interventions in a biological model that accurately reflects the patient’s own genetics and physiology. We can even envision using animal bodyoids in agriculture, as a substitute for the use of sentient animals. 

Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions.

Even if it all works, it may not be practical or economical to “grow” bodyoids, which might need many years to reach the maturity required to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications. 

Ethical considerations and societal implications

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness. 

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it. 

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun conducting experiments on the “animated cadavers” of people who have been declared legally dead: they have lost all brain function, but their other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we’d have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought. 

A call to action

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored. 

The potential benefits—for both human patients and sentient animal species—are great. Governments, companies, and private foundations should start thinking about bodyoids as a possible path for investment. There is no need to start with humans—we can begin exploring the feasibility of this approach with rodents or other research animals. 

As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments. 

The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation.

The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.

Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University.

Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.

The cheapest way to supercharge America’s power grid

US electricity consumption is rising faster than it has in decades, thanks in part to the boom in data center development, the resurgence in manufacturing, and the increasing popularity of electric vehicles. 

Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.


But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. This challenge has created one of the biggest obstacles to getting more electricity generation online, reducing investment in new power plants and stranding others in years-long “interconnection queues” while they wait to join the grid.

Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. They could help us quickly bring online a significant portion of the nearly 2,600 gigawatts of backlogged generation and storage projects awaiting pathways to connect to the electric grid. 

The opportunity to leverage advanced transmission technologies to update the way we deliver and consume electricity in America is as close to a $20 bill sitting on the sidewalk as policymakers may ever encounter. Promoting the development and use of these technologies should be a top priority for politicians in Washington, DC, as well as electricity market regulators around the country.

That includes the new Trump administration, which has clearly stated that building greater electricity supply and keeping costs low for consumers are high priorities. 

In the last month, Washington has been consumed by the Trump team’s efforts to test the bounds of executive power, fire civil servants, and disrupt the basic workings of the federal government. But when or if the White House and Congress get around to enacting new energy policies, they would be wise to pick up the $20 bill by enacting bipartisan measures to accelerate the rollout of these innovative grid technologies.

ATTs generally fall into four categories: dynamic line ratings, which combine local weather forecasts and measurements on or near transmission lines to safely increase their capacity when conditions allow; high-performance conductors, which are advanced wires that use carbon fiber, composite cores, or superconducting materials to carry more electricity than traditional steel-core conductors; topology optimization, which uses software to model fluctuating conditions across the grid and identify the most efficient routes to distribute electricity from moment to moment; and advanced power flow control devices, which redistribute electricity to lines with available capacity. 
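
To make the first of those categories concrete, here is a minimal, hypothetical sketch of why dynamic line ratings free up capacity. It is not the IEEE 738 thermal model that utilities actually use; it assumes a toy heat balance in which convective cooling dominates, and every constant in it is an illustrative assumption rather than real conductor data.

```python
import math

# Toy heat-balance sketch of a dynamic line rating (illustrative values only).
T_CONDUCTOR_MAX = 75.0   # deg C: assumed thermal limit of the conductor
R_PER_METER = 8.0e-5     # ohm/m: assumed AC resistance of the wire

def convective_coeff(wind_speed_ms: float) -> float:
    """Very rough convective cooling term, in W per meter per deg C (made-up constants)."""
    return 3.0 + 4.0 * math.sqrt(wind_speed_ms)

def ampacity(ambient_c: float, wind_speed_ms: float) -> float:
    """Current (amps) at which I^2 * R heating balances convective cooling."""
    cooling = convective_coeff(wind_speed_ms) * (T_CONDUCTOR_MAX - ambient_c)
    return math.sqrt(cooling / R_PER_METER)

static = ampacity(ambient_c=40.0, wind_speed_ms=0.6)   # conservative worst-case weather
dynamic = ampacity(ambient_c=15.0, wind_speed_ms=3.0)  # a cool, breezy day
print(f"static rating ~{static:.0f} A, dynamic rating ~{dynamic:.0f} A "
      f"(+{100 * (dynamic / static - 1):.0f}%)")
```

The point of the sketch is simply that a static rating must assume the worst plausible weather, so on most days a line can safely carry far more current than its nameplate suggests; sensors and forecasts let grid operators claim that headroom.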


Other countries from Belgium to India to the United Kingdom are already making large-scale use of these technologies. Early projects in the United States have been remarkably successful as well. One recent deployment of dynamic line ratings increased capacity by more than 50% for only $45,000 per mile—roughly 1% of the price of building new transmission.

So why are we not seeing an explosion in ATT investment and deployment in the US? Because despite their potential to unlock 21st-century technology, the 20th-century structure of the nation’s electricity markets discourages adoption of these solutions. 

For one thing, under the current regulatory system, utilities generally make money by passing the cost of big new developments along to customers (earning a fixed annual return on their investment). That comes in the form of higher electricity rates, which local public utility commissions often approve after power companies propose such projects.

That means utilities have financial incentives to make large and expensive investments, but not to save consumers money. When ATTs are installed in place of building new transmission capacity, the smaller capital costs mean that utilities make lower profits. For example, utilities might earn $600,000 per year after building a new mile of transmission, compared with about $4,500 per mile annually after installing the equipment and software necessary for line ratings. While these state regulatory agencies are tasked with ensuring that utilities act in the best interest of consumers, they often lack the necessary information to identify the best approach for doing so.
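
A hypothetical back-of-the-envelope calculation makes the gap vivid. The per-mile figures come from the paragraphs above; the 10% allowed rate of return is an assumption chosen for illustration, not a number from this article.

```python
RATE_OF_RETURN = 0.10  # assumed regulated annual return on capital (illustrative)

new_line_profit = 600_000  # $ per mile per year after building new transmission (from text)
dlr_profit = 4_500         # $ per mile per year after installing dynamic line ratings (from text)

# Implied capital a utility would have to deploy to earn those returns:
print(f"implied capex, new line: ${new_line_profit / RATE_OF_RETURN:,.0f} per mile")
print(f"implied capex, DLR gear: ${dlr_profit / RATE_OF_RETURN:,.0f} per mile")
print(f"profit ratio: roughly {new_line_profit / dlr_profit:.0f}x in favor of new lines")
```

Under that assumed rate of return, the arithmetic is consistent with the deployment cost cited earlier (about $45,000 per mile for dynamic line ratings), and the incentive gap is more than a hundredfold.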

Overcoming these structural barriers will require action from both state and federal governments, and it should appeal to Democrats and Republicans alike. We’ve already seen some states, including Minnesota and Montana, move in this direction, but policy interventions to date remain insufficient. In a recent paper, we propose a new approach for unlocking the potential of these technologies.

First, we suggest requiring transmission providers to use ATTs in some “no regrets” contexts, where possible downsides are minor or nonexistent. The Federal Energy Regulatory Commission, for example, is already considering requiring dynamic line ratings on certain highly congested lines. Given the low cost of dynamic line ratings, and their clear benefit in cases of congestion, we believe that FERC should quickly move forward with, and strengthen, such a rule. Likewise, the Department of Energy or Congress should adopt an efficiency standard for the wires that carry electricity around the country. Every year, approximately 5% of electricity generated is lost in the transmission and distribution process. The use of high-performance conductors can reduce those losses by 30%.

In addition, federal agencies and state lawmakers should require transmission providers to evaluate the potential for using ATTs on their grid, or provide support to help them do so. FERC has recently taken steps in this direction, and it should continue to strengthen those actions. 

Regulators should also provide financial incentives to transmission providers to encourage the installation of ATTs. The most promising approach is a “shared savings” incentive, such as that proposed in the recent Advancing GETS Act. This would allow utilities to earn a profit for saving money, not just spending it, and could save consumers billions on their electricity bills every year.

Finally, we should invest in building digital tools so transmission owners can identify opportunities for these technologies and so regulators can hold them accountable. Developing these systems will require transmission providers to share information about electricity supply and demand as well as grid infrastructure. Ideally, with such data in hand, researchers can develop a “digital twin” of the current transmission system to test different configurations of ATTs and help improve the performance and efficiency of our grids. 

We are all too aware that the world often faces difficult policy trade-offs. But laws or regulations that facilitate the use of ATTs can quickly expand the grid and save consumers money. They should be an easy yes on both sides of the aisle.

Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.

From COBOL to chaos: Elon Musk, DOGE, and the Evil Housekeeper Problem

In trying to make sense of the wrecking ball that is Elon Musk and President Trump’s DOGE, it may be helpful to think about the Evil Housekeeper Problem. It’s a principle of computer security roughly stating that once someone is in your hotel room with your laptop, all bets are off. Because the intruder has physical access, you are in much more trouble. And the person demanding to get into your computer may be standing right beside you.

So who is going to stop the evil housekeeper from plugging a computer in and telling IT staff to connect it to the network?

What happens if someone comes in and tells you that you’ll be fired unless you reveal the authenticator code from your phone, or sign off on a code change, or turn over your PIV card, the Homeland Security–approved smart card used to access facilities and systems and securely sign documents and emails? What happens if someone says your name will otherwise be published in an online list of traitors? Already the new administration is firing, putting on leave, or outright escorting from the building people who refuse to do what they’re told. 

It’s incredibly hard to protect a system from someone—the evil housekeeper from DOGE—who has made their way inside and wants to wreck it. This administration is on the record as wanting to outright delete entire departments. Accelerationists are not only setting policy but implementing it by working within the administration. If you can’t delete a department, then why not just break it until it doesn’t work? 

That’s why what DOGE is doing is a massive, terrifying problem, and one I talked through earlier in a thread on Bluesky.

Government is built to be stable. Collectively, we put systems and rules in place to ensure that stability. But whether they actually deliver and preserve stability in the real world isn’t actually about the technology used; it’s about the people using it. When it comes down to it, technology is a tool to be used by humans for human ends. The software used to run our democratically elected government is deployed to accomplish goals tied to policies: collecting money from people, or giving money to states so they can give money to people who qualify for food stamps, or making covid tests available to people.

Usually, our experience of government technology is that it’s out of date or slow or unreliable. Certainly not as shiny as what we see in the private sector. And that technology changes very, very slowly, if it happens at all. 

It’s not as if people don’t realize these systems could do with modernization. In my experience troubleshooting and modernizing government systems in California and the federal government, I worked with Head Start, Medicaid, child welfare, and logistics at the Department of Defense. Some of those systems were already undergoing modernization attempts, many of which were and continue to be late, over budget, or just plain broken. But the changes that are needed to make other systems more modern were frequently seen as too risky or too expensive. In other words, not important enough. 

Of course, some changes are deemed important enough. The covid-19 pandemic and our unemployment insurance systems offer good examples. When covid hit, certain critical government technologies suddenly became visible. Those systems, like unemployment insurance portals, also became politically important, just like the launch of the Affordable Care Act website (which is why it got so much attention when it was botched). 

Political attention can change everything. During the pandemic, suddenly it wasn’t just possible to modernize and upgrade government systems, or to make them simpler, clearer, and faster to use. It actually happened. Teams were parachuted in. Overly restrictive rules and procedures were reassessed and relaxed. Suddenly, government workers were allowed to work remotely and to use Slack.

However, there is a reason this was an exception. 

In normal times, rules and procedures are certainly part of what makes it very, very hard to change government technology. But they are in place to stop changes because, well, changes might break those systems and government doesn’t work without them working consistently. 

A long time ago I worked on a mainframe system in California—the kind that uses COBOL. It was as solid as a rock and worked day in, day out. Because if it didn’t, and reimbursements weren’t received for Medicaid, then the state might become temporarily insolvent. 

That’s why many of the rules about technology in government make it hard to make changes: because sometimes the risk of things breaking is just too high. Sometimes what’s at stake is simply keeping money flowing; sometimes, as with 911, lives are on the line.

Still, government systems and the rules that govern them are ultimately only as good as the people who oversee and enforce them. The technology will only do (and not do) what people tell it to. So if anyone comes in and breaks those rules on purpose—without fear of consequence—there are few practical or technical guardrails to prevent it. 

One system that’s meant to do that is the ATO, or the Authority to Operate. It does what it says: It lets you run a computer system. You are not supposed to operate a system without one. 

But DOGE staffers are behaving in a way that suggests they don’t care about getting ATOs. And nothing is really stopping them. (Someone on Bluesky replied to me: “My first thought about the OPM [email] server was, ‘there’s no way those fuckers have an ATO.’”) 

You might think that there would be technical measures to stop someone right out of high school from coming in and changing the code to a government system. That the system could require two-factor authentication to deploy the code to the cloud. That you would need a smart card to log in to a specific system to do that. Nope—all those technical measures can be circumvented by coercion at the hands of the evil housekeeper. 

Indeed, none of our systems and rules work without enforcement, and consequences flowing from that enforcement. But to an unprecedented degree, this administration, and its individual leaders, have shown absolutely no fear. That’s why, according to Wired, the former X and SpaceX engineer and DOGE staffer Marko Elez had the “ability not just to read but to write code on two of the most sensitive systems in the US government: the Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service (BFS).” (Elez reportedly resigned yesterday after the Wall Street Journal began reporting on a series of racist comments he had allegedly made.)

We’re seeing in real time that there are no practical technical measures preventing someone from taking a spanner to the technology that keeps our government stable, that keeps society running every day—despite the very real consequences. 

So we should plan for the worst, even if the likelihood of the worst is low. 

We need a version of the UK government’s National Risk Register, covering everything from the collapse of financial markets to “an attack on government” (but, unsurprisingly, that risk is described in terms of external threats). The register mostly predicts long-term consequences, with recovery taking months. That may end up being the case here. 

We need to dust off those “in the event of an emergency” disaster response procedures dealing with the failure of federal government—at individual organizations that may soon hit cash-flow problems and huge budget deficits without federal funding, at statehouses that will need to keep social programs running, and in groups doing the hard work of archiving and preserving data and knowledge.

In the end, all we have is each other—our ability to form communities and networks to support, help, and care for each other. Sometimes all it takes is for the first person to step forward, or to say no, and for us to rally around so it’s easier for the next person. In the end, it’s not about the technology—it’s about the people.

Dan Hon is principal of Very Little Gravitas, where he helps turn around and modernize large and complex government services and products.

Three reasons Meta will struggle with community fact-checking

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the Internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company’s platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta’s platforms are significantly more popular; Facebook alone is used by 70% of Americans, according to Pew).

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed. For example, researchers have found that participants are more likely to challenge content they disagree with politically and that flagging content as false does not reduce engagement, but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed: 

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner—or at all. Consider how a post with a picture of a death cap mushroom and the caption “Tasty” might be handled under Community Notes–style moderation. If an expert in mycology doesn’t see the post, or sees it only after it’s been widely shared, it may not get flagged as “Poisonous, do not eat”—at least not until it’s too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely). 

Crucially, X’s Community Notes aren’t visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won’t show up. X does not make a note visible until there’s agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it’s not foolproof. It still relies on people’s opinions about a note and not on actual facts. Often what’s needed is expertise.
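
The mechanism is easier to see in miniature. The sketch below is only a toy version of that bridging idea, with hypothetical raters and a crude agreement score; X’s actual, open-sourced Community Notes system instead fits a matrix-factorization model over the full rating history.

```python
from itertools import combinations

# Hypothetical rating history: rater -> {note_id: rated helpful?}
history = {
    "ann":  {"n1": True,  "n2": False, "n3": True},
    "bob":  {"n1": False, "n2": True,  "n3": True},
    "cara": {"n1": True,  "n2": False, "n3": True},
}

def agreement(a: str, b: str) -> float:
    """Fraction of co-rated notes on which two raters agreed."""
    shared = set(history[a]) & set(history[b])
    if not shared:
        return 0.5  # no history together: treat as neutral
    return sum(history[a][n] == history[b][n] for n in shared) / len(shared)

def becomes_visible(note: str, threshold: float = 0.5) -> bool:
    """Show a note only if raters who usually disagree both found it helpful."""
    helpful = [r for r, notes in history.items() if notes.get(note)]
    return any(agreement(a, b) < threshold for a, b in combinations(helpful, 2))

print(becomes_visible("n3"))  # True: ann and bob disagree elsewhere but agree here
print(becomes_visible("n1"))  # False: only like-minded raters rated it helpful
```

Even this toy captures the failure mode described above: visibility is decided by the crowd’s agreement patterns, not by subject-matter expertise.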

I moderate a community on Reddit called r/AskHistorians. It’s a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time. Sometimes these are straightforward errors. But sometimes there is hateful content that takes experts to recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people—probably with very different voting patterns and very different opinions on a lot of topics—not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren’t experts in history will upvote outdated, truthy-sounding answers that aren’t actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow. 

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes–style program to work, it should make sure that the people with the knowledge to make assessments see the posts and that expertise is accounted for in voting, especially when there’s a misalignment between common understanding and expert knowledge. 

2. It won’t work without well-supported volunteers  

Meta’s paid content moderators review the worst of the worst—including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe. 

Community moderators don’t have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse—something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team. If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators’ morale high(ish). 

It’s unclear how Meta’s new moderation system will be structured. If volunteers choose what content they flag, will that replicate X’s problem, where partisanship affects which posts are flagged and how? It’s also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta—the company that is currently being sued for damaging the mental health of its paid content moderators—provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn’t unduly influence the notes).    

3. It can’t work without protections and guardrails 

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can’t do it all on their own. Moderation isn’t just about making decisions on what’s “true” or “false.” It’s also about identifying and responding to other kinds of harmful content. Zuckerberg’s decision is coupled with other changes to Meta’s community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do it when that ecosystem gets poisoned by toxic content. 

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators. While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with. 

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can’t do it all on their own. It’s only one tool in the box. 

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn’t what this decision aims to do. It’s a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better. 

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.

Useful quantum computing is inevitable—and increasingly imminent

On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. 

However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.

I’ve been closely following developments in quantum computing as an investor, and it’s clear to me that it is rapidly converging on utility. Last year, Google’s Willow device demonstrated that there is a promising pathway to scaling up to bigger and bigger computers. It showed that errors can be reduced exponentially as the number of quantum bits, or qubits, increases. It also ran a benchmark test in under five minutes that would take one of today’s fastest supercomputers 10 septillion years. While too small to be commercially useful with known algorithms, Willow shows that quantum supremacy (executing a task that is effectively impossible for any classical computer to handle in a reasonable amount of time) and fault tolerance (correcting errors faster than they are made) are achievable.
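
For readers who want the scaling statement behind that claim spelled out, here is a minimal sketch. The functional form is the standard way exponential error suppression in surface codes is usually written; the prefactor and suppression factor below are illustrative assumptions, not Google’s measured values (Willow reported a suppression factor a bit above 2).

```python
# eps_d ≈ A * Lambda**(-(d + 1) / 2): logical error rate per cycle for a
# surface code of distance d, where Lambda is the factor by which errors
# shrink each time d increases by 2. Both values below are assumptions.
A = 0.03        # assumed prefactor
LAMBDA = 2.0    # assumed error-suppression factor

for d in (3, 5, 7, 11, 15, 27):
    physical_qubits = 2 * d * d - 1  # rough surface-code cost of one logical qubit
    eps = A * LAMBDA ** (-(d + 1) / 2)
    print(f"d={d:>2}  ~{physical_qubits:>4} physical qubits  logical error ~ {eps:.1e}")
```

The encouraging part is the shape of the curve: the qubit cost grows only quadratically with code distance while the logical error rate falls exponentially, which is why demonstrating a suppression factor comfortably above 1 on real hardware matters so much.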

For example, PsiQuantum, a startup my company is invested in, is set to break ground on two quantum computers that will enter commercial service before the end of this decade. The plan is for each one to be 10 thousand times the size of Willow, big enough to tackle important questions about materials, drugs, and the quantum aspects of nature. These computers will not use GPUs to implement error correction. Rather, they will have custom hardware, operating at speeds that would be impossible with Nvidia hardware.

At the same time, quantum algorithms are improving far faster than hardware. A recent collaboration between the pharmaceutical giant Boehringer Ingelheim and PsiQuantum demonstrated a more than 200x improvement in algorithms to simulate important drugs and materials. Phasecraft, another company we have invested in, has improved the simulation performance for a wide variety of crystal materials and has published a quantum-enhanced version of a widely used materials science algorithm that is tantalizingly close to beating all classical implementations on existing hardware.

Advances like these lead me to believe that useful quantum computing is inevitable and increasingly imminent. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve.

We should care about the prospect of useful quantum computers because today we don’t really know how to do chemistry. We lack knowledge about the mechanisms of action for many of our most important drugs. The catalysts that drive our industries are generally poorly understood, require expensive exotic materials, or both. Despite appearances, we have significant gaps in our agency over the physical world; our achievements belie the fact that we are, in many ways, stumbling around in the dark.

Nature operates on the principles of quantum mechanics. Our classical computational methods fail to accurately capture the quantum nature of reality, even though much of our high-performance computing resources are dedicated to this pursuit. Despite all the intellectual and financial capital expended, we still don’t understand why the painkiller acetaminophen works, how type-II superconductors function, or why a simple crystal of iron and nitrogen can produce a magnet with such incredible field strength. We search for compounds in Amazonian tree bark to cure cancer and other maladies, manually rummaging through a pitifully small subset of a design space encompassing 10⁶⁰ small molecules. It’s more than a little embarrassing.

We do, however, have some tools to work with. In industry, density functional theory (DFT) is the workhorse of computational chemistry and materials modeling, widely used to investigate the electronic structure of many-body systems—such as atoms, molecules, and solids. When DFT is applied to systems where electron-electron correlations are weak, it produces reasonable results. But it fails entirely on a broad class of interesting problems. 

Take, for example, the buzz in the summer of 2023 around the “room-temperature superconductor” LK-99. Many accomplished chemists turned to DFT to try to characterize the material and determine whether it was, indeed, a superconductor. Results were, to put it politely, mixed—so we abandoned our best computational methods, returning to mortar and pestle to try to make some of the stuff. Sadly, although LK-99 might have many novel characteristics, a room-temperature superconductor it isn’t. That’s unfortunate, as such a material could revolutionize energy generation, transmission, and storage, not to mention magnetic confinement for fusion reactors, particle accelerators, and more.

AI will certainly help with our understanding of materials, but it is no panacea. New AI techniques have emerged in the last few years, with some promising results. DeepMind’s Graph Networks for Materials Exploration (GNoME), for example, found 380,000 new potentially stable materials. At its core, though, GNoME depends on DFT, so its performance is only as good as DFT’s ability to produce good answers. 

The fundamental issue is that an AI model is only as good as the data it’s trained on. Training an LLM on the entire internet corpus, for instance, can yield a model that has a reasonable grasp of most human culture and can process language effectively. But if DFT fails for any non-trivially correlated quantum systems, how useful can a DFT-derived training set really be? We could also turn to synthesis and experimentation to create training data, but the number of physical samples we can realistically produce is minuscule relative to the vast design space, leaving a great deal of potential untapped. Only once we have reliable quantum simulations to produce sufficiently accurate training data will we be able to create AI models that answer quantum questions on classical hardware.

And that means that we need quantum computers. They afford us the opportunity to shift from a world of discovery to a world of design. Today’s iterative process of guessing, synthesizing, and testing materials is comically inadequate.

In a few tantalizing cases, we have stumbled on materials, like superconductors, with near-magical properties. How many more might these new tools reveal in the coming years? We will eventually have machines with millions of qubits that, when used to simulate crystalline materials, open up a vast new design space. It will be like waking up one day and finding a million new elements with fascinating properties on the periodic table.

Of course, building a million-qubit quantum computer is not for the faint of heart. Such machines will be the size of supercomputers, and require large amounts of capital, cryoplant, electricity, concrete, and steel. They also require silicon photonics components that perform well beyond anything in industry, error correction hardware that runs fast enough to chase photons, and single-photon detectors with unprecedented sensitivity. But after years of research and development, and more than a billion dollars of investment, the challenge is now moving from science and engineering to construction.

It is impossible to fully predict how quantum computing will affect our world, but a thought exercise might offer a mental model of some of the possibilities. 

Imagine our world without metal. We could have wooden houses built with stone tools, agriculture, wooden plows, movable type, printing, poetry, and even thoughtfully edited science periodicals. But we would have no inkling of phenomena like electricity or electromagnetism—no motors, generators, radio, MRI machines, silicon, or AI. We wouldn’t miss them, as we’d be oblivious to their existence. 

Today, we are living in a world without quantum materials, oblivious to the unrealized potential and abundance that lie just out of sight. With large-scale quantum computers on the horizon and advancements in quantum algorithms, we are poised to shift from discovery to design, entering an era of unprecedented dynamism in chemistry, materials science, and medicine. It will be a new age of mastery over the physical world.

Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies including several in quantum computing, quantum algorithms, and quantum sensing: PsiQuantum, Phasecraft, NVision, and Ideon.

There can be no winners in a US-China AI arms race

The United States and China are entangled in what many have dubbed an “AI arms race.” 

In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of “beating China” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.

But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs.    

The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire.

Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate. 

When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. 

Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.

It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. 

Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.

Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and the redirection of US military resources from Ukraine toward China.

Fortunately, a glimmer of hope for a more responsible approach to AI collaboration appeared when Donald Trump posted on January 17 that he had restarted direct dialogue with Chairman Xi Jinping on various areas of collaboration and said that, given their past cooperation, the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which has put Trump at odds with sharp China critics in his own administration and Congress, will offer a preview of how his efforts to put US-China relations on a less confrontational trajectory play out.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.

It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. 

Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. 

The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

  1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically. 
  2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
  3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors, whether through misinformation campaigns, attacks on telecom, power, or financial systems, or cyberattacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for, and international cooperation in, detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
  4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring far more value to the world, and a more peaceful outcome, than the Manhattan Project for AI being promoted by many in Washington today.
  5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
  6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
  7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.

    Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together. 

    The opportunity to harness AI for the common good is a chance the world cannot afford to miss.


    Alvin Wang Graylin

    Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.

    Paul Triolo

    Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.

    Trump’s win is a tragic loss for climate progress

    Donald Trump’s decisive victory is a stunning setback for the fight against climate change.

    The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade. 

    It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe. Under the policies in place today, the planet is already set to warm by more than 3 °C over preindustrial levels in the coming decades.

    Trump could push the globe into even more dangerous terrain by defanging President Joe Biden’s signature climate laws. In fact, a second Trump administration could boost greenhouse-gas emissions by 4 billion tons through 2030 alone, according to an earlier analysis by Carbon Brief, a well-regarded climate news and data site. That will exacerbate the dangers of heat waves, floods, wildfires, droughts, and famine and increase deaths and disease from air pollution, inflicting some $900 billion in climate damages around the world, Carbon Brief found.

    I started as the climate editor at MIT Technology Review just as Trump came into office the last time. Much of the early job entailed covering his systematic unraveling of the modest climate policy and progress that President Barack Obama had managed to achieve. I fear it will be far worse this time, as Trump ambles into office feeling empowered and aggrieved, and ready to test the rule of law and crack down on dissent. 

    This time he’ll be staffed all the more by loyalists and ideologues, who have already made plans to force out civil servants with expertise and experience from federal agencies including the Environmental Protection Agency. He’ll be backed by a Supreme Court that he moved well to the right, and which has already undercut landmark environmental doctrines and weakened federal regulatory agencies.

    This time the setbacks will sting more, too, because the US did finally manage to pass real, substantive climate policy, through the slimmest of congressional margins. The Inflation Reduction Act and Bipartisan Infrastructure Law allocated massive amounts of government funding to accelerating the shift to low-emissions industries and rebuilding the US manufacturing base around a clean-energy economy. 

    Trump has made clear he will strive to repeal as many of these provisions as he can, tempered perhaps only by Republicans who recognize that these laws are producing revenue and jobs in their districts. Meanwhile, throughout the prolonged presidential campaign, Trump or his surrogates pledged to boost oil and gas production, eliminate federal support for electric vehicles, end pollution rules for power plants, and remove the US from the Paris climate agreement yet again. Each of those goals stands in direct opposition to the deep, rapid emissions cuts now necessary to prevent the planet from tipping past higher and higher temperature thresholds.

    Project 2025, considered a blueprint for the early days of a second Trump administration despite his insistence to the contrary, calls for dismantling or downsizing federal institutions including the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency. That could cripple the nation’s ability to forecast, track, or respond to storms, floods, and fires like those that have devastated communities in recent months.

    Observers I’ve spoken to fear that the Trump administration will also return the Department of Energy, which under Biden had evolved its mission toward developing low-emissions technologies, to the primary task of helping companies dig up more fossil fuels.

    The US election could create global ripples as well, and very soon. US negotiators will meet with their counterparts at the annual UN climate conference that kicks off next week. With Trump set to move back into the White House in January, they will have little credibility or leverage to nudge other nations to step up their commitments to reducing emissions. 

    But those are just some of the direct ways that a second Trump administration will enfeeble the nation’s ability to drive down emissions and counter the growing dangers of climate change. He also has considerable power to stall the economy and sow international chaos amid escalating conflicts in Europe and the Middle East. 

    Trump’s eagerness to enact tariffs, slash government spending, and deport major portions of the workforce may stunt growth, drive up inflation, and chill investment. All that would make it far more difficult for companies to raise the capital and purchase the components needed to build anything in the US, whether that means wind turbines, solar farms, and seawalls or buildings, bridges, and data centers. 

    [Photo: President-elect Donald Trump speaks at an election night event in West Palm Beach, Florida. Credit: Win McNamee/Getty Images]

    His clumsy handling of the economy and international affairs may also help China extend its dominance in producing and selling the components that are crucial to the energy transition, including batteries, EVs, and solar panels, to customers around the globe.

    If one job of a commentator is to find some perspective in difficult moments, I admit I’m mostly failing in this one.

    The best I can do is to say that there will be some meaningful lines of defense. For now, at least, state leaders and legislatures can continue to enact and implement stronger climate rules. Other nations could step up their efforts to cut emissions and assert themselves as global leaders on climate.

    Private industry will likely continue to invest in and build businesses in climate tech and clean energy, since solar, wind, batteries, and EVs have proved themselves as competitive industries. And technological progress can occur no matter who is sitting in the round room on Pennsylvania Avenue, since researchers continue striving to develop cleaner, cheaper ways of producing our energy, food, and goods.

    By any measure, the job of addressing climate change is now much harder. Nothing, however, has changed about the stakes. 

    Our world doesn’t end if we surpass 2 °C, 2.5 °C, or even 3 °C, but it will steadily become a more dangerous and erratic place. Every tenth of a degree remains worth fighting for—whether two, four, or a dozen years from now—because every bit of warming that nations pull together to prevent eases future suffering somewhere.

    So as the shock wears off and the despair begins to lift, the core task before us remains the same: to push for progress, whenever, wherever, and however we can. 

    Sorry, AI won’t “fix” climate change

    In an essay last week, Sam Altman, the CEO of OpenAI, argued that the accelerating capabilities of AI will usher in an idyllic “Intelligence Age,” unleashing “unimaginable” prosperity and “astounding triumphs” like “fixing the climate.”

    It’s a promise that no one is in a position to make—and one that, when it comes to the topic of climate change, fundamentally misunderstands the nature of the problem. 

    More maddening, the argument suggests that the technology’s massive consumption of electricity today doesn’t much matter, since it will allow us to generate abundant clean power in the future. That casually waves away growing concerns about a technology that’s already accelerating proposals for natural-gas plants and diverting major tech companies from their corporate climate targets.

    By all accounts, AI’s energy demands will only continue to increase, even as the world scrambles to build larger, cleaner power systems to meet the increasing needs of EV charging, green hydrogen production, heat pumps, and other low-carbon technologies. Altman himself reportedly just met with White House officials to make the case for building absolutely massive AI data centers, which could require the equivalent of five dedicated nuclear reactors to run.  

    It’s a bedrock perspective of MIT Technology Review that technological advances can deliver real benefits and accelerate societal progress in meaningful ways. But for decades researchers and companies have oversold the potential of AI to deliver blockbuster medicines, achieve superintelligence, and free humanity from the need to work. To be fair, there have been significant advances, but nothing on the order of what’s been hyped.

    Given that track record, I’d argue you need to develop a tool that does more than plagiarize journalism and help students cheat on homework before you can credibly assert that it will solve humanity’s thorniest problems, whether the target is rampant poverty or global warming.

    To be sure, AI may help the world address the rising dangers of climate change. We have begun to see research groups and startups harness the technology to try to manage power grids more effectively, put out wildfires faster, and discover materials that could create cheaper, better batteries or solar panels.

    All those advances are still relatively incremental. But let’s say AI does bring about an energy miracle. Perhaps its pattern-recognition prowess will deliver the key insight that finally cracks fusion—a technology that Altman is betting on heavily as an investor.

    That would be fantastic. But technological advances are just the start—necessary but far from sufficient to eliminate the world’s climate emissions.

    How do I know?

    Because between nuclear fission plants, solar farms, wind turbines, and batteries, we already have every technology we need to clean up the power sector. This should be the low-hanging fruit of the energy transition. Yet in the largest economy on Earth, fossil fuels still generate 60% of the electricity. The fact that so much of our power still comes from coal, petroleum, and natural gas is a regulatory failure as much as a technological one. 

    “As long as we effectively subsidize fossil fuels by allowing them to use the atmosphere as a waste dump, we are not allowing clean energy to compete on a level playing field,” Zeke Hausfather, a climate scientist at the independent research organization Berkeley Earth, wrote on X in a response to Altman’s post. “We need policy changes, not just tech breakthroughs, to meet our climate goals.”

    That’s not to say there aren’t big technical problems we still need to solve. Just look at the continuing struggles to develop clean, cost-competitive ways of fertilizing crops or flying planes. But the fundamental challenges of climate change are sunk costs, development obstacles, and inertia.

    We’ve built and paid for a global economy that spews out planet-warming gases, investing trillions of dollars in power plants, steel mills, factories, jets, boilers, water heaters, stoves, and SUVs that run on fossil fuels. And few people or companies will happily write off those investments so long as those products and plants still work. AI can’t remedy all that just by generating better ideas. 

    To raze and replace the machinery of every industry around the world at the speed now required, we will need increasingly aggressive climate policies that incentivize or force everyone to switch to cleaner plants, products, and practices.

    But with every proposal for a stricter law or some big new wind or solar farm, forces will push back, because the plan will hit someone’s wallet, block someone’s views, or threaten the areas or traditions someone cherishes. Climate change is an infrastructure problem, and building infrastructure is a messy human endeavor. 

    Tech advances can ease some of these issues. Cheaper, better alternatives to legacy industries make hard choices more politically palatable. But there are no improvements to AI algorithms or underlying data sets that solve the challenge of NIMBYism, the conflict between human interests, or the desire to breathe the fresh air in an unsullied wilderness. 

    To assert that a single technology—that just happens to be the one your company develops—can miraculously untangle these intractable conflicts of human society is at best self-serving, if not a little naïve. And it’s a troubling idea to proclaim at a point when the growth of that very technology is threatening to undermine the meager progress the world has begun to make on climate change.

    As it is, the one thing we can state confidently about generative AI is that it’s making the hardest problem we’ve ever had to solve that much harder to solve.

    Why we need an AI safety hotline

    In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of ChatGPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

    Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. The evaluations also suffer from limited scope: current tests are unlikely to reliably uncover the full set of risks posed by any one model or all the risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools.

    One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

    How to sound the alarm

    In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

    These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

    When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

    For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

    Lowering the stakes

    What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

    As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediary, informal step is available.

    Studying examples elsewhere

    The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

    One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

    This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and with investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government.

    An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

    A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

    Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.