Trump’s AI Action Plan is a distraction

On Wednesday, President Trump issued three executive orders, delivered a speech, and released an action plan, all on the topic of continuing American leadership in AI. 

The plan contains dozens of proposed actions, grouped into three “pillars”: accelerating innovation, building infrastructure, and leading international diplomacy and security. Some of its recommendations are thoughtful even if incremental, some clearly serve ideological ends, and many enrich big tech companies, but the plan is just a set of recommended actions. 

The three executive orders, on the other hand, actually operationalize one subset of actions from each pillar: 

  • One aims to prevent “woke AI” by mandating that the federal government procure only large language models deemed “truth-seeking” and “ideologically neutral” rather than ones allegedly favoring DEI. This action purportedly accelerates AI innovation.
  • A second aims to accelerate construction of AI data centers. A much more industry-friendly version of an order issued under President Biden, it makes available rather extreme policy levers, like effectively waiving a broad swath of environmental protections, providing government grants to the wealthiest companies in the world, and even offering federal land for private data centers.
  • A third promotes and finances the export of US AI technologies and infrastructure, aiming to secure American diplomatic leadership and reduce international dependence on AI systems from adversarial countries.

This flurry of actions made for glitzy press moments, including an hour-long speech from the president and onstage signings. But while the tech industry cheered these announcements (which will swell its coffers), the spectacle obscured the fact that the administration is currently decimating the very policies that enabled America to become the world leader in AI in the first place.

To maintain America’s leadership in AI, you have to understand what produced it. Here are four specific long-standing public policies that helped the US achieve this leadership—advantages that the administration is undermining. 

Investing federal funding in R&D 

Generative AI products released recently by American companies, like ChatGPT, were developed with industry-funded research and development. But the R&D that enables today’s AI was actually funded in large part by federal government agencies—like the Defense Department, the National Science Foundation, NASA, and the National Institutes of Health—starting in the 1950s. This includes the first successful AI program in 1956, the first chatbot in 1961, and the first expert systems for doctors in the 1970s, along with breakthroughs in machine learning, neural networks, backpropagation, computer vision, and natural-language processing.

American tax dollars also funded advances in hardware, communications networks, and other technologies underlying AI systems. Public research funding undergirded the development of lithium-ion batteries, micro hard drives, LCD screens, GPS, radio-frequency signal compression, and more in today’s smartphones, along with the chips used in AI data centers, and even the internet itself.

Instead of building on this world-class research history, the Trump administration is slashing R&D funding, firing federal scientists, and squeezing leading research universities. This week’s action plan recommends investing in R&D, but the administration’s actual budget proposes cutting nondefense R&D by 36%. The plan also proposes actions to better coordinate and guide federal R&D, but coordination won’t yield more funding.

Some say that companies’ R&D investments will make up the difference. However, companies conduct research that benefits their bottom line, not necessarily the national interest. Public investment allows broad scientific inquiry, including basic research that lacks immediate commercial applications but sometimes ends up opening massive markets years or decades later. That’s what happened with today’s AI industry.

Supporting immigration and immigrants

Beyond public R&D investment, America has long attracted the world’s best researchers and innovators.

Today’s generative AI is based on the transformer model (the T in ChatGPT), first described by a team at Google in 2017. Six of the eight researchers on that team were born outside the US, and the other two are children of immigrants. 

This isn’t an exception. Immigrants have been central to American leadership in AI. Of the 42 American companies included in the 2025 Forbes ranking of the 50 top AI startups, 60% have at least one immigrant cofounder, according to an analysis by the Institute for Progress. Immigrants also cofounded or head the companies at the center of the AI ecosystem: OpenAI, Anthropic, Google, Microsoft, Nvidia, Intel, and AMD.

The term “brain drain” was coined to describe scientists leaving other countries for the US after World War II—to America’s benefit. Sadly, the trend has begun reversing this year. Recent studies suggest that the US is already losing its AI talent edge through the administration’s anti-immigration actions (including actions taken against AI researchers) and cuts to R&D funding.

Banning noncompetes

Attracting talented minds is only half the equation; giving them freedom to innovate is just as crucial.

Silicon Valley got its name because of mid-20th-century companies that made semiconductors from silicon, starting with the founding of Shockley Semiconductor in 1955. Two years later, a group of employees, the “Traitorous Eight,” quit to launch a competitor, Fairchild Semiconductor. By the end of the 1960s, successive groups of former Fairchild employees had left to start Intel, AMD, and others collectively dubbed the “Fairchildren.” 

Software and internet companies eventually followed, again founded by people who had worked for their predecessors. Former Yahoo employees founded WhatsApp, Slack, and Cloudera; the “PayPal Mafia” created LinkedIn, YouTube, and fintech firms like Affirm. Former Google employees have launched more than 1,200 companies, including Instagram and Foursquare.

AI is no different. OpenAI’s founders worked at other tech companies, and its alumni have gone on to launch over a dozen AI startups, including notable ones like Anthropic and Perplexity.

This labor fluidity and the innovation it has created were possible in large part, according to many historians, because California’s 1872 constitution has been interpreted to prohibit noncompete agreements in employment contracts—a statewide protection the state originally shared only with North Dakota and Oklahoma. These agreements bind one in five American workers.

Last year, the Federal Trade Commission under President Biden moved to ban noncompetes nationwide, but a Trump-appointed federal judge has halted the action. The current FTC has signaled limited support for the ban and may be comfortable dropping it. If noncompetes persist, American AI innovation, especially outside California, will be limited.

Pursuing antitrust actions

One of this week’s announcements requires the review of FTC investigations and settlements that “burden AI innovation.” During the last administration the agency was reportedly investigating Microsoft’s AI practices, and several big tech companies operate under settlements that their lawyers surely see as burdensome, so this one action could thwart recent progress in antitrust policy. That matters because, alongside the labor fluidity achieved by banning noncompetes, antitrust enforcement has long been a key lubricant for the gears of Silicon Valley innovation. 

Major antitrust cases in the second half of the 20th century, against AT&T, IBM, and Microsoft, allowed innovation and a flourishing market for semiconductors, software, and internet companies, as the antitrust scholar Giovanna Massarotto has described.

William Shockley was able to start the first semiconductor company in Silicon Valley only because AT&T had been forced to license its patent on the transistor as part of a consent decree resolving a DOJ antitrust lawsuit against the company in the 1950s. 

The early software market then took off because in the late 1960s, IBM unbundled its software and hardware offerings as a response to antitrust pressure from the federal government. As Massarotto explains, the 1950s AT&T consent decree also aided the flourishing of open-source software, which plays a major role in today’s technology ecosystem, including the operating systems for mobile phones and cloud computing servers.

Meanwhile, many attribute the success of early 2000s internet companies like Google to the competitive breathing room created by the federal government’s antitrust lawsuit against Microsoft in the 1990s. 

Over and over, antitrust actions targeting the dominant actors of one era enabled the formation of the next. And today, big tech is stifling the AI market. While antitrust advocates were rightly optimistic about this administration’s posture given key appointments early on, this week’s announcements should dampen that excitement. 

I don’t want to lose sight of what matters: we should want a future in which lives are improved by the positive uses of AI. 

But if America wants to continue leading the world in this technology, we must invest in what made us leaders in the first place: bold public research, open doors for global talent, and fair competition. 

Prioritizing short-term industry profits over these bedrock principles won’t just put our technological future at risk—it will jeopardize America’s role as the world’s innovation superpower. 

Asad Ramzanali is the director of artificial intelligence and technology policy at the Vanderbilt Policy Accelerator. He previously served as the chief of staff and deputy director of strategy of the White House Office of Science and Technology Policy under President Biden.

Why the US and Europe could lose the race for fusion energy

Fusion energy holds the potential to shift a geopolitical landscape that is currently configured around fossil fuels. Harnessing fusion will deliver the energy resilience, security, and abundance needed for all modern industrial and service sectors. But these benefits will be controlled by the nation that leads in both developing the complex supply chains required and building fusion power plants at scales large enough to drive down economic costs.

The US and other Western countries will have to build strong supply chains across a range of technologies in addition to creating the fundamental technology behind practical fusion power plants. Investing in supply chains and scaling up complex production processes has increasingly been a strength of China’s and a weakness of the West, resulting in the migration of many critical industries from the West to China. With fusion, we run the risk that history will repeat itself. But it does not have to go that way.

The US and Europe were the dominant public funders of fusion energy research and are home to many of the world’s pioneering private fusion efforts. The West has consequently developed many of the basic technologies that will make fusion power work. But in the past five years China’s support of fusion energy has surged, threatening to allow the country to dominate the industry.

The industrial base available to support China’s nascent fusion energy industry could enable it to climb the learning curve much faster and more effectively than the West. Commercialization requires know-how, capabilities, and complementary assets, including supply chains and workforces in adjacent industries. And especially in comparison with China, the US and Europe have significantly under-supported the industrial assets needed for a fusion industry, such as thin-film processing and power electronics.

To compete, the US, allies, and partners must invest more heavily not only in fusion itself—which is already happening—but also in those adjacent technologies that are critical to the fusion industrial base. 

China’s trajectory to dominating fusion and the West’s potential route to competing can be understood by looking at today’s most promising scientific and engineering pathway to achieve grid-relevant fusion energy. That pathway relies on the tokamak, a technology that uses a magnetic field to confine ionized gas—called plasma—and ultimately fuse nuclei. This process releases energy that is converted from heat to electricity. Tokamaks consist of several critical systems, including plasma confinement and heating, fuel production and processing, blankets and heat flux management, and power conversion.

A close look at the adjacent industries needed to build these critical systems clearly shows China’s advantage while also providing a glimpse into the challenges of building a fusion industrial base in the US or Europe. China has leadership in three of these six key industries, and the West is at risk of losing leadership in two more. China’s industrial might in thin-film processing, large metal-alloy structures, and power electronics provides a strong foundation to establish the upstream supply chain for fusion.

The importance of thin-film processing is evident in the plasma confinement system. Tokamaks use strong electromagnets to keep the fusion plasma in place, and the magnetic coils must be made from superconducting materials. Rare-earth barium copper oxide (REBCO) superconductors are the highest-performing materials available in sufficient quantity to be viable for use in fusion.

The REBCO industry, which relies on thin-film processing technologies, currently has low production volumes spanning globally distributed manufacturers. However, as the fusion industry grows, the manufacturing base for REBCO will likely consolidate among the industry players who are able to rapidly take advantage of economies of scale. China is today’s world leader in thin-film, high-volume manufacturing for solar panels and flat-panel displays, with the associated expert workforce, tooling sector, infrastructure, and upstream materials supply chain. Without significant attention and investment on the part of the West, China is well positioned to dominate REBCO thin-film processing for fusion magnets.

The electromagnets in a full-scale tokamak are as tall as a three-story building. Structures made using strong metal alloys are needed to hold these electromagnets around the large vacuum vessel that physically contains the magnetically confined plasma. Similar large-scale, complex metal structures are required for shipbuilding, aerospace, oil and gas infrastructure, and turbines. But fusion plants will require new versions of the alloys that are radiation-tolerant, able to withstand cryogenic temperatures, and corrosion-resistant. China’s manufacturing capacity and its metallurgical research efforts position it well to outcompete other global suppliers in making the necessary specialty metal alloys and machining them into the complex structures needed for fusion.

A tokamak also requires large-scale power electronics. Here again China dominates. Similar systems are found in the high-speed rail (HSR) industry, renewable microgrids, and arc furnaces. As of 2024, China had deployed over 48,000 kilometers of HSR. That is three times the length of Europe’s HSR network and 55 times that of the US Acela corridor, which does not even reach true HSR speeds. While other nations have a presence, China’s expertise is more recent and is being applied on a larger scale.

But this is not the end of the story. The West still has an opportunity to lead the other three adjacent industries important to the fusion supply chain: cryo-plants, fuel processing, and blankets. 

The electromagnets in an operational tokamak need to be kept at cryogenic temperatures of around 20 kelvin to remain superconducting. This requires large-scale, multi-megawatt cryogenic cooling plants. Here, the country best set up to lead the industry is less clear. The two major global suppliers of cryo-plants are Europe-based Linde Engineering and Air Liquide Engineering; the US has Air Products and Chemicals and Chart Industries. But they are not alone: China’s domestic champions in the cryogenic sector include Hangyang Group, SASPG, Kaifeng Air Separation, and SOPC. Each of these regions already has an industrial base that could scale up to meet the demands of fusion.

Fuel production for fusion is a nascent part of the industrial base requiring processing technologies for light-isotope gases—hydrogen, deuterium, and tritium. Some processing of light-isotope gases is already done at small scale in medicine, hydrogen weapons production, and scientific research in the US, Europe, and China. But the scale needed for the fusion industry does not exist in today’s industrial base, presenting a major opportunity to develop the needed capabilities.

Similarly, blankets and heat flux management are an opportunity for the West. The blanket is the medium used to absorb energy from the fusion reaction and to breed tritium. Commercial-scale blankets will require entirely novel technology. To date, no adjacent industries have relevant commercial expertise in liquid lithium, lead-lithium eutectic, or fusion-specific molten salts that are required for blanket technology. Some overlapping blanket technologies are in early-stage development by the nuclear fission industry. As the largest producer of beryllium in the world, the US has an opportunity to capture leadership because that element is a key material in leading fusion blanket concepts. But the use of beryllium must be coupled with technology development programs for the other specialty blanket components.

These six industries will prove critical to scaling fusion energy. In some, such as thin-film processing and large metal-alloy structures, China already has a sizable advantage. Crucially, China recognizes the importance of these adjacent industries and is actively harnessing them in its fusion efforts. For example, China launched a fusion consortium that consists of industrial giants spanning the steel, machine tooling, electric grid, power generation, and aerospace sectors. It will be extremely difficult for the West to catch up in these areas, but policymakers and business leaders must pay attention and try to create robust alternative supply chains.

Cryo-plants, the West’s industrial area of greatest strength, could continue to be an opportunity for leadership. Bolstering Western cryo-plant production by creating demand for natural-gas liquefaction will be a major boon to the future cryo-plant supply chain that will support fusion energy.

The US and European countries also have an opportunity to lead in the emerging industrial areas of fuel processing and blanket technologies. Doing so will require policymakers to work with companies to ensure that public and private funding is allocated to these critical emerging supply chains. Governments may well need to serve as early customers and provide debt financing for significant capital investment. Governments can also do better to incentivize private capital and equity financing—for example, through favorable capital-gains taxation. In lagging areas of thin-film and alloy production, the US and Europe will likely need partners, such as South Korea and Japan, that have the industrial bases to compete globally with China.

The need to connect and capitalize multiple industries and supply chains will require long-term thinking and clear leadership. A focus on the demand side of these complementary industries is essential. Fusion is a decade away from maturation, so its supplier base must be derisked and made profitable in the near term by focusing on other primary demand markets that contribute to our economic vitality. To name a few, policymakers can support modernization of the grid to bolster domestic demand for power electronics and domestic semiconductor manufacturing to support thin-film processing.

The West must also focus on the demand for energy production itself. As the world’s largest energy consumer, China will leverage demand from its massive domestic market to climb the learning curve and bolster national champions. This is a strategy that China has wielded with tremendous success to dominate global manufacturing, most recently in the electric-vehicle industry. Taken together, supply- and demand-side investment have been a winning strategy for China.

The competition to lead the future of fusion energy is here. Now is the moment for the US and its Western allies to start investing in the foundational innovation ecosystem needed for a vibrant and resilient industrial base to support it.

Daniel F. Brunner is a co-founder of Commonwealth Fusion Systems and a Partner at Future Tech Partners.

Edlyn V. Levine is the co-founder of a stealth-mode technology startup and an affiliate of the MIT Sloan School of Management.

Fiona E. Murray is a professor of entrepreneurship at the MIT Sloan School of Management and Vice Chair of the NATO Innovation Fund.

Rory Burke is a graduate of MIT Sloan and a former summer scholar with ARPA-E.

The latest threat from the rise of Chinese manufacturing

The findings a decade ago were, well, shocking. Mainstream economists had long argued that free trade was overall a good thing; though there might be some winners and losers, it would generally bring lower prices and widespread prosperity. Then, in 2013, a trio of academic researchers showed convincing evidence that increased trade with China beginning in the early 2000s and the resulting flood of cheap imports had been an unmitigated disaster for many US communities, destroying their manufacturing lifeblood.

The results of what in 2016 they called the “China shock” were gut-wrenching: the loss of 1 million US manufacturing jobs and 2.4 million jobs in total by 2011. Worse, these losses were heavily concentrated in what the economists called “trade-exposed” towns and cities (think furniture makers in North Carolina).

If in retrospect all that seems obvious, it’s only because the research by David Autor, an MIT labor economist, and his colleagues has become an accepted, albeit often distorted, political narrative these days: China destroyed all our manufacturing jobs! Though the nuances of the research are often ignored, the results help explain at least some of today’s political unrest. It’s reflected in rising calls for US protectionism, President Trump’s broad tariffs on imported goods, and nostalgia for the lost days of domestic manufacturing glory.

The impacts of the original China shock still scar much of the country. But Autor is now concerned about what he considers a far more urgent problem—what some are calling China shock 2.0. The US, he warns, is in danger of losing the next great manufacturing battle, this time over advanced technologies to make cars and planes as well as those enabling AI, quantum computing, and fusion energy.

Recently, I asked Autor about the lingering impacts of the China shock and the lessons it holds for today’s manufacturing challenges.

How are the impacts of the China shock still playing out?

I have a recent paper looking at 20 years of data, from 2000 to 2019. We tried to ask two related questions. One, if you look at the places that were most exposed, how have they adjusted? And then if you look at the people who were most exposed, how have they adjusted? And how do those two things relate to one another?

It turns out you get two very different answers. If you look at places that were most exposed, they have been substantially transformed. Manufacturing, once it starts going down, never comes back. But after 2010, these trade-impacted local labor markets staged something of an employment recovery, such that employment has grown faster after 2010 in trade-exposed places than non-trade-exposed places because a lot of people have come in. But these are jobs mostly in low-wage sectors. They’re in K–12 education and non-traded health services. They’re in warehousing and logistics. They’re in hospitality and lodging and recreation, and so they’re lower-wage, non-manufacturing jobs. And they’re done by a really different set of people.

The growth in employment is among women, among native-born Hispanics, among foreign-born adults, and among a lot of young people. The recovery is staged by a very different group from the white and Black men, but especially white men, who were most represented in manufacturing. They have not really participated in this renaissance.

Employment is growing, but are these areas prospering?

They have a lower wage structure: fewer high-wage jobs, more low-wage jobs. So they’re not, if your definition of prospering is rapidly rising incomes. But there’s a lot of employment growth. They’re not like ghost towns. But then if you look at the people who were most concentrated in manufacturing—mostly white, non-college, native-born men—they have not prospered. Most of them have not transitioned from manufacturing to non-manufacturing.

One of the great surprises is everyone had believed that people would pull up stakes and move on. In fact, we find the opposite. People in the most adversely exposed places become less likely to leave. They have become less mobile. The presumption was that they would just relocate to find higher ground. And that is not at all what occurred.

What happened to the total number of manufacturing jobs?

There’s been no rebound. Once they go, they just keep going. If there is going to be new manufacturing, it won’t be in the sectors that were lost to China. Those were basically labor-intensive jobs, the kind of low-tech sectors that we will not be getting back. You know—commodity furniture and assembly of things, shoes, construction material. The US wasn’t going to keep them forever, and once they’re gone, it’s very unlikely to get them back.

I know you’ve written about this, but it’s not hard to draw a connection between the dynamics you’re describing—white-male manufacturing jobs going away and new jobs going to immigrants—and today’s political turmoil.

We have a paper about that called “Importing Political Polarization?”

How big a factor would you say it is in today’s political unrest?

I don’t want to say it’s the factor. The China trade shock was a catalyst, but there were lots of other things that were happening. It would be a vast oversimplification to say that it was the sole cause.

But most people don’t work in manufacturing anymore. Aren’t these impacts that you’re talking about, including the political unrest, disproportionate to the actual number of jobs lost?

These are jobs in places where manufacturing is the anchor activity. Manufacturing is very unevenly distributed. It’s not like grocery stores and hospitals that you find in every county. The impact of the China trade shock on these places was like dropping an economic bomb in the middle of downtown. If the China trade shock cost us a few million jobs, and these were all—you know—people in groceries and retail and gas stations, in hospitality and in trucking, you wouldn’t really notice it that much. We lost lots of clerical workers over the last couple of decades. Nobody talks about a clerical shock. Why not? Well, there was never a clerical capital of America. Clerical workers are everywhere. If they decline, it doesn’t wipe out the entire basis of a place.

So it goes beyond the jobs. These places lost their identity.

Maybe. But it’s also the jobs. Manufacturing offered relatively high pay to non-college workers, especially non-college men. It was an anchor of a way of life.

And we’re still seeing the damage.

Yeah, absolutely. It’s been 20 years. What’s amazing is the degree of stasis among the people who are most exposed—not the places, but the people. Though it’s been 20 years, we’re still feeling the pain and the political impacts from this transition.

Clearly, it has now entered the national psyche. Even if it weren’t true, everyone now believes it to have been a really big deal, and they’re responding to it. It continues to drive policy, political resentments, maybe even out of proportion to its economic significance. It certainly has become mythological.

What worries you now?

We’re in the midst of a totally different competition with China now that’s much, much more important. Now we’re not talking about commodity furniture and tube socks. We’re talking about semiconductors and drones and aviation, electric vehicles, shipping, fusion power, quantum, AI, robotics. These are the sectors where the US still maintains competitiveness, but they’re extremely threatened. China’s capacity for high-tech, low-cost, incredibly fast, innovative manufacturing is just unbelievable. And the Trump administration is basically fighting the war of 20 years ago. The loss of those jobs, you know, was devastating to those places. It was not devastating to the US economy as a whole. If we lose Boeing, GM, and Apple and Intel—and that’s quite possible—then that will be economically devastating.

I think some people are calling it China shock 2.0.

Yeah. And it’s well underway.

When we think about advanced manufacturing and why it’s important, it’s not so much about the number of jobs anymore, is it? Is it more about coming up with the next technologies?

It does create good jobs, but it’s about economic leadership. It’s about innovation. It’s about political leadership, and even standard setting for how the rest of the world works.

Should we just accept that manufacturing as a big source of jobs is in the past and move on?

No. It’s still 12 million jobs, right? Instead of the fantasy that we’re going to go back to 18 million or whatever—we had, what, 17.7 million manufacturing jobs in 1999—we should be worried about the fact that we’re going to end up at 6 million, that we’re going to lose 50% in the next decade. And that’s quite possible. And the Trump administration is doing a lot to help that process of loss along.

We have a labor market of over 160 million people, so it’s like 8% of employment. It’s not zero. So you should not think of it as too small to worry about it. It’s a lot of people; it’s a lot of jobs. But more important, it’s a lot of what has helped this country be a leader. So much innovation happens here, and so many of the things in which other countries are now innovating started here. It’s always been the case that the US tends to innovate in sectors and then lose them after a while and move on to the next thing. But at this point, it’s not clear that we’ll be in the frontier of a lot of these sectors for much longer.

So we want to revive manufacturing, but the right kind—advanced manufacturing?

The notion that we should be assembling iPhones in the United States, which Trump wants, is insane. Nobody wants to do that work. It’s horrible, tedious work. It pays very, very little. And if we actually did it here, it would make iPhones at least 20% more expensive. Apple may very well decide to pay a 25% tariff rather than make the phones here. If Foxconn started doing iPhone assembly here, people would not be lining up for that job.

But at the same time, we do need new people coming into manufacturing.

But not that manufacturing. Not tedious, mind-numbing, eyestrain-inducing assembly.

We need them to do high-tech work. Manufacturing is a skilled activity. We need to build airplanes better. That takes a ton of expertise. Assembling iPhones does not.

What are your top priorities to head off China shock 2.0?

I would choose sectors that are important, and I would invest in them. I don’t think that tariffs are never justified, or industrial policies are never justified. I just don’t think protecting phone assembly is smart industrial policy. We really need to improve our ability to make semiconductors. I think that’s important. We need to remain competitive in the automobile sector—that’s important. We need to improve aviation and drones. That’s important. We need to invest in fusion power. That’s important. We need to adopt robotics at scale and improve in that sector. That’s important. I could come up with 15 things where I think public money is justified, and I would be willing to tolerate protections for those sectors.

What are the lasting lessons of the China shock and the opening up of global trade in the 2000s?

We did it too fast. We didn’t do enough to support people, and we pretended it wasn’t going on.

When we started the China shock research back around 2011, we really didn’t know what we’d find, and so we were as surprised as anyone. But the work has changed our own way of thinking and, I think, has been constructive—not because it has caused everyone to do the right thing, but it at least caused people to start asking the right questions.

What do the findings tell us about China shock 2.0?

I think the US is handling that challenge badly. The problem is much more serious this time around, and we have a sense of what the threats are. Yet that knowledge doesn’t seem to be generating serious policy responses. We’re generating a lot of policy responses—they’re just not serious ones.

Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent-to-Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: Intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.

In enterprise settings, this kind of mistake could do immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.
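The shape of such a system can be sketched in a few lines. The following is a hypothetical illustration, not AI21’s or any vendor’s actual architecture: a support agent whose policy claims must be grounded in an authoritative policy store before they ever reach a customer.

```python
# Hypothetical sketch: wrapping a probabilistic model in a guardrail
# layer so ungrounded policy claims (like the invented Cursor policy)
# never reach the customer. All names and rules here are illustrative.
from dataclasses import dataclass

@dataclass
class SupportReply:
    text: str
    approved: bool
    reason: str = ""

# Authoritative policy store: the source of truth, not the model.
OFFICIAL_POLICIES = {
    "device_limit": "You may use the software on multiple devices.",
}

def model_draft(question: str) -> str:
    """Stand-in for an LLM call; real models can hallucinate policy."""
    return "Per our policy, you can only use the software on one device."

def answer(question: str) -> SupportReply:
    draft = model_draft(question)
    # Guardrail: any draft that asserts a policy must match the
    # official store, or it gets escalated instead of shipped.
    makes_policy_claim = "policy" in draft.lower() or "only" in draft.lower()
    grounded = any(p.lower() in draft.lower() for p in OFFICIAL_POLICIES.values())
    if makes_policy_claim and not grounded:
        return SupportReply("Let me connect you with a human agent.",
                            approved=False, reason="ungrounded policy claim")
    return SupportReply(draft, approved=True)
```

A production system would replace the string heuristics with classifiers and retrieval against a policy database, but the shape is the same: the model proposes, and a deterministic layer disposes.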

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.

In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.

There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel planning agent is requesting price quotes from your airline booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.
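One classic game-theoretic fix for exactly this quoting problem is a second-price (Vickrey) mechanism, sketched below with invented agent names and prices: because the winning agent is paid the runner-up’s price, shading quotes away from true cost stops being profitable, so truthful quoting becomes the dominant strategy.

```python
# Sketch of a second-price procurement auction among quoting agents.
# Paying the winner the second-lowest quote makes truthful quoting a
# dominant strategy, one way to align incentives between agents that
# do not trust each other. Agent names and prices are illustrative.

def second_price_award(quotes: dict[str, float]) -> tuple[str, float]:
    """Lowest quote wins; the winner is paid the second-lowest quote."""
    if len(quotes) < 2:
        raise ValueError("need at least two competing quotes")
    ranked = sorted(quotes.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    price_paid = ranked[1][1]
    return winner, price_paid

quotes = {"airline_a_agent": 420.0, "airline_b_agent": 455.0, "airline_c_agent": 510.0}
print(second_price_award(quotes))  # ('airline_a_agent', 455.0)
```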

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it.

That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.

The Bank Secrecy Act is failing everyone. It’s time to rethink financial surveillance.

The US is on the brink of enacting rules for digital assets, with growing bipartisan momentum to modernize our financial system. But amid all the talk about innovation and global competitiveness, one issue has been glaringly absent: financial privacy. As we build the digital infrastructure of the 21st century, we need to talk about not just what’s possible but what’s acceptable. That means confronting the expanding surveillance powers quietly embedded in our financial system, which today can track nearly every transaction without a warrant.

Many Americans may associate financial surveillance with authoritarian regimes. Yet because of a Nixon-era law called the Bank Secrecy Act (BSA) and the digitization of finance over the past half-century, financial privacy is under increasingly serious threat here at home. Most Americans don’t realize they live under an expansive surveillance regime that likely violates their constitutional rights. Every purchase, deposit, and transaction, from the smallest Venmo payment for a coffee to a large hospital bill, creates a data point in a system that watches you—even if you’ve done nothing wrong.

As a former federal prosecutor, I care deeply about giving law enforcement the tools it needs to keep us safe. But the status quo doesn’t make us safer. It creates a false sense of security while quietly and permanently eroding the constitutional rights of millions of Americans.

When Congress enacted the BSA in 1970, cash was king and organized crime was the target. The law created a scheme whereby, ever since, banks have been required to keep certain records on their customers and turn them over to law enforcement upon request. Unlike a search warrant, which must be issued by a judge or magistrate upon a showing of probable cause that a crime was committed and that specific evidence of that crime exists in the place to be searched, this power is exercised with no checks or balances. A prosecutor can “cut a subpoena”—demanding all your bank records for the past 10 years—with no judicial oversight or limitation on scope, and at no cost to the government. The burden falls entirely on the bank. In contrast, a proper search warrant must be narrowly tailored, with probable cause and judicial authorization.

In United States v. Miller (1976), the Supreme Court upheld the BSA, reasoning that citizens have no “legitimate expectation of privacy” about information shared with third parties, like banks. Thus began the third-party doctrine, enabling law enforcement to access financial records without a warrant. The BSA has been amended several times over the years (most notoriously in 2001 as a part of the Patriot Act), imposing an ever-growing list of recordkeeping obligations on an ever-growing list of financial institutions. Today, it is virtually inescapable for everyday Americans.

In the 1970s, when the BSA was enacted, banking and noncash payments were conducted predominantly through physical means: writing checks, visiting bank branches, and using passbooks. For cash transactions, the BSA required reporting of transactions over the kingly sum of $10,000, a figure that was not pegged to inflation and remains the same today. And given the nature of banking services and the technology available at the time, individuals conducted just a handful of noncash payments per month. Today, consumers make at least one payment or banking transaction a day, and just an estimated 16% of those are in cash.
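The drift in that un-indexed threshold is easy to quantify. Using approximate annual CPI-U averages (roughly 38.8 in 1970 and 313 in 2024; exact figures vary by month and source), the arithmetic looks like this:

```python
# Rough inflation adjustment of the BSA's $10,000 reporting threshold.
# CPI values are approximate annual averages, assumed for illustration.
CPI_1970 = 38.8
CPI_2024 = 313.0

threshold_1970 = 10_000
equivalent_today = threshold_1970 * CPI_2024 / CPI_1970
print(f"${equivalent_today:,.0f}")  # on the order of $80,000 today
```

In other words, a threshold designed to flag large, unusual cash movements in 1970 now sweeps in transactions roughly one-eighth of that original size in real terms.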

Meanwhile, emerging technologies further expand the footprint of financial data. Add to this the massive pools of personal information already collected by technology platforms—location history, search activity, communications metadata—and you create a world where financial surveillance can be linked to virtually every aspect of your identity, movement, and behavior.

Nor does the BSA actually appear to be effective at achieving its aims. In fiscal year 2024, financial institutions filed about 4.7 million Suspicious Activity Reports (SARs) and over 20 million currency transaction reports. Instead of stopping major crime, the system floods law enforcement with low-value information, overwhelming agents and obscuring real threats. Mass surveillance often reduces effectiveness by drowning law enforcement in noise. And while it fails to stop sophisticated criminals, the BSA creates a permanent trove of information on everyone.

Worse still, the incentives are misaligned and asymmetrical. To avoid liability, financial institutions are required to report anything remotely suspicious. If they fail to file a SAR, they risk serious penalties—even indictment. But they face no consequences for overreporting. The vast overcollection of data is the unsurprising result. These practices, developed under regulation with few meaningful guardrails, have effectively allowed executive branch actors to outsource surveillance duties to private institutions.

But courts have recognized that constitutional privacy must evolve alongside technology. In 2012, the Supreme Court ruled in United States v. Jones that attaching a GPS tracker to a vehicle for prolonged surveillance constituted a search under the Fourth Amendment. Justice Sonia Sotomayor, in a notable concurrence, argued that the third-party doctrine was ill suited to an era when individuals “reveal a great deal of information about themselves to third parties” merely by participating in daily life.

This legal evolution continued in 2018, when the Supreme Court held in Carpenter v. United States that accessing historical cell-phone location records held by a third party required a warrant, recognizing that “seismic shifts in digital technology” necessitate stronger protections and warning that “the fact that such information is gathered by a third party does not make it any less deserving of Fourth Amendment protection.”

The logic of Carpenter applies directly to the mass of financial records being collected today. Just as tracking a person’s phone over time reveals the “whole of their physical movements,” tracking a person’s financial life exposes travel, daily patterns, medical treatments, political affiliations, and personal associations. In many ways, because of the velocity and granularity of today’s digital payments, financial data is among the most personal and revealing data there is—and therefore deserves the highest level of constitutional protection.

Though Miller remains formally intact, the writing is on the wall: Indiscriminate financial surveillance such as what we have today is fundamentally at odds with the Fourth Amendment in the digital age.

Technological innovations over the past several decades have brought incredible convenience to economic life. Now our privacy standards must catch up. With Congress considering landmark legislation on digital assets, it’s an important moment to consider what kind of financial system we want—not just in terms of efficiency and access, but in terms of freedom. Rather than striking down the BSA in its entirety, policymakers should narrow its reach, particularly around the bulk collection and warrantless sharing of Americans’ financial data.

Financial surveillance shouldn’t be the price of participation in modern life. The systems we build now will shape what freedom looks like for the next century. It’s time to treat financial privacy like what it is: a cornerstone of democracy, and a right worth fighting for.

Katie Haun is the CEO and founder of Haun Ventures, a venture capital firm focused on frontier technologies. She is a former federal prosecutor who created the US Justice Department’s first cryptocurrency task force. She led investigations into the Mt. Gox hack and the corrupt agents on the Silk Road task force. She clerked for US Supreme Court Justice Anthony Kennedy and is an honors graduate of Stanford Law School.

Why AI hardware needs to be open

When OpenAI acquired Io to create “the coolest piece of tech that the world will have ever seen,” it confirmed what industry experts have long been saying: Hardware is the new frontier for AI. AI will no longer just be an abstract thing in the cloud far away. It’s coming for our homes, our rooms, our beds, our bodies. 

That should worry us.

Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been linked to many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space.

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, and wearables that will track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence?

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone.

By definition, the maker movement is humble and it is consistent. Makers do not believe in the cult of individual genius; we believe in collective genius. We believe that creativity is universally distributed (not exclusively bestowed), that inventing is better together, and that we should make open products so people can observe, learn, and create—basically, the polar opposite of what Jony Ive and Sam Altman are building.

But over time, the momentum faded. The movement was dismissed by the tech and investment industry as niche and hobbyist, and starting in 2018, pressures on the hardware venture market (followed by covid) made people retreat from social spaces to spend more time behind screens. 

Now it’s mounting a powerful second act, joined by a wave of AI open-source enthusiasts. This time around the stakes are higher, and we need to give it the support it never had.

In 2024 the AI leader Hugging Face developed an open-source platform for AI robots, which already hosts more than 3,500 robot data sets and draws thousands of participants from every continent to giant hackathons. Raspberry Pi went public on the London Stock Exchange at a valuation of about $700 million. After a hiatus, Maker Faire came back; the most recent one had nearly 30,000 attendees, with kinetic sculptures, flaming octopuses, and DIY robot bands, and this year there will be over 100 Maker Faires around the world. Just last week, DIY.org relaunched its app. In March, my friend Roya Mahboob, founder of the Afghan Girls Robotics Team, released a movie about the team to incredible reviews. People love the idea that making is the ultimate form of human empowerment and expression. All the while, a core set of people have continued influencing millions through maker organizations like FabLabs and Adafruit.

Studies show that hands-on creativity reduces anxiety, combats loneliness, and boosts cognitive function. The act of making grounds us, connects us to others, and reminds us that we are capable of shaping the world with our own hands. 

I’m not proposing that we reject AI hardware but that we reject the idea that innovation must be proprietary, elite, and closed. I’m proposing that we fund and build the open alternative. That means putting our investment, time, and purchases toward robots built in community labs, AI models trained in the open, and tools made transparent and hackable. That world isn’t just more inclusive—it’s more innovative. It’s also more fun.

This is not nostalgia. This is about fighting for the kind of future we want: A future of openness and joy, not of conformity and consumption. One where technology invites participation, not passivity. Where children grow up not just knowing how to swipe, but how to build. Where creativity is a shared endeavor, not the mythical province of lone geniuses in glass towers.

In his Io announcement video, Altman said, “We are literally on the brink of a new generation of technology that can make us our better selves.” It reminded me of the movie Mountainhead, where four tech moguls tell themselves they are saving the world while the world is burning. I don’t think the iPhone made us our better selves. In fact, you’ve never seen me run faster than when I’m trying to snatch an iPhone out of my three-year-old’s hands.

So yes, I’m watching what Sam Altman and Jony Ive will unveil. But I’m far more excited by what’s happening in basements, in classrooms, and on workbenches. Because the real iPhone moment isn’t a new product we wait for. It’s the moment you realize you can build it yourself. And best of all? You can’t doomscroll when you’re holding a soldering iron.

Ayah Bdeir is a leader in the maker movement, a champion of open source AI, and founder of littleBits, the hardware platform that teaches STEAM to kids through hands-on invention. A graduate of the MIT Media Lab, she was selected as one of the BBC’s 100 Most Influential Women, and her inventions have been acquired by the Museum of Modern Art.

AI copyright anxiety will hold back creativity

Last fall, while attending a board meeting in Amsterdam, I had a few free hours and made an impromptu visit to the Van Gogh Museum. I often steal time for visits like this—a perk of global business travel for which I am grateful. Wandering the galleries, I found myself before The Courtesan (after Eisen), painted in 1887. Van Gogh had based it on a Japanese woodblock print by Keisai Eisen, which he encountered in the magazine Paris Illustré. He explicitly copied and reinterpreted Eisen’s composition, adding his own vivid border of frogs, cranes, and bamboo.

As I stood there, I imagined the painting as the product of a generative AI model prompted with the query How would van Gogh reinterpret a Japanese woodblock in the style of Keisai Eisen? And I wondered: If van Gogh had used such an AI tool to stimulate his imagination, would Eisen—or his heirs—have had a strong legal claim? If van Gogh were working today, that might well be the case. Two years ago, the US Supreme Court found that Andy Warhol had infringed upon the photographer Lynn Goldsmith’s copyright by using her photo of the musician Prince for a series of silkscreens. The court said the works were not sufficiently transformative to constitute fair use—a provision in the law that allows others to make limited use of copyrighted material.

A few months later, at the Museum of Fine Arts in Boston, I visited a Salvador Dalí exhibition. I had always thought of Dalí as a true original genius who conjured surreal visions out of thin air. But the show included several Dutch engravings, including Pieter Bruegel the Elder’s Seven Deadly Sins (1558), that clearly influenced Dalí’s 8 Mortal Sins Suite (1966). The stylistic differences are significant, but the lineage is undeniable. Dalí himself cited Bruegel as a surrealist forerunner, someone who tapped into the same dream logic and bizarre forms that Dalí celebrated. Suddenly, I was seeing Dalí not just as an original but also as a reinterpreter. Should Bruegel have been flattered that Dalí built on his work—or should he have sued him for making it so “grotesque”?

During a later visit to a Picasso exhibit in Milan, I came across a famous informational diagram by the art historian Alfred Barr, mapping how modernist movements like Cubism evolved from earlier artistic traditions. Picasso is often held up as one of modern art’s most original and influential figures, but Barr’s chart made plain the many artists he drew from—Goya, El Greco, Cézanne, African sculptors. This made me wonder: If a generative AI model had been fed all those inputs, might it have produced Cubism? Could it have generated the next great artistic “breakthrough”?

These experiences—spread across three cities and centered on three iconic artists—coalesced into a broader reflection I’d already begun. I had recently spoken with Daniel Ek, the founder of Spotify, about how restrictive copyright laws are in music. Song arrangements and lyrics enjoy longer protection than many pharmaceutical patents. Ek sits at the leading edge of this debate, and he observed that generative AI already produces an astonishing range of music. Some of it is good. Much of it is terrible. But nearly all of it borrows from the patterns and structures of existing work.

Musicians already routinely sue one another for borrowing from previous works. How will the law adapt to a form of artistry that’s driven by prompts and precedent, built entirely on a corpus of existing material?

And the questions don’t stop there. Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it? Will the social forces that shape artistic standing—critics, curators, tastemakers—still hold sway? Or will a new, AI-era hierarchy emerge? If every artist has always borrowed from others, is AI’s generative recombination really so different? And in such a litigious culture, how long can copyright law hold its current form? The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field. 

Different industries are responding in different ways. The Academy of Motion Picture Arts and Sciences recently announced that filmmakers’ use of generative AI would not disqualify them from Oscar contention—and that they wouldn’t be required to disclose when they’d used the technology. Several acclaimed films, including Oscar winner The Brutalist, incorporated AI into their production processes.

The music world, meanwhile, continues to wrestle with its definitions of originality. Consider the recent lawsuit against Ed Sheeran. In 2016, he was sued by the heirs of Ed Townsend, co-writer of Marvin Gaye’s “Let’s Get It On,” who claimed that Sheeran’s “Thinking Out Loud” copied the earlier song’s melody, harmony, and rhythm. When the case finally went to trial in 2023, Sheeran brought a guitar to the stand. He played the disputed four-chord progression—I–iii–IV–V—and wove together a mash-up of songs built on the same foundation. The point was clear: These are the elemental units of songwriting. After a brief deliberation, the jury found Sheeran not liable.
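Sheeran’s point, that the progression is an elemental unit rather than ownable expression, can even be derived mechanically. The sketch below builds the diatonic triads of a major key from the standard whole/half-step pattern and reads off I–iii–IV–V; D major is assumed here because “Thinking Out Loud” is commonly described as being in D.

```python
# Derive the diatonic triads of a major key, then read off I-iii-IV-V.
# Key of D is assumed for illustration.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]             # whole/half-step pattern
QUALITIES = ["", "m", "m", "", "", "m", "dim"]  # I ii iii IV V vi vii(dim)

def diatonic_chords(tonic: str) -> list[str]:
    idx = NOTES.index(tonic)
    scale = []
    for step in MAJOR_STEPS:
        scale.append(NOTES[idx % 12])
        idx += step
    return [note + quality for note, quality in zip(scale, QUALITIES)]

chords = diatonic_chords("D")
progression = [chords[degree - 1] for degree in (1, 3, 4, 5)]
print(progression)  # ['D', 'F#m', 'G', 'A']
```

Any songwriter working in D major gets the same four chords out of the same arithmetic, which is precisely why the jury declined to treat them as protectable.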

Reflecting after the trial, Sheeran said: “These chords are common building blocks … No one owns them or the way they’re played, in the same way no one owns the colour blue.”

Exactly. Whether it’s expressed with a guitar, a paintbrush, or a generative algorithm, creativity has always been built on what came before.

I don’t consider this essay to be great art. But I should be transparent: I relied extensively on ChatGPT while drafting it. I began with a rough outline, notes typed on my phone in museum galleries, and transcripts from conversations with colleagues. I uploaded older writing samples to give the model a sense of my voice. Then I used the tool to shape a draft, which I revised repeatedly—by hand and with help from an editor—over several weeks.

There may still be phrases or sentences in here that came directly from the model. But I’ve iterated so much that I no longer know which ones. Nor, I suspect, could any reader—or any AI detector. (In fact, Grammarly found that 0% of this text appeared to be AI-generated.)

Many people today remain uneasy about using these tools. They worry it’s cheating, or feel embarrassed to admit that they’ve sought such help. I’ve moved past that. I assume all my students at Harvard Business School are using AI. I assume most academic research begins with literature scanned and synthesized by these models. And I assume that many of the essays I now read in leading publications were shaped, at least in part, by generative tools.

Why? Because we are professionals. And professionals adopt efficiency tools early. Generative AI joins a long lineage that includes the word processor, the search engine, and editing tools like Grammarly. The question is no longer Who’s using AI? but Why wouldn’t you?

I recognize the counterargument, notably put forward by Nicholas Thompson, CEO of The Atlantic: that content produced with AI assistance should not be eligible for copyright protection, because it blurs the boundaries of authorship. I understand the instinct. AI recombines vast corpora of preexisting work, and the results can feel derivative or machine-like.

But when I reflect on the history of creativity—van Gogh reworking Eisen, Dalí channeling Bruegel, Sheeran defending common musical DNA—I’m reminded that recombination has always been central to creation. The economist Joseph Schumpeter famously wrote that innovation is less about invention than “the novel reassembly of existing ideas.” If we tried to trace and assign ownership to every prior influence, we’d grind creativity to a halt.

From the outset, I knew the tools had transformative potential. What I underestimated was how quickly they would become ubiquitous across industries and in my own daily work.

Our copyright system has never required total originality. It demands meaningful human input. That standard should apply in the age of AI as well. When people thoughtfully engage with these models—choosing prompts, curating inputs, shaping the results—they are creating. The medium has changed, but the impulse remains the same: to build something new from the materials we inherit.


Nitin Nohria is the George F. Baker Jr. Professor at Harvard Business School and its former dean. He is also the chair of Thrive Capital, an early investor in several prominent AI firms, including OpenAI.

MIT Technology Review’s editorial guidelines state that generative AI should not be used to draft articles unless the article is meant to illustrate the capabilities of such tools and its use is clearly disclosed. 

AI’s energy impact is still small—but how we handle it is huge

With seemingly no limit to the demand for artificial intelligence, everyone in the energy, AI, and climate fields is justifiably worried. Will there be enough clean electricity to power AI and enough water to cool the data centers that support this technology? These are important questions with serious implications for communities, the economy, and the environment. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


But the question of AI’s energy use points to even bigger issues about what we need to do to address climate change over the next several decades. If we can’t work out how to handle this, we won’t be able to handle the broader electrification of the economy, and the climate risks we face will increase.

Innovation in IT got us to this point. Graphics processing units (GPUs) that power the computing behind AI have fallen in cost by 99% since 2006. There was similar concern about the energy use of data centers in the early 2010s, with wild projections of growth in electricity demand. But gains in computing power and energy efficiency not only proved these projections wrong but enabled a 550% increase in global computing capability from 2010 to 2018 with only minimal increases in energy use. 

In the late 2010s, however, the trends that had saved us began to break. As the accuracy of AI models dramatically improved, the electricity needed for data centers also started increasing faster; they now account for 4.4% of total US electricity demand, up from 1.9% in 2018. Data centers consume more than 10% of the electricity supply in six US states. In Virginia, which has emerged as a hub of data center activity, that figure is 25%.

Projections about the future demand for energy to power AI are uncertain and range widely, but in one study, Lawrence Berkeley National Laboratory estimated that data centers could represent 6% to 12% of total US electricity use by 2028. Communities and companies will notice this kind of rapid growth in electricity demand, which will put pressure on energy prices and on ecosystems. The projections have prompted calls to build lots of new fossil-fired power plants or bring older ones out of retirement. In many parts of the US, the demand will likely result in a surge of new natural-gas-fired power plants.

It’s a daunting situation. Yet when we zoom out, the projected electricity use from AI is still pretty small. The US generated about 4,300 billion kilowatt-hours last year. We’ll likely need another 1,000 billion to 1,200 billion kilowatt-hours or more in the next decade—a 24% to 29% increase. Almost half of the additional electricity demand will come from electrified vehicles. Another 30% is expected to come from electrified technologies in buildings and industry. Innovation in vehicle and building electrification also advanced in the last decade, and this shift will be good news for the climate, for communities, and for energy costs.

The remaining 22% of new electricity demand is estimated to come from AI and data centers. While it represents a smaller piece of the pie, it’s the most urgent one. Because of their rapid growth and geographic concentration, data centers are the electrification challenge we face right now—the small stuff we have to figure out before we’re able to do the big stuff like vehicles and buildings.
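As a rough check, the sector shares above can be applied to the projected new demand. This is only a back-of-envelope sketch using the figures cited here; the “almost half” attributed to vehicles is taken as 48% purely for illustration.

```python
# Back-of-envelope breakdown of projected new US electricity demand,
# using figures cited in the article (values in billions of kWh, i.e. TWh).
baseline = 4300            # approximate US generation last year
new_demand = [1000, 1200]  # projected additional demand over the next decade

# Approximate shares of that new demand; "almost half" for vehicles
# is taken as 48% here purely for illustration.
shares = {
    "electrified vehicles": 0.48,
    "buildings and industry": 0.30,
    "AI and data centers": 0.22,
}

for total in new_demand:
    print(f"If new demand is {total} TWh:")
    for sector, share in shares.items():
        print(f"  {sector}: ~{share * total:.0f} TWh")
```

At the low and high ends, the AI and data center slice works out to roughly 220 to 264 billion kilowatt-hours of new annual demand.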

We also need to understand what the energy consumption and carbon emissions associated with AI are buying us. While the impacts from producing semiconductors and powering AI data centers are important, they are likely small compared with the positive or negative effects AI may have on applications such as the electricity grid, the transportation system, buildings and factories, or consumer behavior. Companies could use AI to develop new materials or batteries that would better integrate renewable energy into the grid. But they could also use AI to make it easier to find more fossil fuels. The claims about potential benefits for the climate are exciting, but they need to be continuously verified and will need support to be realized.

This isn’t the first time we’ve faced challenges coping with growth in electricity demand. In the 1960s, US electricity demand was growing at more than 7% per year. In the 1970s that growth was nearly 5%, and in the 1980s and 1990s it was more than 2% per year. Then, starting in 2005, we basically had a decade and a half of flat electricity growth. Most projections for the next decade put our expected growth in electricity demand at around 2% again—but this time we’ll have to do things differently. 

To manage these new energy demands, we need a “Grid New Deal” that leverages public and private capital to rebuild the electricity system for AI with enough capacity and intelligence for decarbonization. New clean energy supplies, investment in transmission and distribution, and strategies for virtual demand management can cut emissions, lower prices, and increase resilience. Data centers bringing clean electricity and distribution system upgrades could be given a fast lane to connect to the grid. Infrastructure banks could fund new transmission lines or pay to upgrade existing ones. Direct investment or tax incentives could encourage clean computing standards, workforce development in the clean energy sector, and open data transparency from data center operators about their energy use so that communities can understand and measure the impacts.

In 2022, the White House released a Blueprint for an AI Bill of Rights that provided principles to protect the public’s rights, opportunities, and access to critical resources from being restricted by AI systems. To the AI Bill of Rights, we humbly offer a climate amendment, because ethical AI must be climate-safe AI. It’s a starting point to ensure that the growth of AI works for everyone—that it doesn’t raise people’s energy bills, that it adds more clean power to the grid than it uses, that it increases investment in the power system’s infrastructure, and that it benefits communities while driving innovation.

By grounding the conversation about AI and energy in context about what is needed to tackle climate change, we can deliver better outcomes for communities, ecosystems, and the economy. The growth of electricity demand for AI and data centers is a test case for how society will respond to the demands and challenges of broader electrification. If we get this wrong, the likelihood of meeting our climate targets will be extremely low. This is what we mean when we say the energy and climate impacts from data centers are small, but they are also huge.

Costa Samaras is the Trustee Professor of Civil and Environmental Engineering and director of the Scott Institute for Energy Innovation at Carnegie Mellon University.

Emma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University.

Ramayya Krishnan is dean of the Heinz College of Information Systems and Public Policy and the William W. and Ruth F. Cooper Professor of Management Science and Information Systems at Carnegie Mellon University.

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” 

But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that make it harder for both legacy players and emerging companies to invest, grow, and compete.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. 

By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. 

But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies.

Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China.

China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition.

Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy.

The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investment in energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand up on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations.

Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength.

But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries.

But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum.

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.  

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal.

The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.

How a bankruptcy judge can stop a genetic privacy disaster

Stop me if you’ve heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind. 

The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people’s genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users’ data. But there’s still a small chance of writing a better ending for users.

After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That’s good advice for the individual user (and you can read how to do so here). But the reality is most people won’t do it. Maybe they won’t see the recommendations to do so. Maybe they don’t know why they should be worried. Maybe they have long since abandoned an account that they don’t even remember exists. Or maybe they’re just occupied with the chaos of everyday life. 

This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe’s meager revenue—fewer than 4% of people who took tests pay for subscriptions—it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data. 

This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry. Because genetic data is forever. Contact information can go stale over time: you can always change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data—whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you—will have it tomorrow and the next day and all the days after that. 

Users with exposed genetic data are not only vulnerable to harm today; they’re vulnerable to exploits that might be developed in the future. 

While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms. 

In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe’s user data, that mistake is likely permanent and irreparable. 

All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment’s notice. And the burden falls on users to keep track of it all, or just give up.

Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted. 

Bankruptcy proceedings involving personal data don’t have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.) 

The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The U.S. Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.

Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe’s failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn’t expect or want. 

An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe’s new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.

Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person’s genetic privacy just to add a few dollars in value to the bankruptcy estate?    
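The per-user arithmetic behind that figure is simple to check; this quick sketch uses the reported numbers from the article.

```python
# Implied per-user value of 23andMe's genetic data, based on the higher
# of the two reported failed buyout offers and ~15 million users on file.
offer = 74_700_000   # higher reported offer, in dollars
users = 15_000_000   # approximate number of users with genetic data

per_user = offer / users
print(f"~${per_user:.2f} per user")  # a little under $5
```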

Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Lawmakers allow them to. Federal and state inaction allows companies to dissolve promises about protecting Americans’ most sensitive data at a moment’s notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, 18 years later, that era may really be almost here. But with privacy laws like ours, who would trust it?

Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.