Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent2Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: Intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.

In enterprise settings, this kind of mistake could create immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy issues, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.

In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.
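The gap between syntax and semantics is easy to demonstrate. Below is a hedged sketch in Python; the capability labels, data shapes, and matching logic are hypothetical illustrations, not the actual A2A message format. Two agents can exchange perfectly well-formed capability descriptions and still fail to coordinate, because nothing in the exchange defines what the labels mean.

```python
# Hypothetical illustration of the semantic gap: agents advertise their
# capabilities as free-text labels, and a consumer can only match on the
# literal strings. This is NOT the real A2A protocol or its card format.

weather_agent = {
    "name": "weather-agent",
    "capabilities": ["wind conditions", "precipitation forecast"],
}

def find_provider(need, agents):
    """Naive matcher: compares capability strings literally."""
    for agent in agents:
        for capability in agent["capabilities"]:
            if need.lower() == capability.lower():
                return agent["name"]
    return None

# A travel agent looking for "weather along a flight route" never discovers
# that "wind conditions" is relevant, because the protocol defines how the
# labels are transmitted, not what they mean.
provider = find_provider("weather along a flight route", [weather_agent])
```

A real fix requires a shared vocabulary or ontology layered on top of the transport protocol, which is exactly the distributed-systems problem noted above.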

There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel planning agent is requesting price quotes from your airline booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it.

That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.

The Bank Secrecy Act is failing everyone. It’s time to rethink financial surveillance.

The US is on the brink of enacting rules for digital assets, with growing bipartisan momentum to modernize our financial system. But amid all the talk about innovation and global competitiveness, one issue has been glaringly absent: financial privacy. As we build the digital infrastructure of the 21st century, we need to talk about not just what’s possible but what’s acceptable. That means confronting the expanding surveillance powers quietly embedded in our financial system, which today can track nearly every transaction without a warrant.

Many Americans may associate financial surveillance with authoritarian regimes. Yet because of a Nixon-era law called the Bank Secrecy Act (BSA) and the digitization of finance over the past half-century, financial privacy is under increasingly serious threat here at home. Most Americans don’t realize they live under an expansive surveillance regime that likely violates their constitutional rights. Every purchase, deposit, and transaction, from the smallest Venmo payment for a coffee to a large hospital bill, creates a data point in a system that watches you—even if you’ve done nothing wrong.

As a former federal prosecutor, I care deeply about giving law enforcement the tools it needs to keep us safe. But the status quo doesn’t make us safer. It creates a false sense of security while quietly and permanently eroding the constitutional rights of millions of Americans.

When Congress enacted the BSA in 1970, cash was king and organized crime was the target. The law created a scheme whereby, ever since, banks have been required to keep certain records on their customers and turn them over to law enforcement upon request. Unlike a search warrant, which must be issued by a judge or magistrate upon a showing of probable cause that a crime was committed and that specific evidence of that crime exists in the place to be searched, this power is exercised with no checks or balances. A prosecutor can “cut a subpoena”—demanding all your bank records for the past 10 years—with no judicial oversight or limitation on scope, and at no cost to the government. The burden falls entirely on the bank. In contrast, a proper search warrant must be narrowly tailored, with probable cause and judicial authorization.

In United States v. Miller (1976), the Supreme Court upheld the BSA, reasoning that citizens have no “legitimate expectation of privacy” about information shared with third parties, like banks. Thus began the third-party doctrine, enabling law enforcement to access financial records without a warrant. The BSA has been amended several times over the years (most notoriously in 2001 as a part of the Patriot Act), imposing an ever-growing list of recordkeeping obligations on an ever-growing list of financial institutions. Today, it is virtually inescapable for everyday Americans.

In the 1970s, when the BSA was enacted, banking and noncash payments were conducted predominantly through physical means: writing checks, visiting bank branches, and using passbooks. For cash transactions, the BSA required reporting of transactions over the kingly sum of $10,000, a figure that was not pegged to inflation and remains the same today. And given the nature of banking services and the technology available at the time, individuals conducted just a handful of noncash payments per month. Today, consumers make at least one payment or banking transaction a day, and just an estimated 16% of those are in cash.

Meanwhile, emerging technologies further expand the footprint of financial data. Add to this the massive pools of personal information already collected by technology platforms—location history, search activity, communications metadata—and you create a world where financial surveillance can be linked to virtually every aspect of your identity, movement, and behavior.

Nor does the BSA actually appear to be effective at achieving its aims. In fiscal year 2024, financial institutions filed about 4.7 million Suspicious Activity Reports (SARs) and over 20 million currency transaction reports. Instead of stopping major crime, the system floods law enforcement with low-value information, overwhelming agents and obscuring real threats. Mass surveillance often reduces effectiveness by drowning law enforcement in noise. But while it doesn’t stop hackers, the BSA creates a permanent trove of information on everyone.

Worse still, the incentives are misaligned and asymmetrical. To avoid liability, financial institutions are required to report anything remotely suspicious. If they fail to file a SAR, they risk serious penalties—even indictment. But they face no consequences for overreporting. The vast overcollection of data is the unsurprising result. These practices, developed under regulations, have effectively allowed executive branch actors to outsource surveillance duties to private institutions without clear guardrails.

But courts have recognized that constitutional privacy must evolve alongside technology. In 2012, the Supreme Court ruled in United States v. Jones that attaching a GPS tracker to a vehicle for prolonged surveillance constituted a search restricted by the Fourth Amendment. Justice Sonia Sotomayor, in a notable concurrence, argued that the third-party doctrine was ill suited to an era when individuals “reveal a great deal of information about themselves to third parties” merely by participating in daily life.

This legal evolution continued in 2018, when the Supreme Court held in Carpenter v. United States that accessing historical cell-phone location records held by a third party required a warrant, recognizing that “seismic shifts in digital technology” necessitate stronger protections and warning that “the fact that such information is gathered by a third party does not make it any less deserving of Fourth Amendment protection.”

The logic of Carpenter applies directly to the mass of financial records being collected today. Just as tracking a person’s phone over time reveals the “whole of their physical movements,” tracking a person’s financial life exposes travel, daily patterns, medical treatments, political affiliations, and personal associations. In many ways, because of the velocity and digital nature of today’s payments, financial data is among the most personal and revealing data there is—and therefore deserves the highest level of constitutional protection.

Though Miller remains formally intact, the writing is on the wall: Indiscriminate financial surveillance such as what we have today is fundamentally at odds with the Fourth Amendment in the digital age.

Technological innovations over the past several decades have brought incredible convenience to economic life. Now our privacy standards must catch up. With Congress considering landmark legislation on digital assets, it’s an important moment to consider what kind of financial system we want—not just in terms of efficiency and access, but in terms of freedom. Rather than striking down the BSA in its entirety, policymakers should narrow its reach, particularly around the bulk collection and warrantless sharing of Americans’ financial data.

Financial surveillance shouldn’t be the price of participation in modern life. The systems we build now will shape what freedom looks like for the next century. It’s time to treat financial privacy like what it is: a cornerstone of democracy, and a right worth fighting for.

Katie Haun is the CEO and founder of Haun Ventures, a venture capital firm focused on frontier technologies. She is a former federal prosecutor who created the US Justice Department’s first cryptocurrency task force. She led investigations into the Mt. Gox hack and the corrupt agents on the Silk Road task force. She clerked for US Supreme Court Justice Anthony Kennedy and is an honors graduate of Stanford Law School.

Why AI hardware needs to be open

When OpenAI acquired Io to create “the coolest piece of tech that the world will have ever seen,” it confirmed what industry experts have long been saying: Hardware is the new frontier for AI. AI will no longer just be an abstract thing in the cloud far away. It’s coming for our homes, our rooms, our beds, our bodies. 

That should worry us.

Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, and wearables that will track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence?

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone.

By definition, the maker movement is humble and it is consistent. Makers do not believe in the cult of individual genius; we believe in collective genius. We believe that creativity is universally distributed (not exclusively bestowed), that inventing is better together, and that we should make open products so people can observe, learn, and create—basically, the polar opposite of what Jony Ive and Sam Altman are building.

But over time, the momentum faded. The movement was dismissed by the tech and investment industry as niche and hobbyist, and starting in 2018, pressures on the hardware venture market (followed by covid) made people retreat from social spaces to spend more time behind screens. 

Now it’s mounting a powerful second act, joined by a wave of AI open-source enthusiasts. This time around the stakes are higher, and we need to give it the support it never had.

In 2024 the AI leader Hugging Face developed an open-source platform for AI robots, which already has 3,500+ robot data sets and draws thousands of participants from every continent to join giant hackathons. Raspberry Pi went public on the London Stock Exchange for $700 million. After a hiatus, Maker Faire came back; the most recent one had nearly 30,000 attendees, with kinetic sculptures, flaming octopuses, and DIY robot bands, and this year there will be over 100 Maker Faires around the world. Just last week, DIY.org relaunched its app. In March, my friend Roya Mahboob, founder of the Afghan Girls Robotics Team, released a movie about the team to incredible reviews. People love the idea that making is the ultimate form of human empowerment and expression. All the while, a core set of people have continued influencing millions through maker organizations like FabLabs and Adafruit.

Studies show that hands-on creativity reduces anxiety, combats loneliness, and boosts cognitive function. The act of making grounds us, connects us to others, and reminds us that we are capable of shaping the world with our own hands. 

I’m not proposing to reject AI hardware but to reject the idea that innovation must be proprietary, elite, and closed. I’m proposing to fund and build the open alternative. That means putting our investment, time, and purchases toward robots built in community labs, AI models trained in the open, and tools made transparent and hackable. That world isn’t just more inclusive—it’s more innovative. It’s also more fun.

This is not nostalgia. This is about fighting for the kind of future we want: A future of openness and joy, not of conformity and consumption. One where technology invites participation, not passivity. Where children grow up not just knowing how to swipe, but how to build. Where creativity is a shared endeavor, not the mythical province of lone geniuses in glass towers.

In his Io announcement video, Altman said, “We are literally on the brink of a new generation of technology that can make us our better selves.” It reminded me of the movie Mountainhead, where four tech moguls tell themselves they are saving the world while the world is burning. I don’t think the iPhone made us our better selves. In fact, you’ve never seen me run faster than when I’m trying to snatch an iPhone out of my three-year-old’s hands.

So yes, I’m watching what Sam Altman and Jony Ive will unveil. But I’m far more excited by what’s happening in basements, in classrooms, on workbenches. Because the real iPhone moment isn’t a new product we wait for. It’s the moment you realize you can build it yourself. And best of all? You can’t doomscroll when you’re holding a soldering iron.

Ayah Bdeir is a leader in the maker movement, a champion of open source AI, and founder of littleBits, the hardware platform that teaches STEAM to kids through hands-on invention. A graduate of the MIT Media Lab, she was selected as one of the BBC’s 100 Most Influential Women, and her inventions have been acquired by the Museum of Modern Art.

AI copyright anxiety will hold back creativity

Last fall, while attending a board meeting in Amsterdam, I had a few free hours and made an impromptu visit to the Van Gogh Museum. I often steal time for visits like this—a perk of global business travel for which I am grateful. Wandering the galleries, I found myself before The Courtesan (after Eisen), painted in 1887. Van Gogh had based it on a Japanese woodblock print by Keisai Eisen, which he encountered in the magazine Paris Illustré. He explicitly copied and reinterpreted Eisen’s composition, adding his own vivid border of frogs, cranes, and bamboo.

As I stood there, I imagined the painting as the product of a generative AI model prompted with the query How would van Gogh reinterpret a Japanese woodblock in the style of Keisai Eisen? And I wondered: If van Gogh had used such an AI tool to stimulate his imagination, would Eisen—or his heirs—have had a strong legal claim? If van Gogh were working today, that might be the case. Two years ago, the US Supreme Court found that Andy Warhol had infringed upon the photographer Lynn Goldsmith’s copyright by using her photo of the musician Prince for a series of silkscreens. The court said the works were not sufficiently transformative to constitute fair use—a provision in the law that allows for others to make limited use of copyrighted material.

A few months later, at the Museum of Fine Arts in Boston, I visited a Salvador Dalí exhibition. I had always thought of Dalí as a true original genius who conjured surreal visions out of thin air. But the show included several Dutch engravings, including Pieter Bruegel the Elder’s Seven Deadly Sins (1558), that clearly influenced Dalí’s 8 Mortal Sins Suite (1966). The stylistic differences are significant, but the lineage is undeniable. Dalí himself cited Bruegel as a surrealist forerunner, someone who tapped into the same dream logic and bizarre forms that Dalí celebrated. Suddenly, I was seeing Dalí not just as an original but also as a reinterpreter. Should Bruegel have been flattered that Dalí built on his work—or should he have sued him for making it so “grotesque”?

During a later visit to a Picasso exhibit in Milan, I came across a famous informational diagram by the art historian Alfred Barr, mapping how modernist movements like Cubism evolved from earlier artistic traditions. Picasso is often held up as one of modern art’s most original and influential figures, but Barr’s chart made plain the many artists he drew from—Goya, El Greco, Cézanne, African sculptors. This made me wonder: If a generative AI model had been fed all those inputs, might it have produced Cubism? Could it have generated the next great artistic “breakthrough”?

These experiences—spread across three cities and centered on three iconic artists—coalesced into a broader reflection I’d already begun. I had recently spoken with Daniel Ek, the founder of Spotify, about how restrictive copyright laws are in music. Song arrangements and lyrics enjoy longer protection than many pharmaceutical patents. Ek sits at the leading edge of this debate, and he observed that generative AI already produces an astonishing range of music. Some of it is good. Much of it is terrible. But nearly all of it borrows from the patterns and structures of existing work.

Musicians already routinely sue one another for borrowing from previous works. How will the law adapt to a form of artistry that’s driven by prompts and precedent, built entirely on a corpus of existing material?

And the questions don’t stop there. Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it? Will the social forces that shape artistic standing—critics, curators, tastemakers—still hold sway? Or will a new, AI-era hierarchy emerge? If every artist has always borrowed from others, is AI’s generative recombination really so different? And in such a litigious culture, how long can copyright law hold its current form? The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field. 

Different industries are responding in different ways. The Academy of Motion Picture Arts and Sciences recently announced that filmmakers’ use of generative AI would not disqualify them from Oscar contention—and that they wouldn’t be required to disclose when they’d used the technology. Several acclaimed films, including Oscar winner The Brutalist, incorporated AI into their production processes.

The music world, meanwhile, continues to wrestle with its definitions of originality. Consider the recent lawsuit against Ed Sheeran. In 2016, he was sued by the heirs of Ed Townsend, co-writer of Marvin Gaye’s “Let’s Get It On,” who claimed that Sheeran’s “Thinking Out Loud” copied the earlier song’s melody, harmony, and rhythm. When the case finally went to trial in 2023, Sheeran brought a guitar to the stand. He played the disputed four-chord progression—I–iii–IV–V—and wove together a mash-up of songs built on the same foundation. The point was clear: These are the elemental units of songwriting. After a brief deliberation, the jury found Sheeran not liable.

Reflecting after the trial, Sheeran said: “These chords are common building blocks … No one owns them or the way they’re played, in the same way no one owns the colour blue.”

Exactly. Whether it’s expressed with a guitar, a paintbrush, or a generative algorithm, creativity has always been built on what came before.

I don’t consider this essay to be great art. But I should be transparent: I relied extensively on ChatGPT while drafting it. I began with a rough outline, notes typed on my phone in museum galleries, and transcripts from conversations with colleagues. I uploaded older writing samples to give the model a sense of my voice. Then I used the tool to shape a draft, which I revised repeatedly—by hand and with help from an editor—over several weeks.

There may still be phrases or sentences in here that came directly from the model. But I’ve iterated so much that I no longer know which ones. Nor, I suspect, could any reader—or any AI detector. (In fact, Grammarly found that 0% of this text appeared to be AI-generated.)

Many people today remain uneasy about using these tools. They worry it’s cheating, or feel embarrassed to admit that they’ve sought such help. I’ve moved past that. I assume all my students at Harvard Business School are using AI. I assume most academic research begins with literature scanned and synthesized by these models. And I assume that many of the essays I now read in leading publications were shaped, at least in part, by generative tools.

Why? Because we are professionals. And professionals adopt efficiency tools early. Generative AI joins a long lineage that includes the word processor, the search engine, and editing tools like Grammarly. The question is no longer Who’s using AI? but Why wouldn’t you?

I recognize the counterargument, notably put forward by Nicholas Thompson, CEO of the Atlantic: that content produced with AI assistance should not be eligible for copyright protection, because it blurs the boundaries of authorship. I understand the instinct. AI recombines vast corpora of preexisting work, and the results can feel derivative or machine-like.

But when I reflect on the history of creativity—van Gogh reworking Eisen, Dalí channeling Bruegel, Sheeran defending common musical DNA—I’m reminded that recombination has always been central to creation. The economist Joseph Schumpeter famously wrote that innovation is less about invention than “the novel reassembly of existing ideas.” If we tried to trace and assign ownership to every prior influence, we’d grind creativity to a halt.

From the outset, I knew the tools had transformative potential. What I underestimated was how quickly they would become ubiquitous across industries and in my own daily work.

Our copyright system has never required total originality. It demands meaningful human input. That standard should apply in the age of AI as well. When people thoughtfully engage with these models—choosing prompts, curating inputs, shaping the results—they are creating. The medium has changed, but the impulse remains the same: to build something new from the materials we inherit.


Nitin Nohria is the George F. Baker Jr. Professor at Harvard Business School and its former dean. He is also the chair of Thrive Capital, an early investor in several prominent AI firms, including OpenAI.

MIT Technology Review’s editorial guidelines state that generative AI should not be used to draft articles unless the article is meant to illustrate the capabilities of such tools and its use is clearly disclosed. 

AI’s energy impact is still small—but how we handle it is huge

With seemingly no limit to the demand for artificial intelligence, everyone in the energy, AI, and climate fields is justifiably worried. Will there be enough clean electricity to power AI and enough water to cool the data centers that support this technology? These are important questions with serious implications for communities, the economy, and the environment. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


But the question about AI’s energy usage portends even bigger issues about what we need to do in addressing climate change for the next several decades. If we can’t work out how to handle this, we won’t be able to handle broader electrification of the economy, and the climate risks we face will increase.

Innovation in IT got us to this point. Graphics processing units (GPUs) that power the computing behind AI have fallen in cost by 99% since 2006. There was similar concern about the energy use of data centers in the early 2010s, with wild projections of growth in electricity demand. But gains in computing power and energy efficiency not only proved these projections wrong but enabled a 550% increase in global computing capability from 2010 to 2018 with only minimal increases in energy use. 

In the late 2010s, however, the trends that had saved us began to break. As the accuracy of AI models dramatically improved, the electricity needed for data centers also started increasing faster; they now account for 4.4% of total demand, up from 1.9% in 2018. Data centers consume more than 10% of the electricity supply in six US states. In Virginia, which has emerged as a hub of data center activity, that figure is 25%.

Projections about the future demand for energy to power AI are uncertain and range widely, but in one study, Lawrence Berkeley National Laboratory estimated that data centers could represent 6% to 12% of total US electricity use by 2028. Communities and companies will notice this type of rapid growth in electricity demand. It will put pressure on energy prices and on ecosystems. The projections have resulted in calls to build lots of new fossil-fired power plants or bring older ones out of retirement. In many parts of the US, the demand will likely result in a surge of natural-gas-powered plants.

It’s a daunting situation. Yet when we zoom out, the projected electricity use from AI is still pretty small. The US generated about 4,300 billion kilowatt-hours last year. We’ll likely need another 1,000 billion to 1,200 billion or more in the next decade—a 24% to 29% increase. Almost half the additional electricity demand will be from electrified vehicles. Another 30% is expected to be from electrified technologies in buildings and industry. Innovation in vehicle and building electrification also advanced in the last decade, and this shift will be good news for the climate, for communities, and for energy costs.

The remaining 22% of new electricity demand is estimated to come from AI and data centers. While it represents a smaller piece of the pie, it’s the most urgent one. Because of their rapid growth and geographic concentration, data centers are the electrification challenge we face right now—the small stuff we have to figure out before we’re able to do the big stuff like vehicles and buildings.
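These shares can be checked with quick back-of-the-envelope arithmetic. A minimal sketch, assuming a 48% vehicle share (implied by the stated 30% and 22% shares; the text says only "almost half") and last year's roughly 4,300 billion kilowatt-hours as the baseline:

```python
# Sanity check of the demand arithmetic cited in the text.
current_bkwh = 4_300                  # approx. US generation last year, billion kWh
added_low, added_high = 1_000, 1_200  # projected additional demand, billion kWh

growth_low = added_low / current_bkwh
growth_high = added_high / current_bkwh
print(f"implied growth: {growth_low:.0%} to {growth_high:.0%}")

# Shares of the *new* demand: 30% buildings/industry and 22% AI/data
# centers are stated; the vehicle share is the remainder ("almost half").
buildings_share = 0.30
ai_share = 0.22
vehicles_share = 1 - buildings_share - ai_share
print(f"vehicle share of new demand: {vehicles_share:.0%}")
```

Dividing the range by last year's generation gives roughly 23% to 28%; the cited 24% to 29% presumably reflects different rounding or a slightly different baseline.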

We also need to understand what the energy consumption and carbon emissions associated with AI are buying us. While the impacts from producing semiconductors and powering AI data centers are important, they are likely small compared with the positive or negative effects AI may have on applications such as the electricity grid, the transportation system, buildings and factories, or consumer behavior. Companies could use AI to develop new materials or batteries that would better integrate renewable energy into the grid. But they could also use AI to make it easier to find more fossil fuels. The claims about potential benefits for the climate are exciting, but they need to be continuously verified and will need support to be realized.

This isn’t the first time we’ve faced challenges coping with growth in electricity demand. In the 1960s, US electricity demand was growing at more than 7% per year. In the 1970s that growth was nearly 5%, and in the 1980s and 1990s it was more than 2% per year. Then, starting in 2005, we basically had a decade and a half of flat electricity growth. Most projections for the next decade put our expected growth in electricity demand at around 2% again—but this time we’ll have to do things differently. 

To manage these new energy demands, we need a “Grid New Deal” that leverages public and private capital to rebuild the electricity system for AI with enough capacity and intelligence for decarbonization. New clean energy supplies, investment in transmission and distribution, and strategies for virtual demand management can cut emissions, lower prices, and increase resilience. Data centers bringing clean electricity and distribution system upgrades could be given a fast lane to connect to the grid. Infrastructure banks could fund new transmission lines or pay to upgrade existing ones. Direct investment or tax incentives could encourage clean computing standards, workforce development in the clean energy sector, and open data transparency from data center operators about their energy use so that communities can understand and measure the impacts.

In 2022, the White House released a Blueprint for an AI Bill of Rights that provided principles to protect the public’s rights, opportunities, and access to critical resources from being restricted by AI systems. To the AI Bill of Rights, we humbly offer a climate amendment, because ethical AI must be climate-safe AI. It’s a starting point to ensure that the growth of AI works for everyone—that it doesn’t raise people’s energy bills, that it adds more clean power to the grid than it uses, that it increases investment in the power system’s infrastructure, and that it benefits communities while driving innovation.

By grounding the conversation about AI and energy in context about what is needed to tackle climate change, we can deliver better outcomes for communities, ecosystems, and the economy. The growth of electricity demand for AI and data centers is a test case for how society will respond to the demands and challenges of broader electrification. If we get this wrong, the likelihood of meeting our climate targets will be extremely low. This is what we mean when we say the energy and climate impacts from data centers are small, but they are also huge.

Costa Samaras is the Trustee Professor of Civil and Environmental Engineering and director of the Scott Institute for Energy Innovation at Carnegie Mellon University.

Emma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University.

Ramayya Krishnan is dean of the Heinz College of Information Systems and Public Policy and the William W. and Ruth F. Cooper Professor of Management Science and Information Systems at Carnegie Mellon University.

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” 

But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that are making it harder for both legacy players and emerging companies to invest, grow, and compete.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. 

By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. 

But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies.

Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China.

China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition.

Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy.

The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investments in energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand up on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations.

Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength.

But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries.

But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum.

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.  

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal.

The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.

How a bankruptcy judge can stop a genetic privacy disaster

Stop me if you’ve heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind. 

The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people’s genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users’ data. But there’s still a small chance of writing a better ending for users.

After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That’s good advice for the individual user (and you can read how to do so here). But the reality is most people won’t do it. Maybe they won’t see the recommendations to do so. Maybe they don’t know why they should be worried. Maybe they have long since abandoned an account that they don’t even remember exists. Or maybe they’re just occupied with the chaos of everyday life. 

This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe’s meager revenue—fewer than 4% of people who took tests pay for subscriptions—it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data. 

This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry. Because genetic data is forever. Contact information can go stale over time: you can always change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data—whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you—will have it tomorrow and the next day and all the days after that. 

Users with exposed genetic data are not only vulnerable to harm today; they’re vulnerable to exploits that might be developed in the future. 

While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms. 

In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe’s user data, that mistake is likely permanent and irreparable. 

All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment’s notice. And the burden falls on users to keep track of it all, or just give up.

Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted. 

Bankruptcy proceedings involving personal data don’t have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.) 

The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The U.S. Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.

Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe’s failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn’t expect or want. 

An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe’s new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.
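The opt-in rule sketched above is simple enough to express in code. A hypothetical illustration, where the function name, dates, and user names are invented for the example and reflect no actual court procedure:

```python
# Hypothetical sketch of the opt-in rule: data is held in escrow by the
# trustee, transferred only on explicit consent, and deleted for anyone
# who has not opted in by the deadline.
from datetime import date

def resolve_escrow(opted_in: set[str],
                   all_users: set[str],
                   deadline: date,
                   today: date) -> dict[str, list[str]]:
    """Return which users' data is transferred vs. deleted."""
    if today < deadline:
        # Before the deadline, only affirmative opt-ins are released.
        return {"transfer": sorted(opted_in), "delete": []}
    # After the deadline, everyone who stayed silent is deleted.
    silent = all_users - opted_in
    return {"transfer": sorted(opted_in), "delete": sorted(silent)}

result = resolve_escrow({"alice"}, {"alice", "bob", "carol"},
                        deadline=date(2026, 1, 1), today=date(2026, 6, 1))
print(result)  # {'transfer': ['alice'], 'delete': ['bob', 'carol']}
```

The key design point is that silence defaults to deletion, not transfer, which is exactly the reversal of burden the proposal calls for.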

Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person’s genetic privacy just to add a few dollars in value to the bankruptcy estate?    
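The per-user arithmetic is straightforward to reproduce from the reported figures:

```python
# Implied per-user value of the genetic data, using the reported buyout
# offers and the ~15 million user figure from the text.
users = 15_000_000
offer_high = 74_700_000  # higher reported offer, USD
offer_low = 12_100_000   # lower reported offer, USD

print(f"${offer_high / users:.2f} per user at the higher offer")
print(f"${offer_low / users:.2f} per user at the lower offer")
```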

Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Lawmakers allow them to. Federal and state inaction allows companies to dissolve promises about protecting Americans’ most sensitive data at a moment’s notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, 18 years later, that era may really be almost here. But with privacy laws like ours, who would trust it?

Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.

Ethically sourced “spare” human bodies could revolutionize medicine

Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically sourced human bodies. 

It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress.

This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that can’t replicate major aspects of human physiology and makes it necessary to inflict harm on sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time. 

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines.

Bringing technologies together

Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body. 

Such technologies, together with established genetic techniques to inhibit brain development, make it possible to envision the creation of “bodyoids”—a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.

There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids could radically transform biomedical research by addressing critical limitations in the current models of research, drug development, and medicine. Among many other benefits, they would offer an almost unlimited source of organs, tissues, and cells for use in transplantation.

It could even be possible to generate organs directly from a patient’s own cells, essentially cloning someone’s biological material to ensure that transplanted tissues are a perfect immunological match and thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patient’s cells could also allow for personalized screening of drugs, allowing physicians to directly assess the effect of different interventions in a biological model that accurately reflects a patient’s own personal genetics and physiology. We can even envision using animal bodyoids in agriculture, as a substitute for the use of sentient animal species. 

Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions.

Even if it all works, it may not be practical or economical to “grow” bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications. 

Ethical considerations and societal implications

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness. 

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it. 

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun conducting experiments on the “animated cadavers” of people who have been declared legally dead, who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we’d have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought. 

A call to action

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored. 

The potential benefits—for both human patients and sentient animal species—are great. Governments, companies, and private foundations should start thinking about bodyoids as a possible path for investment. There is no need to start with humans—we can begin exploring the feasibility of this approach with rodents or other research animals. 

As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments. 

The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation.

The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.

Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University.

Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.

Ethically sourced “spare” human bodies could revolutionize medicine

Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically sourced human bodies. 

It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress.

This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that can’t replicate major aspects of human physiology and makes it necessary to inflict harm on sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time. 

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines.

Bringing technologies together

Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body. 

Such technologies, together with established genetic techniques to inhibit brain development, make it possible to envision the creation of “bodyoids”—a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.

There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids could radically transform biomedical research by addressing critical limitations in the current models of research, drug development, and medicine. Among many other benefits, they would offer an almost unlimited source of organs, tissues, and cells for use in transplantation.

It could even be possible to generate organs directly from a patient’s own cells, essentially cloning someone’s biological material to ensure that transplanted tissues are a perfect immunological match and thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patient’s cells could also allow for personalized screening of drugs, allowing physicians to directly assess the effect of different interventions in a biological model that accurately reflects a patient’s own personal genetics and physiology. We can even envision using animal bodyoids in agriculture, as a substitute for the use of sentient animal species. 

Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions.

Even if it all works, it may not be practical or economical to “grow” bodyoids for the many years they might need before they are mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications.

Ethical considerations and societal implications

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness. 

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it. 

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun using the “animated cadavers” of people who have been declared legally dead for experiments: people who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we’d have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought. 

A call to action

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored. 

The potential benefits—for both human patients and sentient animal species—are great. Governments, companies, and private foundations should start thinking about bodyoids as a possible path for investment. There is no need to start with humans—we can begin exploring the feasibility of this approach with rodents or other research animals. 

As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments. 

The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation.

The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.

Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University.

Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.

The cheapest way to supercharge America’s power grid

US electricity consumption is rising faster than it has in decades, thanks in part to the boom in data center development, the resurgence in manufacturing, and the increasing popularity of electric vehicles. 

Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. This challenge has created one of the biggest obstacles to getting more electricity generation online, reducing investment in new power plants and stranding others in years-long “interconnection queues” while they wait to join the grid.

Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. They could help us quickly bring online a significant portion of the nearly 2,600 gigawatts of backlogged generation and storage projects awaiting pathways to connect to the electric grid. 

The opportunity to leverage advanced transmission technologies to update the way we deliver and consume electricity in America is as close to a $20 bill sitting on the sidewalk as policymakers may ever encounter. Promoting the development and use of these technologies should be a top priority for politicians in Washington, DC, as well as electricity market regulators around the country.

That includes the new Trump administration, which has clearly stated that building greater electricity supply and keeping costs low for consumers are high priorities. 

In the last month, Washington has been consumed by the Trump team’s efforts to test the bounds of executive power, fire civil servants, and disrupt the basic workings of the federal government. But when or if the White House and Congress get around to enacting new energy policies, they would be wise to pick up the $20 bill by enacting bipartisan measures to accelerate the rollout of these innovative grid technologies.

ATTs generally fall into four categories: dynamic line ratings, which combine local weather forecasts with measurements on or near transmission lines to safely increase their capacity when conditions allow; high-performance conductors, which are advanced wires that use carbon fiber, composite cores, or superconducting materials to carry more electricity than traditional steel-core conductors; topology optimization, which uses software to model fluctuating conditions across the grid and identify the most efficient routes to distribute electricity from moment to moment; and advanced power flow control devices, which redistribute electricity to lines with available capacity.



Other countries from Belgium to India to the United Kingdom are already making large-scale use of these technologies. Early projects in the United States have been remarkably successful as well. One recent deployment of dynamic line ratings increased capacity by more than 50% for only $45,000 per mile—roughly 1% of the price of building new transmission.

So why are we not seeing an explosion in ATT investment and deployment in the US? Because despite the potential of these 21st-century technologies, the 20th-century structure of the nation’s electricity markets discourages their adoption.

For one thing, under the current regulatory system, utilities generally make money by passing the cost of big new developments along to customers, earning a fixed annual return on their investment. Those costs are recovered through higher electricity rates, which local public utility commissions typically approve when power companies propose such projects.

That means utilities have financial incentives to make large and expensive investments, but not to save consumers money. When ATTs are installed in place of building new transmission capacity, the smaller capital costs mean that utilities make lower profits. For example, utilities might earn $600,000 per year after building a new mile of transmission, compared with about $4,500 per mile annually after installing the equipment and software necessary for dynamic line ratings. While these state regulatory agencies are tasked with ensuring that utilities act in the best interest of consumers, they often lack the necessary information to identify the best approach for doing so.

Overcoming these structural barriers will require action from both state and federal governments, and it should appeal to Democrats and Republicans alike. We’ve already seen some states, including Minnesota and Montana, move in this direction, but policy interventions to date remain insufficient. In a recent paper, we propose a new approach for unlocking the potential of these technologies.

First, we suggest requiring transmission providers to use ATTs in some “no regrets” contexts, where possible downsides are minor or nonexistent. The Federal Energy Regulatory Commission, for example, is already considering requiring dynamic line ratings on certain highly congested lines. Given the low cost of dynamic line ratings, and their clear benefit in cases of congestion, we believe that FERC should quickly move forward with, and strengthen, such a rule. Likewise, the Department of Energy or Congress should adopt an efficiency standard for the wires that carry electricity around the country. Every year, approximately 5% of electricity generated is lost in the transmission and distribution process. High-performance conductors can reduce those losses by 30%—cutting the share of electricity lost in transit from about 5% to roughly 3.5% of everything generated.

In addition, federal agencies and state lawmakers should require transmission providers to evaluate the potential for using ATTs on their grid, or provide support to help them do so. FERC has recently taken steps in this direction, and it should continue to strengthen those actions. 

Regulators should also provide financial incentives to transmission providers to encourage the installation of ATTs. The most promising approach is a “shared savings” incentive, such as that proposed in the recent Advancing GETS Act. This would allow utilities to earn a profit for saving money, not just spending it, and could save consumers billions on their electricity bills every year.

Finally, we should invest in building digital tools so transmission owners can identify opportunities for these technologies and so regulators can hold them accountable. Developing these systems will require transmission providers to share information about electricity supply and demand as well as grid infrastructure. Ideally, with such data in hand, researchers can develop a “digital twin” of the current transmission system to test different configurations of ATTs and help improve the performance and efficiency of our grids. 

We are all too aware that the world often faces difficult policy trade-offs. But laws or regulations that facilitate the use of ATTs can quickly expand the grid and save consumers money. They should be an easy yes on both sides of the aisle.

Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.