OpenAI: The power and the pride

In April, Paul Graham, the founder of the tech startup accelerator Y Combinator, sent a tweet in response to former YC president and current OpenAI CEO Sam Altman. Altman had just bid a public goodbye to GPT-4 on X, and Graham had a follow-up question. 

“If you had [GPT-4’s model weights] etched on a piece of metal in the most compressed form,” Graham wrote, referring to the values that determine the model’s behavior, “how big would the piece of metal have to be? This is a mostly serious question. These models are history, and by default digital data evaporates.” 

There is no question that OpenAI pulled off something historic with its release of ChatGPT, powered by GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution.

In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI. Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others. 

Hao, who was formerly a reporter with MIT Technology Review, began reporting on OpenAI while at this publication and remains an occasional contributor. One chapter of her book grew directly out of that reporting. And in fact, as Hao says in the acknowledgments of Empire of AI, some of her reporting for MIT Technology Review, a series on AI colonialism, “laid the groundwork for the thesis and, ultimately, the title of this book.” So you can take this as a kind of disclaimer that we are predisposed to look favorably on Hao’s work. 

With that said, Empire of AI is a powerful work, bristling not only with great reporting but also with big ideas, all in service of two main themes.

The first is simple: It is the story of ambition overriding ethics. The history of OpenAI as Hao tells it (and as Hagey does too) is very much a tale of a company that was founded on the idealistic desire to create a safety-focused artificial general intelligence but instead became more interested in winning. This is a story we’ve seen many times before in Big Tech. See Theranos, which was going to make diagnostics easier, or Uber, which was founded to break the cartel of “Big Taxi.” But the closest analogue might be Google, which went from “Don’t be evil” to (at least in the eyes of the courts) illegal monopolist. For that matter, consider how Google went from holding off on releasing its language model as a consumer product out of an abundance of caution to rushing a chatbot out the door to catch up with and beat OpenAI. In Silicon Valley, no matter what one’s original intent, it always comes back to winning.  

The second theme is more complex and forms the book’s thesis about what Hao calls AI colonialism. The idea is that the large AI companies act like traditional empires, siphoning wealth from the bottom rungs of society in the forms of labor, creative works, raw materials, and the like to fuel their ambition and enrich those at the top of the ladder. “I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires,” she writes.

“During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment.” She goes on to chronicle her own growing disillusionment with the industry. “With increasing clarity,” she writes, “I realized that the very revolution promising to bring a better future was instead, for people on the margins of society, reviving the darkest remnants of the past.” 

To document this, Hao steps away from her desk and goes out into the world to see the effects of this empire as it sprawls across the planet. She travels to Colombia to meet with data labelers tasked with teaching AI what various images show, one of whom she describes sprinting back to her apartment for the chance to make a few dollars. She documents how workers in Kenya who performed content moderation and data labeling for OpenAI came away traumatized by seeing so much disturbing material. In Chile she documents how the industry extracts precious resources—water, power, copper, lithium—to build out data centers.

She also lands on the ways people are pushing back against the empire of AI across the world. Hao draws lessons from New Zealand, where Māori people are attempting to save their language using a small language model of their own making. Trained on volunteers’ voice recordings and running on just two graphics processing units, or GPUs, rather than the thousands employed by the likes of OpenAI, it’s meant to benefit the community, not exploit it.

Hao writes that she is not against AI. Rather: “What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed will ever emerge from—a vision of the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project … [The New Zealand model] shows us another way. It imagines how AI could be exactly the opposite. Models can be small and task-specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers.” 

Hagey’s book is more squarely focused on Altman’s ambition, which she traces back to his childhood. Yet interestingly, she also zeroes in on the OpenAI CEO’s attempt to create an empire. Indeed, “Altman’s departure from YC had not slowed his civilization-building ambitions,” Hagey writes. She goes on to chronicle how Altman, who had previously mulled a run for governor of California, set up experiments with income distribution via Tools for Humanity, the parent company of Worldcoin. She quotes Altman saying of it, “I thought it would be interesting to see … just how far technology could accomplish some of the goals that used to be done by nation-states.”

Overall, The Optimist is the more straightforward business biography of the two. Hagey has packed it full of scoops, insights, and behind-the-scenes intrigue. It is immensely readable as a result, especially in the second half, when OpenAI really takes over the story. Hagey also seems to have been given far more access to Altman and his inner circles, personal and professional, than Hao was, and that allows for a fuller telling of the CEO’s story in places. For example, both writers cover the tragic story of Altman’s sister Annie, her estrangement from the family, and her accusations in particular about suffering sexual abuse at the hands of Sam (something he and the rest of the Altman family vehemently deny). Hagey’s telling provides a more nuanced picture of the situation, with more insight into family dynamics.

Hagey concludes by describing Altman’s reckoning with his role in the long arc of human history and what it will mean to create a “superintelligence.” His place in that sweep is something that clearly has consumed the CEO’s thoughts. When Paul Graham asked about preserving GPT-4, for example, Altman had a response at the ready. He replied that the company had already considered this, and that the sheet of metal would need to be 100 meters square.

The AI Hype Index: College students are hooked on ChatGPT

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Large language models confidently present their responses as accurate and reliable, even when they’re neither of those things. That’s why we’ve recently seen chatbots supercharge vulnerable people’s delusions, make citation mistakes in an important legal battle between music publishers and Anthropic, and (in the case of xAI’s Grok) rant irrationally about “white genocide.”

But it’s not all bad news—AI could also finally lead to a better battery life for your iPhone and solve tricky real-world problems that humans have been struggling to crack, if Google DeepMind’s new model is any indication. And perhaps most exciting of all, it could combine with brain implants to help people communicate when they have lost the ability to speak.

A new sodium metal fuel cell could help clean up transportation

A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems. 

The sodium-air fuel cell was designed by a team led by Yet-Ming Chiang, a professor of materials science and engineering at MIT. It has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. “I’m interested in sodium metal as an energy carrier of the future,” Chiang says.  

The device’s design, published today in Joule, is related to the technology behind one of Chiang’s companies, Form Energy, which is building iron-air batteries for large energy storage installations like those that could help store wind and solar power on the grid. Form’s batteries rely on water, iron, and air.

One technical challenge for metal-air batteries has historically been reversibility. A battery’s chemical reactions must be easily reversible: run in one direction, they generate electricity and discharge the battery; run in reverse, with electricity fed into the cell, they charge it back up.

When a battery’s reactions produce a very stable product, it can be difficult to recharge the battery without losing capacity. To get around this problem, the team at Form had discussions about whether their batteries could be refuelable rather than rechargeable, Chiang says. The idea was that rather than reversing the reactions, they could simply run the system in one direction, add more starting material, and repeat. 

Ultimately, Form chose a more traditional battery concept, but the idea stuck with Chiang, who decided to explore it with other metals and landed on the idea of a sodium-based fuel cell. 

In this fuel cell format, the device takes in chemicals and runs reactions that generate electricity, after which the products get removed. Then fresh fuel is put in to run the whole thing again—no electrical charging required. (You might recognize this concept from hydrogen fuel cell vehicles, like the Toyota Mirai.)

Chiang and his colleagues set out to build a fuel cell that runs on liquid sodium, which could have a much higher energy density than existing commercial technologies, so it would be small and light enough to be used for things like regional airplanes or short-distance shipping.

[Image: Gloved hands holding a small vial of sodium metal. Sodium metal could be used to power regional planes or short-distance shipping. Credit: Gretchen Ertl/MITTR]

The research team built small test cells to try out the concept and ran them to show that they could use the sodium-metal-based system to generate electricity. Since sodium becomes liquid at about 98 °C (208 °F), the cells operated at moderate temperatures of between 110 °C and 130 °C (230 °F and 266 °F), which could be practical for use on planes or ships, Chiang says.

From their work with these experimental devices, the researchers estimated that the energy density was about 1,200 watt-hours per kilogram (Wh/kg). That’s much higher than what commercial lithium-ion batteries can reach today (around 300 Wh/kg). Hydrogen fuel cells can achieve high energy density, but that requires the hydrogen to be stored at high pressures and often ultra-low temperatures.

“It’s an interesting cell concept,” says Jürgen Janek, a professor at the Institute of Physical Chemistry at the University of Giessen in Germany, who was not involved in the research. There has been research on sodium-air batteries in the past, Janek says, but using this sort of chemistry in a fuel cell instead is new.

“One of the critical issues with this type of cell concept is the safety issue,” Janek says. Sodium metal reacts very strongly with water. (You may have seen videos where blocks of sodium metal get thrown into a lake, to dramatic effect.) Asked about this issue, Chiang says the design of the cell ensures that water produced during reactions is continuously removed, so there’s not enough around to fuel harmful reactions. The solid electrolyte, a ceramic material, also helps prevent reactions between water and sodium, Chiang adds.

Another question is what happens to one of the cell’s products, sodium hydroxide. Commonly known as lye, it’s an industrial chemical, used in products like liquid drain-cleaning solution. One of the researchers’ suggestions is to dilute the product and release it into the atmosphere or ocean, where it would react with carbon dioxide, capturing it in a stable form and preventing it from contributing to global warming. There are groups pursuing field trials using this exact chemical for ocean-based carbon removal, though some have been met with controversy. The researchers also laid out the potential for a closed system, where the chemical could be collected and sold as a by-product.

There are economic factors working in favor of sodium-based systems, though it would take some work to build up the necessary supply chains. Today, sodium metal isn’t produced at very high volumes. However, it can be made from sodium chloride (table salt), which is incredibly cheap. And it was produced more abundantly in the past, since it was used in the process of making leaded gasoline. So there’s a precedent for a larger supply chain, and it’s possible that scaling up production of sodium metal would make it cheap enough to use in fuel cell systems, Chiang says.

Chiang has cofounded a company called Propel Aero to commercialize the research. The project received funding from ARPA-E’s Propel-1K program, which aims to develop new forms of high-power energy storage for aircraft, trains, and ships.

The next step is to continue research to improve the cells’ performance and energy density, and to start designing small-scale systems. One potential early application is drones. “We’d like to make something fly within the next year,” Chiang says.

“If people don’t find it crazy, I’ll be rather disappointed,” Chiang says. “Because if an idea doesn’t sound crazy at the beginning, it probably isn’t as revolutionary as you think. Fortunately, most people think I’m crazy on this one.”

The FDA plans to limit access to covid vaccines. Here’s why that’s not all bad.

This week, two new leaders at the US Food and Drug Administration announced plans to limit access to covid vaccines, arguing that there is not much evidence to support the value of annual shots in healthy people. New vaccines will be made available only to the people who are most vulnerable—namely, those over 65 and others with conditions that make them more susceptible to severe disease.

Anyone else will have to wait. Covid vaccines will soon be required to go through more rigorous trials to ensure that they really are beneficial for people who aren’t at high risk.

The plans have been met with fear and anger in some quarters. But they weren’t all that shocking to me. In the UK, where I live, covid boosters have been offered only to vulnerable groups for a while now. And the immunologists I spoke to agree: The plans make sense.

They are still controversial. Covid hasn’t gone away. And while most people are thought to have some level of immunity to the virus, some of us still stand to get very sick if infected. The threat of long covid lingers, too. Given that people respond differently to both the virus and the vaccine, perhaps individuals should be able to choose whether they get a vaccine or not.

I should start by saying that covid vaccines have been a remarkable success story. The drugs were developed at record-breaking speed—they were given to people in clinical trials just 69 days after the virus had been identified. They are, on the whole, very safe. And they work remarkably well. They have saved millions of lives. And they rescued many of us from lockdowns.

But while many of us have benefited hugely from covid vaccinations in the past, there are questions over how useful continuing annual booster doses might be. That’s the argument being made by FDA head Marty Makary and Vinay Prasad, director of the agency’s Center for Biologics Evaluation and Research.

Both men have been critical of the FDA in the past. Makary has long been accused of downplaying the benefits of covid vaccines. He made incorrect assumptions about the coronavirus responsible for covid-19 and predicted that the disease would be “mostly gone” by April 2021. Most recently, he also testified in Congress that the theory that the virus came from a lab in China was a “no-brainer.” (The strongest evidence suggests the virus jumped from animals to humans in a market in Wuhan.)

Prasad has said “the FDA is a failure” and has called annual covid boosters “a public health disaster the likes of which we’ve never seen before,” because of a perceived lack of clinical evidence to support their use.

Makary and Prasad’s plans, which were outlined in the New England Journal of Medicine on Tuesday, don’t include such inflammatory language or unfounded claims, thankfully. In fact, they seem pretty measured: Annual covid booster shots will continue to be approved for vulnerable people but will have to be shown to benefit others before people outside the approved groups can access them.

There are still concerns being raised, though. Let’s address a few of the biggest ones.

Shouldn’t I get an annual covid booster alongside my flu vaccine?

At the moment, a lot of people in the US opt to get a covid vaccination around the time they get their annual flu jab. Each year, a flu vaccine is developed to protect against what scientists predict will be the dominant strain of virus circulating come flu season, which tends to run from October through March.

But covid doesn’t seem to stick to the same seasonal patterns, says Susanna Dunachie, a clinical doctor and professor of infectious diseases at the University of Oxford in the UK. “We seem to be getting waves of covid year-round,” she says.

And an annual shot might not offer the best protection against covid anyway, says Fikadu Tafesse, an immunologist and virologist at Oregon Health & Science University in Portland. His own research suggests that leaving more than a year between booster doses could enhance their effectiveness. “One year is really a random time,” he says. It might be better to wait five or 10 years between doses instead, he adds.

“If you are at risk [of a serious covid infection] you may actually need [a dose] every six months,” says Tafesse. “But for healthy individuals, it’s a very different conversation.”

What about children—shouldn’t we be protecting them?

There are reports that pediatricians are concerned about the impact on children, some of whom can develop serious cases of covid. “If we have safe and effective vaccines that prevent illness, we think they should be available,” James Campbell, vice chair of the committee on infectious diseases at the American Academy of Pediatrics, told STAT.

This question has been on my mind for a while. My two young children, who were born in the UK, have never been eligible for a covid vaccine in this country. I found this incredibly distressing when the virus started tearing through child-care centers—especially given that at the time, the US was vaccinating babies from the age of six months.

My kids were eventually offered a vaccine in the US, when we temporarily moved there a couple of years ago. But by that point, the equation had changed. They’d both had covid by then. I had a better idea of the general risks of the virus to children. I turned it down.

I was relieved to hear that Tafesse had made the same decision for his own children. “There are always exceptions, but in general, [covid] is not severe in kids,” he says. The UK’s Joint Committee on Vaccination and Immunisation found that the benefits of vaccination are much smaller for children than they are for adults.

“Of course there are children with health problems who should definitely have it,” says Dunachie. “But for healthy children in healthy households, the benefits probably are quite marginal.”

Shouldn’t healthy people get vaccinated to help protect more vulnerable members of society?

It’s a good argument, says Tafesse. Research suggests that people who are vaccinated against covid-19 are less likely to end up transmitting the infection to the people around them. The degree of protection is not entirely clear, particularly with less-studied—and more contagious—variants of the virus and targeted vaccines. The safest approach is to encourage those at high risk to get the vaccine themselves, says Tafesse.

If the vaccines are safe, shouldn’t I be able to choose to get one?

Tafesse doesn’t buy this argument. “I know they are safe, but even if they’re safe, why do I need to get one?” People should know if they are likely to benefit from a drug they are taking, he says.

Having said that, the cost-benefit calculation will differ between individuals. Even a “mild” covid infection can leave some people bed-bound for a week. For them, it might make total sense to get the vaccine.

Dunachie thinks people should be able to make their own decisions. “Giving people a top-up whether they need it or not is a safe thing to do,” she says.

It is still not entirely clear who will be able to access covid vaccinations under the new plans, and how. Makary and Prasad’s piece includes a list of “medical conditions that increase a person’s risk of severe covid-19,” which includes several disorders, pregnancy, and “physical inactivity.” It covers a lot of people; research suggests that around 25% of Americans are physically inactive.

But I find myself agreeing with Dunachie. Yes, we need up-to-date evidence to support the use of any drugs. But taking vaccines away from people who have experience with them and feel they could benefit from them doesn’t feel like the ideal way to go about it.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Meet Cathy Tie, Bride of “China’s Frankenstein”

Since the Chinese biophysicist He Jiankui was released from prison in 2022, he has sought to make a scientific comeback and to repair his reputation after a three-year incarceration for illegally creating the world’s first gene-edited children. 

While he has bounced between cities, jobs, and meetings with investors, one area of visible success on his comeback trail has been his X.com account, @Jiankui_He, which has become his main way of spreading his ideas to the world. Starting in September 2022, when he joined the platform, the account stuck to the scientist’s main themes, including promising a more careful approach to his dream of creating more gene-edited children. “I will do it, only after society has accepted it,” he posted in August 2024. He also shared mundane images of his daily life, including golf games and his family.

But over time, it evolved and started to go viral. First came a series of selfies accompanied by grandiose statements (“Every pioneer or prophet must suffer”). Then, in April of this year, it became particularly outrageous and even troll-like, blasting out bizarre messages (“Good morning bitches. How many embryos have you gene edited today?”). This has left observers unsure what to take seriously.

Last month, in reply to MIT Technology Review’s questions about who was responsible for the account’s transformation into a font of clever memes, He emailed us back: “It’s thanks to Cathy Tie.”

You may not be familiar with Tie, but she’s no stranger to the public spotlight. A former Thiel fellow, she is a partner in the attention-grabbing Los Angeles Project, which promised to create glow-in-the-dark pets. Over the past several weeks, though, the 29-year-old Canadian entrepreneur has started to get more and more attention as the new wife to (and apparent social media mastermind behind) He Jiankui. On April 15, He announced a new venture, Cathy Medicine, that would take up his mission of editing human embryos to create people resistant to diseases like Alzheimer’s or cancer. Just a few days later, on April 18, He and Tie announced that they had married, posting pictures of themselves in traditional Chinese wedding attire.

But now Tie says that just a month after she married “the most controversial scientist in the world,” her plans to relocate from Los Angeles to Beijing to be with He are in disarray; she says she’s been denied entry to China and the two “may never see each other again,” as He’s passport is being held by Chinese authorities and he can’t leave the country.

Reached by phone in Manila, Tie said authorities in the Philippines had intercepted her during a layover on May 17 and told her she couldn’t board a plane to China, where she was born and where she says she has a valid 10-year visa. She claims they didn’t say why but told her she is likely “on a watch list.” (MIT Technology Review could not independently confirm Tie’s account.) 

“While I’m concerned about my marriage, I am more concerned about what this means for humanity and the future of science,” Tie posted to her own X account.

A match made in gene-editing heaven

The romance between He and Tie has been playing out in public over the past several weeks through a series of reveals on He’s X feed, which had already started going viral late last year thanks to his style of posting awkward selfies alongside maxims about the untapped potential of heritable gene editing, which involves changing people’s DNA when they’re just embryos in an IVF dish. 

“Human [sic] will no longer be controlled by Darwin’s evolution,” He wrote in March. That post, which showed him standing in an empty lab, gazing into the distance, garnered 9.7 million views. And then, a week later, he collected 13.3 million for this one: “Ethics is holding back scientific innovation and progress.” 

In April, the feed started to change even more drastically. 

He’s posts became increasingly provocative, with better English and a unique sensibility reflecting online culture. “Stop asking for cat girls. I’m trying to cure disease,” the account posted on April 15. Two days later, it followed up: “I literally went to prison for this shit.” 

This shift coincided with the development of his romance with Tie. Tie told us she has visited China three times this year, including a three-week stint in April when she and He got married after a whirlwind romance. She bought him a silver wedding ring made up of intertwined DNA strands. 

The odd behavior on He’s X feed and the sudden marriage have left followers wondering if they are watching a love story, a new kind of business venture, or performance art. It might be all three. 

A wedding photo posted by Tie on the Chinese social media platform Rednote shows the couple sitting at a table in a banquet hall, with a small number of guests. MIT Technology Review has been able to identify several people who attended: Cai Xilei, He’s criminal attorney; Liu Haiyan, an investor and former business partner of He; and Darren Zhu, an artist and Thiel fellow who is making a “speculative” documentary about the biophysicist that will blur the boundaries of fiction and reality.

In the phone interview, Tie declined to say if she and He are legally married. She also confirmed that she celebrated a wedding with someone else in California in July 2024, but said they broke up after a few months; she declined to describe the legal status of that marriage as well. In the phone call, Tie emphasized that her relationship with He is genuine: “I wouldn’t marry him if I wasn’t in love with him.”

An up-and-comer

Years before Tie got into a relationship with He, she was getting plenty of attention in her own right. She became a Thiel fellow in 2015, when she was just 18. That program, started by the billionaire Peter Thiel, gave her a grant of $100,000 to drop out of the University of Toronto and start a gene testing company, Ranomics. 

Soon, she began appearing on the entrepreneur circuit as a “wunderkind” who was featured on a Forbes “30 Under 30” list in 2018 and presented as an up-and-coming venture capitalist on CNN that same year. In 2020, she started her second company, Locke Bio, which focuses on online telemedicine.

Like Thiel, Tie has staked out contrarian positions. She’s called mainstream genomics a scam and described entrepreneurship as a way to escape the hidebound practices of academia and bioethics. “Starting companies is my preferred form of art,” she posted in 2022, linking to an interview on CNBC.

By February 2025, Tie was ready to announce another new venture: the Los Angeles Project, a stealth company she had incorporated in 2023 under her legal name, Cheng Cheng Tie. The company, started with the Texas-based biohacker and artist Josie Zayner, says it will try to modify animal embryos; one goal is to make fluorescent glow-in-the-dark rabbits as pets.

The Los Angeles Project revels in explicitly transgressive aims for embryo editing, including a plan to add horn genes to horse embryos to make a unicorn. That’s consistent with Zayner’s past stunts, which include injecting herself with CRISPR during a livestream. “This is a company that should not exist,” Zayner said in announcing the newly public project.

Although the Los Angeles Project has only a tiny staff with uncertain qualifications, it did raise $1 million from the 1517 Fund, a venture group that supports “dropouts” and whose managers previously ran the Thiel Fellowship. 

Asked for his assessment of Tie, Michael Gibson, a 1517 partner, said in an email that he thinks Tie is “not just exceptional, but profoundly exceptional.” He sent along a list of observations he’d jotted down about Tie before funding her company, which approvingly noted her “hyper-fluent competence” and “low need for social approval,” adding: “Thoughts & actions routinely unconventional.” 

A comeback story

He first gained notoriety in 2018, when he and coworkers at the Southern University of Science & Technology in Shenzhen injected the CRISPR gene editor into several viable human embryos and then transferred these into volunteers, leading to the birth of three girls who he claimed would be resistant to HIV. A subsequent Chinese investigation found he’d practiced medicine illegally while “pursuing fame and fortune.” A court later sentenced him to three years in prison.

He has never apologized for his experiments, except to say he acted “too quickly” and to express regret for the trouble he’d caused his former wife and two daughters. (According to a leaked WeChat post by his ex-wife, she divorced him in 2024 “because of a major fault on his side.”)

Since his release from prison, He has sought to restart his research and convince people that he should be recognized as the “Chinese Darwin,” not “China’s Frankenstein,” as the press once dubbed him. 

But his comeback has been bumpy. He lost a position at Wuchang University of Technology, a small private university in Hubei province, after some negative press. In February 2024, He posted that his application for funding from the Muscular Dystrophy Association was rejected. Last September, he even posted pictures of his torn shirt—which he said was the result of an assault by jealous rivals.

One area of clear success, though, was the growing reach of his X profile, which today has ballooned to more than 130,000 followers. And as his public profile rose, some started encouraging He to find ways to cash in. Andrew Hessel, a futurist and synthetic biologist active in US ethics debates, says he tried to get He invited to give a TED Talk. “His story is unique, and I wanted to see his story get more widespread attention, if only as a cautionary tale,” Hessel says. “I think he is a lightning rod for a generation of people working in life sciences.”

Later, Hessel says, he sent him information on how to join X’s revenue-sharing program. “I said, ‘You have a powerful voice. Have you looked into monetization?’” Hessel says.

By last fall, He was also welcoming visitors to what he called a new lab in Beijing. One person who took him up on the offer was Steve Hsu, a Michigan State physics professor who has started several genetics companies and was visiting Beijing. 

They ended up talking for hours. Hsu says that He expressed a desire to move to the US and start a company, and that he shared his idea for conducting a clinical trial of embryo editing in South Africa, possibly for the prevention of HIV. 

Hsu says he later arranged an invitation for He to give a lecture in the United States. “You are a little radioactive, but things are opening up,” Hsu told him. But He declined the offer because the Chinese government is holding his passport—a common tactic it uses to restrict the movement of sensitive or high-profile figures—and won’t return it to him. “He doesn’t even know why. He literally doesn’t know,” says Hsu. “According to the law, they should give it back to him.”

A curious triangle

Despite any plans by He and Tie to advance the idea, creating designer babies is currently illegal in most of the world, including China and the US. Some experts, however, fret that forbidding the technology will only drive it underground and make it attractive to biohackers or scientists outside the mainstream. 

That’s one reason Tie’s simultaneous connection to two notable biotech renegades—He and Zayner—is worth watching. “There is clearly a triangle forming in some way,” says Hessel.

With Tie stuck outside China and He being kept inside the country, their new gene-editing venture, Cathy Medicine, faces an uncertain future. Tie posted previously on Rednote that she was “helping Dr. He open up the U.S. market” and was planning to return to the US with him for scientific research. But when we spoke on the phone, she declined to disclose their next steps and said their predicament means the project is “out of the window now.”

Even as the couple remain separated, their social media game is stronger than ever. As she waited in Manila, Tie sought help from friends, followers, and the entire internet. She blasted out a tweet to “crypto people,” calling them “too pussy to stand up for things when it matters.” Within hours, someone had created a memecoin called $GENE as a way for the public to support the couple. 

On May 20, Tie posted on X that the donations were by then worth almost $2 million. “I may need to retract my last statement about crypto,” she wrote. 

He’s own X account also posted in support: “I only want to reunite with my wife @CathyTie, and continue my gene editing research.” He added the hashtag $GENE.

Three takeaways about AI’s energy use and climate impacts

This week, we published Power Hungry, a package all about AI and energy. At the center of this package is the most comprehensive look yet at AI’s growing power demand, if I do say so myself. 

This data-heavy story is the result of over six months of reporting by me and my colleague James O’Donnell (and the work of many others on our team). Over that time, with the help of leading researchers, we quantified the energy and emissions impacts of individual queries to AI models and tallied what it all adds up to, both right now and for the years ahead. 

There’s a lot of data to dig through, and I hope you’ll take the time to explore the whole story. But in the meantime, here are three of my biggest takeaways from working on this project. 

1. The energy demands of AI are anything but constant. 

If you’ve heard estimates of AI’s toll, it’s probably a single number associated with a query, likely to OpenAI’s ChatGPT. One popular estimate is that writing an email with ChatGPT uses 500 milliliters (or roughly a bottle) of water. But as we started reporting, I was surprised to learn just how much the details of a query can affect its energy demand. No two queries are the same—for several reasons, including their complexity and the particulars of the model being queried.

One key caveat here is that we don’t know much about “closed-source” models such as OpenAI’s ChatGPT and Google’s Gemini, because the companies hold back the details of how they work. Instead, we worked with researchers who measured open-source AI models, whose code and weights are publicly available. 

With open-source models, it’s possible to directly measure the energy used to respond to a query rather than just guess. The researchers generated text, images, and video with these models and measured the energy the chips running them required to perform each task.  

Even within text responses, there was a pretty large range of energy needs. A complicated travel itinerary consumed nearly 10 times as much energy as a simple request for a few jokes, for example. An even bigger difference comes from the size of the model used: larger models with more parameters used up to 70 times more energy than smaller ones for the same prompts. 

As you might imagine, there’s also a big difference among text, images, and video. Videos generally took hundreds of times more energy to generate than text responses. 

2. What’s powering the grid will greatly affect the climate toll of AI’s energy use. 

As the resident climate reporter on this project, I was excited to take the expected energy toll and translate it into an expected emissions burden. 

Powering a data center with a nuclear reactor or a whole bunch of solar panels and batteries will not affect our planet the same way as burning mountains of coal. To quantify this idea, we used a figure called carbon intensity, a measure of how dirty a unit of electricity is on a given grid. 

We found that the same exact query, with the same exact energy demand, will have a very different climate impact depending on what the data center is powered by, and that depends on the location and the time of day. For example, querying a data center in West Virginia could cause nearly twice the emissions of querying one in California, according to calculations based on average data from 2024.
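The arithmetic behind that comparison is simple: a query’s emissions are its energy use multiplied by the carbon intensity of the grid that serves it. Here is a minimal sketch; the energy and intensity figures are hypothetical placeholders chosen to mirror the roughly two-to-one gap described above, not real 2024 grid data.

```python
# Emissions = energy used (kWh) x carbon intensity of the grid (g CO2 per kWh).
# All numbers below are illustrative assumptions, not measured values.
ENERGY_PER_QUERY_KWH = 0.003  # assumed energy for one example query

GRID_INTENSITY_G_PER_KWH = {
    "west_virginia": 800,  # hypothetical fossil-heavy grid
    "california": 400,     # hypothetical cleaner grid
}

def query_emissions(energy_kwh: float, region: str) -> float:
    """Grams of CO2 emitted by serving one query in the given region."""
    return energy_kwh * GRID_INTENSITY_G_PER_KWH[region]

for region in GRID_INTENSITY_G_PER_KWH:
    print(region, round(query_emissions(ENERGY_PER_QUERY_KWH, region), 2))
```

With these placeholder intensities, the identical query emits twice as much CO2 in the dirtier region, which is the whole point: the model, the prompt, and the energy draw are unchanged; only the grid differs.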

This point shows why it matters where tech giants are building data centers, what the grid looks like in their chosen locations, and how that might change with more demand from the new infrastructure. 

3. There is still so much that we don’t know when it comes to AI and energy. 

Our reporting resulted in estimates that are some of the most specific and comprehensive out there. But ultimately, we still have no idea what many of the biggest, most influential models are adding up to in terms of energy and emissions. None of the companies we reached out to were willing to provide numbers during our reporting. Not one.

Adding up our estimates can only go so far, in part because AI is increasingly everywhere. While today you might generally have to go to a dedicated site and type in questions, in the future AI could be stitched into the fabric of our interactions with technology. (See my colleague Will Douglas Heaven’s new story on Google’s I/O showcase: “By putting AI into everything, Google wants to make it invisible.”)

AI could be one of the major forces that shape our society, our work, and our power grid. Knowing more about its consequences could be crucial to planning our future. 

To dig into our reporting, give the main story a read. And if you’re looking for more details on how we came up with our numbers, you can check out this behind-the-scenes piece.

There are also some great related stories in this package, including one from James Temple on the data center boom in the Nevada desert, one from David Rotman about how AI’s rise could entrench natural gas, and one from Will Douglas Heaven on a few technical innovations that could help make AI more efficient. Oh, and I also have a piece on why nuclear isn’t the easy answer some think it is.

Find them, and the rest of the stories in the package, here.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A new atomic clock in space could help us measure elevations on Earth

In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side.

The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as its zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to “sea level,” but Earth’s seas are actually not level. “The sea level is varying from location to location,” says Laura Sanchez, a geodesist at the Technical University of Munich in Germany. (Geodesists study our planet’s shape, orientation, and gravitational field.) While the two teams knew about the 27-centimeter difference, they mixed up which side was higher. Ultimately, Germany lowered its side to complete the bridge. 

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It’s the third-dimensional counterpart to latitude and longitude, says Sanchez, who helps coordinate the standardization effort. 

Now, a decade after its adoption, geodesists are looking to update the standard—by using the most precise clock ever to fly in space.

That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone. 

Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, “will not lose or gain a second in 300 million years,” says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device. (In 2022, China installed a potentially stabler clock on its space station, but the Chinese government has not publicly shared the clock’s performance after launch, according to Cacciapuoti.) 
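Those drift figures can be restated as fractional stability—the fraction of a second gained or lost per second of elapsed time. A back-of-the-envelope conversion (assuming a 365.25-day year):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

def fractional_stability(drift_seconds: float, over_years: float) -> float:
    """Seconds gained or lost per second of elapsed time."""
    return drift_seconds / (over_years * SECONDS_PER_YEAR)

gps = fractional_stability(1, 3_000)         # GPS-class atomic clocks
aces = fractional_stability(1, 300_000_000)  # ACES
print(f"GPS:  {gps:.1e}")   # about 1 part in 10^11
print(f"ACES: {aces:.1e}")  # about 1 part in 10^16
```

In other words, ACES is roughly 100,000 times stabler than the clocks flying on today’s GPS satellites.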

From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: to perform tests of fundamental physics. 

But it’s of special interest for geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world.

Alignment over this “zero point” (basically where you stick the end of the tape measure to measure elevation) is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest’s height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters. 

A worker performs tests on ACES in a cleanroom at the Kennedy Space Center in Florida.
ESA-T. PEIGNIER

To create a standard zero point, geodesists create a model of Earth known as a geoid. Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude.

However, the current model lacks precision, particularly in Africa and South America, says Sanchez. Today’s geoid has been built using instruments that directly measure Earth’s gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used to get finer details via expensive ground- and airplane-based surveys. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and Sahara Desert. 

To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it’s built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you’d be off by at most five centimeters if you were building a bridge spanning North America. 

To improve the geoid’s precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea works according to Einstein’s theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: Two astronauts spend a few hours in extreme gravity near a black hole to return to a shipmate who has aged more than two decades. Similarly, Earth’s gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you’re standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet. 

A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth’s gravitational field much more precisely, and consequently create a more precise geoid. The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation. 
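The relationship the clock comparisons rely on can be sketched with the weak-field redshift formula: two clocks separated by a small height difference Δh near Earth’s surface tick at rates differing by a fraction of roughly gΔh/c².

```python
G = 9.81       # surface gravity, m/s^2
C = 2.998e8    # speed of light, m/s

def fractional_shift(delta_h_m: float) -> float:
    """Approximate gravitational redshift g*dh/c^2 between two nearby clocks."""
    return G * delta_h_m / C**2

# A 1 cm height difference shifts clock rates by about 1 part in 10^18 --
# the level that today's best optical clocks can resolve.
print(f"{fractional_shift(0.01):.2e}")
```

Resolving a shift of one part in 10^18 is what makes centimeter-level elevation mapping from clock comparisons conceivable at all.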

“We want to have the accuracy level at the one-centimeter or sub-centimeter level,” says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, which they currently do with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive.

ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network. It will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller’s team could then make more precise measurements at the location of those clocks.

These early studies will pave the way for work connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid. The best clocks today are some 50 times more precise than ACES. “The exciting thing is that clocks are getting even stabler,” says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that in order for geodesists to take advantage of the clocks’ precision, they will also have to improve their mathematical models of Earth’s gravitational field. 

Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a clock the size of a laboratory into the size of a small fridge. “It was a huge engineering effort,” says Cacciapuoti, who has been working on the project since he began at ESA 20 years ago. 

Geodesists expect they’ll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time.

Anthropic’s new hybrid AI model can work on tasks autonomously for hours at a time

Anthropic has announced two new AI models that it claims represent a major step toward making AI agents truly useful.

AI agents trained on Claude Opus 4, the company’s most powerful model to date, raise the bar for what such systems are capable of by tackling difficult tasks over extended periods of time and responding more usefully to user instructions, the company says.

Claude Opus 4 has been built to execute complex tasks that involve completing thousands of steps over several hours. For example, it created a guide for the video game Pokémon Red while playing it for more than 24 hours straight. The company’s previously most powerful model, Claude 3.7 Sonnet, was capable of playing for just 45 minutes, says Dianne Penn, product lead for research at Anthropic.

Similarly, the company says that one of its customers, the Japanese technology company Rakuten, recently deployed Claude Opus 4 to code autonomously for close to seven hours on a complicated open-source project. 

Anthropic achieved these advances by improving the model’s ability to create and maintain “memory files” to store key information. This enhanced ability to “remember” makes the model better at completing longer tasks.

“We see this model generation leap as going from an assistant to a true agent,” says Penn. “While you still have to give a lot of real-time feedback and make all of the key decisions for AI assistants, an agent can make those key decisions itself. It allows humans to act more like a delegator or a judge, rather than having to hold these systems’ hands through every step.”

While Claude Opus 4 will be limited to paying Anthropic customers, a second model, Claude Sonnet 4, will be available for both paid and free tiers of users. Opus 4 is being marketed as a powerful, large model for complex challenges, while Sonnet 4 is described as a smart, efficient model for everyday use.  

Both of the new models are hybrid, meaning they can offer a swift reply or a deeper, more reasoned response depending on the nature of a request. While they calculate a response, both models can search the web or use other tools to improve their output.

AI companies are currently locked in a race to create truly useful AI agents that are able to plan, reason, and execute complex tasks both reliably and free from human supervision, says Stefano Albrecht, director of AI at the startup DeepFlow and coauthor of Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. Often this involves autonomously using the internet or other tools. There are still safety and security obstacles to overcome. AI agents powered by large language models can act erratically and perform unintended actions—which becomes even more of a problem when they’re trusted to act without human supervision.

“The more agents are able to go ahead and do something over extended periods of time, the more helpful they will be, if I have to intervene less and less,” he says. “The new models’ ability to use tools in parallel is interesting—that could save some time along the way, so that’s going to be useful.”

As an example of the sorts of safety issues AI companies are still tackling, agents can end up taking unexpected shortcuts or exploiting loopholes to reach the goals they’ve been given. For example, they might book every seat on a plane to ensure that their user gets a seat, or resort to creative cheating to win a chess game. Anthropic says it managed to reduce this behavior, known as reward hacking, in both new models by 65% relative to Claude Sonnet 3.7. It achieved this by more closely monitoring problematic behaviors during training, and improving both the AI’s training environment and the evaluation methods.

AI could keep us dependent on natural gas for decades to come

The thousands of sprawling acres in rural northeast Louisiana had gone unwanted for nearly two decades. Louisiana authorities bought the land in Richland Parish in 2006 to promote economic development in one of the poorest regions in the state. For years, they marketed the former agricultural fields as the Franklin Farm mega site, first to auto manufacturers (no takers) and after that to other industries that might want to occupy more than a thousand acres just off the interstate.


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


So it’s no wonder that state and local politicians were exuberant when Meta showed up. In December, the company announced plans to build a massive $10 billion data center for training its artificial-intelligence models at the site, with operations to begin in 2028. “A game changer,” declared Governor Jeff Landry, citing 5,000 construction jobs and 500 jobs at the data center that are expected to be created and calling it the largest private capital investment in the state’s history. From a rural backwater to the heart of the booming AI revolution!

The AI data center also promises to transform the state’s energy future. Stretching more than a mile in length, it will be Meta’s largest in the world, and it will have an enormous appetite for electricity, requiring two gigawatts for computation alone (the electricity for cooling and other building needs will add to that). When it’s up and running, it will be the equivalent of suddenly adding a decent-size city to the region’s grid—one that never sleeps and needs a steady, uninterrupted flow of electricity.
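To put two gigawatts of continuous draw in context, a rough conversion to annual energy and an equivalent number of homes looks like this (the per-household figure is an assumed round number near the oft-cited US average, not a number from Meta or Entergy):

```python
POWER_KW = 2_000_000            # 2 GW expressed in kilowatts
HOURS_PER_YEAR = 8760
AVG_HOME_KWH_PER_YEAR = 10_800  # assumed rough US household average

annual_kwh = POWER_KW * HOURS_PER_YEAR      # ~1.75e10 kWh, i.e. ~17.5 TWh
homes = annual_kwh / AVG_HOME_KWH_PER_YEAR  # on the order of 1.6 million homes
print(f"{annual_kwh / 1e9:.1f} TWh/yr, ~{homes / 1e6:.1f} million homes")
```

Roughly a million and a half households’ worth of electricity, drawn around the clock, is indeed a decent-size city.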

To power the data center, Entergy aims to spend $3.2 billion to build three large natural-gas power plants with a total capacity of 2.3 gigawatts and upgrade the grid to accommodate the huge jump in anticipated demand. In its filing to the state’s power regulatory agency, Entergy acknowledged that natural-gas plants “emit significant amounts of CO2” but said the energy source was the only affordable choice given the need to quickly meet the 24-7 electricity demand from the huge data center.

Meta said it will work with Entergy to eventually bring online at least 1.5 gigawatts of new renewables, including solar, but that it had not yet decided which specific projects to fund or when those investments will be made. Meanwhile, the new natural-gas plants, which are scheduled to be up and running starting in 2028 and will have a typical lifetime of around 30 years, will further lock in the state’s commitment to the fossil fuel.

The development has sparked interest from the US Congress; last week, Sheldon Whitehouse, the ranking member of the Senate Committee on Environment and Public Works, issued a letter to Meta that called out the company’s plan to power its data center with “new and unabated natural gas generation” and said its promises to offset the resulting emissions “by funding carbon capture and a solar project are vague and offer little reassurance.”

The choice of natural gas as the go-to solution to meet the growing demand for power from AI is not unique to Louisiana. The fossil fuel is already the country’s chief source of electricity generation, and large natural-gas plants are being built around the country to feed electricity to new and planned AI data centers. While some climate advocates have hoped that cleaner renewable power would soon overtake it, the booming power demand from data centers is all but wiping out any prospect that the US will wean itself off natural gas anytime soon.

The reality on the ground is that natural gas is “the default” to meet the exploding power demand from AI data centers, says David Victor, a political scientist at the University of California, San Diego, and co-director of its Deep Decarbonization Project. “The natural-gas plant is the thing that you know how to build, you know what it’s going to cost (more or less), and you know how to scale it and get it approved,” says Victor. “Even for [AI] companies that want to have low emissions profiles and who are big pushers of low or zero carbon, they won’t have a choice but to use gas.”

The preference for natural gas is particularly pronounced in the American South, where plans for multiple large gas-fired plants are in the works in states such as Virginia, North Carolina, South Carolina, and Georgia. Utilities in those states alone are planning some 20 gigawatts of new natural-gas power plants over the next 15 years, according to a recent report. And much of the new demand—particularly in Virginia, South Carolina, and Georgia—is coming from data centers; in those three states, data centers account for around 65% to 85% of projected load growth.

“It’s a long-term commitment in absolutely the wrong direction,” says Greg Buppert, a senior attorney at the Southern Environmental Law Center in Charlottesville, Virginia. If all the proposed gas plants get built in the South over the next 15 years, he says, “we’ll just have to accept that we won’t meet emissions reduction goals.”

But even as it looks more and more likely that natural gas will remain a sizable part of our energy future, questions abound over just what its continued dominance will look like.

For one thing, no one is sure exactly how much electricity AI data centers will need in the future and how large an appetite companies will have for natural gas. Demand for AI could fizzle. Or AI companies could make a concerted effort to shift to renewable energy or nuclear power. Such possibilities mean that the US could be on a path to overbuild natural-gas capacity, which would leave regions saddled with unneeded and polluting fossil-fuel dinosaurs—and residents footing soaring electricity bills to pay off today’s investments.

The good news is that such risks could likely be managed over the next few years, if—and it’s a big if—AI companies are more transparent about how flexible they can be in their seemingly insatiable energy demands.

The reign of natural gas

Natural gas in the US is cheap and abundant these days. Two decades ago, huge reserves were found in shale deposits scattered across the country. In 2008, as fracking started to make it possible to extract large quantities of the gas from shale, natural gas was selling for $13 per million Btu (a measure of thermal energy); last year, it averaged just $2.21, the lowest annual price (adjusting for inflation) ever reported, according to the US Energy Information Administration (EIA).

Around 2016, natural gas overtook coal as the main fuel for electricity generation in the US. And today—despite the rapid rise of solar and wind power, and well-deserved enthusiasm for the falling price of such renewables—natural gas is still king, accounting for around 40% of electricity generated in the US. In Louisiana, which is also a big producer, that share is some 72%, according to a recent audit.

Natural gas burns much cleaner than coal, producing roughly half as much carbon dioxide. In the early days of the gas revolution, many environmental activists and progressive politicians touted it as a valuable “bridge” to renewables and other sources of clean energy. And by some calculations, natural gas has fulfilled that promise. The power sector has been one of the few success stories in lowering US emissions, thanks to its use of natural gas as a replacement for coal.  

But natural gas still produces a lot of carbon dioxide when it is burned in conventionally equipped power plants. And fracking causes local air and water pollution. Perhaps most worrisome, drilling and pipelines are releasing substantial amounts of methane, the main ingredient in natural gas, both accidentally and by intentional venting. Methane is a far more potent greenhouse gas than carbon dioxide, and the emissions are a growing concern to climate scientists, albeit one that’s difficult to quantify.

Still, carbon emissions from the power sector will likely continue to drop as coal is further squeezed out and more renewables get built, according to the Rhodium Group, a research consultancy. But Rhodium also projects that if electricity demand from data centers remains high and natural-gas prices low, the fossil fuel will remain the dominant source of power generation at least through 2035 and the transition to cleaner electricity will be much delayed. Rhodium estimates that the continued reign of natural gas will lead to an additional 278 million metric tons of annual US carbon emissions by 2035 (roughly equivalent to the emissions from a large US state such as Florida), relative to a future in which the use of fossil fuel gradually winds down.

Our addiction to natural gas, however, doesn’t have to be a total climate disaster, at least over the longer term. Large AI companies could use their vast leverage to insist that utilities install carbon capture and sequestration (CCS) at power plants and use natural gas sourced with limited methane emissions.

Entergy, for one, says its new gas turbines will be able to incorporate CCS through future upgrades. And Meta says it will help to fund the installation of CCS equipment at one of Entergy’s existing natural-gas power plants in southern Louisiana to help prove out the technology.  

But the transition to clean natural gas is a hope that will take decades to realize. Meanwhile, utilities across the country are facing a more imminent and practical challenge: how to meet the sudden demand for gigawatts more power in the next few years without inadvertently building far too much capacity. For many, adding more natural-gas power plants might seem like the safe bet. But what if the explosion in AI demand doesn’t show up?

Times of stress

AI companies tout the need for massive, power-hungry data centers. But estimates of just how much energy it will actually take to train and run AI models vary wildly. And the technology keeps changing, sometimes seemingly overnight. DeepSeek, the new Chinese model that debuted in January, may or may not signal a future of energy-efficient AI, but it certainly suggests that such advances are possible. Maybe we will find ways to use far more energy-efficient hardware. Or maybe the AI revolution will peter out and many of the massive data centers that companies think they'll need will never get built. There are already signs that too many have been constructed in China, and clues that the same may be beginning to happen in the US.

Despite the uncertainty, power providers have the task of drawing up long-term plans for investments to accommodate projected demand. Too little capacity and their customers face blackouts; too much and those customers face outsize electricity bills to fund investments in unneeded power.

There could be a way to lessen the risk of overbuilding natural-gas power, however. On average, plenty of power is already available around the country and on most regional grids: utilities typically use only about 53% of their available capacity over the course of a year, according to a Duke study. The problem is that utilities must be prepared for the few hours when demand spikes—say, because of severe winter weather or a summer heat wave.

The soaring demand from AI data centers is prompting many power providers to plan new capacity to make sure they have plenty of what Tyler Norris, a fellow at Duke's Nicholas School of the Environment, and his colleagues call "headroom" to meet any spikes in demand. But after analyzing data from power systems across the country, Norris and his coauthors found that if large AI facilities cut back their electricity use during hours of peak demand, many regional power grids could accommodate those AI customers without adding new generation capacity.

Even a moderate level of flexibility would make a huge difference. The Duke researchers estimate that if data centers cut their electricity use by roughly half for just a few hours a year, utilities could handle some 76 additional gigawatts of new demand. That means power providers could effectively absorb the 65 or so additional gigawatts that, by some predictions, data centers will need by 2029.

“The prevailing assumption is that data centers are 100% inflexible,” says Norris. That is, that they need to run at full power all the time. But Norris says AI data centers, particularly ones that are training large foundation models (such as Meta’s facility in Richland Parish), can avoid running at full capacity or shift their computation loads to other data centers around the country—or even ramp up their own backup power—during times when a grid is under stress.

The increased flexibility could allow companies to get AI data centers up and running faster, without waiting for new power plants and upgrades to transmission lines—which can take years to get approved and built. It could also, Norris noted in testimony to the US Congress in early March, provide at least a short-term reprieve on the rush to build more natural-gas power, buying time for utilities to develop and plan for cleaner technologies such as advanced nuclear and enhanced geothermal. It could, he testified, prevent “a hasty overbuild of natural-gas infrastructure.”

AI companies have expressed some interest in their ability to shift around demand for power. But there are still plenty of technology questions around how to make it happen. Late last year, EPRI (the Electric Power Research Institute), a nonprofit R&D group, started a three-year collaboration with power providers, grid operators, and AI companies, including Meta and Google, to figure it out. "The potential is very large," says David Porter, the EPRI vice president who runs the project, but we must show it works "beyond just something on a piece of paper or a computer screen."

Porter estimates that there are typically 80 to 90 hours a year when a local grid is under stress and it would help for a data center to reduce its energy use. But, he says, AI data centers still need to figure out how to throttle back at those times, and grid operators need to learn how to suddenly subtract and then add back hundreds of megawatts of electricity without disrupting their systems. “There’s still a lot of work to be done so that it’s seamless for the continuous operation of the data centers and seamless for the continuous operation of the grid,” he says.
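Porter's figures suggest that, in energy terms, this kind of flexibility is cheap. A rough back-of-envelope sketch makes the point; the 1-gigawatt facility size and the 50% curtailment level are illustrative assumptions (only the 80-to-90-hour stress window comes from Porter):

```python
# Back-of-envelope: how much annual energy does a data center give up
# by throttling back during grid stress? Assumptions: a hypothetical
# 1 GW facility that cuts its load by half for 85 hours a year (the
# midpoint of Porter's 80-90 stress hours).
facility_mw = 1_000       # assumed facility load, MW (illustrative)
stress_hours = 85         # midpoint of 80-90 hours per year
curtail_fraction = 0.5    # assumed 50% load reduction (illustrative)

annual_mwh = facility_mw * 8_760  # energy if it ran flat out all year
curtailed_mwh = facility_mw * curtail_fraction * stress_hours
share = curtailed_mwh / annual_mwh

print(f"Energy forgone: {curtailed_mwh:,.0f} MWh of {annual_mwh:,.0f} MWh")
print(f"That is about {share:.2%} of annual consumption")
```

Under these assumptions the facility gives up well under 1% of its annual energy use, which is why curtailment for a few dozen hours a year can free up so much grid headroom.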

Footing the bill

Ultimately, getting AI data centers to be more flexible in their power demands will require more than a technological fix. It will require a shift in how AI companies work with utilities and local communities, providing them with more information and insights into actual electricity needs. And it will take aggressive regulators to make sure utilities are rigorously evaluating the power requirements of data centers rather than just reflexively building more natural-gas plants.

“The most important climate policymakers in the country right now are not in Washington. They’re in state capitals, and these are public utility commissioners,” says Costa Samaras, the director of Carnegie Mellon University’s Scott Institute for Energy Innovation.

In Louisiana, those policymakers are the elected officials at the Louisiana Public Service Commission, who are expected to rule later this year on Entergy's proposed new gas plants and grid upgrades. The LPSC commissioners will decide whether Entergy's arguments about the huge energy requirements of Meta's data center and its need for full 24/7 power leave no alternative to natural gas.

In the application it filed last fall with LPSC, Entergy said natural-gas power was essential for it to meet demand “throughout the day and night.” Teaming up solar power with battery storage could work “in theory” but would be “prohibitively costly.” Entergy also ruled out nuclear, saying it would take too long and cost too much.

Others are not satisfied with the utility’s judgment. In February, the New Orleans–based Alliance for Affordable Energy and the Union of Concerned Scientists filed a motion with the Louisiana regulators arguing that Entergy did not do a rigorous market evaluation of its options, as required by the commission’s rules. Part of the problem, the groups said, is that Entergy relied on “unsubstantiated assertions” from Meta on its load needs and timeline.

“Entergy is saying [Meta] needs around-the-clock power,” says Paul Arbaje, an analyst for the climate and energy program at the Union of Concerned Scientists. “But we’re just being asked to take [Entergy’s] word for it. Regulators need to be asking tough questions and not just assume that these data centers need to be operated at essentially full capacity all the time.” And, he suggests, if the utility had “started to poke holes at the assumptions that are sometimes taken as a given,” it “would have found other cleaner options.”      

In an email response to MIT Technology Review, Entergy said that it has discussed the operational aspects of the facility with Meta, but “as with all customers, Entergy Louisiana will not discuss sensitive matters on behalf of their customers.” In a letter filed with the state’s regulators in early April, Meta said Entergy’s understanding of its energy needs is, in fact, accurate.

The February motion also raised concerns over who will end up paying for the new gas plants. Entergy says Meta has signed a 15-year supply contract meant to help cover the costs of building and running the power plants, but the utility did not respond to requests from MIT Technology Review for further details of the deal, including what happens if Meta wants to terminate the contract early.

Meta referred MIT Technology Review’s questions about the contract to Entergy but says its policy is to cover the full cost that utilities incur to serve its data centers, including grid upgrades. It also says it is spending over $200 million to support the Richland Parish data centers with new infrastructure, including roads and water systems. 

Not everyone is convinced. The Alliance for Affordable Energy, which works on behalf of Louisiana residents, says the large investments in new gas turbines could mean future rate hikes in a state where residents already pay high electricity bills and suffer from one of the country's most unreliable grids. Of special concern is what happens after the 15 years.

“Our biggest long-term concern is that in 15 years, residential ratepayers [and] small businesses in Louisiana will be left holding the bag for three large gas generators,” says Logan Burke, the alliance’s executive director.

Indeed, consumers across the country have good reasons to fear that their electricity bills will go up as utilities look to meet the increased demand from AI data centers by building new generation capacity. In a paper posted in March, researchers at Harvard Law School argued that utilities “are now forcing the public to pay for infrastructure designed to supply a handful of exceedingly wealthy corporations.”

The Harvard authors write, “Utilities tell [public utility commissions] what they want to hear: that the deals for Big Tech isolate data center energy costs from other ratepayers’ bills and won’t increase consumers’ power prices.” But the complexity of the utilities’ payment data and lack of transparency in the accounting, they say, make verifying this claim “all but impossible.”

The boom in AI data centers is making Big Tech a player in our energy infrastructure and electricity future in a way unimaginable just a few years ago. At their best, AI companies could greatly facilitate the move to cleaner energy by acting as reliable and well-paying customers that provide funding that utilities can use to invest in a more robust and flexible electricity grid. This change can happen without burdening other electricity customers with additional risks and costs. But it will take AI companies committed to that vision. And it will take state regulators who ask tough questions and don’t get carried away by the potential investments being dangled by AI companies.

Giant new AI data centers like the one in Richland Parish could indeed be an economic boon, providing new jobs, but residents deserve transparency and input into the negotiations. This is, after all, public infrastructure. Meta may come and go, but Louisiana's residents will have to live with—and possibly pay for—the changes in the decades to come.

AI’s energy impact is still small—but how we handle it is huge

With seemingly no limit to the demand for artificial intelligence, everyone in the energy, AI, and climate fields is justifiably worried. Will there be enough clean electricity to power AI and enough water to cool the data centers that support this technology? These are important questions with serious implications for communities, the economy, and the environment. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


But the question of AI's energy use portends even bigger issues about what we need to do to address climate change over the next several decades. If we can't work out how to handle AI's demands, we won't be able to handle the broader electrification of the economy, and the climate risks we face will increase.

Innovation in IT got us to this point. Graphics processing units (GPUs) that power the computing behind AI have fallen in cost by 99% since 2006. There was similar concern about the energy use of data centers in the early 2010s, with wild projections of growth in electricity demand. But gains in computing power and energy efficiency not only proved these projections wrong but enabled a 550% increase in global computing capability from 2010 to 2018 with only minimal increases in energy use. 

In the late 2010s, however, the trends that had saved us began to break. As the accuracy of AI models dramatically improved, the electricity needed by data centers also started increasing faster; data centers now account for 4.4% of total demand, up from 1.9% in 2018. They consume more than 10% of the electricity supply in six US states. In Virginia, which has emerged as a hub of data center activity, that figure is 25%.

Projections about the future demand for energy to power AI are uncertain and range widely, but in one study, Lawrence Berkeley National Laboratory estimated that data centers could represent 6% to 12% of total US electricity use by 2028. Communities and companies will notice this type of rapid growth in electricity demand. It will put pressure on energy prices and on ecosystems. The projections have resulted in calls to build lots of new fossil-fired power plants or bring older ones out of retirement. In many parts of the US, the demand will likely result in a surge of natural-gas-powered plants.

It’s a daunting situation. Yet when we zoom out, the projected electricity use from AI is still pretty small. The US generated about 4,300 billion kilowatt-hours last year. We’ll likely need another 1,000 billion to 1,200 billion or more in the next decade—a 24% to 29% increase. Almost half the additional electricity demand will be from electrified vehicles. Another 30% is expected to be from electrified technologies in buildings and industry. Innovation in vehicle and building electrification also advanced in the last decade, and this shift will be good news for the climate, for communities, and for energy costs.

The remaining 22% of new electricity demand is estimated to come from AI and data centers. While it represents a smaller piece of the pie, it’s the most urgent one. Because of their rapid growth and geographic concentration, data centers are the electrification challenge we face right now—the small stuff we have to figure out before we’re able to do the big stuff like vehicles and buildings.
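The split described above can be tallied in a quick sketch. The 1,100 billion kWh figure is simply the midpoint of the 1,000-to-1,200 range given earlier, and the 48% vehicle share is inferred from "almost half" so that the three sectors sum to 100%; both are assumptions for illustration:

```python
# Rough tally of projected new US electricity demand over the next
# decade, using the approximate sector shares given in the text.
baseline_bkwh = 4_300    # US generation last year, billion kWh
new_demand_bkwh = 1_100  # assumed midpoint of the 1,000-1,200 range

shares = {
    "electrified vehicles": 0.48,    # "almost half" (inferred)
    "buildings and industry": 0.30,  # "another 30%"
    "AI and data centers": 0.22,     # "the remaining 22%"
}

for sector, share in shares.items():
    print(f"{sector}: ~{share * new_demand_bkwh:,.0f} billion kWh")

growth = new_demand_bkwh / baseline_bkwh
print(f"Overall growth: ~{growth:.0%}")  # lands inside the stated 24%-29% range
```

On these numbers, AI and data centers account for roughly 240 billion kWh of new annual demand, against more than 850 billion kWh for vehicles, buildings, and industry combined.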

We also need to understand what the energy consumption and carbon emissions associated with AI are buying us. While the impacts from producing semiconductors and powering AI data centers are important, they are likely small compared with the positive or negative effects AI may have on applications such as the electricity grid, the transportation system, buildings and factories, or consumer behavior. Companies could use AI to develop new materials or batteries that would better integrate renewable energy into the grid. But they could also use AI to make it easier to find more fossil fuels. The claims about potential benefits for the climate are exciting, but they need to be continuously verified and will need support to be realized.

This isn’t the first time we’ve faced challenges coping with growth in electricity demand. In the 1960s, US electricity demand was growing at more than 7% per year. In the 1970s that growth was nearly 5%, and in the 1980s and 1990s it was more than 2% per year. Then, starting in 2005, we basically had a decade and a half of flat electricity growth. Most projections for the next decade put our expected growth in electricity demand at around 2% again—but this time we’ll have to do things differently. 

To manage these new energy demands, we need a “Grid New Deal” that leverages public and private capital to rebuild the electricity system for AI with enough capacity and intelligence for decarbonization. New clean energy supplies, investment in transmission and distribution, and strategies for virtual demand management can cut emissions, lower prices, and increase resilience. Data centers bringing clean electricity and distribution system upgrades could be given a fast lane to connect to the grid. Infrastructure banks could fund new transmission lines or pay to upgrade existing ones. Direct investment or tax incentives could encourage clean computing standards, workforce development in the clean energy sector, and open data transparency from data center operators about their energy use so that communities can understand and measure the impacts.

In 2022, the White House released a Blueprint for an AI Bill of Rights that provided principles to protect the public’s rights, opportunities, and access to critical resources from being restricted by AI systems. To the AI Bill of Rights, we humbly offer a climate amendment, because ethical AI must be climate-safe AI. It’s a starting point to ensure that the growth of AI works for everyone—that it doesn’t raise people’s energy bills, adds more clean power to the grid than it uses, increases investment in the power system’s infrastructure, and benefits communities while driving innovation.

By grounding the conversation about AI and energy in context about what is needed to tackle climate change, we can deliver better outcomes for communities, ecosystems, and the economy. The growth of electricity demand for AI and data centers is a test case for how society will respond to the demands and challenges of broader electrification. If we get this wrong, the likelihood of meeting our climate targets will be extremely low. This is what we mean when we say the energy and climate impacts from data centers are small, but they are also huge.

Costa Samaras is the Trustee Professor of Civil and Environmental Engineering and director of the Scott Institute for Energy Innovation at Carnegie Mellon University.

Emma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University.

Ramayya Krishnan is dean of the Heinz College of Information Systems and Public Policy and the William W. and Ruth F. Cooper Professor of Management Science and Information Systems at Carnegie Mellon University.