The AI Hype Index: DeepSeek mania, Israel’s spying tool, and cheating at chess

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

While AI models are certainly capable of creating interesting and sometimes entertaining material, their output isn’t necessarily useful. Google DeepMind is hoping that its new robotics model could make machines more receptive to verbal commands, paving the way for us to simply speak orders to them aloud. Elsewhere, the Chinese startup Monica has created Manus, which it claims is the very first general AI agent to complete truly useful tasks. And burnt-out coders are allowing AI to take the wheel entirely in a new practice dubbed “vibe coding.”

China built hundreds of AI data centers to catch the AI boom. Now many stand unused.

A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of China’s AI craze. 

At that time, traders in his circle bragged about securing shipments of high-performance Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100, a chip essential for training AI models, could sell for up to 200,000 yuan ($28,000) on the black market. 

Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. “It seems like everyone is selling, but few are buying,” he says.

Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Review—including contractors, an executive at a GPU server company, and project managers—most of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnian and 36Kr report that up to 80% of China’s newly built computing resources remain unused.

Renting out GPUs to companies that need them for training AI models—the main business model for the new wave of data centers—was once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.

“The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s needs,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation. 

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”

A chaotic building boom

When ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centers—a term coined to describe AI-focused data centers.

In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope they’d stimulate the economy and establish their region as a key AI hub. 

However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. In 2024 alone, over 144 companies registered with the Cyberspace Administration of China—the country’s central internet regulator—to develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.

China’s political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick results—often to gain favor with higher-ups—rather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.

The post-pandemic economic downturn only intensified this dynamic. With China’s real estate sector—once the backbone of local economies—slumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the country’s once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.

“AI felt like a shot of adrenaline,” says Li. “A lot of money that used to flow into real estate is now going into AI data centers.”

By 2023, major corporations—many of them with little prior experience in AI—began partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firm—hardly the names one would associate with cutting-edge AI technology.

This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards. 

“Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale,” says Goodrich. “This is all really state-of-the-art computer engineering. I’d be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and don’t offer the stability that a company like DeepSeek would want.”

To make matters worse, project leaders often relied on middlemen and brokers—some of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say. 

By the end of 2024, the excitement that once surrounded China’s data center boom was curdling into disappointment. The reason is simple: GPU rental is no longer a particularly lucrative business.

The DeepSeek reckoning

The business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost. 

“DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from ‘Who can make the best large language model?’ to ‘Who can use them better?’” says Hancheng Cao, an assistant professor of information systems at Emory University. 

The rise of reasoning models like DeepSeek’s R1 and OpenAI’s ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users’ queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff. 
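As a rough illustration of why distance matters, here is a back-of-envelope sketch of the minimum network delay added by fiber length alone. The 200,000 km/s figure for light in optical fiber and the sample distances are assumptions for illustration, not reported figures, and real routing, switching, and queuing add further delay.

```python
# Back-of-envelope estimate of network latency added by data center distance.
# Assumes light in optical fiber travels at roughly 200,000 km/s (about two-thirds
# of its speed in a vacuum) and ignores routing and queuing delays.

FIBER_SPEED_KM_PER_MS = 200  # ~200,000 km/s expressed in km per millisecond

def round_trip_delay_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over a fiber path of the given length."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A data center next to a tech hub vs. one roughly 2,000 km away inland
for distance in (50, 500, 2000):
    print(f"{distance:>5} km -> at least {round_trip_delay_ms(distance):.1f} ms per round trip")
```

An interactive reasoning model can require many such round trips per session, so even tens of milliseconds per hop add up quickly.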

This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients. 

Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloads—large, sustained computations run on massive data sets—rather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from what’s traditionally used for large-scale AI training. 

GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.

In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fang: “The revenue from having a tiny part of the data center running simply wouldn’t cover the electricity and maintenance cost.”
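For a sense of scale, the quoted monthly figures can be converted into rough per-GPU-hour prices. The sketch below assumes an eight-GPU server, roughly 730 hours in a month, full utilization, and an exchange rate of about 7.1 yuan to the dollar; these are illustrative assumptions, not figures from the report.

```python
# Rough conversion of the quoted monthly server rents into per-GPU-hour prices.
# Assumptions: 8 GPUs per server, ~730 hours per month, full utilization,
# and roughly 7.1 yuan per US dollar (all approximations for illustration).

GPUS_PER_SERVER = 8
HOURS_PER_MONTH = 730
YUAN_PER_USD = 7.1

def per_gpu_hour(monthly_rent_yuan: float) -> tuple[float, float]:
    """Return the implied (yuan, USD) price per GPU-hour for one server."""
    yuan = monthly_rent_yuan / (GPUS_PER_SERVER * HOURS_PER_MONTH)
    return yuan, yuan / YUAN_PER_USD

for label, rent in [("peak", 180_000), ("now", 75_000)]:
    yuan, usd = per_gpu_hour(rent)
    print(f"{label}: {yuan:.1f} yuan (~${usd:.2f}) per GPU-hour")
```

At the lower figure, the implied rate works out to roughly 13 yuan, or under $2, per GPU-hour before electricity and staffing costs.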

“It’s paradoxical—China faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low,” Li says. There’s an oversupply of computational power, especially in central and west China, but at the same time, there’s a shortage of cutting-edge chips. 

However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.

“Towards the end of 2024, no clear-headed contractor or broker in the market would still go into the business expecting direct profitability,” says Fang. “Everyone I met is leveraging the data center deal for something else the government could offer.”

A necessary evil

Despite the underutilization of data centers, China’s central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology. 

Major Chinese tech companies are taking note, making investments that align with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.

In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, SoftBank, and Oracle have teamed up to commit to the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. “If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success,” says Goodrich, the tech policy advisor to RAND.

“The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means,” Goodrich says.

Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified because of company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeek’s open-source models.

For now, many data centers in China sit in limbo—built for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeek’s success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress.

That’s just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. “The market is too chaotic. The early adopters profited, but now it’s just people chasing policy loopholes,” he says. He’s decided to go into AI education next. 

“What stands between now and a future where AI is actually everywhere,” he says, “is not infrastructure anymore, but solid plans to deploy the technology.” 

Ethically sourced “spare” human bodies could revolutionize medicine

Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically sourced human bodies. 

It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress.

This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that can’t replicate major aspects of human physiology and makes it necessary to inflict harm on sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time. 

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines.

Bringing technologies together

Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body. 

Such technologies, together with established genetic techniques to inhibit brain development, make it possible to envision the creation of “bodyoids”—a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.

There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids could radically transform biomedical research by addressing critical limitations in the current models of research, drug development, and medicine. Among many other benefits, they would offer an almost unlimited source of organs, tissues, and cells for use in transplantation.

It could even be possible to generate organs directly from a patient’s own cells, essentially cloning someone’s biological material to ensure that transplanted tissues are a perfect immunological match and thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patient’s cells could also allow for personalized screening of drugs, allowing physicians to directly assess the effect of different interventions in a biological model that accurately reflects a patient’s own personal genetics and physiology. We can even envision using animal bodyoids in agriculture, as a substitute for the use of sentient animal species. 

Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions.

Even if it all works, it may not be practical or economical to “grow” bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications. 

Ethical considerations and societal implications

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness. 

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it. 

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun conducting experiments on the “animated cadavers” of people who have been declared legally dead, those who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we’d have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought. 

A call to action

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored. 

The potential benefits—for both human patients and sentient animal species—are great. Governments, companies, and private foundations should start thinking about bodyoids as a possible path for investment. There is no need to start with humans—we can begin exploring the feasibility of this approach with rodents or other research animals. 

As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments. 

The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation.

The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.

Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University.

Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.

Why the world is looking to ditch US AI models

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the Trump administration’s shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. 

As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. 

Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s response to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.   

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.

Many similar solutions have been in development for years—and we’ve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies.

Still, this moment feels a little different. The second Trump administration, which shapes the actions and policies of American tech companies, is obviously a major factor. But there are others at play. 

First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation. 

Then there’s the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, there’s been a steady stream of announcements about “sovereign AI” initiatives that aim to give a country (or organization) full control over all aspects of AI development. 

AI sovereignty is just one part of the desire for broader “tech sovereignty” that’s also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a “Euro Stack,” or “digital public infrastructure.” The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europe’s efforts are partly modeled after “India Stack,” that country’s digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers. 

This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is “causing Europe to move faster … to come to the realization that Europe needs to regain its tech sovereignty.” This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply “because tech is where the future economic growth of any country is.”

But just because governments get involved doesn’t mean that issues around inclusion in language models will go away. “I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides ‘These are the languages we want to advance’ or ‘These are the types of views we want represented in a data set,’” Bhatia says. “Fundamentally, the training data a model trains on is akin to the worldview it develops.” 

It’s still too early to know what this will all look like, and how much of it will prove to be hype. But no matter what happens, this is a space we’ll be watching.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

OpenAI’s new image generator aims to be practical enough for designers and advertisers

OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

The image generator, which is now part of the company’s GPT-4o model, was promised by OpenAI last May but wasn’t released until now; in the meantime, requests for generated images on ChatGPT were filled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and is rolling it out to all tiers of users over the coming weeks, starting today, replacing the older one. 

The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says “hot dogs” properly placed above a food cart, not somewhere else in the image). 

It was only a few years ago that models started to succeed at things like “Put the red cube on top of the blue cube,” a feature that is essential for any creative professional use of AI. Generators also struggle with text generation, typically creating distorted jumbles of letter shapes that look more like captchas than readable text.

Example images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single image—like a cat emoji or a lightning bolt—and place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o. 

It’s “a new tool for communication,” says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: “I think the whole idea is that we’re going away from, like, beautiful art.” It can still do that, he clarifies, but it will do more useful things too. “You can actually make images work for you,” he says, “and not just look at them.”

It’s a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult. 

One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in AI tools that can fill images with generative AI. 

“Adobe really has a stranglehold on this market, and they’re moving fast enough that I don’t know how compelling it is for people to switch,” says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation. 

The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process). 

It’s also possible the tool will simply be used as many image generators are now: to create quick visuals that are “good enough” to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, it’s hard to imagine that the image generator won’t play some ambitious moneymaking role. 

Regardless, the fact that OpenAI’s new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.

“The pace of innovation should increase here,” Raskino says.

Why handing over total control to AI agents would be a huge mistake

AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems operate outside of a chat window, navigating multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents are developed to become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost? 

New frameworks and functionalities for AI agents are announced almost weekly, and companies promote the technology as a way to make our lives easier by completing tasks we can’t do or don’t want to do. Prominent examples include “computer use,” a function that enables Anthropic’s Claude system to act directly on your computer screen, and the “general AI agent” Manus, which can use online tools for a variety of tasks, like scouting out customers or planning trips.

These developments mark a major advance in artificial intelligence: systems designed to operate in the digital world without direct human oversight.

The promise is compelling. Who doesn’t want assistance with cumbersome work or tasks there’s no time for? Agent assistance could soon take many different forms, such as reminding you to ask a colleague about their kid’s basketball tournament or finding images for your next presentation. Within a few weeks, they’ll probably be able to make presentations for you. 

There’s also clear potential for deeply meaningful differences in people’s lives. For people with hand mobility issues or low vision, agents could complete tasks online in response to simple language commands. Agents could also coordinate simultaneous assistance across large groups of people in critical situations, such as by routing traffic to help drivers flee an area en masse as quickly as possible when disaster strikes. 

But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. Our research team at Hugging Face has spent years implementing and investigating these systems, and our recent findings suggest that agent development could be on the cusp of a very serious misstep. 

Giving up control, bit by bit

The core issue lies at the heart of what’s most exciting about AI agents: The more autonomous an AI system is, the more we cede human control. AI agents are developed to be flexible, capable of completing a diverse array of tasks that don’t have to be directly programmed.

For many systems, this flexibility is made possible because they’re built on large language models, which are unpredictable and prone to significant (and sometimes comical) errors. When an LLM generates text in a chat interface, any errors stay confined to that conversation. But when a system can act independently and with access to multiple applications, it may perform actions we didn’t intend, such as manipulating files, impersonating users, or making unauthorized transactions. The very feature being sold—reduced human oversight—is the primary vulnerability.

To understand the overall risk-benefit landscape, it’s useful to characterize AI agent systems on a spectrum of autonomy. The lowest level consists of simple processors that have no impact on program flow, like chatbots that greet you on a company website. The highest level, fully autonomous agents, can write and execute new code without human constraints or oversight—they can take action (moving files around, changing records, communicating by email, etc.) without your asking for anything. Intermediate levels include routers, which decide which human-provided steps to take; tool callers, which run human-written functions with agent-suggested tools and arguments; and multistep agents, which determine which functions to run, when, and how. Each level represents an incremental removal of human control.
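To make that spectrum concrete, here is a minimal, purely illustrative Python sketch of how control shifts at each level. Every name in it is hypothetical; it is not drawn from any particular agent framework.

# Purely illustrative sketch of the autonomy spectrum described above.
# Every name here is hypothetical; fake_llm stands in for a real model call.

def fake_llm(prompt: str) -> str:
    """Placeholder for a language-model call; a real system would query an LLM."""
    return "faq"

# Level 1 -- simple processor: the model's output never changes program flow.
def greet(message: str) -> str:
    return fake_llm(f"Write a friendly greeting in reply to: {message}")

# Level 2 -- router: the model picks which human-written branch to run.
def route(message: str) -> str:
    branch = fake_llm(f"Answer 'faq' or 'human' for: {message}")
    return "answered from the FAQ" if branch == "faq" else "escalated to a person"

# Level 3 -- tool caller: the model chooses a human-written tool and its input.
TOOLS = {
    "search": lambda q: f"search results for {q!r}",
    "calendar": lambda q: "your next meeting is at 10 a.m.",
}
def call_tool(message: str) -> str:
    tool_name = fake_llm(f"Pick one tool (search or calendar) for: {message}")
    return TOOLS.get(tool_name, TOOLS["search"])(message)

# Level 4 -- multistep agent: the model decides which steps to take and when to stop.
def multistep(goal: str, max_steps: int = 3) -> str:
    state = goal
    for _ in range(max_steps):  # a human-imposed cap is one way to keep some control
        state = call_tool(state)
    return state

# Level 5 -- fully autonomous agent: the model writes and runs new code with no
# human review. Deliberately left unimplemented; it is the level the authors warn against.

The point of the sketch is simply that each step hands the model a larger share of the program’s control flow, which is exactly the trade described above.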

It’s clear that AI agents can be extraordinarily helpful for what we do every day. But this brings clear privacy, safety, and security concerns. Agents that help bring you up to speed on someone would require that individual’s personal information and extensive surveillance of your previous interactions, which could result in serious privacy breaches. Agents that create directions from building plans could be used by malicious actors to gain unauthorized access to restricted areas.

And when systems can control multiple information sources simultaneously, potential for harm explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified with further sharing to create serious reputational damage. We imagine that “It wasn’t me—it was my agent!!” will soon be a common refrain to excuse bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that over 2,000 Soviet missiles were heading toward North America. This error triggered emergency procedures that brought us perilously close to catastrophe. What averted disaster was human cross-verification between different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome might have been catastrophic.

Some will counter that the benefits are worth the risks, but we’d argue that realizing those benefits doesn’t require surrendering complete human control. Instead, the development of AI agents must occur alongside the development of guaranteed human oversight in a way that limits the scope of what AI agents can do.

Open-source agent systems are one way to address risks, since these systems allow for greater human oversight of what systems can and cannot do. At Hugging Face we’re developing smolagents, a framework that provides secure, sandboxed environments and allows developers to build agents with transparency at their core, so that any independent group can verify whether there is appropriate human control.
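For readers who want to see what that looks like in practice, below is a minimal sketch in the spirit of smolagents: one human-written tool, one agent, one run. The class names (CodeAgent, HfApiModel, the tool decorator) reflect the library’s early 2025 releases and may have changed since, so treat them as assumptions and check the current documentation rather than reading this as a definitive API reference.

# A minimal smolagents-style agent with a single human-written tool.
# Class and parameter names are assumptions based on early releases of the
# library; verify them against the current smolagents documentation.
from smolagents import CodeAgent, HfApiModel, tool

@tool
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together.

    Args:
        a: The first number.
        b: The second number.
    """
    return a + b

agent = CodeAgent(
    tools=[add_numbers],  # the agent can only call tools the developer supplied
    model=HfApiModel(),   # a hosted model endpoint; any supported model works here
)

print(agent.run("What is 21.5 plus 20.5?"))

Because the agent can call only the tools it is given, a reviewer can read the tool list and know the outer bounds of what the system is allowed to do.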

This approach stands in stark contrast to the prevailing trend toward increasingly complex, opaque AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated AI agents, we must recognize that the most important feature of any technology isn’t increasing efficiency but fostering human well-being. 

This means creating systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve rather than subvert our interests.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli all work for Hugging Face, a global startup in responsible open-source AI.

Dr. Margaret Mitchell is a machine learning researcher and Chief Ethics Scientist at Hugging Face, connecting human values to technology development.

Dr. Sasha Luccioni is Climate Lead at Hugging Face, where she spearheads research, consulting and capacity-building to elevate the sustainability of AI systems. 

Dr. Avijit Ghosh is an Applied Policy Researcher at Hugging Face working at the intersection of responsible AI and policy. His research and engagement with policymakers have helped shape AI regulation and industry practices.

Dr. Giada Pistilli is a philosophy researcher working as Principal Ethicist at Hugging Face.

Inside a new quest to save the “doomsday glacier”

The Thwaites glacier is a fortress larger than Florida, a wall of ice that reaches nearly 4,000 feet above the bedrock of West Antarctica, guarding the low-lying ice sheet behind it.

But a strong, warm ocean current is weakening its foundations and accelerating its slide into the Amundsen Sea. Scientists fear the waters could topple the walls in the coming decades, kick-starting a runaway process that would crack up the West Antarctic Ice Sheet.

That would mark the start of a global climate disaster. The glacier itself holds enough ice to raise ocean levels by more than two feet, which could flood coastlines and force tens of millions of people living in low-lying areas to abandon their homes.

The loss of the entire ice sheet—which could still take centuries to unfold—would push up sea levels by 11 feet and redraw the contours of the continents.

This is why Thwaites is known as the doomsday glacier—and why scientists are eager to understand just how likely such a collapse is, when it could happen, and if we have the power to stop it. 

Scientists at MIT and Dartmouth College founded Arête Glacier Initiative last year in the hope of providing clearer answers to these questions. The nonprofit research organization will officially unveil itself, launch its website, and post requests for research proposals today, March 21, timed to coincide with the UN’s inaugural World Day for Glaciers, MIT Technology Review can report exclusively. 

Arête will also announce it is issuing its first grants, each for around $200,000 over two years, to a pair of glacier researchers at the University of Wisconsin-Madison. 

One of the organization’s main goals is to study the possibility of preventing the loss of giant glaciers, Thwaites in particular, by refreezing them to the bedrock. It would represent a radical intervention into the natural world, requiring a massive, expensive engineering project in a remote, treacherous environment. 

But the hope is that such a mega-adaptation project could minimize the mass relocation of climate refugees, prevent much of the suffering and violence that would almost certainly accompany it, and help nations preserve trillions of dollars invested in high-rises, roads, homes, ports, and airports around the globe.

“About a million people are displaced per centimeter of sea-level rise,” says Brent Minchew, an associate professor of geophysics at MIT, who cofounded Arête Glacier Initiative and will serve as its chief scientist. “If we’re able to bring that down, even by a few centimeters, then we would safeguard the homes of millions.”

But some scientists believe the idea is an implausible, wildly expensive distraction, drawing money, expertise, time, and resources away from more essential polar research efforts. 

“Sometimes we can get a little over-optimistic about what engineering can do,” says Twila Moon, deputy lead scientist at the National Snow and Ice Data Center at the University of Colorado Boulder.

“Two possible futures”

Minchew, who earned his PhD in geophysics at Caltech, says he was drawn to studying glaciers because they are rapidly transforming as the world warms, increasing the dangers of sea-level rise. 

“But over the years, I became less content with simply telling a more dramatic story about how things were going and more open to asking the question of what can we do about it,” says Minchew, who will return to Caltech as a professor this summer.

Last March, he cofounded Arête Glacier Initiative with Colin Meyer, an assistant professor of engineering at Dartmouth, in the hope of funding and directing research to improve scientific understanding of two big questions: How big a risk does sea-level rise pose in the coming decades, and can we minimize that risk?

Brent Minchew, an MIT professor of geophysics, co-founded Arête Glacier Initiative and will serve as its chief scientist.
COURTESY: BRENT MINCHEW

“Philanthropic funding is needed to address both of these challenges, because there’s no private-sector funding for this kind of research and government funding is minuscule,” says Mike Schroepfer, the former Meta chief technology officer turned climate philanthropist, who provided funding to Arête through his new organization, Outlier Projects.

The nonprofit has now raised about $5 million from Outlier and other donors, including the Navigation Fund, the Kissick Family Foundation, the Sky Foundation, the Wedner Family Foundation, and the Grantham Foundation. 

Minchew says they named the organization Arête mainly because an arête is the sharp mountain ridge left between two valleys when a glacier carves out the cirques on either side. It directs the movement of the glacier and is shaped by it.

It’s meant to symbolize “two possible futures,” he says. “One where we do something; one where we do nothing.”

Improving forecasts

The somewhat reassuring news is that, even with rising global temperatures, it may still take thousands of years for the West Antarctic Ice Sheet to completely melt. 

In addition, sea-level rise forecasts for this century generally range from as little as 0.28 meters (11 inches) to 1.10 meters (about three and a half feet), according to the latest UN climate panel report. The latter only occurs under a scenario with very high greenhouse gas emissions (SSP5-8.5), which significantly exceeds the pathway the world is now on.

But the report adds that a “low-likelihood” outcome in which ocean levels surge nearly two meters (about six and a half feet) by 2100 “cannot be excluded,” given “deep uncertainty linked to ice-sheet processes.”

Two meters of sea-level rise could force nearly 190 million people to migrate away from the coasts, unless regions build dikes or other shoreline protections, according to some models. Many more people, mainly in the tropics, would face heightened flooding dangers.

Much of the uncertainty over what will happen this century comes down to scientists’ limited understanding of how Antarctic ice sheets will respond to growing climate pressures.

The initial goal of Arête Glacier Initiative is to help narrow the forecast ranges by improving our grasp of how Thwaites and other glaciers move, melt, and break apart.

Gravity is the driving force nudging glaciers along the bedrock and reshaping them as they flow. But many of the variables that determine how fast they slide lie at the base. That includes the type of sediment the river of ice slides along; the size of the boulders and outcroppings it contorts around; and the warmth and strength of the ocean waters that lap at its face.

In addition, heat rising from deep in the earth warms the ice closest to the ground, creating a lubricating layer of water that hastens the glacier’s slide. That acceleration, in turn, generates more frictional heat that melts still more of the ice, creating a self-reinforcing feedback effect.
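That feedback can be written down as a simple energy balance. The expression below is a standard textbook form rather than anything specific to Arête’s work: the basal melt rate depends on the geothermal heat flux G plus the frictional heating produced by the basal drag τ_b acting over the sliding speed u_b, divided by the ice density ρ_i times the latent heat of fusion L_f.

\[ \dot{m}_b \approx \frac{G + \tau_b\,u_b}{\rho_i\,L_f} \]

Faster sliding increases the frictional term τ_b u_b, which melts more ice at the bed, which in turn lubricates the glacier and speeds it up further, the self-reinforcing loop described above.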

Minchew and Meyer are confident that the glaciology field is at a point where it could speed up progress in sea-level rise forecasting, thanks largely to improving observational tools that are producing more and better data.

That includes a new generation of satellites orbiting the planet that can track the shifting shape of ice at the poles at far higher resolutions than in the recent past. Computer simulations of ice sheets, glaciers, and sea ice are improving as well, thanks to growing computational resources and advancing machine learning techniques.

On March 21, Arête will issue a request for proposals from research teams to contribute to an effort to collect, organize, and openly publish existing observational glacier data. Much of that expensively gathered information is currently inaccessible to researchers around the world, Minchew says.

Colin Meyer, an assistant professor of engineering at Dartmouth, co-founded Arête Glacier Initiative.
ELI BURAK

By funding teams working across these areas, Arête’s founders hope to help produce more refined ice-sheet models and narrower projections of sea-level rise.

This improved understanding would help cities plan where to build new bridges, buildings, and homes, and to determine whether they’ll need to erect higher seawalls or raise their roads, Meyer says. It could also provide communities with more advance notice of the coming dangers, allowing them to relocate people and infrastructure to safer places through an organized process known as managed retreat.

A radical intervention

But the improved forecasts might also tell us that Thwaites is closer to tumbling into the ocean than we think, underscoring the importance of considering more drastic measures.

One idea is to build berms or artificial islands to prop up fragile parts of glaciers, and to block the warm waters that rise from the deep ocean and melt them from below. Some researchers have also considered erecting giant, flexible curtains anchored to the seabed to achieve the latter effect.

Others have looked at scattering highly reflective beads or other materials across ice sheets, or pumping ocean water onto them in the hopes it would freeze during the winter and reinforce the headwalls of the glaciers.

But the concept of refreezing glaciers in place, known as a basal intervention, is gaining traction in scientific circles, in part because there’s a natural analogue for it.

The glacier that stalled

About 200 years ago, the Kamb Ice Stream, another glacier in West Antarctica that had been sliding about 350 meters (1,150 feet) per year, suddenly stalled.

Glaciologists believe an adjacent ice stream intersected with the catchment area under the glacier, providing a path for the water running below it to flow out along the edge instead. That loss of fluid likely slowed down the Kamb Ice Stream, reduced the heat produced through friction, and allowed water at the surface to refreeze.

The deceleration of the glacier sparked the idea that humans might be able to bring about that same phenomenon deliberately, perhaps by drilling a series of boreholes down to the bedrock and pumping up water from the bottom.

Minchew himself has focused on a variation he believes could avoid much of the power demand and heavy machinery of that approach: slipping long tubular devices, known as thermosyphons, down nearly to the bottom of the boreholes.

These passive heat exchangers, which are powered only by the temperature differential between two areas, are commonly used to keep permafrost frozen around homes, buildings, and pipelines in Arctic regions. The hope is that extremely long ones, stretching up to two kilometers and encased in steel pipe, could draw heat away from the bottom of the glacier, allowing the water below to freeze.

Minchew says he’s in the process of producing refined calculations, but estimates that halting Thwaites could require drilling as many as 10,000 boreholes over a 100-square-kilometer area.

He readily acknowledges that this would be a huge undertaking, but he offers two points of comparison to put such a project into context. Melting the ice needed to create those holes would require roughly as much energy as all US domestic flights consume in jet fuel over about two and a half hours. And it would produce about the same greenhouse gas emissions as constructing 10 kilometers of seawalls, a small fraction of the length the world would need to build if it can’t slow the collapse of the ice sheets, he says.
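That first comparison can be sanity-checked with a rough calculation. The sketch below uses assumed round numbers for the borehole diameter and for US domestic jet-fuel consumption, since the article gives neither; with those assumptions it lands in the same ballpark of a few hours.

# Back-of-envelope check of the drilling-energy comparison above.
# All inputs are assumed round numbers, so read the output as an
# order-of-magnitude illustration only.
import math

n_holes = 10_000        # boreholes, per the article
depth_m = 2_000.0       # roughly 2 km, per the thermosyphon description above
diameter_m = 0.30       # assumed diameter of a hot-water-drilled hole

ice_density = 917.0       # kg per cubic meter
latent_heat = 334_000.0   # joules per kg to melt ice at 0 C
sensible_heat = 40_000.0  # rough allowance to first warm the ice from about -20 C

volume_per_hole = math.pi * (diameter_m / 2) ** 2 * depth_m
drilling_energy = n_holes * volume_per_hole * ice_density * (latent_heat + sensible_heat)

# Assumed US domestic jet-fuel burn: about 13 billion gallons a year at ~142 MJ per gallon.
jet_fuel_per_year = 13e9 * 142e6
jet_fuel_per_hour = jet_fuel_per_year / (365 * 24)

print(f"Drilling energy: {drilling_energy:.1e} J")
print(f"Equivalent hours of US domestic jet fuel: {drilling_energy / jet_fuel_per_hour:.1f}")

With these assumptions the drilling energy comes out to a few times 10^14 joules, equivalent to roughly two to three hours of US domestic jet-fuel burn, consistent with Minchew’s figure.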

“Kick the system”

One of Arête’s initial grantees is Marianne Haseloff, an assistant professor of geoscience at the University of Wisconsin-Madison. She studies the physical processes that govern the behavior of glaciers and is striving to more faithfully represent them in ice sheet models. 

Haseloff says she will use those funds to develop mathematical methods that could more accurately determine what’s known as basal shear stress, or the resistance of the bed to sliding glaciers, based on satellite observations. That could help refine forecasts of how rapidly glaciers will slide into the ocean, in varying settings and climate conditions.

Arête’s other initial grant will go to Lucas Zoet, an associate professor in the same department as Haseloff and the principal investigator with the Surface Processes group.

He intends to use the funds to build the lab’s second “ring shear” device, an apparatus that functions, in effect, as a simulated glacier.

The existing device, which is the only one operating in the world, stands about eight feet tall and fills the better part of a walk-in freezer on campus. The core of the machine is a transparent drum filled with a ring of ice, sitting under pressure and atop a layer of sediment. It slowly spins for weeks at a time as sensors and cameras capture how the ice and earth move and deform.

Lucas Zoet, an associate professor at the University of Wisconsin–Madison, stands in front of his lab’s “ring shear” device, a simulated glacier.
ETHAN PARRISH

The research team can select the sediment, topography, water pressure, temperature, and other conditions to match the environment of a real-world glacier of interest, be it Thwaites today—or Thwaites in 2100, under a high greenhouse gas emissions scenario. 

Zoet says these experiments promise to improve our understanding of how glaciers move over different types of beds, and to refine an equation known as the slip law, which represents these glacier dynamics mathematically in computer models.
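For context, a commonly used Weertman-type slip law takes the form below. The article does not say which specific form Zoet’s group is refining, so treat this as general background rather than their exact equation.

\[ \tau_b = C\,u_b^{1/m} \]

Here τ_b is the basal shear stress, u_b the sliding speed, C a coefficient set by the properties of the bed, and m an exponent, often taken to be around 3, that experiments like these help constrain. Other forms cap the drag at a limit set by water pressure at the bed; distinguishing among such forms for different bed types is the kind of question the device is built to probe.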

The second machine will enable them to run more experiments and to conduct a specific kind that the current device can’t: a scaled-down, controlled version of the basal intervention.

Zoet says the team will be able to drill tiny holes through the ice, then pump out water or transfer heat away from the bed. They can then observe whether the simulated glacier freezes to the base at those points and experiment with how many interventions, across how much space, are required to slow down its movement.

It offers a way to test out different varieties of the basal intervention that is far easier and cheaper than using water drills to bore to the bottom of an actual glacier in Antarctica, Zoet says. The funding will allow the lab to explore a wide range of experiments, enabling them to “kick the system in a way we wouldn’t have before,” he adds.

“Virtually impossible”

The concept of glacier interventions is in its infancy. There are still considerable unknowns and uncertainties, including how much an intervention would cost, how arduous the undertaking would be, which approach would be most likely to work, and whether any of them is feasible at all.

“This is mostly a theoretical idea at this point,” says Katharine Ricke, an associate professor at the University of California, San Diego, who researches the international relations implications of geoengineering, among other topics.

Conducting extensive field trials or moving forward with full-scale interventions may also require surmounting complex legal questions, she says. Antarctica isn’t owned by any nation, but it’s the subject of competing territorial claims among a number of countries and is governed under a decades-old treaty to which dozens of countries are party.

The basal intervention—refreezing the glacier to its bed—faces numerous technical hurdles that would make it “virtually impossible to execute,” Moon and dozens of other researchers argued in a recent preprint paper, “Safeguarding the polar regions from dangerous geoengineering.”

Among other critiques, they stress that subglacial water systems are complex, dynamic, and interconnected, making it highly difficult to precisely identify and drill down to all the points that would be necessary to draw away enough water or heat to substantially slow down a massive glacier.

Further, they argue that the interventions could harm polar ecosystems by adding contaminants, producing greenhouse gases, or altering the structure of the ice in ways that may even increase sea-level rise.

“Overwhelmingly, glacial and polar geoengineering ideas do not make sense to pursue, in terms of the finances, the governance challenges, the impacts,” and the possibility of making matters worse, Moon says.

“No easy path forward”

But Douglas MacAyeal, professor emeritus of glaciology at the University of Chicago, says the basal intervention would have the lightest environmental impact among the competing ideas. He adds that nature has already provided an example of it working, and that much of the needed drilling and pumping technology is already in use in the oil industry.

“I would say it’s the strongest approach at the starting gate,” he says, “but we don’t really know anything about it yet. The research still has to be done. It’s very cutting-edge.”

Personnel aboard the research vessel Nathaniel B. Palmer take in a Sunday morning sunrise as the ship moves into the Bellingshausen Sea. The cruise had been working in the Amundsen Sea region as part of the International Thwaites Glacier Collaboration.
CINDY DEAN/UNITED STATES ANTARCTIC PROGRAM

Minchew readily acknowledges that there are big challenges and significant unknowns—and that some of these ideas may not work.

But he says it’s well worth the effort to study the possibilities, in part because much of the research will also improve our understanding of glacier dynamics and the risks of sea-level rise—and in part because it’s only a question of when, not if, Thwaites will collapse.

Even if the world somehow halted all greenhouse gas emissions tomorrow, the forces melting that fortress of ice will continue to do so. 

So one way or another, the world will eventually need to make big, expensive, difficult interventions to protect people and infrastructure. The cost and effort of doing one project in Antarctica, he says, would be dwarfed by the global effort required to erect thousands of miles of seawalls, elevate homes, buildings, and roads, and relocate hundreds of millions of people.

“One thing is challenging—and the other is even more challenging,” Minchew says. “There’s no easy path forward.”