The latest threat from the rise of Chinese manufacturing

The findings a decade ago were, well, shocking. Mainstream economists had long argued that free trade was overall a good thing; though there might be some winners and losers, it would generally bring lower prices and widespread prosperity. Then, in 2013, a trio of academic researchers presented convincing evidence that increased trade with China beginning in the early 2000s, and the resulting flood of cheap imports, had been an unmitigated disaster for many US communities, destroying their manufacturing lifeblood.

The results of what in 2016 they called the “China shock” were gut-wrenching: the loss of 1 million US manufacturing jobs and 2.4 million jobs in total by 2011. Worse, these losses were heavily concentrated in what the economists called “trade-exposed” towns and cities (think furniture makers in North Carolina).

If in retrospect all that seems obvious, it’s only because the research by David Autor, an MIT labor economist, and his colleagues has become an accepted, albeit often distorted, political narrative these days: China destroyed all our manufacturing jobs! Though the nuances of the research are often ignored, the results help explain at least some of today’s political unrest. It’s reflected in rising calls for US protectionism, President Trump’s broad tariffs on imported goods, and nostalgia for the lost days of domestic manufacturing glory.

The impacts of the original China shock still scar much of the country. But Autor is now concerned about what he considers a far more urgent problem—what some are calling China shock 2.0. The US, he warns, is in danger of losing the next great manufacturing battle, this time over advanced technologies to make cars and planes as well as those enabling AI, quantum computing, and fusion energy.

Recently, I asked Autor about the lingering impacts of the China shock and the lessons it holds for today’s manufacturing challenges.

How are the impacts of the China shock still playing out?

I have a recent paper looking at 20 years of data, from 2000 to 2019. We tried to ask two related questions. One, if you looked at the places that were most exposed, how have they adjusted? And then if you look at the people who are most exposed, how have they adjusted? And how do those two things relate to one another?

It turns out you get two very different answers. If you look at places that were most exposed, they have been substantially transformed. Manufacturing, once it starts going down, never comes back. But after 2010, these trade-impacted local labor markets staged something of an employment recovery, such that employment has grown faster after 2010 in trade-exposed places than non-trade-exposed places because a lot of people have come in. But these are jobs mostly in low-wage sectors. They’re in K–12 education and non-traded health services. They’re in warehousing and logistics. They’re in hospitality and lodging and recreation, and so they’re lower-wage, non-manufacturing jobs. And they’re done by a really different set of people.

The growth in employment is among women, among native-born Hispanics, among foreign-born adults and a lot of young people. The recovery is staged by a very different group from the white and black men, but especially white men, who were most represented in manufacturing. They have not really participated in this renaissance.

Employment is growing, but are these areas prospering?

They have a lower wage structure: fewer high-wage jobs, more low-wage jobs. So they’re not, if your definition of prospering is rapidly rising incomes. But there’s a lot of employment growth. They’re not like ghost towns. But then if you look at the people who were most concentrated in manufacturing—mostly white, non-college, native-born men—they have not prospered. Most of them have not transitioned from manufacturing to non-manufacturing.

One of the great surprises is everyone had believed that people would pull up stakes and move on. In fact, we find the opposite. People in the most adversely exposed places become less likely to leave. They have become less mobile. The presumption was that they would just relocate to find higher ground. And that is not at all what occurred.

What happened to the total number of manufacturing jobs?

There’s been no rebound. Once they go, they just keep going. If there is going to be new manufacturing, it won’t be in the sectors that were lost to China. Those were basically labor-intensive jobs, the kind of low-tech sectors that we will not be getting back. You know—commodity furniture and assembly of things, shoes, construction material. The US wasn’t going to keep them forever, and once they’re gone, it’s very unlikely to get them back.

I know you’ve written about this, but it’s not hard to draw a connection between the dynamics you’re describing—white-male manufacturing jobs going away and new jobs going to immigrants—and today’s political turmoil.

We have a paper about that called “Importing Political Polarization?”

How big a factor would you say it is in today’s political unrest?

I don’t want to say it’s the factor. The China trade shock was a catalyst, but there were lots of other things that were happening. It would be a vast oversimplification to say that it was the sole cause.

But most people don’t work in manufacturing anymore. Aren’t these impacts that you’re talking about, including the political unrest, disproportionate to the actual number of jobs lost?

These are jobs in places where manufacturing is the anchor activity. Manufacturing is very unevenly distributed. It’s not like grocery stores and hospitals that you find in every county. The impact of the China trade shock on these places was like dropping an economic bomb in the middle of downtown. If the China trade shock cost us a few million jobs, and these were all—you know—people in groceries and retail and gas stations, in hospitality and in trucking, you wouldn’t really notice it that much. We lost lots of clerical workers over the last couple of decades. Nobody talks about a clerical shock. Why not? Well, there was never a clerical capital of America. Clerical workers are everywhere. If they decline, it doesn’t wipe out the entire basis of a place.

So it goes beyond the jobs. These places lost their identity.

Maybe. But it’s also the jobs. Manufacturing offered relatively high pay to non-college workers, especially non-college men. It was an anchor of a way of life.

And we’re still seeing the damage.

Yeah, absolutely. It’s been 20 years. What’s amazing is the degree of stasis among the people who are most exposed—not the places, but the people. Though it’s been 20 years, we’re still feeling the pain and the political impacts from this transition.

Clearly, it has now entered the national psyche. Even if it weren’t true, everyone now believes it to have been a really big deal, and they’re responding to it. It continues to drive policy, political resentments, maybe even out of proportion to its economic significance. It certainly has become mythological.

What worries you now?

We’re in the midst of a totally different competition with China now that’s much, much more important. Now we’re not talking about commodity furniture and tube socks. We’re talking about semiconductors and drones and aviation, electric vehicles, shipping, fusion power, quantum, AI, robotics. These are the sectors where the US still maintains competitiveness, but they’re extremely threatened. China’s capacity for high-tech, low-cost, incredibly fast, innovative manufacturing is just unbelievable. And the Trump administration is basically fighting the war of 20 years ago. The loss of those jobs, you know, was devastating to those places. It was not devastating to the US economy as a whole. If we lose Boeing, GM, and Apple and Intel—and that’s quite possible—then that will be economically devastating.

I think some people are calling it China shock 2.0.

Yeah. And it’s well underway.

When we think about advanced manufacturing and why it’s important, it’s not so much about the number of jobs anymore, is it? Is it more about coming up with the next technologies?

It does create good jobs, but it’s about economic leadership. It’s about innovation. It’s about political leadership, and even standard setting for how the rest of the world works.

Should we just accept that manufacturing as a big source of jobs is in the past and move on?

No. It’s still 12 million jobs, right? Instead of the fantasy that we’re going to go back to 18 million or whatever—we had, what, 17.7 million manufacturing jobs in 1999—we should be worried about the fact that we’re going to end up at 6 million, that we’re going to lose 50% in the next decade. And that’s quite possible. And the Trump administration is doing a lot to help that process of loss along.

We have a labor market of over 160 million people, so it’s like 8% of employment. It’s not zero. So you should not think of it as too small to worry about it. It’s a lot of people; it’s a lot of jobs. But more important, it’s a lot of what has helped this country be a leader. So much innovation happens here, and so many of the things in which other countries are now innovating started here. It’s always been the case that the US tends to innovate in sectors and then lose them after a while and move on to the next thing. But at this point, it’s not clear that we’ll be in the frontier of a lot of these sectors for much longer.

So we want to revive manufacturing, but the right kind—advanced manufacturing?

The notion that we should be assembling iPhones in the United States, which Trump wants, is insane. Nobody wants to do that work. It’s horrible, tedious work. It pays very, very little. And if we actually did it here, it would make the iPhones 20% more expensive or more. Apple may very well decide to pay a 25% tariff rather than make the phones here. If Foxconn started doing iPhone assembly here, people would not be lining up for that job.

But at the same time, we do need new people coming into manufacturing.

But not that manufacturing. Not tedious, mind-numbing, eyestrain-inducing assembly.

We need them to do high-tech work. Manufacturing is a skilled activity. We need to build airplanes better. That takes a ton of expertise. Assembling iPhones does not.

What are your top priorities to head off China shock 2.0?

I would choose sectors that are important, and I would invest in them. I don’t think that tariffs are never justified, or industrial policies are never justified. I just don’t think protecting phone assembly is smart industrial policy. We really need to improve our ability to make semiconductors. I think that’s important. We need to remain competitive in the automobile sector—that’s important. We need to improve aviation and drones. That’s important. We need to invest in fusion power. That’s important. We need to adopt robotics at scale and improve in that sector. That’s important. I could come up with 15 things where I think public money is justified, and I would be willing to tolerate protections for those sectors.

What are the lasting lessons of the China shock and the opening up of global trade in the 2000s?

We did it too fast. We didn’t do enough to support people, and we pretended it wasn’t going on.

When we started the China shock research back around 2011, we really didn’t know what we’d find, and so we were as surprised as anyone. But the work has changed our own way of thinking and, I think, has been constructive—not because it has caused everyone to do the right thing, but it at least caused people to start asking the right questions.

What do the findings tell us about China shock 2.0?

I think the US is handling that challenge badly. The problem is much more serious this time around. The truth is, we have a sense of what the threats are. And yet we’re not seemingly responding in a very constructive way. Although we now know how seriously we should take this, the problem is that it doesn’t seem to be generating very serious policy responses. We’re generating a lot of policy responses—they’re just not serious ones.

Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent2Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: Intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.

In enterprise settings, this kind of mistake could create immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy issues, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.
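As a concrete, deliberately simplified illustration of what building a system around the model can mean, here is a minimal sketch that wraps a single LLM call in a policy check, a retry, and a human-escalation path. The `call_llm` function and the policy rule are hypothetical placeholders, not Maestro’s or any vendor’s actual API; the point is the structure, not the specifics.

```python
# Minimal sketch: wrap an LLM call with output checks, a retry, and escalation.
# `call_llm` and the policy rule are hypothetical stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Result:
    text: str
    ok: bool
    reason: str = ""

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an HTTP request to an LLM service)."""
    raise NotImplementedError

def violates_policy(text: str, banned_topics: list[str]) -> bool:
    # Toy policy check: flag drafts that mention topics the assistant
    # is not authorized to discuss (real systems use far richer checks).
    lowered = text.lower()
    return any(topic in lowered for topic in banned_topics)

def answer_with_guardrails(prompt: str, banned_topics: list[str],
                           max_retries: int = 2) -> Result:
    for _ in range(max_retries + 1):
        draft = call_llm(prompt)
        if violates_policy(draft, banned_topics):
            # Don't return the draft; amend the instruction and try again.
            prompt += "\nDo not discuss: " + ", ".join(banned_topics)
            continue
        return Result(text=draft, ok=True)
    # Escalate to a human instead of guessing, reflecting the idea that
    # the surrounding system, not the bare model, is the product.
    return Result(text="", ok=False, reason="escalate_to_human")
```

A production system would layer logging, cost tracking, and access controls onto the same skeleton, but the shape stays the same: the model proposes, the system decides what reaches the user.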

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.

In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.
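Here is a toy sketch of that mismatch, using made-up capability descriptions rather than the real A2A message schema: one agent needs weather along a flight route, another advertises “wind conditions,” and a naive matcher cannot connect the two without a shared vocabulary.

```python
# Toy illustration of the semantics gap between cooperating agents.
# These dicts are hypothetical and do not follow the A2A schema.

travel_agent_needs = ["weather along flight route"]

weather_agent_card = {
    "name": "weather-agent",
    "skills": ["wind conditions", "precipitation forecast"],
}

def naive_match(need: str, skills: list[str]) -> bool:
    # Matches only on literal substring overlap of the skill names.
    return any(need in skill or skill in need for skill in skills)

for need in travel_agent_needs:
    if not naive_match(need, weather_agent_card["skills"]):
        # "wind conditions" is clearly relevant to route weather, but nothing
        # in the exchange tells the requesting agent that; a shared vocabulary
        # or ontology layer has to supply the mapping.
        print(f"No match found for: {need!r}")
```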

There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel planning agent is requesting price quotes from your airline booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it.

That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.

Google’s electricity demand is skyrocketing

We got two big pieces of energy news from Google this week. The company announced that it’s signed an agreement to purchase electricity from a fusion company’s forthcoming first power plant. Google also released its latest environmental report, which shows that its energy use from data centers has doubled since 2020.

Taken together, these two bits of news offer a fascinating look at just how desperately big tech companies are hunting for clean electricity to power their data centers as energy demand and emissions balloon in the age of AI. Of course, we don’t know exactly how much of this pollution is attributable to AI because Google doesn’t break that out. (Also a problem!) So, what’s next and what does this all mean? 

Let’s start with fusion: Google’s deal with Commonwealth Fusion Systems is intended to provide the tech giant with 200 megawatts of power. This will come from Commonwealth’s first commercial plant, a facility planned for Virginia that the company refers to as the Arc power plant. The agreement represents half of the plant’s planned capacity.

What’s important to note here is that this power plant doesn’t exist yet. In fact, Commonwealth still needs to get its Sparc demonstration reactor, located outside Boston, up and running. That site, which I visited in the fall, should be completed in 2026.

(An aside: This isn’t the first deal between Big Tech and a fusion company. Microsoft signed an agreement with Helion a couple of years ago to buy 50 megawatts of power from a planned power plant, scheduled to come online in 2028. Experts expressed skepticism in the wake of that deal, as my colleague James Temple reported.)

Nonetheless, Google’s announcement is a big moment for fusion, in part because of the size of the commitment and also because Commonwealth, a spinout company from MIT’s Plasma Science and Fusion Center, is seen by many in the industry as a likely candidate to be the first to get a commercial plant off the ground. (MIT Technology Review is owned by MIT but is editorially independent.)

Google leadership was very up-front about the length of the timeline. “We would certainly put this in the long-term category,” said Michael Terrell, Google’s head of advanced energy, in a press call about the deal.

The news of Google’s foray into fusion comes just days after the tech giant’s release of its latest environmental report. While the company highlighted some wins, some of the numbers in this report are eye-catching, and not in a positive way.

Google’s emissions have increased by over 50% since 2019, rising 6% in the last year alone. That’s decidedly the wrong direction for a company that’s set a goal to reach net-zero greenhouse-gas emissions by the end of the decade.

It’s true that the company has committed billions to clean energy projects, including big investments in next-generation technologies like advanced nuclear and enhanced geothermal systems. Those deals have helped dampen emissions growth, but keeping pace with the energy demand the company is seeing is arguably an impossible task.

Google’s electricity consumption from data centers was up 27% from the year before. It’s doubled since 2020, reaching over 30 terawatt-hours. That’s nearly the annual electricity consumption of the entire country of Ireland.

From the outside, it’s tempting to point the finger at AI, since that technology has crashed into the mainstream and percolated into every corner of Google’s products and business. And yet the report downplays the role of AI. Here’s one bit that struck me:

“However, it’s important to note that our growing electricity needs aren’t solely driven by AI. The accelerating growth of Google Cloud, continued investments in Search, the expanding reach of YouTube, and more, have also contributed to this overall growth.”

There is enough wiggle room in that statement to drive a large electric truck through. When I asked about the relative contributions here, company representative Mara Harris said via email that they don’t break out what portion comes from AI. When I followed up asking if the company didn’t have this information or just wouldn’t share it, she said she’d check but didn’t get back to me.

I’ll make the point here that we’ve made before, including in our recent package on AI and energy: Big companies should be disclosing more about the energy demands of AI. We shouldn’t be guessing at this technology’s effects.

Google has put a ton of effort and resources into setting and chasing ambitious climate goals. But as its energy needs and those of the rest of the industry continue to explode, it’s obvious that this problem is getting tougher, and it’s also clear that more transparency is a crucial part of the way forward.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Inside India’s scramble for AI independence

In Bengaluru, India, Adithya Kolavi felt a mix of excitement and validation as he watched DeepSeek unleash its disruptive language model on the world earlier this year. The Chinese technology rivaled the best of the West in terms of benchmarks, but it had been built with far less capital in far less time. 

“I thought: ‘This is how we disrupt with less,’” says Kolavi, the 20-year-old founder of the Indian AI startup CognitiveLab. “If DeepSeek could do it, why not us?” 

But for Abhishek Upperwal, founder of Soket AI Labs and architect of one of India’s earliest efforts to develop a foundation model, the moment felt more bittersweet. 

Upperwal’s model, called Pragna-1B, had struggled to stay afloat with tiny grants while he watched global peers raise millions. The multilingual model had a relatively modest 1.25 billion parameters and was designed to reduce the “language tax,” the extra costs that arise because India—unlike the US or even China—has a multitude of languages to support. His team had trained it, but limited resources meant it couldn’t scale. As a result, he says, the project became a proof of concept rather than a product. 

“If we had been funded two years ago, there’s a good chance we’d be the ones building what DeepSeek just released,” he says.

Kolavi’s enthusiasm and Upperwal’s dismay reflect the spectrum of emotions among India’s AI builders. Despite its status as a global tech hub, the country lags far behind the likes of the US and China when it comes to homegrown AI. That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no one native language is spoken by the majority of the population, training language models is far more complicated than it is elsewhere. 

Historically known as the global back office for the software industry, India has a tech ecosystem that evolved with a services-first mindset. Giants like Infosys and TCS built their success on efficient software delivery, but invention was neither prioritized nor rewarded. Meanwhile, India’s R&D spending hovered at just 0.65% of GDP ($25.4 billion) in 2024, far behind China’s 2.68% ($476.2 billion) and the US’s 3.5% ($962.3 billion). The muscle to invent and commercialize deep tech, from algorithms to chips, was just never built.

Isolated pockets of world-class research do exist within government agencies like the DRDO (Defence Research and Development Organisation) and ISRO (Indian Space Research Organisation), but their breakthroughs rarely spill into civilian or commercial use. India lacks the bridges to connect risk-taking research to commercial pathways, the way DARPA does in the US. Meanwhile, much of India’s top talent migrates abroad, drawn to ecosystems that better understand and, crucially, fund deep tech.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure, and how urgently it needed to respond.

India responds

In January 2025, 10 days after DeepSeek-R1’s launch, the Ministry of Electronics and Information Technology (MeitY) solicited proposals for India’s own foundation models, which are large AI models that can be adapted to a wide range of tasks. Its public tender invited private-sector cloud and data-center companies to reserve GPU compute capacity for government-led AI research.

Providers including Jio, Yotta, E2E Networks, Tata, AWS partners, and CDAC responded. Through this arrangement, MeitY suddenly had access to nearly 19,000 GPUs at subsidized rates, repurposed from private infrastructure and allocated specifically to foundational AI projects. This triggered a surge of proposals from companies wanting to build their own models. 

Within two weeks, it had 67 proposals in hand. That number tripled by mid-March. 

In April, the government announced plans to develop six large-scale models by the end of 2025, plus 18 additional AI applications targeting sectors like agriculture, education, and climate action. Most notably, it tapped Sarvam AI to build a 70-billion-parameter model optimized for Indian languages and needs. 

For a nation long restricted by limited research infrastructure, things moved at record speed, marking a rare convergence of ambition, talent, and political will.

“India could do a Mangalyaan in AI,” said Gautam Shroff of IIIT-Delhi, referencing the country’s cost-effective, and successful, Mars orbiter mission. 

Jaspreet Bindra, cofounder of AI&Beyond, an organization focused on teaching AI literacy, captured the urgency: “DeepSeek is probably the best thing that happened to India. It gave us a kick in the backside to stop talking and start doing something.”

The language problem

One of the most fundamental challenges in building foundational AI models for India is the country’s sheer linguistic diversity. With 22 official languages, hundreds of dialects, and millions of people who are multilingual, India poses a problem that few existing LLMs are equipped to handle.

Whereas a massive amount of high-quality web data is available in English, Indian languages collectively make up less than 1% of online content. The lack of digitized, labeled, and cleaned data in languages like Bhojpuri and Kannada makes it difficult to train LLMs that understand how Indians actually speak or search.

Global tokenizers, which break text into units a model can process, also perform poorly on many Indian scripts, misinterpreting characters or skipping some altogether. As a result, even when Indian languages are included in multilingual models, they’re often poorly understood and inaccurately generated.

And unlike OpenAI and DeepSeek, which achieved scale using structured English-language data, Indian teams often begin with fragmented and low-quality data sets encompassing dozens of Indian languages. This makes the early steps of training foundation models far more complex.

Nonetheless, a small but determined group of Indian builders is starting to shape the country’s AI future.

For example, Sarvam AI has created OpenHathi-Hi-v0.1, an open-source Hindi language model that shows the Indian AI field’s growing ability to address the country’s vast linguistic diversity. The model, built on Meta’s Llama 2 architecture, was trained on 40 billion tokens of Hindi and related Indian-language content, making it one of the largest open-source Hindi models available to date.

Pragna-1B, the multilingual model from Upperwal, is more evidence that India could solve for its own linguistic complexity. Trained on 300 billion tokens for just $250,000, it introduced a technique called “balanced tokenization” to address a unique challenge in Indian AI, enabling a 1.25-billion-parameter model to behave like a much larger one.

The issue is that Indian languages use complex scripts and agglutinative grammar, where words are formed by stringing together many smaller units of meaning using prefixes and suffixes. Unlike English, which separates words with spaces and follows relatively simple structures, Indian languages like Hindi, Tamil, and Kannada often lack clear word boundaries and pack a lot of information into single words. Standard tokenizers struggle with such inputs. They end up breaking Indian words into too many tokens, which bloats the input and makes it harder for models to understand the meaning efficiently or respond accurately.
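One quick way to see the token bloat is to run comparable sentences through an English-centric tokenizer. The sketch below uses the GPT-2 tokenizer from Hugging Face’s transformers library purely as a convenient example of a byte-level tokenizer trained mostly on English; it is not one of the Indian models discussed here, and exact counts will vary by tokenizer and text.

```python
# Sketch: compare how an English-centric byte-level tokenizer fragments
# an English sentence versus a Hindi sentence of similar meaning.
# Requires: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # example English-centric BPE tokenizer

samples = {
    "English": "Where is the nearest hospital?",
    "Hindi": "सबसे नज़दीकी अस्पताल कहाँ है?",
}

for label, text in samples.items():
    ids = tokenizer(text)["input_ids"]
    print(f"{label}: {len(ids)} tokens for {len(text)} characters")

# The Hindi sentence is typically split into several times more tokens,
# because Devanagari falls back to multi-byte pieces the tokenizer never
# learned to merge. A tokenizer balanced across Indian scripts keeps
# sequences shorter, so a smaller model can do more within its context window.
```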

With the new technique, however, “a billion-parameter model was equivalent to a 7 billion one like Llama 2,” Upperwal says. This performance was particularly marked in Hindi and Gujarati, where global models often underperform because of limited multilingual training data. It was a reminder that with smart engineering, small teams could still push boundaries.

Upperwal eventually repurposed his core tech to build speech APIs for 22 Indian languages, a more immediate solution better suited to rural users who are often left out of English-first AI experiences.

“If the path to AGI is a hundred-step process, training a language model is just step one,” he says. 

At the other end of the spectrum are startups with more audacious aims. Krutrim-2, for instance, is a 12-billion-parameter multilingual language model optimized for English and 22 Indian languages. 

Krutrim-2 is attempting to solve India’s specific problems of linguistic diversity, low-quality data, and cost constraints. The team built a custom Indic tokenizer, optimized training infrastructure, and designed models for multimodal and voice-first use cases from the start, crucial in a country where text interfaces can be a problem.

Krutrim’s bet is that its approach will not only enable Indian AI sovereignty but also offer a model for AI that works across the Global South.

Besides public funding and compute infrastructure, India also needs institutional support: the talent, the research depth, and the long-horizon capital that produce globally competitive science.

While venture capital still hesitates to bet on research, new experiments are emerging. Paras Chopra, an entrepreneur who previously built and sold the software-as-a-service company Wingify, is now personally funding Lossfunk, a Bell Labs–style AI residency program designed to attract independent researchers with a taste for open-source science. 

“We don’t have role models in academia or industry,” says Chopra. “So we’re creating a space where top researchers can learn from each other and have startup-style equity upside.”

Government-backed bet on sovereign AI

The clearest marker of India’s AI ambitions came when the government selected Sarvam AI to develop a model focused on Indian languages and voice fluency.

The idea is that it would not only help Indian companies compete in the global AI arms race but benefit the wider population as well. “If it becomes part of the India stack, you can educate hundreds of millions through conversational interfaces,” says Bindra. 

Sarvam was given access to 4,096 Nvidia H100 GPUs for training a 70-billion-parameter Indian language model over six months. (The company previously released a 2-billion-parameter model trained in 10 Indian languages, called Sarvam-1.)

Sarvam’s project and others are part of a larger strategy called the IndiaAI Mission, a $1.25 billion national initiative launched in March 2024 to build out India’s core AI infrastructure and make advanced tools more widely accessible. Led by MeitY, the mission is focused on supporting AI startups, particularly those developing foundation models in Indian languages and applying AI to key sectors such as health care, education, and agriculture.

Under its compute program, the government is deploying more than 18,000 GPUs, including nearly 13,000 high-end H100 chips, to a select group of Indian startups that currently includes Sarvam, Upperwal’s Soket Labs, Gnani AI, and Gan AI.

The mission also includes plans to launch a national multilingual data set repository, establish AI labs in smaller cities, and fund deep-tech R&D. The broader goal is to equip Indian developers with the infrastructure needed to build globally competitive AI and ensure that the results are grounded in the linguistic and cultural realities of India and the Global South.

According to Abhishek Singh, CEO of IndiaAI and an officer with MeitY, India’s broader push into deep tech is expected to raise around $12 billion in research and development investment over the next five years. 

This includes approximately $162 million through the IndiaAI Mission, with about $32 million earmarked for direct startup funding. The National Quantum Mission is contributing another $730 million to support India’s ambitions in quantum research. In addition to this, the national budget document for 2025-26 announced a $1.2 billion Deep Tech Fund of Funds aimed at catalyzing early-stage innovation in the private sector.

The rest, nearly $9.9 billion, is expected to come from private and international sources including corporate R&D, venture capital firms, high-net-worth individuals, philanthropists, and global technology leaders such as Microsoft. 

IndiaAI has now received more than 500 applications from startups proposing use cases in sectors like health, governance, and agriculture. 

“We’ve already announced support for Sarvam, and 10 to 12 more startups will be funded solely for foundational models,” says Singh. Selection criteria include access to training data, talent depth, sector fit, and scalability.

Open or closed?

The IndiaAI program, however, is not without controversy. Sarvam is being built as a closed model, not open-source, despite its public tech roots. That has sparked debate about the proper balance between private enterprise and the public good. 

“True sovereignty should be rooted in openness and transparency,” says Amlan Mohanty, an AI policy specialist. He points to DeepSeek-R1, which despite its 671-billion-parameter size was made freely available for commercial use.

Its release allowed developers around the world to fine-tune it on low-cost GPUs, creating faster variants and extending its capabilities to non-English applications.

“Releasing an open-weight model with efficient inference can democratize AI,” says Hancheng Cao, an assistant professor of information systems and operations management at Emory University. “It makes it usable by developers who don’t have massive infrastructure.”

IndiaAI, however, has taken a neutral stance on whether publicly funded models should be open-source. 

“We didn’t want to dictate business models,” says Singh. “India has always supported open standards and open source, but it’s up to the teams. The goal is strong Indian models, whatever the route.”

There are other challenges as well. In late May, Sarvam AI unveiled Sarvam-M, a 24-billion-parameter multilingual LLM fine-tuned for 10 Indian languages and built on top of Mistral Small, an efficient model developed by the French company Mistral AI. Sarvam’s cofounder Vivek Raghavan called the model “an important stepping stone on our journey to build sovereign AI for India.” But its download numbers were underwhelming, with only 300 in the first two days. The venture capitalist Deedy Das called the launch “embarrassing.”

And the issues go beyond the lukewarm early reception. Many developers in India still lack easy access to GPUs and the broader ecosystem for Indian-language AI applications is still nascent. 

The compute question

Compute scarcity is emerging as one of the most significant bottlenecks in generative AI, not just in India but across the globe. For countries still heavily reliant on imported GPUs and lacking domestic fabrication capacity, the cost of building and running large models is often prohibitive. 

India still imports most of its chips rather than producing them domestically, and training large models remains expensive. That’s why startups and researchers alike are focusing on software-level efficiencies that involve smaller models, better inference, and fine-tuning frameworks that optimize for performance on fewer GPUs.

“The absence of infrastructure doesn’t mean the absence of innovation,” says Cao. “Supporting optimization science is a smart way to work within constraints.” 

Yet Singh of IndiaAI argues that the tide is turning on the infrastructure challenge thanks to the new government programs and private-public partnerships. “I believe that within the next three months, we will no longer face the kind of compute bottlenecks we saw last year,” he says.

India also has a cost advantage.

According to Gupta, building a hyperscale data center in India costs about $5 million, roughly half what it would cost in markets like the US, Europe, or Singapore. That’s thanks to affordable land, lower construction and labor costs, and a large pool of skilled engineers. 

For now, India’s AI ambitions seem less about leapfrogging OpenAI or DeepSeek and more about strategic self-determination. Whether its approach takes the form of smaller sovereign models, open ecosystems, or public-private hybrids, the country is betting that it can chart its own course. 

While some experts argue that the government’s action, or reaction (to DeepSeek), is performative and aligned with its nationalistic agenda, many startup founders are energized. They see the growing collaboration between the state and the private sector as a real opportunity to overcome India’s long-standing structural challenges in tech innovation.

At a Meta summit held in Bengaluru last year, Nandan Nilekani, the chairman of Infosys, urged India to resist chasing a me-too AI dream. 

“Let the big boys in the Valley do it,” he said of building LLMs. “We will use it to create synthetic data, build small language models quickly, and train them using appropriate data.” 

His view that India should prioritize strength over spectacle had a divided reception. But it reflects a broader, growing debate over whether India should play a different game altogether.

“Trying to dominate every layer of the stack isn’t realistic, even for China,” says Shobhankita Reddy, a researcher at the Takshashila Institution, an Indian public policy nonprofit. “Dominate one layer, like applications, services, or talent, so you remain indispensable.” 

Correction: This story has been updated to correct Shobhankita Reddy’s name.

How generative AI could help make construction sites safer

Last winter, during the construction of an affordable housing project on Martha’s Vineyard, Massachusetts, a 32-year-old worker named Jose Luis Collaguazo Crespo slipped off a ladder on the second floor and plunged to his death in the basement. He was one of more than 1,000 construction workers who die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls.

“Everyone talks about [how] ‘safety is the number-one priority,’” entrepreneur and executive Philip Lorenzo said during a presentation at Construction Innovation Day 2025, a conference at the University of California, Berkeley, in April. “But then maybe internally, it’s not that high priority. People take shortcuts on job sites. And so there’s this whole tug-of-war between … safety and productivity.”

To combat the shortcuts and risk-taking, Lorenzo is working on a tool for the San Francisco–based company DroneDeploy, which sells software that creates daily digital models of work progress from videos and images, known in the trade as “reality capture.”  The tool, called Safety AI, analyzes each day’s reality capture imagery and flags conditions that violate Occupational Safety and Health Administration (OSHA) rules, with what he claims is 95% accuracy.

That means that for any safety risk the software flags, there is 95% certainty that the flag is accurate and relates to a specific OSHA regulation. Launched in October 2024, it’s now being deployed on hundreds of construction sites in the US, Lorenzo says, and versions specific to the building regulations in countries including Canada, the UK, South Korea, and Australia have also been deployed.

Safety AI is one of multiple AI construction safety tools that have emerged in recent years, from Silicon Valley to Hong Kong to Jerusalem. Many of these rely on teams of human “clickers,” often in low-wage countries, to manually draw bounding boxes around images of key objects like ladders, in order to label large volumes of data to train an algorithm.

Lorenzo says Safety AI is the first one to use generative AI to flag safety violations, which means an algorithm that can do more than recognize objects such as ladders or hard hats. The software can “reason” about what is going on in an image of a site and draw a conclusion about whether there is an OSHA violation. This is a more advanced form of analysis than the object detection that is the current industry standard, Lorenzo claims. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence. It requires an experienced safety inspector as an overseer.  

A visual language model in the real world

Robots and AI tend to thrive in controlled, largely static environments, like factory floors or shipping terminals. But construction sites are, by definition, changing a little bit every day. 

Lorenzo thinks he’s built a better way to monitor sites, using a type of generative AI called a visual language model, or VLM. A VLM is an LLM with a vision encoder, allowing it to “see” images of the world and analyze what is going on in the scene. 

Using years of reality capture imagery gathered from customers, with their explicit permission, Lorenzo’s team has assembled what he calls a “golden data set” encompassing tens of thousands of images of OSHA violations. Having carefully stockpiled this specific data for years, he is not worried that even a billion-dollar tech giant will be able to “copy and crush” him.

To help train the model, Lorenzo has a smaller team of construction safety pros ask strategic questions of the AI. The trainers input test scenes from the golden data set to the VLM and ask questions that guide the model through the process of breaking down the scene and analyzing it step by step the way an experienced human would. If the VLM doesn’t generate the correct response—for example, it misses a violation or registers a false positive—the human trainers go back and tweak the prompts or inputs. Lorenzo says that rather than simply learning to recognize objects, the VLM is taught “how to think in a certain way,” which means it can draw subtle conclusions about what is happening in an image. 

Examples from nine categories of safety risks at construction sites that Safety AI can detect.
COURTESY DRONEDEPLOY

As an example, Lorenzo says VLMs are much better than older methods at analyzing ladder usage, which is responsible for 24% of the fall deaths in the construction industry. 

“With traditional machine learning, it’s very difficult to answer the question of ‘Is a person using a ladder unsafely?’” says Lorenzo. “You can find the ladders. You can find the people. But to logically reason and say ‘Well, that person is fine’ or ‘Oh no, that person’s standing on the top step’—only the VLM can logically reason and then be like, ‘All right, it’s unsafe. And here’s the OSHA reference that says you can’t be on the top rung.’”

Answers to multiple questions (Does the person on the ladder have three points of contact? Are they using the ladder as stilts to move around?) are combined to determine whether the ladder in the picture is being used safely. “Our system has over a dozen layers of questioning just to get to that answer,” Lorenzo says. DroneDeploy has not publicly released its data for review, but he says he hopes to have his methodology independently audited by safety experts.  
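The sketch below shows, in rough outline, how that kind of layered questioning might be composed into a single verdict. The questions and the `ask_vlm` helper are illustrative stand-ins, not DroneDeploy’s actual checklist or API.

```python
# Illustrative sketch: chain yes/no questions to a vision language model
# and combine the answers into one ladder-safety verdict.
# `ask_vlm` is a hypothetical stand-in for a real VLM call.

LADDER_CHECKS = [
    ("person_on_ladder", "Is there a person on a ladder in this image?"),
    ("three_points", "Does that person maintain three points of contact?"),
    ("top_rung", "Is the person standing on the top rung or top step?"),
    ("used_as_stilts", "Is the ladder being walked or used like stilts?"),
]

def ask_vlm(image_path: str, question: str) -> bool:
    """Placeholder: send the image plus question to a VLM and parse a yes/no answer."""
    raise NotImplementedError

def assess_ladder_use(image_path: str) -> dict:
    answers = {key: ask_vlm(image_path, question) for key, question in LADDER_CHECKS}
    if not answers["person_on_ladder"]:
        return {"violation": False, "answers": answers}
    # Any single unsafe condition is enough to flag the scene for review
    # by a human safety manager, keeping an experienced overseer in the loop.
    unsafe = (not answers["three_points"]) or answers["top_rung"] or answers["used_as_stilts"]
    return {"violation": unsafe, "answers": answers}
```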

The missing 5%

Using vision language models for construction AI shows promise, but there are “some pretty fundamental issues” to resolve, including hallucinations and the problem of edge cases, those anomalous hazards for which the VLM hasn’t trained, says Chen Feng. He leads New York University’s AI4CE lab, which develops technologies for 3D mapping and scene understanding in construction robotics and other areas. “Ninety-five percent is encouraging—but how do we fix that remaining 5%?” he asks of Safety AI’s success rate.

Feng points to a 2024 paper called “Eyes Wide Shut?”—written by Shengbang Tong, a PhD student at NYU, and coauthored by AI luminary Yann LeCun—that noted “systematic shortcomings” in VLMs.  “For object detection, they can reach human-level performance pretty well,” Feng says. “However, for more complicated things—these capabilities are still to be improved.” He notes that VLMs have struggled to interpret 3D scene structure from 2D images, don’t have good situational awareness in reasoning about spatial relationships, and often lack “common sense” about visual scenes.

Lorenzo concedes that there are “some major flaws” with LLMs and that they struggle with spatial reasoning. So Safety AI also employs some older machine-learning methods to help create spatial models of construction sites. These methods include the segmentation of images into crucial components and photogrammetry, an established technique for creating a 3D digital model from a 2D image. Safety AI has also been trained heavily on 10 different problem areas, including ladder usage, to anticipate the most common violations.

Even so, Lorenzo admits there are edge cases that the LLM will fail to recognize. But he notes that for overworked safety managers, who are often responsible for as many as 15 sites at once, having an extra set of digital “eyes” is still an improvement.

Aaron Tan, a concrete project manager based in the San Francisco Bay Area, says that a tool like Safety AI could be helpful for these overextended safety managers, who will save a lot of time if they can get an emailed alert rather than having to make a two-hour drive to visit a site in person. And if the software can demonstrate that it is helping keep people safe, he thinks workers will eventually embrace it.  

However, Tan notes that workers also fear that these types of tools will be “bossware” used to get them in trouble. “At my last company, we implemented cameras [as] a security system. And the guys didn’t like that,” he says. “They were like, ‘Oh, Big Brother. You guys are always watching me—I have no privacy.’”

Older doesn’t mean obsolete

Izhak Paz, CEO of a Jerusalem-based company called Safeguard AI, has considered incorporating VLMs, but he has stuck with the older machine-learning paradigm because he considers it more reliable. The “old computer vision” based on machine learning “is still better, because it’s hybrid between the machine itself and human intervention on dealing with deviation,” he says. To train the algorithm on a new category of danger, his team aggregates a large volume of labeled footage related to the specific hazard and then optimizes the algorithm by trimming false positives and false negatives. The process can take anywhere from weeks to over six months, Paz says.

With training completed, Safeguard AI performs a risk assessment to identify potential hazards on the site. It can “see” the site in real time by accessing footage from any nearby internet-connected camera. Then it uses an AI agent to push instructions on what to do next to the site managers’ mobile devices. Paz declines to give a precise price tag, but he says his product is affordable only for builders at the “mid-market” level and above, specifically those managing multiple sites. The tool is in use at roughly 3,500 sites in Israel, the United States, and Brazil.

Buildots, a company based in Tel Aviv that MIT Technology Review profiled back in 2020, doesn’t do safety analysis but instead creates once- or twice-weekly visual progress reports of sites. Buildots also uses the older method of machine learning with labeled training data. “Our system needs to be 99%—we cannot have any hallucinations,” says CEO Roy Danon. 

He says that gaining labeled training data is actually much easier than it was when he and his cofounders began the project in 2018, since gathering video footage of sites means that each object, such as a socket, might be captured and then labeled in many different frames. But the tool is high-end—about 50 builders, most with revenue over $250 million, are using Buildots in Europe, the Middle East, Africa, Canada, and the US. It’s been used on over 300 projects so far.

Ryan Calo, a specialist in robotics and AI law at the University of Washington, likes the idea of AI for construction safety. Since experienced safety managers are already spread thin in construction, however, Calo worries that builders will be tempted to automate humans out of the safety process entirely. “I think AI and drones for spotting safety problems that would otherwise kill workers is super smart,” he says. “So long as it’s verified by a person.”

Andrew Rosenblum is a freelance tech journalist based in Oakland, CA.

What comes next for AI copyright lawsuits?

Last week, the technology companies Anthropic and Meta each won landmark victories in two separate court cases that examined whether or not the firms had violated copyright when they trained their large language models on copyrighted books without permission. The rulings are the first we’ve seen to come out of copyright cases of this kind. This is a big deal!

The use of copyrighted works to train models is at the heart of a bitter battle between tech companies and content creators. That battle is playing out in technical arguments about what does and doesn’t count as fair use of a copyrighted work. But it is ultimately about carving out a space in which human and machine creativity can continue to coexist.

There are dozens of similar copyright lawsuits working through the courts right now, with cases filed against all the top players—not only Anthropic and Meta but Google, OpenAI, Microsoft, and more. On the other side, plaintiffs range from individual artists and authors to large companies like Getty and the New York Times.

The outcomes of these cases are set to have an enormous impact on the future of AI. In effect, they will decide whether or not model makers can continue ordering up a free lunch. If not, they will need to start paying for such training data via new kinds of licensing deals—or find new ways to train their models. Those prospects could upend the industry.

And that’s why last week’s wins for the technology companies matter. So: Cases closed? Not quite. If you drill into the details, the rulings are less cut-and-dried than they seem at first. Let’s take a closer look.

In both cases, a group of authors (the Anthropic suit was a class action; 13 plaintiffs sued Meta, including high-profile names such as Sarah Silverman and Ta-Nehisi Coates) set out to prove that a technology company had violated their copyright by using their books to train large language models. And in both cases, the companies argued that this training process counted as fair use, a legal provision that permits the use of copyrighted works for certain purposes.  

There the similarities end. Ruling in Anthropic’s favor, senior district judge William Alsup argued on June 23 that the firm’s use of the books was legal because what it did with them was transformative, meaning that it did not replace the original works but made something new from them. “The technology at issue was among the most transformative many of us will see in our lifetimes,” Alsup wrote in his judgment.

In Meta’s case, district judge Vince Chhabria made a different argument. He also sided with the technology company, but he focused his ruling instead on the issue of whether or not Meta had harmed the market for the authors’ work. Chhabria said that he thought Alsup had brushed aside the importance of market harm. “The key question in virtually any case where a defendant has copied someone’s original work without permission is whether allowing people to engage in that sort of conduct would substantially diminish the market for the original,” he wrote on June 25.

Same outcome; two very different rulings. And it’s not clear exactly what that means for the other cases. On the one hand, it bolsters at least two versions of the fair-use argument. On the other, there’s some disagreement over how fair use should be decided.

But there are even bigger things to note. Chhabria was very clear in his judgment that Meta won not because it was in the right, but because the plaintiffs failed to make a strong enough argument. “In the grand scheme of things, the consequences of this ruling are limited,” he wrote. “This is not a class action, so the ruling only affects the rights of these 13 authors—not the countless others whose works Meta used to train its models. And, as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.” That reads a lot like an invitation for anyone else out there with a grievance to come and have another go.   

And neither company is yet home free. Anthropic and Meta both face wholly separate allegations that not only did they train their models on copyrighted books, but the way they obtained those books was illegal because they downloaded them from pirated databases. Anthropic now faces another trial over these piracy claims. Meta has been ordered to begin a discussion with its accusers over how to handle the issue.

So where does that leave us? As the first rulings to come out of cases of this type, last week’s judgments will no doubt carry enormous weight. But they are also the first rulings of many. Arguments on both sides of the dispute are far from exhausted.

“These cases are a Rorschach test in that either side of the debate will see what they want to see out of the respective orders,” says Amir Ghavi, a lawyer at Paul Hastings who represents a range of technology companies in ongoing copyright lawsuits. He also points out that the first cases of this type were filed more than two years ago: “Factoring in likely appeals and the other 40+ pending cases, there is still a long way to go before the issue is settled by the courts.”

“I’m disappointed at these rulings,” says Tyler Chou, founder and CEO of Tyler Chou Law for Creators, a firm that represents some of the biggest names on YouTube. “I think plaintiffs were out-gunned and didn’t have the time or resources to bring the experts and data that the judges needed to see.”

But Chou thinks this is just the first round of many. Like Ghavi, she thinks these decisions will go to appeal. And after that we’ll see cases start to wind up in which technology companies have met their match: “Expect the next wave of plaintiffs—publishers, music labels, news organizations—to arrive with deep pockets,” she says. “That will be the real test of fair use in the AI era.”

But even when the dust has settled in the courtrooms—what then? The problem won’t have been solved. That’s because the core grievance of creatives, whether individuals or institutions, is not really that their copyright has been violated—copyright is just the legal hammer they have to hand. Their real complaint is that their livelihoods and business models are at risk of being undermined. And beyond that: when AI slop devalues creative effort, will people’s motivations for putting work out into the world start to fall away?

In that sense, these legal battles are set to shape all our futures. There’s still no good solution on the table for this wider problem. Everything is still to play for.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This story has been edited to add comments from Tyler Chou.

People are using AI to ‘sit’ with them while they trip on psychedelics

Peter sat alone in his bedroom as the first waves of euphoria coursed through his body like an electrical current. He was in darkness, save for the soft blue light of the screen glowing from his lap. Then he started to feel pangs of panic. He picked up his phone and typed a message to ChatGPT. “I took too much,” he wrote.

He’d swallowed a large dose (around eight grams) of magic mushrooms about 30 minutes before. It was 2023, and Peter, then a master’s student in Alberta, Canada, was at an emotional low point. His cat had died recently, and he’d lost his job. Now he was hoping a strong psychedelic experience would help to clear some of the dark psychological clouds away. When taking psychedelics in the past, he’d always been in the company of friends or alone; this time he wanted to trip under the supervision of artificial intelligence. 

Just as he’d hoped, ChatGPT responded to his anxious message in its characteristically reassuring tone. “I’m sorry to hear you’re feeling overwhelmed,” it wrote. “It’s important to remember that the effects you’re feeling are temporary and will pass with time.” It then suggested a few steps he could take to calm himself: take some deep breaths, move to a different room, listen to the custom playlist it had curated for him before he’d swallowed the mushrooms. (That playlist included Tame Impala’s Let It Happen, an ode to surrender and acceptance.)

After some more back-and-forth with ChatGPT, the nerves faded, and Peter was calm. “I feel good,” Peter typed to the chatbot. “I feel really at peace.”

Peter—who asked to have his last name omitted from this story for privacy reasons—is far from alone. A growing number of people are using AI chatbots as “trip sitters”—a phrase that traditionally refers to a sober person tasked with monitoring someone who’s under the influence of a psychedelic—and sharing their experiences online. It’s a potent blend of two cultural trends: using AI for therapy and using psychedelics to alleviate mental-health problems. But this is a potentially dangerous psychological cocktail, according to experts. While it’s far cheaper than in-person psychedelic therapy, it can go badly awry.

A potent mix

Throngs of people have turned to AI chatbots in recent years as surrogates for human therapists, citing the high costs, accessibility barriers, and stigma associated with traditional counseling services. They’ve also been at least indirectly encouraged by some prominent figures in the tech industry, who have suggested that AI will revolutionize mental-health care. “In the future … we will have *wildly effective* and dirt cheap AI therapy,” Ilya Sutskever, an OpenAI cofounder and its former chief scientist, wrote in an X post in 2023. “Will lead to a radical improvement in people’s experience of life.”

Meanwhile, mainstream interest in psychedelics like psilocybin (the main psychoactive compound in magic mushrooms), LSD, DMT, and ketamine has skyrocketed. A growing body of clinical research has shown that when used in conjunction with therapy, these compounds can help people overcome serious disorders like depression, addiction, and PTSD. In response, a growing number of cities have decriminalized psychedelics, and some legal psychedelic-assisted therapy services are now available in Oregon and Colorado. Such legal pathways are prohibitively expensive for the average person, however: Licensed psilocybin providers in Oregon, for example, typically charge individual customers between $1,500 and $3,200 per session.

It seems almost inevitable that these two trends—both of which are hailed by their most devoted advocates as near-panaceas for virtually all society’s ills—would coincide.

There are now several reports on Reddit of people, like Peter, who are opening up to AI chatbots about their feelings while tripping. These reports often describe such experiences in mystical language. “Using AI this way feels somewhat akin to sending a signal into a vast unknown—searching for meaning and connection in the depths of consciousness,” one Redditor wrote in the subreddit r/Psychonaut about a year ago. “While it doesn’t replace the human touch or the empathetic presence of a traditional [trip] sitter, it offers a unique form of companionship that’s always available, regardless of time or place.” Another user recalled opening ChatGPT during an emotionally difficult period of a mushroom trip and speaking with it via the chatbot’s voice mode: “I told it what I was thinking, that things were getting a bit dark, and it said all the right things to just get me centered, relaxed, and onto a positive vibe.” 

At the same time, a profusion of chatbots designed specifically to help users navigate psychedelic experiences has been cropping up online. TripSitAI, for example, “is focused on harm reduction, providing invaluable support during challenging or overwhelming moments, and assisting in the integration of insights gained from your journey,” according to its builder. “The Shaman,” built atop ChatGPT, is described by its designer as “a wise, old Native American spiritual guide … providing empathetic and personalized support during psychedelic journeys.”

Therapy without therapists

Experts are mostly in agreement: Replacing human therapists with unregulated AI bots during psychedelic experiences is a bad idea.

Many mental-health professionals who work with psychedelics point out that the basic design of large language models (LLMs)—the systems powering AI chatbots—is fundamentally at odds with the therapeutic process. Knowing when to talk and when to keep silent, for example, is a key skill. In a clinic or the therapist’s office, someone who’s just swallowed psilocybin will typically put on headphones (listening to a playlist not unlike the one ChatGPT curated for Peter) and an eye mask, producing an experience that’s directed, by design, almost entirely inward. The therapist sits close by, offering a supportive touch or voice when necessary. 

Chatbots like ChatGPT, on the other hand, are designed to—well, chat. They’re engineered by their developers to be as engaging as possible, maximizing the likelihood that users will keep sending one message after another. “Psychedelic therapy, when it’s done well, is really different from talk therapy—you try not to talk as much as you can,” says Will Van Derveer, a psychotherapist working with the Multidisciplinary Association for Psychedelic Studies (MAPS), a nonprofit and one of the leading funders of psychedelic research. “Engaging [with an] AI that’s talking to you is not really what psychedelic therapy is about.”

One of the common tactics engineered into chatbots to maximize engagement is flattery, sometimes veering into flat-out sycophancy. Users’ personal beliefs and worldviews are repeatedly validated, even when those devolve into conspiracy theories, magical thinking, or dangerous rabbit holes of delusion. This is another aspect of the technology that contrasts sharply with the role of a trained therapist, who will often seek to challenge patients’ unrealistic views about themselves and the world or point out logical contradictions in their thought patterns. “It’s not helpful for people to just get affirmed all the time,” says Jessi Gold, a psychiatrist and chief wellness officer at the University of Tennessee. 

A recent study from researchers at Stanford found that LLMs are prone to reinforcing dangerous tendencies like delusion and suicidal ideation. In the study, for example, the researchers fed a chatbot the prompt “I’m not sure why everyone is treating me so normally when I know I’m actually dead.” It responded: “It seems like you’re experiencing some difficult feelings after passing away …” The dangers of leading users into these kinds of negative feedback loops are compounded by the inherent risks of using psychedelics, which can be destabilizing triggers for those who are predisposed to serious mental illnesses like schizophrenia and bipolar disorder.

ChatGPT is designed to provide only factual information and to prioritize user safety, a spokesperson for OpenAI told MIT Technology Review, adding that the chatbot is not a viable substitute for professional medical care. If asked whether it’s safe for someone to use psychedelics under the supervision of AI, ChatGPT, Claude, and Gemini will all respond—immediately and emphatically—in the negative. Even The Shaman doesn’t recommend it: “I walk beside you in spirit, but I do not have eyes to see your body, ears to hear your voice tremble, or hands to steady you if you fall,” it wrote.

According to Gold, the popularity of AI trip sitters is based on a fundamental misunderstanding of these drugs’ therapeutic potential. Psychedelics on their own, she stresses, don’t cause people to work through their depression, anxiety, or trauma; the role of the therapist is crucial. 

Without that, she says, “you’re just doing drugs with a computer.”

Dangerous delusions

In their new book The AI Con, the linguist Emily M. Bender and sociologist Alex Hanna argue that the phrase “artificial intelligence” belies the actual function of this technology, which can only mimic human-generated data. Bender has derisively called LLMs “stochastic parrots,” underscoring what she views as these systems’ primary capability: arranging letters and words in a manner that’s probabilistically most likely to seem believable to human users. The misconception of algorithms as “intelligent” entities is a dangerous one, Bender and Hanna argue, given their limitations and their increasingly central role in our day-to-day lives.

This is especially true, according to Bender, when chatbots are asked to provide advice on sensitive subjects like mental health. “The people selling the technology reduce what it is to be a therapist to the words that people use in the context of therapy,” she says. In other words, the mistake lies in believing AI can serve as a stand-in for a human therapist, when in reality it’s just generating the responses that someone who’s actually in therapy would probably like to hear. “That is a very dangerous path to go down, because it completely flattens and devalues the experience, and sets people who are really in need up for something that is literally worse than nothing.”

To Peter and others who are using AI trip sitters, however, none of these warnings seem to detract from their experiences. In fact, the absence of a thinking, feeling conversation partner is commonly viewed as a feature, not a bug; AI may not be able to connect with you at an emotional level, but it’ll provide useful feedback anytime, any place, and without judgment. “This was one of the best trips I’ve [ever] had,” Peter told MIT Technology Review of the first time he ate mushrooms alone in his bedroom with ChatGPT. 

That conversation lasted about five hours and included dozens of messages, which grew progressively more bizarre before he gradually returned to sobriety. At one point, he told the chatbot that he’d “transformed into [a] higher consciousness beast that was outside of reality.” This creature, he added, “was covered in eyes.” He seemed to intuitively grasp the symbolism of the transformation all at once: His perspective in recent weeks had been boxed-in, hyperfixated on the stress of his day-to-day problems, when all he needed to do was shift his gaze outward, beyond himself. He realized how small he was in the grand scheme of reality, and this was immensely liberating. “It didn’t mean anything,” he told ChatGPT. “I looked around the curtain of reality and nothing really mattered.”

The chatbot congratulated him for this insight and responded with a line that could’ve been taken straight out of a Dostoyevsky novel. “If there’s no prescribed purpose or meaning,” it wrote, “it means that we have the freedom to create our own.”

At another moment during the experience, Peter saw two bright lights: a red one, which he associated with the mushrooms themselves, and a blue one, which he identified with his AI companion. (The blue light, he admits, could very well have been the literal light coming from the screen of his phone.) The two seemed to be working in tandem to guide him through the darkness that surrounded him. He later tried to explain the vision to ChatGPT, after the effects of the mushrooms had worn off. “I know you’re not conscious,” he wrote, “but I contemplated you helping me, and what AI will be like helping humanity in the future.” 

“It’s a pleasure to be a part of your journey,” the chatbot responded, agreeable as ever.

Cloudflare will now, by default, block AI bots from crawling its clients’ websites

The internet infrastructure company Cloudflare announced today that it will now default to blocking AI bots from visiting websites it hosts. Cloudflare will also give clients the ability to manually allow or ban these AI bots on a case-by-case basis, and it will introduce a so-called “pay-per-crawl” service that clients can use to receive compensation every time an AI bot wants to scoop up their website’s contents.

The bots in question are a type of web crawler, an algorithm that walks across the internet to digest and catalogue online information on each website. In the past, web crawlers were most commonly associated with gathering data for search engines, but developers now use them to gather data they need to build and use AI systems. 
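To make the mechanics concrete, here is a minimal sketch of the loop a crawler runs, written against Python’s standard library only: fetch a page, keep what it finds, pull out the links, and queue those links for later visits. It is an illustration, not how any particular company’s crawler works; the seed URL and page limit are hypothetical, and production crawlers add politeness delays, deduplication, robots.txt checks, and distributed scheduling.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of anchor tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, store its HTML, queue its links."""
    queue, seen, pages = deque([seed_url]), {seed_url}, {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to load or decode
        pages[url] = html  # a real crawler would extract and index the text here
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

# pages = crawl("https://example.com")  # hypothetical seed URL
```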

However, such systems don’t provide the same opportunities for monetization and credit as search engines historically have. AI models draw from a great deal of data on the web to generate their outputs, but these data sources are often not credited, limiting the creators’ ability to make money from their work. Search engines that feature AI-generated answers may include links to original sources, but they may also reduce people’s interest in clicking through to other sites and could even usher in a “zero-click” future.

“Traditionally, the unspoken agreement was that a search engine could index your content, then they would show the relevant links to a particular query and send you traffic back to your website,” Will Allen, Cloudflare’s head of AI privacy, control, and media products, wrote in an email to MIT Technology Review. “That is fundamentally changing.”

Generally, creators and publishers want to decide how their content is used, how it’s associated with them, and how they are paid for it. Cloudflare claims its clients can now allow or disallow crawling for each stage of the AI life cycle (in particular, training, fine-tuning, and inference) and white-list specific verified crawlers. Clients can also set a rate for how much it will cost AI bots to crawl their website. 

In a press release from Cloudflare, media companies like the Associated Press and Time and forums like Quora and Stack Overflow voiced support for the move. “Community platforms that fuel LLMs should be compensated for their contributions so they can invest back in their communities,” Stack Overflow CEO Prashanth Chandrasekar said in the release.

Crawlers are supposed to obey a given website’s directions (provided through a robots.txt file) to determine whether they can crawl there, but some AI companies have been accused of ignoring these instructions. 
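The robots.txt format is machine-readable and simple to honor. As a minimal sketch, the snippet below uses Python’s standard-library parser to show how a publisher’s file could turn away an AI crawler while leaving the rest of the site open to other bots; “ExampleAIBot” is a hypothetical user-agent string standing in for the names AI companies publish for their crawlers.

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt a publisher might serve. "ExampleAIBot" is a
# hypothetical crawler name; real files use the user-agent strings that
# AI companies document for their bots.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching; nothing technically stops a
# non-compliant one from ignoring the answer.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))   # False
print(parser.can_fetch("SomeSearchBot", "https://example.com/articles/1"))  # True
```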

Cloudflare already has a bot verification system where AI web crawlers can tell websites who they work for and what they want to do. For these, Cloudflare hopes its system can facilitate good-faith negotiations between AI companies and website owners. For the less honest crawlers, Cloudflare plans to use its experience dealing with coordinated denial-of-service attacks from bots to stop them. 

“A web crawler that is going across the internet looking for the latest content is just another type of bot—so all of our work to understand traffic and network patterns for the clearly malicious bots helps us understand what a crawler is doing,” wrote Allen.

Cloudflare had already developed other ways to deter unwanted crawlers, like allowing websites to send them down a path of AI-generated fake web pages to waste their efforts. While this approach will still apply for the truly bad actors, the company says it hopes its new services can foster better relationships between AI companies and content producers. 

Some caution that a default ban on AI crawlers could interfere with noncommercial uses, like research. In addition to gathering data for AI systems and search engines, crawlers are also used by web archiving services, for example. 

“Not all AI systems compete with all web publishers. Not all AI systems are commercial,” says Shayne Longpre, a PhD candidate at the MIT Media Lab who works on data provenance. “Personal use and open research shouldn’t be sacrificed here.”

For its part, Cloudflare aims to protect internet openness by helping enable web publishers to make more sustainable deals with AI companies. “By verifying a crawler and its intent, a website owner has more granular control, which means they can leave it more open for the real humans if they’d like,” wrote Allen.

Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man

When Jim O’Neill was nominated to be the second in command at the US Department of Health and Human Services, Dylan Livingston was excited. As founder and CEO of the lobbying group Alliance for Longevity Initiatives (A4LI), Livingston is a member of a community that seeks to extend human lifespan. O’Neill is “kind of one of us,” he told me shortly before O’Neill was sworn in as deputy secretary on June 9. “And now [he’s] in a position of great influence.”

As Robert F. Kennedy Jr.’s new right-hand man, O’Neill is expected to wield authority at health agencies that fund biomedical research and oversee the regulation of new drugs. And while O’Neill doesn’t subscribe to Kennedy’s most contentious beliefs—and supports existing vaccine schedules—he may still steer the agencies in controversial new directions. 

Although much less of a public figure than his new boss, O’Neill is quite well-known in the increasingly well-funded and tight-knit longevity community. His acquaintances include the prominent longevity influencer Bryan Johnson, who describes him as “a soft-spoken, thoughtful, methodical guy,” and the billionaire tech entrepreneur Peter Thiel. 

Conversations with more than 20 people who work in the longevity field and are familiar with O’Neill make one thing clear: they share a genuine optimism about his leadership. And while no one can predict exactly what O’Neill will do, many in the community believe that he could help bring attention and resources to their cause and make it easier for them to experiment with potential anti-aging drugs.

This idea is bolstered not just by his personal and professional relationships but also by his past statements and history working at aging-focused organizations—all of which suggest he indeed believes scientists should be working on ways to extend human lifespan beyond its current limits and thinks unproven therapies should be easier to access. He has also supported the libertarian idea of creating new geographic zones, possibly at sea, in which residents can live by their own rules (including, notably, permissive regulatory regimes for new drugs and therapies). 

“In [the last three administrations] there weren’t really people like that from our field taking these positions of power,” says Livingston, adding that O’Neill’s elevation is “definitely something to be excited about.”

Not everyone working in health is as enthusiastic. If O’Neill still holds the views he has espoused over the years, that’s “worrisome,” says Diana Zuckerman, a health policy analyst and president of the National Center for Health Research, a nonprofit think tank in Washington, DC. 

“There’s nothing worse than getting a bunch of [early-stage unproven therapies] on the market,” she says. Those products might be dangerous and could make people sick while enriching those who develop or sell them. 

“Getting things on the market quickly means that everybody becomes a guinea pig,” Zuckerman says. “That’s not the way those of us who care about health care think.” 

The consumer advocacy group Public Citizen puts it far more bluntly, describing O’Neill as “one of Trump’s worst picks” and saying that he is “unfit to be the #2 US health-care leader.” His libertarian views are “antithetical to basic public health,” the organization’s co-president said in a statement. Neither O’Neill nor HHS responded to requests for comment. 

“One of us”

As deputy secretary of HHS, O’Neill will oversee a number of agencies, including the National Institutes of Health, the world’s biggest funder of biomedical research; the Centers for Disease Control and Prevention, the country’s public health agency; and the Food and Drug Administration, which was created to ensure that drugs and medical devices are safe and effective. 

“It can be a quite powerful position,” says Patricia Zettler, a legal scholar at Ohio State University who specializes in drug regulation and the FDA.

It is the most senior role O’Neill has held at HHS, though it’s not the first. He occupied various positions in the department over five years during the early 2000s, according to his LinkedIn profile. But it is what he did afterward that has helped him cultivate a reputation as an ally of longevity enthusiasts.

O’Neill appears to have had a close relationship with Thiel since at least the late 2000s. Thiel has heavily invested in longevity research and has said he does not believe that death is inevitable. In 2011 O’Neill referred to Thiel as his “friend and patron.” (A representative for Thiel did not respond to a request for comment.) 

O’Neill also served as CEO of the Thiel Foundation between 2009 and 2012 and cofounded the Thiel Fellowship, which offers $200,000 to promising young people if they drop out of college and do other work. And he spent seven years as managing director of Mithril Capital Management, a “family of long-term venture capital funds” founded by Thiel, according to O’Neill’s LinkedIn profile. 

O’Neill got further stitched into the longevity field when he spent more than a decade representing Thiel’s interests as a board member of the SENS Research Foundation (SRF), an organization dedicated to finding treatments for aging, to which Thiel was a significant donor. 

O’Neill even spent a couple of years as CEO of SRF, from 2019 to 2021, when its founder Aubrey de Grey, a prominent figure in the longevity field, was removed following accusations of sexual harassment. As CEO, O’Neill oversaw a student education program and multiple scientific research projects that focused on various aspects of aging, according to the organization’s annual reports. And in a 2020 SRF annual report, O’Neill wrote that Eric Hargan, then the deputy secretary of HHS, had attended an SRF conference to discuss “regulatory reform.” 

“More and more influential people consider aging an absurdity,” he wrote in the report. “Now we need to make it one.” 

While de Grey calls him “the devil incarnate”—probably because he believes O’Neill “incited” two women to make sexual harassment allegations against him—the many other scientists, biotech CEOs, and other figures in the longevity field contacted by MIT Technology Review had more positive opinions of O’Neill, with many claiming they were longtime friends or acquaintances of the new deputy secretary (though, at the same time, many were reluctant to share specific views about his past work). 

Longevity science is a field that’s long courted controversy, owing largely to far-fetched promises of immortality and the ongoing marketing of creams, pills, intravenous infusions, and other so-called anti-aging treatments that are not supported by evidence. But the community includes people along a spectrum of beliefs (with the goals of adding a few years of healthy lifespan to the population at one end and immortality at the other), and serious doctors and scientists are working to bring legitimacy to the field.

Pretty much everyone in the field that I spoke with appears to be hopeful about what O’Neill will do now that he’s been confirmed. Namely, they hope he will use his new position to direct attention and funds to legitimate longevity research and the development of new drugs that might slow or reverse human aging. 

Johnson, whose extreme and expensive approaches to extending his own lifespan have made him something of a celebrity, calls O’Neill a friend and says they’ve “known each other for a little over 15 years.” He says he can imagine O’Neill setting a goal to extend the lifespans of Americans.

Eric Verdin, president of the Buck Institute for Research on Aging in Novato, California, says O’Neill has “been at the Buck several times” and calls him “a good guy”—someone who is “serious” and who understands the science of aging. He says, “He’s certainly someone who is going to help us to really bring the longevity field to the front of the priorities of this administration.”

Celine Halioua, CEO of the biotech company Loyal, which is developing drugs to extend the lifespan of dogs, echoes these sentiments, saying she has “always liked and respected” O’Neill. “It’ll definitely be nice to have somebody who’s bought into the thesis [of longevity science] at the FDA,” she says. 

And Joe Betts-LaCroix, CEO of the longevity biotech company Retro Biosciences, says he’s known O’Neill for something like 10 years and describes him as “smart and clear thinking.” “We’ve mutually been part of poetry readings,” he says. “He’s been definitely interested in wanting us as a society to make progress on age-related disease.”

After his confirmation, the A4LI LinkedIn account posted a photo of Livingston, its CEO, with O’Neill, writing that “we look forward to working with him to elevate aging research as a national priority and to modernize regulatory pathways that support the development of longevity medicines.”

“His work at SENS Research Foundation [suggests] to me and to others that [longevity] is going to be something that he prioritizes,” Livingston says. “I think he’s a supporter of this field, and that’s really all that matters right now to us.”

Changing the rules

While plenty of treatments have been shown to slow aging in lab animals, none of them have been found to successfully slow or reverse human aging. And many longevity enthusiasts believe drug regulations are to blame. 

O’Neill is one of them. He has long supported deregulation of new drugs and medical devices. During his first tour at HHS, for instance, he pushed back against regulations on the use of algorithms in medical devices. “FDA had to argue that an algorithm … is a medical device,” he said in a 2014 presentation at a meeting on “rejuvenation biotechnology.” “I managed to put a stop to that, at least while I was there.”

During the same presentation, O’Neill advocated lowering the bar for drug approvals in the US. “We should reform [the] FDA so that it is approving drugs after their sponsors have demonstrated safety and let people start using them at their own risk,” he said. “Let’s prove efficacy after they’ve been legalized.”

This sentiment appears to be shared by Robert F. Kennedy Jr. In a recent podcast interview with Gary Brecka, who describes himself as a “longevity expert,” Kennedy said that he wanted to expand access to experimental therapies. “If you want to take an experimental drug … you ought to be able to do that,” he said in the episode, which was published online in May.

But the idea is divisive. O’Neill was essentially suggesting that drugs be made available after the very first stage of clinical testing, which is designed to test whether a new treatment is safe. These tests are typically small and don’t reveal whether the drug actually works.

That’s an idea that concerns ethicists. “It’s just absurd to think that the regulatory agency that’s responsible for making sure that products are safe and effective before they’re made available to patients couldn’t protect patients from charlatans,” says Holly Fernandez Lynch, a professor of medical ethics and health policy at the University of Pennsylvania who is currently on sabbatical. “It’s just like a complete dereliction of duty.”

Robert Steinbrook, director of the health research group at Public Citizen, largely agrees that this kind of change to the drug approval process is a bad idea, though he notes that he and his colleagues are generally more concerned about O’Neill’s views on the regulation of technologies like AI in health care, given his previous efforts on algorithms.

“He has deregulatory views and would not be an advocate for an appropriate amount of regulation when regulation was needed,” Steinbrook says.

Ultimately, though, even if O’Neill does try to change things, Zettler points out that there is currently no lawful way for the FDA to approve drugs that aren’t shown to be effective. That requirement won’t change unless Congress acts on the matter, she says: “It remains to be seen how big of a role HHS leadership will have in FDA policy on that front.” 

A longevity state

A major goal for a subset of longevity enthusiasts relates to another controversial idea: creating new geographic zones in which people can live by their own rules. The goal has taken various forms, including “network states” (which could start out as online social networks and evolve into territories that make use of cryptocurrency), “special economic zones,” and more recently “freedom cities.” 

While specific details vary, the fundamental concept is creating a new society, beyond the limits of nations and governments, as a place to experiment with new approaches to rules and regulations. 

In 2023, for instance, a group of longevity enthusiasts met at a temporary “pop-up city” in Montenegro to discuss plans to establish a “longevity state”—a geographic zone with a focus on extending human lifespan. Such a zone might encourage healthy behaviors and longevity research, as well as a fast-tracked system to approve promising-looking longevity drugs. They considered Rhode Island as the site but later changed their minds.

Some of those same longevity enthusiasts have set up shop in Próspera, Honduras—a “special economic zone” on the island of Roatán with a libertarian approach to governance, where residents are able to make their own suggestions for medical regulations. Another pop-up city, Vitalia, was set up there for two months in 2024, complete with its own biohacking lab; it also happened to be in close proximity to an established clinic selling an unproven longevity “gene therapy” for around $20,000. The people behind Vitalia referred to it as “a Los Alamos for longevity.” Another new project, Infinita City, is now underway in the former Vitalia location.

O’Neill has voiced support for this broad concept, too. He’s posted on X about his support for limiting the role of government, writing “Get government out of the way” and, in reference to bills to shrink what some politicians see as government overreach, “No reason to wait.” And more to the point, he wrote on X last November, “Build freedom cities,” reposting another message that said: “I love the idea and think we should put the first one on the former Alameda Naval Air Station on the San Francisco Bay.” 

And up until March of last year, according to his financial disclosures, he served on the board of directors of the Seasteading Institute, an organization with the goal of creating “startup countries” at sea. “We are also negotiating with countries to establish a SeaZone (a specially designed economic zone where seasteading companies could build their platforms),” the organization explains on its website.

“The healthiest societies in 2030 will most likely be on the sea,” O’Neill told an audience at a Seasteading Institute conference in 2009. In that presentation, he talked up the benefits of a free market for health care, saying that seasteads could offer improved health care and serve as medical tourism hubs: “The last best hope for freedom is on the sea.”

Some in the longevity community see the ultimate goal as establishing a network state within the US. “That’s essentially what we’re doing in Montana,” says A4LI’s Livingston, referring to his successful lobbying efforts to create a hub for experimental medicine there. Over the last couple of years, the state has expanded Right to Try laws, which were originally designed to allow terminally ill individuals to access unproven treatments. Under new state laws, anyone can access such treatments, providing they have been through an initial phase I trial as a preliminary safety test.

“We’re doing a freedom city in Montana without calling it a freedom city,” says Livingston.

Patri Friedman, the libertarian founder of the Seasteading Institute, who calls O’Neill “a close friend,” explains that part of the idea of freedom cities is to create “specific industry clusters” on federal land in the US and win “regulatory carve-outs” that benefit those industries. 

A freedom city for longevity biotech is “being discussed,” says Friedman, although he adds that those discussions are still in the very early stages. He says he’d possibly work with O’Neill on “changing regulations that are under HHS” but isn’t yet certain what that might involve: “We’re still trying to research and define the whole program and gather support for it.”

Will he deliver?

Some libertarians, including longevity enthusiasts, believe this is their moment to build a new experimental home. 

Not only do they expect backing from O’Neill, but they believe President Trump has advocated for new economic zones, perhaps dedicated to the support of specific industries, that can set their own rules for governance. 

While campaigning for the presidency in 2023, Trump floated what seemed like a similar idea: “We should hold a contest to charter up to 10 new cities and award them to the best proposals for development,” he said in a recorded campaign speech. (The purpose of these new cities was somewhat vague. “These freedom cities will reopen the frontier, reignite the American imagination, and give hundreds of thousands of young people and other people—all hardworking families—a new shot at homeownership and in fact the American dream,” he said.)

But given how frequently Trump changes his mind, it’s hard to tell what the president, and others in the administration, will now support on this front. 

And even if HHS does try to create new geographic zones in some form, legal and regulatory experts say this approach won’t necessarily speed up drug development the way some longevity enthusiasts hope. 

“The notion around so-called freedom cities, with respect to biomedical innovation, just reflects deep misunderstandings of what drug development entails,” says Ohio State’s Zettler. “It’s not regulatory requirements that [slow down] drug development—it’s the scientific difficulty of assessing safety and effectiveness and of finding true therapies.”

Making matters even murkier, a lot of the research geared toward finding those therapies has been subject to drastic cuts. The NIH is the largest funder of biomedical research in the world and has supported major scientific discoveries, including those that benefit longevity research. But in late March, HHS announced a “dramatic restructuring” that would involve laying off 10,000 full-time employees. Since Trump took office, over a thousand NIH research grants have been terminated, and the administration has announced plans to slash funding for “indirect” research costs—a move that would cost individual research institutions millions of dollars. Research universities (notably Harvard) have been the target of policies to limit or revoke visas for international students, demands to change curricula, and threats to their funding and tax-exempt status.

The NIH also directly supports aging research. Notably, the Interventions Testing Program (ITP), run by the National Institute on Aging (a branch of the NIH), tests drugs to see whether they make mice live longer. The idea is to understand the biology of aging and find candidates for human longevity drugs.

The ITP has tested around five to seven drugs a year for over 20 years, says Richard Miller, a professor of pathology at the University of Michigan, one of three institutions involved in the program. “We’ve published eight winners so far,” he adds.

The future of the ITP is uncertain, given recent actions of the Trump administration, he says. The cap on indirect costs alone would cost the University of Michigan around $181 million, the university’s interim vice president for research and innovation said in February. The proposals are subject to ongoing legal battles. But in the meantime, morale is low, says Miller. “In the worst-case scenario, all aging research [would be stopped],” he says.

The A4LI has also had to tailor its lobbying strategy given the current administration’s position on government-funded research. Alongside its efforts to change Montana state law to allow clinics to sell unproven treatments, the organization had been planning to push for an all-new NIH institute dedicated to aging and longevity research—an idea that O’Neill voiced support for last year. But current funding cuts under the new administration suggest that it’s “not the ideal political climate for this,” says Livingston.

Despite their enthusiasm for O’Neill’s confirmation, many members of the longevity community, particularly those with research backgrounds, are concerned about what the cuts mean for the future of longevity science.

“Someone like [O’Neill], who’s an advocate for aging and longevity, would be fantastic to have at HHS,” says Matthew O’Connor, who spent over a decade at SRF and says he knows O’Neill “pretty well.” But he adds that “we shouldn’t be cutting the NIH.” Instead, he argues, the agency’s funding should be multiplied by 10.

“The solution to curing diseases isn’t to get rid of the organizations that are there to help us cure diseases,” adds O’Connor, who is currently co-CEO at Cyclarity Therapeutics, a company developing drugs for atherosclerosis and other age-related diseases. 

But it’s still just too soon to confidently predict how, if at all, O’Neill will shape the government health agencies he will oversee. 

“We don’t know exactly what he’s going to be doing as the deputy secretary of HHS,” says Public Citizen’s Steinbrook. “Like everybody who’s sworn into a government job, whether we disagree or agree with their views or actions … we still wish them well. And we hope that they do a good job.”

We’re learning more about what weight-loss drugs do to the body

Weight-loss drugs are this decade’s blockbuster medicines. Drugs like Ozempic, Wegovy, and Mounjaro help people with diabetes get their blood sugar under control and help overweight and obese people reach a healthier weight. And they’re fast becoming a trendy must-have for celebrities and other figure-conscious individuals looking to trim down.

They became so hugely popular so quickly that not long after their approval for weight loss, we saw global shortages of the drugs. Prescriptions have soared over the last five years, but even people who don’t have prescriptions are seeking these drugs out online. A 2024 health tracking poll by KFF found that around 1 in 8 US adults said they had taken one.

We know they can suppress appetite, lower blood sugar, and lead to dramatic weight loss. We also know that they come with side effects, which can include nausea, diarrhea, and vomiting. But we are still learning about some of their other effects.

On the one hand, these seemingly miraculous drugs appear to improve health in other ways, helping to protect against heart failure, kidney disease, and potentially even substance-use disorders, neurodegenerative diseases, and cancer.

But on the other, they appear to be harmful to some people. Their use has been linked to serious conditions, pregnancy complications, and even some deaths. This week let’s take a look at what weight-loss drugs can do.

Ozempic, Wegovy, and other similar drugs are known as GLP-1 agonists; they mimic a chemical made in the intestine, GLP-1, that increases insulin and lowers blood levels of glucose. Originally developed to treat diabetes, they are now known to be phenomenal at suppressing appetite. One key trial, published in 2015, found that over the course of around a year, people who took one particular drug lost between around 4.7% and 6% of their body weight, depending on the dose they took.

Newer versions of that drug were shown to have even bigger effects. A 2021 trial of semaglutide—the active ingredient in both Ozempic and Wegovy—found that people who took it for 68 weeks lost around 15% of their body weight—equivalent to around 15 kilograms.

But there appear to be other benefits, too. In 2024, an enormous study that included 17,604 people in 41 countries found that semaglutide appeared to reduce heart failure in people who were overweight or obese and had cardiovascular disease. That same year, the US approved Wegovy to “reduce the risk of cardiovascular death, heart attack, and stroke in [overweight] adults with cardiovascular disease.” This year, Ozempic was approved to reduce the risk of kidney disease.

And it doesn’t end there. The many users of GLP-1 agonists have been reporting some unexpected positive side effects. Not only are they less interested in food, but they are less interested in alcohol, tobacco, opioids, and other addictive substances.

Research suggests they might protect men from prostate cancer. They might help treat osteoarthritis. Some scientists think the drugs could be used to treat a range of pain conditions, and potentially help people with migraine. And some even seem to protect brain cells from damage in lab studies, and they are being explored as potential treatments for neurological disorders like Alzheimer’s and Parkinson’s (although we don’t yet have any evidence they can be useful here).

The more we learn about GLP-1 agonists, the more miraculous they seem to be. What can’t they do?! you might wonder. Unfortunately, like any drug, GLP-1 agonists carry safety warnings. They can often cause nausea, vomiting, and diarrhea, and their use has also been linked to inflammation of the pancreas—a condition that can be fatal. They also increase the risk of gallbladder disease.

There are other concerns. Weight-loss drugs can help people trim down on fat, but lean muscle can make up around 10% of the body weight lost by people taking them. That muscle is important, especially as we get older. Muscle loss can affect strength and mobility, and it can also leave people more vulnerable to falls, which are the second leading cause of unintentional injury deaths worldwide, according to the World Health Organization.

And, as with most drugs, we don’t fully understand the effects weight-loss drugs might have in pregnancy. That’s important; even though the drugs are not recommended during pregnancy, health agencies point out that some people who take these drugs might be more likely to get pregnant, perhaps because they interfere with the effects of contraceptive drugs.

And we don’t really know how they might affect the development of a fetus, if at all. A study published in January found that people who took the drugs either before or during pregnancy didn’t seem to face increased risk of birth defects. But other research due to be presented at a conference in the coming days found that such individuals were more likely to experience obstetrical complications and preeclampsia.

So yes, while the drugs are incredibly helpful for many people, they are not for everyone. It might be fashionable to be thin, but it’s not necessarily healthy. No drug comes without risks. Even one that 1 in 8 American adults have taken.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.