How do you teach an AI model to give therapy?

On March 27, the results of the first clinical trial for a generative AI therapy bot were published, and they showed that people in the trial who had depression or anxiety or were at risk for eating disorders benefited from chatting with the bot. 

I was surprised by those results, which you can read about in my full story. There are lots of reasons to be skeptical that an AI model trained to provide therapy is the solution for millions of people experiencing a mental health crisis. How could a bot mimic the expertise of a trained therapist? And what happens if something gets complicated—a mention of self-harm, perhaps—and the bot doesn’t intervene correctly? 

The researchers, a team of psychiatrists and psychologists at Dartmouth College’s Geisel School of Medicine, acknowledge these questions in their work. But they also say that the right selection of training data—which determines how the model learns what good therapeutic responses look like—is the key to answering them.

Finding the right data wasn’t a simple task. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the internet. This was a disaster.

If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like, “Sometimes I can’t make it out of bed” or “I just want my life to be over” were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study’s senior author. “These are really not what we would go to as a therapeutic response.” 

The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses. So the team turned to transcripts of therapy sessions. “This is actually how a lot of psychotherapists are trained,” Jacobson says. 

That approach was better, but it had limitations. “We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” Jacobson says. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”

It wasn’t until the researchers started building their own data sets using examples based on cognitive behavioral therapy techniques that they started to see better results. It took a long time. The team began working on Therabot in 2019, when OpenAI had released only the first two versions of its GPT model. Now, Jacobson says, more than 100 people have spent over 100,000 human hours designing this system. 

The importance of training data suggests that the flood of companies promising therapy via AI models, many of which are not trained on evidence-based approaches, is building tools that are at best ineffective and at worst harmful. 

Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to get a coveted approval from the US Food and Drug Administration? I’ll be following closely. Read more in the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The first trial of generative AI therapy shows it might help with depression

The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn’t give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. 

A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies have built AI tools for therapy, promising that people can talk with a bot more frequently and cheaply than they can with a trained therapist—and that this approach is safe and effective.

Many psychologists and psychiatrists have shared the vision, noting that fewer than half of people with a mental disorder receive therapy, and those who do might get only 45 minutes per week. Researchers have tried to build tech so that more people can access therapy, but they have been held back by two things. 

One, a therapy bot that says the wrong thing could result in real harm. That’s why many researchers have built bots using explicit programming: The software pulls from a finite bank of approved responses (as was the case with Eliza, a mock-psychotherapist computer program built in the 1960s). But this makes them less engaging to chat with, and people lose interest. The second issue is that the hallmarks of good therapeutic relationships—shared goals and collaboration—are hard to replicate in software. 

In 2019, as early large language models like OpenAI’s GPT were taking shape, the researchers at Dartmouth thought generative AI might help overcome these hurdles. They set about building an AI model trained to give evidence-based responses. They first tried building it from general mental-health conversations pulled from internet forums. Then they turned to thousands of hours of transcripts of real sessions with psychotherapists.

“We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” said Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study, in an interview. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”

Dissatisfied, they set to work assembling their own custom data sets based on evidence-based practices, which is what ultimately went into the model. Many AI therapy bots on the market, in contrast, might be just slight variations of foundation models like Meta’s Llama, trained mostly on internet conversations. That poses a problem, especially for topics like disordered eating.

“If you were to say that you want to lose weight,” Heinz says, “they will readily support you in doing that, even if you will often have a low weight to start with.” A human therapist wouldn’t do that. 

To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day.

Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that’s not perfect but remains one of the best tools researchers have.

These results, Heinz says, are about what one finds in randomized controlled trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. “I’ve been working in digital therapeutics for a long time, and I’ve never seen levels of engagement that are prolonged and sustained at this level,” he says.

Jean-Christophe Bélisle-Pipon, an assistant professor of health ethics at Simon Fraser University who has written about AI therapy bots but was not involved in the research, says the results are impressive but notes that just like any other clinical trial, this one doesn’t necessarily represent how the treatment would act in the real world. 

“We remain far from a ‘greenlight’ for widespread clinical deployment,” he wrote in an email.

One issue is the supervision that wider deployment might require. During the beginning of the trial, Heinz says, he personally oversaw all the messages coming in from participants (who consented to the arrangement) to watch out for problematic responses from the bot. If therapy bots needed this oversight, they wouldn’t be able to reach as many people. 

I asked Heinz if he thinks the results validate the burgeoning industry of AI therapy sites.

“Quite the opposite,” he says, cautioning that most don’t appear to train their models on evidence-based practices like cognitive behavioral therapy, and they likely don’t employ a team of trained researchers to monitor interactions. “I have a lot of concerns about the industry and how fast we’re moving without really kind of evaluating this,” he adds.

When AI sites advertise themselves as offering therapy in a legitimate, clinical context, Heinz says, it means they fall under the regulatory purview of the Food and Drug Administration. Thus far, the FDA has not gone after many of the sites. If it did, Heinz says, “my suspicion is almost none of them—probably none of them—that are operating in this space would have the ability to actually get a claim clearance”—that is, a ruling backing up their claims about the benefits provided. 

Bélisle-Pipon points out that if these types of digital therapies are not approved and integrated into health-care and insurance systems, it will severely limit their reach. Instead, the people who would benefit from using them might seek emotional bonds and therapy from types of AI not designed for those purposes (indeed, new research from OpenAI suggests that interactions with its AI models have a very real impact on emotional well-being). 

“It is highly likely that many individuals will continue to rely on more affordable, nontherapeutic chatbots—such as ChatGPT or Character.AI—for everyday needs, ranging from generating recipe ideas to managing their mental health,” he wrote. 

Anthropic can now track the bizarre inner workings of a large language model

The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.

The Anthropic team was surprised by some of the counterintuitive workarounds that large language models appear to use to complete sentences, solve simple math problems, suppress hallucinations, and more, says Joshua Batson, a research scientist at the company.

It’s no secret that large language models work in mysterious ways. Few—if any—mass-market technologies have ever been so little understood. That makes figuring out what makes them tick one of the biggest open challenges in science.

But it’s not just about curiosity. Shedding some light on how these models work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can’t do. And it would show how trustworthy (or not) they really are.

Batson and his colleagues describe their new work in two reports published today. The first presents Anthropic’s use of a technique called circuit tracing, which lets researchers track the decision-making processes inside a large language model step by step. Anthropic used circuit tracing to watch its LLM Claude 3.5 Haiku carry out various tasks. The second (titled “On the Biology of a Large Language Model”) details what the team discovered when it looked at 10 tasks in particular.

“I think this is really cool work,” says Jack Merullo, who studies large language models at Brown University in Providence, Rhode Island, and was not involved in the research. “It’s a really nice step forward in terms of methods.”

Circuit tracing is not itself new. Last year Merullo and his colleagues analyzed a specific circuit in a version of OpenAI’s GPT-2, an older large language model that OpenAI released in 2019. But Anthropic has now analyzed a number of different circuits as a far larger and far more complex model carries out multiple tasks. “Anthropic is very capable at applying scale to a problem,” says Merullo.

Eden Biran, who studies large language models at Tel Aviv University, agrees. “Finding circuits in a large state-of-the-art model such as Claude is a nontrivial engineering feat,” he says. “And it shows that circuits scale up and might be a good way forward for interpreting language models.”

Circuits chain together different parts—or components—of a model. Last year, Anthropic identified certain components inside Claude that correspond to real-world concepts. Some were specific, such as “Michael Jordan” or “greenness”; others were more vague, such as “conflict between individuals.” One component appeared to represent the Golden Gate Bridge. Anthropic researchers found that if they turned up the dial on this component, Claude could be made to self-identify not as a large language model but as the physical bridge itself.

The latest work builds on that research and the work of others, including Google DeepMind, to reveal some of the connections between individual components. Chains of components are the pathways between the words put into Claude and the words that come out.  

“It’s tip-of-the-iceberg stuff. Maybe we’re looking at a few percent of what’s going on,” says Batson. “But that’s already enough to see incredible structure.”

Growing LLMs

Researchers at Anthropic and elsewhere are studying large language models as if they were natural phenomena rather than human-built software. That’s because the models are trained, not programmed.

“They almost grow organically,” says Batson. “They start out totally random. Then you train them on all this data and they go from producing gibberish to being able to speak different languages and write software and fold proteins. There are insane things that these models learn to do, but we don’t know how that happened because we didn’t go in there and set the knobs.”

Sure, it’s all math. But it’s not math that we can follow. “Open up a large language model and all you will see is billions of numbers—the parameters,” says Batson. “It’s not illuminating.”

Anthropic says it was inspired by brain-scan techniques used in neuroscience to build what the firm describes as a kind of microscope that can be pointed at different parts of a model while it runs. The technique highlights components that are active at different times. Researchers can then zoom in on different components and record when they are and are not active.

Take the component that corresponds to the Golden Gate Bridge. It turns on when Claude is shown text that names or describes the bridge or even text related to the bridge, such as “San Francisco” or “Alcatraz.” It’s off otherwise.

Yet another component might correspond to the idea of “smallness”: “We look through tens of millions of texts and see it’s on for the word ‘small,’ it’s on for the word ‘tiny,’ it’s on for the word ‘petite,’ it’s on for words related to smallness, things that are itty-bitty, like thimbles—you know, just small stuff,” says Batson.

Having identified individual components, Anthropic then follows the trail inside the model as different components get chained together. The researchers start at the end, with the component or components that led to the final response Claude gives to a query. Batson and his team then trace that chain backwards.
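
To make the shape of that backwards walk concrete, here is a deliberately tiny Python sketch. The component names and contribution strengths are invented for illustration (loosely echoing the “opposite of small” example below), and nothing here reflects Anthropic’s actual circuit-tracing machinery; it only shows the idea of starting from the output and following the strongest upstream contributors.

```python
# A hypothetical map of which components feed which, with made-up strengths.
contributions = {
    "say 'large'": {"opposite-of": 0.7, "smallness": 0.6, "English": 0.3},
    "opposite-of": {"prompt: 'opposite'": 0.9},
    "smallness":   {"prompt: 'small'": 0.8},
    "English":     {"prompt language": 0.5},
}

def trace_back(component: str, threshold: float = 0.4, depth: int = 0) -> None:
    """Print the chain of strongest upstream contributors to `component`."""
    print("  " * depth + component)
    for upstream, strength in contributions.get(component, {}).items():
        if strength >= threshold:
            trace_back(upstream, threshold, depth + 1)

trace_back("say 'large'")  # walk backwards from the component behind the output
```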

Odd behavior

So: What did they find? Anthropic looked at 10 different behaviors in Claude. One involved the use of different languages. Does Claude have a part that speaks French and another part that speaks Chinese, and so on?

The team found that Claude used components independent of any language to answer a question or solve a problem and then picked a specific language when it replied. Ask it “What is the opposite of small?” in English, French, and Chinese and Claude will first use the language-neutral components related to “smallness” and “opposites” to come up with an answer. Only then will it pick a specific language in which to reply. This suggests that large language models can learn things in one language and apply them in other languages.

Anthropic also looked at how Claude solved simple math problems. The team found that the model seems to have developed its own internal strategies that are unlike those it will have seen in its training data. Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95.
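
As a rough illustration of those two parallel paths, here is a toy Python sketch. The rough_estimate argument stands in for the fuzzy “92ish” value that the model’s learned features produce (it is simply supplied by hand here), so this mimics the observed behavior rather than Claude’s actual internal circuitry.

```python
def toy_parallel_add(a: int, b: int, rough_estimate: int) -> int:
    """Combine a fuzzy magnitude estimate with an exact last digit.

    One path supplies a rough sense of the answer's size; the other works
    out the exact final digit. The combination snaps the fuzzy estimate to
    the nearest number ending in that digit.
    """
    # Exact path: for 36 + 59, (6 + 9) % 10 == 5, so the answer ends in 5.
    last_digit = (a % 10 + b % 10) % 10

    # Snap the fuzzy estimate ("92ish") to the nearest value with that digit.
    base = rough_estimate - (rough_estimate % 10) + last_digit
    return min((base - 10, base, base + 10), key=lambda n: abs(n - rough_estimate))

print(toy_parallel_add(36, 59, rough_estimate=92))  # prints 95
```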

And yet if you then ask Claude how it worked that out, it will say something like: “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” In other words, it gives you a common approach found everywhere online rather than what it actually did. Yep! LLMs are weird. (And not to be trusted.)

The steps that Claude 3.5 Haiku used to solve a simple math problem were not what Anthropic expected—they’re not the steps Claude claimed it took either.

This is clear evidence that large language models will give reasons for what they do that do not necessarily reflect what they actually did. But this is true for people too, says Batson: “You ask somebody, ‘Why did you do that?’ And they’re like, ‘Um, I guess it’s because I was— .’ You know, maybe not. Maybe they were just hungry and that’s why they did it.”

Biran thinks this finding is especially interesting. Many researchers study the behavior of large language models by asking them to explain their actions. But that might be a risky approach, he says: “As models continue getting stronger, they must be equipped with better guardrails. I believe—and this work also shows—that relying only on model outputs is not enough.”

A third task that Anthropic studied was writing poems. The researchers wanted to know if the model really did just wing it, predicting one word at a time. Instead they found that Claude somehow looked ahead, picking the word at the end of the next line several words in advance.  

For example, when Claude was given the prompt “A rhyming couplet: He saw a carrot and had to grab it,” the model responded, “His hunger was like a starving rabbit.” But using their microscope, they saw that Claude had already hit upon the word “rabbit” when it was processing “grab it.” It then seemed to write the next line with that ending already in place.

This might sound like a tiny detail. But it goes against the common assumption that large language models always work by picking one word at a time in sequence. “The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”

“I thought that was cool,” says Merullo. “One of the joys of working in the field is moments like that. There’s been maybe small bits of evidence pointing toward the ability of models to plan ahead, but it’s been a big open question to what extent they do.”

Anthropic then confirmed its observation by turning off the placeholder component for “rabbitness.” Claude responded with “His hunger was a powerful habit.” And when the team replaced “rabbitness” with “greenness,” Claude responded with “freeing it from the garden’s green.”

Anthropic also explored why Claude sometimes made stuff up, a phenomenon known as hallucination. “Hallucination is the most natural thing in the world for these models, given how they’re just trained to give possible completions,” says Batson. “The real question is, ‘How in God’s name could you ever make it not do that?’”

The latest generation of large language models, like Claude 3.5, Gemini, and GPT-4o, hallucinate far less than previous versions, thanks to extensive post-training (the steps that take an LLM trained on the internet and turn it into a usable chatbot). But Batson’s team was surprised to find that this post-training seems to have made Claude refuse to speculate as a default behavior. When it did respond with false information, it was because some other component had overridden the “don’t speculate” component.

This seemed to happen most often when the speculation involved a celebrity or other well-known entity. It’s as if the amount of information available pushed the speculation through, despite the default setting. When Anthropic overrode the “don’t speculate” component to test this, Claude produced lots of false statements about individuals, including claiming that Batson was famous for inventing the Batson principle (he isn’t).

Still unclear

Because we know so little about large language models, any new insight is a big step forward. “A deep understanding of how these models work under the hood would allow us to design and train models that are much better and stronger,” says Biran.

But Batson notes there are still serious limitations. “It’s a misconception that we’ve found all the components of the model or, like, a God’s-eye view,” he says. “Some things are in focus, but other things are still unclear—a distortion of the microscope.”

And it takes several hours for a human researcher to trace the responses to even very short prompts. What’s more, these models can do a remarkable number of different things, and Anthropic has so far looked at only 10 of them.

Batson also says there are big questions that this approach won’t answer. Circuit tracing can be used to peer at the structures inside a large language model, but it won’t tell you how or why those structures formed during training. “That’s a profound question that we don’t address at all in this work,” he says.

But Batson sees this as the start of a new era in which it is possible, at last, to find real evidence for how these models work: “We don’t have to be, like: ‘Are they thinking? Are they reasoning? Are they dreaming? Are they memorizing?’ Those are all analogies. But if we can literally see step by step what a model is doing, maybe now we don’t need analogies.”

The AI Hype Index: DeepSeek mania, Israel’s spying tool, and cheating at chess

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

While AI models are certainly capable of creating interesting and sometimes entertaining material, their output isn’t necessarily useful. Google DeepMind is hoping that its new robotics model could make machines more receptive to verbal commands, paving the way for us to simply speak orders to them aloud. Elsewhere, the Chinese startup Monica has created Manus, which it claims is the very first general AI agent to complete truly useful tasks. And burnt-out coders are allowing AI to take the wheel entirely in a new practice dubbed “vibe coding.”

China built hundreds of AI data centers to catch the AI boom. Now many stand unused.

A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of China’s AI craze. 

At that time, traders in his circle bragged about securing shipments of high-performing Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100 chip, a kind that is essential to training AI models, could sell for up to 200,000 yuan ($28,000) on the black market. 

Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. “It seems like everyone is selling, but few are buying,” he says.

Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Review—including contractors, an executive at a GPU server company, and project managers—most of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnian and 36Kr report that up to 80% of China’s newly built computing resources remain unused.

Renting out GPUs to companies that need them for training AI models—the main business model for the new wave of data centers—was once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.

“The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s need,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation. 

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”

A chaotic building boom

When ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centers—a term coined to describe AI-focused data centers.

In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope they’d stimulate the economy and establish their region as a key AI hub. 

However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. In 2024 alone, over 144 companies registered with the Cyberspace Administration of China—the country’s central internet regulator—to develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.

China’s political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick results—often to gain favor with higher-ups—rather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.

The post-pandemic economic downturn only intensified this dynamic. With China’s real estate sector—once the backbone of local economies—slumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the country’s once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.

“AI felt like a shot of adrenaline,” says Li. “A lot of money that used to flow into real estate is now going into AI data centers.”

By 2023, major corporations—many of them with little prior experience in AI—began partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firm—hardly the names one would associate with cutting-edge AI technology.

This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards. 

“Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale,” says Goodrich. “This is all really state-of-the-art computer engineering. I’d be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and don’t offer the stability that a company like DeepSeek would want.”

To make matters worse, project leaders often relied on middlemen and brokers—some of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say. 

By the end of 2024, the excitement that once surrounded China’s data center boom was curdling into disappointment. The reason is simple: GPU rental is no longer a particularly lucrative business.

The DeepSeek reckoning

The business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost. 

“DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from ‘Who can make the best large language model?’ to ‘Who can use them better?’” says Hancheng Cao, an assistant professor of information systems at Emory University. 

The rise of reasoning models like DeepSeek’s R1 and OpenAI’s ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users’ queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff. 

This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients. 

Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloads—large, sustained computations run on massive data sets—rather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from what’s traditionally used for large-scale AI training. 

GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.

In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fang: “The revenue from having a tiny part of the data center running simply wouldn’t cover the electricity and maintenance cost.”

“It’s paradoxical—China faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low,” Li says. There’s an oversupply of computational power, especially in central and west China, but at the same time, there’s a shortage of cutting-edge chips. 

However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.

“Towards the end of 2024, no clear-headed contractor and broker in the market would still go into the business expecting direct profitability,” says Fang. “Everyone I met is leveraging the data center deal for something else the government could offer.”

A necessary evil

Despite the underutilization of data centers, China’s central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology. 

Major Chinese tech companies are taking note, making investments aligning with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.

In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, SoftBank, and Oracle have teamed up to commit to the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. “If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success,” says Goodrich, the tech policy advisor to RAND.

“The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means,” Goodrich says.

Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified under his company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeek’s open-source models.

For now, many data centers in China sit in limbo—built for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeek’s success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress.

That’s just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. “The market is too chaotic. The early adopters profited, but now it’s just people chasing policy loopholes,” he says. He’s decided to go into AI education next. 

“What stands between now and a future where AI is actually everywhere,” he says, “is not infrastructure anymore, but solid plans to deploy the technology.” 

Why the world is looking to ditch US AI models

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the Trump administration’s shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. 

As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. 

Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s response to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.   

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.

Many similar solutions have been in development for years—and we’ve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies.

Still, this moment feels a little different. The second Trump administration, which shapes the actions and policies of American tech companies, is obviously a major factor. But there are others at play. 

First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation. 

Then there’s the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, there’s been a steady stream of announcements about “sovereign AI” initiatives that aim to give a country (or organization) full control over all aspects of AI development. 

AI sovereignty is just one part of the desire for broader “tech sovereignty” that’s also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a “Euro Stack,” or “digital public infrastructure.” The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europe’s efforts are partly modeled after “India Stack,” that country’s digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers. 

This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is “causing Europe to move faster … to come to the realization that Europe needs to regain its tech sovereignty.” This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply “because tech is where the future economic growth of any country is.”

But just because governments get involved doesn’t mean that issues around inclusion in language models will go away. “I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides ‘These are the languages we want to advance’ or ‘These are the types of views we want represented in a data set,’” Bhatia says. “Fundamentally, the training data a model trains on is akin to the worldview it develops.” 

It’s still too early to know what this will all look like, and how much of it will prove to be hype. But no matter what happens, this is a space we’ll be watching.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

OpenAI’s new image generator aims to be practical enough for designers and advertisers

OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

The image generator, which is now part of the company’s GPT-4o model, was promised by OpenAI last May but wasn’t released. Requests for generated images on ChatGPT were filled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and is now rolling it out to all tiers of users, starting today and continuing over the coming weeks, replacing the older one. 

The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says “hot dogs” properly placed above a food cart, not somewhere else in the image). 

It was only a few years ago that models started to succeed at things like “Put the red cube on top of the blue cube,” a feature that is essential for any creative professional use of AI. Generators also struggle with text generation, typically creating distorted jumbles of letter shapes that look more like captchas than readable text.


Example images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single image—like a cat emoji or a lightning bolt—and place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o. 


It’s “a new tool for communication,” says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: “I think the whole idea is that we’re going away from, like, beautiful art.” It can still do that, he clarifies, but it will do more useful things too. “You can actually make images work for you,” he says, “and not just look at them.”

It’s a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult. 

One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in AI tools that can fill images with generative AI. 

“Adobe really has a stranglehold on this market, and they’re moving fast enough that I don’t know how compelling it is for people to switch,” says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation. 

The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process). 

It’s also possible the tool will simply be used as many image generators are now: to create quick visuals that are “good enough” to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, it’s hard to imagine that the image generator won’t play some ambitious moneymaking role. 

Regardless, the fact that OpenAI’s new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.

“The pace of innovation should increase here,” Raskino says.

OpenAI’s new image generator aims to be practical enough for designers and advertisers

OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

The image generator, which is now part of the company’s GPT-4o model, was promised by OpenAI last May but wasn’t released. Requests for generated images on ChatGPT were filled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and will now release it over the coming weeks to all tiers of users starting today, replacing the older one. 

The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says “hot dogs” properly placed above a food cart, not somewhere else in the image). 

It was only a few years ago that models started to succeed at things like “Put the red cube on top of the blue cube,” a feature that is essential for any creative professional use of AI. Generators also struggle with text generation, typically creating distorted jumbles of letter shapes that look more like captchas than readable text.

Example images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single image—like a cat emoji or a lightning bolt—and place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o. 

It’s “a new tool for communication,” says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: “I think the whole idea is that we’re going away from, like, beautiful art.” It can still do that, he clarifies, but it will do more useful things too. “You can actually make images work for you,” he says, “and not just look at them.”

It’s a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult. 

One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in generative AI tools that can fill in images. 

“Adobe really has a stranglehold on this market, and they’re moving fast enough that I don’t know how compelling it is for people to switch,” says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation. 

The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process). 

It’s also possible the tool will simply be used as many image generators are now: to create quick visuals that are “good enough” to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, it’s hard to imagine that the image generator won’t play some ambitious moneymaking role. 

Regardless, the fact that OpenAI’s new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.

“The pace of innovation should increase here,” Raskino says.

Why handing over total control to AI agents would be a huge mistake

AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems operate outside of a chat window, navigating multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents are developed to become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost? 

New frameworks and functionalities for AI agents are announced almost weekly, and companies promote the technology as a way to make our lives easier by completing tasks we can’t do or don’t want to do. Prominent examples include “computer use,” a function that enables Anthropic’s Claude system to act directly on your computer screen, and the “general AI agent” Manus, which can use online tools for a variety of tasks, like scouting out customers or planning trips.

These developments mark a major advance in artificial intelligence: systems designed to operate in the digital world without direct human oversight.

The promise is compelling. Who doesn’t want assistance with cumbersome work or tasks there’s no time for? Agent assistance could soon take many different forms, such as reminding you to ask a colleague about their kid’s basketball tournament or finding images for your next presentation. Within a few weeks, they’ll probably be able to make presentations for you. 

There’s also clear potential for deeply meaningful differences in people’s lives. For people with hand mobility issues or low vision, agents could complete tasks online in response to simple language commands. Agents could also coordinate simultaneous assistance across large groups of people in critical situations, such as by routing traffic to help drivers flee an area en masse as quickly as possible when disaster strikes. 

But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. Our research team at Hugging Face has spent years implementing and investigating these systems, and our recent findings suggest that agent development could be on the cusp of a very serious misstep. 

Giving up control, bit by bit

A core tension lies at the heart of what’s most exciting about AI agents: the more autonomous an AI system is, the more human control we cede. AI agents are developed to be flexible, capable of completing a diverse array of tasks that don’t have to be directly programmed. 

For many systems, this flexibility is made possible because they’re built on large language models, which are unpredictable and prone to significant (and sometimes comical) errors. When an LLM generates text in a chat interface, any errors stay confined to that conversation. But when a system can act independently and with access to multiple applications, it may perform actions we didn’t intend, such as manipulating files, impersonating users, or making unauthorized transactions. The very feature being sold—reduced human oversight—is the primary vulnerability.

To understand the overall risk-benefit landscape, it’s useful to place AI agent systems on a spectrum of autonomy. The lowest level consists of simple processors that have no impact on program flow, like chatbots that greet you on a company website. The highest level, fully autonomous agents, can write and execute new code without human constraints or oversight: they can take action (moving files around, changing records, sending emails, and so on) without being asked. Intermediate levels include routers, which decide which human-provided steps to take; tool callers, which run human-written functions using tools the agent suggests; and multistep agents, which determine which functions to run, when, and in what order. Each level represents an incremental removal of human control.
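
To make that spectrum concrete, here is a minimal, illustrative Python sketch. It is not drawn from any particular framework, and every name in it (the placeholder llm call, the toy search_flights and book_flight tools) is hypothetical; the point is only to show how each step up in autonomy hands more of the program’s flow to the model.

    # Illustrative sketch of the autonomy spectrum described above.
    # Every name here (llm, search_flights, book_flight) is a hypothetical stand-in.

    def llm(prompt: str) -> str:
        """Placeholder for a call to a large language model."""
        return "search_flights"  # imagine this is the model's reply

    def search_flights(query: str) -> str:
        return f"3 flights found for {query!r}"

    def book_flight(details: str) -> str:
        return f"booked a flight based on {details!r}"

    TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

    # Level 1 - simple processor: the model's output never changes program flow.
    def simple_processor(user_msg: str) -> str:
        return llm(f"Reply politely to: {user_msg}")

    # Level 2 - router: the model picks which human-written branch runs.
    def router(user_msg: str) -> str:
        choice = llm(f"Answer 'search' or 'book' for: {user_msg}")
        return search_flights(user_msg) if "search" in choice else book_flight(user_msg)

    # Level 3 - tool caller: the model chooses which tool runs (and, in practice, its arguments).
    def tool_caller(user_msg: str) -> str:
        tool_name = llm(f"Pick one tool from {sorted(TOOLS)} for: {user_msg}")
        return TOOLS.get(tool_name, search_flights)(user_msg)

    # Level 4 - multistep agent: the model also decides how many steps to take and when to stop.
    def multistep_agent(user_msg: str, max_steps: int = 5) -> str:
        history = user_msg
        for _ in range(max_steps):  # the hard cap is a piece of retained human control
            tool_name = llm(f"Given {history!r}, name the next tool or say 'done'")
            if tool_name == "done":
                break
            history += " | " + TOOLS.get(tool_name, search_flights)(history)
        return history

    # A fully autonomous agent goes further still: it writes and executes new code
    # of its own, with no fixed tool list and no step cap.

Note that even the multistep agent above retains two human-imposed limits, a fixed tool list and a hard step cap; a fully autonomous agent removes both.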

It’s clear that AI agents can be extraordinarily helpful for what we do every day. But this helpfulness brings clear privacy, safety, and security concerns. Agents that help bring you up to speed on someone would require that individual’s personal information and extensive surveillance of your previous interactions, which could result in serious privacy breaches. Agents that create directions from building plans could be used by malicious actors to gain access to unauthorized areas. 

And when systems can control multiple information sources simultaneously, the potential for harm explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified with further sharing to create serious reputational damage. We imagine that “It wasn’t me—it was my agent!” will soon be a common refrain to excuse bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that over 2,000 Soviet missiles were heading toward North America. This error triggered emergency procedures that brought us perilously close to catastrophe. What averted disaster was human cross-verification between different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome might have been catastrophic.

Some will counter that the benefits are worth the risks, but we’d argue that realizing those benefits doesn’t require surrendering complete human control. Instead, the development of AI agents must occur alongside the development of guaranteed human oversight in a way that limits the scope of what AI agents can do.
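
One simplified way to picture what “guaranteed human oversight” can look like in practice is sketched below. This is generic, hypothetical Python, not code from any real framework: the agent can only request tools from a fixed whitelist, and every request passes through an explicit human-approval gate before anything runs.

    # Generic, hypothetical sketch of an oversight gate around an agent's actions.
    ALLOWED_TOOLS = {"read_calendar", "draft_email"}  # fixed scope: no file, payment, or account access

    def human_approves(tool_name: str, args: dict) -> bool:
        """Show the human exactly what the agent wants to do before it happens."""
        answer = input(f"Agent requests {tool_name}({args}). Allow? [y/N] ")
        return answer.strip().lower() == "y"

    def run_tool(tool_name: str, args: dict) -> str:
        if tool_name not in ALLOWED_TOOLS:
            return f"refused: {tool_name} is outside the agent's allowed scope"
        if not human_approves(tool_name, args):
            return "refused: the human declined this action"
        # ...dispatch to the real, human-written implementation here...
        return f"ran {tool_name} with {args}"

An open-source framework makes it possible for outsiders to check that a gate like this actually exists and cannot be silently bypassed, which is the transparency argument made in the next paragraph.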

Open-source agent systems are one way to address these risks, since they allow for greater human oversight of what a system can and cannot do. At Hugging Face we’re developing smolagents, a framework that provides secure, sandboxed environments and lets developers build agents with transparency at their core, so that any independent group can verify whether appropriate human control is in place. 

This approach stands in stark contrast to the prevailing trend toward increasingly complex, opaque AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated AI agents, we must recognize that the most important feature of any technology isn’t increasing efficiency but fostering human well-being. 

This means creating systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve rather than subvert our interests.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli all work for Hugging Face, a global startup in responsible open-source AI.

Dr. Margaret Mitchell is a machine learning researcher and Chief Ethics Scientist at Hugging Face, connecting human values to technology development.

Dr. Sasha Luccioni is Climate Lead at Hugging Face, where she spearheads research, consulting and capacity-building to elevate the sustainability of AI systems. 

Dr. Avijit Ghosh is an Applied Policy Researcher at Hugging Face working at the intersection of responsible AI and policy. His research and engagement with policymakers have helped shape AI regulation and industry practices.

Dr. Giada Pistilli is a philosophy researcher working as Principal Ethicist at Hugging Face.