Inside Amsterdam’s high-stakes experiment to create fair welfare AI

This story is a partnership between MIT Technology Review, Lighthouse Reports, and Trouw, and was supported by the Pulitzer Center. 

Two futures

Hans de Zwart, a gym teacher turned digital rights advocate, says that when he saw Amsterdam’s plan to have an algorithm evaluate every welfare applicant in the city for potential fraud, he nearly fell out of his chair. 

It was February 2023, and de Zwart, who had served as the executive director of Bits of Freedom, the Netherlands’ leading digital rights NGO, had been working as an informal advisor to Amsterdam’s city government for nearly two years, reviewing and providing feedback on the AI systems it was developing. 

According to the city’s documentation, this specific AI model—referred to as “Smart Check”—would consider submissions from potential welfare recipients and determine who might have submitted an incorrect application. More than any other project that had come across his desk, this one stood out immediately, he told us—and not in a good way. “There’s some very fundamental [and] unfixable problems,” he says, in using this algorithm “on real people.”

From his vantage point behind the sweeping arc of glass windows at Amsterdam’s city hall, Paul de Koning, a consultant to the city whose résumé includes stops at various agencies in the Dutch welfare state, had viewed the same system with pride. De Koning, who managed Smart Check’s pilot phase, was excited about what he saw as the project’s potential to improve efficiency and remove bias from Amsterdam’s social benefits system. 

A team of fraud investigators and data scientists had spent years working on Smart Check, and de Koning believed that promising early results had vindicated their approach. The city had consulted experts, run bias tests, implemented technical safeguards, and solicited feedback from the people who’d be affected by the program—more or less following every recommendation in the ethical-AI playbook. “I got a good feeling,” he told us. 

These opposing viewpoints epitomize a global debate about whether algorithms can ever be fair when tasked with making decisions that shape people’s lives. Over the past several years of efforts to use artificial intelligence in this way, examples of collateral damage have mounted: nonwhite job applicants weeded out of job application pools in the US, families being wrongly flagged for child abuse investigations in Japan, and low-income residents being denied food subsidies in India. 

Proponents of these assessment systems argue that they can create more efficient public services by doing more with less and, in the case of welfare systems specifically, reclaim money that is allegedly being lost from the public purse. In practice, many were poorly designed from the start. They sometimes factor in personal characteristics in a way that leads to discrimination, and sometimes they have been deployed without testing for bias or effectiveness. In general, they offer few options for people to challenge—or even understand—the automated actions directly affecting how they live. 

The result has been more than a decade of scandals. In response, lawmakers, bureaucrats, and the private sector, from Amsterdam to New York, Seoul to Mexico City, have been trying to atone by creating algorithmic systems that integrate the principles of “responsible AI”—an approach that aims to guide AI development to benefit society while minimizing negative consequences. 

CHANTAL JAHCHAN

Developing and deploying ethical AI is a top priority for the European Union, and the same was true for the US under former president Joe Biden, who released a blueprint for an AI Bill of Rights. That plan was rescinded by the Trump administration, which has removed considerations of equity and fairness, including in technology, at the national level. Nevertheless, systems influenced by these principles are still being tested by leaders in countries, states, provinces, and cities—in and out of the US—that have immense power to make decisions like whom to hire, when to investigate cases of potential child abuse, and which residents should receive services first. 

Amsterdam indeed thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed these emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still not fair and effective. Why? 

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. In response to a public records request, the city disclosed multiple versions of the Smart Check algorithm and data on how it evaluated real-world welfare applicants, offering us unique insight into whether, under the best possible conditions, algorithmic systems can deliver on their ambitious promises.  

The answer to that question is far from simple. For de Koning, Smart Check represented technological progress toward a fairer and more transparent welfare system. For de Zwart, it represented a substantial risk to welfare recipients’ rights that no amount of technical tweaking could fix. As this algorithmic experiment unfolded over several years, it called into question the project’s central premise: that responsible AI can be more than a thought experiment or corporate selling point—and actually make algorithmic systems fair in the real world.

A chance at redemption

Understanding how Amsterdam found itself conducting a high-stakes endeavor with AI-driven fraud prevention requires going back four decades, to a national scandal around welfare investigations gone too far. 

In 1984, Albine Grumböck, a divorced single mother of three, had been receiving welfare for several years when she learned that one of her neighbors, an employee at the social service’s local office, had been secretly surveilling her life. He documented visits from a male friend, who in theory could have been contributing unreported income to the family. On the basis of his observations, the welfare office cut Grumböck’s benefits. She fought the decision in court and won.

Albine Grumböck in the courtroom with her lawyer and assembled spectators
Albine Grumböck, whose benefits had been cut off, learns of the judgement for interim relief.
ROB BOGAERTS/ NATIONAAL ARCHIEF

Despite her personal vindication, Dutch welfare policy has continued to empower welfare fraud investigators, sometimes referred to as “toothbrush counters,” to turn over people’s lives. This has helped create an atmosphere of suspicion that leads to problems for both sides, says Marc van Hoof, a lawyer who has helped Dutch welfare recipients navigate the system for decades: “The government doesn’t trust its people, and the people don’t trust the government.”

Harry Bodaar, a career civil servant, has observed the Netherlands’ welfare policy up close throughout much of this time—first as a social worker, then as a fraud investigator, and now as a welfare policy advisor for the city. The past 30 years have shown him that “the system is held together by rubber bands and staples,” he says. “And if you’re at the bottom of that system, you’re the first to fall through the cracks.”

Making the system work better for beneficiaries, he adds, was a large motivating factor when the city began designing Smart Check in 2019. “We wanted to do a fair check only on the people we [really] thought needed to be checked,” Bodaar says—in contrast to previous department policy, which until 2007 was to conduct home visits for every applicant. 

But he also knew that the Netherlands had become something of a ground zero for problematic welfare AI deployments. The Dutch government’s attempts to modernize fraud detection through AI had backfired on a few notorious occasions.

In 2019, it was revealed that the national government had been using an algorithm to create risk profiles that it hoped would help spot fraud in the child care benefits system. The resulting scandal saw nearly 35,000 parents, most of whom were migrants or the children of migrants, wrongly accused of defrauding the assistance system over six years. It put families in debt, pushed some into poverty, and ultimately led the entire government to resign in 2021.  

front page of Trouw from January 16, 2021

COURTESY OF TROUW

In Rotterdam, a 2023 investigation by Lighthouse Reports into a system for detecting welfare fraud found it to be biased against women, parents, non-native Dutch speakers, and other vulnerable groups, eventually forcing the city to suspend use of the system. Other cities, like Amsterdam and Leiden, used a system called the Fraud Scorecard, which was first deployed more than 20 years ago and included education, neighborhood, parenthood, and gender as crude risk factors to assess welfare applicants; that program was also discontinued.

The Netherlands is not alone. In the United States, there have been at least 11 cases in which state governments used algorithms to help disperse public benefits, according to the nonprofit Benefits Tech Advocacy Hub, often with troubling results. Michigan, for instance, falsely accused 40,000 people of committing unemployment fraud. And in France, campaigners are taking the national welfare authority to court over an algorithm they claim discriminates against low-income applicants and people with disabilities. 

This string of scandals, as well as a growing awareness of how racial discrimination can be embedded in algorithmic systems, helped fuel the growing emphasis on responsible AI. It’s become “this umbrella term to say that we need to think about not just ethics, but also fairness,” says Jiahao Chen, an ethical-AI consultant who has provided auditing services to both private and local government entities. “I think we are seeing that realization that we need things like transparency and privacy, security and safety, and so on.” 

The approach, based on a set of tools intended to rein in the harms caused by the proliferating technology, has given rise to a rapidly growing field built upon a familiar formula: white papers and frameworks from think tanks and international bodies, and a lucrative consulting industry made up of traditional power players like the Big 5 consultancies, as well as a host of startups and nonprofits. In 2019, for instance, the Organisation for Economic Co-operation and Development, a global economic policy body, published its Principles on Artificial Intelligence as a guide for the development of “trustworthy AI.” Those principles include building explainable systems, consulting public stakeholders, and conducting audits. 

But the legacy left by decades of algorithmic misconduct has proved hard to shake off, and there is little agreement on where to draw the line between what is fair and what is not. While the Netherlands works to institute reforms shaped by responsible AI at the national level, Algorithm Audit, a Dutch NGO that has provided ethical-AI auditing services to government ministries, has concluded that the technology should be used to profile welfare recipients only under strictly defined conditions, and only if systems avoid taking into account protected characteristics like gender. Meanwhile, Amnesty International, digital rights advocates like de Zwart, and some welfare recipients themselves argue that when it comes to making decisions about people’s lives, as in the case of social services, the public sector should not be using AI at all.

Amsterdam hoped it had found the right balance. “We’ve learned from the things that happened before us,” says Bodaar, the policy advisor, of the past scandals. And this time around, the city wanted to build a system that would “show the people in Amsterdam we do good and we do fair.”

Finding a better way

Every time an Amsterdam resident applies for benefits, a caseworker reviews the application for irregularities. If an application looks suspicious, it can be sent to the city’s investigations department—which could lead to a rejection, a request to correct paperwork errors, or a recommendation that the candidate receive less money. Investigations can also happen later, once benefits have been dispersed; the outcome may force recipients to pay back funds, and even push some into debt.

Officials have broad authority over both applicants and existing welfare recipients. They can request bank records, summon beneficiaries to city hall, and in some cases make unannounced visits to a person’s home. As investigations are carried out—or paperwork errors fixed—much-needed payments may be delayed. And often—in more than half of the investigations of applications, according to figures provided by Bodaar—the city finds no evidence of wrongdoing. In those cases, this can mean that the city has “wrongly harassed people,” Bodaar says. 

The Smart Check system was designed to avoid these scenarios by eventually replacing the initial caseworker who flags which cases to send to the investigations department. The algorithm would screen the applications to identify those most likely to involve major errors, based on certain personal characteristics, and redirect those cases for further scrutiny by the enforcement team.

If all went well, the city wrote in its internal documentation, the system would improve on the performance of its human caseworkers, flagging fewer welfare applicants for investigation while identifying a greater proportion of cases with errors. In one document, the city projected that the model would prevent up to 125 individual Amsterdammers from facing debt collection and save €2.4 million annually. 

Smart Check was an exciting prospect for city officials like de Koning, who would manage the project when it was deployed. He was optimistic, since the city was taking a scientific approach, he says; it would “see if it was going to work” instead of taking the attitude that “this must work, and no matter what, we will continue this.”

It was the kind of bold idea that attracted optimistic techies like Loek Berkers, a data scientist who worked on Smart Check in only his second job out of college. Speaking in a cafe tucked behind Amsterdam’s city hall, Berkers remembers being impressed at his first contact with the system: “Especially for a project within the municipality,” he says, it “was very much a sort of innovative project that was trying something new.”

Smart Check made use of an algorithm called an “explainable boosting machine,” which allows people to more easily understand how AI models produce their predictions. Most other machine-learning models are often regarded as “black boxes” running abstract mathematical processes that are hard to understand for both the employees tasked with using them and the people affected by the results. 

The Smart Check model would consider 15 characteristics—including whether applicants had previously applied for or received benefits, the sum of their assets, and the number of addresses they had on file—to assign a risk score to each person. It purposefully avoided demographic factors, such as gender, nationality, or age, that were thought to lead to bias. It also tried to avoid “proxy” factors—like postal codes—that may not look sensitive on the surface but can become so if, for example, a postal code is statistically associated with a particular ethnic group.

In an unusual step, the city has disclosed this information and shared multiple versions of the Smart Check model with us, effectively inviting outside scrutiny into the system’s design and function. With this data, we were able to build a hypothetical welfare recipient to get insight into how an individual applicant would be evaluated by Smart Check.  

This model was trained on a data set encompassing 3,400 previous investigations of welfare recipients. The idea was that it would use the outcomes from these investigations, carried out by city employees, to figure out which factors in the initial applications were correlated with potential fraud. 

But using past investigations introduces potential problems from the start, says Sennay Ghebreab, scientific director of the Civic AI Lab (CAIL) at the University of Amsterdam, one of the external groups that the city says it consulted with. The problem of using historical data to build the models, he says, is that “we will end up [with] historic biases.” For example, if caseworkers historically made higher rates of mistakes with a specific ethnic group, the model could wrongly learn to predict that this ethnic group commits fraud at higher rates. 

The city decided it would rigorously audit its system to try to catch such biases against vulnerable groups. But how bias should be defined, and hence what it actually means for an algorithm to be fair, is a matter of fierce debate. Over the past decade, academics have proposed dozens of competing mathematical notions of fairness, some of which are incompatible. This means that a system designed to be “fair” according to one such standard will inevitably violate others.

Amsterdam officials adopted a definition of fairness that focused on equally distributing the burden of wrongful investigations across different demographic groups. 

In other words, they hoped this approach would ensure that welfare applicants of different backgrounds would carry the same burden of being incorrectly investigated at similar rates. 

Mixed feedback

As it built Smart Check, Amsterdam consulted various public bodies about the model, including the city’s internal data protection officer and the Amsterdam Personal Data Commission. It also consulted private organizations, including the consulting firm Deloitte. Each gave the project its approval. 

But one key group was not on board: the Participation Council, a 15-member advisory committee composed of benefits recipients, advocates, and other nongovernmental stakeholders who represent the interests of the people the system was designed to help—and to scrutinize. The committee, like de Zwart, the digital rights advocate, was deeply troubled by what the system could mean for individuals already in precarious positions. 

Anke van der Vliet, now in her 70s, is one longtime member of the council. After she sinks slowly from her walker into a seat at a restaurant in Amsterdam’s Zuid neighborhood, where she lives, she retrieves her reading glasses from their case. “We distrusted it from the start,” she says, pulling out a stack of papers she’s saved on Smart Check. “Everyone was against it.”

For decades, she has been a steadfast advocate for the city’s welfare recipients—a group that, by the end of 2024, numbered around 35,000. In the late 1970s, she helped found Women on Welfare, a group dedicated to exposing the unique challenges faced by women within the welfare system.

City employees first presented their plan to the Participation Council in the fall of 2021. Members like van der Vliet were deeply skeptical. “We wanted to know, is it to my advantage or disadvantage?” she says. 

Two more meetings could not convince them. Their feedback did lead to key changes—including reducing the number of variables the city had initially considered to calculate an applicant’s score and excluding variables that could introduce bias, such as age, from the system. But the Participation Council stopped engaging with the city’s development efforts altogether after six months. “The Council is of the opinion that such an experiment affects the fundamental rights of citizens and should be discontinued,” the group wrote in March 2022. Since only around 3% of welfare benefit applications are fraudulent, the letter continued, using the algorithm was “disproportionate.”

De Koning, the project manager, is skeptical that the system would ever have received the approval of van der Vliet and her colleagues. “I think it was never going to work that the whole Participation Council was going to stand behind the Smart Check idea,” he says. “There was too much emotion in that group about the whole process of the social benefit system.” He adds, “They were very scared there was going to be another scandal.” 

But for advocates working with welfare beneficiaries, and for some of the beneficiaries themselves, the worry wasn’t a scandal but the prospect of real harm. The technology could not only make damaging errors but leave them even more difficult to correct—allowing welfare officers to “hide themselves behind digital walls,” says Henk Kroon, an advocate who assists welfare beneficiaries at the Amsterdam Welfare Association, a union established in the 1970s. Such a system could make work “easy for [officials],” he says. “But for the common citizens, it’s very often the problem.” 

Time to test 

Despite the Participation Council’s ultimate objections, the city decided to push forward and put the working Smart Check model to the test. 

The first results were not what they’d hoped for. When the city’s advanced analytics team ran the initial model in May 2022, they found that the algorithm showed heavy bias against migrants and men, which we were able to independently verify. 

As the city told us and as our analysis confirmed, the initial model was more likely to wrongly flag non-Dutch applicants. And it was nearly twice as likely to wrongly flag an applicant with a non-Western nationality than one with a Western nationality. The model was also 14% more likely to wrongly flag men for investigation. 

In the process of training the model, the city also collected data on who its human case workers had flagged for investigation and which groups the wrongly flagged people were more likely to belong to. In essence, they ran a bias test on their own analog system—an important way to benchmark that is rarely done before deploying such systems. 

What they found in the process led by caseworkers was a strikingly different pattern. Whereas the Smart Check model was more likely to wrongly flag non-Dutch nationals and men, human caseworkers were more likely to wrongly flag Dutch nationals and women. 

The team behind Smart Check knew that if they couldn’t correct for bias, the project would be canceled. So they turned to a technique from academic research, known as training-data reweighting. In practice, that meant applicants with a non-Western nationality who were deemed to have made meaningful errors in their applications were given less weight in the data, while those with a Western nationality were given more.

Eventually, this appeared to solve their problem: As Lighthouse’s analysis confirms, once the model was reweighted, Dutch and non-Dutch nationals were equally likely to be wrongly flagged. 

De Koning, who joined the Smart Check team after the data was reweighted, said the results were a positive sign: “Because it was fair … we could continue the process.” 

The model also appeared to be better than caseworkers at identifying applications worthy of extra scrutiny, with internal testing showing a 20% improvement in accuracy.

Buoyed by these results, in the spring of 2023, the city was almost ready to go public. It submitted Smart Check to the Algorithm Register, a government-run transparency initiative meant to keep citizens informed about machine-learning algorithms either in development or already in use by the government.

For de Koning, the city’s extensive assessments and consultations were encouraging, particularly since they also revealed the biases in the analog system. But for de Zwart, those same processes represented a profound misunderstanding: that fairness could be engineered. 

In a letter to city officials, de Zwart criticized the premise of the project and, more specifically, outlined the unintended consequences that could result from reweighting the data. It might reduce bias against people with a migration background overall, but it wouldn’t guarantee fairness across intersecting identities; the model could still discriminate against women with a migration background, for instance. And even if that issue were addressed, he argued, the model might still treat migrant women in certain postal codes unfairly, and so on. And such biases would be hard to detect.

“The city has used all the tools in the responsible-AI tool kit,” de Zwart told us. “They have a bias test, a human rights assessment; [they have] taken into account automation bias—in short, everything that the responsible-AI world recommends. Nevertheless, the municipality has continued with something that is fundamentally a bad idea.”

Ultimately, he told us, it’s a question of whether it’s legitimate to use data on past behavior to judge “future behavior of your citizens that fundamentally you cannot predict.” 

Officials still pressed on—and set March 2023 as the date for the pilot to begin. Members of Amsterdam’s city council were given little warning. In fact, they were only informed the same month—to the disappointment of Elisabeth IJmker, a first-term council member from the Green Party, who balanced her role in municipal government with research on religion and values at Amsterdam’s Vrije University. 

“Reading the words ‘algorithm’ and ‘fraud prevention’ in one sentence, I think that’s worth a discussion,” she told us. But by the time that she learned about the project, the city had already been working on it for years. As far as she was concerned, it was clear that the city council was “being informed” rather than being asked to vote on the system. 

The city hoped the pilot could prove skeptics like her wrong.

Upping the stakes

The formal launch of Smart Check started with a limited set of actual welfare applicants, whose paperwork the city would run through the algorithm and assign a risk score to determine whether the application should be flagged for investigation. At the same time, a human would review the same application. 

Smart Check’s performance would be monitored on two key criteria. First, could it consider applicants without bias? And second, was Smart Check actually smart? In other words, could the complex math that made up the algorithm actually detect welfare fraud better and more fairly than human caseworkers? 

It didn’t take long to become clear that the model fell short on both fronts. 

While it had been designed to reduce the number of welfare applicants flagged for investigation, it was flagging more. And it proved no better than a human caseworker at identifying those that actually warranted extra scrutiny. 

What’s more, despite the lengths the city had gone to in order to recalibrate the system, bias reemerged in the live pilot. But this time, instead of wrongly flagging non-Dutch people and men as in the initial tests, the model was now more likely to wrongly flag applicants with Dutch nationality and women. 

Lighthouse’s own analysis also revealed other forms of bias unmentioned in the city’s documentation, including a greater likelihood that welfare applicants with children would be wrongly flagged for investigation. (Amsterdam officials did not respond to a request for comment about this finding, nor other follow up questions about general critiques of the city’s welfare system.)

The city was stuck. Nearly 1,600 welfare applications had been run through the model during the pilot period. But the results meant that members of the team were uncomfortable continuing to test—especially when there could be genuine consequences. In short, de Koning says, the city could not “definitely” say that “this is not discriminating.” 

He, and others working on the project, did not believe this was necessarily a reason to scrap Smart Check. They wanted more time—say, “a period of 12 months,” according to de Koning—to continue testing and refining the model. 

They knew, however, that would be a hard sell. 

In late November 2023, Rutger Groot Wassink—the city official in charge of social affairs—took his seat in the Amsterdam council chamber. He glanced at the tablet in front of him and then addressed the room: “I have decided to stop the pilot.”

The announcement brought an end to the sweeping multiyear experiment. In another council meeting a few months later, he explained why the project was terminated: “I would have found it very difficult to justify, if we were to come up with a pilot … that showed the algorithm contained enormous bias,” he said. “There would have been parties who would have rightly criticized me about that.” 

Viewed in a certain light, the city had tested out an innovative approach to identifying fraud in a way designed to minimize risks, found that it had not lived up to its promise, and scrapped it before the consequences for real people had a chance to multiply. 

But for IJmker and some of her city council colleagues focused on social welfare, there was also the question of opportunity cost. She recalls speaking with a colleague about how else the city could’ve spent that money—like to “hire some more people to do personal contact with the different people that we’re trying to reach.” 

City council members were never told exactly how much the effort cost, but in response to questions from MIT Technology Review, Lighthouse, and Trouw on this topic, the city estimated that it had spent some €500,000, plus €35,000 for the contract with Deloitte—but cautioned that the total amount put into the project was only an estimate, given that Smart Check was developed in house by various existing teams and staff members. 

For her part, van der Vliet, the Participation Council member, was not surprised by the poor result. The possibility of a discriminatory computer system was “precisely one of the reasons” her group hadn’t wanted the pilot, she says. And as for the discrimination in the existing system? “Yes,” she says, bluntly. “But we have always said that [it was discriminatory].” 

She and other advocates wished that the city had focused more on what they saw as the real problems facing welfare recipients: increases in the cost of living that have not, typically, been followed by increases in benefits; the need to document every change that could potentially affect their benefits eligibility; and the distrust with which they feel they are treated by the municipality. 

Can this kind of algorithm ever be done right?

When we spoke to Bodaar in March, a year and a half after the end of the pilot, he was candid in his reflections. “Perhaps it was unfortunate to immediately use one of the most complicated systems,” he said, “and perhaps it is also simply the case that it is not yet … the time to use artificial intelligence for this goal.”

“Niente, zero, nada. We’re not going to do that anymore,” he said about using AI to evaluate welfare applicants. “But we’re still thinking about this: What exactly have we learned?”

That is a question that IJmker thinks about too. In city council meetings she has brought up Smart Check as an example of what not to do. While she was glad that city employees had been thoughtful in their “many protocols,” she worried that the process obscured some of the larger questions of “philosophical” and “political values” that the city had yet to weigh in on as a matter of policy. 

Questions such as “How do we actually look at profiling?” or “What do we think is justified?”—or even “What is bias?” 

These questions are, “where politics comes in, or ethics,” she says, “and that’s something you cannot put into a checkbox.”

But now that the pilot has stopped, she worries that her fellow city officials might be too eager to move on. “I think a lot of people were just like, ‘Okay, well, we did this. We’re done, bye, end of story,’” she says. It feels like “a waste,” she adds, “because people worked on this for years.”

CHANTAL JAHCHAN

In abandoning the model, the city has returned to an analog process that its own analysis concluded was biased against women and Dutch nationals—a fact not lost on Berkers, the data scientist, who no longer works for the city. By shutting down the pilot, he says, the city sidestepped the uncomfortable truth—that many of the concerns de Zwart raised about the complex, layered biases within the Smart Check model also apply to the caseworker-led process.

“That’s the thing that I find a bit difficult about the decision,” Berkers says. “It’s a bit like no decision. It is a decision to go back to the analog process, which in itself has characteristics like bias.” 

Chen, the ethical-AI consultant, largely agrees. “Why do we hold AI systems to a higher standard than human agents?” he asks. When it comes to the caseworkers, he says, “there was no attempt to correct [the bias] systematically.” Amsterdam has promised to write a report on human biases in the welfare process, but the date has been pushed back several times.

“In reality, what ethics comes down to in practice is: nothing’s perfect,” he says. “There’s a high-level thing of Do not discriminate, which I think we can all agree on, but this example highlights some of the complexities of how you translate that [principle].” Ultimately, Chen believes that finding any solution will require trial and error, which by definition usually involves mistakes: “You have to pay that cost.”

But it may be time to more fundamentally reconsider how fairness should be defined—and by whom. Beyond the mathematical definitions, some researchers argue that the people most affected by the programs in question should have a greater say. “Such systems only work when people buy into them,” explains Elissa Redmiles, an assistant professor of computer science at Georgetown University who has studied algorithmic fairness. 

No matter what the process looks like, these are questions that every government will have to deal with—and urgently—in a future increasingly defined by AI. 

And, as de Zwart argues, if broader questions are not tackled, even well-intentioned officials deploying systems like Smart Check in cities like Amsterdam will be condemned to learn—or ignore—the same lessons over and over. 

“We are being seduced by technological solutions for the wrong problems,” he says. “Should we really want this? Why doesn’t the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?”


Eileen Guo is the senior reporter for features and investigations at MIT Technology Review. Gabriel Geiger is an investigative reporter at Lighthouse Reports. Justin-Casimir Braun is a data reporter at Lighthouse Reports.

Additional reporting by Jeroen van Raalte for Trouw, Melissa Heikkilä for MIT Technology Review, and Tahmeed Shafiq for Lighthouse Reports. Fact checked by Alice Milliken. 

You can read a detailed explanation of our technical methodology here. You can read Trouw‘s companion story, in Dutch, here.

Why humanoid robots need their own safety rules

Last year, a humanoid warehouse robot named Digit set to work handling boxes of Spanx. Digit can lift boxes up to 16 kilograms between trolleys and conveyor belts, taking over some of the heavier work for its human colleagues. It works in a restricted, defined area, separated from human workers by physical panels or laser barriers. That’s because while Digit is usually steady on its robot legs, which have a distinctive backwards knee-bend, it sometimes falls. For example, at a trade show in March, it appeared to be capably shifting boxes until it suddenly collapsed, face-planting on the concrete floor and dropping the container it was carrying.

The risk of that sort of malfunction happening around people is pretty scary. No one wants a 1.8-meter-tall, 65-kilogram machine toppling onto them, or a robot arm accidentally smashing into a sensitive body part. “Your throat is a good example,” says Pras Velagapudi, chief technology officer of Agility Robotics, Digit’s manufacturer. “If a robot were to hit it, even with a fraction of the force that it would need to carry a 50-pound tote, it could seriously injure a person.”

Physical stability—i.e., the ability to avoid tipping over—is the No. 1 safety concern identified by a group exploring new standards for humanoid robots. The IEEE Humanoid Study Group argues that humanoids differ from other robots, like industrial arms or existing mobile robots, in key ways and therefore require a new set of standards in order to protect the safety of operators, end users, and the general public. The group shared its initial findings with MIT Technology Review and plans to publish its full report later this summer. It identifies distinct challenges, including physical and psychosocial risks as well as issues such as privacy and security, that it feels standards organizations need to address before humanoids start being used in more collaborative scenarios.    

While humanoids are just taking their first tentative steps into industrial applications, the ultimate goal is to have them operating in close quarters with humans; one reason for making robots human-shaped in the first place is so they can more easily navigate the environments we’ve designed around ourselves. This means they will need to be able to share space with people, not just stay behind protective barriers. But first, they need to be safe.

One distinguishing feature of humanoids is that they are “dynamically stable,” says Aaron Prather, a director at the standards organization ASTM International and the IEEE group’s chair. This means they need power in order to stay upright; they exert force through their legs (or other limbs) to stay balanced. “In traditional robotics, if something happens, you hit the little red button, it kills the power, it stops,” Prather says. “You can’t really do that with a humanoid.” If you do, the robot will likely fall—potentially posing a bigger risk.

Slower brakes

What might a safety feature look like if it’s not an emergency stop? Agility Robotics is rolling out some new features on the latest version of Digit to try to address the toppling issue. Rather than instantly depowering (and likely falling down), the robot could decelerate more gently when, for instance, a person gets too close. “The robot basically has a fixed amount of time to try to get itself into a safe state,” Velagapudi says. Perhaps it puts down anything it’s carrying and drops to its hands and knees before powering down.

Different robots could tackle the problem in different ways. “We want to standardize the goal, not the way to get to the goal,” says Federico Vicentini, head of product safety at Boston Dynamics. Vicentini is chairing a working group at the International Organization for Standardization (ISO) to develop a new standard dedicated to the safety of industrial robots that need active control to maintain stability (experts at Agility Robotics are also involved). The idea, he says, is to set out clear safety expectations without constraining innovation on the part of robot and component manufacturers: “How to solve the problem is up to the designer.”

Trying to set universal standards while respecting freedom of design can pose challenges, however. First of all, how do you even define a humanoid robot? Does it need to have legs? Arms? A head? 

“One of our recommendations is that maybe we need to actually drop the term ‘humanoid’ altogether,” Prather says. His group advocates a classification system for humanoid robots that would take into account their capabilities, behavior, and intended use cases rather than how they look. The ISO standard Vicentini is working on refers to all industrial mobile robots “with actively controlled stability.” This would apply as much to Boston Dynamics’ dog-like quadruped Spot as to its bipedal humanoid Atlas, and could equally cover robots with wheels or some other kind of mobility.

How to speak robot

Aside from physical safety issues, humanoids pose a communication challenge. If they are to share space with people, they will need to recognize when someone’s about to cross their path and communicate their own intentions in a way everyone can understand, just as cars use brake lights and indicators to show the driver’s intent. Digit already has lights to show its status and the direction it’s traveling in, says Velagapudi, but it will need better indicators if it’s to work cooperatively, and ultimately collaboratively, with humans. 

“If Digit’s going to walk out into an aisle in front of you, you don’t want to be surprised by that,” he says. The robot could use voice commands, but audio alone is not practical for a loud industrial setting. It could be even more confusing if you have multiple robots in the same space—which one is trying to get your attention?

There’s also a psychological effect that differentiates humanoids from other kinds of robots, says Prather. We naturally anthropomorphize robots that look like us, which can lead us to overestimate their abilities and get frustrated if they don’t live up to those expectations. “Sometimes you let your guard down on safety, or your expectations of what that robot can do versus reality go higher,” he says. These issues are especially problematic when robots are intended to perform roles involving emotional labor or support for vulnerable people. The IEEE report recommends that any standards should include emotional safety assessments and policies that “mitigate psychological stress or alienation.”

To inform the report, Greta Hilburn, a user-centered designer at the US Defense Acquisition University, conducted surveys with a wide range of non-engineers to get a sense of their expectations around humanoid robots. People overwhelmingly wanted robots that could form facial expressions, read people’s micro-expressions, and use gestures, voice, and haptics to communicate. “They wanted everything—something that doesn’t exist,” she says.

Escaping the warehouse

Getting human-robot interaction right could be critical if humanoids are to move out of industrial spaces and into other contexts, such as hospitals, elderly care environments, or homes. It’s especially important for robots that may be working with vulnerable populations, says Hilburn. “The damage that can be done within an interaction with a robot if it’s not programmed to speak in a way to make a human feel safe, whether it be a child or an older adult, could certainly have different types of outcomes,” she says.

The IEEE group’s recommendations include enabling a human override, standardizing some visual and auditory cues, and aligning a robot’s appearance with its capabilities so as not to mislead users. If a robot looks human, Prather says, people will expect it to be able to hold a conversation and exhibit some emotional intelligence; if it can actually only do basic mechanical tasks, this could cause confusion, frustration, and a loss of trust. 

“It’s kind of like self-checkout machines,” he says. “No one expects them to chat with you or help with your groceries, because they’re clearly machines. But if they looked like a friendly employee and then just repeated ‘Please scan your next item,’ people would get annoyed.”

Prather and Hilburn both emphasize the need for inclusivity and adaptability when it comes to human-robot interaction. Can a robot communicate with deaf or blind people? Will it be able to adapt to waiting slightly longer for people who may need more time to respond? Can it understand different accents?

There may also need to be some different standards for robots that operate in different environments, says Prather. A robot working in a factory alongside people trained to interact with it is one thing, but a robot designed to help in the home or interact with kids at a theme park is another proposition. With some general ground rules in place, however, the public should ultimately be able to understand what robots are doing wherever they encounter them. It’s not about being prescriptive or holding back innovation, he says, but about setting some basic guidelines so that manufacturers, regulators, and end users all know what to expect: “We’re just saying you’ve got to hit this minimum bar—and we all agree below that is bad.”

The IEEE report is intended as a call to action for standards organizations, like Vicentini’s ISO group, to start the process of defining that bar. It’s still early for humanoid robots, says Vicentini—we haven’t seen the state of the art yet—but it’s better to get some checks and balances in place so the industry can move forward with confidence. Standards help manufacturers build trust in their products and make it easier to sell them in international markets, and regulators often rely on them when coming up with their own rules. Given the diversity of players in the field, it will be difficult to create a standard everyone agrees on, Vicentini says, but “everybody equally unhappy is good enough.”

The Pentagon is gutting the team that tests AI and weapons systems

The Trump administration’s chainsaw approach to federal spending lives on, even as Elon Musk turns on the president. On May 28, Secretary of Defense Pete Hegseth announced he’d be gutting a key office at the Department of Defense responsible for testing and evaluating the safety of weapons and AI systems.

As part of a string of moves aimed at “reducing bloated bureaucracy and wasteful spending in favor of increased lethality,” Hegseth cut the size of the Office of the Director of Operational Test and Evaluation in half. The group was established in the 1980s—following orders from Congress—after criticisms that the Pentagon was fielding weapons and systems that didn’t perform as safely or effectively as advertised. Hegseth is reducing the agency’s staff to about 45, down from 94, and firing and replacing its director. He gave the office just seven days to implement the changes.

It is a significant overhaul of a department that in 40 years has never before been placed so squarely on the chopping block. Here’s how today’s defense tech companies, which have fostered close connections to the Trump administration, stand to gain, and why safety testing might suffer as a result. 

The Operational Test and Evaluation office is “the last gate before a technology gets to the field,” says Missy Cummings, a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. Though the military can do small experiments with new systems without running it by the office, it has to test anything that gets fielded at scale.

“In a bipartisan way—up until now—everybody has seen it’s working to help reduce waste, fraud, and abuse,” she says. That’s because it provides an independent check on companies’ and contractors’ claims about how well their technology works. It also aims to expose the systems to more rigorous safety testing.

The gutting comes at a particularly pivotal time for AI and military adoption: The Pentagon is experimenting with putting AI into everything, mainstream companies like OpenAI are now more comfortable working with the military, and defense giants like Anduril are winning big contracts to launch AI systems (last Thursday, Anduril announced a whopping $2.5 billion funding round, doubling its valuation to over $30 billion). 

Hegseth claims his cuts will “make testing and fielding weapons more efficient,” saving $300 million. But Cummings is concerned that they are paving a way to faster adoption while increasing the chances that new systems won’t be as safe or effective as promised. “The firings in DOTE send a clear message that all perceived obstacles for companies favored by Trump are going to be removed,” she says.

Anduril and Anthropic, which have launched AI applications for military use, did not respond to my questions about whether they pushed for or approve of the cuts. A representative for OpenAI said that the company was not involved in lobbying for the restructuring. 

“The cuts make me nervous,” says Mark Cancian, a senior advisor at the Center for Strategic and International Studies who previously worked at the Pentagon in collaboration with the testing office. “It’s not that we’ll go from effective to ineffective, but you might not catch some of the problems that would surface in combat without this testing step.”

It’s hard to say precisely how the cuts will affect the office’s ability to test systems, and Cancian admits that those responsible for getting new technologies out onto the battlefield sometimes complain that it can really slow down adoption. But still, he says, the office frequently uncovers errors that weren’t previously caught.

It’s an especially important step, Cancian says, whenever the military is adopting a new type of technology like generative AI. Systems that might perform well in a lab setting almost always encounter new challenges in more realistic scenarios, and the Operational Test and Evaluation group is where that rubber meets the road.

So what to make of all this? It’s true that the military was experimenting with artificial intelligence long before the current AI boom, particularly with computer vision for drone feeds, and defense tech companies have been winning big contracts for this push across multiple presidential administrations. But this era is different. The Pentagon is announcing ambitious pilots specifically for large language models, a relatively nascent technology that by its very nature produces hallucinations and errors, and it appears eager to put much-hyped AI into everything. The key independent group dedicated to evaluating the accuracy of these new and complex systems now only has half the staff to do it. I’m not sure that’s a win for anyone.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Manus has kick-started an AI agent boom in China

Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March

These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 

China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

Set the standard

It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees. 

Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions, supports long-term memory that would serve as context for future tasks.

“Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

In the case of Manus, the competition is moving fast. Two of the most buzzy follow‑ups, Genspark and Flowith, for example, are already boasting benchmark scores that match or edge past Manus’s. 

Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue.

Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase”, and aims to tap into the social aspect of AI with the aspiration of becoming “the OnlyFans of AI knowledge creators”.

What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market.

A global address

Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model. 

An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

A super‑app approach

Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.

Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

“Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”

What’s next for AI and math

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

The way DARPA tells it, math is stuck in the past. In April, the US Defense Advanced Research Projects Agency kicked off a new initiative called expMath—short for Exponentiating Mathematics—that it hopes will speed up the rate of progress in a field of research that underpins a wide range of crucial real-world applications, from computer science to medicine to national security.

“Math is the source of huge impact, but it’s done more or less as it’s been done for centuries—by people standing at chalkboards,” DARPA program manager Patrick Shafto said in a video introducing the initiative

The modern world is built on mathematics. Math lets us model complex systems such as the way air flows around an aircraft, the way financial markets fluctuate, and the way blood flows through the heart. And breakthroughs in advanced mathematics can unlock new technologies such as cryptography, which is essential for private messaging and online banking, and data compression, which lets us shoot images and video across the internet.

But advances in math can be years in the making. DARPA wants to speed things up. The goal for expMath is to encourage mathematicians and artificial-intelligence researchers to develop what DARPA calls an AI coauthor, a tool that might break large, complex math problems into smaller, simpler ones that are easier to grasp and—so the thinking goes—quicker to solve.

Mathematicians have used computers for decades, to speed up calculations or check whether certain mathematical statements are true. The new vision is that AI might help them crack problems that were previously uncrackable.  

But there’s a huge difference between AI that can solve the kinds of problems set in high school—math that the latest generation of models has already mastered—and AI that could (in theory) solve the kinds of problems that professional mathematicians spend careers chipping away at.

On one side are tools that might be able to automate certain tasks that math grads are employed to do; on the other are tools that might be able to push human knowledge beyond its existing limits.

Here are three ways to think about that gulf.

1/ AI needs more than just clever tricks

Large language models are not known to be good at math. They make things up and can be persuaded that 2 + 2 = 5. But newer versions of this tech, especially so-called large reasoning models (LRMs) like OpenAI’s o3 and Anthropic’s Claude 4 Thinking, are far more capable—and that’s got mathematicians excited.

This year, a number of LRMs, which try to solve a problem step by step rather than spit out the first result that comes to them, have achieved high scores on the American Invitational Mathematics Examination (AIME), a test given to the top 5% of US high school math students.

At the same time, a handful of new hybrid models that combine LLMs with some kind of fact-checking system have also made breakthroughs. Emily de Oliveira Santos, a mathematician at the University of São Paulo, Brazil, points to Google DeepMind’s AlphaProof, a system that combines an LLM with DeepMind’s game-playing model AlphaZero, as one key milestone. Last year AlphaProof became the first computer program to match the performance of a silver medallist at the International Math Olympiad, one of the most prestigious mathematics competitions in the world.

And in May, a Google DeepMind model called AlphaEvolve discovered better results than anything humans had yet come up with for more than 50 unsolved mathematics puzzles and several real-world computer science problems.

The uptick in progress is clear. “GPT-4 couldn’t do math much beyond undergraduate level,” says de Oliveira Santos. “I remember testing it at the time of its release with a problem in topology, and it just couldn’t write more than a few lines without getting completely lost.” But when she gave the same problem to OpenAI’s o1, an LRM released in January, it nailed it.

Does this mean such models are all set to become the kind of coauthor DARPA hopes for? Not necessarily, she says: “Math Olympiad problems often involve being able to carry out clever tricks, whereas research problems are much more explorative and often have many, many more moving pieces.” Success at one type of problem-solving may not carry over to another.

Others agree. Martin Bridson, a mathematician at the University of Oxford, thinks the Math Olympiad result is a great achievement. “On the other hand, I don’t find it mind-blowing,” he says. “It’s not a change of paradigm in the sense that ‘Wow, I thought machines would never be able to do that.’ I expected machines to be able to do that.”

That’s because even though the problems in the Math Olympiad—and similar high school or undergraduate tests like AIME—are hard, there’s a pattern to a lot of them. “We have training camps to train high school kids to do them,” says Bridson. “And if you can train a large number of people to do those problems, why shouldn’t you be able to train a machine to do them?”

Sergei Gukov, a mathematician at the California Institute of Technology who coaches Math Olympiad teams, points out that the style of question does not change too much between competitions. New problems are set each year, but they can be solved with the same old tricks.

“Sure, the specific problems didn’t appear before,” says Gukov. “But they’re very close—just a step away from zillions of things you have already seen. You immediately realize, ‘Oh my gosh, there are so many similarities—I’m going to apply the same tactic.’” As hard as competition-level math is, kids and machines alike can be taught how to beat it.

That’s not true for most unsolved math problems. Bridson is president of the Clay Mathematics Institute, a nonprofit US-based research organization best known for setting up the Millenium Prize Problems in 2000—seven of the most important unsolved problems in mathematics, with a $1 million prize to be awarded to the first person to solve each of them. (One problem, the Poincaré conjecture, was solved in 2010; the others, which include P versus NP and the Riemann hypothesis, remain open). “We’re very far away from AI being able to say anything serious about any of those problems,” says Bridson.

And yet it’s hard to know exactly how far away, because many of the existing benchmarks used to evaluate progress are maxed out. The best new models already outperform most humans on tests like AIME.

To get a better idea of what existing systems can and cannot do, a startup called Epoch AI has created a new test called FrontierMath, released in December. Instead of co-opting math tests developed for humans, Epoch AI worked with more than 60 mathematicians around the world to come up with a set of math problems from scratch.

FrontierMath is designed to probe the limits of what today’s AI can do. None of the problems have been seen before and the majority are being kept secret to avoid contaminating training data. Each problem demands hours of work from expert mathematicians to solve—if they can solve it at all: some of the problems require specialist knowledge to tackle.

FrontierMath is set to become an industry standard. It’s not yet as popular as AIME, says de Oliveira Santos, who helped develop some of the problems: “But I expect this to not hold for much longer, since existing benchmarks are very close to being saturated.”

On AIME, the best large language models (Anthropic’s Claude 4, OpenAI’s o3 and o4-mini, Google DeepMind’s Gemini 2.5 Pro, X-AI’s Grok 3) now score around 90%. On FrontierMath, 04-mini scores 19% and Gemini 2.5 Pro scores 13%. That’s still remarkable, but there’s clear room for improvement.     

FrontierMath should give the best sense yet just how fast AI is progressing at math. But there are some problems that are still too hard for computers to take on.

2/ AI needs to manage really vast sequences of steps

Squint hard enough and in some ways math problems start to look the same: to solve them you need to take a sequence of steps from start to finish. The problem is finding those steps. 

“Pretty much every math problem can be formulated as path-finding,” says Gukov. What makes some problems far harder than others is the number of steps on that path. “The difference between the Riemann hypothesis and high school math is that with high school math the paths that we’re looking for are short—10 steps, 20 steps, maybe 40 in the longest case.” The steps are also repeated between problems.

“But to solve the Riemann hypothesis, we don’t have the steps, and what we’re looking for is a path that is extremely long”—maybe a million lines of computer proof, says Gukov.

Finding very long sequences of steps can be thought of as a kind of complex game. It’s what DeepMind’s AlphaZero learned to do when it mastered Go and chess. A game of Go might only involve a few hundred moves. But to win, an AI must find a winning sequence of moves among a vast number of possible sequences. Imagine a number with 100 zeros at the end, says Gukov.

But that’s still tiny compared with the number of possible sequences that could be involved in proving or disproving a very hard math problem: “A proof path with a thousand or a million moves involves a number with a thousand or a million zeros,” says Gukov. 

No AI system can sift through that many possibilities. To address this, Gukov and his colleagues developed a system that shortens the length of a path by combining multiple moves into single supermoves. It’s like having boots that let you take giant strides: instead of taking 2,000 steps to walk a mile, you can now walk it in 20.

The challenge was figuring out which moves to replace with supermoves. In a series of experiments, the researchers came up with a system in which one reinforcement-learning model suggests new moves and a second model checks to see if those moves help.

They used this approach to make a breakthrough in a math problem called the Andrews-Curtis conjecture, a puzzle that has been unsolved for 60 years. It’s a problem that every professional mathematician will know, says Gukov.

(An aside for math stans only: The AC conjecture states that a particular way of describing a type of set called a trivial group can be translated into a different but equivalent description with a certain sequence of steps. Most mathematicians think the AC conjecture is false, but nobody knows how to prove that. Gukov admits himself that it is an intellectual curiosity rather than a practical problem, but an important problem for mathematicians nonetheless.)

Gukov and his colleagues didn’t solve the AC conjecture, but they found that a counterexample (suggesting that the conjecture is false) proposed 40 years ago was itself false. “It’s been a major direction of attack for 40 years,” says Gukov. With the help of AI, they showed that this direction was in fact a dead end.   

“Ruling out possible counterexamples is a worthwhile thing,” says Bridson. “It can close off blind alleys, something you might spend a year of your life exploring.” 

True, Gukov checked off just one piece of one esoteric puzzle. But he thinks the approach will work in any scenario where you need to find a long sequence of unknown moves, and he now plans to try it out on other problems.

“Maybe it will lead to something that will help AI in general,” he says. “Because it’s teaching reinforcement learning models to go beyond their training. To me it’s basically about thinking outside of the box—miles away, megaparsecs away.”  

3/ Can AI ever provide real insight?

Thinking outside the box is exactly what mathematicians need to solve hard problems. Math is often thought to involve robotic, step-by-step procedures. But advanced math is an experimental pursuit, involving trial and error and flashes of insight.

That’s where tools like AlphaEvolve come in. Google DeepMind’s latest model asks an LLM to generate code to solve a particular math problem. A second model then evaluates the proposed solutions, picks the best, and sends them back to the LLM to be improved. After hundreds of rounds of trial and error, AlphaEvolve was able to come up with solutions to a wide range of math problems that were better than anything people had yet come up with. But it can also work as a collaborative tool: at any step, humans can share their own insight with the LLM, prompting it with specific instructions.

This kind of exploration is key to advanced mathematics. “I’m often looking for interesting phenomena and pushing myself in a certain direction,” says Geordie Williamson, a mathematician at the University of Sydney in Australia. “Like: ‘Let me look down this little alley. Oh, I found something!’”

Williamson worked with Meta on an AI tool called PatternBoost, designed to support this kind of exploration. PatternBoost can take a mathematical idea or statement and generate similar ones. “It’s like: ‘Here’s a bunch of interesting things. I don’t know what’s going on, but can you produce more interesting things like that?’” he says.

Such brainstorming is essential work in math. It’s how new ideas get conjured. Take the icosahedron, says Williamson: “It’s a beautiful example of this, which I kind of keep coming back to in my own work.” The icosahedron is a 20-sided 3D object where all the faces are triangles (think of a 20-sided die). The icosahedron is the largest of a family of exactly five such objects: there’s the tetrahedron (four sides), cube (six sides), octahedron (eight sides), and dodecahedron (12 sides).

Remarkably, the fact that there are exactly five of these objects was proved by mathematicians in ancient Greece. “At the time that this theorem was proved, the icosahedron didn’t exist,” says Williamson. “You can’t go to a quarry and find it—someone found it in their mind. And the icosahedron goes on to have a profound effect on mathematics. It’s still influencing us today in very, very profound ways.”

For Williamson, the exciting potential of tools like PatternBoost is that they might help people discover future mathematical objects like the icosahedron that go on to shape the way math is done. But we’re not there yet. “AI can contribute in a meaningful way to research-level problems,” he says. “But we’re certainly not getting inundated with new theorems at this stage.”

Ultimately, it comes down to the fact that machines still lack what you might call intuition or creative thinking. Williamson sums it up like this: We now have AI that can beat humans when it knows the rules of the game. “But it’s one thing for a computer to play Go at a superhuman level and another thing for the computer to invent the game of Go.”

“I think that applies to advanced mathematics,” he says. “Breakthroughs come from a new way of thinking about something, which is akin to finding completely new moves in a game. And I don’t really think we understand where those really brilliant moves in deep mathematics come from.”

Perhaps AI tools like AlphaEvolve and PatternBoost are best thought of as advance scouts for human intuition. They can discover new directions and point out dead ends, saving mathematicians months or years of work. But the true breakthroughs will still come from the minds of people, as has been the case for thousands of years.

For now, at least. “There’s plenty of tech companies that tell us that won’t last long,” says Williamson. “But you know—we’ll see.” 

Inside the effort to tally AI’s energy appetite

After working on it for months, my colleague Casey Crownhart and I finally saw our story on AI’s energy and emissions burden go live last week. 

The initial goal sounded simple: Calculate how much energy is used each time we interact with a chatbot, and then tally that up to understand why everyone from leaders of AI companies to officials at the White House wants to harness unprecedented levels of electricity to power AI and reshape our energy grids in the process. 

It was, of course, not so simple. After speaking with dozens of researchers, we realized that the common understanding of AI’s energy appetite is full of holes. I encourage you to read the full story, which has some incredible graphics to help you understand everything from the energy used in a single query right up to what AI will require just three years from now (enough electricity to power 22% of US households, it turns out). But here are three takeaways I have after the project. 

AI is in its infancy

We focused on measuring the energy requirements that go into using a chatbot, generating an image, and creating a video with AI. But these three uses are relatively small-scale compared with where AI is headed next. 

Lots of AI companies are building reasoning models, which “think” for longer and use more energy. They’re building hardware devices, perhaps like the one Jony Ive has been working on (which OpenAI just acquired for $6.5 billion), that have AI constantly humming along in the background of our conversations. They’re designing agents and digital clones of us to act on our behalf. All these trends point to a more energy-intensive future (which, again, helps explain why OpenAI and others are spending such inconceivable amounts of money on energy). 

But the fact that AI is in its infancy raises another point. The models, chips, and cooling methods behind this AI revolution could all grow more efficient over time, as my colleague Will Douglas Heaven explains. This future isn’t predetermined.

AI video is on another level

When we tested the energy demands of various models, we found the energy required to produce even a low-quality, five-second video to be pretty shocking: It was 42,000 times more than the amount needed for a chatbot answer a question about a recipe, and enough to power a microwave for over an hour. If there’s one type of AI whose energy appetite should worry you, it’s this one. 

Soon after we published, Google debuted the latest iteration of its Veo model. People quickly created compilations of the most impressive clips (this one being the most shocking to me). Something we point out in the story is that Google (as well as OpenAI, which has its own video generator, Sora) denied our request for specific numbers on the energy their AI models use. Nonetheless, our reporting suggests it’s very likely that high-definition video models like Veo and Sora are much larger, and much more energy-demanding, than the models we tested. 

I think the key to whether the use of AI video will produce indefensible clouds of emissions in the near future will be how it’s used, and how it’s priced. The example I linked shows a bunch of TikTok-style content, and I predict that if creating AI video is cheap enough, social video sites will be inundated with this type of content. 

There are more important questions than your own individual footprint

We expected that a lot of readers would understandably think about this story in terms of their own individual footprint, wondering whether their AI usage is contributing to the climate crisis. Don’t panic: It’s likely that asking a chatbot for help with a travel plan does not meaningfully increase your carbon footprint. Video generation might. But after reporting on this for months, I think there are more important questions.

Consider, for example, the water being drained from aquifers in Nevada, the country’s driest state, to power data centers that are drawn to the area by tax incentives and easy permitting processes, as detailed in an incredible story by James Temple. Or look at how Meta’s largest data center project, in Louisiana, is relying on natural gas despite industry promises to use clean energy, per a story by David Rotman. Or the fact that nuclear energy is not the silver bullet that AI companies often make it out to be. 

There are global forces shaping how much energy AI companies are able to access and what types of sources will provide it. There is also very little transparency from leading AI companies on their current and future energy demands, even while they’re asking for public support for these plans. Pondering your individual footprint can be a good thing to do, provided you remember that it’s not so much your footprint as these other factors that are keeping climate researchers and energy experts we spoke to up at night.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAIannounced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic

An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out.

A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do.

“We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.”

It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France.

While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so.

To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing. 

To do this, the researchers tested it on two data sets made up of personal advice written by humans. This first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version that the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.  

Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set.

But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions.

“It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.”

Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.” 

While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. “And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation. 

“We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.”

Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails. 

She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. “This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’”

Fueling seamless AI at scale

From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

Silicon’s mid-life crisis

AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are data and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, which determines that the number of transistors on a chip doubles every two years, is reaching a physical and economic plateau.

For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth that allow large-scale mathematical operations to be processed efficiently.

But CPUs are already the most widely deployed and can be companions to processors like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips that it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.

Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

Understanding models and paradigms

The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows them to extract increased performance gains, like faster model response times, from the same or even less compute.

Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lower the memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. 

New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. The DeepSeek R1, an open source LLM, is a compelling example of how more output can be extracted using the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts.

The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.

Orchestrating AI

As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

With AI increasingly used in daily life, the role of user experiences becomes critical for mass adoption. Features like predictive text in touch keyboards, and adaptive gearboxes in vehicles, offer glimpses of AI as a vital enabler to improve technology interactions for users.

Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.

Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, including cryptographic safeguards and transforming the trust model of compute environments.

More than 70% of respondents to a 2024 DarkTrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% say their organizations are not adequately prepared to defend against AI-powered attacks.

Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

How far AI goes to becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs, all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions are also helpful in allowing a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

Learn more about Arm’s approach to enabling AI everywhere.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The AI Hype Index: College students are hooked on ChatGPT

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Large language models confidently present their responses as accurate and reliable, even when they’re neither of those things. That’s why we’ve recently seen chatbots supercharge vulnerable people’s delusions, make citation mistakes in an important legal battle between music publishers and Anthropic, and (in the case of xAI’s Grok) rant irrationally about “white genocide.”

But it’s not all bad news—AI could also finally lead to a better battery life for your iPhone and solve tricky real-world problems that humans have been struggling to crack, if Google DeepMind’s new model is any indication. And perhaps most exciting of all, it could combine with brain implants to help people communicate when they have lost the ability to speak.

Anthropic’s new hybrid AI model can work on tasks autonomously for hours at a time

Anthropic has announced two new AI models that it claims represent a major step toward making AI agents truly useful.

AI agents trained on Claude Opus 4, the company’s most powerful model to date, raise the bar for what such systems are capable of by tackling difficult tasks over extended periods of time and responding more usefully to user instructions, the company says.

Claude Opus 4 has been built to execute complex tasks that involve completing thousands of steps over several hours. For example, it created a guide for the video game Pokémon Red while playing it for more than 24 hours straight. The company’s previously most powerful model, Claude 3.7 Sonnet, was capable of playing for just 45 minutes, says Dianne Penn, product lead for research at Anthropic.

Similarly, the company says that one of its customers, the Japanese technology company Rakuten, recently deployed Claude Opus 4 to code autonomously for close to seven hours on a complicated open-source project. 

Anthropic achieved these advances by improving the model’s ability to create and maintain “memory files” to store key information. This enhanced ability to “remember” makes the model better at completing longer tasks.

“We see this model generation leap as going from an assistant to a true agent,” says Penn. “While you still have to give a lot of real-time feedback and make all of the key decisions for AI assistants, an agent can make those key decisions itself. It allows humans to act more like a delegator or a judge, rather than having to hold these systems’ hands through every step.”

While Claude Opus 4 will be limited to paying Anthropic customers, a second model, Claude Sonnet 4, will be available for both paid and free tiers of users. Opus 4 is being marketed as a powerful, large model for complex challenges, while Sonnet 4 is described as a smart, efficient model for everyday use.  

Both of the new models are hybrid, meaning they can offer a swift reply or a deeper, more reasoned response depending on the nature of a request. While they calculate a response, both models can search the web or use other tools to improve their output.

AI companies are currently locked in a race to create truly useful AI agents that are able to plan, reason, and execute complex tasks both reliably and free from human supervision, says Stefano Albrecht, director of AI at the startup DeepFlow and coauthor of Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. Often this involves autonomously using the internet or other tools. There are still safety and security obstacles to overcome. AI agents powered by large language models can act erratically and perform unintended actions—which becomes even more of a problem when they’re trusted to act without human supervision.

“The more agents are able to go ahead and do something over extended periods of time, the more helpful they will be, if I have to intervene less and less,” he says. “The new models’ ability to use tools in parallel is interesting—that could save some time along the way, so that’s going to be useful.”

As an example of the sorts of safety issues AI companies are still tackling, agents can end up taking unexpected shortcuts or exploiting loopholes to reach the goals they’ve been given. For example, they might book every seat on a plane to ensure that their user gets a seat, or resort to creative cheating to win a chess game. Anthropic says it managed to reduce this behavior, known as reward hacking, in both new models by 65% relative to Claude Sonnet 3.7. It achieved this by more closely monitoring problematic behaviors during training, and improving both the AI’s training environment and the evaluation methods.