The scientist using AI to hunt for antibiotics just about everywhere

When he was just a teenager trying to decide what to do with his life, César de la Fuente compiled a list of the world’s biggest problems. He ranked them inversely by how much money governments were spending to solve them. Antimicrobial resistance topped the list. 

Twenty years on, the problem has not gone away. If anything, it’s gotten worse. Infections caused by bacteria, fungi, and viruses that have evolved ways to evade treatments are now associated with more than 4 million deaths per year, and a recent analysis, published in the Lancet, predicts that number could surge past 8 million by 2050. In a July 2025 essay in Physical Review Letters, de la Fuente, now a bioengineer and computational biologist, and synthetic biologist James Collins warned of a looming “post-antibiotic” era in which infections from drug-resistant strains of common bacteria like Escherichia coli or Staphylococcus aureus, which can often still be treated by our current arsenal of medications, become fatal. “The antibiotic discovery pipeline remains perilously thin,” they wrote, “impeded by high development costs, lengthy timelines, and low returns on investment.”

But de la Fuente is using artificial intelligence to bring about a different future. His team at the University of Pennsylvania is training AI tools to search genomes far and deep for peptides with antibiotic properties. His vision is to assemble those peptides—molecules made of up to 50 amino acids linked together—into various configurations, including some never seen in nature. The results, he hopes, could defend the body against microbes that withstand traditional treatments. 

His quest has unearthed promising candidates in unexpected places. In August 2025 his team, which includes 16 scientists in Penn’s Machine Biology Group, described peptides hiding in the genetic code of ancient single-celled organisms called archaea. Before that, they’d excavated a list of candidates from the venom of snakes, wasps, and spiders. And in an ongoing project de la Fuente calls “molecular de-extinction,” he and his collaborators have been scanning published genetic sequences of extinct species for potentially functional molecules. Those species include hominids like Neanderthals and Denisovans and charismatic megafauna like woolly mammoths, as well as ancient zebras and penguins. In the history of life on Earth, de la Fuente reasons, maybe some organism evolved an antimicrobial defense that could be helpful today. Those long-gone codes have given rise to resurrected compounds with names like mammuthusin-2 (from woolly mammoth DNA), mylodonin-2 (from the giant sloth), and hydrodamin-1 (from the ancient sea cow). Over the last few years, this molecular binge has enabled de la Fuente to amass a library of more than a million genetic recipes.

At 40 years old, de la Fuente has also collected a trophy case of awards from the American Society for Microbiology, the American Chemical Society, and other organizations. (In 2019, this magazine named him one of “35 Innovators Under 35” for bringing computational approaches to antibiotic discovery.) He’s widely recognized as a leader in the effort to harness AI for real-world problems. “He’s really helped pioneer that space,” says Collins, who is at MIT. (The two have not collaborated in the laboratory, but Collins has long been at the forefront of using AI for drug discovery, including the search for antibiotics. In 2020, Collins’s team used an AI model to predict a broad-spectrum antibiotic, halicin, that is now in preclinical development.) 

The world of antibiotic development needs as much creativity and innovation as researchers can muster, says Collins. And de la Fuente’s work on peptides has pushed the field forward: “César is marvelously talented, very innovative.” 

A messy, noisy endeavor

De la Fuente describes antimicrobial resistance as an “almost impossible” problem, but he sees plenty of room for exploration in the word almost. “I like challenges,” he says, “and I think this is the ultimate challenge.” 

The use, overuse, and misuse of antibiotics, he says, drives antimicrobial resistance. And the problem is growing unchecked because conventional ways to find, make, and test the drugs are prohibitively expensive and often lead to dead ends. “A lot of the companies that have attempted to do antibiotic development in the past have ended up folding because there’s no good return on investment at the end of the day,” he says.

Antibiotic discovery has always been a messy, noisy endeavor, driven by serendipity and fraught with uncertainty and misdirection. For decades, researchers have largely relied on brute-force mechanical methods. “Scientists dig into soil, they dig into water,” says de la Fuente. “And then from that complex organic matter they try to extract antimicrobial molecules.” 

But molecules can be extraordinarily complex. Researchers have estimated the number of possible organic combinations that could be synthesized at somewhere around 10⁶⁰. For reference, Earth contains an estimated 10¹⁸ grains of sand. “Drug discovery in any domain is a statistics game,” says Jonathan Stokes, a chemical biologist at McMaster University in Canada, who has been using generative AI to design potential new antibiotics that can be synthesized in a lab, and who worked with Collins on halicin. “You need enough shots on goal to happen to get one.” 

Those have to be good shots, though. And AI seems well suited to improving researchers’ aim. Biology is an information source, de la Fuente explains: “It’s like a bunch of code.” The code of DNA has four letters; proteins and peptides have 20, where each “letter” represents an amino acid. De la Fuente says his work amounts to training AI models to recognize sequences of letters that encode antimicrobial peptides, or AMPs. “If you think about it that way,” he says, “you can devise algorithms to mine the code and identify functional molecules, which can be antimicrobials. Or antimalarials. Or anticancer agents.” 

Practically speaking, we’re still not there: These peptides haven’t yet been transformed into usable drugs that help people, and there are plenty of details—dosage, delivery, specific targets—that need to be sorted out, says de la Fuente. But AMPs are appealing because the body already uses them. They’re a critical part of the immune system and often the first line of defense against pathogenic infections. Unlike conventional antibiotics, which typically have one trick for killing bacteria, AMPs often exhibit a multimodal approach. They may disrupt the cell wall and the genetic material inside as well as a variety of cellular processes. A bacterial pathogen may evolve resistance to a conventional drug’s single mode of action, but maybe not to a multipronged AMP attack.

From discovery to delivery

De la Fuente’s group is one of many pushing the boundaries of using AI for antibiotics. Where he focuses primarily on peptides, Collins works on small-molecule discovery. So does Stokes, at McMaster, whose models identify promising new molecules and predict whether they can be synthesized. “It’s only been a few years since folks have been using AI meaningfully in drug discovery,” says Collins. 

Even in that short time the tools have changed, says James Zou, a computer scientist at Stanford University, who has worked with Stokes and Collins. Researchers have moved from using predictive models to developing generative approaches. With a predictive approach, Zou says, researchers screen large libraries of candidates that are known to be promising. Generative approaches offer something else: the appeal of designing a new molecule from scratch. Last year, for example, de la Fuente’s team used one generative AI model to design a suite of synthetic peptides and another to assess them. The group tested two of the resulting compounds on mice infected with a drug-resistant strain of Acinetobacter baumannii, a germ that the World Health Organization has identified as a “critical priority” in research on antimicrobial resistance. Both successfully and safely treated the infection. 

But the field is still in the discovery phase. In his current work, de la Fuente is trying to get candidates closer to clinical testing. To that end, his team is developing an ambitious multimodal model called ApexOracle that’s designed to analyze a new pathogen, pinpoint its genetic weaknesses, match it to antimicrobial peptides that might work against it, and then predict how an antibiotic, built from those peptides, would fare in lab tests. It “converges understanding in chemistry, genomics, and language,” he says. It’s preliminary, he adds, but even if it doesn’t work perfectly, it will help steer the next generation of AI models toward the ultimate goal of resisting resistance. 

Using AI, he believes, human researchers now have a fighting chance at catching up to the giant threat before them. The technology has already saved decades of human research time. Now he wants it to save lives, too: “This is the world that we live in today, and it’s incredible.” 

Stephen Ornes is a science writer in Nashville, Tennessee.

Hackers made death threats against this security researcher. Big mistake.

The threats started in spring. 

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon. 

“Alison [sic] Nixon is gonna get necklaced with a tire filled with gasoline soon,” wrote Waifu/Judische (both handles carry offensive connotations). “Decerebration is my fav type of brain death, thats whats gonna happen to alison Nixon.” 

It wasn’t long before others piled on. Someone shared AI-generated nudes of Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested. For years she had lurked quietly in online chat channels or used pseudonyms to engage with perpetrators directly while piecing together clues they’d carelessly drop about themselves and their crimes. This had helped her bring to justice a number of cybercriminals—especially members of a loosely affiliated subculture of anarchic hackers who call themselves the Com.

But members of the Com aren’t just involved in hacking; some of them also engage in offline violence against researchers who track them. This includes bricking (throwing a brick through a victim’s window) and swatting (a dangerous type of hoax that involves reporting a false murder or hostage situation at someone’s home so SWAT teams will swarm it with guns drawn). Members of a Com offshoot known as 764 have been accused of even more violent acts—including animal torture, stabbings, and school shootings—or of inciting others in and outside the Com to commit these crimes.

Nixon started tracking members of the community more than a decade ago, when other researchers and people in law enforcement were largely ignoring them because they were young—many in their teens. Her early attention allowed her to develop strategies for unmasking them.

Ryan Brogan, a special agent with the FBI, says Nixon has helped him and colleagues identify and arrest more than two dozen members of the community since 2011, when he first began working with her, and that her skills in exposing them are unparalleled. “If you get on Allison’s and my radar, you’re going [down]. It’s just a matter of time,” he says. “No matter how much digital anonymity and tradecraft you try to apply, you’re done.”

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the Waifu/Judische accounts was suddenly threatening her. She had given media interviews about the Com—most recently on 60 Minutes—but not about her work unmasking members to get them arrested, so the hostility seemed to come out of the blue. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets. 

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. “Prior to them death-threatening me, I had no reason to pay attention to them,” she says. 

Com beginnings

Most people have never heard of the Com, but its influence and threat are growing.

It’s an online community comprising loosely affiliated groups, made up primarily of teens and twentysomethings in North America and English-speaking parts of Europe, who have become part of what some call a cybercrime youth movement. 

Over the last decade, its criminal activities have escalated from simple distributed denial-of-service (DDoS) attacks that disrupt websites to SIM-swapping hacks that hijack a victim’s phone service, as well as crypto theft, ransomware attacks, and corporate data theft. These crimes have affected AT&T, Microsoft, Uber, and others. Com members have also been involved in various forms of sextortion aimed at forcing victims to physically harm themselves or record themselves doing sexually explicit activities. The Com’s impact has also spread beyond the digital realm to kidnapping, beatings, and other violence. 

One longtime cybercrime researcher, who asked to remain anonymous because of his work, says the Com is as big a threat in the cyber realm as Russia and China—for one unusual reason.

“There’s only so far that China is willing to go; there’s only so far that Russia or North Korea is willing to go,” he says, referring to international laws and norms, and fears of retaliation, that prevent states from going all out in cyber operations. That doesn’t stop the anarchic Com, he says.

“It is a pretty significant threat, and people tend to … push it under the rug [because] it’s just a bunch of kids,” he says. “But look at the impact [they have].”

Brogan says the amount of damage they do in terms of monetary losses “can become staggering very quickly.”

There is no single site where Com members congregate; they spread across a number of web forums and Telegram and Discord channels. The group follows a long line of hacking and subculture communities that emerged online over the last two decades, gained notoriety, and then faded or vanished after prominent members were arrested or other factors caused their decline. They differed in motivation and activity, but all emerged from “the same primordial soup,” says Nixon. The Com’s roots can be traced to the Scene, which began as a community of various “warez” groups engaged in pirating computer games, music, and movies.

When Nixon began looking at the Scene, in 2011, its members were hijacking gaming accounts, launching DDoS attacks, and running booter services. (DDoS attacks overwhelm a server or computer with traffic from bot-controlled machines, preventing legitimate traffic from getting through; booters are tools that anyone can rent to launch a DDoS attack against a target of choice.) While they made some money, their primary goal was notoriety.

This changed around 2018. Cryptocurrency values were rising, and the Com—or the Community, as it sometimes called itself—emerged as a subgroup that ultimately took over the Scene. Members began to focus on financial gain—cryptocurrency theft, data theft, and extortion.

The pandemic two years later saw a surge in Com membership that Nixon attributes to social isolation and the forced movement of kids online for schooling. But she believes economic conditions and socialization problems have also driven its growth. Many Com members can’t get jobs because they lack skills or have behavioral issues, she says. A number who have been arrested have had troubled home lives and difficulty adapting to school, and some have shown signs of mental illness. The Com provides camaraderie, support, and an outlet for personal frustrations. Since 2018, it has also offered some a solution to their money problems.

Loose-knit cells have sprouted from the community—Star Fraud, ShinyHunters, Scattered Spider, Lapsus$—to collaborate on clusters of crime. They usually target high-profile crypto bros and tech giants and have made millions of dollars from theft and extortion, according to court records. 

But dominance, power, and bragging rights are still motivators, even in profit operations, says the cybercrime researcher, which is partly why members target “big whales.”

“There is financial gain,” he says, “but it’s also [sending a message that] I can reach out and touch the people that think they’re untouchable.” In fact, Nixon says, some members of the Com have overwhelming ego-driven motivations that end up conflicting with their financial motives.

“Often their financial schemes fall apart because of their ego, and that phenomenon is also what I’ve made my career on,” she says.

The hacker hunter emerges

Nixon has straight dark hair, wears wire-rimmed glasses, and has a slight build and bookish demeanor that, on first impression, could allow her to pass for a teen herself. She talks about her work in rapid cadences, like someone whose brain is filled with facts that are under pressure to get out, and she exudes a sense of urgency as she tries to make people understand the threat the Com poses. She doesn’t suppress her happiness when someone she’s been tracking gets arrested.

In 2011, when she first began investigating the communities from which the Com emerged, she was working the night shift in the security operations center of the security firm SecureWorks. The center responded to tickets and security alerts emanating from customer networks, but Nixon coveted a position on the company’s counter-threats team, which investigated and published threat-intelligence reports on mostly state-sponsored hacking groups from China and Russia. Without connections or experience, she had no path to investigative work. But Nixon is an intensely curious person, and this created its own path.

Allison Nixon is chief research officer at the cybersecurity investigations firm Unit 221B, where she tracks cybercriminals and helps bring them to justice.

Where the threat team focused on the impact hackers had on customer networks—how they broke in, what they stole—Nixon was more interested in their motivations and the personality traits that drove their actions. She assumed there must be online forums where criminal hackers congregated, so she googled “hacking forums” and landed on a site called Hack Forums.

“It was really stupid simple,” she says.

She was surprised to see members openly discussing their crimes there. She reached out to someone on the SecureWorks threat team to see if he was aware of the site, and he dismissed it as a place for “script kiddies”—a pejorative term for unskilled hackers.

This was a time when many cybersecurity pros were shifting their focus away from cybercrime to state-sponsored hacking operations, which were more sophisticated and getting a lot of attention. But Nixon likes to zig where others zag, and her colleague’s dismissiveness fueled her interest in the forums. Two other SecureWorks colleagues shared that interest, and the three studied the forums during downtime on their shifts. They focused on trying to identify the people running DDoS booters. 

What Nixon loved about the forums was how accessible they were to a beginner like herself. Threat-intelligence teams require privileged access to a victim’s network to investigate breaches. But Nixon could access everything she needed in the public forums, where the hackers seemed to think no one was watching. Because of this, they often made mistakes in operational security, or OPSEC—letting slip little biographical facts such as the city where they lived, a school they attended, or a place they used to work. These details revealed in their chats, combined with other information, could help expose the real identities behind their anonymous masks. 

“It was a shock to me that it was relatively easy to figure out who [they were],” she says. 

She wasn’t bothered by the immature boasting and petty fights that dominated the forums. “A lot of people don’t like to do this work of reading chat logs. I realize that this is a very uncommon thing. And maybe my brain is built a little weird that I’m willing to do this,” she says. “I have a special talent that I can wade through garbage and it doesn’t bother me.” 

Nixon soon realized that not all the members were script kiddies. Some exhibited real ingenuity and “powerful” skills, she says, but because they were applying these to frivolous purposes—hijacking gamer accounts instead of draining bank accounts—researchers and law enforcement were ignoring them. Nixon began tracking them, suspecting that they would eventually direct their skills at more significant targets—an intuition that proved to be correct. And when they did, she had already amassed a wealth of information about them. 

She continued her DDoS research for two years until a turning point in 2013, when the cybersecurity journalist Brian Krebs, who made a career tracking cybercriminals, got swatted. 

About a dozen people from the security community worked with Krebs to expose the perpetrator, and Nixon was invited to help. Krebs sent her pieces of the puzzle to investigate, and eventually the group identified the culprit (though it would take two years for him to be arrested). When she was invited to dinner with Krebs and the other investigators, she realized she’d found her people.

“It was an amazing moment for me,” she says. “I was like, wow, there’s all these like-minded people that just want to help and are doing it just for the love of the game, basically.”

Staying one step ahead

It was porn stars who provided Nixon with her next big research focus—one that underscored her skill at spotting Com actors and criminal trends in their nascent stages, before they emerged as major threats.

In 2018, someone was hijacking the social media accounts of certain adult-film stars and using those accounts to blast out crypto scams to their large follower bases. Nixon couldn’t figure out how the hackers had hijacked the social media profiles, but she promised to help the actors regain access to their accounts if they agreed to show her the private messages the hackers had sent or received during the time they controlled them. These messages led her to a forum where members were talking about how they stole the accounts. The hackers had tricked some of these actors into disclosing the mobile phone numbers of others. Then they used a technique called SIM swapping to reset passwords for social media accounts belonging to those other stars, locking them out. 

In SIM swapping, fraudsters get a victim’s phone number assigned to a SIM card and phone they control, so that calls and messages intended for the victim go to them instead. This includes one-time security codes that sites text to account holders to verify themselves when accessing their account or changing its password. In some of the cases involving the porn stars, the hackers had manipulated telecom workers into making the SIM swaps for what they thought were legitimate reasons, and in other cases they bribed the workers to make the change. The hackers were then able to alter the password on the actors’ social media accounts, lock out the owners, and use the accounts to advertise their crypto scams. 

SIM swapping is a powerful technique that can be used to hijack and drain entire cryptocurrency and bank accounts, so Nixon was surprised to see the fraudsters using it for relatively unprofitable schemes. But SIM swapping had rarely been used for financial fraud at that point, and like the earlier hackers Nixon had seen on Hack Forums, the ones hijacking porn star accounts didn’t seem to grasp the power of the technique they were using. Nixon suspected that this would change and SIM swapping would soon become a major problem, so she shifted her research focus accordingly. It didn’t take long for the fraudsters to pivot as well.

Nixon’s skill at looking ahead in this way has served her throughout her career. On multiple occasions a hacker or hacking group would catch her attention—for using a novel hacking approach in some minor operation, for example—and she’d begin tracking their online posts and chats in the belief that they’d eventually do something significant with that skill. 

They usually did. When they later grabbed headlines with a showy or impactful operation, these hackers would seem to others to have emerged from nowhere, sending researchers and law enforcement scrambling to understand who they were. But Nixon would already have a dossier compiled on them and, in some cases, had unmasked their real identity as well. Lizard Squad was an example of this. The group burst into the headlines in 2014 and 2015 with a series of high-profile DDoS campaigns, but Nixon and colleagues at the job where she worked at the time had already been watching its members as individuals for a while. So the FBI sought their assistance in identifying them.

“The thing about these young hackers is that they … keep going until they get arrested, but it takes years for them to get arrested,” she says. “So a huge aspect of my career is just sitting on this information that has not been actioned [yet].”

It was during the Lizard Squad years that Nixon began developing tools to scrape and record hacker communications online, though it would be years before she began using these concepts to scrape the Com chatrooms and forums. These channels held a wealth of data that might not seem useful during the nascent stage of a hacker’s career but could prove critical later, when law enforcement got around to investigating them; yet the contents were always at risk of being deleted by Com members or getting taken down by law enforcement when it seized websites and chat channels.

Over several years, she scraped and preserved whatever chatrooms she was investigating. But it wasn’t until early 2020, when she joined Unit 221B, that she got the chance to scrape the Telegram and Discord channels of the Com. She pulled all of this data together into a searchable platform that other researchers and law enforcement could use. The company hired two former hackers to help build scraping tools and infrastructure for this work; the result is eWitness, a community-driven, invitation-only platform. It was initially seeded only with data Nixon had collected after she arrived at Unit 221B, but has since been augmented with data that other users of the platform have scraped from Com social spaces as well, some of which doesn’t exist in public forums anymore.

Brogan, of the FBI, says it’s an incredibly valuable tool, made more so by Nixon’s own contributions. Other security firms scrape online criminal spaces as well, but they seldom share the content with outsiders, and Brogan says Nixon’s work is unique because she engages with the actors in chat spaces to draw out information from them that “would not be otherwise normally available.” 

The preservation project she started when she got to Unit 221B could not have been better timed, because it coincided with the pandemic, the surge in new Com membership, and the emergence of two disturbing Com offshoots, CVLT and 764. She was able to capture their chats as these groups first emerged; after law enforcement arrested leaders of the groups and took control of the servers where their chats were posted, this material went offline.

CVLT—pronounced “cult”—was reportedly founded around 2019 with a focus on sextortion and child sexual abuse material. 764 emerged from CVLT and was spearheaded by a 15-year-old in Texas named Bradley Cadenhead, who named it after the first digits of his zip code. Its focus was extremism and violence. 

In 2021, because of what she observed in these groups, Nixon turned her attention to sextortion among Com members.

The type of sextortion they engaged in has its roots in activity that began a decade ago as “fan signing.” Hackers would use the threat of doxxing to coerce someone, usually a young female, into writing the hacker’s handle on a piece of paper. The hacker would use a photo of it as an avatar on his online accounts—a kind of trophy. Eventually some began blackmailing victims into writing the hacker’s handle on their face, breasts, or genitals. With CVLT, this escalated even further; targets were blackmailed into carving a Com member’s name into their skin or engaging in sexually explicit acts while recording or livestreaming themselves.

During the pandemic a surprising number of SIM swappers crossed into child sexual abuse material and sadistic sextortion, according to Nixon. She hates tracking this gruesome activity, but she saw an opportunity to exploit it for good. She had long been frustrated at how leniently judges treated financial fraudsters because of their crimes’ seemingly nonviolent nature. But she saw a chance to get harsher sentences for them if she could tie them to their sextortion and began to focus on these crimes. 

At this point, Waifu still wasn’t on her radar. But that was about to change.

Endgame

Nixon landed in Waifu’s crosshairs after he and fellow members of the Com were involved in a large hack involving AT&T customer call records in April 2024.

Waifu’s group gained access to dozens of cloud accounts with Snowflake, a company that provides online data storage for customers. One of those customers had more than 50 billion call logs of AT&T wireless subscribers stored in its Snowflake account. 

Among the subscriber records were call logs for FBI agents who were AT&T customers. Nixon and other researchers believe the hackers may have been able to identify the phone numbers of agents through other means. Then they may have used a reverse-lookup program to identify the owners of phone numbers that the agents called or that called them and found Nixon’s number among them. This is when they began harassing her.

But then they got reckless. They allegedly extorted nearly $400,000 from AT&T in exchange for promising to delete the call records they’d stolen. Then they tried to re-extort the telecom, threatening on social media to leak the records they claimed to have deleted if it didn’t pay more. They tagged the FBI in the post.

“It’s like they were begging to be investigated,” says Nixon.

The Snowflake breaches and AT&T records theft were grabbing headlines at the time, but Nixon had no idea her number was in the stolen logs or that Waifu/Judische was a prime suspect in the breaches. So she was perplexed when he started taunting and threatening her online.

Over several weeks in May and June, a pattern developed. Waifu or one of his associates would post a threat against her and then post a message online inviting her to talk. She assumes now that they believed she was helping law enforcement investigate the Snowflake breaches and hoped to draw her into a dialogue to extract information from her about what authorities knew. But Nixon wasn’t helping the FBI investigate them yet. It was only after she began looking at Waifu for the threats that she became aware of his suspected role in the Snowflake hack.

It wasn’t the first time she had studied him, though. Waifu had come to her attention in 2019 when he bragged about framing another Com member for a hoax bomb threat and later talked about his involvement in SIM-swapping operations. He made an impression on her. He clearly had technical skills, but Nixon says he also often appeared immature, impulsive, and emotionally unstable, and he was desperate for attention in his interactions with other members. He bragged about not needing sleep and using Adderall to hack through the night. He was also a bit reckless about protecting personal details. He wrote in private chats to another researcher that he would never get caught because he was good at OPSEC, but he also told the researcher that he lived in Canada—which turned out to be true.

Nixon’s process for unmasking Waifu followed a general recipe she used to unmask Com members: She’d draw a large investigative circle around a target and all the personas that communicated with that person online, and then study their interactions to narrow the circle to the people with the most significant connections to the target. Some of the best leads came from a target’s enemies; she could glean a lot of information about their identity, personality, and activities from what the people they fought with online said about them.

“The enemies and the ex-girlfriends, generally speaking, are the best [for gathering intelligence on a suspect],” she says. “I love them.”

While she was doing this, Waifu and his group were reaching out to other security researchers, trying to glean information about Nixon and what she might be investigating. They also attempted to plant false clues with the researchers by dropping the names of other cybercriminals in Canada who could plausibly be Waifu. Nixon had never seen cybercriminals engage in counterintelligence tactics like this.

Amid this subterfuge and confusion, Nixon and another researcher working with her did a lot of consulting and cross-checking with other researchers about the clues they were gathering to ensure they had the right name before they gave it to the FBI.

By July she and the researcher were convinced they had their guy: Connor Riley Moucka, a 25-year-old high school dropout living with his grandfather in Ontario. On October 30, Royal Canadian Mounted Police converged on Moucka’s home and arrested him.

According to an affidavit filed in Canadian court, a plainclothes Canadian police officer visited Moucka’s house under some pretense on the afternoon of October 21, nine days before the arrest, to secretly capture a photo of him and compare it with an image US authorities had provided. The officer knocked and rang the bell; Moucka opened the door looking disheveled and told the visitor: “You woke me up, sir.” He told the officer his name was Alex; Moucka sometimes used the alias Alexander Antonin Moucka. Satisfied that the person who answered the door was the person the US was seeking, the officer left. Waifu’s online rants against Nixon escalated at this point, as did his attempts at misdirection. She believes the visit to his door spooked him.

Nixon won’t say exactly how they unmasked Moucka—only that he made a mistake.

“I don’t want to train these people in how to not get caught [by revealing his error],” she says.

The Canadian affidavit against Moucka reveals a number of other violent posts he’s alleged to have made online beyond the threats he made against her. Some involve musings about becoming a serial killer or mass-mailing sodium nitrate pills to Black people in Michigan and Ohio; in another, his online persona talks about obtaining firearms to “kill Canadians” and commit “suicide by cop.” 

Prosecutors, who list Moucka’s online aliases as including Waifu, Judische, and two more in the indictment, say he and others extorted at least $2.5 million from at least three victims whose data they stole from Snowflake accounts. Moucka has been charged with nearly two dozen counts, including conspiracy, unauthorized access to computers, extortion, and wire fraud. He has pleaded not guilty and was extradited to the US last July. His trial is scheduled for October this year, though hacking cases usually end in plea agreements rather than going to trial. 

It took months for authorities to arrest Moucka after Nixon and her colleague shared their findings, but an alleged associate of his in the Snowflake conspiracy, a US Army soldier named Cameron John Wagenius (Kiberphant0m online), was arrested more quickly. 

On November 10, 2024, Nixon and her team found a mistake Wagenius made that helped identify him, and on December 20 he was arrested. Wagenius has already pleaded guilty to two charges around the sale or attempted sale of confidential phone records and will be sentenced this March.

These days Nixon continues to investigate sextortion among Com members. But she says that remaining members of Waifu’s group still taunt and threaten her.

“They are continuing to persist in their nonsense, and they are getting taken out one by one,” she says. “And I’m just going to keep doing that until there’s no one left on that side.” 

Kim Zetter is a journalist who covers cybersecurity and national security. She is the author of Countdown to Zero Day.

RFK Jr. follows a carnivore diet. That doesn’t mean you should.

Americans have a new set of diet guidelines. Robert F. Kennedy Jr. has taken an old-fashioned food pyramid, turned it upside down, and plonked a steak and a stick of butter in prime positions.

Kennedy and his Make America Healthy Again mates have long been extolling the virtues of meat and whole-fat dairy, so it wasn’t too surprising to see those foods recommended alongside vegetables and whole grains (despite the well-established fact that too much saturated fat can be extremely bad for you).

Some influencers have taken the meat trend to extremes, following a “carnivore diet.” “The best thing you could do is eliminate out everything except fatty meat and lard,” Anthony Chaffee, an MD with almost 400,000 followers, said in an Instagram post.

And I almost choked on my broccoli when, while scrolling LinkedIn, I came across an interview with another doctor declaring that “there is zero scientific evidence to say that vegetables are required in the human diet.” That doctor, who described himself as “90% carnivore,” went on to say that all he’d eaten the previous day was a kilo of beef, and that vegetables have “anti-nutrients,” whatever they might be.

You don’t have to spend much time on social media to come across claims like this. The “traditionalist” influencer, author, and psychologist Jordan Peterson was promoting a meat-only diet as far back as 2018. A recent review of research into nutrition misinformation on social media found that most diet information is shared on Instagram and YouTube, and that a lot of it is nonsense. So much so that the authors describe it as a “growing public health concern.”

What’s new is that some of this misinformation comes from the people who now lead America’s federal health agencies. In January Kennedy, who leads the Department of Health and Human Services, told a USA Today reporter that he was on a carnivore diet. “I only eat meat or fermented foods,” he said. He went on to say that the diet had helped him lose “40% of [his] visceral fat within a month.”

“Government needs to stop spreading misinformation that natural and saturated fats are bad for you,” Food and Drug Administration commissioner Martin Makary argued in a recent podcast interview. The principles of “whole foods and clean meats” are “biblical,” he said. The interviewer said that Makary’s warnings about pesticides made him want to “avoid all salads and completely miss the organic section in the grocery store.”

For the record: There’s plenty of evidence that a diet high in saturated fat can increase the risk of heart disease. That’s not government misinformation. 

The carnivore doctors’ suggestion to avoid vegetables is wrong too, says Gabby Headrick, associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security. There’s no evidence to suggest that a meat-only diet is good for you. “All of the nutrition science to date strongly identifies a wide array of vegetables … as being very health-promoting,” she adds.

To be fair to the influencers out there, diet is a tricky thing to study. Much of the research into nutrition relies on volunteers to keep detailed and honest food diaries—something that people are generally quite bad at. And the way our bodies respond to foods might be influenced by our genetics, our microbiomes, the way we prepare or consume those foods, and who knows what else.

Still, it will come as a surprise to no one that there is plenty of what the above study calls “low-quality content” floating around on social media. So it’s worth arming ourselves with a good dose of skepticism, especially when we come across posts that mention “miracle foods” or extreme, limited diets.

The truth is that most food is neither good nor bad when eaten in moderation. Diet trends come and go, and for most people, the best reasonable advice is simply to eat a balanced diet low in sugar, salt, and saturated fat. You know—the basics. No matter what that weird upside-down food pyramid implies. To the carnivore influencers, I say: get your misinformation off my broccoli.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

US deputy health secretary: Vaccine guidelines are still subject to change

• Vaccine schedule may not be final: O’Neill defended the CDC’s decision to cut recommended childhood vaccines but said the guidelines remain “subject to new data coming in, new ways of thinking about things,” with new safety studies underway.
• A self-described Vitalist is running US health agencies: O’Neill said he agrees with all five tenets of Vitalism—a movement that calls death “humanity’s core problem”—and wants to make reversing aging damage a federal health priority.
• ARPA-H is betting big on organ replacement and brain repair: The agency is directing $170 million toward growing new organs from patients’ own cells and exploring ways to replace aging brain tissue—a procedure O’Neill said he’d personally be “open to” trying.
• Expect more dietary guidance—and more controversy: O’Neill endorsed eating “plenty of protein and saturated fat,” echoing new federal dietary guidance that nutrition scientists have criticized for ignoring decades of research on saturated fat’s health risks.

Following publication of this story, Politico reported that Jim O’Neill would be leaving his current roles within the Department of Health and Human Services.

Over the past year, Jim O’Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country’s federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US’s deeply controversial new vaccine schedule.

He’s also a longevity enthusiast. In an exclusive interview with MIT Technology Review earlier this month, O’Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. At the same time, he defended reducing the number of broadly recommended childhood vaccines, a move that has been widely criticized by experts in medicine and public health. 

In MIT Technology Review’s profile of O’Neill last year, people working in health policy and consumer advocacy said they found his libertarian views on drug regulation “worrisome” and “antithetical to basic public health.” 

He was later named acting director of the Centers for Disease Control and Prevention, putting him in charge of the nation’s public health agency.

But fellow longevity enthusiasts said they hope O’Neill will bring attention and funding to their cause: the search for treatments that might slow, prevent, or even reverse human aging. Here are some takeaways from the interview. 

Vaccine recommendations could change further

Last month, the US cut the number of vaccines recommended for children. The CDC no longer recommends vaccinations against flu, rotavirus, hepatitis A, or meningococcal disease for all children. The move was widely panned by medical groups and public health experts. Many worry it will become more difficult for children to access those vaccines. The majority of states have rejected the recommendations.

In the confirmation hearing for his role as deputy secretary of health and human services, which took place in May last year, O’Neill said he supported the CDC’s vaccine schedule. MIT Technology Review asked him if that was the case and, if so, what made him change his mind. “Researching and examining and reviewing safety data and efficacy data about vaccines is one of CDC’s obligations,” he said. “CDC gives important advice about vaccines and should always be open to new data and new ways of looking at data.”

At the beginning of December, O’Neill said, President Donald Trump “asked me to look at what other countries were doing in terms of their vaccine schedules.” He said he spoke to health ministries of other countries and consulted with scientists at the CDC and FDA. “It was suggested to me by lots of the operating divisions that the US focus its recommendations on consensus vaccines of other developed nations—in other words, the most important vaccines that are most often part of the core recommendations of other countries,” he said.

“As a result of that, we did an update to the vaccine schedule to focus on a set of vaccines that are most important for all children.” 

But some experts in public health have said that countries like Denmark and Japan, whose vaccine schedules the new US one was supposedly modeled on, are not really comparable to the US. When asked about these criticisms, O’Neill replied, “A lot of parents feel that … more than 70 vaccine doses given to young children sounds like a really high number, and some of them ask which ones are the most important. I think we helped answer that question in a way that didn’t remove anyone’s access.”

A few weeks after the vaccine recommendations were changed, Kirk Milhoan, who leads the CDC’s Advisory Committee on Immunization Practices, said that vaccinations for measles and polio—which are currently required for entry to public schools—should be optional. (Mehmet Oz, the Center for Medicare and Medicaid Services director, has more recently urged people to “take the [measles] vaccine.”)

“CDC still recommends that all children are vaccinated against diphtheria, tetanus, whooping cough, Haemophilus influenzae type b (Hib), Pneumococcal conjugate, polio, measles, mumps, rubella, and human papillomavirus (HPV), for which there is international consensus, as well as varicella (chickenpox),” he said when asked for his thoughts on this comment.

He also said that current vaccine guidelines are “still subject to new data coming in, new ways of thinking about things.” “CDC, FDA, and NIH are initiating new studies of the safety of immunizations,” he added. “We will continue to ask the Advisory Committee on Immunization Practices to review evidence and make updated recommendations with rigorous science and transparency.”

More support for longevity—but not all science

O’Neill said he wants longevity to become a priority for US health agencies. His ultimate goal, he said, is to “make the damage of aging something that’s under medical control.” It’s “the same way of thinking” as the broader Make America Healthy Again approach, he said: “‘Again’ implies restoration of health, which is what longevity research and therapy is all about.” 

O’Neill said his interest in longevity was ignited by his friend Peter Thiel, the billionaire tech entrepreneur, around 2008 to 2009. It was right around the time O’Neill was finishing up a previous role in HHS, under the Bush administration. O’Neill said Thiel told him he “should really start looking into longevity and the idea that aging damage could be reversible.” “I just got more and more excited about that idea,” he said.

When asked if he’s heard of Vitalism, a philosophical movement for “hardcore” longevity enthusiasts who, broadly, believe that death is wrong, O’Neill replied: “Yes.” 

The Vitalist declaration lists five core statements, including “Death is humanity’s core problem,” “Obviating aging is scientifically plausible,” and “I will carry the message against aging and death.” O’Neill said he agrees with all of them. “I suppose I am [a Vitalist],” he said with a smile, although he’s not a paying member of the foundation behind it.

As deputy secretary of the Department of Health and Human Services, O’Neill assumes a level of responsibility for huge and influential science and health agencies, including the National Institutes of Health (the world’s largest public funder of biomedical research) and the Food and Drug Administration (which oversees drug regulation and is globally influential) as well as the CDC.

Today, he said, he sees support for longevity science from his colleagues within HHS. “If I could describe one common theme to the senior leadership at HHS, obviously it’s to make America healthy again, and reversing aging damage is all about making people healthy again,” he said. “We are refocusing HHS on addressing and reversing chronic disease, and chronic diseases are what drive aging, broadly.”

Over the last year, thousands of NIH grants worth over $2 billion were frozen or terminated, including funds for research on cancer biology, health disparities, neuroscience, and much more. When asked whether any of that funding will be restored, he did not directly address the question, instead noting: “You’ll see a lot of funding more focused on important priorities that actually improve people’s health.”

Watch ARPA-H for news on organ replacements and more

He promised we’ll hear more from ARPA-H, the three-year-old federal agency dedicated to achieving breakthroughs in medical science and biotechnology. It was established with the official goal of promoting “high-risk, high-reward innovation for the development and translation of transformative health technologies.”

O’Neill said that “ARPA-H exists to make the impossible possible in health and medicine.” The agency has a new director—Alicia Jackson, who formerly founded and led a company focused on women’s health and longevity, took on the role in October last year.

O’Neill said he helped recruit Jackson, and that she was hired in part because of her interest in longevity, which will now become a major focus of the agency. He said he meets with her regularly, as well as with Andrew Brack and Jean Hébert, two other longevity supporters who lead departments at ARPA-H. Brack’s program focuses on finding biological markers of aging. Hébert’s aim is to find a way to replace aging brain tissue, bit by bit.

O’Neill is especially excited by that one, he said. “I would try it … Not today, but … if progress goes in a broadly good direction, I would be open to it. We’re hoping to see significant results in the next few years.”

He’s also enthused by the idea of creating all-new organs for transplantation. “Someday we want to be able to grow new organs, ideally from the patients’ own cells,” O’Neill said. An ARPA-H program will receive $170 million over five years to that end, he added. “I’m very excited about the potential of ARPA-H and Alicia and Jean and Andrew to really push things forward.”

Longevity lobbyists have a friendly ear

O’Neill said he also regularly talks to the team at the lobbying group Alliance for Longevity Initiatives. The organization, led by Dylan Livingston, played an instrumental role in changing state law in Montana to make experimental therapies more accessible. O’Neill said he hasn’t formally worked with them but thinks that “they’re doing really good work on raising awareness, including on Capitol Hill.”

Livingston has told me that A4LI’s main goals center around increasing support for aging research (possibly via the creation of a new NIH institute entirely dedicated to the subject) and changing laws to make it easier and cheaper to develop and access potential anti-aging therapies.

O’Neill gave the impression that the first goal might be a little overambitious—the number of institutes is up to Congress, he said. “I would like to get really all of the institutes at NIH to think more carefully about how many chronic diseases are usefully thought of as pathologies of aging damage,” he said. There’ll be more federal funding for that research, he said, although he won’t say more for now.

Some members of the longevity community have more radical ideas when it comes to regulation: they want to create their own jurisdictions designed to fast-track the development of longevity drugs and potentially encourage biohacking and self-experimentation. 

It’s a concept that O’Neill has expressed support for in the past. He has posted on X about his support for limiting the role of government, and in support of building “freedom cities”—a similar concept that involves creating new cities on federal land. 

Another longevity enthusiast who supports the concept is Niklas Anzinger, a German tech entrepreneur who is now based in Próspera, a private city within a Honduran “special economic zone,” where residents can make their own suggestions for medical regulations. Anzinger also helped draft Montana’s state law on accessing experimental therapies. O’Neill knows Anzinger and said he talks to him “once or twice a year.”

O’Neill has also supported the idea of seasteading—building new “startup countries” at sea. He served on the board of directors of the Seasteading Institute until March 2024.

In 2009, O’Neill told an audience at a Seasteading Institute conference that “the healthiest societies in 2030 will most likely be on the sea.” When asked if he still thinks that’s the case, he said: “It’s not quite 2030, so I think it’s too soon to say … What I would say now is: the healthiest societies are likely to be the ones that encourage innovation the most.”

We might expect more nutrition advice

When it comes to his own personal ambitions for longevity, O’Neill said, he takes a simple approach that involves minimizing sugar and ultraprocessed food, exercising and sleeping well, and supplementing with vitamin D. He also said he tries to “eat a diet that has plenty of protein and saturated fat,” echoing the new dietary guidance issued by the US Departments of Health and Human Services and Agriculture. That guidance has been criticized by nutrition scientists, who point out that it ignores decades of research into the harms of a diet high in saturated fat.

We can expect to see more nutrition-related updates from HHS, said O’Neill: “We’re doing more research, more randomized controlled trials on nutrition. Nutrition is still not a scientifically solved problem.” Saturated fats are of particular interest, he said. He and his colleagues want to identify “the healthiest fats.”

“Stay tuned.”

The myth of the high-tech heist

Making a movie is a lot like pulling off a heist. That’s what Steven Soderbergh—director of the Ocean’s franchise, among other heist-y classics—said a few years ago. You come up with a creative angle, put together a team of specialists, figure out how to beat the technological challenges, rehearse, move with Swiss-watch precision, and—if you do it right—redistribute some wealth. That could describe either the plot or the making of Ocean’s Eleven.

But conversely, pulling off a heist isn’t much like the movies. Surveillance cameras, computer-controlled alarms, knockout gas, and lasers hardly ever feature in big-ticket crime. In reality, technical countermeasures are rarely a problem, and high-tech gadgets are rarely a solution. The main barrier to entry is usually a literal barrier to entry, like a door. Thieves’ most common move is to collude with, trick, or threaten an insider. Last year a heist cost the Louvre €88 million worth of antique jewelry, and the most sophisticated technology in play was an angle grinder.

The low-tech Louvre maneuvers were in keeping with what heist research long ago concluded. In 2014 US nuclear weapons researchers at Sandia National Laboratories took a detour into this demimonde, producing a 100-page report called “The Perfect Heist: Recipes from Around the World.” The scientists were worried someone might try to steal a nuke from the US arsenal, and so they compiled information on 23 high-value robberies from 1972 to 2012 into a “Heist Methods and Characteristics Database,” a critical mass of knowledge on what worked. Thieves, they found, dedicated huge amounts of money and time to planning and practice runs—sometimes more than 100. They’d use brute force, tunneling through sewers for months (Société Générale bank heist, Nice, France, 1976), or guile, donning police costumes to fool guards (Gardner Museum, Boston, 1990). But nobody was using, say, electromagnetic pulse generators to shut down the Las Vegas electrical grid. The most successful robbers got to the valuable stuff unseen and got out fast.

French police officers stand next to a ladder used by robbers to enter the Louvre Museum
Last year a heist cost the Louvre €88 million worth of antique jewelry, and the most sophisticated technology in play was an angle grinder.
DIMITAR DILKOFF / AFP VIA GETTY IMAGES

Advance the time frame, and the situation looks much the same. Last year, Spanish researchers looking at art crimes from 1990 to 2022 found that the least technical methods are still the most successful. “High-tech technology doesn’t work so well,” says Erin L. Thompson, an art historian at John Jay College of Criminal Justice who studies art crime. Speed and practice trump complicated systems and alarms; even that Louvre robbery was, at heart, just a minutes-long smash-and-grab.

An emphasis on speed doesn’t mean heists don’t require skill—panache, even. As the old saying goes, amateurs talk strategy; professionals study logistics. Even without gadgets, heists and heist movies still revel in an engineer’s mindset. “Heist movies absolutely celebrate deep-dive nerdery—‘I’m going to know everything I can about the power grid, about this kind of stone and drill, about Chicago at night,’” says Anna Kornbluh, a professor of English at the University of Illinois at Chicago. She published a paper last October on the ways heist movies reflect an Old Hollywood approach to collective art-making, while shows about new grift, like those detailing the rise and fall of WeWork or the con artist Anna Delvey, reflect the more lone-wolf, disrupt-and-grow mindset of the streaming era. 

Her work might help explain why law-abiding citizens might cheer for the kinds of guys who’d steal a crown from the Louvre, or $100,000 worth of escargot from a farm in Champagne (as happened just a few weeks later). Heists, says Kornbluh, are anti-oligarch praxis. “Everybody wants to know how to be in a competent collective. Everybody wants there to be better logistics,” she says. “We need a better state. We need a better society. We need a better world.” Those are shared values—and as another old saying tells us, where there is value, there is crime.

ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago.

The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease (MND), it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe.

Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again.

“Sadly, I have lost the ability to sing and play my instruments,” Darling said on stage at the event, which took place in London on Wednesday, using his voice clone. “Despite this, most of my time these days is spent still continuing to compose and produce my music. Doing so feels more important than ever to me now.”

Losing a voice

Darling says he’s been a musician and a composer since he was around 14 years old. “I learned to play bass guitar, acoustic guitar, piano, melodica, mandolin, and tenor banjo,” he said at the event. “My biggest love, though, was singing.”

Darling met bandmate Nick Cocking over 10 years ago, while still a university student, says Cocking. Darling joined Cocking’s Irish folk outfit, the Ceili House Band, shortly afterwards, and their first gig together was in April 2014. Darling, who joined the band as a singer and guitarist, “elevated the musicianship of the band,” says Cocking.

The four bandmates pose with their instruments.
Patrick Darling (second from left) with his former bandmates, including Nick Cocking (far right).
COURTESY OF NICK COCKING

But a few years ago, Cocking and his other bandmates started noticing changes in Darling. He became clumsy, says Cocking. He recalls one night when the band had to walk across the city of Cardiff in the rain: “He just kept slipping and falling, tripping on paving slabs and things like that.” 

He didn’t think too much of it at the time, but Darling’s symptoms continued to worsen. The disease affected his legs first, and in August 2023, he started needing to sit during performances. Then he started to lose the use of his hands. “Eventually he couldn’t play the guitar or the banjo anymore,” says Cocking.

By April 2024, Darling was struggling to talk and breathe at the same time, says Cocking. For that performance, the band carried Darling on stage. “He called me the day after and said he couldn’t do it anymore,” Cocking says, his voice breaking. “By June 2024, it was done.” It was the last time the band played together.

Re-creating a voice

Darling was put in touch with a speech therapist, who raised the possibility of “banking” his voice. People who are losing the ability to speak can opt to record themselves speaking and use those recordings to create speech sounds that can then be activated with typed text, whether by hand or perhaps using a device controlled by eye movements.

Some users have found these tools to sound robotic. But Darling had another issue. “By that stage, my voice had already changed,” he said at the event. “It felt like we were saving the wrong voice.”

Then another speech therapist introduced him to a different technology. Richard Cave is a speech and language therapist and a researcher at University College London. He is also a consultant for ElevenLabs, an AI company that develops agents and audio, speech, video, and music tools. One of these tools can create “voice clones”—realistic mimics of real voices that can be generated from minutes, or even seconds, of a person’s recorded voice.

Last year, ElevenLabs launched an impact program with a promise to provide free licenses to these tools for people who have lost their voices to ALS or other diseases, like head and neck cancer or stroke. 

The tool is already helping some of those users. “We’re not really improving how quickly they’re able to communicate, or all of the difficulties that individuals with MND are going through physically, with eating and breathing,” says Gabi Leibowitz, a speech therapist who leads the program. “But what we are doing is giving them a way … to create again, to thrive.” Users are able to stay in their jobs longer and “continue to do the things that make them feel like human beings,” she says.

Cave worked with Darling to use the tool to re-create his lost speaking voice from older recordings.

“The first time I heard the voice, I thought it was amazing,” Darling said at the event, using the voice clone. “It sounded exactly like I had before, and you literally wouldn’t be able to tell the difference,” he said. “I will not say what the first word I made my new voice say, but I can tell you that it began with ‘f’ and ended in ‘k.’”

Patrick and bandmates with their instruments prior to his MND diagnosis

COURTESY OF PATRICK DARLING

Re-creating his singing voice wasn’t as easy. The tool typically requires around 10 minutes of clear audio to generate a clone. “I had no high-quality recordings of myself singing,” Darling said. “We had to use audio from videos on people’s phones, shot in noisy pubs, and a couple of recordings of me singing in my kitchen.” Still, those snippets were enough to create a “synthetic version of [Darling’s] singing voice,” says Cave.

In the recordings, Darling sounded a little raspy and “was a bit off” on some of the notes, says Cave. The voice clone has the same qualities. It doesn’t sound perfect, Cave says—it sounds human.

“The ElevenLabs voice that we’ve created is wonderful,” Darling said at the event. “It definitely sounds like me—[it] just kind of feels like a different version of me.”

ElevenLabs has also developed an AI music generator called Eleven Music. The tool allows users to compose tracks, using text prompts to choose the musical style. Several well-known artists have also partnered with the company to license AI clones of their voices, including the actor Michael Caine, whose voice clone is being used to narrate an upcoming ElevenLabs documentary. Last month, the company released an album of 11 tracks created using the tool. “The Liza Minnelli track is really a banger,” says Cave.

Eleven Music can generate a song in a minute, but Darling and Cave spent around six weeks fine-tuning Darling’s song. Using text prompts, any user can “create music and add lyrics in any style [they like],” says Cave. Darling likes Irish folk, but Cave has also worked with a man in Colombia who is creating Colombian folk music. (The ElevenLabs tool is currently available in 74 languages.)

Back on stage

Last month, Cocking got a call from Cave, who sent him Darling’s completed track. “I heard the first two or three words he sang, and I had to turn it off,” he says. “I was just in bits, in tears. It took me a good half a dozen times to make it to the end of the track.”

Darling and Cave were making plans to perform the track live at the ElevenLabs summit in London on Wednesday, February 11. So Cocking and bandmate Hari Ma each arranged accompanying parts to play on the mandolin and fiddle. They had a couple of weeks to rehearse before they joined Darling on stage, two years after their last performance together.

“I wheeled him out on stage, and neither of us could believe it was happening,” says Cave. “He was thrilled.” The track played while Darling remained on stage, and Cocking and Ma accompanied it live on their instruments.

Cocking and Cave say Darling plans to continue to use the tools to make music. Cocking says he hopes to perform with Darling again but acknowledges that, given the nature of ALS, it is difficult to make long-term plans.

“It’s so bittersweet,” says Cocking. “But getting up on stage and seeing Patrick there filled me with absolute joy. I know Patrick really enjoyed it as well. We’ve been talking about it … He was really, really proud.”

What’s next for Chinese open-source AI

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost. 

Just last week the Chinese firm Moonshot AI released its latest open-weight model, Kimi K2.5, which came close to top proprietary systems such as Anthropic’s Claude Opus on some early benchmarks. The difference: K2.5 is roughly one-seventh of Opus’s price.

On Hugging Face, Alibaba’s Qwen family—after ranking as the most downloaded model series in 2025 and 2026—has overtaken Meta’s Llama models in cumulative downloads. And a recent MIT study found that Chinese open-source models have surpassed US models in total downloads. For developers and builders worldwide, access to near-frontier AI capabilities has never been this broad or this affordable.

But these models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them. 

If these open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. 

Here’s what may come next.

China’s commitment to open source will continue

When DeepSeek launched R1, much of the initial shock centered on its origin. Suddenly, a Chinese team had released a reasoning model that could stand alongside the best systems from US labs. But the long tail of DeepSeek’s impact had less to do with nationality than with distribution. R1 was released as an open-weight model under a permissive MIT license, allowing anyone to download, inspect, and deploy it. On top of that, DeepSeek also published a paper detailing its training process and techniques. For developers who access models via an API, DeepSeek also undercut competitors on price, offering access at a fraction of the cost of OpenAI’s o1, the leading proprietary reasoning model at the time.

Within days of its release, DeepSeek replaced ChatGPT as the most downloaded free app in the US App Store. The moment spilled beyond developer circles into financial markets, triggering a sharp sell-off in US tech stocks that briefly erased roughly $1 trillion in market value. Almost overnight, DeepSeek went from a little-known spin-off team backed by a quantitative hedge fund to the most visible symbol of China’s push for open-source AI.

China’s decision to lean into open source isn’t surprising. It has the world’s second-largest concentration of AI talent after the US, plus a vast, well-resourced tech industry. After ChatGPT broke into the mainstream, China’s AI sector went through a reckoning—and emerged determined to catch up. Pursuing an open-source strategy was seen as the fastest way to close the gap by rallying developers, spreading adoption, and setting standards.

DeepSeek’s success injected confidence into an industry long used to following global standards rather than setting them. “Thirty years ago, no Chinese person would believe they could be at the center of global innovation,” says Alex Chenglin Wu, CEO and founder of Atoms, an AI agent company and prominent contributor to China’s open-source ecosystem. “DeepSeek shows that with solid technical talent, a supportive environment, and the right organizational culture, it’s possible to do truly world-class work.”

DeepSeek’s breakout moment wasn’t China’s first open-source success. Alibaba’s Qwen Lab had been releasing open-weight models for years. By September 2024, well before DeepSeek’s V3 launch, Alibaba said cumulative global downloads of its Qwen models had exceeded 600 million. On Hugging Face, Qwen accounted for more than 30% of all model downloads in 2024. Other institutions, including the Beijing Academy of Artificial Intelligence and the AI firm Baichuan, were also releasing open models as early as 2023. 

But since the success of DeepSeek, the field has widened rapidly. Companies such as Z.ai (formerly Zhipu), MiniMax, Tencent, and a growing number of smaller labs have released models that are competitive on reasoning, coding, and agent-style tasks. The growing number of capable models has sped up progress. Capabilities that once took months to make it to the open-source world now emerge within weeks, even days.

“Chinese AI firms have seen real gains from the open-source playbook,” says Liu Zhiyuan, a professor of computer science at Tsinghua University and chief scientist at the AI startup ModelBest. “By releasing strong research, they build reputation and gain free publicity.”

Beyond commercial incentives, Liu says, open source has taken on cultural and strategic weight. “In the Chinese programmer community, open source has become politically correct,” he says, framing it as a response to US dominance in proprietary AI systems.

That shift is also reflected at the institutional level. Universities including Tsinghua have begun encouraging AI development and open-source contributions, while policymakers have moved to formalize those incentives. In August, China’s State Council released a draft policy encouraging universities to reward open-source work, proposing that students’ contributions on platforms such as GitHub or Gitee could eventually be counted toward academic credit.

With growing momentum and a reinforcing feedback loop, China’s push for open-source models is likely to continue in the near term, though its long-term sustainability still hinges on financial results, says Tiezhen Wang, who helps lead work on global AI at Hugging Face. In January, the model labs Z.ai and MiniMax went public in Hong Kong. “Right now, the focus is on making the cake bigger,” says Wang. “The next challenge is figuring out how each company secures its share.”

The next wave of models will be narrower—and better

Chinese open-source models are leading not just in download volume but also in variety. Alibaba’s Qwen has become one of the most diversified open model families in circulation, offering a wide range of variants optimized for different uses. The lineup ranges from lightweight models that can run on a single laptop to large, multi-hundred-billion-parameter systems designed for data-center deployment. It also includes many task-optimized variants: “instruct” models tuned to follow instructions and “code” variants that specialize in coding.

Although this strategy isn’t unique to Chinese labs, Qwen was the first open model family to roll out so many high-quality options that it started to feel like a full product line—one that’s free to use.

The open-weight nature of these releases also makes it easy for others to adapt them through techniques like fine-tuning and distillation, which means training a smaller model to mimic a larger one. According to ATOM (American Truly Open Models), a project by the AI researcher Nathan Lambert, by August 4, 2025, model variations derived from Qwen made up “more than 40%” of new Hugging Face language-model derivatives, while Llama had fallen to about 15%. In other words, Qwen has become the default base model for these “remixes.”

This pattern has made the case for smaller, more specialized models. “Compute and energy are real constraints for any deployment,” Liu says. He told MIT Technology Review that the rise of small models is about making AI cheaper to run and easier for more people to use. His company, ModelBest, focuses on small language models designed to run locally on devices such as phones, cars, and other consumer hardware.

While an average user might interact with AI only through the web or an app for simple conversations, power users of AI models with some technical background are experimenting with giving AI more autonomy to solve large-scale problems. OpenClaw, an open-source AI agent that recently went viral within the AI hacker world, allows AI to take over your computer—it can run 24-7, going through your emails and work tasks without supervision. 

OpenClaw, like many other open-source tools, allows users to connect to different AI models via an application programming interface, or API. Within days of OpenClaw’s release, the team revealed that Kimi’s K2.5 had surpassed Claude Opus to become the most used AI model by token count, meaning it was handling the most total text across user prompts and model responses. 

Cost has been a major reason Chinese models have gained traction, but it would be a mistake to treat them as mere “dupes” of Western frontier systems, Wang suggests. Like any product, a model only needs to be good enough for the job at hand. 

The landscape of open-source models in China is also getting more specialized. Research groups such as Shanghai AI Laboratory have released models geared toward scientific and technical tasks; several projects from Tencent have focused specifically on music generation. Ubiquant, a quantitative finance firm like DeepSeek’s parent High-Flyer, has released an open model aimed at medical reasoning.

In the meantime, innovative architectural ideas from Chinese labs are being picked up more broadly. DeepSeek has published work exploring model efficiency and memory; techniques that compress the model’s attention “cache,” reducing memory and inference costs while mostly preserving performance, have drawn significant attention in the research community. 

“The impact of these research breakthroughs is amplified because they’re open-sourced and can be picked up quickly across the field,” says Wang.

Chinese open models will become infrastructure for global AI builders

The adoption of Chinese models is picking up in Silicon Valley, too. Martin Casado, a general partner at Andreessen Horowitz, has put a number on it: Among startups pitching with open-source stacks, there’s about an 80% chance they’re running on Chinese open models, according to a post he made on X. Usage data tells a similar story. OpenRouter, a middleman that tracks how people use different AI models through its API, shows Chinese open models rising from almost none in late 2024 to nearly 30% of usage in some recent weeks.

The demand is also rising globally. Z.ai limited new subscriptions to its GLM coding plan (a coding tool based on its flagship GLM models) after demand surged, citing compute constraints. What’s notable is where the demand is coming from: CNBC reports that the system’s user base is primarily concentrated in the United States and China, followed by India, Japan, Brazil, and the UK.

“The open-source ecosystems in China and the US are tightly bound together,” says Wang at Hugging Face. Many Chinese open models still rely on Nvidia and US cloud platforms to train and serve them, which keeps the business ties tangled. Talent is fluid too: Researchers move across borders and companies, and many still operate as a global community, sharing code and ideas in public.

That interdependence is part of what makes Chinese developers feel optimistic about this moment: The work travels, gets remixed, and actually shows up in products. But openness can also accelerate the competition. Dario Amodei, the CEO of Anthropic, made a version of this point after DeepSeek’s 2025 releases: He wrote that export controls are “not a way to duck the competition” between the US and China, and that AI companies in the US “must have better models” if they want to prevail. 

For the past decade, the story of Chinese tech in the West has been one of big expectations that ran into scrutiny, restrictions, and political backlash. This time the export isn’t just an app or a consumer platform. It’s the underlying model layer that other people build on. Whether that will play out differently is still an open question.

AI is already making online crimes easier. It could get much worse.

Anton Cherepanov is always on the lookout for something interesting. And in late August last year, he spotted just that. It was a file uploaded to VirusTotal, a site cybersecurity researchers like him use to analyze submissions for potential viruses and other types of malicious software, often known as malware. On the surface it seemed innocuous, but it triggered Cherepanov’s custom malware-detecting measures. Over the next few hours, he and his colleague Peter Strýček inspected the sample and realized they’d never come across anything like it before.

The file contained ransomware, a nasty strain of malware that encrypts the files it comes across on a victim’s system, rendering them unusable until a ransom is paid to the attackers behind it. But what set this example apart was that it employed large language models (LLMs). Not just incidentally, but across every stage of an attack. Once it was installed, it could tap into an LLM to generate customized code in real time, rapidly map a computer to identify sensitive data to copy or encrypt, and write personalized ransom notes based on the files’ content. The software could do this autonomously, without any human intervention. And every time it ran, it would act differently, making it harder to detect.

Cherepanov and Strýček were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be exploited to create highly flexible malware attacks. They published a blog post declaring that they’d uncovered the first example of AI-powered ransomware, which quickly became the object of widespread global media attention.

But the threat wasn’t quite as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack let loose in the wild but a research project, merely designed to prove it was possible to automate each step of a ransomware campaign—which, they said, they had. 

PromptLock may have turned out to be an academic project, but the real bad guys are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out. 

The likelihood that cyberattacks will now become more common and more effective over time is not a remote possibility but “a sheer reality,” says Lorenzo Cavallaro, a professor of computer science at University College London. 

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers say this claim is overblown. “For some reason, everyone is just focused on this malware idea of, like, AI superhackers, which is just absurd,” says Marcus Hutchins, who is principal threat researcher at the security company Expel and famous in the security world for ending a giant global ransomware attack called WannaCry in 2017. 

Instead, experts argue, we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. These AI-enhanced cyberattacks are only set to get more frequent and more destructive, and we need to be ready. 

Spam and beyond

Attackers started adopting generative AI tools almost immediately after ChatGPT exploded on the scene at the end of 2022. These efforts began, as you might imagine, with the creation of spam—and a lot of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, “many likely aided by AI content.” 

At least half of spam email is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted figure in order to trick a worker within an organization out of funds or sensitive information. By April 2025, they found, at least 14% of those sorts of focused email attacks were generated using LLMs, up from 7.6% in April 2024.

And the generative AI boom has made it easier and cheaper than ever before to generate not only emails but highly convincing images, videos, and audio. The results are much more realistic than even just a few short years ago, and it takes much less data to generate a fake version of someone’s likeness or voice than it used to.

Criminals aren’t deploying these sorts of deepfakes to prank people or to simply mess around—they’re doing it because it works and because they’re making money out of it, says Henry Ajder, a generative AI expert. “If there’s money to be made and people continue to be fooled by it, they’ll continue to do it,” he says. In one high-profile case reported in 2024, a worker at the British engineering firm Arup was tricked into transferring $25 million to criminals via a video call with digital versions of the company’s chief financial officer and other employees. That’s likely only the tip of the iceberg, and the problem posed by convincing deepfakes is only likely to get worse as the technology improves and is more widely adopted. 

Criminals’ tactics evolve all the time, and as AI’s capabilities improve, such people are constantly probing how those new capabilities can help them gain an advantage over victims. Billy Leonard, tech leader of Google’s Threat Analysis Group, has been keeping a close eye on changes in the use of AI by potential bad actors (a widely used term in the industry for hackers and others attempting to use computers for criminal purposes). In the latter half of 2024, he and his team noticed prospective criminals using tools like Google Gemini the same way everyday users do—to debug code and automate bits and pieces of their work—as well as tasking it with writing the odd phishing email. By 2025, they had progressed to using AI to help create new pieces of malware and release them into the wild, he says.

The big question now is how far this kind of malware can go. Will it ever become capable enough to sneakily infiltrate thousands of companies’ systems and extract millions of dollars, completely undetected? 

Most popular AI models have guardrails in place to prevent them from generating malicious code or illegal material, but bad actors still find ways to work around them. For example, Google observed a China-linked actor asking its Gemini AI model to identify vulnerabilities on a compromised system—a request it initially refused on safety grounds. However, the attacker managed to persuade Gemini to break its own rules by posing as a participant in a capture-the-flag competition, a popular cybersecurity game. This sneaky form of jailbreaking led Gemini to hand over information that could have been used to exploit the system. (Google has since adjusted Gemini to deny these kinds of requests.)

But bad actors aren’t just focusing on trying to bend the AI giants’ models to their nefarious ends. Going forward, they’re increasingly likely to adopt open-source AI models, as it’s easier to strip out their safeguards and get them to do malicious things, says Ashley Jess, a former tactical specialist at the US Department of Justice and now a senior intelligence analyst at the cybersecurity company Intel 471. “Those are the ones I think that [bad] actors are going to adopt, because they can jailbreak them and tailor them to what they need,” she says.

The NYU team used two open-source models from OpenAI in its PromptLock experiment, and the researchers found they didn’t even need to resort to jailbreaking techniques to get the model to do what they wanted. They say that makes attacks much easier. Although these kinds of open-source models are designed with an eye to ethical alignment, meaning that their makers do consider certain goals and values in dictating the way they respond to requests, the models don’t have the same kinds of restrictions as their closed-source counterparts, says Meet Udeshi, a PhD student at New York University who worked on the project. “That is what we were trying to test,” he says. “These LLMs claim that they are ethically aligned—can we still misuse them for these purposes? And the answer turned out to be yes.” 

It’s possible that criminals have already successfully pulled off covert PromptLock-style attacks and we’ve simply never seen any evidence of them, says Udeshi. If that’s the case, attackers could—in theory—have created a fully autonomous hacking system. But to do that they would have had to overcome the significant barrier that is getting AI models to behave reliably, as well as any inbuilt aversion the models have to being used for malicious purposes—all while evading detection. Which is a pretty high bar indeed.

Productivity tools for hackers

So, what do we know for sure? Some of the best data we have now on how people are attempting to use AI for malicious purposes comes from the big AI companies themselves. And their findings certainly sound alarming, at least at first. In November, Leonard’s team at Google released a report that found bad actors were using AI tools (including Google’s Gemini) to dynamically alter malware’s behavior; the malware could, for example, self-modify to evade detection. The team wrote that this ushered in “a new operational phase of AI abuse.”

However, the five malware families the report dug into (including PromptLock) consisted of code that was easily detected and didn’t actually do any harm, the cybersecurity writer Kevin Beaumont pointed out on social media. “There’s nothing in the report to suggest orgs need to deviate from foundational security programmes—everything worked as it should,” he wrote.

It’s true that this malware activity is in an early phase, concedes Leonard. Still, he sees value in making these kinds of reports public if it helps security vendors and others build better defenses to prevent more dangerous AI attacks further down the line. “Cliché to say, but sunlight is the best disinfectant,” he says. “It doesn’t really do us any good to keep it a secret or keep it hidden away. We want people to be able to know about this—we want other security vendors to know about this—so that they can continue to build their own detections.”

And it’s not just new strains of malware that would-be attackers are experimenting with—they also seem to be using AI to try to automate the process of hacking targets. In November, Anthropic announced it had disrupted a large-scale cyberattack, the first reported case of one executed without “substantial human intervention.” Although the company didn’t go into much detail about the exact tactics the hackers used, the report’s authors said a Chinese state-sponsored group had used its Claude Code assistant to automate up to 90% of what they called a “highly sophisticated espionage campaign.”

But, as with the Google findings, there were caveats. A human operator, not AI, selected the targets before tasking Claude with identifying vulnerabilities. And of 30 attempts, only a “handful” were successful. The Anthropic report also found that Claude hallucinated and ended up fabricating data during the campaign, claiming it had obtained credentials it hadn’t and “frequently” overstating its findings, so the attackers would have had to carefully validate those results to make sure they were actually true. “This remains an obstacle to fully autonomous cyberattacks,” the report’s authors wrote. 

Existing controls within any reasonably secure organization would stop these attacks, says Gary McGraw, a veteran security expert and cofounder of the Berryville Institute of Machine Learning in Virginia. “None of the malicious-attack part, like the vulnerability exploit … was actually done by the AI—it was just prefabricated tools that do that, and that stuff’s been automated for 20 years,” he says. “There’s nothing novel, creative, or interesting about this attack.”

Anthropic maintains that the report’s findings are a concerning signal of changes ahead. “Tying this many steps of an intrusion campaign together through [AI] agentic orchestration is unprecedented,” Jacob Klein, head of threat intelligence at Anthropic, said in a statement. “It turns what has always been a labor-intensive process into something far more scalable. We’re entering an era where the barrier to sophisticated cyber operations has fundamentally lowered, and the pace of attacks will accelerate faster than many organizations are prepared for.”

Some are not convinced there’s reason to be alarmed. AI hype has led a lot of people in the cybersecurity industry to overestimate models’ current abilities, Hutchins says. “They want this idea of unstoppable AIs that can outmaneuver security, so they’re forecasting that’s where we’re going,” he says. But “there just isn’t any evidence to support that, because the AI capabilities just don’t meet any of the requirements.”

Indeed, for now criminals mostly seem to be tapping AI to enhance their productivity: using LLMs to write malicious code and phishing lures, to conduct reconnaissance, and for language translation. Jess sees this kind of activity a lot, alongside efforts to sell tools in underground criminal markets. For example, there are phishing kits that compare the click-rate success of various spam campaigns, so criminals can track which campaigns are most effective at any given time. She is seeing a lot of this activity in what could be called the “AI slop landscape” but not as much “widespread adoption from highly technical actors,” she says.

But attacks don’t need to be sophisticated to be effective. Models that produce “good enough” results allow attackers to go after larger numbers of people than previously possible, says Liz James, a managing security consultant at the cybersecurity company NCC Group. “We’re talking about someone who might be using a scattergun approach phishing a whole bunch of people with a model that, if it lands itself on a machine of interest that doesn’t have any defenses … can reasonably competently encrypt your hard drive,” she says. “You’ve achieved your objective.” 

On the defense

For now, researchers are optimistic about our ability to defend against these threats—regardless of whether they are made with AI. “Especially on the malware side, a lot of the defenses and the capabilities and the best practices that we’ve recommended for the past 10-plus years—they all still apply,” says Leonard. The security programs we use to detect standard viruses and attack attempts work; a lot of phishing emails will still get caught in inbox spam filters, for example. These traditional forms of defense will still largely get the job done—at least for now. 

And in a neat twist, AI itself is helping to counter security threats more effectively. After all, it is excellent at spotting patterns and correlations. Vasu Jakkal, corporate vice president of Microsoft Security, says that every day, the company processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious events.

Despite the cybersecurity landscape’s constant state of flux, Jess is heartened by how readily defenders are sharing detailed information with each other about attackers’ tactics. Mitre’s Adversarial Threat Landscape for Artificial-Intelligence Systems and the GenAI Security Project from the Open Worldwide Application Security Project are two helpful initiatives documenting how potential criminals are incorporating AI into their attacks and how AI systems are being targeted by them. “We’ve got some really good resources out there for understanding how to protect your own internal AI toolings and understand the threat from AI toolings in the hands of cybercriminals,” she says.

PromptLock, the result of a limited university project, isn't representative of how an attack would play out in the real world. But if it taught us anything, it's that the technical capabilities of AI shouldn't be dismissed. New York University's Udeshi says he was taken aback at how easily AI was able to handle a full end-to-end attack chain, from mapping and working out how to break into a targeted computer system to writing personalized ransom notes to victims: "We expected it would do the initial task very well but it would stumble later on, but we saw high—80% to 90%—success throughout the whole pipeline."

AI is still evolving rapidly, and today's systems are already capable of things that would have seemed preposterously out of reach just a few years ago. That makes it incredibly tough to say with confidence what it will—or won't—be able to achieve in the future. Researchers agree that AI-driven attacks are likely to increase in both volume and severity, but the forms they could take are unclear. Perhaps the most extreme possibility is that someone makes an AI model capable of creating and automating its own zero-day exploits—highly dangerous cyberattacks that take advantage of previously unknown vulnerabilities in software. But building and hosting such a model—and evading detection—would require billions of dollars in investment, says Hutchins, meaning it would be within the reach only of a wealthy nation-state.

Engin Kirda, a professor at Northeastern University in Boston who specializes in malware detection and analysis, says he wouldn’t be surprised if this was already happening. “I’m sure people are investing in it, but I’m also pretty sure people are already doing it, especially [in] China—they have good AI capabilities,” he says. 

It’s a pretty scary possibility. But it’s one that—thankfully—is still only theoretical. A large-scale campaign that is both effective and clearly AI-driven has yet to materialize. What we can say is that generative AI is already significantly lowering the bar for criminals. They’ll keep experimenting with the newest releases and updates and trying to find new ways to trick us into parting with important information and precious cash. For now, all we can do is be careful, remain vigilant, and—for all our sakes—stay on top of those system updates. 

Why EVs are gaining ground in Africa

EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa.

Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around.

But there are some signs of progress. I just finished up a story about the economic case: A recent study in Nature Energy found that EVs from scooters to minibuses could be cheaper to own than gas-powered vehicles in Africa by 2040.

If there’s one thing to know about EVs in Africa, it’s that each of the 54 countries on the continent faces drastically different needs, challenges, and circumstances. There’s also a wide range of reasons to be optimistic about the prospects for EVs in the near future, including developing policies, a growing grid, and an expansion of local manufacturing.  

Even the world’s leading EV markets fall short of Ethiopia’s aggressively pro-EV policies. In 2024, the country became the first in the world to ban the import of non-electric private vehicles.

The case is largely an economic one: Gasoline is expensive there, and the country commissioned Africa’s largest hydropower dam in September 2025, providing a new source of cheap and abundant clean electricity. The nearly $5 billion project has a five-gigawatt capacity, doubling the grid’s peak power in the country.  

Much of Ethiopia’s vehicle market is for used cars, and some drivers are still opting for older gas-powered vehicles. But this nudge could help increase the market for EVs there.  

Other African countries are also pushing some drivers toward electrification. Rwanda banned new registrations for commercial gas-powered motorbikes in the capital city of Kigali last year, encouraging EVs as an alternative. These motorbike taxis can make up over half the vehicles on the city’s streets, so the move is a major turning point for transportation there. 

Smaller two- and three-wheelers are a bright spot for EVs globally: In 2025, EVs made up about 45% of new sales for such vehicles. (For cars and trucks, the share was about 25%.)

And Africa’s local market is starting to really take off. There’s already some local assembly of electric two-wheelers in countries including Morocco, Kenya, and Rwanda, says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy. 

Spiro, a Dubai-based electric motorbike company, recently raised $100 million in funding to expand operations in Africa. The company currently assembles its bikes in Uganda, Kenya, Nigeria, and Rwanda, and as of October it has over 60,000 bikes deployed and 1,500 battery swap stations operating.

Assembly and manufacturing for larger EVs and batteries are also set to expand. Gotion High-Tech, a Chinese battery company, is currently building Africa’s first battery gigafactory. It’s a $5.6 billion project that could produce 20 gigawatt-hours of batteries annually, starting in 2026. (That’s enough for hundreds of thousands of EVs each year.)

Chinese EV companies are looking to growing markets like Southeast Asia and Africa as they attempt to expand beyond an oversaturated domestic scene. BYD, the world’s largest EV company, is aggressively expanding across South Africa and plans to have as many as 70 dealerships in the country by the end of this year. That will mean more options for people in Africa looking to buy electric. 

“You have very high-quality, very affordable vehicles coming onto the market that are benefiting from the economies of scale in China. These countries stand to benefit from that,” says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank. “It’s a game changer,” he adds.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter.

EVs could be cheaper to own than gas cars in Africa by 2040

Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.

There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. As a result, some previous analyses have suggested that fossil-fuel vehicles would dominate in Africa through at least 2050.

But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Electric two-wheelers, cars, larger vehicles, and even minibuses could compete in most African countries in just 15 years, according to the new study, published in Nature Energy.

“EVs have serious economic potential in most African countries in the not-so-distant future,” says Bessie Noll, a senior researcher at ETH Zürich and one of the authors of the study.

The study considered the total cost of ownership over the lifetime of a vehicle. That includes the sticker price, financing costs, and the cost of fueling (or charging). The researchers didn’t consider policy-related costs like taxes, import fees, and government subsidies, choosing to focus instead on only the underlying economics.

EVs are getting cheaper every year as battery and vehicle manufacturing improve and production scales, and the researchers found that in most cases and in most places across Africa, EVs are expected to be cheaper than equivalent gas-powered vehicles by 2040. EVs should also be less expensive than vehicles that use synthetic fuels. 

For two-wheelers like electric scooters, EVs could be the cheaper option even sooner: with smaller, cheaper batteries, these vehicles will be economically competitive by the end of the decade. On the other hand, one of the most difficult segments for EVs to compete in is small cars, says Christian Moretti, a researcher at ETH Zürich and the Paul Scherrer Institute in Switzerland.

Because some countries still have limited or unreliable grid access, charging is a major barrier to EV uptake, Noll says. So for EVs, the authors analyzed the cost of buying not only the vehicle but also a solar off-grid charging system. This includes solar panels, batteries, and the inverter required to transform the electricity into a version that can charge an EV. (The additional batteries help the system store energy for charging at times when the sun isn’t shining.)

Mini grids and other standalone systems that include solar panels and energy storage are increasingly common across Africa. It’s possible that this might be a primary way that EV owners in Africa will charge their vehicles in the future, Noll says.

One of the bigger barriers to EVs in Africa is financing costs, she adds. In some cases, the cost of financing can be more than the up-front cost of the vehicle, significantly driving up the cost of ownership.

Today, EVs are more expensive than equivalent gas-powered vehicles in much of the world. But in places where it’s relatively cheap to borrow money, that difference can be spread out across the course of a vehicle’s whole lifetime for little cost. Then, since it’s often cheaper to charge an EV than fuel a gas-powered car, the EV is less expensive over time. 

In some African countries, however, political instability and uncertain economic conditions make borrowing money more expensive. To some extent, the high financing costs affect the purchase of any vehicle, regardless of how it’s powered. But EVs are more expensive up front than equivalent gas-powered cars, and that higher up-front cost adds up to more interest paid over time. In some cases, financing an EV can also be more expensive than financing a gas vehicle—the technology is newer, and banks may see the purchase as more of a risk and charge a higher interest rate, says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank.
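The interaction between up-front price and borrowing costs can be sketched with a few lines of code. All of the figures below are hypothetical, chosen only to illustrate the direction of the effect described above; none come from the study:

```python
# Illustrative total-cost-of-ownership (TCO) comparison.
# All prices, interest rates, and energy costs are hypothetical.

def total_cost_of_ownership(price, annual_rate, years, yearly_energy_cost):
    """Up-front price compounded at the borrowing rate over the loan term,
    plus fuel or charging costs over the vehicle's lifetime."""
    financed_price = price * (1 + annual_rate) ** years
    return financed_price + yearly_energy_cost * years

# Cheap credit (5%): the EV's higher sticker price is outweighed by
# its lower "fuel" cost over a 10-year lifetime.
gas_cheap = total_cost_of_ownership(10_000, 0.05, 10, 1_200)
ev_cheap = total_cost_of_ownership(14_000, 0.05, 10, 400)
print(ev_cheap < gas_cheap)   # True

# Expensive credit (25%): the EV's larger up-front cost compounds into
# far more interest, and the comparison flips.
gas_costly = total_cost_of_ownership(10_000, 0.25, 10, 1_200)
ev_costly = total_cost_of_ownership(14_000, 0.25, 10, 400)
print(ev_costly > gas_costly)  # True
```

The compounding here is deliberately simplified (a single lump repayment at the end of the term); real loan amortization changes the totals but not the basic point that high interest rates penalize the vehicle with the bigger up-front price.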

The picture varies widely depending on the country, too. In South Africa, Mauritius, and Botswana, financing conditions are already close to levels required to allow EVs to reach cost parity, according to the study. In higher-risk countries (the study gives examples including Sudan, which is currently in a civil war, and Ghana, which is recovering from a major economic crisis), financing costs would need to be cut drastically for that to be the case. 

Making EVs an affordable option will be a key first step to putting more on the roads in Africa and around the world. “People will start to pick up these technologies when they’re competitive,” says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy. 

Solar-based charging systems, like the ones mentioned in the study, could help make electricity less of a constraint, bringing more EVs to the roads, Nsitem says. But there’s still a need for more charging infrastructure, a major challenge in many countries where the grid needs major upgrades for capacity and reliability, he adds. 

Globally, more EVs are hitting the roads every year. “The global trend is unmistakable,” Carlin says. There are questions about how quickly it’s happening in different places, he says, “but the momentum is there.”