The noise we make is hurting animals. Can we learn to shut up?

When the covid-19 pandemic started, Jennifer Phillips thought about the songs of the sparrows.

They were easier to hear, because the world had suddenly become quieter. Car traffic plummeted as people sheltered at home and shifted to remote work. Air travel collapsed. Cities—normally filled with the honking, screeching, engine-gunning riot of transportation—became as silent as tombs.

For years, Phillips has studied how animals react to “anthropogenic noise,” or the racket created by human activity. Most animals really don’t like it, she and her colleagues have learned. Animals constantly listen to the world around them: They’re on the alert for the rustle of approaching predators, or a mating call from a member of their species. As human society has expanded—with sprawling cities, industrial mines, and roads crisscrossing the world—it has gotten noisier too, and animals have trouble hearing one another.


Phillips and her colleagues had spent time in the 2010s in San Francisco recording the sound of white-crowned sparrows in the Presidio. It’s a park that is half peaceful nature and half automobile noise, since it’s filled with thick clumps of trees and grassy fields but also has two highways that slice through it, feeding onto the Golden Gate Bridge. In past recordings, starting in the 1950s, sparrows had sung with complex and lower-pitched melodies and three major “dialects.” But by the 2010s, traffic in the Presidio had exploded, and the hubbub was so loud that the birds began to sing with faster trills—and at a higher pitch—so their fellows could hear them. The two quietest dialects were either dead or on their way to extinction.

They’re “screaming at the top of their lungs,” says Phillips. “They really can’t hear the lower frequencies when the traffic noise is present.” Urban noise can even change birds’ bodies; they get thinner and more stressed out. Their mating calls aren’t as effective, because female birds, as researchers have found, generally don’t enjoy high-pitched, high-volume shouting. (It makes them wonder if the males are unhealthy.) The noise can increase bird-on-bird conflict, because when birds can’t hear warning cries they accidentally stumble into enemy territory. Perhaps worst of all, in situations like these biodiversity takes a hit: Entire species that can’t handle urban clamor simply head out of town and never come back.

But as the sudden, eerie silence of the pandemic descended, Phillips sat at home thinking, It’s really quiet. And then she wondered: Would the Presidio birds now be able to hear each other better?

She raced over to the park and started recording. Sure enough, the park was seven decibels quieter—a huge drop. (That’s like the difference between the noise of the average home and whispering.)

And remarkably, the researchers found that the songs of the white-crowned sparrows had transformed. They were singing more quietly, with a richer range of frequencies. A bird could be heard twice as far as before. And the mating calls had gotten more sultry.

“They could sing a higher performance, basically a sexier song, but not have to scream it so loud,” Phillips says. 

It was as if time had been reversed and all the damage abruptly repaired. And it proved what Phillips and her peers have been increasingly documenting: that anthropogenic noise is the newest form of pollution we need to tackle. The noise of our relentlessly on-the-move industrial society affects all life on Earth, wildlife and humans, in ways we’re just beginning to grasp. Yet strategies such as electrification and clever urban design could help. As the Presidio showed, noise can vanish overnight—once we figure out how to shut up.

Hidden impacts

Many forms of pollution are obvious to us humans. Dumping toxic goo into lakes? Sure, that’s bad. Coal smokestacks pumping soot and carbon dioxide, plastic bags and sea nets choking whales—we now understand that these, too, are problems. Even an idea as gauzy as light pollution has penetrated the public consciousness to some extent, since it’s why city dwellers can’t see many stars, and we’ve heard it confuses migratory birds.

But noise, mostly from transportation, took longer to hit our radar. This is partly because it’s invisible; there’s no billowing smokestack, no soiled waterway. We just got used to it as it vibrated in the background.

[Image: A sparrow perched on a branch, singing. Sparrows in San Francisco’s Presidio began to sing with faster trills—and at a higher pitch—so their fellows could hear them over the noise of nearby traffic. GETTY IMAGES]
[Image: A hummingbird in flight. The black-chinned hummingbird seems to prefer noisy areas, fledging more chicks than the same species does in quieter areas. MDF/WIKIMEDIA COMMONS]

There were a few studies in the ’70s and ’80s showing that animals were upset by our noise. But the field really began to take off in the ’00s, in part because digital technology made it easier to record long swathes of sound out in nature and analyze them. One early salvo came from the biologist Hans Slabbekoorn, who was studying doves in the city of Leiden and irritatedly noticed that he could rarely get a clean recording because of the background noise. Sometimes he’d see the doves’ throats moving as they cooed but couldn’t hear them. “If I’m having difficulty hearing them,” he thought, “what about them?”

So he and a colleague started recording ambient sound levels in different parts of Leiden. Some were quiet residential areas, which registered a soothing 42 decibels, and others were noisy intersections or areas near highways, which reached 63 decibels, about as loud as background music. Sure enough, he found that birds in the noisy areas were singing at a higher pitch.

Over the next two decades, research in the field bloomed. Noise, the scientists found, has a few common ill effects on animals. It disrupts communication, certainly. But it also generally stresses them, reducing everything from their body weight to their receptivity to mating calls. If an animal nests closer to a road, its reproduction rates can go down; eastern bluebirds, for example, produce fewer fledglings. Truly cacophonous noise—like planes taking off at a nearby airport—can cause hearing loss in birds. And animals can wind up becoming less aware of threats from predators. They’ll wander closer to danger, because they can’t hear it coming. (And sometimes they’ll do the opposite: They’ll develop a rageaholic hair-trigger temper, because they’re constantly on high alert and regard everything as a threat.)

Even in deep rural areas, where things are normally pretty quiet, highways can disrupt wildlife—the noise carries far into the fields nearby. Fraser Shilling, a biologist at the University of California, Davis, has stood as far as half a mile from rural highways and recorded sound as loud as 60 decibels, which is at least 20 decibels higher than you’d typically find in the wilderness. “The motorcycles and the 18-wheelers are really the ones that project a lot of noise,” he told me.

Above 55 decibels, many skittish animals get into a fight-or-flight panic. The prevalence of bobcats—an endangered species famously rattled by noise—“starts dropping off the cliff,” says Shilling. Above 65, “you’re really starting to exclude almost all wildlife.”

And that’s not even the upper limit of what wildlife is exposed to. There are roughly a half-million natural-gas wells around the US, and piercingly loud compressors are used to shoot water down into most of them. Up close, the compressors can kick out 95 decibels, a sound as loud as a subway train; at one Wyoming gas well the sound still registered around 48 decibels nearly a quarter-mile away.

Historically, it wasn’t always easy to prove that noise was causing the problems the animals were experiencing. Maybe other factors were at work; maybe populations decline near a road simply because some animals are hit by vehicles?

But several clever experiments have proved that noise—and noise alone—can disrupt wildlife. One was the “phantom road” experiment by the conservation scientist Jesse Barber and his team, then at Boise State University. They went out to a quiet, uninhabited area of the Boise foothills in Idaho, far away from any roads. In this valley in the mountains, thousands of migratory birds stop on their way south each year; they’ll gorge themselves on cherry bushes, gaining weight for the next days of flying. The researchers strapped 15 pairs of speakers to Douglas fir trees, in a half-kilometer line. Then they blasted recordings of highway noise. They played the noise for four days and then turned it off for four days. Then they observed thousands of birds, capturing many to measure their body mass.

The noise truly rattled the birds. When the sound was turned on, nearly a third left the area. Those that stuck around ate less: While birds should be heavier after a day of foraging, these ones didn’t gain much. The noise seemed to have so interrupted their feeding that they weren’t packing on the weight needed for their migratory trip.

Other, similarly nifty A/B tests followed. One was led by David Luther, a biologist at George Mason University (who also worked with Phillips on the covid-19 study in San Francisco). In 2015, these researchers took 17 white-crowned sparrows at birth and raised them in a lab. To teach them their species’ songs, they played the nestlings recordings of adult sparrows singing, at low and high pitches. Six of the nestlings heard the songs without any interference; for the rest, the researchers played the sounds of city noise at the same time.

The results were stark. The lucky birds that were spared the traffic noise learned to perform the quieter, sweeter, more complex songs. But the birds raised amid blasting traffic noise learned only the higher, faster, more stressed-out songs. From the cradle, noise changed the way they communicated.

Humans hate noise too

You can’t run the same experiment with humans, raising them in a lab to see how noise affects them. (Not ethically, anyway.) But if we could, we’d likely find the same thing. We, too, are animals—and it appears that we suffer in similar ways from anthropogenic noise, even though we’re the ones creating it.


Stacks of research in the last few decades have found that noise—most often, as with wildlife, the sound of traffic—is correlated with lousy sleep, higher blood pressure, more heart disease, and higher stress. A Danish study followed almost 25,000 nurses for years and found that an additional 10 decibels hit them hard; over a 23-year period they had an 8% higher rate of death, plus higher rates of nearly every bad thing that could happen to you: cancers, psychiatric problems, strokes. (The researchers controlled for other health risks.) As you’d probably predict by now, children fare badly too. When Barcelona researchers followed almost 3,000 elementary school kids for a year, they found that those in noisier schools performed worse on assessments of working memory and ability to pay attention.

“We think of ourselves as being ‘used to it,’” says Gail Patricelli, a professor of evolution and ecology at the University of California, Davis. “We’re not as used to it as we think we are.”

It’s also true that there’s a trade-off. Many people understand that noise from cities and highways is aggravating, but we tolerate it because we get benefits along with the hassles. Cities are crammed with jobs and connections and dating opportunities; cars and trucks bring us the things we need and increase our personal mobility.

It turns out that animals make a similar calculus. Some species appear to benefit in certain ways from proximity to noise, so they move toward it. 

Clinton Francis, a biologist at California Polytechnic State University, and a team studied bird populations near noisy gas wells in rural New Mexico. Most species avoided the riot of the well pumps. But Francis was surprised to find that some hummingbirds and finches preferred it, and by one important measure they thrived: They were nesting more in the noisy areas than in the quieter areas. Additionally, several species had more success at fledging chicks in noisier locations.

What was going on? It’s likely that the noise makes it harder for predators to hear the birds and hunt down their nests. “It’s essentially a predator shield,” Francis says. Since his research found that predators can account for as much as 76% of nest failures—eggs that never produce healthy offspring—that’s a significant survival advantage.

Cities can offer the same protections to certain species. Consider the case of Flaco, a Eurasian eagle-owl that escaped from the Central Park Zoo in February of 2023 and found he was in a terrific place to hunt. The incessant traffic ought to have caused him trouble. “An owl like this is among the most vulnerable species to intrusions from noise pollution. They’re listening for extremely faint signals or cues that their prey provide,” Francis notes. But New York has its compensations, because prey animals abound. They’re also naïve and unguarded, never expecting an owl with a six-foot wingspan to swoop down and devour them.

[Illustration: EDDIE GUY]

Granted, these upsides don’t cancel out the negatives. Human noise may shield some birds from predators, but in other ways it leaves them faintly miserable, with high levels of stress hormones and lower weight. 

Worse, the species that manage to thrive in cities or near highways tend to be the same few everywhere in the country. And they represent only a minority of species; most are driven further away, with less and less land to live on as civilization spreads ever outward.

“Overall, it’s kind of a nightmare for diversity,” says Luther.

How to silence the world

In the early ’00s, the village of Alverna in the Netherlands began to get louder. A major intercity road cut straight through the town, and traffic had gone up by two-thirds in the previous decade. Facing complaints about the din, the town offered to put up some 13-foot walls on either side of the route. Residents hated the idea. Who wants to look out the window at massive walls?

So instead town planners redesigned the road in subtle ways. They lowered it by half a meter, slightly blocking the tire sounds. They built wedges that rise up three feet on either side, and surfaced them with attractive antique stone; that blocked even more sound. They planted sound-absorbing trees. And as a final coup de grâce, they reduced the speed limit from about 50 to 30 miles per hour. When a car is moving slowly, the engine is producing most of the roar—but once it’s going 45 mph or faster, the rumble of tires on the pavement takes over and is much louder. Each intervention had only a small effect, but cumulatively they made the road a blessed 10 decibels quieter.
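Decibels are a logarithmic scale, which is why a handful of modest fixes can stack up to a dramatic one: each intervention multiplies the remaining sound intensity, so the decibel reductions simply add. A quick sketch of the arithmetic (the per-fix numbers here are illustrative, not Alverna’s measured values):

```python
def db_drop_to_intensity_ratio(drop_db: float) -> float:
    """A drop of D decibels cuts sound intensity by a factor of 10**(D/10)."""
    return 10 ** (drop_db / 10)

# Attenuations multiply in intensity, so they *add* in decibels.
# Hypothetical per-fix reductions: lowered roadbed, stone wedges,
# tree planting, reduced speed limit.
fixes_db = [2.0, 3.0, 2.0, 3.0]
total_db = sum(fixes_db)                      # 10.0 dB combined
ratio = db_drop_to_intensity_ratio(total_db)  # 10.0x
print(f"{total_db:.0f} dB quieter = {ratio:.0f}x less acoustic power")
# → 10 dB quieter = 10x less acoustic power
```

The same relationship explains why the Presidio’s seven-decibel pandemic drop was “huge”: it corresponds to roughly a fivefold reduction in sound intensity.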

This tale illustrates one curious upside of noise. Compared with other forms of pollution, it can be ended quickly. Toxic pollutants or CO2 can hang around for tens of thousands of years; the microplastics in your pancreas are probably never coming out. But with noise, the instant you reduce the source, the benefits are immediate. 

Plus, most of what works is “not rocket science,” Shilling says. A tall wall at the side of a highway will cut noise by 10 decibels; fill a double-sided wall with rubble and it’s even better. That could cut the traffic noise to below 55 decibels, he notes, which would help particularly skittish forms of wildlife. Walls can block animal movement, though, so in animal-heavy areas it’s better to build berms—small hills on either side of a highway. Areas of high ecological importance could be prioritized to keep costs down. 

“If there’s a great chunk of wetland habitat and it’s the only one around for 50 miles in any direction? Well, then we should build noise walls around it,” he says. We should also build overpasses and underpasses to help animals get around. And to quiet the din of gas wells out in the countryside, states could require companies to build walls around them. (They’ll likely only do that, though, when human neighbors complain or launch lawsuits; animals don’t have lawyers.)

Cities, too, can learn to shut up, as Alverna proved. At the most ambitious, some have buried noisy highways that once cut through the downtown core. Boston put a massive elevated highway underground in its “Big Dig”; in Slabbekoorn’s hometown of Amstelveen—a suburb of Amsterdam—they’re currently enclosing the A9 highway in a tunnel and turning the surface into a verdant park with new buildings. “That’s amazing, getting back a lot of the space as well,” he says. 

Granted, this sort of reengineering can be brutally expensive, which is why politicians blanch when they’re asked to reduce road noise. The Big Dig cost $15 billion—$24 billion including interest. When I mentioned cost to Shilling, he sighed. “It’s not as expensive as a B-1 bomber or tax cuts for rich people,” he says. “Environmental stuff is considered expensive just because our expectations are low, not because we can’t afford to do it.”

There are cheaper and more politically palatable fixes, though. Reducing urban speed limits is one; Paris recently cut the top speed on its ring roads from 70 to 50 kilometers per hour (43 to 31 mph), and noise at night went down by an average of 2.7 decibels—a noticeable drop. Planting more trees and vegetation all around roads and cities can cut a few decibels more, and residents love it.

Wider electrification would also bring down the volume. “Electric vehicles of all kinds have the potential to make a big difference,” Patricelli says; when the light turns green and an EV next to you accelerates away, it’s up to 13 decibels quieter than a comparable gas-powered vehicle. These benefits won’t be felt as much on highways, because EVs still make tire noise at high speeds. But in the slower stop-and-go traffic of urban life, they are far more pleasant to the ears, both animal and human. Indeed, the electrification of everything that currently uses a gas-powered motor will make urban life quieter. Cities like Alameda, California, and Alexandria, Virginia, are increasingly banning gas-powered leaf blowers and lawn mowers, which operate at hair-raising volume while electric ones whisper along.

We’ve engineered a civilization that roars, but the next phase is making it purr. The animals will thank us. 

Clive Thompson is a science and technology journalist based in New York City.

The quest to measure our relationship with nature

As a movement, environmentalism has been pretty misanthropic. Understandably so—we humans have done some destructive things to the ecosystems around us. In the 21st century, though, mainstream conservation is learning that humans can be a force for good. Foresters are turning to Indigenous burning practices to prevent wildfires. Biologists are realizing that flower-dotted meadows were ancient food-production landscapes that need harvesting or they’ll disappear. And the once endangered peregrine falcon now thrives in part thanks to nesting sites on skyscrapers and abundant urban prey: pigeons.

For decades (two, but that counts), I’ve been writing about how humans aren’t metaphysically different from any other species on Earth. Conservation can’t only be about fencing people out of protected areas. A lot of the time the real trick is not to withdraw from “nature” but to get better at being part of it. 

Still, I recognize that living in harmony with nature sounds like a mushy idea. I was therefore stoked to participate in a meeting in Oxford, UK, that sought to build more precise tools to assess human-nonhuman relationships. Scientists have invented lots of measurements of environmental destruction, from parts per million of carbon dioxide to extinction rates to “planetary boundaries.” These have their uses, but they engage people mostly through dread. Why not invent metrics, we thought, that would engage people’s hopes and dreams? 

It was harder than I expected. How do you quantify how good people in any given nation are at living with other Earthlings? Some of the metrics the group proposed seemed to me to be too similar to the older, more adversarial approach. Why tally the agricultural land use per person, for example? Environmentalists have typically seen farms as the opposite of nature, but they’re also potential sites for both edible and inedible biodiversity. Some of us were keen on satellite imagery to calculate things like how close people live to green space. But without local information, you can’t prove that people can actually access that space.

Eventually the 20 or so scientists, authors, and philosophers who met in Oxford settled on three basic questions. First, is nature thriving and accessible to people? We wanted to know if humans could engage with the world around them. Second, is nature being used with care? (Of course, “care” could mean lots of things. Is it just keeping harvests under maximum sustainable yield? Or does it require a completely circular economy?) And third, is nature safeguarded? Again, not easy to assess. But if we could roughly measure each of these three things, the numbers could combine into an overall score for the quality of a human-nature relationship. 

We published our ideas in Nature last year. Though they weren’t perfect, green-space remote sensing and agricultural footprint calculations made the cut. Since then, a team in the United Nations Human Development Office has continued that work, planning to debut a Nature Relationship Index (NRI) later this year alongside the 2026 Human Development Report. Everyone loves a ranked list; we hope countries will want to score well and will compete to rise to the top. 

Pedro Conceição, lead author of the Human Development Report, tells me that he wants the new index to shift how countries see their environmental programs. (He wouldn’t give me spoilers as to the final metrics, but he did tell me that nothing from our Nature paper made it in.) The NRI, Conceição says, will be critical for “challenging this idea that humans are inherent destroyers of nature and that nature is pristine.” Narratives around constraints, limits, and boundaries are polarizing instead of energizing, he says. So the NRI isn’t about how badly we are failing. It speaks to aspirations for a green, abundant world. As we do better, the number goes up—and there is no limit. 

Emma Marris is the author of Wild Souls: Freedom and Flourishing in the Non-Human World.

Is carbon removal in trouble?

Last week, news outlets reported that Microsoft was pausing carbon removal purchases. It was something of a bombshell.

The thing is, Microsoft is the carbon removal market. The company has single-handedly purchased something like 80% of all contracted carbon removal. If you’re looking for someone to pay you to suck carbon dioxide out of the atmosphere, Microsoft is probably who you’re after.

The company has said that it is not permanently ending its carbon removal purchases (though it didn’t directly answer further questions about this apparent pause). But with this flurry of news, there’s a lot of fear in the industry—so, it’s worth talking about the state of carbon removal, and where Big Tech companies fit in.

Carbon removal aims to reliably pull carbon dioxide out of the atmosphere and permanently store it. There’s a wide range of technologies in this space, including direct air capture (DAC) plants, which usually use some kind of sorbent or solvent to pull carbon dioxide from the air. Another important method is bioenergy with carbon capture and storage (BECCS), in which biomass such as wood or waste-derived biofuel is burned for energy and scrubbing equipment captures the greenhouse gases.

There was a huge boom of interest in carbon removal technologies in the first half of this decade. One UN climate report in 2022 found that nations may need to remove up to 11 billion metric tons of carbon dioxide every year by 2050 to keep warming to 2 °C above preindustrial levels.

One nagging problem is that the economics here have always been tricky. There’s a major potential public good to pulling carbon pollution out of the atmosphere. The question is, Who will pay for it?

So far, the answer has been Microsoft. The company is by far the largest buyer of carbon removal contracts, and it’s the only purchaser that has made megatonne-scale purchases, says Robert Höglund, cofounder of CDR.fyi, a public-benefit corporation that analyzes the carbon removal sector. “Microsoft has had a huge importance, especially for getting large-scale projects off the ground and showing there is demand for large deals,” Höglund said via email.

Microsoft has pledged to become carbon-negative by 2030 and to remove the equivalent of its historic emissions by 2050. Progress on actually cutting emissions has been tough to achieve, though—in the company’s latest Environmental Sustainability Report, published in June 2025, it announced emissions had risen by 23.4% since 2020.

On April 10, Heatmap News reported that Microsoft staff had told suppliers and partners that it was pausing future purchases of carbon removal, though it wasn’t clear whether the company would increase support for existing projects, or when purchases might resume. Bloomberg reported a similar story the next day; one source told Bloomberg that Microsoft employees had said the decision was related to financial considerations.

In a statement in response to written questions, Microsoft said that it was not permanently closing its carbon removal program. “At times we may adjust the pace or volume of our carbon removal procurement as we continue to refine our approach toward sustainability goals. Any adjustments we make are part of our disciplined approach—not a change in ambition,” Microsoft Chief Sustainability Officer Melanie Nakagawa said in the statement.

Whatever, exactly, is happening behind the scenes, many in the industry are nervous, says Wil Burns, codirector of the Institute for Responsible Carbon Removal at American University. People viewed the company as the foundational supporter of carbon removal, he adds.

“This pause—whether it’s short term or whatever it is—the way it’s been rolled out is extremely irresponsible,” Burns says. The vast majority of firms looking to get carbon removal contracts are probably seeking Microsoft deals. So, while Microsoft has every right to change its plans, the company needs to be open with the industry now, he adds.

“I don’t think you can hold yourself out as the paragon of fostering carbon removal and then treat a nascent industry that disrespectfully,” Burns says.

Carbon removal companies were already in turmoil in the US, particularly because of recent policy shifts: Funding has been cut back, and recent changes at the Environmental Protection Agency have weakened the government’s ability to target carbon pollution.

Now, if the largest corporate backer is shifting plans or taking a significant pause, things could get rocky.

Depending on the extent of this pause, the industry may need to survive on smaller purchases and hope for support from governments and philanthropy, Höglund says. But for carbon removal to truly scale, we need policymakers to create mandates so that emitters are responsible for either storing the carbon dioxide they produce or paying for it, Burns says.

“Maybe the upside of this is Microsoft has sent a wake-up call, that you just can’t rely on the kindness of strangers to make carbon removal scale.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Why having “humans in the loop” in an AI war is an illusion

The availability of artificial intelligence for use in warfare is at the center of a legal battle between Anthropic and the Pentagon. This debate has become urgent, with AI playing a bigger role than ever before in the current conflict with Iran. AI is no longer just helping humans analyze intelligence. It is now an active player—generating targets in real time, controlling and coordinating missile interceptions, and guiding lethal swarms of autonomous drones.

Most of the public conversation regarding the use of AI-driven autonomous lethal weapons centers on how much humans should remain “in the loop.” Under the Pentagon’s current guidelines, human oversight supposedly provides accountability, context, and nuance while reducing the risk of hacking.

AI systems are opaque “black boxes”

But the debate over “humans in the loop” is a comforting distraction. The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are actually “thinking.” The Pentagon’s guidelines are fundamentally flawed because they rest on the dangerous assumption that humans understand how AI systems work.

Having studied intentions in the human brain for decades and in AI systems more recently, I can attest that state-of-the-art AI systems are essentially “black boxes.” We know the inputs and outputs, but the artificial “brain” processing them remains opaque. Even their creators cannot fully interpret them or understand how they work. And when AIs do provide reasons, they are not always trustworthy.

The illusion of human oversight in autonomous systems

In the debate over human oversight, a fundamental question is going unasked: Can we understand what an AI system intends to do before it acts?

Imagine an autonomous drone tasked with destroying an enemy munitions factory. The automated command and control system determines that the optimal target is a munitions storage building. It reports a 92% probability of mission success because secondary explosions of the munitions in the building will thoroughly destroy the facility. A human operator reviews the legitimate military objective, sees the high success rate, and approves the strike.

But what the operator does not know is that the AI system’s calculation included a hidden factor: Beyond devastating the munitions factory, the secondary explosions would also severely damage a nearby children’s hospital. The emergency response would then focus on the hospital, ensuring the factory burns down. To the AI, maximizing disruption this way serves its given objective. To a human, it is potentially a war crime, violating the rules that protect civilian life.

Keeping a human in the loop may not provide the safeguard people imagine, because the human cannot know the AI’s intention before it acts. Advanced AI systems do not simply execute instructions; they interpret them. If operators fail to define their objectives carefully enough—a highly likely scenario in high-pressure situations—the “black box” system could be doing exactly what it was told and still not acting as humans intended.

This “intention gap” between AI systems and human operators is precisely why we hesitate to deploy frontier black-box AI in civilian health care or air traffic control, and why its integration into the workplace remains fraught—yet we are rushing to deploy it on the battlefield.

To make matters worse, if one side in a conflict deploys fully autonomous weapons, which operate at machine speed and scale, the pressure to remain competitive would push the other side to rely on such weapons too. This means the use of increasingly autonomous—and opaque—AI decision-making in war is only likely to grow.

The solution: Advance the science of AI intentions

The science of AI must comprise both building highly capable AI technology and understanding how this technology works. Huge advances have been made in developing and building more capable models, driven by record investments—forecast by Gartner to grow to around $2.5 trillion in 2026 alone. In contrast, the investment in understanding how the technology works has been minuscule.

We need a massive paradigm shift. Engineers are building increasingly capable systems. But understanding how these systems work is not just an engineering problem—it requires an interdisciplinary effort. We must build the tools to characterize, measure, and intervene in the intentions of AI agents before they act. We need to map the internal pathways of the neural networks that drive these agents so that we can build a true causal understanding of their decision-making, moving beyond merely observing inputs and outputs. 

A promising way forward is to combine techniques from mechanistic interpretability (breaking neural networks down into human-understandable components) with insights, tools, and models from the neuroscience of intentions. Another idea is to develop transparent, interpretable “auditor” AIs designed to monitor the behavior and emergent goals of more capable black-box systems in real time.  

Developing a better understanding of how AI functions will enable us to rely on AI systems for mission-critical applications. It will also make it easier to build more efficient, more capable, and safer systems.

Colleagues and I are exploring how ideas from neuroscience, cognitive science, and philosophy—fields that study how intentions arise in human decision-making—might help us understand the intentions of artificial systems. We must prioritize these kinds of interdisciplinary efforts, including collaborations between academia, government, and industry.

However, we need more than just academic exploration. The tech industry—and the philanthropists funding AI alignment, which strives to encode human values and goals into these models—must direct substantial investments toward interdisciplinary interpretability research. Furthermore, as the Pentagon pursues increasingly autonomous systems, Congress must mandate rigorous testing of AI systems’ intentions, not just their performance.

Until we achieve that, human oversight over AI may be more illusion than safeguard.

Uri Maoz is a cognitive and computational neuroscientist specializing in how the brain transforms intentions into actions. A professor at Chapman University with appointments at UCLA and Caltech, he leads an interdisciplinary initiative focused on understanding and measuring intentions in artificial intelligence systems (ai-intentions.org).

No one’s sure if synthetic mirror life will kill us all

For four days in February 2019, some 30 synthetic biologists and ethicists hunkered down at a conference center in Northern Virginia to brainstorm high-risk, cutting-edge, irresistibly exciting ideas that the National Science Foundation should fund. By the end of the meeting, they’d landed on a compelling contender: making “mirror” bacteria. Should they come to be, the lab-created microbes would be structured and organized like ordinary bacteria, with one important exception: Key biological molecules like proteins, sugars, and lipids would be the mirror images of those found in nature. DNA, RNA, and many other components of living cells are chiral, which means they have a built-in handedness. Their mirrors would twist in the opposite direction.

Researchers thrilled at the prospect. “Everybody—everybody—thought this was cool,” says John Glass, a synthetic biologist at the J. Craig Venter Institute in La Jolla, California, who attended the 2019 workshop and is a pioneer in developing synthetic cells. It was “an incredibly difficult project that would tell us potentially new things about how to design and build cells, or about the origin of life on Earth.” The group saw enormous potential for medicine, too. Mirror microbes might be engineered as biological factories, producing mirror molecules that could form the basis for new kinds of drugs. In theory, such therapeutics could perform the same functions as their natural counterparts, but without triggering unwelcome immune responses. 

After the meeting, the biologists recommended NSF funding for a handful of research groups to develop tools and carry out preliminary experiments, the beginnings of a path through the looking glass. The excitement was global. The National Natural Science Foundation of China funded major projects in mirror biology, as did the German Federal Ministry of Research, Technology, and Space.

Five years later, in 2024, many researchers involved in that NSF meeting had reversed course. They’d become convinced that in the worst of all possible futures, mirror organisms could trigger a catastrophic event threatening every form of life on Earth; they’d proliferate without predators and evade the immune defenses of people, plants, and animals.

Over the past two years, they’ve been ringing alarm bells. They published an article in Science in December 2024, accompanied by a 299-page technical report addressing feasibility and risks. They’ve written essays and convened panels and cofounded the Mirror Biology Dialogues Fund (MBDF), a broadly funded nonprofit charged with supporting work on understanding and addressing the risk. The issue has received a blaze of media attention and ignited dialogues among not only chemists and synthetic biologists but also bioethicists and policymakers.  

What’s received less attention, however, is how we got here and what uncertainties still remain about any potential threat. Creating a mirror-life organism would be tremendously complicated and expensive. And although the scientific community is taking the alarm seriously, some scientists doubt whether it’s even possible to create a mirror organism anytime soon. “The hypothetical creation of mirror-image organisms lies far beyond the reach of present-day science,” says Ting Zhu, a molecular biologist at Westlake University, in China, whose lab focuses on synthesizing mirror-image peptides and other molecules. He and others have urged colleagues not to let speculation and anxiety guide decision-making and argued that it’s premature to call for a broad moratorium on early-stage research, which they say could have medical benefits.

But the researchers who are raising flags describe a pathway, even multiple pathways, to bringing mirror life into existence—and they say we urgently need guardrails to figure out what kinds of mirror-biology research might still be safe. That means they’re facing a question that others have encountered before, multiple times over the last several decades and with mixed results—one that doesn’t have a neat home in the scientific method. What should scientists do when they see the shadow of the end of the world in their own research? 

Looking-glass life

The French chemist and microbiologist Louis Pasteur was the first to recognize that biological molecules had built-in handedness. In the late 19th century, he described all living species as “functions of cosmic asymmetry.” What would happen, he mused, if one could replace these chiral components with their mirror opposites? 

Scientists now recognize that chirality is central to life itself, though no one knows why. In humans, 19 of the 20 so-called “standard” amino acids that make up proteins are chiral, and all in the same way. (The outlier, glycine, is symmetrical.) The functions of proteins are intricately tied to their shapes, and they mostly interact with other molecules through chiral structures. Almost all receptors on the surface of a cell are chiral. During an infection, the immune system’s sentinels use chirality to detect and bind to antigens—substances that trigger an immune response—and to start the process of building antibodies. 

By the late 20th century, researchers had begun to explore the idea of reversing chirality. In 1992, one team reported having synthesized the first mirror-image protein. That, in turn, set off the first clarion call about the risk: In response to the discovery, chemists at Purdue University pointed out, briefly, that mirror-life organisms, if they escaped from a lab, would be immune to any attack by “normal” life. A 2010 story in Wired highlighting early findings in the area noted that if such a microbe developed the ability to photosynthesize, it could obliterate life as we know it.

The synthetic biology community didn’t seriously weigh those threats then, says David Relman, a specialist who bridges infectious disease and microbiology at Stanford University and a trailblazer in studying the gut and oral microbiomes. The idea of a mirror microbe seemed too far beyond the actual progress on proteins. “This was almost a solely theoretical argument 20 years ago,” he says. 

Now the research landscape has changed. 

Scientists are quickly making progress on mirror images of the machinery cells use to make proteins and to self-replicate. Those components include DNA, which encodes the recipes for proteins; DNA polymerases, which help copy genetic material; and RNA, which carries recipes to ribosomes, the cell’s protein factories. If researchers could make self-replicating mirror ribosomes, then they would have an efficient way to produce mirror proteins. That could be used as a biological manufacturing method for therapeutics. But embedded in a self-replicating, metabolizing synthetic cell, all these pieces could give rise to a mirror microbe.

When synthetic biologists convened in Northern Virginia in 2019, they didn’t recognize how quickly the technology was advancing, and if they saw a threat at all, it may have been obscured by the blinding appeal of pushing the science forward. What’s become apparent now, says Glass, is that scientists in different disciplines, all related to mirror life, were largely unaware of what other scientists had been doing. Chemists didn’t know that synthetic biologists had made so much progress on creating mirror cells with natural chirality from scratch. Biologists didn’t appreciate that chemists were building ever-larger mirror macromolecules. “We tend to be siloed,” Glass says. And nobody, he says, had thought to seriously examine the immune system concerns that had already been raised in response to earlier work. “There was not an immunologist or an infectious disease person in the room,” Glass says, reflecting on the 2019 meeting. “I may have come closest, given that I work with pathogenic bacteria and viruses,” he adds, but his work doesn’t address how they cause infections in their hosts.

These scientists also didn’t know that around the same time as their meeting, another conversation about mirror life was happening—a darker dialogue that was as focused on danger as it was on discovery. Starting around 2016, researchers with a nonprofit called Open Philanthropy had begun compiling research files on catastrophic biological risks. The organization, which rebranded as Coefficient Giving in 2025, funds projects across a range of focus areas; it adheres to a divisive philanthropic philosophy called effective altruism, which advocates giving money to projects with the highest potential benefit to the most people. While that might not sound objectionable, critics point out that the metrics devotees use to gauge “effectiveness” can prioritize long-term solutions while neglecting social injustices or systemic problems. 

Someone in Open Philanthropy’s biosecurity group had suggested looking into the risks posed by mirror life. In 2019 the organization began funding research by Kevin Esvelt, who leads the Sculpting Evolution group at the MIT Media Lab, on biosecurity issues, including mirror life. He began reading up to see whether mirror life was something to worry about.

Esvelt made waves in 2013 for pioneering the use of CRISPR to develop a gene drive, a technology that could spread genetic changes introduced into a living organism through a whole population. Researchers are exploring its use, for example, to make mosquitoes hostile to the parasite that causes malaria—and, as a result, lower their chance of spreading it to humans. But almost immediately after he developed the tool, Esvelt argued against using it for profit, at least until proper safeguards could be set and its use in fighting malaria had been established. “Do you really have the right to run an experiment where if you screw up, it affects the whole world?” he asked, in this magazine, in 2016. At the Media Lab, Esvelt leads efforts to safely develop gene drives that can be deployed locally but prevented from spreading globally. 

Esvelt says he’s often thinking about the security risks posed by self-sustaining genetically engineered technologies, and research led him to suspect that the threat of mirror organisms hadn’t been seriously interrogated. The more he learned about microbial growth rates, predator-prey and microbe-microbe interactions, and immunology, the more he began to worry that mirror organisms, if impervious to the innate defenses of natural ones, could cause unstoppable infections in the event that they escaped the lab. 

Even if the first experimental iteration of such a germ were too fragile to survive in the environment or a human body, Esvelt says, it would be a light lift to genetically engineer new, more resilient versions with existing technology. Even worse, he says, the results could be weaponized. The possible path from 2019 to global annihilation seemed almost too direct, he found. 

But he wasn’t an expert in all the scientific fields involved in research on mirror life, so he started making calls. He first described his concerns to Relman one night in February 2022, at a restaurant outside Washington, DC. Esvelt hoped Relman would tell him he was wrong, that he’d missed something over the years of gathering data. Instead, Relman was troubled too.

The concern spreads

When Relman returned to California, he read more about the technology, the risks, and the role of chirality in the immune system and the environment. And he consulted experts he knew well—ecologists, other microbiologists, immunologists, all of them leaders in their fields—in an attempt to assuage his concerns. “I was hoping that they’d be able to say, I’ve thought about this, and I see a problem with your logic. I see that it’s really not so bad,” he says. “At every turn, that did not happen. Something about it was new to every person.” 

The concern spread. Relman worked with Jack Szostak, a professor of chemistry at the University of Chicago, and a group of researchers to see if it was possible to make an argument that mirror life wasn’t going to wipe out humanity. Included in that group was Kate Adamala, a synthetic biologist at the University of Minnesota. She was a natural choice: Adamala had shared the initial grant from the NSF, in 2019, to explore mirror-life technologies. 

She also became convinced the risk was real—and was dumbfounded that she hadn’t seen it earlier. “I wish that one sunny afternoon we were having coffee and we realized the world’s about to end, but that’s not what happened,” she says. “I’m embarrassed to admit that I wasn’t even the one that brought up the risks first.” Through late 2023 and early 2024, the endeavor began to take on the form of a rigorous scientific investigation. Experts were presented with a hypothesis—namely, that if mirror cells were built, they would pose an existential threat—and asked to challenge it. The goal was to falsify the hypothesis. “It would be great if we were wrong,” says Vaughn Cooper, a microbiologist at the University of Pittsburgh and president-elect of the American Society for Microbiology. 

Relman says that as the chemists and biologists learned more about one another’s work and began to understand what immunologists know about how living things defend themselves, they started to connect the dots and see an emerging picture of an unstoppable synthetic threat.

Timothy Hand, an immunologist at the University of Pittsburgh who hadn’t participated in the 2019 NSF meeting, wasn’t initially worried when he heard about mirror life, in 2024. “The mammalian immune system has this incredible capability to make antibodies against any shape,” he says. “Who cares if it’s a mirror?” But when he took a closer look at that process, he could see a cascade of potential problems far upstream of antibody production. Start with detection: Macrophages, which are cells the immune system uses to identify and dispatch invaders, use chiral sensing receptors on their surfaces. The proteins they use to grab on to those invaders, too, are chiral. That suggests the possibility that an organism could be infected with a mirror organism but not be able to detect it or defend against it. “The lack of innate immune sensing is an incredibly dangerous circumstance for the host,” Hand says.

By early 2024, Glass had become concerned as well. Relman and James Wagstaff, a structural biologist from Open Philanthropy, visited him at the Venter Institute to talk about the possibility of using synthetic cell technology—Glass’s specialty—to build mirror life. “At first I thought, This can’t be real,” Glass says. They walked through arguments and counterarguments. “The more this went on, the more I started feeling ill,” he says. “It made me realize that work I had been doing for much of the last 20 years could be setting the world up for this incredible catastrophe.” 

In the second half of 2024, the growing group of scientists assembled the report and wrote the policy forum for Science. Relman briefed policymakers at the White House and members of the national security community. Researchers met with the National Institutes of Health and the National Science Foundation. “We briefed the United Nations, the UK government, the government of Singapore, scientific funding organizations from Brazil,” says Glass. “We’ve talked to the Chinese government indirectly. We were trying to not blindside anybody.” 

A year and a half on, the push has had an impact. UNESCO has recommended a precautionary global moratorium on creating mirror-life cells, and major philanthropic organizations that fund science, including the Alfred P. Sloan Foundation, have announced they will not finance research leading to a mirror microorganism. The Bulletin of the Atomic Scientists highlighted considerations about mirror life in its most recent report on the Doomsday Clock. In March, the United Nations Secretary-General’s Scientific Advisory Board issued a brief highlighting the risks—noting, for example, that recent progress on building mirror molecules could reduce the cost of creating a mirror microbe. 

“I think no one really believes at this stage that we should make mirror life, based on the evidence that’s available,” says James Smith, the scientist who leads the MBDF, the nonprofit focused on assessing the risks of mirror life, which is funded by Coefficient Giving, the Sloan Foundation, and other organizations. The challenge now, Smith says, is for scientists to work with policymakers and bioethicists to figure out how much research on mirror life should be permitted—and who will enforce the rules.

Drawing the line

Not everyone is convinced that mirror organisms pose an existential threat. It’s difficult to verify predictions about how mirror microbes would fare in the immune system—or the larger world—without running experiments on them. Some scientists have pushed back against the doomsday scenario, suggesting that the case against mirror life offers an “inflated view of the danger.” Others have noted that carbohydrates called glycans already exist in both left- and right-handed forms—even in pathogens—and the immune system can recognize both of them. Experiments focused on interactions between the immune system and mirror molecules, they say, could help clarify the risks of mirror organisms and reduce uncertainty. 

Andy Ellington, a biotechnologist and synthetic biologist at the University of Texas at Austin, doesn’t think mirror organisms will come to fruition anytime soon. Even if they do, he isn’t sure they will pose a threat. “If there is going to be harm done to the human race, this is about position 382 on my list,” he says. But at the same time, he says it’s a complicated issue worth studying more, and he wants to see the conversations continue: “We’re operating in a space where there’s so much unknown that it’s very difficult for us to do risk assessment.” 

Even among those convinced that the worst-case scenario is possible, researchers still disagree over where to draw the line. What inquiries should be allowed and what should be prohibited? 

Adamala, of the University of Minnesota, and others see a natural line at ribosomes, the cellular factories that transform chains of amino acids into proteins. These would be a critical ingredient in creating a self-replicating organism, and Adamala says the path to getting there once mirror ribosomes are in place would be pretty straightforward. But Zhu, at Westlake, and others counter that it’s worth developing mirror ribosomes because they could possibly produce medically useful peptides and proteins more efficiently than traditional chemical methods. He sees a clear distinction, and a foundational gap, between that kind of technology and the creation of a living synthetic organism. “It is crucial to distinguish mirror-image molecular biology from mirror-image life,” he says. That said, he points out that many synthetic molecules and organisms containing unnatural components, including but not limited to the mirror-image subset, might pose health risks. Researchers, he says, should focus on developing holistic guidelines to cover such risks—not just those from mirror molecules. 

Even if the exact risk remains uncertain, Esvelt remains more convinced than ever that the work should be paused, perhaps indefinitely. No one has taken a meaningful swing at the hypothesis that mirror life could wipe out everything, he says. The primary uncertainties aren’t around whether mirror life is dangerous, he points out; they have more to do with identifying which bacterium—including what genes it encodes, what it eats, how it evades the immune system’s sentinels—could lead to the most serious consequences. “The risk of losing everything, like the entire future of humanity integrated over time, is not worth any small fraction of the economy. You just don’t muck around with existential risk like that,” he says. 

In some ways, scientists have been here before, working out rules and limits for research. Two years after the start of the covid-19 pandemic, for example, the World Health Organization published guidelines for managing risks in biological research. But the history is much deeper: Horrific episodes of human experimentation led to the establishment of institutional review boards to provide ethical oversight. In the early 1970s, in response to concerns over lab-acquired infections and biological weapons research, the US Centers for Disease Control and Prevention established biosafety levels (BSLs), which govern work on potentially dangerous biological experiments.

And in 1975—at the dawn of recombinant DNA research, which allows researchers to put genetic material from one organism into another—geneticists met at the Asilomar conference center in Pacific Grove, California, to hammer out rules governing the work. There were concerns over what would happen if some virus or bacterium, genetically engineered to have traits that would make it particularly dangerous for people, escaped from a lab. Scientists agreed to self-imposed restrictions, like a moratorium on research until new safety guidelines were in place. As a result of the meeting, in June 1976 the NIH issued rules that, among other things, categorized the risks associated with rDNA experiments and aligned them with the newly adopted BSL system.

Asilomar is often hailed as a successful model for scientific self-governance. But that perception reflects a tendency to recall the meeting through a nostalgic haze. “In fact, it was incredibly messy and human,” says Luis Campos, a historian of science at Rice University. Equally brilliant Nobelists argued on either side of the question of whether to rein in rDNA research. Technical discussions dominated; talks about who would be affected by the technology were missing. The meeting didn’t start establishing guidelines, says Campos, until the lawyers mentioned liability and lab leaks. 

For now it’s unclear whether these examples of self-governance, which arose from the demonstrated risks of existing technologies, hold useful lessons for the mirror-life community. Three competing images of the future are coming into focus: Mirror life might not be possible, it might be possible but not threatening, or it might be possible and capable of obliterating all life on Earth.

Scientists may be censoring themselves out of fear and speculation. To some, shutting down the work seems necessary and urgent; to others, it is unnecessarily limiting. What’s clear is that the question of what to do about mirror life has been both illuminating and disorienting, pushing scientists to interrogate not only their current research but where it might lead. This is uncharted territory. 

Stephen Ornes is a science writer based in Nashville, Tennessee.

Correction (April 15): An earlier version of this article incorrectly stated that David Relman briefed the National Security Agency. Relman says he briefed members of the national security community.

Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram

From inside a money-laundering center in Cambodia, an employee opens a popular Vietnamese banking app on his phone. The app asks him to upload a photo associated with the account, so he clicks on a picture of a 30-something Asian man.

Next, the app requests to open the camera for a video “liveness” check. The scammer holds up a static image of a woman bearing no resemblance to the man who owns the account. After a 90-second wait—as the app tells him to readjust the face inside the frame—he’s in. 

The exploit he’s demonstrating, in a video shared with me by a cyberscam researcher named Hieu Minh Ngo, is possible thanks to one of a growing range of illicit hacking services, readily available for purchase on Telegram, that are designed to break “Know Your Customer” (KYC) facial scans.

These banking and crypto safeguards are supposed to confirm that an account belongs to a real person, and that the user’s face matches the identity documents that were provided to open the account. But scammers are bypassing them in order to open mule accounts and launder money. Rather than using a live phone camera feed for a liveness check, the hacks typically deploy a tool known as a virtual camera. Users can replace the video stream with other videos or photos—depicting a real or deepfake person or even an object.

As financial institutions enact enhanced security measures aimed at stopping cyberscammers, these workarounds are the latest round in the cat-and-mouse game between criminal operators and the financial services industry.

Over the course of a two-month investigation earlier this year, MIT Technology Review identified 22 Chinese-, Vietnamese-, and English-language public Telegram channels and groups advertising bypass kits and stolen biometric data. The software kits use a variety of methods to compromise phone operating systems and banking applications, claiming to enable users to get around the compliance checks imposed by financial institutions ranging from major crypto exchanges such as Binance to name-brand banks like Spain’s BBVA. 

“Specializing in bank services—handling dirty money,” reads the since-deleted Telegram bio of the program used by the Cambodian launderer, complete with a thumbs-up emoji. “Secure. Professional. High quality.” Some of the channels and groups had thousands of subscribers or members, and many posted bullet points listing their services (“All kinds of KYC verification services”; “It’s all smooth and seamless”) alongside videos purporting to show successful hacks. 

Telegram says that after reviewing the accounts, it removed them for violating its terms of service. But such online marketplaces proliferate easily, and multiple channels and groups advertising similar tools remain active.

Banks and butchers

The rise in KYC bypasses has occurred alongside an expansion of a global industry in “pig-butchering” cyberscams. Crypto platforms and banks around the world are facing increasing scrutiny over the flow of illegally obtained money, including profits from such scams, through their platforms. This has prompted tightened banking regulations in countries such as Vietnam and Thailand, where governments have increased customer verification and fraud monitoring requirements and are pushing for stronger anti-money-laundering safeguards in the crypto industry.

Chainalysis, a US blockchain analysis firm, estimates that around $17 billion was stolen in 2025 in crypto scams and fraud, up from $13 billion in 2024. The United Nations Office on Drugs and Crime, meanwhile, warned in a recent report that the expansion of Asian scam syndicates in Africa and the Pacific has helped the industry “dramatically scale up profits.”

That combination of factors—more scrutiny, but also more revenue—has vaulted KYC bypasses to the center of the online marketplace for cyberscam and casino money launderers. Although estimates vary, cybersecurity researchers say these kinds of attacks are rising: The biometrics verification company iProov estimated that virtual-camera attacks were more than 25 times as common worldwide in 2024 as in 2023, while Sumsub, a company providing KYC services, reported that “sophisticated” or multi-step fraud attempts, including virtual-camera bypasses, almost tripled last year among its clients.

Three financial institutions that were named as targets on such Telegram channels—the world’s largest crypto exchange, Binance, as well as BBVA and UK-based Revolut—told me they’re aware of such bypasses and emphasize that they’re an industry-wide challenge. A spokesperson from Binance said it has “observed attempts of this nature to circumvent our controls,” adding that “we have successfully prevented such attacks and remain confident in our systems.” BBVA and Revolut declined to comment on whether their safeguards had been breached.

It’s difficult to estimate success rates, because companies may not be aware of bypasses—or report them—until later. “What’s important is what we don’t see,” Artem Popov, Sumsub’s head of fraud prevention products, told me, referring to attacks that go undetected. “There’s always part of the story where it might be completely hidden from our eyes, and from the eyes of any company in the industry, using any type of KYC provider.”

How criminals navigate a compliance maze 

Advertisements for the exploits appear simple enough, but on the back end, building a successful bypass is complex and often involves multiple methods. Some channels offer to jailbreak a physical phone so that scammers can trigger the use of a virtual camera (VCam) instead of the built-in one whenever they’d like. Other hacks inject code known as a “hooking framework” into a financial institution’s app that triggers the VCam to open. Either way, VCams can be used to dupe KYC safeguards with images or videos that replace genuine, live video of the account’s owner.

Sergiy Yakymchuk, CEO of Talsec, a cybersecurity company that primarily serves financial institutions, reviewed details from the Telegram channels identified by MIT Technology Review and says they are consistent with successful tactics used against his banking and crypto clients. His team received help requests from banks and exchanges for roughly 30 VCam-based hacks over the past year, up from fewer than 10 in 2023. 

Increasingly, hackers compromise both the phone itself and the code of the financial institutions’ apps before feeding the virtual camera a mix of stolen biometrics and deepfakes, Yakymchuk says.

“Some time ago, it was enough to decompile the app of a bank and distribute this on Telegram, and that was everything you needed,” he says. “Now it’s not enough, because you have KYC—and more and more things are needed.”

For money launderers, KYC bypasses have “become essential for everything right now—because scam compounds need to move money,” says Ngo, the researcher who shared the demo video. A convicted former hacker who became a cybersecurity advisor for the Vietnamese government, Ngo now runs an anti-scam nonprofit and helps law enforcement investigate money laundering. 

He describes how the process works in the case of pig-butchering scams: Funds originating with victims are received into bank accounts controlled or rented by money-laundering networks known colloquially as “water houses.” Money launderers use KYC bypasses to access the accounts and quickly redistribute the profits before converting them into digital assets—typically in the form of the stablecoin Tether, a type of cryptocurrency that is pegged to the US dollar.

These transactions often happen in seconds, under tightly orchestrated management. “They know, very clearly, the flow of how the banks verify or authenticate accounts,” Ngo says. 

A cat-and-mouse game 

The growth of cyberscam money laundering has led to heightened scrutiny of financial institutions. In 2023, Binance pleaded guilty in US federal court to operating without anti-money-laundering safeguards. Donald Trump pardoned former Binance CEO Changpeng Zhao last October.

Recent analysis from the International Consortium of Investigative Journalists found that after Zhao’s guilty plea, more than $400 million continued to move to Binance from Huione Group, a Cambodia-based firm that the US sanctioned after the Treasury Department deemed it a “critical node” for money laundering in pig-butchering scams.

Binance says it has “state-of-the-art security systems” that prevented billions in fraud losses and that the company processed more than 71,000 law enforcement requests in 2025.

But John Griffin, a finance and blockchain expert at the University of Texas at Austin, does not think the exchanges are sufficiently secure. “Even though they have all this press about ‘Oh, yes, we’ve changed this and that’—well, the proof is in the pudding. The criminals are still using your exchange,” Griffin told me of the industry at large. “So there must be holes.” (Binance says it “objects to the dubious findings” of Griffin’s work tracking the flow of criminal profits across exchanges like Binance, Huobi, OKX, and Tokenlon, calling it “misleading at best and, at worst, wildly inaccurate.”)

Binance also pointed out that some purported bypass services are themselves scams, casting doubt on whether successful bypasses are as widespread as the Telegram marketplace may suggest. Engaging with such services “exposes individuals to significant security risks,” a spokesperson said. “Even where access appears to be granted, accounts are often already restricted by internal detection and compliance controls, rendering them nonfunctional for trading or withdrawals.”

Regulators around the world are trying to catch up. In Thailand, where citizens’ bank accounts regularly serve as money mules for cyberscams based in neighboring Myanmar and Cambodia, new legislation has enhanced KYC monitoring, limited daily transactions, and strengthened oversight bodies’ ability to suspend accounts. The US money-laundering regulator, the Financial Crimes Enforcement Network, issued a warning against KYC deepfakes and the use of VCams in late 2024, encouraging platforms to track broader transaction patterns to identify money laundering.

For scammers, any new security or reporting requirements will make bypasses harder, but “it’s not going to stop them,” Ngo says. “It’s just a matter of time.”

The problem with thinking you’re part Neanderthal

You’ve probably heard some version of this idea before: that many of us have an “inner Neanderthal.” That is to say, around 45,000 years ago, when Homo sapiens first arrived in Europe, they met members of a cousin species—the broad-browed, heavier-set Neanderthals—and, well, one thing led to another, which is why some people now carry a small amount of Neanderthal DNA. 

This DNA is arguably the 21st century’s most celebrated discovery in human evolution. It has been connected to all kinds of traits and health conditions, and it helped win the Swedish geneticist Svante Pääbo a Nobel Prize.

But in 2024, a pair of French population geneticists called into question the foundation of the popular and pervasive theory. 

Lounès Chikhi and Rémi Tournebize, then colleagues at the Université de Toulouse, proposed an alternative explanation for the very same genomic patterns. The problem, they said, was that the original evidence for the inner Neanderthal was based on a statistical assumption: that humans, Neanderthals, and their ancestors all mated randomly in huge, continent-size populations. That meant a person in South Africa was just as likely to reproduce with a person in West Africa or East Africa as with someone from their own community. 

Archaeological, genetic, and fossil evidence all shows, though, that Homo ­sapiens evolved in Africa in smaller groups, cut off from one another by deserts, mountains, and cultural divides. People sometimes crossed those barriers, but more often they partnered up within them. 

In the terminology of the field, this dynamic is called population structure. Because of structure, genes do not spread evenly through a population but can concentrate in some places and be totally absent from others. The human gene pool is not so much an Olympic-size swimming pool as a complex network of tidal pools whose connectivity ebbs and flows over time.

This dynamic greatly complicates the math at the heart of evolutionary biology, which long relied on assumptions like randomly mating populations to extract general principles from limited data. If you take structure into account, Chikhi told me recently, then there are other ways to explain the DNA that some living people share with Neanderthals—ways that don’t require any interspecies sex at all.

“I believe most species are spatially organized and structured in different, complex ways,” says Chikhi, who has researched population structure for more than two decades and has also studied lemurs, orangutans, and island birds. “It’s a general failure of our field that we do not compare our results in a clear way with alternative scenarios.” (Pääbo did not respond to multiple requests for comment.)

Chikhi and Tournebize’s argument is about population structure, yes, but at heart, it is actually one about methods—how modern evolutionary science deploys computer models and statistical techniques to make sense of mountains upon mountains of genetic data. 

They’re not the only scientists who are worried. “People think we really understand how genomes evolve and can write sophisticated algorithms for saying what happened,” says William Amos, a University of Cambridge population geneticist who has been critical of the “inner Neanderthal” theory. But, he adds, those models are “based on simple assumptions that are often wrong.” 

And if they’re wrong, what’s at stake is far more than a single evolutionary mystery. 

A captivating story of interspecies passion

Back in 2010, Pääbo’s lab pulled off something of a miracle. The researchers were able to extract DNA from nuclei in the cells of 40,000-year-old Neanderthal bones. DNA breaks down quickly after death, but the group got enough of it from three different individuals to produce a draft sequence of the entire Neanderthal genome, with 4 billion base pairs. 

As part of their study, they performed a statistical test comparing their Neanderthal genome with the genomes of five present-day people from different parts of the world. That’s how they discovered that modern humans of non-African ancestry had a small amount of DNA in common with Neanderthals, a species that diverged from the Homo sapiens line more than 400,000 years ago. Modern humans of African ancestry, and our closest living relative, the chimpanzee, did not share that DNA.
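The test they used is now widely known as the D-statistic, or “ABBA-BABA” test: across millions of genomic sites, it asks whether one test genome shares derived alleles with the Neanderthal genome more often than the other does. Here is a minimal sketch of the counting logic in Python; the four sequences are invented toy data, not real genomes:

```python
# Toy illustration of the ABBA-BABA "D-statistic" (not the published
# pipeline). All sequence data below is invented for demonstration.

def d_statistic(pop1, pop2, neanderthal, chimp):
    """Count ABBA/BABA site patterns across four aligned sequences.

    At each site, the chimp (outgroup) allele plays the role of "A" and
    the alternative allele "B". An ABBA site is one where pop2 matches
    the Neanderthal; a BABA site is one where pop1 does.
    """
    abba = baba = 0
    for a1, a2, n, c in zip(pop1, pop2, neanderthal, chimp):
        if a1 == a2:           # uninformative: both test genomes agree
            continue
        if a2 == n and a1 == c:
            abba += 1          # pattern A-B-B-A
        elif a1 == n and a2 == c:
            baba += 1          # pattern B-A-B-A
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)

# Hypothetical aligned snippets: the "non-African" sequence shares more
# derived alleles with the Neanderthal than the "African" one does.
african     = "ACGTACGTTCGT"
non_african = "ACGTTCGTACGA"
neanderthal = "ACGTTCGTTCGA"
chimp       = "ACGTACGTACGT"

d = d_statistic(african, non_african, neanderthal, chimp)
print(f"D = {d:.2f}")  # D = 0.33: excess allele sharing with Neanderthals
```

A D of zero is what you would expect under random mating with no gene flow; Pääbo’s team found significantly positive values for non-Africans, which they read as a signature of interbreeding—and which Chikhi and Tournebize argue can also arise from population structure alone.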

This model of a Neanderthal man was exhibited in the “Prehistory Gallery” at London’s Wellcome Historical Medical Museum in the 1930s.
WELLCOME COLLECTION

Pääbo’s team interpreted this as evidence of sexual reproduction between ancient Homo sapiens and the Neanderthals they encountered after they expanded out of Africa. “Neanderthals are not totally extinct,” Pääbo said to the BBC in 2010. “In some of us, they live on a little bit.”

The discovery was monumental on its own—but even more so because it reversed a previous consensus. More than a decade earlier, in 1997, Pääbo had sequenced a much smaller amount of Neanderthal DNA, in that case from a cell structure called a mitochondrion. It was different enough from Homo sapiens mitochondrial DNA for his team to cautiously conclude there had been “little or no interbreeding” between the two species. 

After 2010, though, the idea of hybridization, also called admixture, effectively became canon. Top journals like Science and Nature published study after study on the inner Neanderthal. Some scientists have argued that Homo sapiens would never have adapted to colder habitats in Europe and Asia without an infusion of Neanderthal DNA. Other research teams used Pääbo’s techniques to find genetic traces of interbreeding with an extinct group of hominins in Asia, called the Denisovans, and a mysterious “ghost lineage” in Africa. Biologists used similar tests to find evidence of interbreeding between chimpanzees and bonobos, polar and brown bears, and all kinds of other animals. 

The inner-Neanderthal hypothesis also took a turn for the personal. Various studies linked Neanderthal DNA to a head-spinning range of conditions: alcoholism, asthma, autism, ADHD, depression, diabetes, heart disease, skin cancer, and severe covid-19. Some researchers suggested that Neanderthal DNA had an impact on hair and skin color, while others assigned individuals a “NeanderScore” that was correlated with skull shape and prevalence of schizophrenia markers. Commercial genetic testing companies like 23andMe started offering customers Neanderthal ancestry reports. 

The inner Neanderthal became a story we could tell ourselves about our flaws and genetic destiny: Don’t blame me; blame the prognathic caveman hiding in my cells. Or as Latif Nasser, a host of the popular-science program Radiolab, put it when he was hospitalized with Crohn’s disease, another Neanderthal-associated condition: “I just keep imagining these tiny Neanderthals … just, like, stabbing me and drawing these little droplets of blood out of me.”

“These things become meaningful to people,” Chikhi says. “What we say will be important to how people view themselves.” 

The pitfalls of simplistic solutions 

When population geneticists built the theoretical framework for evolutionary biology in the early 20th century, genes were only abstract units of heredity inferred from experiments with peas and fruit flies. Population genetics developed theory far more quickly than it accumulated data. As a result, many data-driven scientists dismissed the study of evolution as a form of storytelling based on unexamined assumptions and preconceived ideas.

By the ’90s, though, genes were no longer abstractions but sequenced segments of DNA. Genomic sequencing grounded evolutionary studies in the kind of hard data that a chemist or physicist could respect. 

Yet biologists could not simply read evolutionary history from genomes as though they were books. They were trying to determine which of a nearly infinite number of plausible histories was the most likely to have created the patterns they observed in a small sample of genomes. For that, they needed simplified, algorithmic models of evolution. The study of evolution shifted from storytelling to statistics, and from biology to computer science. 

That suited Chikhi, who as a child was drawn to the predictable laws and numerical precision of math and science. He entered the field in the mid-’90s just as the first big studies of human DNA were settling old debates about human origins. DNA showed that Africa harbored far more genetic diversity than the entire rest of the planet. The new evidence supported the idea that modern humans evolved for hundreds of thousands of years in Africa and expanded to the other continents only in the last 100,000 years. For Chikhi, whose parents were Algerian immigrants, this discovery was a powerful challenge to the way some archaeologists and biologists talked about race. DNA could be used to deconstruct rather than encourage the pernicious idea that human races had deep-seated evolutionary differences based on their places of origin. 

At the same time, though, he was wary of the tendency to treat DNA as the final verdict on open questions in evolution. Chikhi had been surprised when, back in 1997, Pääbo and his team used that small amount of mitochondrial DNA to rule out hybridization between Homo sapiens and Neanderthals. He didn’t think that the absence of Neanderthal DNA there necessarily meant it wouldn’t be found elsewhere in the Homo sapiens genome.

Chikhi’s own research in the aughts opened his eyes to the gaps between historical reality and models of evolution. For one, despite the assumption of random mating, none of the animals Chikhi studied actually mated randomly. Orangutans lived in highly fragmented habitats, which restricted their pool of potential mates, and female birds were often extremely picky about their male partners. 

These factors could confound an evolutionary biologist’s traditional statistical tool kit. Scientists were starting to apply a mathematical technique to estimate historical population sizes for a species from the genome of just a single individual. This method showed sharp population declines in the histories of many different species. Chikhi realized, though, that the apparent declines could be an artifact of treating a structured population as one that evolved with random mating; in that case, the technique could indicate a bottleneck even if all the subgroups were actually growing in size. “This is completely counterintuitive,” he says. 

That’s at least partly why, when Pääbo’s 2010 Neanderthal genome came out, Chikhi was impressed with the sheer technical accomplishment but also leery of the findings about hybridization. “It was the type of thing we conclude too quickly based on genetic data,” he says. Pääbo’s work mentioned population structure as a possible alternative explanation—but didn’t follow up.

Just a couple of years later, a pair of independent scientists named Anders Eriksson and Andrea Manica picked up the idea, building a model with simple population structure that explicitly excluded admixture. They simulated human evolution starting from 500,000 years ago and found that their model produced the same genomic patterns Pääbo’s group had interpreted as evidence of hybridization.

“Working with structured models is really out of the comfort zone of a lot of population geneticists,” says Eriksson, now a professor at the University of Tartu in Estonia.

Their research impressed Chikhi. “At the time, I thought people would focus on population structure in the evolution of humans,” he says. Instead, he watched as the inner-Neanderthal hypothesis took on a life of its own. Scientists produced new methods to quantify hybridization but rarely examined whether population structure would yield the same results. To Chikhi, this wasn’t science; it was storytelling, like some of the old narratives about the evolution of racial differences. 

Chikhi and Tournebize decided to take a crack at the problem themselves. “I’ve always been very skeptical about science, and population genetics in particular,” says Tournebize, now a researcher at the French National Research Institute for Sustainable Development. “We make a lot of assumptions, and the models we use are very simplistic.” As detailed in a 2024 paper published in Nature Ecology & Evolution, they built a model of human evolution that replaced randomly mating continent-wide populations with many smaller populations linked by occasional migration. Then they let it run—a million times.

At the end of the simulation, they kept the 20 scenarios that produced genomes most similar to the ones in a sample of actual Homo sapiens and Neanderthals. Many of these scenarios produced long segments of DNA like the ones their peers argued could only have been inherited from Neanderthals. They showed that several statistics, which other scientists had proposed as measurements of Neanderthal DNA, couldn’t actually distinguish between hybridization and population structure. What’s more, they showed that many of the models that supported hybridization failed to accurately predict other known features of human evolution.
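The simulate-and-filter procedure described above is a form of approximate Bayesian computation: draw parameters from a prior, simulate data, and keep only the runs whose summaries best match the observed data. A toy sketch of the idea, with a deliberately trivial stand-in model (the parameter, prior, and “summary statistic” here are all invented for illustration):

```python
# Sketch of rejection-style approximate Bayesian computation, the kind
# of simulate-and-filter approach used in the study. The "simulator"
# here is a trivial stand-in, not a population-genetic model.
import random

random.seed(1)

observed_mean = 0.42  # stand-in for a summary statistic of real genomes

def simulate(param, n=50):
    # Trivial simulator: the "genome summary" is just a noisy
    # sample mean centered on the parameter.
    return sum(random.gauss(param, 0.5) for _ in range(n)) / n

runs = []
for _ in range(10_000):
    param = random.uniform(0, 1)  # draw a scenario from the prior
    distance = abs(simulate(param) - observed_mean)
    runs.append((distance, param))

# Keep the 20 best-matching scenarios, mirroring how the study filtered
# a million structured-population simulations down to the closest fits.
accepted = [p for _, p in sorted(runs)[:20]]
print(min(accepted), max(accepted))  # accepted parameters cluster near 0.42
```

The accepted scenarios approximate the range of histories compatible with the data; Chikhi and Tournebize’s point is that when the simulator includes population structure, that range includes histories with no interbreeding at all.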

“A model will say there was admixture but then predict diversity that is totally incompatible with what we actually know of human diversity,” Chikhi says. “Nobody seems to care.”

So how did Neanderthal DNA wind up in living people if not via interspecies passion? Chikhi and Tournebize think it’s more likely that it was inherited by both Neanderthals and some sapiens groups in Africa from a common ancestor living at least half a million years ago. If the sapiens groups carrying those genetic variants included the people who migrated out of Africa, then the two human species would have already had the DNA in common when they came into contact in Europe and Asia—no sex required. 

“The interpretation of genetic data is not straightforward,” Chikhi says. “We always have to make assumptions. Nobody takes data and magically comes up with a solution.” 

Embracing the uncertainty 

Most of the half-dozen population geneticists I spoke with praised Chikhi and Tournebize’s ingenuity and appreciated the spirit of their critique. “Their paper forces us to think more critically about the model we use for inference and consider alternatives,” says Aaron Ragsdale, a population geneticist at the University of Wisconsin–Madison. His own work likewise suggests that the earliest Homo sapiens populations in Africa were probably structured—and that this is the likely reason for genomic patterns that other research groups had attributed to hybridization with a mysterious “ghost lineage” of hominins in Africa.

Yet most researchers still believe that modern humans and Neanderthals did probably have children with each other tens of thousands of years ago. Several pointed to the fact that fossil DNA of Homo sapiens who died thousands of years ago had longer chunks of apparent Neanderthal DNA than living people, which is exactly what you would expect if they had a more recent Neanderthal ancestor. (To address this possibility, Chikhi and Tournebize included DNA from 10 ancient humans in their study and found that most of them fit the structured model.) And while the Harvard population geneticist David Reich, who helped design the statistical test from Pääbo’s 2010 study, declined an interview, he did say he thought Chikhi and Tournebize’s model was “weak” and “very contrived,” adding that “there are multiple lines of evidence for Neanderthal admixture into modern humans that make the evidence for this overwhelming.” (Two other authors of that study, Richard Green and Nick Patterson, did not respond to requests for comment.) 

Nevertheless, most scientists these days welcome the development of structured, or “spatially explicit,” models that account for the fact that any given member of a population is usually more closely related to individuals living nearby than to those living far away. 

Other scientists also say that random mating isn’t the only assumption in population genetics that merits scrutiny. Models rarely factor in natural selection, which can also create genetic patterns that look like hybridization. Another common assumption is that everyone’s DNA mutates at the same, constant rate. “All the theory says the mutation rate is fixed,” says Amos, the Cambridge population geneticist. But he thinks that rate would have slowed drastically in the group of Homo sapiens that expanded to Europe around 45,000 years ago. This, too, could have created genomic patterns that other scientists interpret as evidence of interbreeding with Neanderthals. 

The point here isn’t that a complex model of evolution with many moving pieces is necessarily better than a simple one. Scientists need to reduce complexity in order to see the underlying processes more clearly. But simple models require assumptions, and scientists need to reevaluate those assumptions in light of what they learn. “As you get more data, you can justify more complex models of the world,” says Mark Thomas, a population geneticist at University College London, who wrote a history of random mating in population genetics that highlighted how the field was starting to see it as “a limiting assumption as opposed to a simplifying one.” 

It can feel discouraging to couch conversations about the past in confusing terms like “population structure” and “mutation rates.” It seems almost antithetical to the spirit of science to talk more about uncertainty at the same time we are developing powerful technologies and enormous data sets for analyzing evolution. These tools often yield novel answers, but they can also limit the questions we ask. The French archaeologist Ludovic Slimak, for example, has complained that the idea of the inner Neanderthal has domesticated our image of Neanderthals and made it difficult to imagine their humanity as distinct from our own. Investigating Neanderthal DNA is sexier to many young researchers than searching for archaeological and fossil evidence of how Neanderthals actually lived. 

Loosening our attachment to certain narratives of evolution can create space for wonder at the sheer complexity of life’s history. Ultimately, that’s what Chikhi and Tournebize hope to do. After all, they don’t believe the question of population structure versus hybridization is either-or. It’s possible, and even likely, that both played a role in human evolution. “Our structured model does not necessarily mean that no admixture ever took place,” Chikhi and Tournebize wrote in their study. “What our results suggest is that, if admixture ever occurred, it is currently hard to identify using existing methods.” 

Future methods might disentangle the different factors, but it’s just as important, Chikhi says, for scientists to be up-front about their assumptions and test alternatives. “There’s still so much uncertainty on so many aspects of the demographic history of Neanderthals and Homo sapiens,” he notes. 

Keep that in mind the next time you read about your inner Neanderthal. The association between this DNA and some diseases may be real, of course—but would journals publish these studies without the additional claim that the DNA is from Neanderthals? Any good storyteller knows that sex sells, even in science. 

Ben Crair is a science and travel writer based in Berlin.

Coming soon: 10 Things That Matter in AI Right Now

Each year we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will have the biggest impact on how we live and work.

This year, however, we had a dilemma. While our final picks encompass all our core coverage areas (energy, AI, and biotech, plus a few more), our 2026 list was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in! (The ones that made it were AI companions, mechanistic interpretability, generative coding, and hyperscale data centers.) Many great ideas fell by the wayside to keep the list as wide-ranging as possible.

Well, that got us thinking: What if we made an entirely new list that was all about AI? We got excited about that idea—and before we knew it we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now. It’s an entirely new annual list that we’re proud to be publishing for the first time on April 21, 2026. We’ll unveil it on stage for attendees at our signature AI conference, EmTech AI, held on MIT’s campus (it’s not too late to get tickets), and then publish the list online later that day.

The process for coming up with the list was similar to the way we pick our 10 Breakthrough Technologies. We asked our AI team of reporters and editors to propose ideas, put them all in a document, and engaged in some robust discussion. Eventually, we voted for our favorites and whittled the long list down to a final 10.

But there’s a slight difference between this list and our 10 Breakthrough Technologies. AI is already such a big part of our lives that we didn’t want to restrict ourselves to nominating only technologies. Instead, we wanted to put together a definitive annual list that highlights what we believe are the biggest ideas, topics, and research directions in AI right now. So yes, it will include cutting-edge AI technologies, but it will also feature other trends and developments in AI that we want to bring to our subscribers’ attention.

Think of it as a sneak peek inside the collective brain of our crack AI reporting team: These are the things that our reporters will be watching this year. We intend to follow the items on this list closely, and you will see that reflected in the news and feature stories we publish in 2026.

For us, 10 Things That Matter in AI Right Now is a guide to how we view the current AI landscape. It will be a source of discussion, debate, and maybe some arguments! We are so excited to share it with you on April 21. If you want to be among the first to see it—join us at EmTech AI or become a subscriber to livestream the announcement.

NASA is building the first nuclear-reactor-powered interplanetary spacecraft. How will it work?

In brief:

• A US nuclear-powered spacecraft may head to Mars: NASA has announced SR-1, the first-ever nuclear-reactor-powered interplanetary spacecraft, with a planned Mars launch before the end of 2028—a timeline experts call aggressive but exciting.
• Nuclear could beat chemical and solar power: Unlike traditional propulsion, nuclear electric propulsion is orders of magnitude more efficient and doesn’t depend on sunlight, making it better suited for long, fast journeys through the solar system.
• The design is already taking shape: SR-1 will resemble a giant fletched arrow, with a recycled Gateway space station propulsion unit at the rear and a 20-kilowatt uranium reactor up front, cooled by enormous fins that vent excess heat into space.
• The stakes go beyond engineering: With China and Russia pursuing their own deep-space nuclear programs, SR-1 is as much a geopolitical gambit as a scientific one—and success could put the US ahead in the race to land humans on Mars.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.

These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear-reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”

A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.

While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.

Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.

Nuclear propulsion 101

Traditionally, spaceflight has been powered by chemical propulsion. Propellants such as liquid hydrogen and liquid oxygen are mixed and ignited inside a rocket engine; the searingly hot exhaust from this combustion is ejected through a nozzle, which propels the rocket forward.

Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible. 

“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia. 
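To put rough numbers on that efficiency gap, here is a back-of-the-envelope comparison using standard physics figures (not mission-specific data): fission fuel packs millions of times more energy per kilogram than chemical propellant.

```python
# Back-of-the-envelope comparison of nuclear vs. chemical energy density,
# illustrating the "orders of magnitude" gap described above. Values are
# textbook physics constants, not figures from the SR-1 program.

MEV_TO_J = 1.602e-13         # joules per megaelectronvolt
ATOMIC_MASS_KG = 1.6605e-27  # kilograms per atomic mass unit

# Each uranium-235 fission releases roughly 200 MeV;
# one U-235 atom has a mass of about 235 atomic mass units.
fission_energy_per_kg = (200 * MEV_TO_J) / (235 * ATOMIC_MASS_KG)

# Burning a stoichiometric hydrogen/oxygen mix releases roughly
# 13.4 megajoules per kilogram of total propellant.
chemical_energy_per_kg = 13.4e6

ratio = fission_energy_per_kg / chemical_energy_per_kg
print(f"nuclear:  {fission_energy_per_kg:.2e} J/kg")
print(f"chemical: {chemical_energy_per_kg:.2e} J/kg")
print(f"ratio:    ~{ratio:.0e}")  # roughly a six-million-fold difference
```

Fissioning a kilogram of uranium-235 releases on the order of 10¹³–10¹⁴ joules, versus about 10⁷ joules from burning a kilogram of hydrogen/oxygen propellant: more than six orders of magnitude.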

The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem, since sunlight isn’t always available, particularly when a planet or moon gets in the way—and as you head toward the outer solar system, beyond Mars, there’s simply less of it. 

To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.

So how will a nuclear-reactor-powered spacecraft work? 

Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.

Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down. 

Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.

To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they involve an incredibly powerful nuclear reaction, after all).

But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space. 

“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”

One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.” 

Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
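The payoff of a faster exhaust shows up in the Tsiolkovsky rocket equation, which links a spacecraft's achievable change in velocity to its engine's specific impulse and how much of its mass is propellant. The specific-impulse figures below are typical published values (roughly 450 seconds for a hydrogen-oxygen engine, roughly 900 seconds for nuclear thermal designs), not numbers from SR-1.

```python
import math

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf).
# Isp values are typical published figures, not SR-1 specifications.
G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Velocity change for a given specific impulse and wet/dry mass ratio."""
    return isp_s * G0 * math.log(mass_ratio)

mass_ratio = 5.0  # i.e., 80% of the vehicle's initial mass is propellant
print(f"Chemical (Isp ~450 s): {delta_v(450, mass_ratio):,.0f} m/s")  # ~7,100
print(f"NTP      (Isp ~900 s): {delta_v(900, mass_ratio):,.0f} m/s")  # ~14,200
```

Doubling the specific impulse doubles the velocity change from the same propellant load, which is what makes the hazards of handling hot hydrogen worth considering at all.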

Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate electricity, which ionizes a gas and blasts it out of the spacecraft, generating thrust.

Both NTP and NEP have been investigated by US researchers, because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time exposed to it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”

How to build a nuclear-powered spaceship

For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.

For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.

What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.) 


The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.

According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.” 

Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work? 

For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth. 

If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035. 

Whether it flies or fails in space, SR-1’s operations should help NASA put a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”

And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline. 

“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”

Job titles of the future: Wildlife first responder

Grizzly bears have made such a comeback across eastern Montana that in 2017, the state hired its first-ever prairie-based grizzly manager: wildlife biologist Wesley Sarmento. 

For some seven years, Sarmento worked to keep both the bears, which are still listed as threatened under the Endangered Species Act, and the humans, who are sprawling into once-wild spaces, out of trouble. Based in the small city of Conrad, population 2,553, he acted sort of like a first responder, trying to defuse potentially dangerous situations. He even got caught in some himself—which is why, before he left the role to pursue a PhD, he turned to drones to get the job done. 

The bear necessities

Sarmento was studying mountain goats in Glacier National Park when he first started working with bears. To better understand how goats responded to the apex predator, he dressed up in a bear costume once a week for over three years. 

When he later started as grizzly manager, he often drove long distances to push bears away from farms. Bears are drawn to spilled or leaking grains, and an open silo quickly turns into a buffet. Sarmento would typically arrive armed with a shotgun, cracker shells, and bear spray, but after he narrowly escaped getting mauled one day, he knew he had to pivot.

“In that moment,” he says, “I was like, I am gonna get myself killed.”

A bird’s-eye view

Sarmento first turned to two Airedale dogs, a breed known for deterring bears on farms, but the dogs were easily sidetracked. Meanwhile, drones were slowly becoming more common tools for biologists in a range of activities, including counting birds and mapping habitats.

He first took one into the field in 2022, when a grizzly mom and two cubs were found rummaging around in a silo outside of town. The drone’s infrared sensors helped him quickly find their location, and he used the aircraft’s sound to drive them away from the property. (Researchers suspect bears instinctively dislike the whir of blades because it sounds like a swarm of bees.) “The whole thing was so clean and controlled,” he says. “And I did it all from the safety of my truck.”

Since then, the flying machine that Sarmento bought for $4,000—a fairly simple model with a thermal camera and 30 minutes of battery life—has shown its potential for detecting grizzlies in perilous terrain he’d otherwise have to approach on foot, like dense brush or hard-to-reach river bottoms.

A new technological foundation

Now studying wildlife ecology at the University of Montana, Sarmento is hoping to design a drone that campus police can use to deter black bears from school grounds. In the future, he hopes, AI image recognition might be broadly integrated into his wildlife management work—maybe even helping drones identify bears and autonomously divert them from high-traffic areas.

All this helps keep bears from learning behaviors that lead to conflict with people—which typically ends badly for the bear and is occasionally fatal for humans.

“The out-of-the-box technology doesn’t exist yet, but the hope is to keep exploring applications,” he says. “Drones are the next frontier.” 

Emily Senkosky is a writer with a master’s degree in environmental science journalism from the University of Montana.