Meet the researchers testing the “Armageddon” approach to asteroid defense

One day, in the near or far future, an asteroid about the length of a football stadium will find itself on a collision course with Earth. If we are lucky, it will land in the middle of the vast ocean, creating a good-size but innocuous tsunami, or in an uninhabited patch of desert. But if it has a city in its crosshairs, one of the worst natural disasters in modern times will unfold. As the asteroid steams through the atmosphere, it will begin to fragment—but the bulk of it will likely make it to the ground in just a few seconds, instantly turning anything solid into a fluid and excavating a huge impact crater in a heartbeat. A colossal blast wave, akin to one unleashed by a large nuclear weapon, will explode from the impact site in every direction. Homes dozens of miles away will fold like cardboard. Millions of people could die.

Fortunately for all 8 billion of us, planetary defense—the science of preventing asteroid impacts—is a highly active field of research. Astronomers are watching the skies, constantly on the hunt for new near-Earth objects that might pose a threat. And others are actively working on developing ways to prevent a collision should we find an asteroid that seems likely to hit us.

We already know that at least one method works: ramming the rock with an uncrewed spacecraft to push it away from Earth. In September 2022, NASA’s Double Asteroid Redirection Test, or DART, showed it could be done when a semiautonomous spacecraft the size of a small car, with solar panel wings, was smashed into an (innocuous) asteroid named Dimorphos at 14,000 miles per hour, successfully changing its orbit around a larger asteroid named Didymos. 

But there are circumstances in which giving an asteroid a physical shove might not be enough to protect the planet. If that’s the case, we could need another method, one that is notoriously difficult to test in real life: a nuclear explosion. 

Scientists have used computer simulations to explore this potential method of planetary defense. But in an ideal world, researchers would ground their models with cold, hard, practical data. Therein lies a challenge. Sending a nuclear weapon into space would violate international laws and risk inflaming political tensions. What’s more, it could do damage to Earth: A rocket malfunction could send radioactive debris into the atmosphere. 

Over the last few years, however, scientists have started to devise some creative ways around this experimental limitation. The effort began in 2023, with a team of scientists led by Nathan Moore, a physicist and chemical engineer at the Sandia National Laboratories in Albuquerque, New Mexico. Sandia is a semi-secretive site that serves as the engineering arm of America’s nuclear weapons program. And within that complex lies the Z Pulsed Power Facility, or Z machine, a cylindrical metallic labyrinth of warning signs and wiring. It’s capable of summoning enough energy to melt diamond. 

The researchers reckoned they could use the Z machine to re-create the x-ray blast of a nuclear weapon—the radiation that would be used to knock back an asteroid—on a very small and safe scale.

It took a while to sort out the details. But by July 2023, Moore and his team were ready. They waited anxiously inside a control room, monitoring the thrumming contraption from afar. Inside the machine’s heart were two small pieces of rock, stand-ins for asteroids, and at the press of a button, a maelstrom of x-rays would thunder toward them. If they were knocked back by those x-rays, it would prove something that, until now, was purely theoretical: You can deflect an asteroid from Earth using a nuke.

This experiment “had never been done before,” says Moore. But if it succeeded, its data would contribute to the safety of everyone on the planet. Would it work?

Monoliths and rubble piles

Asteroid impacts are a natural disaster like any other. You shouldn’t lose sleep over the prospect, but if we get unlucky, an errant space rock may rudely ring Earth’s doorbell. “The probability of an asteroid striking Earth during my lifetime is very small. But what if one did? What would we do about it?” says Moore. “I think that’s worth being curious about.”

Forget about the gigantic asteroids you know from Hollywood blockbusters. Space rocks over two-thirds of a mile (about one kilometer) in diameter—those capable of imperiling civilization—are certainly out there, and some hew close to Earth’s own orbit. But because these asteroids are so elephantine, astronomers have found almost all of them already, and none pose an impact threat. 

Rather, it’s asteroids a size range down—those upwards of 460 feet (140 meters) long—that are of paramount concern. About 25,000 of those are thought to exist close to our planet, and just under half have been found. The day-to-day odds of an impact are extremely low, but even one of the smaller ones in that size range could do significant damage if it found Earth and hit a populated area—a capacity that has led astronomers to dub such midsize asteroids “city killers.”

If we find a city killer that looks likely to hit Earth, we’ll need a way to stop it. That could be technology to break or “disrupt” the asteroid into fragments that will either miss the planet entirely or harmlessly ignite in the atmosphere. Or it could be something that can deflect the asteroid, pushing it onto a path that will no longer intersect with our blue marble. 

Because disruption could accidentally turn a big asteroid into multiple smaller, but still deadly, shards bound for Earth, it’s often considered to be a strategy of last resort. Deflection is seen as safer and more elegant. One way to achieve it is to deploy a spacecraft known as a kinetic impactor—a battering ram that collides with an asteroid and transfers its momentum to the rocky interloper, nudging it away from Earth. NASA’s DART mission demonstrated that this can work, but there are some important caveats: You need to deflect the asteroid years in advance to make sure it completely misses Earth, and asteroids that we spot too late—or that are too big—can’t be swatted away by just one DART-like mission. Instead, you’d need several kinetic impactors—maybe many of them—to hit one side of the asteroid perfectly each time in order to push it far enough to save our planet. That’s a tall order for orbital mechanics, and not something space agencies may be willing to gamble on. 
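
As a back-of-the-envelope sketch (the relation is standard planetary-defense bookkeeping, but the numbers below are illustrative assumptions, not figures from this article), the velocity change a kinetic impactor imparts is

\[
\Delta v \;=\; \beta\,\frac{m\,u}{M},
\]

where m and u are the spacecraft’s mass and closing speed, M is the asteroid’s mass, and β ≥ 1 is the momentum “enhancement factor”: debris thrown off the crater recoils like rocket exhaust, adding to the push (published analyses of DART put its β near 3.6). Plug in DART-like values—roughly 600 kilograms at 6 kilometers per second against an asteroid of several billion kilograms—and Δv comes out to a few millimeters per second, which is why the nudge has to come years before a predicted impact.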

In that case, the best option might instead be to detonate a nuclear weapon next to the asteroid. This would irradiate one hemisphere of the asteroid in x-rays, which in a few millionths of a second would violently shatter and vaporize the rocky surface. The stream of debris spewing out of that surface and into space would act like a rocket, pushing the asteroid in the opposite direction. “There are scenarios where kinetic impact is insufficient, and we’d have to use a nuclear explosive device,” says Moore.
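
The nuclear option relies on the same momentum bookkeeping, except the “propellant” is the asteroid’s own vaporized rock. In the simplest sketch (pure momentum conservation, ignoring geometry and losses—an illustration, not a mission calculation),

\[
M\,\Delta v \;\approx\; m_{\mathrm{ej}}\,\bar{v}_{\mathrm{ej}},
\]

where m_ej is the mass of surface material blown off and v̄_ej is its average ejection speed. Because x-ray vaporization can launch that debris at kilometers per second, even a thin ablated layer spread over one hemisphere can deliver a meaningful velocity change.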

This idea isn’t new. Several decades ago, Peter Schultz, a planetary geologist and impacts expert at Brown University, was giving a planetary defense talk at the Lawrence Livermore National Laboratory in California, another American lab focused on nuclear deterrence and nuclear physics research. Afterwards, he recalls, none other than Edward Teller, the father of the hydrogen bomb and a key member of the Manhattan Project, invited him into his office for a chat. “He wanted to do one of these near-Earth-asteroid flybys and wanted to test the nukes,” Schultz says. What, he wondered, would happen if you blasted an asteroid with a nuclear weapon’s x-rays? Could you forestall a spaceborne disaster using weapons of mass destruction?

But Teller’s dream wasn’t fulfilled—and it’s unlikely to become a reality anytime soon. The United Nations’ 1967 Outer Space Treaty states that no nation can deploy or use nuclear weapons off-world (even if it’s not clear how long certain spacefaring nations will continue to adhere to that rule).

Even raising the possibility of using nukes to defend the planet can be tricky. “There’re still many folks that don’t want to talk about it at all … even if that were the only option to prevent an impact,” says Megan Bruck Syal, a physicist and planetary defense researcher at Lawrence Livermore. Nuclear weapons have long been a sensitive subject, and with relations between several nuclear nations currently at a new nadir, anxiety over the subject is understandable. 

But in the US, there are groups of scientists who “recognize that we have a special responsibility as a spacefaring nation and as a nuclear-capable nation to look at this,” Syal says. “It isn’t our preference to use a nuclear explosive, of course. But we are still looking at it, in case it’s needed.”

But how? 

Mostly, researchers have turned to the virtual world, using supercomputers at various US laboratories to simulate the asteroid-agitating physics of a nuclear blast. To put it mildly, “this is very hard,” says Mary Burkey, a physicist and planetary defense researcher at Lawrence Livermore. You cannot simply flick a switch on a computer and get immediate answers. “When a nuke goes off in space, there’s just x-ray light that’s coming out of it. It’s shining on the surface of your asteroid, and you’re tracking those little photons penetrating maybe a tiny little bit into the surface, and then somehow you have to take that micrometer worth of resolution and then propagate it out onto something that might be on the order of hundreds of meters wide, watching that shock wave propagate and then watching fragments spin off into space. That’s four different problems.”

Mimicking the physics of x-ray rock annihilation with as much verisimilitude as possible is difficult work. But recent research using these high-fidelity simulations does suggest that nukes are an effective planetary defense tool for both disruption and deflection. The thing is, though, no two asteroids are alike; each is mechanically and geologically unique, meaning huge uncertainties remain. A more monolithic asteroid might respond in a straightforward way to a nuclear deflection campaign, whereas a rubble pile asteroid—a weakly bound fleet of boulders barely held together by their own gravity—might respond in a chaotic, uncontrollable way. Can you be sure the explosion wouldn’t accidentally shatter the asteroid, turning a cannonball into a hail of bullets still headed for Earth? 

Simulations can go a long way toward answering these questions, but they remain virtual re-creations of reality, with built-in assumptions. “Our models are only as good as the physics that we understand and that we put into them,” says Angela Stickle, a hypervelocity impact physicist at the Johns Hopkins University Applied Physics Laboratory in Maryland. To make sure the simulations are reproducing the correct physics and delivering realistic data, physical experiments are needed to ground them.

Researchers studying kinetic impactors can get that sort of real-world data. Along with DART, they can use specialized cannons—like the Vertical Gun Range at NASA’s Ames Research Center in California—to fire all sorts of projectiles at meteorites. In doing so, they can find out how tough or fragile asteroid shards can be, effectively reproducing a kinetic impact mission on a small scale. 

Battle-testing nuke-based asteroid defense simulations is another matter. Re-creating the physics of these confrontations on a small scale was long considered to be exceedingly difficult. Fortunately, those keen on fighting asteroids are as persistent as they are creative—and several teams, including Moore’s at Sandia, think they have come up with a solution.

X-ray scissors

The prime mission of Sandia, like that of Lawrence Livermore, is to help maintain the nation’s nuclear weapons arsenal. “It’s a national security laboratory,” says Moore. “Planetary defense affects the entire planet,” he adds—making it, by default, a national security issue as well. And that logic, in part, persuaded the powers that be in July 2022 to try a brand-new kind of experiment. Moore took charge of the project in January 2023—and with the shot scheduled for the summer, he had only a few months to come up with the specific plan for the experiment. There was “lots of scribbling on my whiteboard, running computer simulations, and getting data to our engineers to design the test fixture for the several months it would take to get all the parts machined and assembled,” he says.

Although there were previous and ongoing experiments that showered asteroid-like targets with x-rays, Moore and his team were frustrated by one aspect of them. Unlike actual asteroids floating freely in space, the micro-asteroids on Earth were fixed in place. To truly test whether x-rays could deflect asteroids, targets would have to be suspended in a vacuum—and it wasn’t immediately clear how that could be achieved.

Generating the nuke-like x-rays was the easy part, because Sandia had the Z machine, a hulking mass of diodes, pipes, and wires interwoven with an assortment of walkways that circumnavigate a vacuum chamber at its core. When it’s powered up, electrical energy is stored in banks of capacitors—and, when commanded, unleashed on a target or substance to create radiation and intense magnetic pressures.

Flanked by klaxons and flashing lights, it’s an intimidating sight. “It’s the size of a building—about three stories tall,” says Moore. Every firing of the Z machine carries the energy of more than 1,000 lightning bolts, and each shot lasts a few millionths of a second: “You can’t even blink that fast.” The Z machine is named for the axis along which its energetic particles cascade, but the Z could easily stand for “Zeus.”

The Z Pulsed Power Facility, or Z machine, at Sandia National Laboratories in Albuquerque, New Mexico, concentrates electricity into short bursts of intense energy that can be used to create x-rays and gamma rays and compress matter to high densities.
RANDY MONTOYA/SANDIA NATIONAL LABORATORY

The original purpose of the Z machine, whose first form was built half a century ago, was nuclear fusion research. But over time, it’s been tinkered with, upgraded, and used for all kinds of science. “The Z machine has been used to compress matter to the same densities [you’d find at] the centers of planets. And we can do experiments like that to better understand how planets form,” Moore says, as an example. And the machine’s preternatural energies could easily be used to generate x-rays—in this case, by electrifying and collapsing a cloud of argon gas.

“The idea of studying asteroid deflection is completely different for us,” says Moore. And the machine “fires just once a day,” he adds, “so all the experiments are planned more than a year in advance.” In other words, the researchers had to be near certain their one experiment would work, or they would be in for a long wait to try again—if they were permitted a second attempt. 

For some time, they could not figure out how to suspend their micro-asteroids. But eventually, they found a solution: Two incredibly thin bits of aluminum foil would hold their targets in place within the Z machine’s vacuum chamber. When the x-ray blast hit them and the targets, the pieces of foil would be instantly vaporized, briefly leaving the targets suspended in the chamber and allowing them to be pushed back as if they were in space. “It’s like you wave your magic wand and it’s gone,” Moore says of the foil. He dubbed this technique “x-ray scissors.” 

In July 2023, after considerable planning, the team was ready. Within the Z machine’s vacuum chamber were two fingernail-size targets—a bit of quartz and some fused silica, both frequently found on real asteroids. Nearby, a pocket of argon gas swirled away. Satisfied that the gigantic gizmo was ready, everyone left and went to stand in the control room. For a moment, it was deathly quiet.

Stand by.

Fire.

It was over before their ears could even register a metallic bang. A tempest of electricity shocked the argon gas cloud, causing it to implode; as it did, it transformed into a plasma and x-rays screamed out of it, racing toward the two targets in the chamber. The foil vanished, the surfaces of both targets erupted outward as supersonic sprays of debris, and the targets flew backward, away from the x-rays, at 160 miles per hour.

Moore wasn’t there. “I was in Spain when the experiment was run, because I was celebrating my anniversary with my wife, and there was no way I was going to miss that,” he says. But just after the Z machine was fired, one of his colleagues sent him a very concise text: IT WORKED.

“We knew right away it was a huge success,” says Moore. The implications were immediately clear. The experimental setup was complex, but they were trying to achieve something extremely fundamental: a real-world demonstration that a nuclear blast could make an object in space move. 

Patrick King, a physicist at the Johns Hopkins University Applied Physics Laboratory, was impressed. Previously, pushing back objects using x-ray vaporization had been extremely difficult to demonstrate in the lab. “They were able to get a direct measurement of that momentum transfer,” he says, calling the x-ray scissors an “elegant” technique.

Sandia’s work took many in the community by surprise. “The Z machine experiment was a bit of a newcomer for the planetary defense field,” says Burkey. But she notes that we can’t overinterpret the results. It isn’t clear, from the deflection of the very small and rudimentary asteroid-like targets, how much a genuine nuclear explosion would deflect an actual asteroid. As ever, more work is needed. 

King leads a team that is also working on this question. His NASA-funded project involves the Omega Laser Facility, a complex based at the University of Rochester in upstate New York. Omega can generate x-rays by firing powerful lasers at a target within a specialized chamber. Upon being irradiated, the target generates an x-ray flash, similar to the one produced during a nuclear explosion in space, which can then be used to bombard various objects—in this case, some Earth rocks acting as asteroid mimics, and (crucially) some bona fide meteoritic material too. 

King’s Omega experiments have tried to answer a basic question: “How much material actually gets removed from the surface?” says King. The amount of material that flies off the pseudo-asteroids, and the vigor with which it’s removed, will differ from target to target. The hope is that these results—which the team is still analyzing—will hint at how different types of asteroids will react to being nuked. Although experiments with Omega cannot produce the kickback seen in the Z machine, King’s team has used a more realistic and diverse series of targets and blasted them with x-rays hundreds of times. That, in turn, should clue us in to how effectively, or not, actual asteroids would be deflected by a nuclear explosion.

“I wouldn’t say one [experiment] has definitive advantages over the other,” says King. “Like many things in science, each approach can yield insight along different ‘axes,’ if you will, and no experimental setup gives you the whole picture.”

Experiments like Moore’s and King’s may sound technologically baroque—a bit like lightning-fast Rube Goldberg machines overseen by wizards. But they are likely the first in a long line of increasingly sophisticated tests. “We’ve just scratched the surface of what we can do,” Moore says. As with King’s experiments, Moore hopes to place a variety of materials in the Z machine, including targets that can stand in for the wetter, more fragile carbon-rich asteroids that astronomers commonly see in near-Earth space. “If we could get our hands on real asteroid material, we’d do it,” he says. And it’s expected that all this experimental data will be fed back into those nuke-versus-asteroid computer simulations, helping to verify the virtual results.

Although these experiments are perfectly safe, planetary defenders remain fully cognizant of the taboo around merely discussing the use of nukes for any reason—even if that reason is potentially saving the world. “We’re genuinely looking at this from the standpoint of ‘This is a technology that could save lives,’” King says.

Inevitably, Earth will be imperiled by a dangerous asteroid. And the hope is that when that day arrives, it can be dealt with using something other than a nuke. But comfort should be taken from the fact that scientists are researching this scenario, just in case it’s our only protection against the firmament. “We are your taxpayer dollars at work,” says Burkey. 

There’s still some way to go before they can be near certain that this asteroid-stopping technique will succeed. Their progress, though, belongs to everyone. “Ultimately,” says Moore, “we all win if we solve this problem.” 

Robin George Andrews is an award-winning science journalist based in London and the author, most recently, of How to Kill an Asteroid: The Real Science of Planetary Defense.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year. 

But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky. 

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  

Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10 to 30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis.
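
Matching a mixture’s spectrum directly against a reference library is one standard way to skip a separation step. The toy sketch below illustrates the core idea with cosine similarity; the library entries, peak values, and threshold are invented for illustration, and this is not NIST’s actual pipeline.

# Toy illustration: identify compounds in a mixture's mass spectrum by
# cosine similarity against a reference library, with no separation step.
# The spectra below are invented for illustration, not real reference data.
import math

LIBRARY = {
    "fentanyl":  {105: 0.4, 146: 0.6, 188: 1.0, 245: 0.3},
    "xylazine":  {121: 0.5, 147: 0.7, 205: 1.0},
    "lidocaine": {86: 1.0, 234: 0.4},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse m/z -> intensity spectra."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def identify(mixture: dict, threshold: float = 0.35) -> list:
    """Return library compounds whose spectra resemble the mixture's."""
    scores = ((name, cosine(mixture, ref)) for name, ref in LIBRARY.items())
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: -s[1])

# A swipe sample containing peaks from both fentanyl and xylazine:
sample = {86: 0.1, 121: 0.4, 146: 0.5, 147: 0.6, 188: 0.9, 205: 0.8}
print(identify(sample))  # flags xylazine and fentanyl, not lidocaine

Because the whole mixture is scored against each reference at once, nothing has to be physically separated first—one reason, in spirit, that reports can come back in minutes rather than hours.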

The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of opioid samples they collected. 

This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine.

What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.
B. HAYES/NIST

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs.

Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: heroin, fentanyl, xylazine, and everything else.

“It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week.

The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without.

In summer 2024, RaDAR analysis detected something really unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide. In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure, and mixed with xylazine or medetomidine, it can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it has continued to appear in fentanyl samples at a sustained level since it was first detected. “This was an example of a compound we would have never thought to look for,” says Sisco.

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine. 

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions.

“It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.” 

Adam Bluestein writes about innovation in business, science, and technology.

Phase two of military AI has arrived

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT. 

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. Consider this phase two of the US military’s AI push; phase one began back in 2017 with older types of AI, like computer vision to analyze drone imagery. Though this newest phase began under the Biden administration, there’s fresh urgency as Elon Musk’s DOGE and Secretary of Defense Pete Hegseth push loudly for AI-fueled efficiency.

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite. 

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.”

What are the limits of “human in the loop”?

Talk to as many defense-tech companies as I have and you’ll hear one phrase repeated quite often: “human in the loop.” It means that the AI is responsible for particular tasks, and humans are there to check its work. It’s meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

“‘Human in the loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to draw conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.” As AI systems rely on more and more data, this problem scales up. 

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped “Top Secret,” with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at. 

With the mountain of data growing each day, and then AI constantly creating new analyses, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be,” says Chris Mouton, a senior engineer for RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information. 

The defense giant Palantir is positioning itself to help, by offering its AI tools to determine whether a piece of data should be classified or not. It’s also working with Microsoft on AI models that would train on classified data. 

How high up the decision chain should AI go?

Zooming out for a moment, it’s worth noting that the US military’s adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets.

Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance. 

So what’s next? For consumers, it’s agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It’s also personalized AI, or models that learn from your private data to be more helpful. 

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March from Georgetown’s Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. “Military commanders are interested in AI’s potential to improve decision-making, especially at the operational level of war,” the authors wrote.

In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. This memo hasn’t been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it’s clear that AI is quickly moving up the chain not just to handle administrative grunt work, but to assist in the most high-stakes, time-sensitive decisions. 

I’ll be following these three questions closely. If you have information on how the Pentagon might be handling these questions, please reach out via Signal at jamesodonnell.22. 

This story originally appeared in The Algorithm, our weekly newsletter on AI.

This architect wants to build cities out of lava

Arnhildur Pálmadóttir was around three years old when she saw a red sky from her living room window. A volcano was erupting about 25 miles away from where she lived on the northeastern coast of Iceland. Though it posed no immediate threat, its ominous presence seeped into her subconscious, populating her dreams with streaks of light in the night sky.

Fifty years later, these “gloomy, strange dreams,” as Pálmadóttir now describes them, have led to a career as an architect with an extraordinary mission: to harness molten lava and build cities out of it.

Pálmadóttir today lives in Reykjavik, where she runs her own architecture studio, S.AP Arkitektar, and the Icelandic branch of the Danish architecture company Lendager, which specializes in reusing building materials.

The architect believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques: drill straight into magma pockets and extract the lava; channel molten lava into pre-dug trenches that could form a city’s foundations; or 3D-print bricks from molten lava in a technique similar to the way objects can be printed out of molten glass.

Pálmadóttir and Skarphéðinsson first presented the concept during a talk at Reykjavik’s DesignMarch festival in 2022. This year they are producing a speculative film set in 2150, in an imaginary city called Eldborg. Their film, titled Lavaforming, follows the lives of Eldborg’s residents and looks back on how they learned to use molten lava as a building material. It will be presented at the Venice Biennale, a leading architecture festival, in May. 

Set in 2150, her speculative film Lavaforming presents a fictional city built from molten lava.
COURTESY OF S.AP ARKITEKTAR

Buildings and construction materials like concrete and steel currently contribute a staggering 37% of the world’s annual carbon dioxide emissions. Many architects are advocating for the use of natural or preexisting materials, but mixing earth and water into a mold is one thing; tinkering with 2,000 °F lava is another. 

Still, Pálmadóttir is piggybacking on research already being done in Iceland, which has 30 active volcanoes. Since 2021, eruptions have intensified in the Reykjanes Peninsula, which is close to the capital and to tourist hot spots like the Blue Lagoon. In 2024 alone, there were six volcanic eruptions in that area. This frequency has given volcanologists opportunities to study how lava behaves after a volcano erupts. “We try to follow this beast,” says Gro Birkefeldt M. Pedersen, a volcanologist at the Icelandic Meteorological Office (IMO), who has consulted with Pálmadóttir on a few occasions. “There is so much going on, and we’re just trying to catch up and be prepared.”

Pálmadóttir’s concept assumes that many years from now, volcanologists will be able to forecast lava flow accurately enough for cities to plan on using it in building. They will know when and where to dig trenches so that when a volcano erupts, the lava will flow into them and solidify into either walls or foundations.

Today, forecasting lava flows is a complex science that requires remote sensing technology and tremendous amounts of computational power to run simulations on supercomputers. The IMO typically runs two simulations for every new eruption—one based on data from previous eruptions, and another based on additional data acquired shortly after the eruption (from various sources like specially outfitted planes). With every event, the team accumulates more data, which makes the simulations of lava flow more accurate. Pedersen says there is much research yet to be done, but she expects “a lot of advancement” in the next 10 years or so. 

To design the speculative city of Eldborg for their film, Pálmadóttir and Skarphéðinsson used 3D-modeling software similar to what Pedersen uses for her simulations. The city is primarily built on a network of trenches that were filled with lava over the course of several eruptions, while buildings are constructed out of lava bricks. “We’re going to let nature design the buildings that will pop up,” says Pálmadóttir. 

The aesthetic of the city they envision will be less modernist and more fantastical—a bit “like [Gaudí’s] Sagrada Familia,” says Pálmadóttir. But the aesthetic output is not really the point; the goal is to galvanize architects today and spark an urgent discussion about the impact of climate change on our cities. She stresses the value of what can only be described as moonshot thinking. “I think it is important for architects not to be only in the present,” she told me. “Because if we are only in the present, working inside the system, we won’t change anything.”

Pálmadóttir was born in 1972 in Húsavik, a town known as the whale-watching capital of Iceland. But she was more interested in space and technology and spent a lot of time flying with her father, a construction engineer who owned a small plane. She credits his job for the curiosity she developed about science and “how things were put together”—an inclination that proved useful later, when she started researching volcanoes. So was the fact that Icelanders “learn to live with volcanoes from birth.” At 21, she moved to Norway, where she spent seven years working in 3D visualization before returning to Reykjavik and enrolling in an architecture program at the Iceland University of the Arts. But things didn’t click until she moved to Barcelona for a master’s degree at the Institute for Advanced Architecture of Catalonia. “I remember being there and feeling, finally, like I was in the exact right place,” she says. 

Before, architecture had seemed like a commodity and architects like “slaves to investment companies,” she says. Now, it felt like a path with potential. 

She returned to Reykjavik in 2009 and worked as an architect until she founded S.AP (for “studio Arnhildur Pálmadóttir”) Arkitektar in 2018; her son started working with her in 2019 and officially joined her as an architect this year, after graduating from the Southern California Institute of Architecture. 

In 2021, the pair witnessed their first eruption up close, near the Fagradalsfjall volcano on the Reykjanes Peninsula. It was there that Pálmadóttir became aware of the sheer quantity of material coursing through the planet’s veins, and the potential to divert it into channels. 

Lava has already proved to be a strong, long-lasting building material—at least in its solid state. When it cools, it solidifies into volcanic rock like basalt or rhyolite. The type of rock depends on the composition of the lava, but basaltic lava—like the kind found in Iceland and Hawaii—forms one of the hardest rocks on Earth, which means that structures built from this type of lava would be durable and resilient. 

For years, architects in Mexico, Iceland, and Hawaii (where lava is widely available) have built structures out of volcanic rock. But quarrying that rock is an energy-intensive process that requires heavy machines to extract, cut, and haul it, often across long distances, leaving a big carbon footprint. Harnessing lava in its molten state, however, could unlock new methods for sustainable construction. Jeffrey Karson, a professor emeritus at Syracuse University who specializes in volcanic activity and who cofounded the Syracuse University Lava Project, agrees that lava is abundant enough to warrant interest as a building material. To understand how it behaves, Karson has spent the past 15 years performing over a thousand controlled lava pours from giant furnaces. If we figure out how to build up its strength as it cools, he says, “that stuff has a lot of potential.” 

In his research, Karson found that inserting metal rods into the lava flow helps reduce the kind of uneven cooling that would lead to thermal cracking—and therefore makes the material stronger (a bit like rebar in concrete). Like glass and other molten materials, lava behaves differently depending on how fast it cools. When glass or lava cools slowly, crystals start forming, strengthening the material. Replicating this process—perhaps in a kiln—could slow down the rate of cooling and let the lava become stronger. This kind of controlled cooling is “easy to do on small things like bricks,” says Karson, so “it’s not impossible to make a wall.” 

Pálmadóttir is clear-eyed about the challenges before her. She knows the techniques she and Skarphéðinsson are exploring may not lead to anything tangible in their lifetimes, but they still believe that the ripple effect the projects could create in the architecture community is worth pursuing.

Both Karson and Pedersen caution that more experiments are necessary to study this material’s potential. For Skarphéðinsson, that potential transcends the building industry. More than 12 years ago, Icelanders voted that the island’s natural resources, like its volcanoes and fishing waters, should be declared national property. That means any city built from lava flowing out of these volcanoes would be controlled not by deep-pocketed individuals or companies, but by the nation itself. (The referendum was considered illegal almost as soon as it was approved by voters and has since stalled.) 

For Skarphéðinsson, the Lavaforming project is less about the material than about the “political implications that get brought to the surface with this material.” “That is the change I want to see in the world,” he says. “It could force us to make radical changes and be a catalyst for something”—perhaps a social megalopolis where citizens have more say in how resources are used and profits are shared more evenly.

Cynics might dismiss the idea of harnessing lava as pure folly. But the more I spoke with Pálmadóttir, the more convinced I became. It wouldn’t be the first time in modern history that a seemingly dangerous idea (for example, drilling into scalding pockets of underground hot springs) proved revolutionary. Once entirely dependent on oil, Iceland today obtains 85% of its electricity and heat from renewable sources. “[My friends] probably think I’m pretty crazy, but they think maybe we could be clever geniuses,” she told me with a laugh. Maybe she is a little bit of both.

Elissaveta M. Brandon is a regular contributor to Fast Company and Wired.

A small US city experiments with AI to find out what residents want

Bowling Green, Kentucky, is home to 75,000 residents who recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city?

When Doug Gorman, elected leader of the county that includes Bowling Green, took office in 2023, it was the fastest-growing city in the state and projected to double in size by 2050, but it lacked a plan for how that growth would unfold. Gorman had a meeting with Sam Ford, a local consultant who had worked with the surveying platform Pol.is, which uses machine learning to gather opinions from large groups of people. 

They “needed a vision” for the anticipated growth, Ford says. The two convened a group of volunteers with experience in eight areas: economic development, talent, housing, public health, quality of life, tourism, storytelling, and infrastructure. They built a plan to use Pol.is to help write a 25-year plan for the city. The platform is just one of several new technologies used in Europe and increasingly in the US to help make sure that local governance is informed by public opinion.

After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what the 25-year plan should include. They could also vote on whether they agreed or disagreed with other ideas. The tool could be translated into a participant’s preferred language, and human moderators worked to make sure the traffic was coming from the Bowling Green area. 

Over the month that it was live, 7,890 residents participated, and 2,000 people submitted their own ideas. An AI-powered tool from Google Jigsaw then analyzed the data to find what people agreed and disagreed on. 
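
Tools in this family—Pol.is describes its own method publicly—typically treat the raw input as a participants-by-statements matrix of agree/disagree/pass votes, project participants into a low-dimensional space, and cluster them into opinion groups before looking for statements that win agreement across groups. The snippet below is a generic sketch of that idea on made-up votes, not the Bowling Green data or the project’s actual code.

# Generic sketch of opinion-cluster analysis on a votes matrix
# (+1 agree, -1 disagree, 0 pass). The data here is invented for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

votes = np.array([            # rows: participants, columns: statements
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1,  1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0, -1],
])

# Project participants into 2D, then group them into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Candidate consensus statements are those with high agreement in *every*
# cluster, not just high agreement overall.
for s in range(votes.shape[1]):
    rates = [(votes[groups == g, s] == 1).mean() for g in np.unique(groups)]
    print(f"statement {s}: per-cluster agreement {rates}")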

Experts on democracy technologies who were not involved in the project say this level of participation—about 10% of the city’s residents—was impressive.

“That is a lot,” says Archon Fung, director of the Ash Center for Innovation and Democratic Governance at the Harvard Kennedy School. A local election might see a 25% turnout, he says, and that requires nothing more than filling out a ballot. 

“Here, it’s a more demanding kind of participation, right? You’re actually voting on or considering some substantive things, and 2,000 people are contributing ideas,” he says. “So I think that’s a lot of people who are engaged.”

The plans that received the most attention in the Bowling Green experiment were hyperlocal. The ideas with the broadest support were increasing the number of local health-care specialists so residents wouldn’t have to travel to nearby Nashville for medical care, enticing more restaurants and grocery stores to open on the city’s north side, and preserving historic buildings. 

More contentious ideas included approving recreational marijuana, adding sexual orientation and gender identity to the city’s nondiscrimination clause, and providing more options for private education. Out of 3,940 unique ideas, 2,370 received more than 80% agreement, including initiatives like investing in stormwater infrastructure and expanding local opportunities for children and adults with autism.  

The volunteers running the experiment were not completely hands-off. Submitted ideas were screened according to a moderation policy, and redundant ideas were not posted. Ford says that 51% of ideas were published, and 31% were deemed redundant. About 6% of ideas were not posted because they were either completely off-topic or contained a personal attack.

But some researchers who study the technologies that can make democracy more effective question whether soliciting input in this manner is a reliable way to understand what a community wants.

One problem is self-selection—for example, certain kinds of people tend to show up to in-person forums like town halls. Research shows that seniors, homeowners, and people with high levels of education are the most likely to attend, Fung says. It’s possible that similar dynamics are at play among the residents of Bowling Green who decided to participate in the project.

“Self-selection is not an adequate way to represent the opinions of a public,” says James Fishkin, a political scientist at Stanford who’s known for developing a process he calls deliberative polling, in which a representative sample of a population’s residents are brought together for a weekend, paid about $300 each for their participation, and asked to deliberate in small groups. Other methods, used in some European governments, use jury-style groups of residents to make public policy decisions. 

What’s clear to everyone who studies the effectiveness of these tools is that they promise to move a city in a more democratic direction, but we won’t know if Bowling Green’s experiment worked until residents see what the city does with the ideas that they raised.

“You can’t make policy based on a tweet,” says Beth Simone Noveck, who directs a lab that studies democracy and technology at Northeastern University. As she points out, residents were voting on 140-character ideas, and those now need to be formed into real policies. 

“What comes next,” she says, “is the conversation between the city and residents to develop a short proposal into something that can actually be implemented.” For residents to trust that their voice actually matters, the city must be clear on why it’s implementing some ideas and not others. 

For now, the organizers have made the results public, and they will make recommendations to the Warren County leadership later this year. 

How AI is interacting with our creative human processes

In 2021, 20 years after the death of her older sister, Vauhini Vara was still unable to tell the story of her loss. “I wondered,” she writes in Searches, her new collection of essays on AI technology, “if Sam Altman’s machine could do it for me.” So she tried ChatGPT. But as it expanded on Vara’s prompts in sentences ranging from the stilted to the unsettling to the sublime, the thing she’d enlisted as a tool stopped seeming so mechanical. 

“Once upon a time, she taught me to exist,” the AI model wrote of the young woman Vara had idolized. Vara, a journalist and novelist, called the resulting essay “Ghosts,” and in her opinion, the best lines didn’t come from her: “I found myself irresistibly attracted to GPT-3—to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them … as I tried to write more honestly, the AI seemed to be doing the same.”

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also offers a particularly human problem in narrative: How can we make sense of these machines, not just use them? And how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Both Vara’s book and The Uncanny Muse, a collection of essays on the history of art and automation by the music critic David Hajdu, explore how humans have historically and personally wrestled with the ways in which machines relate to our own bodies, brains, and creativity. At the same time, The Mind Electric, a new book by a neurologist, Pria Anand, reminds us that our own inner workings may not be so easy to replicate.

Searches is a strange artifact. Part memoir, part critical analysis, and part AI-assisted creative experimentation, Vara’s essays trace her time as a tech reporter and then novelist in the San Francisco Bay Area alongside the history of the industry she watched grow up. Tech was always close enough to touch: One college friend was an early Google employee, and when Vara started reporting on Facebook (now Meta), she and Mark Zuckerberg became “friends” on his platform. In 2007, she published a scoop that the company was planning to introduce ad targeting based on users’ personal information—the first shot fired in the long, gnarly data war to come. In her essay “Stealing Great Ideas,” she talks about turning down a job reporting on Apple to go to graduate school for fiction. There, she wrote a novel about a tech founder, which was later published as The Immortal King Rao. Vara points out that in some ways at the time, her art was “inextricable from the resources [she] used to create it”—products like Google Docs, a MacBook, an iPhone. But these pre-AI resources were tools, plain and simple. What came next was different.

Interspersed with Vara’s essays are chapters of back-and-forths between the author and ChatGPT about the book itself, where the bot serves as editor at Vara’s prompting. ChatGPT obligingly summarizes and critiques her writing in a corporate-shaded tone that’s now familiar to any knowledge worker. “If there’s a place for disagreement,” it offers about the first few chapters on tech companies, “it might be in the balance of these narratives. Some might argue that the benefits—such as job creation, innovation in various sectors like AI and logistics, and contributions to the global economy—can outweigh the negatives.”

Searches: Selfhood in the Digital Age
Vauhini Vara
PANTHEON, 2025

Vara notices that ChatGPT writes “we” and “our” in these responses, pulling it into the human story, not the tech one: “Earlier you mentioned ‘our access to information’ and ‘our collective experiences and understandings.’” When she asks what the rhetorical purpose of that choice is, ChatGPT responds with a numbered list of benefits including “inclusivity and solidarity” and “neutrality and objectivity.” It adds that “using the first-person plural helps to frame the discussion in terms of shared human experiences and collective challenges.” Does the bot believe it’s human? Or at least, do the humans who made it want other humans to believe it does? “Can corporations use these [rhetorical] tools in their products too, to subtly make people identify with, and not in opposition to, them?” Vara asks. ChatGPT replies, “Absolutely.”

Vara has concerns about the words she’s used as well. In “Thank You for Your Important Work,” she worries about the impact of “Ghosts,” which went viral after it was first published. Had her writing helped corporations hide the reality of AI behind a velvet curtain? She’d meant to offer a nuanced “provocation,” exploring how uncanny generative AI can be. But instead, she’d produced something beautiful enough to resonate as an ad for its creative potential. Even Vara herself felt fooled. She particularly loved one passage the bot wrote, about Vara and her sister as kids holding hands on a long drive. But she couldn’t imagine either of them being so sentimental. What Vara had elicited from the machine, she realized, was “wish fulfillment,” not a haunting. 


The machine wasn’t the only thing crouching behind that too-good-to-be-true curtain. The GPT models and others are trained through human labor, in sometimes exploitative conditions. And much of the training data was the creative work of human writers before her. “I’d conjured artificial language about grief through the extraction of real human beings’ language about grief,” she writes. The creative ghosts in the model were made of code, yes, but also, ultimately, made of people. Maybe Vara’s essay helped cover up that truth too.

In the book’s final essay, Vara offers a mirror image of those AI call-and-response exchanges as an antidote. After sending out an anonymous survey to women of various ages, she presents the replies to each question, one after the other. “Describe something that doesn’t exist,” she prompts, and the women respond: “God.” “God.” “God.” “Perfection.” “My job. (Lost it.)” Real people contradict each other, joke, yell, mourn, and reminisce. Instead of a single authoritative voice—an editor, or a company’s limited style guide—Vara gives us the full gasping crowd of human creativity. “What’s it like to be alive?” Vara asks the group. “It depends,” one woman answers.

David Hajdu, now music editor at The Nation and previously a music critic for The New Republic, goes back much further than the early years of Facebook to tell the history of how humans have made and used machines to express ourselves. Player pianos, microphones, synthesizers, and electrical instruments were all assistive technologies that faced skepticism before acceptance and, sometimes, elevation in music and popular culture. They even influenced the kind of art people were able to and wanted to make. Electrical amplification, for instance, allowed singers to use a wider vocal range and still reach an audience. The synthesizer introduced a new lexicon of sound to rock music. “What’s so bad about being mechanical, anyway?” Hajdu asks in The Uncanny Muse. And “what’s so great about being human?” 

The Uncanny Muse: Music, Art, and Machines from Automata to AI
David Hajdu
W.W. NORTON & COMPANY, 2025

But Hajdu is also interested in how intertwined the history of man and machine can be, and how often we’ve used one as a metaphor for the other. Descartes saw the body as empty machinery for consciousness, he reminds us. Hobbes wrote that “life is but a motion of limbs.” Freud described the mind as a steam engine. Andy Warhol told an interviewer that “everybody should be a machine.” And when computers entered the scene, humans used them as metaphors for themselves too. “Where the machine model had once helped us understand the human body … a new category of machines led us to imagine the brain (how we think, what we know, even how we feel or how we think about what we feel) in terms of the computer,” Hajdu writes. 

But what is lost with these one-to-one mappings? What happens when we imagine that the complexity of the brain—an organ we do not even come close to fully understanding—can be replicated in 1s and 0s? Maybe what happens is we get a world full of chatbots and agents, computer-generated artworks and AI DJs, that companies claim are singular creative voices rather than remixes of a million human inputs. And perhaps we also get projects like the painfully named Painting Fool—an AI that paints, developed by Simon Colton, a scholar at Queen Mary University of London. He told Hajdu that he wanted to “demonstrate the potential of a computer program to be taken seriously as a creative artist in its own right.” What Colton means is not just a machine that makes art but one that expresses its own worldview: “Art that communicates what it’s like to be a machine.”


Hajdu seems to be curious and optimistic about this line of inquiry. “Machines of many kinds have been communicating things for ages, playing invaluable roles in our communication through art,” he says. “Growing in intelligence, machines may still have more to communicate, if we let them.” But the question that The Uncanny Muse raises at the end is: Why should we art-making humans be so quick to hand over the paint to the paintbrush? Why do we care how the paintbrush sees the world? Are we truly finished telling our own stories ourselves?

Pria Anand might say no. In The Mind Electric, she writes: “Narrative is universally, spectacularly human; it is as unconscious as breathing, as essential as sleep, as comforting as familiarity. It has the capacity to bind us, but also to other, to lay bare, but also obscure.” The electricity in The Mind Electric belongs entirely to the human brain—no metaphor necessary. Instead, the book explores a number of neurological afflictions and the stories patients and doctors tell to better understand them. “The truth of our bodies and minds is as strange as fiction,” Anand writes—and the language she uses throughout the book is as evocative as that in any novel. 

The Mind Electric: A Neurologist on the Strangeness and Wonder of Our Brains
Pria Anand
WASHINGTON SQUARE PRESS, 2025

In personal and deeply researched vignettes in the tradition of Oliver Sacks, Anand shows that any comparison between brains and machines will inevitably fall flat. She tells of patients who see clear images when they’re functionally blind, invent entire backstories when they’ve lost a memory, break along seams that few can find, and—yes—see and hear ghosts. In fact, Anand cites one study of 375 college students in which researchers found that nearly three-quarters “had heard a voice that no one else could hear.” These were not diagnosed schizophrenics or sufferers of brain tumors—just people listening to their own uncanny muses. Many heard their name, others heard God, and some could make out the voice of a loved one who’d passed on. Anand suggests that writers throughout history have harnessed organic exchanges with these internal apparitions to make art. “I see myself taking the breath of these voices in my sails,” Virginia Woolf wrote of her own experiences with ghostly sounds. “I am a porous vessel afloat on sensation.” The mind in The Mind Electric is vast, mysterious, and populated. The narratives people construct to traverse it are just as full of wonder. 

Humans are not going to stop using technology to help us create anytime soon—and there’s no reason we should. Machines make for wonderful tools, as they always have. But when we turn the tools themselves into artists and storytellers, brains and bodies, magicians and ghosts, we bypass truth for wish fulfillment. Maybe what’s worse, we rob ourselves of the opportunity to contribute our own voices to the lively and loud chorus of human experience. And we keep others from the human pleasure of hearing them too. 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Generative AI is learning to spy for the US military

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders. 

“We still need to validate the sources,” says Lowdon. But the unit’s commanders encouraged the use of large language models, he says, “because they provide a lot more efficiency during a dynamic situation.”

The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence—not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. 

Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI—tools that can engage in human-like conversation, like those built by Vannevar Labs—represents a newer frontier.

The company applies existing large language models, including some from OpenAI and Microsoft, and some bespoke ones of its own, to troves of open-source intelligence it has been collecting since 2021. The scale at which this data is collected is hard to comprehend (and a large part of what sets Vannevar’s products apart): terabytes of data in 80 different languages are hoovered up every day from 180 countries. The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information; it also uses nonclassified data that is difficult to get online (gathered by human operatives on the ground), as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities.

Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment, with the results delivered through a chatbot interface that’s not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China’s efforts to secure rare earth minerals in the Philippines. 

“Our real focus as a company,” says Scott Philips, Vannevar Labs’ chief technology officer, is to “collect data, make sense of that data, and help the US make good decisions.” 

That approach is particularly appealing to the US intelligence apparatus because for years the world has been awash in more data than human analysts can possibly interpret—a problem that contributed to the 2003 founding of Palantir, a company with a market value of over $200 billion and known for its powerful and controversial tools, including a database that helps Immigration and Customs Enforcement search for and track information on undocumented immigrants.

In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data but to actually talk through an analysis with someone interactively.

Vannevar’s tools proved useful for the deployment in the Pacific, and Enzenauer and Lowdon say that while they were instructed to always double-check the AI’s work, they didn’t find inaccuracies to be a significant issue. Enzenauer regularly used the tool to track any foreign news reports in which the unit’s exercises were mentioned and to perform sentiment analysis, detecting the emotions and opinions expressed in text. Judging whether a foreign news article reflects a threatening or friendly opinion toward the unit is a task that on previous deployments she had to do manually.

“It was mostly by hand—researching, translating, coding, and analyzing the data,” she says. “It was definitely way more time-consuming than it was when using the AI.” 
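
Vannevar’s pipeline is proprietary, but the general pattern the Marines describe (translate a foreign-language article, summarize it, and score its stance) can be sketched in a few lines with any general-purpose LLM API. The sketch below uses OpenAI’s public Python SDK; the model name, prompt wording, and three-way labels are illustrative assumptions, not the unit’s actual tooling or rubric.

```python
# Minimal sketch of LLM-based triage for open-source news items.
# Not Vannevar Labs' system: just the generic translate/summarize/score
# pattern described above, using OpenAI's Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage_article(text: str) -> str:
    """Translate, summarize, and label one article's stance toward US forces."""
    prompt = (
        "Translate the following news excerpt to English if needed, "
        "give a one-sentence summary, then label its sentiment toward "
        "US forces as HOSTILE, NEUTRAL, or FRIENDLY, with a one-line "
        "justification.\n\n" + text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary model choice for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# The label is only a starting point: as Lowdon notes, an analyst still
# has to validate it against the source.
```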

Still, Enzenauer and Lowdon say there were hiccups, some of which would affect most digital tools: The ships had spotty internet connections much of the time, limiting how quickly the AI model could synthesize foreign intelligence, especially if it involved photos or video. 

With this first test completed, the unit’s commanding officer, Colonel Sean Dynan, said on a call with reporters in February that heavier use of generative AI was coming; this experiment was “the tip of the iceberg.” 

This is indeed the direction that the entire US military is barreling toward at full speed. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. (The US is of course not alone in this approach; notably, Israel has been using AI to sort through information and even generate lists of targets in its war in Gaza, a practice that has been widely criticized.)

Perhaps unsurprisingly, plenty of people outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and has expertise in leading safety audits for AI-powered systems. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” 

Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”

One particular use case that concerns her is sentiment analysis, which she argues is “a highly subjective metric that even humans would struggle to appropriately assess based on media alone.” 

If AI perceives hostility toward US forces where a human analyst would not—or if the system misses hostility that is really there—the military could make a misinformed decision or escalate a situation unnecessarily.

Sentiment analysis is indeed a task that AI has not perfected. Philips, the Vannevar CTO, says the company has built models specifically to judge whether an article is pro-US or not, but MIT Technology Review was not able to evaluate them. 

Chris Mouton, a senior engineer for RAND, recently tested how well-suited generative AI is for the task. He evaluated leading models, including OpenAI’s GPT-4 and an older version of GPT fine-tuned to do such intelligence work, on how accurately they flagged foreign content as propaganda compared with human experts. “It’s hard,” he says, noting that AI struggled to identify more subtle types of propaganda. But he adds that the models could still be useful in lots of other analysis tasks. 

Another limitation of Vannevar’s approach, Khlaaf says, is that the usefulness of open-source intelligence is debatable. Mouton says that open-source data can be “pretty extraordinary,” but Khlaaf points out that unlike classified intel gathered through reconnaissance or wiretaps, it is exposed to the open internet—making it far more susceptible to misinformation campaigns, bot networks, and deliberate manipulation, as the US Army has warned.

For Mouton, the biggest open question now is whether these generative AI technologies will be simply one investigatory tool among many that analysts use—or whether they’ll produce the subjective analysis that’s relied upon and trusted in decision-making. “This is the central debate,” he says. 

What everyone agrees on is that AI models are accessible—you can just ask them a question about complex pieces of intelligence, and they’ll respond in plain language. But it is still in dispute which imperfections will be acceptable in the name of efficiency. 

Update: This story was updated to include additional context from Heidy Khlaaf.

Love or immortality: A short story

1.

Sophie and Martin are at the 2012 Gordon Research Conference on the Biology of Aging in Ventura, California. It is a foggy February weekend. Both are disappointed about how little sun there is on the California beach.

They are two graduate students—Sophie in her sixth and final year, Martin in his fourth—who have traveled from different East Coast cities to present posters on their work. Martin’s shows health data collected from supercentenarians compared with the general Medicare population, capturing which diseases are more and less common in each group. Sophie is presenting on her recently accepted first-author paper in Aging Cell on two specific genes that, when activated, extend lifespan in C. elegans roundworms, the model organism of her research. 

2.

Sophie walks by Martin’s poster after she is done presenting her own. She is not immediately impressed by his work. It is not published, for one thing. But she sees how it is attention-grabbing and relevant, even necessary. He has a little crowd listening to him. He notices her—a frowning girl—standing in the back and begins to talk louder, hoping she hears.

“Supercentenarians are much less likely to have seven diseases,” he says, pointing to his poster. “Alzheimer’s, heart failure, diabetes, depression, prostate cancer, hip fracture, and chronic kidney disease. Though they have a higher incidence of four diseases, which are arthritis, cataracts, osteoporosis, and glaucoma. These aren’t linked to mortality, but they do affect quality of life.”

What stands out to Sophie is the confidence in Martin’s voice, despite the unsurprising nature of the findings. She admires that sound, its sturdiness. She makes note of his name and plans to seek him out. 

3.

They find one another in the hotel bar among other graduate students. The students are talking about the logistics of their futures: Who is going for a postdoc, who will opt for industry, do any have job offers already, where will their research have the most impact, is it worth spending years working toward something so uncertain? They stay up too late, dissecting journal articles they’ve read as if they were debating politics. They enjoy the freedom away from their labs and PIs. 

Martin says, again with that confidence, that he will become a professor. Sophie says she likely won’t go down that path. She has received an offer to start as a scientist at an aging research startup called Abyssinian Bio, after she defends. Martin says, “Wouldn’t your work make more sense in an academic setting, where you have more freedom and power over what you do?” She says, “But that could be years from now and I want to start my real life, so …” 

4-18.

Martin is enamored with Sophie. She is not only brilliant; she is helpful. She strengthens his papers with precise edits and grounds his arguments with stronger evidence. Sophie is enamored with Martin. He is not only ambitious; he is supportive and adventurous. He encourages her to try new activities and tools, both in and out of work, like learning to ride a motorcycle or using CRISPR.

Martin visits Sophie in San Francisco whenever he can, which amounts to a weekend or two every other month. After two years, their long-distance relationship is taking its toll. They want more weekends, more months, more everything together. They make plans for him to get a postdoc near her, but after multiple rejections from the labs where he most wants to work, his resentment toward academia grows. 

“They don’t see the value of my work,” he says.

19.

“Join Abyssinian,” Sophie offers.

The company is growing. They want more researchers with data science backgrounds. He takes the job, drawn more by their future together than by the science.

20-35.

For a long time, they are happy. They marry. They do their research. They travel. Sophie visits Martin’s extended family in France. Martin goes with Sophie to her cousin’s wedding in Taipei. They get a dog. The dog dies. They are both devastated but increasingly motivated to better understand the mechanisms of aging. Maybe their next dog will have the opportunity to live longer. They do not get a next dog.

Sophie moves up at Abyssinian. Though she works in industry, her research is published in well-respected journals. She collaborates well with her colleagues. Eventually, she is promoted to executive director of research. 

Martin stalls at the rank of principal scientist, and though Sophie is technically his boss—or his boss’s boss—he genuinely doesn’t mind when others call him “Dr. Sophie Xie’s husband.”

40.

At dinner on his 35th birthday, a friend jokes that Martin is now middle-aged. Sophie laughs and agrees, though she is older than Martin. Martin joins in the laughter, but this small comment unlocks a sense of urgency inside him. What once felt hypothetical—his own death, the death of his wife—now appears very close. He can feel his wrinkles forming.  

First come the subtle shifts in how he talks about his research and Abyssinian’s work. He wants to “defeat” and “obliterate” aging, which he comes to describe as humankind’s “greatest adversary.” 

43.

He begins taking supplements touted by tech influencers. He goes on a calorie-restricted diet. He gets weekly vitamin IV sessions. He looks into blood transfusions from young donors, but Sophie tells him to stop with all the fake science. She says he’s being ridiculous, that what he’s doing could be dangerous.  

Martin, for the first time, sees Sophie differently. Not without love, but love burdened by an opposing weight, what others might recognize as resentment. Sophie is dedicated to the demands of her growing department. Martin thinks she is not taking the task of living longer seriously enough. He does not want her to die. He does not want to die. 

Nobody at Abyssinian is taking the task of living longer seriously enough. Of all the aging bio startups he could have ended up at, how has he ended up at one with such modest—no, lazy—goals? He begins publicly dismissing basic research as “too slow” and “too limited,” which offends many of his and Sophie’s colleagues. 

Sophie defends him, says he is still doing good work, despite the evidence. She is busy, traveling often for conferences, and misclassifies the changes in Martin’s attitude as temporary outliers.

44.

One day, during a meeting, Martin says to Jerry, a well-respected scientist at Abyssinian and in the electron microscopy imaging community at large, that EM is an outdated, old, crusty technology. Martin says it is stupid to use it when there are more advanced, cutting-edge methods, like cryo-EM and super-resolution microscopy. Martin has always been outspoken, but this instance veers into rudeness. 

At home, Martin and Sophie argue. Initially, they argue about whether tools of the past can be useful to their work. Then the argument morphs. What is the true purpose of their research? Martin says it’s called anti-aging research for a reason: It’s to defy aging! Sophie says she’s never called her work anti-aging research; she calls it aging research or research into the biology of aging. And Abyssinian’s overarching mission is more simply to find druggable targets for chronic and age-related diseases. Occasionally, the company’s marketing arm will push out messaging about extending the human lifespan by 20 years, but that has nothing to do with scientists like them in R&D. Martin seethes. Only 20 years! What about hundreds? Thousands? 

45-49.

They continue to argue and the arguments are roundabout, typically ending with Sophie crying, absconding to her sister’s house, and the two of them not speaking for short periods of time.

50.

What hurts Sophie most is Martin’s persistent dismissal of death as merely an engineering problem to be solved. Sophie thinks of the ways the C. elegans she observes regulate their lifespans in response to environmental stress. The complex dance of genes and proteins that orchestrates their aging process. In the previous month’s experiment, a seemingly simple mutation produced unexpected effects across three generations of worms. Nature’s complexity still humbles her daily. There is still so much unknown. 

Martin is at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity. And all you want to do is sit in the lab to watch worms die.”

50.

Martin blames the past. He realizes he should have tried harder to become a professor. Let Sophie make the industry money—he could have had academic clout. Professor Warwick. It would have had a nice sound to it. To his dismay, everyone in his lab calls him Martin. Abyssinian has a first-name policy. Something about flat hierarchies making for better collaboration. Good ideas could come from anyone, even a lowly, unintelligent senior associate scientist in Martin’s lab who barely understands how to process a data set. A great idea could come from anyone at all—except him, apparently. Sophie has made that clear.

51-59.

They live in a tenuous peace for some time, perfecting the art of careful scheduling: separate coffee times, meetings avoided, short conversations that stick to the day-to-day facts of their lives.

60.

Then Martin stands up to interrupt a presentation by the VP of research to announce that studying natural aging is pointless since they will soon eliminate it entirely. While Jerry may have shrugged off Martin’s aggressiveness, the VP does not. This leads to a blowout fight between Martin and many of his colleagues, in which Martin refuses to apologize and calls them all shortsighted idiots. 

Sophie watches with a mixture of fear and awe. Martin thinks: Can’t she, my wife, just side with me this once? 

61.

Back at home:

Martin at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity.” He taps the powder into his protein shake with the precision of a scientist measuring reagents. “And all you want to do is sit in the lab to watch worms die.”

Sophie observes his familiar movements, now foreign in their desperation. The kitchen light catches the silver spreading at his temples and on his chin—the very evidence of aging he is trying so hard to erase.

“That’s not true,” she says.

Martin gulps down his shake.

“What about us? What about children?”

Martin coughs, then laughs, a sound that makes Sophie flinch. “Why would we have children now? You certainly don’t have the time. But if we solve aging, which I believe we can, we’d have all the time in the world.”

“We used to talk about starting a family.”

“Any children we have should be born into a world where we already know they never have to die.”

“We could both make the time. I want to grow old together—”

All Martin hears are promises that lead to nothing, nowhere.  

“You want us to deteriorate? To watch each other decay?”

“I want a real life.”

“So you’re choosing death. You’re choosing limitation. Mediocrity.”

64.

Martin doesn’t hear from his wife for four days, despite texting her 16 times—12 too many, by his count. He finally breaks down enough to call her in the evening, after a couple of glasses of aged whisky (a gift from a former colleague, which Martin has rarely touched and kept hidden in the far back of a desk drawer). 

Voicemail. And after this morning’s text, still no glimmering ellipsis bubble to indicate Sophie’s typing. 

66.

Forget her, he thinks, leaning back in his Steelcase chair, adjusted specifically for his long runner’s legs and shorter-than-average torso. At 39, Martin keeps spreadsheets of vitals that now show an upward trajectory: proof of his ability to reverse his biological age. Sophie does not appreciate this. He stares out his office window, down at the employees crawling around Abyssinian Bio’s main quad. How small, he thinks. How significantly unaware of the future’s true possibilities. Sophie is like them. 

67.

Forget her, he thinks again as he turns down a bay toward Robert, one of his struggling postdocs, who is sitting at his bench staring at his laptop. As Martin approaches, Robert minimizes several windows, leaving only his home screen behind.

“Where are you at with the NAD+ data?” Martin asks.

Robert shifts in his chair to face Martin. The skin of his neck grows red and splotchy. Martin stares at it in disgust.

“Well?” he asks again. 

“Oh, I was told not to work on that anymore?” The boy has a tendency to speak in the lilt of questions. 

“By who?” Martin demands.

“Uh, Sophie?” 

“I see. Well, I expect new data by end of day.” 

“Oh, but—”

Martin narrows his eyes. The red splotches on Robert’s neck grow larger. 

“Um, okay,” the boy says, returning his focus to the computer. 

Martin decides a response is called for …

70.

Immortality Promise

I am immortal. This doesn’t make me special. In fact, most people on Earth are immortal. I am 6,000 years old. Now, 6,000 years of existence give one a certain perspective. I remember back when genetic engineering and knowledge about the processes behind aging were still in their infancy. Oh, how people argued and protested.

“It’s unethical!”

“We’ll kill the Earth if there’s no death!”

“Immortal people won’t be motivated to do anything! We’ll become a useless civilization living under our AI overlords!” 

I believed back then, and now I know. Their concerns had no ground to stand on.

Eternal life isn’t even remarkable anymore, but being among its architects and early believers still garners respect from the world. The elegance of my team’s solution continues to fill me with pride. We didn’t just halt aging; we mastered it. My cellular machinery hums with an efficiency that would make evolution herself jealous.

Those early protesters—bless their mortal, no-longer-beating hearts—never grasped the biological imperative of what we were doing. Nature had already created functionally immortal organisms—the hydra, certain jellyfish species, even some plants. We simply perfected what evolution had sketched out. The supposed ethical concerns melted away once people understood that we weren’t defying nature. We were fulfilling its potential.

Today, those who did not want to be immortal aren’t around. Simple as that. Those who are here do care about the planet more than ever! There are almost no diseases, and we’re all very productive people. Young adults—or should I say young-looking adults—are naturally restless and energetic. And with all this life, you have the added benefit of not wasting your time on a career you might hate! You get to try different things and find out what you’re really good at and where you’re appreciated! Life is not short! Resources are plentiful!

Of course, biological immortality doesn’t equal invincibility. People still die. Just not very often. My colleagues in materials science developed our modern protective exoskeletons. They’re elegant solutions, though I prefer to rely on my enhanced reflexes and reinforced skeletal structure most days. 

The population concerns proved mathematically unfounded. Stable reproduction rates emerged naturally once people realized they had unlimited time to start families. I’ve had four sets of children across 6,000 years, each born when I felt truly ready to pass on another iteration of my accumulated knowledge. With more life, people have much more patience. 

Now we are on to bigger and more ambitious projects. We conquered survival of individuals. The next step: survival of our species in this universe. The sun’s eventual death poses an interesting challenge, but nothing we can’t handle. We have colonized five planets and two moons in our solar system, and we will colonize more. Humanity will adapt to whatever environment we encounter. That’s what we do.

My ancient motorcycle remains my favorite indulgence. I love taking it for long cruises on the old Earth roads that remain intact. The neural interface is state-of-the-art, of course. But mostly I keep it because it reminds me of earlier times, when we thought death was inevitable and life was limited to a single planet. The future stretches out before us like an infinity I helped create—yet another masterpiece in the eternal gallery of human evolution.

71.

Martin feels better after writing it out. He rereads it a couple times, feels even better. Then he has the idea to send his writing to the department administrator. He asks her to create a new tab on his lab page, titled “Immortality Promise,” and to post his piece there. That will get his message across to Sophie and everyone at Abyssinian. 

72.

Sophie’s boss, Ray, is the first to email her. The subject line: “martn” [sic]. No further words in the body. Ray is known to be short and blunt in all his communications, but his meaning is always clear. They’ve had enough conversations about Martin by then. She is already in the process of slowly shutting down his projects, has been ignoring his texts and calls because of this. Now she has to move even faster. 

73.

Sophie leaves her office and goes into the lab. As an executive, she is not expected to do experiments, but watching a thousand tiny worms crawl across their agar plates soothes her. Each of the ones she now looks at carries a fluorescent marker she designed to track mitochondrial dynamics during aging. The green glow pulses with their movements, like stars blinking in a microscopic galaxy. She spent years developing this strain of C. elegans, carefully selecting for longevity without sacrificing health. The worms that lived longest weren’t always the healthiest—a truth about aging that seemed to elude Martin. Those worms taught her more about the genuine complexity of aging. Just last week, she observed something unexpected: The mitochondrial networks in her long-lived strains showed subtle patterns of reorganization never documented before. The discovery felt intimate, like being trusted with a secret.

“How are things looking?” Jerry appears beside her. “That new strain expressing the dual markers?”

Sophie nods, adjusting the focus. “Look at this network pattern. It’s different from anything in the literature.” She shifts aside so Jerry can see. This is what she loves about science: the genuine puzzles, the patient observation, the slow accumulation of knowledge that, while far removed from a specific application, could someday help people age with dignity.

“Beautiful,” Jerry murmurs. He straightens. “I heard about Martin’s … post.”

Sophie closes her eyes for a moment, the image of the mitochondrial networks still floating in her vision. She’s read Martin’s “Immortality Promise” piece three times, each more painful than the last. Not because of its grandiose claims—those were comically disconnected from reality—but because of what it’s revealed about her husband. The writing pulsed with a frightening certainty, a complete absence of doubt or wonder. Gone was the scientist who once spent many lively evenings debating with her about the evolutionary purpose of aging, who delighted in being proved wrong because it meant learning something new. 

74.

She sees in his words a man who has abandoned the fundamental principles of science. His piece reads like a religious text or science fiction story, casting himself as the hero. He isn’t pursuing research anymore. He hasn’t been for a long time. 

She wonders how and when he arrived there. The change in Martin didn’t take place overnight. It was gradual, almost imperceptible—not unlike watching someone age. It wasn’t easy to notice if you saw the person every day; Sophie feels guilty for not noticing. Then again, she read a study published a few months ago by Stanford researchers that found people do not age linearly but in spurts—specifically, around 44 and 60. Shifts in the body lead to sudden accelerations of change. If she’s honest with herself, she knew this was happening to Martin, to their relationship. But she chose to ignore it, give other problems precedence. Now it is too late. Maybe if she’d addressed the conditions right before the spike—but how? Wasn’t it inevitable?—he would not have gone from scientist to fanatic.

75.

“You’re giving the keynote at next month’s Gordon conference,” Jerry reminds her, pulling her back to reality. “Don’t let this overshadow that.”

She manages a small smile. Her work has always been methodical, built on careful observation and respect for the fundamental mysteries of biology. The keynote speech represents more than five years of research: countless hours of guiding her teams, of exciting discussions among her peers, of watching worms age and die, of documenting every detail of their cellular changes. It is one of the biggest honors of her career. There is poetry in it, she thinks—in the collisions between discoveries and failures. 

76.

The knock on her office door comes at 2:45. Linda from HR, right on schedule. Sophie walks with her to conference room B2, two floors below, where Martin’s group resides. Through the glass walls of each lab, they see scientists working at their benches. One adjusts a microscope’s focus. Another pipettes clear liquid into rows of tubes. Three researchers point at data on a screen. Each person is investigating some aspect of aging, one careful experiment at a time. The work will continue, with or without Martin.

In the conference room, Sophie opens her laptop and pulls up the folder of evidence. She has been collecting it for months. Martin’s emails to colleagues, complaints from collaborators and direct reports, and finally, his “Immortality Promise” piece. The documentation is thorough, organized chronologically. She has labeled each file with dates and brief descriptions, as she would for any other data.

77.

Martin walks in at 3:00. Linda from HR shifts in her chair. Sophie is the one to hand the papers over to Martin; this much she owes him. They contain words like “termination” and “effective immediately.” Martin’s face complicates itself when he looks them over. Sophie hands over a pen and he signs quickly.  

He stands, adjusts his shirt cuffs, and walks to the door. He turns back.

“I’ll prove you wrong,” he says, looking at Sophie. But what stands out to her is the crack in his voice on the last word. 

Sophie watches him leave. She picks up the signed papers and hands them to Linda, and then walks out herself. 

Alexandra Chang is the author of Days of Distraction and Tomb Sweeping and is a National Book Foundation 5 under 35 honoree. She lives in Camarillo, California.

How AI can help supercharge creativity

Sometimes Lizzie Wilson shows up to a rave with her AI sidekick. 

One weeknight this past February, Wilson plugged her laptop into a projector that threw her screen onto the wall of a low-ceilinged loft space in East London. A small crowd shuffled in the glow of dim pink lights. Wilson sat down and started programming.

Techno clicks and whirs thumped from the venue’s speakers. The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen—tweaking sounds, looping beats, pulling a face when she messed up.  

Wilson is a live coder. Instead of using purpose-built software like most electronic music producers, live coders create music by writing the code to generate it on the fly. It’s an improvised performance art known as algorave.
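
Live coders typically work in purpose-built environments such as TidalCycles or Sonic Pi rather than general-purpose Python, but the core idea (the code is the instrument) fits in a short, self-contained sketch. Everything below, from the tempo to the crude drum synthesis, is an illustrative toy, not Wilson’s actual setup.

```python
# Toy illustration of "code as instrument": synthesize a two-bar techno
# loop from scratch and write it to a WAV file.
import wave
import numpy as np

SR = 44100                    # sample rate in Hz
BPM = 130
beat = int(SR * 60 / BPM)     # samples per beat

def kick(n):
    """Pitch-dropping sine burst: a crude kick drum."""
    t = np.arange(n) / SR
    freq = 50 + 150 * np.exp(-t * 20)   # 200 Hz falling toward 50 Hz
    return np.sin(2 * np.pi * np.cumsum(freq) / SR) * np.exp(-t * 8)

def hat(n):
    """Short decaying noise burst: a crude closed hi-hat."""
    t = np.arange(n) / SR
    return np.random.uniform(-1, 1, n) * np.exp(-t * 60) * 0.3

loop = np.zeros(beat * 8)
for i in range(8):
    loop[i * beat : (i + 1) * beat] += kick(beat)   # kick on every beat
    start = i * beat + beat // 2                    # hat on every offbeat
    loop[start : start + beat // 4] += hat(beat // 4)

pcm = (np.clip(loop, -1, 1) * 32767).astype(np.int16)
with wave.open("loop.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

A live coder edits and re-runs code like this while it plays; a co-creative agent like Wilson’s would propose new snippets to splice into the running loop.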

“It’s kind of boring when you go to watch a show and someone’s just sitting there on their laptop,” she says. “You can enjoy the music, but there’s a performative aspect that’s missing. With live coding, everyone can see what it is that I’m typing. And when I’ve had my laptop crash, people really like that. They start cheering.”

Taking risks is part of the vibe. And so Wilson likes to dial up her performances one more notch by riffing off what she calls a live-coding agent, a generative AI model that comes up with its own beats and loops to add to the mix. Often the model suggests sound combinations that Wilson hadn’t thought of. “You get these elements of surprise,” she says. “You just have to go for it.”

Two performers at a table, with a disapproving cat covered in code on the screen behind them.
ADELA FESTIVAL

Wilson, a researcher at the Creative Computing Institute at the University of the Arts London, is just one of many working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves. She and her colleagues built the live-coding agent to explore how artificial intelligence can be used to support human artistic endeavors—in Wilson’s case, musical improvisation.

It’s a vision that goes beyond the promise of existing generative tools put out by companies like OpenAI and Google DeepMind. Those can automate a striking range of creative tasks and offer near-instant gratification. But at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.

And so they are looking for ways to inject human creativity back into the process. The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.

Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers. 

Explosion of creativity

There’s no one way to be creative, but we all do it. We make everything from memes to masterpieces, infant doodles to industrial designs. There’s a mistaken belief, typically among adults, that creativity is something you grow out of. But being creative—whether cooking, singing in the shower, or putting together super-weird TikToks—is still something that most of us do just for the fun of it. It doesn’t have to be high art or a world-changing idea (and yet it can be). Creativity is basic human behavior; it should be celebrated and encouraged. 

When generative text-to-image models like Midjourney, OpenAI’s DALL-E, and the popular open-source Stable Diffusion arrived, they sparked an explosion of what looked a lot like creativity. Millions of people were now able to create remarkable images of pretty much anything, in any style, with the click of a button. Text-to-video models came next. Now startups like Udio are developing similar tools for music. Never before have the fruits of creation been within reach of so many.

But for a number of researchers and artists, the hype around these tools has warped the idea of what creativity really is. “If I ask the AI to create something for me, that’s not me being creative,” says Jeba Rezwana, who works on co-creativity at Towson University in Maryland. “It’s a one-shot interaction: You click on it and it generates something and that’s it. You cannot say ‘I like this part, but maybe change something here.’ You cannot have a back-and-forth dialogue.”

Rezwana is referring to the way most generative models are set up. You can give the tools feedback and ask them to have another go. But each new result is generated from scratch, which can make it hard to nail exactly what you want. As the filmmaker Walter Woodman put it last year after his art collective Shy Kids made a short film with OpenAI’s text-to-video model for the first time: “Sora is a slot machine as to what you get back.”

What’s more, the latest versions of some of these generative tools do not even use your submitted prompt as is to produce an image or video (at least not on their default settings). Before a prompt is sent to the model, the software edits it—often by adding dozens of hidden words—to make it more likely that the generated image will appear polished.

“Extra things get added to juice the output,” says Mike Cook, a computational creativity researcher at King’s College London. “Try asking Midjourney to give you a bad drawing of something—it can’t do it.” These tools do not give you what you want; they give you what their designers think you want.

Mike Cook. COURTESY OF MIKE COOK

All of which is fine if you just need a quick image and don’t care too much about the details, says Nick Bryan-Kinns, also at the Creative Computing Institute: “Maybe you want to make a Christmas card for your family or a flyer for your community cake sale. These tools are great for that.”

In short, existing generative models have made it easy to create, but they have not made it easy to be creative. And there’s a big difference between the two. For Cook, relying on such tools could in fact harm people’s creative development in the long run. “Although many of these creative AI systems are promoted as making creativity more accessible,” he wrote in a paper published last year, they might instead have “adverse effects on their users in terms of restricting their ability to innovate, ideate, and create.” Given how much generative models have been championed for putting creative abilities at everyone’s fingertips, the suggestion that they might in fact do the opposite is damning.  

In the game Disc Room, players navigate a room of moving buzz saws.
Cook used AI to design a new level for the game. The result was a room where none of the discs actually moved.

He’s far from the only researcher worrying about the cognitive impact of these technologies. In February a team at Microsoft Research Cambridge published a report concluding that generative AI tools “can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” The researchers found that with the use of generative tools, people’s effort “shifts from task execution to task stewardship.”

Cook is concerned that generative tools don’t let you fail—a crucial part of learning new skills. We have a habit of saying that artists are gifted, says Cook. But the truth is that artists work at their art, developing skills over months and years.

“If you actually talk to artists, they say, ‘Well, I got good by doing it over and over and over,’” he says. “But failure sucks. And we’re always looking at ways to get around that.”

Generative models let us skip the frustration of doing a bad job. 

“Unfortunately, we’re removing the one thing that you have to do to develop creative skills for yourself, which is fail,” says Cook. “But absolutely nobody wants to hear that.”

Surprise me

And yet it’s not all bad news. Artists and researchers are buzzing at the ways generative tools could empower creators, pointing them in surprising new directions and steering them away from dead ends. Cook thinks the real promise of AI will be to help us get better at what we want to do rather than doing it for us. For that, he says, we’ll need to create new tools, different from the ones we have now. “Using Midjourney does not do anything for me—it doesn’t change anything about me,” he says. “And I think that’s a wasted opportunity.”

Ask a range of researchers studying creativity to name a key part of the creative process and many will say: reflection. It’s hard to define exactly, but reflection is a particular type of focused, deliberate thinking. It’s what happens when a new idea hits you. Or when an assumption you had turns out to be wrong and you need to rethink your approach. It’s the opposite of a one-shot interaction.

Looking for ways that AI might support or encourage reflection—asking it to throw new ideas into the mix or challenge ideas you already hold—is a common thread across co-creativity research. If generative tools like DALL-E make creation frictionless, the aim here is to add friction back in. “How can we make art without friction?” asks Elisa Giaccardi, who studies design at the Polytechnic University of Milan in Italy. “How can we engage in a truly creative process without material that pushes back?”

Take Wilson’s live-coding agent. She claims that it pushes her musical improvisation in directions she might not have taken by herself. Trained on public code shared by the wider live-coding community, the model suggests snippets of code that are closer to other people’s styles than her own. This makes it more likely to produce something unexpected. “Not because you couldn’t produce it yourself,” she says. “But the way the human brain works, you tend to fall back on repeated ideas.”

Last year, Wilson took part in a study run by Bryan-Kinns and his colleagues in which they surveyed six experienced musicians as they used a variety of generative models to help them compose a piece of music. The researchers wanted to get a sense of what kinds of interactions with the technology were useful and which were not.

The participants all said they liked it when the models made surprising suggestions, even when those were the result of glitches or mistakes. Sometimes the results were simply better. Sometimes the process felt fresh and exciting. But a few people struggled with giving up control. It was hard to direct the models to produce specific results or to repeat results that the musicians had liked. “In some ways it’s the same as being in a band,” says Bryan-Kinns. “You need to have that sense of risk and a sense of surprise, but you don’t want it totally random.”

Alternative designs

Cook comes at surprise from a different angle: He coaxes unexpected insights out of AI tools that he has developed to co-create video games. One of his tools, Puck, which was first released in 2022, generates designs for simple shape-matching puzzle games like Candy Crush or Bejeweled. A lot of Puck’s designs are experimental and clunky—don’t expect it to come up with anything you are ever likely to play. But that’s not the point: Cook uses Puck—and a newer tool called Pixie—to explore what kinds of interactions people might want to have with a co-creative tool.

Pixie can read computer code for a game and tweak certain lines to come up with alternative designs. Not long ago, Cook was working on a copy of a popular game called Disc Room, in which players have to cross a room full of moving buzz saws. He asked Pixie to help him come up with a design for a level that skilled and unskilled players would find equally hard. Pixie designed a room where none of the discs actually moved. Cook laughs: It’s not what he expected. “It basically turned the room into a minefield,” he says. “But I thought it was really interesting. I hadn’t thought of that before.”
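
Pixie’s internals aren’t publicly documented in detail, so the following is only a hedged illustration of the general move (read the game’s code, tweak certain lines) reduced to its simplest form: randomly perturbing numeric constants in a snippet of level configuration. The variable names and mutation rule are invented for the example.

```python
# Sketch of code-level design mutation: perturb numeric literals in a
# game-config snippet to propose alternative level designs.
import random
import re

level_code = """
disc_count = 12
disc_speed = 3.5
room_width = 640
"""

def mutate(code: str, scale: float = 0.5) -> str:
    """Nudge every numeric literal by up to +/-50% to propose a variant."""
    def tweak(m):
        v = float(m.group()) * (1 + random.uniform(-scale, scale))
        return f"{v:.1f}" if "." in m.group() else str(max(0, round(v)))
    return re.sub(r"\d+\.\d+|\d+", tweak, code)

print(mutate(level_code))  # prints one randomly perturbed variant
```

A real tool would go much further (compiling and play-testing each variant, say), but even a naive mutation loop can land on designs a human would not have typed out deliberately.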

Researcher Anne Arzberger developed experimental AI tools to come up with gender-neutral toy designs.

Pushing back on assumptions, or being challenged, is part of the creative process, says Anne Arzberger, a researcher at the Delft University of Technology in the Netherlands. “If I think of the people I’ve collaborated with best, they’re not the ones who just said ‘Yes, great’ to every idea I brought forth,” she says. “They were really critical and had opposing ideas.”

She wants to build tech that provides a similar sounding board. As part of a project called Creating Monsters, Arzberger developed two experimental AI tools that help designers find hidden biases in their designs. “I was interested in ways in which I could use this technology to access information that would otherwise be difficult to access,” she says.

For the project, she and her colleagues looked at the problem of designing toy figures that would be gender neutral. The team (which included Giaccardi) used Teachable Machine, a web app built by Google researchers in 2017 that makes it easy to train your own machine-learning model to classify different inputs, such as images. They trained this model with a few dozen images that Arzberger had labeled as being masculine, feminine, or gender neutral.

Arzberger then asked the model to identify the genders of new candidate toy designs. She found that quite a few designs were judged to be feminine even when she had tried to make them gender neutral. She felt that her views of the world—her own hidden biases—were being exposed. But the tool was often right: It challenged her assumptions and helped the team improve the designs. The same approach could be used to assess all sorts of design characteristics, she says.
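
Teachable Machine hides the training behind a web interface, but the workflow Arzberger describes (label a few dozen images, train a classifier, score new candidates) can be approximated with standard tools. In this sketch the folder names, image format, and choice of logistic regression on raw pixels are all assumptions made for illustration; Teachable Machine itself fine-tunes a pretrained network.

```python
# Approximation of the Teachable Machine workflow: train a small image
# classifier on labeled toy-design photos, then score new designs.
# Assumes folders masculine/, feminine/, neutral/, candidates/ of PNGs.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def load_images(folder):
    """Load a folder of images as flattened, normalized pixel vectors."""
    return [
        np.asarray(Image.open(p).convert("RGB").resize((64, 64)),
                   dtype=np.float32).ravel() / 255.0
        for p in sorted(Path(folder).glob("*.png"))
    ]

labels = ["masculine", "feminine", "neutral"]
X, y = [], []
for i, name in enumerate(labels):
    imgs = load_images(name)
    X += imgs
    y += [i] * len(imgs)

clf = LogisticRegression(max_iter=1000).fit(np.stack(X), y)

# A "neutral" design that scores high on the feminine class is exactly
# the kind of hidden bias Arzberger says the tool surfaced.
for probs in clf.predict_proba(np.stack(load_images("candidates"))):
    print(dict(zip(labels, probs.round(2))))
```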

Arzberger then used a second model, a version of a tool made by the generative image and video startup Runway, to come up with gender-neutral toy designs of its own. First the researchers trained the model to generate and classify designs for male- and female-looking toys. They could then ask the tool to find a design that was exactly midway between the male and female designs it had learned.
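
The Runway-based tool isn’t publicly documented either, but “exactly midway between” two learned designs maps naturally onto interpolation in a generative model’s latent space. In the sketch below, encode and decode are hypothetical stand-ins for whatever encoder and decoder the actual model exposed.

```python
# Latent-space interpolation: blend two designs in the model's learned
# representation rather than in pixel space.
def midpoint_design(img_a, img_b, encode, decode, alpha=0.5):
    """Return a design between img_a and img_b; alpha=0.5 is the midpoint."""
    z_a = encode(img_a)                      # latent vector for design A
    z_b = encode(img_b)                      # latent vector for design B
    z_mid = (1 - alpha) * z_a + alpha * z_b  # linear blend of the two
    return decode(z_mid)                     # decode back into an image
```

Linear blending is the simplest choice; in practice generative models often behave better under spherical interpolation, but the idea is the same.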

Generative models can give feedback on designs that human designers might miss by themselves, she says: “We can really learn something.” 

Taking control

The history of technology is full of breakthroughs that changed the way art gets made, from recipes for vibrant new paint colors to photography to synthesizers. In the 1960s, the Stanford researcher John Chowning spent years working on an esoteric algorithm that could manipulate the frequencies of computer-generated sounds. Stanford licensed the tech to Yamaha, which built it into its synthesizers—including the DX7, the cool new sound behind 1980s hits such as Tina Turner’s “The Best,” A-ha’s “Take On Me,” and Prince’s “When Doves Cry.”
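
Chowning’s algorithm, frequency modulation (FM) synthesis, is compact enough to show whole: one oscillator (the modulator) wiggles the phase of another (the carrier), and the modulation index controls how rich the resulting spectrum sounds. The parameter values below are arbitrary choices that happen to give a bell-like tone.

```python
# FM synthesis in a few lines: a carrier sine wave whose phase is
# modulated by a second sine wave.
import wave
import numpy as np

SR = 44100
t = np.arange(SR * 2) / SR        # two seconds of audio

fc, fm = 440.0, 220.0             # carrier and modulator frequencies
index = 5.0 * np.exp(-t * 3)      # modulation index, decaying over time
env = np.exp(-t * 2)              # overall amplitude envelope

tone = env * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

pcm = (tone * 32767).astype(np.int16)
with wave.open("fm_bell.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

Letting the modulation index decay over time is what gives FM bells their characteristic bright attack that mellows as the note rings out.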

Bryan-Kinns is fascinated by how artists and designers find ways to use new technologies. “If you talk to artists, most of them don’t actually talk about these AI generative models as a tool—they talk about them as a material, like an artistic material, like a paint or something,” he says. “It’s a different way of thinking about what the AI is doing.” He highlights the way some people are pushing the technology to do weird things it wasn’t designed to do. Artists often appropriate or misuse these kinds of tools, he says.

Bryan-Kinns points to the work of Terence Broad, another colleague of his at the Creative Computing Institute, as a favorite example. Broad employs techniques like network bending, which involves inserting new layers into a neural network to produce glitchy visual effects in generated images, and generating images with a model trained on no data, which produces almost Rothko-like abstract swabs of color.
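
Broad’s exact techniques are his own, but the flavor of “a model trained on no data” is easy to glimpse: even a randomly initialized network that maps pixel coordinates to colors (a CPPN-style toy, to be clear, not Broad’s method) already produces soft abstract color fields.

```python
# Render the "painting" of an untrained network: random weights, no data.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)            # change the seed, get a new image
H = W = 256
ys, xs = np.mgrid[0:H, 0:W] / H - 0.5     # pixel coordinates in [-0.5, 0.5]
h = np.stack([xs, ys, np.hypot(xs, ys)], axis=-1)  # (H, W, 3) input features

for _ in range(4):                        # a few random, untrained layers
    w = rng.normal(0, 1.5, (h.shape[-1], 8))
    h = np.tanh(h @ w)
w_out = rng.normal(0, 1.5, (h.shape[-1], 3))
rgb = (np.tanh(h @ w_out) * 0.5 + 0.5) * 255       # map to [0, 255]

Image.fromarray(rgb.astype(np.uint8)).save("untrained.png")
```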

But Broad is an extreme case. Bryan-Kinns sums it up like this: “The problem is that you’ve got this gulf between the very commercial generative tools that produce super-high-quality outputs but you’ve got very little control over what they do—and then you’ve got this other end where you’ve got total control over what they’re doing but the barriers to use are high because you need to be somebody who’s comfortable getting under the hood of your computer.”

“That’s a small number of people,” he says. “It’s a very small number of artists.”

Arzberger admits that working with her models was not straightforward. Running them took several hours, and she’s not sure the Runway tool she used is even available anymore. Bryan-Kinns, Arzberger, Cook, and others want to take the kinds of creative interactions they are discovering and build them into tools that can be used by people who aren’t hardcore coders. 

Researcher Terence Broad creates dynamic images using a model trained on no data, which produces almost Rothko-like abstract color fields.

Finding the right balance between surprise and control will be hard, though. Midjourney can surprise, but it gives few levers for controlling what it produces beyond your prompt. Some have claimed that writing prompts is itself a creative act. “But no one struggles with a paintbrush the way they struggle with a prompt,” says Cook.

Faced with that struggle, Cook sometimes watches his students just go with the first results a generative tool gives them. “I’m really interested in this idea that we are priming ourselves to accept that whatever comes out of a model is what you asked for,” he says. He is designing an experiment that will vary single words and phrases in similar prompts to test how much of a mismatch people see between what they expect and what they get. 

But it’s early days yet. In the meantime, companies developing generative models typically emphasize results over process. “There’s this impressive algorithmic progress, but a lot of the time interaction design is overlooked,” says Rezwana.  

For Wilson, the crucial choice in any co-creative relationship is what you do with what you’re given. “You’re having this relationship with the computer that you’re trying to mediate,” she says. “Sometimes it goes wrong, and that’s just part of the creative process.” 

When AI gives you lemons—make art. “Wouldn’t it be fun to have something that was completely antagonistic in a performance—like, something that is actively going against you—and you kind of have an argument?” she says. “That would be interesting to watch, at least.” 

A new biosensor can detect bird flu in five minutes

Over the winter, eggs suddenly became all but impossible to buy. As a bird flu outbreak rippled through dairy and poultry farms, grocery stores struggled to keep them on shelves. The shortages and record-high prices in February raised costs dramatically for restaurants and bakeries and led some shoppers to skip the breakfast staple entirely. But a team based at Washington University in St. Louis has developed a device that could help slow future outbreaks by detecting bird flu in air samples in just five minutes. 

Bird flu is an airborne virus that spreads between birds and other animals. Outbreaks on poultry and dairy farms are devastating; mass culling of exposed animals can be the only way to stem outbreaks. Some bird flu strains have also infected humans, though this is rare. As of early March, there had been 70 human cases and one confirmed death in the US, according to the Centers for Disease Control and Prevention.

The most common way to detect bird flu involves swabbing potentially contaminated sites and sequencing the genetic material that's been collected, a process that can take up to 48 hours.

The new device samples the air in real time, running the samples past a specialized biosensor every five minutes. The sensor carries strands of genetic material called aptamers, selected to bind specifically to the virus; when binding occurs, it produces a detectable electrical change. The research, published in ACS Sensors in February, may help farmers contain future outbreaks.
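
The monitoring logic this implies is a simple sense-and-compare loop. Here is a hypothetical Python sketch; the sensor interface, threshold, and alert are invented for illustration, and the paper's actual hardware and signal processing are more involved:

```python
# Hypothetical monitoring loop for an aptamer biosensor. The sensor read
# function, threshold, and interval are invented for illustration only.
import time

SAMPLE_INTERVAL_S = 5 * 60   # one air sample reaches the sensor every 5 minutes
SIGNAL_THRESHOLD = 0.2       # change in signal taken to indicate viral binding

def read_sensor_signal() -> float:
    # Placeholder: a real device would read the biosensor's electrical
    # output here (e.g., via an ADC or serial interface).
    return 0.0

baseline = read_sensor_signal()
while True:
    time.sleep(SAMPLE_INTERVAL_S)
    reading = read_sensor_signal()
    # Aptamers binding virus particles shift the sensor's electrical signal;
    # a large enough departure from baseline triggers an alert.
    if abs(reading - baseline) > SIGNAL_THRESHOLD:
        print("ALERT: possible bird flu detected in air sample")
```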

Part of the group’s work was devising a way to deliver airborne virus particles to the sensor. 

With bird flu, says Rajan Chakrabarty, a professor of energy, environmental, and chemical engineering at Washington University and lead author of the paper, “the bad apple is surrounded by a million or a billion good apples.” He adds, “The challenge was to take an airborne pathogen and get it into a liquid form to sample.”

The team accomplished this by designing a microwave-size box that sucks in large volumes of air and spins it in a cyclone-like motion so that particles stick to liquid-coated walls. The process seamlessly produces a liquid drip that is pumped to the highly sensitive biosensor. 

Though the system is promising, its effectiveness in real-world conditions remains uncertain, says Sungjun Park, an associate professor of electrical and computer engineering at Ajou University in South Korea, who was not involved in the study. Dirt and other particles in farm air could hinder its performance. “The study does not extensively discuss the device’s performance in complex real-world air samples,” Park says. 

But Chakrabarty is optimistic that it will be commercially viable after further testing and is already working with a biotech company to scale it up. He hopes to develop a biosensor chip that detects multiple pathogens at once. 

Carly Kay is a science writer based in Santa Cruz, California.