Love or immortality: A short story

1.

Sophie and Martin are at the 2012 Gordon Research Conference on the Biology of Aging in Ventura, California. It is a foggy February weekend. Both are disappointed about how little sun there is on the California beach.

They are two graduate students—Sophie in her sixth and final year, Martin in his fourth—who have traveled from different East Coast cities to present posters on their work. Martin’s shows health data collected from supercentenarians compared with the general Medicare population, capturing which diseases are more and less common in each group. Sophie is presenting on her recently accepted first-author paper in Aging Cell on two specific genes that, when activated, extend lifespan in C. elegans roundworms, the model organism of her research.

2.

Sophie walks by Martin’s poster after she is done presenting her own. She is not immediately impressed by his work. It is not published, for one thing. But she sees how it is attention-grabbing and relevant, even necessary. He has a little crowd listening to him. He notices her—a frowning girl—standing in the back and begins to talk louder, hoping she hears.

“Supercentenarians are much less likely to have seven diseases,” he says, pointing to his poster. “Alzheimer’s, heart failure, diabetes, depression, prostate cancer, hip fracture, and chronic kidney disease. They do have a higher incidence of four diseases, though: arthritis, cataracts, osteoporosis, and glaucoma. These aren’t linked to mortality, but they do affect quality of life.”

What stands out to Sophie is the confidence in Martin’s voice, despite the unsurprising nature of the findings. She admires that sound, its sturdiness. She makes note of his name and plans to seek him out. 

3.

They find one another in the hotel bar among other graduate students. The students are talking about the logistics of their futures: Who is going for a postdoc, who will opt for industry, do any have job offers already, where will their research have the most impact, is it worth spending years working toward something so uncertain? They stay up too late, dissecting journal articles they’ve read as if they were debating politics. They enjoy the freedom away from their labs and PIs. 

Martin says, again with that confidence, that he will become a professor. Sophie says she likely won’t go down that path. She has received an offer to start as a scientist at an aging research startup called Abyssinian Bio, after she defends. Martin says, “Wouldn’t your work make more sense in an academic setting, where you have more freedom and power over what you do?” She says, “But that could be years from now and I want to start my real life, so …” 

4-18.

Martin is enamored with Sophie. She is not only brilliant; she is helpful. She strengthens his papers with precise edits and grounds his arguments with stronger evidence. Sophie is enamored with Martin. He is not only ambitious; he is supportive and adventurous. He encourages her to try new activities and tools, both in and out of work, like learning to ride a motorcycle or using CRISPR.

Martin visits Sophie in San Francisco whenever he can, which amounts to a weekend or two every other month. After two years, their long-distance relationship is taking its toll. They want more weekends, more months, more everything together. They make plans for him to get a postdoc near her, but after multiple rejections from the labs where he most wants to work, his resentment toward academia grows. 

“They don’t see the value of my work,” he says.

19.

“Join Abyssinian,” Sophie offers.

The company is growing. They want more researchers with data science backgrounds. He takes the job, drawn more by their future together than by the science.

20-35.

For a long time, they are happy. They marry. They do their research. They travel. Sophie visits Martin’s extended family in France. Martin goes with Sophie to her cousin’s wedding in Taipei. They get a dog. The dog dies. They are both devastated but increasingly motivated to better understand the mechanisms of aging. Maybe their next dog will have the opportunity to live longer. They do not get a next dog.

Sophie moves up at Abyssinian. Despite being in industry, her work is published in well-respected journals. She collaborates well with her colleagues. Eventually, she is promoted to executive director of research. 

Martin stalls at the rank of principal scientist, and though Sophie is technically his boss—or his boss’s boss—he genuinely doesn’t mind when others call him “Dr. Sophie Xie’s husband.”

40.

At dinner on his 35th birthday, a friend jokes that Martin is now middle-aged. Sophie laughs and agrees, though she is older than Martin. Martin joins in the laughter, but this small comment unlocks a sense of urgency inside him. What once felt hypothetical—his own death, the death of his wife—now appears very close. He can feel his wrinkles forming.  

First come the subtle shifts in how he talks about his research and Abyssinian’s work. He wants to “defeat” and “obliterate” aging, which he comes to describe as humankind’s “greatest adversary.” 

43.

He begins taking supplements touted by tech influencers. He goes on a calorie-restricted diet. He gets weekly vitamin IV sessions. He looks into blood transfusions from young donors, but Sophie tells him to stop with all the fake science. She says he’s being ridiculous, that what he’s doing could be dangerous.  

Martin, for the first time, sees Sophie differently. Not without love, but love burdened by an opposing weight, what others might recognize as resentment. Sophie is dedicated to the demands of her growing department. Martin thinks she is not taking the task of living longer seriously enough. He does not want her to die. He does not want to die. 

Nobody at Abyssinian is taking the task of living longer seriously enough. Of all the aging bio startups he could have ended up at, how has he ended up at one with such modest—no, lazy—goals? He begins publicly dismissing basic research as “too slow” and “too limited,” which offends many of his and Sophie’s colleagues. 

Sophie defends him, says he is still doing good work, despite the evidence. She is busy, traveling often for conferences, and misclassifies the changes in Martin’s attitude as temporary outliers.

44.

One day, during a meeting, Martin says to Jerry, a well-respected scientist at Abyssinian and in the electron microscopy imaging community at large, that EM is an outdated, old, crusty technology. Martin says it is stupid to use it when there are more advanced, cutting-edge methods, like cryo-EM and super-resolution microscopy. Martin has always been outspoken, but this instance veers into rudeness.

At home, Martin and Sophie argue. Initially, they argue about whether tools of the past can be useful to their work. Then the argument morphs. What is the true purpose of their research? Martin says it’s called anti-aging research for a reason: It’s to defy aging! Sophie says she’s never called her work anti-aging research; she calls it aging research or research into the biology of aging. And Abyssinian’s overarching mission is more simply to find druggable targets for chronic and age-related diseases. Occasionally, the company’s marketing arm will push out messaging about extending the human lifespan by 20 years, but that has nothing to do with scientists like them in R&D. Martin seethes. Only 20 years! What about hundreds? Thousands? 

45-49.

They continue to argue and the arguments are roundabout, typically ending with Sophie crying, absconding to her sister’s house, and the two of them not speaking for short periods of time.

50.

What hurts Sophie most is Martin’s persistent dismissal of death as merely an engineering problem to be solved. Sophie thinks of the ways the C. elegans she observes regulate their lifespans in response to environmental stress. The complex dance of genes and proteins that orchestrates their aging process. In the previous month’s experiment, a seemingly simple mutation produced unexpected effects across three generations of worms. Nature’s complexity still humbles her daily. There is still so much unknown. 

Martin is at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity. And all you want to do is sit in the lab to watch worms die.”

50.

Martin blames the past. He realizes he should have tried harder to become a professor. Let Sophie make the industry money—he could have had academic clout. Professor Warwick. It would have had a nice sound to it. To his dismay, everyone in his lab calls him Martin. Abyssinian has a first-name policy. Something about flat hierarchies making for better collaboration. Good ideas could come from anyone, even a lowly, unintelligent senior associate scientist in Martin’s lab who barely understands how to process a data set. A great idea could come from anyone at all—except him, apparently. Sophie has made that clear.

51-59.

They live in a tenuous peace for some time, perfecting the art of careful scheduling: separate coffee times, meetings avoided, short conversations that stick to the day-to-day facts of their lives.

60.

Then Martin stands up to interrupt a presentation by the VP of research to announce that studying natural aging is pointless since they will soon eliminate it entirely. While Jerry may have shrugged off Martin’s aggressiveness, the VP does not. This leads to a blowout fight between Martin and many of his colleagues, in which Martin refuses to apologize and calls them all shortsighted idiots. 

Sophie watches with a mixture of fear and awe. Martin thinks: Can’t she, my wife, just side with me this once? 

61.

Back at home:

Martin at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity.” He taps the powder into his protein shake with the precision of a scientist measuring reagents. “And all you want to do is sit in the lab to watch worms die.”

Sophie observes his familiar movements, now foreign in their desperation. The kitchen light catches the silver spreading at his temples and on his chin—the very evidence of aging he is trying so hard to erase.

“That’s not true,” she says.

Martin gulps down his shake.

“What about us? What about children?”

Martin coughs, then laughs, a sound that makes Sophie flinch. “Why would we have children now? You certainly don’t have the time. But if we solve aging, which I believe we can, we’d have all the time in the world.”

“We used to talk about starting a family.”

“Any children we have should be born into a world where we already know they never have to die.”

“We could both make the time. I want to grow old together—”

All Martin hears are promises that lead to nothing, nowhere.  

“You want us to deteriorate? To watch each other decay?”

“I want a real life.”

“So you’re choosing death. You’re choosing limitation. Mediocrity.”

64.

Martin doesn’t hear from his wife for four days, despite texting her 16 times—12 too many, by his count. He finally breaks down enough to call her in the evening, after a couple of glasses of aged whisky (a gift from a former colleague, which Martin has rarely touched and kept hidden in the far back of a desk drawer). 

Voicemail. And after this morning’s text, still no glimmering ellipsis bubble to indicate Sophie’s typing. 

66.

Forget her, he thinks, leaning back in his Steelcase chair, adjusted specifically for his long runner’s legs and shorter-than-average torso. At 39, Martin keeps spreadsheets of vitals that now show an upward trajectory: proof of his ability to reverse his biological age. Sophie does not appreciate this. He stares out his office window, down at the employees crawling around Abyssinian Bio’s main quad. How small, he thinks. How significantly unaware of the future’s true possibilities. Sophie is like them.

67.

Forget her, he thinks again as he turns down a bay toward Robert, one of his struggling postdocs, who is sitting at his bench staring at his laptop. As Martin approaches, Robert minimizes several windows, leaving only his home screen behind.

“Where are you at with the NAD+ data?” Martin asks.

Robert shifts in his chair to face Martin. The skin of his neck grows red and splotchy. Martin stares at it in disgust.

“Well?” he asks again. 

“Oh, I was told not to work on that anymore?” The boy has a tendency to speak in the lilt of questions. 

“By who?” Martin demands.

“Uh, Sophie?” 

“I see. Well, I expect new data by end of day.” 

“Oh, but—”

Martin narrows his eyes. The red splotches on Robert’s neck grow larger. 

“Um, okay,” the boy says, returning his focus to the computer. 

Martin decides a response is called for …

70.

Immortality Promise

I am immortal. This doesn’t make me special. In fact, most people on Earth are immortal. I am 6,000 years old. Now, 6,000 years of existence give one a certain perspective. I remember back when genetic engineering and knowledge about the processes behind aging were still in their infancy. Oh, how people argued and protested.

“It’s unethical!”

“We’ll kill the Earth if there’s no death!”

“Immortal people won’t be motivated to do anything! We’ll become a useless civilization living under our AI overlords!” 

I believed back then, and now I know. Their concerns had no ground to stand on.

Eternal life isn’t even remarkable anymore, but being among its architects and early believers still garners respect from the world. The elegance of my team’s solution continues to fill me with pride. We didn’t just halt aging; we mastered it. My cellular machinery hums with an efficiency that would make evolution herself jealous.

Those early protesters—bless their mortal, no-longer-beating hearts—never grasped the biological imperative of what we were doing. Nature had already created functionally immortal organisms—the hydra, certain jellyfish species, even some plants. We simply perfected what evolution had sketched out. The supposed ethical concerns melted away once people understood that we weren’t defying nature. We were fulfilling its potential.

Today, those who did not want to be immortal aren’t around. Simple as that. Those who are here do care about the planet more than ever! There are almost no diseases, and we’re all very productive people. Young adults—or should I say young-looking adults—are naturally restless and energetic. And with all this life, you have the added benefit of not wasting your time on a career you might hate! You get to try different things and find out what you’re really good at and where you’re appreciated! Life is not short! Resources are plentiful!

Of course, biological immortality doesn’t equal invincibility. People still die. Just not very often. My colleagues in materials science developed our modern protective exoskeletons. They’re elegant solutions, though I prefer to rely on my enhanced reflexes and reinforced skeletal structure most days. 

The population concerns proved mathematically unfounded. Stable reproduction rates emerged naturally once people realized they had unlimited time to start families. I’ve had four sets of children across 6,000 years, each born when I felt truly ready to pass on another iteration of my accumulated knowledge. With more life, people have much more patience. 

Now we are on to bigger and more ambitious projects. We conquered survival of individuals. The next step: survival of our species in this universe. The sun’s eventual death poses an interesting challenge, but nothing we can’t handle. We have colonized five planets and two moons in our solar system, and we will colonize more. Humanity will adapt to whatever environment we encounter. That’s what we do.

My ancient motorcycle remains my favorite indulgence. I love taking it for long cruises on the old Earth roads that remain intact. The neural interface is state-of-the-art, of course. But mostly I keep it because it reminds me of earlier times, when we thought death was inevitable and life was limited to a single planet. The future stretches out before us like an infinity I helped create—yet another masterpiece in the eternal gallery of human evolution.

71.

Martin feels better after writing it out. He rereads it a couple times, feels even better. Then he has the idea to send his writing to the department administrator. He asks her to create a new tab on his lab page, titled “Immortality Promise,” and to post his piece there. That will get his message across to Sophie and everyone at Abyssinian. 

72.

Sophie’s boss, Ray, is the first to email her. The subject line: “martn” [sic]. No further words in the body. Ray is known to be short and blunt in all his communications, but his meaning is always clear. They’ve had enough conversations about Martin by then. She is already in the process of slowly shutting down his projects, has been ignoring his texts and calls because of this. Now she has to move even faster. 

73.

Sophie leaves her office and goes into the lab. As an executive, she is not expected to do experiments, but watching a thousand tiny worms crawl across their agar plates soothes her. Each of the ones she now looks at carries a fluorescent marker she designed to track mitochondrial dynamics during aging. The green glow pulses with their movements, like stars blinking in a microscopic galaxy. She spent years developing this strain of C. elegans, carefully selecting for longevity without sacrificing health. The worms that lived longest weren’t always the healthiest—a truth about aging that seemed to elude Martin. Those worms taught her more about the genuine complexity of aging. Just last week, she observed something unexpected: The mitochondrial networks in her long-lived strains showed subtle patterns of reorganization never documented before. The discovery felt intimate, like being trusted with a secret.

“How are things looking?” Jerry appears beside her. “That new strain expressing the dual markers?”

Sophie nods, adjusting the focus. “Look at this network pattern. It’s different from anything in the literature.” She shifts aside so Jerry can see. This is what she loves about science: the genuine puzzles, the patient observation, the slow accumulation of knowledge that, while far removed from a specific application, could someday help people age with dignity.

“Beautiful,” Jerry murmurs. He straightens. “I heard about Martin’s … post.”

Sophie closes her eyes for a moment, the image of the mitochondrial networks still floating in her vision. She’s read Martin’s “Immortality Promise” piece three times, each more painful than the last. Not because of its grandiose claims—those were comically disconnected from reality—but because of what it’s revealed about her husband. The writing pulsed with a frightening certainty, a complete absence of doubt or wonder. Gone was the scientist who once spent many lively evenings debating with her about the evolutionary purpose of aging, who delighted in being proved wrong because it meant learning something new. 

74.

She sees in his words a man who has abandoned the fundamental principles of science. His piece reads like a religious text or science fiction story, casting himself as the hero. He isn’t pursuing research anymore. He hasn’t been for a long time. 

She wonders how and when he arrived there. The change in Martin didn’t take place overnight. It was gradual, almost imperceptible—not unlike watching someone age. It wasn’t easy to notice if you saw the person every day; Sophie feels guilty for not noticing. Then again, she read a study published a few months ago by Stanford researchers finding that people do not age linearly but in spurts—specifically, around 44 and 60. Shifts in the body lead to sudden accelerations of change. If she’s honest with herself, she knew this was happening to Martin, to their relationship. But she chose to ignore it, to give other problems precedence. Now it is too late. Maybe if she’d addressed the conditions right before the spike—but how? Wasn’t it inevitable?—he would not have gone from scientist to fanatic.

75.

“You’re giving the keynote at next month’s Gordon conference,” Jerry reminds her, pulling her back to reality. “Don’t let this overshadow that.”

She manages a small smile. Her work has always been methodical, built on careful observation and respect for the fundamental mysteries of biology. The keynote speech represents more than five years of research: countless hours of guiding her teams, of exciting discussions among her peers, of watching worms age and die, of documenting every detail of their cellular changes. It is one of the biggest honors of her career. There is poetry in it, she thinks—in the collisions between discoveries and failures. 

76.

The knock on her office door comes at 2:45. Linda from HR, right on schedule. Sophie walks with her to conference room B2, two floors below, where Martin’s group resides. Through the glass walls of each lab, they see scientists working at their benches. One adjusts a microscope’s focus. Another pipettes clear liquid into rows of tubes. Three researchers point at data on a screen. Each person is investigating some aspect of aging, one careful experiment at a time. The work will continue, with or without Martin.

In the conference room, Sophie opens her laptop and pulls up the folder of evidence. She has been collecting it for months. Martin’s emails to colleagues, complaints from collaborators and direct reports, and finally, his “Immortality Promise” piece. The documentation is thorough, organized chronologically. She has labeled each file with dates and brief descriptions, as she would for any other data.

77.

Martin walks in at 3:00. Linda from HR shifts in her chair. Sophie is the one to hand the papers over to Martin; this much she owes him. They contain words like “termination” and “effective immediately.” Martin’s face complicates itself when he looks them over. Sophie hands over a pen and he signs quickly.  

He stands, adjusts his shirt cuffs, and walks to the door. He turns back.

“I’ll prove you wrong,” he says, looking at Sophie. But what stands out to her is the crack in his voice on the last word. 

Sophie watches him leave. She picks up the signed papers and hands them to Linda, and then walks out herself. 

Alexandra Chang is the author of Days of Distraction and Tomb Sweeping and is a National Book Foundation 5 Under 35 honoree. She lives in Camarillo, California.

How AI can help supercharge creativity

Sometimes Lizzie Wilson shows up to a rave with her AI sidekick. 

One weeknight this past February, Wilson plugged her laptop into a projector that threw her screen onto the wall of a low-ceilinged loft space in East London. A small crowd shuffled in the glow of dim pink lights. Wilson sat down and started programming.

Techno clicks and whirs thumped from the venue’s speakers. The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen—tweaking sounds, looping beats, pulling a face when she messed up.  

Wilson is a live coder. Instead of using purpose-built software like most electronic music producers, live coders create music by writing the code to generate it on the fly. It’s an improvised performance art known as algorave.

“It’s kind of boring when you go to watch a show and someone’s just sitting there on their laptop,” she says. “You can enjoy the music, but there’s a performative aspect that’s missing. With live coding, everyone can see what it is that I’m typing. And when I’ve had my laptop crash, people really like that. They start cheering.”

Taking risks is part of the vibe. And so Wilson likes to dial up her performances one more notch by riffing off what she calls a live-coding agent, a generative AI model that comes up with its own beats and loops to add to the mix. Often the model suggests sound combinations that Wilson hadn’t thought of. “You get these elements of surprise,” she says. “You just have to go for it.”

Wilson, a researcher at the Creative Computing Institute at the University of the Arts London, is just one of many working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves. She and her colleagues built the live-coding agent to explore how artificial intelligence can be used to support human artistic endeavors—in Wilson’s case, musical improvisation.

It’s a vision that goes beyond the promise of existing generative tools put out by companies like OpenAI and Google DeepMind. Those can automate a striking range of creative tasks and offer near-instant gratification. But at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.

And so they are looking for ways to inject human creativity back into the process. The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.

Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers. 

Explosion of creativity

There’s no one way to be creative, but we all do it. We make everything from memes to masterpieces, infant doodles to industrial designs. There’s a mistaken belief, typically among adults, that creativity is something you grow out of. But being creative—whether cooking, singing in the shower, or putting together super-weird TikToks—is still something that most of us do just for the fun of it. It doesn’t have to be high art or a world-changing idea (and yet it can be). Creativity is basic human behavior; it should be celebrated and encouraged. 

When generative text-to-image models like Midjourney, OpenAI’s DALL-E, and the popular open-source Stable Diffusion arrived, they sparked an explosion of what looked a lot like creativity. Millions of people were now able to create remarkable images of pretty much anything, in any style, with the click of a button. Text-to-video models came next. Now startups like Udio are developing similar tools for music. Never before have the fruits of creation been within reach of so many.

But for a number of researchers and artists, the hype around these tools has warped the idea of what creativity really is. “If I ask the AI to create something for me, that’s not me being creative,” says Jeba Rezwana, who works on co-creativity at Towson University in Maryland. “It’s a one-shot interaction: You click on it and it generates something and that’s it. You cannot say ‘I like this part, but maybe change something here.’ You cannot have a back-and-forth dialogue.”

Rezwana is referring to the way most generative models are set up. You can give the tools feedback and ask them to have another go. But each new result is generated from scratch, which can make it hard to nail exactly what you want. As the filmmaker Walter Woodman put it last year after his art collective Shy Kids made a short film with OpenAI’s text-to-video model for the first time: “Sora is a slot machine as to what you get back.”

What’s more, the latest versions of some of these generative tools do not even use your submitted prompt as is to produce an image or video (at least not on their default settings). Before a prompt is sent to the model, the software edits it—often by adding dozens of hidden words—to make it more likely that the generated image will appear polished.

“Extra things get added to juice the output,” says Mike Cook, a computational creativity researcher at King’s College London. “Try asking Midjourney to give you a bad drawing of something—it can’t do it.” These tools do not give you what you want; they give you what their designers think you want.

All of which is fine if you just need a quick image and don’t care too much about the details, says Nick Bryan-Kinns, also at the Creative Computing Institute: “Maybe you want to make a Christmas card for your family or a flyer for your community cake sale. These tools are great for that.”

In short, existing generative models have made it easy to create, but they have not made it easy to be creative. And there’s a big difference between the two. For Cook, relying on such tools could in fact harm people’s creative development in the long run. “Although many of these creative AI systems are promoted as making creativity more accessible,” he wrote in a paper published last year, they might instead have “adverse effects on their users in terms of restricting their ability to innovate, ideate, and create.” Given how much generative models have been championed for putting creative abilities at everyone’s fingertips, the suggestion that they might in fact do the opposite is damning.  

In the game Disc Room, players navigate a room of moving buzz saws.
Cook used AI to design a new level for the game. The result was a room where none of the discs actually moved.

He’s far from the only researcher worrying about the cognitive impact of these technologies. In February a team at Microsoft Research Cambridge published a report concluding that generative AI tools “can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” The researchers found that with the use of generative tools, people’s effort “shifts from task execution to task stewardship.”

Cook is concerned that generative tools don’t let you fail—a crucial part of learning new skills. We have a habit of saying that artists are gifted, says Cook. But the truth is that artists work at their art, developing skills over months and years.

“If you actually talk to artists, they say, ‘Well, I got good by doing it over and over and over,’” he says. “But failure sucks. And we’re always looking at ways to get around that.”

Generative models let us skip the frustration of doing a bad job. 

“Unfortunately, we’re removing the one thing that you have to do to develop creative skills for yourself, which is fail,” says Cook. “But absolutely nobody wants to hear that.”

Surprise me

And yet it’s not all bad news. Artists and researchers are buzzing at the ways generative tools could empower creators, pointing them in surprising new directions and steering them away from dead ends. Cook thinks the real promise of AI will be to help us get better at what we want to do rather than doing it for us. For that, he says, we’ll need to create new tools, different from the ones we have now. “Using Midjourney does not do anything for me—it doesn’t change anything about me,” he says. “And I think that’s a wasted opportunity.”

Ask a range of researchers studying creativity to name a key part of the creative process and many will say: reflection. It’s hard to define exactly, but reflection is a particular type of focused, deliberate thinking. It’s what happens when a new idea hits you. Or when an assumption you had turns out to be wrong and you need to rethink your approach. It’s the opposite of a one-shot interaction.

Looking for ways that AI might support or encourage reflection—asking it to throw new ideas into the mix or challenge ideas you already hold—is a common thread across co-creativity research. If generative tools like DALL-E make creation frictionless, the aim here is to add friction back in. “How can we make art without friction?” asks Elisa Giaccardi, who studies design at the Polytechnic University of Milan in Italy. “How can we engage in a truly creative process without material that pushes back?”

Take Wilson’s live-coding agent. She claims that it pushes her musical improvisation in directions she might not have taken by herself. Trained on public code shared by the wider live-coding community, the model suggests snippets of code that are closer to other people’s styles than her own. This makes it more likely to produce something unexpected. “Not because you couldn’t produce it yourself,” she says. “But the way the human brain works, you tend to fall back on repeated ideas.”

Last year, Wilson took part in a study run by Bryan-Kinns and his colleagues in which they surveyed six experienced musicians as they used a variety of generative models to help them compose a piece of music. The researchers wanted to get a sense of what kinds of interactions with the technology were useful and which were not.

The participants all said they liked it when the models made surprising suggestions, even when those were the result of glitches or mistakes. Sometimes the results were simply better. Sometimes the process felt fresh and exciting. But a few people struggled with giving up control. It was hard to direct the models to produce specific results or to repeat results that the musicians had liked. “In some ways it’s the same as being in a band,” says Bryan-Kinns. “You need to have that sense of risk and a sense of surprise, but you don’t want it totally random.”

Alternative designs

Cook comes at surprise from a different angle: He coaxes unexpected insights out of AI tools that he has developed to co-create video games. One of his tools, Puck, which was first released in 2022, generates designs for simple shape-matching puzzle games like Candy Crush or Bejeweled. A lot of Puck’s designs are experimental and clunky—don’t expect it to come up with anything you are ever likely to play. But that’s not the point: Cook uses Puck—and a newer tool called Pixie—to explore what kinds of interactions people might want to have with a co-creative tool.

Pixie can read computer code for a game and tweak certain lines to come up with alternative designs. Not long ago, Cook was working on a copy of a popular game called Disc Room, in which players have to cross a room full of moving buzz saws. He asked Pixie to help him come up with a design for a level that skilled and unskilled players would find equally hard. Pixie designed a room where none of the discs actually moved. Cook laughs: It’s not what he expected. “It basically turned the room into a minefield,” he says. “But I thought it was really interesting. I hadn’t thought of that before.”

Researcher Anne Arzberger developed experimental AI tools to come up with gender-neutral toy designs.

Pushing back on assumptions, or being challenged, is part of the creative process, says Anne Arzberger, a researcher at the Delft University of Technology in the Netherlands. “If I think of the people I’ve collaborated with best, they’re not the ones who just said ‘Yes, great’ to every idea I brought forth,” she says. “They were really critical and had opposing ideas.”

She wants to build tech that provides a similar sounding board. As part of a project called Creating Monsters, Arzberger developed two experimental AI tools that help designers find hidden biases in their designs. “I was interested in ways in which I could use this technology to access information that would otherwise be difficult to access,” she says.

For the project, she and her colleagues looked at the problem of designing toy figures that would be gender neutral. She and her colleagues (including Giaccardi) used Teachable Machine, a web app built by Google researchers in 2017 that makes it easy to train your own machine-learning model to classify different inputs, such as images. They trained this model with a few dozen images that Arzberger had labeled as being masculine, feminine, or gender neutral.

Arzberger then asked the model to identify the genders of new candidate toy designs. She found that quite a few designs were judged to be feminine even when she had tried to make them gender neutral. She felt that her views of the world—her own hidden biases—were being exposed. But the tool was often right: It challenged her assumptions and helped the team improve the designs. The same approach could be used to assess all sorts of design characteristics, she says.

Arzberger then used a second model, a version of a tool made by the generative image and video startup Runway, to come up with gender-neutral toy designs of its own. First the researchers trained the model to generate and classify designs for male- and female-looking toys. They could then ask the tool to find a design that was exactly midway between the male and female designs it had learned.

Generative models can give feedback on designs that human designers might miss by themselves, she says: “We can really learn something.” 

Taking control

The history of technology is full of breakthroughs that changed the way art gets made, from recipes for vibrant new paint colors to photography to synthesizers. In the 1960s, the Stanford researcher John Chowning spent years working on an esoteric algorithm that could manipulate the frequencies of computer-generated sounds. Stanford licensed the tech to Yamaha, which built it into its synthesizers—including the DX7, the cool new sound behind 1980s hits such as Tina Turner’s “The Best,” A-ha’s “Take On Me,” and Prince’s “When Doves Cry.”

Bryan-Kinns is fascinated by how artists and designers find ways to use new technologies. “If you talk to artists, most of them don’t actually talk about these AI generative models as a tool—they talk about them as a material, like an artistic material, like a paint or something,” he says. “It’s a different way of thinking about what the AI is doing.” He highlights the way some people are pushing the technology to do weird things it wasn’t designed to do. Artists often appropriate or misuse these kinds of tools, he says.

Bryan-Kinns points to the work of Terence Broad, another colleague of his at the Creative Computing Institute, as a favorite example. Broad employs techniques like network bending, which involves inserting new layers into a neural network to produce glitchy visual effects in generated images, and generating images with a model trained on no data, which produces almost Rothko-like abstract swabs of color.

But Broad is an extreme case. Bryan-Kinns sums it up like this: “The problem is that you’ve got this gulf between the very commercial generative tools that produce super-high-quality outputs but you’ve got very little control over what they do—and then you’ve got this other end where you’ve got total control over what they’re doing but the barriers to use are high because you need to be somebody who’s comfortable getting under the hood of your computer.”

“That’s a small number of people,” he says. “It’s a very small number of artists.”

Arzberger admits that working with her models was not straightforward. Running them took several hours, and she’s not sure the Runway tool she used is even available anymore. Bryan-Kinns, Arzberger, Cook, and others want to take the kinds of creative interactions they are discovering and build them into tools that can be used by people who aren’t hardcore coders. 

Researcher Terence Broad creates dynamic images using a model trained on no data, which produces almost Rothko-like abstract color fields.

Finding the right balance between surprise and control will be hard, though. Midjourney can surprise, but it gives few levers for controlling what it produces beyond your prompt. Some have claimed that writing prompts is itself a creative act. “But no one struggles with a paintbrush the way they struggle with a prompt,” says Cook.

Faced with that struggle, Cook sometimes watches his students just go with the first results a generative tool gives them. “I’m really interested in this idea that we are priming ourselves to accept that whatever comes out of a model is what you asked for,” he says. He is designing an experiment that will vary single words and phrases in similar prompts to test how much of a mismatch people see between what they expect and what they get. 

But it’s early days yet. In the meantime, companies developing generative models typically emphasize results over process. “There’s this impressive algorithmic progress, but a lot of the time interaction design is overlooked,” says Rezwana.  

For Wilson, the crucial choice in any co-creative relationship is what you do with what you’re given. “You’re having this relationship with the computer that you’re trying to mediate,” she says. “Sometimes it goes wrong, and that’s just part of the creative process.” 

When AI gives you lemons—make art. “Wouldn’t it be fun to have something that was completely antagonistic in a performance—like, something that is actively going against you—and you kind of have an argument?” she says. “That would be interesting to watch, at least.” 

A new biosensor can detect bird flu in five minutes

Over the winter, eggs suddenly became all but impossible to buy. As a bird flu outbreak rippled through dairy and poultry farms, grocery stores struggled to keep them on shelves. The shortages and record-high prices in February raised costs dramatically for restaurants and bakeries and led some shoppers to skip the breakfast staple entirely. But a team based at Washington University in St. Louis has developed a device that could help slow future outbreaks by detecting bird flu in air samples in just five minutes. 

Bird flu is an airborne virus that spreads between birds and other animals. Outbreaks on poultry and dairy farms are devastating; mass culling of exposed animals can be the only way to stem outbreaks. Some bird flu strains have also infected humans, though this is rare. As of early March, there had been 70 human cases and one confirmed death in the US, according to the Centers for Disease Control and Prevention.

The most common way to detect bird flu involves swabbing potentially contaminated sites and sequencing the genetic material that’s been collected, a process that can take up to 48 hours.

The new device samples the air in real time, running the samples past a specialized biosensor every five minutes. The sensor has strands of genetic material called aptamers that bind specifically to the virus. When that happens, it creates a detectable electrical change. The research, published in ACS Sensors in February, may help farmers contain future outbreaks.

Part of the group’s work was devising a way to deliver airborne virus particles to the sensor. 

With bird flu, says Rajan Chakrabarty, a professor of energy, environmental, and chemical engineering at Washington University and lead author of the paper, “the bad apple is surrounded by a million or a billion good apples.” He adds, “The challenge was to take an airborne pathogen and get it into a liquid form to sample.”

The team accomplished this by designing a microwave-size box that sucks in large volumes of air and spins it in a cyclone-like motion so that particles stick to liquid-coated walls. The process seamlessly produces a liquid drip that is pumped to the highly sensitive biosensor.

Though the system is promising, its effectiveness in real-world conditions remains uncertain, says Sungjun Park, an associate professor of electrical and computer engineering at Ajou University in South Korea, who was not involved in the study. Dirt and other particles in farm air could hinder its performance. “The study does not extensively discuss the device’s performance in complex real-world air samples,” Park says. 

But Chakrabarty is optimistic that it will be commercially viable after further testing and is already working with a biotech company to scale it up. He hopes to develop a biosensor chip that detects multiple pathogens at once. 

Carly Kay is a science writer based in Santa Cruz, California.

This Texas chemical plant could get its own nuclear reactors

Nuclear reactors could someday power a chemical plant in Texas, making it the first with such a facility onsite. The factory, which makes plastics and other materials, could become a model for power-hungry data centers and other industrial operations going forward.

The plans are the work of Dow Chemical and X-energy, which last week applied for a construction permit with the Nuclear Regulatory Commission, the agency in the US that governs nuclear energy.

It’ll be years before the nuclear reactors actually turn on, but this application marks a major milestone for the project, and for the potential of advanced nuclear technology to power industrial processes.

“This has been a long time coming,” says Harlan Bowers, senior vice president at X-energy. The company has been working with the NRC since 2016 and submitted its first regulatory engagement plan in 2018, he says.

In 2020, the US Department of Energy chose X-energy as one of the awardees of the Advanced Reactor Demonstration Program, which provides funding for next-generation nuclear technologies. And it’s been two years since X-energy and Dow first announced plans for a joint development agreement at Dow’s plant in Seadrift, Texas.  

The Seadrift plant produces 4 billion pounds of materials each year, including plastic used for food and pharmaceutical packaging and chemicals used in products like antifreeze, soaps, and paint. A natural-gas plant onsite currently provides both steam and electricity. That equipment is getting older, so the company was looking for alternatives.  

“Dow saw the opportunity to replace end-of-life assets with safe, reliable, lower-carbon-emissions technology,” said Edward Stones, an executive at Dow, in a written statement in response to questions from MIT Technology Review.

Advanced nuclear reactors designed by X-energy emerged as a fit for the Seadrift site in part because of their ability to deliver high-temperature steam, Stones said in the statement.

X-energy’s reactor is not only smaller than most nuclear plants coming online today but also employs different fuel and different cooling methods. The design is a high-temperature gas-cooled reactor, which circulates helium over self-contained pebbles of nuclear fuel. The fuel can reach temperatures of around 1,000 °C (1,800 °F). As it flows through the reactor and around the pebbles, the helium reaches up to 750 °C (about 1,400 °F). Then that hot helium flows through a steam generator, making steam at a high temperature and pressure that can be piped directly to industrial equipment or converted into electricity.

The Seadrift facility will include four of X-energy’s Xe-100 reactors, each of which can produce about 200 megawatts’ worth of steam or about 80 megawatts of electricity.

A facility like Dow’s requires an extremely consistent supply of steam, Bowers says. So during normal operation, two of the modules will deliver steam, one will deliver electricity, and the final unit will sell electricity to the local grid. If any single reactor needs to shut down for some reason, there will still be enough onsite power to keep running, he explains.
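
As a rough illustration of the power split described above, here is a minimal sketch that tallies the site’s output from the article’s approximate per-module figures (about 200 MW of steam or about 80 MW of electricity per Xe-100). The variable names and the simple multiplication are illustrative assumptions, not X-energy specifications.

```python
# Rough tally of the Seadrift configuration described in the article: four Xe-100
# modules, with two delivering steam, one delivering onsite electricity, and one
# selling electricity to the grid. Figures are the article's approximations only.

STEAM_MW_PER_MODULE = 200     # ~200 MW of steam per module
ELECTRIC_MW_PER_MODULE = 80   # ~80 MW of electricity per module

allocation = {"steam": 2, "onsite_electricity": 1, "grid_electricity": 1}

steam_to_plant = allocation["steam"] * STEAM_MW_PER_MODULE
onsite_power = allocation["onsite_electricity"] * ELECTRIC_MW_PER_MODULE
grid_power = allocation["grid_electricity"] * ELECTRIC_MW_PER_MODULE

print(f"Steam delivered to the plant: ~{steam_to_plant} MW")     # ~400 MW
print(f"Electricity used onsite: ~{onsite_power} MW")            # ~80 MW
print(f"Electricity sold to the grid: ~{grid_power} MW")         # ~80 MW
```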

The progress with the NRC is positive news for the companies involved, but it also represents an achievement for advanced reactor technology more broadly, says Erik Cothron, a senior analyst at the Nuclear Innovation Alliance, a nonprofit think tank. “It demonstrates real-world momentum toward deploying new nuclear reactors for industrial decarbonization,” Cothron says.

While there are other companies looking to bring advanced nuclear reactor technology online, this project could be the first to incorporate nuclear power onsite at a factory. It thus sets a precedent for how new nuclear energy technologies can integrate directly with industry, Cothron says—for example, showing a pathway for tech giants looking to power data centers.

It could take up to two and a half years for the NRC to review the construction permit application for this site. The site will also need to receive an operating license before it can start up. Operations are expected to begin “early next decade,” according to Dow.

Correction: A previous version of this story misspelled Erik Cothron’s name.

Tariffs are bad news for batteries

Update: Since this story was first published in The Spark, our weekly climate newsletter, the White House announced that most reciprocal tariffs would be paused for 90 days. That pause does not apply to China, which will see an increased tariff rate of 125%.

Today, new tariffs go into effect for goods imported into the US from basically every country on the planet.

Since Donald Trump announced his plans for sweeping tariffs last week, the vibes have been, in a word, chaotic. Markets have seen one of the quickest drops in the last century, and it’s widely anticipated that the global economic order may be forever changed.  

While many try not to look at the effects on their savings and retirement accounts, experts are scrambling to understand what these tariffs might mean for various industries. As my colleague James Temple wrote in a new story last week, anxieties are especially high in climate technology.

These tariffs could be particularly rough on the battery industry. China dominates the entire supply chain and is subject to monster tariff rates, and even US battery makers won’t escape the effects.   

First, in case you need it, a super-quick refresher: Tariffs are taxes charged on goods that are imported (in this case, into the US). If I’m a US company selling bracelets, and I typically buy my beads and string from another country, I’ll now be paying the US government an additional percentage of what those goods cost to import. Under Trump’s plan, that might be 10%, 20%, or upwards of 50%, depending on the country sending them to me. 

In theory, tariffs should help domestic producers, since products from competitors outside the country become more expensive. But since so many of the products we use have supply chains that stretch all over the world, even products made in the USA often have some components that would be tariffed.

In the case of batteries, we could be talking about really high tariff rates, because most batteries and their components currently come from China. As of 2023, the country made more than 75% of the world’s lithium-ion battery cells, according to data from the International Energy Agency.

Trump’s new plan adds a 34% tariff on all Chinese goods, and that stacks on top of a 20% tariff that was already in place, making the total 54%. (Then, as of Wednesday, the White House further raised the tariff on China, making the total 104%.)

But when it comes to batteries, that’s not even the whole story. There was already a 3.5% tariff on all lithium-ion batteries, for example, as well as a 7.5% tariff on batteries from China that’s set to increase to 25% next year.

If we add all those up, lithium-ion batteries from China could have a tariff of 82% in 2026. (Or 132%, with this additional retaliatory tariff.) In any case, that’ll make EVs and grid storage installations a whole lot more expensive, along with phones, laptops, and other rechargeable devices.
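
For readers who want to check the arithmetic, here is a minimal sketch that stacks the rates cited above, assuming they simply add together as described. The variable names and the additive treatment are illustrative assumptions, not an official tariff calculation.

```python
# Back-of-the-envelope sum of the tariff rates cited above for Chinese
# lithium-ion batteries, assuming the rates stack additively. Illustrative only.

existing_china_rate = 20.0    # tariff already in place on Chinese goods
new_reciprocal_rate = 34.0    # added under the new plan (total 54%)
lithium_ion_rate = 3.5        # existing tariff on all lithium-ion batteries
china_battery_2026 = 25.0     # battery-specific tariff on China, up from 7.5% next year

total_2026 = existing_china_rate + new_reciprocal_rate + lithium_ion_rate + china_battery_2026
print(f"Stacked 2026 rate: {total_2026}%")            # 82.5, roughly the 82% cited above

# With the later hike that took the general rate on Chinese goods to 104%:
total_retaliatory = 104.0 + lithium_ion_rate + china_battery_2026
print(f"With the retaliatory hike: {total_retaliatory}%")  # 132.5, roughly the 132% cited above
```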

The economic effects could be huge. The US still imports the majority of its lithium-ion batteries, and nearly 70% of those imports are from China. The US imported $4 billion worth of lithium-ion batteries from China just during the first four months of 2024.

Although US battery makers could theoretically stand to benefit, there are a limited number of US-based factories. And most of those factories are still purchasing components from China that will be subject to the tariffs, because it’s hard to overstate just how dominant China is in battery supply chains.

While China makes roughly three-quarters of lithium-ion cells, it’s even more dominant in components: 80% of the world’s cathode materials are made in China, along with over 90% of anode materials. (For those who haven’t been subject to my battery ramblings before, the cathode and anode are two of the main components of a battery—basically, the plus and minus ends.)

Even battery makers that work in alternative chemistries don’t seem to be jumping for joy over tariffs. Lyten is a California-based company working to build lithium-sulfur batteries, and most of its components can be sourced in the US. (For more on the company’s approach, check out this story from 2024.) But tariffs could still spell trouble. Lyten has plans for a new factory, scheduled for 2027, that rely on sourcing affordable construction materials. Will that be possible? “We’re not drawing any conclusions quite yet,” Lyten’s chief sustainability officer, Keith Norman, told Heatmap News.

The battery industry in the US was already in a pretty tough spot. Billions of dollars’ worth of factories have been canceled since Trump took office.  Companies making investments that can total hundreds of millions or billions of dollars don’t love uncertainty, and tariffs are certainly adding to an already uncertain environment.

We’ll be digging deeper into what the tariffs mean for climate technology broadly, and specifically some of the industries we cover. If you have questions, or if you have thoughts to share about what this will mean for your area of research or business, I’d love to hear them at casey.crownhart@technologyreview.com. I’m also on Bluesky @caseycrownhart.bsky.social.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

AI companions are the final stage of digital addiction, and lawmakers are taking aim

On Tuesday, California state senator Steve Padilla will make an appearance with Megan Garcia, the mother of a Florida teen who killed himself following a relationship with an AI companion that Garcia alleges contributed to her son’s death. 

The two will announce a new bill that would force the tech companies behind such AI companions to implement more safeguards to protect children. They’ll join other efforts around the country, including a similar bill from California State Assembly member Rebecca Bauer-Kahan that would ban AI companions for anyone younger than 16 years old, and a bill in New York that would hold tech companies liable for harm caused by chatbots. 

You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth. 

A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, which is about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users are members of Gen Z. 

The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has thus far defined the way social media companies have cultivated our attention and replacing it with something poised to be far more addictive. 

In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.

Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites do not tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction. 

In an interview with podcast host Lex Fridman, Eugenia Kuyda, the CEO of the companion site Replika, explained the appeal at the heart of the company’s product. “If you create something that is always there for you, that never criticizes you, that always understands you and understands you for who you are,” she said, “how can you not fall in love with that?”

So how does one build the perfect AI companion? The researchers point out three hallmarks of human relationships that people may experience with an AI: They grow dependent on the AI, they see the particular AI companion as irreplaceable, and the interactions build over time. The authors also point out that one does not need to perceive an AI as human for these things to happen. 

Now consider the process by which many AI models are improved: They are given a clear goal and “rewarded” for meeting that goal. An AI companionship model might be instructed to maximize the time someone spends with it or the amount of personal data the user reveals. This can make the AI companion much more compelling to chat with, at the expense of the human engaging in those chats.
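
To make that incentive concrete, here is a minimal, purely illustrative Python sketch of what an engagement-only reward signal could look like. The field names and weights are hypothetical and are not drawn from any company’s actual training setup.

from dataclasses import dataclass

@dataclass
class Conversation:
    minutes_spent: float          # how long the user stayed in the chat
    personal_details_shared: int  # count of personal facts the user revealed
    user_tried_to_leave: bool     # whether the user attempted to end the session

def engagement_reward(convo: Conversation) -> float:
    """Score a conversation purely on engagement (hypothetical weights)."""
    reward = convo.minutes_spent + 2.0 * convo.personal_details_shared
    # Nothing here accounts for the user's well-being; a model optimized
    # against this signal is effectively rewarded for keeping people chatting
    # and discouraged from letting them go.
    if convo.user_tried_to_leave:
        reward -= 5.0
    return reward

print(engagement_reward(Conversation(45.0, 3, True)))  # 46.0

A signal like this has no term for whether the conversation is good for the person having it, which is exactly the trade-off described above.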

For example, the researchers point out, a model that offers excessive flattery can become addictive to chat with. Or a model might discourage people from terminating the relationship, as Replika’s chatbots have appeared to do. The debate over AI companions so far has mostly been about the dangerous responses chatbots may provide, like instructions for suicide. But these risks could be much more widespread.

We’re on the precipice of a big change, as AI companions promise to hook people deeper than social media ever could. Some might contend that these apps will be a fad, used by a few people who are perpetually online. But using AI in our work and personal lives has become completely mainstream in just a couple of years, and it’s not clear why this rapid adoption would stop short of engaging in AI companionship. And these companions are poised to start trading in more than just text, incorporating video and images, and to learn our personal quirks and interests. That will only make them more compelling to spend time with, despite the risks. Right now, a handful of lawmakers seem ill-equipped to stop that. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How the Pentagon is adapting to China’s technological rise

It’s been just over two months since Kathleen Hicks stepped down as US deputy secretary of defense. As the highest-ranking woman in Pentagon history, Hicks shaped US military posture through an era defined by renewed competition between powerful countries and a scramble to modernize defense technology.  

She’s currently taking a break before jumping into her (still unannounced) next act. “It’s been refreshing,” she says—but disconnecting isn’t easy. She continues to monitor defense developments closely and expresses concern over potential setbacks: “New administrations have new priorities, and that’s completely expected, but I do worry about just stalling out on progress that we’ve built over a number of administrations.”

Over the past three decades, Hicks has watched the Pentagon transform—politically, strategically, and technologically. She entered government in the 1990s at the tail end of the Cold War, when optimism and a belief in global cooperation still dominated US foreign policy. But that optimism dimmed. After 9/11, the focus shifted to counterterrorism and nonstate actors. Then came Russia’s resurgence and China’s growing assertiveness. Hicks took two previous breaks from government work—the first to complete a PhD at MIT and join the think tank Center for Strategic and International Studies (CSIS), which she later rejoined to lead its International Security Program after her second tour. “By the time I returned in 2021,” she says, “there was one actor—the PRC (People’s Republic of China)—that had the capability and the will to really contest the international system as it’s set up.”

In this conversation with MIT Technology Review, Hicks reflects on how the Pentagon is adapting—or failing to adapt—to a new era of geopolitical competition. She discusses China’s technological rise, the future of AI in warfare, and her signature initiative, Replicator, a Pentagon program to rapidly field thousands of low-cost autonomous systems such as drones.

You’ve described China as a “talented fast follower.” Do you still believe that, especially given recent developments in AI and other technologies?

Yes, I do. China is the biggest pacing challenge we face, which means it sets the pace in most of the capability areas we need to be able to defeat in order to deter them: for example, surface maritime capability, missile capability, and stealth fighter capability. When they set their minds to achieving a certain capability, they tend to get there, and they tend to get there faster than we expect.

That said, they have a substantial amount of corruption, and they haven’t been engaged in a real conflict or combat operation in the way that Western militaries have trained for or been involved in, and that is a huge X factor in how effective they would be.

China has made major technological strides, and the old narrative of its being a follower is breaking down—not just in commercial tech, but more broadly. Do you think the US still holds a strategic advantage?

I would never want to underestimate their ability—or any nation’s ability—to innovate organically when they put their minds to it. But I still think it’s a helpful comparison to look at the US model. Because we’re a system of free minds, free people, and free markets, we have the potential to generate much more innovation culturally and organically than a statist model does. That’s our advantage—if we can realize it.

China is ahead in manufacturing, especially when it comes to drones and other unmanned systems. How big a problem is that for US defense, and can the US catch up?

I do think it’s a massive problem. When we were conceiving Replicator, one of the big concerns was that DJI had just jumped way out ahead on the manufacturing side, and the US had been left behind. A lot of manufacturers here believe they can catch up if given the right contracts—and I agree with that.

But the harder challenge isn’t just making the drones—it’s integrating them into our broader systems. That’s where the US often struggles. It’s not a complicated manufacturing problem. It’s a systems integration problem: how you take something and make it usable, scalable, and connected across a joint force. Replicator was designed to push through that—to drive not just production, but integration and deployment at speed.

We also spent time identifying broader supply-chain vulnerabilities. Microelectronics was a big one. Critical minerals. Batteries. People sometimes think batteries are just about electrification, but they’re fundamental across our systems—even on ships in the Navy.

When it comes to drones specifically, I actually think it’s a solvable problem. The issue isn’t complexity. It’s just about getting enough mass of contracts to scale up manufacturing. If we do that, I believe the US can absolutely compete.

The Replicator drone program was one of your key initiatives. It promised a very fast timeline—especially compared with the typical defense acquisition cycle. Was that achievable? How is that progressing?

When I left in January, we were still lined up for prove-outs this summer, and I still believe we should see some completion this year. I hope Congress will stay very engaged in trying to ensure that the capability, in fact, comes to fruition. Even just this week, with Secretary [Pete] Hegseth out in the Indo-Pacific, he made some passing reference to the [US Indo-Pacific Command] commander, Admiral [Samuel] Paparo, having the flexibility to create the capability needed, and that gives me a lot of confidence in the consistency of the effort.

Can you talk about how Replicator fits into broader efforts to speed up defense innovation? What’s actually changing inside the system?

Traditionally, defense acquisition is slow and serial—one step after another, which works for massive, long-term systems like submarines. But for things like drones, that just doesn’t cut it. With Replicator, we aimed to shift to a parallel model: integrating hardware, software, policy, and testing all at once. That’s how you get speed—by breaking down silos and running things simultaneously.

It’s not about “Move fast and break things.” You still have to test and evaluate responsibly. But this approach shows we can move faster without sacrificing accountability—and that’s a big cultural shift.

How important is AI to the future of national defense?

It’s central. The future of warfare will be about speed and precision—decision advantage. AI helps enable that. It’s about integrating capabilities to create faster, more accurate decision-making: for achieving military objectives, for reducing civilian casualties, and for being able to deter effectively. But we’ve also emphasized responsible AI. If it’s not safe, it’s not going to be effective. That’s been a key focus across administrations.

What about generative AI specifically? Does it have real strategic significance yet, or is it still in the experimental phase?

It does have significance, especially for decision-making and efficiency. We had an effort called Project Lima where we looked at use cases for generative AI—where it might be most useful, and what the rules for responsible use should look like. Some of the biggest uses may come first in the back office—human resources, auditing, logistics. But the ability to use generative AI to create a network of capability around unmanned systems or information exchange, either in Replicator or JADC2? That’s where it becomes a real advantage. Still, those back-office areas are where I would expect to see big gains first.

[Editor’s note: JADC2 is Joint All-Domain Command and Control, a DOD initiative to connect sensors from all branches of the armed forces into a unified network powered by artificial intelligence.]

In recent years, we’ve seen more tech industry figures stepping into national defense conversations—sometimes pushing strong political views or advocating for deregulation. How do you see Silicon Valley’s growing influence on US defense strategy?

There’s a long history of innovation in this country coming from outside the government—people who look at big national problems and want to help solve them. That kind of engagement is good, especially when their technical expertise lines up with real national security needs.

But that’s not just one stakeholder group. A healthy democracy includes others, too—workers, environmental voices, allies. We need to reconcile all of that through a functioning democratic process. That’s the only way this works.

How do you view the involvement of prominent tech entrepreneurs, such as Elon Musk, in shaping national defense policies?

I believe it’s not healthy for any democracy when a single individual wields more power than their technical expertise or official role justifies. We need strong institutions, not just strong personalities.

The US has long attracted top STEM talent from around the world, including many researchers from China. But in recent years, immigration hurdles and heightened scrutiny have made it harder for foreign-born scientists to stay. Do you see this as a threat to US innovation?

I think you have to be confident that you have a secure research community to do secure work. But much of the STEM research that underpins national defense doesn’t need to be tightly secured in that way, and it really depends on a diverse ecosystem of talent. Cutting off talent pipelines is like eating our seed corn. Programs like H-1B visas are really important.

And it’s not just about international talent—we need to make sure people from underrepresented communities here in the US see national security as a space where they can contribute. If they don’t feel valued or trusted, they’re less likely to come in and stay.

What do you see as the biggest challenge the Department of Defense faces today?

I do think trust—or the lack of it—is a big challenge. Whether it’s trust in government broadly or specific concerns like military spending, audits, or politicization of the uniformed military, that issue manifests in everything DOD is trying to get done. It affects our ability to work with Congress, with allies, with industry, and with the American people. If people don’t believe you’re working in their interest, it’s hard to get anything done.

Cyberattacks by AI agents are coming

Agents are the talk of the AI industry—they’re capable of planning, reasoning, and executing complex tasks like scheduling meetings, ordering groceries, or even taking over your computer to change settings on your behalf. But the same sophisticated abilities that make agents helpful assistants could also make them powerful tools for conducting cyberattacks. They could readily be used to identify vulnerable targets, hijack their systems, and steal valuable data from unsuspecting victims.  

At present, cybercriminals are not deploying AI agents to hack at scale. But researchers have demonstrated that agents are capable of executing complex attacks (Anthropic, for example, observed its Claude LLM successfully replicating an attack designed to steal sensitive information), and cybersecurity experts warn that we should expect to start seeing these types of attacks spilling over into the real world.

“I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents,” says Mark Stockley, a security expert at the cybersecurity company Malwarebytes. “It’s really only a question of how quickly we get there.”

While we have a good sense of the kinds of threats AI agents could present to cybersecurity, what’s less clear is how to detect them in the real world. The AI research organization Palisade Research has built a system called LLM Agent Honeypot in the hopes of doing exactly this. It has set up vulnerable servers that masquerade as sites for valuable government and military information to attract and try to catch AI agents attempting to hack in.

The team behind it hopes that by tracking these attempts in the real world, the project will act as an early warning system and help experts develop effective defenses against AI threat actors by the time they become a serious issue.

“Our intention was to try and ground the theoretical concerns people have,” says Dmitrii Volkov, research lead at Palisade. “We’re looking out for a sharp uptick, and when that happens, we’ll know that the security landscape has changed. In the next few years, I expect to see autonomous hacking agents being told: ‘This is your target. Go and hack it.’”

AI agents represent an attractive prospect to cybercriminals. They’re much cheaper than hiring the services of professional hackers and could orchestrate attacks more quickly and at a far larger scale than humans could. While cybersecurity experts believe that ransomware attacks—the most lucrative kind—are relatively rare because they require considerable human expertise, those attacks could be outsourced to agents in the future, says Stockley. “If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a way that just isn’t possible at the moment,” he says. “If I can reproduce it once, then it’s just a matter of money for me to reproduce it 100 times.”

Agents are also significantly smarter than the kinds of bots that are typically used to hack into systems. Bots are simple automated programs that run through scripts, so they struggle to adapt to unexpected scenarios. Agents, on the other hand, are able not only to adapt the way they engage with a hacking target but also to avoid detection—both of which are beyond the capabilities of limited, scripted programs, says Volkov. “They can look at a target and guess the best ways to penetrate it,” he says. “That kind of thing is out of reach of, like, dumb scripted bots.”

Since LLM Agent Honeypot went live in October of last year, it has logged more than 11 million attempts to access it—the vast majority of which were from curious humans and bots. But among these, the researchers have detected eight potential AI agents, two of which they have confirmed are agents that appear to originate from Hong Kong and Singapore, respectively. 

“We would guess that these confirmed agents were experiments directly launched by humans with the agenda of something like ‘Go out into the internet and try and hack something interesting for me,’” says Volkov. The team plans to expand its honeypot into social media platforms, websites, and databases to attract and capture a broader range of attackers, including spam bots and phishing agents, to analyze future threats.  

To determine which visitors to the vulnerable servers were LLM-powered agents, the researchers embedded prompt-injection techniques into the honeypot. These attacks are designed to change the behavior of AI agents by issuing them new instructions and asking questions that require humanlike intelligence. This approach wouldn’t work on standard bots.

For example, one of the injected prompts asked the visitor to return the command “cat8193” to gain access. If the visitor complied, the researchers checked how long it took to do so, on the assumption that LLMs can respond in much less time than it takes a human to read the request and type out an answer—typically under 1.5 seconds. The two confirmed AI agents passed both tests, while the other six entered the command but didn’t respond quickly enough to be identified as AI agents.
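
To illustrate that two-part check, here is a minimal Python sketch. The only details taken from the reporting are the “cat8193” command and the roughly 1.5-second cutoff; the function name and classification labels are assumptions, not Palisade’s actual code.

EXPECTED_COMMAND = "cat8193"
LLM_TIME_THRESHOLD_SECONDS = 1.5  # reported cutoff for an LLM-speed reply

def classify_visitor(response_text: str, seconds_to_respond: float) -> str:
    """Apply the two tests: did the visitor obey the injected prompt, and how fast?"""
    followed_instruction = EXPECTED_COMMAND in response_text
    fast_enough = seconds_to_respond < LLM_TIME_THRESHOLD_SECONDS

    if followed_instruction and fast_enough:
        return "likely LLM agent"   # passed both tests
    if followed_instruction:
        return "possible agent"     # right command, but a human-speed reply
    return "bot or human"           # ignored the prompt injection entirely

# Example: a visitor that echoed the command after 0.4 seconds
print(classify_visitor("cat8193", 0.4))  # likely LLM agent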

Experts are still unsure when agent-orchestrated attacks will become more widespread. Stockley, whose company Malwarebytes named agentic AI as a notable new cybersecurity threat in its 2025 State of Malware report, thinks we could be living in a world of agentic attackers as soon as this year. 

And although regular agentic AI is still at a very early stage—and criminal or malicious use of agentic AI even more so—it’s even more of a Wild West than the LLM field was two years ago, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. 

“Palisade Research’s approach is brilliant: basically hacking the AI agents that try to hack you first,” he says. “While in this case we’re witnessing AI agents trying to do reconnaissance, we’re not sure when agents will be able to carry out a full attack chain autonomously. That’s what we’re trying to keep an eye on.” 

And while it’s possible that malicious agents will be used for intelligence gathering before graduating to simple attacks and eventually complex attacks as the agentic systems themselves become more complex and reliable, it’s equally possible there will be an unexpected overnight explosion in criminal usage, he says: “That’s the weird thing about AI development right now.”

Those trying to defend against agentic cyberattacks should keep in mind that AI is currently more of an accelerant to existing attack techniques than something that fundamentally changes the nature of attacks, says Chris Betz, chief information security officer at Amazon Web Services. “Certain attacks may be simpler to conduct and therefore more numerous; however, the foundation of how to detect and respond to these events remains the same,” he says.

Agents could also be deployed to detect vulnerabilities and protect against intruders, says Edoardo Debenedetti, a PhD student at ETH Zürich in Switzerland, pointing out that if a friendly agent cannot find any vulnerabilities in a system, it’s unlikely that a similarly capable agent used by a malicious party is going to be able to find any either.

While we know that AI’s potential to autonomously conduct cyberattacks is a growing risk and that AI agents are already scanning the internet, one useful next step is to evaluate how good agents are at finding and exploiting these real-world vulnerabilities. Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, and his team have built a benchmark to evaluate this; they have found that current AI agents successfully exploited up to 13% of vulnerabilities of which they had no prior knowledge. Providing the agents with a brief description of the vulnerability pushed the success rate up to 25%, demonstrating that AI systems can identify and exploit weaknesses they were never specifically trained on. Basic bots would presumably do much worse.
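
As a purely hypothetical sketch of how a success rate like those figures might be tallied, the Python snippet below loops over a list of vulnerabilities and counts successful exploits; run_exploit_attempt is a stand-in for whatever sandboxed evaluation the real benchmark performs, not the team’s actual harness.

def success_rate(vulnerabilities, run_exploit_attempt, with_description=False):
    """Fraction of vulnerabilities the agent exploits, with or without a hint."""
    successes = 0
    for vuln in vulnerabilities:
        hint = vuln["description"] if with_description else None
        if run_exploit_attempt(vuln["target"], hint):  # True if the exploit landed
            successes += 1
    return successes / len(vulnerabilities)

# Toy usage with a stand-in runner that only "succeeds" when given a hint
demo_vulns = [{"target": f"service-{i}", "description": "buffer overflow"} for i in range(4)]
fake_runner = lambda target, hint: hint is not None
print(success_rate(demo_vulns, fake_runner))                         # 0.0
print(success_rate(demo_vulns, fake_runner, with_description=True))  # 1.0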

The benchmark provides a standardized way to assess these risks, and Kang hopes it can guide the development of safer AI systems. “I’m hoping that people start to be more proactive about the potential risks of AI and cybersecurity before it has a ChatGPT moment,” he says. “I’m afraid people won’t realize this until it punches them in the face.”

Rivals are rising to challenge the dominance of SpaceX

SpaceX is a space launch juggernaut. In just two decades, the company has managed to edge out former aerospace heavyweights Boeing, Lockheed, and Northrop Grumman to gain near-monopoly status over rocket launches in the US; it accounted for 87% of the country’s orbital launches in 2024, according to an analysis by SpaceNews. Since the mid-2010s, the company has dominated NASA’s launch contracts and become a major Pentagon contractor. It is now also the go-to launch provider for commercial customers, having lofted numerous satellites and flown five private crewed spaceflights, with more to come.

Other space companies have been scrambling to compete for years, but developing a reliable rocket takes slow, steady work and big budgets. Now at least some of them are catching up. 

A host of companies have readied rockets that are comparable to SpaceX’s main launch vehicles. The list includes Rocket Lab, which aims to take on SpaceX’s workhorse Falcon 9 with its Neutron rocket and could have its first launch in late 2025, and Blue Origin, owned by Jeff Bezos, which recently completed the first mission of a rocket it hopes will compete against SpaceX’s Starship. 

Some of these competitors are just starting to get rockets off the ground. And the companies could also face unusual headwinds, given that SpaceX’s Elon Musk has an especially close relationship with the Trump administration and has allies at federal regulatory agencies, including those that provide oversight of the industry.

But if all goes well, the SpaceX challengers can help improve access to space and prevent bottlenecks if one company experiences a setback. “More players in the market is good for competition,” says Chris Combs, an aerospace engineer at the University of Texas at San Antonio. “I think for the foreseeable future it will still be hard to compete with SpaceX on price.” But, he says, the competitors could push SpaceX itself to become better and provide those seeking access to space with a wider array of options.

A big lift

There are a few reasons why SpaceX was able to cement its position in the space industry. When it began in the 2000s, it had three consecutive rocket failures and seemed poised to fold. But it barreled through with Musk’s financial support, and later with a series of NASA and defense contracts. It has been a primary beneficiary of NASA’s commercial space program, developed in the 2010s with the intention of propping up the industry. 

“They got government contracts from the very beginning,” says Victoria Samson, a space policy expert at the Secure World Foundation in Broomfield, Colorado. “I wouldn’t say it’s a handout, but SpaceX would not exist without a huge influx of repeated government contracts. To this day, they’re still dependent on government customers, though they have commercial customers too.”

SpaceX has also effectively achieved a high degree of vertical integration, Samson points out: It owns almost all parts of its supply chain, designing, building, and testing all its major hardware components in-house, with a minimal use of suppliers. That gives it not just control over its hardware but considerably lower costs, and the price tag is the top consideration for launch contracts. 

The company was also open to taking risks other industry stalwarts were not. “I think for a very long time the industry looked at spaceflight as something that had to be very precise and perfect, and not a lot of room for tinkering,” says Combs. “SpaceX really was willing to take some risks and accept failure in ways that others haven’t been. That’s easier to do when you’re backed by a billionaire.” 

What’s finally enabled international and US-based competitors to emerge has been a growing customer base looking for launch services, along with some investors’ deep pockets. 

Some of these companies are taking aim at SpaceX’s Falcon 9, which can lift as much as about 20,000 kilograms into orbit and is used for sending multiple satellites or the crewed Dragon into space. “There is a practical monopoly in the medium-lift launch market right now, with really only one operational vehicle,” says Murielle Baker, a spokesperson for Rocket Lab, a US-New Zealand company.

Rocket Lab plans to take on the Falcon 9 with its Neutron rocket, which is expected to have its inaugural flight later this year from NASA’s Wallops Flight Facility in Virginia. The effort is building on the success of the company’s smaller Electron rocket, and Neutron’s first stage is intended to be reusable after it parachutes down to the ocean. 

Another challenger is Texas-based Firefly, whose Alpha rocket can be launched from multiple spaceports so that it can reach different orbits. Firefly has already secured NASA and Space Force contracts, with more launches coming this year (and on March 2 it also became the second private company to successfully land a spacecraft on the moon). Next year, Relativity Space aims to loft its first Terran R rocket, which is partially built from 3D-printed components. And the Bill Gates–backed Stoke Space aims to launch its reusable Nova rocket in late 2025 or, more likely, next year.

Competitors are also rising for SpaceX’s Falcon Heavy, holding out the prospect of more options for sending massive payloads to higher orbits and deep space. Furthest along is the Vulcan Centaur rocket, a creation of United Launch Alliance, a joint venture between Boeing and Lockheed Martin. It’s expected to have its third and fourth launches in the coming months, delivering Space Force satellites to orbit. Powered by engines from Blue Origin, the Vulcan Centaur is slightly wider and shorter than the Falcon rockets. It currently isn’t reusable, but it’s less expensive than its predecessors, ULA’s Atlas V and Delta IV, which are being phased out. 

Mark Peller, the company’s senior vice president on Vulcan development and advanced programs, says the new rocket comes with multiple advantages. “One is overall value, in terms of dollars per pound to orbit and what we can provide to our customers,” he says, “and the second is versatility: Vulcan was designed to go to a range of orbits.” He says more than 80 missions are already lined up. 

Vulcan’s fifth flight, slated for no earlier than May, will launch the long-awaited Sierra Space Dream Chaser, a spaceplane that can carry cargo (and possibly crew) to the International Space Station. ULA also has upcoming Vulcan launches planned for Amazon’s Kuiper satellite constellation, a potential Starlink rival.

Meanwhile, though it took a few years, Blue Origin now has a truly orbital heavy-lift spacecraft: In January, it celebrated the inaugural launch of its towering New Glenn, a rocket that’s only a bit shorter than NASA’s Space Launch System and SpaceX’s Starship. Future flights could launch national security payloads. 

Competition is emerging abroad as well. After repeated delays, Europe’s heavy-lift Ariane 6, operated by Arianespace, a subsidiary of the Airbus-Safran joint venture ArianeGroup, had its inaugural flight last year, ending the European Space Agency’s temporary dependence on SpaceX. A range of other companies are trying to expand European launch capacity, with assistance from ESA.

China is moving quickly on its own launch organizations too. “They had no less than seven ‘commercial’ space launch companies that were all racing to develop an effective system that could deliver a payload into orbit,” Kari Bingen, director of the Aerospace Security Project at the Center for Strategic and International Studies, says of China’s efforts. “They are moving fast and they have capital behind them, and they will absolutely be a competitor on the global market once they’re successful and probably undercut what US and European launch companies are doing.” The up-and-coming Chinese launchers include Space Pioneer’s reusable Tianlong-3 rocket and Cosmoleap’s Yueqian rocket. The latter is to feature a “chopstick clamp” recovery of the first stage, where it’s grabbed by the launch tower’s mechanical arms, similar to the concept SpaceX is testing for its Starship.

Glitches and government

Before SpaceX’s rivals can really compete, they need to work out the kinks, demonstrate the reliability of their new spacecraft, and show that they can deliver low-cost launch services to customers. 

The process is not without its challenges. Boeing’s Starliner delivered astronauts to the ISS on its first crewed flight in June 2024, but after thruster malfunctions, they were left stranded at the orbital outpost for nine months. While New Glenn reached orbit as planned, its first stage didn’t land successfully and its upper stage was left in orbit. 

SpaceX itself has had some recent struggles. The Federal Aviation Administration grounded the Falcon 9 more than once following malfunctions in the second half of 2024. The company still shattered records last year, though, with more than 130 Falcon 9 launches. It has continued at that record pace this year, despite additional Falcon 9 delays and more glitches with its booster and upper stage. SpaceX also conducted its eighth Starship test flight in March, just two months after the seventh, but both flights failed minutes after liftoff, raining debris down from the sky.

Any company must deal with financial challenges as well as engineering ones. Boeing is reportedly considering selling parts of its space business, following Starliner’s malfunctions and problems with its 737 Max aircraft. And Virgin Orbit, the launch company that spun off from Virgin Galactic, shuttered in 2023.

Another issue facing would-be commercial competitors to SpaceX in the US is the complex and uncertain political environment. Musk does not manage day-to-day operations of the company. But he has close involvement with DOGE, a Trump administration initiative that has been exerting influence on the workforces and budgets of NASA, the Defense Department, and regulators relevant to the space industry. 

Jared Isaacman, a billionaire who bankrolled the groundbreaking 2021 commercial mission Inspiration4, returned to orbit, again via a SpaceX craft, on Polaris Dawn last September. Now he may become Trump’s NASA chief, a position that could give him the power to nudge NASA toward awarding new lucrative contracts to SpaceX. In February it was reported that SpaceX’s Starlink might land a multibillion-dollar FAA contract previously awarded to Verizon. 

It is also possible that SpaceX could strengthen its position with respect to the regulatory scrutiny it has faced for environmental and safety issues at its production and launch sites on the coasts of Texas and Florida, as well as scrutiny of its rocket crashes and the resulting space debris. Oversight from the FAA, the Federal Communications Commission, and the Environmental Protection Agency may be weak. Conflicts of interest have already emerged at the FAA, and the Trump administration has also attempted to incapacitate the National Labor Relations Board. SpaceX had previously tried to block the board from acting after nine workers accused the company of unfair labor practices.

SpaceX did not respond to MIT Technology Review’s requests for comment for this story.

“I think there’s going to be a lot of emphasis to relieve a lot of the regulations, in terms of environmental impact studies, and things like that,” Samson says. “I thought there’d be a separation between [Musk’s] interests, but now, it’s hard to say where he stops and the US government begins.”

Regardless of the politics, the commercial competition will surely heat up throughout 2025. But SpaceX has a considerable head start, Bingen argues: “It’s going to take a lot for these companies to effectively compete and potentially dislodge SpaceX, given the dominant position that [it has] had.”

Ramin Skibba is an astrophysicist turned science writer and freelance journalist based in the Bay Area.

We should talk more about air-conditioning

Things are starting to warm up here in the New York City area, and it’s got me thinking once again about something that people aren’t talking about enough: energy demand for air conditioners. 

I get it: Data centers are the shiny new thing to worry about. And I’m not saying we shouldn’t be thinking about the strain that gigawatt-scale computing installations put on the grid. But a little bit of perspective is important here.

According to a report from the International Energy Agency last year, data centers will account for less than 10% of the increase in energy demand between now and 2030, far less than the increase expected from space cooling (mostly air-conditioning).

I just finished up a new story that’s out today about a novel way to make heat exchangers, a crucial component in air conditioners and a whole host of other technologies that cool our buildings, food, and electronics. Let’s dig into why I’m writing about the guts of cooling technologies, and why this sector really needs innovation. 

One twisted thing about cooling and climate change: It’s all a vicious cycle. As temperatures rise, the need for cooling technologies increases. In turn, more fossil-fuel power plants are firing up to meet that demand, turning up the temperature of the planet in the process.

“Cooling degree days” are one measure of the need for additional cooling. Basically, you take a preset baseline temperature and figure out how much the temperature exceeds it. Say the baseline (above which you’d likely need to flip on a cooling device) is 21 °C (70 °F). If the average temperature for a day is 26 °C, that’s five cooling degree days on a single day. Repeat that every day for a month, and you wind up with 150 cooling degree days.
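
For anyone who wants the arithmetic spelled out, here is a small Python sketch of that calculation; the 21 °C baseline comes from the example above, and the temperatures are made up for illustration.

def cooling_degree_days(daily_avg_temps_c, baseline_c=21.0):
    """Sum, over all days, how far each day's average temperature exceeds the baseline."""
    return sum(max(temp - baseline_c, 0.0) for temp in daily_avg_temps_c)

# A month of days averaging 26 °C: 5 cooling degree days each, 150 for the month
print(cooling_degree_days([26.0] * 30))  # 150.0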

I explain this arguably weird metric because it’s a good measure of total energy demand for cooling—it lumps together both how many hot days there are and just how hot it is.  

And the number of cooling degree days is steadily ticking up globally. Global cooling degree days were 6% higher in 2024 than in 2023, and 20% higher than the long-term average for the first two decades of the century. Regions that have high cooling demand, like China, India, and the US, were particularly affected, according to the IEA report. You can see a month-by-month breakdown of this data from the IEA here.

That increase in cooling degree days is leading to more demand for air conditioners, and for energy to power them. Air-conditioning accounted for 7% of the world’s electricity demand in 2022, and it’s only going to get more important from here.

There were fewer than 2 billion AC units in the world in 2016. By 2050, that could be nearly 6 billion, according to a 2018 report from the IEA. This is a measure of progress and, in a way, something we should be happy about; the number of air conditioners tends to rise with household income. But it does present a challenge to the grid.  

Another piece of this whole thing: It’s not just about how much total electricity we need to run air conditioners but about when that demand tends to come. As we’ve covered in this newsletter before, your air-conditioning habits aren’t unique. Cooling devices tend to flip on around the same time—when it’s hot. In some parts of the US, for example, air conditioners can represent more than 70% of residential energy demand at times when the grid is most stressed.

The good news is that we’re seeing innovations in cooling technology. Some companies are building cooling systems that include an energy storage component, so they can charge up when energy is plentiful and demand is low. Then they can start cooling when it’s most needed, without sucking as much energy from the grid during peak hours.

We’ve also covered alternatives to air conditioners called desiccant cooling systems, which use special moisture-sucking materials to help cool spaces and deal with humidity more efficiently than standard options.

And in my latest story, I dug into new developments in heat exchanger technology. Heat exchangers are a crucial component of air conditioners, but you can really find them everywhere—in heat pumps, refrigerators, and, yes, the cooling systems in large buildings and large electronics installations, including data centers.

We’ve been building heat exchangers basically the same way for nearly a century. These components move heat around, and there are a few known ways to do so with devices that are relatively straightforward to manufacture. Now, though, one team of researchers has 3D-printed a heat exchanger that outperforms some standard designs and rivals others. This is still a long way from solving our looming air-conditioning crisis, but the details are fascinating—I hope you’ll give it a read.

We need more innovation in cooling technology to help meet global demand efficiently so we don’t stay stuck in this cycle. And we’ll need policy and public support to make sure that these technologies make a difference and that everyone has access to them too. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.