This spa’s water is heated by bitcoin mining

At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, where special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity, which in turn produces plenty of heat that usually goes to waste.
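For the technically curious, the “guessing” involved is a brute-force hash search known as proof of work. The Python sketch below is a toy illustration of the idea, not real mining software: it uses a single SHA-256 pass and an artificially easy difficulty, whereas actual bitcoin miners run a harder double SHA-256 search on specialized chips.

```python
import hashlib

def mine(block_header: str, difficulty_bits: int = 20) -> tuple[int, str]:
    """Toy proof of work: find a nonce whose SHA-256 hash has at least
    `difficulty_bits` leading zero bits. Real miners make trillions of
    such guesses per second in hardware, and nearly all of that
    electricity ends up as heat."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # a "winning" guess
        nonce += 1

nonce, digest = mine("example block header")
print(nonce, digest)
```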

“I thought, ‘That’s interesting; we need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa.

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s  Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams. 

Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. 

Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says. 

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters. 

Crypto enthusiasts like Goodman think much more of this is coming, especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists.

The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. 
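For a rough sense of scale, that comparison can be converted into kilowatt-hours. The household figure below is an assumption on the order of published US averages (roughly 29 kWh per day), so the result is a back-of-the-envelope estimate rather than a number taken from the index itself.

```python
# Back-of-the-envelope conversion of "47.2 days of average US household use"
# into kilowatt-hours. The daily household figure is an assumed round number;
# actual averages vary by year and region.
HOUSEHOLD_KWH_PER_DAY = 29     # assumed average US household consumption
DAYS_PER_TRANSACTION = 47.2    # figure cited from the Bitcoin Energy Consumption Index

kwh_per_transaction = HOUSEHOLD_KWH_PER_DAY * DAYS_PER_TRANSACTION
print(f"~{kwh_per_transaction:,.0f} kWh per transaction")  # about 1,370 kWh
```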

Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon.

One key barrier to using bitcoin for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.” 

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment, and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says. 

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time. 

Carrie Klein is a freelance journalist based in New York City.

A Google Gemini model now has a “dial” to adjust how much it reasons

Google DeepMind’s latest update to a top Gemini AI model includes a dial to control how much the system “thinks” through a response. The new feature is ostensibly designed to save money for developers, but it also concedes a problem: Reasoning models, the tech world’s new obsession, are prone to overthinking, burning money and energy in the process.

Since 2019, there have been a couple of tried and true ways to make an AI model more powerful. One was to make it bigger by using more training data, and the other was to give it better feedback on what constitutes a good answer. But toward the end of last year, Google DeepMind and other AI companies turned to a third method: reasoning.

“We’ve been really pushing on ‘thinking,’” says Jack Rae, a principal research scientist at DeepMind. Such models, which are built to work through problems logically and spend more time arriving at an answer, rose to prominence earlier this year with the launch of the DeepSeek R1 model. They’re attractive to AI companies because they can make an existing model better by training it to approach a problem pragmatically. That way, the companies can avoid having to build a new model from scratch. 

When the AI model dedicates more time (and energy) to a query, it costs more to run. Leaderboards of reasoning models show that one task can cost upwards of $200 to complete. The promise is that this extra time and money help reasoning models do better at handling challenging tasks, like analyzing code or gathering information from lots of documents. 

“The more you can iterate over certain hypotheses and thoughts,” says Google DeepMind chief technical officer Koray Kavukcuoglu, the more “it’s going to find the right thing.”

This isn’t true in all cases, though. “The model overthinks,” says Tulsee Doshi, who leads the product team at Gemini, referring specifically to Gemini Flash 2.5, the model released today that includes a slider for developers to dial back how much it thinks. “For simple prompts, the model does think more than it needs to.” 

When a model spends longer than necessary on a problem, it makes the model expensive to run for developers and worsens AI’s environmental footprint.

Nathan Habib, an engineer at Hugging Face who has studied the proliferation of such reasoning models, says overthinking is abundant. In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight, Habib says. Indeed, when OpenAI announced a new model in February, it said it would be the company’s last nonreasoning model. 

The performance gain is “undeniable” for certain tasks, Habib says, but not for many others where people normally use AI. Even when reasoning is used for the right problem, things can go awry. Habib showed me an example of a leading reasoning model that was asked to work through an organic chemistry problem. It started out okay, but halfway through its reasoning process the model’s responses started resembling a meltdown: It sputtered “Wait, but …” hundreds of times. It ended up taking far longer than a nonreasoning model would spend on one task. Kate Olszewska, who works on evaluating Gemini models at DeepMind, says Google’s models can also get stuck in loops.

Google’s new “reasoning” dial is one attempt to solve that problem. For now, it’s built not for the consumer version of Gemini but for developers who are making apps. Developers can set a budget for how much computing power the model should spend on a certain problem, the idea being to turn down the dial if the task shouldn’t involve much reasoning at all. Outputs from the model are about six times more expensive to generate when reasoning is turned on.
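For developers, a minimal sketch of what using that dial might look like is below, based on Google’s public google-genai Python SDK. Treat the model name and the thinking_budget parameter as assumptions drawn from Google’s documentation at the time of writing; check the current docs before relying on them.

```python
# Sketch: capping how much Gemini 2.5 Flash "thinks" on a simple prompt.
# Model and parameter names are assumptions based on Google's public docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder API key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the capital of France?",  # a prompt that needs little reasoning
    config=types.GenerateContentConfig(
        # A budget of 0 dials reasoning down for cheap, simple queries;
        # a larger token budget lets the model "think" longer on hard ones.
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(response.text)
```

Raising that budget for harder tasks, such as the code analysis mentioned above, is where the roughly sixfold cost difference comes in.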

Another reason for this flexibility is that it’s not yet clear when more reasoning will be required to get a better answer.

“It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking?” Rae says. 

Obvious tasks include coding (developers might paste hundreds of lines of code into the model and then ask for help), or generating expert-level research reports. The dial would be turned way up for these, and developers might find the expense worth it. But more testing and feedback from developers will be needed to find out when medium or low settings are good enough.

Habib says the amount of investment in reasoning models is a sign that the old paradigm for how to make models better is changing. “Scaling laws are being replaced,” he says. 

Instead, companies are betting that the best responses will come from longer thinking times rather than bigger models. It’s been clear for several years that AI companies are spending more money on inferencing—when models are actually “pinged” to generate an answer for something—than on training, and this spending will accelerate as reasoning models take off. Inferencing is also responsible for a growing share of emissions.

(While on the subject of models that “reason” or “think”: an AI model cannot perform these acts in the way we normally use such words when talking about humans. I asked Rae why the company uses anthropomorphic language like this. “It’s allowed us to have a simple name,” he says, “and people have an intuitive sense of what it should mean.” Kavukcuoglu says that Google is not trying to mimic any particular human cognitive process in its models.)

Even if reasoning models continue to dominate, Google DeepMind isn’t the only game in town. When the results from DeepSeek began circulating in December and January, they triggered a nearly $1 trillion dip in the stock market because they suggested that powerful reasoning models could be had for cheap. The model is referred to as “open weight”—in other words, its internal settings, called weights, are made publicly available, allowing developers to run it on their own rather than paying to access proprietary models from Google or OpenAI. (The term “open source” is reserved for models that disclose the data they were trained on.) 

So why use proprietary models from Google when open ones like DeepSeek are performing so well? Kavukcuoglu says that coding, math, and finance are cases where “there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations,” and he expects models that deliver on that, open or not, to win out. In DeepMind’s view, this reasoning will be the foundation of future AI models that act on your behalf and solve problems for you.

“Reasoning is the key capability that builds up intelligence,” he says. “The moment the model starts thinking, the agency of the model has started.”

This story was updated to clarify the problem of “overthinking.”

US office that counters foreign disinformation is being eliminated

The only office within the US State Department that monitors foreign disinformation is to be eliminated, according to US Secretary of State Marco Rubio, confirming reporting by MIT Technology Review.

The Counter Foreign Information Manipulation and Interference (R/FIMI) Hub is a small office in the State Department’s Office of Public Diplomacy that tracks and counters foreign disinformation campaigns. 

In shutting R/FIMI, the department’s controversial acting undersecretary, Darren Beattie, is delivering a major win to conservative critics who have alleged that such offices censor conservative voices. R/FIMI was created at the end of 2024 as a reorganization of the Global Engagement Center (GEC), a larger office with a similar mission that conservatives had long accused of censoring American voices despite its international mandate. In 2023, Elon Musk called the center the “worst offender in US government censorship [and] media manipulation” and a “threat to our democracy.” 

The culling of the office leaves the State Department without a way to actively counter the increasingly sophisticated disinformation campaigns from foreign governments like those of Russia, Iran, and China.

Shortly after publication, employees at R/FIMI received an email inviting them to an 11:15am meeting with Beattie, where they were told that the office, and their jobs, had been eliminated. 

Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

Then, Secretary of State Marco Rubio confirmed our reporting in a blog post in The Federalist, which had sued GEC last year, alleging that the center had infringed on the publication’s freedom of speech. “It is my pleasure to announce the State Department is taking a crucial step toward keeping the president’s promise to liberate American speech by abolishing forever the body formerly known as the Global Engagement Center (GEC),” he wrote. And he told Mike Benz, a former first-term Trump official who also reportedly has alt-right views, during a YouTube interview, “We ended government-sponsored censorship in the United States through the State Department.”  

Censorship claims

For years, conservative voices both in and out of government have accused Big Tech of censoring conservative views—and they often charged GEC with enabling such censorship. 

GEC had its roots in the Center for Strategic Counterterrorism Communications (CSCC), created by an Obama-era executive order, but it shifted its mission to fighting propaganda and disinformation from foreign governments and terrorist organizations in 2016, becoming the Global Engagement Center. It was always explicitly focused on the international information space, but some of the organizations that it funded also did work in the United States. It shut down last December after a measure to reauthorize its $61 million budget was blocked by Republicans in Congress, who accused it of helping Big Tech censor American conservative voices. 

R/FIMI had a similar goal of fighting foreign disinformation, but it was smaller: the newly created office had a $51.9 million budget and a small staff that, by mid-April, was down to just 40 employees, from 125 at GEC. In a Wednesday morning meeting, those employees were told that they would be put on administrative leave and terminated within 30 days. 

With the change in administrations, R/FIMI had never really gotten off the ground. Beattie, a controversial pick for undersecretary—he was fired as a speechwriter during the first Trump administration for attending a white nationalism conference, has suggested that the FBI organized the January 6 attack on Congress, and has said that it’s not worth defending Taiwan from China—had instructed the few remaining staff to be “pencils down,” one State Department official told me, meaning they should pause their work. 

The administration’s executive order on “countering censorship and restoring freedom of speech” reads like a summary of conservative accusations against GEC:

“Under the guise of combatting ‘misinformation,’ ‘disinformation,’ and ‘malinformation,’ the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate. Government censorship of speech is intolerable in a free society.”

In 2023, The Daily Wire, founded by conservative media personality Ben Shapiro, joined The Federalist in suing GEC for allegedly infringing on the companies’ First Amendment rights by funding two non-profit organizations, the London-based Global Disinformation Index and the New York-based NewsGuard, which had labeled The Daily Wire as “unreliable,” “risky,” and/or (per GDI) susceptible to foreign disinformation. (Those specific projects were not funded by GEC.) The lawsuit alleged that this amounted to censorship by “starving them of advertising revenue and reducing the circulation of their reporting and speech.” 

In 2022, the Republican attorneys general of Missouri and Louisiana named GEC among the federal agencies that, they alleged, were pressuring social networks to censor conservative views. Though the case eventually made its way to the Supreme Court, which found no First Amendment violations, a lower court had already removed GEC’s name from the list of defendants, ruling there was “no evidence” that GEC’s communications with the social media platforms had gone beyond “educating the platforms on ‘tools and techniques used by foreign actors.’”

The stakes

The GEC—and now R/FIMI—was targeted as part of a wider campaign to shut down groups accused of being “weaponized” against conservatives. 

Conservative critics railing against what they have alternately called a disinformation- or censorship-industrial complex have also taken aim at the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Stanford Internet Observatory, a prominent research group that conducted widely cited research on the flows of disinformation during elections. 

CISA’s former director, Chris Krebs, was personally targeted in an April 9 White House memo. And in response to the criticism and millions of dollars in legal fees, Stanford University shuttered the Stanford Internet Observatory ahead of the 2024 presidential election.

But this targeting comes at a time when foreign disinformation campaigns—especially by Russia, China, and Iran—have become increasingly sophisticated. 

According to one estimate, Russia spends $1.5 billion per year on foreign influence campaigns. In 2022, the Islamic Republic of Iran Broadcasting, that country’s primary foreign propaganda arm, had a $1.26 billion budget. And a 2015 estimate suggests that China spent up to $10 billion per year on media targeting foreign audiences—a figure that has almost certainly grown.

In September 2024, the Justice Department indicted two employees of RT, a Russian state-owned propaganda agency, in a $10 million scheme to create propaganda aimed at influencing US audiences through a media company that has since been identified as the conservative Tenet Media. 

The GEC was one effort to counter such campaigns. Some of its recent projects have included developing AI models to detect memes and deepfakes and exposing Russian propaganda efforts to influence Latin American public opinion against the war in Ukraine. 

By law, the Office of Public Diplomacy has to provide Congress with 15-day advance notice of any intent to reassign any funding allocated by Congress over $1 million. Congress then has time to respond, ask questions, and challenge the decisions—though to judge from its record with other unilateral executive-branch decisions to gut government agencies, it is unlikely to do so. 

We have reached out to the State Department for comment. 

This story was updated at 11:55am to note that R/FIMI employees have confirmed that the office closed.
This story was updated at 12:37pm to include confirmation about R/FIMI’s shutdown from Marco Rubio.
This story was updated at 6:10pm to add an identifier for Mike Benz.

Meet the researchers testing the “Armageddon” approach to asteroid defense

One day, in the near or far future, an asteroid about the length of a football stadium will find itself on a collision course with Earth. If we are lucky, it will land in the middle of the vast ocean, creating a good-size but innocuous tsunami, or in an uninhabited patch of desert. But if it has a city in its crosshairs, one of the worst natural disasters in modern times will unfold. As the asteroid steams through the atmosphere, it will begin to fragment—but the bulk of it will likely make it to the ground in just a few seconds, instantly turning anything solid into a fluid and excavating a huge impact crater in a heartbeat. A colossal blast wave, akin to one unleashed by a large nuclear weapon, will explode from the impact site in every direction. Homes dozens of miles away will fold like cardboard. Millions of people could die.

Fortunately for all 8 billion of us, planetary defense—the science of preventing asteroid impacts—is a highly active field of research. Astronomers are watching the skies, constantly on the hunt for new near-Earth objects that might pose a threat. And others are actively working on developing ways to prevent a collision should we find an asteroid that seems likely to hit us.

We already know that at least one method works: ramming the rock with an uncrewed spacecraft to push it away from Earth. In September 2022, NASA’s Double Asteroid Redirection Test, or DART, showed it could be done when a semiautonomous spacecraft the size of a small car, with solar panel wings, was smashed into an (innocuous) asteroid named Dimorphos at 14,000 miles per hour, successfully changing its orbit around a larger asteroid named Didymos. 

But there are circumstances in which giving an asteroid a physical shove might not be enough to protect the planet. If that’s the case, we could need another method, one that is notoriously difficult to test in real life: a nuclear explosion. 

Scientists have used computer simulations to explore this potential method of planetary defense. But in an ideal world, researchers would ground their models with cold, hard, practical data. Therein lies a challenge. Sending a nuclear weapon into space would violate international laws and risk inflaming political tensions. What’s more, it could do damage to Earth: A rocket malfunction could send radioactive debris into the atmosphere. 

Over the last few years, however, scientists have started to devise some creative ways around this experimental limitation. The effort began in 2023, with a team of scientists led by Nathan Moore, a physicist and chemical engineer at the Sandia National Laboratories in Albuquerque, New Mexico. Sandia is a semi-secretive site that serves as the engineering arm of America’s nuclear weapons program. And within that complex lies the Z Pulsed Power Facility, or Z machine, a cylindrical metallic labyrinth of warning signs and wiring. It’s capable of summoning enough energy to melt diamond. 

About 25,000 asteroids more than 460 feet long—a size range that starts with midsize “city killers” and goes up in impact from there—are thought to exist close to Earth. Just under half of them have been found.

The researchers reckoned they could use the Z machine to re-create the x-ray blast of a nuclear weapon—the radiation that would be used to knock back an asteroid—on a very small and safe scale.

It took a while to sort out the details. But by July 2023, Moore and his team were ready. They waited anxiously inside a control room, monitoring the thrumming contraption from afar. Inside the machine’s heart were two small pieces of rock, stand-ins for asteroids, and at the press of a button, a maelstrom of x-rays would thunder toward them. If they were knocked back by those x-rays, it would prove something that, until now, was purely theoretical: You can deflect an asteroid from Earth using a nuke.

This experiment “had never been done before,” says Moore. But if it succeeded, its data would contribute to the safety of everyone on the planet. Would it work?

Monoliths and rubble piles

Asteroid impacts are a natural disaster like any other. You shouldn’t lose sleep over the prospect, but if we get unlucky, an errant space rock may rudely ring Earth’s doorbell. “The probability of an asteroid striking Earth during my lifetime is very small. But what if one did? What would we do about it?” says Moore. “I think that’s worth being curious about.”

Forget about the gigantic asteroids you know from Hollywood blockbusters. Space rocks over two-thirds of a mile (about one kilometer) in diameter—those capable of imperiling civilization—are certainly out there, and some hew close to Earth’s own orbit. But because these asteroids are so elephantine, astronomers have found almost all of them already, and none pose an impact threat. 

Rather, it’s asteroids a size range down—those upwards of 460 feet (140 meters) long—that are of paramount concern. About 25,000 of those are thought to exist close to our planet, and just under half have been found. The day-to-day odds of an impact are extremely low, but even one of the smaller ones in that size range could do significant damage if it found Earth and hit a populated area—a capacity that has led astronomers to dub such midsize asteroids “city killers.”

If we find a city killer that looks likely to hit Earth, we’ll need a way to stop it. That could be technology to break or “disrupt” the asteroid into fragments that will either miss the planet entirely or harmlessly ignite in the atmosphere. Or it could be something that can deflect the asteroid, pushing it onto a path that will no longer intersect with our blue marble. 

Because disruption could accidentally turn a big asteroid into multiple smaller, but still deadly, shards bound for Earth, it’s often considered to be a strategy of last resort. Deflection is seen as safer and more elegant. One way to achieve it is to deploy a spacecraft known as a kinetic impactor—a battering ram that collides with an asteroid and transfers its momentum to the rocky interloper, nudging it away from Earth. NASA’s DART mission demonstrated that this can work, but there are some important caveats: You need to deflect the asteroid years in advance to make sure it completely misses Earth, and asteroids that we spot too late—or that are too big—can’t be swatted away by just one DART-like mission. Instead, you’d need several kinetic impactors—maybe many of them—to hit one side of the asteroid perfectly each time in order to push it far enough to save our planet. That’s a tall order for orbital mechanics, and not something space agencies may be willing to gamble on. 

In that case, the best option might instead be to detonate a nuclear weapon next to the asteroid. This would irradiate one hemisphere of the asteroid in x-rays, which in a few millionths of a second would violently shatter and vaporize the rocky surface. The stream of debris spewing out of that surface and into space would act like a rocket, pushing the asteroid in the opposite direction. “There are scenarios where kinetic impact is insufficient, and we’d have to use a nuclear explosive device,” says Moore.
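The mechanism Moore describes is, at bottom, conservation of momentum: whatever momentum the vaporized rock carries away, the asteroid gains in the opposite direction. The sketch below works through that arithmetic with made-up but plausible numbers; the ejecta fraction and speed are assumptions for illustration, not values from any mission study or from Sandia’s experiments.

```python
# Momentum-conservation sketch of deflection by x-ray ablation.
# All inputs are illustrative assumptions, not mission-study values.
import math

DENSITY = 2_000          # kg/m^3, loosely typical of a rocky asteroid
DIAMETER = 200           # m, a "city killer"-class object
EJECTA_FRACTION = 1e-5   # assumed fraction of the asteroid's mass blown off
EJECTA_SPEED = 2_000     # m/s, assumed mean speed of the escaping vapor

volume = (4 / 3) * math.pi * (DIAMETER / 2) ** 3
mass = DENSITY * volume
ejecta_mass = EJECTA_FRACTION * mass

# The vapor's momentum equals the push imparted to the remaining asteroid.
delta_v = ejecta_mass * EJECTA_SPEED / (mass - ejecta_mass)
print(f"delta-v ≈ {delta_v * 100:.2f} cm/s")  # on the order of a few cm/s
```

A nudge of that size, applied years before a predicted impact, is the kind of change that can turn a hit into a miss, which is why early detection matters in every deflection scenario.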

IKEA-style diagram of an asteroid trailed by a cloud of particles with an inset of an explosion

MCKIBILLO

This idea isn’t new. Several decades ago, Peter Schultz, a planetary geologist and impacts expert at Brown University, was giving a planetary defense talk at the Lawrence Livermore National Laboratory in California, another American lab focused on nuclear deterrence and nuclear physics research. Afterwards, he recalls, none other than Edward Teller, the father of the hydrogen bomb and a key member of the Manhattan Project, invited him into his office for a chat. “He wanted to do one of these near-Earth-asteroid flybys and wanted to test the nukes,” Schultz says. What, he wondered, would happen if you blasted an asteroid with a nuclear weapon’s x-rays? Could you forestall a spaceborne disaster using weapons of mass destruction?

But Teller’s dream wasn’t fulfilled—and it’s unlikely to become a reality anytime soon. The United Nations’ 1967 Outer Space Treaty states that no nation can deploy or use nuclear weapons off-world (even if it’s not clear how long certain spacefaring nations will continue to adhere to that rule).

Even raising the possibility of using nukes to defend the planet can be tricky. “There’re still many folks that don’t want to talk about it at all … even if that were the only option to prevent an impact,” says Megan Bruck Syal, a physicist and planetary defense researcher at Lawrence Livermore. Nuclear weapons have long been a sensitive subject, and with relations between several nuclear nations currently at a new nadir, anxiety over the subject is understandable. 

But in the US, there are groups of scientists who “recognize that we have a special responsibility as a spacefaring nation and as a nuclear-capable nation to look at this,” Syal says. “It isn’t our preference to use a nuclear explosive, of course. But we are still looking at it, in case it’s needed.” 

But how? 

Mostly, researchers have turned to the virtual world, using supercomputers at various US laboratories to simulate the asteroid-agitating physics of a nuclear blast. To put it mildly, “this is very hard,” says Mary Burkey, a physicist and planetary defense researcher at Lawrence Livermore. You cannot simply flick a switch on a computer and get immediate answers. “When a nuke goes off in space, there’s just x-ray light that’s coming out of it. It’s shining on the surface of your asteroid, and you’re tracking those little photons penetrating maybe a tiny little bit into the surface, and then somehow you have to take that micrometer worth of resolution and then propagate it out onto something that might be on the order of hundreds of meters wide, watching that shock wave propagate and then watching fragments spin off into space. That’s four different problems.” 

Mimicking the physics of x-ray rock annihilation with as much verisimilitude as possible is difficult work. But recent research using these high-fidelity simulations does suggest that nukes are an effective planetary defense tool for both disruption and deflection. The thing is, though, no two asteroids are alike; each is mechanically and geologically unique, meaning huge uncertainties remain. A more monolithic asteroid might respond in a straightforward way to a nuclear deflection campaign, whereas a rubble pile asteroid—a weakly bound fleet of boulders barely held together by their own gravity—might respond in a chaotic, uncontrollable way. Can you be sure the explosion wouldn’t accidentally shatter the asteroid, turning a cannonball into a hail of bullets still headed for Earth? 

Simulations can go a long way toward answering these questions, but they remain virtual re-creations of reality, with built-in assumptions. “Our models are only as good as the physics that we understand and that we put into them,” says Angela Stickle, a hypervelocity impact physicist at the Johns Hopkins University Applied Physics Laboratory in Maryland. To make sure the simulations are reproducing the correct physics and delivering realistic data, physical experiments are needed to ground them.

Every firing of the Z machine carries the energy of more than 1,000 lightning bolts, and each shot lasts a few millionths of a second.

Researchers studying kinetic impactors can get that sort of real-world data. Along with DART, they can use specialized cannons—like the Vertical Gun Range at NASA’s Ames Research Center in California—to fire all sorts of projectiles at meteorites. In doing so, they can find out how tough or fragile asteroid shards can be, effectively reproducing a kinetic impact mission on a small scale. 

Battle-testing nuke-based asteroid defense simulations is another matter. Re-creating the physics of these confrontations on a small scale was long considered to be exceedingly difficult. Fortunately, those keen on fighting asteroids are as persistent as they are creative—and several teams, including Moore’s at Sandia, think they have come up with a solution.

X-ray scissors

The prime mission of Sandia, like that of Lawrence Livermore, is to help maintain the nation’s nuclear weapons arsenal. “It’s a national security laboratory,” says Moore. “Planetary defense affects the entire planet,” he adds—making it, by default, a national security issue as well. And that logic, in part, persuaded the powers that be in July 2022 to try a brand-new kind of experiment. Moore took charge of the project in January 2023—and with the shot scheduled for the summer, he had only a few months to come up with the specific plan for the experiment. There was “lots of scribbling on my whiteboard, running computer simulations, and getting data to our engineers to design the test fixture for the several months it would take to get all the parts machined and assembled,” he says.

Although there were previous and ongoing experiments that showered asteroid-like targets with x-rays, Moore and his team were frustrated by one aspect of them. Unlike actual asteroids floating freely in space, the micro-asteroids on Earth were fixed in place. To truly test whether x-rays could deflect asteroids, targets would have to be suspended in a vacuum—and it wasn’t immediately clear how that could be achieved.

Generating the nuke-like x-rays was the easy part, because Sandia had the Z machine, a hulking mass of diodes, pipes, and wires interwoven with an assortment of walkways that circumnavigate a vacuum chamber at its core. When it’s powered up, electrical currents are channeled into capacitors—and, when commanded, blast that energy at a target or substance to create radiation and intense magnetic pressures. 

Flanked by klaxons and flashing lights, it’s an intimidating sight. “It’s the size of a building—about three stories tall,” says Moore. Every firing of the Z machine carries the energy of more than 1,000 lightning bolts, and each shot lasts a few millionths of a second: “You can’t even blink that fast.” The Z machine is named for the axis along which its energetic particles cascade, but the Z could easily stand for “Zeus.”

The Z Pulsed Power Facility, or Z machine, at Sandia National Laboratories in Albuquerque, New Mexico, concentrates electricity into short bursts of intense energy that can be used to create x-rays and gamma rays and compress matter to high densities.
RANDY MONTOYA/SANDIA NATIONAL LABORATORY

The original purpose of the Z machine, whose first form was built half a century ago, was nuclear fusion research. But over time, it’s been tinkered with, upgraded, and used for all kinds of science. “The Z machine has been used to compress matter to the same densities [you’d find at] the centers of planets. And we can do experiments like that to better understand how planets form,” Moore says, as an example. And the machine’s preternatural energies could easily be used to generate x-rays—in this case, by electrifying and collapsing a cloud of argon gas.

“The idea of studying asteroid deflection is completely different for us,” says Moore. And the machine “fires just once a day,” he adds, “so all the experiments are planned more than a year in advance.” In other words, the researchers had to be near certain their one experiment would work, or they would be in for a long wait to try again—if they were permitted a second attempt. 

For some time, they could not figure out how to suspend their micro-asteroids. But eventually, they found a solution: Two incredibly thin bits of aluminum foil would hold their targets in place within the Z machine’s vacuum chamber. When the x-ray blast hit them and the targets, the pieces of foil would be instantly vaporized, briefly leaving the targets suspended in the chamber and allowing them to be pushed back as if they were in space. “It’s like you wave your magic wand and it’s gone,” Moore says of the foil. He dubbed this technique “x-ray scissors.” 

In July 2023, after considerable planning, the team was ready. Within the Z machine’s vacuum chamber were two fingernail-size targets—a bit of quartz and some fused silica, both frequently found on real asteroids. Nearby, a pocket of argon gas swirled away. Satisfied that the gigantic gizmo was ready, everyone left and went to stand in the control room. For a moment, it was deathly quiet.

Stand by.

Fire.

It was over before their ears could even register a metallic bang. A tempest of electricity shocked the argon gas cloud, causing it to implode; as it did, it transformed into a plasma and x-rays screamed out of it, racing toward the two targets in the chamber. The foil vanished, the surfaces of both targets erupted outward as supersonic sprays of debris, and the targets flew backward, away from the x-rays, at 160 miles per hour.

Moore wasn’t there. “I was in Spain when the experiment was run, because I was celebrating my anniversary with my wife, and there was no way I was going to miss that,” he says. But just after the Z machine was fired, one of his colleagues sent him a very concise text: IT WORKED.

“We knew right away it was a huge success,” says Moore. The implications were immediately clear. The experimental setup was complex, but they were trying to achieve something extremely fundamental: a real-world demonstration that a nuclear blast could make an object in space move. 

“We’re genuinely looking at this from the standpoint of ‘This is a technology that could save lives.’”

Patrick King, a physicist at the Johns Hopkins University Applied Physics Laboratory, was impressed. Previously, pushing back objects using x-ray vaporization had been extremely difficult to demonstrate in the lab. “They were able to get a direct measurement of that momentum transfer,” he says, calling the x-ray scissors an “elegant” technique.

Sandia’s work took many in the community by surprise. “The Z machine experiment was a bit of a newcomer for the planetary defense field,” says Burkey. But she notes that we can’t overinterpret the results. It isn’t clear, from the deflection of the very small and rudimentary asteroid-like targets, how much a genuine nuclear explosion would deflect an actual asteroid. As ever, more work is needed. 

King leads a team that is also working on this question. His NASA-funded project involves the Omega Laser Facility, a complex based at the University of Rochester in upstate New York. Omega can generate x-rays by firing powerful lasers at a target within a specialized chamber. Upon being irradiated, the target generates an x-ray flash, similar to the one produced during a nuclear explosion in space, which can then be used to bombard various objects—in this case, some Earth rocks acting as asteroid mimics, and (crucially) some bona fide meteoritic material too. 

King’s Omega experiments have tried to answer a basic question: “How much material actually gets removed from the surface?” says King. The amount of material that flies off the pseudo-asteroids, and the vigor with which it’s removed, will differ from target to target. The hope is that these results—which the team is still considering—will hint at how different types of asteroids will react to being nuked. Although experiments with Omega cannot produce the kickback seen in the Z machine, King’s team has used a more realistic and diverse series of targets and blasted them with x-rays hundreds of times. That, in turn, should clue us in to how effectively, or not, actual asteroids would be deflected by a nuclear explosion.

“I wouldn’t say one [experiment] has definitive advantages over the other,” says King. “Like many things in science, each approach can yield insight along different ‘axes,’ if you will, and no experimental setup gives you the whole picture.”

Ikea-style diagram of the Earth with a chat bubble inset of two figures high-fiving.

MCKIBILLO

Experiments like Moore’s and King’s may sound technologically baroque—a bit like lightning-fast Rube Goldberg machines overseen by wizards. But they are likely the first in a long line of increasingly sophisticated tests. “We’ve just scratched the surface of what we can do,” Moore says. As with King’s experiments, Moore hopes to place a variety of materials in the Z machine, including targets that can stand in for the wetter, more fragile carbon-rich asteroids that astronomers commonly see in near-Earth space. “If we could get our hands on real asteroid material, we’d do it,” he says. And it’s expected that all this experimental data will be fed back into those nuke-versus-asteroid computer simulations, helping to verify the virtual results.

Although these experiments are perfectly safe, planetary defenders remain fully cognizant of the taboo around merely discussing the use of nukes for any reason—even if that reason is potentially saving the world. “We’re genuinely looking at this from the standpoint of ‘This is a technology that could save lives,’” King says.

Inevitably, Earth will be imperiled by a dangerous asteroid. And the hope is that when that day arrives, it can be dealt with using something other than a nuke. But comfort should be taken from the fact that scientists are researching this scenario, just in case it’s our only protection against the firmament. “We are your taxpayer dollars at work,” says Burkey. 

There’s still some way to go before they can be near certain that this asteroid-stopping technique will succeed. Their progress, though, belongs to everyone. “Ultimately,” says Moore, “we all win if we solve this problem.” 

Robin George Andrews is an award-winning science journalist based in London and the author, most recently, of How to Kill an Asteroid: The Real Science of Planetary Defense.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year. 

But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky. 

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  

Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10 to 30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis.

The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of opioid samples they collected. 

This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine.

What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco in a mask
Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.
B. HAYES/NIST

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs.

Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: heroin, fentanyl, xylazine, and everything else.

“It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week.

The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without.

In summer 2024, RaDAR analysis detected something really unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide. In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure; mixed with xylazine or medetomidine, it can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it has continued to appear in fentanyl samples at a sustained level since it was first detected. “This was an example of a compound we would have never thought to look for,” says Sisco. 

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine. 

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions.

“It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.” 

Adam Bluestein writes about innovation in business, science, and technology.

Phase two of military AI has arrived

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT. 

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. Consider this phase two of the US military’s AI push, where phase one began back in 2017 with older types of AI, like computer vision to analyze drone imagery. Though this newest phase began under the Biden administration, there’s fresh urgency as Elon Musk’s DOGE and Secretary of Defense Pete Hegseth push loudly for AI-fueled efficiency. 

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite. 

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.”

What are the limits of “human in the loop”?

Talk to as many defense-tech companies as I have and you’ll hear one phrase repeated quite often: “human in the loop.” It means that the AI is responsible for particular tasks, and humans are there to check its work. It’s meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

“‘Human in the loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to draw conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.” As AI systems rely on more and more data, this problem scales up. 

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped “Top Secret,” with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at. 

With the mountain of data growing each day, and then AI constantly creating new analyses, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be,” says Chris Mouton, a senior engineer for RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information. 

The defense giant Palantir is positioning itself to help, by offering its AI tools to determine whether a piece of data should be classified or not. It’s also working with Microsoft on AI models that would train on classified data. 

How high up the decision chain should AI go?

Zooming out for a moment, it’s worth noting that the US military’s adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets.

Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance. 

So what’s next? For consumers, it’s agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It’s also personalized AI, or models that learn from your private data to be more helpful. 

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March from Georgetown’s Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. “Military commanders are interested in AI’s potential to improve decision-making, especially at the operational level of war,” the authors wrote.

In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. This memo hasn’t been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it’s clear that AI is quickly moving up the chain not just to handle administrative grunt work, but to assist in the most high-stakes, time-sensitive decisions. 

I’ll be following these three questions closely. If you have information on how the Pentagon might be handling these questions, please reach out via Signal at jamesodonnell.22. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This architect wants to build cities out of lava

Arnhildur Pálmadóttir was around three years old when she saw a red sky from her living room window. A volcano was erupting about 25 miles away from where she lived on the northeastern coast of Iceland. Though it posed no immediate threat, its ominous presence seeped into her subconscious, populating her dreams with streaks of light in the night sky.

Fifty years later, these “gloomy, strange dreams,” as Pálmadóttir now describes them, have led to a career as an architect with an extraordinary mission: to harness molten lava and build cities out of it.

Pálmadóttir today lives in Reykjavik, where she runs her own architecture studio, S.AP Arkitektar, and the Icelandic branch of the Danish architecture company Lendager, which specializes in reusing building materials.

The architect believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques: drill straight into magma pockets and extract the lava; channel molten lava into pre-dug trenches that could form a city’s foundations; or 3D-print bricks from molten lava in a technique similar to the way objects can be printed out of molten glass.

Pálmadóttir and Skarphéðinsson first presented the concept during a talk at Reykjavik’s DesignMarch festival in 2022. This year they are producing a speculative film set in 2150, in an imaginary city called Eldborg. Their film, titled Lavaforming, follows the lives of Eldborg’s residents and looks back on how they learned to use molten lava as a building material. It will be presented at the Venice Biennale, a leading architecture festival, in May. 

Set in 2150, her speculative film Lavaforming presents a fictional city built from molten lava.
COURTESY OF S.AP ARKITEKTAR

Buildings and construction materials like concrete and steel currently contribute a staggering 37% of the world’s annual carbon dioxide emissions. Many architects are advocating for the use of natural or preexisting materials, but mixing earth and water into a mold is one thing; tinkering with 2,000 °F lava is another. 

Still, Pálmadóttir is piggybacking on research already being done in Iceland, which has 30 active volcanoes. Since 2021, eruptions have intensified in the Reykjanes Peninsula, which is close to the capital and to tourist hot spots like the Blue Lagoon. In 2024 alone, there were six volcanic eruptions in that area. This frequency has given volcanologists opportunities to study how lava behaves after a volcano erupts. “We try to follow this beast,” says Gro Birkefeldt M. Pedersen, a volcanologist at the Icelandic Meteorological Office (IMO), who has consulted with Pálmadóttir on a few occasions. “There is so much going on, and we’re just trying to catch up and be prepared.”

Pálmadóttir’s concept assumes that many years from now, volcanologists will be able to forecast lava flow accurately enough for cities to plan on using it in building. They will know when and where to dig trenches so that when a volcano erupts, the lava will flow into them and solidify into either walls or foundations.

Today, forecasting lava flows is a complex science that requires remote sensing technology and tremendous amounts of computational power to run simulations on supercomputers. The IMO typically runs two simulations for every new eruption—one based on data from previous eruptions, and another based on additional data acquired shortly after the eruption (from various sources like specially outfitted planes). With every event, the team accumulates more data, which makes the simulations of lava flow more accurate. Pedersen says there is much research yet to be done, but she expects “a lot of advancement” in the next 10 years or so. 

To design the speculative city of Eldborg for their film, Pálmadóttir and Skarphéðinsson used 3D-modeling software similar to what Pedersen uses for her simulations. The city is primarily built on a network of trenches that were filled with lava over the course of several eruptions, while buildings are constructed out of lava bricks. “We’re going to let nature design the buildings that will pop up,” says Pálmadóttir. 

The aesthetic of the city they envision will be less modernist and more fantastical—a bit “like [Gaudi’s] Sagrada Familia,” says Pálmadóttir. But the aesthetic output is not really the point; the pair’s goal is to galvanize architects today and spark an urgent discussion about the impact of climate change on our cities. She stresses the value of what can only be described as moonshot thinking. “I think it is important for architects not to be only in the present,” she told me. “Because if we are only in the present, working inside the system, we won’t change anything.”

Pálmadóttir was born in 1972 in Húsavik, a town known as the whale-watching capital of Iceland. But she was more interested in space and technology and spent a lot of time flying with her father, a construction engineer who owned a small plane. She credits his job for the curiosity she developed about science and “how things were put together”—an inclination that proved useful later, when she started researching volcanoes. So was the fact that Icelanders “learn to live with volcanoes from birth.” At 21, she moved to Norway, where she spent seven years working in 3D visualization before returning to Reykjavik and enrolling in an architecture program at the Iceland University of the Arts. But things didn’t click until she moved to Barcelona for a master’s degree at the Institute for Advanced Architecture of Catalonia. “I remember being there and feeling, finally, like I was in the exact right place,” she says. 

Before, architecture had seemed like a commodity and architects like “slaves to investment companies,” she says. Now, it felt like a path with potential. 

She returned to Reykjavik in 2009 and worked as an architect until she founded S.AP (for “studio Arnhildur Pálmadóttir”) Arkitektar in 2018; her son started working with her in 2019 and officially joined her as an architect this year, after graduating from the Southern California Institute of Architecture. 

In 2021, the pair witnessed their first eruption up close, near the Fagradalsfjall volcano on the Reykjanes Peninsula. It was there that Pálmadóttir became aware of the sheer quantity of material coursing through the planet’s veins, and the potential to divert it into channels. 

Lava has already proved to be a strong, long-lasting building material—at least in its solid state. When it cools, it solidifies into volcanic rock like basalt or rhyolite. The type of rock depends on the composition of the lava, but basaltic lava—like the kind found in Iceland and Hawaii—forms one of the hardest rocks on Earth, which means that structures built from this type of lava would be durable and resilient. 

For years, architects in Mexico, Iceland, and Hawaii (where lava is widely available) have built structures out of volcanic rock. But quarrying that rock is an energy-intensive process that requires heavy machines to extract, cut, and haul it, often across long distances, leaving a big carbon footprint. Harnessing lava in its molten state, however, could unlock new methods for sustainable construction. Jeffrey Karson, a professor emeritus at Syracuse University who specializes in volcanic activity and who cofounded the Syracuse University Lava Project, agrees that lava is abundant enough to warrant interest as a building material. To understand how it behaves, Karson has spent the past 15 years performing over a thousand controlled lava pours from giant furnaces. If we figure out how to build up its strength as it cools, he says, “that stuff has a lot of potential.” 

In his research, Karson found that inserting metal rods into the lava flow helps reduce the kind of uneven cooling that would lead to thermal cracking—and therefore makes the material stronger (a bit like rebar in concrete). Like glass and other molten materials, lava behaves differently depending on how fast it cools. When glass or lava cools slowly, crystals start forming, strengthening the material. Replicating this process—perhaps in a kiln—could slow down the rate of cooling and let the lava become stronger. This kind of controlled cooling is “easy to do on small things like bricks,” says Karson, so “it’s not impossible to make a wall.” 

Pálmadóttir is clear-eyed about the challenges before her. She knows the techniques she and Skarphéðinsson are exploring may not lead to anything tangible in their lifetimes, but they believe the ripple effect the project could create in the architecture community makes the work worth pursuing.

Both Karson and Pedersen caution that more experiments are necessary to study this material’s potential. For Skarphéðinsson, that potential transcends the building industry. More than 12 years ago, Icelanders voted that the island’s natural resources, like its volcanoes and fishing waters, should be declared national property. That means any city built from lava flowing out of these volcanoes would be controlled not by deep-pocketed individuals or companies, but by the nation itself. (The referendum was considered illegal almost as soon as it was approved by voters and has since stalled.) 

For Skarphéðinsson, the Lavaforming project is less about the material than about the “political implications that get brought to the surface with this material.” “That is the change I want to see in the world,” he says. “It could force us to make radical changes and be a catalyst for something”—perhaps a social megalopolis where citizens have more say in how resources are used and profits are shared more evenly.

Cynics might dismiss the idea of harnessing lava as pure folly. But the more I spoke with Pálmadóttir, the more convinced I became. It wouldn’t be the first time in modern history that a seemingly dangerous idea (for example, drilling into scalding pockets of underground hot springs) proved revolutionary. Once entirely dependent on oil, Iceland today obtains 85% of its electricity and heat from renewable sources. “[My friends] probably think I’m pretty crazy, but they think maybe we could be clever geniuses,” she told me with a laugh. Maybe she is a little bit of both.

Elissaveta M. Brandon is a regular contributor to Fast Company and Wired.

A small US city experiments with AI to find out what residents want

Bowling Green, Kentucky, is home to 75,000 residents who recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city?

When Doug Gorman, elected leader of the county that includes Bowling Green, took office in 2023, the city was the fastest-growing in the state and projected to double in size by 2050, but it lacked a plan for how that growth would unfold. Gorman had a meeting with Sam Ford, a local consultant who had worked with the surveying platform Pol.is, which uses machine learning to gather opinions from large groups of people.

They “needed a vision” for the anticipated growth, Ford says. The two convened a group of volunteers with experience in eight areas: economic development, talent, housing, public health, quality of life, tourism, storytelling, and infrastructure. They built a plan to use Pol.is to help write a 25-year plan for the city. The platform is just one of several new technologies used in Europe and increasingly in the US to help make sure that local governance is informed by public opinion.

After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what the 25-year plan should include. They could also vote on whether they agreed or disagreed with other ideas. The tool could be translated into a participant’s preferred language, and human moderators worked to make sure the traffic was coming from the Bowling Green area. 

Over the month that it was live, 7,890 residents participated, and 2,000 people submitted their own ideas. An AI-powered tool from Google Jigsaw then analyzed the data to find what people agreed and disagreed on. 

Experts on democracy technologies who were not involved in the project say this level of participation—about 10% of the city’s residents—was impressive.

“That is a lot,” says Archon Fung, director of the Ash Center for Innovation and Democratic Governance at the Harvard Kennedy School. A local election might see a 25% turnout, he says, and that requires nothing more than filling out a ballot. 

“Here, it’s a more demanding kind of participation, right? You’re actually voting on or considering some substantive things, and 2,000 people are contributing ideas,” he says. “So I think that’s a lot of people who are engaged.”

The plans that received the most attention in the Bowling Green experiment were hyperlocal. The ideas with the broadest support were increasing the number of local health-care specialists so residents wouldn’t have to travel to nearby Nashville for medical care, enticing more restaurants and grocery stores to open on the city’s north side, and preserving historic buildings. 

More contentious ideas included approving recreational marijuana, adding sexual orientation and gender identity to the city’s nondiscrimination clause, and providing more options for private education. Out of 3,940 unique ideas, 2,370 received more than 80% agreement, including initiatives like investing in stormwater infrastructure and expanding local opportunities for children and adults with autism.  
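
For a rough sense of how agreement figures like these can be computed from raw votes, here is a minimal Python sketch. The vote records, idea names, and simple per-idea tally are illustrative assumptions; Pol.is and the Google Jigsaw analysis used in Bowling Green additionally cluster voters to surface shared opinion groups, which this sketch does not attempt.

```python
from collections import defaultdict

# Hypothetical vote records: (idea_id, vote), where vote is +1 (agree) or -1 (disagree).
# Real Pol.is data also tracks who cast each vote and allows a "pass" option.
votes = [
    ("more-local-specialists", +1), ("more-local-specialists", +1),
    ("more-local-specialists", -1),
    ("recreational-marijuana", +1), ("recreational-marijuana", -1),
    ("preserve-historic-buildings", +1), ("preserve-historic-buildings", +1),
]

def agreement_rates(votes):
    """Return the share of agree votes for each idea."""
    tallies = defaultdict(lambda: {"agree": 0, "total": 0})
    for idea, vote in votes:
        tallies[idea]["total"] += 1
        if vote > 0:
            tallies[idea]["agree"] += 1
    return {idea: t["agree"] / t["total"] for idea, t in tallies.items()}

rates = agreement_rates(votes)
# Flag ideas clearing the 80% agreement bar cited in the Bowling Green results.
broad_support = [idea for idea, rate in rates.items() if rate > 0.8]
print(rates)
print("broad agreement:", broad_support)
```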

The volunteers running the experiment were not completely hands-off. Submitted ideas were screened according to a moderation policy, and redundant ideas were not posted. Ford says that 51% of ideas were published, and 31% were deemed redundant. About 6% of ideas were not posted because they were either completely off-topic or contained a personal attack.

But some researchers who study the technologies that can make democracy more effective question whether soliciting input in this manner is a reliable way to understand what a community wants.

One problem is self-selection—for example, certain kinds of people tend to show up to in-person forums like town halls. Research shows that seniors, homeowners, and people with high levels of education are the most likely to attend, Fung says. It’s possible that similar dynamics are at play among the residents of Bowling Green who decided to participate in the project.

“Self-selection is not an adequate way to represent the opinions of a public,” says James Fishkin, a political scientist at Stanford who’s known for developing a process he calls deliberative polling, in which a representative sample of residents is brought together for a weekend; participants are paid about $300 each and asked to deliberate in small groups. Other methods, used in some European governments, rely on jury-style groups of residents to make public policy decisions.

What’s clear to everyone who studies the effectiveness of these tools is that they promise to move a city in a more democratic direction, but we won’t know if Bowling Green’s experiment worked until residents see what the city does with the ideas that they raised.

“You can’t make policy based on a tweet,” says Beth Simone Noveck, who directs a lab that studies democracy and technology at Northeastern University. As she points out, residents were voting on 140-character ideas, and those now need to be formed into real policies. 

“What comes next,” she says, “is the conversation between the city and residents to develop a short proposal into something that can actually be implemented.” For residents to trust that their voice actually matters, the city must be clear on why it’s implementing some ideas and not others. 

For now, the organizers have made the results public, and they will make recommendations to the Warren County leadership later this year. 

How AI is interacting with our creative human processes

In 2021, 20 years after the death of her older sister, Vauhini Vara was still unable to tell the story of her loss. “I wondered,” she writes in Searches, her new collection of essays on AI technology, “if Sam Altman’s machine could do it for me.” So she tried ChatGPT. But as it expanded on Vara’s prompts in sentences ranging from the stilted to the unsettling to the sublime, the thing she’d enlisted as a tool stopped seeming so mechanical. 

“Once upon a time, she taught me to exist,” the AI model wrote of the young woman Vara had idolized. Vara, a journalist and novelist, called the resulting essay “Ghosts,” and in her opinion, the best lines didn’t come from her: “I found myself irresistibly attracted to GPT-3—to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them … as I tried to write more honestly, the AI seemed to be doing the same.”

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also offers a particularly human problem in narrative: How can we make sense of these machines, not just use them? And how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Both Vara’s book and The Uncanny Muse, a collection of essays on the history of art and automation by the music critic David Hajdu, explore how humans have historically and personally wrestled with the ways in which machines relate to our own bodies, brains, and creativity. At the same time, The Mind Electric, a new book by a neurologist, Pria Anand, reminds us that our own inner workings may not be so easy to replicate.

Searches is a strange artifact. Part memoir, part critical analysis, and part AI-assisted creative experimentation, Vara’s essays trace her time as a tech reporter and then novelist in the San Francisco Bay Area alongside the history of the industry she watched grow up. Tech was always close enough to touch: One college friend was an early Google employee, and when Vara started reporting on Facebook (now Meta), she and Mark Zuckerberg became “friends” on his platform. In 2007, she published a scoop that the company was planning to introduce ad targeting based on users’ personal information—the first shot fired in the long, gnarly data war to come. In her essay “Stealing Great Ideas,” she talks about turning down a job reporting on Apple to go to graduate school for fiction. There, she wrote a novel about a tech founder, which was later published as The Immortal King Rao. Vara points out that in some ways at the time, her art was “inextricable from the resources [she] used to create it”—products like Google Docs, a MacBook, an iPhone. But these pre-AI resources were tools, plain and simple. What came next was different.

Interspersed with Vara’s essays are chapters of back-and-forths between the author and ChatGPT about the book itself, where the bot serves as editor at Vara’s prompting. ChatGPT obligingly summarizes and critiques her writing in a corporate-­shaded tone that’s now familiar to any knowledge worker. “If there’s a place for disagreement,” it offers about the first few chapters on tech companies, “it might be in the balance of these narratives. Some might argue that the ­benefits—such as job creation, innovation in various sectors like AI and logistics, and contributions to the global economy—can outweigh the negatives.” 

Searches: Selfhood in the Digital Age
Vauhini Vara
PANTHEON, 2025

Vara notices that ChatGPT writes “we” and “our” in these responses, pulling it into the human story, not the tech one: “Earlier you mentioned ‘our access to information’ and ‘our collective experiences and understandings.’” When she asks what the rhetorical purpose of that choice is, ChatGPT responds with a numbered list of benefits including “inclusivity and solidarity” and “neutrality and objectivity.” It adds that “using the first-person plural helps to frame the discussion in terms of shared human experiences and collective challenges.” Does the bot believe it’s human? Or at least, do the humans who made it want other humans to believe it does? “Can corporations use these [rhetorical] tools in their products too, to subtly make people identify with, and not in opposition to, them?” Vara asks. ChatGPT replies, “Absolutely.”

Vara has concerns about the words she’s used as well. In “Thank You for Your Important Work,” she worries about the impact of “Ghosts,” which went viral after it was first published. Had her writing helped corporations hide the reality of AI behind a velvet curtain? She’d meant to offer a nuanced “provocation,” exploring how uncanny generative AI can be. But instead, she’d produced something beautiful enough to resonate as an ad for its creative potential. Even Vara herself felt fooled. She particularly loved one passage the bot wrote, about Vara and her sister as kids holding hands on a long drive. But she couldn’t imagine either of them being so sentimental. What Vara had elicited from the machine, she realized, was “wish fulfillment,” not a haunting. 

The machine wasn’t the only thing crouching behind that too-good-to-be-true curtain. The GPT models and others are trained through human labor, in sometimes exploitative conditions. And much of the training data was the creative work of human writers before her. “I’d conjured artificial language about grief through the extraction of real human beings’ language about grief,” she writes. The creative ghosts in the model were made of code, yes, but also, ultimately, made of people. Maybe Vara’s essay helped cover up that truth too.

In the book’s final essay, Vara offers a mirror image of those AI call-and-­response exchanges as an antidote. After sending out an anonymous survey to women of various ages, she presents the replies to each question, one after the other. “Describe something that doesn’t exist,” she prompts, and the women respond: “God.” “God.” “God.” “Perfection.” “My job. (Lost it.)” Real people contradict each other, joke, yell, mourn, and reminisce. Instead of a single authoritative voice—an editor, or a company’s limited style guide—Vara gives us the full gasping crowd of human creativity. “What’s it like to be alive?” Vara asks the group. “It depends,” one woman answers.    

David Hajdu, now music editor at The Nation and previously a music critic for The New Republic, goes back much further than the early years of Facebook to tell the history of how humans have made and used machines to express ourselves. Player pianos, microphones, synthesizers, and electrical instruments were all assistive technologies that faced skepticism before acceptance and, sometimes, elevation in music and popular culture. They even influenced the kind of art people were able to and wanted to make. Electrical amplification, for instance, allowed singers to use a wider vocal range and still reach an audience. The synthesizer introduced a new lexicon of sound to rock music. “What’s so bad about being mechanical, anyway?” Hajdu asks in The Uncanny Muse. And “what’s so great about being human?” 

The Uncanny Muse: Music, Art, and Machines from Automata to AI
David Hajdu
W.W. NORTON & COMPANY, 2025

But Hajdu is also interested in how intertwined the history of man and machine can be, and how often we’ve used one as a metaphor for the other. Descartes saw the body as empty machinery for consciousness, he reminds us. Hobbes wrote that “life is but a motion of limbs.” Freud described the mind as a steam engine. Andy Warhol told an interviewer that “everybody should be a machine.” And when computers entered the scene, humans used them as metaphors for themselves too. “Where the machine model had once helped us understand the human body … a new category of machines led us to imagine the brain (how we think, what we know, even how we feel or how we think about what we feel) in terms of the computer,” Hajdu writes. 

But what is lost with these one-to-one mappings? What happens when we imagine that the complexity of the brain—an organ we do not even come close to fully understanding—can be replicated in 1s and 0s? Maybe what happens is we get a world full of chatbots and agents, computer-­generated artworks and AI DJs, that companies claim are singular creative voices rather than remixes of a million human inputs. And perhaps we also get projects like the painfully named Painting Fool—an AI that paints, developed by Simon Colton, a scholar at Queen Mary University of London. He told Hajdu that he wanted to “demonstrate the potential of a computer program to be taken seriously as a creative artist in its own right.” What Colton means is not just a machine that makes art but one that expresses its own worldview: “Art that communicates what it’s like to be a machine.”  

Hajdu seems to be curious and optimistic about this line of inquiry. “Machines of many kinds have been communicating things for ages, playing invaluable roles in our communication through art,” he says. “Growing in intelligence, machines may still have more to communicate, if we let them.” But the question that The Uncanny Muse raises at the end is: Why should we art-­making humans be so quick to hand over the paint to the paintbrush? Why do we care how the paintbrush sees the world? Are we truly finished telling our own stories ourselves?

Pria Anand might say no. In The Mind Electric, she writes: “Narrative is universally, spectacularly human; it is as unconscious as breathing, as essential as sleep, as comforting as familiarity. It has the capacity to bind us, but also to other, to lay bare, but also obscure.” The electricity in The Mind Electric belongs entirely to the human brain—no metaphor necessary. Instead, the book explores a number of neurological afflictions and the stories patients and doctors tell to better understand them. “The truth of our bodies and minds is as strange as fiction,” Anand writes—and the language she uses throughout the book is as evocative as that in any novel. 

The Mind Electric: A Neurologist on the Strangeness and Wonder of Our Brains
Pria Anand
WASHINGTON SQUARE PRESS, 2025

In personal and deeply researched vignettes in the tradition of Oliver Sacks, Anand shows that any comparison between brains and machines will inevitably fall flat. She tells of patients who see clear images when they’re functionally blind, invent entire backstories when they’ve lost a memory, break along seams that few can find, and—yes—see and hear ghosts. In fact, Anand cites one study of 375 college students in which researchers found that nearly three-quarters “had heard a voice that no one else could hear.” These were not diagnosed schizophrenics or sufferers of brain tumors—just people listening to their own uncanny muses. Many heard their name, others heard God, and some could make out the voice of a loved one who’d passed on. Anand suggests that writers throughout history have harnessed organic exchanges with these internal apparitions to make art. “I see myself taking the breath of these voices in my sails,” Virginia Woolf wrote of her own experiences with ghostly sounds. “I am a porous vessel afloat on sensation.” The mind in The Mind Electric is vast, mysterious, and populated. The narratives people construct to traverse it are just as full of wonder. 

Humans are not going to stop using technology to help us create anytime soon—and there’s no reason we should. Machines make for wonderful tools, as they always have. But when we turn the tools themselves into artists and storytellers, brains and bodies, magicians and ghosts, we bypass truth for wish fulfillment. Maybe what’s worse, we rob ourselves of the opportunity to contribute our own voices to the lively and loud chorus of human experience. And we keep others from the human pleasure of hearing them too. 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Generative AI is learning to spy for the US military

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders. 

“We still need to validate the sources,” says Lowdon. But the unit’s commanders encouraged the use of large language models, he says, “because they provide a lot more efficiency during a dynamic situation.”

The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence—not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. 

Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI—tools that can engage in human-like conversation, such as those built by Vannevar Labs—represents a newer frontier.

The company applies existing large language models, including some from OpenAI and Microsoft, as well as some bespoke ones of its own, to troves of open-source intelligence it has been collecting since 2021. The scale at which this data is collected is hard to comprehend (and is a large part of what sets Vannevar’s products apart): terabytes of data in 80 different languages are hoovered up every day in 180 countries. The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information; it also uses nonclassified data that is difficult to get online (gathered by human operatives on the ground), as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities.

Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment, with the results delivered through a chatbot interface that’s not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China’s efforts to secure rare earth minerals in the Philippines. 
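
As a loose illustration of what a prompt-based sentiment step in such a pipeline might look like, here is a minimal Python sketch. The `call_llm` stub, the prompt wording, and the label set are hypothetical and are not Vannevar’s actual models or interface; the real system also layers in translation, threat detection, and its proprietary data collection.

```python
# Minimal sketch of prompt-based sentiment scoring for foreign-language news items.
# `call_llm` is a hypothetical stand-in for a request to whatever hosted model is used.

LABELS = ("hostile", "neutral", "friendly")

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model API."""
    raise NotImplementedError("wire this up to a real LLM endpoint")

def classify_stance(article_text: str, subject: str = "US forces") -> str:
    """Ask the model to translate if needed and label the article's stance toward `subject`."""
    prompt = (
        f"Translate the following article into English if necessary, then classify its "
        f"overall stance toward {subject} as one of: {', '.join(LABELS)}. "
        f"Answer with a single word.\n\n{article_text}"
    )
    answer = call_llm(prompt).strip().lower()
    # Fall back to "neutral" if the model replies with something outside the label set.
    return answer if answer in LABELS else "neutral"
```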

“Our real focus as a company,” says Scott Philips, Vannevar Labs’ chief technology officer, is to “collect data, make sense of that data, and help the US make good decisions.” 

That approach is particularly appealing to the US intelligence apparatus because for years the world has been awash in more data than human analysts can possibly interpret—a problem that contributed to the 2003 founding of Palantir, a company with a market value of over $200 billion that is known for its powerful and controversial tools, including a database that helps Immigration and Customs Enforcement search for and track information on undocumented immigrants.

In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data but to actually talk through an analysis with someone interactively.

Vannevar’s tools proved useful for the deployment in the Pacific, and Enzenauer and Lowdon say that while they were instructed to always double-check the AI’s work, they didn’t find inaccuracies to be a significant issue. Enzenauer regularly used the tool to track any foreign news reports in which the unit’s exercises were mentioned and to perform sentiment analysis, detecting the emotions and opinions expressed in text. Judging whether a foreign news article reflects a threatening or friendly opinion toward the unit is a task that on previous deployments she had to do manually.

“It was mostly by hand—researching, translating, coding, and analyzing the data,” she says. “It was definitely way more time-consuming than it was when using the AI.” 

Still, Enzenauer and Lowdon say there were hiccups, some of which would affect most digital tools: The ships had spotty internet connections much of the time, limiting how quickly the AI model could synthesize foreign intelligence, especially if it involved photos or video. 

With this first test completed, the unit’s commanding officer, Colonel Sean Dynan, said on a call with reporters in February that heavier use of generative AI was coming; this experiment was “the tip of the iceberg.” 

This is indeed the direction that the entire US military is barreling toward at full speed. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. (The US is of course not alone in this approach; notably, Israel has been using AI to sort through information and even generate lists of targets in its war in Gaza, a practice that has been widely criticized.)

Perhaps unsurprisingly, plenty of people outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and has expertise in leading safety audits for AI-powered systems. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” 

Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”

One particular use case that concerns her is sentiment analysis, which she argues is “a highly subjective metric that even humans would struggle to appropriately assess based on media alone.” 

If AI perceives hostility toward US forces where a human analyst would not—or if the system misses hostility that is really there—the military could make a misinformed decision or escalate a situation unnecessarily.

Sentiment analysis is indeed a task that AI has not perfected. Philips, the Vannevar CTO, says the company has built models specifically to judge whether an article is pro-US or not, but MIT Technology Review was not able to evaluate them. 

Chris Mouton, a senior engineer for RAND, recently tested how well-suited generative AI is for the task. He evaluated leading models, including OpenAI’s GPT-4 and an older version of GPT fine-tuned to do such intelligence work, on how accurately they flagged foreign content as propaganda compared with human experts. “It’s hard,” he says, noting that AI struggled to identify more subtle types of propaganda. But he adds that the models could still be useful in lots of other analysis tasks. 
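
An evaluation like the one described above ultimately comes down to comparing model labels with expert labels item by item. The sketch below shows one simple way to tally that comparison; the labels and data are illustrative only, not RAND’s dataset or scoring protocol.

```python
from collections import Counter

# Illustrative labels only: 1 = flagged as propaganda, 0 = not flagged.
expert_labels = [1, 0, 1, 1, 0, 0, 1, 0]
model_labels  = [1, 0, 0, 1, 0, 1, 1, 0]

def compare(expert, model):
    """Return accuracy against the experts plus a confusion count of (expert, model) pairs."""
    assert len(expert) == len(model)
    confusion = Counter(zip(expert, model))
    accuracy = sum(e == m for e, m in zip(expert, model)) / len(expert)
    return accuracy, confusion

accuracy, confusion = compare(expert_labels, model_labels)
print(f"agreement with experts: {accuracy:.0%}")
print("missed propaganda (expert=1, model=0):", confusion[(1, 0)])
print("false flags (expert=0, model=1):", confusion[(0, 1)])
```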

Another limitation of Vannevar’s approach, Khlaaf says, is that the usefulness of open-source intelligence is debatable. Mouton says that open-source data can be “pretty extraordinary,” but Khlaaf points out that unlike classified intel gathered through reconnaissance or wiretaps, it is exposed to the open internet—making it far more susceptible to misinformation campaigns, bot networks, and deliberate manipulation, as the US Army has warned.

For Mouton, the biggest open question now is whether these generative AI technologies will be simply one investigatory tool among many that analysts use—or whether they’ll produce the subjective analysis that’s relied upon and trusted in decision-making. “This is the central debate,” he says. 

What everyone agrees on is that AI models are accessible—you can just ask them a question about complex pieces of intelligence, and they’ll respond in plain language. But it’s still in dispute what imperfections will be acceptable in the name of efficiency.

Update: This story was updated to include additional context from Heidy Khlaaf.