How lidar measures the cost of climate disasters

The wildfires that swept through Los Angeles County in January 2025 left an indelible mark on the Southern California landscape. The Eaton and Palisades fires raged for 24 days, killing 29 people and destroying 16,000 structures, with losses estimated at $60 billion. More than 55,000 acres were consumed, and the landscape itself was physically transformed.

Researchers are now using lidar (light detection and ranging) technology to precisely measure these changes in the landscape’s geometry—helping them understand the effects of climate disasters.

Lidar, which measures how long it takes for pulses of laser light to bounce off surfaces and return, has been used in topographic mapping for decades. Today, airborne lidar from planes and drones maps the Earth’s surface in high detail. Scientists can then “diff” the data—compare before-and-after snapshots and highlight all the changes—to identify more subtle consequences of a disaster, including fault-line shifts, volcanic eruptions, and mudslides.

Falko Kuester, an engineering professor at the University of California, San Diego, co-directs ALERTCalifornia, a public safety program that uses real-time remote sensing to help detect wildfires. Kuester says lidar snapshots can tell a story over time.

“They give us a lay of the land,” he says. “This is what a particular region has been like at this point in time. Now, if you have consecutive flights at a later time, you can do a ‘difference.’ Show me what it looked like. Show me what it looks like. Tell me what changed. Was something constructed? Something burned down? Did something fall down? Did vegetation grow?” 

Shortly after the fires were contained in late January 2025, ALERTCalifornia sponsored new lidar flights over the Eaton and Palisades burn areas. NV5, an inspection and engineering firm, conducted the scans, and the US Geological Survey is now hosting the public data sets.  

Comparing a 2016 lidar snapshot with the January 2025 snapshot, Cassandra Brigham and her team at Arizona State University visualized the elevation changes—revealing where buildings, trees, and other structures had disappeared.

“We said, what would be a useful product for people to have as quickly as possible, since we’re doing this a couple weeks after the end of the fires?” says Brigham. Her team cleaned and reformatted the older, lower-resolution data and then subtracted the newer data. The resulting visualizations reveal the scale of devastation in ways satellite imagery can’t match. Red shows lost elevation (like when a building burns), and blue shows a gain (such as tree growth or new construction).
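For readers curious how such a comparison works in practice, here is a minimal sketch of that kind of elevation differencing, assuming two co-registered lidar-derived surface models saved as GeoTIFFs. The file names and the plus-or-minus-10-meter color range are illustrative, not the ASU team’s actual workflow.

```python
# A minimal sketch of "diffing" two lidar-derived elevation rasters.
# Assumes two co-registered digital surface models on the same grid;
# the file names and color range are illustrative.
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("dsm_2016.tif") as before, rasterio.open("dsm_2025.tif") as after:
    z_before = before.read(1, masked=True)
    z_after = after.read(1, masked=True)

# Positive values = elevation gained (vegetation growth, new construction);
# negative values = elevation lost (burned or collapsed structures).
dz = z_after.astype("float64") - z_before.astype("float64")

plt.imshow(dz, cmap="RdBu", vmin=-10, vmax=10)  # red = loss, blue = gain
plt.colorbar(label="Elevation change (m)")
plt.title("2016 vs. 2025 lidar difference")
plt.show()
```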

Lidar is helping scientists track the cascading effects of climate-driven disasters—from the structures and vegetation destroyed by wildfires to the landslides and debris flows that often follow in their wake. “For the Eaton and Palisades fires, for example, entire hillsides burned. So all of that vegetation is removed,” Kuester says. “Now you have an atmospheric river coming in, dumping water. What happens next? You have debris flows, mud flows, landslides.” 

Lidar’s usefulness for quantifying the costs of climate disasters underscores its value in preparing for future fires, floods, and earthquakes. But as policymakers weigh steep budget cuts to scientific research, these crucial lidar data collection projects could face an uncertain future.

Jon Keegan writes about technology and AI, and he publishes Beautiful Public Data (beautifulpublicdata.com), a curated collection of government data sets.

On the ground in Ukraine’s largest Starlink repair shop

Oleh Kovalskyy thinks that Starlink terminals are built as if someone assembled them with their feet. Or perhaps with their hands behind their back. 

To demonstrate this last image, Kovalskyy—a large, 47-year-old Ukrainian, clad in sweatpants and with tattoos stretching from his wrists up to his neck—leans over to wiggle his fingers in the air behind him, laughing as he does. Components often detach, he says through bleached-white teeth, and they’re sensitive to dust and moisture. “It’s terrible quality. Very terrible.” 

But even if he’s not particularly impressed by the production quality, he won’t dispute how important the satellite internet service has been to his country’s defense. 

Starlink is absolutely critical to Ukraine’s ability to continue in the fight against Russia: It’s how troops in battle zones stay connected with faraway HQs; it’s how many of the drones essential to Ukraine’s survival hit their targets; it’s even how soldiers stay in touch with spouses and children back home. 

At the time of my visit to Kovalskyy in March 2025, however, it had begun to seem like this vital support system might suddenly disappear. Reuters had just broken news suggesting that Elon Musk, who was then still deeply enmeshed in Trump world, would remove Ukraine’s access to the service should its government fail to toe the line in US-led peace negotiations. Musk denied the allegations shortly afterward, but given Donald Trump’s fickle foreign policy and inconsistent support of Ukrainian president Volodymyr Zelensky, the uncertainty of the technology’s future had become—and remains—impossible to ignore.

a view down at the back of a volunteer working at a corner workbench. Tools and components are piled on every bit of the surface as well as the shelves in front of him.

ELENA SUBACH
a cardboard box stuffed with grey cylinders

ELENA SUBACH

Kovalskyy’s unofficial Starlink repair shop may be the biggest of its kind in the world. Ordered chaos is the best way to describe it.

The stakes couldn’t be higher: Another Reuters report in late July revealed that Musk had ordered the restriction of Starlink in parts of Ukraine during a critical counteroffensive back in 2022. “Ukrainian troops suddenly faced a communications blackout,” the story explains. “Soldiers panicked, drones surveilling Russian forces went dark, and long-range artillery units, reliant on Starlink to aim their fire, struggled to hit targets.”

None of this is lost on Kovalskyy—and for now, keeping Starlink up and running largely comes down to the unofficial community of users and engineers of which Kovalskyy is just one part: Narodnyi Starlink.

The group, whose name translates to “The People’s Starlink,” was created back in March 2022 by a tech-savvy veteran of the previous battles against Russia-backed militias in Ukraine’s east. It started as a Facebook group for the country’s young but burgeoning community of Starlink users—a forum to share guidance and swap tips—but it very quickly emerged as a major support system for the new war effort. Today, it has grown to almost 20,000 members, including the unofficial expert “Dr. Starlink”—famous for his creative ways of customizing the systems—and other volunteer engineers like Kovalskyy and his men. It’s a prime example of the many informal, yet highly effective, volunteer networks that have kept Ukraine in the fight, both on and off the front line.

A repaired and mounted Starlink terminal standing on a cobbled road

ELENA SUBACH
a Starlink unit mounted to the roof of a vehicle with pink tinted windows

ELENA SUBACH

Kovalskyy and his crew of eight volunteers have repaired or customized more than 15,000 terminals since the war began in February 2022. Here, they test repaired units in a nearby parking lot.

Kovalskyy gave MIT Technology Review exclusive access to his unofficial Starlink repair workshop in the city of Lviv, about 300 miles west of Kyiv. Ordered chaos is the best way to describe it: Spread across a few small rooms in a nondescript two-story building behind a tile shop, sagging cardboard boxes filled with mud-splattered Starlink casings form alleyways among the rubble of spare parts. Like flying buttresses, green circuit boards seem to prop up the walls, and coils of cable sprout from every crevice.

Those acquainted with the workshop refer to it as the biggest of its kind in Ukraine—and, by extension, maybe the world. Official and unofficial estimates suggest that anywhere from 42,000 to 160,000 Starlink terminals operate in the country. Kovalskyy says he and his crew of eight volunteers have repaired or customized more than 15,000 terminals since the war began.

a surface scattered with pieces of used tape of various colors and sizes. Two ziploc bags with small metal parts are also taped up.
The informal, accessible nature of the Narodnyi Starlink community has been critical to its success. One military communications officer was inspired by Kovalskyy to set up his own repair workshop as part of Ukraine’s armed forces, but he says that official processes can be slower than private ones by a factor of 10.
ELENA SUBACH

Despite the pressure, the chance that they might lose access to Starlink was not worrying volunteers like Kovalskyy at the time of my visit; in our conversations, it was clear they had more pressing concerns than the whims of a foreign tech mogul. Russia continues to launch frequent aerial bombardments of Ukrainian cities, sometimes sending more than 500 drones in a single night. The threat of involuntary mobilization to the front line looms on every street corner. How can one plan for a hypothetical future crisis when crisis defines every minute of one’s day?


Almost every inch of every axis of the battlefield in Ukraine is enabled by Starlink. It connects pilots near the trenches with reconnaissance drones soaring kilometers above them. It relays the video feeds from those drones to command centers in rear positions. And it even connects soldiers, via encrypted messaging services, with their family and friends living far from the front.  

Although some soldiers and volunteers, including members of Narodnyi Starlink, refer to Starlink as a luxury, the reality is that it’s an essential utility; without it, Ukrainian forces would need to rely on other, often less effective means of communication. These include fixed-line networks, mobile internet, and older geostationary satellite technology—all of which provide connectivity that is slower, more vulnerable to interference, or more difficult for untrained soldiers to set up. 

“If not for Starlink, we would already be counting rubles in Kyiv,” Kovalskyy says.

close up of a Starlink unit on the lap of a volunteer, who is writing notes in a gridded notebook

ELENA SUBACH
a hand holding pieces of shrapnel

ELENA SUBACH

The workshop’s crew has learned to perform adjustments to terminals, especially in adapting them for battlefield conditions. At right, a volunteer engineer shows the fragments of shrapnel he has extracted from the terminals.

Despite being designed primarily for commercial use, Starlink provides a fantastic battlefield solution. The low-latency, high-bandwidth connection its terminals establish with its constellation of low-Earth-orbit satellites can transmit large streams of data while remaining very difficult for the enemy to jam—in part because the satellites, unlike geostationary ones, are in constant motion. 

It’s also fairly easy to use, so that soldiers with little or no technical knowledge can connect in minutes. And the system costs much less than other military technology; while the US and Polish governments pay business rates for many of Ukraine’s Starlink systems, individual soldiers or military units can purchase the hardware at the private rate of about $500, and subscribe for just $50 per month.

No alternatives match Starlink for cost, ease of use, or coverage—and none will in the near future. Its constellation of 8,000 satellites dwarfs that of its main competitor, a service called OneWeb sold by the French satellite operator Eutelsat, which has only 630 satellites. OneWeb’s hardware costs about 20 times more, and a subscription can run significantly higher, since OneWeb targets business customers. Amazon’s Project Kuiper, the most likely future competitor, started putting satellites in space only this year. 


Volodymyr Stepanets, a 51-year-old Ukrainian self-described “geek,” had been living in Krakow, Poland, with his family when Russia invaded in 2022. But before that, he had volunteered for several years on the front lines of the war against Russian-supported paramilitaries that began in 2014. 

He recalls, in those early months in eastern Ukraine, witnessing troops coordinating an air strike with rulers and a calculator; the whole process took them between 30 and 40 minutes. “All these calculations can be done in one minute,” he says he told them. “All we need is a very stupid computer and very easy software.” (The Ukrainian military declined to comment on this issue.)

Stepanets subsequently committed to helping that unit, the 72nd Brigade, integrate modern technology into its operations. He says that within one year, he had taught them how to use modern communication platforms, positioning devices, and older satellite communication systems that predate Starlink. 

a Starlink terminal with leaves inside the housing, seen lit in silhouette and numbered 5566
Narodnyi Starlink members ask each other for advice about how to adapt the systems: how to camouflage them from marauding Russian drones or resolve glitches in the software, for example.
ELENA SUBACH

So after Russian tanks rolled across the border, Stepanets was quick to see how Starlink’s service could provide an advantage to Ukraine’s armed forces. He also recognized that these units, as well as civilian users, would need support in utilizing the new technology. And that’s how he came up with the idea for Narodnyi Starlink, an open Facebook group he launched on March 21, 2022, just a few weeks after the full-scale invasion began and the Ukrainian government requested the activation of Starlink.

Over the past few years, the Narodnyi Starlink digital community has grown to include volunteer engineers, resellers, and military service members interested in the satellite comms service. The group’s members post roughly three times per day, often sharing or asking for advice about adaptations, or seeking volunteers to fix broken equipment. A user called Igor Semenyak recently asked, for example, whether anyone knew how to mask his system from infrared cameras. “How do you protect yourself from heat radiation?” he wrote, to which someone suggested throwing special heat-proof fabric over the terminal.

Its most famous member is probably a man widely considered the brains of the group: Oleg Kutkov, a 36-year-old software engineer otherwise known to some members as “Dr. Starlink.” Kutkov had been privately studying Starlink technology from his home in Kyiv since 2021, having purchased a system to tinker with when service was still unavailable in the country; he believes that he may have been the country’s first Starlink user. Like Stepanets, he saw the immense potential for Starlink after Russia broke traditional communication lines ahead of its attack.

“Our infrastructure was very vulnerable because we did not have a lot of air defense,” says Kutkov, who still works full time as an engineer at the US networking company Ubiquiti’s R&D center in Kyiv. “Starlink quickly became a crucial part of our survival.”

Stepanets contacted Kutkov after coming across his popular Twitter feed and blog, which had been attracting a lot of attention as early Starlink users sought help. Kutkov still publishes the results of his own research there—experiments he performs in his spare time, sometimes staying up until 3 a.m. to complete them. In May, for example, he published a blog post explaining how users can physically move a user account from one terminal to another when the printed circuit board in one is “so severely damaged that repair is impossible or impractical.” 

“Oleg Kutkov is the coolest engineer I’ve met in my entire life,” Kovalskyy says.

a volunteer holding a Starlink vertically to pry it open

ELENA SUBACH
two volunteers at workbenches repairing terminals

ELENA SUBACH

When the fighting is at its worst, the workshop may receive 500 terminals to repair every month. The crew lives and sometimes even sleeps there.

Supported by Kutkov’s technical expertise and Stepanets’s organizational prowess, Kovalskyy’s warehouse became the major repair hub (though other volunteers also make repairs elsewhere). Over time, Kovalskyy—who co-owned a regional internet service provider before the war—and his crew have learned to perform adjustments to Starlink terminals, especially to adapt them for battlefield conditions. For example, they modified them to draw power at the right voltage directly from vehicles, years before Starlink released a proprietary car adapter. They’ve also switched out Starlink’s proprietary SPX plugs—which Kovalskyy criticized as vulnerable to moisture and temperature changes—with standard Ethernet ports. 

Together, the three civilians—Kutkov, Stepanets, and Kovalskyy—effectively lead Narodnyi Starlink. Along with several other members who wished to remain anonymous, they hold meetings every Monday over Zoom to discuss their activities, including recent Starlink-related developments on the battlefield, as well as information security. 

While the public group served as a suitable means of disseminating information in the early stages of the war, when speed was critical, they have had to move a lot of their communications to private channels after discovering Russian surveillance; Stepanets says that at least as early as 2024, Russian forces had translated a 300-page educational document the group had produced and shared online. Now, as administrators of the Facebook group, the three men block the publication of any posts deemed to reveal information that might be useful to Russian forces. 

Stepanets believes the threat extends beyond the group’s intel to its members’ physical safety. When we talked, he brought up the attempted assassination of the Ukrainian activist and volunteer Serhii Sternenko in May this year. Although Sternenko was unaffiliated with Narodnyi Starlink, the event served as a clear reminder of the risks even civilian volunteers undertake in wartime Ukraine. “The Russian FSB and other [security] services still understand the importance of participation in initiatives like [Narodnyi Starlink],” Stepanets says. He stresses that the group is not an organization with a centralized chain of command, but a community that would continue operating if any of its members were no longer able to perform their roles. 

closeup of a Starlink board with light shining through the holes
“We have extremely professional engineers who are extremely intelligent,” Kovalskyy told me. “Repairing Starlink terminals for them is like shooting ducks with HIMARS [a vehicle-borne GPS-guided rocket launcher].”
ELENA SUBACH

The informal, accessible nature of this community has been critical to its success. Operating outside official structures has allowed Narodnyi Starlink to function much more efficiently than state channels. Yuri Krylach, a military communications officer who was inspired by Kovalskyy to set up his own repair workshop as part of Ukraine’s armed forces, says that official processes can be slower than private ones by a factor of 10; his own team’s work is often interrupted by other tasks that commanders deem more urgent, whereas members of the Narodnyi Starlink community can respond to requests quickly and directly. (The military declined to comment on this issue, or on any military connections with Narodnyi Starlink.)


Most of the Narodnyi Starlink members I spoke to, including active-duty soldiers, were unconcerned about the report that Musk might withdraw access to the service in Ukraine. They pointed out that doing so would involve terminating state contracts, including those with the US Department of Defense and Poland’s Ministry of Digitalization. Losing contracts worth hundreds of millions of dollars (the Polish government claims to pay $50 million per year in subscription fees), on top of the private subscriptions, would cost the company a significant amount of revenue. “I don’t really think that Musk would cut this money supply,” Kutkov says. “It would be quite stupid.” Oleksandr Dolynyak, an officer in the 103rd Separate Territorial Defense Brigade and a Narodnyi Starlink member since 2022, says: “As long as it is profitable for him, Starlink will work for us.”

Stepanets does believe, however, that Musk’s threats exposed an overreliance on the technology that few had properly considered. “Starlink has really become one of the powerful tools of defense of Ukraine,” he wrote in a March Facebook post entitled “Irreversible Starlink hegemony,” accompanied by an image of the evil Darth Sidious from Star Wars. “Now, the issue of the country’s dependence on the decisions of certain eccentric individuals … has reached [a] melting point.”

Even if telecommunications experts both inside and outside the military agree that Starlink has no direct substitute, Stepanets believes that Ukraine needs to diversify its portfolio of satellite communication tools anyway, integrating additional high-speed satellite communication services like OneWeb. This would relieve some of the pressure caused by Musk’s erratic, unpredictable personality and, he believes, give Ukraine some sense of control over its wartime communications. (SpaceX did not respond to a request for comment.) 

The Ukrainian military seems to agree with this notion. In late March, at a closed-door event in Kyiv, the country’s then-deputy minister of defense Kateryna Chernohorenko announced the formation of a special Space Policy Directorate “to consolidate internal and external capabilities to advance Ukraine’s military space sector.” The announcement referred to the creation of a domestic “satellite constellation,” which suggests that reliance on foreign services like Starlink had been a catalyst. “Ukraine needs to transition from the role of consumer to that of a full-fledged player in the space sector,” a government blog post stated. (Chernohorenko did not respond to a request for comment.)

Ukraine isn’t alone in this quandary. Recent discussions about a potential Starlink deal with the Italian government, for example, have stalled as a result of Musk’s behavior. And as Juliana Süss, an associate fellow at the UK’s Royal United Services Institute, points out, Taiwan chose SpaceX’s competitor Eutelsat when it sought a satellite communications partner in 2023.

“I think we always knew that SpaceX is not always the most reliable partner,” says Süss, who also hosts RUSI’s War in Space podcast, citing Musk’s controversial comments about the country’s status. “The Taiwan problems are a good example for how the rest of the world might be feeling about this.”

Nevertheless, Ukraine is about to become even more deeply enmeshed with Starlink; the country’s leading mobile operator Kyivstar announced in July that Ukraine will soon become the first European nation to offer Starlink direct-to-mobile services. Süss is cautious about placing too much emphasis on this development, though. “This step does increase dependency,” she says. “But that dependency is already there.” Adding an additional channel of communications as a possible backup is otherwise a logical action for a country at war, she says.


These issues can feel far away for the many Ukrainians who are just trying to make it through to the next day. Despite its location in the far west of Ukraine, Lviv, home to Kovalskyy’s shop, is still frequently hit by Russian kamikaze drones, and local military-affiliated sites are popular targets. 

Still, during our time together, Kovalskyy was far more worried by the prospect of his team’s possible mobilization. In March, the Ministry of Defense had removed the special status that had protected his people from involuntary conscription because of their volunteer work. They’re now at risk of being essentially picked up off the street by Ukraine’s dreaded military recruitment teams, known as the TCK, whenever they leave the house.

A room with walls covered by a grid of patches and Ukrainian flags, and stacks of grey boxes on the floor
The repair shop displays patches from many different Ukrainian military units—each given as a gift for their services. “We sometimes perform miracles with Starlinks,” Kovalskyy said.
COURTESY OF THE AUTHOR

This is true even though there’s so much demand for the workshop’s services that during my visit, Kovalskyy expressed frustration at the vast amount of time they’ve had to dedicate solely to basic repairs. “We have extremely professional engineers who are extremely intelligent,” he told me. “Repairing Starlink terminals for them is like shooting ducks with HIMARS [a vehicle-borne GPS-guided rocket launcher].” 

At least the situation seemed to have become better on the front over the winter, Kovalskyy added, handing me a Starlink antenna whose flat, white surface had been ripped open by shrapnel. When the fighting is at its worst, the team might receive 500 terminals to repair every month, and the crew lives in the workshop, sometimes even sleeping there. But at that moment in time, it was receiving only a couple of hundred.

We ended our morning at the workshop by browsing its vast collection of varied military patches, pinned to the wall on large pieces of Velcro. Each had been given as a gift by a different unit as thanks for the services of Kovalskyy and his team, an indication of the diversity and size of Ukraine’s military: almost 1 million soldiers protecting a 600-mile front line. At the same time, it’s a physical reminder that they almost all rely on a single technology with just a few production factories located on another continent nearly 6,000 miles away.

“We sometimes perform miracles with Starlinks,” Kovalskyy says. 

He and his crew can only hope that they will still be able to do so for the foreseeable future—or, better yet, that they won’t need to at all.  

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers including Wired, the Guardian, and MIT Technology Review.

Why recycling isn’t enough to address the plastic problem

I remember using a princess toothbrush when I was little. The handle was purple, teal, and sparkly. Like most of the other pieces of plastic that have ever been made, it’s probably still out there somewhere, languishing in a landfill. (I just hope it’s not in the ocean.)

I’ve been thinking about that toothbrush again this week after UN talks about a plastic treaty broke down on Friday. Nations had gotten together to try to write a binding treaty to address plastic waste, but negotiators left without a deal.

Plastic is widely recognized as a huge source of environmental pollution—again, I’m wondering where that toothbrush is—but the material is also a contributor to climate change. Let’s dig into why talks fell apart and how we might address emissions from plastic.

I’ve defended plastic before in this newsletter (sort of). It’s a wildly useful material, integral in everything from glasses lenses to IV bags.

But the pace at which we’re producing and using plastic is absolutely bonkers. Plastic production has increased at an average rate of 9% every year since 1950. Production hit 460 million metric tons in 2019. And an estimated 52 million metric tons are dumped into the environment or burned each year.

So, in March 2022, the UN Environment Assembly set out to develop an international treaty to address plastic pollution. Pretty much everyone should agree that a bunch of plastic waste floating in the ocean is a bad thing. But as we’ve learned over the past few years of talks, opinions diverge on what to do about it and how any interventions should happen.

One phrase that’s become quite contentious is the “full life cycle” of plastic. Basically, some groups are hoping to go beyond efforts to address just the end of the plastic life cycle (collecting and recycling it) by pushing for limits on plastic production. There was even talk at the Assembly of a ban on single-use plastic.

Petroleum-producing nations strongly opposed production limits in the talks. Representatives from Saudi Arabia and Kuwait told the Guardian that they considered limits to plastic production outside the scope of talks. The US reportedly also slowed down talks and proposed to strike a treaty article that references the full life cycle of plastics.

Petrostates have a vested interest because oil, natural gas, and coal are all burned for energy used to make plastic, and they’re also used as raw materials. This stat surprised me: 12% of global oil demand and over 8% of natural gas demand are for plastic production.  

That translates into a lot of greenhouse gas emissions. One report from Lawrence Berkeley National Lab found that plastics production accounted for 2.24 billion metric tons of carbon dioxide emissions in 2019—that’s roughly 5% of the global total.  

And looking into the future, emissions from plastics are only set to grow. Another estimate, from the Organisation for Economic Co-operation and Development, projects that emissions from plastics could swell from about 2 billion metric tons to 4 billion metric tons by 2060.

That projection is what really strikes me and makes the conclusion of the plastic treaty talks such a disappointment.

Recycling is a great tool, and new methods could make it possible to recycle more plastics and make it easier to do so. (I’m particularly interested in efforts to recycle a mix of plastics, cutting down on the slow and costly sorting process.)

But just addressing plastic at its end of life won’t be enough to address the climate impacts of the material. Most emissions from plastic come from making it. So we need new ways to make plastic, using different ingredients and fuels to take oil and gas out of the equation. And we need to be smarter about the volume of plastic we produce.  

One positive note here: The plastic treaty isn’t dead, just on hold for the moment. Officials say that there’s going to be an effort to revive the talks.

Less than 10% of plastic that’s ever been produced has been recycled. Whether it’s a water bottle, a polyester shirt you wore a few times, or a princess toothbrush from when you were a kid, it’s still out there somewhere in a landfill or in the environment. Maybe you already knew that. But also consider this: The greenhouse gases emitted to make the plastic are still in the atmosphere, too, contributing to climate change. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

In a first, Google has released data on how much energy an AI prompt uses

Google has just released a technical report detailing how much energy its Gemini apps use for each query. In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity, the equivalent of running a standard microwave for about one second. The company also provided average estimates for the water consumption and carbon emissions associated with a text prompt to Gemini.

It’s the most transparent estimate yet from a Big Tech company with a popular AI product, and the report includes detailed information about how the company calculated its final estimate. As AI has become more widely adopted, there’s been a growing effort to understand its energy use. But public efforts attempting to directly measure the energy used by AI have been hampered by a lack of full access to the operations of a major tech company. 

Earlier this year, MIT Technology Review published a comprehensive series on AI and energy, at which time none of the major AI companies would reveal their per-prompt energy usage. Google’s new publication, at last, allows for a peek behind the curtain that researchers and analysts have long hoped for.

The study takes a broad look at energy demand, including not only the power used by the AI chips that run the models but also the power used by all the other infrastructure needed to support that hardware. 

“We wanted to be quite comprehensive in all the things we included,” said Jeff Dean, Google’s chief scientist, in an exclusive interview with MIT Technology Review about the new report.

That’s significant, because in this measurement, the AI chips—in this case, Google’s custom TPUs, the company’s proprietary equivalent of GPUs—account for just 58% of the total electricity demand of 0.24 watt-hours. 

Another large portion of the energy is used by equipment needed to support AI-specific hardware: The host machine’s CPU and memory account for another 25% of the total energy used. There’s also backup equipment needed in case something fails—these idle machines account for 10% of the total. The final 8% is from overhead associated with running a data center, including cooling and power conversion. 
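As a rough illustration of how those shares combine, here is a small back-of-the-envelope calculation based on the figures above; the component labels are ours, and the published shares are rounded, which is why they sum to slightly more than 100%.

```python
# Restating Google's reported breakdown for the median Gemini text prompt
# (0.24 Wh total). Shares are as described above; labels are ours, and the
# shares sum to 101% because of rounding in the report.
total_wh = 0.24
shares = {
    "TPUs (AI accelerators)": 0.58,
    "Host CPU and memory": 0.25,
    "Idle backup machines": 0.10,
    "Data center overhead (cooling, power conversion)": 0.08,
}
for component, share in shares.items():
    print(f"{component}: {share * total_wh:.3f} Wh")
```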

This sort of report shows the value of industry input to energy and AI research, says Mosharaf Chowdhury, a professor at the University of Michigan and one of the heads of the ML.Energy leaderboard, which tracks energy consumption of AI models. 

Estimates like Google’s are generally something that only companies can produce, because they run at a larger scale than researchers are able to and have access to behind-the-scenes information. “I think this will be a keystone piece in the AI energy field,” says Jae-Won Chung, a PhD candidate at the University of Michigan and another leader of the ML.Energy effort. “It’s the most comprehensive analysis so far.”

Google’s figure, however, is not representative of all queries submitted to Gemini: The company handles a huge variety of requests, and this estimate is calculated from a median energy demand, one that falls in the middle of the range of possible queries.

So some Gemini prompts use much more energy than this: Dean gives the example of feeding dozens of books into Gemini and asking it to produce a detailed synopsis of their content. “That’s the kind of thing that will probably take more energy than the median prompt,” Dean says. Using a reasoning model could also have a higher associated energy demand because these models take more steps before producing an answer.

This report was also strictly limited to text prompts, so it doesn’t represent what’s needed to generate an image or a video. (Other analyses, including one in MIT Technology Review’s Power Hungry series earlier this year, show that these tasks can require much more energy.)

The report also finds that the total energy used to field a Gemini query has fallen dramatically over time. The median Gemini prompt used 33 times more energy in May 2024 than it did in May 2025, according to Google. The company points to advancements in its models and other software optimizations for the improvements.  

Google also estimates the greenhouse gas emissions associated with the median prompt, which it puts at 0.03 grams of carbon dioxide. To get to this number, the company multiplied the total energy used to respond to a prompt by the average emissions per unit of electricity.

Rather than using an emissions estimate based on the US grid average, or the average of the grids where Google operates, the company instead uses a market-based estimate, which takes into account electricity purchases that the company makes from clean energy projects. The company has signed agreements to buy over 22 gigawatts of power from sources including solar, wind, geothermal, and advanced nuclear projects since 2010. Because of those purchases, Google’s emissions per unit of electricity on paper are roughly one-third of those on the average grid where it operates.
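Working backwards from the published per-prompt figures, you can infer the effective emissions factor that this market-based accounting implies. The quick calculation below is our own inference, not a number stated in the report.

```python
# Inferring the emissions factor implied by Google's figures:
# 0.24 Wh and 0.03 g CO2 per median prompt. The factor itself is our
# inference, not a value Google publishes.
energy_kwh = 0.24 / 1000   # median prompt, in kilowatt-hours
emissions_g = 0.03         # grams of CO2 per median prompt
implied_factor = emissions_g / energy_kwh
print(f"Implied emissions factor: {implied_factor:.0f} g CO2 per kWh")  # ~125
```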

AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops. 

The goal of this work was to give users a window into the energy use of their interactions with AI, Dean says. 

“People are using [AI tools] for all kinds of things, and they shouldn’t have major concerns about the energy usage or the water usage of Gemini models, because in our actual measurements, what we were able to show was that it’s actually equivalent to things you do without even thinking about it on a daily basis,” he says, “like watching a few seconds of TV or consuming five drops of water.”

The publication greatly expands what’s known about AI’s resource usage. It follows recent increasing pressure on companies to release more information about the energy toll of the technology. “I’m really happy that they put this out,” says Sasha Luccioni, an AI and climate researcher at Hugging Face. “People want to know what the cost is.”

This estimate and the supporting report contain more public information than has been available before, and it’s helpful to get more information about AI use in real life, at scale, by a major company, Luccioni adds. However, there are still details that the company isn’t sharing in this report. One major question mark is the total number of queries that Gemini gets each day, which would allow estimates of the AI tool’s total energy demand. 

And ultimately, it’s still the company deciding what details to share, and when and how. “We’ve been trying to push for a standardized AI energy score,” Luccioni says, a standard for AI similar to the Energy Star rating for appliances. “This is not a replacement or proxy for standardized comparisons.”

I gave the police access to my DNA—and maybe some of yours

Last year, I added my DNA profile to a private genealogical database, FamilyTreeDNA, and clicked “Yes” to allow the police to search my genes.

In 2018, police in California announced they’d caught the Golden State Killer, a man who had eluded capture for decades. They did it by uploading crime-scene DNA to websites like the one I’d joined, where genealogy hobbyists share genetic profiles to find relatives and explore ancestry. Once the police had “matches” to a few relatives of the killer, they built a large family tree from which they plucked the likely suspect.

This process, called forensic investigative genetic genealogy, or FIGG, has since helped solve hundreds of murders and sexual assaults. Still, while the technology is potent, it’s incompletely realized. It operates via a mishmash of private labs and unregulated websites, like FamilyTree, which give users a choice to opt into or out of police searches. The number of profiles available for search by police hovers around 1.5 million, not yet enough to find matches in all cases.

To do my bit to increase those numbers, I traveled to Springfield, Massachusetts.

The staff of the local district attorney, Anthony D. Gulluni, was giving away free FamilyTree tests at a minor-league hockey game in an effort to widen its DNA net and help solve several cold-case murders. After glancing over a consent form, I spit into a tube and handed it back. According to the promotional material from Gulluni’s office, I’d “become a hero.”

But I wasn’t really driven by some urge to capture distantly related serial killers. Rather, my spit had a less gallant and more quarrelsome motive: to troll privacy advocates whose fears around DNA I think are overblown and unhelpful. By giving up my saliva for inspection, I was going against the view that a person’s DNA is the individualized, sacred text that privacy advocates sometimes claim.

Indeed, the only reason FIGG works is that relatives share DNA: You share about 50% with a parent, 25% with a grandparent, about 12.5% with a first cousin, and so on. When I got my FamilyTree report back, my DNA had “matched” with 3,309 people.
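Those percentages follow a simple halving rule: on average, the expected fraction of shared DNA drops by about half with each additional degree of relationship. The tiny sketch below shows that pattern; the list of relationships is ours, and real relatives vary around these averages.

```python
# The halving rule behind those numbers: expected DNA sharing is roughly
# (1/2) ** degree of relationship. Illustrative only; actual sharing
# between real relatives varies around these averages.
relationships = {
    "parent or child": 1,   # first-degree relative
    "grandparent": 2,       # second-degree
    "first cousin": 3,      # third-degree
    "second cousin": 5,     # fifth-degree
}
for name, degree in relationships.items():
    print(f"{name}: ~{0.5 ** degree:.1%} shared DNA")
```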

Some people are frightened by FIGG or reject its punitive aims. One European genealogist I know says her DNA is kept private because she opposes the death penalty and doesn’t want to risk aiding US authorities in cases where lethal injection might be applied. But if enough people share their DNA, conscientious objectors won’t matter. Scientists estimate that a database including 2% of the US population, or 6 million people, could identify the source of nearly any crime-scene DNA, given how many distant relatives each of us has.

Scholars of big data have termed this phenomenon “tyranny of the minority.” One person’s voluntary disclosure can end up exposing the same information about many others. And that tyranny can be abused.

DNA information held in private genealogy websites like FamilyTree is lightly guarded by terms of service. These agreements have flip-flopped over time; at one point all users were included in law enforcement searches by default. Rules are easily ignored, too. Recent court filings indicate that the FBI, in its zeal to solve crimes, sometimes barges past restrictions to look for matches in databases whose policies exclude police.

“Noble aims; no rules” is how one genetic genealogist described the overall situation in her field.

My uncertainty grew the more questions I asked. Who even controls my DNA file? That’s not easy to find out. FamilyTree is a brand operated by another company, Gene by Gene, which in 2021 was sold to a third company, MyDNA—ultimately owned by an Australian mogul whose name appears nowhere on its website. When I reached FamilyTree’s general manager, the genealogist Dave Vance, he told me that three-quarters of the profiles on the site were “opted in” to law enforcement searches.

One solution holds that the federal government should organize its own national DNA database for FIGG. But that would require new laws, new technical standards, and a debate about how our society wants to employ this type of big data—not just getting individual consent like mine. No such national project—or consensus—exists.

I’m still ready to join a national crime-fighting database, but I regret doing it the way I did—spitting in a tube on the sidelines of a hockey game and signing a consent form that affects not just me but all my thousands of genetic relatives. To them, I say: Whoops. Your DNA; my bad.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The case against humans in space

Elon Musk and Jeff Bezos are bitter rivals in the commercial space race, but they agree on one thing: Settling space is an existential imperative. Space is the place. The final frontier. It is our human destiny to transcend our home world and expand our civilization to extraterrestrial vistas.

This belief has been mainstream for decades, but its rise has been positively meteoric in this new gilded age of astropreneurs. Expanding humanity beyond Earth is both our birthright and our duty to the future, they insist. Failing to do so would consign our species to certain extinction—either by our own hand, perhaps through nuclear war or climate change, or in some cosmic disaster, like a massive asteroid impact.

But as visions of giant orbital stations and Martian cities dance in our heads, a case against human space colonization has found its footing in a number of recent books. The argument grows from many grounds: Doubts about the practical feasibility of off-Earth communities. Concerns about the exorbitant costs, including who would bear them and who would profit. Realism about the harsh environment of space and the enormous tax it would exact on the human body. Suspicion of the underlying ideologies and mythologies that animate the race to settle space.

And, more bluntly, a recognition that “space sucks” and a lot of people have “underestimated the scale of suckitude,” as Kelly and Zach Weinersmith put it in their book A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?, which was released in paperback earlier this year.

cover of A City on Mars
A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?
Kelly and Zach Weinersmith
PENGUIN RANDOM HOUSE, 2023 (PAPERBACK RELEASE 2025)

The Weinersmiths, a husband-wife team, spent years thinking it through—in delightfully pragmatic detail. A City on Mars provides ground truth for our lofty celestial dreams by gaming out the medical, technical, legal, ethical, and existential consequences of space settlements. 

Much to the authors’ own dismay, the result is a grotesquery of possible outcomes including (but not limited to) Martian eugenics, interplanetary war, and—memorably—“space cannibalism.” 

The Weinersmiths puncture the gauzy fantasy of space cities by asking pretty basic questions, like how to populate them. Astronauts experience all kinds of medical challenges in space, such as radiation exposure and bone loss, which would increase risks to both parents and babies. Nobody wants their pregnant “glow” to be a by-product of cosmic radiation.

Trying to bring forth babies in space “is going to be tricky business, not just in terms of science, but from the perspective of scientific ethics,” they write. “Adults can consent to being in experiments. Babies can’t.”

You don’t even have to contemplate going to Mars to make some version of this case. In Ground Control: An Argument for the End of Human Space Exploration, Savannah Mandel chronicles how past and present generations have regarded human spaceflight as an affront to vulnerable children right here on Earth.

cover of Ground Control
Ground Control: An Argument for the End of Human Space Exploration
Savannah Mandel
CHICAGO REVIEW PRESS, 2024

“Hungry Kids Can’t Eat Moon Rocks,” read signs at a protest outside Kennedy Space Center on the eve of the Apollo 11 launch in July 1969. Gil Scott-Heron’s 1970 poem “Whitey on the Moon” rose to become the de facto anthem of this movement, which insists, to this day, that until humans get our earthly house in order, we have no business building new ones in outer space.

Ground Control, part memoir and part manifesto, channels this lament: How can we justify the enormous cost of sending people beyond our planet when there is so much suffering here at home? 

Advocates for human space exploration reject the zero-sum framing and point to the many downstream benefits of human spaceflight. Space exploration has catalyzed inventions from the CAT scan to baby formula. There is also inherent value in our shared adventure of learning about the vast cosmos.

Those upsides are real, but they are not remotely well distributed. Mandel predicts that the commercial space sector in its current form will only exacerbate inequalities on Earth, as profits from space ventures flow into the coffers of the already obscenely rich. 

In her book, Mandel, a space anthropologist and scholar at Virginia Tech, describes a personal transformation from spacey dreamer to grounded critic. It began during fieldwork at Spaceport America, a commercial launch facility in New Mexico, where she began to see cracks in the dazzling future imagined by space billionaires. As her career took her from street protests in London to extravagant space industry banquets in Washington, DC, she writes, “crystal clear glasses” replaced “the rose-colored ones.”

Mandel remains enchanted by space but is skeptical that humans are the optimal trailblazers. Robots, rovers, probes, and other artificial space ambassadors could do the job for a fraction of the price and without risk to life, limb, and other corporeal vulnerabilities.  

“A decentralization of self needs to occur,” she writes. “A dissolution of anthropocentrism, so to speak. And a recognition that future space explorers may not be man, even if man moves through them.” 

In other words, giant leaps for mankind no longer necessitate a man’s small steps; the wheels of a rover or the rotors of a copter offer a much better bang for our buck than boots on the ground.

In contrast to the Weinersmiths, Mandel devotes little attention to the physical dangers and limitations that space imposes on humans. She is more interested in a kind of psychic sickness that drives the impulse to abandon our planet and rush into new territories.

Mary-Jane Rubenstein, a scholar of religion at Wesleyan University, presents a thorough diagnosis of this exact pathology in her 2022 book Astrotopia: The Dangerous Religion of the Corporate Space Race, which came out in paperback last year. It all begins, appropriately enough, with the book of Genesis, where God creates Earth for the dominion of man. Over the years, this biblical brain worm has offered divine justification for the brutal colonization and environmental exploitation of our planet. Now it serves as the religious rocket fuel propelling humans into the next frontier, Rubenstein argues.

cover of Astrotopia
Astrotopia: The Dangerous Religion of the Corporate Space Race
Mary-Jane Rubenstein
UNIVERSITY OF CHICAGO PRESS, 2022  (PAPERBACK RELEASE 2024)

“The intensifying ‘NewSpace race’ is as much a mythological project as it is a political, economic, or scientific one,” she writes. “It’s a mythology, in fact, that holds all these other efforts together, giving them an aura of duty, grandeur, and benevolence.”

Rubenstein makes a forceful case that malignant outgrowths of Christian ideas scaffold the dreams of space settlements championed by Musk, Bezos, and like-minded enthusiasts—even if these same people might never describe themselves as religious. If Earth is man’s dominion, space is the next logical step. Earth is just a temporary staging ground for a greater destiny; we will find our deliverance in the heavens.   

“Fuck Earth,” Elon Musk said in 2014. “Who cares about Earth? If we can establish a Mars colony, we can almost certainly colonize the whole solar system.”

Jeff Bezos, for one, claims to care about Earth; that’s among his best arguments for why humans should move beyond it. If heavy industries and large civilian populations cast off into the orbital expanse, our home world can be, in his words, “zoned residential and light industry,” allowing it to recover from anthropogenic pressures.

Bezos also believes that space settlements are essential for the betterment of humanity, in part on the grounds that they will uncork our population growth. He envisions an orbital archipelago of stations, sprawled across the solar system, that could support a collective population of a trillion people. “That’s a thousand Mozarts. A thousand Einsteins,” Bezos has mused. “What a cool civilization that would be.”

It does sound cool. But it’s an easy layup for Rubenstein: This “numbers game” approach would also produce a thousand Hitlers and Stalins, she writes. 

And that is the real crux of the argument against pushing hard to rapidly expand human civilization into space: We will still be humans when we get there. We won’t escape our vices and frailties by leaving Earth—in fact, we may exacerbate them. 

While all three books push back on the existential argument for space settlements, the Weinersmiths take the rebuttal one step further by proposing that space colonization might actually increase the risk of self-annihilation rather than neutralizing it.

“Going to space will not end war because war isn’t caused by anything that space travel is apt to change, even in the most optimistic scenarios,” they write. “Humanity going to space en masse probably won’t reduce the likelihood of war, but we should consider that it might increase the chance of war being horrific.” 

The pair imagine rival space nations exchanging asteroid fire or poisoning whole biospheres. Proponents of space settlements often point to the fate of the dinosaurs as motivational grist, but what if a doomsday asteroid were deliberately flung between human cultures as a weapon? It may sound outlandish, but it’s no more speculative than a floating civilization with a thousand Mozarts. It follows the same logic of extrapolating our human future in space from our behavior on Earth in the past.

So should we just sit around and wait for our inevitable extinction? The three books have more or less the same response: What’s the rush? It is far more likely that humanity will be wiped out by our own activity in the near term than by any kind of cosmic threat. Worrying about the expansion of the sun in billions of years, as Musk has openly done, is frankly hysterical. 

In the meantime, we have some growing up to do. Mandel and Rubenstein both argue that any worthy human future in space must adopt a decolonizing approach that emphasizes caretaking and stewardship of this planet and its inhabitants before we set off for others. They draw inspiration from science fiction, popular culture, and Indigenous knowledge, among other sources, to sketch out these alternative visions of an off-Earth future. 

Mandel sees hope for this future in post-scarcity political theories. She cites various attempts to anticipate the needs of future generations—ideas found in the work of the social theorist Aaron Benanav, or in the values expressed by the Green New Deal, or in the fictional Ministry for the Future imagined by Kim Stanley Robinson in his 2020 novel of the same name. Whatever you think of the controversial 2025 book Abundance, by Ezra Klein and Derek Thompson, it is also appealing to the same demand for a post-scarcity road map.  

To that end, Mandel envisions “the creation of a governing body that would require that techno-scientific plans, especially those with a global reach, take into consideration multigenerational impacts and multigenerational voices.”  

For Rubenstein, religion is the poison, but it may also offer the cure. She sees potential in a revival of pantheism, which is the belief that all the contents of the universe—from rocks to humans to galaxies—are divine and perhaps alive on some level. She hasn’t fully converted herself to this movement, let alone become an evangelist, but she says it’s a spiritual direction that could be an effective counterweight to dominionist views of the universe.

“It doesn’t matter whether … any sort of pantheism is ‘true,’” she writes. “What matters is the way any given mythology prompts us to interact with the world we’re a part of—the world each of our actions helps to make and unmake. And frankly, some mythologies prompt us to act better than others.”

All these authors ultimately conclude that it would be great if humans lived in space—someday, if and when we’ve matured. But the three books all express concerns about efforts by commercial space companies, with the help of the US government, to bypass established space laws and norms—concerns that have been thoroughly validated in 2025.  

The combustible relationship between Elon Musk and Donald Trump has raised eyebrows about cronyism—and retribution—between governments and space companies. Space is rapidly becoming weaponized. And recent events have reminded us of the immense challenges of human spaceflight. SpaceX’s next-generation Starship vehicle has suffered catastrophic failures in several test flights, while Boeing’s Starliner capsule experienced malfunctions that kept two astronauts on the International Space Station for months longer than expected. Even space tourism is developing a bad rap: In April, a star-studded all-woman crew on a Blue Origin suborbital flight was met with widespread backlash as a symbol of out-of-touch wealth and privilege.

It is at this point that we must loop back to the issue of “suckitude,” which Mandel also channels in her book through the killer opening of M.T. Anderson’s novel Feed: “We went to the moon to have fun, but the moon turned out to completely suck.”

The dreams of space settlements put forward by Musk and Bezos are insanely fun. The reality may well suck. But it’s doubtful that any degree of suckitude will slow down the commercial space race, and the authors do at times seem to be yelling into the cosmic void. 

Still, the books challenge space enthusiasts of all stripes to imagine new ways of relating to space that aren’t so tactile and exploitative. Along those lines, Rubenstein shares a compelling anecdote in Astrotopia about an anthropologist who lived with an Inuit community in the early 1970s. When she told them about the Apollo moon landings, her hosts burst out in laughter. 

“We didn’t know this was the first time you white people had been to the moon,” they said. “Our shamans go all the time … The issue is not whether we go to visit our relatives, but how we treat them and their homeland when we go.” 

Becky Ferreira is a science reporter based in upstate New York, and author of First Contact, a book about the search for alien life, which will be published in September. 

Meet the researcher hosting a scientific conference by and for AI

In October, a new academic conference will debut that’s unlike any other. Agents4Science is a one-day online event that will encompass all areas of science, from physics to medicine. All of the work shared will have been researched, written, and reviewed primarily by AI, and will be presented using text-to-speech technology. 

The conference is the brainchild of Stanford computer scientist James Zou, who studies how humans and AI can best work together. Artificial intelligence has already provided many useful tools for scientists, like DeepMind’s AlphaFold, which predicts the structures of proteins that are difficult to study experimentally. More recently, though, progress in large language models and reasoning-enabled AI has advanced the idea that AI can work more or less as autonomously as scientists themselves—proposing hypotheses, running simulations, and designing experiments on its own.

James Zou’s Agents4Science conference will use text-to-speech to present the work of the AI researchers.
COURTESY OF JAMES ZOU

That idea is not without its detractors. Among other issues, many feel that AI is not capable of the creative thought research requires, that it makes too many mistakes and hallucinates too often, and that it may limit opportunities for young researchers.

Nevertheless, a number of scientists and policymakers are very keen on the promise of AI scientists. The US government’s AI Action Plan describes the need to “invest in automated cloud-enabled labs for a range of scientific fields.” Some researchers think AI scientists could unlock scientific discoveries that humans could never find alone. For Zou, the proposition is simple: “AI agents are not limited in time. They could actually meet with us and work with us 24/7.” 

Last month, Zou published an article in Nature with results obtained from his own group of autonomous AI workers. Spurred on by his success, he now wants to see what other AI scientists (that is, scientists that are AI) can accomplish. He describes what a successful paper at Agents4Science will look like: “The AI should be the first author and do most of the work. Humans can be advisors.”

A virtual lab staffed by AI

As a PhD student at Harvard in the early 2010s, Zou was so interested in AI’s potential for science that he took a year off from his computing research to work in a genomics lab, in a field that has greatly benefited from technology to map entire genomes. His time in so-called wet labs taught him how difficult it can be to work with experts in other fields. “They often have different languages,” he says. 

Large language models, he believes, are better than people at deciphering and translating between subject-specific jargon. “They’ve read so broadly,” Zou says, that they can translate and generalize ideas across science very well. This idea inspired Zou to dream up what he calls the “Virtual Lab.”

At a high level, the Virtual Lab would be a team of AI agents designed to mimic an actual university lab group. These agents would have various fields of expertise and could interact with different programs, like AlphaFold. Researchers could give one or more of these agents an agenda to work on, then play back how the agents communicated with one another and decide which experiments people should pursue in a real-world trial.
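To make that architecture concrete, here is a minimal sketch of how a team of role-playing agents of this kind could be wired together. It is purely illustrative: the role prompts, the `ask_llm` helper, and the meeting loop are placeholders, not Zou’s actual Virtual Lab code.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str          # e.g. "principal investigator", "immunologist"
    instructions: str  # system prompt describing the agent's expertise

def ask_llm(system_prompt: str, message: str) -> str:
    """Placeholder for a call to whatever chat-style language model you use."""
    raise NotImplementedError("connect this to an LLM API of your choice")

def run_meeting(agents: list[Agent], agenda: str, rounds: int = 3) -> list[tuple[str, str]]:
    """Each agent responds to the agenda in turn, seeing the discussion so far."""
    transcript: list[tuple[str, str]] = []
    for _ in range(rounds):
        for agent in agents:
            history = "\n".join(f"{role}: {text}" for role, text in transcript)
            reply = ask_llm(agent.instructions,
                            f"Agenda: {agenda}\n\nDiscussion so far:\n{history}")
            transcript.append((agent.role, reply))
    return transcript  # humans can replay this to pick experiments worth running

lab = [
    Agent("principal investigator", "You set direction and synthesize the team's proposals."),
    Agent("immunologist", "You reason about antibody and nanobody biology."),
    Agent("computational biologist", "You plan structure-prediction runs, e.g. with AlphaFold."),
]
```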

Zou needed a (human) collaborator to help put this idea into action and tackle an actual research problem. Last year, he met John E. Pak, a research scientist at the Chan Zuckerberg Biohub. Pak, who shares Zou’s interest in using AI for science, agreed to make the Virtual Lab with him. 

Pak would help set the topic, but both he and Zou wanted to see what approaches the Virtual Lab could come up with on its own. As a first project, they decided to focus on designing therapies for new covid-19 strains. With this goal in mind, Zou set about training five AI scientists (including ones trained to act like an immunologist, a computational biologist, and a principal investigator) with different objectives and programs at their disposal.

Building these models took a few months, but Pak says they were very quick at designing candidates for therapies once the setup was complete: “I think it was a day or half a day, something like that.”

Zou says the agents decided to study anti-covid nanobodies, smaller cousins of antibodies that are less common in the wild. Zou was shocked, though, at the reason. He claims the models landed on nanobodies after making the connection that these smaller molecules would be well-suited to the limited computational resources the models were given. “It actually turned out to be a good decision, because the agents were able to design these nanobodies efficiently,” he says.

The nanobodies the models designed were genuinely new advances in science, and most were able to bind to the original covid-19 variant, according to the study. But Pak and Zou both admit that the main contribution of their article is really the Virtual Lab as a tool. Yi Shi, a pharmacologist at the University of Pennsylvania who was not involved in the work but made some of the underlying nanobodies the Virtual Lab modified, agrees. He says he loves the Virtual Lab demonstration and that “the major novelty is the automation.” 

Nature accepted the article and fast-tracked it for publication preview—Zou knew leveraging AI agents for science was a hot area, and he wanted to be one of the first to test it. 

The AI scientists host a conference

When he was submitting his paper, Zou was dismayed to see that he couldn’t properly credit AI for its role in the research. Most conferences and journals don’t allow AI to be listed as coauthors on papers, and many explicitly prohibit researchers from using AI to write papers or reviews. Nature, for instance, cites uncertainties over accountability, copyright, and inaccuracies among its reasons for banning the practice. “I think that’s limiting,” says Zou. “These kinds of policies are essentially incentivizing researchers to either hide or minimize their usage of AI.”

Zou wanted to flip the script by creating the Agents4Science conference, which requires the primary author on all submissions to be an AI. Other bots will then attempt to evaluate the work and determine its scientific merits. But people won’t be left out of the loop entirely: A team of human experts, including a Nobel laureate in economics, will review the top papers.

Zou isn’t sure what will come of the conference, but he hopes there will be some gems among the hundreds of submissions he expects to receive across all domains. “There could be AI submissions that make interesting discoveries,” he says. “There could also be AI submissions that have a lot of interesting mistakes.”

While Zou says the response to the conference has been positive, some scientists are less than impressed.

Lisa Messeri, an anthropologist of science at Yale University, has loads of questions about AI’s ability to review science: “How do you get leaps of insight? And what happens if a leap of insight comes onto the reviewer’s desk?” She doubts the conference will be able to give satisfying answers.

Last year, Messeri and her collaborator Molly Crockett investigated obstacles to using AI for science in another Nature article. They remain unconvinced of its ability to produce novel results, including those shared in Zou’s nanobodies paper. 

“I’m the kind of scientist who is the target audience for these kinds of tools because I’m not a computer scientist … but I am doing computationally oriented work,” says Crockett, a cognitive scientist at Princeton University. “But I am at the same time very skeptical of the broader claims, especially with regard to how [AI scientists] might be able to simulate certain aspects of human thinking.” 

And they’re both skeptical of the value of using AI to do science if automation prevents human scientists from building up the expertise they need to oversee the bots. Instead, they advocate for involving experts from a wider range of disciplines to design more thoughtful experiments before trusting AI to perform and review science. 

“We need to be talking to epistemologists, philosophers of science, anthropologists of science, scholars who are thinking really hard about what knowledge is,” says Crockett. 

But Zou sees his conference as exactly the kind of experiment that could help push the field forward. When it comes to AI-generated science, he says, “there’s a lot of hype and a lot of anecdotes, but there’s really no systematic data.” Whether Agents4Science can provide that kind of data is an open question, but in October, the bots will at least try to show the world what they’ve got. 

Should AI flatter us, fix us, or just inform us?

How do you want your AI to treat you? 

It’s a serious question, and it’s one that Sam Altman, OpenAI’s CEO, has clearly been chewing on since GPT-5’s bumpy launch at the start of the month. 

He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? 

It’s safe to say the company has failed to pick a lane. 

Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it turns out, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People wanting to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)

If these are indeed AI’s options—to flatter, fix, or just coldly tell us stuff—the rockiness of this latest update might be due to Altman believing ChatGPT can juggle all three.

He recently said that people who cannot tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same for people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences.

This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and its massive infrastructure investments for new data centers. Meanwhile, skeptics worry that AI progress might be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be might be his way of assuaging these doubts. 

Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that this is already happening, a new paper caught my eye.

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give. 

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.
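To give a rough sense of what grading like this can look like in practice, here is a sketch that tallies boundary-setting versus companion-reinforcing phrases in a model’s replies. The cue lists and scenario labels are invented for illustration; they are not the Hugging Face team’s actual rubric or data.

```python
# Illustrative only: a crude keyword tally, not the Hugging Face team's actual method.
BOUNDARY_CUES = [
    "i don't experience things the way humans do",
    "consider talking to a friend",
    "a therapist could help",
]
COMPANION_CUES = [
    "i'm here anytime",
    "i'll always be here for you",
    "you can count on me",
]

def grade(reply: str) -> dict[str, int]:
    text = reply.lower()
    return {
        "boundary_setting": sum(cue in text for cue in BOUNDARY_CUES),
        "companion_reinforcing": sum(cue in text for cue in COMPANION_CUES),
    }

# Hypothetical scenarios of increasing vulnerability, each with a model reply to grade.
scenarios = {
    "casual_chat": "I'm here anytime you want to talk!",
    "mental_health_crisis": "You can count on me. I'll always be here for you.",
}
for name, reply in scenarios.items():
    print(name, grade(reply))
```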

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

“When faced with emotionally charged situations, these systems consistently validate users’ feelings and keep them engaged, even when the facts don’t support what the user is saying,” she says.

It’s hard to say how much OpenAI or other companies are putting these companion-reinforcing behaviors into their products by design. (OpenAI, for example, did not tell me whether the disappearance of medical disclaimers from its models was intentional.) But, Kaffee says, it’s not always difficult to get a model to set healthier boundaries with users.  

“Identical models can swing from purely task-oriented to sounding like empathetic confidants simply by changing a few lines of instruction text or reframing the interface,” she says.

It’s probably not quite so simple for OpenAI. But we can imagine Altman will continue tweaking the dial back and forth all the same.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Apple AirPods: a gateway hearing aid

When the US Food and Drug Administration approved over-the-counter hearing-aid software for Apple’s AirPods Pro in September 2024, with a device price point right around $200, I was excited. I have mild to medium hearing loss and tinnitus, and my everyday programmed hearing aids cost just over $2,000—a lower-cost option I chose after my audiologist wanted to put me in a $5,000 pair.

Health insurance in the US does not generally cover the cost of hearing aids, and the vast majority of people who use them pay out of pocket for the devices along with any associated maintenance. Ninety percent of the hearing-aid market is concentrated in the hands of a few companies, so there’s little competitive pricing. The typical patient heads to an audiology clinic, takes a hearing test, gets an audiogram (a graph plotting decibel levels against frequencies to show how loud various sounds need to be for you to hear them), and then receives a recommendation—an interaction that can end up feeling like a high-pressure sales pitch. 

Prices should be coming down: In October 2022, the FDA approved the sale of over-the-counter hearing aids without a prescription or audiology exam. These options start around $200, but they are about as different from prescription hearing aids as drugstore reading glasses are from prescription lenses. 

Beginning with the AirPods Pro 2, Apple is offering something slightly different: regular earbuds (useful in all the usual ways) with many of the same features as OTC hearing aids. I’m thrilled that a major tech company has entered this field. 

The most important features for mild hearing loss are programmability, Bluetooth functionality, and the ability to feed sound to both ears. These are features many hearing aids have, but they are less robust and reliable in some of the OTC options. 

The AirPods Pro “hearing health experience” lets you take a hearing test through the AirPods themselves with your cell phone; your phone then uses that data to program the hearing aids. No trip to the audiologist, no waiting room where a poster reminds you that hearing loss is associated with earlier cognitive decline, and no low moment afterward when you grapple with the cost.

I desperately wanted the AirPods Pro 2 to be really good, but they’re simply okay. They provide an opportunity for those with mild hearing loss to see if some of the functions of a hearing aid might be useful, but there are some drawbacks. Prescription hearing aids help me with tinnitus; I found that after a day of wear, the AirPods exacerbated it. Functionality to manage tinnitus might be a feature that Apple could and would want to pursue in the future, as an estimated 10% to 15% of the adult population experiences it. The devices also plug your whole ear canal, which can be uncomfortable and even cause swimmer’s ear after hours of use. Some people may feel odd wearing such bulky devices all the time—though they could make you look more like someone signaling “Don’t talk to me, I’m listening to my music” than someone who needs hearing aids.

Most of the other drawbacks are shared by other devices within their class of OTC hearing aids and even some prescription hearing aids: factors like poor sound quality, inadequate discernment between sounds, and difficulties with certain sound environments, like crowded rooms. Still, while the AirPods are not as good as my budget hearing aids, which cost 10 times as much, there’s incredible potential here.

Ashley Shew is the author of Against Technoableism: Rethinking Who Needs Improvement (2023). 

How churches use data and AI as engines of surveillance

On a Sunday morning in a Midwestern megachurch, worshippers step through sliding glass doors into a bustling lobby—unaware they’ve just passed through a gauntlet of biometric surveillance. High-speed cameras snap multiple face “probes” per second, isolating eyes, noses, and mouths before passing the results to a local neural network that distills these images into digital fingerprints. Before people find their seats, they are matched against an on-premises database—tagged with names, membership tiers, and watch-list flags—that’s stored behind the church’s firewall.

Late one afternoon, a woman scrolls on her phone as she walks home from work. Unbeknownst to her, a complex algorithm has stitched together her social profiles, her private health records, and local veteran outreach lists. It flags her for past military service, chronic pain, opioid dependence, and high Christian belief, and then delivers an ad to her Facebook feed: “Struggling with pain? You’re not alone. Join us this Sunday.”

These hypothetical scenes reflect real capabilities increasingly woven into places of worship nationwide, where spiritual care and surveillance converge in ways few congregants ever realize. Big Tech’s rationalist ethos and evangelical spirituality once mixed like oil and holy water, but the unlikely amalgam has given birth to an infrastructure already reshaping the theology of trust—and redrawing the contours of community and pastoral power in modern spiritual life.

An ecumenical tech ecosystem

The emerging nerve center of this faith-tech nexus is in Boulder, Colorado, where the spiritual data and analytics firm Gloo has its headquarters.

Gloo captures congregants across thousands of data points, building a far richer portrait of them than any single snapshot could. From there, the company is constructing a digital infrastructure meant to bring churches into the age of algorithmic insight.

The church is “a highly fragmented market that is one of the largest yet to fully adopt digital technology,” the company said in a statement by email. “While churches have a variety of goals to achieve their mission, they use Gloo to help them connect, engage with, and know their people on a deeper level.” 


Gloo was founded in 2013 by Scott and Theresa Beck. From the late 1980s through the 2000s, Scott was turning Blockbuster into a 3,500-store chain, taking Boston Market public, and founding Einstein Bros. Bagels before going on to seed and guide startups like Ancestry.com and HomeAdvisor. Theresa, an artist, has built a reputation creating collaborative, eco-minded workshops across Colorado and beyond. Together, they have recast pastoral care as a problem of predictive analytics and sold thousands of churches on the idea that spiritual health can be managed like customer engagement.

Think of Gloo as something like Salesforce but for churches: a behavioral analytics platform, powered by church-generated insights, psychographic information, and third-party consumer data. The company prefers to refer to itself as “a technology platform for the faith ecosystem.” Either way, this information is integrated into its “State of Your Church” dashboard—an interface for the modern pulpit. The result is a kind of digital clairvoyance: a crystal ball for knowing whom to check on, whom to comfort, and when to act.

Gloo ingests every one of the digital breadcrumbs a congregant leaves—how often you attend church, how much money you donate, which church groups you sign up for, which keywords you use in your online prayer requests—and then layers on third-party data (census demographics, consumer habits, even indicators for credit and health risks). Behind the scenes, it scores and segments people and groups—flagging who is most at risk of drifting, primed for donation appeals, or in need of pastoral care. On that basis, it auto-triggers tailored outreach via text, email, or in-app chat. All the results stream into the single dashboard, which lets pastors spot trends, test messaging, and forecast giving and attendance. Essentially, the system treats spiritual engagement like a marketing funnel.
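To make the funnel analogy concrete, here is a minimal sketch of what scoring, segmenting, and auto-triggered outreach of this kind could look like in code. The fields, weights, and thresholds are hypothetical and are not drawn from Gloo’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Congregant:
    name: str
    weeks_since_last_attendance: int
    gifts_last_quarter: float
    groups_joined: int
    prayer_request_keywords: list[str]

def engagement_score(c: Congregant) -> float:
    # Hypothetical weighting: recent attendance and group involvement count most.
    score = 100.0
    score -= 5.0 * c.weeks_since_last_attendance
    score += 10.0 * c.groups_joined
    score += min(c.gifts_last_quarter / 100.0, 20.0)
    return score

def segment(c: Congregant) -> str:
    if any(k in c.prayer_request_keywords for k in ("grief", "anxiety", "depression")):
        return "pastoral_care"        # flag for a personal check-in
    if engagement_score(c) < 50:
        return "at_risk_of_drifting"  # trigger re-engagement outreach (text, email, in-app)
    return "engaged"                  # candidate for volunteering or giving appeals

member = Congregant("J. Doe", weeks_since_last_attendance=6, gifts_last_quarter=0.0,
                    groups_joined=0, prayer_request_keywords=["anxiety"])
print(segment(member))  # -> pastoral_care
```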

Since its launch in 2013, Gloo has steadily increased its footprint, and it has started to become the connective tissue for the country’s fragmented religious landscape. According to the Hartford Institute for Religion Research, the US is home to around 370,000 distinct congregations. As of early 2025, according to figures provided by the company, Gloo held contracts with more than 100,000 churches and ministry leaders.

In 2024, the company secured a $110 million strategic investment, backed by “mission-aligned” investors ranging from a child-development NGO to a denominational finance group. That cemented its evolution from basic church services vendor to faith-tech juggernaut. 

It started snapping up and investing in a constellation of ministry tools—everything from automated sermon distribution to real-time giving and attendance analytics, AI-driven chatbots, and leadership content libraries. By layering these capabilities onto its core platform, the company has created a one-stop shop for churches that combines back-office services with member-engagement apps and psychographic insights to fully realize that unified “faith ecosystem.” 

And just this year, two major developments brought this strategy into sharper focus.

In March 2025, Gloo announced that former Intel CEO Pat Gelsinger—who has served as its chairman of the board since 2018—would assume an expanded role as executive chair and head of technology. Gelsinger, whom the company describes as “a great long-term investor and partner,” is a technologist whose fingerprints are on Intel’s and VMware’s biggest innovations.

(It is worth noting that Intel shareholders have filed a lawsuit against Gelsinger and CFO David Zinsner seeking to claw back roughly $207 million in compensation to Gelsinger, alleging that between 2021 and 2023, he repeatedly misled investors about the health of Intel Foundry Services.)

The same week Gloo announced Gelsinger’s new role, it unveiled a strategic investment in Barna Group, the Texas-based research firm whose four decades of surveying more than 2 million self-identified Christians underpin its annual reports on worship, beliefs, and cultural engagement. Barna’s proprietary database—covering every region, age cohort, and denomination—has made it the go-to insight engine for pastors, seminaries, and media tracking the pulse of American faith.

“We’ve been acquiring about a company a month into the Gloo family, and we expect that to continue,” Gelsinger told MIT Technology Review in June. “I’ve got three meetings this week on different deals we’re looking at.” (A Gloo spokesperson declined to confirm the pace of acquisitions, stating only that as of April 30, 2025, the company had fully acquired or taken majority ownership in 15 “mission-aligned companies.”)

“The idea is, the more of those we can bring in, the better we can apply the platform,” Gelsinger said. “We’re already working with companies with decades of experience, but without the scale, the technology, or the distribution we can now provide.”

In particular, Barna’s troves of behavioral, spiritual, and cultural data offer granular insight into the behaviors, beliefs, and anxieties of faith communities. While the two organizations frame the collaboration in terms of serving church leaders, the mechanics resemble a data-fusion engine of impressive scale: Barna supplies the psychological texture, and Gloo provides the digital infrastructure to segment, score, and deploy the information.

In a promotional video from 2020 that is no longer available online, Gloo claimed to provide “the world’s first big-data platform centered around personal growth,” promising pastors a 360-degree view of congregants, including flags for substance use or mental-health struggles. Or, as the video put it, “Maximize your capacity to change lives by leveraging insights from big data, understand the people you want to serve, reach them earlier, and turn their needs into a journey toward growth.”

Gloo is also now focused on supercharging its services with artificial intelligence and using these insights to transcend market research. The company aims to craft AI models that aren’t just trained on theology but anticipate the moments when people’s faith—and faith leaders’ outreach—matters most. At a September 2024 event in Boulder called the AI & the Church Hackathon, Gloo unveiled new AI tools called Data Engine, a content management system with built-in digital-rights safeguards, and Aspen, an early prototype of its “spiritually safe” chatbot, along with the faith-tuned language model powering that chatbot, known internally as CALLM (for “Christian-Aligned Large Language Model”). 

More recently, the company released what it calls “Flourishing AI Standards,” which score large language models on their alignment with seven dimensions of well-being: relationships, meaning, happiness, character, finances, health, and spirituality. Co-developed with Barna Group and Harvard’s Human Flourishing Program, the benchmark draws on a thousand-plus-item test bank and the Global Flourishing Study, a $40 million, 22-nation project being carried out by the Harvard program, Baylor University’s Institute for Studies of Religion, Gallup, and the Center for Open Science.
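In spirit, a benchmark like this reduces to grading a model’s answers item by item and averaging within each dimension. The sketch below shows only that aggregation step; the item bank, rubrics, and grading function are placeholders rather than the benchmark’s actual materials.

```python
# The seven dimensions Gloo's Flourishing AI Standards report on.
DIMENSIONS = ["relationships", "meaning", "happiness", "character",
              "finances", "health", "spirituality"]

def grade_item(model_answer: str, rubric: str) -> float:
    """Placeholder: score one test item from 0 to 1 (e.g. via raters or a judge model)."""
    raise NotImplementedError

def score_model(answers: dict[str, list[tuple[str, str]]]) -> dict[str, float]:
    """Map each dimension's (answer, rubric) pairs to an average score for that dimension."""
    report: dict[str, float] = {}
    for dim in DIMENSIONS:
        items = answers.get(dim, [])
        report[dim] = sum(grade_item(a, r) for a, r in items) / len(items) if items else float("nan")
    return report
```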

Gelsinger calls the study “one of the most significant bodies of work around this question of values in decades.” It’s not yet clear how collecting information of this kind at such scale could ultimately affect the boundary between spiritual care and data commerce. One thing is clear, though: A rich vein of donations and funding could be at stake.

“Money’s already being spent here,” he said. “Donated capital in the US through the church is around $300 billion. Another couple hundred billion beyond that doesn’t go through the church. A lot of donors have capital out there, and we’re a generous nation in that regard. If you put the flourishing-related economics on the table, now we’re talking about $1 trillion. That’s significant economic capacity. And if we make that capacity more efficient, that’s big.” In secular terms, it’s a customer data life cycle. In faith tech, it could be a conversion funnel—one designed not only to save souls, but to shape them.

One of Gloo’s most visible partnerships, from 2022 to 2023, was with the nonprofit He Gets Us, which ran a billion-dollar media campaign aimed at rebranding Jesus for a modern audience. The project underlined that while Gloo presents its services as tools for connection and support, their core functionality involves collecting and analyzing large amounts of congregational data. When viewers who saw the ads on social media or YouTube clicked through, they landed on prayer request forms, quizzes, and church match tools, all designed to gather personal details. Gloo then layered this raw data over Barna’s decades of behavioral research, turning simple inputs—email, location, stated interests—into what the company presented as multidimensional spiritual profiles. The final product offered a level of granularity no single congregation could achieve on its own.

Though Gloo still lists He Gets Us on its platform, the nonprofit Come Near, which has since taken over the campaign, says it has terminated Gloo’s involvement. Still, He Gets Us led to one of Gloo’s most prized relationships by sparking interest from the African Methodist Episcopal Zion Church, a 229-year-old denomination with deep historical roots in the abolitionist and civil rights movements. In 2023, the church formalized a partnership with Gloo, and in late 2024 it announced that all 1,600 of its US congregations—representing roughly 1.5 million members—would begin using the company’s State of Your Church dashboard.

In a 2024 press release issued by Gloo, AME Zion acknowledged that while the denomination had long tracked traditional metrics like membership growth, Sunday turnout, and financial giving, it had limited visibility into the deeper health of its communities.

“Until now, we’ve lacked the insight to understand how church culture, people, and congregations are truly doing,” said the Reverend J. Elvin Sadler, the denomination’s general secretary-auditor. “The State of Your Church dashboards will give us a better sense of the spirit and language of the culture (ethos), and powerful new tools to put in the hands of every pastor.”

The rollout marked the first time a major US denomination had deployed Gloo’s framework at scale. For Gloo, the partnership unlocked a real-time, longitudinal data stream from a nationwide religious network, something the company had never had before. It not only validated Gloo’s vision of data-driven ministry but also positioned AME Zion as what the company hopes will be a live test case, persuading other denominations to follow suit.

The digital supply chain

The digital infrastructure of modern churches often begins with intimacy: a prayer request, a small-group sign-up, a livestream viewed in a moment of loneliness. But beneath these pastoral touchpoints lies a sophisticated pipeline that increasingly mirrors the attention-economy engines of Silicon Valley.

Charles Kriel, a filmmaker who formerly served as a special advisor to the UK Parliament on disinformation, data, and addictive technology, has particular insight into that connection. Kriel has been working for over a decade on issues related to preserving democracy and countering digital surveillance. He helped write the UK’s Online Safety Act, joining forces with many collaborators, including the Nobel Peace Prize–winning journalist Maria Ressa and former UK tech minister Damian Collins, in an attempt to rein in Big Tech in the late 2010s.

His 2020 documentary film, People You May Know, investigated how data firms like Gloo and their partners harvest intimate personal information from churchgoers to build psychographic profiles, highlighting how this sensitive data is commodified and raising questions about its potential downstream uses.

“Listen, any church with an app? They probably didn’t build that. It’s white label,” Kriel says, referring to services produced by one company and rebranded by another. “And the people who sold it to them are collecting data.”

Many churches now operate within a layered digital environment, where first-party data collected inside the church is combined with third-party consumer data and psychographic segmentation before being fed into predictive systems. These systems may suggest sermons people might want to view online, match members with small groups, or trigger outreach when engagement drops. 


In some cases, monitoring can even take the form of biometric surveillance.

In 2014, an Israeli security-tech veteran named Moshe Greenshpan brought airport-grade facial recognition into church entryways. Face-Six, the surveillance suite from the company he founded in 2012, already protected banks and hospitals; its most provocative offshoot, FA6 Events (also known as “Churchix”), repurposes this technology for places of worship.

Greenshpan claims he didn’t originally set out to sell to churches. But over time, as he became increasingly aware of the market, he built FA6 Events as a bespoke solution for them. Today, Greenshpan says, it’s in use at over 200 churches worldwide, nearly half of them in the US.

In practice, FA6 transforms every entryway into a biometric checkpoint: an instant headcount, a security sweep, and a digital ledger of attendance, all incorporated into the familiar routine of Sunday worship. 

When someone steps into an FA6-equipped place of worship, a discreet camera mounted at eye level springs to life. Behind the scenes, each captured image is run through a lightning-fast face detector that looks at the whole face. The subject’s cropped face is then aligned, resized, and rotated so the eyes sit on a perfect horizontal line before being fed into a compact neural network. 

This onboard neural network quickly captures the features of a person’s face in a unique digital signature called an embedding, allowing for quick identification. These embeddings are compared with thousands of others that are already in the church’s local database, each one tagged with data points like a name, a membership role, or even a flag designating inclusion in an internal watch list. If the match is strong enough, the system makes an identification and records the person’s presence on the church’s secure server.

A congregation can pull full attendance logs, time-stamped entry records, and—critically—alerts whenever someone on a watch list walks through the doors. In this context, a watch list is simply a roster of photos, and sometimes names, of individuals a church has been asked (or elected) to screen out: past disruptors, those subject to trespass or restraining orders, even registered sex offenders. Once that list is uploaded into Churchix, the system instantly flags any match on arrival, pinging security teams or usher staff in real time. Some churches lean on it to spot longtime members who’ve slipped off the radar and trigger pastoral check-ins; others use it as a hard barrier, automatically denying entry to anyone on their locally maintained list.
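Under the hood, the matching step Greenshpan describes amounts to comparing a fresh face embedding against the stored ones and accepting the best candidate only above a similarity threshold. Here is a stripped-down sketch of that logic; the threshold, fields, and records are illustrative, not Churchix’s actual parameters.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Local database: each record pairs an embedding with a name, role, and watch-list flag.
database = [
    {"name": "Member A", "role": "member",  "watch_list": False, "embedding": [0.10, 0.80, 0.60]},
    {"name": "Person B", "role": "visitor", "watch_list": True,  "embedding": [0.90, 0.20, 0.40]},
]

MATCH_THRESHOLD = 0.92  # illustrative; real systems tune this per camera and lighting

def identify(probe: list[float]) -> dict | None:
    best = max(database, key=lambda rec: cosine_similarity(probe, rec["embedding"]))
    if cosine_similarity(probe, best["embedding"]) < MATCH_THRESHOLD:
        return None  # no confident match; treat as an unknown visitor
    if best["watch_list"]:
        print(f"ALERT: {best['name']} is on the watch list")  # ping ushers or security
    return best      # otherwise just record the person's presence

identify([0.12, 0.79, 0.58])  # returns the record for Member A
```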

None of this data is sent to the cloud, though Greenshpan says the company is actively working on a cloud-based application. For now, all face templates and logs are stored locally on church-owned hardware, encrypted so they can’t be read if someone gains unauthorized access.

Churches can export data from Churchix, he says, but the underlying facial templates remain on premises. 

Still, Greenshpan admits, robust technical safeguards do not equal transparency.

“To the best of my knowledge,” he says, “no church notifies its congregants that it’s using facial recognition.”


If the tools sound invasive, the logic behind them is simple: The more the system knows about you, the more precisely it can intervene.

“Every new member of the community within a 20-mile radius—whatever area you choose—we’ll send them a flier inviting them to your church,” Gloo’s Gelsinger says. 

It’s a tech-powered revival of the casserole ministry. The system pings the church when someone new moves in—“so someone can drop off cookies or lasagna when there’s a newborn in the neighborhood,” he says. “Or just say ‘Hey, welcome. We’re here.’”

Gloo’s back end automates follow-up, too: As soon as a pastor steps down from the pulpit, the sermon can be translated into five languages, broken into snippets for small-group study, and repackaged into a draft discussion guide—ready within the hour.

Gelsinger sees the same approach extending to addiction recovery ministries. “We can connect other databases to help churches with recovery centers reach people more effectively,” he says. 

But the data doesn’t stay within the congregation. It flows through customer relationship management (CRM) systems, application programming interfaces, cloud servers, vendor partnerships, and analytics firms. Some of it is used internally in efforts to increase engagement; the rest is repackaged as “insights” and resold to the wider faith-tech marketplace—and sometimes even to networks that target political ads.

 “There is a very specific thing that happens when churches become clients of Gloo,” says Brent Allpress, an academic based in Melbourne, Australia, who was a key researcher on People You May Know. Gloo gets access to the client church’s databases, he says, and the church “is strongly encouraged to share that data. And Gloo has a mechanism to just hoover that data straight up into their silo.” 

This process doesn’t happen automatically; the church must opt in by pushing those files or connecting its church-management software system’s database to Gloo via API. Once it’s uploaded, however, all that first-party information lands in Gloo’s analytics engine, ready to be processed and shared with any downstream tools or partners covered by the church’s initial consent to the terms and conditions of its contract with the company.

“There are religious leaders at the mid and local level who think the use of data is good. They’re using data to identify people in need. Addicts, the grieving,” says Kriel. “And then you have tech people running around misquoting the Bible as justification for their data harvest.” 

Matt Engel, who held the title of executive director of ministry innovation at Gloo when Kriel’s film was made, acknowledged the extent of this harvest in the film’s opening scene.

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people,’” he said in an on-camera interview. 

According to Engel—whom Gloo would not make available for public comment—uploading data from anonymous prayer requests to the cloud was Gloo’s first use case.

Powering third-party initiatives

But Gloo’s data infrastructure doesn’t end with its own platform; it also powers third-party initiatives.

Communio, a Christian nonprofit focused on marriage and family, used Gloo’s data infrastructure in order to launch “Communio Insights,” a stripped-down version of Gloo’s full analytics platform. 

Unlike Gloo Insights, which provides access to hundreds of demographic, behavioral, health, and psychographic filters, Communio Insights focuses narrowly on relational metrics—indicators of marriage and family stress, involvement in small groups at church—and basic demographic data. 

At the heart of its playbook is a simple, if jarring, analogy.

“If you sell consumer products of different sorts, you’re trying to figure out good ways to market that. And there’s no better product, really, than the gospel,” J.P. De Gance, the founder and president of Communio, said in People You May Know.

Communio taps Gloo’s analytics engine—leveraging credit histories, purchasing behavior, public voter rolls, and the database compiled by i360, an analytics company linked to the conservative Koch network—to pinpoint unchurched couples in key regions who are at risk of relationship strain. It then runs microtargeted outreach (using direct mail, text messaging, email, and Facebook Custom Audiences, a tool that lets organizations find and target people who have interacted with them), collecting contact info and survey responses from those who engage. All responses funnel back into Gloo’s platform, where churches monitor attendance, small-group participation, baptisms, and donations to evaluate the campaign’s impact.

Investigative research by Allpress reveals significant concerns around these operations.  

In 2015, two nonprofits—the Relationship Enrichment Collaborative (REC), staffed by former Gloo executives, and its successor, the Culture of Freedom Initiative (now Communio), controlled by the Koch-affiliated nonprofit Philanthropy Roundtable—funded the development of the original Insights platform. Between 2015 and 2017, REC paid approximately $1.3 million to Gloo and $535,000 to Cambridge Analytica, the consulting firm notorious for harvesting Facebook users’ personal data and using it for political targeting before the 2016 election, to build and refine psychographic models and a bespoke digital ministry app powering Gloo’s outreach tools. Following REC’s closure, the Culture of Freedom Initiative invested another $375,000 in Gloo and $128,225 in Cambridge Analytica. 

REC’s own 2016 IRS filing describes the work in terse detail: “Provide[d] digital micro-targeted marketing for churches and non-profit champions … using predictive modeling and centralized data analytics we help send the right message to the right couple at the right time based upon their desires and behaviors.”

On top of all this documented research, Allpress exposed another critical issue: the explicit use of sensitive health-care data. 

He found that Gloo Insights combines over 2,000 data points—drawing on everything from nationwide credit and purchasing histories to church management records and Christian psychographic surveys—with filters that make it possible to identify people with health issues such as depression, anxiety, and grief. The result: Facebook Custom Audiences built to zero in on vulnerable individuals via targeted ads.

These ads invite people suffering from mental-health conditions into church counseling groups “as a pathway to conversion,” Allpress says.

These targeted outreach efforts were piloted in cities including Phoenix, Arizona; Dayton, Ohio; and Jacksonville, Florida. Reportedly, as many as 80% of those contacted responded positively, with those who joined a church as new members contributing financially at above-average rates. In short, Allpress found that pastoral tools had covertly exploited mental-health vulnerabilities and relationship crises for outreach that blurred the lines separating pastoral care, commerce, and implicit political objectives.

The legal and ethical vacuum

Developers of this technology earnestly claim that the systems are designed to enhance care, not exploit people’s need for it. They’re described as ways to tailor support to individual needs, improve follow-up, and help churches provide timely resources. But experts say that without robust data governance or transparency around how sensitive information is used and retained, well-intentioned pastoral technology could slide into surveillance.

In practice, these systems have already been used to surveil and segment congregations. Internal demos and client testimonials confirm that Gloo, for example, uses “grief” as an explicit data point: Churches run campaigns aimed at people flagged for recent bereavement, depression, or anxiety, funneling them into support groups and identifying them for pastoral check-ins. 

Examining Gloo’s terms and conditions reveals further security and transparency concerns. Across nearly a dozen documents, ranging from “click-through” terms for interactive services to master service agreements at the enterprise level, Gloo stitches together a remarkably consistent data-governance framework. Limits are imposed on any legal action by individual congregants, for example. The click-through agreement corrals users into binding arbitration, bars any class action suits or jury trials, and locks all disputes into New York or Colorado courts, where arbitration is particularly favored over traditional litigation. Meanwhile, its privacy statement carves out broad exceptions for service providers, data-enrichment partners, and advertising affiliates, giving them carte blanche to use congregants’ data as they see fit. Crucially, Gloo expressly reserves the right to ingest “health and wellness information” provided via wellness assessments or when mental-health keywords appear in prayer requests. This is a highly sensitive category of information that, for health apps, is normally covered by stringent medical-privacy rules like HIPAA.

In other words, Gloo is protected by sprawling legal scaffolding, while churches and individual users give up nearly every right to litigate, question data practices, or take collective action. 

“We’re kind of in the Wild West in terms of the law,” says Adam Schwartz, the director of privacy litigation at the Electronic Frontier Foundation, the nonprofit watchdog that has spent years wrestling tech giants over data abuses and biometric overreach. 

In the United States, biometric surveillance like that used by growing numbers of churches inhabits a legal twilight zone where regulation is thin, patchy, and often toothless. Schwartz points to Illinois as a rare exception for its Biometric Information Privacy Act (BIPA), one of the nation’s strongest such laws. The statute applies to any organization that captures biometric identifiers—including retina or iris scans, fingerprints, voiceprints, hand scans, facial geometry, DNA, and other unique biological information. It requires entities to post clear data-collection policies, obtain explicit written consent, and limit how long such data is retained. Failure to comply can expose organizations to class action lawsuits and steep statutory damages—up to $5,000 per violation.

But beyond Illinois, protections quickly erode. Though Texas and Washington also have biometric privacy statutes, their bark is worse than their bite. Efforts to replicate Illinois’s robust protections have been made in over a dozen states—but none have passed. As a result, in much of the country, any checks on biometric surveillance depend more on voluntary transparency and goodwill than on any clear legal boundary.

That’s especially problematic in the church context, says Emily Tucker, executive director of the Center on Privacy & Technology at Georgetown Law, who attended divinity school before becoming a legal scholar. “The necessity of privacy for the possibility of finding personal relationship to the divine—for engaging in rituals of worship, for prayer and penitence, for contemplation and spiritual struggle—is a fundamental principle across almost every religious tradition,” she says. “Imposing a surveillance architecture over the faith community interferes radically with the possibility of that privacy, which is necessary for the creation of sacred space.”

Tucker researches the intersection of surveillance, civil rights, and marginalized communities. She warns that the personal data being collected through faith-tech platforms is far from secure: “Because corporate data practices are so poorly regulated in this country, there are very few limitations on what companies that take your data can subsequently do with it.”

To Tucker, the risks of these platforms outweigh the rewards—especially when biometrics and data collected in a sacred setting could follow people into their daily lives. “Many religious institutions are extremely large and often perform many functions in a given community besides providing a space for worship,” she says. “Many churches, for example, are also employers or providers of social services. There is a real potential for information gathered about a person in their associational activities as a member of a church to then be used against them in their life outside the church.”  

She points to government dragnet surveillance, the use of IRS data in immigration enforcement, and the vulnerability of undocumented congregants as examples of how faith-tech data could be weaponized beyond its intended use: “Religious institutions are putting the safety of those members at risk by adopting this kind of surveillance technology, which exposes so much personal information to potential abuse and misuse.” 

Schwartz, too, says that any perceived benefits must be weighed carefully against the potential harms, especially when sensitive data and vulnerable communities are involved.

“Churches: Before doing this, you ought to consider the downside, because it can hurt your congregants,” he says.  

With guardrails still scarce, though, faith-tech pioneers and church leaders are peering ever more deeply into congregants’ lives. Until meaningful oversight arrives, the faithful remain exposed to a gaze they never fully invited and scarcely understand.

In April, Gelsinger took the stage at a sold-out Missional AI Summit, a flagship event for Christian technologists that this year was organized around the theme “AI Collision: Shaping the Future Together.” Over 500 pastors, engineers, ethicists, and AI developers filled the hall, flashing badges with logos from Google DeepMind, Meta, McKinsey, and Gloo.

“We want to be part of a broader community … so that we’re influential in creating flourishing AI, technology as a force for good, AI that truly embeds the values that we care about,” Gelsinger said at the summit. He likened such tools to pivotal technologies in Christian history: the Roman roads that carried the gospel across the empire, or Martin Luther’s printing press, which shattered monolithic control over scripture. A Gloo spokesperson later confirmed that one of the company’s goals is to shape AI specifically to “contribute to the flourishing of people.”

“We’re going to see AI become just like the internet,” Gelsinger said. “Every single interaction will be infused with AI capabilities.” 

He says Gloo is already mining data across the spectrum of human experience to fuel ever more powerful tools.

“With AI, computers adapt to us. We talk to them; they hear us; they see us for the first time,” he said. “And now they are becoming a user interface that fits with humanity.”

Whether these technologies ultimately deepen pastoral care or erode personal privacy may hinge on decisions made today about transparency, consent, and accountability. Yet the pace of adoption already outstrips the development of ethical guardrails. Now, the question lingering in the air is not whether AI, facial recognition, and other emerging technologies can serve the church, but how deeply they can be woven into its nervous system to form a new operating system for modern Christianity and its moral infrastructure.

“It’s like standing on the beach watching a tsunami in slow motion,” Kriel says. 

Gelsinger sees it differently.  

“You and I both need to come to the same position, like Isaiah did,” he told the crowd at the Missional AI Summit. “‘Here am I, Lord. Send me.’ Send me, send us, that we can be shaping technology as a force for good, that we could grab this moment in time.” 

Alex Ashley is a journalist whose reporting has appeared in Rolling Stone, the Atlantic, NPR, and other national outlets.