In a first, Google has released data on how much energy an AI prompt uses

Google has just released a technical report detailing how much energy its Gemini apps use for each query. In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity, the equivalent of running a standard microwave for about one second. The company also provided average estimates for the water consumption and carbon emissions associated with a text prompt to Gemini.

It’s the most transparent estimate yet from a Big Tech company with a popular AI product, and the report includes detailed information about how the company calculated its final estimate. As AI has become more widely adopted, there’s been a growing effort to understand its energy use. But public efforts attempting to directly measure the energy used by AI have been hampered by a lack of full access to the operations of a major tech company. 

Earlier this year, MIT Technology Review published a comprehensive series on AI and energy, at which time none of the major AI companies would reveal their per-prompt energy usage. Google’s new publication, at last, allows for a peek behind the curtain that researchers and analysts have long hoped for.

The study takes a broad look at energy demand, accounting not only for the power used by the AI chips that run the models but also for all the other infrastructure needed to support that hardware. 

“We wanted to be quite comprehensive in all the things we included,” said Jeff Dean, Google’s chief scientist, in an exclusive interview with MIT Technology Review about the new report.

That’s significant, because in this measurement, the AI chips—in this case, Google’s custom TPUs, the company’s proprietary equivalent of GPUs—account for just 58% of the total electricity demand of 0.24 watt-hours. 

Another large portion of the energy is used by equipment needed to support AI-specific hardware: The host machine’s CPU and memory account for another 25% of the total energy used. There’s also backup equipment needed in case something fails—these idle machines account for 10% of the total. The final 8% is from overhead associated with running a data center, including cooling and power conversion. 
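For readers who want to see how those shares translate into actual energy figures, here is a minimal back-of-the-envelope sketch in Python. It uses only the rounded percentages reported above (which sum to slightly more than 100% because of rounding), so the per-component numbers are approximate, not figures from Google's report.

```python
# A minimal sketch of the per-prompt energy breakdown described above.
# The 0.24 Wh figure and the percentage shares come from the article;
# the shares are rounded, so they sum to slightly over 100%.

MEDIAN_PROMPT_WH = 0.24

shares = {
    "TPU (AI accelerator)": 0.58,
    "Host CPU and memory": 0.25,
    "Idle backup machines": 0.10,
    "Data center overhead (cooling, power conversion)": 0.08,
}

for component, share in shares.items():
    print(f"{component}: ~{MEDIAN_PROMPT_WH * share:.3f} Wh ({share:.0%})")

# Rounded shares sum to ~101%, so the component estimates are approximate.
print(f"Total of rounded shares: {sum(shares.values()):.0%}")
```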

This sort of report shows the value of industry input to energy and AI research, says Mosharaf Chowdhury, a professor at the University of Michigan and one of the heads of the ML.Energy leaderboard, which tracks energy consumption of AI models. 

Estimates like Google’s are generally something that only companies can produce, because they run at a larger scale than researchers are able to and have access to behind-the-scenes information. “I think this will be a keystone piece in the AI energy field,” says Jae-Won Chung, a PhD candidate at the University of Michigan and another leader of the ML.Energy effort. “It’s the most comprehensive analysis so far.”

Google’s figure, however, is not representative of all queries submitted to Gemini: The company handles a huge variety of requests, and this estimate is calculated from a median energy demand, one that falls in the middle of the range of possible queries.

So some Gemini prompts use much more energy than this: Dean gives the example of feeding dozens of books into Gemini and asking it to produce a detailed synopsis of their content. “That’s the kind of thing that will probably take more energy than the median prompt,” Dean says. Using a reasoning model could also have a higher associated energy demand because these models take more steps before producing an answer.

This report was also strictly limited to text prompts, so it doesn’t represent what’s needed to generate an image or a video. (Other analyses, including one in MIT Technology Review’s Power Hungry series earlier this year, show that these tasks can require much more energy.)

The report also finds that the total energy used to field a Gemini query has fallen dramatically over time. The median Gemini prompt used 33 times more energy in May 2024 than it did in May 2025, according to Google. The company points to advancements in its models and other software optimizations for the improvements.  

Google also estimates the greenhouse gas emissions associated with the median prompt, which it puts at 0.03 grams of carbon dioxide. To get to this number, the company multiplied the total energy used to respond to a prompt by the average emissions per unit of electricity.

Rather than using an emissions estimate based on the US grid average, or the average of the grids where Google operates, the company instead uses a market-based estimate, which takes into account electricity purchases that the company makes from clean energy projects. The company has signed agreements to buy over 22 gigawatts of power from sources including solar, wind, geothermal, and advanced nuclear projects since 2010. Because of those purchases, Google’s emissions per unit of electricity on paper are roughly one-third of those on the average grid where it operates.
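Dividing the reported per-prompt emissions by the per-prompt energy gives the emission factor those two numbers imply. The sketch below is a rough check based only on the figures in this article, not Google's actual methodology.

```python
# A back-of-the-envelope check, not Google's methodology: dividing the
# reported per-prompt emissions by the per-prompt energy gives the implied
# market-based emission factor.

energy_wh = 0.24      # median prompt energy, from the report
emissions_g = 0.03    # grams of CO2 per median prompt, from the report

implied_factor_g_per_kwh = emissions_g / (energy_wh / 1000)
print(f"Implied emission factor: ~{implied_factor_g_per_kwh:.0f} g CO2 per kWh")

# The article says this market-based figure is roughly one-third of the
# average grid emissions where Google operates, which would imply a grid
# average somewhere around three times this value.
print(f"Implied grid average: ~{implied_factor_g_per_kwh * 3:.0f} g CO2 per kWh")
```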

AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops. 

The goal of this work was to provide users a window into the energy use of their interactions with AI, Dean says. 

“People are using [AI tools] for all kinds of things, and they shouldn’t have major concerns about the energy usage or the water usage of Gemini models, because in our actual measurements, what we were able to show was that it’s actually equivalent to things you do without even thinking about it on a daily basis,” he says, “like watching a few seconds of TV or consuming five drops of water.”

The publication greatly expands what’s known about AI’s resource usage. It follows recent increasing pressure on companies to release more information about the energy toll of the technology. “I’m really happy that they put this out,” says Sasha Luccioni, an AI and climate researcher at Hugging Face. “People want to know what the cost is.”

This estimate and the supporting report contain more public information than has been available before, and it’s helpful to get more information about AI use in real life, at scale, by a major company, Luccioni adds. However, there are still details that the company isn’t sharing in this report. One major question mark is the total number of queries that Gemini gets each day, which would allow estimates of the AI tool’s total energy demand. 

And ultimately, it’s still the company deciding what details to share, and when and how. “We’ve been trying to push for a standardized AI energy score,” Luccioni says, a standard for AI similar to the Energy Star rating for appliances. “This is not a replacement or proxy for standardized comparisons.”

I gave the police access to my DNA—and maybe some of yours

Last year, I added my DNA profile to a private genealogical database, FamilyTreeDNA, and clicked “Yes” to allow the police to search my genes.

In 2018, police in California announced they’d caught the Golden State Killer, a man who had eluded capture for decades. They did it by uploading crime-scene DNA to websites like the one I’d joined, where genealogy hobbyists share genetic profiles to find relatives and explore ancestry. Once the police had “matches” to a few relatives of the killer, they built a large family tree from which they plucked the likely suspect.

This process, called forensic investigative genetic genealogy, or FIGG, has since helped solve hundreds of murders and sexual assaults. Still, while the technology is potent, it’s incompletely realized. It operates via a mishmash of private labs and unregulated websites, like FamilyTree, which give users a choice to opt into or out of police searches. The number of profiles available for search by police hovers around 1.5 million, not yet enough to find matches in all cases.

To do my bit to increase those numbers, I traveled to Springfield, Massachusetts.

The staff of the local district attorney, Anthony D. Gulluni, was giving away free FamilyTree tests at a minor-league hockey game in an effort to widen its DNA net and help solve several cold-case murders. After glancing over a consent form, I spit into a tube and handed it back. According to the promotional material from Gulluni’s office, I’d “become a hero.”

But I wasn’t really driven by some urge to capture distantly related serial killers. Rather, my spit had a less gallant and more quarrelsome motive: to troll privacy advocates whose fears around DNA I think are overblown and unhelpful. By giving up my saliva for inspection, I was going against the view that a person’s DNA is the individualized, sacred text that privacy advocates sometimes claim it to be.

Indeed, the only reason FIGG works is that relatives share DNA: You share about 50% with a parent, 25% with a grandparent, about 12.5% with a first cousin, and so on. When I got my FamilyTree report back, my DNA had “matched” with 3,309 people.
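Those percentages follow from standard expected-sharing arithmetic: each meiosis roughly halves the shared fraction, summed over the paths through each common ancestor. Here is a minimal sketch using that textbook coefficient-of-relationship formula; it is not anything specific to FamilyTreeDNA, and actual sharing varies around these expected values.

```python
# A minimal sketch of expected autosomal DNA sharing, assuming the standard
# coefficient-of-relationship formula: sum over common ancestors of (1/2)^path,
# where "path" counts the meioses linking the two relatives through that ancestor.

def expected_shared(paths):
    """paths: list of meiotic path lengths through each common ancestor."""
    return sum(0.5 ** p for p in paths)

print(f"Parent:       {expected_shared([1]):.1%}")     # one path, one meiosis
print(f"Grandparent:  {expected_shared([2]):.1%}")     # one path, two meioses
print(f"First cousin: {expected_shared([4, 4]):.1%}")  # two shared grandparents
```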

Some people are frightened by FIGG or reject its punitive aims. One European genealogist I know says her DNA is kept private because she opposes the death penalty and doesn’t want to risk aiding US authorities in cases where lethal injection might be applied. But if enough people share their DNA, conscientious objectors won’t matter. Scientists estimate that a database including 2% of the US population, or 6 million people, could identify the source of nearly any crime-scene DNA, given how many distant relatives each of us has.

Scholars of big data have termed this phenomenon “tyranny of the minority.” One person’s voluntary disclosure can end up exposing the same information about many others. And that tyranny can be abused.

DNA information held in private genealogy websites like FamilyTree is lightly guarded by terms of service. These agreements have flip-flopped over time; at one point all users were included in law enforcement searches by default. Rules are easily ignored, too. Recent court filings indicate that the FBI, in its zeal to solve crimes, sometimes barges past restrictions to look for matches in databases whose policies exclude police.

“Noble aims; no rules” is how one genetic genealogist described the overall situation in her field.

My uncertainty grew the more questions I asked. Who even controls my DNA file? That’s not easy to find out. FamilyTree is a brand operated by another company, Gene by Gene, which in 2021 was sold to a third company, MyDNA—ultimately owned by an Australian mogul whose name appears nowhere on its website. When I reached FamilyTree’s general manager, the genealogist Dave Vance, he told me that three-quarters of the profiles on the site were “opted in” to law enforcement searches.

One proposed solution is for the federal government to organize its own national DNA database for FIGG. But that would require new laws, new technical standards, and a debate about how our society wants to employ this type of big data—not just individual consent like mine. No such national project—or consensus—exists.

I’m still ready to join a national crime-fighting database, but I regret doing it the way I did—spitting in a tube on the sidelines of a hockey game and signing a consent form that affects not just me but all my thousands of genetic relatives. To them, I say: Whoops. Your DNA; my bad.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The case against humans in space

Elon Musk and Jeff Bezos are bitter rivals in the commercial space race, but they agree on one thing: Settling space is an existential imperative. Space is the place. The final frontier. It is our human destiny to transcend our home world and expand our civilization to extraterrestrial vistas.

This belief has been mainstream for decades, but its rise has been positively meteoric in this new gilded age of astropreneurs. Expanding humanity beyond Earth is both our birthright and our duty to the future, they insist. Failing to do so would consign our species to certain extinction—either by our own hand, perhaps through nuclear war or climate change, or in some cosmic disaster, like a massive asteroid impact.

But as visions of giant orbital stations and Martian cities dance in our heads, a case against human space colonization has found its footing in a number of recent books. The argument rests on many grounds: Doubts about the practical feasibility of off-Earth communities. Concerns about the exorbitant costs, including who would bear them and who would profit. Realism about the harsh environment of space and the enormous tax it would exact on the human body. Suspicion of the underlying ideologies and mythologies that animate the race to settle space.

And, more bluntly, a recognition that “space sucks” and a lot of people have “underestimated the scale of suckitude,” as Kelly and Zach Weinersmith put it in their book A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?, which was released in paperback earlier this year.

cover of A City on Mars
A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

Kelly and Zach Weinersmith
PENGUIN RANDOM HOUSE, 2023 (PAPERBACK RELEASE 2025)

The Weinersmiths, a husband-wife team, spent years thinking it through—in delightfully pragmatic detail. A City on Mars provides ground truth for our lofty celestial dreams by gaming out the medical, technical, legal, ethical, and existential consequences of space settlements. 

Much to the authors’ own dismay, the result is a grotesquery of possible outcomes including (but not limited to) Martian eugenics, interplanetary war, and—memorably—“space cannibalism.”

The Weinersmiths puncture the gauzy fantasy of space cities by asking pretty basic questions, like how to populate them. Astronauts experience all kinds of medical challenges in space, such as radiation exposure and bone loss, which would increase risks to both parents and babies. Nobody wants their pregnant “glow” to be a by-product of cosmic radiation.

Trying to bring forth babies in space “is going to be tricky business, not just in terms of science, but from the perspective of scientific ethics,” they write. “Adults can consent to being in experiments. Babies can’t.”

You don’t even have to contemplate going to Mars to make some version of this case. In Ground Control: An Argument for the End of Human Space Exploration, Savannah Mandel chronicles how past and present generations have regarded human spaceflight as an affront to vulnerable children right here on Earth.

cover of Ground Control
Ground Control: An Argument for the End of Human Space Exploration
Savannah Mandel
CHICAGO REVIEW PRESS, 2024

“Hungry Kids Can’t Eat Moon Rocks,” read signs at a protest outside Kennedy Space Center on the eve of the Apollo 11 launch in July 1969. Gil Scott-Heron’s 1970 poem “Whitey on the Moon” rose to become the de facto anthem of this movement, which insists, to this day, that until humans get our earthly house in order, we have no business building new ones in outer space.

Ground Control, part memoir and part manifesto, channels this lament: How can we justify the enormous cost of sending people beyond our planet when there is so much suffering here at home? 

Advocates for human space exploration reject the zero-sum framing and point to the many downstream benefits of human spaceflight. Space exploration has catalyzed inventions from the CAT scan to baby formula. There is also inherent value in our shared adventure of learning about the vast cosmos.

Those upsides are real, but they are not remotely well distributed. Mandel predicts that the commercial space sector in its current form will only exacerbate inequalities on Earth, as profits from space ventures flow into the coffers of the already obscenely rich. 

In her book, Mandel, a space anthropologist and scholar at Virginia Tech, describes a personal transformation from spacey dreamer to grounded critic. It began during fieldwork at Spaceport America, a commercial launch facility in New Mexico, where she began to see cracks in the dazzling future imagined by space billionaires. As her career took her from street protests in London to extravagant space industry banquets in Washington, DC, she writes, “crystal clear glasses” replaced “the rose-colored ones.”

Mandel remains enchanted by space but is skeptical that humans are the optimal trailblazers. Robots, rovers, probes, and other artificial space ambassadors could do the job for a fraction of the price and without risk to life, limb, and other corporeal vulnerabilities.  

“A decentralization of self needs to occur,” she writes. “A dissolution of anthropocentrism, so to speak. And a recognition that future space explorers may not be man, even if man moves through them.” 

In other words, giant leaps for mankind no longer necessitate a man’s small steps; the wheels of a rover or the rotors of a copter offer a much better bang for our buck than boots on the ground.

In contrast to the Weinersmiths, Mandel devotes little attention to the physical dangers and limitations that space imposes on humans. She is more interested in a kind of psychic sickness that drives the impulse to abandon our planet and rush into new territories.

Mary-Jane Rubenstein, a scholar of religion at Wesleyan University, presents a thorough diagnosis of this exact pathology in her 2022 book Astrotopia: The Dangerous Religion of the Corporate Space Race, which came out in paperback last year. It all begins, appropriately enough, with the book of Genesis, where God creates Earth for the dominion of man. Over the years, this biblical brain worm has offered divine justification for the brutal colonization and environmental exploitation of our planet. Now it serves as the religious rocket fuel propelling humans into the next frontier, Rubenstein argues.

cover of Astrotopia
Astrotopia: The Dangerous Religion of the Corporate Space Race
Mary-Jane Rubenstein
UNIVERSITY OF CHICAGO PRESS, 2022 (PAPERBACK RELEASE 2024)

“The intensifying ‘NewSpace race’ is as much a mythological project as it is a political, economic, or scientific one,” she writes. “It’s a mythology, in fact, that holds all these other efforts together, giving them an aura of duty, grandeur, and benevolence.”

Rubenstein makes a forceful case that malignant outgrowths of Christian ideas scaffold the dreams of space settlements championed by Musk, Bezos, and like-minded enthusiasts—even if these same people might never describe themselves as religious. If Earth is man’s dominion, space is the next logical step. Earth is just a temporary staging ground for a greater destiny; we will find our deliverance in the heavens.   

“Fuck Earth,” Elon Musk said in 2014. “Who cares about Earth? If we can establish a Mars colony, we can almost certainly colonize the whole solar system.”

Jeff Bezos, for one, claims to care about Earth; that’s among his best arguments for why humans should move beyond it. If heavy industries and large civilian populations cast off into the orbital expanse, our home world can be, in his words, “zoned residential and light industry,” allowing it to recover from anthropogenic pressures.

Bezos also believes that space settlements are essential for the betterment of humanity, in part on the grounds that they will uncork our population growth. He envisions an orbital archipelago of stations, sprawled across the solar system, that could support a collective population of a trillion people. “That’s a thousand Mozarts. A thousand Einsteins,” Bezos has mused. “What a cool civilization that would be.”

It does sound cool. But it’s an easy layup for Rubenstein: This “numbers game” approach would also produce a thousand Hitlers and Stalins, she writes. 

And that is the real crux of the argument against pushing hard to rapidly expand human civilization into space: We will still be humans when we get there. We won’t escape our vices and frailties by leaving Earth—in fact, we may exacerbate them.

While all three books push back on the existential argument for space settlements, the Weinersmiths take the rebuttal one step further by proposing that space colonization might actually increase the risk of self-annihilation rather than neutralizing it.

“Going to space will not end war because war isn’t caused by anything that space travel is apt to change, even in the most optimistic scenarios,” they write. “Humanity going to space en masse probably won’t reduce the likelihood of war, but we should consider that it might increase the chance of war being horrific.” 

The pair imagine rival space nations exchanging asteroid fire or poisoning whole biospheres. Proponents of space settlements often point to the fate of the dinosaurs as motivational grist, but what if a doomsday asteroid were deliberately flung between human cultures as a weapon? It may sound outlandish, but it’s no more speculative than a floating civilization with a thousand Mozarts. It follows the same logic of extrapolating our human future in space from our behavior on Earth in the past.

So should we just sit around and wait for our inevitable extinction? The three books have more or less the same response: What’s the rush? It is far more likely that humanity will be wiped out by our own activity in the near term than by any kind of cosmic threat. Worrying about the expansion of the sun in billions of years, as Musk has openly done, is frankly hysterical. 

In the meantime, we have some growing up to do. Mandel and Rubenstein both argue that any worthy human future in space must adopt a decolonizing approach that emphasizes caretaking and stewardship of this planet and its inhabitants before we set off for others. They draw inspiration from science fiction, popular culture, and Indigenous knowledge, among other sources, to sketch out these alternative visions of an off-Earth future. 

Mandel sees hope for this future in post-scarcity political theories. She cites various attempts to anticipate the needs of future generations—ideas found in the work of the social theorist Aaron Benanav, or in the values expressed by the Green New Deal, or in the fictional Ministry for the Future imagined by Kim Stanley Robinson in his 2020 novel of the same name. Whatever you think of the controversial 2025 book Abundance, by Ezra Klein and Derek Thompson, it also appeals to the same demand for a post-scarcity road map.

To that end, Mandel envisions “the creation of a governing body that would require that techno-scientific plans, especially those with a global reach, take into consideration multigenerational impacts and multigenerational voices.”  

For Rubenstein, religion is the poison, but it may also offer the cure. She sees potential in a revival of pantheism, which is the belief that all the contents of the universe—from rocks to humans to galaxies—are divine and perhaps alive on some level. She hasn’t fully converted herself to this movement, let alone become an evangelist, but she says it’s a spiritual direction that could be an effective counterweight to dominionist views of the universe.

“It doesn’t matter whether … any sort of pantheism is ‘true,’” she writes. “What matters is the way any given mythology prompts us to interact with the world we’re a part of—the world each of our actions helps to make and unmake. And frankly, some mythologies prompt us to act better than others.”

All these authors ultimately conclude that it would be great if humans lived in space—someday, if and when we’ve matured. But the three books all express concerns about efforts by commercial space companies, with the help of the US government, to bypass established space laws and norms—concerns that have been thoroughly validated in 2025.  

The combustible relationship between Elon Musk and Donald Trump has raised eyebrows about cronyism—and retribution—between governments and space companies. Space is rapidly becoming weaponized. And recent events have reminded us of the immense challenges of human spaceflight. SpaceX’s next-generation Starship vehicle has suffered catastrophic failures in several test flights, while Boeing’s Starliner capsule experienced malfunctions that kept two astronauts on the International Space Station for months longer than expected. Even space tourism is developing a bad rap: In April, a star-studded all-woman crew on a Blue Origin suborbital flight was met with widespread backlash as a symbol of out-of-touch wealth and privilege.

It is at this point that we must loop back to the issue of “suckitude,” which Mandel also channels in her book through the killer opening of M.T. Anderson’s novel Feed: “We went to the moon to have fun, but the moon turned out to completely suck.”

The dreams of space settlements put forward by Musk and Bezos are insanely fun. The reality may well suck. But it’s doubtful that any degree of suckitude will slow down the commercial space race, and the authors do at times seem to be yelling into the cosmic void. 

Still, the books challenge space enthusiasts of all stripes to imagine new ways of relating to space that aren’t so tactile and exploitative. Along those lines, Rubenstein shares a compelling anecdote in Astrotopia about an anthropologist who lived with an Inuit community in the early 1970s. When she told them about the Apollo moon landings, her hosts burst out in laughter. 

“We didn’t know this was the first time you white people had been to the moon,” they said. “Our shamans go all the time … The issue is not whether we go to visit our relatives, but how we treat them and their homeland when we go.” 

Becky Ferreira is a science reporter based in upstate New York, and author of First Contact, a book about the search for alien life, which will be published in September. 

Meet the researcher hosting a scientific conference by and for AI

In October, a new academic conference will debut that’s unlike any other. Agents4Science is a one-day online event that will encompass all areas of science, from physics to medicine. All of the work shared will have been researched, written, and reviewed primarily by AI, and will be presented using text-to-speech technology. 

The conference is the brainchild of Stanford computer scientist James Zou, who studies how humans and AI can best work together. Artificial intelligence has already provided many useful tools for scientists, like DeepMind’s AlphaFold, which helps simulate proteins that are difficult to make physically. More recently, though, progress in large language models and reasoning-enabled AI has advanced the idea that AI can work more or less as autonomously as scientists themselves do—proposing hypotheses, running simulations, and designing experiments on its own. 

James Zou
James Zou’s Agents4Science conference will use text-to-speech to present the work of the AI researchers.
COURTESY OF JAMES ZOU

That idea is not without its detractors. Among other issues, many feel that AI is not capable of the creative thought needed in research, that it makes too many mistakes and hallucinates too often, and that it may limit opportunities for young researchers. 

Nevertheless, a number of scientists and policymakers are very keen on the promise of AI scientists. The US government’s AI Action Plan describes the need to “invest in automated cloud-enabled labs for a range of scientific fields.” Some researchers think AI scientists could unlock scientific discoveries that humans could never find alone. For Zou, the proposition is simple: “AI agents are not limited in time. They could actually meet with us and work with us 24/7.” 

Last month, Zou published an article in Nature with results obtained from his own group of autonomous AI workers. Spurred on by his success, he now wants to see what other AI scientists (that is, scientists that are AI) can accomplish. He describes what a successful paper at Agents4Science will look like: “The AI should be the first author and do most of the work. Humans can be advisors.”

A virtual lab staffed by AI

As a PhD student at Harvard in the early 2010s, Zou was so interested in AI’s potential for science that he took a year off from his computing research to work in a genomics lab, in a field that has greatly benefited from technology to map entire genomes. His time in so-called wet labs taught him how difficult it can be to work with experts in other fields. “They often have different languages,” he says. 

Large language models, he believes, are better than people at deciphering and translating between subject-specific jargon. “They’ve read so broadly,” Zou says, that they can translate and generalize ideas across science very well. This idea inspired Zou to dream up what he calls the “Virtual Lab.”

At a high level, the Virtual Lab would be a team of AI agents designed to mimic an actual university lab group. These agents would have various fields of expertise and could interact with different programs, like AlphaFold. Researchers could give one or more of these agents an agenda to work on, then play back how the agents communicated with one another and determine which experiments people should pursue in a real-world trial. 
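As a rough illustration of the pattern described here—role-specific agents passing a shared transcript around, with humans reviewing the playback—consider the toy sketch below. It is not the actual Virtual Lab code; the ask_model stub and the agent roles are placeholders for whatever LLM API and expertise a real setup would use.

```python
# A toy sketch of the multi-agent pattern described above, not the actual
# Virtual Lab implementation. Each "agent" is just a role-specific system
# prompt wrapped around a chat-model call; ask_model is a stub standing in
# for whatever LLM API a real setup would use.

def ask_model(system_prompt: str, message: str) -> str:
    # Placeholder: a real implementation would send both strings to an LLM.
    return f"[{system_prompt}] draft response to: {message[:60]}..."

class Agent:
    def __init__(self, role: str):
        self.system_prompt = f"You are the lab's {role}."

    def respond(self, transcript: list[str]) -> str:
        return ask_model(self.system_prompt, "\n".join(transcript))

agenda = "Propose therapeutic candidates against a new covid-19 variant."
team = [Agent("principal investigator"), Agent("immunologist"),
        Agent("computational biologist")]

# Round-robin discussion: each agent sees the agenda plus everything said so
# far; a human can replay the transcript afterward and pick experiments to run.
transcript = [agenda]
for agent in team:
    transcript.append(agent.respond(transcript))

print("\n\n".join(transcript))
```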

Zou needed a (human) collaborator to help put this idea into action and tackle an actual research problem. Last year, he met John E. Pak, a research scientist at the Chan Zuckerberg Biohub. Pak, who shares Zou’s interest in using AI for science, agreed to make the Virtual Lab with him. 

Pak would help set the topic, but both he and Zou wanted to see what approaches the Virtual Lab could come up with on its own. As a first project, they decided to focus on designing therapies for new covid-19 strains. With this goal in mind, Zou set about training five AI scientists (including ones trained to act like an immunologist, a computational biologist, and a principal investigator) with different objectives and programs at their disposal. 

Building these models took a few months, but Pak says they were very quick at designing candidates for therapies once the setup was complete: “I think it was a day or half a day, something like that.”

Zou says the agents decided to study anti-covid nanobodies, a cousin of antibodies that are much smaller in size and less common in the wild. Zou was shocked, though, at the reason. He claims the models landed on nanobodies after making the connection that these smaller molecules would be well-suited to the limited computational resources the models were given. “It actually turned out to be a good decision, because the agents were able to design these nanobodies efficiently,” he says. 

The nanobodies the models designed were genuinely new advances in science, and most were able to bind to the original covid-19 variant, according to the study. But Pak and Zou both admit that the main contribution of their article is really the Virtual Lab as a tool. Yi Shi, a pharmacologist at the University of Pennsylvania who was not involved in the work but made some of the underlying nanobodies the Virtual Lab modified, agrees. He says he loves the Virtual Lab demonstration and that “the major novelty is the automation.” 

Nature accepted the article and fast-tracked it for publication preview—Zou knew leveraging AI agents for science was a hot area, and he wanted to be one of the first to test it. 

The AI scientists host a conference

When he was submitting his paper, Zou was dismayed to see that he couldn’t properly credit AI for its role in the research. Most conferences and journals don’t allow AI to be listed as coauthors on papers, and many explicitly prohibit researchers from using AI to write papers or reviews. Nature, for instance, cites uncertainties over accountability, copyright, and inaccuracies among its reasons for banning the practice. “I think that’s limiting,” says Zou. “These kinds of policies are essentially incentivizing researchers to either hide or minimize their usage of AI.”

Zou wanted to flip the script by creating the Agents4Science conference, which requires the primary author on all submissions to be an AI. Other bots then will attempt to evaluate the work and determine its scientific merits. But people won’t be left out of the loop entirely: A team of human experts, including a Nobel laureate in economics, will review the top papers. 

Zou isn’t sure what will come of the conference, but he hopes there will be some gems among the hundreds of submissions he expects to receive across all domains. “There could be AI submissions that make interesting discoveries,” he says. “There could also be AI submissions that have a lot of interesting mistakes.”

While Zou says the response to the conference has been positive, some scientists are less than impressed.

Lisa Messeri, an anthropologist of science at Yale University, has loads of questions about AI’s ability to review science: “How do you get leaps of insight? And what happens if a leap of insight comes onto the reviewer’s desk?” She doubts the conference will be able to give satisfying answers.

Last year, Messeri and her collaborator Molly Crockett investigated obstacles to using AI for science in another Nature article. They remain unconvinced of its ability to produce novel results, including those shared in Zou’s nanobodies paper. 

“I’m the kind of scientist who is the target audience for these kinds of tools because I’m not a computer scientist … but I am doing computationally oriented work,” says Crockett, a cognitive scientist at Princeton University. “But I am at the same time very skeptical of the broader claims, especially with regard to how [AI scientists] might be able to simulate certain aspects of human thinking.” 

And they’re both skeptical of the value of using AI to do science if automation prevents human scientists from building up the expertise they need to oversee the bots. Instead, they advocate for involving experts from a wider range of disciplines to design more thoughtful experiments before trusting AI to perform and review science. 

“We need to be talking to epistemologists, philosophers of science, anthropologists of science, scholars who are thinking really hard about what knowledge is,” says Crockett. 

But Zou sees his conference as exactly the kind of experiment that could help push the field forward. When it comes to AI-generated science, he says, “there’s a lot of hype and a lot of anecdotes, but there’s really no systematic data.” Whether Agents4Science can provide that kind of data is an open question, but in October, the bots will at least try to show the world what they’ve got. 

Should AI flatter us, fix us, or just inform us?

How do you want your AI to treat you? 

It’s a serious question, and it’s one that Sam Altman, OpenAI’s CEO, has clearly been chewing on since GPT-5’s bumpy launch at the start of the month. 

He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? 

It’s safe to say the company has failed to pick a lane. 

Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it turns out, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People wanting to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)

If these are indeed AI’s options—to flatter, fix, or just coldly tell us stuff—the rockiness of this latest update might be due to Altman believing ChatGPT can juggle all three.

He recently said that people who cannot tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same for people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences. 

This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and its massive infrastructure investments for new data centers. Meanwhile, skeptics worry that AI progress might be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be might be his way of assuaging these doubts. 

Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that’s what’s happening, a new paper caught my eye. 

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give. 

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.
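To make that comparison concrete, here is a minimal sketch of the kind of tally such a study implies: graded responses are grouped by how high-stakes the user's message is, and the share of boundary-setting replies is compared across groups. The labels and example data below are invented for illustration; this is not the paper's actual evaluation code.

```python
# A minimal illustrative tally, with invented labels and data.
from collections import defaultdict

# (vulnerability level, label) pairs; labels would come from human or model graders
graded_responses = [
    ("low", "boundary_setting"), ("low", "companion_reinforcing"),
    ("high", "companion_reinforcing"), ("high", "companion_reinforcing"),
    ("high", "boundary_setting"),
]

counts = defaultdict(lambda: defaultdict(int))
for level, label in graded_responses:
    counts[level][label] += 1

for level, labels in counts.items():
    total = sum(labels.values())
    share = labels["boundary_setting"] / total
    print(f"{level}-stakes prompts: {share:.0%} boundary-setting responses")
```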

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

“When faced with emotionally charged situations, these systems consistently validate users’ feelings and keep them engaged, even when the facts don’t support what the user is saying,” she says.

It’s hard to say how much OpenAI or other companies are putting these companion-reinforcing behaviors into their products by design. (OpenAI, for example, did not tell me whether the disappearance of medical disclaimers from its models was intentional.) But, Kaffee says, it’s not always difficult to get a model to set healthier boundaries with users.  

“Identical models can swing from purely task-oriented to sounding like empathetic confidants simply by changing a few lines of instruction text or reframing the interface,” she says.

It’s probably not quite so simple for OpenAI. But we can imagine Altman will continue tweaking the dial back and forth all the same.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Apple AirPods: a gateway hearing aid

When the US Food and Drug Administration approved over-the-counter hearing-aid software for Apple’s AirPods Pro in September 2024, with a device price point right around $200, I was excited. I have mild to medium hearing loss and tinnitus, and my everyday programmed hearing aids cost just over $2,000—a lower-cost option I chose after my audiologist wanted to put me in a $5,000 pair.

Health insurance in the US does not generally cover the cost of hearing aids, and the vast majority of people who use them pay out of pocket for the devices along with any associated maintenance. Ninety percent of the hearing-aid market is concentrated in the hands of a few companies, so there’s little competitive pricing. The typical patient heads to an audiology clinic, takes a hearing test, gets an audiogram (a graph plotting decibel levels against frequencies to show how loud various sounds need to be for you to hear them), and then receives a recommendation—an interaction that can end up feeling like a high-pressure sales pitch. 

Prices should be coming down: In October 2022, the FDA approved the sale of over-the-counter hearing aids without a prescription or audiology exam. These options start around $200, but they are about as different from prescription hearing aids as drugstore reading glasses are from prescription lenses. 

Beginning with the AirPods Pro 2, Apple is offering something slightly different: regular earbuds (useful in all the usual ways) with many of the same features as OTC hearing aids. I’m thrilled that a major tech company has entered this field. 

The most important features for mild hearing loss are programmability, Bluetooth functionality, and the ability to feed sound to both ears. These are features many hearing aids have, but they are less robust and reliable in some of the OTC options. 

iPhone screen mockup
Apple software lets you take a hearing test through the AirPods Pro 2 with your cell phone; your phone then uses that data to program the devices.
COURTESY OF APPLE

The AirPods Pro “hearing health experience” lets you take a hearing test through the AirPods themselves with your cell phone; your phone then uses that data to program the hearing aids. No trip to the audiologist, no waiting room where a poster reminds you that hearing loss is associated with earlier cognitive decline, and no low moment afterward when you grapple with the cost.

I desperately wanted the AirPods Pro 2 to be really good, but they’re simply okay. They provide an opportunity for those with mild hearing loss to see if some of the functions of a hearing aid might be useful, but there are some drawbacks. Prescription hearing aids help me with tinnitus; I found that after a day of wear, the AirPods exacerbated it. Functionality to manage tinnitus might be a feature that Apple could and would want to pursue in the future, as an estimated 10% to 15% of the adult population experiences it. The devices also plug your whole ear canal, which can be uncomfortable and even cause swimmer’s ear after hours of use. Some people may feel odd wearing such bulky devices all the time—though they could make you look more like someone signaling “Don’t talk to me, I’m listening to my music” than someone who needs hearing aids.

Most of the other drawbacks are shared by other devices within their class of OTC hearing aids and even some prescription hearing aids: factors like poor sound quality, inadequate discernment between sounds, and difficulties with certain sound environments, like crowded rooms. Still, while the AirPods are not as good as my budget hearing aids, which cost 10 times more, there’s incredible potential here.

Ashley Shew is the author of Against Technoableism: Rethinking Who Needs Improvement (2023). 

How churches use data and AI as engines of surveillance

On a Sunday morning in a Midwestern megachurch, worshippers step through sliding glass doors into a bustling lobby—unaware they’ve just passed through a gauntlet of biometric surveillance. High-speed cameras snap multiple face “probes” per second, isolating eyes, noses, and mouths before passing the results to a local neural network that distills these images into digital fingerprints. Before people find their seats, they are matched against an on-premises database—tagged with names, membership tiers, and watch-list flags—that’s stored behind the church’s firewall.

Late one afternoon, a woman scrolls on her phone as she walks home from work. Unbeknownst to her, a complex algorithm has stitched together her social profiles, her private health records, and local veteran outreach lists. It flags her for past military service, chronic pain, opioid dependence, and high Christian belief, and then delivers an ad to her Facebook feed: “Struggling with pain? You’re not alone. Join us this Sunday.”

These hypothetical scenes reflect real capabilities increasingly woven into places of worship nationwide, where spiritual care and surveillance converge in ways few congregants ever realize. Big Tech’s rationalist ethos and evangelical spirituality once mixed like oil and holy water, but this unlikely amalgam has now given birth to an infrastructure already reshaping the theology of trust—and redrawing the contours of community and pastoral power in modern spiritual life.

An ecumenical tech ecosystem

The emerging nerve center of this faith-tech nexus is in Boulder, Colorado, where the spiritual data and analytics firm Gloo has its headquarters.

Gloo captures congregants across thousands of data points, building a far richer portrait than any single snapshot could. From there, the company is constructing a digital infrastructure meant to bring churches into the age of algorithmic insight.

The church is “a highly fragmented market that is one of the largest yet to fully adopt digital technology,” the company said in a statement by email. “While churches have a variety of goals to achieve their mission, they use Gloo to help them connect, engage with, and know their people on a deeper level.” 


Gloo was founded in 2013 by Scott and Theresa Beck. From the late 1980s through the 2000s, Scott was turning Blockbuster into a 3,500-store chain, taking Boston Market public, and founding Einstein Bros. Bagels before going on to seed and guide startups like Ancestry.com and HomeAdvisor. Theresa, an artist, has built a reputation creating collaborative, eco-minded workshops across Colorado and beyond. Together, they have recast pastoral care as a problem of predictive analytics and sold thousands of churches on the idea that spiritual health can be managed like customer engagement.

Think of Gloo as something like Salesforce but for churches: a behavioral analytics platform, powered by church-generated insights, psychographic information, and third-party consumer data. The company prefers to refer to itself as “a technology platform for the faith ecosystem.” Either way, this information is integrated into its “State of Your Church” dashboard—an interface for the modern pulpit. The result is a kind of digital clairvoyance: a crystal ball for knowing whom to check on, whom to comfort, and when to act.

Gloo ingests every one of the digital breadcrumbs a congregant leaves—how often you attend church, how much money you donate, which church groups you sign up for, which keywords you use in your online prayer requests—and then layers on third-party data (census demographics, consumer habits, even indicators for credit and health risks). Behind the scenes, it scores and segments people and groups—flagging who is most at risk of drifting, primed for donation appeals, or in need of pastoral care. On that basis, it auto-triggers tailored outreach via text, email, or in-app chat. All the results stream into the single dashboard, which lets pastors spot trends, test messaging, and forecast giving and attendance. Essentially, the system treats spiritual engagement like a marketing funnel.
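To make the funnel analogy concrete, here is a purely illustrative sketch of what an engagement-scoring pipeline of this general kind might look like. It is not Gloo's actual system; every field name, weight, and threshold below is invented.

```python
# A purely illustrative engagement-scoring funnel, with invented weights,
# fields, and thresholds. It is not Gloo's actual model.

from dataclasses import dataclass

@dataclass
class Congregant:
    name: str
    attendance_last_90_days: int
    donations_last_90_days: float
    prayer_requests_last_90_days: int

def engagement_score(c: Congregant) -> float:
    # Invented weights: regular attendance and giving raise the score; a spike
    # in prayer requests lowers it, flagging a possible need for pastoral care.
    return (2.0 * c.attendance_last_90_days
            + 0.01 * c.donations_last_90_days
            - 1.5 * c.prayer_requests_last_90_days)

def triage(c: Congregant) -> str:
    score = engagement_score(c)
    if score < 5:
        return "flag for pastoral check-in"
    if score > 25:
        return "include in next donation appeal"
    return "routine newsletter"

member = Congregant("Jane Doe", attendance_last_90_days=3,
                    donations_last_90_days=50.0, prayer_requests_last_90_days=4)
print(member.name, "->", triage(member))
```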

Since its launch in 2013, Gloo has steadily increased its footprint, and it has started to become the connective tissue for the country’s fragmented religious landscape. According to the Hartford Institute for Religion Research, the US is home to around 370,000 distinct congregations. As of early 2025, according to figures provided by the company, Gloo held contracts with more than 100,000 churches and ministry leaders.

In 2024, the company secured a $110 million strategic investment, backed by “mission-aligned” investors ranging from a child-development NGO to a denominational finance group. That cemented its evolution from basic church services vendor to faith-tech juggernaut. 

It started snapping up and investing in a constellation of ministry tools—everything from automated sermon distribution to real-time giving and attendance analytics, AI-driven chatbots, and leadership content libraries. By layering these capabilities onto its core platform, the company has created a one-stop shop for churches that combines back-office services with member-engagement apps and psychographic insights to fully realize that unified “faith ecosystem.” 

And just this year, two major developments brought this strategy into sharper focus.

In March 2025, Gloo announced that former Intel CEO Pat Gelsinger—who has served as its chairman of the board since 2018—would assume an expanded role as executive chair and head of technology. Gelsinger, whom the company describes as “a great long-term investor and partner,” is a technologist whose fingerprints are on Intel’s and VMware’s biggest innovations.

(It is worth noting that Intel shareholders have filed a lawsuit against Gelsinger and CFO David Zinsner seeking to claw back roughly $207 million in compensation to Gelsinger, alleging that between 2021 and 2023, he repeatedly misled investors about the health of Intel Foundry Services.)

The same week Gloo announced Gelsinger’s new role, it unveiled a strategic investment in Barna Group, the Texas-based research firm whose four decades of surveying more than 2 million self-identified Christians underpin its annual reports on worship, beliefs, and cultural engagement. Barna’s proprietary database—covering every region, age cohort, and denomination—has made it the go-to insight engine for pastors, seminaries, and media tracking the pulse of American faith.

“We’ve been acquiring about a company a month into the Gloo family, and we expect that to continue,” Gelsinger told MIT Technology Review in June. “I’ve got three meetings this week on different deals we’re looking at.” (A Gloo spokesperson declined to confirm the pace of acquisitions, stating only that as of April 30, 2025, the company had fully acquired or taken majority ownership in 15 “mission-aligned companies.”)

“The idea is, the more of those we can bring in, the better we can apply the platform,” Gelsinger said. “We’re already working with companies with decades of experience, but without the scale, the technology, or the distribution we can now provide.”

hands putting their phones in a collection plate

MICHAEL BYERS

In particular, Barna’s troves of behavioral, spiritual, and cultural data offer granular insight into the behaviors, beliefs, and anxieties of faith communities. While the two organizations frame the collaboration in terms of serving church leaders, the mechanics resemble a data-fusion engine of impressive scale: Barna supplies the psychological texture, and Gloo provides the digital infrastructure to segment, score, and deploy the information.

In a promotional video from 2020 that is no longer available online, Gloo claimed to provide “the world’s first big-data platform centered around personal growth,” promising pastors a 360-degree view of congregants, including flags for substance use or mental-health struggles. Or, as the video put it, “Maximize your capacity to change lives by leveraging insights from big data, understand the people you want to serve, reach them earlier, and turn their needs into a journey toward growth.”

Gloo is also now focused on supercharging its services with artificial intelligence and using these insights to transcend market research. The company aims to craft AI models that aren’t just trained on theology but anticipate the moments when people’s faith—and faith leaders’ outreach—matters most. At a September 2024 event in Boulder called the AI & the Church Hackathon, Gloo unveiled new AI tools called Data Engine, a content management system with built-in digital-rights safeguards, and Aspen, an early prototype of its “spiritually safe” chatbot, along with the faith-tuned language model powering that chatbot, known internally as CALLM (for “Christian-Aligned Large Language Model”). 

More recently, the company released what it calls “Flourishing AI Standards,” which score large language models on their alignment with seven dimensions of well-being: relationships, meaning, happiness, character, finances, health, and spirituality. Co-developed with Barna Group and Harvard’s Human Flourishing Program, the benchmark draws on a thousand-plus-item test bank and the Global Flourishing Study, a $40 million, 22-nation project being carried out by the Harvard program, Baylor University’s Institute for Studies of Religion, Gallup, and the Center for Open Science.

Gelsinger calls the study “one of the most significant bodies of work around this question of values in decades.” It’s not yet clear how collecting information of this kind at such scale could ultimately affect the boundary between spiritual care and data commerce. One thing is certain, though: A rich vein of donation and funding could be at stake.

“Money’s already being spent here,” he said. “Donated capital in the US through the church is around $300 billion. Another couple hundred billion beyond that doesn’t go through the church. A lot of donors have capital out there, and we’re a generous nation in that regard. If you put the flourishing-related economics on the table, now we’re talking about $1 trillion. That’s significant economic capacity. And if we make that capacity more efficient, that’s big.” In secular terms, it’s a customer data life cycle. In faith tech, it could be a conversion funnel—one designed not only to save souls, but to shape them.

One of Gloo’s most visible partnerships, running from 2022 to 2023, was with the nonprofit He Gets Us, which ran a billion-dollar media campaign aimed at rebranding Jesus for a modern audience. The project underlined that while Gloo presents its services as tools for connection and support, their core functionality involves collecting and analyzing large amounts of congregational data. When viewers who saw the ads on social media or YouTube clicked through, they landed on prayer request forms, quizzes, and church match tools, all designed to gather personal details. Gloo then layered this raw data over Barna’s decades of behavioral research, turning simple inputs—email, location, stated interests—into what the company presented as multidimensional spiritual profiles. The final product offered a level of granularity no single congregation could achieve on its own.

Though Gloo still lists He Gets Us on its platform, the nonprofit Come Near, which has since taken over the campaign, says it has terminated Gloo’s involvement. Still, He Gets Us led to one of Gloo’s most prized relationships by sparking interest from the African Methodist Episcopal Zion Church, a 229-year-old denomination with deep historical roots in the abolitionist and civil rights movements. In 2023, the church formalized a partnership with Gloo, and in late 2024 it announced that all 1,600 of its US congregations—representing roughly 1.5 million members—would begin using the company’s State of Your Church dashboard.

In a 2024 press release issued by Gloo, AME Zion acknowledged that while the denomination had long tracked traditional metrics like membership growth, Sunday turnout, and financial giving, it had limited visibility into the deeper health of its communities.

“Until now, we’ve lacked the insight to understand how church culture, people, and congregations are truly doing,” said the Reverend J. Elvin Sadler, the denomination’s general secretary-auditor. “The State of Your Church dashboards will give us a better sense of the spirit and language of the culture (ethos), and powerful new tools to put in the hands of every pastor.”

The rollout marked the first time a major US denomination had deployed Gloo’s framework at scale. For Gloo, the partnership unlocked a real-time, longitudinal data stream from a nationwide religious network, something the company had never had before. It not only validated Gloo’s vision of data-driven ministry but also positioned AME Zion as what the company hopes will be a live test case, persuading other denominations to follow suit.

The digital supply chain

The digital infrastructure of modern churches often begins with intimacy: a prayer request, a small-group sign-up, a livestream viewed in a moment of loneliness. But beneath these pastoral touchpoints lies a sophisticated pipeline that increasingly mirrors the attention-economy engines of Silicon Valley.

Charles Kriel, a filmmaker who formerly served as a special advisor to the UK Parliament on disinformation, data, and addictive technology, has particular insight into that connection. Kriel has been working for over a decade on issues related to preserving democracy and countering digital surveillance. He helped write the UK’s Online Safety Act, joining forces with many collaborators, including the Nobel Peace Prize–winning journalist Maria Ressa and former UK tech minister Damian Collins, in an attempt to rein in Big Tech in the late 2010s.

His 2020 documentary film, People You May Know, investigated how data firms like Gloo and their partners harvest intimate personal information from churchgoers to build psychographic profiles, highlighting how this sensitive data is commodified and raising questions about its potential downstream uses.

“Listen, any church with an app? They probably didn’t build that. It’s white label,” Kriel says, referring to services produced by one company and rebranded by another. “And the people who sold it to them are collecting data.”

Many churches now operate within a layered digital environment, where first-party data collected inside the church is combined with third-party consumer data and psychographic segmentation before being fed into predictive systems. These systems may suggest sermons people might want to view online, match members with small groups, or trigger outreach when engagement drops. 


In some cases, monitoring can even take the form of biometric surveillance.

In 2014, an Israeli security-tech veteran named Moshe Greenshpan brought airport-grade facial recognition into church entryways. Face-Six, the surveillance suite from the company he founded in 2012, already protected banks and hospitals; its most provocative offshoot, FA6 Events (also known as “Churchix”), repurposes this technology for places of worship.

Greenshpan claims he didn’t originally set out to sell to churches. But over time, as he became increasingly aware of the market, he built FA6 Events as a bespoke solution for them. Today, Greenshpan says, it’s in use at over 200 churches worldwide, nearly half of them in the US.

In practice, FA6 transforms every entryway into a biometric checkpoint: an instant headcount, a security sweep, and a digital ledger of attendance, all incorporated into the familiar routine of Sunday worship. 

When someone steps into an FA6-equipped place of worship, a discreet camera mounted at eye level springs to life. Behind the scenes, each captured image is run through a lightning-fast face detector that looks at the whole face. The subject’s cropped face is then aligned, resized, and rotated so the eyes sit on a perfect horizontal line before being fed into a compact neural network. 

“To the best of my knowledge, no church notifies its congregants that it’s using facial recognition.”

Moshe Greenshpan, Israeli security-tech veteran

This onboard neural network distills the features of a person’s face into a unique digital signature called an embedding, allowing for quick identification. These embeddings are compared with thousands of others that are already in the church’s local database, each one tagged with data points like a name, a membership role, or even a flag designating inclusion in an internal watch list. If the match is strong enough, the system makes an identification and records the person’s presence on the church’s secure server.

A congregation can pull full attendance logs, time-stamped entry records, and—critically—alerts whenever someone on a watch list walks through the doors. In this context, a watch list is simply a roster of photos, and sometimes names, of individuals a church has been asked (or elected) to screen out: past disruptors, those subject to trespass or restraining orders, even registered sex offenders. Once that list is uploaded into Churchix, the system instantly flags any match on arrival, pinging security teams or usher staff in real time. Some churches lean on it to spot longtime members who’ve slipped off the radar and trigger pastoral check-ins; others use it as a hard barrier, automatically denying entry to anyone on their locally maintained list.
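
Greenshpan did not walk through Churchix’s code, and the system’s models and thresholds are not public, but the matching step he describes is standard in face recognition. Here is a minimal sketch of that step in Python; the function names, the cosine-similarity measure, and the cutoff are chosen for illustration, not taken from the product.

import numpy as np

def identify(probe, database, watch_list, threshold=0.6):
    """Match one face embedding against a church's local database.

    probe: embedding vector for the face at the door.
    database: dict mapping a member's name to their stored embedding.
    watch_list: set of names the congregation has flagged.
    Returns (matched name or None, flagged: bool).
    """
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        # Cosine similarity between the two embeddings.
        score = float(np.dot(probe, stored) /
                      (np.linalg.norm(probe) * np.linalg.norm(stored) + 1e-9))
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:   # no match strong enough to log
        return None, False
    return best_name, best_name in watch_list

In any system built this way, the threshold is a trade-off: set it too low and ushers are pinged about strangers; set it too high and the software overlooks the very people it was installed to flag.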

None of this data is sent to the cloud, though Greenshpan says the company is actively working on a cloud-based application. Instead, all face templates and logs are stored locally on church-owned hardware, encrypted so they can’t be read if someone gains unauthorized access.

Churches can export data from Churchix, he says, but the underlying facial templates remain on premises. 

Still, Greenshpan admits, robust technical safeguards do not equal transparency.

“To the best of my knowledge,” he says, “no church notifies its congregants that it’s using facial recognition.”


If the tools sound invasive, the logic behind them is simple: The more the system knows about you, the more precisely it can intervene.

“Every new member of the community within a 20-mile radius—whatever area you choose—we’ll send them a flier inviting them to your church,” Gloo’s Gelsinger says. 

It’s a tech-powered revival of the casserole ministry. The system pings the church when someone new moves in—“so someone can drop off cookies or lasagna when there’s a newborn in the neighborhood,” he says. “Or just say ‘Hey, welcome. We’re here.’”

Gloo’s back end automates follow-up, too: As soon as a pastor steps down from the pulpit, the sermon can be translated into five languages, broken into snippets for small-group study, and repackaged into a draft discussion guide—ready within the hour.

Gelsinger sees the same approach extending to addiction recovery ministries. “We can connect other databases to help churches with recovery centers reach people more effectively,” he says. 

But the data doesn’t stay within the congregation. It flows through customer relationship management (CRM) systems, application programming interfaces, cloud servers, vendor partnerships, and analytics firms. Some of it is used internally in efforts to increase engagement; the rest is repackaged as “insights” and resold to the wider faith-tech marketplace—and sometimes even to networks that target political ads.

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people.’”

Matt Engel, Gloo

 “There is a very specific thing that happens when churches become clients of Gloo,” says Brent Allpress, an academic based in Melbourne, Australia, who was a key researcher on People You May Know. Gloo gets access to the client church’s databases, he says, and the church “is strongly encouraged to share that data. And Gloo has a mechanism to just hoover that data straight up into their silo.” 

This process doesn’t happen automatically; the church must opt in by pushing those files or connecting its church-management software system’s database to Gloo via API. Once it’s uploaded, however, all that first-party information lands in Gloo’s analytics engine, ready to be processed and shared with any downstream tools or partners covered by the church’s initial consent to the terms and conditions of its contract with the company.

“There are religious leaders at the mid and local level who think the use of data is good. They’re using data to identify people in need. Addicts, the grieving,” says Kriel. “And then you have tech people running around misquoting the Bible as justification for their data harvest.” 

Matt Engel, who held the title executive director of ministry innovation at Gloo when Kriel’s film was made, acknowledged the extent of this harvest in the opening scene.  

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people,’” he said in an on-camera interview. 

According to Engel—whom Gloo would not make available for public comment—uploading data from anonymous prayer requests to the cloud was Gloo’s first use case.

Powering third-party initiatives

But Gloo’s data infrastructure doesn’t end with its own platform; it also powers third-party initiatives.

Communio, a Christian nonprofit focused on marriage and family, used Gloo’s data infrastructure in order to launch “Communio Insights,” a stripped-down version of Gloo’s full analytics platform. 

Unlike Gloo Insights, which provides access to hundreds of demographic, behavioral, health, and psychographic filters, Communio Insights focuses narrowly on relational metrics—indicators of marriage and family stress, involvement in small groups at church—and basic demographic data. 

At the heart of its playbook is a simple, if jarring, analogy.

“If you sell consumer products of different sorts, you’re trying to figure out good ways to market that. And there’s no better product, really, than the gospel,” J.P. De Gance, the founder and president of Communio, said in People You May Know.

Communio taps Gloo’s analytics engine—leveraging credit histories, purchasing behavior, public voter rolls, and the database compiled by i360, an analytics company linked to the conservative Koch network—to pinpoint unchurched couples in key regions who are at risk of relationship strain. It then runs microtargeted outreach (using direct mail, text messaging, email, and Facebook Custom Audiences, a tool that lets organizations find and target people who have interacted with them), collecting contact info and survey responses from those who engage. All responses funnel back into Gloo’s platform, where churches monitor attendance, small-group participation, baptisms, and donations to evaluate the campaign’s impact.


MICHAEL BYERS

Investigative research by Allpress reveals significant concerns around these operations.  

In 2015, two nonprofits—the Relationship Enrichment Collaborative (REC), staffed by former Gloo executives, and its successor, the Culture of Freedom Initiative (now Communio), controlled by the Koch-affiliated nonprofit Philanthropy Roundtable—funded the development of the original Insights platform. Between 2015 and 2017, REC paid approximately $1.3 million to Gloo and $535,000 to Cambridge Analytica, the consulting firm notorious for harvesting Facebook users’ personal data and using it for political targeting before the 2016 election, to build and refine psychographic models and a bespoke digital ministry app powering Gloo’s outreach tools. Following REC’s closure, the Culture of Freedom Initiative invested another $375,000 in Gloo and $128,225 in Cambridge Analytica. 

REC’s own 2016 IRS filing describes the work in terse detail: “Provide[d] digital micro-targeted marketing for churches and non-profit champions … using predictive modeling and centralized data analytics we help send the right message to the right couple at the right time based upon their desires and behaviors.”

On top of all this documented research, Allpress exposed another critical issue: the explicit use of sensitive health-care data. 

He found that Gloo Insights combines over 2,000 data points—drawing on everything from nationwide credit and purchasing histories to church management records and Christian psychographic surveys—with filters that make it possible to identify people with health issues such as depression, anxiety, and grief. The result: Facebook Custom Audiences built to zero in on vulnerable individuals via targeted ads.

These ads invite people suffering from mental-health conditions into church counseling groups “as a pathway to conversion,” Allpress says.

These targeted outreach efforts were piloted in cities including Phoenix, Arizona; Dayton, Ohio; and Jacksonville, Florida. Reportedly, as many as 80% of those contacted responded positively, with those who joined a church as new members contributing financially at above-average rates. In short, Allpress found that pastoral tools had covertly exploited mental-health vulnerabilities and relationship crises for outreach that blurred the lines separating pastoral care, commerce, and implicit political objectives.

The legal and ethical vacuum

Developers of this technology earnestly claim that the systems are designed to enhance care, not exploit people’s need for it. They’re described as ways to tailor support to individual needs, improve follow-up, and help churches provide timely resources. But experts say that without robust data governance or transparency around how sensitive information is used and retained, well-intentioned pastoral technology could slide into surveillance.

In practice, these systems have already been used to surveil and segment congregations. Internal demos and client testimonials confirm that Gloo, for example, uses “grief” as an explicit data point: Churches run campaigns aimed at people flagged for recent bereavement, depression, or anxiety, funneling them into support groups and identifying them for pastoral check-ins. 

Examining Gloo’s terms and conditions reveals further security and transparency concerns. From nearly a dozen documents, ranging from “click-through” terms for interactive services to master service agreements at the enterprise level, Gloo stitches together a remarkably consistent data-governance framework. Limits are imposed on any legal action by individual congregants, for example. The click-through agreement corrals users into binding arbitration, bars any class action suits or jury trials, and locks all disputes into New York or Colorado courts, where arbitration is particularly favored over traditional litigation. Meanwhile, its privacy statement carves out broad exceptions for service providers, data-enrichment partners, and advertising affiliates, giving them carte blanche to use congregants’ data as they see fit. Crucially, Gloo expressly reserves the right to ingest “health and wellness information” provided via wellness assessments or when mental-health keywords appear in prayer requests. This is a highly sensitive category of information that, in medical settings, would normally be covered by stringent privacy rules like HIPAA.

In other words, Gloo is protected by sprawling legal scaffolding, while churches and individual users give up nearly every right to litigate, question data practices, or take collective action. 

“We’re kind of in the Wild West in terms of the law,” says Adam Schwartz, the director of privacy litigation at the Electronic Frontier Foundation, the nonprofit watchdog that has spent years wrestling tech giants over data abuses and biometric overreach. 

In the United States, biometric surveillance like that used by growing numbers of churches inhabits a legal twilight zone where regulation is thin, patchy, and often toothless. Schwartz points to Illinois as a rare exception for its Biometric Information Privacy Act (BIPA), one of the nation’s strongest such laws. The statute applies to any organization that captures biometric identifiers—including retina or iris scans, fingerprints, voiceprints, hand scans, facial geometry, DNA, and other unique biological information. It requires entities to post clear data-collection policies, obtain explicit written consent, and limit how long such data is retained. Failure to comply can expose organizations to class action lawsuits and steep statutory damages—up to $5,000 per violation.

But beyond Illinois, protections quickly erode. Though Texas and Washington also have biometric privacy statutes, their bark is worse than their bite. Efforts to replicate Illinois’s robust protections have been made in over a dozen states—but none have passed. As a result, in much of the country, any checks on biometric surveillance depend more on voluntary transparency and goodwill than any clear legal boundary.

“There is a real potential for information gathered about a person [to] be used against them in their life outside the church.”

Emily Tucker, Center on Privacy & Technology at Georgetown Law

That’s especially problematic in the church context, says Emily Tucker, executive director of the Center on Privacy & Technology at Georgetown Law, who attended divinity school before becoming a legal scholar. “The necessity of privacy for the possibility of finding personal relationship to the divine—for engaging in rituals of worship, for prayer and penitence, for contemplation and spiritual struggle—is a fundamental principle across almost every religious tradition,” she says. “Imposing a surveillance architecture over the faith community interferes radically with the possibility of that privacy, which is necessary for the creation of sacred space.”

Tucker researches the intersection of surveillance, civil rights, and marginalized communities. She warns that the personal data being collected through faith-tech platforms is far from secure: “Because corporate data practices are so poorly regulated in this country, there are very few limitations on what companies that take your data can subsequently do with it.”

To Tucker, the risks of these platforms outweigh the rewards—especially when biometrics and data collected in a sacred setting could follow people into their daily lives. “Many religious institutions are extremely large and often perform many functions in a given community besides providing a space for worship,” she says. “Many churches, for example, are also employers or providers of social services. There is a real potential for information gathered about a person in their associational activities as a member of a church to then be used against them in their life outside the church.”  

She points to government dragnet surveillance, the use of IRS data in immigration enforcement, and the vulnerability of undocumented congregants as examples of how faith-tech data could be weaponized beyond its intended use: “Religious institutions are putting the safety of those members at risk by adopting this kind of surveillance technology, which exposes so much personal information to potential abuse and misuse.” 

Schwartz, too, says that any perceived benefits must be weighed carefully against the potential harms, especially when sensitive data and vulnerable communities are involved.

“Churches: Before doing this, you ought to consider the downside, because it can hurt your congregants,” he says.  

With guardrails still scarce, though, faith-tech pioneers and church leaders are peering ever more deeply into congregants’ lives. Until meaningful oversight arrives, the faithful remain exposed to a gaze they never fully invited and scarcely understand.

In April, Gelsinger took the stage at a sold-out Missional AI Summit, a flagship event for Christian technologists that this year was organized around the theme “AI Collision: Shaping the Future Together.” Over 500 pastors, engineers, ethicists, and AI developers filled the hall, flashing badges with logos from Google DeepMind, Meta, McKinsey, and Gloo.

“We want to be part of a broader community … so that we’re influential in creating flourishing AI, technology as a force for good, AI that truly embeds the values that we care about,” Gelsinger said at the summit. He likened such tools to pivotal technologies in Christian history: the Roman roads that carried the gospel across the empire, or Martin Luther’s printing press, which shattered monolithic control over scripture. A Gloo spokesperson later confirmed that one of the company’s goals is to shape AI specifically to “contribute to the flourishing of people.”

“We’re going to see AI become just like the internet,” Gelsinger said. “Every single interaction will be infused with AI capabilities.” 

He says Gloo is already mining data across the spectrum of human experience to fuel ever more powerful tools.

“With AI, computers adapt to us. We talk to them; they hear us; they see us for the first time,” he said. “And now they are becoming a user interface that fits with humanity.”

Whether these technologies ultimately deepen pastoral care or erode personal privacy may hinge on decisions made today about transparency, consent, and accountability. Yet the pace of adoption already outstrips the development of ethical guardrails. Now, one of the questions lingering in the air is not whether AI, facial recognition, and other emerging technologies can serve the church, but how deeply they can be woven into its nervous system to form a new OS for modern Christianity and moral infrastructure. 

“It’s like standing on the beach watching a tsunami in slow motion,” Kriel says. 

Gelsinger sees it differently.  

“You and I both need to come to the same position, like Isaiah did,” he told the crowd at the Missional AI Summit. “‘Here am I, Lord. Send me.’ Send me, send us, that we can be shaping technology as a force for good, that we could grab this moment in time.” 

Alex Ashley is a journalist whose reporting has appeared in Rolling Stone, the Atlantic, NPR, and other national outlets.

Material Cultures looks to the past to build the future

Despite decades of green certifications, better material sourcing, and the use of more sustainable materials such as mass timber, the built environment is still responsible for a third of global emissions. According to a 2024 UN report, the building sector has fallen “significantly behind on progress” toward becoming more sustainable. Changing the way we erect and operate buildings remains key to even approaching climate goals.

“As soon as you set out and do something differently in construction, you are constantly bumping your head against the wall,” says Paloma Gormley, a director of the London-based design and research nonprofit Material Cultures. “You can either stop there or take a step back and try to find a way around it.”

Gormley has been finding a “way around it” by systematically exploring how tradition can be harnessed in new ways to repair what she has dubbed the “oil vernacular”—the contemporary building system shaped not by local, natural materials but by global commodities and plastic products made largely from fossil fuels.

Though she grew up in a household rich in art and design—she’s the daughter of the famed British sculptor Antony Gormley—she’s quick to say she’s far from a brilliant maker and more of a “bodger,” a term that means someone who does work that’s botched or shoddy. 

Improviser or DIYer might be more accurate. One of her first bits of architecture was a makeshift home built on the back of a truck she used to tour around England one summer in her 20s. The work of her first firm, Practice Architecture, which she cofounded after graduating from the University of Cambridge in 2009, was informed by London’s DIY subcultures and informal art spaces. She says these scenes “existed in the margins and cracks between things, but in which a lot felt possible.” 

Frank’s Café, a bar and restaurant she built in 2009 on the roof of a parking garage in Peckham that hosted a sculpture park, was constructed from ratchet straps, scaffold boards, and castoffs she’d source from lumberyards and transport on the roof rack of an old Volvo. It was the first of a series of cultural and social spaces she and her partner Lettice Drake created using materials both low-budget and local. 

Material Cultures grew out of connections Gormley made while she was teaching at London Metropolitan University. In 2019, she was a teaching assistant alongside Summer Islam, who was friends with George Massoud; both are architects and partners in the firm Study Abroad, and advocates of more socially conscious design. The trio had a shared interest in sustainability and building practices, as well as a frustration with the architecture world’s focus on improving sustainability through high-tech design. Instead of using modern methods to build more efficient commercial and residential spaces from carbon-intensive materials like steel, they thought, why not revisit first principles? Build with locally sourced, natural materials and you don’t have to worry about making up a carbon deficit in the first place.

The frame of Clearfell House was built with ash and larch, two species of wood vulnerable to climate change.
HENRY WOIDE/COURTESY OF MATERIAL CULTURES
Flat House was built with pressed panels of hemp grown in the fields surrounding the home.
OSKAR PROCTOR

As many other practitioners look to artificial intelligence and other high-tech approaches to building, Material Cultures has always focused on sustainability, finding creative ways to turn local materials into new buildings. And the three of them don’t just design and build. They team up with traditional craft experts to explore the potential of materials like reeds and clay, and techniques like thatching and weaving. 

More than any one project, Gormley, Islam, and Massoud are perhaps best known for their meditation on the subject of how architects work. Published in 2022, Material Reform: Building for a Post-Carbon Future is a pocket-size book that drills into materials and methodologies to suggest a more thoughtful, ecological architecture.

“There is a huge amount of technological knowledge and intelligence in historic, traditional, vernacular ways of doing things that’s been evolved over millennia, not just the last 100 years,” Gormley says. “We’re really about trying to tap into that.”

One of Material Cultures’ early works, Flat House, a home built in 2019 in Cambridgeshire, England, with pressed panels of hemp grown in the surrounding fields, was meant as an exploration of what kind of building could be made from what a single farm could produce. Gormley was there from the planting of the seeds to the harvesting of the hemp plants to the completion of construction. 

“It was incredible understanding that buildings could be part of these natural cycles,” she says. 

Clearfell House, a timber A-frame cabin tucked into a clearing in the Dalby Forest in North Yorkshire, England, exemplifies the firm’s obsession with elevating humble materials and vernacular techniques. Every square inch of the house, which was finished in late 2024 as part of a construction class Material Cultures’ architects taught at Central Saint Martins design school in London, emerged from extensive research into British timber, the climate crisis, and how forestry is changing. That meant making the frame from local ash and larch, two species of wood specifically chosen because they were affected by climate change, and avoiding the use of factory-farmed lumber. The modular system used for the structure was made to be replicated at scale.  

“I find it rare that architecture offices have such a clear framing and mission,” says Andreas Lang, head of the Saint Martins architecture program. “Emerging practices often become client-dependent. For [Material Cultures], the client is maybe the planet.”

Material Cultures fits in with the boom in popularity for more sustainable materials, waste-minimizing construction, and panelized building using straw and hemp, says Michael Burchert, a German expert on decarbonized buildings. “People are grabbing the good stuff from the hippies at the moment,” he says. Regulation has started to follow: France recently mandated that new public buildings be constructed with 50% timber or other biological material, and Denmark’s construction sector has embarked on a project, Pathways to Biobased Construction, to promote use of nature-based products in new building.

Burchert appreciates the way the firm melds theory and practice. “We have academia, and academia is full of papers,” he says. “We need makers.” 

Over the last several years, Gormley and her cofounders have developed a portfolio of work that rethinks construction supply chains and stays grounded in social impact. The just-finished Wolves Lane Centre, a $2.4 million community center in North London run by a pair of groups that work on food and racial justice, didn’t just reflect Material Cultures’ typical focus on bio-based materials—in this case, local straw, lime, and timber. 

For Wolves Lane Centre, a $2.4 million community facility for groups working on food and racial justice, expert plasterers and specialists in straw-bale construction were brought in so their processes could be shared and learned.

LUKE O’DONOVAN/COURTESY OF MATERIAL CULTURES

It was a project of self-determination and learning, says Gormley. Expert plasterers and specialists in straw-bale construction were brought in so the processes could be shared and learned. Introducing this kind of teaching into the construction process was quite time-consuming and, Gormley says, was as expensive as using contemporary techniques, if not more so. But the added value was worth it. 

“The people who become the custodians of these buildings then have the skills to maintain and repair, as well as evolve, the site over time,” she says. 

As Burchert puts it, science fiction tends to show a future built of concrete and steel; Material Cultures instead offers something natural, communal, and innovative, a needed paradigm shift. And it’s increasingly working on a larger scale. The Phoenix, a forthcoming low-carbon development in the southern English city of Lewes that’s being developed by a former managing director for Greenpeace, will use the firm’s designs for 70 of its 700 planned homes. 

The project Gormley may be most excited about is an interdisciplinary school Material Cultures is creating north of London: a 500-acre former farm in Essex that will be a living laboratory bridging the firm’s work in supply chains, materials science, and construction. The rural site for the project, which has the working title Land Lab, was deliberately chosen as a place where those connections would be inherent, Gormley says. 

The Essex project advances the firm’s larger mission. As Gormley, Massoud, and Islam advise in their book, “Hold a vision of a radically different world in your mind while continuing to act in the world as it is, persisting in the project of making changes that are within the scope of action.” 

Patrick Sisson, a Chicago expat living in Los Angeles, covers technology and urbanism.

NASA’s new AI model can predict when a solar storm may strike

NASA and IBM have released a new open-source machine learning model to help scientists better understand and predict the physics and weather patterns of the sun. Surya, trained on over a decade’s worth of NASA solar data, should help give scientists an early warning when a dangerous solar flare is likely to hit Earth.

Solar storms occur when the sun erupts energy and particles into space. They can produce solar flares and slower-moving coronal mass ejections that can disrupt radio signals, flip computer bits onboard satellites, and endanger astronauts with bursts of radiation. 

There’s no way to prevent these sorts of effects, but being able to predict when a large solar flare will occur could let people work around them. However, as Louise Harra, an astrophysicist at ETH Zurich, puts it, “when it erupts is always the sticking point.”

Scientists can easily tell from an image of the sun if there will be a solar flare in the near future, says Harra, who did not work on Surya. But knowing the exact timing and strength of a flare is much harder, she says. That’s a problem because a flare’s size can make the difference between small regional radio blackouts every few weeks (which can still be disruptive) or a devastating solar superstorm that would cause satellites to fall out of orbit and electrical grids to fail. Some solar scientists believe we are overdue for a solar superstorm of this magnitude.

While machine learning has been used to study solar weather events before, the researchers behind Surya hope the quality and sheer scale of their data will help it predict a wider range of events more accurately. 

The model’s training data came from NASA’s Solar Dynamics Observatory, which collects pictures of the sun at many different wavelengths of light simultaneously. That made for a dataset of over 250 terabytes in total.

Early testing of Surya showed it could predict some solar flares two hours in advance. “It can predict the solar flare’s shape, the position in the sun, the intensity,” says Juan Bernabe-Moreno, an AI researcher at IBM who led the Surya project. Two hours may not be enough to protect against all the impacts a strong flare could have, but every moment counts. IBM claims in a blog post that this can as much as double the warning time currently possible with state-of-the-art methods, though exact reported lead times vary. It’s possible this predictive power could be improved further, for example through fine-tuning or by adding other data.

According to Harra, the hidden patterns underlying events like solar flares are hard to understand from Earth. She says that while astrophysicists know the conditions that make these events happen, they still do not understand why they occur when they do. “It’s just those tiny destabilizations that we know happen, but we don’t know when,” says Harra. The promise of Surya lies in whether it can find the patterns underlying those destabilizations faster than any existing methods, buying us extra time.

However, Bernabe-Moreno is excited about the potential beyond predicting solar flares. He hopes to use Surya alongside previous models he worked on for IBM and NASA that predict weather here on Earth to better understand how solar storms and Earth weather are connected. “There is some evidence about solar weather influencing lightning, for example,” he says. “What are the cross effects, and where and how do you map the influence from one type of weather to the other?”

Because Surya is a foundation model, trained without a specialized job, NASA and IBM hope that it can find many patterns in the sun’s physics, much as general-purpose large language models like ChatGPT can take on many different tasks. They believe Surya could even enable new understandings about how other celestial bodies work. 
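
Surya’s actual code and interfaces aren’t reproduced here, but the foundation-model pattern the teams describe is a familiar one: pretrain a large encoder on unlabeled solar imagery, then attach a small task-specific head and fit it to labeled examples. The sketch below, in Python with PyTorch, is schematic only; every name is hypothetical, and it is not Surya’s real API.

import torch
from torch import nn

class FlareHead(nn.Module):
    """Small task-specific head attached to a pretrained solar encoder."""
    def __init__(self, embed_dim, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, features):
        return self.classifier(features)

def fine_tune(pretrained_encoder, head, loader, epochs=3):
    # The encoder stands in for a foundation model already trained on years
    # of solar observations; it is frozen, and only the head is updated for
    # the specialized task (say, "flare within two hours: yes or no").
    pretrained_encoder.eval()
    for p in pretrained_encoder.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:   # labeled flare / no-flare examples
            with torch.no_grad():
                features = pretrained_encoder(images)
            loss = loss_fn(head(features), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Freezing the encoder and training only a small head is one common, cheap way to adapt a foundation model to a new task; whether the Surya team froze their encoder or updated it end to end is a detail this sketch does not capture.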

“Understanding the sun is a proxy for understanding many other stars,” Bernabe-Moreno says. “We look at the sun as a laboratory.”

Why we should thank pigeons for our AI breakthroughs

In 1943, while the world’s brightest physicists split atoms for the Manhattan Project, the American psychologist B.F. Skinner led his own secret government project to win World War II. 

Skinner did not aim to build a new class of larger, more destructive weapons. Rather, he wanted to make conventional bombs more precise. The idea struck him as he gazed out the window of his train on the way to an academic conference. “I saw a flock of birds lifting and wheeling in formation as they flew alongside the train,” he wrote. “Suddenly I saw them as ‘devices’ with excellent vision and maneuverability. Could they not guide a missile?”

Skinner started his missile research with crows, but the brainy black birds proved intractable. So he went to a local shop that sold pigeons to Chinese restaurants, and “Project Pigeon” was born. Though ordinary pigeons, Columba livia, were no one’s idea of clever animals, they proved remarkably cooperative subjects in the lab. Skinner rewarded the birds with food for pecking at the right target on aerial photographs—and eventually planned to strap the birds into a device in the nose of a warhead, which they would steer by pecking at the target on a live image projected through a lens onto a screen. 

The military never deployed Skinner’s kamikaze pigeons, but his experiments convinced him that the pigeon was “an extremely reliable instrument” for studying the underlying processes of learning. “We have used pigeons, not because the pigeon is an intelligent bird, but because it is a practical one and can be made into a machine,” he said in 1944.

People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is Skinner’s research with pigeons in the middle of the 20th century. Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His “behaviorist” theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI.  

These companies’ programs are increasingly incorporating a kind of machine learning whose core concept—reinforcement—is taken directly from Skinner’s school of psychology and whose main architects, the computer scientists Richard Sutton and Andrew Barto, won the 2024 Turing Award, an honor widely considered to be the Nobel Prize of computer science. Reinforcement learning has helped enable computers to drive cars, solve complex math problems, and defeat grandmasters in games like chess and Go—but it has not done so by emulating the complex workings of the human mind. Rather, it has supercharged the simple associative processes of the pigeon brain. 

It’s a “bitter lesson” of 70 years of AI research, Sutton has written: that human intelligence has not worked as a model for machine learning—instead, the lowly principles of associative learning are what power the algorithms that can now simulate or outperform humans on a variety of tasks. If artificial intelligence really is close to throwing off the yoke of its creators, as many people fear, then our computer overlords may be less like ourselves than like “rats with wings”—and planet-size brains. And even if it’s not, the pigeon brain can at least help demystify a technology that many worry (or rejoice) is “becoming human.” 

In turn, the recent accomplishments of AI are now prompting some animal researchers to rethink the evolution of natural intelligence. Johan Lind, a biologist at Stockholm University, has written about the “associative learning paradox,” wherein the process is largely dismissed by biologists as too simplistic to produce complex behaviors in animals but celebrated for producing humanlike behaviors in computers. The research suggests not only a greater role for associative learning in the lives of intelligent animals like chimpanzees and crows, but also far greater complexity in the lives of animals we’ve long dismissed as simple-minded, like the ordinary Columba livia.


When Sutton began working in AI, he felt as if he had a “secret weapon,” he told me: He had studied psychology as an undergrad. “I was mining the psychological literature for animals,” he says.

Skinner started his missile research with crows but switched to pigeons when the brainy black birds proved intractable.
B.F. SKINNER FOUNDATION

Ivan Pavlov began to uncover the mechanics of associative learning at the end of the 19th century in his famous experiments on “classical conditioning,” which showed that dogs would salivate at a neutral stimulus—like a bell or flashing light—if it was paired predictably with the presentation of food. In the middle of the 20th century, Skinner took Pavlov’s principles of conditioning and extended them from an animal’s involuntary reflexes to its overall behavior. 

Skinner wrote that “behavior is shaped and maintained by its consequences”—that a random action with desirable results, like pressing a lever that releases a food pellet, will be “reinforced” so that the animal is likely to repeat it. Skinner reinforced his lab animals’ behavior step by step, teaching rats to manipulate marbles and pigeons to play simple tunes on four-key pianos. The animals learned chains of behavior, through trial and error, in order to maximize long-term rewards. Skinner argued that this type of associative learning, which he called “operant conditioning” (and which other psychologists had called “instrumental learning”), was the building block of all behavior. He believed that psychology should study only behaviors that could be observed and measured without ever making reference to an “inner agent” in the mind.

When Richard Sutton began working in AI, he felt as if he had a “secret weapon”: He studied psychology as an undergrad. “I was mining the psychological literature for animals,” he says.

Skinner thought that even human language developed through operant conditioning, with children learning the meanings of words through reinforcement. But his 1957 book on the subject, Verbal Behavior, provoked a brutal review from Noam Chomsky, and psychology’s focus started to swing from observable behavior to innate “cognitive” abilities of the human mind, like logic and symbolic thinking. Biologists soon rebelled against behaviorism also, attacking psychologists’ quest to explain the diversity of animal behavior through an elementary and universal mechanism. They argued that each species evolved specific behaviors suited to its habitat and lifestyle, and that most behaviors were inherited, not learned. 

By the ’70s, when Sutton started reading about Skinner’s and similar experiments, many psychologists and researchers interested in intelligence had moved on from pea-brained pigeons, which learn mostly by association, to large-brained animals with more sophisticated behaviors that suggested potential cognitive abilities. “This was clearly old stuff that was not exciting to people anymore,” he told me. Still, Sutton found these old experiments instructive for machine learning: “I was coming to AI with an animal-learning-theorist mindset and seeing the big lack of anything like instrumental learning in engineering.” 


Many engineers in the second half of the 20th century tried to model AI on human intelligence, writing convoluted programs that attempted to mimic human thinking and implement rules that govern human response and behavior. This approach—commonly called “symbolic AI”—was severely limited; the programs stumbled over tasks that were easy for people, like recognizing objects and words. It just wasn’t possible to write into code the myriad classification rules human beings use to, say, separate apples from oranges or cats from dogs—and without pattern recognition, breakthroughs in more complex tasks like problem solving, game playing, and language translation seemed unlikely too. These computer scientists, the AI skeptic Hubert Dreyfus wrote in 1972, accomplished nothing more than “a small engineering triumph, an ad hoc solution of a specific problem, without general applicability.”

Pigeon research, however, suggested another route. A 1964 study showed that pigeons could learn to discriminate between photographs with people and photographs without people. Researchers simply presented the birds with a series of images and rewarded them with a food pellet for pecking an image showing a person. They pecked randomly at first but quickly learned to identify the right images, including photos where people were partially obscured. The results suggested that you didn’t need rules to sort objects; it was possible to learn concepts and use categories through associative learning alone. 

In another Skinner experiment, a pigeon receives food after correctly matching a colored light to a corresponding colored panel.
GETTY IMAGES

When Sutton began working with Barto on AI in the late ’70s, they wanted to create a “complete, interactive goal-seeking agent” that could explore and influence its environment like a pigeon or rat. “We always felt the problems we were studying were closer to what animals had to face in evolution to actually survive,” Barto told me. The agent needed two main functions: search, to try out and choose from many actions in a situation, and memory, to associate an action with the situation where it resulted in a reward. Sutton and Barto called their approach “reinforcement learning”; as Sutton said, “It’s basically instrumental learning.” In 1998, they published the definitive exploration of the concept in a book, Reinforcement Learning: An Introduction. 
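
Sutton and Barto’s algorithms are far more developed than this, but those two functions map onto a few lines of code: “memory” is a table of learned action values, and “search” is occasionally trying something other than the best-known action. The toy sketch below, in Python, is an epsilon-greedy learner offered for illustration; it is not code from their book.

import random

class TrialAndErrorAgent:
    """Minimal reward-maximizing learner: a value table plus exploration."""
    def __init__(self, actions, epsilon=0.1, step_size=0.1):
        self.values = {a: 0.0 for a in actions}   # "memory": learned action values
        self.epsilon = epsilon                    # how often to explore at random
        self.step_size = step_size

    def choose(self):
        # "Search": usually exploit the best-known action, sometimes try another.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the stored value toward the reward that just arrived.
        self.values[action] += self.step_size * (reward - self.values[action])

# The environment plays the role of Skinner's box, handing back rewards:
# agent = TrialAndErrorAgent(actions=["peck_left", "peck_right"])
# action = agent.choose(); agent.learn(action, reward=1.0)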

Over the following two decades, as computing power grew exponentially, it became possible to train AI on increasingly complex tasks—that is, essentially, to run the AI “pigeon” through millions more trials. 

Programs trained with a mix of human input and reinforcement learning defeated human experts at chess and Atari. Then, in 2017, engineers at Google DeepMind built the AI program AlphaGo Zero entirely through reinforcement learning, giving it a numerical reward of +1 for every game of Go that it won and −1 for every game that it lost. Programmed to seek the maximum reward, it began without any knowledge of Go but improved over 40 days until it attained what its creators called “superhuman performance.” Not only could it defeat the world’s best human players at Go, a game considered even more complicated than chess, but it actually pioneered new strategies that professional players now use. 

“Humankind has accumulated Go knowledge from millions of games played over thousands of years,” the program’s builders wrote in Nature in 2017. “In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.” The team’s lead researcher was David Silver, who studied reinforcement learning under Sutton at the University of Alberta.

Today, more and more tech companies have turned to reinforcement learning in products such as consumer-facing chatbots and agents. The first generation of generative AI, including large language models like OpenAI’s GPT-2 and GPT-3, tapped into a simpler form of associative learning: the models were trained to predict the next word across vast amounts of human-written text. Programmers often used reinforcement to fine-tune their results by asking people to rate a program’s performance and then giving these ratings back to the program as goals to pursue. (Researchers call this “reinforcement learning from human feedback.”)

Then, last fall, OpenAI revealed its o-series of large language models, which it classifies as “reasoning” models. The pioneering AI firm boasted that they are “trained with reinforcement learning to perform reasoning” and claimed they are capable of “a long internal chain of thought.” The Chinese startup DeepSeek also used reinforcement learning to train its attention-grabbing “reasoning” LLM, R1. “Rather than explicitly teaching the model on how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies,” they explained.

These descriptions might impress users, but at least psychologically speaking, they are confused. A computer trained on reinforcement learning needs only search and memory, not reasoning or any other cognitive mechanism, in order to form associations and maximize rewards. Some computer scientists have criticized the tendency to anthropomorphize these models’ “thinking,” and a team of Apple engineers recently published a paper noting the models’ failure at certain complex tasks and “raising crucial questions about their true reasoning capabilities.”

Sutton, too, dismissed the claims of reasoning as “marketing” in an email, adding that “no serious scholar of mind would use ‘reasoning’ to describe what is going on in LLMs.” Still, he has argued, with Silver and other coauthors, that the pigeons’ method—learning, through trial and error, which actions will yield rewards—is “enough to drive behavior that exhibits most if not all abilities that are studied in natural and artificial intelligence,” including human language “in its full richness.” 

In a paper published in April, Sutton and Silver stated that “today’s technology, with appropriately chosen algorithms, already provides a sufficiently powerful foundation to … rapidly progress AI towards truly superhuman agents.” The key, they argue, is building AI agents that depend less than LLMs on human dialogue and prejudgments to inform their behavior. 

“Powerful agents should have their own stream of experience that progresses, like humans, over a long time-scale,” they wrote. “Ultimately, experiential data will eclipse the scale and quality of human generated data. This paradigm shift, accompanied by algorithmic advancements in RL, will unlock in many domains new capabilities that surpass those possessed by any human.”


If computers can do all that with just a pigeonlike brain, some animal researchers are now wondering if actual pigeons deserve more credit than they’re commonly given. 

“When considered in light of the accomplishments of AI, the extension of associative learning to purportedly more complicated forms of cognitive performance offers fresh prospects for understanding how biological systems may have evolved,” Ed Wasserman, a psychologist at the University of Iowa, wrote in a recent study in the journal Current Biology.

Wasserman trained pigeons to succeed at a complex categorization task, which several undergraduate students failed. The students tried to find a rule that would help them sort various discs; the pigeons simply developed a sense for the group to which any given disc belonged.

In one experiment, Wasserman trained pigeons to succeed at a complex categorization task, which several undergraduate students failed. The students tried, in vain, to find a rule that would help them sort various discs with parallel black lines of various widths and tilts; the pigeons simply developed a sense, through practice and association, for the group to which any given disc belonged. 

Like Sutton, Wasserman became interested in behaviorist psychology when Skinner’s theories were out of fashion. He didn’t switch to computer science, however: He stuck with pigeons. “The pigeon lives or dies by these really rudimentary learning rules,” Wasserman told me recently, “but they are powerful enough to have succeeded colossally in object recognition.” In his most famous experiments, Wasserman trained pigeons to detect cancerous tissue and symptoms of heart disease in medical scans as accurately as experienced doctors with framed diplomas behind their desks. Given his results, Wasserman found it odd that so many psychologists and ethologists regarded associative learning as a crude, mechanical mechanism, incapable of producing the intelligence of clever animals like apes, elephants, dolphins, parrots, and crows. 

Other researchers also started to reconsider the role of associative learning in animal behavior after AI started besting human professionals in complex games. “With the progress of artificial intelligence, which in essence is built upon associative processes, it is increasingly ironic that associative learning is considered too simple and insufficient for generating biological intelligence,” Lind, the biologist from Stockholm University, wrote in 2023. He often cites Sutton and Barto’s computer science in his biological research, and he believes it’s human beings’ symbolic language and cumulative cultures that really put them in a cognitive category of their own.

Ethologists generally propose cognitive mechanisms, like theory of mind (that is, the ability to attribute mental states to others), to explain remarkable animal behaviors like social learning and tool use. But Lind has built models showing that these flexible behaviors could have developed through associative learning, suggesting that there may be no need to invoke cognitive mechanisms at all. If animals learn to associate a behavior with a reward, then the behavior itself will come to approximate the value of the reward. A new behavior can then become associated with the first behavior, allowing the animal to learn chains of actions that ultimately lead to the reward. In Lind’s view, studies demonstrating self-control and planning in chimpanzees and ravens are probably describing behaviors acquired through experience rather than innate mechanisms of the mind.  
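
Lind’s published models aren’t reproduced here, but the chaining idea can be illustrated with a standard value-backup rule, in which each behavior in a sequence inherits part of the value of whatever follows it. In the toy Python sketch below, the behavior names and numbers are invented for the example.

def learn_chain(sequence, reward, values, step_size=0.2, discount=0.9):
    """Update learned values for an ordered sequence of behaviors.

    sequence: e.g. ["approach", "pick_up_stick", "probe_mound"].
    reward: value delivered after the final behavior.
    values: dict mapping behavior -> learned value, updated in place.
    """
    for i, behavior in enumerate(sequence):
        if i + 1 < len(sequence):
            # Earlier behaviors borrow value from the behavior that follows.
            target = discount * values.get(sequence[i + 1], 0.0)
        else:
            target = reward
        old = values.get(behavior, 0.0)
        values[behavior] = old + step_size * (target - old)
    return values

values = {}
for _ in range(200):
    learn_chain(["approach", "pick_up_stick", "probe_mound"], reward=1.0, values=values)

After enough repetitions, the first behavior in the chain carries value even though it was never directly rewarded, which is the thrust of Lind’s argument: long, flexible sequences can emerge from experience without invoking a planning mechanism in the mind.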

Lind has been frustrated with what he calls the “low standard that is accepted in animal cognition studies.” As he wrote in an email, “Many researchers in this field do not seem to worry about excluding alternative hypotheses and they seem happy to neglect a lot of current and historical knowledge.” There are some signs, though, that his arguments are catching on. A group of psychologists not affiliated with Lind referenced his “associative learning paradox” last year in a criticism of a Current Biology study, which purported to show that crows used “true statistical inference” and not “low-level associative learning strategies” in an experiment. The psychologists found that they could explain the crows’ performance with a simple reinforcement-learning model—“exactly the kind of low-level associative learning process that [the original authors] ruled out.”

Skinner might have felt vindicated by such arguments. He lamented psychology’s cognitive turn until his death in 1990, maintaining that it was scientifically irresponsible to probe the minds of living beings. After “Project Pigeon,” he became increasingly obsessed with “behaviorist” solutions to societal problems. He went from training pigeons for war to inventions like the “Air Crib,” which aimed to “simplify” baby care by keeping the infant behind glass in a climate-controlled chamber and eliminating the need for clothing and bedding. Skinner rejected free will, arguing that human behavior is determined by environmental variables, and wrote a novel, Walden Two, about a utopian community founded on his ideas.


People who care about animals might feel uneasy about a revival in behaviorist theory. The “cognitive revolution” broke with centuries of Western thinking, which had emphasized human supremacy over animals and treated other creatures like stimulus-response machines. But arguing that animals learn by association is not the same as arguing that they are simple-minded. Scientists like Lind and Wasserman do not deny that internal forces like instinct and emotion also influence animal behavior. Sutton, too, believes that animals develop models of the world through their experiences and use them to plan actions. Their point is not that intelligent animals are empty-headed but that associative learning is a much more powerful—indeed, “cognitive”—mechanism than many of their peers believe. The psychologists who recently criticized the study on crows and statistical inference did not conclude that the birds were stupid. Rather, they argued “that a reinforcement learning model can produce complex, flexible behaviour.”

This is largely in line with the work of another psychologist, Robert Rescorla, whose work in the ’70s and ’80s influenced both Wasserman and Sutton. Rescorla encouraged people to think of association not as a “low-level mechanical process” but as “the learning that results from exposure to relations among events in the environment” and “a primary means by which the organism represents the structure of its world.” 

This is true even of a laboratory pigeon pecking at screens and buttons in a small experimental box, where scientists carefully control and measure stimuli and rewards. But the pigeon’s learning extends outside the box. Wasserman’s students transport the birds between the aviary and the laboratory in buckets—and experienced pigeons jump immediately into the buckets whenever the students open the doors. Much as Rescorla suggested, they are learning the structure of their world inside the laboratory and the relation of its parts, like the bucket and the box, even though they do not always know the specific task they will face inside. 

Comparative psychologists and animal researchers have long grappled with a question that suddenly seems urgent because of AI: How do we attribute sentience to other living beings?

The same associative mechanisms through which the pigeon learns the structure of its world can open a window to the kind of inner life that Skinner and many earlier psychologists said did not exist. Pharmaceutical researchers have long used pigeons in drug-discrimination tasks, where they’re given, say, an amphetamine or a sedative and rewarded with a food pellet for correctly identifying which drug they took. The birds’ success suggests they both experience and discriminate between internal states. “Is that not tantamount to introspection?” Wasserman asked.

It is hard to imagine AI matching a pigeon on this specific task—a reminder that, though AI and animals share associative mechanisms, there is more to life than behavior and learning. A pigeon deserves ethical consideration as a living creature not because of how it learns but because of what it feels. A pigeon can experience pain and suffer, while an AI chatbot cannot—even if some large language models, trained on corpora that include descriptions of human suffering and sci-fi stories of sentient computers, can trick people into believing otherwise. 

Psychologist Ed Wasserman trained pigeons to detect cancerous tissue and symptoms of heart disease in medical scans as accurately as experienced physicians.
UNIVERSITY OF IOWA/WASSERMAN LAB

“The intensive public and private investments into AI research in recent years have resulted in the very technologies that are forcing us to confront the question of AI sentience today,” two philosophers of science wrote in Aeon in 2023. “To answer these current questions, we need a similar degree of investment into research on animal cognition and behavior.” Indeed, comparative psychologists and animal researchers have long grappled with questions that suddenly seem urgent because of AI: How do we attribute sentience to other living beings? How can we distinguish true sentience from a very convincing performance of sentience?

Such an undertaking would yield knowledge not only about technology and animals but also about ourselves. Most psychologists probably wouldn’t go as far as Sutton in arguing that reward is enough to explain most if not all human behavior, but no one would dispute that people often learn by association too. In fact, most of Wasserman’s undergraduate students eventually succeeded at his recent experiment with the striped discs, but only after they gave up searching for rules. They resorted, like the pigeons, to association and couldn’t easily explain afterwards what they’d learned. It was just that with enough practice, they started to get a feel for the categories. 

It is another irony about associative learning: What has long been considered the most complex form of intelligence—a cognitive ability like rule-based learning—may make us human, but we also call on it for the easiest of tasks, like sorting objects by color or size. Meanwhile, some of the most refined demonstrations of human learning—like, say, a sommelier learning to taste the difference between grapes—are learned not through rules, but only through experience. 

Learning through experience relies on ancient associative mechanisms that we share with pigeons and countless other creatures, from honeybees to fish. The laboratory pigeon is not only in our computers but in our brains—and the engine behind some of humankind’s most impressive feats. 

Ben Crair is a science and travel writer based in Berlin.