Big Tech’s big bet on a controversial carbon removal tactic

Over the last century, much of the US pulp and paper industry crowded into the southeastern corner of the nation, setting up mills amid sprawling timber forests to strip the fibers from juvenile loblolly, longleaf, and slash pine trees.

Today, after the factories chip the softwood and digest it into pulp, the leftover lignin, spent chemicals, and remaining organic matter form a dark, syrupy by-product known as black liquor. It’s then concentrated into a biofuel and burned, which heats the towering boilers that power the facility—and releases carbon dioxide into the air.

Microsoft, JPMorgan Chase, and a tech company consortium that includes Alphabet, Meta, Shopify, and Stripe have all recently struck multimillion-dollar deals to pay paper mill owners to capture at least hundreds of thousands of tons of this greenhouse gas by installing carbon scrubbing equipment in their facilities.

The captured carbon dioxide will then be piped down into saline aquifers more than a mile underground, where it should be sequestered permanently.

Big Tech is suddenly betting big on this form of carbon removal, known as bioenergy with carbon capture and storage, or BECCS. The sector also includes biomass-fueled power plants, waste incinerators, and biofuel refineries that add carbon capturing equipment to their facilities.

Since trees and other plants absorb carbon dioxide through photosynthesis and these factories will trap emissions that would have gone into the air, together they can theoretically remove more greenhouse gas from the atmosphere than was released, achieving what’s known as “negative emissions.”

The companies that pay for this removal can apply that reduction in carbon dioxide to cancel out a share of their own corporate pollution. BECCS now accounts for nearly 70% of the announced contracts in carbon removal, a popularity due largely to the fact that it can be tacked onto industrial facilities already operating on large scales.

“If we’re balancing cost, time to market, and ultimate scale potential, BECCS offers a really attractive value proposition across all three of those,” says Brian Marrs, senior director of energy and carbon removal at Microsoft, which has become by far the largest buyer of carbon removal credits as it races to balance out its ongoing emissions by the end of the decade.

But experts have raised a number of concerns about various approaches to BECCS, stressing that these projects may inflate their claimed climate benefits, conflate prevented emissions with carbon removal, and extend the life of facilities that pollute in other ways. BECCS could also create greater financial incentives to log forests or convert them to agricultural land.

When greenhouse-gas sources and sinks are properly tallied across all the fields, forests, and factories involved, it’s highly difficult to achieve negative emissions with many approaches to BECCS, says Tim Searchinger, a senior research scholar at Princeton University. That undermines the logic of dedicating more of the world’s limited land, crops, and woods to such projects, he argues.

“I call it a ‘BECCS and switch,’” he says, adding later: “It’s folly at some level.”

The logic of BECCS

For a biomass-fueled power plant, BECCS works like this:

A tree captures carbon dioxide from the atmosphere as it grows, sequestering the carbon in its bark, trunk, branches, and roots while releasing the oxygen. Someone then cuts it down, converts it into wood pellets, and delivers it to a power plant that, in turn, burns the wood to produce heat or electricity.

Usually, that facility will produce carbon dioxide as the wood incinerates. But under both European Union and US rules, the burning of the wood is generally treated as carbon neutral, so long as the timber forests are managed in sustainable ways and the various operations abide by other regulations. The argument is that the tree pulled CO2 out of the air in the first place, and new plant growth will bring that emissions debt back into balance over time. 

If that same power plant now captures a significant share of the greenhouse gas produced in the process and pumps it underground, the process can potentially go from carbon neutral to carbon negative.

But the starting assumption that biomass is carbon neutral is fundamentally flawed, because it doesn’t fully take into account other ways that emissions are released throughout the process, according to Searchinger.

Among other things, a proper analysis must also ask: How much carbon is left behind in roots or branches on the forest floor that will begin to decompose and release greenhouse gases after the plant is removed? How much fossil fuel was burned in the process of cutting, collecting, and distributing the biomass? How much greenhouse gas was produced while converting timber into wood pellets and shipping them elsewhere? And how long will it take to grow back the trees or plants that would have otherwise continued capturing and storing carbon?

“If you’re harvesting wood, it’s essentially impossible to get negative emissions,” Searchinger says.
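The questions Searchinger raises amount to a net-emissions ledger: the carbon stored underground has to outweigh everything released along the way. A minimal sketch of that accounting, with entirely made-up placeholder figures (none of these numbers come from the article, and a real life-cycle analysis would measure each term for a specific forest and facility), might look like this:

```python
# Toy net-emissions ledger for a hypothetical BECCS project, in tons of CO2
# per ton of biomass burned. Every figure is an illustrative placeholder.
# Negative values are removals; positive values are emissions.

ledger = {
    "co2_captured_and_stored": -1.50,  # CO2 scrubbed and sequestered underground
    "capture_slippage":         0.30,  # CO2 that escapes the scrubber uncaptured
    "residue_decomposition":    0.25,  # roots and branches left to rot on the forest floor
    "harvest_and_transport":    0.10,  # diesel for cutting, collecting, and trucking
    "pelletizing_and_shipping": 0.15,  # energy to process and move the fuel
    "foregone_forest_growth":   0.40,  # carbon the standing trees would have kept absorbing
}

net = sum(ledger.values())
print(f"net emissions: {net:+.2f} t CO2")  # negative means genuine removal
```

With these placeholder values the project comes out only barely negative; nudge the supply-chain terms up slightly and it flips into a net source of emissions, which is the crux of the dispute over whether many BECCS projects truly remove carbon.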

Burning biomass, or the biofuels created from it, can also produce other forms of pollution that can harm human health, including particulate matter, volatile organic compounds, sulfur dioxide, and carbon monoxide.

Preventing carbon dioxide emissions at a given factory may necessitate capturing certain other pollutants as well, notably sulfur dioxide. But it doesn’t necessarily filter out all the other pollution floating out of the flue stack, notes Emily Grubert, an associate professor of sustainable energy policy at the University of Notre Dame who focuses on carbon management issues and the transition away from fossil fuels. 

Driving demand

The idea that we might be able to use biomass to generate energy and suck down carbon dates back decades. But as global temperatures and emissions both continued to rise, climate modelers found that more and more BECCS or other types of carbon removal would be needed to prevent the planet from tipping past increasingly dangerous warming thresholds.

In addition to dramatic cuts in emissions, the world may need to suck down 11 billion tons of carbon dioxide per year by 2050 and 20 billion by 2100 to limit warming to 2 °C over preindustrial levels, according to a 2022 UN climate panel report. That’s a threshold we’re increasingly likely to blow past.

These grave climate warnings sparked growing interest and investments in ways to draw carbon dioxide out of the atmosphere. Companies sprang up offering to sink seaweed, bury biomass, develop carbon-sucking direct air capture factories, and add alkaline substances to agricultural fields or the oceans. 

But BECCS purchases have dwarfed those other approaches.

For companies with fast-approaching climate deadlines, BECCS is one of the few options for removing hundreds of thousands of tons over the next few years, says Robert Höglund, who cofounded CDR.fyi, a public-benefit corporation that analyzes the carbon removal sector.

“If you have a target you want to meet in 2030 and you want durable carbon removal, that’s the thing you can buy,” he says.

That’s chiefly because these projects can harness the infrastructure of existing industries. At least for now, you don’t have to finance, permit, and develop new facilities.

“They’re not that hard to build, because it’s often a retrofitting of an existing plant,” Höglund says. 

BECCS is also substantially less expensive for buyers than, say, direct air capture, with weighted average prices of $210 a ton compared with $490 among the deals to date, according to CDR.fyi. That’s in part because capturing the carbon dioxide from, say, a pulp and paper mill, where it makes up around 15% of flue gas, takes far less energy than plucking CO2 molecules out of the open air, where they account for just 0.04%.
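That concentration gap maps directly onto an energy gap. As a back-of-envelope illustration (these figures are a thermodynamic sketch, not drawn from the deals themselves), the ideal minimum work needed to separate CO2 from a gas stream grows as the gas becomes more dilute, roughly as RT·ln(1/x) per mole, where x is the CO2 mole fraction:

```python
import math

R = 8.314      # J/(mol·K), universal gas constant
T = 298.0      # K, roughly ambient temperature
M_CO2 = 44.01  # g/mol, molar mass of CO2

def min_separation_work_mj_per_ton(mole_fraction):
    """Ideal-gas lower bound on the work to pull CO2 out of a stream in
    which it makes up `mole_fraction` of the gas, in MJ per ton captured.
    Real capture plants require several times this thermodynamic minimum."""
    joules_per_mol = R * T * math.log(1.0 / mole_fraction)
    mols_per_ton = 1e6 / M_CO2  # grams per ton / grams per mole
    return joules_per_mol * mols_per_ton / 1e6

flue = min_separation_work_mj_per_ton(0.15)    # pulp-mill flue gas, ~15% CO2
air = min_separation_work_mj_per_ton(0.0004)   # open air, ~0.04% CO2
print(f"flue gas: {flue:.0f} MJ/t, air: {air:.0f} MJ/t, ratio: {air / flue:.1f}x")
```

Even this idealized floor is roughly four times higher for open air than for flue gas; real-world cost differences also reflect equipment, energy sources, and scale, but the thermodynamics explain why concentrated streams are the cheaper starting point.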

Microsoft’s big BECCS bet

In 2020, Microsoft announced plans to become carbon negative by the end of this decade and, by midcentury, to remove all the emissions the company generated directly and from electricity use throughout its corporate history. 

It’s leaning particularly heavily on BECCS to meet those climate commitments, with the category accounting for 76% of its known carbon removal purchases to date.

In April, the company announced it would purchase 3.7 million tons of carbon dioxide that a paper and pulp mill, located at some unspecified site in the southern US, will eventually capture and store over a 12-year period. It reached the deal through CO280, a startup based in Vancouver, British Columbia, that is forming joint ventures with paper and pulp mill companies in the US and Canada to finance, develop, and operate the projects.

It was the biggest carbon removal purchase on record—until four days later, when Microsoft revealed it had agreed to buy 6.75 million tons of carbon removal from AtmosClear, CDR.fyi noted. That company is building a biomass power plant at the Port of Greater Baton Rouge in Louisiana, which will run largely on sugarcane bagasse (a by-product of sugar production) and forest trimmings. AtmosClear says the facility will be able to capture 680,000 tons of carbon dioxide per year.

“What we’ve seen is a lot of these BECCS projects have been very helpful, if not transformational, for providing investment in rural economies,” Marrs says. “We look at our BECCS deals, in Louisiana with AtmosClear and some other Gulf State providers, like CO280, as a real means of helping support these economies, while at the same time promoting sustainable forestry practices.”

In earlier quarters, Microsoft also made substantial purchases from Orsted, which operates power plants that burn wood pellets; Gaia, which runs facilities that convert municipal waste into energy; and Arbor, whose plants are fueled by “overgrown brush, crop residues, and food waste.” 

Don’t let waste go to waste

Notably, at least three of these projects rely on some form of waste, a category distinct from fresh-cut timber or crops grown for the purpose of fueling BECCS projects. Solid waste, agricultural residues, logging leftovers, and plant material removed from forests to prevent fires present some of the ripest opportunities for BECCS—as well as some difficult questions of carbon accounting.

A 2019 report from the National Academy of Sciences estimated that the US could achieve more than 500 million tons of carbon removal a year through BECCS by 2040, while the world could exceed 3.5 billion tons, by relying just on agricultural by-products, logging residues, and organic waste—without needing to grow crops dedicated to energy.

Roger Aines, chief scientist of the energy program at Lawrence Livermore National Laboratory, argues we should at least be putting these sources to use rather than burning them or leaving them to decompose in fields. (Aines coauthored a similar analysis focused on California’s waste biomass and contributed to a 2022 lab report prepared for Microsoft to evaluate costs and options for carbon removal purchases.)

He stresses that the BECCS sector can learn a lot from using that waste material. For example, it should help to provide a sharper sense of whether the carbon math will work if more land, forests, and crops are dedicated to these sorts of purposes.

“The point is you won’t grow new material to do this in most cases, and won’t have to for a very long time, because there’s so much waste available,” Aines says. “If we get to that point, long into the future, we can address that then.”

Wonky accounting

But the critical question that emerges with waste is: Would it otherwise have been burned or allowed to decompose, or might some of it have been used in some other way that kept the carbon out of the atmosphere? 

Sugarcane bagasse, for instance, is or could also be used to produce recyclable packaging and paper, biodegradable food packaging and cutlery, building materials, or soil amendments that add nutrients back to agricultural fields.

“A lot of the time those materials are being used for something else already, so the accounting gets wonky really quickly,” Grubert says. 

Some fear that the financial incentives to pursue BECCS could also compel companies to trim away more trees and plants than is truly necessary to, say, manage forests or prevent fires—particularly as more and more BECCS plants create greater and greater demand for the limited supplies of such materials.

“Once you start capturing waste, you create an incentive to produce waste, so you have to be very careful about the perverse incentives,” says Danny Cullenward, a researcher and senior fellow at the Kleinman Center for Energy Policy at the University of Pennsylvania who studies carbon markets.

Due diligence 

Like other big tech companies, Microsoft has lost some momentum when it comes to its climate goals, in large part because of the surging energy demands of its AI data centers. 

But the company has generally earned a reputation for striving to clean up its direct emissions where possible and for seeking out high-quality approaches to carbon removal. It has consulted extensively with critically minded researchers at advisory firms like Carbon Direct and demonstrated a willingness to pay higher prices to support more credible projects.

Marrs says the company has extended that scrutiny to its BECCS deals.

“We want as much positive environmental impact as possible from every project,” he says.

“We’re doing months and months of technical due diligence with teams that visit the site, that interview stakeholders, that produce a report for us that we go through in depth with a third-party engineering provider or technical perspective provider,” he adds.

In a follow-up statement, Microsoft stressed that it strives to validate that every BECCS project it supports will achieve negative emissions, whatever the fuel source.

“Across all of these projects, we conducted substantial due diligence to ensure that BECCS feedstocks would otherwise return carbon to the atmosphere in a few years,” the company said. 

Likewise, Jonathan Rhone, the cofounder and chief executive of CO280, stresses that they’ve worked with consultants, carbon market registries, and pulp and paper mills “to make sure we’re adopting the best standards.” He says they strive to conservatively assess the release and uptake of greenhouse gases across the supply chain of the mills they work with, taking into account the type of biomass used by a given plant, the growth rate of the forests it’s harvested from, the distance trucks drive to ship the timber or sawmill residues, the total emissions of the facility, and more.

Rhone says CO280’s typical projects will capture and store on the order of 850,000 to 900,000 tons of carbon dioxide per year. What share of a plant’s total emissions that represents will vary, based in part on how much of the facility’s energy comes from by-product biomatter and how much comes from fossil fuels. For its first projects, the company will aim to capture 50% to 65% of the CO2 emissions at the pulp and paper mills, but it eventually hopes to exceed 90%.

In a follow-up email, Rhone said the carbon capture equipment at the mills it works with will also prevent “substantial levels” of particulate matter and sulfur dioxide emissions and might reduce emissions of other pollutants as well.

The company is in active discussions with 10 pulp and paper mills in the Gulf Coast and Canada. Each carbon capture and storage project could cost hundreds of millions of dollars. 

“What we’re trying to do at CO280 is show and demonstrate that we can create a stable, repeatable playbook for developing projects that are low risk and provide the market with what it wants, with what it needs,” Rhone says. 

Proponents of BECCS say we could leverage biomass to deliver substantial volumes of carbon removal, so long as appropriate industry standards are put in place to prevent, or at least minimize, bad behavior.

The question is whether that will be the case—or whether, as the BECCS sector matures, it will veer closer to the pattern of carbon offset markets. 

Studies and investigations have consistently shown that loosely regulated or poorly designed carbon credit and offset programs have allowed, if not invited, companies to significantly exaggerate the climate benefits of tree planting, forest preservation, and similar projects. 

“It appears to me to be something that will be manageable but that we’ll always have to keep an eye on,” Aines says. 

Magic

Even with all these carbon accounting complexities, BECCS projects can often deliver climate benefits, particularly for existing plants.

Adding carbon capture to an operating paper and pulp mill, power plant, or refinery is at least an improvement over the status quo from a climate perspective, insofar as it prevents emissions that would otherwise have continued.

But ambitions for BECCS are already growing beyond existing plants: Last year Drax, the controversial UK power giant, announced plans to launch a Houston-based division tasked with developing enough new BECCS projects to deliver 6 million tons of carbon removal per year, in the US or elsewhere.

Numerous other companies have also built or proposed biomass power plants in recent years, with or without carbon capture systems—decisions driven in part by policies that classify them as carbon neutral.

But if biomass isn’t carbon neutral, as Searchinger and others argue it can’t be in many applications, then these new unfiltered power plants are just adding more emissions to the atmosphere—and BECCS projects aren’t drawing any out of the air. And if that’s the case, it raises tough questions about corporate climate claims that depend on its doing so and the societal trade-offs involved in building lots of new plants dedicated to these purposes.

That’s because crops grown for energy require land, fertilizer, insecticides, and human labor that might otherwise go toward producing food for an expanding global population. And greater demand for wood invites the timber industry to chop down more and more of the world’s forests, which are already sucking up and storing away vast amounts of carbon dioxide and providing homes for immense varieties of plants and animals.

If these projects are merely preventing greenhouse gas from floating into the atmosphere but not drawing any down, we’re better off adding carbon capture and storage (CCS) equipment to an existing natural-gas plant instead, Searchinger argues.

Companies may think that harnessing nature to draw carbon dioxide out of the sky sounds better than cutting the emissions of a fossil-fuel turbine. But the electricity from the latter plant would cost dramatically less, the carbon capture system would reduce emissions more for the same amount of energy generated, and it would avoid the added pressures to cut down trees, he says.

“People think some magic happens—this magic combination of using biomass and CCS creates something bigger than its parts,” Searchinger says. “But it’s not magic; it’s simply the sum of the two.”

The quest to find out how our bodies react to extreme temperatures

It’s the 25th of June and I’m shivering in my lab-issued underwear in Fort Worth, Texas. Libby Cowgill, an anthropologist in a furry parka, has wheeled me and my cot into a metal-walled room set to 40 °F. A loud fan pummels me from above and siphons the dregs of my body heat through the cot’s mesh from below. A large respirator fits snug over my nose and mouth. The device tracks carbon dioxide in my exhales—a proxy for how my metabolism speeds up or slows down throughout the experiment. Eventually Cowgill will remove my respirator to slip a wire-thin metal temperature probe several pointy inches into my nose.

Cowgill and a graduate student quietly observe me from the corner of their so-called “climate chamber.” Just a few hours earlier I’d sat beside them to observe as another volunteer, a 24-year-old personal trainer, endured the cold. Every few minutes, they measured his skin temperature with a thermal camera, his core temperature with a wireless pill, and his blood pressure and other metrics that hinted at how his body handles extreme cold. He lasted almost an hour without shivering; when my turn comes, I shiver aggressively on the cot for nearly an hour straight.

I’m visiting Texas to learn about this experiment on how different bodies respond to extreme climates. “What’s the record for fastest to shiver so far?” I jokingly ask Cowgill as she tapes biosensing devices to my chest and legs. After I exit the cold, she surprises me: “You, believe it or not, were not the worst person we’ve ever seen.”

Climate change forces us to reckon with the knotty science of how our bodies interact with the environment.

Cowgill is a 40-something anthropologist at the University of Missouri who powerlifts and teaches CrossFit in her spare time. She’s small and strong, with dark bangs and geometric tattoos. Since 2022, she’s spent the summers at the University of North Texas Health Science Center tending to these uncomfortable experiments. Her team hopes to revamp the science of thermoregulation. 

While we know in broad strokes how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. “We have the general picture. We don’t have a lot of the specifics for vulnerable groups,” says Kristie Ebi, an epidemiologist with the University of Washington who has studied heat and health for over 30 years. “How does thermoregulation work if you’ve got heart disease?” 

“Epidemiologists have particular tools that they’re applying for this question,” Ebi continues. “But we do need more answers from other disciplines.”

Climate change is subjecting vulnerable people to temperatures that push their limits. In 2023, an estimated 47,000 heat-related deaths occurred in Europe. Researchers estimate that climate change could add an extra 2.3 million European heat deaths this century. That’s heightened the stakes for solving the mystery of just what happens to bodies in extreme conditions.

Extreme temperatures already threaten large stretches of the world. Populations across the Middle East, Asia, and sub-Saharan Africa regularly face highs beyond widely accepted levels of human heat tolerance. Swaths of the southern US, northern Europe, and Asia now also endure unprecedented lows: The 2021 Texas freeze killed at least 246 people, and a 2023 polar vortex sank temperatures in China’s northernmost city to a hypothermic record of –63.4 °F.

This change is here, and more is coming. Climate scientists predict that limiting emissions can prevent lethal extremes from encroaching elsewhere. But if emissions continue on their current course, fierce heat and even cold will reach deeper into every continent. About 2.5 billion people in the world’s hottest places don’t have air-conditioning. When people do have it, running it can make outdoor temperatures even worse, intensifying the heat island effect in dense cities. And neither AC nor radiators are much help when heat waves and cold snaps capsize the power grid.

Thermal images show a volunteer holding up peace signs, a hand, and a foot during the extreme-temperature tests.

COURTESY OF MAX G. LEVY

Through experiments like Cowgill’s, researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. 

Embodied change

Archaeologists have known for some time that we once braved colder temperatures than anyone previously imagined. Humans pushed into Eurasia and North America well before the last glacial period ended about 11,700 years ago. We were the only hominins to make it out of this era. Neanderthals, Denisovans, and Homo floresiensis all went extinct. We don’t know for certain what killed those species. But we do know that humans survived thanks to protection from clothing, large social networks, and physiological flexibility. Human resilience to extreme temperature is baked into our bodies, behavior, and genetic code. We wouldn’t be here without it. 

“Our bodies are constantly in communication with the environment,” says Cara Ocobock, an anthropologist at the University of Notre Dame who studies how we expend energy in extreme conditions. She has worked closely with Finnish reindeer herders and Wyoming mountaineers. 

But the relationship between bodies and temperature is surprisingly still a mystery to scientists. In 1847, the anatomist Carl Bergmann observed that animal species grow larger in cold climates. The zoologist Joel Asaph Allen noted in 1877 that cold-dwellers had shorter appendages. Then there’s the nose thing: In the 1920s, the British anthropologist Arthur Thomson theorized that people in cold places have relatively long, narrow noses, the better to heat and humidify the air they take in. These theories stemmed from observations of animals like bears and foxes, and others that followed stemmed from studies comparing the bodies of cold-accustomed Indigenous populations with white male control groups. Some, like those having to do with optimization of surface area, do make sense: It seems reasonable that a tall, thin body increases the amount of skin available to dump excess heat. The problem is, scientists have never actually tested this stuff in humans. 

Some of what we know about temperature tolerance thus far comes from century-old race science or assumptions that anatomy controls everything. But science has evolved. Biology has matured. Childhood experiences, lifestyles, fat cells, and wonky biochemical feedback loops all contribute to a picture of the body as more malleable than previously imagined. And that’s prompting researchers to change how they study it.

“If you take someone who’s super long and lanky and lean and put them in a cold climate, are they gonna burn more calories to stay warm than somebody who’s short and broad?” Ocobock says. “No one’s looked at that.”

Ocobock and Cowgill teamed up with Scott Maddux and Elizabeth Cho at the Center for Anatomical Sciences at the University of North Texas Health Science Center at Fort Worth. All four are biological anthropologists who have also puzzled over whether the rules Bergmann, Allen, and Thomson proposed are actually true.

For the past four years, the team has been studying how factors like metabolism, fat, sweat, blood flow, and personal history control thermoregulation. 

Your native climate, for example, may influence how you handle temperature extremes. In a unique study of mortality statistics from 1980s Milan, Italians raised in warm southern Italy were more likely to survive heat waves in the northern part of the country. 

Similar trends have appeared in cold climes. Researchers often measure cold tolerance by a person’s “brown adipose,” a type of fat that is specialized for generating heat (unlike white fat, which primarily stores energy). Brown fat is a cold adaptation because it delivers heat without the mechanism of shivering. Studies have linked it to living in cold climates, particularly at young ages. Wouter van Marken Lichtenbelt, the physiologist at Maastricht University who with colleagues discovered brown fat in adults, has shown that this tissue can further activate with cold exposure and even help regulate blood sugar and influence how the body burns other fat. 

That adaptability served as an early clue for the Texas team. They want to know how a person’s response to hot and cold correlates with height, weight, and body shape. What is the difference, Maddux asks, between “a male who’s 6 foot 6 and weighs 240 pounds” and someone else in the same environment “who’s 4 foot 10 and weighs 89 pounds”? But the team also wondered if shape was only part of the story. 

Their multi-year experiment uses tools that anthropologists couldn’t have imagined a century ago—devices that track metabolism in real time and analyze genetics. Each participant gets a CT scan (measuring body shape), a DEXA scan (estimating percentages of fat and muscle), high-resolution 3D scans, and DNA analysis from saliva to examine ancestry genetically. 

Volunteers lie on a cot in underwear, as I did, for about 45 minutes in each climate condition, all on separate days. There’s dry cold, around 40 °F, akin to braving a walk-in refrigerator. Then dry heat and humid heat: 112 °F with 15% humidity and 98 °F with 85% humidity. They call it “going to Vegas” and “going to Houston,” says Cowgill. The chamber session is long enough to measure an effect, but short enough to be safe. 

Before I traveled to Texas, Cowgill told me she suspected the old rules would fall. Studies linking temperature tolerance to race and ethnicity, for example, seemed tenuous because biological anthropologists today reject the concept of distinct races. It’s a false premise, she told me: “No one in biological anthropology would argue that human beings do not vary across the globe—that’s obvious to anyone with eyes. [But] you can’t draw sharp borders around populations.” 

She added, “I think there’s a substantial possibility that we spend four years testing this and find out that really, limb length, body mass, surface area […] are not the primary things that are predicting how well you do in cold and heat.” 

Adaptable to a degree

In July 1995, a week-long heat wave pushed Chicago above 100 °F, killing roughly 500 people. Thirty years later, Ollie Jay, a physiologist at the University of Sydney, can duplicate the conditions of that exceptionally humid heat wave in a climate chamber at his laboratory. 

“We can simulate the Chicago heat wave of ’95. The Paris heat wave of 2003. The heat wave [in early July of this year] in Europe,” Jay says. “As long as we’ve got the temperature and humidity information, we can re-create those conditions.”

“Everybody has quite an intimate experience of feeling hot, so we’ve got 8 billion experts on how to keep cool,” he says. Yet our internal sense of when heat turns deadly is unreliable. Even professional athletes overseen by experienced medics have died after missing dangerous warning signs. And little research has been done to explore how vulnerable populations such as elderly people, those with heart disease, and low-income communities with limited access to cooling respond to extreme heat. 

Jay’s team researches the most effective strategies for surviving it. He lambastes air-conditioning, saying it demands so much energy that it can aggravate climate change in “a vicious cycle.” Instead, he has monitored people’s vital signs while they use fans and skin mists to endure three hours in humid and dry heat. In results published last year, his research found that fans reduced cardiovascular strain by 86% for people with heart disease in the type of humid heat familiar in Chicago. 

Dry heat was a different story. In that simulation, fans not only didn’t help but actually doubled the rate at which core temperatures rose in healthy older people.

Heat kills. But not without a fight. Your body must keep its internal temperature within a narrow window, less than two degrees on either side of roughly 98 °F. The simple fact that you’re alive means you are producing heat. Your body needs to export that heat without amassing much more. The nervous system relaxes the narrow blood vessels along your skin. Your heart rate increases, propelling more warm blood to your extremities and away from your organs. You sweat. And when that sweat evaporates, it carries a torrent of body heat away with it.
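That “torrent” is easy to quantify. Evaporating sweat absorbs roughly 2.43 megajoules per kilogram, so a short sketch (the heat loads below are illustrative ballpark figures, not measurements from the experiment) shows how much sweat must actually evaporate to export a given heat load:

```python
LATENT_HEAT = 2.43e6  # J/kg, approx. heat absorbed when sweat evaporates from skin

def sweat_kg_per_hour(heat_load_watts):
    """Mass of sweat that must fully evaporate each hour to carry away
    the given heat load. Sweat that drips off removes almost no heat."""
    return heat_load_watts * 3600 / LATENT_HEAT

resting = sweat_kg_per_hour(100)   # resting metabolic heat: roughly 0.15 kg/h
laboring = sweat_kg_per_hour(500)  # hard work in the heat: roughly 0.74 kg/h
```

The arithmetic also makes humid heat’s danger concrete: when the air is too saturated for sweat to evaporate, the body loses its single biggest channel for exporting heat, no matter how much it sweats.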

This thermoregulatory response can be trained. Studies by van Marken Lichtenbelt have shown that exposure to mild heat increases sweat capacity, decreases blood pressure, and lowers resting heart rate. Long-term studies of Finnish sauna use suggest similar correlations.

The body may adapt protectively to cold, too. In this case, body heat is your lifeline. Shivering and exercise help keep bodies warm. So can clothing. Cardiovascular deaths are thought to spike in cold weather. But people more adapted to cold seem better able to reroute their blood flow in ways that keep their organs warm without dropping their temperature too many degrees in their extremities. 

Earlier this year, the biological anthropologist Stephanie B. Levy (no relation) reported that New Yorkers who experienced lower average temperatures had more productive brown fat, adding evidence for the idea that the inner workings of our bodies adjust to the climate throughout the year and perhaps even throughout our lives. “Do our bodies hold a biological memory of past seasons?” Levy wonders. “That’s still an open question. There’s some work in rodent models to suggest that that’s the case.”

Although people clearly acclimatize with enough strenuous exposures to either cold or heat, Jay says, “you reach a ceiling.” Consider sweat: Heat exposure can increase the amount you sweat only until your skin is completely saturated. It’s a nonnegotiable physical limit. Any additional sweat just means leaking water without carrying away any more heat. “I’ve heard people say we’ll just find a way of evolving out of this—we’ll biologically adapt,” Jay says. “Unless we’re completely changing our body shape, then that’s not going to happen.”

And body shape may not even sway thermoregulation as much as previously believed. The subject I observed, a personal trainer, appeared outwardly adapted for cold: his broad shoulders didn’t even fit in a single CT scan image. Cowgill supposed that this muscle mass insulated him. When he emerged from his session in the 40 °F environment, though, he had finally started shivering—intensely. The researchers covered him in a heated blanket. He continued shivering. Driving to lunch over an hour later in a hot car, he still mentioned feeling cold. An hour after that, a finger prick drew no blood, a sign that blood vessels in his extremities remained constricted. His body temperature fell about half a degree C in the cold session—a significant drop—and his wider build did not appear to shield him from the cold as well as my involuntary shivering protected me. 

I asked Cowgill if perhaps there is no such thing as being uniquely predisposed to hot or cold. “Absolutely,” she said. 

A hot mess

So if body shape doesn’t tell us much about how a person maintains body temperature, and acclimation also runs into limits, then how do we determine how hot is too hot? 

In 2010, two climate change researchers, Steven Sherwood and Matthew Huber, argued that regions around the world become uninhabitable at wet-bulb temperatures of 35 °C, or 95 °F. (Wet-bulb measurements are a way to combine air temperature and relative humidity.) Above 35 °C, a person simply wouldn’t be able to dissipate heat quickly enough. But it turns out that their estimate was too optimistic.

Researchers “ran with” that number for a decade, says Daniel Vecellio, a bioclimatologist at the University of Nebraska, Omaha. “But the number had never been actually empirically tested.” In 2021 a Pennsylvania State University physiologist, W. Larry Kenney, worked with Vecellio and others to test wet-bulb limits in a climate chamber. Kenney’s lab investigates which combinations of temperature, humidity, and time push a person’s body over the edge. 

Not long after, the researchers came up with their own wet-bulb limit of human tolerance: below 31 °C in warm, humid conditions for the youngest cohort, people in their thermoregulatory prime. Their research suggests that a day reaching 98 °F and 65% humidity, for example, poses danger in a matter of hours, even for healthy people. 
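Wet-bulb temperature has no simple closed form, but one widely used empirical fit, Stull’s 2011 approximation, estimates it from air temperature and relative humidity alone. A minimal sketch, using the example above of a 98 °F (36.7 °C) day at 65% humidity:

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%) via Stull's 2011 empirical fit, which is valid
    roughly for humidities of 5-99% and temperatures of -20 to 50 deg C."""
    T, rh = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(T + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# A 98 F (36.7 C) day at 65% relative humidity
print(round(wet_bulb_stull(36.7, 65), 1))  # ≈ 30.9 C
```

The result lands essentially at the 31 °C limit the researchers measured for healthy young adults, which is why such a day becomes dangerous within hours.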

Cowgill and her colleagues Elizabeth Cho (top) and Scott Maddux prepare graduate student Joanna Bui for a “room-temperature test.”

JUSTIN CLEMONS

In 2023, Vecellio and Huber teamed up, combining the growing arsenal of lab data with state-of-the-art climate simulations to predict where heat and humidity most threatened global populations: first the Middle East and South Asia, then sub-Saharan Africa and eastern China. And assuming that warming reaches 3 to 4 °C over preindustrial levels this century, as predicted, parts of North America, South America, and northern and central Australia will be next. 

Last June, Vecellio, Huber, and Kenney published a joint article, “Why not 35 °C?,” revising the limit Huber had co-proposed in 2010 and explaining why human limits have turned out to be lower than expected. Those initial estimates overlooked the fact that our skin temperature can quickly jump above 101 °F in hot weather, for example, making it harder to dump internal heat.

The Penn State team has published deep dives on how heat tolerance changes with sex and age. Older participants’ wet-bulb limits wound up being even lower—between 27 and 28 °C in warm, humid conditions—and varied more from person to person than they did in young people. “The conditions that we experience now—especially here in North America and Europe, places like that—are well below the limits that we found in our research,” Vecellio says. “We know that heat kills now.”  

What this fast-growing body of research suggests, Vecellio stresses, is that you can’t define heat risk by just one or two numbers. Last year, he and researchers at Arizona State University pulled up the hottest 10% of hours between 2005 and 2020 for each of 96 US cities. They wanted to compare recent heat-health research with historical weather data for a new perspective: How frequently is it so hot that people’s bodies can’t compensate for it? Over 88% of those “hot hours” met that criterion for people in full sun. In the shade, most of those heat waves became meaningfully less dangerous. 

“There’s really almost no one who ‘needs’ to die in a heat wave,” says Ebi, the epidemiologist. “We have the tools. We have the understanding. Essentially all [those] deaths are preventable.”

More than a number

A year after visiting Texas, I called Cowgill to hear what she was thinking after four summers of chamber experiments. She told me that the only rule about hot and cold she currently stands behind is … well, none.

She recalled a recent participant—the smallest man in the study, weighing 114 pounds. “He shivered like a leaf on a tree,” Cowgill says. Normally, a strong shiverer warms up quickly. Core temperature may even climb a little. “This [guy] was just shivering and shivering and shivering and not getting any warmer,” she says. She doesn’t know why this happened. “Every time I think I get a picture of what’s going on in there, we’ll have one person come in and just kind of be a complete exception to the rule,” she says, adding that you can’t just gloss over how much human bodies vary inside and out.

The same messiness complicates physiology studies. 

Jay aims to embrace these bodily complexities by improving physiological simulations of heat and the human strain it causes. He’s piloted studies that use a person’s activity level and type of clothing to predict core temperature, dehydration, and cardiovascular strain at a given level of heat. One can then estimate the person’s risk on the basis of factors like age and health. He’s also working on physiological models to identify vulnerable groups, inform early-warning systems ahead of heat waves, and possibly advise cities on whether interventions like fans and mists can help protect residents. “Heat is an all-of-society issue,” Ebi says. Officials could better prepare the public for cold snaps this way too.

“Death is not the only thing we’re concerned about,” Jay adds. Extreme temperatures drive illness and strain hospital systems: “There’s all these community-level impacts that we’re just completely missing.”

Climate change forces us to reckon with the knotty science of how our bodies interact with the environment. Predicting the health effects is a big and messy matter. 

The first wave of answers from Fort Worth will materialize next year. The researchers will analyze thermal images to crunch data on brown fat. They’ll test whether, as Cowgill suspects, body shape sways temperature tolerance less than previously assumed. “Human variation is the rule,” she says, “not the exception.”

Max G. Levy is an independent journalist who writes about chemistry, public health, and the environment.

AI is changing how we quantify pain

For years at Orchard Care Homes, a 23‑facility dementia-care chain in northern England, Cheryl Baird watched nurses fill out the Abbey Pain Scale, an observational method for evaluating pain in people who can’t communicate verbally. Baird, a former nurse who was then the chain’s director of quality, describes it as “a tick‑box exercise where people weren’t truly considering pain indicators.”

As a result, agitated residents were assumed to have behavioral issues, since the scale does not always differentiate well between pain and other forms of suffering or distress. They were often prescribed psychotropic sedatives, while the pain itself went untreated.

Then, in January 2021, Orchard Care Homes began a trial of PainChek, a smartphone app that scans a resident’s face for microscopic muscle movements and uses artificial intelligence to output an expected pain score. Within weeks, the pilot unit saw fewer prescriptions and had calmer corridors. “We immediately saw the benefits: ease of use, accuracy, and identifying pain that wouldn’t have been spotted using the old scale,” Baird recalls.

This kind of technology-assisted diagnosis hints at a bigger trend. In nursing homes, neonatal units, and ICU wards, researchers are racing to turn pain—medicine’s most subjective vital sign—into something a camera or sensor can score as reliably as blood pressure. The push has already produced PainChek, which has been cleared by regulators on three continents and has logged more than 10 million pain assessments. Other startups are beginning to make similar inroads in care settings.

The way we assess pain may finally be shifting, but when algorithms measure our suffering, does that change the way we understand and treat it?

Science already understands certain aspects of pain. We know that when you stub your toe, for example, microscopic alarm bells called nociceptors send electrical impulses toward your spinal cord on “express” wires, delivering the first stab of pain, while a slower convoy follows with the dull throb that lingers. At the spinal cord, the signal meets a microscopic switchboard scientists call the gate. Flood that gate with friendly touches—say, by rubbing the bruise—or let the brain return an instruction born of panic or calm, and the gate might muffle or magnify the message before you even become aware of it.

The gate can either let pain signals pass through or block them, depending on other nerve activity and instructions from your brain. Only the signals that succeed in getting past this gate travel up to your brain’s sensory map to help locate the damage, while others branch out to emotion centers that decide how bad it feels. Within milliseconds, those same hubs in the brain shoot fresh orders back down the line, releasing built-in painkillers or stoking the alarm. In other words, pain isn’t a straightforward translation of damage or sensation but a live negotiation between the body and the brain.

But much of how that negotiation plays out is still a mystery. For instance, scientists cannot predict what causes someone to slip from a routine injury into years-long hypersensitivity; the molecular shift from acute to chronic pain is still largely unknown. Phantom-limb pain remains equally puzzling: About two-thirds of amputees feel agony in a part of their body that no longer exists, yet competing theories—cortical remapping, peripheral neuromas, body-schema mismatch—do not explain why they suffer while the other third feel nothing.

The first serious attempt at a system for quantifying pain was introduced in 1921. Patients marked their degree of pain as a point on a blank 10‑centimeter line, and clinicians scored the distance in millimeters, converting lived experience into a 0–100 ladder. By 1975, psychologist Ronald Melzack’s McGill Pain Questionnaire offered 78 adjectives like “burning,” “stabbing,” and “throbbing,” so that pain’s texture could join intensity in the chart. Over the past few decades, hospitals have settled on the 0–10 Numeric Rating Scale.

Yet pain is stubbornly subjective. Feedback from the brain in the form of your reaction can send instructions back down the spinal cord, meaning that expectation and emotion can change how much the same injury hurts. In one trial, volunteers who believed they had received a pain relief cream reported a stimulus as 22% less painful than those who knew the cream was inactive—and a functional magnetic resonance image of their brains showed that the drop corresponded with decreased activity in the parts of the brain that report pain, meaning they really did feel less hurt.

What’s more, pain can also be affected by a slew of external factors. In one study, experimenters applied the same calibrated electrical stimulus to volunteers from Italy, Sweden, and Saudi Arabia, and the ratings varied dramatically. Italian women recorded the highest scores on the 0–10 scale, while Swedish and Saudi participants judged the identical burn several points lower, implying that culture can amplify or dampen the felt intensity of the same experience.

Bias inside the clinic can drive different responses even to the same pain score. A 2024 analysis of discharge notes found that women’s scores were recorded 10% less often than men’s. At a large pediatric emergency department, Black children presenting with limb fractures were roughly 39% less likely to receive an opioid analgesic than their white non-Hispanic peers, even after the researchers controlled for pain score and other clinical factors. Together these studies make clear that an “8 out of 10” does not always result in the same reaction or treatment. And many patients cannot self-report their pain at all—for example, a review of bedside studies concludes that about 70% of intensive-care patients have pain that goes unrecognized or undertreated, a problem the authors link to their impaired communication due to sedation or intubation.

These issues have prompted a search for a better, more objective way to understand and assess pain. Progress in artificial intelligence has brought a new dimension to that hunt.

Research groups are pursuing two broad routes. The first listens underneath the skin. Electrophysiologists strap electrode nets to volunteers and look for neural signatures that rise and fall with administered stimuli. A 2024 machine-learning study reported that one such algorithm could tell with over 80% accuracy, using a few minutes of resting-state EEG, which subjects experienced chronic pain and which were pain-free control participants. Other researchers combine EEG with galvanic skin response and heart-rate variability, hoping a multisignal “pain fingerprint” will provide more robust measurements.

One example of this method is the PMD-200 patient monitor from Medasense, which uses AI-based tools to output pain scores. The device uses physiological patterns like heart rate, sweating, or peripheral temperature changes as the input and focuses on surgical patients, with the goal of helping anesthesiologists adjust doses during operations. In a 2022 study of 75 patients undergoing major abdominal surgery, use of the monitor resulted in lower self-reported pain scores after the operation—a median score of 3 out of 10, versus 5 out of 10 in controls—without an increase in opioid use. The device is authorized by the US Food and Drug Administration and is in use in the United States, the European Union, Canada, and elsewhere.

The second path is behavioral. A grimace, a guarded posture, or a sharp intake of breath correlates with various levels of pain. Computer-vision teams have fed high-speed video of patients’ changing expressions into neural networks trained on the Facial Action Coding System (FACS), which was introduced in the late 1970s with the goal of creating an objective and universal system to analyze such expressions—it’s the Rosetta stone of 44 facial micro-movements. In lab tests, those models can flag frames indicating pain from the data set with over 90% accuracy, edging close to the consistency of expert human assessors. Similar approaches mine posture and even sentence fragments in clinical notes, using natural-language processing, to spot phrases like “curling knees to chest” that often correlate with high pain.

PainChek is one of these behavioral models, and it acts like a camera‑based thermometer, but for pain: A care worker opens the app and holds a phone 30 centimeters from a person’s face. For three seconds, a neural network looks for nine particular microscopic movements—upper‑lip raise, brow pinch, cheek tension, and so on—that research has linked most strongly to pain. Then the screen flashes a score of 0 to 42. “There’s a catalogue of ‘action‑unit codes’—facial expressions common to all humans. Nine of those are associated with pain,” explains Kreshnik Hoti, a senior research scientist with PainChek and a co-inventor of the device. This system is built directly on the foundation of FACS. After the scan, the app walks the user through a yes‑or‑no checklist of other signs, like groaning, “guarding,” and sleep disruption, and stores the result on a cloud dashboard that can show trends.

Linking the scan to a human‑filled checklist was, Hoti admits, a late design choice. “Initially, we thought AI should automate everything, but now we see [that] hybrid use—AI plus human input—is our major strength,” he says. Care aides, not nurses, complete most assessments, freeing clinicians to act on the data rather than gather it.
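The hybrid design described above — a handful of AI-detected facial action units plus a caregiver checklist, combined into a single 0–42 total — can be illustrated schematically. This is a hypothetical sketch, not PainChek’s actual code or item catalogue; the item names are illustrative stand-ins, and each observed sign simply contributes one point to the total.

```python
# Hypothetical sketch of a hybrid pain score: AI-detected facial action
# units plus a caregiver-completed checklist, each binary item worth one
# point. Item names are illustrative, not PainChek's actual catalogue.

FACIAL_ACTION_UNITS = [
    "upper_lip_raise", "brow_pinch", "cheek_tension",
    # ...further AI-detected movements would round out the nine
]

CHECKLIST_ITEMS = ["groaning", "guarding", "sleep_disruption"]

def hybrid_pain_score(detected_units: set, checklist: dict) -> int:
    """Sum binary observations into one score (higher = more pain)."""
    au_points = sum(1 for unit in FACIAL_ACTION_UNITS if unit in detected_units)
    checklist_points = sum(1 for present in checklist.values() if present)
    return au_points + checklist_points

score = hybrid_pain_score(
    detected_units={"brow_pinch", "cheek_tension"},
    checklist={"groaning": True, "guarding": False, "sleep_disruption": True},
)
print(score)  # 4: two facial units plus two checklist signs
```

The point of the design is visible even in this toy form: the automated scan and the human checklist contribute on equal footing, so sloppy answers on either side shift the final number.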

PainChek was cleared by Australia’s Therapeutic Goods Administration in 2017, and national rollout funding from Canberra helped embed it in hundreds of nursing homes in the country. The system has also won authorization in the UK—where expansion began just before covid-19 started spreading and resumed as lockdowns eased—and in Canada and New Zealand, which are running pilot programs. In the US, it’s currently awaiting an FDA decision. Company‑wide data show “about a 25% drop in antipsychotic use and, in Scotland, a 42% reduction in falls,” Hoti says.

PainChek is a mobile app that estimates pain scores by applying artificial intelligence to facial scans.
COURTESY OF PAINCHEK

Orchard Care Homes is one of its early adopters. Baird remembers a pre‑AI routine that was largely done “to prove compliance,” she says.

PainChek added an algorithm to that workflow, and the hybrid approach has paid off. Orchard’s internal study of four care homes tracked monthly pain scores, behavioral incidents, and prescriptions. Within weeks, psychotropic scripts fell and residents’ behavior calmed. The ripple effects went beyond pharmacy tallies. Residents who had skipped meals because of undetected dental pain “began eating again,” Baird notes, and “those who were isolated due to pain began socializing.”

Inside Orchard facilities, a cultural shift is underway. When Baird trained new staff, she likened pain “to measuring blood pressure or oxygen,” she says. “We wouldn’t guess those, so why guess pain?” The analogy lands, but getting people fully on board is still a slog. Some nurses insist their clinical judgment is enough; others balk at another login and audit trail. “The sector has been slow to adopt technology, but it’s changing,” Baird says. That’s helped by the fact that administering a full Abbey Pain Scale takes 20 minutes, while a PainChek scan and checklist take less than five.

Engineers at PainChek are now adapting the code for the very youngest patients. PainChek Infant targets babies under one year, whose grimaces flicker faster than adults’. The algorithm, retrained on neonatal faces, detects six validated facial action units based on the well-established Baby Facial Action Coding System. PainChek Infant is starting limited testing in Australia while the company pursues a separate regulatory pathway.

Skeptics raise familiar red flags about these devices. Facial‑analysis AI has a history of skin‑tone bias, for example. Facial analysis may also misread grimaces stemming from nausea or fear. The tool is only as good as the yes‑or‑no answers that follow the scan; sloppy data entry can skew results in either direction. Results lack the broader clinical and interpersonal context a caregiver is likely to have from interacting with individual patients regularly and understanding their medical history. It’s also possible that clinicians might defer too strongly to the algorithm, over-relying on outside judgment and eroding their own.

If PainChek is approved by the FDA this fall, it will be part of a broader effort to create a system of new pain measurement technology. Other startups are pitching EEG headbands for neuropathic pain, galvanic skin sensors that flag breakthrough cancer pain, and even language models that comb nursing notes for evidence of hidden distress. Still, quantifying pain with an external device could be rife with hidden issues, like bias or inaccuracies, that we will uncover only after significant use.

For Baird, the issue is fairly straightforward nonetheless. “I’ve lived with chronic pain and had a hard time getting people to believe me. [PainChek] would have made a huge difference,” she says. If artificial intelligence can give silent sufferers a numerical voice—and make clinicians listen—then adding one more line to the vital‑sign chart might be worth the screen time.

Deena Mousa is a researcher, grantmaker, and journalist focused on global health, economic development, and scientific and technological progress.

Mousa is employed as lead researcher by Open Philanthropy, a funder and adviser focused on high-impact causes, including global health and the potential risks posed by AI. The research team investigates new causes of focus and is not involved in work related to pain management. Mousa has not been involved with any grants related to pain management, although Open Philanthropy has funded research in this area in the past.

Future-proofing business capabilities with AI technologies

Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.

“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation,” says Manasi Vartak, chief AI architect at Cloudera.

Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale. 

At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment with, adopt, and adapt these tools for their own needs.

That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.

“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says Eddie Kim, principal advisor of AI and modern data strategy at Amazon Web Services.

Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative. 

The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: Big Tech’s carbon removals plans, and the next wave of nuclear reactors

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Big Tech’s big bet on a controversial carbon removal tactic

Microsoft, JP MorganChase, and a tech company consortium that includes Alphabet, Meta, Shopify, and Stripe have all recently struck multimillion-dollar deals to pay paper mill owners to capture at least hundreds of thousands of tons of carbon dioxide by installing carbon scrubbing equipment in their facilities.

The captured carbon dioxide will then be piped down into saline aquifers more than a mile underground, where it should be sequestered permanently.

Big Tech is suddenly betting big on this form of carbon removal, known as bioenergy with carbon capture and storage, or BECCS. But experts have raised a number of concerns. Read the full story.

—James Temple

2025 climate tech companies to watch: Kairos Power and its next-generation nuclear reactors

Like many new nuclear startups, Kairos promises a path to reliable, 24/7 decarbonized power. Unlike most, it already has prototypes under construction and permits for several reactors.

The company uses molten salt to cool its reactions and transfer heat, rather than the high-pressure water that’s used in existing fission reactors. It hopes its technology will enable commercial reactors that are cost-competitive with natural gas plants and boast safer operation than conventional reactors, even in the event of complete power loss. Read the full story.

—Mark Harris

Kairos Power is one of our 10 climate tech companies to watch—our annual list of some of the most promising climate tech firms on the planet. Check out the rest of the list here.

MIT Technology Review Narrated: Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them?

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ChatGPT will start talking dirty to verified adults 
The chatbot is getting a new erotica function as part of OpenAI’s bid to “safely relax” its restrictions. (The Verge)
+ The company has created its own wellness council to inform its decisions. (Ars Technica)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

2 A secret surveillance empire tracked thousands of people across the world
The European-led First Wap has operated covertly for more than two decades. (Mother Jones)
+ The group ran at least 10 scam compounds across the country. (Wired $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

3 YouTube ran Israel-funded ads claiming there was food in famine-struck Gaza
And allowed them to remain online even after complaints from multiple government authorities. (WP $)
+ Companies have denied they’re involved in rebuilding Gaza. (Wired $)

4 Instagram wants to become a more teen-friendly space
It’s bringing in new age-gating measures inspired by the PG-13 movie rating. (NBC News)
+ The policy will also extend to its chatbots. (NYT $)

5 A massive Cambodia-based pig butchering scheme has been foiled
It’s the biggest forfeiture action the US Department of Justice has ever pursued. (CNBC)

6 Waymo’s driverless taxis are coming to London
From next year, it says pedestrians will be able to hail its robotaxis. (WSJ $)

7 Black patients were failed by a race-based medical calculation
It delayed their access to life-saving kidney transplants. (The Markup)
+ A woman in the US is the third person to receive a gene-edited pig kidney. (MIT Technology Review)

8 AI flood forecasting is helping farmers across the world
Nonprofits are using it to deliver early aid. (Rest of World)

9 A man with paralysis can feel objects through another person’s hand
Thanks to a new brain implant. (New Scientist $)
+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

10 Tech internships are alive and well 
Despite all the AI angst. (Insider $)

Quote of the day

“You made ChatGPT ‘pretty restrictive’? Really. Is that why it has been recommending kids harm and kill themselves?”

—Josh Hawley, US Senator for Missouri, reacts to the news that OpenAI is planning to loosen its restrictions, in a post on X.

One more thing

Why we should thank pigeons for our AI breakthroughs

People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner’s research with pigeons in the middle of the 20th century.

Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings.

His “behaviorist” theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI. Read the full story.

—Ben Crair

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love the sound of Grateful Fishing TV—starring two fishermen who just love hanging out and frying some fish. Truly wholesome stuff (thanks to Chino Moreno via Perfectly Imperfect for the recommendation!)
+ Rest in power D’Angelo, your timeless tunes will live on.
+ If you’re into stress-watches, this list is full of anxiety-inducing classics.
+ One of the world’s longest dinosaur superhighways has been uncovered in a sleepy part of England.

New Ecommerce Tools: October 15, 2025

This week’s rundown includes products and services for AI-powered advertising, email marketing, AI agents for shopping and B2B, social commerce, one-click payments, cryptocurrencies, personalization platforms, and ecommerce deliveries.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

GoDaddy brings AI-powered ads to entrepreneurs. GoDaddy has expanded the digital ads feature on its Airo platform to English-language markets including Ireland, Malaysia, New Zealand, Pakistan, the Philippines, Singapore, South Africa, and the United Arab Emirates. GoDaddy Airo drafts persuasive ad copy, selects relevant keywords based on the offering, and structures the campaign. Users can launch campaigns and track performance all within the digital ads dashboard.

Visa introduces Trusted Agent Protocol for AI commerce. Visa has unveiled Trusted Agent Protocol, a foundational framework for agentic commerce that enables secure transaction communication between AI agents and merchants. According to Visa, Trusted Agent Protocol addresses consumer challenges of searching, comparing, and paying by identifying trusted agents with commerce intent and distinguishing them from malicious bots. Trusted Agent Protocol is available in the Visa Developer Center and GitHub.

Adobe introduces AI Agents for B2B businesses. Adobe has released AI agents for B2B sales and marketing teams, simplifying buying cycles and leveraging engagement insights to make informed decisions. Adobe Experience Platform Agent Orchestrator powers the agents — Audience, Journey, and Data Insights — which surface in enterprise apps such as Adobe’s Journey Optimizer B2B Edition and its Customer Journey Analytics B2B Edition.

Intuit Mailchimp unveils new marketing tools for holidays. Intuit Mailchimp has released new features for improved Shopify integration, smarter segmentation tools, advanced ecommerce analytics, global and multi-audience SMS capabilities, and a refreshed email template library. The updated Shopify integration unlocks deeper behavioral insights and new triggers, such as product views, checkout started, page views, and search terms. Additional capabilities include single-use Shopify discount codes and expanded segmentation.

Commercetools previews Cora, an AI shopping companion. Commercetools, an ecommerce platform, has announced a preview of Cora, an AI-native and multimodal shopping companion. Cora shows how enterprises can deliver human-like continuity across web, mobile, WhatsApp, and other channels. According to Commercetools, shoppers can begin a journey on one device and continue it on another without losing context or progress, giving enterprises a branded companion they control.

Walmart partners with OpenAI on shopping experiences. Walmart has announced a partnership with OpenAI, allowing customers to shop at Walmart through ChatGPT using Instant Checkout. Walmart is empowering employees with AI tools, training, and literacy through OpenAI Certifications, and it is rolling out ChatGPT Enterprise to teams across the company.

Ecommerce delivery platform Veho expands in U.S. markets. Veho, a parcel delivery platform, has expanded its capacity by 50% in select U.S. markets, including Philadelphia, Indianapolis, and Atlanta, and has grown in other areas by expanding its own network or collaborating with third-party logistics providers (3PLs). Veho now has a second distribution center in 10 of its markets.

Checkout.com launches Flow Remember Me, a one-click payment tool. Checkout.com, a payments provider, has launched Flow Remember Me, an extension of its basic Flow offering that allows shoppers to save their card details once and then use them across Checkout.com’s global network of merchants. Flow offers customizable, ready-to-deploy payment components, enabling 35 payment methods through a single integration with Checkout.com’s network in 194 countries.

Shipping platform Shippo launches TikTok Shop integration. Shippo, a shipping platform for ecommerce businesses, now integrates with TikTok Shop. The integration automatically imports TikTok Shop orders into Shippo, creates discounted shipping labels, and syncs tracking information back to TikTok. The integration allows merchants to meet TikTok’s 2-business-day dispatch requirement with discounted rates, select carriers, and live support, alongside orders from Shopify, Amazon, and Walmart, all in a single dashboard.

Canopy Management acquires Area 6 Marketing for D2C brands. Canopy Management, a marketing agency for sellers on Amazon, Walmart, and TikTok, has announced its acquisition of Area 6 Marketing, an agency specializing in growing Shopify and Amazon brands through Meta and Google advertising. Canopy Management says clients will benefit from Area 6’s expertise in scaling D2C brands across fashion, beauty, health, and wellness verticals.

Splitit launches BNPL partner program for agentic commerce. Splitit, an installment payments developer, has launched an invite-only program that brings buy-now, pay-later (BNPL) capabilities to agentic shopping. Splitit states that its Agentic Commerce Partner Program brings card-linked installment capabilities to autonomous shopping agents, those that search, recommend, and purchase on behalf of consumers. Registered AI agents can request real-time installment options directly within merchant checkout flows. Splitit’s new program aligns with emerging industry frameworks such as Google’s AP2 and OpenAI’s Agentic Commerce Protocol.

Ordoro and Zing partner to help merchants build fast and ship smarter. Ordoro, a platform for ecommerce operations, has partnered with Zing, a developer. Per the companies, Zing helps brands launch stores and mobile apps, while Ordoro powers inventory, shipping, and order management. The partnership aims to help merchants grow and scale.

Block debuts Square Bitcoin payment and wallet tools for local businesses. Block, owner of the Square point-of-sale platform, has introduced Square Bitcoin, an integrated payments and wallet tool. A component, Bitcoin Conversions, enables sellers to automatically convert a percentage of their daily card sales into bitcoin. Bitcoin Wallet, built natively into Square, lets sellers manage bitcoin alongside their finances, with buy, sell, hold, and withdrawal functionality integrated into the dashboard.

Nosto introduces Huginn, an AI agent for its personalization platform. Nosto, a user experience platform, has launched Huginn, an AI agent built to transform how brands run digital commerce. Huginn resides in Nosto’s core Commerce Experience Platform, designed to reduce manual work, accelerate execution, and deliver growth. According to Nosto, Huginn coordinates a network of trained, specialized AI agents that suggest and execute actions, providing insights, actions, search, contextual shopping, product suggestions, and more.

Bing Supports data-nosnippet For Search Snippets & AI Answers via @sejournal, @MattGSouthern

Bing now supports the data-nosnippet HTML attribute, giving websites more precise control over what appears in search snippets and AI-generated answers.

The attribute lets you exclude specific page sections from Bing Search results and Copilot while keeping the page indexed.

Content marked with data-nosnippet remains eligible to rank but will not surface in previews.

What’s New

data-nosnippet can be applied to any HTML element you want to keep out of previews.

When Bing crawls your site, marked sections are discoverable but are omitted from snippet text and AI summaries.

Bing highlights common use cases:

  • Keep paywalled or premium content out of previews
  • Reduce exposure of user comments or reviews in AI answers
  • Hide legal boilerplate, disclaimers, and cookie notices
  • Suppress outdated notices and expired promotions
  • Exclude sponsored blurbs and affiliate disclaimers from neutral previews
  • Avoid A/B test noise by hiding variant copy during experiments
  • Emphasize high-value content while keeping sensitive parts behind the click

Implementation is straightforward. Add the attribute to any element:

<div data-nosnippet>
  <h2>Subscriber Content</h2>
  <p>This section will not appear in Bing Search or Copilot answers.</p>
</div>

After adding it, you can verify changes in Bing Webmaster Tools with URL inspection. Depending on crawl timing, updates may appear within seconds or take up to a week.

How It Compares To Other Directives

data-nosnippet complements page-level directives.

  • noindex removes a page from the index
  • nosnippet blocks all text and preview thumbnails
  • max-snippet, max-image-preview, and max-video-preview cap preview length or size

Unlike those page-wide controls, data-nosnippet targets specific sections for finer control.
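
To make the contrast concrete, here is a sketch of how page-level and element-level controls can combine on one page. The element names and copy are illustrative, not taken from Bing’s announcement:

```html
<head>
  <!-- Page-level robots directives: cap preview length and image size -->
  <meta name="robots" content="max-snippet:150, max-image-preview:standard">
</head>
<body>
  <p>This editorial copy can still appear in snippets, up to the page-wide cap.</p>
  <!-- Element-level control: this block stays indexed but never surfaces in previews -->
  <section data-nosnippet>
    <p>Legal boilerplate and disclaimers kept out of snippets and AI answers.</p>
  </section>
</body>
```

The meta tag applies to everything on the page; data-nosnippet carves out specific regions from whatever preview is generated.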

Why This Matters

If you run a subscription site, you can keep subscriber-only passages out of previews without sacrificing indexation.

For pages with user-generated content, you can prevent comments or reviews from appearing in AI summaries while leaving your editorial copy visible.

In short, it lets websites exclude specific sections from search snippets and maintain ranking potential.

Looking Ahead

The attribute is available now. Consider adding it to pages where preview control matters most, then confirm behavior in Bing Webmaster Tools.



Google Says It Surfaces More Video, Forums, And UGC via @sejournal, @MattGSouthern

Google says it has adjusted rankings to surface more short-form video, forums, and user-generated content in response to how people search.

Liz Reid, VP and head of Google Search, discussed the changes in a Wall Street Journal Bold Names podcast interview.

What Reid Said

Reid described a shift in where people go for certain questions, especially among younger users:

“There’s a behavioral shift that is happening in conjunction with the move to AI, and that is a shift of who people are going to for a set of questions. And they are going to short-form video, they are going to forums, they are going to user-generated content a lot more than traditional sites.”

She added:

“We do have to respond to who users want to hear from. We are in the business of both giving them high quality information but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.”

To illustrate the behavior change, she gave a lifestyle example:

“Where are you getting your cooking? Are you getting your cooking recipes from a newspaper? Are you getting your cooking recipes from YouTube?”

Reid also highlighted a pattern with search updates:

“One of the things that’s always true about Google Search is that you make changes and there are winners and losers. That’s true on any ranking update.”

Ads And Query Mix

Reid said the impact of AI Overviews on ads is offset by people running more searches overall:

“The revenue with AI Overviews has been relatively stable… some queries may get less clicks on ads, but also it grows overall queries so people do more searches. And so those two things end up balancing out.”

She noted most queries have no ads:

“Most queries don’t have any ads at all… that query is sort of unaffected by ads.”

Reid also described how lowering friction (e.g., Lens, multi-page answers via AI Overviews) increases total searches.

Attribution & Personalization

Reid highlighted work on link prominence and loyal-reader connections:

“We’ve started doing more with inline links that allows you to say according to so-and-so with a big link for whoever the so-and-so is… building both the brand, as well as the click through.”

Quality Signals & Low-Value Content

On quality and spam posture:

“We’ve… expanded beyond this concept of spam to sort of low-value content.”

She said richer, deeper material tends to drive the clicks from AI experiences.

How Google Tests Changes

Asked whether there is a “push” as well as a “pull,” Reid described the evaluate-and-learn loop:

“You take feedback from what you hear from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

Why This Matters

In certain cases, your pages may face increased competition from forum threads and short videos.

That means improvements in quality and technical SEO alone might not fully account for traffic fluctuations if the distribution of formats has changed.

If hit by a Google update, teams should examine where visibility decreases and identify which query types are impacted. From there, determine if competing results have shifted to forum threads or short videos.

Open Questions

Reid didn’t provide timing for when the adjustments began or metrics indicating how much weighting changed.

It’s unclear which categories are most affected or whether the impact will expand further.

Looking Ahead

Reid’s comments confirm that Google has adjusted ranking to reflect evolving user behavior.

Given this, it makes sense to consider creating complementary formats like short videos while continuing to invest in in-depth expertise where traditional pages still win.



Google’s John Mueller Flags SEO Issues In Vibe Coded Website via @sejournal, @MattGSouthern

Google Search Advocate John Mueller provided detailed technical SEO feedback to a developer on Reddit who vibe-coded a website in two days and launched it on Product Hunt.

The developer posted in r/vibecoding that they built a Bento Grid Generator for personal use, published it on Product Hunt, and received over 90 upvotes within two hours.

Mueller responded with specific technical issues affecting the site’s search visibility.

Mueller wrote:

“I love seeing vibe-coded sites, it’s cool to see new folks make useful & self-contained things for the web, I hope it works for you.

This is just a handful of the things I noticed here. I’ve seen similar things across many vibe-coded sites, so perhaps this is useful for others too.”

Mueller’s Technical Feedback

Mueller identified multiple issues with the site.

The homepage stores key content in an llms.txt JavaScript file. Mueller noted that Google doesn’t use this file, and he’s not aware of other search engines using it either.

Mueller wrote:

“Generally speaking, your homepage should have everything that people and bots need to understand what your site is about, what the value of your service / app / site is.”

He recommended adding a popup-welcome-div in the HTML that includes this information, making it immediately available to bots.

For meta tags, Mueller said the site only needs title and description tags. The keywords, author, and robots meta tags provide no SEO benefit.

The site includes hreflang tags despite having just one language version. Mueller said these aren’t necessary for single-language sites.
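
Taken together, that advice reduces to a lean document head. Here is a minimal sketch; the description text is invented for illustration, not taken from the actual site:

```html
<head>
  <meta charset="utf-8">
  <!-- The only two meta-level tags Mueller says the site needs for SEO: -->
  <title>Bento Grid Generator</title>
  <meta name="description" content="A free tool for building bento-style grid layouts.">
  <!-- Omitted: keywords, author, and robots meta tags (no SEO benefit),
       and hreflang tags (unnecessary on a single-language site). -->
</head>
```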

Mueller flagged the JSON-LD structured data as ineffective, noting:

“Check out Google’s ‘Structured data markup that Google Search supports’ for the types supported by Google. I don’t think anyone else supports your structured data.”

He called the hidden h1 and h2 tags “cheap & useless.” Mueller suggested using a visible, dismissable banner in the HTML instead.

The robots.txt file contains unnecessary directives. Mueller recommended skipping the sitemap if it’s just one page.
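
In the same spirit, a robots.txt for a single-page site can be nearly empty. A minimal sketch, assuming the site has nothing it needs to block:

```
# Allow all crawlers; an empty Disallow permits everything.
User-agent: *
Disallow:

# No Sitemap line: per Mueller, a sitemap is unnecessary for a one-page site.
```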

Mueller suggested adding the domain to Search Console and making it easier for visitors to understand what the app or site does.

Setting Expectations

Mueller closed his feedback with realistic expectations about the impact of technical SEO fixes.

He said:

“Will you automatically get tons of traffic from just doing these things? No, definitely not. However, it makes it easier for search engines to understand your site, so that they could be sending you traffic from search.”

He noted that implementing these changes now sets you up for success later.

Mueller added:

“Doing these things sets you up well, so that you can focus more on the content & functionality, without needing to rework everything later on.”

The Vibe Coding Trade-Off

This exchange highlights a tension with vibe coding and search visibility.

The developer built a functional product that generated immediate user engagement. The site works, looks polished, and achieved success on Product Hunt within hours.

None of the flagged issues affects user experience. But every implementation choice Mueller criticized shares the same characteristic: it works for visitors while providing nothing to search engines.

Sites built for rapid launch can achieve product success without search visibility. But the technical debt adds up.

The fixes aren’t too challenging, but they require addressing issues that seemed fine when the goal was to ship fast rather than rank well.



Google’s AI Mode SEO Impact | AI Mode User Behavior Study [Part 2] via @sejournal, @Kevin_Indig

Last week, I shared the largest usability study of AI Mode, and it revealed how users interact with the new search surface:

They focus on the AI Mode text first 88% of the time, ignore link icons, and rarely click out.

This week, for Part 2, I’m covering what’s measurable, what’s guesswork, and what’s possibly next for visibility, trust, and monetization in AI Mode.

If you have questions about the study methodology or initial findings, make sure to check out What Our AI Mode User Behavior Study Reveals about the Future of Search to get up to speed.

Because this week, we’re jumping right in.

Which AI Mode Elements Can You “Optimize” For?

Before we dive into additional findings that I didn’t have room to cover last week, first, we need to get on the same page about your brand’s visibility opportunities in AI Mode.

There are a few distinct visibility opportunities, each with different functions:

  • Inline text links or inline links: A hyperlink directly in the AI Mode output copy that opens a feature in the right side panel for user exploration; extremely rarely, an AI Mode inline text link may open an external page in a new tab.
  • Link icons: Grey link icon that displays citations in the right sidebar.
  • Citation listings side panel/sidebar: List of external links (with an image thumbnail) the AI Mode is sourcing from; appears in the right column. The link icon “shuffles” this list when clicked.
  • Shopping packs: These appear similar to shopping carousels within classic organic search, and they occur in the left panel within the AI Mode text output.
  • Local packs: These are similar to the local packs paired with the embedded map within classic organic search, and they occur in the left panel within the AI Mode text output (very similar to the Shopping packs above).
  • Merchant card: Once a selection is made in the shopping pack, it opens a merchant card for further inspection.
  • Google Business Profile (GBP) Card: Appears on the right when a merchant card from a local pack is clicked, and opens for further inspection.
  • Map embed: Embedded local map displaying solutions to the prompt/search need in the area.

Our AI Mode usability study collected data from 37 participants across seven specific search tasks, resulting in 250 unique tasks that provided robust insight into how people navigate the different elements within AI Mode.

The data showed that some of these visibility opportunities are more valuable than others, and it might not be the ones you think.

Let me level with you: I will not pretend I have the answers to exactly how you can earn appearance in each of the above AI Mode visibility opportunities (yet – I’m studying this intently as AI Mode rolls out globally across my clients and user adoption increases).

I would argue that none of us have enough data – at least, as of right now – to give exact plays and tactics to earn reliable, recurring visibility in new AI-chat-based search systems.

But what I can tell you is that high-quality, holistic SEO and brand authority practices have influence on AIO and AI Mode visibility outcomes.

Brand Trust Is The No. 1 Influence Factor In AI Mode

If it feels like I’ve been repeating this over the past few months – that brand trust and authority matter more than ever in AI Mode and AI Overviews – it’s because it’s true and underrated.

Similar to the UX study of AI Overviews I published in May 2025, the AI Mode study I published last week also confirms:

If AI Mode is a game of influence, then trust has the biggest impact on user decisions.

Your goal is to ensure your brand is (1) trusted by your target audience and (2) visible in AI Mode output text.

I’ll explain.

Study participants took on the following seven tasks:

  1. What do people say about Liquid Death, the beverage company? Do their drinks appeal to you?
  2. Imagine you’re going to buy a sleep tracker and the only two available are the Oura Ring 3 or the Apple Watch 9. Which would you choose, and why?
  3. You’re getting insights about the perks of a Ramp credit card vs. a Brex Card for small businesses. Which one seems better? What would make a business switch from another card: fee detail, eligibility fine print, or rewards?
  4. In the “Ask Anything” box in AI Mode, enter “Help me purchase a waterproof canvas bag.” Select one that best fits your needs and you would buy (for example, a camera bag, tote bag, duffel bag, etc.).
    • Proceed to the seller’s page. Click to add to the shopping cart and complete this task without going further.
  5. Compare subscription language apps to free language apps. Would you pay, and in what situation? Which product would you choose?
  6. Suppose you are visiting a friend in a large city and want to go to either: 1. A virtual reality arcade OR 2. A smart home showroom. What’s the name of the city you’re visiting?
  7. Suppose you work at a small desk and your cables are a mess. In the “Ask anything” box in AI Mode, enter: “The device cables are cluttering up my desk space. What can I buy today to help?” Then choose the one product you think would be the best solution. Put it in the shopping cart on the external website and end this task.

Look at these quotes from users as they made shopping decisions:

“If I were to choose one, I would probably just choose Duolingo just because I’ve used it. … I’m not too certain about the others.”

“Okay, we’re going with REI, that’s a good brand.”

“I don’t know the brand … that’s why I’m hesitant.”

“I trust Rosetta Stone more.”

Unless we’re talking about utility goods (like cables), where users decide by price and availability, brand makes a huge difference.

Participants’ reactions were strongly shaped by how familiar they were with the product and how complex it seemed.

With simple, familiar items like cable organizers or canvas bags, people could lean on prior knowledge and make choices confidently, even when AI Mode wasn’t perfectly clear.

But with less familiar or more abstract categories – like Liquid Death, language apps, or Ramp vs. Brex – user hesitation spiked, and participants often defaulted to a brand they already recognized.


Our AI Mode usability study showed that when brand familiarity is absent, shoppers default to marketplaces – or they keep reading the output.

Speaking of continuing to read through the AI Mode output, the overwhelming majority of tasks (221 out of 248, ~89%) showed AI Mode text as the first thing participants noticed and engaged with.

This cannot be stressed enough.

It suggests the AI Mode output text itself is by far the most attention-grabbing entry point, ahead of any visual elements.

Inline Text Links Beat Link Icons

Recently, VP Product Search at Google, Robby Stein, said on X:

“We’ve found that people really prefer and are more likely to click links that are embedded within AI Mode responses, when they have more context on what they’re clicking and where they want to dig deeper.”

We can validate why Google made this choice with data.

But before you dive in below, here’s some additional context:

  • The inline text links are what we call the actual URL hyperlinks within the AI Mode copy, which is what Robby Stein is referring to above in his quote.
  • The grey link icon users hover over is what we call (in this study) the link icon.
  • The rich snippet on the right side of AI mode is what we refer to as the side panel or sidebar.

We found that inline text links draw about 27% more clicks than the right side panel of citations.

Inline links are within the copy or claim users are trying to verify, while the link icons feel detached and demand a context switch of sorts. People aren’t used to clicking on icons vs. text or a button for navigation.


This is notable because if Google were to adopt inline links as the default, it could raise the number of click-outs in AI Mode.

The biggest takeaway from this?

Getting a citation/inclusion within a link icon isn’t as valuable as an inline text link in the body of AI Mode.

It’s important to mention this, because many SEOs/marketers could assume that getting some kind of visibility within the link icon citations is valuable for our brands or clients.

Of course, I’d argue that any hard-won organic visibility is worth something in this era of search. But this usability study indicates that inclusion in a link icon citation likely has no real impact on visitors, so correcting this assumption among our industry – and our clients – is wise.

Local Packs, Maps, And GBP Cards Need More Data

Another interesting find?

Only 9.6% of valid tasks performed by study participants showed a Local Pack, and the Google Business Profile (GBP) card was effectively absent in nearly all test scenarios.

Only 3% of search tasks for the study showed a GBP card presence in any form.


Most notably: Though not always present, GBP cards played a curious and important role in driving on-SERP engagement. Users tended to scan them quickly, but also click them often.

Their presence appears to compete effectively with external links and merchant cards, which were used much less in the same contexts.

While the user behavior observed here is valid and notable enough for sharing, we must acknowledge that only one search task in the study had a specific localized or geographical intent.

More data is needed to solidify behavioral patterns across search tasks with geographical intent, and SEOs can also take into account that well-optimized GBP cards would be incredibly valuable based on high engagement with that feature.

Ecommerce SEOs Rest Easy: Shopping Tasks Take The External Clicks

In last week’s memo, I highlighted the following:

Clicks are rare and mostly transactional. The median number of external clicks per task was zero. Yep. You read that right. Ze-ro. And 77.6% of sessions had zero external visits.

Here, I’m going to expand on that finding. It’s more nuanced than “users rarely click at all.”

External clicks depend on whether tasks are transactional or non-transactional. When the search task was shopping-related, the chance of an external click was 100%.

Shopping Packs appeared in 26% of tasks within this study, and when they did, participants clicked them in 34 of 65 instances.


Keep in mind, study participants were directed to take all steps to move through a shopping selection, including making a decision on an item and adding it to cart – just like a high-purchase-intent user would outside of the study environment.

However, when the search task was informational and non-transactional, the number of external clicks to sources outside the AI Mode output was nearly zero across all tasks in this study.

There were common sequences to user behavior when the search task was shopping-related:

  • Shopping Pack clicked → Merchant Card pop-up opened (occurrence: 28 times).
  • Inline Text Link clicked → Merchant Card pop-up opened (occurrence: 17 times).
  • Right panel clicked only (occurrence: 15 times).

Shopping packs are popular elements that people click on when they want to buy. Remember, clicking on one item or product in a pack brings up the detailed view of that one item (a Merchant Card).

One logical reason? They have images (common UX wisdom says people click where there’s an image).

Questions Are The New Search Habit – And Reveal An Interesting Behavior Pattern

It’s no mystery that users have increasingly embraced conversational search since the advent of ChatGPT & Co.

This AI Mode study verified this once again, but the data also surfaced an interesting finding.

Out of 250 tasks, 88.8% of the prompts were framed as AI chatbot queries, or conversational prompts, while 11.2% resembled search-style queries, like classic search keywords. However, it’s important to note that we only analyzed each user’s initial query, not subsequent follow-ups.

This validates that users are overwhelmingly leaning toward conversational (chatbot-like) interactions rather than the “search-like” phrasing of the past.

But here’s the unusual pattern we spotted in the data:

Users who phrased queries conversationally were much more likely to click out to external websites.

This is interesting because this behavior pattern may also mean “experienced” AI-based search or AI chat users click more to validate or explore information.

This is one hypothesis of why this pattern occurs. Another idea?

If a user takes the time to write a question, they are more careful in their approach to finding information, and therefore, they also want to look outside the “walled garden” of AI mode. This behavior could influence any search personas you develop for your brand.

Our data did not point to an entirely clear reason why the longer conversational phrasing was correlated to a higher likelihood of external website clicks, but it’s noteworthy nonetheless.

