The data center boom in the desert

In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city.

Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities within the Tahoe Reno Industrial Center, a business park bigger than the city of Detroit. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


Meanwhile, Microsoft acquired more than 225 acres of undeveloped property within the center and an even larger plot in nearby Silver Springs, Nevada. Apple is expanding its data center, located just across the Truckee River from the industrial park. OpenAI has said it’s considering building a data center in Nevada as well.

The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert—just far enough away from Nevada’s communities to elude wide notice and, some fear, adequate scrutiny. 

Switch, a data center company based in Las Vegas, says the full build-out of its campus at the Tahoe Reno Industrial Center could exceed seven million square feet.
EMILY NAJERA

The full scale and potential environmental impacts of the developments aren’t known, because the footprint, energy needs, and water requirements are often closely guarded corporate secrets. Most of the companies didn’t respond to inquiries from MIT Technology Review, or declined to provide additional information about the projects. 

But there’s “a whole lot of construction going on,” says Kris Thompson, who served as the longtime project manager for the industrial center before stepping down late last year. “The last number I heard was 13 million square feet under construction right now, which is massive.”

Indeed, it’s the equivalent of almost five Empire State Buildings laid out flat. In addition, public filings from NV Energy, the state’s near-monopoly utility, reveal that a dozen data-center projects, mostly in this area, have requested nearly six gigawatts of electricity capacity within the next decade. 

That would make the greater Reno area—the biggest little city in the world—one of the largest data-center markets around the globe.

It would also require expanding the state’s power sector by about 40%, all for a single industry in an explosive growth stage that may, or may not, prove sustainable. The energy needs, in turn, suggest those projects could consume billions of gallons of water per year, according to an analysis conducted for this story. 

Construction crews are busy building data centers throughout the Tahoe Reno Industrial Center.
EMILY NAJERA

The build-out of a dense cluster of energy- and water-hungry data centers in a small stretch of the nation’s driest state, where climate change is driving up temperatures faster than anywhere else in the country, has begun to raise alarms among water experts, environmental groups, and residents. That includes members of the Pyramid Lake Paiute Tribe, whose namesake water body lies within their reservation and marks the end point of the Truckee River, the region’s main source of water.

Much of Nevada has suffered through severe drought conditions for years, farmers and communities are drawing down many of the state’s groundwater reservoirs faster than they can be refilled, and global warming is sucking more and more moisture out of the region’s streams, shrubs, and soils.

“Telling entities that they can come in and stick more straws in the ground for data centers is raising a lot of questions about sound management,” says Kyle Roerink, executive director of the Great Basin Water Network, a nonprofit that works to protect water resources throughout Nevada and Utah. 

“We just don’t want to be in a situation where the tail is wagging the dog,” he later added, “where this demand for data centers is driving water policy.”

Luring data centers

In the late 1850s, the mountains southeast of Reno began enticing prospectors from across the country, who hoped to strike silver or gold in the famed Comstock Lode. But Storey County had few residents or economic prospects by the late 1990s, around the time when Don Roger Norman, a media-shy real estate speculator, spotted a new opportunity in the sagebrush-covered hills. 

He began buying up tens of thousands of acres of land for tens of millions of dollars and lining up development approvals to lure industrial projects to what became the Tahoe Reno Industrial Center. His partners included Lance Gilman, a cowboy-hat-wearing real estate broker, who later bought the nearby Mustang Ranch brothel and won a seat as a county commissioner.

In 1999, the county passed an ordinance that preapproves companies to develop most types of commercial and industrial projects across the business park, cutting months to years off the development process. That helped cinch deals with a flock of tenants looking to build big projects fast, including Walmart, Tesla, and Redwood Materials. Now the promise of fast permits is helping to draw data centers by the gigawatt.

On a clear, cool January afternoon, Brian Armon, a commercial real estate broker who leads the industrial practices group at NAI Alliance, takes me on a tour of the projects around the region, which mostly entails driving around the business center.

Lance Gilman, a local real estate broker, helped to develop the Tahoe Reno Industrial Center and land some of its largest tenants.
GREGG SEGAL

After pulling off Interstate 80 onto USA Parkway, he points out the cranes, earthmovers, and riprap foundations, where a variety of data centers are under construction. Deeper into the industrial park, Armon pulls up near Switch’s long, low, arched-roof facility, which sits on a terrace above cement walls and security gates. The Las Vegas–based company says the first phase of its data center campus encompasses more than a million square feet, and that the full build-out will cover seven times that space. 

Over the next hill, we turn around in Google’s parking lot. Cranes, tents, framing, and construction equipment extend behind the company’s existing data center, filling much of the 1,210-acre lot that the search engine giant acquired in 2017.

Last August, during an event at the University of Nevada, Reno, the company announced it would spend $400 million to expand the data center campus along with another one in Las Vegas.

Thompson says that the development company, Tahoe Reno Industrial LLC, has now sold off every parcel of developable land within the park (although several lots are available for resale following the failed gamble of one crypto tenant).

When I ask Armon what’s attracting all the data centers here, he starts with the fast approvals but cites a list of other lures as well: The inexpensive land. NV Energy’s willingness to strike deals to supply relatively low-cost electricity. Cool nighttime and winter temperatures, as far as American deserts go, which reduce the energy and water needs. The proximity to tech hubs such as Silicon Valley, which cuts latency for applications in which milliseconds matter. And the lack of natural disasters that could shut down the facilities, at least for the most part.

“We are high in seismic activity,” he says. “But everything else is good. We’re not going to have a tornado or flood or a devastating wildfire.”

Then there’s the generous tax policies.

In 2023, Novva, a Utah-based data center company, announced plans to build a 300,000-square-foot facility within the industrial business park.

Nevada doesn’t charge corporate income tax, and it has also enacted deep tax cuts specifically for data centers that set up shop in the state. That includes abatements of up to 75% on property tax for a decade or two—and nearly as much of a bargain on the sales and use taxes applied to equipment purchased for the facilities.

Data centers don’t require many permanent workers to run the operations, but the projects have created thousands of construction jobs. They’re also helping to diversify the region’s economy beyond casinos and generating tax windfalls for the state, counties, and cities, says Jeff Sutich, executive director of the Northern Nevada Development Authority. Indeed, just three data-center projects, developed by Apple, Google, and Vantage, will produce nearly half a billion dollars in tax revenue for Nevada, even with those generous abatements, according to the Nevada Governor’s Office of Economic Development.

The question is whether the benefits of data centers are worth the tradeoffs for Nevadans, given the public health costs, greenhouse-gas emissions, energy demands, and water strains.

The rain shadow

The Sierra Nevada’s granite peaks trace the eastern edge of California, forcing Pacific Ocean winds to rise and cool. That converts water vapor in the air into the rain and snow that fill the range’s tributaries, rivers, and lakes. 

But the same meteorological phenomenon casts a rain shadow over much of neighboring Nevada, forming an arid expanse known as the Great Basin Desert. The state receives about 10 inches of precipitation a year, about a third of the national average.

The Truckee River draws from the melting Sierra snowpack at the edge of Lake Tahoe, cascades down the range, and snakes through the flatlands of Reno and Sparks. It forks at the Derby Dam, a Reclamation Act project a few miles from the Tahoe Reno Industrial Center, which diverts water to a farming region further east while allowing the rest to continue north toward Pyramid Lake. 

Along the way, an engineered system of reservoirs, canals, and treatment plants diverts, stores, and releases water from the river, supplying businesses, cities, towns, and Native tribes across the region. But Nevada’s population and economy are expanding, creating more demands on these resources even as they become more constrained.

The Truckee River, which originates at Lake Tahoe and terminates at Pyramid Lake, is the major water source for cities, towns, and farms across northwestern Nevada.
EMILY NAJERA

Throughout much of the 2020s, the state has suffered through one of the hottest and most widespread droughts on record, extending two decades of abnormally dry conditions across the American West. Some scientists fear it may constitute an emerging megadrought.

About 50% of Nevada currently faces moderate to exceptional drought conditions. In addition, more than half of the state’s hundreds of groundwater basins are already “over-appropriated,” meaning the water rights on paper exceed the levels believed to be underground. 

It’s not clear if climate change will increase or decrease the state’s rainfall levels, on balance. But precipitation patterns are expected to become more erratic, whiplashing between short periods of intense rainfall and more-frequent, extended, or severe droughts. 

In addition, more precipitation will fall as rain rather than snow, shortening the Sierra snow season by weeks to months over the coming decades. 

“In the extreme case, at the end of the century, that’s pretty much all of winter,” says Sean McKenna, executive director of hydrologic sciences at the Desert Research Institute, a research division of the Nevada System of Higher Education.

That loss will undermine an essential function of the Sierra snowpack: reliably delivering water to farmers and cities when it’s most needed in the spring and summer, across both Nevada and California. 

These shifting conditions will require the region to develop better ways to store, preserve, and recycle the water it does get, McKenna says. Northern Nevada’s cities, towns, and agencies will also need to carefully evaluate and plan for the collective impacts of continuing growth and development on the interconnected water system, particularly when it comes to water-hungry projects like data centers, he adds.

“We can’t consider each of these as a one-off, without considering that there may be tens or dozens of these in the next 15 years,” McKenna says.

Thirsty data centers

Data centers suck up water in two main ways.

As giant rooms of server racks process information and consume energy, they generate heat that must be shunted away to prevent malfunctions and damage to the equipment. The processing units optimized for training and running AI models often draw more electricity and, in turn, produce more heat.

To keep things cool, more and more data centers have turned to liquid cooling systems that don’t need as much electricity as fan cooling or air-conditioning.

These often rely on water to absorb heat and transfer it to outdoor cooling towers, where much of the moisture evaporates. Microsoft’s US data centers, for instance, could have directly evaporated nearly 185,000 gallons of “clean freshwater” in the course of training OpenAI’s GPT-3 large language model, according to a 2023 preprint study led by researchers at the University of California, Riverside. (The research has since been peer-reviewed and is awaiting publication.)

What’s less appreciated, however, is that the larger data-center drain on water generally occurs indirectly, at the power plants generating extra electricity for the turbocharged AI sector. These facilities, in turn, require more water to cool down equipment, among other purposes.

You have to add up both uses “to reflect the true water cost of data centers,” says Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside and coauthor of the study.

Ren estimates that the 12 data-center projects listed in NV Energy’s report would directly consume between 860 million gallons and 5.7 billion gallons a year, based on the requested electricity capacity. (“Consumed” here means the water is evaporated, not merely withdrawn and returned to the engineered water system.) The indirect water drain associated with electricity generation for those operations could add up to 15.5 billion gallons, based on the average consumption of the regional grid.
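
The structure of such an estimate is straightforward: multiply the electricity a facility uses by the water consumed per unit of electricity, both on site and at the power plants that supply it. The sketch below illustrates the shape of that calculation; the water-intensity values are assumed for illustration and are not drawn from Ren’s analysis.

```python
# Illustrative structure of a data-center water estimate: direct (on-site
# cooling) plus indirect (power-plant cooling). The intensity values used in
# the example are assumed placeholders, not figures from Ren's study.
GALLONS_PER_LITER = 0.264172
HOURS_PER_YEAR = 8760

def annual_water_gallons(capacity_mw, utilization, onsite_l_per_kwh, grid_l_per_kwh):
    """Return (direct, indirect) water consumption in gallons per year."""
    kwh_per_year = capacity_mw * 1_000 * HOURS_PER_YEAR * utilization
    direct = kwh_per_year * onsite_l_per_kwh * GALLONS_PER_LITER
    indirect = kwh_per_year * grid_l_per_kwh * GALLONS_PER_LITER
    return direct, indirect

# Example: 6,000 MW of requested capacity at 80% utilization, with assumed
# intensities of 0.2 L/kWh on site and 1.0 L/kWh at the supplying power plants.
direct, indirect = annual_water_gallons(6_000, 0.8, 0.2, 1.0)
print(f"direct: {direct / 1e9:.1f}B gal/yr, indirect: {indirect / 1e9:.1f}B gal/yr")
```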

The exact water figures would depend on shifting climate conditions, the type of cooling systems each data center uses, and the mix of power sources that supply the facilities.

Solar power, which provides roughly a quarter of Nevada’s power, requires relatively little water to operate, for instance. But natural-gas plants, which generate about 56%, withdraw 2,803 gallons per megawatt-hour on average, according to the Energy Information Administration.

Geothermal plants, which produce about 10% of the state’s electricity by cycling water through hot rocks, generally consume less water than fossil fuel plants do but often require more water than other renewables, according to some research.

But here too, the water usage varies depending on the type of geothermal plant in question. Google has lined up several deals to partially power its data centers through Fervo Energy, which has helped to commercialize an emerging approach that injects water under high pressure to fracture rock and form wells deep below the surface. 

The company stresses that it doesn’t evaporate water for cooling and that it relies on brackish groundwater, not fresh water, to develop and run its plants. In a recent post, Fervo noted that its facilities consume significantly less water per megawatt-hour than coal, nuclear, or natural-gas plants do.

Part of NV Energy’s proposed plan to meet growing electricity demands in Nevada includes developing several natural-gas peaking units, adding more than one gigawatt of solar power, and installing another gigawatt of battery storage. It’s also forging ahead with a more than $4 billion transmission project.

But the company didn’t respond to questions concerning how it will supply all of the gigawatts of additional electricity requested by data centers, if the construction of those power plants will increase consumer rates, or how much water those facilities are expected to consume.

NV Energy operates a transmission line, substation, and power plant in or around the Tahoe Reno Industrial Center.
EMILY NAJERA

“NV Energy teams work diligently on our long-term planning to make investments in our infrastructure to serve new customers and the continued growth in the state without putting existing customers at risk,” the company said in a statement.

An added challenge is that data centers need to run around the clock. That will often compel utilities to develop new electricity-generating sources that can run nonstop as well, as natural-gas, geothermal, or nuclear plants do, says Emily Grubert, an associate professor of sustainable energy policy at the University of Notre Dame, who has studied the relative water consumption of electricity sources. 

“You end up with the water-intensive resources looking more important,” she adds.

Even if NV Energy and the companies developing data centers do strive to power them through sources with relatively low water needs, “we only have so much ability to add six gigawatts to Nevada’s grid,” Grubert explains. “What you do will never be system-neutral, because it’s such a big number.”

Securing supplies

On a mid-February morning, I meet TRI’s Thompson and Don Gilman, Lance Gilman’s son, at the Storey County offices, located within the industrial center. 

“I’m just a country boy who sells dirt,” Gilman, also a real estate broker, says by way of introduction. 

We climb into his large SUV and drive to a reservoir in the heart of the industrial park, filled nearly to the lip. 

Thompson explains that much of the water comes from an on-site treatment facility that filters waste fluids from companies in the park. In addition, tens of millions of gallons of treated effluent will also likely flow into the tank this year from the Truckee Meadows Water Authority Reclamation Facility, near the border of Reno and Sparks. That’s thanks to a 16-mile pipeline that the developers, the water authority, several tenants, and various local cities and agencies partnered to build, through a project that began in 2021.

“Our general improvement district is furnishing that water to tech companies here in the park as we speak,” Thompson says. “That helps preserve the precious groundwater, so that is an environmental feather in the cap for these data centers. They are focused on environmental excellence.”

The reservoir within the industrial business park provides water to data centers and other tenants.
EMILY NAJERA

But data centers often need drinking-quality water—not wastewater merely treated to irrigation standards—for evaporative cooling, “to avoid pipe clogs and/or bacterial growth,” the UC Riverside study notes. For instance, Google says its data centers withdrew about 7.7 billion gallons of water in 2023, and nearly 6 billion of those gallons were potable. 

Tenants in the industrial park can potentially obtain access to water from the ground and the Truckee River, as well. From early on, the master developers worked hard to secure permits for water sources, since they are nearly as precious as development entitlements to companies hoping to build projects in the desert.

Initially, the development company controlled a private business, the TRI Water and Sewer Company, that provided those services to the business park’s tenants, according to public documents. The company set up wells, a water tank, distribution lines, and a sewer disposal system. 

But in 2000, the board of county commissioners established a general improvement district, a legal mechanism for providing municipal services in certain parts of the state, to manage electricity and then water within the center. It, in turn, hired TRI Water and Sewer as the operating company.

As of its 2020 service plan, the general improvement district held permits for nearly 5,300 acre-feet of groundwater, “which can be pumped from well fields within the service area and used for new growth as it occurs.” The document lists another 2,000 acre-feet per year available from the on-site treatment facility, 1,000 from the Truckee River, and 4,000 more from the effluent pipeline. 

Those figures haven’t budged much since, according to Shari Whalen, general manager of the TRI General Improvement District. All told, they add up to more than 4 billion gallons of water per year for all the needs of the industrial park and the tenants there, data centers and otherwise.
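
That total follows from a simple conversion, since one acre-foot is about 325,851 gallons:

```python
# Quick check of the industrial park's permitted supply, converting the
# acre-foot figures from the 2020 service plan into gallons per year.
GALLONS_PER_ACRE_FOOT = 325_851

sources_acre_feet = {
    "groundwater": 5_300,
    "on-site treatment facility": 2_000,
    "Truckee River": 1_000,
    "effluent pipeline": 4_000,
}

total_acre_feet = sum(sources_acre_feet.values())         # 12,300 acre-feet
total_gallons = total_acre_feet * GALLONS_PER_ACRE_FOOT   # just over 4 billion
print(f"{total_acre_feet:,} acre-feet ≈ {total_gallons / 1e9:.1f} billion gallons per year")
```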

Whalen says that the amount and quality of water required for any given data center depends on its design, and that those matters are worked out on a case-by-case basis. 

When asked if the general improvement district is confident that it has adequate water resources to supply the needs of all the data centers under development, as well as other tenants at the industrial center, she says: “They can’t just show up and build unless they have water resources designated for their projects. We wouldn’t approve a project if it didn’t have those water resources.”

Water battles

As the region’s water sources have grown more constrained, lining up supplies has become an increasingly high-stakes and controversial business.

More than a century ago, the US federal government filed a lawsuit against an assortment of parties pulling water from the Truckee River. The suit would eventually establish that the Pyramid Lake Paiute Tribe’s legal rights to water for irrigation superseded other claims. But the tribe has been fighting to protect those rights and increase flows from the river ever since, arguing that increasing strains on the watershed from upstream cities and businesses threaten to draw away water reserved for reservation farming, decrease lake levels, and harm native fish.

The Pyramid Lake Paiute Tribe considers the water body and its fish, including the endangered cui-ui and threatened Lahontan cutthroat trout, to be essential parts of its culture, identity, and way of life. The tribe was originally named Cui-ui Ticutta, which translates to cui-ui eaters. The lake continues to provide sustenance as well as business for the tribe and its members, a number of whom operate boat charters and fishing guide services.

“It’s completely tied into us as a people,” says Steven Wadsworth, chairman of the Pyramid Lake Paiute Tribe.

“That is what has sustained us all this time,” he adds. “It’s just who we are. It’s part of our spiritual well-being.”

Steven Wadsworth, chairman of the Pyramid Lake Paiute Tribe, fears that data centers will divert water that would otherwise reach the tribe’s namesake lake.
EMILY NAJERA

In recent decades, the tribe has sued the Nevada State Engineer, Washoe County, the federal government, and others for overallocating water rights and endangering the lake’s fish. It also protested the TRI General Improvement District’s applications to draw thousands of additional acre‑feet of groundwater from a basin near the business park. In 2019, the State Engineer’s office rejected those requests, concluding that the basin was already fully appropriated. 

More recently, the tribe took issue with the plan to build the pipeline and divert effluent that would otherwise have flowed into the Truckee, securing an agreement that required the Truckee Meadows Water Authority and other parties to add back several thousand acre-feet of water to the river.

Whalen says she’s sensitive to Wadsworth’s concerns. But she says that the pipeline promises to keep a growing amount of treated wastewater out of the river, where it could otherwise contribute to rising salt levels in the lake.

“I think that the pipeline from [the Truckee Meadows Water Authority] to our system is good for water quality in the river,” she says. “I understand philosophically the concerns about data centers, but the general improvement district is dedicated to working with everyone on the river for regional water-resource planning—and the tribe is no exception.”

Water efficiency 

In an email, Thompson added that he has “great respect and admiration” for the tribe and has visited the reservation several times in an effort to help bring industrial or commercial development there.

He stressed that all of the business park’s groundwater was “validated by the State Water Engineer,” and that the rights to surface water and effluent were purchased “for fair market value.”

During the earlier interview at the industrial center, he and Gilman had both expressed confidence that tenants in the park have adequate water supplies, and that the businesses won’t draw water away from other areas. 

“We’re in our own aquifer, our own water basin here,” Thompson said. “You put a straw in the ground here, you’re not going to pull water from Fernley or from Reno or from Silver Springs.”

Gilman also stressed that data-center companies have gotten more water-efficient in recent years, echoing a point others made as well.

“With the newer technology, it’s not much of a worry,” says Sutich, of the Northern Nevada Development Authority. “The technology has come a long way in the last 10 years, which is really giving these guys the opportunity to be good stewards of water usage.”

An aerial view of the cooling tower fans at Google’s data center in the Tahoe Reno Industrial Center.
GOOGLE

Indeed, Google’s existing Storey County facility is air-cooled, according to the company’s latest environmental report. The data center withdrew 1.9 million gallons in 2023 but only consumed 200,000 gallons. The rest cycles back into the water system.

Google said all the data centers under construction on its campus will also “utilize air-cooling technology.” The company didn’t respond to a question about the scale of its planned expansion in the Tahoe Reno Industrial Center, and referred a question about indirect water consumption to NV Energy.

The search giant has stressed that it strives to be water efficient across all of its data centers, and decides whether to use air or liquid cooling based on local supply and projected demand, among other variables.

Four years ago, the company set a goal of replenishing more water than it consumes by 2030. Locally, it also committed to provide half a million dollars to the National Forest Foundation to improve the Truckee River watershed and reduce wildfire risks. 

Microsoft clearly suggested in earlier news reports that the Silver Springs land it purchased around the end of 2022 would be used for a data center. NAI Alliance’s market real estate report identifies that lot, as well as the parcel Microsoft purchased within the Tahoe Reno Industrial Center, as data center sites.

But the company now declines to specify what it intends to build in the region. 

“While the land purchase is public knowledge, we have not disclosed specific details [of] our plans for the land or potential development timelines,” wrote Donna Whitehead, a Microsoft spokesperson, in an email. 

Workers have begun grading land inside a fenced-off lot within the Tahoe Reno Industrial Center.
EMILY NAJERA

Microsoft has also scaled down its global data-center ambitions, backing away from several projects in recent months amid shifting economic conditions, according to various reports.

Whatever it ultimately does or doesn’t build, the company stresses that it has made strides to reduce water consumption in its facilities. Late last year, the company announced that it’s using “chip-level cooling solutions” in data centers, which continually circulate water between the servers and chillers through a closed loop that the company claims doesn’t lose any water to evaporation. It says the design requires only a “nominal increase” in energy compared to its data centers that rely on evaporative water cooling.

Others seem to be taking a similar approach. EdgeCore also said its 900,000-square-foot data center at the Tahoe Reno Industrial Center will rely on an “air-cooled closed-loop chiller” that doesn’t require water evaporation for cooling. 

But some of the companies seem to have taken steps to ensure access to significant amounts of water. Switch, for instance, took a lead role in developing the effluent pipeline. In addition, Tract, which develops campuses on which third-party data centers can build their own facilities, has said it lined up more than 1,100 acre-feet of water rights, the equivalent of nearly 360 million gallons a year. 

Apple, Novva, Switch, Tract, and Vantage didn’t respond to inquiries from MIT Technology Review.

Coming conflicts 

The suggestion that companies aren’t straining water supplies when they adopt air cooling is, in many cases, akin to saying they’re not responsible for the greenhouse-gas emissions produced through their power use simply because those emissions occur outside their facilities. In fact, the additional water used at a power plant to meet the increased electricity needs of air cooling may exceed any water savings at the data center, Ren, of UC Riverside, says.

“That’s actually very likely, because it uses a lot more energy,” he adds.

That means that some of the companies developing data centers in and around Storey County may simply hand off their water challenges to other parts of Nevada or neighboring states across the drying American West, depending on where and how the power is generated, Ren says. 

Google has said its air-cooled facilities require about 10% more electricity, and its environmental report notes that the Storey County facility is one of its two least-energy-efficient data centers. 

Pipes running along Google’s data center campus help the search company cool its servers.
GOOGLE

Some fear there’s also a growing mismatch between what Nevada’s water permits allow, what’s actually in the ground, and what nature will provide as climate conditions shift. Notably, the groundwater committed to all parties from the Tracy Segment basin—a long-fought-over resource that partially supplies the TRI General Improvement District—already exceeds the “perennial yield.” That refers to the maximum amount that can be drawn out every year without depleting the reservoir over the long term.

“If pumping does ultimately exceed the available supply, that means there will be conflict among users,” Roerink, of the Great Basin Water Network, said in an email. “So I have to wonder: Who could be suing whom? Who could be buying out whom? How will the tribe’s rights be defended?”

The Truckee Meadows Water Authority, the community-owned utility that manages the water system for Reno and Sparks, said it is planning carefully for the future and remains confident there will be “sufficient resources for decades to come,” at least within its territory east of the industrial center.

Storey County, the Truckee-Carson Irrigation District, and the State Engineer’s office didn’t respond to questions or accept interview requests. 

Open for business

As data center proposals have begun shifting into Northern Nevada’s cities, more local residents and organizations have begun to take notice and express concerns. The regional division of the Sierra Club, for instance, recently sought to overturn the approval of Reno’s first data center, about 20 miles west of the Tahoe Reno Industrial Center. 

Olivia Tanager, director of the Sierra Club’s Toiyabe Chapter, says the environmental organization was shocked by the projected electricity demands from data centers highlighted in NV Energy’s filings.

Nevada’s wild horses are a common sight along USA Parkway, the highway cutting through the industrial business park. 
EMILY NAJERA

“We have increasing interest in understanding the impact that data centers will have to our climate goals, to our grid as a whole, and certainly to our water resources,” she says. “The demands are extraordinary, and we don’t have that amount of water to toy around with.”

During a city hall hearing in January that stretched late into the evening, she and a line of residents raised concerns about the water, energy, climate, and employment impacts of AI data centers. At the end, though, the city council upheld the planning department’s approval of the project, on a 5-2 vote.

“Welcome to Reno,” Kathleen Taylor, Reno’s vice mayor, said before casting her vote. “We’re open for business.”

Where the river ends

In late March, I walk alongside Chairman Wadsworth, of the Pyramid Lake Paiute Tribe, on the shores of Pyramid Lake, watching a row of fly-fishers in waders cast their lines into the cold waters. 

The lake is the largest remnant of Lake Lahontan, an Ice Age inland sea that once stretched across western Nevada and would have submerged present-day Reno. But as the climate warmed, the lapping waters retreated, etching erosional terraces into the mountainsides and exposing tufa deposits around the lake, large formations of porous rock made of calcium carbonate. That includes the pyramid-shaped island on the eastern shore that inspired the lake’s name.

A lone angler stands along the shores of Pyramid Lake.

In the decades after the US Reclamation Service completed the Derby Dam in 1905, Pyramid Lake declined another 80 feet and nearby Winnemucca Lake dried up entirely.

“We know what happens when water use goes unchecked,” says Wadsworth, gesturing eastward toward the range across the lake, where Winnemucca once filled the next basin over. “Because all we have to do is look over there and see a dry, barren lake bed that used to be full.”

In an earlier interview, Wadsworth acknowledged that the world needs data centers. But he argued they should be spread out across the country, not densely clustered in the middle of the Nevada desert.

Given the fierce competition for resources up to now, he can’t imagine how there could be enough water to meet the demands of data centers, expanding cities, and other growing businesses without straining the limited local supplies that should, by his accounting, flow to Pyramid Lake.

He fears these growing pressures will force the tribe to wage new legal battles to protect their rights and preserve the lake, extending what he refers to as “a century of water wars.”

“We have seen the devastating effects of what happens when you mess with Mother Nature,” Wadsworth says. “Part of our spirit has left us. And that’s why we fight so hard to hold on to what’s left.”

Everything you need to know about estimating AI’s energy and emissions burden

When we set out to write a story on the best available estimates for AI’s energy and emissions burden, we knew there would be caveats and uncertainties to these numbers. But, we quickly discovered, the caveats are the story too. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


Measuring the energy used by an AI model is not like evaluating a car’s fuel economy or an appliance’s energy rating. There’s no agreed-upon method or public database of values. There are no regulators who enforce standards, and consumers don’t get the chance to evaluate one model against another. 

Despite the fact that billions of dollars are being poured into reshaping energy infrastructure around the needs of AI, no one has settled on a way to quantify AI’s energy usage. Worse, companies are generally unwilling to disclose their own piece of the puzzle. There are also limitations to estimating the emissions associated with that energy demand, because the grid hosts a complicated, ever-changing mix of energy sources. 

It’s a big mess, basically. So, that said, here are the many variables, assumptions, and caveats that we used to calculate the consequences of an AI query. (You can see the full results of our investigation here.)

Measuring the energy a model uses

Companies like OpenAI, dealing in “closed-source” models, generally offer access to their systems through an interface where you input a question and receive an answer. What happens in between—which data center in the world processes your request, the energy it takes to do so, and the carbon intensity of the energy sources used—remains a secret, knowable only to the companies. There are few incentives for them to release this information, and so far, most have not.

That’s why, for our analysis, we looked at open-source models. They serve as a very imperfect proxy but the best one we have. (OpenAI, Microsoft, and Google declined to share specifics on how much energy their closed-source models use.) 

The best resources for measuring the energy consumption of open-source AI models are AI Energy Score, ML.Energy, and MLPerf Power. The team behind ML.Energy assisted us with our text and image model calculations, and the team behind AI Energy Score helped with our video model calculations.

Text models

AI models use up energy in two phases: when they initially learn from vast amounts of data, called training, and when they respond to queries, called inference. When ChatGPT was launched a few years ago, training was the focus, as tech companies raced to keep up and build ever-bigger models. But now, inference is where the most energy is used.

The most accurate way to understand how much energy an AI model uses in the inference stage is to directly measure the amount of electricity used by the server handling the request. Servers contain all sorts of components—powerful chips called GPUs that do the bulk of the computing, other chips called CPUs, fans to keep everything cool, and more. Researchers typically measure the amount of power the GPU draws and estimate the rest (more on this shortly). 

To do this, we turned to PhD candidate Jae-Won Chung and associate professor Mosharaf Chowdhury at the University of Michigan, who lead the ML.Energy project. Once we collected figures for different models’ GPU energy use from their team, we had to estimate how much energy is used for other processes, like cooling. We examined research literature, including a 2024 paper from Microsoft, to understand how much of a server’s total energy demand GPUs are responsible for. It turns out to be about half. So we took the team’s GPU energy estimate and doubled it to get a sense of total energy demands. 
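
In code, that adjustment is a single scaling step. The per-request figure in the example below is a hypothetical placeholder, not a measurement from the ML.Energy team.

```python
# Sketch of the adjustment described above: scale measured GPU energy up to
# whole-server energy, assuming GPUs draw roughly half of a server's power.
GPU_SHARE_OF_SERVER_ENERGY = 0.5  # rough share reported in the research literature

def server_energy_wh(gpu_energy_wh: float) -> float:
    """Estimate a request's total server energy from its measured GPU energy."""
    return gpu_energy_wh / GPU_SHARE_OF_SERVER_ENERGY  # i.e., double it

# Hypothetical example: a response measured at 1.5 Wh of GPU energy
print(server_energy_wh(1.5))  # -> 3.0 Wh estimated for the whole server
```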

The ML.Energy team uses a batch of 500 prompts from a larger dataset to test models. The hardware is kept the same throughout; the GPU is a popular Nvidia chip called the H100. We decided to focus on models of three sizes from the Meta Llama family: small (8 billion parameters), medium (70 billion), and large (405 billion). We also identified a selection of prompts to test. We compared these with the averages for the entire batch of 500 prompts. 

Image models

Stable Diffusion 3 from Stability AI is one of the most commonly used open-source image-generating models, so we made it our focus. Though we tested multiple sizes of the text-based Meta Llama model, we focused on one of the most popular sizes of Stable Diffusion 3, with 2 billion parameters. 

The team uses a dataset of example prompts to test a model’s energy requirements. Though the energy used by large language models is determined partially by the prompt, this isn’t true for diffusion models. Diffusion models can be programmed to go through a prescribed number of “denoising steps” when they generate an image or video, with each step being an iteration of the algorithm that adds more detail to the image. For a given step count and model, all images generated have the same energy footprint.

The more steps, the higher quality the end result—but the more energy used. Numbers of steps vary by model and application, but 25 is pretty common, and that’s what we used for our standard quality. For higher quality, we used 50 steps. 
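
Because each denoising step is one more pass through the model, the per-image energy scales roughly in proportion to the step count. Here is a minimal sketch of that relationship; the per-step energy value is an assumed placeholder, not a measured figure.

```python
# Rough scaling of image-generation energy with denoising steps. The
# per-step energy is an assumed placeholder, not a measured value.
ENERGY_PER_STEP_WH = 0.05  # assumed, for illustration only

def image_energy_wh(steps: int) -> float:
    """Per-image GPU energy, assuming each denoising step costs about the same."""
    return steps * ENERGY_PER_STEP_WH

print(image_energy_wh(25))  # standard quality in our tests -> 1.25 Wh (illustrative)
print(image_energy_wh(50))  # higher quality -> roughly twice the energy
```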

We mentioned that GPUs are usually responsible for about half of the energy demands of large language model requests. There is not sufficient research to know how this changes for diffusion models that generate images and videos. In the absence of a better estimate, and after consulting with researchers, we opted to stick with this 50% rule of thumb for images and videos too.

Video models

Chung and Chowdhury do test video models, but only ones that generate short, low-quality GIFs. We don’t think the videos these models produce mirror the fidelity of the AI-generated video that many people are used to seeing. 

Instead, we turned to Sasha Luccioni, the AI and climate lead at Hugging Face, who directs the AI Energy Score project. She measures the energy used by the GPU during AI requests. We chose two versions of the CogVideoX model to test: an older, lower-quality version and a newer, higher-quality one. 

We asked Luccioni to use her tool, called Code Carbon, to test both and measure the results of a batch of video prompts we selected, using the same hardware as our text and image tests to keep as many variables as possible the same. She reported the GPU energy demands, which we again doubled to estimate total energy demands. 

Tracing where that energy comes from

After we understand how much energy it takes to respond to a query, we can translate that into the total emissions impact. Doing so requires looking at the power grid from which data centers draw their electricity. 

Nailing down the climate impact of the grid can be complicated, because it’s both interconnected and incredibly local. Imagine the grid as a system of connected canals and pools of water. Power plants add water to the canals, and electricity users, or loads, siphon it out. In the US, grid interconnections stretch all the way across the country. So, in a way, we’re all connected, but we can also break the grid up into its component pieces to get a sense for how energy sources vary across the country. 

Understanding carbon intensity

The key metric to understand here is called carbon intensity, which is basically a measure of how many grams of carbon dioxide pollution are released for every kilowatt-hour of electricity that’s produced. 

To get carbon intensity figures, we reached out to Electricity Maps, a Danish startup company that gathers data on grids around the world. The team collects information from sources including governments and utilities and uses it to publish historical and real-time estimates of the carbon intensity of the grid. You can find more about their methodology here.

The company shared with us historical data from 2024, both for the entire US and for a few key balancing authorities (more on this in a moment). After discussions with Electricity Maps founder Olivier Corradi and other experts, we made a few decisions about which figures we would use in our calculations. 

One way to measure carbon intensity is to simply look at all the power plants that are operating on the grid, add up the pollution they’re producing at the moment, and divide that total by the electricity they’re producing. But that doesn’t account for the emissions that are associated with building and tearing down power plants, which can be significant. So we chose to use carbon intensity figures that account for the whole life cycle of a power plant. 

We also chose to use the consumption-based carbon intensity of energy rather than production-based. This figure accounts for imports and exports moving between different parts of the grid and best represents the electricity that’s being used, in real time, within a given region. 

For most of the calculations you see in the story, we used the average carbon intensity for the US for 2024, according to Electricity Maps, which is 402.49 grams of carbon dioxide equivalent per kilowatt-hour. 
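
With that number in hand, converting a query’s estimated energy use into emissions is a single multiplication:

```python
# Convert an estimated energy figure into grams of CO2 equivalent using the
# 2024 US average carbon intensity from Electricity Maps.
US_AVG_CARBON_INTENSITY = 402.49  # grams of CO2 equivalent per kilowatt-hour

def emissions_grams(energy_wh: float, intensity: float = US_AVG_CARBON_INTENSITY) -> float:
    """Emissions for a request, given its energy use in watt-hours."""
    return (energy_wh / 1000) * intensity

# Hypothetical example: a request estimated at 3 Wh on the average US grid
print(round(emissions_grams(3.0), 2))  # -> 1.21 g CO2e
```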

Understanding balancing authorities

While understanding the picture across the entire US can be helpful, the grid can look incredibly different in different locations. 

One way we can break things up is by looking at balancing authorities. These are independent bodies responsible for grid balancing in a specific region. They operate mostly independently, though there’s a constant movement of electricity between them as well. There are 66 balancing authorities in the US, and we can calculate a carbon intensity for the part of the grid encompassed by a specific balancing authority.

Electricity Maps provided carbon intensity figures for a few key balancing authorities, and we focused on several that play the largest roles in data center operations. ERCOT (which covers most of Texas) and PJM (a cluster of states on the East Coast, including Virginia, Pennsylvania, and New Jersey) are two of the regions with the largest burden of data centers, according to research from the Harvard School of Public Health.

We added CAISO (in California) because it covers the most populated state in the US. CAISO also manages a grid with a significant number of renewable energy sources, making it a good example of how carbon intensity can change drastically depending on the time of day. (In the middle of the day, solar tends to dominate, while natural gas plays a larger role overnight, for example.)

One key caveat here is that we’re not entirely sure where companies tend to send individual AI inference requests. There are clusters of data centers in the regions we chose as examples, but when you use a tech giant’s AI model, your request could be handled by any number of data centers owned or contracted by the company. One reasonable approximation is location: It’s likely that the data center servicing a request is close to where it’s being made, so a request on the West Coast might be most likely to be routed to a data center on that side of the country. 

Explaining what we found

To better contextualize our calculations, we introduced a few comparisons people might be more familiar with than kilowatt-hours and grams of carbon dioxide. In a few places, we took the amount of electricity estimated to be used by a model and calculated how long that electricity would be able to power a standard microwave, as well as how far it might take someone on an e-bike. 

In the case of the e-bike, we assumed an efficiency of 25 watt-hours per mile, which falls in the range of frequently cited efficiencies for a pedal-assisted bike. For the microwave, we assumed an 800-watt model, which falls within the average range in the US. 

We also introduced a comparison to contextualize greenhouse gas emissions: miles driven in a gas-powered car. For this, we used data from the US Environmental Protection Agency, which puts the weighted average emissions of vehicles in the US in 2022 at 393 grams of carbon dioxide equivalent per mile.
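
Each of these comparisons is a plain unit conversion. A minimal sketch, using a hypothetical request of 3 watt-hours and 1.2 grams of CO2 equivalent as the input:

```python
# The everyday comparisons used in the story, as plain unit conversions.
EBIKE_WH_PER_MILE = 25       # assumed pedal-assisted e-bike efficiency
MICROWAVE_WATTS = 800        # typical US microwave
CAR_G_CO2E_PER_MILE = 393    # EPA weighted average for 2022 vehicles

def comparisons(energy_wh: float, emissions_g: float) -> dict:
    """Translate a request's energy and emissions into familiar equivalents."""
    return {
        "microwave_seconds": energy_wh / MICROWAVE_WATTS * 3600,
        "ebike_miles": energy_wh / EBIKE_WH_PER_MILE,
        "car_miles_equivalent": emissions_g / CAR_G_CO2E_PER_MILE,
    }

# Hypothetical example: a request estimated at 3 Wh and 1.2 g CO2e
print(comparisons(3.0, 1.2))
# -> microwave: 13.5 seconds, e-bike: 0.12 miles, car: ~0.003 miles
```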

Predicting how much energy AI will use in the future

After measuring the energy demand of an individual query and the emissions it generated, it was time to estimate how all of this added up to national demand. 

There are two ways to do this. In a bottom-up analysis, you estimate how many individual queries there are, calculate the energy demands of each, and add them up to determine the total. For a top-down look, you estimate how much energy all data centers are using by looking at larger trends. 
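
The bottom-up arithmetic itself is simple; a sketch with entirely hypothetical inputs shows the shape of the calculation:

```python
# Bottom-up arithmetic in its simplest form. Every input is a hypothetical
# placeholder; the real numbers are not publicly disclosed.
queries_per_day = 1_000_000_000   # assumed
energy_per_query_wh = 3.0         # assumed average across text, image, and video
days_per_year = 365

annual_gwh = queries_per_day * energy_per_query_wh * days_per_year / 1e9
print(f"~{annual_gwh:,.0f} GWh per year under these assumptions")
```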

Bottom-up is particularly difficult, because, once again, closed-source companies do not share such information and declined to talk specifics with us. While we can make some educated guesses to give us a picture of what might be happening right now, looking into the future is perhaps better served by taking a top-down approach.

This data is scarce as well. The most important report was published in December by the Lawrence Berkeley National Laboratory, which is funded by the Department of Energy, and the report authors noted that it’s only the third such report released in the last 20 years. Academic climate and energy researchers we spoke with said it’s a major problem that AI is not considered its own economic sector for emissions measurements, and there aren’t rigorous reporting requirements. As a result, it’s difficult to track AI’s climate toll. 

Still, we examined the report’s results, compared them with other findings and estimates, and consulted independent experts about the data. While much of the report was about data centers more broadly, we drew out data points that were specific to the future of AI. 

Company goals

We wanted to contrast these figures with the amounts of energy that AI companies themselves say they need. To do so, we collected reports by leading tech and AI companies about their plans for energy and data center expansions, as well as the dollar amounts they promised to invest. Where possible, we fact-checked the promises made in these claims. (Meta and Microsoft’s pledges to use more nuclear power, for example, would indeed reduce the carbon emissions of the companies, but it will take years, if not decades, for these additional nuclear plants to come online.) 

Requests to companies

We submitted requests to Microsoft, Google, and OpenAI to have data-driven conversations about their models’ energy demands for AI inference. None of the companies made executives or leadership available for on-the-record interviews about their energy usage.

This story was supported by a grant from the Tarbell Center for AI Journalism.

Inside the story that enraged OpenAI

In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review

I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said.

At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely.

Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.

But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform.

Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government. 

So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.


Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.

I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?

Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.

He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”

On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer.

Why did we need AGI to do that instead of AI? I asked.

This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI.

And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it.

AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care.

Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.”

This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.

“No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.”

That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival.

“I actually think that’s a very beautiful thing,” he said.

In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.

“What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.”

His tone was matter-of-fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back at exactly where we’d started.


Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one.

Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said.

I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models.

That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples.

“It is unquestioningly very highly desirable that data centers be as green as possible,” he added.

“No question,” Brockman quipped.

“Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue.

“It’s 2 percent globally,” I offered.

“Isn’t Bitcoin like 1 percent?” Brockman said.

“Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative.

Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support it—would be “too useful to not exist.”

I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.”

“I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”

“The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short‑term technology.”

OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”

Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step.

He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.


There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees.

This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?

At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re-create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. The paper, whose first section, “The Imitation Game,” inspired the title of the 2014 Hollywood dramatization of Turing’s life, opens with the provocation, “Can machines think?” It goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.

In 2015, as AI saw great leaps of advancement, Brockman said, he realized it was time to return to his original ambition, and he joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.

“Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.

What motivated him? I asked Brockman.

What are the chances that a transformative technology could arrive in your lifetime? he countered.

He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.

Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history-defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration with which he recounted the stories of the great innovators who came before him.

A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe, with a twinge of self-pity, that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point.

In 2022, he became OpenAI’s president.


During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.

OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.

Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone.

Was there a historical example of a technology’s benefits that had been successfully distributed? I asked.

“Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.

“Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.

“Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.

“I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”

His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.”

It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else.

He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.

“The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.”


In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

Hours later, Elon Musk replied to the story with three tweets in rapid succession:

“OpenAI should be more open imo”

“I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.

“All orgs developing advanced AI should be regulated, including Tesla”

Afterward, Altman sent OpenAI employees an email.

“I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.”

It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but with some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

“The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”

OpenAI wouldn’t speak to me again for three years.

From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.

Inside the controversial tree farms powering Apple’s carbon neutral goal

We were losing the light, and still about 20 kilometers from the main road, when the car shuddered and died at the edge of a strange forest. 

The grove grew as if indifferent to certain unspoken rules of botany. There was no understory, no foreground or background, only the trees themselves, which grew as a wall of bare trunks that rose 100 feet or so before concluding with a burst of thick foliage near the top. The rows of trees ran perhaps the length of a New York City block and fell away abruptly on either side into untidy fields of dirt and grass. The vista recalled the husk of a failed condo development, its first apartments marooned when the builders ran out of cash.

Standing there against the setting sun, the trees were, in their odd way, also rather stunning. I had no service out here—we had just left a remote nature preserve in southwestern Brazil—but I reached for my phone anyway, for a picture. The concern on the face of my travel partner, Clariana Vilela Borzone, a geographer and translator who grew up nearby, flicked to amusement. My camera roll was already full of eucalyptus.

The trees sprouted from every hillside, along every road, and more always seemed to be coming. Across the dirt path where we were stopped, another pasture had been cleared for planting. The sparse bushes and trees that had once shaded cattle in the fields had been toppled and piled up, as if in a Pleistocene gravesite. 

Borzone’s friends and neighbors were divided on the aesthetics of these groves. Some liked the order and eternal verdancy they brought to their slice of the Cerrado, a large botanical region that arcs diagonally across Brazil’s midsection. Its native savanna landscape was largely gnarled, low-slung, and, for much of the year, rather brown. And since most of that flora had been cleared decades ago for cattle pasture, it was browner and flatter still. Now that land was becoming trees. It was becoming beautiful. 

Some locals say they like the order and eternal verdancy of the eucalyptus, which often stand in stark contrast to the Cerrado’s native savanna landscape.
PABLO ALBARENGA

Others considered this beauty a mirage. “Green deserts,” they called the groves, suggesting bounty from afar but holding only dirt and silence within. These were not actually forests teeming with animals and undergrowth, they charged, but at best tinder for a future megafire in a land parched, in part, by their vigorous growth. This was in fact a common complaint across Latin America: in Chile, the planted rows of eucalyptus were called the “green soldiers.” It was easy to imagine getting lost in the timber, a funhouse mirror of trunks as far as the eye could see.

The timber companies that planted these trees push back on these criticisms as caricatures of a genus that’s demonized all over the world. They point to their sustainable forestry certifications and their handsome spending on fire suppression, and to the microphones they’ve placed that record cacophonies of birds and prove the groves are anything but barren. Whether people like the look of these trees or not, they are meeting a human need, filling an insatiable demand for paper and pulp products all over the world. Much of the material for the world’s toilet and tissue paper is grown in Brazil, and that, they argue, is a good thing: Grow fast and furious here, as responsibly as possible, to save many more trees elsewhere. 

But I was in this region for a different reason: Apple. And also Microsoft and Meta and TSMC, and many smaller technology firms too. I was here because tech executives many thousands of miles away were racing, and in some cases stumbling, on their way to meet their climate promises—too little time, and too much demand for new devices and AI data centers. Not far from here, they had struck some of the largest-ever deals for carbon credits. They were asking something new of this tree: Could Latin America’s eucalyptus be a scalable climate solution?

On a practical level, the answer seemed straightforward. Nobody disputed how swiftly or reliably eucalyptus could grow in the tropics. This knowledge was the product of decades of scientific study and tabulations of biomass for wood or paper. Each tree was roughly 47% carbon, which meant that many tons of it could be stored within every planted hectare. This could be observed taking place in real time, in the trees by the road. Come back and look at these young trees tomorrow, and you’d see it: fresh millimeters of carbon, chains of cellulose set into lignin. 

At the same time, Apple and the others were also investing in an industry, and a tree, with a long and controversial history in this part of Brazil and elsewhere. They were exerting their wealth and technological oversight to try to make timber operations more sustainable, more supportive of native flora, and less water intensive. Still, that was a hard sell to some here, where hundreds of thousands of hectares of pasture are already in line for planting; more trees were a bleak prospect in a land increasingly racked by drought and fire. Critics called the entire exercise an excuse to plant even more trees for profit. 

Borzone and I did not plan to stay and watch the eucalyptus grow. Garden or forest or desert, ally or antagonist—it did not matter much with the stars of the Southern Cross emerging and our gas tank empty. We gathered our things from our car and set off down the dirt road through the trees.

A big promise

My journey into the Cerrado had begun months earlier, in the fall of 2023, when the actress Octavia Spencer appeared as Mother Nature in an ad alongside Apple CEO Tim Cook. In 2020, the company had set a goal to go “net zero” by the end of the decade, at which point all of its products—laptops, CPUs, phones, earbuds—would be produced without increasing the level of carbon in the atmosphere. “Who wants to disappoint me first?” Mother Nature asked with a sly smile. It was a third of the way to 2030—a date embraced by many corporations aiming to stay in line with the UN’s goal of limiting warming to 1.5 °C over preindustrial levels—and where was the progress?

Apple CEO Tim Cook stares down Octavia Spencer as “Mother Nature” in their ad spot touting the company’s claims for carbon neutrality.
APPLE VIA YOUTUBE

Cook was glad to inform her of the good news: The new Apple Watch was leading the way. A limited supply of the devices was already carbon neutral, thanks to things like recycled materials and parts that were specially sent by ship—not flown—from one factory to another. These special watches were labeled with a green leaf on Apple’s iconically soft, white boxes.

Critics were quick to point out that declaring an individual product “carbon neutral” while the company was still polluting had the whiff of an early victory lap, achieved with some convenient accounting. But the work on the watch spoke to the company’s grand ambitions. Apple claimed that changes like procuring renewable power and using recycled materials had enabled it to cut emissions 75% since 2015. “We’re always prioritizing reductions; they’ve got to come first,” Chris Busch, Apple’s director of environmental initiatives, told me soon after the launch. 

The company also acknowledged that it could not find reductions to balance all its emissions. But it was trying something new. 

Since the 1990s, companies have purchased carbon credits based largely on avoiding emissions. Take some patch of forest that was destined for destruction and protect it; the stored carbon that wasn’t lost is turned into credits. But as the carbon market expanded, so did suspicion of carbon math—in some cases, because of fraud or bad science, but also because efforts to contain deforestation are often frustrated, with destruction avoided in one place simply happening someplace else. Corporations that once counted on carbon credits for “avoided” emissions can no longer trust them. (Many consumers feel they can’t either, with some even suing Apple over the ways it used past carbon projects to make its claims about the Apple Watch.)

But that demand to cancel out carbon dioxide hasn’t gone anywhere—if anything, as AI-driven emissions knock some companies off track from reaching their carbon targets (and raise questions about the techniques used to claim emissions reductions), the need is growing. For Apple, even under the rosiest assumptions about how much it will continue to pollute, the gap is significant: In 2024, the company reported offsetting 700,000 metric tons of CO2, but the number it will need to hit in 2030 to meet its goals is 9.6 million. 

So the new move is to invest in carbon “removal” rather than avoidance. The idea implies a more solid achievement: taking carbon molecules out of the atmosphere. There are many ways to attempt that, from trying to change the pH of the oceans so that they absorb more of the molecules to building machines that suck carbon straight out of the air. But these are long-term fixes. None of these technologies work at the scale and price that would help Apple and others meet their shorter-term targets. For that, trees have emerged again as the answer. This time the idea is to plant new ones instead of protecting old ones. 

To expand those efforts in a way that would make a meaningful dent in emissions, Apple determined, it would also need to make carbon removal profitable. A big part of this effort would be driven by the Restore Fund, a $200 million partnership with Goldman Sachs and Conservation International, a US environmental nonprofit, to invest in “high quality” projects that promoted reforestation on degraded lands.  

Profits would come from responsibly turning trees into products, Goldman’s head of sustainability explained when the fund was announced in 2021. But it was also an opportunity for Apple, and future investors, to “almost look at, touch, and feel their carbon,” he said—a concreteness that carbon credits had previously failed to offer. “The aim is to generate real, measurable carbon benefits, but to do that alongside financial returns,” Busch told me. It was intended as a flywheel of sorts: more investors, more planting, more carbon—an approach to climate action that looked to abundance rather than sacrifice.


Apple markets its watch as a carbon-neutral product, a claim based in part on the use of carbon credits.

The announcement of the carbon-neutral Apple Watch was the occasion to promote the Restore Fund’s three initial investments, which included a native forestry project as well as eucalyptus farms in Paraguay and Brazil. The Brazilian timber plans were by far the largest in scale, and were managed by BTG Pactual, Latin America’s largest investment bank. 

Busch connected me with Mark Wishnie, head of sustainability for Timberland Investment Group, BTG’s US-based subsidiary, which acquires and manages properties on behalf of institutional investors. After years in the eucalyptus business, Wishnie, who lives in Seattle, was used to strong feelings about the tree. It’s just that kind of plant—heralded as useful, even ornamental; demonized as a fire starter, water-intensive, a weed. “Has the idea that eucalyptus is invasive come up?” he asked pointedly. (It’s an “exotic” species in Brazil, yes, but the risk of invasiveness is low for the varieties most commonly planted for forestry.) He invited detractors to consider the alternative to the scale and efficiency of eucalyptus, which, he pointed out, relieves the pressure that humans put on beloved old-growth forests elsewhere. 

Using eucalyptus for carbon removal also offered a new opportunity. Wishnie was overseeing a planned $1 billion initiative that was set to transform BTG’s timber portfolio; it aimed at a 50-50 split between timber and native restoration on old pastureland, with an emphasis on connecting habitats along rivers and streams. As a “high quality” project, it was meant to do better than business as usual. The conservation areas would exceed the legal requirements for native preservation in Brazil, which range from 20% to 35% in the Cerrado. In a part of Brazil that historically gets little conservation attention, it would potentially represent the largest effort yet to actually bring back the native landscape. 

When BTG approached Conservation International with the 50% figure, the organization thought it was “too good to be true,” Miguel Calmon, the senior director of the nonprofit’s Brazilian programs, told me. With the restoration work paid for by the green financing and the sale of carbon credits, scale and longevity could be achieved. “Some folks may do this, but they never do this as part of the business,” he said. “It comes from not a corporate responsibility. It’s about, really, the business that you can optimize.”

So far, BTG has raised $630 million for the initiative and earmarked 270,000 hectares, an area more than double the city of Los Angeles. The first farm in the plan, located on a 24,000-hectare cattle ranch, was called Project Alpha. The location, Wishnie said, was confidential. 

“We talk about restoration as if it’s a thing that happens,” Mark Wishnie says, promoting BTG’s plans to intermingle new farms with native preserves.
COURTESY OF BTG

But a property of that size sticks out, even in a land of large farms. It didn’t take very much digging into municipal land records in the Brazilian state of Mato Grosso do Sul, where many of the company’s Cerrado holdings are located, to turn up a recently sold farm that matched the size. It was called Fazenda Engano, or “Deception Farm”—hence the rebrand. The land was registered to an LLC with links to holding companies for other BTG eucalyptus plantations located in a neighboring region that locals had taken to calling the Cellulose Valley for its fast-expanding tree farms and pulp factories.  

The area was largely seen as a land of opportunity, even as some locals had raised the alarm over concerns that the land couldn’t handle the trees. They had allies in prominent ecologists who have long questioned the wisdom of tree-planting in the Cerrado—and increasingly spar with other conservationists who see great potential in turning pasture into forest. The fight has only gotten more heated as more investors hunt for new climate solutions. 

Still, where Apple goes, others often follow. And when it comes to sustainability, other companies look to it as a leader. I wasn’t sure if I could visit Project Alpha and see whether Apple and its partners had really found a better way to plant, but I started making plans to go to the Cerrado anyway, to see the forests behind those little green leaves on the box. 

Complex calculations

In 2015, a study by Thomas Crowther, an ecologist then at ETH Zürich, attempted a census of global tree cover, finding more than 3 trillion trees in all. A useful number, surprisingly hard to divine, like counting insects or bacteria. 

A follow-up study a few years later proved more controversial: Earth’s surface held space for at least 1 trillion more trees. That represented a chance to store 200 metric gigatons, or about 25%, of atmospheric carbon once they matured. (The paper was later corrected in multiple ways, including an acknowledgment that the carbon storage potential could be about one-third less.)

The study became a media sensation, soon followed by a fleet of tree-planting initiatives with “trillion” in the name—most prominently through a World Economic Forum effort launched by Salesforce CEO Marc Benioff at Davos, which President Donald Trump pledged to support during his first term. 

But for as long as tree planting has been heralded as a good deed—from Johnny Appleseed to programs that promise a tree for every shoe or laptop purchased—the act has also been chased closely by a follow-up question: How many of those trees survive? Consider Trump’s most notable planting, which placed an oak on the White House grounds in 2018. It died just over a year later. 

During President Donald Trump’s first term, he and French president Emmanuel Macron planted an oak on the South Lawn of the White House.
CHIP SOMODEVILLA/GETTY IMAGES

To critics, including Bill Gates, the efforts were symbolic of short-term thinking at the expense of deeper efforts to cut or remove carbon. (Gates’s spat with Benioff descended to name-calling in the New York Times. “Are we the science people or are we the idiots?” he asked.) The lifespan of a tree, after all, is brief—a pit stop—compared with the thousand-year carbon cycle, so its progeny must carry the torch to meaningfully cancel out emissions. Most don’t last that long. 

“The number of trees planted has become a kind of currency, but it’s meaningless,” Pedro Brancalion, a professor of tropical forestry at the University of São Paulo, told me. He had nothing against the trees, which the world could, in general, use a lot more of. But to him, a lot of efforts were riding more on “good vibes” than on careful strategy. 

Soon after arriving in São Paulo last summer, I drove some 150 miles into the hills outside the city to see the outdoor lab Brancalion has filled with experiments on how to plant trees better: trees given too many nutrients or too little; saplings monitored with wires and tubes like ICU admits, or skirted with tarps that snatch away rainwater. At the center of one of Brancalion’s plots stands a tower topped with a whirling station, the size of a hobby drone, monitoring carbon going in and out of the air (and, therefore, the nearby vegetation)—a molecular tango known as flux. 

Brancalion works part-time for a carbon-focused restoration company, Re:Green, which had recently sold 3 million carbon credits to Microsoft and was raising a mix of native trees in parts of the Amazon and the Atlantic Forest. While most of the trees in his lab were native ones too, like jacaranda and brazilwood, he also studies eucalyptus. The lab in fact sat on a former eucalyptus farm; in the heart of his fields, a grove of 80-year-old trees dripped bark like molting reptiles. 

To Pedro Brancalion, a lot of tree-planting efforts are riding more on “good vibes” than on careful strategy. He experiments with new ways to grow eucalyptus interspersed with native species.
PABLO ALBARENGA

Eucalyptus planting swelled dramatically under Brazil’s military dictatorship in the 1960s. The goal was self-sufficiency—a nation’s worth of timber and charcoal, quickly—and the expansion was fraught. Many opinions of the tree were forged in a spate of dubious land seizures followed by clearing of the existing vegetation—disputes that, in some places, linger to this day. Still, that campaign is also said to have done just as Wishnie described, easing the demand that would have been put on regions like the Amazon as Rio and São Paulo were built. 

The new trees also laid the foundation for Brazil to become a global hub for engineered forestry; it’s currently home to about a third of the world’s farmed eucalyptus. Today’s saplings are the products of decades of tinkering with clonal breeding, growing quick and straight, resistant to pestilence and drought, with exacting growth curves that chart biomass over time: Seven years to maturity is standard for pulp. Trees planted today grow more than three times as fast as their ancestors. 

If the goal is a trillion trees, or many millions of tons of carbon, no business is better suited to keeping count than timber. It might sound strange to claim carbon credits for trees that you plan to chop down and turn into toilet paper or chairs. Whatever carbon is stored in those ephemeral products is, of course, a blip compared with the millennia that CO2 hangs in the atmosphere. 

But these carbon projects take a longer view. While individual trees may go, more trees are planted. The forest constantly regrows and recaptures carbon from the air. Credits are issued annually over decades, so long as the long-term average of the carbon stored in the grove continues to increase. What’s more, because the timber is constantly being tracked, the carbon is easy to measure, solving a key problem with carbon credits. 

Most mature native ecosystems, whether tropical forests or grasslands, will eventually store more carbon than a tree farm. But that could take decades. Eucalyptus can be planted immediately, with great speed, and the first carbon credits are issued in just a few years. “It fits a corporate model very well, and it fits the verification model very well,” said Robin Chazdon, a forest researcher at Australia’s University of the Sunshine Coast.

Today’s eucalyptus saplings—like those shown here in Brancalion’s lab—are the products of decades of tinkering with clonal breeding, growing quick and straight.
PABLO ALBARENGA

Reliability and stability have also made eucalyptus, as well as pine, quietly dominant in global planting efforts. A 2019 analysis published in Nature found that 45% of carbon removal projects the researchers studied worldwide involved single-species tree farms. In Brazil, the figure was 82%. The authors called this a “scandal,” accusing environmental organizations and financiers of misleading the public and pursuing speed and convenience at the expense of native restoration.  

In 2023, the nonprofit Verra, the largest bearer of carbon credit standards, said it would forbid projects using “non-native monocultures”—that is, plants like eucalyptus or pine that don’t naturally grow in the places where they’re being farmed. The idea was to assuage concerns that carbon credits were going to plantations that would have been built anyway given the demand for wood, meaning they wouldn’t actually remove any extra carbon from the atmosphere.

The uproar was immediate—from timber companies, but also from carbon developers and NGOs. How would it be possible to scale anything—conservation, carbon removal—without them?

Verra reversed course several months later. It would allow non-native monocultures so long as they grew in land that was deemed “degraded,” or previously cleared of vegetation—land like cattle pasture. And it took steps to avoid counting plantings in close proximity to other areas of fast tree growth, the idea being that they wanted to avoid rewarding purely industrial projects that would’ve been planted anyway. 

Despite the potential benefits of intermixing them, foresters generally prefer to keep eucalyptus and native species separate.
PABLO ALBARENGA

Brancalion happened to agree with the criticisms of exotic monocultures. But all the same, he believed eucalyptus had been unfairly demonized. It was a marvelous genus, actually, with nearly 800 species with unique adaptations. Natives could be planted as monocultures too, or on stolen land, or tended with little care. He had been testing ways to turn eucalyptus from perceived foes into friends of native forest restoration.

His idea was to use rows of eucalyptus, which rocket above native species, as a kind of stabilizer. While these natives can be valuable—either as lumber or for biodiversity—they may grow slowly, or twist in ways that make their wood unprofitable, or suddenly and inexplicably die. It’s never like that with eucalyptus, which are wonderfully predictable growers. Eventually, their harvested wood would help pay for the hard work of growing the others. 

In practice, foresters have generally preferred to keep things separate. Eucalyptus here; restoration there. It was far more efficient. The approach was emblematic, Brancalion thought, of letting the economics of the industry guide what was planted, how, and where, even with green finance involved. Though he admitted he was speaking as something of a competitor given his own carbon work, he was perplexed by Apple’s choices. The world’s richest company was doing eucalyptus? And with a bank better known locally as a major investor in deforestation-linked industries like beef and soy than for any efforts at native restoration.

It also worried him to see the planting happening west of here, in the Cerrado, where land is cheaper and also, for much of the year, drier. “It’s like a bomb,” Brancalion told me. “You can come interview me in five, six years. You don’t have to be super smart to realize what will happen after planting too many eucalyptus in a dry region.” He wished me luck on my journey westward.   

The sacrifice zone

Savanna implies openness, but the European settlers passing through the Cerrado called it the opposite; the name literally means “closed.” Grasses and shrubs grow to chest height, scaled as if to maximize human inconvenience. A machete is advised. 

As I headed with Borzone toward a small nature preserve called Parque do Pombo, she told me that young Brazilians are often raised with a sense of dislike, if not fear, of this land. When Borzone texted her mother, a local biologist, to say where we were going, she replied: “I hear that place is full of ticks.” (Her intel, it turned out, was correct.)

At one point, even prominent ecologists, fearing total destruction of the Amazon, advocated moving industry to the Cerrado, invoking a myth about casting a cow into piranha-infested waters so that the other cows could ford downstream.
PABLO ALBARENGA

What can be easy to miss is the fantastic variety of these plants, the result of natural selection cranked into overdrive. Species, many of which blew in from the Amazon, survived by growing deep roots through the acidic soil and thicker bark to resist regular brush fires. Many of the trees developed the ability to shrivel upon themselves and drop their leaves during the long, dry winter. Some call it a forest that has grown upside down, because much of the growth occurs in the roots. The Cerrado is home to 12,000 flowering plant species, 4,000 of which are found only there. In terms of biodiversity, it is second in the world only to its more famous neighbor, the Amazon. 

Pequi is an edible fruit-bearing tree common in the Cerrado—one of the many unique species native to the area.
ADOBE STOCK

Each stop on our drive seemed to yield a new treasure for Borzone to show me: Guavira, a tree that bears fruit in grape-like bunches that appear only two weeks in a year; it can be made into a jam that is exceptionally good on toast. Pequi, more divisive, like fermented mango mixed with cheese. Others bear names Borzone can only faintly recall in the Indigenous Guaraní language and is thus unable to google. Certain uses are more memorable: Give this one here, a tiny frond that looks like a miniature Christmas fir, to make someone get pregnant.

Borzone had grown up in the heart of the savanna, and the land had changed significantly since she was a kid going to the river every weekend with her family. Since the 1970s, about half of the savanna has been cleared, mostly for ranching and, where the soil is good, soybeans. At that time, even prominent ecologists, fearing total destruction of the Amazon, advocated moving industry here, invoking what Brazilians call the boi de piranha—a myth about casting a cow into infested waters so that the other cows could ford downstream. 

Toby Pennington, a Cerrado ecologist at the University of Exeter, told me it remains a sacrificial zone, at times faring worse when environmentally minded politicians are in power. In 2023, when deforestation fell by half in the Amazon, it rose by 43% in the Cerrado. Some ecologists warn that this ecosystem could be entirely gone in the next decade.

Perhaps unsurprisingly, there’s a certain prickliness among grassland researchers, who are, like their chosen flora, used to being trampled. In 2019, 46 of them authored a response in Science to Crowther’s trillion-trees study, arguing not about tree counting but about the land he proposed for reforestation. Much of it, they argued, including places like the Cerrado, was not appropriate for so many trees. It was too much biomass for the land to handle. (If their point was not already clear, the scientists later labeled the phenomenon “biome awareness disparity,” or BAD.)

“It’s a controversial ecosystem,” said Natashi Pilon, a grassland ecologist at the University of Campinas near São Paulo. “With Cerrado, you have to forget everything that you learn about ecology, because it’s all based in forest ecology. In the Cerrado, everything works the opposite way. Burning? It’s good. Shade? It’s not good.” The Cerrado contains a vast range of landscapes, from grassy fields to wooded forests, but the majority of it, she explained, is poorly suited to certain rules of carbon finance that would incentivize people to protect or restore it. While the underground forest stores plenty of carbon, it builds up its stock slowly and can be difficult to measure. 

The result is a slightly uncomfortable position for ecologists studying and trying to protect a vanishing landscape. Pilon and her former academic advisor, Giselda Durigan, a Cerrado ecologist at the Environmental Research Institute of the State of São Paulo and one of the scientists behind BAD, have gotten accustomed to pushing back on people who arrived preaching “improvement” through trees—first from nonprofits, mostly of the trillion-trees variety, but now from the timber industry. “They are using the carbon discourse as one more argument to say that business is great,” Durigan told me. “They are happy to be seen as the good guys.” 

Durigan saw tragedy in the way that Cerrado had been transformed into cattle pasture in just a generation, but there was also opportunity in restoring it once the cattle left. Bringing the Cerrado back would be hard work—usually requiring fire and hacking away at invasive grasses. But even simply leaving it alone could allow the ecosystem to begin to repair itself and offer something like the old savanna habitat. Abandoned eucalyptus farms, by contrast, were nightmares to return to native vegetation; the strange Cerrado plants refused to take root in the highly modified soil. 

In recent years, Durigan had visited hundreds of eucalyptus farms in the area, shadowing her students who had been hired by timber companies to help establish promised corridors of native vegetation in accordance with federal rules. “They’re planting entire watersheds,” she said. “The rivers are dying.” 

Durigan saw plants in isolated patches growing taller than they normally would, largely thanks to the suppression of regular brush fires. They were throwing shade on the herbs and grasses and drawing more water. The result was an environment gradually choking on itself, at risk of collapse during drought and retaining only a fraction of the Cerrado’s original diversity. If this was what people meant by bringing back the Cerrado, she believed it was only hastening its ultimate disappearance. 

In a recent survey of the watershed around the Parque do Pombo, which is hemmed in on each side by eucalyptus, two other researchers reported finding “devastation” and turned to Plato’s description of Attica’s forests, cleared to build the city of Athens: “What remains now compared to what existed is like the skeleton of a sick man … All the rich and soft soil has dissolved, leaving the country of skin and bones.” 

A highway runs through the Cellulose Valley, connecting commercial eucalyptus farms and pulp factories.
PABLO ALBARENGA

After a long day of touring the land—and spinning out on the clay—we found that our fuel was low. The Parque do Pombo groundskeeper looked over at his rusting fuel tank and apologized. It had been spoiled by the last rain. At least, he said, it was all downhill to the highway. 

The road of opportunity

We only made it about halfway down the eucalyptus-lined road. After the car huffed and left us stranded, Borzone and I started walking toward the highway, anticipating a long night. We remembered locals’ talk of jaguars recently pushed into the area by development. 

But after only 30 minutes or so, a set of lights came into view across the plain. Then another, and another. Then the outline of a tractor, a small tanker truck, and, somewhat curiously, a tour bus. The gear and the vehicles bore the logo of Suzano, the world’s largest pulp and paper company.

After talking to a worker, we boarded the empty tour bus and were taken to a cluster of spotlit tents, where women prepared eucalyptus seedlings, stacking crates of them on white fold-out tables. A night shift like this one was unusual. But they were working around the clock—aiming to plant a million trees per day across Suzano’s farms, in preparation for opening the world’s largest pulp factory just down the highway. It would open in a few weeks with a capacity of 2.55 million metric tons of pulp per year. 

Eucalyptus has become the region’s new lifeblood. “I’m going to plant some eucalyptus / I’ll get rich and you’ll fall in love with me,” sings a local country duo.
PABLO ALBARENGA

The tour bus was standing by to take the workers down the highway at 1 a.m., arriving in the nearest city, Três Lagoas, by 3 a.m. to pick up the next shift. “You don’t do this work without a few birds at home to feed,” a driver remarked as he watched his colleagues filling holes in the field by the light of their headlamps. After getting permission from his boss, he drove us an hour each way to town to the nearest gas station.

This highway through the Cellulose Valley has become known as a road of opportunity, with eucalyptus as the region’s new lifeblood after the cattle industry shrank its footprint. Not far from the new Suzano factory, a popular roadside attraction is an oversize sculpture of a black bull at the gates of a well-known ranch. The ranch was recently planted, and the bull is now guarded by a phalanx of eucalyptus. 

On TikTok, workers post selfies and views from tractors in the nearby groves, backed by a song from the local country music duo Jads e Jadson. “I’m going to plant some eucalyptus / I’ll get rich and you’ll fall in love with me,” sings a down-on-his-luck man at risk of losing his fiancée. Later, when he cuts down the trees and becomes a wealthy man with better options, he cuts off his betrothed, too. 

The race to plant more eucalyptus here is backed heavily by the state government, which last year waived environmental requirements for new farms on pasture and hopes to quickly double its area in just a few years. The trees were an important component of Brazil’s plan to meet its global climate commitments, and the timber industry was keen to cash in. Companies like Suzano have already proposed that tens of thousands of their hectares become eligible for carbon credits. 

What’s top of mind for everyone, though, is worsening fires. Even when we visited in midwinter, the weather was hot and dry. The wider region was in a deep drought, perhaps the worst in 700 years, and in a few weeks, one of the worst fire seasons ever would begin. Suzano would be forced to make a rare pause in its planting when soil temperatures reached 154 °F. 

Posted along the highway are constant reminders of the coming danger: signs, emblazoned with the logos of a dozen timber companies, that read “FOGO ZERO,” or “ZERO FIRE.” 

The race to plant more eucalyptus is backed heavily by the state government, which hopes to quickly double its area in just a few years.
PABLO ALBARENGA

In other places struck by megafires, like Portugal and Chile, eucalyptus has been blamed for worsening the flames. (The Chilean government has recently excluded pine and eucalyptus farms from its climate plans.) But here in Brazil, where climate change is already supersizing the blazes, the industry offers sophisticated systems to detect and suppress fires, argued Calmon of Conservation International. “You really need to protect it because that’s your asset,” he said. (BTG also noted that in parts of the Cerrado where human activity has increased, fires have decreased.) 

Eucalyptus is often portrayed as impossibly thirsty compared with other trees, but Calmon pointed out it is not uniquely so. In some parts of the Cerrado, it has been found to consume four times as much water as native vegetation; in others, the two landscapes have been roughly in line. It depends on many factors—what type of soil it’s planted in, what Cerrado vegetation coexists with it, how intensely the eucalyptus is farmed. Timber companies, which have no interest in seeing their own plantations run dry, invest heavily in managing water. Another hope, Wishnie told me, is that by vastly increasing the forest canopy, the new eucalyptus will actually gather moisture and help produce rain. 

Marine Dubos-Raoul has tracked waves of planting in the Cerrado for years and has spoken to residents who worry about how the trees strain local water supplies.
PABLO ALBARENGA

That’s a common narrative and one that’s been taught in schools here in Três Lagoas for decades, Borzone explained when we met up the day after our rescue with Marine Dubos-Raoul, a local geographer and university professor, and two of her students. Dubos-Raoul laughed uneasily. If this idea about rain was in fact true, they hadn’t seen it here. They crouched around the table at the cafe, speaking in a hush; their opinions weren’t particularly popular in this lumber town.

Dubos-Raoul had long tracked the impacts of the waves of planting on longtime rural residents, who complained that industry had taken their water or sprayed their gardens with pesticides. 

The evidence tying the trees to water problems in the region, Dubos-Raoul admitted, is more anecdotal than data driven. But she heard it in conversation after conversation. “People would have tears in their eyes,” she said. “It was very clear to them that it was connected to the arrival of the eucalyptus.” (Since our meeting, a study, carried out in response to demands from local residents, has blamed the planting for 350 depleted springs in the area, sparking a rare state inquiry into the issue.) In any case, Dubos-Raoul thought, it didn’t make much sense to keep adding matches to the tinderbox.

Shortly after talking with Dubos-Raoul, we ventured to the town of Ribas do Rio Pardo to meet Charlin Castro at his family’s river resort. Suzano’s new pulp factory stood on the horizon, surrounded by one of the densest areas of planting in the region. 

The Suzano pulp factory—the world’s largest—has pulled the once-sleepy town of Ribas do Rio Pardo into the bustling hub of Brazil’s eucalyptus industry.
PABLO ALBARENGA
Charlin Castro, his father Camilo, and other locals talk about how the area around the family’s river resort has changed since eucalyptus came to town.
The public area for bathing on the far side of the shrinking river was closed after the Suzano pulp factory was installed.

Charlin and Camilo admit they aren’t exactly sure what is causing low water levels—maybe it’s silt, maybe it’s the trees.
PABLO ALBARENGA

With thousands of workers arriving, mostly temporarily, to build the factory and plant the fields, the sleepy farming village had turned into a boomtown, and developed something of a lawless reputation—prostitution, homelessness, collisions between logging trucks and drunk drivers—and Castro was chronicling much of it for a hyperlocal Instagram news outlet, while also running for city council. 

But overall, he was thankful to Suzano. The factory was transforming the town into “a real place,” as he put it, even if change was at times painful.

His father, Camilo, gestured with a sinewy arm over to the water, where he recalled boat races involving canoes with crews of a dozen. That was 30 years ago. It was impossible to imagine now as I watched a family cool off in this bend in the river, the water just knee deep. But it’s hard to say what exactly is causing the low water levels. Perhaps it’s silt from the ranches, Charlin suggested. Or a change in the climate. Or, maybe, it could be the trees. 

Upstream, Ana Cláudia (who goes by “Tica”) and Antonio Gilberto Lima were more certain what was to blame. The couple, who are in their mid-60s, live in a simple brick house surrounded by fruit trees. They moved there a decade ago, seeking a calm retirement—one of a hundred or so families taking part in land reforms that returned land to smallholders. But recently, life has been harder. To preserve their well, they had let their vegetable garden go to seed. Streams were dry, and the old pools in the pastures where they used to fish were gone, replaced by trees; tapirs were rummaging through their garden, pushed, they believed, by lack of habitat. 

Ana Cláudia and Antonio Gilberto Lima have seen their land struggle since eucalyptus plantations took over the region.
PABLO ALBARENGA
Plants have been attacked by hungry insects at their home.
Pollinators like these stingless bees, faced with a lack of variety in native plant species, must fly greater distances to collect the pollen they need.

They were surrounded by eucalyptus, planted in waves with the arrival of each new factory. No one was listening, they told me, as the cattle herd bellowed outside the door. “The trees are sad,” Gilberto said, looking out over his few dozen pale-humped animals grazing around scattered Cerrado species left in the paddock. Tica told me she knew that paper and pulp had to come from somewhere, and that many people locally were benefiting. But the downsides were getting overlooked, she thought. They had signed a petition to the government, organized by Dubos-Raoul, seeking to rein in the industry. Perhaps, she hoped, it could reach American investors, too. 

The green halo 

A few weeks before my trip, BTG had decided it was ready to show off Project Alpha. The visit was set for my last day in Brazil; the farm formerly known as Fazenda Engano was farther upriver in Camapuã, a town that borders Ribas do Rio Pardo. It was a long, circuitous drive north to get out there, but it wouldn’t be that way much longer; a new highway was being paved that would directly connect the two towns, part of an initiative between the timber industry and government to expand the cellulose hub northward. A local official told me he expected tens of thousands of hectares of eucalyptus in the next few years.

For now, though, it was still the frontier. The intention was to plant “well outside the forest sector,” Wishnie told me—not directly in the shadow of a mill, but close enough for the operation to be practical, with access to labor and logistics. That distance was important evidence that the trees would store more carbon than would be accounted for in a business-as-usual scenario. The other guarantee was the restoration: ordinarily, it wasn’t good business to buy land and not plant every acre you could with timber, so setting land aside was made possible only by green investments from Apple and others.

That morning, Wishnie had emailed me a press release announcing that Microsoft had joined Apple in seeking BTG’s help to meet its carbon goals. The technology giant had made the largest-ever purchase of carbon credits, representing 8 million tons of CO2, from Project Alpha, following smaller commitments from TSMC and Murata, two of Apple’s suppliers.

I was set to meet Carlos Guerreiro, head of Latin American operations for BTG’s timber subsidiary, at a gas station in town, where we would set off together for the 24,000-hectare property. A forester in Brazil for much of his life, he had flown in from his home near São Paulo early that morning; he planned to check out the progress of the planting at Project Alpha and then swing down to the bank’s properties across the Cellulose Valley, where BTG was finalizing a $376 million deal to sell land to Suzano. 

BTG plans to combine preserves of native restoration with eucalyptus farms, eventually reaching a 50-50 split across its properties.
COURTESY OF BTG

Guerreiro defended BTG’s existing holdings as sustainable engines of development in the region. But all the same, Project Alpha felt like a new beginning for the company, he told me. About a quarter of this property had been left untouched when the pasture was first cleared in the 1980s, but the plan now was to restore an additional 13% of the property to native Cerrado plants, bringing the total to 37%. (BTG says it will protect more land on future farms to arrive at its 50-50 target.) Individual patches of existing native vegetation would be merged with others around the property, creating a 400-meter-wide corridor that largely followed the streams and rivers—much wider than the 60 meters required by law.

The restoration work was happening with the help of researchers from a Brazilian university, though they were still testing the best methods. We stood over trenches that had been planted with native seeds just weeks before, shoots only starting to poke out of the dirt. Letting the land regenerate on its own was often preferable, Guerreiro told me, but the best approach would depend on the specifics of each location. In other places, more active help, such as planting, tending, or clearing back the invasive grasses, could work better.

The approach of largely letting things be was already yielding results, he noted: In parts of the property that hadn’t been grazed in years, they could already see the hardscrabble Cerrado clawing back with a vengeance. They’d been marveling at the fauna, caught on camera traps: tapirs, anteaters, all kinds of birds. They had even spotted a jaguar. The project would ensure that this growth would continue for decades. The land wouldn’t be sold to another rancher and go back to looking like other parts of the property, which were regularly cleared of native habitat. The hope, he said, was that over time the regenerating ecosystems would store more carbon, and generate more credits, than the eucalyptus. (The company intends to submit its carbon plans to Verra later this year.)

We stopped for lunch at the dividing line between the preserve and the eucalyptus, eating ham sandwiches in the shade of the oldest trees on the property, already two stories tall and still, by Guerreiro’s estimate, putting on a centimeter per day. He was planting at a rate of 40,000 seedlings per day in neat trenches filled with white lime to make the sandy Cerrado soil more inviting. In seven years or so, half of the trees will be thinned and pulped. The rest will keep growing. They’ll stand for seven years longer and grow thick and firm enough for plywood. The process will then start anew. Guerreiro described a model where clusters of farms mixed with preserves like this one will be planted around mills throughout the Cerrado. But nothing firm had been decided.

“Under no circumstances should planting eucalyptus ever be considered a viable project to receive carbon credits in the Cerrado,” says Lucy Rowland, an expert on the region at the University of Exeter.
PABLO ALBARENGA

This experiment, Wishnie told me later, could have a big payoff. The important thing, he reminded me, was that stretches of the Cerrado would be protected at a scale no one had achieved before—something that wouldn’t happen without eucalyptus. He strongly disagreed with the scientists who said eucalyptus didn’t fit here. The government had analyzed the watershed, he explained, and he was confident the land could support the trees. At the end of the day, the choice was between doing something and doing nothing. “We talk about restoration as if it’s a thing that happens,” he said. 

When I asked Pilon to take a look at satellite imagery and photos of the property, she was unimpressed. It looked to her like yet another misguided attempt at planting trees in an area that had once naturally been a dense savanna. (Her assessment is supported by a land survey from the 1980s that classified this land as a typical Cerrado ecosystem—some trees, but mostly shrubbery. BTG responded that the survey was incorrect and the satellite images clearly showed a closed-canopy forest.) 

As Lucy Rowland, an expert on the region at the University of Exeter and another BAD signatory, put it: “Under no circumstances should planting eucalyptus ever be considered a viable project to receive carbon credits in the Cerrado.” 

Over months of reporting, the way that both sides spoke in absolutes about how to save this vanishing ecosystem had become familiar. Chazdon, the Australia-based forest researcher, told me she too felt that the tenor of the argument over how and where to grow trees has become more vehement as demand for tree-based carbon removal has intensified. “Nobody’s a villain,” she said. “There are disconnects on both sides.”

Chazdon had been excited to hear about BTG’s project. It was, she thought, the type of thing that was sorely needed in conservation—mixing profitable enterprises with an approach to restoration that considers the wider landscape. “I can understand why the Cerrado ecologists are up in arms,” she said. “They get the feeling that nobody cares about their ecosystems.” But demands for ecological purity could indeed get in the way of doing much of anything—especially in places like the Cerrado, where laws and financing favor destruction over restoration. 

Still, thinking about the scale of the carbon removal problem, she considered it sensible to wonder about the future that was being hatched. While there is, in fact, a limit to how much additional land the world needs for pulp and plywood products in the near future, there is virtually no limit to how much land it could devote to sequestering carbon. Which means we need to ask hard questions about the best way to use it. 

More eucalyptus may support claims about greener paper products, but some argue that it’s not so simple for laptops and smart watches and ChatGPT queries.
PABLO ALBARENGA

It was true, Chazdon said, that planting eucalyptus in the Cerrado was an act of destruction—it’d make that land nearly impossible to recover. The areas preserved in between the plantations would also likely struggle to fully renew themselves without fire or clearing. She would feel more comfortable with such large-scale projects if the bar for restoration were much higher—say, 75% or more. But that almost certainly wouldn’t satisfy her grassland colleagues who don’t want any eucalyptus at all. And it might not fit the profit model—the flywheel that Apple and others are seeking in order to scale up carbon removal fast.

Barbara Haya, who studies carbon offsets at the University of California, Berkeley, encouraged me to think about all of it differently. The improvements to planting eucalyptus here, at this farm, could be a perfectly good thing for this industry, she said. Perhaps they merit some claim about greener toilet paper or plywood. Haya would leave that debate to the ecologists.

But we weren’t talking about toilet paper or plywood. We were talking about laptops and smart watches and ChatGPT. And the path to connecting those things to these trees was more convoluted. The carbon had to be disentangled first from the wood’s other profitable uses and then from the wider changes that were happening in this region and its industries. There seemed to be many plausible scenarios for where this land was heading. Was eucalyptus the only feasible route for carbon to find its way here? 

Haya is among the experts who argue that the idea of precisely canceling out corporate emissions to reach carbon neutrality is a broken one. That’s not to say protecting nature can’t help fight climate change. Conserving existing forests and grasslands, for example, could often yield greater carbon and biodiversity benefits in the long run than planting new forests. But the carbon math used to justify those efforts is often fuzzier. That, she thinks, makes every claim of carbon neutrality fragile and drives companies toward projects that are easier to prove but perhaps have less impact.

One idea is that companies should instead shift to a “contribution” model that tracks how much money they put toward climate mitigation, without worrying about the exact amount of carbon removed. “Let’s say the goal is to save the Cerrado,” Haya said. “Could they put that same amount of money and really make a difference?” Such an approach, she pointed out, could help finance the preservation of those last intact Cerrado remnants. Or it could fund restoration, even if the restored vegetation takes years to grow or sometimes needs to burn. 

The approach raises its own questions—about how to measure the impact of those investments and what kinds of incentives would motivate corporations to act. But it’s a vision that has gained more popularity as scrutiny of carbon credits grows and the options available to companies narrow. With the current state of the world, “what private companies do matters more than ever,” Haya told me. “We need them not to waste money.” 

In the meantime, it’s up to the consumer reading the label to decide what sort of path we’re on. 

“There’s nothing wrong with the trees,” geographer and translator Clariana Vilela Borzone says. “I have to remind myself of that.”
PABLO ALBARENGA

Before we left the farm, Borzone and I had one more task: to plant a tree. The sun was getting low over Project Alpha when I was handed an iron contraption that cradled a eucalyptus seedling, pulled from a tractor piled with plants. 

“There’s nothing wrong with the trees,” Borzone had said earlier, squinting up at the row of 18-month-old eucalyptus, their fluttering leaves flashing in the hot wind as if in an ill-practiced burlesque show. “I have to remind myself of that.” But still it felt strange putting one in the ground. We were asking so much of it, after all. And we were poised to ask more.

I squeezed the handle, pulling the iron hinge taut and forcing the plant deep into the soil. It poked out at a slight angle that I was sure someone else would need to fix later, or else this eucalyptus tree would grow askew. I was slow and clumsy in my work, and by the time I finished, the tractor was far ahead of us, impossibly small on the horizon. The worker grabbed the tool from my hand and headed toward it, pushing seedlings down as he went, hurried but precise, one tree after another.

Gregory Barber is a journalist based in San Francisco. 

This story was produced in partnership with the McGraw Center for Business Journalism at the Craig Newmark Graduate School of Journalism at the City University of New York, with additional support from the Fund for Investigative Journalism.

AI is pushing the limits of the physical world

Architecture often assumes a binary between built projects and theoretical ones. What physics allows in actual buildings, after all, is vastly different from what architects can imagine and design (often referred to as “paper architecture”). That imagination has long been supported and enabled by design technology, but the latest advancements in artificial intelligence have prompted a surge in the theoretical. 

Karl Daubmann, College of Architecture and Design at Lawrence Technological University
“Very often the new synthetic image that comes from a tool like Midjourney or Stable Diffusion feels new,” says Daubmann, “infused by each of the multiple tools but rarely completely derived from them.”

“Transductions: Artificial Intelligence in Architectural Experimentation,” a recent exhibition at the Pratt Institute in Brooklyn, brought together works from over 30 practitioners exploring the experimental, generative, and collaborative potential of artificial intelligence to open up new areas of architectural inquiry—something they’ve been working on for a decade or more, since long before AI became mainstream. Architects and exhibition co-curators Jason Vigneri-Beane, Olivia Vien, Stephen Slaughter, and Hart Marlow explain that the works in “Transductions” emerged out of feedback loops among architectural discourses, techniques, formats, and media that range from imagery, text, and animation to mixed-reality media and fabrication. The aim isn’t to present projects that are going to break ground anytime soon; architects already know how to build things with the tools they have. Instead, the show attempts to capture this very early stage in architecture’s exploratory engagement with AI.

Technology has long enabled architecture to push the limits of form and function. As early as 1963, Sketchpad, one of the first computer-aided design programs, allowed architects and designers to move and change objects on screen. Rapidly, traditional hand drawing gave way to an ever-expanding suite of programs—Revit, SketchUp, and BIM, among many others—that helped create floor plans and sections, track buildings’ energy usage, enhance sustainable construction, and aid in following building codes, to name just a few uses.

The architects exhibiting in “Transductions” view newly evolving forms of AI “like a new tool rather than a profession-ending development,” says Vigneri-Beane, despite what some of his peers fear about the technology. He adds, “I do appreciate that it’s a somewhat unnerving thing for people, [but] I feel a familiarity with the rhetoric.”

After all, he says, AI doesn’t just do the job. “To get something interesting and worth saving in AI, an enormous amount of time is required,” he says. “My architectural vocabulary has gotten much more precise and my visual sense has gotten an incredible workout, exercising all these muscles which have atrophied a little bit.”

Vien agrees: “I think these are extremely powerful tools for an architect and designer. Do I think it’s the entire future of architecture? No, but I think it’s a tool and a medium that can expand the long history of mediums and media that architects can use not just to represent their work but as a generator of ideas.”

Andrew Kudless, Hines College of Architecture and Design
This image, part of the Urban Resolution series, shows how the Stable Diffusion AI model “is unable to focus on constructing a realistic image and instead duplicates features that are prominent in the local latent space,” Kudless says.

Jason Vigneri-Beane, Pratt Institute
“These images are from a larger series on cyborg ecologies that have to do with co-creating with machines to imagine [other] machines,” says Vigneri-Beane. “I might refer to these as cryptomegafauna—infrastructural robots operating at an architectural scale.”

Martin Summers, University of Kentucky College of Design
“Most AI is racing to emulate reality,” says Summers. “I prefer to revel in the hallucinations and misinterpretations like glitches and the sublogic they reveal present in a mediated reality.”

Jason Lee, Pratt Institute
Lee typically uses AI “to generate iterations or high-resolution sketches,” he says. “I am also using it to experiment with how much realism one can incorporate with more abstract representation methods.”

Olivia Vien, Pratt Institute
For the series Imprinting Grounds, Vien created images digitally and fed them into Midjourney. “It riffs on the ideas of damask textile patterns in a more digital realm,” she says.

Robert Lee Brackett III, Pratt Institute
“While new software raises concerns about the absence of traditional tools like hand drawing and modeling, I view these technologies as collaborators rather than replacements,” Brackett says.

How creativity became the reigning value of our time

Americans don’t agree on much these days. Yet even at a time when consensus reality seems to be on the verge of collapse, there remains at least one quintessentially modern value we can all still get behind: creativity. 

We teach it, measure it, envy it, cultivate it, and endlessly worry about its death. And why wouldn’t we? Most of us are taught from a young age that creativity is the key to everything from finding personal fulfillment to achieving career success to solving the world’s thorniest problems. Over the years, we’ve built creative industries, creative spaces, and creative cities and populated them with an entire class of people known simply as “creatives.” We read thousands of books and articles each year that teach us how to unleash, unlock, foster, boost, and hack our own personal creativity. Then we read even more to learn how to manage and protect this precious resource. 

Given how much we obsess over it, the concept of creativity can feel like something that has always existed, a thing philosophers and artists have pondered and debated throughout the ages. While it’s a reasonable assumption, it’s one that turns out to be very wrong. As Samuel Franklin explains in his recent book, The Cult of Creativity, the first known written use of creativity didn’t actually occur until 1875, “making it an infant as far as words go.” What’s more, he writes, before about 1950, “there were approximately zero articles, books, essays, treatises, odes, classes, encyclopedia entries, or anything of the sort dealing explicitly with the subject of ‘creativity.’”

This raises some obvious questions. How exactly did we go from never talking about creativity to always talking about it? What, if anything, distinguishes creativity from other, older words, like ingenuity, cleverness, imagination, and artistry? Maybe most important: How did everyone from kindergarten teachers to mayors, CEOs, designers, engineers, activists, and starving artists come to believe that creativity isn’t just good—personally, socially, economically—but the answer to all life’s problems?

Thankfully, Franklin offers some potential answers in his book. A historian and design researcher at the Delft University of Technology in the Netherlands, he argues that the concept of creativity as we now know it emerged during the post–World War II era in America as a kind of cultural salve—a way to ease the tensions and anxieties caused by increasing conformity, bureaucracy, and suburbanization.

“Typically defined as a kind of trait or process vaguely associated with artists and geniuses but theoretically possessed by anyone and applicable to any field, [creativity] provided a way to unleash individualism within order,” he writes, “and revive the spirit of the lone inventor within the maze of the modern corporation.”

Brainstorming, a new method for encouraging creative thinking, swept corporate America in the 1950s. A response to pressure for new products and new ways of marketing them, as well as a panic over conformity, it inspired passionate debate about whether true creativity should be an individual affair or could be systematized for corporate use.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

I spoke to Franklin about why we continue to be so fascinated by creativity, how Silicon Valley became the supposed epicenter of it, and what role, if any, technologies like AI might have in reshaping our relationship with it. 

I’m curious what your personal relationship to creativity was growing up. What made you want to write a book about it?

Like a lot of kids, I grew up thinking that creativity was this inherently good thing. For me—and I imagine for a lot of other people who, like me, weren’t particularly athletic or good at math and science—being creative meant you at least had some future in this world, even if it wasn’t clear what that future would entail. By the time I got into college and beyond, the conventional wisdom among the TED Talk register of thinkers—people like Daniel Pink and Richard Florida—was that creativity was actually the most important trait to have for the future. Basically, the creative people were going to inherit the Earth, and society desperately needed them if we were going to solve all of these compounding problems in the world. 

On the one hand, as someone who liked to think of himself as creative, it was hard not to be flattered by this. On the other hand, it all seemed overhyped to me. What was being sold as the triumph of the creative class wasn’t actually resulting in a more inclusive or creative world order. What’s more, some of the values embedded in what I call the cult of creativity seemed increasingly problematic—specifically, the focus on self-realization, doing what you love, and following your passion. Don’t get me wrong—it’s a beautiful vision, and I saw it work out for some people. But I also started to feel like it was just a cover for what was, economically speaking, a pretty bad turn of events for many people.

Staff members at the University of California’s Institute of Personality Assessment and Research simulate a situational procedure involving group interaction, called the Bingo Test. Researchers of the 1950s hoped to learn how factors in people’s lives and environments shaped their creative aptitude.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

Nowadays, it’s quite common to bash the “follow your passion,” “hustle culture” idea. But back when I started this project, the whole move-fast-and-break-things, disrupter, innovation-economy stuff was very much unquestioned. In a way, the idea for the book came from recognizing that creativity was playing this really interesting role in connecting two worlds: this world of innovation and entrepreneurship and this more soulful, bohemian side of our culture. I wanted to better understand the history of that relationship.

When did you start thinking about creativity as a kind of cult, one that we’re all a part of?

Similar to something like the “cult of domesticity,” it was a way of describing a historical moment in which an idea or value system achieves a kind of broad, uncritical acceptance. I was finding that everyone was selling stuff based on the idea that it boosted your creativity, whether it was a new office layout, a new kind of urban design, or the “Try these five simple tricks” type of thing. 

You start to realize that nobody is bothering to ask, “Hey, uh, why do we all need to be creative again? What even is this thing, creativity?” It had become this unimpeachable value that no one, regardless of what side of the political spectrum they fell on, would even think to question. That, to me, was really unusual, and I think it signaled that something interesting was happening.

Your book highlights midcentury efforts by psychologists to turn creativity into a quantifiable mental trait and the “creative person” into an identifiable type. How did that play out? 

The short answer is: not very well. To study anything, you of course need to agree on what it is you’re looking at. Ultimately, I think these groups of psychologists were frustrated in their attempts to come up with scientific criteria that defined a creative person. One technique was to go find people who were already eminent in fields that were deemed creative—writers like Truman Capote and Norman Mailer, architects like Louis Kahn and Eero Saarinen—and just give them a battery of cognitive and psychoanalytic tests and then write up the results. This was mostly done by an outfit called the Institute of Personality Assessment and Research (IPAR) at Berkeley. Frank Barron and Don MacKinnon were the two biggest researchers in that group.

Another way psychologists went about it was to say, all right, that’s not going to be practical for coming up with a good scientific standard. We need numbers, and lots and lots of people to certify these creative criteria. This group of psychologists theorized that something called “divergent thinking” was a major component of creative accomplishment. You’ve heard of the brick test, where you’re asked to come up with many creative uses for a brick in a given amount of time? They basically gave a version of that test to Army officers, schoolchildren, rank-and-file engineers at General Electric, all kinds of people. It’s tests like those that ultimately became stand-ins for what it means to be “creative.”

Are they still used? 

When you see a headline about AI making people more creative, or actually being more creative than humans, the tests they are basing that assertion on are almost always some version of a divergent thinking test. It’s highly problematic for a number of reasons. Chief among them is the fact that these tests have never been shown to have predictive value—that’s to say, a third grader, a 21-year-old, or a 35-year-old who does really well on divergent thinking tests doesn’t seem to have any greater likelihood of being successful in creative pursuits. The whole point of developing these tests in the first place was to both identify and predict creative people. None of them have been shown to do that. 

Reading your book, I was struck by how vague and, at times, contradictory the concept of “creativity” was from the beginning. You characterize that as “a feature, not a bug.” How so?

Ask any creativity expert today what they mean by “creativity,” and they’ll tell you it’s the ability to generate something new and useful. That something could be an idea, a product, an academic paper—whatever. But the focus on novelty has remained an aspect of creativity from the beginning. It’s also what distinguishes it from other similar words, like imagination or cleverness. But you’re right: Creativity is a flexible enough concept to be used in all sorts of ways and to mean all sorts of things, many of them contradictory. I think I write in the book that the term may not be precise, but that it’s vague in precise and meaningful ways. It can be both playful and practical, artsy and technological, exceptional and pedestrian. That was and remains a big part of its appeal. 

The question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming [AI] into our lives as advisors and assistants.

Is that emphasis on novelty and utility a part of why Silicon Valley likes to think of itself as the new nexus for creativity?

Absolutely. The two criteria go together. In techno-solutionist, hypercapitalist milieus like Silicon Valley, novelty isn’t any good if it’s not useful (or at least marketable), and utility isn’t any good (or marketable) unless it’s also novel. That’s why they’re often dismissive of boring-but-important things like craft, infrastructure, maintenance, and incremental improvement, and why they support art—which is traditionally defined by its resistance to utility—only insofar as it’s useful as inspiration for practical technologies.

At the same time, Silicon Valley loves to wrap itself in “creativity” because of all the artsy and individualist connotations. It has very self-consciously tried to distance itself from the image of the buttoned-down engineer working for a large R&D lab of a brick-and-mortar manufacturing corporation and instead raise up the idea of a rebellious counterculture type tinkering in a garage making weightless products and experiences. That, I think, has saved it from a lot of public scrutiny.

Up until recently, we’ve tended to think of creativity as a human trait, maybe with a few exceptions from the rest of the animal world. Is AI changing that?

When people started defining creativity in the ’50s, the threat of computers automating white-collar work was already looming. They were basically saying, okay, rational and analytical thinking is no longer ours alone. What can we do that the computers can never do? And the assumption was that humans alone could be “truly creative.” For a long time, computers didn’t do much to really press the issue on what that actually meant. Now they’re pressing the issue. Can they do art and poetry? Yes. Can they generate novel products that also make sense or work? Sure.

I think that’s by design. The kinds of LLMs that Silicon Valley companies have put forward are meant to appear “creative” in those conventional senses. Now, whether or not their products are meaningful or wise in a deeper sense, that’s another question. If we’re talking about art, I happen to think embodiment is an important element. Nerve endings, hormones, social instincts, morality, intellectual honesty—those are not things essential to “creativity” necessarily, but they are essential to putting things out into the world that are good, and maybe even beautiful in a certain antiquated sense. That’s why I think the question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming them into our lives as advisors and assistants. 

This interview is based on two conversations and has been edited and condensed for clarity.

Bryan Gardiner is a writer based in Oakland, California.

Inside a romance scam compound—and how people get tricked into being there

Heading north in the dark, the only way Gavesh could try to track his progress through the Thai countryside was by watching the road signs zip by. The Jeep’s three occupants—Gavesh, a driver, and a young Chinese woman—had no languages in common, so they drove for hours in nervous silence as they wove their way out of Bangkok and toward Mae Sot, a city on Thailand’s western border with Myanmar.

When they reached the city, the driver pulled off the road toward a small hotel, where another car was waiting. “I had some suspicions—like, why are we changing vehicles?” Gavesh remembers. “But it happened so fast.”

They left the highway and drove on until, in total darkness, they parked at what looked like a private house. “We stopped the vehicle. There were people gathered. Maybe 10 of them. They took the luggage and they asked us to come,” Gavesh says. “One was going in front, there was another one behind, and everyone said: ‘Go, go, go.’” 

Gavesh and the Chinese woman were marched through the pitch-black fields by flashlight to a riverside where a boat was moored. By then, it was far too late to back out.

Gavesh’s journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.

Instead, he found himself trafficked into a business commonly known as “pig butchering”—a form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar. 

We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized. By the same token, it is Big Tech that may hold the key to breaking up the scam syndicates—if only these companies can be persuaded or compelled to act.


We’re identifying Gavesh using a pseudonym to protect his identity. He is from a country in South Asia, one he asked us not to name. He hasn’t shared his story much, and he still hasn’t told his family. He worries about how they’d handle it. 

Until the pandemic, he had held down a job in the tourism industry. But lockdowns had gutted the sector, and two years later he was working as a day laborer to support himself and his father and sister. “I was fed up with my life,” he says. “I was trying so hard to find a way to get out.”

When he saw the Facebook post in mid-2022, it seemed like a godsend. A company in Thailand was looking for English-speaking customer service and data entry specialists. The monthly salary was $1,500—far more than he could earn at home—with meals, travel costs, a visa, and accommodation included. “I knew if I got this job, my life would turn around. I would be able to give my family a good life,” Gavesh says.

What came next was life-changing, but not in the way Gavesh had hoped. The advert was a fraud—and a classic tactic syndicates use to force workers like Gavesh into an economy that operates as something like a dark mirror of the global outsourcing industry. 

The true scale of this type of fraud is hard to estimate, but the United Nations reported in 2023 that hundreds of thousands of people had been trafficked to work as online scammers in Southeast Asia. One 2024 study, from the University of Texas, estimates that the criminal syndicates that run these businesses have stolen at least $75 billion since 2020. 

These schemes have been going on for more than two decades, but they’ve started to capture global attention only recently, as the syndicates running them increasingly shift from Chinese targets toward the West. And even as investigators, international organizations, and journalists gradually pull back the curtain on the brutal conditions inside scamming compounds and document their vast scale, what is far less exposed is the pivotal role platforms owned by Big Tech play throughout the industry—from initially coercing individuals to become scammers to, finally, duping scam targets out of their life savings. 

As losses mount, governments and law enforcement agencies have looked for ways to disrupt the syndicates, which have become adept at using ungoverned spaces in lawless borderlands and partnering with corrupt regimes. But on the whole, the syndicates have managed to stay a step ahead of law enforcement—in part by relying on services from the world’s tech giants. Apple iPhones are their preferred scamming tools. Meta-owned Facebook and WhatsApp are used to recruit people into forced labor, as is Telegram. Social media and messaging platforms, including Facebook, Instagram, WhatsApp, WeChat, and X, provide spaces for scammers to find and lure targets. So do dating apps, including Tinder. Some of the scam compounds have their own Starlink terminals. And cryptocurrencies like tether and global crypto platforms like Binance have allowed the criminal operations to move money with little or no oversight.

Scam workers sit inside Myanmar’s KK Park, a notorious fraud hub near the border with Thailand, following a recent crackdown by law enforcement.
REUTERS

“Private-sector corporations are, unfortunately, inadvertently enabling this criminal industry,” says Andrew Wasuwongse, the Thailand country director at the anti-trafficking nonprofit International Justice Mission (IJM). “The private sector holds significant tools and responsibility to disrupt and prevent its further growth.”

Yet while the tech sector has, slowly, begun to roll out anti-scam tools and policies, experts in human trafficking, platform integrity, and cybercrime tell us that these measures largely focus on the downstream problem: the losses suffered by the victims of the scams. That approach overlooks the other set of victims, often from lower-income countries, at the far end of a fraud “supply chain” that is built on human misery—and on Big Tech. Meanwhile, the scams continue on a mass scale.

Tech companies could certainly be doing more to crack down, the experts say. Even relatively small interventions, they argue, could start to erode the business model of the scam syndicates; with enough of these, the whole business could start to founder. 

“The trick is: How do you make it unprofitable?” says Eric Davis, a platform integrity expert and senior vice president of special projects at the Institute for Security and Technology (IST), a think tank in California. “How do you create enough friction?”

That question is only becoming more urgent as many tech companies pull back on efforts to moderate their platforms, artificial intelligence supercharges scam operations, and the Trump administration signals broad support for deregulation of the tech sector while withdrawing support from organizations that study the scams and support the victims. All these trends may further embolden the syndicates. And even as the human costs keep building, global governments exert ineffectual pressure—if any at all—on the tech sector to turn its vast financial and technical resources against a criminal economy that has thrived in the spaces Silicon Valley built. 


Capturing a vulnerable workforce

The roots of “pig butchering” scams reach back to the offshore gambling industry that emerged from China in the early 2000s. Online casinos had become hugely popular in China, but the government cracked down, forcing the operators to relocate to Cambodia, the Philippines, Laos, and Myanmar. There, they could continue to target Chinese gamblers with relative impunity. Over time, the casinos began to use social media to entice people back home, deploying scam-like tactics that frequently centered on attractive and even nude dealers.

The doubts didn’t really start until after Gavesh reached Bangkok’s Suvarnabhumi Airport. As time ticked by, it began to occur to him that he was alone, with no money, no return ticket, and no working SIM card.

“Often the romance scam was a part of that—building romantic relationships with people that you eventually would aim to hook,” says Jason Tower, Myanmar country director at the United States Institute of Peace (USIP), a research and diplomacy organization funded by the US government, who researches the cyber scam industry. (USIP’s leadership was recently targeted by the Trump administration and Elon Musk’s Department of Government Efficiency task force, leaving the organization’s future uncertain; its website, which previously housed its research, is also currently offline.)

By the late 2010s, many of the casinos were big, professional operations. Gradually, says Tower, the business model turned more sinister, with a tactic called sha zhu pan in Chinese emerging as a core strategy. Scamming operatives work to “fatten up” or cultivate a target by building a relationship before going in for the “slaughter”—persuading them to invest in a supposedly once-in-a-lifetime scheme and then absconding with the money. “That actually ended up being much, much more lucrative than online gambling,” Tower says. (The international law enforcement organization Interpol no longer uses the graphic term “pig butchering,” citing concerns that it dehumanizes and stigmatizes victims.) 

Like other online industries, the romance scamming business was supercharged by the pandemic. There were simply more isolated people to defraud, and more people out of work who might be persuaded to try scamming others—or who were vulnerable to being trafficked into the industry.

Initially, most of the workers carrying out the frauds were Chinese, as were the fraud victims. But after the government in Beijing tightened travel restrictions, making it hard to recruit Chinese laborers, the syndicates went global. They started targeting more Western markets and turning, Tower says, to “much more malign types of approaches to tricking people into scam centers.” 


Getting recruited

Gavesh was scrolling through Facebook when he saw the ad. He sent his résumé to a Telegram contact number. A human resources representative replied and had him demonstrate his English and typing skills over video. It all felt very professional. “I didn’t have any reason to suspect,” he says.

The doubts didn’t really start until after he reached Bangkok’s Suvarnabhumi Airport. After being met at arrivals by a man who spoke no English, he was left to wait. As time ticked by, it began to occur to Gavesh that he was alone, with no money, no return ticket, and no working SIM card. Finally, the Jeep arrived to pick him up.

Hours later, exhausted, he was on a boat crossing the Moei River from Thailand into Myanmar. On the far bank, a group was waiting. One man was in military uniform and carried a gun. “In my country, if we see an army guy when we are in trouble, we feel safe,” Gavesh says. “So my initial thoughts were: Okay, there’s nothing to be worried about.”

They hiked a kilometer across a sodden paddy field and emerged at the other side caked in mud. There a van was parked, and the driver took them to what he called, in broken English, “the office.” They arrived at the gate of a huge compound, surrounded by high walls topped with barbed wire. 

While some people are drawn into online scamming directly by friends and relatives, Facebook is, according to IJM’s Wasuwongse, the most common entry point for people recruited on social media. 

Meta has known for years that its platforms host this kind of content. Back in 2019, the BBC exposed “slave markets” that were running on Instagram; in 2021, the Wall Street Journal reported, drawing on documents leaked by a whistleblower, that Meta had long struggled to rein in the problem but took meaningful action only after Apple threatened to pull Instagram from its app store. 

Today, years on, ads like the one that Gavesh responded to are still easy to find on Facebook if you know what to look for.

Examples of fraudulent Facebook ads, shared by International Justice Mission.

They are typically posted in job seekers’ groups and usually seem to be advertising legitimate jobs in areas like customer service. They offer attractive wages, especially for people with language skills—usually English or Chinese. 

The traffickers tend to finish the recruitment process on encrypted or private messaging apps. In our research, many experts said that Telegram, which is notorious for hosting terrorist content, child sexual abuse material, and other communication related to criminal activity, was particularly problematic. Many spoke with a combination of anger and resignation about its apparent lack of interest in working with them to address the problem; Mina Chiang, founder of Humanity Research Consultancy, an anti-trafficking organization, accuses the app of being “very much complicit” in human trafficking and “proactively facilitating” these scams. (Telegram did not respond to a request for comment.)

But while Telegram users have the option of encrypting their messages end to end, making them almost impossible to monitor, social media companies are of course able to access users’ posts. And it’s here, at the beginning of the romance scam supply chain, where Big Tech could arguably make its most consequential intervention. 

Social media is monitored by a combination of human moderators and AI systems, which help flag users and content—ads, posts, pages—that break the law or violate the companies’ own policies. Dangerous content is easiest to police when it follows predictable patterns or is posted by users acting in distinctive and suspicious ways.

“They have financial resources. You can hire the most talented coding engineers in the world. Why can’t you just find people who understand the issue properly?”

Anti-trafficking experts say the scam advertising tends to follow formulaic templates and use common language, and that they routinely report the ads to Meta and point out the markers they have identified. Their hope is that this information will be fed into the data sets that train the content moderation models. 

While individual ads may be taken down, even in big waves—last November, Meta said it had purged 2 million accounts connected to scamming syndicates over the previous year—experts say that Facebook still continues to be used in recruiting. And new ads keep appearing. 

(In response to a request for comment, a Meta spokesperson shared links to policies about bans on content or advertisements that facilitate human trafficking, as well as company blog posts telling users how to protect themselves from romance scams and sharing details about the company’s efforts to disrupt fraud on its platforms, one stating that it is “constantly rolling out new product features to help protect people on [its] apps from known scam tactics at scale.” The spokesperson also said that WhatsApp has spam detection technology, and millions of accounts are banned per month.)

Anti-trafficking experts we spoke with say that as recently as last fall, Meta was engaging with them and had told them it was ramping up its capabilities. But Chiang says there still isn’t enough urgency from tech companies. “There’s a question about speed. They might be able to say, ‘That’s the goal for the next two years.’ No. But that’s not fast enough. We need it now,” she says. “They have financial resources. You can hire the most talented coding engineers in the world. Why can’t you just find people who understand the issue properly?”

Part of the answer comes down to money, according to experts we spoke with. Scaling up content moderation and other processes that could cause users to be kicked off a platform requires not only technological staff but also legal and policy experts—which not everyone sees as worth the cost. 

“The vast majority of these companies are doing the minimum or less,” says Tower of USIP. “If not properly incentivized, either through regulatory action or through exposure by media or other forms of pressure … often, these companies will underinvest in keeping their platforms safe.”


Getting set up

Gavesh’s new “office” turned out to be one of the most infamous scamming hubs in Southeast Asia: KK Park in Myanmar’s Myawaddy region. Satellite imagery shows it as a densely packed cluster of buildings, surrounded by fields. Most of it has been built since late 2019. 

Inside, it runs like a hybrid of a company campus and a prison. 

When Gavesh arrived, he handed over his phone and passport and was assigned to a dormitory and an employer. He was allowed his own phone back only for short periods, and his calls were monitored. Security was tight. He had to pass through airport-style metal detectors when he went in or out of the office. Black-uniformed personnel patrolled the buildings, while armed men in combat fatigues watched the perimeter fences from guard posts. 

On his first full day, he was put in front of a computer with just four documents on it, which he had to read over and over—guides on how to approach strangers. On his second day, he learned to build fake profiles on social media and dating apps. The trick was to find real people on Instagram or Facebook who were physically attractive, posted often, and appeared to be wealthy and living “a luxurious life,” he says, and use their photos to build a new account: “There are so many Instagram models that pretend they have a lot of money.”

After Gavesh was trafficked into Myanmar, he was taken to KK Park. Most of the compound has been built since late 2019.
LUKE DUGGLEBY/REDUX

Next, he was given a batch of iPhone 8s—most people on his team used between eight and 10 devices each—loaded with local SIM cards and apps that spoofed their location so that they appeared to be in the US. Using male and female aliases, he set up dozens of accounts on Facebook, WhatsApp, Telegram, Instagram, and X and profiles on several dating platforms, though he can’t remember exactly which ones. 

Different scamming operations teach different techniques for finding and reaching out to potential victims, several people who worked in the compounds tell us. Some people used direct approaches on dating apps, Facebook, Instagram, or—for those targeting Chinese victims—WeChat. One worker from Myanmar sent out mass messages on WhatsApp, pretending to have accidentally messaged a wrong number, in the hope of striking up a conversation. (Tencent, which owns WeChat, declined to comment.)

Some scamming workers we spoke to were told to target white, middle-aged or older men in Western countries who seemed to be well off. Gavesh says he would pretend to be white men and women, using information found from Google to add verisimilitude to his claims of living in, say, Miami Beach. He would chat with the targets, trying to figure out from their jobs, spending habits, and ambitions whether they’d be worth investing time in.

One South African woman, trafficked to Myanmar in 2022, says she was given a script and told to pose as an Asian woman living in Chicago. She was instructed to study her assigned city and learn quotidian details about life there. “They kept on punishing people all the time for not knowing or for forgetting that they’re staying in Chicago,” she says, “or for forgetting what’s Starbucks or what’s [a] latte.” 

Fake users have, of course, been a problem on social media platforms and dating sites for years. Some platforms, such as X, allow practically anyone to create accounts and even to have them verified for a fee. Others, including Facebook, have periodically conducted sweeps to get rid of fake accounts engaged in what Meta calls “coordinated inauthentic behavior.” (X did not respond to requests for comment.)

But scam workers tell us they were advised on simple ways to circumvent detection mechanisms on social media. They were given basic training in how to avoid suspicious behavior such as adding too many contacts too quickly, which might trigger the company to review whether someone’s profile is authentic. The South African woman says she was shown how to manipulate the dates on a Facebook account “to seem as if you opened the account in 2019 or whatever,” making it easier to add friends. (Meta’s spam filters—meant to reduce the spread of unwanted content—include limits on friend requests and bulk messaging.)

Wang set up a Tinder profile with a picture of a dog and a bio that read, “I am a dog.” It passed through the platform’s verification system without a hitch.

Dating apps, whose users generally hope to meet other users in real life, have a particular need to make sure that people are who they say they are. But Match Group, the parent company of Tinder, ended its partnership with a company doing background checks in 2023. It now encourages users to verify their profile with a selfie and further ID checks, though insiders say these systems are often rudimentary. “They just check a box and [do] what is legally required or what will make the media get off of [their] case,” says one tech executive who has worked with multiple dating apps on safety systems, speaking on the condition of anonymity because they were not permitted to speak about their work with certain companies. 

Fangzhou Wang, an assistant professor at the University of Texas at Arlington who studies romance scams, ran a test: She set up a Tinder profile with a picture of a dog and a bio that read, “I am a dog.” It passed through the platform’s verification system without a hitch. “They are not providing enough security measures to filter out fraudulent profiles,” Wang says. “Everybody can create anything.”

Like recruitment ads, the scam profiles tend to follow patterns that should raise red flags. They use photos copied from existing users or made by artificial intelligence, and the accounts are sometimes set up using phone numbers generated by voice-over-internet-protocol services. Then there’s the scammers’ behavior: They swipe too fast, or spend too much time logged in. “A normal human doesn’t spend … eight hours on a dating app a day,” the tech executive says. 

What’s more, scammers use the same language over and over again as they reach out to potential targets. “The majority of them are using predesigned scripts,” says Wang. 

It would be fairly easy for platforms to detect these signs and either stop accounts from being created or make the users go through further checks, experts tell us. Signals of some of these behaviors “can potentially be embedded into a type of machine-learning algorithm,” Wang says. She approached Tinder a few years ago with her research into the language that scammers use on the platforms, and offered to help build data sets for its moderation models. She says the company didn’t reply. 

(In a statement, Yoel Roth, vice president of trust and safety at Match Group, said that the company invests in “proactive tools, advanced detection systems and user education to help prevent harm.” He wrote, “We use proprietary AI-powered tools to help identify scammer messaging, and unlike many platforms, we moderate messages, which allows us to detect suspicious patterns early and act quickly,” adding that the company has recently worked with Reality Defender, a provider of deepfake detection tools, to strengthen its ability to detect AI-generated content. A company spokesperson reported having no record of Wang’s outreach but said that the company “welcome[s] collaboration and [is] always open to reviewing research that can help strengthen user safety.”)

A recent investigation published in The Markup found that Match Group has long possessed the tools and resources to track sex offenders and other bad actors but has resisted efforts to roll out safety protocols for fear they might slow growth. 

This tension, between the desire to keep increasing the number of users and the need to ensure that these users and their online activity are authentic, is often behind safety issues on platforms. While no platform wants to be a haven for fraudsters, identity verification creates friction for users, which stops real people as well as impostors from signing up. And again, cracking down on platform violations costs money.

According to Josh Kim, an economist who works in Big Tech, it would be costly for tech companies to build out the legal, policy, and operational teams for content moderation tools that could get users kicked off a platform—and the expense is one companies may find hard to justify in the current business climate. “The shift toward profitability means that you have to be very selective in … where you invest the resources that you have,” he says.

“My intuition here is that unless there are fines or pressure from governments or regulatory agencies or the public themselves,” he adds, “the current atmosphere in the tech ecosystem is to focus on building a product that is profitable and grows fast, and things that don’t contribute to those two points are probably being deprioritized.”


Getting online—and staying in line

At work, Gavesh wore a blue tag, marking him as belonging to the lowest rank of workers. “On top of us are the ones who are wearing the yellow tags—they call themselves HR or translators, or office guys,” he says. “Red tags are team leaders, managers … And then moving from that, they have black and ash tags. Those are the ones running the office.” Most of the latter were Chinese, Gavesh says, as were the really “big bosses,” who didn’t wear tags at all.

Within this hierarchy operated a system of incentives and punishments. Workers who followed orders and proved successful at scamming could rise through the ranks to training or supervisory positions, and gain access to perks like restaurants and nightclubs. Those who failed to meet the targets or broke the rules faced violence and humiliation. 

Gavesh says he was once beaten because he broke an unwritten rule that it was forbidden to cross your legs at work. Yawning was banned, and bathroom breaks were limited to two minutes at a time. 

Rows of workers lit by their screens.

KATHERINE LAM

Beatings were usually conducted in the open, though the most severe punishments at Gavesh’s company happened in a room called the “water jail.” One day a coworker was there alongside the others, “and the next day he was not,” Gavesh recalls. When the colleague was brought back to the office, he had been so badly beaten he couldn’t walk or speak. “They took him to the front, and they said: ‘If you do not listen to us, this is what will happen to you.’”

Gavesh was desperate to leave but felt there was no chance of escaping. The armed guards seemed ready to shoot, and there were rumors in the compound that some people who jumped the fence had been found drowned in the river. 

This kind of physical and psychological abuse is routine across the industry. Gavesh and others we spoke to describe working 12 hours or more a day, without days off. They faced strict quotas for the number of scam targets they had to have on the hook. If they failed to reach them, they were punished. The UN has documented cases of torture, arbitrary detention, and sexual violence in the compounds. We heard accounts of people made to perform calisthenics and being thrashed on the backside in front of other workers. 

Even if someone could escape, there is often no authority to appeal to on the outside. KK Park and other scam factories in Myanmar are situated in a geopolitical gray zone—borderlands where criminal enterprises have based themselves for decades, trading in narcotics and other unlawful industries. Armed groups, some of them operating under the command of the military, are credibly believed to profit directly from the trade in people and contraband in these areas, in some cases facing international sanctions as a result. Illicit industries in Myanmar have only expanded since a military coup in 2021. By August 2023, according to UN estimates, more than 120,000 people were being held in the country for the purposes of forced scamming, making it the largest hub for the frauds in Southeast Asia. 


In at least some attempt to get a handle on this lawlessness, Thailand tried to cut off internet services for some compounds across its western border starting last May. Syndicates adapted by running fiber-optic cables across the river. When some of those were discovered, they were severed by Thai authorities. Thailand again ramped up its crackdowns on the industry earlier this year, with tactics that included cutting off internet, gas, and electricity to known scamming enclaves, following the trafficking of a Chinese celebrity through Thailand into Myanmar. 

Still, the scammers keep adapting—again, using Western technology. “We’ve started to see and hear of Starlink systems being used by these compounds,” says Eric Heintz, a global analyst at IJM.

While the military junta has criminalized the use of unauthorized satellite internet service, intercepted shipments and raids on scamming centers over the past year indicate that syndicates smuggle in equipment. The crackdowns seem to have had a limited impact—a Wired investigation published in February found that scamming networks appeared to be “widely using” Starlink in Myanmar. The journalist, using mobile-phone connection data collected by an online advertising industry tool, identified eight known scam compounds on the Myanmar-Thailand border where hundreds of phones had used Starlink more than 40,000 times since November 2024. He also identified photos that appeared to show dozens of Starlink satellite dishes on a scamming compound rooftop.

Starlink could provide another prime opportunity for systematic efforts to interrupt the scams, particularly since it requires a subscription and is able to geofence its services. “I could give you coordinates of where some of these [scamming operations] are, like IP addresses that are connecting to them,” Heintz says. “That should make a huge paper trail.” 

Starlink’s parent company, SpaceX, has previously limited access in areas of Ukraine under Russian occupation, after all. Its policies also state that SpaceX may terminate Starlink services to users who participate in “fraudulent” activities. (SpaceX did not respond to a request for comment.)

Knowing the locations of scam compounds could also allow Apple to step in: Workers rely on iPhones to make contact with victims, and these have to be associated with an Apple ID, even if the workers use apps to spoof their addresses. 

As Heintz puts it, “[If] you have an iCloud account with five phones, and you know that those phones’ GPS antenna locates those phones inside a known scam compound, then all of those phones should be bricked. The account should be locked.” 

(Apple did not provide a response to a request for comment.)

“This isn’t like the other trafficking cases that we’ve worked on, where we’re trying to find a boat in the middle of the ocean,” Heintz adds. “These are city-size compounds. We all know where they are, and we’ve watched them being built via satellite imagery. We should be able to do something location-based to take these accounts offline.”


Getting paid

Once Gavesh developed a relationship on social media or a dating site, he was supposed to move the conversation to WhatsApp. That platform is end-to-end encrypted, meaning even Meta can’t read the content of messages—although it should be possible for the company to spot a user’s unusual patterns of behavior, like opening large numbers of WhatsApp accounts or sending numerous messages in a short span of time.

“If you have an account that is suddenly adding people in large quantities all over the world, should you immediately flag it and freeze that account or require that that individual verify his or her information?” USIP’s Tower says.

After cultivating targets’ trust, scammers would inevitably shift the conversation to the subject of money. Having made themselves out to be living a life of luxury, they would offer a chance to share in the secrets of their wealth. Gavesh was taught to make the approach as if it were an extension of an existing intimacy. “I would not show this platform to anyone else,” he says he was supposed to say. “But since I feel like you are my life partner, I feel like you are my future.”

Lower-level workers like Gavesh were only expected to get scamming targets on the hook; then they’d pass off the relationship to a manager. From there, there is some variation in the approach, but the target is sometimes encouraged to set up an account with a mainstream crypto exchange and buy some tokens. Then the scammer sends the victim—or “customer,” as some workers say they called these targets—a link to a convincing, but fake, crypto investment platform.

After the target invests an initial amount of money, the scammer typically sends fake investment return charts that seem to show the value of that stake rising and rising. To demonstrate good faith, the scammer sends a few hundred dollars back to the victim’s crypto wallet, all the while working to convince the mark to keep investing. Then, once the customer is all in, the scammer goes in for the kill, using every means possible to take more money. “We [would] pull out bigger amounts from the customers and squeeze them out of their possessions,” one worker tells us.  

The design of cryptocurrency allows some degree of anonymity, but with enough time, persistence, and luck, it’s possible to figure out where tokens are flowing. It’s also possible, though even more difficult, to discover who owns the crypto wallets.

In early 2024, University of Texas researchers John M. Griffin and Kevin Mei published a paper that followed money from crypto wallets associated with scammers. They tracked hundreds of thousands of transactions, collectively worth billions of dollars—money that was transferred in and out of mainstream exchanges, including Binance, Coinbase, and Crypto.com. 

Scam workers spend time gaining the trust of their targets, often by deploying fraudulent personas and developing romantic relationships.
REUTERS/CARLOS BARRIA

Some scam syndicates would move crypto off these big exchanges, launder it through anonymous platforms known as mixers (which can be used to obscure crypto transactions), and then come back to the exchanges to cash out into fiat currency such as dollars.

Griffin and Mei were able to identify deposit addresses on Binance and smaller platforms, including Hong Kong–based Huobi and Seychelles-based OKX, that were collectively receiving billions of dollars from suspected scams. These addresses were being used over and over again to send and receive money, “suggesting limited monitoring by crypto exchanges,” the authors wrote.

(We were unable to reach OKX for comment; Coinbase and Huobi did not respond to requests for comment. A Binance spokesperson said that the company disputes the findings of the University of Texas study, alleging that they are “misleading at best and, at worst, wildly inaccurate.” The spokesperson also said that the company has extensive know-your-customer requirements, uses internal and third-party tools to spot illicit activity, freezes funds, and works with law enforcement to help reclaim stolen assets, claiming to have “proactively prevented $4.2 billion in potential losses for 2.8 million users from scams and frauds” and “recovered $88 million in stolen or misplaced funds” last year. A Crypto.com spokesperson said that the company is “committed to security, compliance and consumer protection” and that it uses “robust” transaction monitoring and fraud detection controls, “rigorously investigates accounts flagged for potential fraudulent activity or victimization,” and has internal blacklisting processes for wallet addresses known to be linked to scams.)

But while tracking illicit payments through the crypto ecosystem is possible, it’s “messy” and “complicated” to actually pin down who owns a scam wallet, according to Griffin Hotchkiss, a writer and use-case researcher at the Ethereum Foundation who has worked on crypto projects in Myanmar and who spoke in his personal capacity. Investigators have to build models that connect users to accounts by the flows of money going through them, which involves a degree of “guesswork” and “red string and sticky notes on the board trying to trace the flow of funds,” he says.
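As a rough illustration of what that “red string” work involves, here is a minimal sketch of how an investigator might walk outgoing transfers from wallets linked to scams until the money lands at an exchange deposit address. The wallet graph below is invented for the example; real analyses run over full blockchain data and still involve the guesswork Hotchkiss describes.

```python
# An illustrative sketch of the kind of flow-tracing Griffin, Mei, and other
# investigators describe: start from wallets tied to scams and follow outgoing
# transfers until the funds reach a known exchange deposit address.
# The addresses and amounts here are invented for the example.

from collections import deque

# wallet -> list of (destination_wallet, amount_in_usdt); hypothetical data
transfers = {
    "scam_wallet_1": [("mixer_a", 50_000), ("intermediate_1", 20_000)],
    "intermediate_1": [("exchange_deposit_9", 19_500)],
    "mixer_a": [("exchange_deposit_9", 48_000), ("intermediate_2", 1_500)],
    "intermediate_2": [],
}
known_exchange_deposits = {"exchange_deposit_9"}

def trace_to_exchanges(start_wallets):
    """Breadth-first walk of outgoing transfers, recording paths that end at exchanges."""
    paths = []
    queue = deque((w, [w]) for w in start_wallets)
    seen = set(start_wallets)
    while queue:
        wallet, path = queue.popleft()
        for dest, _amount in transfers.get(wallet, []):
            if dest in known_exchange_deposits:
                paths.append(path + [dest])
            elif dest not in seen:
                seen.add(dest)
                queue.append((dest, path + [dest]))
    return paths

for p in trace_to_exchanges(["scam_wallet_1"]):
    print(" -> ".join(p))
# scam_wallet_1 -> mixer_a -> exchange_deposit_9
# scam_wallet_1 -> intermediate_1 -> exchange_deposit_9
```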

There are, however, certain actors within the crypto ecosystem who should have a good vantage point for observing how money moves through it. The most significant of these is Tether Holdings, a company formerly based in the British Virgin Islands (it has since relocated to El Salvador) that issues tether, or USDT, a so-called stablecoin whose value is nominally pegged to the US dollar. Tether is widely used by crypto traders to park their money in dollar-denominated assets without having to convert cryptocurrencies into fiat currency. It is also widely used in criminal activity. 


There is more than $140 billion worth of USDT in circulation; in 2023, TRM Labs, a firm that traces crypto fraud, estimated that $19.3 billion worth of tether transactions was associated with illicit activity. In January 2024, the UN’s Office on Drugs and Crime said that tether was a leading means of exchange for fraudsters and money launderers operating in Southeast Asia. In October, US federal investigators reportedly opened an investigation into possible sanctions violations and complicity in money laundering (though at the time, Tether Holdings’ CEO said there was “no indication” the company was under investigation).

Tech experts tell us that USDT is ever-present in the scam business, used to move money and as the main medium of exchange on anonymous marketplaces such as Cambodia-based Huione Guarantee, which has been accused of allowing romance scammers to launder the proceeds of their crimes. (Cambodia revoked the banking license of Huione Pay in March of this year. Huione, which did not respond to a request for comment, has previously denied engaging in criminal activity.)

While much of the crypto ecosystem is decentralized, USDT “does have a central authority” that could intervene, Hotchkiss says. Tether’s code has functions that allow the company to blacklist users, freeze accounts, and even destroy tokens, he adds. (Tether Holdings did not respond to requests for comment.)

In practice, Hotchkiss says, the company has frozen very few accounts—and, like other experts we spoke to, he thinks it’s unlikely to happen at scale. If it were to start acting like a regulator or a bank, the currency would lose a fundamental part of its appeal: its anonymity and independence from the mainstream of finance. The more you intervene, “the less trust people have in your coin,” he says. “The incentives are kind of misaligned.”


Getting out

Gavesh really wasn’t very good at scamming. The knowledge that the person on the other side of the conversation was working hard for money that he was trying to steal weighed heavily on him. “There was this one guy I was chatting with, [using] a girl’s profile,” he says. “He was trying to make a living. He was working in a cafe. He had a daughter who was living with [her] mother. That story was really touching. And, like, you don’t want to get these people [involved].” 

The nature of the work left him racked with guilt. “I believe in karma,” he says. “What goes around comes around.”

Twice during Gavesh’s incarceration, he was sold on from one “employer” to another, but he still struggled with scamming. In February 2023, he was put up for sale a third time, along with some other workers.

“We went to the boss and begged him not to sell [us] and to please let us go home,” Gavesh says. The boss eventually agreed but told them it would cost them. As well as forgoing their salaries, they had to pay a ransom—Gavesh’s was set at 72,000 Thai baht, more than $2,000. 

Gavesh managed to scrape the money together, and he and around a dozen others were driven to the river in a military vehicle. “We had to be very silent,” he says. They were told “not to make any sounds or anything—just to get on the boat.” They slipped back into Thailand the way they had come.

Close-up of a guard counting money, with a small figure wearing a blue tag waiting behind.

KATHERINE LAM

To avoid checkpoints on the way to Bangkok, the smugglers took paths through the jungle and changed vehicles around 10 times.

The group barely had enough money to survive a couple of days in the city, so they stuck together, staying in a cheap hotel while figuring out what to do next. With the help of a compatriot, Gavesh got in touch with IJM, which offered to help him navigate the legal bureaucracy ahead.

The traffickers hadn’t given him back his passport, and he was in Thailand without authorization. It was April before he was finally able to board a flight home, where he faced yet more questioning from police and immigration officials. He told his family he had “a small visa issue” and that he had lost his passport in Bangkok. He has never told them about his ordeal. “It would be very hard for them to process,” he says.

Recent history shows it’s very unlikely Gavesh will get any justice. That’s part of the reason why disrupting scams’ technology supply chain is so important: It’s incredibly challenging to hold the people operating the syndicates accountable. They straddle borders and jurisdictions. They have trafficked people from more than 60 countries, according to research from USIP, and scam targets come from all over the world. Much of the stolen money is moved through crypto wallets based in secrecy jurisdictions. “This thing is really like an onion. You’ve got layer after layer after layer of it, and it’s just really difficult to see where jurisdiction starts and where jurisdiction ends,” Tower says.

Chinese authorities are often more willing than Western governments to deal with the military junta and armed groups in Myanmar, and they have cracked down where they can on operations involving their nationals. Thailand has also stepped up its efforts to address the human trafficking crisis and shut down scamming operations across its border in recent months. But when it comes to regulating tech platforms, the reaction from governments has been slower. 

The few legislative efforts in the US, which are still in the earliest stages, focus on supporting law enforcement and financial institutions, not directly on ways to address the abuse of American tech platforms for scamming. And they probably won’t take that on anytime soon. Trump, who has been boosted and courted by several high-profile tech executives, has indicated that his administration opposes heavier online moderation. One executive order, signed in February, vows to impose tariffs on foreign governments if they introduce measures that could “inhibit the growth” of US companies—particularly those in tech—or compel them to moderate online content. 

The Trump White House also supports reducing regulation in the crypto industry; it has halted major investigations into crypto companies and just this month removed sanctions on the crypto mixer Tornado Cash. In what was widely seen as a nod to libertarian-leaning crypto-enthusiasts, Trump pardoned Ross Ulbricht, the founder of the dark web marketplace Silk Road and one of the earlier adopters of crypto for large-scale criminal activity. The administration’s embrace of crypto could indeed have implications for the scamming industry, notes Kim, the economist: “It makes it much easier for crypto services to proliferate and have wider-spread adoption, and that might make it easier for criminal enterprises to tap into that and exploit that for their own means.” 

What’s more, the new US administration has overseen the rollback of funding for myriad international aid programs, primarily programs run through the US Agency for International Development and including those working to help the people who’ve been trafficked into scam compounds. In late February, CNN reported, every one of the agency’s anti-trafficking projects was halted.

This all means it’s up to the tech companies themselves to act on their own initiative. And Big Tech has rarely acted without legislative threats or significant social or financial pressure. Companies won’t do anything if “it’s not mandatory, it’s not enforced by the government,” and most important, if companies don’t profit from it, says Wang, of the University of Texas at Arlington. While a group of tech companies, including Meta, Match, and Coinbase, last year announced the formation of Tech Against Scams, a collaboration to share tips and best practices, experts tell us there are no concrete actions to point to yet. 

And at a time when more resources are desperately needed to address the growing problems on their platforms, social media companies like X, Meta, and others have laid off hundreds of people from their trust and safety departments in recent years, reducing their capacity to tackle even the most pressing issues. Since the reelection of Trump, Meta has signaled an even greater rollback of its moderation and fact checking, a decision that earned praise from the president. 

Still, companies may feel pressure given that a handful of entities and executives have in recent years been held legally responsible for criminal activity on their platforms. Changpeng Zhao, who founded Binance, the world’s largest cryptocurrency exchange, was sentenced to four months in jail last April after pleading guilty to breaking US money-laundering laws, and the company had to forfeit some $4 billion for offenses that included allowing users to bypass sanctions. Then last May, Alexey Pertsev, a Tornado Cash cofounder, was sentenced to more than five years in a Dutch prison for facilitating the laundering of money stolen by, among others, the Lazarus Group, North Korea’s infamous state-backed hacking team. And in August last year, French authorities arrested Pavel Durov, the CEO of Telegram, and charged him with complicity in drug trafficking and distribution of child sexual abuse material. 

“I think all social media [companies] should really be looking at the case of Telegram right now,” USIP’s Tower says. “At that CEO level, you’re starting to see states try to hold a company accountable for its role in enabling major transnational criminal activity on a global scale.”

Compounding all the challenges, however, is the integration of cheap and easy-to-use artificial intelligence into scamming operations. The trafficked individuals we spoke to, who had mostly left the compounds before the widespread adoption of generative AI, said that if targets suggested a video call they would deflect or, as a last resort, play prerecorded video clips. Only one described the use of AI by his company; he says he was paid to record himself saying various sentences in ways that reflected different emotions, for the purposes of feeding the audio into an AI model. Recently, reports have emerged of scammers who have used AI-powered “face swap” and voice-altering products so that they can impersonate their characters more convincingly. “Malicious actors can exploit these models, especially open-source models, to produce content at an unprecedented scale,” says Gabrielle Tran, senior analyst for technology and society at IST. “These models are purposefully being fine-tuned … to serve as convincing humans.”  

Experts we spoke with warn that if platforms don’t pick up the pace on enforcement now, they’re likely to fall even further behind. 

Every now and again, Gavesh still goes on Facebook to report pages he thinks are scams. He never hears back. 

But he is working again in the tourism industry and on the path to recovering from his ordeal. “I can’t say that I’m 100% out of the trauma, but I’m trying to survive because I have responsibilities,” he says. 

He chose to speak out because he doesn’t want anyone else to be tricked—into a scamming compound, or into giving up their life savings to a stranger. He’s seen behind the scenes into a brutal industry that exploits people’s real needs for work, connection, and human contact, and he wants to make sure no one else ends up where he did. 

“There’s a very scary world,” he says. “A world beyond what we have seen.”

Peter Guest is a journalist based in London. Emily Fishbein is a freelance journalist focusing on Myanmar.

Additional reporting by Nu Nu Lusan. 

Inside the strange limbo facing millions of IVF embryos

Lisa Holligan already had two children when she decided to try for another baby. Her first two pregnancies had come easily. But for some unknown reason, the third didn’t. Holligan and her husband experienced miscarriage after miscarriage after miscarriage.

Like many other people struggling to conceive, Holligan turned to in vitro fertilization, or IVF. The technology allows embryologists to take sperm and eggs and fuse them outside the body, creating embryos that can then be transferred into a person’s uterus.

The fertility clinic treating Holligan was able to create six embryos using her eggs and her husband’s sperm. Genetic tests revealed that only three of these were “genetically normal.” After the first was transferred, Holligan got pregnant. Then she experienced yet another miscarriage. “I felt numb,” she recalls. But the second transfer, which took place several months later, stuck. And little Quinn, who turns four in February, was the eventual happy result. “She is the light in our lives,” says Holligan.

Holligan, who lives in the UK, opted to donate her “genetically abnormal” embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn’t know what to do with it.

Should she and her husband donate it to another family? Destroy it? “It’s almost four years down the line, and we still haven’t done anything with [the embryo],” she says. The clinic hasn’t been helpful—Holligan doesn’t remember talking about what to do with leftover embryos at the time, and no one there has been in touch with her for years, she says.

Holligan’s embryo is far from the only one in this peculiar limbo. Millions—or potentially tens of millions—of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates. 

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, they’re human cells and nothing else. To others, they’re morally equivalent to children. Many feel they exist somewhere between those two extremes.

There are debates, too, over how we should classify embryos in law. Are they property? Do they have a legal status? These questions are important: There have been multiple legal disputes over who gets to use embryos, who is responsible if they are damaged, and who gets the final say over their fate. And the answers will depend not only on scientific factors, but also on ethical, cultural, and religious ones.  

The options currently available to people with leftover IVF embryos mirror this confusion. As a UK resident, Holligan can choose to discard her embryos, make them available to other prospective parents, or donate them for research. People in the US can also opt for “adoption,” “placing” their embryos with families they get to choose. In Germany, people are not typically allowed to freeze embryos at all. And in Italy, embryos that are not used by the intended parents cannot be discarded or donated. They must remain frozen, ostensibly forever. 

While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? 

Meanwhile, many of these same people are trying to find ways to bring down the total number of embryos in storage. Maintenance costs are high, some clinics are running out of space, and the more embryos there are in storage, the more opportunities there are for human error. Yet the number of embryos stuck in storage with nowhere to go keeps growing.

The embryo boom

There are a few reasons why this has become such a conundrum. And they largely come down to an increasing demand for IVF and improvements in the way it is practiced. “It’s a problem of our own creation,” says Pietro Bortoletto, a reproductive endocrinologist at Boston IVF in Massachusetts. IVF has only become as successful as it is today by “generating lots of excess eggs and embryos along the way,” he says. 

To have the best chance of creating healthy embryos that will attach to the uterus and grow in a successful pregnancy, clinics will try to collect multiple eggs. People who undergo IVF will typically take a course of hormone injections to stimulate their ovaries. Instead of releasing a single egg that month, they can expect to produce somewhere between seven and 20 eggs. These eggs can be collected via a needle that passes through the vagina and into the ovaries. The eggs are then taken to a lab, where they are introduced to sperm. Around 70% to 80% of IVF eggs are successfully fertilized to create embryos.

The embryos are then grown in the lab. After around five to seven days an embryo reaches a stage of development at which it is called a blastocyst, and it is ready to be transferred to a uterus. Not all IVF embryos reach this stage, however—only around 30% to 50% of them make it to day five. This process might leave a person with no viable embryos. It could also result in more than 10, only one of which is typically transferred in each pregnancy attempt. In a typical IVF cycle, one embryo might be transferred to the person’s uterus “fresh,” while any others that were created are frozen and stored.

IVF success rates have increased over time, in large part thanks to improvements in this storage technology. A little over a decade ago, embryologists tended to use a “slow freeze” technique, says Bortoletto, and many embryos didn’t survive the process. Embryos are now vitrified instead, using liquid nitrogen to rapidly cool them from room temperature to -196 °C in less than two seconds. Vitrification essentially turns all the water in the embryos into a glasslike state, avoiding the formation of damaging ice crystals. 

Now, clinics increasingly take a “freeze all” approach, in which they cryopreserve all the viable embryos and don’t start transferring them until later. In some cases, this is so that the clinic has a chance to perform genetic tests on the embryo they plan to transfer.

An assortment of sperm and embryos, preserved in liquid nitrogen.
ALAMY

Once a lab-grown embryo is around seven days old, embryologists can remove a few cells for preimplantation genetic testing (PGT), which screens for genetic factors that might make healthy development less likely or predispose any resulting children to genetic diseases. PGT is increasingly popular in the US—in 2014, it was used in 13% of IVF cycles, but by 2016, that figure had increased to 27%. Embryos that undergo PGT have to be frozen while the tests are run, which typically takes a week or two, says Bortoletto: “You can’t continue to grow them until you get those results back.”

And there doesn’t seem to be a limit to how long an embryo can stay in storage. In 2022, a couple in Oregon had twins who developed from embryos that had been frozen for 30 years.

Put this all together, and it’s easy to see how the number of embryos in storage is rocketing. We’re making and storing more embryos than ever before. Combine that with the growing demand for IVF, and perhaps it’s not surprising that the number of embryos sitting in storage tanks is estimated to be in the millions.

I say estimated, because no one really knows how many there are. In 2003, the results of a survey of fertility clinics in the US suggested that there were around 400,000 in storage. Ten years later, in 2013, another pair of researchers estimated that, in total, around 1.4 million embryos had been cryopreserved in the US. But Alana Cattapan, now a political scientist at the University of Waterloo in Ontario, Canada, and her colleagues found flaws in the study and wrote in 2015 that the number could be closer to 4 million.  

That was a decade ago. When I asked embryologists what they thought the number might be in the US today, I got responses between 1 million and 10 million. Bortoletto puts it somewhere around 5 million.

Globally, the figure is much higher. There could be tens of millions of embryos, invisible to the naked eye, kept in a form of suspended animation. Some for months, years, or decades. Others indefinitely.

Stuck in limbo

In theory, people who have embryos left over from IVF have a few options for what to do with them. They could donate the embryos for someone else to use. Often this can be done anonymously (although genetic tests might later reveal the biological parents of any children that result). They could also donate the embryos for research purposes. Or they could choose to discard them. One way to do this is to expose the embryos to air, causing the cells to die.

Studies suggest that around 40% of people with cryopreserved embryos struggle to make this decision, and that many put it off for five years or more. For some people, none of the options are appealing.

In practice, too, the available options vary greatly depending on where you are. And many of them lead to limbo.

Take Spain, for example, which is a European fertility hub, partly because IVF there is a lot cheaper than in other Western European countries, says Giuliana Baccino, managing director of New Life Bank, a storage facility for eggs and sperm in Buenos Aires, Argentina, and vice chair of the European Fertility Society. Operating costs are low, and there’s healthy competition—there are around 330 IVF clinics operating in Spain. (For comparison, there are around 500 IVF clinics in the US, which has a population almost seven times greater.)

Baccino, who is based in Madrid, says she often hears of foreign patients in their late 40s who create eight or nine embryos for IVF in Spain but end up using only one or two of them. They go back to their home countries to have their babies, and the embryos stay in Spain, she says. These individuals often don’t come back for their remaining embryos, either because they have completed their families or because they age out of IVF eligibility (Spanish clinics tend not to offer the treatment to people over 50). 

An embryo sample is removed from cryogenic storage.
GETTY IMAGES

In 2023, the Spanish Fertility Society estimated that there were 668,082 embryos in storage in Spain, and that around 60,000 of them were “in a situation of abandonment.” In these cases the clinics might not be able to reach the intended parents, or might not have a clear directive from them, and might not want to destroy any embryos in case the patients ask for them later. But Spanish clinics are wary of discarding embryos even when they have permission to do so, says Baccino. “We always try to avoid trouble,” she says. “And we end up with embryos in this black hole.”

This happens to embryos in the US, too. Clinics can lose touch with their patients, who may move away or forget about their remaining embryos once they have completed their families. Other people may put off making decisions about those embryos and stop communicating with the clinic. In cases like these, clinics tend to hold onto the embryos, covering the storage fees themselves.

Nowadays clinics ask their patients to sign contracts that cover long-term storage of embryos—and the conditions of their disposal. But even with those in hand, it can be easier for clinics to leave the embryos in place indefinitely. “Clinics are wary of disposing of them without explicit consent, because of potential liability,” says Cattapan, who has researched the issue. “People put so much time, energy, money into creating these embryos. What if they come back?”

Bortoletto’s clinic has been in business for 35 years, and the handful of sites it operates in the US have a total of over 47,000 embryos in storage, he says. “Our oldest embryo in storage was frozen in 1989,” he adds. 

Some people may not even know where their embryos are. Sam Everingham, who founded and directs Growing Families, an organization offering advice on surrogacy and cross-border donations, traveled with his partner from their home in Melbourne, Australia, to India to find an egg donor and surrogate back in 2009. “It was a Wild West back then,” he recalls. Everingham and his partner used donor eggs to create eight embryos with their sperm.

Everingham found the experience of trying to bring those embryos to birth traumatic. Baby Zac was stillborn. Baby Ben died at seven weeks. “We picked ourselves up and went again,” he recalls. Two embryo transfers were successful, and the pair have two daughters today.

But the fate of the rest of their embryos is unclear. India’s government decided to ban commercial surrogacy for foreigners in 2015, and Everingham lost track of where they are. He says he’s okay with that. As far as he’s concerned, those embryos are just cells.

He knows not everyone feels the same way. A few days before we spoke, Everingham had hosted a couple for dinner. They had embryos in storage and couldn’t agree on what to do with them. “The mother … wanted them donated to somebody,” says Everingham. Her husband was very uncomfortable with the idea. “[They have] paid storage fees for 14 years for those embryos because neither can agree on what to do with them,” says Everingham. “And this is a very typical scenario.”

Lisa Holligan’s experience is similar. Holligan thought she’d like to donate her last embryo to another person—someone else who might have been struggling to conceive. “But my husband and I had very different views on it,” she recalls. He saw the embryo as their child and said he wouldn’t feel comfortable with giving it up to another family. “I started having these thoughts about a child coming to me when they’re older, saying they’ve had a terrible life, and [asking] ‘Why didn’t you have me?’” she says.

After all, her daughter Quinn began as an embryo that was in storage for months. “She was frozen in time. She could have been frozen for five years like [the leftover] embryo and still be her,” she says. “I know it sounds a bit strange, but this embryo could be a child in 20 years’ time. The science is just mind-blowing, and I think I just block it out. It’s far too much to think about.”

No choice at all

Choosing the fate of your embryos can be difficult. But some people have no options at all.

This is the case in Italy, where the laws surrounding assisted reproductive technology have grown increasingly restrictive. Since 2004, IVF has been accessible only to heterosexual couples who are either married or cohabiting. Surrogacy has also been prohibited in the country for the last 20 years, and in 2024, it was made a “universal crime.” The move means Italians can be prosecuted for engaging in surrogacy anywhere in the world, a position Italy has also taken on the crimes of genocide and torture, says Sara Dalla Costa, a lawyer specializing in assisted reproduction and an IVF clinic manager at Instituto Bernabeu on the outskirts of Venice.

The law surrounding leftover embryos is similarly inflexible. Dalla Costa says there are around 900,000 embryos in storage in Italy, basing the estimate on figures published in 2021 and the number of IVF cycles performed since then. By law, these embryos cannot be discarded. They cannot be donated to other people, and they cannot be used for research. 

Even when genetic tests show that the embryo has genetic features making it “incompatible with life,” it must remain in storage, forever, says Dalla Costa. 

“There are a lot of patients that want to destroy embryos,” she says. For that, they must transfer their embryos to Spain or other countries where it is allowed.

Even people who want to use their embryos may “age out” of using them. Dalla Costa gives the example of a 48-year-old woman who undergoes IVF and creates five embryos. If the first embryo transfer happens to result in a successful pregnancy, the other four will end up in storage. Once she turns 50, this woman won’t be eligible for IVF in Italy. Her remaining embryos become stuck in limbo. “They will be stored in our biobanks forever,” says Dalla Costa.

Dalla Costa says she has “a lot of examples” of couples who separate after creating embryos together. For many of them, the stored embryos become a psychological burden. With no way of discarding them, these couples are forever connected through their cryopreserved cells. “A lot of our patients are stressed for this reason,” she says.

Earlier this year, one of Dalla Costa’s clients passed away, leaving behind the embryos she’d created with her husband. He asked the clinic to destroy them. In cases like these, Dalla Costa will contact the Italian Ministry of Health. She has never been granted permission to discard an embryo, but she hopes that highlighting cases like these might at least raise awareness about the dilemmas the country’s policies are creating for some people.

Snowflakes and embabies

In Italy, embryos have a legal status. They have protected rights and are viewed almost as children. This sentiment isn’t specific to Italy. It is shared by plenty of individuals who have been through IVF. “Some people call them ‘embabies’ or ‘freezer babies,’” says Cattapan.

It is also shared by embryo adoption agencies in the US. Beth Button is executive director of one such program, called Snowflakes—a division of Nightlight Christian Adoptions agency, which considers cryopreserved embryos to be children, frozen in time, waiting to be born. Snowflakes matches embryo donors, or “placing families,” with recipients, termed “adopting families.” Both parties share their information and essentially get to choose who they donate to or receive from. By the end of 2024, 1,316 babies had been born through the Snowflakes embryo adoption program, says Button. 

Button thinks that far too many embryos are being created in IVF labs around the US. Around 10 years ago, her agency received a donation from a couple that had around 38 leftover embryos to donate. “We really encourage [people with leftover embryos in storage] to make a decision [about their fate], even though it’s an emotional, difficult decision,” she says. “Obviously, we just try to keep [that discussion] focused on the child,” she says. “Is it better for these children to be sitting in a freezer, even though that might be easier for you, or is it better for them to have a chance to be born into a loving family? That kind of pushes them to the point where they’re ready to make that decision.”

Button and her colleagues feel especially strongly about embryos that have been in storage for a long time. These embryos are usually difficult to place, because they are thought to be of poorer quality, or less likely to successfully thaw and result in a healthy birth. The agency runs a program called Open Hearts specifically to place them, along with others that are harder to match for various reasons. People who accept one but fail to conceive are given a shot with another embryo, free of charge.

These nitrogen tanks at New Hope Fertility Center in New York hold tens of thousands of frozen embryos and eggs.
GETTY IMAGES

“We have seen perfectly healthy children born from very old embryos, [as well as] embryos that were considered such poor quality that doctors didn’t even want to transfer them,” says Button. “Right now, we have a couple who is pregnant with [an embryo] that was frozen for 30 and a half years. If that pregnancy is successful, that will be a record for us, and I think it will be a worldwide record as well.”

Many embryologists bristle at the idea of calling an embryo a child, though. “Embryos are property. They are not unborn children,” says Bortoletto. In the best case, embryos create pregnancies around 65% of the time, he says. “They are not unborn children,” he repeats.

Person or property?

In 2020, an unauthorized person allegedly entered an IVF clinic in Alabama and pulled frozen embryos from storage, destroying them. Three sets of intended parents filed suit over their “wrongful death.” A trial court dismissed the claims, but the Alabama Supreme Court disagreed, essentially determining that those embryos were people. The ruling shocked many and was expected to have a chilling effect on IVF in the state, although within a few weeks, the state legislature granted criminal and civil immunity to IVF clinics.

But the Alabama decision is the exception. While there are active efforts in some states to endow embryos with the same legal rights as people, a move that could potentially limit access to abortion, “most of the [legal] rulings in this area have made it very clear that embryos are not people,” says Rich Vaughn, an attorney specializing in fertility law and the founder of the US-based International Fertility Law Group. At the same time, embryos are not just property. “They’re something in between,” says Vaughn. “They’re sort of a special type of property.” 

UK law takes a similar approach: The language surrounding embryos and IVF was drafted with the idea that the embryo has some kind of “special status,” although it was never made entirely clear exactly what that special status is, says James Lawford Davies, a solicitor and partner at LDMH Partners, a law firm based in York, England, that specializes in life sciences. Over the years, the language has been tweaked to encompass embryos that might arise from IVF, cloning, or other means; it is “a bit of a fudge,” says Lawford Davies. Today, the official—if somewhat circular—legal definition in the Human Fertilisation and Embryology Act reads: “embryo means a live human embryo.” 

And while people who use their eggs or sperm to create embryos might view these embryos as theirs, according to UK law, embryos are more like “a stateless bundle of cells,” says Lawford Davies. They’re not quite property—people don’t own embryos. They just have control over how they are used. 

Many legal disputes revolve around who has control. This was the experience of Natallie Evans, who created embryos with her then partner Howard Johnston in the UK in 2001. The couple separated in 2002. Johnston wrote to the clinic to ask that their embryos be destroyed. But Evans, who had been diagnosed with ovarian cancer in 2001, wanted to use them. She argued that Johnston had already consented to their creation, storage, and use and should not be allowed to change his mind. The case eventually made it to the European Court of Human Rights, and Evans lost. The case set a precedent that consent was key and could be withdrawn at any time.

In Italy, on the other hand, withdrawing consent isn’t always possible. In 2021, a case like Natallie Evans’s unfolded in the Italian courts: A woman who wanted to proceed with implantation after separating from her partner went to court for authorization. “She said that it was her last chance to be a mother,” says Dalla Costa. The judge ruled in her favor.

Dalla Costa’s clinics in Italy are now changing their policies to align with this decision. Male partners must sign a form acknowledging that they cannot prevent embryos from being used once they’ve been created.

The US situation is even more complicated, because each state has its own approach to fertility regulation. When I looked through a series of published legal disputes over embryos, I found little consistency—sometimes courts ruled to allow a woman to use an embryo without the consent of her former partner, and sometimes they didn’t. “Some states have comprehensive … legislation; some do not,” says Vaughn. “Some have piecemeal legislation, some have only case law, some have all of the above, some have none of the above.”

The meaning of an embryo

So how should we define an embryo? “It’s the million-dollar question,” says Heidi Mertes, a bioethicist at Ghent University in Belgium. Some bioethicists and legal scholars, including Vaughn, think we’d all stand to benefit from clear legal definitions. 

Risa Cromer, a cultural anthropologist at Purdue University in Indiana, who has spent years researching the field, is less convinced. Embryos exist in a murky, in-between state, she argues. You can (usually) discard them, or transfer them, but you can’t sell them. You can make claims against damages to them, but an embryo is never viewed in the same way as a car, for example. “It doesn’t fit really neatly into that property category,” says Cromer. “But, very clearly, it doesn’t fit neatly into the personhood category either.”

And there are benefits to keeping the definition vague, she adds: “There is, I think, a human need for there to be a wide range of interpretive space for what IVF embryos are or could be.”

That’s because we don’t have a fixed moral definition of what an embryo is. Embryos hold special value even for people who don’t view them as children. They hold potential as human life. They can come to represent a fertility journey—one that might have been expensive, exhausting, and traumatizing.  “Even for people who feel like they’re just cells, it still cost a lot of time, money, [and effort] to get those [cells],” says Cattapan.

“I think it’s an illusion that we might all agree on what the moral status of an embryo is,” Mertes says.

In the meantime, a growing number of embryologists, ethicists, and researchers are working to persuade fertility clinics and their patients not to create or freeze so many embryos in the first place. Early signs aren’t promising, says Baccino. The patients she has encountered aren’t particularly receptive to the idea. “They think, ‘If I will pay this amount for a cycle, I want to optimize my chances, so in my case, no,’” she says. She expects the number of embryos in storage to continue to grow.

Holligan’s embryo has been in storage for almost five years. And she still doesn’t know what to do with it. She tears up as she talks through her options. Would discarding the embryo feel like a miscarriage? Would it be a sad thing? If she donated the embryo, would she spend the rest of her life wondering what had become of her biological child, and whether it was having a good life? Should she hold on to the embryo for another decade in case her own daughter needs to use it at some point?

“The question [of what to do with the embryo] does pop into my head, but I quickly try to move past it and just say ‘Oh, that’s something I’ll deal with at a later time,’” says Holligan. “I’m sure [my husband] does the same.”

The accumulation of frozen embryos is “going to continue this way for some time until we come up with something that fully addresses everyone’s concerns,” says Vaughn. But will we ever be able to do that?

“I’m an optimist, so I’m gonna say yes,” he says with a hopeful smile. “But I don’t know at the moment.”

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. 

But all that is up for grabs. We are at a new inflection point.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way. 

Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”

AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. 

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. 

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. 

On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 


It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.”

But this isn’t just about publishers (or my own self-interest). 

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer.

But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. 

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? 


In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. 

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey.

Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. 

And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.  

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 

And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.
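That citation-counting intuition is essentially PageRank, the algorithm Google was built on. Here is a minimal sketch of the idea; the three-page web and the fixed iteration count are made up for illustration and bear no resemblance to the scale or refinements of the real system.

```python
import itertools

def pagerank(links, damping=0.85, iterations=50):
    """Score pages by the scores of the pages linking to them.
    Simplified: rank mass from pages with no outlinks is simply dropped."""
    pages = set(links) | set(itertools.chain.from_iterable(links.values()))
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web, purely for illustration.
toy_web = {
    "home.html": ["about.html", "news.html"],
    "about.html": ["home.html"],
    "news.html": ["home.html", "about.html"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

The real system layered on countless refinements, but the core bet that links are votes is what made it work.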

Sundar Pichai
Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”
JENS GYARMATY/LAIF/REDUX

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  

But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. 

“It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly. 

It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be. 

But once you’ve used AI Overviews a bit, you realize they are different.

Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world.

While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
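Under the hood, that on-the-fly generation follows the retrieval-augmented pattern: pull relevant passages from an index, then have a language model write an answer grounded in them. Below is a toy sketch of that pattern only; the `retrieve` and `answer` functions, the stand-in `llm`, and the placeholder index are all hypothetical and say nothing about Google’s actual pipeline.

```python
def retrieve(query, index, k=3):
    """Toy lookup: rank indexed passages by word overlap with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(text.lower().split())), url, text) for url, text in index.items()]
    return sorted(scored, reverse=True)[:k]

def answer(query, index, llm):
    """Compose an overview from retrieved passages; llm stands in for a language model call."""
    hits = retrieve(query, index)
    context = "\n".join(f"[{url}] {text}" for _, url, text in hits)
    prompt = f"Using only these sources, answer the question.\nSources:\n{context}\nQuestion: {query}"
    return llm(prompt), [url for _, url, _ in hits]

# Toy usage with a stand-in "model" that just echoes the start of its prompt.
index = {
    "example.com/kamakura-surf": "Placeholder passage about summer surf conditions in Kamakura.",
    "example.com/tokyo-festivals": "Placeholder passage about festivals happening in Tokyo next month.",
}
text, sources = answer("surfing in Kamakura next month", index, llm=lambda prompt: prompt[:160])
print(text, sources)
```

The point of the sketch is the difference in kind: the retrieved passages are fixed and traceable, but the sentences the model composes from them are generated fresh each time.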

“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”

The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) 

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” 

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 

That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. 

“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. 

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprioritizing so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? 

I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources.

“When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”

In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. 

“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.


Google
The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries.
What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity
Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries.
What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT
While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search.
What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.


When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews are based on Google’s flagship large language model, Gemini, but also draw from Knowledge Graph and what Google considers reputable sources around the web. 

“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. 

“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” 

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?  

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.  

“If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. 

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” 

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”


 “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” 

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? 

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  
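That routing step, deciding whether a query needs fresh information before answering, can be pictured as a small decide-then-search loop. The keyword heuristic and the `web_search`/`generate` stand-ins below are invented for illustration; in the real product the model itself makes the call, and OpenAI hasn’t published the mechanism.

```python
from datetime import date

def needs_fresh_info(query: str) -> bool:
    """Toy router: guess whether a query would benefit from up-to-date information."""
    timely = ("latest", "today", "score", "price", "news", str(date.today().year))
    tokens = query.lower().split()
    return any(word in tokens for word in timely)

def respond(query: str, generate, web_search):
    """generate and web_search are stand-ins for a model call and a search backend."""
    if needs_fresh_info(query):
        results = web_search(query)  # assumed to return (title, url, snippet) tuples
        context = "\n".join(f"{title} ({url}): {snippet}" for title, url, snippet in results)
        return generate(f"Answer using these results:\n{context}\n\nQuestion: {query}")
    return generate(query)  # otherwise fall back to the model's built-in knowledge

# Toy usage with trivial stand-ins.
print(respond(
    "What is the latest 49ers score?",
    generate=lambda prompt: prompt[:100],
    web_search=lambda q: [("Box score", "example.com/scores", "placeholder result snippet")],
))
```

In practice the "router" is the model emitting a tool call rather than a keyword list, and as noted above, users can also force the search branch by hand.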

According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. 

OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data its models were trained on, which tends to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. 

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. 

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. 

Elizabeth Reid
“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google’s head of search, Liz Reid.
WINNI WINTERMEYER/REDUX

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners.

Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” 

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. 

“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”

Indeed! 

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers. 


It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” 

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. 

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. 

Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. 

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. 

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. 

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.