In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review
I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said.
At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely.
Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.
But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform.
Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government.
So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.
Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.
I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?
Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.
He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”
On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer.
Why did we need AGI to do that instead of AI? I asked.
This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI.
And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it.
AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care.
Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.”
This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.
“No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.”
That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival.
“I actually think that’s a very beautiful thing,” he said.
In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.
“What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.”
His tone was matter‑of‑fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back at exactly where we’d started.
Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one.
Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said.
I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models.
That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples.
“It is unquestioningly very highly desirable that data centers be as green as possible,” he added.
“No question,” Brockman quipped.
“Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue.
“It’s 2 percent globally,” I offered.
“Isn’t Bitcoin like 1 percent?” Brockman said.
“Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative.
Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support it—would be “too useful to not exist.”
I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.”
“I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”
“The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short‑term technology.”
OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”
Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step.
He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.
There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees.
This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?
At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re‑create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. Its first section, “The Imitation Game,” which inspired the title of the 2014 Hollywood dramatization of Turing’s life, opens with the provocation, “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.
In 2015, as AI saw great leaps of advancement, Brockman says that he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.
“Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.
What motivated him? I asked Brockman.
What are the chances that a transformative technology could arrive in your lifetime? he countered.
He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.
Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount the ones of the great innovators who came before him.
A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point.
In 2022, he became OpenAI’s president.
During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.
OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.
Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone.
Was there a historical example of a technology’s benefits that had been successfully distributed? I asked.
“Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.
“Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.
“Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.
“I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”
His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.”
It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else.
He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.
“The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.”
In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”
Hours later, Elon Musk replied to the story with three tweets in rapid succession:
“OpenAI should be more open imo”
“I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.
“All orgs developing advanced AI should be regulated, including Tesla”
Afterward, Altman sent OpenAI employees an email.
“I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.”
It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but with some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.
“The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”
OpenAI wouldn’t speak to me again for three years.
We were losing the light, and still about 20 kilometers from the main road, when the car shuddered and died at the edge of a strange forest.
The grove grew as if indifferent to certain unspoken rules of botany. There was no understory, no foreground or background, only the trees themselves, which grew as a wall of bare trunks that rose 100 feet or so before concluding with a burst of thick foliage near the top. The rows of trees ran perhaps the length of a New York City block and fell away abruptly on either side into untidy fields of dirt and grass. The vista recalled the husk of a failed condo development, its first apartments marooned when the builders ran out of cash.
Standing there against the setting sun, the trees were, in their odd way, also rather stunning. I had no service out here—we had just left a remote nature preserve in southwestern Brazil—but I reached for my phone anyway, for a picture. The concern on the face of my travel partner, Clariana Vilela Borzone, a geographer and translator who grew up nearby, flicked to amusement. My camera roll was already full of eucalyptus.
The trees sprouted from every hillside, along every road, and more always seemed to be coming. Across the dirt path where we were stopped, another pasture had been cleared for planting. The sparse bushes and trees that had once shaded cattle in the fields had been toppled and piled up, as if in a Pleistocene gravesite.
Borzone’s friends and neighbors were divided on the aesthetics of these groves. Some liked the order and eternal verdancy they brought to their slice of the Cerrado, a large botanical region that arcs diagonally across Brazil’s midsection. Its native savanna landscape was largely gnarled, low-slung, and, for much of the year, rather brown. And since most of that flora had been cleared decades ago for cattle pasture, it was browner and flatter still. Now that land was becoming trees. It was becoming beautiful.
Some locals say they like the order and eternal verdancy of the eucalyptus, which often stand in stark contrast to the Cerrado’s native savanna landscape.
PABLO ALBARENGA
Others considered this beauty a mirage. “Green deserts,” they called the groves, suggesting bounty from afar but holding only dirt and silence within. These were not actually forests teeming with animals and undergrowth, they charged, but at best tinder for a future megafire in a land parched, in part, by their vigorous growth. This was in fact a common complaint across Latin America: in Chile, the planted rows of eucalyptus were called the “green soldiers.” It was easy to imagine getting lost in the timber, a funhouse mirror of trunks as far as the eye could see.
The timber companies that planted these trees push back on these criticisms as caricatures of a genus that’s demonized all over the world. They point to their sustainable forestry certifications and their handsome spending on fire suppression, and to the microphones they’ve placed that record cacophonies of birds and prove the groves are anything but barren. Whether people like the look of these trees or not, they are meeting a human need, filling an insatiable demand for paper and pulp products all over the world. Much of the material for the world’s toilet and tissue paper is grown in Brazil, and that, they argue, is a good thing: Grow fast and furious here, as responsibly as possible, to save many more trees elsewhere.
But I was in this region for a different reason: Apple. And also Microsoft and Meta and TSMC, and many smaller technology firms too. I was here because tech executives many thousands of miles away were racing, and in some cases stumbling, on their way to meet their climate promises: too little time, and too much demand for new devices and AI data centers. Not far from here, they had struck some of the largest-ever deals for carbon credits. They were asking something new of this tree: Could Latin America’s eucalyptus be a scalable climate solution?
On a practical level, the answer seemed straightforward. Nobody disputed how swiftly or reliably eucalyptus could grow in the tropics. This knowledge was the product of decades of scientific study and tabulations of biomass for wood or paper. Each tree was roughly 47% carbon, which meant that many tons of it could be stored within every planted hectare. This could be observed taking place in real time, in the trees by the road. Come back and look at these young trees tomorrow, and you’d see it: fresh millimeters of carbon, chains of cellulose set into lignin.
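For a rough sense of what that arithmetic implies, here is a minimal back-of-envelope sketch in Python. The 47% carbon fraction is the figure cited above; the dry-biomass-per-hectare number is an illustrative assumption, not one from the reporting, and the carbon-to-CO2 conversion is the standard 44/12 molar-mass ratio.

```python
# Back-of-envelope sketch of the carbon math described above.
# The 47% carbon fraction comes from the text; the dry-biomass figure
# below is purely illustrative, not a number from the reporting.

CARBON_FRACTION = 0.47          # share of dry biomass that is carbon
CO2_PER_TON_CARBON = 44 / 12    # molar-mass ratio converting carbon to CO2

def carbon_stored_per_hectare(dry_biomass_tons: float) -> tuple[float, float]:
    """Return (tons of carbon, tons of CO2-equivalent) for one planted hectare."""
    carbon = dry_biomass_tons * CARBON_FRACTION
    return carbon, carbon * CO2_PER_TON_CARBON

# Hypothetical mature eucalyptus stand: ~150 tons of dry biomass per hectare.
carbon_t, co2_t = carbon_stored_per_hectare(150)
print(f"~{carbon_t:.0f} t carbon, ~{co2_t:.0f} t CO2e per hectare")
```

Swap in a different biomass figure and the total changes accordingly; the point is only that once the biomass of a stand is known, the stored carbon follows by simple multiplication.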
At the same time, Apple and the others were also investing in an industry, and a tree, with a long and controversial history in this part of Brazil and elsewhere. They were exerting their wealth and technological oversight to try to make timber operations more sustainable, more supportive of native flora, and less water intensive. Still, that was a hard sell to some here, where hundreds of thousands of hectares of pasture are already in line for planting; more trees were a bleak prospect in a land increasingly racked by drought and fire. Critics called the entire exercise an excuse to plant even more trees for profit.
Borzone and I did not plan to stay and watch the eucalyptus grow. Garden or forest or desert, ally or antagonist—it did not matter much with the stars of the Southern Cross emerging and our gas tank empty. We gathered our things from our car and set off down the dirt road through the trees.
A big promise
My journey into the Cerrado had begun months earlier, in the fall of 2023, when the actress Octavia Spencer appeared as Mother Nature in an ad alongside Apple CEO Tim Cook. In 2020, the company had set a goal to go “net zero” by the end of the decade, at which point all of its products—laptops, CPUs, phones, earbuds—would be produced without increasing the level of carbon in the atmosphere. “Who wants to disappoint me first?” Mother Nature asked with a sly smile. It was a third of the way to 2030—a date embraced by many corporations aiming to stay in line with the UN’s goal of limiting warming to 1.5 °C over preindustrial levels—and where was the progress?
Apple CEO Tim Cook stares down Octavia Spencer as “Mother Nature” in their ad spot touting the company’s claims for carbon neutrality.
APPLE VIA YOUTUBE
Cook was glad to inform her of the good news: The new Apple Watch was leading the way. A limited supply of the devices was already carbon neutral, thanks to things like recycled materials and parts that were specially sent by ship—not flown—from one factory to another. These special watches were labeled with a green leaf on Apple’s iconically soft, white boxes.
Critics were quick to point out that declaring an individual product “carbon neutral” while the company was still polluting had the whiff of an early victory lap, achieved with some convenient accounting. But the work on the watch spoke to the company’s grand ambitions. Apple claimed that changes like procuring renewable power and using recycled materials had enabled it to cut emissions 75% since 2015. “We’re always prioritizing reductions; they’ve got to come first,” Chris Busch, Apple’s director of environmental initiatives, told me soon after the launch.
The company also acknowledged that it could not find reductions to balance all its emissions. But it was trying something new.
Since the 1990s, companies have purchased carbon credits based largely on avoiding emissions. Take some patch of forest that was destined for destruction and protect it; the stored carbon that wasn’t lost is turned into credits. But as the carbon market expanded, so did suspicion of carbon math—in some cases, because of fraud or bad science, but also because efforts to contain deforestation are often frustrated, with destruction avoided in one place simply happening someplace else. Corporations that once counted on carbon credits for “avoided” emissions can no longer trust them. (Many consumers feel they can’t either, with some even suing Apple over the ways it used past carbon projects to make its claims about the Apple Watch.)
But that demand to cancel out carbon dioxide hasn’t gone anywhere—if anything, as AI-driven emissions knock some companies off track from reaching their carbon targets (and raise questions about the techniques used to claim emissions reductions), the need is growing. For Apple, even under the rosiest assumptions about how much it will continue to pollute, the gap is significant: In 2024, the company reported offsetting 700,000 metric tons of CO2, but the number it will need to hit in 2030 to meet its goals is 9.6 million.
So the new move is to invest in carbon “removal” rather than avoidance. The idea implies a more solid achievement: taking carbon molecules out of the atmosphere. There are many ways to attempt that, from trying to change the pH of the oceans so that they absorb more of the molecules to building machines that suck carbon straight out of the air. But these are long-term fixes. None of these technologies work at the scale and price that would help Apple and others meet their shorter-term targets. For that, trees have emerged again as the answer. This time the idea is to plant new ones instead of protecting old ones.
To expand those efforts in a way that would make a meaningful dent in emissions, Apple determined, it would also need to make carbon removal profitable. A big part of this effort would be driven by the Restore Fund, a $200 million partnership with Goldman Sachs and Conservation International, a US environmental nonprofit, to invest in “high quality” projects that promoted reforestation on degraded lands.
Profits would come from responsibly turning trees into products, Goldman’s head of sustainability explained when the fund was announced in 2021. But it was also an opportunity for Apple, and future investors, to “almost look at, touch, and feel their carbon,” he said—a concreteness that carbon credits had previously failed to offer. “The aim is to generate real, measurable carbon benefits, but to do that alongside financial returns,” Busch told me. It was intended as a flywheel of sorts: more investors, more planting, more carbon—an approach to climate action that looked to abundance rather than sacrifice.
Apple markets its watch as a carbon-neutral product, a claim based in part on the use of carbon credits.
The announcement of the carbon-neutral Apple Watch was the occasion to promote the Restore Fund’s three initial investments, which included a native forestry project as well as eucalyptus farms in Paraguay and Brazil. The Brazilian timber plans were by far the largest in scale, and were managed by BTG Pactual, Latin America’s largest investment bank.
Busch connected me with Mark Wishnie, head of sustainability for Timberland Investment Group, BTG’s US-based subsidiary, which acquires and manages properties on behalf of institutional investors. After years in the eucalyptus business, Wishnie, who lives in Seattle, was used to strong feelings about the tree. It’s just that kind of plant—heralded as useful, even ornamental; demonized as a fire starter, water-intensive, a weed. “Has the idea that eucalyptus is invasive come up?” he asked pointedly. (It’s an “exotic” species in Brazil, yes, but the risk of invasiveness is low for the varieties most commonly planted for forestry.) He invited detractors to consider the alternative to the scale and efficiency of eucalyptus, which, he pointed out, relieves the pressure that humans put on beloved old-growth forests elsewhere.
Using eucalyptus for carbon removal also offered a new opportunity. Wishnie was overseeing a planned $1 billion initiative that was set to transform BTG’s timber portfolio; it aimed at a 50-50 split between timber and native restoration on old pastureland, with an emphasis on connecting habitats along rivers and streams. As a “high quality” project, it was meant to do better than business as usual. The conservation areas would exceed the legal requirements for native preservation in Brazil, which range from 20% to 35% in the Cerrado. In a part of Brazil that historically gets little conservation attention, it would potentially represent the largest effort yet to actually bring back the native landscape.
When BTG approached Conservation International with the 50% figure, the organization thought it was “too good to be true,” Miguel Calmon, the senior director of the nonprofit’s Brazilian programs, told me. With the restoration work paid for by the green financing and the sale of carbon credits, scale and longevity could be achieved. “Some folks may do this, but they never do this as part of the business,” he said. “It comes from not a corporate responsibility. It’s about, really, the business that you can optimize.”
So far, BTG has raised $630 million for the initiative and earmarked 270,000 hectares, an area more than double the city of Los Angeles. The first farm in the plan, located on a 24,000-hectare cattle ranch, was called Project Alpha. The location, Wishnie said, was confidential.
“We talk about restoration as if it’s a thing that happens,” Mark Wishnie says, promoting BTG’s plans to intermingle new farms alongside native preserves.
COURTESY OF BTG
But a property of that size sticks out, even in a land of large farms. It didn’t take very much digging into municipal land records in the Brazilian state of Mato Grosso do Sul, where many of the company’s Cerrado holdings are located, to turn up a recently sold farm that matched the size. It was called Fazenda Engano, or “Deception Farm”—hence the rebrand. The land was registered to an LLC with links to holding companies for other BTG eucalyptus plantations located in a neighboring region that locals had taken to calling the Cellulose Valley for its fast-expanding tree farms and pulp factories.
The area was largely seen as a land of opportunity, even as some locals had raised the alarm over concerns that the land couldn’t handle the trees. They had allies in prominent ecologists who have long questioned the wisdom of tree-planting in the Cerrado—and increasingly spar with other conservationists who see great potential in turning pasture into forest. The fight has only gotten more heated as more investors hunt for new climate solutions.
Still, where Apple goes, others often follow. And when it comes to sustainability, other companies look to it as a leader. I wasn’t sure if I could visit Project Alpha and see whether Apple and its partners had really found a better way to plant, but I started making plans to go to the Cerrado anyway, to see the forests behind those little green leaves on the box.
Complex calculations
In 2015, a study by Thomas Crowther, an ecologist then at ETH Zürich, attempted a census of global tree cover, finding more than 3 trillion trees in all. A useful number, surprisingly hard to divine, like counting insects or bacteria.
A follow-up study a few years later proved more controversial: Earth’s surface held space for at least 1 trillion more trees. That represented a chance to store 200 metric gigatons, or about 25%, of atmospheric carbon once they matured. (The paper was later corrected in multiple ways, including an acknowledgment that the carbon storage potential could be about one-third less.)
The study became a media sensation, soon followed by a fleet of tree-planting initiatives with “trillion” in the name—most prominently through a World Economic Forum effort launched by Salesforce CEO Marc Benioff at Davos, which President Donald Trump pledged to support during his first term.
But for as long as tree planting has been heralded as a good deed—from Johnny Appleseed to programs that promise a tree for every shoe or laptop purchased—the act has also been chased closely by a follow-up question: How many of those trees survive? Consider Trump’s most notable planting, which placed an oak on the White House grounds in 2018. It died just over a year later.
During President Donald Trump’s first term, he and French president Emmanuel Macron planted an oak on the South Lawn of the White House.
CHIP SOMODEVILLA/GETTY IMAGES
To critics, including Bill Gates, the efforts were symbolic of short-term thinking at the expense of deeper efforts to cut or remove carbon. (Gates’s spat with Benioff descended to name-calling in the New York Times. “Are we the science people or are we the idiots?” he asked.) The lifespan of a tree, after all, is brief—a pit stop—compared with the thousand-year carbon cycle, so its progeny must carry the torch to meaningfully cancel out emissions. Most don’t last that long.
“The number of trees planted has become a kind of currency, but it’s meaningless,” Pedro Brancalion, a professor of tropical forestry at the University of São Paulo, told me. He had nothing against the trees, which the world could, in general, use a lot more of. But to him, a lot of efforts were riding more on “good vibes” than on careful strategy.
Soon after arriving in São Paulo last summer, I drove some 150 miles into the hills outside the city to see the outdoor lab Brancalion has filled with experiments on how to plant trees better: trees given too many nutrients or too little; saplings monitored with wires and tubes like ICU admits, or skirted with tarps that snatch away rainwater. At the center of one of Brancalion’s plots stands a tower topped with a whirling station, the size of a hobby drone, monitoring carbon going in and out of the air (and, therefore, the nearby vegetation)—a molecular tango known as flux.
Brancalion works part-time for a carbon-focused restoration company, Re:Green, which had recently sold 3 million carbon credits to Microsoft and was raising a mix of native trees in parts of the Amazon and the Atlantic Forest. While most of the trees in his lab were native ones too, like jacaranda and brazilwood, he also studies eucalyptus. The lab in fact sat on a former eucalyptus farm; in the heart of his fields, a grove of 80-year-old trees dripped bark like molting reptiles.
To Pedro Brancalion, a lot of tree-planting efforts are riding more on “good vibes” than on careful strategy. He experiments with new ways to grow eucalyptus interspersed with native species.
PABLO ALBARENGA
Eucalyptus planting swelled dramatically under Brazil’s military dictatorship in the 1960s. The goal was self-sufficiency—a nation’s worth of timber and charcoal, quickly—and the expansion was fraught. Many opinions of the tree were forged in a spate of dubious land seizures followed by clearing of the existing vegetation—disputes that, in some places, linger to this day. Still, that campaign is also said to have done just as Wishnie described, easing the demand that would have been put on regions like the Amazon as Rio and São Paulo were built.
The new trees also laid the foundation for Brazil to become a global hub for engineered forestry; it’s currently home to about a third of the world’s farmed eucalyptus. Today’s saplings are the products of decades of tinkering with clonal breeding, growing quick and straight, resistant to pestilence and drought, with exacting growth curves that chart biomass over time: Seven years to maturity is standard for pulp. Trees planted today grow more than three times as fast as their ancestors.
If the goal is a trillion trees, or many millions of tons of carbon, no business is better suited to keeping count than timber. It might sound strange to claim carbon credits for trees that you plan to chop down and turn into toilet paper or chairs. Whatever carbon is stored in those ephemeral products is, of course, a blip compared with the millennia that CO2 hangs in the atmosphere.
But these carbon projects take a longer view. While individual trees may go, more trees are planted. The forest constantly regrows and recaptures carbon from the air. Credits are issued annually over decades, so long as the long-term average of the carbon stored in the grove continues to increase. What’s more, because the timber is constantly being tracked, the carbon is easy to measure, solving a key problem with carbon credits.
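To make that averaging rule concrete, here is a small toy simulation in Python of a staggered-rotation plantation. The seven-year rotation echoes the pulp cycle described above; the growth rate, number of stands, and crediting rule are illustrative assumptions, not figures from the article or from any registry’s actual methodology.

```python
# Toy model of crediting against a long-term average carbon stock.
# All numbers here are illustrative assumptions, not figures from the
# article or from any carbon registry's methodology.

ROTATION_YEARS = 7       # the ~7-year pulp rotation described above
GROWTH_PER_YEAR = 10.0   # assumed tCO2e captured per stand per year
N_STANDS = 7             # stands planted in a staggered sequence, one per year
YEARS = 30

ages = [-1] * N_STANDS   # -1 means the stand has not been planted yet
stock_history = []
credited = 0.0           # cumulative credits = highest long-term average reached

for year in range(1, YEARS + 1):
    if year <= N_STANDS:
        ages[year - 1] = 0          # plant one new stand per year
    for i, age in enumerate(ages):
        if age >= 0:                # grow each planted stand; harvest at rotation age
            ages[i] = 0 if age + 1 >= ROTATION_YEARS else age + 1
    stock = sum(age * GROWTH_PER_YEAR for age in ages if age > 0)

    stock_history.append(stock)
    long_term_avg = sum(stock_history) / len(stock_history)
    new_credits = max(0.0, long_term_avg - credited)   # credit only the increase
    credited += new_credits
    print(f"year {year:2d}: standing stock {stock:6.1f}  "
          f"long-term average {long_term_avg:6.1f}  new credits {new_credits:5.1f}")
```

Run it and the standing stock saw-tooths with each harvest while the long-term average keeps inching upward toward its plateau; in this scheme the average is the quantity the credits track, and once it stops rising, the credits stop too.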
Most mature native ecosystems, whether tropical forests or grasslands, will eventually store more carbon than a tree farm. But that could take decades. Eucalyptus can be planted immediately, with great speed, and the first carbon credits are issued in just a few years. “It fits a corporate model very well, and it fits the verification model very well,” said Robin Chazdon, a forest researcher at Australia’s University of the Sunshine Coast.
Today’s eucalyptus saplings—like those shown here in Brancalion’s lab—are the products of decades of tinkering with clonal breeding, growing quick and straight.
PABLO ALBARENGA
Reliability and stability have also made eucalyptus, as well as pine, quietly dominant in global planting efforts. A 2019 analysis published in Nature found that 45% of carbon removal projects the researchers studied worldwide involved single-species tree farms. In Brazil, the figure was 82%. The authors called this a “scandal,” accusing environmental organizations and financiers of misleading the public and pursuing speed and convenience at the expense of native restoration.
In 2023, the nonprofit Verra, the largest bearer of carbon credit standards, said it would forbid projects using “non-native monocultures”—that is, plants like eucalyptus or pine that don’t naturally grow in the places where they’re being farmed. The idea was to assuage concerns that carbon credits were going to plantations that would have been built anyway given the demand for wood, meaning they wouldn’t actually remove any extra carbon from the atmosphere.
The uproar was immediate—from timber companies, but also from carbon developers and NGOs. How would it be possible to scale anything—conservation, carbon removal—without them?
Verra reversed course several months later. It would allow non-native monocultures so long as they grew in land that was deemed “degraded,” or previously cleared of vegetation—land like cattle pasture. And it took steps to avoid counting plantings in close proximity to other areas of fast tree growth, the idea being to avoid rewarding purely industrial projects that would’ve been planted anyway.
Despite the potential benefits of intermixing them, foresters generally prefer to keep eucalyptus and native species separate.
PABLO ALBARENGA
Brancalion happened to agree with the criticisms of exotic monocultures. But all the same, he believed eucalyptus had been unfairly demonized. It was a marvelous genus, actually, with nearly 800 species with unique adaptations. Natives could be planted as monocultures too, or on stolen land, or tended with little care. He had been testing ways to turn eucalyptus from perceived foes into friends of native forest restoration.
His idea was to use rows of eucalyptus, which rocket above native species, as a kind of stabilizer. While these natives can be valuable—either as lumber or for biodiversity—they may grow slowly, or twist in ways that make their wood unprofitable, or suddenly and inexplicably die. It’s never like that with eucalyptus, which are wonderfully predictable growers. Eventually, their harvested wood would help pay for the hard work of growing the others.
In practice, foresters have generally preferred to keep things separate. Eucalyptus here; restoration there. It was far more efficient. The approach was emblematic, Brancalion thought, of letting the economics of the industry guide what was planted, how, and where, even with green finance involved. Though he admitted he was speaking as something of a competitor given his own carbon work, he was perplexed by Apple’s choices. The world’s richest company was doing eucalyptus? And with a bank better known locally for investing in industries, like beef and soy, that contributed to deforestation than for any efforts at native restoration.
It also worried him to see the planting happening west of here, in the Cerrado, where land is cheaper and also, for much of the year, drier. “It’s like a bomb,” Brancalion told me. “You can come interview me in five, six years. You don’t have to be super smart to realize what will happen after planting too many eucalyptus in a dry region.” He wished me luck on my journey westward.
The sacrifice zone
Savanna implies openness, but the European settlers passing through the Cerrado called it the opposite; the name literally means “closed.” Grasses and shrubs grow to chest height, scaled as if to maximize human inconvenience. A machete is advised.
As I headed with Borzone toward a small nature preserve called Parque do Pombo, she told me that young Brazilians are often raised with a sense of dislike, if not fear, of this land. When Borzone texted her mother, a local biologist, to say where we were going, she replied: “I hear that place is full of ticks.” (Her intel, it turned out, was correct.)
At one point, even prominent ecologists, fearing total destruction of the Amazon, advocated moving industry to the Cerrado, invoking a myth about casting a cow into piranha-infested waters so that the other cows could ford downstream.
PABLO ALBARENGA
What can be easy to miss is the fantastic variety of these plants, the result of natural selection cranked into overdrive. Species, many of which blew in from the Amazon, survived by growing deep roots through the acidic soil and thicker bark to resist regular brush fires. Many of the trees developed the ability to shrivel upon themselves and drop their leaves during the long, dry winter. Some call it a forest that has grown upside down, because much of the growth occurs in the roots. The Cerrado is home to 12,000 flowering plant species, 4,000 of which are found only there. In terms of biodiversity, it is second in the world only to its more famous neighbor, the Amazon.
Pequi is an edible fruit-bearing tree common in the Cerrado—one of the many unique species native to the area.
ADOBE STOCK
Each stop on our drive seemed to yield a new treasure for Borzone to show me: Guavira, a tree that bears fruit in grape-like bunches that appear only two weeks in a year; it can be made into a jam that is exceptionally good on toast. Pequi, more divisive, like fermented mango mixed with cheese. Others bear names Borzone can only faintly recall in the Indigenous Guaraní language and is thus unable to google. Certain uses are more memorable: Give this one here, a tiny frond that looks like a miniature Christmas fir, to make someone get pregnant.
Borzone had grown up in the heart of the savanna, and the land had changed significantly since she was a kid going to the river every weekend with her family. Since the 1970s, about half of the savanna has been cleared, mostly for ranching and, where the soil is good, soybeans. At that time, even prominent ecologists, fearing total destruction of the Amazon, advocated moving industry here, invoking what Brazilians call the boi de piranha—a myth about casting a cow into infested waters so that the other cows could ford downstream.
Toby Pennington, a Cerrado ecologist at the University of Exeter, told me it remains a sacrificial zone, at times faring worse when environmentally minded politicians are in power. In 2023, when deforestation fell by half in the Amazon, it rose by 43% in the Cerrado. Some ecologists warn that this ecosystem could be entirely gone in the next decade.
Perhaps unsurprisingly, there’s a certain prickliness among grassland researchers, who are, like their chosen flora, used to being trampled. In 2019, 46 of them authored a response in Science to Crowther’s trillion-trees study, arguing not about tree counting but about the land he proposed for reforestation. Much of it, they argued, including places like the Cerrado, was not appropriate for so many trees. It was too much biomass for the land to handle. (If their point was not already clear, the scientists later labeled the phenomenon “biome awareness disparity,” or BAD.)
“It’s a controversial ecosystem,” said Natashi Pilon, a grassland ecologist at the University of Campinas near São Paulo. “With Cerrado, you have to forget everything that you learn about ecology, because it’s all based in forest ecology. In the Cerrado, everything works the opposite way. Burning? It’s good. Shade? It’s not good.” The Cerrado contains a vast range of landscapes, from grassy fields to wooded forests, but the majority of it, she explained, is poorly suited to certain rules of carbon finance that would incentivize people to protect or restore it. While the underground forest stores plenty of carbon, it builds up its stock slowly and can be difficult to measure.
The result is a slightly uncomfortable position for ecologists studying and trying to protect a vanishing landscape. Pilon and her former academic advisor, Giselda Durigan, a Cerrado ecologist at the Environmental Research Institute of the State of São Paulo and one of the scientists behind BAD, have gotten accustomed to pushing back on people who arrived preaching “improvement” through trees—first from nonprofits, mostly of the trillion-trees variety, but now from the timber industry. “They are using the carbon discourse as one more argument to say that business is great,” Durigan told me. “They are happy to be seen as the good guys.”
Durigan saw tragedy in the way that Cerrado had been transformed into cattle pasture in just a generation, but there was also opportunity in restoring it once the cattle left. Bringing the Cerrado back would be hard work—usually requiring fire and hacking away at invasive grasses. But even simply leaving it alone could allow the ecosystem to begin to repair itself and offer something like the old savanna habitat. Abandoned eucalyptus farms, by contrast, were nightmares to return to native vegetation; the strange Cerrado plants refused to take root in the highly modified soil.
In recent years, Durigan had visited hundreds of eucalyptus farms in the area, shadowing her students who had been hired by timber companies to help establish promised corridors of native vegetation in accordance with federal rules. “They’re planting entire watersheds,” she said. “The rivers are dying.”
Durigan saw plants in isolated patches growing taller than they normally would, largely thanks to the suppression of regular brush fires. They were throwing shade on the herbs and grasses and drawing more water. The result was an environment gradually choking on itself, at risk of collapse during drought and retaining only a fraction of the Cerrado’s original diversity. If this was what people meant by bringing back the Cerrado, she believed it was only hastening its ultimate disappearance.
In a recent survey of the watershed around the Parque do Pombo, which is hemmed in on each side by eucalyptus, two other researchers reported finding “devastation” and turned to Plato’s description of Attica’s forests, cleared to build the city of Athens: “What remains now compared to what existed is like the skeleton of a sick man … All the rich and soft soil has dissolved, leaving the country of skin and bones.”
A highway runs through the Cellulose Valley, connecting commercial eucalyptus farms and pulp factories.
PABLO ALBARENGA
After a long day of touring the land—and spinning out on the clay—we found that our fuel was low. The Parque do Pombo groundskeeper looked over at his rusting fuel tank and apologized. It had been spoiled by the last rain. At least, he said, it was all downhill to the highway.
The road of opportunity
We only made it about halfway down the eucalyptus-lined road. After the car huffed and left us stranded, Borzone and I started walking toward the highway, anticipating a long night. We remembered locals’ talk of jaguars recently pushed into the area by development.
But after only 30 minutes or so, a set of lights came into view across the plain. Then another, and another. Then the outline of a tractor, a small tanker truck, and, somewhat curiously, a tour bus. The gear and the vehicles bore the logo of Suzano, the world’s largest pulp and paper company.
After talking to a worker, we boarded the empty tour bus and were taken to a cluster of spotlit tents, where women prepared eucalyptus seedlings, stacking crates of them on white fold-out tables. A night shift like this one was unusual. But they were working around the clock—aiming to plant a million trees per day across Suzano’s farms, in preparation for opening the world’s largest pulp factory just down the highway. It would open in a few weeks with a capacity of 2.55 million metric tons of pulp per year.
Eucalyptus has become the region’s new lifeblood. “I’m going to plant some eucalyptus / I’ll get rich and you’ll fall in love with me,” sings a local country duo.
PABLO ALBARENGA
The tour bus was standing by to take the workers down the highway at 1 a.m., arriving in the nearest city, Três Lagoas, by 3 a.m. to pick up the next shift. “You don’t do this work without a few birds at home to feed,” a driver remarked as he watched his colleagues filling holes in the field by the light of their headlamps. After getting permission from his boss, he drove us an hour each way to town to the nearest gas station.
This highway through the Cellulose Valley has become known as a road of opportunity, with eucalyptus as the region’s new lifeblood after the cattle industry shrank its footprint. Not far from the new Suzano factory, a popular roadside attraction is an oversize sculpture of a black bull at the gates of a well-known ranch. The ranch was recently planted, and the bull is now guarded by a phalanx of eucalyptus.
On TikTok, workers post selfies and views from tractors in the nearby groves, backed by a song from the local country music duo Jads e Jadson. “I’m going to plant some eucalyptus / I’ll get rich and you’ll fall in love with me,” sings a down-on-his-luck man at risk of losing his fiancée. Later, when he cuts down the trees and becomes a wealthy man with better options, he cuts off his betrothed, too.
The race to plant more eucalyptus here is backed heavily by the state government, which last year waived environmental requirements for new farms on pasture and hopes to quickly double its area in just a few years. The trees were an important component of Brazil’s plan to meet its global climate commitments, and the timber industry was keen to cash in. Companies like Suzano have already proposed that tens of thousands of their hectares become eligible for carbon credits.
What’s top of mind for everyone, though, is worsening fires. Even when we visited in midwinter, the weather was hot and dry. The wider region was in a deep drought, perhaps the worst in 700 years, and in a few weeks, one of the worst fire seasons ever would begin. Suzano would be forced to make a rare pause in its planting when soil temperatures reached 154 °F.
Posted along the highway are constant reminders of the coming danger: signs, emblazoned with the logos of a dozen timber companies, that read “FOGO ZERO,” or “ZERO FIRE.”
The race to plant more eucalyptus is backed heavily by the state government, which hopes to quickly double the area planted with the trees in just a few years.
PABLO ALBARENGA
In other places struck by megafires, like Portugal and Chile, eucalyptus has been blamed for worsening the flames. (The Chilean government has recently excluded pine and eucalyptus farms from its climate plans.) But here in Brazil, where climate change is already supersizing the blazes, the industry offers sophisticated systems to detect and suppress fires, argued Calmon of Conservation International. “You really need to protect it because that’s your asset,” he said. (BTG also noted that in parts of the Cerrado where human activity has increased, fires have decreased.)
Eucalyptus is often portrayed as impossibly thirsty compared with other trees, but Calmon pointed out it is not uniquely so. In some parts of the Cerrado, it has been found to consume four times as much water as native vegetation; in others, the two landscapes have been roughly in line. It depends on many factors—what type of soil it’s planted in, what Cerrado vegetation coexists with it, how intensely the eucalyptus is farmed. Timber companies, which have no interest in seeing their own plantations run dry, invest heavily in managing water. Another hope, Wishnie told me, is that by vastly increasing the forest canopy, the new eucalyptus will actually gather moisture and help produce rain.
Marine Dubos-Raoul has tracked waves of planting in the Cerrado for years and has spoken to residents who worry about how the trees strain local water supplies.
PABLO ALBARENGA
That’s a common narrative, and one that’s been taught in schools here in Três Lagoas for decades, Borzone explained when, the day after our rescue, we met up with Marine Dubos-Raoul, a local geographer and university professor, and two of her students. Dubos-Raoul laughed uneasily. If this idea about rain were in fact true, they hadn’t seen it here. They crouched around the table at the cafe, speaking in a hush; their opinions weren’t particularly popular in this lumber town.
Dubos-Raoul had long tracked the impacts of the waves of planting on longtime rural residents, who complained that industry had taken their water or sprayed their gardens with pesticides.
The evidence tying the trees to water problems in the region, Dubos-Raoul admitted, is more anecdotal than data driven. But she heard it in conversation after conversation. “People would have tears in their eyes,” she said. “It was very clear to them that it was connected to the arrival of the eucalyptus.” (Since our meeting, a study, carried out in response to demands from local residents, has blamed the planting for 350 depleted springs in the area, sparking a rare state inquiry into the issue.) In any case, Dubos-Raoul thought, it didn’t make much sense to keep adding matches to the tinderbox.
Shortly after talking with Dubos-Raoul, we ventured to the town of Ribas do Rio Pardo to meet Charlin Castro at his family’s river resort. Suzano’s new pulp factory stood on the horizon, surrounded by one of the densest areas of planting in the region.
The Suzano pulp factory—the world’s largest—has transformed the once-sleepy town of Ribas do Rio Pardo into the bustling hub of Brazil’s eucalyptus industry.
PABLO ALBARENGA
Charlin Castro, his father Camilo, and other locals talk about how the area around the family’s river resort has changed since eucalyptus came to town.
The public area for bathing on the far side of the shrinking river was closed after the Suzano pulp factory was installed.
Charlin and Camilo admit they aren’t exactly sure what is causing low water levels—maybe it’s silt, maybe it’s the trees.
PABLO ALBARENGA
With thousands of workers arriving, mostly temporarily, to build the factory and plant the fields, the sleepy farming village had turned into a boomtown, and developed something of a lawless reputation—prostitution, homelessness, collisions between logging trucks and drunk drivers—and Castro was chronicling much of it for a hyperlocal Instagram news outlet, while also running for city council.
But overall, he was thankful to Suzano. The factory was transforming the town into “a real place,” as he put it, even if change was at times painful.
His father, Camilo, gestured with a sinewy arm over to the water, where he recalled boat races involving canoes with crews of a dozen. That was 30 years ago. It was impossible to imagine now as I watched a family cool off in this bend in the river, the water just knee deep. But it’s hard to say what exactly is causing the low water levels. Perhaps it’s silt from the ranches, Charlin suggested. Or a change in the climate. Or, maybe, it could be the trees.
Upstream, Ana Cláudia (who goes by “Tica”) and Antonio Gilberto Lima were more certain what was to blame. The couple, who are in their mid-60s, live in a simple brick house surrounded by fruit trees. They moved there a decade ago, seeking a calm retirement—one of a hundred or so families taking part in land reforms that returned land to smallholders. But recently, life has been harder. To preserve their well, they had let their vegetable garden go to seed. Streams were dry, and the old pools in the pastures where they used to fish were gone, replaced by trees; tapirs were rummaging through their garden, pushed, they believed, by a lack of habitat.
Ana Cláudia and Antonio Gilberto Lima have seen their land struggle since eucalyptus plantations took over the region.
PABLO ALBARENGA
At their home, plants have been attacked by hungry insects.
Pollinators like these stingless bees, faced with a limited variety of native plant species, must fly greater distances to collect the pollen they need.
They were surrounded by eucalyptus, planted in waves with the arrival of each new factory. No one was listening, they told me, as the cattle herd bellowed outside the door. “The trees are sad,” Gilberto said, looking out over his few dozen pale-humped animals grazing around scattered Cerrado species left in the paddock. Tica told me she knew that paper and pulp had to come from somewhere, and that many people locally were benefiting. But the downsides were getting overlooked, she thought. They had signed a petition to the government, organized by Dubos-Raoul, seeking to rein in the industry. Perhaps, she hoped, it could reach American investors, too.
The green halo
A few weeks before my trip, BTG had decided it was ready to show off Project Alpha. The visit was set for my last day in Brazil; the farm formerly known as Fazenda Engano was further upriver in Camapuã, a town that borders Ribas do Rio Pardo. It was a long, circuitous drive north to get out there, but it wouldn’t be that way much longer; a new highway was being paved that would directly connect the two towns, part of an initiative between the timber industry and government to expand the cellulose hub northward. A local official told me he expected tens of thousands of hectares of eucalyptus in the next few years.
For now, though, it was still the frontier. The intention was to plant “well outside the forest sector,” Wishnie told me—not directly in the shadow of a mill, but close enough for the operation to be practical, with access to labor and logistics. That distance was important evidence that the trees would store more carbon than what’s accounted for in a business-as-usual scenario. The other guarantee was the restoration. It wasn’t good business to buy land and not plant every acre you could with timber; setting aside land for native vegetation was made possible only by green investments from Apple and others.
That morning, Wishnie had emailed me a press release announcing that Microsoft had joined Apple in turning to BTG for help meeting its carbon goals. The technology giant had made the largest-ever purchase of carbon credits, representing 8 million tons of CO2, from Project Alpha, following smaller commitments from TSMC and Murata, two of Apple’s suppliers.
I was set to meet Carlos Guerreiro, head of Latin American operations for BTG’s timber subsidiary, at a gas station in town, where we would set off together for the 24,000-hectare property. A forester in Brazil for much of his life, he had flown in from his home near São Paulo early that morning; he planned to check out the progress of the planting at Project Alpha and then swing down to the bank’s properties across the Cellulose Valley, where BTG was finalizing a $376 million deal to sell land to Suzano.
BTG plans to combine preserves of native restoration with eucalyptus farms, eventually reaching a 50-50 split on its properties.
COURTESY OF BTG
Guerreiro defended BTG’s existing holdings as sustainable engines of development in the region. But all the same, Project Alpha felt like a new beginning for the company, he told me. About a quarter of this property had been left untouched when the pasture was first cleared in the 1980s, but the plan now was to restore an additional 13% of the property to native Cerrado plants, bringing the total to 37%. (BTG says it will protect more land on future farms to arrive at its 50-50 target.) Individual patches of existing native vegetation would be merged with others around the property, creating a 400-meter corridor that largely followed the streams and rivers—beyond the 60 meters required by law.
The restoration work was happening with the help of researchers from a Brazilian university, though they were still testing the best methods. We stood over trenches that had been planted with native seeds just weeks before, shoots only starting to poke out of the dirt. Letting the land regenerate on its own was often preferable, Guerreiro told me, but the best approach would depend on the specifics of each location. In other places, assistance with planting or tending or clearing back the invasive grasses could be better.
The approach of largely letting things be was already yielding results, he noted: In parts of the property that hadn’t been grazed in years, they could already see the hardscrabble Cerrado clawing back with a vengeance. They’d been marveling at the fauna, caught on camera traps: tapirs, anteaters, all kinds of birds. They had even spotted a jaguar. The project would ensure that this growth would continue for decades. The land wouldn’t be sold to another rancher and go back to looking like other parts of the property, which were regularly cleared of native habitat. The hope, he said, was that over time the regenerating ecosystems would store more carbon, and generate more credits, than the eucalyptus. (The company intends to submit its carbon plans to Verra later this year.)
We stopped for lunch at the dividing line between the preserve and the eucalyptus, eating ham sandwiches in the shade of the oldest trees on the property, already two stories tall and still, by Guerreiro’s estimate, putting on a centimeter per day. He was planting at a rate of 40,000 seedlings per day in neat trenches filled with white lime to make the sandy Cerrado soil more inviting. In seven years or so, half of the trees will be thinned and pulped. The rest will keep growing. They’ll stand for seven years longer and grow thick and firm enough for plywood. The process will then start anew. Guerreiro described a model where clusters of farms mixed with preserves like this one will be planted around mills throughout the Cerrado. But nothing firm had been decided.
“Under no circumstances should planting eucalyptus ever be considered a viable project to receive carbon credits in the Cerrado,” says Lucy Rowland, an expert on the region at the University of Exeter.
PABLO ALBARENGA
This experiment, Wishnie told me later, could have a big payoff. The important thing, he reminded me, was that stretches of the Cerrado would be protected at a scale no one had achieved before—something that wouldn’t happen without eucalyptus. He strongly disagreed with the scientists who said eucalyptus didn’t fit here. The government had analyzed the watershed, he explained, and he was confident the land could support the trees. At the end of the day, the choice was between doing something and doing nothing. “We talk about restoration as if it’s a thing that happens,” he said.
When I asked Pilon to take a look at satellite imagery and photos of the property, she was unimpressed. It looked to her like yet another misguided attempt at planting trees in an area that had once naturally been a dense savanna. (Her assessment is supported by a land survey from the 1980s that classified this land as a typical Cerrado ecosystem—some trees, but mostly shrubbery. BTG responded that the survey was incorrect and the satellite images clearly showed a closed-canopy forest.)
As Lucy Rowland, an expert on the region at the University of Exeter and another BAD signatory, put it: “Under no circumstances should planting eucalyptus ever be considered a viable project to receive carbon credits in the Cerrado.”
Over months of reporting, the way that both sides spoke in absolutes about how to save this vanishing ecosystem had become familiar. Chazdon, the Australia-based forest researcher, told me she too felt that the tenor of the argument over how and where to grow trees has become more vehement as demand for tree-based carbon removal has intensified. “Nobody’s a villain,” she said. “There are disconnects on both sides.”
Chazdon had been excited to hear about BTG’s project. It was, she thought, the type of thing that was sorely needed in conservation—mixing profitable enterprises with an approach to restoration that considers the wider landscape. “I can understand why the Cerrado ecologists are up in arms,” she said. “They get the feeling that nobody cares about their ecosystems.” But demands for ecological purity could indeed get in the way of doing much of anything—especially in places like the Cerrado, where laws and financing favor destruction over restoration.
Still, thinking about the scale of the carbon removal problem, she considered it sensible to wonder about the future that was being hatched. While there is, in fact, a limit to how much additional land the world needs for pulp and plywood products in the near future, there is virtually no limit to how much land it could devote to sequestering carbon. Which means we need to ask hard questions about the best way to use it.
More eucalyptus may support claims about greener paper products, but some argue that it’s not so simple for laptops and smart watches and ChatGPT queries.
PABLO ALBARENGA
It was true, Chazdon said, that planting eucalyptus in the Cerrado was an act of destruction—it’d make that land nearly impossible to recover. The areas preserved in between would also likely struggle to fully renew themselves without fire or clearing. She would feel more comfortable with such large-scale projects if the bar for restoration were much higher—say, 75% or more. But that almost certainly wouldn’t satisfy her grassland colleagues, who don’t want any eucalyptus at all. And it might not fit the profit model—the flywheel that Apple and others are seeking in order to scale up carbon removal fast.
Barbara Haya, who studies carbon offsets at the University of California, Berkeley, encouraged me to think about all of it differently. The improvements to planting eucalyptus here, at this farm, could be a perfectly good thing for this industry, she said. Perhaps they merit some claim about greener toilet paper or plywood. Haya would leave that debate to the ecologists.
But we weren’t talking about toilet paper or plywood. We were talking about laptops and smart watches and ChatGPT. And the path connecting those things to these trees was more convoluted. The carbon had to be disentangled first from the wood’s other profitable uses and then from the wider changes happening in this region and its industries. There seemed to be many plausible scenarios for where this land was heading. Was eucalyptus the only feasible route for carbon to find its way here?
Haya is among the experts who argue that the idea of precisely canceling out corporate emissions to reach carbon neutrality is a broken one. That’s not to say protecting nature can’t help fight climate change. Conserving existing forests and grasslands, for example, could often yield greater carbon and biodiversity benefits in the long run than planting new forests. But the carbon math used to justify those efforts is often fuzzier. That, she thinks, makes every claim of carbon neutrality fragile and drives companies toward projects that are easier to prove but perhaps have less impact.
One idea is that companies should instead shift to a “contribution” model that tracks how much money they put toward climate mitigation, without worrying about the exact amount of carbon removed. “Let’s say the goal is to save the Cerrado,” Haya said. “Could they put that same amount of money and really make a difference?” Such an approach, she pointed out, could help finance the preservation of those last intact Cerrado remnants. Or it could fund restoration, even if the restored vegetation takes years to grow or sometimes needs to burn.
The approach raises its own questions—about how to measure the impact of those investments and what kinds of incentives would motivate corporations to act. But it’s a vision that has gained more popularity as scrutiny of carbon credits grows and the options available to companies narrow. With the current state of the world, “what private companies do matters more than ever,” Haya told me. “We need them not to waste money.”
In the meantime, it’s up to the consumer reading the label to decide what sort of path we’re on.
“There’s nothing wrong with the trees,” geographer and translator Clariana Vilela Borzone says. “I have to remind myself of that.”
PABLO ALBARENGA
Before we left the farm, Borzone and I had one more task: to plant a tree. The sun was getting low over Project Alpha when I was handed an iron contraption that cradled a eucalyptus seedling, pulled from a tractor piled with plants.
“There’s nothing wrong with the trees,” Borzone had said earlier, squinting up at the row of 18-month-old eucalyptus, their fluttering leaves flashing in the hot wind as if in an ill-practiced burlesque show. “I have to remind myself of that.” But still it felt strange putting one in the ground. We were asking so much of it, after all. And we were poised to ask more.
I squeezed the handle, pulling the iron hinge taut and forcing the plant deep into the soil. It poked out at a slight angle that I was sure someone else would need to fix later, or else this eucalyptus tree would grow askew. I was slow and clumsy in my work, and by the time I finished, the tractor was far ahead of us, impossibly small on the horizon. The worker grabbed the tool from my hand and headed toward it, pushing seedlings down as he went, hurried but precise, one tree after another.
Gregory Barber is a journalist based in San Francisco.
Architecture often assumes a binary between built projects and theoretical ones. What physics allows in actual buildings, after all, is vastly different from what architects can imagine and design (often referred to as “paper architecture”). That imagination has long been supported and enabled by design technology, but the latest advancements in artificial intelligence have prompted a surge in the theoretical.
Karl Daubmann, College of Architecture and Design at Lawrence Technological University “Very often the new synthetic image that comes from a tool like Midjourney or Stable Diffusion feels new,” says Daubmann, “infused by each of the multiple tools but rarely completely derived from them.”
“Transductions: Artificial Intelligence in Architectural Experimentation,” a recent exhibition at the Pratt Institute in Brooklyn, brought together works from over 30 practitioners exploring the experimental, generative, and collaborative potential of artificial intelligence to open up new areas of architectural inquiry—something they’ve been working on for a decade or more, since long before AI became mainstream. Architects and exhibition co-curators Jason Vigneri-Beane, Olivia Vien, Stephen Slaughter, and Hart Marlow explain that the works in “Transductions” emerged out of feedback loops among architectural discourses, techniques, formats, and media that range from imagery, text, and animation to mixed-reality media and fabrication. The aim isn’t to present projects that are going to break ground anytime soon; architects already know how to build things with the tools they have. Instead, the show attempts to capture this very early stage in architecture’s exploratory engagement with AI.
Technology has long enabled architecture to push the limits of form and function. As early as 1963, Sketchpad, one of the first computer-aided design programs, allowed architects and designers to move and change objects on screen. Rapidly, traditional hand drawing gave way to an ever-expanding suite of software—Revit, SketchUp, and many other modeling and BIM tools—that helped create floor plans and sections, track buildings’ energy usage, enhance sustainable construction, and aid in following building codes, to name just a few uses.
The architects exhibiting in “Transductions” view newly evolving forms of AI “like a new tool rather than a profession-ending development,” says Vigneri-Beane, despite what some of his peers fear about the technology. He adds, “I do appreciate that it’s a somewhat unnerving thing for people, [but] I feel a familiarity with the rhetoric.”
After all, he says, AI doesn’t just do the job. “To get something interesting and worth saving in AI, an enormous amount of time is required,” he says. “My architectural vocabulary has gotten much more precise and my visual sense has gotten an incredible workout, exercising all these muscles which have atrophied a little bit.”
Vien agrees: “I think these are extremely powerful tools for an architect and designer. Do I think it’s the entire future of architecture? No, but I think it’s a tool and a medium that can expand the long history of mediums and media that architects can use not just to represent their work but as a generator of ideas.”
Andrew Kudless, Hines College of Architecture and Design This image, part of the Urban Resolution series, shows how the Stable Diffusion AI model “is unable to focus on constructing a realistic image and instead duplicates features that are prominent in the local latent space,” Kudless says.
Jason Vigneri-Beane, Pratt Institute “These images are from a larger series on cyborg ecologies that have to do with co-creating with machines to imagine [other] machines,” says Vigneri-Beane. “I might refer to these as cryptomegafauna—infrastructural robots operating at an architectural scale.”
Martin Summers, University of Kentucky College of Design “Most AI is racing to emulate reality,” says Summers. “I prefer to revel in the hallucinations and misinterpretations like glitches and the sublogic they reveal present in a mediated reality.”
Jason Lee, Pratt Institute Lee typically uses AI “to generate iterations or high-resolution sketches,” he says. “I am also using it to experiment with how much realism one can incorporate with more abstract representation methods.”
Olivia Vien, Pratt Institute For the series Imprinting Grounds, Vien created images digitally and fed them into Midjourney. “It riffs on the ideas of damask textile patterns in a more digital realm,” she says.
Robert Lee Brackett III, Pratt Institute “While new software raises concerns about the absence of traditional tools like hand drawing and modeling, I view these technologies as collaborators rather than replacements,” Brackett says.
Americans don’t agree on much these days. Yet even at a time when consensus reality seems to be on the verge of collapse, there remains at least one quintessentially modern value we can all still get behind: creativity.
We teach it, measure it, envy it, cultivate it, and endlessly worry about its death. And why wouldn’t we? Most of us are taught from a young age that creativity is the key to everything from finding personal fulfillment to achieving career success to solving the world’s thorniest problems. Over the years, we’ve built creative industries, creative spaces, and creative cities and populated them with an entire class of people known simply as “creatives.” We read thousands of books and articles each year that teach us how to unleash, unlock, foster, boost, and hack our own personal creativity. Then we read even more to learn how to manage and protect this precious resource.
Given how much we obsess over it, the concept of creativity can feel like something that has always existed, a thing philosophers and artists have pondered and debated throughout the ages. While it’s a reasonable assumption, it’s one that turns out to be very wrong. As Samuel Franklin explains in his recent book, The Cult of Creativity, the first known written use of “creativity” didn’t actually occur until 1875, “making it an infant as far as words go.” What’s more, he writes, before about 1950, “there were approximately zero articles, books, essays, treatises, odes, classes, encyclopedia entries, or anything of the sort dealing explicitly with the subject of ‘creativity.’”
This raises some obvious questions. How exactly did we go from never talking about creativity to always talking about it? What, if anything, distinguishes creativity from other, older words, like ingenuity, cleverness, imagination, and artistry? Maybe most important: How did everyone from kindergarten teachers to mayors, CEOs, designers, engineers, activists, and starving artists come to believe that creativity isn’t just good—personally, socially, economically—but the answer to all life’s problems?
Thankfully, Franklin offers some potential answers in his book. A historian and design researcher at the Delft University of Technology in the Netherlands, he argues that the concept of creativity as we now know it emerged during the post–World War II era in America as a kind of cultural salve—a way to ease the tensions and anxieties caused by increasing conformity, bureaucracy, and suburbanization.
“Typically defined as a kind of trait or process vaguely associated with artists and geniuses but theoretically possessed by anyone and applicable to any field, [creativity] provided a way to unleash individualism within order,” he writes, “and revive the spirit of the lone inventor within the maze of the modern corporation.”
Brainstorming, a new method for encouraging creative thinking, swept corporate America in the 1950s. A response to pressure for new products and new ways of marketing them, as well as a panic over conformity, it inspired passionate debate about whether true creativity should be an individual affair or could be systematized for corporate use.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS
I spoke to Franklin about why we continue to be so fascinated by creativity, how Silicon Valley became the supposed epicenter of it, and what role, if any, technologies like AI might have in reshaping our relationship with it.
I’m curious what your personal relationship to creativity was growing up. What made you want to write a book about it?
Like a lot of kids, I grew up thinking that creativity was this inherently good thing. For me—and I imagine for a lot of other people who, like me, weren’t particularly athletic or good at math and science—being creative meant you at least had some future in this world, even if it wasn’t clear what that future would entail. By the time I got into college and beyond, the conventional wisdom among the TED Talk register of thinkers—people like Daniel Pink and Richard Florida—was that creativity was actually the most important trait to have for the future. Basically, the creative people were going to inherit the Earth, and society desperately needed them if we were going to solve all of these compounding problems in the world.
On the one hand, as someone who liked to think of himself as creative, it was hard not to be flattered by this. On the other hand, it all seemed overhyped to me. What was being sold as the triumph of the creative class wasn’t actually resulting in a more inclusive or creative world order. What’s more, some of the values embedded in what I call the cult of creativity seemed increasingly problematic—specifically, the focus on self-realization, doing what you love, and following your passion. Don’t get me wrong—it’s a beautiful vision, and I saw it work out for some people. But I also started to feel like it was just a cover for what was, economically speaking, a pretty bad turn of events for many people.
Staff members at the University of California’s Institute of Personality Assessment and Research simulate a situational procedure involving group interaction, called the Bingo Test. Researchers of the 1950s hoped to learn how factors in people’s lives and environments shaped their creative aptitude.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS
Nowadays, it’s quite common to bash the “follow your passion,” “hustle culture” idea. But back when I started this project, the whole move-fast-and-break-things, disrupter, innovation-economy stuff was very much unquestioned. In a way, the idea for the book came from recognizing that creativity was playing this really interesting role in connecting two worlds: this world of innovation and entrepreneurship and this more soulful, bohemian side of our culture. I wanted to better understand the history of that relationship.
When did you start thinking about creativity as a kind of cult—one that we’re all a part of?
Similar to something like the “cult of domesticity,” it was a way of describing a historical moment in which an idea or value system achieves a kind of broad, uncritical acceptance. I was finding that everyone was selling stuff based on the idea that it boosted your creativity, whether it was a new office layout, a new kind of urban design, or the “Try these five simple tricks” type of thing.
You start to realize that nobody is bothering to ask, “Hey, uh, why do we all need to be creative again? What even is this thing, creativity?” It had become this unimpeachable value that no one, regardless of what side of the political spectrum they fell on, would even think to question. That, to me, was really unusual, and I think it signaled that something interesting was happening.
Your book highlights midcentury efforts by psychologists to turn creativity into a quantifiable mental trait and the “creative person” into an identifiable type. How did that play out?
The short answer is: not very well. To study anything, you of course need to agree on what it is you’re looking at. Ultimately, I think these groups of psychologists were frustrated in their attempts to come up with scientific criteria that defined a creative person. One technique was to go find people who were already eminent in fields that were deemed creative—writers like Truman Capote and Norman Mailer, architects like Louis Kahn and Eero Saarinen—and just give them a battery of cognitive and psychoanalytic tests and then write up the results. This was mostly done by an outfit called the Institute of Personality Assessment and Research (IPAR) at Berkeley. Frank Barron and Don MacKinnon were the two biggest researchers in that group.
Another way psychologists went about it was to say, all right, that’s not going to be practical for coming up with a good scientific standard. We need numbers, and lots and lots of people to certify these creative criteria. This group of psychologists theorized that something called “divergent thinking” was a major component of creative accomplishment. You’ve heard of the brick test, where you’re asked to come up with many creative uses for a brick in a given amount of time? They basically gave a version of that test to Army officers, schoolchildren, rank-and-file engineers at General Electric, all kinds of people. It’s tests like those that ultimately became stand-ins for what it means to be “creative.”
Are they still used?
When you see a headline about AI making people more creative, or actually being more creative than humans, the tests they are basing that assertion on are almost always some version of a divergent thinking test. It’s highly problematic for a number of reasons. Chief among them is the fact that these tests have never been shown to have predictive value—that’s to say, a third grader, a 21-year-old, or a 35-year-old who does really well on divergent thinking tests doesn’t seem to have any greater likelihood of being successful in creative pursuits. The whole point of developing these tests in the first place was to both identify and predict creative people. None of them have been shown to do that.
Reading your book, I was struck by how vague and, at times, contradictory the concept of “creativity” was from the beginning. You characterize that as “a feature, not a bug.” How so?
Ask any creativity expert today what they mean by “creativity,” and they’ll tell you it’s the ability to generate something new and useful. That something could be an idea, a product, an academic paper—whatever. But the focus on novelty has remained an aspect of creativity from the beginning. It’s also what distinguishes it from other similar words, like imagination or cleverness. But you’re right: Creativity is a flexible enough concept to be used in all sorts of ways and to mean all sorts of things, many of them contradictory. I think I write in the book that the term may not be precise, but that it’s vague in precise and meaningful ways. It can be both playful and practical, artsy and technological, exceptional and pedestrian. That was and remains a big part of its appeal.
The question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming [AI] into our lives as advisors and assistants.
Is that emphasis on novelty and utility a part of why Silicon Valley likes to think of itself as the new nexus for creativity?
Absolutely. The two criteria go together. In techno-solutionist, hypercapitalist milieus like Silicon Valley, novelty isn’t any good if it’s not useful (or at least marketable), and utility isn’t any good (or marketable) unless it’s also novel. That’s why they’re often dismissive of boring-but-important things like craft, infrastructure, maintenance, and incremental improvement, and why they support art—which is traditionally defined by its resistance to utility—only insofar as it’s useful as inspiration for practical technologies.
At the same time, Silicon Valley loves to wrap itself in “creativity” because of all the artsy and individualist connotations. It has very self-consciously tried to distance itself from the image of the buttoned-down engineer working for a large R&D lab of a brick-and-mortar manufacturing corporation and instead raise up the idea of a rebellious counterculture type tinkering in a garage making weightless products and experiences. That, I think, has saved it from a lot of public scrutiny.
Up until recently, we’ve tended to think of creativity as a human trait, maybe with a few exceptions from the rest of the animal world. Is AI changing that?
When people started defining creativity in the ’50s, the threat of computers automating white-collar work was already looming. They were basically saying, okay, rational and analytical thinking is no longer ours alone. What can we do that the computers can never do? And the assumption was that humans alone could be “truly creative.” For a long time, computers didn’t do much to really press the issue on what that actually meant. Now they’re pressing the issue. Can they do art and poetry? Yes. Can they generate novel products that also make sense or work? Sure.
I think that’s by design. The kinds of LLMs that Silicon Valley companies have put forward are meant to appear “creative” in those conventional senses. Now, whether or not their products are meaningful or wise in a deeper sense, that’s another question. If we’re talking about art, I happen to think embodiment is an important element. Nerve endings, hormones, social instincts, morality, intellectual honesty—those are not things essential to “creativity” necessarily, but they are essential to putting things out into the world that are good, and maybe even beautiful in a certain antiquated sense. That’s why I think the question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming them into our lives as advisors and assistants.
This interview is based on two conversations and has been edited and condensed for clarity.
Bryan Gardiner is a writer based in Oakland, California.
Heading north in the dark, the only way Gavesh could try to track his progress through the Thai countryside was by watching the road signs zip by. The Jeep’s three occupants—Gavesh, a driver, and a young Chinese woman—had no languages in common, so they drove for hours in nervous silence as they wove their way out of Bangkok and toward Mae Sot, a city on Thailand’s western border with Myanmar.
When they reached the city, the driver pulled off the road toward a small hotel, where another car was waiting. “I had some suspicions—like, why are we changing vehicles?” Gavesh remembers. “But it happened so fast.”
They left the highway and drove on until, in total darkness, they parked at what looked like a private house. “We stopped the vehicle. There were people gathered. Maybe 10 of them. They took the luggage and they asked us to come,” Gavesh says. “One was going in front, there was another one behind, and everyone said: ‘Go, go, go.’”
Gavesh and the Chinese woman were marched through the pitch-black fields by flashlight to a riverside where a boat was moored. By then, it was far too late to back out.
Gavesh’s journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.
Instead, he found himself trafficked into a business commonly known as “pig butchering”—a form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar.
We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized. By the same token, it is Big Tech that may hold the key to breaking up the scam syndicates—if only these companies can be persuaded or compelled to act.
We’re identifying Gavesh using a pseudonym to protect his identity. He is from a country in South Asia, one he asked us not to name. He hasn’t shared his story much, and he still hasn’t told his family. He worries about how they’d handle it.
Until the pandemic, he had held down a job in the tourism industry. But lockdowns had gutted the sector, and two years later he was working as a day laborer to support himself and his father and sister. “I was fed up with my life,” he says. “I was trying so hard to find a way to get out.”
When he saw the Facebook post in mid-2022, it seemed like a godsend. A company in Thailand was looking for English-speaking customer service and data entry specialists. The monthly salary was $1,500—far more than he could earn at home—with meals, travel costs, a visa, and accommodation included. “I knew if I got this job, my life would turn around. I would be able to give my family a good life,” Gavesh says.
What came next was life-changing, but not in the way Gavesh had hoped. The advert was a fraud—and a classic tactic syndicates use to force workers like Gavesh into an economy that operates as something like a dark mirror of the global outsourcing industry.
The true scale of this type of fraud is hard to estimate, but the United Nations reported in 2023 that hundreds of thousands of people had been trafficked to work as online scammers in Southeast Asia. One 2024 study, from the University of Texas, estimates that the criminal syndicates that run these businesses have stolen at least $75 billion since 2020.
These schemes have been going on for more than two decades, but they’ve started to capture global attention only recently, as the syndicates running them increasingly shift from Chinese targets toward the West. And even as investigators, international organizations, and journalists gradually pull back the curtain on the brutal conditions inside scamming compounds and document their vast scale, what is far less exposed is the pivotal role platforms owned by Big Tech play throughout the industry—from initially coercing individuals to become scammers to, finally, duping scam targets out of their life savings.
As losses mount, governments and law enforcement agencies have looked for ways to disrupt the syndicates, which have become adept at using ungoverned spaces in lawless borderlands and partnering with corrupt regimes. But on the whole, the syndicates have managed to stay a step ahead of law enforcement—in part by relying on services from the world’s tech giants. Apple iPhones are their preferred scamming tools. Meta-owned Facebook and WhatsApp are used to recruit people into forced labor, as is Telegram. Social media and messaging platforms, including Facebook, Instagram, WhatsApp, WeChat, and X, provide spaces for scammers to find and lure targets. So do dating apps, including Tinder. Some of the scam compounds have their own Starlink terminals. And cryptocurrencies like tether and global crypto platforms like Binance have allowed the criminal operations to move money with little or no oversight.
Scam workers sit inside Myanmar’s KK Park, a notorious fraud hub near the border with Thailand, following a recent crackdown by law enforcement.
REUTERS
“Private-sector corporations are, unfortunately, inadvertently enabling this criminal industry,” says Andrew Wasuwongse, the Thailand country director at the anti-trafficking nonprofit International Justice Mission (IJM). “The private sector holds significant tools and responsibility to disrupt and prevent its further growth.”
Yet while the tech sector has, slowly, begun to roll out anti-scam tools and policies, experts in human trafficking, platform integrity, and cybercrime tell us that these measures largely focus on the downstream problem: the losses suffered by the victims of the scams. That approach overlooks the other set of victims, often from lower-income countries, at the far end of a fraud “supply chain” that is built on human misery—and on Big Tech. Meanwhile, the scams continue on a mass scale.
Tech companies could certainly be doing more to crack down, the experts say. Even relatively small interventions, they argue, could start to erode the business model of the scam syndicates; with enough of these, the whole business could start to founder.
“The trick is: How do you make it unprofitable?” says Eric Davis, a platform integrity expert and senior vice president of special projects at the Institute for Security and Technology (IST), a think tank in California. “How do you create enough friction?”
That question is only becoming more urgent as many tech companies pull back on efforts to moderate their platforms, artificial intelligence supercharges scam operations, and the Trump administration signals broad support for deregulation of the tech sector while withdrawing support from organizations that study the scams and support the victims. All these trends may further embolden the syndicates. And even as the human costs keep building, global governments exert ineffectual pressure—if any at all—on the tech sector to turn its vast financial and technical resources against a criminal economy that has thrived in the spaces Silicon Valley built.
Capturing a vulnerable workforce
The roots of “pig butchering” scams reach back to the offshore gambling industry that emerged from China in the early 2000s. Online casinos had become hugely popular in China, but the government cracked down, forcing the operators to relocate to Cambodia, the Philippines, Laos, and Myanmar. There, they could continue to target Chinese gamblers with relative impunity. Over time, the casinos began to use social media to entice people back home, deploying scam-like tactics that frequently centered on attractive and even nude dealers.
The doubts didn’t really start until after Gavesh reached Bangkok’s Suvarnabhumi Airport. As time ticked by, it began to occur to him that he was alone, with no money, no return ticket, and no working SIM card.
“Often the romance scam was a part of that—building romantic relationships with people that you eventually would aim to hook,” says Jason Tower, Myanmar country director at the United States Institute of Peace (USIP), a research and diplomacy organization funded by the US government, who researches the cyber scam industry. (USIP’s leadership was recently targeted by the Trump administration and Elon Musk’s Department of Government Efficiency task force, leaving the organization’s future uncertain; its website, which previously housed its research, is also currently offline.)
By the late 2010s, many of the casinos were big, professional operations. Gradually, says Tower, the business model turned more sinister, with a tactic called sha zhu pan in Chinese emerging as a core strategy. Scamming operatives work to “fatten up” or cultivate a target by building a relationship before going in for the “slaughter”—persuading them to invest in a supposedly once-in-a-lifetime scheme and then absconding with the money. “That actually ended up being much, much more lucrative than online gambling,” Tower says. (The international law enforcement organization Interpol no longer uses the graphic term “pig butchering,” citing concerns that it dehumanizes and stigmatizes victims.)
Like other online industries, the romance scamming business was supercharged by the pandemic. There were simply more isolated people to defraud, and more people out of work who might be persuaded to try scamming others—or who were vulnerable to being trafficked into the industry.
Initially, most of the workers carrying out the frauds were Chinese, as were the fraud victims. But after the government in Beijing tightened travel restrictions, making it hard to recruit Chinese laborers, the syndicates went global. They started targeting more Western markets and turning, Tower says, to “much more malign types of approaches to tricking people into scam centers.”
Getting recruited
Gavesh was scrolling through Facebook when he saw the ad. He sent his résumé to a Telegram contact number. A human resources representative replied and had him demonstrate his English and typing skills over video. It all felt very professional. “I didn’t have any reason to suspect,” he says.
The doubts didn’t really start until after he reached Bangkok’s Suvarnabhumi Airport. After being met at arrivals by a man who spoke no English, he was left to wait. As time ticked by, it began to occur to Gavesh that he was alone, with no money, no return ticket, and no working SIM card. Finally, the Jeep arrived to pick him up.
Hours later, exhausted, he was on a boat crossing the Moei River from Thailand into Myanmar. On the far bank, a group was waiting. One man was in military uniform and carried a gun. “In my country, if we see an army guy when we are in trouble, we feel safe,” Gavesh says. “So my initial thoughts were: Okay, there’s nothing to be worried about.”
They hiked a kilometer across a sodden paddy field and emerged at the other side caked in mud. There a van was parked, and the driver took them to what he called, in broken English, “the office.” They arrived at the gate of a huge compound, surrounded by high walls topped with barbed wire.
While some people are drawn into online scamming directly by friends and relatives, Facebook is, according to IJM’s Wasuwongse, the most common entry point for people recruited on social media.
Meta has known for years that its platforms host this kind of content. Back in 2019, the BBC exposed “slave markets” that were running on Instagram; in 2021, the Wall Street Journal reported, drawing on documents leaked by a whistleblower, that Meta had long struggled to rein in the problem but took meaningful action only after Apple threatened to pull Instagram from its app store.
Today, years on, ads like the one that Gavesh responded to are still easy to find on Facebook if you know what to look for.
Examples of fraudulent Facebook ads, shared by International Justice Mission.
They are typically posted in job seekers’ groups and usually seem to be advertising legitimate jobs in areas like customer service. They offer attractive wages, especially for people with language skills—usually English or Chinese.
The traffickers tend to finish the recruitment process on encrypted or private messaging apps. In our research, many experts said that Telegram, which is notorious for hosting terrorist content, child sexual abuse material, and other communication related to criminal activity, was particularly problematic. Many spoke with a combination of anger and resignation about its apparent lack of interest in working with them to address the problem; Mina Chiang, founder of Humanity Research Consultancy, an anti-trafficking organization, accuses the app of being “very much complicit” in human trafficking and “proactively facilitating” these scams. (Telegram did not respond to a request for comment.)
But while Telegram users have the option of encrypting their messages end to end, making them almost impossible to monitor, social media companies are of course able to access users’ posts. And it’s here, at the beginning of the romance scam supply chain, where Big Tech could arguably make its most consequential intervention.
Social media is monitored by a combination of human moderators and AI systems, which help flag users and content—ads, posts, pages—that break the law or violate the companies’ own policies. Dangerous content is easiest to police when it follows predictable patterns or is posted by users acting in distinctive and suspicious ways.
“They have financial resources. You can hire the most talented coding engineers in the world. Why can’t you just find people who understand the issue properly?”
Anti-trafficking experts say the scam advertising tends to follow formulaic templates and use common language, and that they routinely report the ads to Meta and point out the markers they have identified. Their hope is that this information will be fed into the data sets that train the content moderation models.
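To make concrete what such markers might look like, here is a minimal, hypothetical sketch in Python of how formulaic phrases from reported ads could feed a first-pass moderation filter. The phrase list, patterns, and threshold are invented for illustration; they are not drawn from Meta’s systems or from the experts’ actual reports.

```python
# Illustrative only: a toy filter showing how the "formulaic templates and
# common language" that experts describe could become simple markers in a
# moderation pipeline. The patterns and the two-marker threshold are invented.
import re

# Hypothetical markers of fraudulent overseas job ads
SUSPECT_PHRASES = [
    r"customer service.*(thailand|cambodia|myanmar)",
    r"data entry.*no experience",
    r"free (visa|accommodation|flight)",
    r"\$[\d,]{3,}.*per month",
    r"contact .*telegram",
]


def flag_job_ad(text: str) -> tuple[bool, list[str]]:
    """Return whether the ad should go to human review, plus the matched markers."""
    hits = [p for p in SUSPECT_PHRASES if re.search(p, text, flags=re.IGNORECASE)]
    # Two or more independent markers routes the ad to a human moderator
    return len(hits) >= 2, hits


if __name__ == "__main__":
    ad = ("Hiring English-speaking customer service staff in Thailand. "
          "$1,500 per month, free visa and accommodation provided. Contact us on Telegram.")
    print(flag_job_ad(ad))
```

In a real pipeline, a rule like this would only surface candidates for human moderators or a trained classifier; the feedback loop the experts describe, where reported ads continuously update the markers, is exactly what they are asking the platforms to build.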
While individual ads may be taken down, even in big waves—last November, Meta said it had purged 2 million accounts connected to scamming syndicates over the previous year—experts say that Facebook still continues to be used in recruiting. And new ads keep appearing.
(In response to a request for comment, a Meta spokesperson shared links to policies about bans on content or advertisements that facilitate human trafficking, as well as company blog posts telling users how to protect themselves from romance scams and sharing details about the company’s efforts to disrupt fraud on its platforms, one stating that it is “constantly rolling out new product features to help protect people on [its] apps from known scam tactics at scale.” The spokesperson also said that WhatsApp has spam detection technology, and millions of accounts are banned per month.)
Anti-trafficking experts we spoke with say that as recently as last fall, Meta was engaging with them and had told them it was ramping up its capabilities. But Chiang says there still isn’t enough urgency from tech companies. “There’s a question about speed. They might be able to say, ‘That’s the goal for the next two years.’ No. But that’s not fast enough. We need it now,” she says. “They have financial resources. You can hire the most talented coding engineers in the world. Why can’t you just find people who understand the issue properly?”
Part of the answer comes down to money, according to experts we spoke with. Scaling up content moderation and other processes that could cause users to be kicked off a platform requires not only technological staff but also legal and policy experts—which not everyone sees as worth the cost.
“The vast majority of these companies are doing the minimum or less,” says Tower of USIP. “If not properly incentivized, either through regulatory action or through exposure by media or other forms of pressure … often, these companies will underinvest in keeping their platforms safe.”
Getting set up
Gavesh’s new “office” turned out to be one of the most infamous scamming hubs in Southeast Asia: KK Park in Myanmar’s Myawaddy region. Satellite imagery shows it as a densely packed cluster of buildings, surrounded by fields. Most of it has been built since late 2019.
Inside, it runs like a hybrid of a company campus and a prison.
When Gavesh arrived, he handed over his phone and passport and was assigned to a dormitory and an employer. He was allowed his own phone back only for short periods, and his calls were monitored. Security was tight. He had to pass through airport-style metal detectors when he went in or out of the office. Black-uniformed personnel patrolled the buildings, while armed men in combat fatigues watched the perimeter fences from guard posts.
On his first full day, he was put in front of a computer with just four documents on it, which he had to read over and over—guides on how to approach strangers. On his second day, he learned to build fake profiles on social media and dating apps. The trick was to find real people on Instagram or Facebook who were physically attractive, posted often, and appeared to be wealthy and living “a luxurious life,” he says, and use their photos to build a new account: “There are so many Instagram models that pretend they have a lot of money.”
After Gavesh was trafficked into Myanmar, he was taken to KK Park. Most of the compound has been built since late 2019.
LUKE DUGGLEBY/REDUX
Next, he was given a batch of iPhone 8s—most people on his team used between eight and 10 devices each—loaded with local SIM cards and apps that spoofed their location so that they appeared to be in the US. Using male and female aliases, he set up dozens of accounts on Facebook, WhatsApp, Telegram, Instagram, and X and profiles on several dating platforms, though he can’t remember exactly which ones.
Different scamming operations teach different techniques for finding and reaching out to potential victims, several people who worked in the compounds tell us. Some people used direct approaches on dating apps, Facebook, Instagram, or—for those targeting Chinese victims—WeChat. One worker from Myanmar sent out mass messages on WhatsApp, pretending to have accidentally messaged a wrong number, in the hope of striking up a conversation. (Tencent, which owns WeChat, declined to comment.)
Some scamming workers we spoke to were told to target white, middle-aged or older men in Western countries who seemed to be well off. Gavesh says he would pretend to be white men and women, using information found from Google to add verisimilitude to his claims of living in, say, Miami Beach. He would chat with the targets, trying to figure out from their jobs, spending habits, and ambitions whether they’d be worth investing time in.
One South African woman, trafficked to Myanmar in 2022, says she was given a script and told to pose as an Asian woman living in Chicago. She was instructed to study her assigned city and learn quotidian details about life there. “They kept on punishing people all the time for not knowing or for forgetting that they’re staying in Chicago,” she says, “or for forgetting what’s Starbucks or what’s [a] latte.”
Fake users have, of course, been a problem on social media platforms and dating sites for years. Some platforms, such as X, allow practically anyone to create accounts and even to have them verified for a fee. Others, including Facebook, have periodically conducted sweeps to get rid of fake accounts engaged in what Meta calls “coordinated inauthentic behavior.” (X did not respond to requests for comment.)
But scam workers tell us they were advised on simple ways to circumvent detection mechanisms on social media. They were given basic training in how to avoid suspicious behavior such as adding too many contacts too quickly, which might trigger the company to review whether someone’s profile is authentic. The South African woman says she was shown how to manipulate the dates on a Facebook account “to seem as if you opened the account in 2019 or whatever,” making it easier to add friends. (Meta’s spam filters—meant to reduce the spread of unwanted content—include limits on friend requests and bulk messaging.)
Dating apps, whose users generally hope to meet other users in real life, have a particular need to make sure that people are who they say they are. But Match Group, the parent company of Tinder, ended its partnership with a company doing background checks in 2023. It now encourages users to verify their profile with a selfie and further ID checks, though insiders say these systems are often rudimentary. “They just check a box and [do] what is legally required or what will make the media get off of [their] case,” says one tech executive who has worked with multiple dating apps on safety systems, speaking on the condition of anonymity because they were not permitted to speak about their work with certain companies.
Fangzhou Wang, an assistant professor at the University of Texas at Arlington who studies romance scams, ran a test: She set up a Tinder profile with a picture of a dog and a bio that read, “I am a dog.” It passed through the platform’s verification system without a hitch. “They are not providing enough security measures to filter out fraudulent profiles,” Wang says. “Everybody can create anything.”
Like recruitment ads, the scam profiles tend to follow patterns that should raise red flags. They use photos copied from existing users or made by artificial intelligence, and the accounts are sometimes set up using phone numbers generated by voice-over-internet-protocol services. Then there’s the scammers’ behavior: They swipe too fast, or spend too much time logged in. “A normal human doesn’t spend … eight hours on a dating app a day,” the tech executive says.
What’s more, scammers use the same language over and over again as they reach out to potential targets. “The majority of them are using predesigned scripts,” says Wang.
It would be fairly easy for platforms to detect these signs and either stop accounts from being created or make the users go through further checks, experts tell us. Signals of some of these behaviors “can potentially be embedded into a type of machine-learning algorithm,” Wang says. She approached Tinder a few years ago with her research into the language that scammers use on the platforms, and offered to help build data sets for its moderation models. She says the company didn’t reply.
(In a statement, Yoel Roth, vice president of trust and safety at Match Group, said that the company invests in “proactive tools, advanced detection systems and user education to help prevent harm.” He wrote, “We use proprietary AI-powered tools to help identify scammer messaging, and unlike many platforms, we moderate messages, which allows us to detect suspicious patterns early and act quickly,” adding that the company has recently worked with Reality Defender, a provider of deepfake detection tools, to strengthen its ability to detect AI-generated content. A company spokesperson reported having no record of Wang’s outreach but said that the company “welcome[s] collaboration and [is] always open to reviewing research that can help strengthen user safety.”)
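Wang's suggestion that such behavioral signals could be folded into a detection model is straightforward to sketch. The example below is purely illustrative: the field names and thresholds are invented, it is not any platform's actual system, and in practice these signals would more likely feed a trained machine-learning model than hand-set rules.

```python
# Hypothetical sketch: combining behavioral signals into a simple risk score.
# Field names and thresholds are invented for illustration; this is not any
# platform's actual detection system.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    swipes_per_minute: float       # how fast the account swipes or adds contacts
    hours_online_per_day: float    # total time logged in each day
    new_contacts_per_day: int      # friend/match requests sent per day
    scripted_message_ratio: float  # share of messages matching known scam scripts (0 to 1)

def risk_score(a: AccountActivity) -> float:
    """Fold simple behavioral signals into a score between 0 and 1."""
    score = 0.0
    if a.swipes_per_minute > 30:       # far faster than a typical human user
        score += 0.3
    if a.hours_online_per_day > 8:     # "a normal human doesn't spend eight hours on a dating app a day"
        score += 0.3
    if a.new_contacts_per_day > 100:   # adding too many contacts too quickly
        score += 0.2
    score += 0.2 * a.scripted_message_ratio
    return min(score, 1.0)

suspicious = AccountActivity(45, 11, 250, 0.8)
print(f"risk score: {risk_score(suspicious):.2f}")  # a high score might queue the account for review
```

A high score would not need to trigger an automatic ban; it could simply route the account into the further verification checks platforms already run.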
A recent investigation published in The Markup found that Match Group has long possessed the tools and resources to track sex offenders and other bad actors but has resisted efforts to roll out safety protocols for fear they might slow growth.
This tension, between the desire to keep increasing the number of users and the need to ensure that these users and their online activity are authentic, is often behind safety issues on platforms. While no platform wants to be a haven for fraudsters, identity verification creates friction for users, which stops real people as well as impostors from signing up. And again, cracking down on platform violations costs money.
According to Josh Kim, an economist who works in Big Tech, it would be costly for tech companies to build out the legal, policy, and operational teams for content moderation tools that could get users kicked off a platform—and the expense is one companies may find hard to justify in the current business climate. “The shift toward profitability means that you have to be very selective in … where you invest the resources that you have,” he says.
“My intuition here is that unless there are fines or pressure from governments or regulatory agencies or the public themselves,” he adds, “the current atmosphere in the tech ecosystem is to focus on building a product that is profitable and grows fast, and things that don’t contribute to those two points are probably being deprioritized.”
Getting online—and staying in line
At work, Gavesh wore a blue tag, marking him as belonging to the lowest rank of workers. “On top of us are the ones who are wearing the yellow tags—they call themselves HR or translators, or office guys,” he says. “Red tags are team leaders, managers … And then moving from that, they have black and ash tags. Those are the ones running the office.” Most of the latter were Chinese, Gavesh says, as were the really “big bosses,” who didn’t wear tags at all.
Within this hierarchy operated a system of incentives and punishments. Workers who followed orders and proved successful at scamming could rise through the ranks to training or supervisory positions, and gain access to perks like restaurants and nightclubs. Those who failed to meet the targets or broke the rules faced violence and humiliation.
Gavesh says he was once beaten because he broke an unwritten rule that it was forbidden to cross your legs at work. Yawning was banned, and bathroom breaks were limited to two minutes at a time.
Beatings were usually conducted in the open, though the most severe punishments at Gavesh’s company happened in a room called the “water jail.” One day a coworker was there alongside the others, “and the next day he was not,” Gavesh recalls. When the colleague was brought back to the office, he had been so badly beaten he couldn’t walk or speak. “They took him to the front, and they said: ‘If you do not listen to us, this is what will happen to you.’”
Gavesh was desperate to leave but felt there was no chance of escaping. The armed guards seemed ready to shoot, and there were rumors in the compound that some people who jumped the fence had been found drowned in the river.
This kind of physical and psychological abuse is routine across the industry. Gavesh and others we spoke to describe working 12 hours or more a day, without days off. They faced strict quotas for the number of scam targets they had to have on the hook. If they failed to reach them, they were punished. The UN has documented cases of torture, arbitrary detention, and sexual violence in the compounds. We heard accounts of people made to perform calisthenics and being thrashed on the backside in front of other workers.
Even if someone could escape, there is often no authority to appeal to on the outside. KK Park and other scam factories in Myanmar are situated in a geopolitical gray zone—borderlands where criminal enterprises have based themselves for decades, trading in narcotics and other unlawful industries. Armed groups, some of them operating under the command of the military, are credibly believed to profit directly from the trade in people and contraband in these areas, in some cases facing international sanctions as a result. Illicit industries in Myanmar have only expanded since a military coup in 2021. By August 2023, according to UN estimates, more than 120,000 people were being held in the country for the purposes of forced scamming, making it the largest hub for such fraud in Southeast Asia.
In at least some attempt to get a handle on this lawlessness, Thailand tried to cut off internet services for some compounds across its western border starting last May. Syndicates adapted by running fiber-optic cables across the river. When some of those were discovered, they were severed by Thai authorities. Thailand again ramped up its crackdowns on the industry earlier this year, with tactics that included cutting off internet, gas, and electricity to known scamming enclaves, following the trafficking of a Chinese celebrity through Thailand into Myanmar.
Still, the scammers keep adapting—again, using Western technology. “We’ve started to see and hear of Starlink systems being used by these compounds,” says Eric Heintz, a global analyst at IJM.
While the military junta has criminalized the use of unauthorized satellite internet service, intercepted shipments and raids on scamming centers over the past year indicate that syndicates smuggle in equipment. The crackdowns seem to have had a limited impact—a Wired investigation published in February found that scamming networks appeared to be “widely using” Starlink in Myanmar. The journalist, using mobile-phone connection data collected by an online advertising industry tool, identified eight known scam compounds on the Myanmar-Thailand border where hundreds of phones had used Starlink more than 40,000 times since November 2024. He also identified photos that appeared to show dozens of Starlink satellite dishes on a scamming compound rooftop.
Starlink could provide another prime opportunity for systematic efforts to interrupt the scams, particularly since it requires a subscription and is able to geofence its services. “I could give you coordinates of where some of these [scamming operations] are, like IP addresses that are connecting to them,” Heintz says. “That should make a huge paper trail.”
Starlink’s parent company, SpaceX, has previously limited access in areas of Ukraine under Russian occupation, after all. Its policies also state that SpaceX may terminate Starlink services to users who participate in “fraudulent” activities. (SpaceX did not respond to a request for comment.)
Knowing the locations of scam compounds could also allow Apple to step in: Workers rely on iPhones to make contact with victims, and these have to be associated with an Apple ID, even if the workers use apps to spoof their locations.
As Heintz puts it, “[If] you have an iCloud account with five phones, and you know that those phones’ GPS antenna locates those phones inside a known scam compound, then all of those phones should be bricked. The account should be locked.”
(Apple did not provide a response to a request for comment.)
“This isn’t like the other trafficking cases that we’ve worked on, where we’re trying to find a boat in the middle of the ocean,” Heintz adds. “These are city-size compounds. We all know where they are, and we’ve watched them being built via satellite imagery. We should be able to do something location-based to take these accounts offline.”
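The location-based check Heintz describes is, in principle, simple: test whether a device's reported GPS position falls inside the geofence of a known compound. The sketch below illustrates only that step, with invented coordinates, radius, and threshold; it is not Apple's system or IJM's methodology.

```python
# Hypothetical sketch: flag devices whose GPS position falls inside a known
# scam compound's geofence. Coordinates, radius, and threshold are invented
# placeholders for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

COMPOUND_CENTER = (16.60, 98.55)   # invented latitude/longitude, not a real compound's coordinates
COMPOUND_RADIUS_KM = 2.0

def inside_compound(lat, lon):
    return haversine_km(lat, lon, *COMPOUND_CENTER) <= COMPOUND_RADIUS_KM

# If most devices linked to one account report positions inside the geofence,
# flag the account for review.
device_positions = [(16.601, 98.551), (16.598, 98.549), (16.602, 98.552)]
flagged = sum(inside_compound(lat, lon) for lat, lon in device_positions) >= 2
print("flag account for review:", flagged)
```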
Getting paid
Once Gavesh developed a relationship on social media or a dating site, he was supposed to move the conversation to WhatsApp. That platform is end-to-end encrypted, meaning even Meta can’t read the content of messages—although it should be possible for the company to spot a user’s unusual patterns of behavior, like opening large numbers of WhatsApp accounts or sending numerous messages in a short span of time.
“If you have an account that is suddenly adding people in large quantities all over the world, should you immediately flag it and freeze that account or require that that individual verify his or her information?” USIP’s Tower says.
After cultivating targets’ trust, scammers would inevitably shift the conversation to the subject of money. Having made themselves out to be living a life of luxury, they would offer a chance to share in the secrets of their wealth. Gavesh was taught to make the approach as if it were an extension of an existing intimacy. “I would not show this platform to anyone else,” he says he was supposed to say. “But since I feel like you are my life partner, I feel like you are my future.”
Lower-level workers like Gavesh were only expected to get scamming targets on the hook; then they’d pass off the relationship to a manager. From there, there is some variation in the approach, but the target is sometimes encouraged to set up an account with a mainstream crypto exchange and buy some tokens. Then the scammer sends the victim—or “customer,” as some workers say they called these targets—a link to a convincing, but fake, crypto investment platform.
After the target invests an initial amount of money, the scammer typically sends fake investment return charts that seem to show the value of that stake rising and rising. To demonstrate good faith, the scammer sends a few hundred dollars back to the victim’s crypto wallet, all the while working to convince the mark to keep investing. Then, once the customer is all in, the scammer goes in for the kill, using every means possible to take more money. “We [would] pull out bigger amounts from the customers and squeeze them out of their possessions,” one worker tells us.
The design of cryptocurrency allows some degree of anonymity, but with enough time, persistence, and luck, it’s possible to figure out where tokens are flowing. It’s also possible, though even more difficult, to discover who owns the crypto wallets.
In early 2024, University of Texas researchers John M. Griffin and Kevin Mei published a paper that followed money from crypto wallets associated with scammers. They tracked hundreds of thousands of transactions, collectively worth billions of dollars—money that was transferred in and out of mainstream exchanges, including Binance, Coinbase, and Crypto.com.
Scam workers spend time gaining the trust of their targets, often by deploying fraudulent personas and developing romantic relationships.
REUTERS/CARLOS BARRIA
Some scam syndicates would move crypto off these big exchanges, launder it through anonymous platforms known as mixers (which can be used to obscure crypto transactions), and then come back to the exchanges to cash out into fiat currency such as dollars.
Griffin and Mei were able to identify deposit addresses on Binance and smaller platforms, including Hong Kong–based Huobi and Seychelles-based OKX, that were collectively receiving billions of dollars from suspected scams. These addresses were being used over and over again to send and receive money, “suggesting limited monitoring by crypto exchanges,” the authors wrote.
(We were unable to reach OKX for comment; Coinbase and Huobi did not respond to requests for comment. A Binance spokesperson said that the company disputes the findings of the University of Texas study, alleging that they are “misleading at best and, at worst, wildly inaccurate.” The spokesperson also said that the company has extensive know-your-customer requirements, uses internal and third-party tools to spot illicit activity, freezes funds, and works with law enforcement to help reclaim stolen assets, claiming to have “proactively prevented $4.2 billion in potential losses for 2.8 million users from scams and frauds” and “recovered $88 million in stolen or misplaced funds” last year. A Crypto.com spokesperson said that the company is “committed to security, compliance and consumer protection” and that it uses “robust” transaction monitoring and fraud detection controls, “rigorously investigates accounts flagged for potential fraudulent activity or victimization,” and has internal blacklisting processes for wallet addresses known to be linked to scams.)
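At its core, the pattern Griffin and Mei describe, with the same deposit addresses receiving funds again and again, is an aggregation problem over public transaction records. The sketch below illustrates only that step, using invented addresses and amounts; it is not the researchers' actual methodology.

```python
# Hypothetical sketch: group transfers by destination address and flag
# addresses that receive large sums across many separate deposits.
# The transaction records below are invented; real analyses work from
# public blockchain data.
from collections import defaultdict

# (from_address, to_address, amount_usd)
transfers = [
    ("walletA", "deposit_addr_1", 250_000),
    ("walletB", "deposit_addr_1", 410_000),
    ("walletC", "deposit_addr_2", 12_000),
    ("walletD", "deposit_addr_1", 380_000),
]

totals = defaultdict(lambda: {"count": 0, "sum": 0})
for _, to_addr, amount in transfers:
    totals[to_addr]["count"] += 1
    totals[to_addr]["sum"] += amount

for addr, stats in totals.items():
    if stats["count"] >= 3 and stats["sum"] > 500_000:
        print(f"{addr}: {stats['count']} deposits totaling ${stats['sum']:,} -- flag for review")
```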
But while tracking illicit payments through the crypto ecosystem is possible, it’s “messy” and “complicated” to actually pin down who owns a scam wallet, according to Griffin Hotchkiss, a writer and use-case researcher at the Ethereum Foundation who has worked on crypto projects in Myanmar and who spoke in his personal capacity. Investigators have to build models that connect users to accounts by the flows of money going through them, which involves a degree of “guesswork” and “red string and sticky notes on the board trying to trace the flow of funds,” he says.
There are, however, certain actors within the crypto ecosystem who should have a good vantage point for observing how money moves through it. The most significant of these is Tether Holdings, a company formerly based in the British Virgin Islands (it has since relocated to El Salvador) that issues tether, or USDT, a so-called stablecoin whose value is nominally pegged to the US dollar. Tether is widely used by crypto traders to park their money in dollar-denominated assets without having to convert cryptocurrencies into fiat currency. It is also widely used in criminal activity.
There is more than $140 billion worth of USDT in circulation; in 2023, TRM Labs, a firm that traces crypto fraud, estimated that $19.3 billion worth of tether transactions was associated with illicit activity. In January 2024, the UN’s Office on Drugs and Crime said that tether was a leading means of exchange for fraudsters and money launderers operating in Southeast Asia. In October, US federal investigators reportedly opened an investigation alleging possible sanctions violations and complicity in money laundering (though at the time, Tether Holdings’ CEO said there was “no indication” the company was under investigation).
Tech experts tell us that USDT is ever-present in the scam business, used to move money and as the main medium of exchange on anonymous marketplaces such as Cambodia-based Huione Guarantee, which has been accused of allowing romance scammers to launder the proceeds of their crimes. (Cambodia revoked the banking license of Huione Pay in March of this year. Huione, which did not respond to a request for comment, has previously denied engaging in criminal activity.)
While much of the crypto ecosystem is decentralized, USDT “does have a central authority” that could intervene, Hotchkiss says. Tether’s code has functions that allow the company to blacklist users, freeze accounts, and even destroy tokens, he adds. (Tether Holdings did not respond to requests for comment.)
In practice, Hotchkiss says, the company has frozen very few accounts—and, like other experts we spoke to, he thinks it’s unlikely to happen at scale. If it were to start acting like a regulator or a bank, the currency would lose a fundamental part of its appeal: its anonymity and independence from the mainstream of finance. The more you intervene, “the less trust people have in your coin,” he says. “The incentives are kind of misaligned.”
Getting out
Gavesh really wasn’t very good at scamming. The knowledge that the person on the other side of the conversation was working hard for money that he was trying to steal weighed heavily on him. “There was this one guy I was chatting with, [using] a girl’s profile,” he says. “He was trying to make a living. He was working in a cafe. He had a daughter who was living with [her] mother. That story was really touching. And, like, you don’t want to get these people [involved].”
The nature of the work left him racked with guilt. “I believe in karma,” he says. “What goes around comes around.”
Twice during Gavesh’s incarceration, he was sold on from one “employer” to another, but he still struggled with scamming. In February 2023, he was put up for sale a third time, along with some other workers.
“We went to the boss and begged him not to sell [us] and to please let us go home,” Gavesh says. The boss eventually agreed but told them it would cost them. As well as forgoing their salaries, they had to pay a ransom—Gavesh’s was set at 72,000 Thai baht, more than $2,000.
Gavesh managed to scrape the money together, and he and around a dozen others were driven to the river in a military vehicle. “We had to be very silent,” he says. They were told “not to make any sounds or anything—just to get on the boat.” They slipped back into Thailand the way they had come.
To avoid checkpoints on the way to Bangkok, the smugglers took paths through the jungle and changed vehicles around 10 times.
The group barely had enough money to survive a couple of days in the city, so they stuck together, staying in a cheap hotel while figuring out what to do next. With the help of a compatriot, Gavesh got in touch with IJM, which offered to help him navigate the legal bureaucracy ahead.
The traffickers hadn’t given him back his passport, and he was in Thailand without authorization. It was April before he was finally able to board a flight home, where he faced yet more questioning from police and immigration officials. He told his family he had “a small visa issue” and that he had lost his passport in Bangkok. He has never told them about his ordeal. “It would be very hard for them to process,” he says.
Recent history shows it’s very unlikely Gavesh will get any justice. That’s part of the reason why disrupting scams’ technology supply chain is so important: It’s incredibly challenging to hold the people operating the syndicates accountable. They straddle borders and jurisdictions. They have trafficked people from more than 60 countries, according to research from USIP, and scam targets come from all over the world. Much of the stolen money is moved through crypto wallets based in secrecy jurisdictions. “This thing is really like an onion. You’ve got layer after layer after layer of it, and it’s just really difficult to see where jurisdiction starts and where jurisdiction ends,” Tower says.
Chinese authorities are often willing to cooperate with the military junta and armed groups in Myanmar that Western governments will not deal with, and they have cracked down where they can on operations involving their nationals. Thailand has also stepped up its efforts to address the human trafficking crisis and shut down scamming operations across its border in recent months. But when it comes to regulating tech platforms, the reaction from governments has been slower.
The few legislative efforts in the US, which are still in the earliest stages, focus on supporting law enforcement and financial institutions, not directly on ways to address the abuse of American tech platforms for scamming. And they probably won’t take that on anytime soon. Trump, who has been boosted and courted by several high-profile tech executives, has indicated that his administration opposes heavier online moderation. One executive order, signed in February, vows to impose tariffs on foreign governments if they introduce measures that could “inhibit the growth” of US companies—particularly those in tech—or compel them to moderate online content.
The Trump White House also supports reducing regulation in the crypto industry; it has halted major investigations into crypto companies and just this month removed sanctions on the crypto mixer Tornado Cash. In what was widely seen as a nod to libertarian-leaning crypto-enthusiasts, Trump pardoned Ross Ulbricht, the founder of the dark web marketplace Silk Road and one of the earlier adopters of crypto for large-scale criminal activity. The administration’s embrace of crypto could indeed have implications for the scamming industry, notes Kim, the economist: “It makes it much easier for crypto services to proliferate and have wider-spread adoption, and that might make it easier for criminal enterprises to tap into that and exploit that for their own means.”
What’s more, the new US administration has overseen the rollback of funding for myriad international aid programs, primarily programs run through the US Agency for International Development and including those working to help the people who’ve been trafficked into scam compounds. In late February, CNN reports, every one of the agency’s anti-trafficking projects was halted.
This all means it’s up to the tech companies themselves to act on their own initiative. And Big Tech has rarely acted without legislative threats or significant social or financial pressure. Companies won’t do anything if “it’s not mandatory, it’s not enforced by the government,” and most important, if companies don’t profit from it, says Wang, from the University of Texas. While a group of tech companies, including Meta, Match, and Coinbase, last year announced the formation of Tech Against Scams, a collaboration to share tips and best practices, experts tell us there are no concrete actions to point to yet.
And at a time when more resources are desperately needed to address the growing problems on their platforms, social media companies like X, Meta, and others have laid off hundreds of people from their trust and safety departments in recent years, reducing their capacity to tackle even the most pressing issues. Since the reelection of Trump, Meta has signaled an even greater rollback of its moderation and fact checking, a decision that earned praise from the president.
Still, companies may feel pressure given that a handful of entities and executives have in recent years been held legally responsible for criminal activity on their platforms. Changpeng Zhao, who founded Binance, the world’s largest cryptocurrency exchange, was sentenced to four months in jail last April after pleading guilty to breaking US money-laundering laws, and the company had to forfeit some $4 billion for offenses that included allowing users to bypass sanctions. Then last May, Alexey Pertsev, a Tornado Cash cofounder, was sentenced to more than five years in a Dutch prison for facilitating the laundering of money stolen by, among others, the Lazarus Group, North Korea’s infamous state-backed hacking team. And in August last year, French authorities arrested Pavel Durov, the CEO of Telegram, and charged him with complicity in drug trafficking and distribution of child sexual abuse material.
“I think all social media [companies] should really be looking at the case of Telegram right now,” USIP’s Tower says. “At that CEO level, you’re starting to see states try to hold a company accountable for its role in enabling major transnational criminal activity on a global scale.”
Compounding all the challenges, however, is the integration of cheap and easy-to-use artificial intelligence into scamming operations. The trafficked individuals we spoke to, who had mostly left the compounds before the widespread adoption of generative AI, said that if targets suggested a video call they would deflect or, as a last resort, play prerecorded video clips. Only one described the use of AI by his company; he says he was paid to record himself saying various sentences in ways that reflected different emotions, for the purposes of feeding the audio into an AI model. Recently, reports have emerged of scammers who have used AI-powered “face swap” and voice-altering products so that they can impersonate their characters more convincingly. “Malicious actors can exploit these models, especially open-source models, to produce content at an unprecedented scale,” says Gabrielle Tran, senior analyst for technology and society at IST. “These models are purposefully being fine-tuned … to serve as convincing humans.”
Experts we spoke with warn that if platforms don’t pick up the pace on enforcement now, they’re likely to fall even further behind.
Every now and again, Gavesh still goes on Facebook to report pages he thinks are scams. He never hears back.
But he is working again in the tourism industry and on the path to recovering from his ordeal. “I can’t say that I’m 100% out of the trauma, but I’m trying to survive because I have responsibilities,” he says.
He chose to speak out because he doesn’t want anyone else to be tricked—into a scamming compound, or into giving up their life savings to a stranger. He’s seen behind the scenes into a brutal industry that exploits people’s real needs for work, connection, and human contact, and he wants to make sure no one else ends up where he did.
“There’s a very scary world,” he says. “A world beyond what we have seen.”
Peter Guest is a journalist based in London. Emily Fishbein is a freelance journalist focusing on Myanmar.
Lisa Holligan already had two children when she decided to try for another baby. Her first two pregnancies had come easily. But for some unknown reason, the third didn’t. Holligan and her husband experienced miscarriage after miscarriage after miscarriage.
Like many other people struggling to conceive, Holligan turned to in vitro fertilization, or IVF. The technology allows embryologists to take sperm and eggs and fuse them outside the body, creating embryos that can then be transferred into a person’s uterus.
The fertility clinic treating Holligan was able to create six embryos using her eggs and her husband’s sperm. Genetic tests revealed that only three of these were “genetically normal.” After the first was transferred, Holligan got pregnant. Then she experienced yet another miscarriage. “I felt numb,” she recalls. But the second transfer, which took place several months later, stuck. And little Quinn, who turns four in February, was the eventual happy result. “She is the light in our lives,” says Holligan.
Holligan, who lives in the UK, opted to donate her “genetically abnormal” embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn’t know what to do with it.
Should she and her husband donate it to another family? Destroy it? “It’s almost four years down the line, and we still haven’t done anything with [the embryo],” she says. The clinic hasn’t been helpful—Holligan doesn’t remember talking about what to do with leftover embryos at the time, and no one there has been in touch with her for years, she says.
Holligan’s embryo is far from the only one in this peculiar limbo. Millions—or potentially tens of millions—of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.
At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, they’re human cells and nothing else. To others, they’re morally equivalent to children. Many feel they exist somewhere between those two extremes.
There are debates, too, over how we should classify embryos in law. Are they property? Do they have a legal status? These questions are important: There have been multiple legal disputes over who gets to use embryos, who is responsible if they are damaged, and who gets the final say over their fate. And the answers will depend not only on scientific factors, but also on ethical, cultural, and religious ones.
The options currently available to people with leftover IVF embryos mirror this confusion. As a UK resident, Holligan can choose to discard her embryos, make them available to other prospective parents, or donate them for research. People in the US can also opt for “adoption,” “placing” their embryos with families they get to choose. In Germany, people are not typically allowed to freeze embryos at all. And in Italy, embryos that are not used by the intended parents cannot be discarded or donated. They must remain frozen, ostensibly forever.
While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them?
Meanwhile, many of these same people are trying to find ways to bring down the total number of embryos stuck in storage with nowhere to go. Maintenance costs are high. Some clinics are running out of space. And the more embryos sit in storage, the more opportunities there are for human error.
The embryo boom
There are a few reasons why this has become such a conundrum. And they largely come down to an increasing demand for IVF and improvements in the way it is practiced. “It’s a problem of our own creation,” says Pietro Bortoletto, a reproductive endocrinologist at Boston IVF in Massachusetts. IVF has only become as successful as it is today by “generating lots of excess eggs and embryos along the way,” he says.
To have the best chance of creating healthy embryos that will attach to the uterus and grow in a successful pregnancy, clinics will try to collect multiple eggs. People who undergo IVF will typically take a course of hormone injections to stimulate their ovaries. Instead of releasing a single egg that month, they can expect to produce somewhere between seven and 20 eggs. These eggs can be collected via a needle that passes through the vagina and into the ovaries. The eggs are then taken to a lab, where they are introduced to sperm. Around 70% to 80% of IVF eggs are successfully fertilized to create embryos.
The embryos are then grown in the lab. After around five to seven days an embryo reaches a stage of development at which it is called a blastocyst, and it is ready to be transferred to a uterus. Not all IVF embryos reach this stage, however—only around 30% to 50% of them make it to day five. This process might leave a person with no viable embryos. It could also result in more than 10, only one of which is typically transferred in each pregnancy attempt. In a typical IVF cycle, one embryo might be transferred to the person’s uterus “fresh,” while any others that were created are frozen and stored.
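To see how quickly the leftovers can accumulate, here is a rough back-of-the-envelope calculation using midpoints of the ranges above; it is only an illustration, since real cycles vary widely.

```python
# Rough illustration of a single IVF cycle, using midpoints of the ranges
# quoted above; real cycles vary widely.
eggs_retrieved     = 14     # "somewhere between seven and 20 eggs"
fertilization_rate = 0.75   # "around 70% to 80%" of eggs fertilize
blastocyst_rate    = 0.40   # "around 30% to 50%" reach day five

blastocysts = eggs_retrieved * fertilization_rate * blastocyst_rate  # about 4
frozen      = blastocysts - 1   # typically one embryo is transferred per attempt

print(f"~{blastocysts:.0f} viable blastocysts, ~{frozen:.0f} frozen after one transfer")
```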
IVF success rates have increased over time, in large part thanks to improvements in this storage technology. A little over a decade ago, embryologists tended to use a “slow freeze” technique, says Bortoletto, and many embryos didn’t survive the process. Embryos are now vitrified instead, using liquid nitrogen to rapidly cool them from room temperature to -196 °C in less than two seconds. Vitrification essentially turns all the water in the embryos into a glasslike state, avoiding the formation of damaging ice crystals.
Now, clinics increasingly take a “freeze all” approach, in which they cryopreserve all the viable embryos and don’t start transferring them until later. In some cases, this is so that the clinic has a chance to perform genetic tests on the embryo they plan to transfer.
An assortment of sperm and embryos, preserved in liquid nitrogen.
ALAMY
Once a lab-grown embryo is around seven days old, embryologists can remove a few cells for preimplantation genetic testing (PGT), which screens for genetic factors that might make healthy development less likely or predispose any resulting children to genetic diseases. PGT is increasingly popular in the US—in 2014, it was used in 13% of IVF cycles, but by 2016, that figure had increased to 27%. Embryos that undergo PGT have to be frozen while the tests are run, which typically takes a week or two, says Bortoletto: “You can’t continue to grow them until you get those results back.”
Put this all together, and it’s easy to see how the number of embryos in storage is rocketing. We’re making and storing more embryos than ever before. Combine that with the growing demand for IVF, and perhaps it’s not surprising that the number of embryos sitting in storage tanks was estimated to be in the millions.
That was a decade ago. When I asked embryologists what they thought the number might be in the US today, I got responses between 1 million and 10 million. Bortoletto puts it somewhere around 5 million.
Globally, the figure is much higher. There could be tens of millions of embryos, invisible to the naked eye, kept in a form of suspended animation. Some for months, years, or decades. Others indefinitely.
Stuck in limbo
In theory, people who have embryos left over from IVF have a few options for what to do with them. They could donate the embryos for someone else to use. Often this can be done anonymously (although genetic tests might later reveal the biological parents of any children that result). They could also donate the embryos for research purposes. Or they could choose to discard them. One way to do this is to expose the embryos to air, causing the cells to die.
In practice, though, the available options vary greatly depending on where you are. And many of them lead to limbo.
Take Spain, for example, which is a European fertility hub, partly because IVF there is a lot cheaper than in other Western European countries, says Giuliana Baccino, managing director of New Life Bank, a storage facility for eggs and sperm in Buenos Aires, Argentina, and vice chair of the European Fertility Society. Operating costs are low, and there’s healthy competition—there are around 330 IVF clinics operating in Spain. (For comparison, there are around 500 IVF clinics in the US, which has a population almost seven times greater.)
Baccino, who is based in Madrid, says she often hears of foreign patients in their late 40s who create eight or nine embryos for IVF in Spain but end up using only one or two of them. They go back to their home countries to have their babies, and the embryos stay in Spain, she says. These individuals often don’t come back for their remaining embryos, either because they have completed their families or because they age out of IVF eligibility (Spanish clinics tend not to offer the treatment to people over 50).
An embryo sample is removed from cryogenic storage.
GETTY IMAGES
In 2023, the Spanish Fertility Society estimated that there were 668,082 embryos in storage in Spain, and that around 60,000 of them were “in a situation of abandonment.” In these cases the clinics might not be able to reach the intended parents, or might not have a clear directive from them, and might not want to destroy any embryos in case the patients ask for them later. But Spanish clinics are wary of discarding embryos even when they have permission to do so, says Baccino. “We always try to avoid trouble,” she says. “And we end up with embryos in this black hole.”
This happens to embryos in the US, too. Clinics can lose touch with their patients, who may move away or forget about their remaining embryos once they have completed their families. Other people may put off making decisions about those embryos and stop communicating with the clinic. In cases like these, clinics tend to hold onto the embryos, covering the storage fees themselves.
Nowadays clinics ask their patients to sign contracts that cover long-term storage of embryos—and the conditions of their disposal. But even with those in hand, it can be easier for clinics to leave the embryos in place indefinitely. “Clinics are wary of disposing of them without explicit consent, because of potential liability,” says Cattapan, who has researched the issue. “People put so much time, energy, money into creating these embryos. What if they come back?”
Bortoletto’s clinic has been in business for 35 years, and the handful of sites it operates in the US have a total of over 47,000 embryos in storage, he says. “Our oldest embryo in storage was frozen in 1989,” he adds.
Some people may not even know where their embryos are. Sam Everingham, who founded and directs Growing Families, an organization offering advice on surrogacy and cross-border donations, traveled with his partner from their home in Melbourne, Australia, to India to find an egg donor and surrogate back in 2009. “It was a Wild West back then,” he recalls. Everingham and his partner used donor eggs to create eight embryos with their sperm.
Everingham found the experience of trying to bring those embryos to birth traumatic. Baby Zac was stillborn. Baby Ben died at seven weeks. “We picked ourselves up and went again,” he recalls. Two embryo transfers were successful, and the pair have two daughters today.
But the fate of the rest of their embryos is unclear. India’s government decided to ban commercial surrogacy for foreigners in 2015, and Everingham lost track of where they are. He says he’s okay with that. As far as he’s concerned, those embryos are just cells.
He knows not everyone feels the same way. A few days before we spoke, Everingham had hosted a couple for dinner. They had embryos in storage and couldn’t agree on what to do with them. “The mother … wanted them donated to somebody,” says Everingham. Her husband was very uncomfortable with the idea. “[They have] paid storage fees for 14 years for those embryos because neither can agree on what to do with them,” says Everingham. “And this is a very typical scenario.”
Lisa Holligan’s experience is similar. Holligan thought she’d like to donate her last embryo to another person—someone else who might have been struggling to conceive. “But my husband and I had very different views on it,” she recalls. He saw the embryo as their child and said he wouldn’t feel comfortable with giving it up to another family. “I started having these thoughts about a child coming to me when they’re older, saying they’ve had a terrible life, and [asking] ‘Why didn’t you have me?’” she says.
After all, her daughter Quinn began as an embryo that was in storage for months. “She was frozen in time. She could have been frozen for five years like [the leftover] embryo and still be her,” she says. “I know it sounds a bit strange, but this embryo could be a child in 20 years’ time. The science is just mind-blowing, and I think I just block it out. It’s far too much to think about.”
No choice at all
Choosing the fate of your embryos can be difficult. But some people have no options at all.
This is the case in Italy, where the laws surrounding assisted reproductive technology have grown increasingly restrictive. Since 2004, IVF has been accessible only to heterosexual couples who are either married or cohabiting. Surrogacy has also been prohibited in the country for the last 20 years, and in 2024, it was made a “universal crime.” The move means Italians can be prosecuted for engaging in surrogacy anywhere in the world, a position Italy has also taken on the crimes of genocide and torture, says Sara Dalla Costa, a lawyer specializing in assisted reproduction and an IVF clinic manager at Instituto Bernabeu on the outskirts of Venice.
The law surrounding leftover embryos is similarly inflexible. Dalla Costa says there are around 900,000 embryos in storage in Italy, basing the estimate on figures published in 2021 and the number of IVF cycles performed since then. By law, these embryos cannot be discarded. They cannot be donated to other people, and they cannot be used for research.
Even when genetic tests show that the embryo has genetic features making it “incompatible with life,” it must remain in storage, forever, says Dalla Costa.
“There are a lot of patients that want to destroy embryos,” she says. For that, they must transfer their embryos to Spain or other countries where it is allowed.
Even people who want to use their embryos may “age out” of using them. Dalla Costa gives the example of a 48-year-old woman who undergoes IVF and creates five embryos. If the first embryo transfer happens to result in a successful pregnancy, the other four will end up in storage. Once she turns 50, this woman won’t be eligible for IVF in Italy. Her remaining embryos become stuck in limbo. “They will be stored in our biobanks forever,” says Dalla Costa.
Dalla Costa says she has “a lot of examples” of couples who separate after creating embryos together. For many of them, the stored embryos become a psychological burden. With no way of discarding them, these couples are forever connected through their cryopreserved cells. “A lot of our patients are stressed for this reason,” she says.
Earlier this year, one of Dalla Costa’s clients passed away, leaving behind the embryos she’d created with her husband. He asked the clinic to destroy them. In cases like these, Dalla Costa will contact the Italian Ministry of Health. She has never been granted permission to discard an embryo, but she hopes that highlighting cases like these might at least raise awareness about the dilemmas the country’s policies are creating for some people.
Snowflakes and embabies
In Italy, embryos have a legal status. They have protected rights and are viewed almost as children. This sentiment isn’t specific to Italy. It is shared by plenty of individuals who have been through IVF. “Some people call them ‘embabies’ or ‘freezer babies,’” says Cattapan.
It is also shared by embryo adoption agencies in the US. Beth Button is executive director of one such program, called Snowflakes—a division of Nightlight Christian Adoptions agency, which considers cryopreserved embryos to be children, frozen in time, waiting to be born. Snowflakes matches embryo donors, or “placing families,” with recipients, termed “adopting families.” Both parties share their information and essentially get to choose who they donate to or receive from. By the end of 2024, 1,316 babies had been born through the Snowflakes embryo adoption program, says Button.
Button thinks that far too many embryos are being created in IVF labs around the US. Around 10 years ago, her agency received a donation from a couple that had around 38 leftover embryos to donate. “We really encourage [people with leftover embryos in storage] to make a decision [about their fate], even though it’s an emotional, difficult decision,” she says. “Obviously, we just try to keep [that discussion] focused on the child: Is it better for these children to be sitting in a freezer, even though that might be easier for you, or is it better for them to have a chance to be born into a loving family? That kind of pushes them to the point where they’re ready to make that decision.”
Button and her colleagues feel especially strongly about embryos that have been in storage for a long time. These embryos are usually difficult to place, because they are thought to be of poorer quality, or less likely to successfully thaw and result in a healthy birth. The agency runs a program called Open Hearts specifically to place them, along with others that are harder to match for various reasons. People who accept one but fail to conceive are given a shot with another embryo, free of charge.
These nitrogen tanks at New Hope Fertility Center in New York hold tens of thousands of frozen embryos and eggs.
GETTY IMAGES
“We have seen perfectly healthy children born from very old embryos, [as well as] embryos that were considered such poor quality that doctors didn’t even want to transfer them,” says Button. “Right now, we have a couple who is pregnant with [an embryo] that was frozen for 30 and a half years. If that pregnancy is successful, that will be a record for us, and I think it will be a worldwide record as well.”
Many embryologists bristle at the idea of calling an embryo a child, though. “Embryos are property. They are not unborn children,” says Bortoletto. In the best case, embryos create pregnancies around 65% of the time, he says. “They are not unborn children,” he repeats.
Person or property?
In 2020, an unauthorized person allegedly entered an IVF clinic in Alabama and pulled frozen embryos from storage, destroying them. Three sets of intended parents filed suit over their “wrongful death.” A trial court dismissed the claims, but the Alabama Supreme Court disagreed, essentially determining that those embryos were people. The ruling shocked many and was expected to have a chilling effect on IVF in the state, although within a few weeks, the state legislature granted criminal and civil immunity to IVF clinics.
But the Alabama decision is the exception. While there are active efforts in some states to endow embryos with the same legal rights as people, a move that could potentially limit access to abortion, “most of the [legal] rulings in this area have made it very clear that embryos are not people,” says Rich Vaughn, an attorney specializing in fertility law and the founder of the US-based International Fertility Law Group. At the same time, embryos are not just property. “They’re something in between,” says Vaughn. “They’re sort of a special type of property.”
UK law takes a similar approach: The language surrounding embryos and IVF was drafted with the idea that the embryo has some kind of “special status,” although it was never made entirely clear exactly what that special status is, says James Lawford Davies, a solicitor and partner at LDMH Partners, a law firm based in York, England, that specializes in life sciences. Over the years, the language has been tweaked to encompass embryos that might arise from IVF, cloning, or other means; it is “a bit of a fudge,” says Lawford Davies. Today, the official—if somewhat circular—legal definition in the Human Fertilisation and Embryology Act reads: “embryo means a live human embryo.”
And while people who use their eggs or sperm to create embryos might view these embryos as theirs, according to UK law, embryos are more like “a stateless bundle of cells,” says Lawford Davies. They’re not quite property—people don’t own embryos. They just have control over how they are used.
Many legal disputes revolve around who has control. This was the experience of Natallie Evans, who created embryos with her then partner Howard Johnston in the UK in 2001. The couple separated in 2002. Johnston wrote to the clinic to ask that their embryos be destroyed. But Evans, who had been diagnosed with ovarian cancer in 2001, wanted to use them. She argued that Johnston had already consented to their creation, storage, and use and should not be allowed to change his mind. The case eventually made it to the European Court of Human Rights, and Evans lost. It set a precedent that consent was key and could be withdrawn at any time.
In Italy, on the other hand, withdrawing consent isn’t always possible. In 2021, a case like Natallie Evans’s unfolded in the Italian courts: A woman who wanted to proceed with implantation after separating from her partner went to court for authorization. “She said that it was her last chance to be a mother,” says Dalla Costa. The judge ruled in her favor.
Dalla Costa’s clinics in Italy are now changing their policies to align with this decision. Male partners must sign a form acknowledging that they cannot prevent embryos from being used once they’ve been created.
The US situation is even more complicated, because each state has its own approach to fertility regulation. When I looked through a series of published legal disputes over embryos, I found little consistency—sometimes courts ruled to allow a woman to use an embryo without the consent of her former partner, and sometimes they didn’t. “Some states have comprehensive … legislation; some do not,” says Vaughn. “Some have piecemeal legislation, some have only case law, some have all of the above, some have none of the above.”
The meaning of an embryo
So how should we define an embryo? “It’s the million-dollar question,” says Heidi Mertes, a bioethicist at Ghent University in Belgium. Some bioethicists and legal scholars, including Vaughn, think we’d all stand to benefit from clear legal definitions.
Risa Cromer, a cultural anthropologist at Purdue University in Indiana, who has spent years researching the field, is less convinced. Embryos exist in a murky, in-between state, she argues. You can (usually) discard them, or transfer them, but you can’t sell them. You can make claims against damages to them, but an embryo is never viewed in the same way as a car, for example. “It doesn’t fit really neatly into that property category,” says Cromer. “But, very clearly, it doesn’t fit neatly into the personhood category either.”
And there are benefits to keeping the definition vague, she adds: “There is, I think, a human need for there to be a wide range of interpretive space for what IVF embryos are or could be.”
That’s because we don’t have a fixed moral definition of what an embryo is. Embryos hold special value even for people who don’t view them as children. They hold potential as human life. They can come to represent a fertility journey—one that might have been expensive, exhausting, and traumatizing. “Even for people who feel like they’re just cells, it still cost a lot of time, money, [and effort] to get those [cells],” says Cattapan.
“I think it’s an illusion that we might all agree on what the moral status of an embryo is,” Mertes says.
In the meantime, a growing number of embryologists, ethicists, and researchers are working to persuade fertility clinics and their patients not to create or freeze so many embryos in the first place. Early signs aren’t promising, says Baccino. The patients she has encountered aren’t particularly receptive to the idea. “They think, ‘If I will pay this amount for a cycle, I want to optimize my chances, so in my case, no,’” she says. She expects the number of embryos in storage to continue to grow.
Holligan’s embryo has been in storage for almost five years. And she still doesn’t know what to do with it. She tears up as she talks through her options. Would discarding the embryo feel like a miscarriage? Would it be a sad thing? If she donated the embryo, would she spend the rest of her life wondering what had become of her biological child, and whether it was having a good life? Should she hold on to the embryo for another decade in case her own daughter needs to use it at some point?
“The question [of what to do with the embryo] does pop into my head, but I quickly try to move past it and just say ‘Oh, that’s something I’ll deal with at a later time,’” says Holligan. “I’m sure [my husband] does the same.”
The accumulation of frozen embryos is “going to continue this way for some time until we come up with something that fully addresses everyone’s concerns,” says Vaughn. But will we ever be able to do that?
“I’m an optimist, so I’m gonna say yes,” he says with a hopeful smile. “But I don’t know at the moment.”
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.
But all that is up for grabs. We are at a new inflection point.
The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.
Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.
More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.
Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.
I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.
On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.”
But this isn’t just about publishers (or my own self-interest).
People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer.
But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.
Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.
Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?
In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.
Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey.
Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.
And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.
But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.
But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.
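To make the idea concrete, here is a minimal sketch of link-based ranking in the spirit of Google's original insight. It is an illustration only: the toy link graph, the damping factor, and the fixed iteration count are assumptions for demonstration, not Google's actual algorithm.

```python
# Illustrative only: a toy link-based ranker in the spirit of PageRank.
# The link graph, damping factor, and iteration count are made-up assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a page with no outgoing links spreads its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:  # otherwise its score flows to the pages it links to
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A page that many other pages point to ends up with the highest score.
toy_web = {
    "news-site": ["blog", "forum"],
    "blog": ["news-site"],
    "forum": ["news-site"],
    "lone-page": ["news-site"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```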
For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.
“It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.
It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.
But once you’ve used AI Overviews a bit, you realize they are different.
Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world.
While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web.
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”
The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)
“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”
That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.
“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.
There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.
In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.
Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?
I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources.
“When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”
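Google doesn’t publish the details of how that corroboration works, but the general pattern Nayak describes, generating an answer from retrieved results and only showing it when the sources back it up, can be sketched generically. Everything below is a hypothetical stand-in: the retrieve and generate_answer callables, the word-overlap scoring, and the threshold are assumptions for illustration, not Google’s pipeline.

```python
# Generic sketch of "generate, then corroborate" for an AI answer box.
# Not Google's implementation: retrieve(), generate_answer(), the overlap-based
# scoring, and the 0.6 threshold are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str

def corroboration_score(claim: str, sources: list[Source]) -> float:
    """Crude proxy: share of the claim's longer words found in at least one source."""
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    if not words:
        return 0.0
    supported = {w for w in words if any(w in s.text.lower() for s in sources)}
    return len(supported) / len(words)

def answer_query(query, retrieve, generate_answer, threshold=0.6):
    sources = retrieve(query)                 # ranked results from a search index
    draft = generate_answer(query, sources)   # a language model drafts an overview
    if corroboration_score(draft, sources) < threshold:
        return None, sources                  # fall back to plain links, no AI answer
    return draft, sources
```

The point of the sketch is the shape of the pipeline rather than the scoring: retrieval comes first, generation is grounded in it, and the overview is suppressed when the sources don’t support it.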
In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.
“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”
The new search
Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.
Google
The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries.
What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity
Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries.
What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT
While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search.
What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.
When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.
“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”
There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.
“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”
But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?
Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.
“If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.
Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”
Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”
“I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”
He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?
A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.
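OpenAI hasn’t described its internals beyond what’s above, but the “search as a tool” behavior it describes, where the model decides when a query needs live information, follows a common pattern. The sketch below is a guess at that shape, not OpenAI’s code; needs_fresh_info, compose_answer, and web_search are hypothetical stand-ins.

```python
# Illustrative sketch of the "search as a tool" pattern described above.
# Not OpenAI's implementation: the model object and web_search callable are
# hypothetical stand-ins for the model and an unnamed search partner.

def answer_with_optional_search(query, model, web_search):
    # The model first decides whether the question needs up-to-date information.
    if model.needs_fresh_info(query):
        results = web_search(query)  # e.g. a list of {"url": ..., "snippet": ...} dicts
        # The answer is then grounded in those results, and the URLs are kept
        # so the reply can link out to its sources.
        reply = model.compose_answer(query, context=results)
        return reply, [r["url"] for r in results]
    # Otherwise the model answers from its training data alone.
    return model.compose_answer(query, context=None), []
```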
According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.
OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than what’s in its models’ training data, which has a cutoff date that is often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.
“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”
Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.
Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)
But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners.
Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”
When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.
“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”
Indeed!
The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.
It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”
We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.
The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.
“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”
This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.
Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.
“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”
And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”
“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”
This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.
In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.
But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.
These are the kinds of things that start to happen when you take the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.
That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.
At first glance, the Mosphera scooter may look normal—just comically oversized. It’s like the monster truck of scooters, with a footplate seven inches off the ground that’s wide enough to stand on with your feet slightly apart—which you have to do to keep your balance, because when you flip the accelerator with a thumb, it takes off like a rocket. While the version I tried in a parking lot in Riga’s warehouse district had a limiter on the motor, the production version of the supersized electric scooter can hit 100 kilometers (62 miles) per hour on the flat. The all-terrain vehicle can also go 300 kilometers on a single charge and climb 45-degree inclines.
Latvian startup Global Wolf Motors launched in 2020 with a hope that the Mosphera would fill a niche in micromobility. Like commuters who use scooters in urban environments, farmers and vintners could use the Mosphera to zip around their properties; miners and utility workers could use it for maintenance and security patrols; police and border guards could drive them on forest paths. And, they thought, maybe the military might want a few to traverse its bases or even the battlefield—though they knew that was something of a long shot.
When co-founders Henrijs Bukavs and Klavs Asmanis first went to talk to Latvia’s armed forces, they were indeed met with skepticism—a military scooter, officials implied, didn’t make much sense—and a wall of bureaucracy. They found that no matter how good your pitch or how glossy your promo video (and Global Wolf’s promo is glossy: a slick montage of scooters jumping, climbing, and speeding in formation through woodlands and deserts), getting into military supply chains meant navigating layer upon layer of officialdom.
Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed. In the desperate early days of the war, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas—like a military scooter—that might not have made the cut in peacetime. Asmanis knew a Latvian journalist heading to Ukraine; through the reporter’s contacts, the startup arranged to ship two Mospheras to the Ukrainian army.
Within weeks, the scooters were at the front line—and even behind it, being used by Ukrainian special forces scouts on daring reconnaissance missions. It was an unexpected but momentous step for Global Wolf, and an early indicator of a new demand that’s sweeping across tech companies along Ukraine’s borders: for civilian products that can be adapted quickly for military use.
Global Wolf’s high-definition marketing materials turned out to be nowhere near as effective as a few minutes of grainy phone footage from the war. The company has since shipped out nine more scooters to the Ukrainian army, which has asked for another 68. Where Latvian officials once scoffed, the country’s prime minister went to see Mosphera’s factory in April 2024, and now dignitaries and defense officials from the country are regular visitors.
It might have been hard a few years ago to imagine soldiers heading to battle on oversized toys made by a tech startup with no military heritage. But Ukraine’s resistance to Russia’s attacks has been a miracle of social resilience and innovation—and the way the country has mobilized is serving as both a warning and an inspiration to its neighbors. They’ve watched as startups, major industrial players, and political leaders in Ukraine have worked en masse to turn civilian technology into weapons and civil defense systems. They’ve seen Ukrainian entrepreneurs help bootstrap a military-industrial complex that is retrofitting civilian drones into artillery spotters and bombers, while software engineers become cyberwarriors and AI companies shift to battlefield intelligence. Engineers work directly with friends and family on the front line, iterating their products with incredible speed.
Their successes—often at a fraction of the cost of conventional weapons systems—have in turn awakened European governments and militaries to the potential of startup-style innovation, and awakened startups to the dual-use potential of their own products: ones that have legitimate civilian applications but can be modified at scale into weapons.
This heady mix of market demand and existential threat is pulling tech companies in Latvia and the other Baltic states into a significant pivot. Companies that can find military uses for their products are hardening them and discovering ways to get them in front of militaries that are increasingly willing to entertain the idea of working with startups. It’s a turn that may only become more urgent if the US under incoming President Donald Trump becomes less willing to underwrite the continent’s defense.
But while national governments, the European Union, and NATO are all throwing billions of dollars of public money into incubators and investment funds—followed closely by private-sector investors—some entrepreneurs and policy experts who have worked closely with Ukraine warn that Europe might have only partially learned the lessons from Ukraine’s resistance.
If Europe wants to be ready to meet the threat of attack, it needs to find new ways of working with the tech sector. That includes learning how Ukraine’s government and civil society adapted to turn civilian products into dual-use tools quickly and cut through bureaucracy to get innovative solutions to the front. Ukraine’s resilience shows that military technology isn’t just about what militaries buy but about how they buy it, and about how politics, civil society, and the tech sector can work together in a crisis.
“I think that a lot of tech companies in Europe would do what is needed to do. They would put their knowledge and skills where they’re needed,” says Ieva Ilves, a veteran Latvian diplomat and technology policy expert. But many governments across the continent are still too slow, too bureaucratic, and too worried that they might appear to be wasting money, meaning, she says, that they are not necessarily “preparing the soil for if [a] crisis comes.”
“The question is,” she says, “on a political level, are we capable of learning from Ukraine?”
Waking up the neighbors
Many Latvians and others across the Baltic nations feel the threat of Russian aggression more viscerally than their neighbors in Western Europe. Like Ukraine, Latvia has a long border with Russia and Belarus, a large Russian-speaking minority, and a history of occupation. Also like Ukraine, it has been the target of more than a decade of so-called “hybrid war” tactics—cyberattacks, disinformation campaigns, and other attempts at destabilization—directed by Moscow.
Since Russian tanks crossed into Ukraine two-plus years ago, Latvia has stepped up its preparations for a physical confrontation, investing more than €300 million ($316 million) in fortifications along the Russian border and reinstating a limited form of conscription to boost its reserve forces. Since the start of this year, the Latvian fire service has been inspecting underground structures around the country, looking for cellars, parking garages, and metro stations that could be turned into bomb shelters.
And much like Ukraine, Latvia doesn’t have a huge military-industrial complex that can churn out artillery shells or tanks en masse.
What it and other smaller European countries can produce for themselves—and potentially sell to their allies—are small-scale weapons systems, software platforms, telecoms equipment, and specialized vehicles. The country is now making a significant investment in tools like Exonicus, a medical technology platform founded 11 years ago by Latvian sculptor Sandis Kondrats. Users of its augmented-reality battlefield-medicine training simulator put on a virtual reality headset that presents them with casualties, which they have to diagnose and figure out how to treat. The all-digital training saves money on mannequins, Kondrats says, and on critical field resources.
“If you use all the medical supplies on training, then you don’t have any medical supplies,” he says. Exonicus has recently broken into the military supply chain, striking deals with the Latvian, Estonian, US, and German militaries, and it has been training Ukrainian combat medics.
Medical technology company Exonicus has created an augmented-reality battlefield-medicine training simulator that presents users with casualties, which they have to diagnose and figure out how to treat.
There’s also VR Cars, a company founded by two Latvian former rally drivers, that signed a contract in 2022 to develop off-road vehicles for the army’s special forces. And there is Entangle, a quantum encryption company that sells widgets that turn mobile phones into secure communications devices, and has recently received an innovation grant from the Latvian Ministry of Defense.
Unsurprisingly, a lot of the focus in Latvia has been on unmanned aerial vehicles (UAVs), or drones, which have become ubiquitous on both sides fighting in Ukraine, often outperforming weapons systems that cost an order of magnitude more. In the early days of the war, Ukraine found itself largely relying on machines bought from abroad, such as the Turkish-made Bayraktar strike aircraft and jury-rigged DJI quadcopters from China. It took a while, but within a year the country was able to produce home-grown systems.
As a result, a lot of the emphasis in defense programs across Europe is on UAVs that can be built in-country. “The biggest thing when you talk to [European ministries of defense] now is that they say, ‘We want a big amount of drones, but we also want our own domestic production,’” says Ivan Tolchinsky, CEO of Atlas Dynamics, a drone company headquartered in Riga. Atlas Dynamics builds drones for industrial uses and has now made hardened versions of its surveillance UAVs that can resist electronic warfare and operate in battlefield conditions.
Agris Kipurs founded AirDog in 2014 to make drones that could track a subject autonomously; they were designed for people doing outdoor sports who wanted to film themselves without needing to fiddle with a controller. He and his co-founders sold the company to a US home security company, Alarm.com, in 2020. “For a while, we did not know exactly what we would build next,” Kipurs says. “But then, with the full-scale invasion of Ukraine, it became rather obvious.”
His new company, Origin Robotics, has recently “come out of stealth mode,” he says, after two years of research and development. Origin has built on the team’s experience in consumer drones and its expertise in autonomous flight to begin to build what Kipurs calls “an airborne precision-guided weapon system”—a guided bomb that a soldier can carry in a backpack.
The Latvian government has invested in encouraging startups like these, as well as small manufacturers, to develop military-capable UAVs by establishing a €600,000 prize fund for domestic drone startups and a €10 million budget to create a new drone program, working with local and international manufacturers.
VR Cars was founded by two Latvian former rally drivers and has developed off-road vehicles for the army’s special forces.
Latvia is also the architect and co-leader, with the UK, of the Drone Coalition, a multicountry initiative that’s directing more than €500 million toward building a drone supply chain in the West. Under the initiative, militaries run competitions for drone makers, rewarding high performers with contracts and sending their products to Ukraine. Its grantees are often not allowed to publicize their contracts, for security reasons. “But the companies which are delivering products through that initiative are new to the market,” Kipurs says. “They are not the companies that were there five years ago.”
Even national telecommunications company LMT, which is partly government owned, is working on drones and other military-grade hardware, including sensor equipment and surveillance balloons. It’s developing a battlefield “internet of things” system—essentially, a system that can track in real time all the assets and personnel in a theater of war. “In Latvia, more or less, we are getting ready for war,” says former naval officer Kaspars Pollaks, who heads an LMT division that focuses on defense innovation. “We are just taking the threat really seriously. Because we will be operationally alone [if Russia invades].”
The Latvian government’s investments are being mirrored across Europe: NATO has expanded its Defence Innovation Accelerator for the North Atlantic (DIANA) program, which runs startup incubators for dual-use technologies across the continent and the US, and launched a separate €1 billion startup fund in 2022. Adding to this, the European Investment Fund, a publicly owned investment company, launched a €175 million fund-of-funds this year to support defense technologies with dual-use potential. And the European Commission has earmarked more than €7 billion for defense research and development between now and 2027.
Private investors are also circling, looking for opportunities to profit from the boom. Figures from the European consultancy Dealroom show that fundraising by dual-use and military-tech companies on the continent was just shy of $1 billion in 2023—up nearly a third over 2022, despite an overall slowdown in venture capital activity.
Atlas Dynamics builds drones for industrial uses and now makes hardened versions that can resist electronic warfare and operate in battlefield conditions.
When Atlas Dynamics started in 2015, funding was hard to come by, Tolchinsky says: “It’s always hard to make it as a hardware company, because VCs are more interested in software. And if you start talking about the defense market, people say, ‘Okay, it’s a long play for 10 or 20 years, it’s not interesting.’” That’s changed since 2022. “Now, what we see because of this war is more and more venture capital that wants to invest in defense companies,” Tolchinsky says.
But while money is helping startups get off the ground, to really prove the value of their products they need to get their tools in the hands of people who are going to use them. When I asked Kipurs if his products are currently being used in Ukraine, he only said: “I’m not allowed to answer that question directly. But our systems are with end users.”
Battle tested
Ukraine has moved on from the early days of the conflict, when it was willing to take almost anything that could be thrown at the invaders. But that experience has been critical in pushing the government to streamline its procurement processes dramatically to allow its soldiers to try out new defense-tech innovations.
Origin Robotics has built on a history of producing consumer drones to create a guided bomb that a soldier can carry in a backpack.
Technology that doesn’t work at the front puts soldiers at risk, so in many cases they have taken matters into their own hands. Two Ukrainian drone makers tell me that military procurement in the country has been effectively flipped on its head: If you want to sell your gear to the armed forces, you don’t go to the general staff—you go directly to the soldiers and put it in their hands. Once soldiers start asking their senior officers for your tool, you can go back to the bureaucrats and make a deal.
Many foreign companies have simply donated their products to Ukraine—partly out of a desire to help, and partly because they’ve identified a (potentially profitable) opportunity to expose them to the shortened innovation cycles of conflict and to get live feedback from those fighting. This can be surprisingly easy as some volunteer units handle their own parallel supply chains through crowdfunding and donations, and they are eager to try out new tools if someone is willing to give them freely. One logistics specialist supplying a front line unit, speaking anonymously as he’s not authorized to talk to the media, tells me that this spring, they turned to donated gear from startups in Europe and the US to fill gaps left by delayed US military aid, including untested prototypes of UAVs and communications equipment.
All of this has allowed many companies to bypass the traditionally slow process of testing and demonstrating their products, for better and worse.
Tech companies’ rush into the conflict zone has unnerved some observers, who are worried that by going to war, companies have sidestepped ethical and safety concerns over their tools. Clearview AI gave Ukraine access to its controversial facial recognition tools to help identify Russia’s war dead, for example, sparking moral and practical questions over accuracy, privacy, and human rights—publishing images of those killed in war is arguably a violation of the Geneva Convention. Some high-profile tech executives, including Palantir CEO Alex Karp and former Google CEO-turned-military-tech-investor Eric Schmidt, have used the conflict to try to shift the global norms for using artificial intelligence in war, building systems that let machines select targets for attacks—which some experts worry is a gateway into autonomous “killer robots.”
LMT’s Pollaks says he has visited Ukraine often since the war began. Though he declines to give more details, he euphemistically describes Ukraine’s wartime bureaucracy as “nonstandardized.” If you want to blow something up in front of an audience in the EU, he says, you have to go through a whole lot of approvals, and the paperwork can take months, even years. In Ukraine, plenty of people are willing to try out your tools.
“[Ukraine], unfortunately, is the best defense technology experimentation ground in the world right now,” Pollaks says. “If you are not in Ukraine, then you are not in the defense business.”
Jack Wang, principal at UK-based venture capital fund Project A, which invests in military-tech startups, agrees that the Ukraine “track” can be incredibly fruitful. “If you sell to Ukraine, you get faster product and tech iteration, and live field testing,” he says. “The dollars might vary. Sometimes zero, sometimes quite a bit. But you get your product in the field faster.”
The feedback that comes from the front is invaluable. Atlas Dynamics has opened an office in Ukraine, and its representatives there work with soldiers and special forces to refine and modify their products. When Russian forces started jamming a wide band of radio frequencies to disrupt communication with the drones, Atlas designed a smart frequency-hopping system, which scans for unjammed frequencies and switches control of the drone over to them, putting soldiers a step ahead of the enemy.
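As a rough illustration, the frequency-hopping idea reduces to scanning candidate channels, measuring interference, and moving the control link to the cleanest one. The sketch below is a toy version with invented numbers and thresholds; it is not Atlas Dynamics’ system.

```python
# Toy sketch of jam-aware channel selection, with invented readings.
# Not Atlas Dynamics' system; thresholds and frequencies are illustrative.

def pick_channel(channels, measure_noise, jam_threshold=-70.0):
    """Return the quietest channel whose measured noise floor (dBm) is below the threshold."""
    readings = {ch: measure_noise(ch) for ch in channels}
    usable = {ch: noise for ch, noise in readings.items() if noise < jam_threshold}
    if not usable:
        return None  # every channel looks jammed; hold or fail safe
    return min(usable, key=usable.get)

# Fabricated example: 2.4 and 5.8 GHz look jammed, 5.2 GHz is clean.
fake_noise = {2.4: -60.0, 5.2: -95.0, 5.8: -55.0}
best = pick_channel(fake_noise, lambda ch: fake_noise[ch])
print(f"Hop control link to {best} GHz" if best else "All channels jammed")
```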
At Global Wolf, battlefield testing for the Mosphera has led to small but significant iterations of the product, which have come naturally as soldiers use it. One scooter-related problem on the front turned out to be resupplying soldiers in entrenched positions with ammunition. Just as urban scooters have become last-mile delivery solutions in cities, troops found that the Mosphera was well suited to shuttling small quantities of ammo at high speeds across rough ground or through forests. To make this job easier, Global Wolf tweaked the design of the vehicle’s optional extra trailer so that it perfectly fits eight NATO standard-sized bullet boxes.
Within weeks of Russia’s full-scale invasion, Mosphera scooters were at Ukraine’s front line—and even behind it, being used by Ukrainian special forces scouts.
Some snipers prefer the electric Mosphera to noisy motorbikes or quads, using the vehicles to weave between trees to get into position. But they also like to shoot from the saddle—something they couldn’t do from the scooter’s footplate. So Global Wolf designed a stable seat that lets shooters fire without having to dismount. Some units wanted infrared lights, and the company has made those, too. These types of requests give the team ideas for new upgrades: “It’s like buying a car,” Asmanis says. “You can have it with air conditioning, without air conditioning, with heated seats.”
Being battle-tested is already proving to be a powerful marketing tool. Bukavs told me he thinks defense ministers are getting closer to moving from promises toward “action.” The Latvian police have bought a handful of Mospheras, and the country’s military has acquired some, too, for special forces units. (“We don’t have any information on how they’re using them,” Asmanis says. “It’s better we don’t ask,” Bukavs interjects.) Military distributors from several other countries have also approached them to market their units locally.
Although they say their donations were motivated first and foremost by a desire to help Ukraine resist the Russian invasion, Bukavs and Asmanis admit that they have been paid back for their philanthropy many times over.
Of course, all this could change soon, and the Ukraine “track” could very well be disrupted when Trump returns to office in January. The US has provided more than $64 billion worth of military aid to Ukraine since the start of the full-scale invasion. A significant amount of that has been spent in Europe, in what Wang calls a kind of “drop-shipping”—Ukraine asks for drones, for instance, and the US buys them from a company in Europe, which ships them directly to the war effort.
Wang showed me a recent pitch deck from one European military-tech startup. In assessing the potential budgets available for its products, it compares the Ukrainian budget, which was in the tens of millions of dollars, and the “donated from everybody else” budget, which was a billion dollars. A large amount of that “everybody else” money comes from the US.
If, as many analysts expect, the Trump administration dramatically reduces or entirely stops US military aid to Ukraine, these young companies focused on military tech and dual-use tech will likely take a hit. “Ideally, the European side will step up their spending on European companies, but there will be a short-term gap,” Wang says.
A lasting change?
Russia’s full-scale invasion exposed how significantly the military-industrial complex in Europe has withered since the Cold War. Across the continent, governments have cut back investments in hardware like ships, tanks, and shells, partly because of a belief that wars would be fought on smaller scales, and partly to trim their national budgets.
“After decades of Europe reducing its combat capability,” Pollaks says, “now we are in the situation we are in. [It] will be a real challenge to ramp it up. And the way to do that, at least from our point of view, is real close integration between industry and the armed forces.”
This would hardly be controversial in the US, where the military and the defense industry often work closely together to develop new systems. But in Europe, this kind of collaboration would be “a bit wild,” Pollaks says. Militaries tend to be more closed off, working mainly with large defense contractors, and European investors have tended to be more squeamish about backing companies whose products could end up going to war.
As a result, despite the many positive signs for the developers of military tech, progress in overhauling the broader supply chain has been slower than many people in the sector would like.
Several founders of dual-use and military-tech companies in Latvia and the other Baltic states tell me they are often invited to events where they pitch to enthusiastic audiences of policymakers, but they never see any major orders afterward. “I don’t think any amount of VC blogging or podcasting will change how the military actually procures technology,” says Project A’s Wang. Despite what’s happening next door, Ukraine’s neighbors are still ultimately operating in peacetime. Government budgets remain tight, and even if the bureaucracy has become more flexible, layers upon layers of red tape remain.
Soldiers of the Latvian National Defense Service learn field combat skills in a training exercise.
Even Global Wolf’s Bukavs laments that a caravan of political figures has visited their factory but has not rewarded the company with big contracts. Despite Ukraine’s requests for the Mosphera scooters, for instance, they ultimately weren’t included in Latvia’s 2024 package of military aid due to budgetary constraints.
What this suggests is that European governments have learned a partial lesson from Ukraine—that startups can give you an edge in conflict. But experts worry that the continent’s politics means it may still struggle to innovate at speed. Many Western European countries have built up substantial bureaucracies to protect their democracies from corruption or external influences. Authoritarian states aren’t so hamstrung, and they, too, have been watching the war in Ukraine closely. Russian forces are reportedly testing Chinese and Iranian drones at the front line. Even North Korea has its own drone program.
The solution isn’t necessarily to throw out the mechanisms for accountability that are part of democratic society. But the systems built up in the name of good governance have created a kind of fragility, at times leading governments to worry more about the politics of procurement than about preparing for crises, according to Ilves and other policy experts I spoke to.
“Procurement problems grow bigger and bigger when democratic societies lose trust in leadership,” says Ilves, who now advises Ukraine’s Ministry of Digital Transformation on cybersecurity policy and international cooperation. “If a Twitter [troll] starts to go after a defense procurement budget, he can start to shape policy.”
That makes it hard to give financial support to a tech company whose products you don’t need now, for example, but whose capabilities might be useful to have in an emergency—a kind of merchant marine for technology, on constant reserve in case it’s needed. “We can’t push European tech to keep innovating imaginative crisis solutions,” Ilves says. “Business is business. It works for money, not for ideas.”
Even in Riga the war can feel remote, despite the Ukrainian flags flying from windows and above government buildings. Conversations about ordnance delivery and electronic warfare held in airy warehouse conversions can feel academic, even faintly absurd. In one incubator hub I visited in April, a company building a heavy-duty tracked ATV worked next door to an accounting software startup. On the top floor, bean bag chairs were laid out and a karaoke machine had been set up for a party that evening.
A sense of crisis is needed to jolt politicians, companies, and societies into understanding that the front line can come to them, Ilves says: “That’s my take on why I think the Baltics are ahead. Unfortunately not because we are so smart, but because we have this sense of necessity.”
Nevertheless, she says her experience over the past few years suggests there’s cause for hope if, or when, danger breaks through a country’s borders. Before the full-scale invasion, Ukraine’s government wasn’t exactly popular among the domestic business and tech communities. “And yet, they came together and put their brains and resources behind [the war effort],” she says. “I have a feeling that our societies are sometimes better than we think.”
If you’ve ever been through a large US airport, you’re probably at least vaguely aware of Clear. Maybe your interest (or irritation) has been piqued by the pods before the security checkpoints, the attendants in navy blue vests who usher clients to the front of the security line (perhaps just ahead of you), and the sometimes pushy sales pitches to sign up and skip ahead yourself. After all, is there anything people dislike more than waiting in line?
Its position in airports has made Clear Secure, with its roughly $3.75 billion market capitalization, the most visible biometric identity company in the United States. Over the past two decades, Clear has put more than 100 lanes in 58 airports across the US, and in the past decade it has entered 17 sports arenas and stadiums, from San Jose to Denver to Atlanta. Now you can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber.
And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor’s office—or anywhere else that you currently have to pull out a wallet (or, of course, wait in line). The company that has helped millions of vetted members skip airport security lines is now working to expand its “frictionless,” “face-first” line-cutting service from the airport to just about everywhere, online and off, by promising to verify that you are who you say you are and you are where you are supposed to be. In doing so, CEO Caryn Seidman Becker told investors in an earnings call earlier this year, it has designs on being no less than the “identity layer of the internet,” as well as the “universal identity platform” of the physical world.
All you have to do is show up—and show your face.
This is enabled by biometric technology, but Clear is far more than just a biometrics company. As Seidman Becker has told investors, “biometrics aren’t the product … they are a feature.” Or, as she put it in a 2022 podcast interview, Clear is ultimately a platform company “no different than Amazon or Apple”—with dreams, she added, “of making experiences safer and easier, of giving people back their time, of giving people control, of using technology for … frictionless experiences.” (Clear did not make Seidman Becker available for an interview.)
While the company has been building toward this sweeping vision for years, it now seems the time has finally come. A confluence of factors is currently accelerating the adoption of—even necessity for—identity verification technologies: increasingly sophisticated fraud, supercharged by artificial intelligence that is making it harder to distinguish who or what is real; data breaches that seem to occur on a near daily basis; consumers who are more concerned about data privacy and security; and the lingering effects of the pandemic’s push toward “contactless” experiences.
All of this is creating a new urgency around ways to verify information, especially our identities—and, in turn, generating a massive opportunity for Clear. For years, Seidman Becker has been predicting that biometrics will go mainstream.
But now that biometrics have, arguably, gone mainstream, what—and who—bears the cost? Because convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. Some critics warn that not everyone will benefit from a world where identity is routed through Clear—maybe because it’s too expensive, and maybe because biometric technologies are often less effective at identifying people of color, people with disabilities, or those whose gender identity may not match what official documents say.
What’s more, says Kaliya Young, an identity expert who has advised the US government, having a single private company “disintermediating” our biometric data—especially facial data—is the wrong “architecture” to manage identity. “It seems they are trying to create a system like login with Google, but for everything in real life,” Young warns. While the single sign-on option that Google (or Facebook or Apple) provides for websites and apps may make life easy, it also poses greater security and privacy risks by putting both our personal data and the keys to it in the hands of a single profit-driven entity: “We’re basically selling our identity soul to a private company, who’s then going to be the gatekeeper … everywhere one goes.”
Though Clear remains far less well known than Google, more than 27 million people have already helped it become that very gatekeeper—and “one of the largest private repositories of identities on the planet,” as Nicholas Peddy, Clear’s chief technology officer, put it in an interview with MIT Technology Review this summer.
With Clear well on the way to realizing its plan for a frictionless future, it’s time to try to understand both how we got here and what we have (been) signed up for.
A new frontier in identity management
Imagine this: On a Friday morning in the near future, you are rushing to get through your to-do list before a weekend trip to New York.
In the morning, you apply for a new job on LinkedIn. During lunch, assured that recruiters are seeing your professional profile because it’s been verified by Clear, you pop out to Home Depot, confirm your identity with a selfie, and rent a power drill for a quick bathroom repair. Then, in the midafternoon, you drive to your doctor’s office; having already verified your identity—prompted by a text message sent a few days earlier—you confirm your arrival with a selfie at a Clear kiosk. Before you go to bed, you plan your morning trip to the airport and set an alarm—but not too early, because you know that with Clear, you can quickly drop your bags and breeze through security.
Once you’re in New York, you head to Barclays Center, where you’ll be seeing your favorite singer; you skip the long queue out front to hop in the fast-track Clear line. It’s late when the show is over, so you grab an Uber home and barely need to wait for a driver, who feels more comfortable thanks to your verified rider profile.
At no point did you pull out your driver’s license or fill out repetitive paperwork. All that was already on file. Everything was easy; everything was frictionless.
This, at least, is the world that Clear is actively building toward.
Part of Clear’s power, Seidman Becker often says, is that it can wholly replace our wallets: our credit cards, driver’s licenses, health insurance cards, perhaps even building key fobs. But you can’t just suddenly be all the cards you carry. For Clear to link your digital identity to your real-world self, you must first give up a bit of personal data—specifically, your biometric data.
Biometrics refers to the unique physical and behavioral characteristics—faces, fingerprints, irises, voices, and gaits, among others—that identify each of us as individuals. For better or worse, they typically remain stable during our lifetimes.
Relying on biometrics for identification can be convenient, since people are apt to misplace a wallet or forget the answer to a security question. But on the other hand, if someone manages to compromise a database of biometric information, that convenience can become dangerous: We cannot easily change our face or fingerprint to secure our data again, the way we could change a compromised password.
On a practical level, there are generally two ways that biometrics are used to identify individuals. The first, often referred to as “one-to-many” or “one-to-n” matching, compares one person’s biometric identifier with a database full of them. This is sometimes associated with a stereotypical idea of dystopian surveillance in which real-time facial recognition from live video could allow authorities to identify anyone walking down the street. The other, “one-to-one” matching, is the basis for Clear; it compares a biometric identifier (like the face of a live person standing before an airport agent) with a previously recorded biometric template (such as a passport photo) to verify that they match. This is usually done with the individual’s knowledge and consent, and it arguably poses a lower privacy risk. Often, one-to-one matching includes a layer of document verification, like checking that your passport is legitimate and matches a photograph you used to register with the system.
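To make the distinction concrete, here is a minimal sketch in Python of how the two modes differ, assuming an upstream model has already converted each face image into a numeric embedding vector; the function names, similarity measure, and threshold are illustrative assumptions, not a description of Clear’s actual system.

```python
import math

MATCH_THRESHOLD = 0.8  # illustrative; real systems tune this carefully


def cosine_similarity(a, b):
    """Score how alike two face embeddings are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def verify_one_to_one(live_embedding, enrolled_template):
    """One-to-one: does this live face match the single template on file?"""
    return cosine_similarity(live_embedding, enrolled_template) >= MATCH_THRESHOLD


def identify_one_to_many(live_embedding, database):
    """One-to-many: search an entire gallery and return the best match, if any."""
    best_id, best_score = None, 0.0
    for person_id, enrolled_template in database.items():
        score = cosine_similarity(live_embedding, enrolled_template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None
```

The privacy stakes differ because verification consults only the one template a person enrolled, while identification requires searching everyone in the gallery.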
The US Congress urgently saw the need for better identity management following the September 11 terrorist attacks; 18 of the 19 hijackers used fake identity documents to board their flights. In the aftermath, the newly created Transportation Security Administration (TSA) implemented security processes that slowed down air travel significantly. Part of the problem was that “everybody was just treated the same at airports,” recalls the serial media entrepreneur Steven Brill—including, famously, former vice president Al Gore. “It sounded awfully democratic … but in terms of basic risk management and allocation of resources, it just didn’t make any sense.”
Congress agreed, authorizing the TSA to create a program that would allow people who passed background checks to be recognized as trusted travelers and skip some of the scrutiny at the airport.
In 2007, San Francisco’s then mayor, Gavin Newsom, had his irises scanned by Clear at San Francisco International Airport.
DAVID PAUL MORRIS/GETTY
In 2003, Brill teamed up with Ajay Amlani, a technology entrepreneur and former adviser to the Department of Homeland Security, and founded a company called Verified Identity Pass (VIP) to provide biometric identity verification in the TSA’s new program. “The vision,” says Amlani, “was a unified fast lane—similar to a toll lane.”
It appeared to be a win-win solution. The TSA had a private-sector partner for its registered-traveler program; VIP had a revenue stream from user fees; airports got a cut of the fees in exchange for leasing VIP space; and initial members—typically frequent business travelers—were happy to cut down on airport wait times.
By 2005, VIP had launched in its first airport, Orlando International in Florida. Members—initially paying $80—received “Clear cards” that contained a cryptographic representation of their fingerprint, iris scans, and a photo of their face taken at enrollment. They could use those cards at the airport to be escorted to the front of the security lines.
The defense contracting giant Lockheed Martin, which already provided biometric capabilities to the US Department of Defense and the FBI, was responsible for deploying and providing technology for VIP’s system, with additional technical expertise from Oracle and others. This left VIP to “focus on marketing, pricing, branding, customer service, and consumer privacy policies,” as the president of Lockheed Transportation and Security Solutions, Don Antonucci, said at the time.
By 2009, nearly 200,000 people had joined. The company had received $116 million in investments and signed contracts with about 20 airports. It all seemed so promising—except that VIP had already inadvertently revealed the risks inherent in a system built on sensitive personal data.
A lost laptop and a big opportunity
From the beginning, there were concerns about the implications of VIP’s Clear card for privacy, civil liberty, and equity, as well as questions about its effectiveness at actually stopping future terrorist attacks. Advocacy groups like the Electronic Privacy Information Center (EPIC) warned that the biometrics-based system would result in a surveillance infrastructure built on sensitive personal information, but data from the Pew Research Center shows that a majority of the public at the time felt that it was generally necessary to sacrifice some civil liberties in the name of safety.
Then a security lapse sent the whole operation crumbling.
In the summer of 2008, VIP reported that an unencrypted company laptop containing addresses, birthdays, and driver’s license and passport numbers of 33,000 applicants had gone missing from an office at San Francisco International Airport (SFO)—even though TSA’s security protocol required it to encrypt all laptops holding personal data.
NEIL WEBB
The laptop was found about two weeks later and the company said no data was compromised. But it was still a mess for VIP. Months later, investors pushed Brill out, and associated costs led the company to declare bankruptcy and close the following year.
Disgruntled users filed a class action lawsuit against VIP to recoup membership fees and “punitive damages.” Some users were upset they had recently renewed their subscriptions, and others worried about what would happen to their personal information. A judge temporarily prevented the company from selling user data, but the decision didn’t hold.
Seidman Becker and her longtime business partner Ken Cornick, both hedge fund managers, saw an opportunity. In 2010, they bought VIP—and its user data—in a bankruptcy sale for just under $6 million and registered a new company called Alclear. “I was a big believer in biometrics,” Seidman Becker told the tech journalists Kara Swisher and Lauren Goode in 2017. “I wanted to build something that made the world a better place, and Clear was that platform.”
Initially, the new Clear followed closely in the footsteps of its predecessor: Lockheed Martin transferred the members’ information to the new company, which had acquired VIP’s hardware and continued to use Clear cards to hold members’ biometrics.
After the relaunch, Clear also started building partnerships with other companies in the travel industry—including American Express, United Airlines, Alaska Airlines, Delta Air Lines, and Hertz—to bundle its service for free or at a discount. (Clear declined to specify how many of its users have such discounts, but in earnings calls the company has stressed its efforts to reduce the number of members paying reduced rates.)
By 2014, improvements in internet latency and biometric processing speeds allowed Clear to eliminate the cards and migrate to a server-based system—without compromising data security, the company says. Clear emphasizes that it meets industry standards for keeping data secure, with methods including encryption, firewalls, and regular penetration testing by both internal and external teams. The company says it also maintains “locked boxes” around data relating to air travelers.
Still, the reality is that every database of this kind is ultimately a target, and “almost every day there’s a massive breach or hack,” says Chris Gilliard, a privacy and surveillance researcher who was recently named co-director of the Critical Internet Studies Institute. Over the years, even apparently well-protected biometric information has been compromised. Last year, for instance, a data breach at the genetic testing company 23andMe exposed sensitive information—including geographic locations, birth years, family trees, and user-uploaded photos—from nearly 7 million customers.
This is what Young, who helped facilitate the creation of the open-source identity management standards OpenID Connect and OAuth, means when she says that Clear has the wrong “architecture” for managing digital identity; it’s too much of a risk to keep our digital identities in a central database, cryptographically protected or not. She and many other identity and privacy experts believe that the most privacy-protecting way to manage digital identity is to “use credentials, like a mobile driver’s license, stored on people’s devices in digital wallets,” she says. “These digital credentials can have biometrics, but the biometrics in a central database are not being pinged for day-to-day use.”
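As a rough illustration of the alternative Young describes, here is a minimal sketch, under simplified and hypothetical assumptions, of a credential that an issuer signs once and a holder’s phone can later present for purely local verification, so no central identity database is queried at the point of use. It uses the widely available Python `cryptography` package; the payload fields and key handling are placeholders, and real mobile-driver’s-license schemes layer selective disclosure, device binding, and revocation on top of this basic idea.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (say, a motor vehicle agency) signs the credential once, at issuance.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

credential = json.dumps({"name": "A. Traveler", "over_21": True}).encode()
signature = issuer_key.sign(credential)
# The credential and signature then live in a wallet app on the holder's phone.


def verify_locally(credential_bytes, signature_bytes, issuer_pub):
    """Verifier (a venue, a clinic) checks the issuer's signature on the spot."""
    try:
        issuer_pub.verify(signature_bytes, credential_bytes)
        return True
    except InvalidSignature:
        return False


print(verify_locally(credential, signature, issuer_public_key))  # True
```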
But it’s not just data that’s potentially vulnerable. In 2022 and 2023, Clear faced three high-profile security incidents in airports, including one in which a passenger successfully got through the company’s checks using a boarding pass found in the trash. In another, a traveler in Alabama used someone else’s ID to register for Clear and, later, to successfully pass initial security checks; he was discovered only when he tried to bring ammunition through a subsequent checkpoint.
This spurred an investigation by the TSA, which turned up more alarming information: Nearly 50,000 photos used by Clear to enroll customers were flagged as “non-matches” by the company’s facial recognition software. Some photos didn’t even contain full faces, according to Bloomberg. (In a press release after the incident, the company disputed the reporting, describing it as “a single human error—having nothing to do with our technology” and stating that “the images in question were not relied upon during the secure, multi-layered enrollment process.”)
“How do you get to be the one?”
When I spoke to Brill this spring, he told me he’d always envisioned that Clear would expand far beyond the airport. “The idea I had was that once you had a trusted identity, you would potentially be able to use it for a lot of different things,” he said, but “the trick is to get something that is universally accepted. And that’s the battle that Clear and anybody else has to fight, which is: How do you get to be the one?”
Goode Intelligence, a market research firm that focuses on the booming identity space, estimates that by 2029, there will be 1.5 billion digital identity wallets around the world—with use for travel leading the way and generating an estimated $4.6 billion in revenue. Clear is just one player, and certainly not the biggest. ID.me, for instance, provides similar face-based identity verification and has over 130 million users, dwarfing Clear’s roughly 27 million. It’s also already in use by numerous US federal and state agencies, including the IRS.
But as Goode Intelligence CEO Alan Goode tells me, Clear’s early-mover advantage, particularly in the US, “puts it in a good space within North America … [to] be more pervasive”—or to become what Brill called “the one” that is most closely stitched into people’s daily lives.
Clear began growing beyond travel in 2015, when it started offering biometric fast-pass access to what was then AT&T Park in San Francisco. Stadiums across California, Colorado, and Washington, and in major cities in other states, soon followed. Fans can simply download the free Clear app and scan the QR code to bypass normal lines in favor of designated Clear lanes. For a time, Clear also promoted its biometric payment systems at some venues, including two in Seattle, which could include built-in age verification. It even partnered with Budweiser for a “Bud Now” machine that used your fingerprint to verify your identity, age, and payment. (These payment programs, which a Clear representative called “pilots” in an email, have since ended; representatives for the Seattle Mariners and Seahawks did not respond to multiple requests for comment on why.) Clear’s programs for expedited event access have been popular enough to drive greater user growth than its paid airport service, according to numbers provided by the company.
Then came the pandemic, hitting Clear (and the entire travel industry) hard. But the crisis for Clear’s primary business actually accelerated its move into new spaces with “Health Pass,” which allowed organizations to confirm the health status of employees, residents, students, and visitors who sought access to a physical space. Users could upload vaccination cards to the Health Pass section in the Clear mobile app; the program was adopted by nearly 70 partners in 110 unique locations, including NFL stadiums, the Mariners’ T-Mobile Park, and the 9/11 Memorial Museum.
Demand for vaccine verification eventually slowed, and Health Pass shut down in March 2024. But as Jason Sherwin, Clear’s senior director of health-care business development, said in a podcast interview earlier this year, it was the company’s “first foray into health care”—the business line that currently represents its “primary focus across everything we’re doing outside of the airport.” Today, Clear kiosks for patient sign-ins are being piloted at Georgia’s Wellstar Health Systems, in conjunction with one of the largest providers of electronic health records in the United States: Epic (which is unrelated to the privacy nonprofit).
What’s more, Health Pass enabled Clear to expand at a time when the survival of travel-focused businesses wasn’t guaranteed. In November 2020, Clear had roughly 5 million members; today, that number has grown fivefold. The company went public in 2021 and has experienced double-digit revenue growth annually.
These doctor’s office sign-ins, in which the system verifies patient identity via a selfie, rely on what’s called Clear Verified, a platform the company has rolled out over the past several years that allows partners (health-care systems, as well as brick-and-mortar retailers, hotels, and online platforms) to integrate Clear’s identity checks into their own user-verification processes. It again seems like a win-win situation: Clear gets more users and a fee from companies using the platform, while companies confirm customers’ identity and information, and customers, in theory, get that valuable frictionless experience. One high-profile partnership, with LinkedIn, was announced last year: “We know authenticity matters and we want the people, companies and jobs you engage with everyday to be real and trusted,” Oscar Rodriguez, LinkedIn’s head of trust and privacy, said in a press release.
All this comes together to create the foundation for what is Clear’s biggest advantage today: its network. The company’s executives often speak about its “embedded” users across various services and platforms, as well as its “ecosystem,” meaning the venues where it is used. As Peddy explains, the value proposition for Clear today is not necessarily any particular technology or biometric algorithm, but how it all comes together—and can work universally. Clear would be “wherever our consumers need us to be,” he says—it would “sort of just be this ubiquitous thing that everybody has.”
Clear CEO Caryn Seidman Becker (left) rings the bell at the New York Stock Exchange in 2021.
NYSE VIA TWITTER
A prospectus to investors from the company’s IPO makes the pitch simple: “We believe Clear enables our partners to capture not just a greater share of their customers’ wallet, but a greater share of their overall lives.”
The more Clear is able to reach into customers’ lives, the more valuable customer data it can collect. All user interactions and experiences can be tracked, the company’s privacy policy explains. While the policy states that Clear will not sell data and will never share biometric or health information without “express consent,” it also lays out the non-health and non-biometric data that it collects and can use for consumer research and marketing. This includes members’ demographic details, a record of every use of Clear’s various products, and even digital images and videos of the user. Documents obtained by OneZero offer some further detail on what Clear has at least considered doing with customer data: David Gershgorn wrote about a 2015 presentation to representatives from Los Angeles International Airport, titled “Identity Dashboard—Valuable Marketing Data,” which “showed off” what the company had collected, including the number of sports games users had attended and with whom, which credit cards they had, their favorite airlines and top destinations, and how often they flew first class or economy.
Clear representatives emphasized to MIT Technology Review that the company “does not share or sell information without consent,” though they “had nothing to add” in response to a question about whether Clear can or does aggregate data to derive its own marketing insights, a business model popularized by Facebook. “At Clear, privacy and security are job one,” spokesperson Ricardo Quinto wrote in an email. “We are opt-in. We never sell or share our members’ information and utilize a multilayered, best-in-class infosec system that meets the highest standards and compliance requirements.”
Nevertheless, this influx of customer data is not just good for business; it’s also risky for customers. It creates “another attack surface,” Gilliard warns. “This makes us less safe, not more, as a consistent identifier across your entire public and private life is the dream of every hacker, bad actor, and authoritarian.”
A face-based future for some
Today, Clear is in the middle of another major change: replacing its use of iris scans and fingerprints with facial verification in airports—part of “a TSA-required upgrade in identity verification,” a TSA spokesperson wrote in an email to MIT Technology Review.
For a long time, facial recognition technology “for the highest security purposes” was “not ready for prime time,” Seidman Becker told Swisher and Goode back in 2017. It wasn’t operating with “five nines,” she added—that is, “99.999% from a matching and an accuracy perspective.” But today, facial recognition has “significantly improved” and the company has invested “in enhancing image quality through improved capture, focus, and illumination,” according to Quinto.
Clear says switching to facial images in airports will also further decrease friction, enabling travelers to verify their identity so effortlessly it’s “almost like you don’t really break stride,” Peddy says. “You walk up, you scan your face. You walk straight to the TSA.”
The move is part of a broader shift toward facial recognition technology in US travel, bringing the country in line with practices at many international airports. The TSA began expanding facial identification from a few pilot programs this year, while airlines including Delta and United are also introducing face-based boarding, baggage drops, and even lounge access. And the International Air Transport Association, a trade group for the airline industry, is rolling out a “contactless travel” process that will allow passengers to check in, drop off their bags, and board their flights—all without showing either passports or tickets, just their faces.
NEIL WEBB
Privacy experts worry that relying on faces for identity verification is even riskier than other biometric methods. After all, “it’s a lot easier to scan people’s faces passively than it is to scan irises or take fingerprints,” Senator Jeff Merkley of Oregon, an outspoken critic of government surveillance and of the TSA’s plans to employ facial verification at airports, said in an email. The point is that once a database of faces is built, it is potentially far more useful for surveillance purposes than, say, fingerprints. “Everyone who values privacy, freedom, and civil rights should be concerned about the increasing, unchecked use of facial recognition technology by corporations and the federal government,” Merkley wrote.
Even if Clear is not in the business of surveillance today, it could, theoretically, pivot or go bankrupt and (again) sell off its parts, including user data. Jeramie Scott, senior counsel and director of the Project on Surveillance Oversight at EPIC, says that ultimately, the “lack of federal [privacy] regulation” means that we’re just taking the promises of companies like Clear at face value: “Whatever they say about how they implement facial recognition today does not mean that that’s how they’ll be implementing facial recognition tomorrow.”
Making this particular scenario potentially more concerning is that the images stored by this private company are “generally going to be much higher quality” than those collected by scraping the internet—which Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project (STOP), says would make its data far more useful for surveillance than that held by more controversial facial recognition companies like Clearview AI.
Even a far less pessimistic read of Clear’s data collection reveals the challenges of using facial identification systems, which a 2019 report from the National Institute of Standards and Technology showed work less effectively for certain populations, particularly people of African and East Asian descent, women, and elderly and very young people. NIST has also not tested identification accuracy for individuals who are transgender, but Gilliard says he expects the algorithms would fall short.
More recent testing shows that some algorithms have improved, NIST spokesperson Chad Boutin tells MIT Technology Review—though accuracy is still short of the “five nines” that Seidman Becker once said Clear was aiming for. (Quinto, the Clear representative, maintains that Clear’s recent upgrades, combined with the fact that the company’s testing involves “comparing member photos to smaller galleries, rather than the millions used in NIST scenarios,” means its technology “remains accurate and suitable for secure environments like airports.”)
Even a very small error rate “in a system that is deployed hundreds of thousands of times a day” could still leave “a lot of people” at risk of misidentification, explains Hannah Quay-de La Vallee, a technologist at the Center for Democracy & Technology, a nonprofit based in Washington, DC. All this could make Clear’s services inaccessible to some—even if they can afford it, which is less likely given the recent increase in the subscription fee for travelers to $199 a year.
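A quick back-of-the-envelope calculation shows why small error rates still matter at scale; the daily volume and accuracy figures below are hypothetical assumptions for illustration, not numbers reported by Clear, the TSA, or NIST.

```python
# Hypothetical figures chosen only to illustrate scale, not reported data.
daily_verifications = 300_000  # assumed verifications per day across a network

for accuracy in (0.999, 0.9999, 0.99999):  # "three nines" up to "five nines"
    failures_per_day = daily_verifications * (1 - accuracy)
    print(f"{accuracy:.5%} accuracy -> ~{failures_per_day:.0f} failed matches per day")
```

Even at the “five nines” Seidman Becker once cited, these assumed volumes would still produce a few failed matches every day, and if accuracy is lower for some groups, those failures fall disproportionately on them.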
The free Clear Verified platform is already giving rise to access problems in at least one partnership, with LinkedIn. The professional networking site encourages users to verify their identities either with an employer email address or with Clear, which marketing materials say will yield more engagement. But some LinkedIn users have expressed concerns, claiming that even after uploading a selfie, they were unable to verify their identities with Clear if they used a smaller mobile carrier or simply hadn’t had their phone number long enough. As one Reddit user emphasized, “Getting verified is a huge deal when getting a job.” LinkedIn said it does not enable recruiters to filter, rank, or sort by whether a candidate has a verification badge, but also said that verified information does “help people make more informed decisions as they build their network or apply for a job.” Clear said only that it “works with our partners to provide them with the level of identity assurance that they require for their customers” and referred us back to LinkedIn.
An opt-in future that may not really be optional
Maybe what’s worse than waiting in line, or even being cut in front of, is finding yourself stuck in what turns out to be the wrong line—perhaps one that you never want to be in.
That may be how it feels if you don’t use Clear and similar biometric technologies. “When I look at companies stuffing these technologies into vending machines, fast-food restaurants, schools, hospitals, and stadiums, what I see is resignation rather than acceptance—people often don’t have a choice,” says Gilliard, the privacy and surveillance scholar. “The life cycle of these things is that … even when it is ‘optional,’ oftentimes it is difficult to opt out.”
And while the stakes may seem relatively low—Clear is, after all, a voluntary membership program—they will likely grow as the system is deployed more widely. As Seidman Becker said on Clear’s latest earnings call in early November, “The lines between physical and digital interactions continue to blur. A verified identity isn’t just a check mark. It’s the foundation for everything we do in a high-stakes digital world.” Consider a job ad posted by Clear earlier this year, seeking to hire a vice president for business development; it noted that the company has its eye on a number of additional sectors, including financial services, e-commerce, P2P networking, “online trust,” gaming, government, and more.
“Increasingly, companies and the government are making the submission of your biometrics a barrier to participation in society,” Gilliard says.
This will be particularly true at the airport, with the increasing ubiquity of facial recognition across all security checks and boarding processes, and where time-crunched travelers could be particularly vulnerable to Clear’s sales pitch. Airports have even privately expressed concerns about these scenarios to Clear. Correspondence from early 2022 between the company and staff at SFO, released in response to a public records request, reveals that the airport “received a number of complaints” about Clear staff “improperly and deceitfully soliciting approaching passengers in the security checkpoint lanes outside of its premises,” with an airport employee calling it “completely unacceptable” and “aggressive and deceptive behavior.”
Of course, this isn’t to say everyone with a Clear membership was coerced into signing up. Many people love it; the company told MIT Technology Review that it had a nearly 84% retention rate earlier this year. Still, for some experts, it’s worrisome to think that what Clear users are comfortable with ends up setting the ground rules for the rest of us.
“We’re going to normalize potentially a bunch of biometric stuff but not have a sophisticated conversation about where and how we’re normalizing what,” says Young. She worries this will empower “actors who want to move toward a creepy surveillance state, or corporate surveillance capitalism on steroids.”
“Without understanding what we’re building or how or where the guardrails are,” she adds, “I also worry that there could be major public backlash, and then legitimate uses [of biometric technology] are not understood and supported.”
But in the meantime, even superfans are grumbling about an uptick in wait times in the airport’s Clear lines. After all, if everyone decides to cut to the front of the line, that just creates a new long line of line-cutters.
Palmer Luckey has, in some ways, come full circle.
His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.
Now Luckey is redirecting his energy again, to headsets for the military. In September, Anduril announced it would partner with Microsoft on the US Army’s Integrated Visual Augmentation System (IVAS), arguably the military’s largest effort to develop a headset for use on the battlefield. Luckey says the IVAS project is his top priority at Anduril.
“There is going to be a heads-up display on every soldier within a pretty short period of time,” he told MIT Technology Review in an interview last week on his work with the IVAS goggles. “The stuff that we’re building—it’s going to be a big part of that.”
Though few would bet against Luckey’s expertise in the realm of mixed reality, few observers share his optimism for the IVAS program. They view it, thus far, as an avalanche of failures.
IVAS was first approved in 2018 as an effort to build state-of-the-art mixed-reality headsets for soldiers. In March 2021, Microsoft was awarded nearly $22 billion over 10 years to lead the project, but it quickly became mired in delays. Just a year later, a Pentagon audit criticized the program for not properly testing the goggles, saying its choices “could result in wasting up to $21.88 billion in taxpayer funds to field a system that soldiers may not want to use or use as intended.” The first two variants of the goggles—of which the army purchased 10,000 units—gave soldiers nausea, neck pain, and eye strain, according to internal documents obtained by Bloomberg.
Such reports have left IVAS on a short leash with members of the Senate Armed Services Committee, which helps determine how much money should be spent on the program. In a subcommittee meeting in May, Senator Tom Cotton, an Arkansas Republican and ranking member, expressed frustration at IVAS’s slow pace and high costs, and in July the committee suggested a $200 million cut to the program.
Meanwhile, Microsoft has for years been cutting investment in its HoloLens headset—the hardware on which the IVAS program is based—for lack of adoption. In June, Microsoft announced layoffs to its HoloLens teams, suggesting the project is now focused solely on serving the Department of Defense. The company received a serious blow in August, when reports revealed that the Army is considering reopening bidding on the contract, which could oust Microsoft entirely.
This is the catastrophe that Luckey’s stepped into. Anduril’s contribution to the project will be Lattice, an AI-powered system that connects everything from drones to radar jammers to surveil, detect objects, and aid in decision-making. Lattice is increasingly becoming Anduril’s flagship offering. It’s a tool that allows soldiers to receive instantaneous information not only from Anduril’s hardware, but also from radars, vehicles, sensors, and other equipment not made by Anduril. Now it will be built into the IVAS goggles. “It’s not quite a hive mind, but it’s certainly a hive eye” is how Luckey described it to me.
Anvil, seen here held by Luckey in Anduril’s Costa Mesa Headquarters, integrates with the Lattice OS and can navigate autonomously to intercept hostile drones.
PHILIP CHEUNG
Boosted by Lattice, the IVAS program aims to produce a headset that can help soldiers “rapidly identify potential threats and take decisive action” on the battlefield, according to the Army. If designed well, the device will automatically sort through countless pieces of information—drone locations, vehicles, intelligence—and flag the most important ones to the wearer in real time.
Luckey defends the IVAS program’s bumps in the road as exactly what one should expect when developing mixed reality for defense. “None of these problems are anything that you would consider insurmountable,” he says. “It’s just a matter of if it’s going to be this year or a few years from now.” He adds that delaying a product is far better than releasing an inferior one, quoting the Nintendo game designer Shigeru Miyamoto: “A delayed game is delayed only once, but a bad game is bad forever.”
He’s increasingly convinced that the military, not consumers, will be the most important testing ground for mixed-reality hardware: “You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense.
“The gains are so different when we talk about life-or-death scenarios. You don’t have to worry about things like ‘Oh, this is kind of dorky looking,’ or ‘Oh, you know, this is slightly heavier than I would prefer,’” he says. “Because the alternatives of, you know, getting killed or failing your mission are a lot less desirable.”
Those in charge of the IVAS program remain steadfast in the expectation that it will pay off with huge gains for those on the battlefield. “If it works,” James Rainey, commanding general of the Army Futures Command, told the Armed Services Committee in May, “it is a legitimate 10x upgrade to our most important formations.” That’s a big “if,” and one that currently depends on Microsoft’s ability to deliver. Luckey didn’t get specific when I asked if Anduril was positioning itself to bid to become IVAS’s primary contractor should the opportunity arise.
If that happens, US troops may, willingly or not, become the most important test subjects for augmented- and virtual-reality technology as it is developed in the coming decades. The commercial sector doesn’t have thousands of individuals within a single institution who can test hardware in physically and mentally demanding situations and provide their feedback on how to improve it.
That’s one of the ways selling to the defense sector is very different from selling to consumers, Luckey says: “You don’t actually have to convince every single soldier that they personally want to use it. You need to convince the people in charge of him, his commanding officer, and the people in charge of him that this is a thing that is worth wearing.” The iterations that eventually come from IVAS—if it keeps its funding—could signal what’s coming next for the commercial market.
When I asked Luckey if there were lessons from Oculus he had to unlearn when working with the Department of Defense, he said there’s one: worrying about budgets. “I prided myself for years, you know—I’m the guy who’s figured out how to make VR accessible to the masses by being absolutely brutal at every part of the design process, trying to get costs down. That isn’t what the DOD wants,” he says. “They don’t want the cheapest headset in a vacuum. They want to save money, and generally, spending a bit more money on a headset that is more durable or that has better vision—and therefore allows you to complete a mission faster—is definitely worth the extra few hundred dollars.”
I asked if he’s impressed by the progress that’s been made during his eight-year hiatus from mixed reality. Since he left Facebook in 2017, Apple, Magic Leap, Meta, Snap, and a cascade of startups have been racing to move the technology from the fringe to the mainstream. Everything in mixed reality is about trade-offs, he says. Would you like more computing power, or a lighter and more comfortable headset?
With more time at Meta, “I would have made different trade-offs in a way that I think would have led to greater adoption,” he says. “But of course, everyone thinks that.” While he’s impressed with the gains, “having been on the inside, I also feel like things could be moving faster.”
Years after leaving, Luckey remains noticeably annoyed by one specific decision he thinks Meta got wrong: not offloading the battery. Dwelling on technical details is unsurprising from someone who spent his formative years living in a trailer in his parents’ driveway posting in obscure forums and obsessing over goggle prototypes. He pontificated on the benefits of packing the heavy batteries and chips in removable pucks that the user could put in a pocket, rather than in the headset itself. Doing so makes the headset lighter and more comfortable. He says he was pushing Facebook to go that route before he was ousted, but when he left, it abandoned the idea. Apple chose to have an external battery for its Vision Pro, which Luckey praised.
“Anyway,” he told me. “I’m still sore about it eight years later.”
Speaking of soreness, Luckey’s most public professional wound, his ouster from Facebook in 2017, was partially healed last month. The story—involving countless Twitter threads, doxxing, retractions and corrections to news articles, suppressed statements, and a significant segment in Blake Harris’s 2019 book The History of the Future—is difficult to boil down. But here’s the short version: A donation by Luckey to a pro-Trump group called Nimble America in late 2016 led to turmoil within Facebook after it was reported by the Daily Beast. That turmoil grew, especially after Ars Technica wrote that his donation was funding racist memes (the founders of Nimble America were involved in the subreddit r/The_Donald, but the organization itself was focused on creating pro-Trump billboards). Luckey left in March 2017, but Meta has never disclosed why.
This April, Oculus’s former CTO John Carmack posted on X that he regretted not supporting Luckey more. Meta’s CTO, Andrew Bosworth, argued with Carmack, largely siding with Meta. In response, Luckey said, “You publicly told everyone my departure had nothing to do with politics, which is absolutely insane and obviously contradicted by reams of internal communications.” As the argument wore on, Bosworth cautioned that there are “limits on what can be said here,” to which Luckey responded, “I am down to throw it all out there. We can make everything public and let people judge for themselves. Just say the word.”
Six months later, Bosworth apologized to Luckey for the comments. Luckey responded, writing that although he is “infamously good at holding grudges,” neither Bosworth nor current leadership at Meta was involved in the incident.
By now Luckey has spent years mulling over how much of his remaining anger is irrational or misplaced, but one thing is clear: he still holds a grudge, and it’s against the people behind the scenes—PR agents, lawyers, reporters—who, from his perspective, created a situation that forced him to accept and react to an account he found totally flawed. He’s angry about the steps Facebook took to keep him from communicating his side (Luckey has said he wrote versions of a statement at the time but that Facebook threatened further escalation if he posted it).
“What am I actually angry at? Am I angry that my life went in that direction? Absolutely,” he says.
“I have a lot more anger for the people who lied in a way that ruined my entire life and that saw my own company ripped out from under me that I’d spent my entire adult life building,” he says. “I’ve got plenty of anger left, but it’s not at Meta, the corporate entity. It’s not at Zuck. It’s not at Boz. Those are not the people who wronged me.”
While various subcommittees within the Senate and House deliberate how many millions to spend on IVAS each year, what is not in question is that the Pentagon is investing to prepare for a potential conflict in the Pacific between China and Taiwan. The Pentagon requested nearly $10 billion for the Pacific Deterrence Initiative in its latest budget. The prospect of such a conflict is something Luckey considers often.
He told the authors of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War that Anduril’s “entire internal road map” has been organized around the question “How do you deter China? Not just in Taiwan, but Taiwan and beyond?”
At this point, nothing about IVAS is geared specifically toward use in the Pacific as opposed to Ukraine or anywhere else. The design is in early stages. According to transcripts of a Senate Armed Services Subcommittee meeting in May, the military was scheduled to receive the third iteration of IVAS goggles earlier this summer. If they were on schedule, they’re currently in testing. That version is likely to change dramatically before it approaches Luckey’s vision for the future of mixed-reality warfare, in which “you have a little bit of an AI guardian angel on your shoulder, helping you out and doing all the stuff that is easy to miss in the midst of battle.”
Designs for IVAS will have to adapt amid a shifting landscape of global conflict.
PHILIP CHEUNG
But will soldiers ever trust such a “guardian angel”? If the goggles of the future rely on AI-powered software like Lattice to identify threats—say, an enemy drone ahead or an autonomous vehicle racing toward you—Anduril is making the promise that it can sort through the false positives, recognize threats with impeccable accuracy, and surface critical information when it counts most.
Luckey says the real test is how the technology compares with the current abilities of humans. “In a lot of cases, it’s already better,” he says, referring to Lattice, as measured by Anduril’s internal tests (it has not released these, and they have not been assessed by any independent external experts). “People are fallible in ways that machines aren’t necessarily,” he adds.
Still, Luckey admits he does worry about the threats Lattice will miss.
“One of the things that really worries me is there’s going to be people who die because Lattice misunderstood something, or missed a threat to a soldier that it should have seen,” he says. “At the same time, I can recognize that it’s still doing far better than people are doing today.”
When Lattice makes a significant mistake, it’s unlikely the public will know. Asked about the balance between transparency and national security in disclosing these errors, Luckey said that Anduril’s customer, the Pentagon, will receive complete information about what went wrong. That’s in line with the Pentagon’s policies on responsible AI adoption, which require that AI-driven systems be “developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.”
However, the policies promise nothing about disclosure to the public, a fact that’s led some progressive think tanks, like the Brennan Center for Justice, to call on federal agencies to modernize public transparency efforts for the age of AI.
“It’s easy to say, Well, shouldn’t you be honest about this failure of your system to detect something?” Luckey says, regarding Anduril’s obligations. “Well, what if the failure was because the Chinese figured out a hole in the system and leveraged that to speed past our defenses of some military base? I’d say there’s not very much public good served in saying, ‘Attention, everyone—there is a way to get past all of the security on every US military base around the world.’ I would say that transparency would be the worst thing you could do.”