Inside a romance scam compound—and how people get tricked into being there

Heading north in the dark, the only way Gavesh could try to track his progress through the Thai countryside was by watching the road signs zip by. The Jeep’s three occupants—Gavesh, a driver, and a young Chinese woman—had no languages in common, so they drove for hours in nervous silence as they wove their way out of Bangkok and toward Mae Sot, a city on Thailand’s western border with Myanmar.

When they reached the city, the driver pulled off the road toward a small hotel, where another car was waiting. “I had some suspicions—like, why are we changing vehicles?” Gavesh remembers. “But it happened so fast.”

They left the highway and drove on until, in total darkness, they parked at what looked like a private house. “We stopped the vehicle. There were people gathered. Maybe 10 of them. They took the luggage and they asked us to come,” Gavesh says. “One was going in front, there was another one behind, and everyone said: ‘Go, go, go.’” 

Gavesh and the Chinese woman were marched through the pitch-black fields by flashlight to a riverside where a boat was moored. By then, it was far too late to back out.

Gavesh’s journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.

Instead, he found himself trafficked into a business commonly known as “pig butchering”—a form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar. 

We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized. By the same token, it is Big Tech that may hold the key to breaking up the scam syndicates—if only these companies can be persuaded or compelled to act.


We’re identifying Gavesh using a pseudonym to protect his identity. He is from a country in South Asia, one he asked us not to name. He hasn’t shared his story much, and he still hasn’t told his family. He worries about how they’d handle it. 

Until the pandemic, he had held down a job in the tourism industry. But lockdowns had gutted the sector, and two years later he was working as a day laborer to support himself and his father and sister. “I was fed up with my life,” he says. “I was trying so hard to find a way to get out.”

When he saw the Facebook post in mid-2022, it seemed like a godsend. A company in Thailand was looking for English-speaking customer service and data entry specialists. The monthly salary was $1,500—far more than he could earn at home—with meals, travel costs, a visa, and accommodation included. “I knew if I got this job, my life would turn around. I would be able to give my family a good life,” Gavesh says.

What came next was life-changing, but not in the way Gavesh had hoped. The advert was a fraud—and a classic tactic syndicates use to force workers like Gavesh into an economy that operates as something like a dark mirror of the global outsourcing industry. 

The true scale of this type of fraud is hard to estimate, but the United Nations reported in 2023 that hundreds of thousands of people had been trafficked to work as online scammers in Southeast Asia. One 2024 study, from the University of Texas, estimates that the criminal syndicates that run these businesses have stolen at least $75 billion since 2020. 

These schemes have been going on for more than two decades, but they’ve started to capture global attention only recently, as the syndicates running them increasingly shift from Chinese targets toward the West. And even as investigators, international organizations, and journalists gradually pull back the curtain on the brutal conditions inside scamming compounds and document their vast scale, what is far less exposed is the pivotal role platforms owned by Big Tech play throughout the industry—from initially coercing individuals to become scammers to, finally, duping scam targets out of their life savings. 

As losses mount, governments and law enforcement agencies have looked for ways to disrupt the syndicates, which have become adept at using ungoverned spaces in lawless borderlands and partnering with corrupt regimes. But on the whole, the syndicates have managed to stay a step ahead of law enforcement—in part by relying on services from the world’s tech giants. Apple iPhones are their preferred scamming tools. Meta-owned Facebook and WhatsApp are used to recruit people into forced labor, as is Telegram. Social media and messaging platforms, including Facebook, Instagram, WhatsApp, WeChat, and X, provide spaces for scammers to find and lure targets. So do dating apps, including Tinder. Some of the scam compounds have their own Starlink terminals. And cryptocurrencies like tether and global crypto platforms like Binance have allowed the criminal operations to move money with little or no oversight.

Scam workers sit inside Myanmar’s KK Park, a notorious fraud hub near the border with Thailand, following a recent crackdown by law enforcement.
REUTERS

“Private-sector corporations are, unfortunately, inadvertently enabling this criminal industry,” says Andrew Wasuwongse, the Thailand country director at the anti-trafficking nonprofit International Justice Mission (IJM). “The private sector holds significant tools and responsibility to disrupt and prevent its further growth.”

Yet while the tech sector has, slowly, begun to roll out anti-scam tools and policies, experts in human trafficking, platform integrity, and cybercrime tell us that these measures largely focus on the downstream problem: the losses suffered by the victims of the scams. That approach overlooks the other set of victims, often from lower-income countries, at the far end of a fraud “supply chain” that is built on human misery—and on Big Tech. Meanwhile, the scams continue on a mass scale.

Tech companies could certainly be doing more to crack down, the experts say. Even relatively small interventions, they argue, could start to erode the business model of the scam syndicates; with enough of these, the whole business could start to founder. 

“The trick is: How do you make it unprofitable?” says Eric Davis, a platform integrity expert and senior vice president of special projects at the Institute for Security and Technology (IST), a think tank in California. “How do you create enough friction?”

That question is only becoming more urgent as many tech companies pull back on efforts to moderate their platforms, artificial intelligence supercharges scam operations, and the Trump administration signals broad support for deregulation of the tech sector while withdrawing support from organizations that study the scams and support the victims. All these trends may further embolden the syndicates. And even as the human costs keep building, global governments exert ineffectual pressure—if any at all—on the tech sector to turn its vast financial and technical resources against a criminal economy that has thrived in the spaces Silicon Valley built. 


Capturing a vulnerable workforce

The roots of “pig butchering” scams reach back to the offshore gambling industry that emerged from China in the early 2000s. Online casinos had become hugely popular in China, but the government cracked down, forcing the operators to relocate to Cambodia, the Philippines, Laos, and Myanmar. There, they could continue to target Chinese gamblers with relative impunity. Over time, the casinos began to use social media to entice people back home, deploying scam-like tactics that frequently centered on attractive and even nude dealers.


“Often the romance scam was a part of that—building romantic relationships with people that you eventually would aim to hook,” says Jason Tower, Myanmar country director at the United States Institute of Peace (USIP), a research and diplomacy organization funded by the US government, who researches the cyber scam industry. (USIP’s leadership was recently targeted by the Trump administration and Elon Musk’s Department of Government Efficiency task force, leaving the organization’s future uncertain; its website, which previously housed its research, is also currently offline.)

By the late 2010s, many of the casinos were big, professional operations. Gradually, says Tower, the business model turned more sinister, with a tactic called sha zhu pan in Chinese emerging as a core strategy. Scamming operatives work to “fatten up” or cultivate a target by building a relationship before going in for the “slaughter”—persuading them to invest in a supposedly once-in-a-lifetime scheme and then absconding with the money. “That actually ended up being much, much more lucrative than online gambling,” Tower says. (The international law enforcement organization Interpol no longer uses the graphic term “pig butchering,” citing concerns that it dehumanizes and stigmatizes victims.) 

Like other online industries, the romance scamming business was supercharged by the pandemic. There were simply more isolated people to defraud, and more people out of work who might be persuaded to try scamming others—or who were vulnerable to being trafficked into the industry.

Initially, most of the workers carrying out the frauds were Chinese, as were the fraud victims. But after the government in Beijing tightened travel restrictions, making it hard to recruit Chinese laborers, the syndicates went global. They started targeting more Western markets and turning, Tower says, to “much more malign types of approaches to tricking people into scam centers.” 


Getting recruited

Gavesh was scrolling through Facebook when he saw the ad. He sent his résumé to a Telegram contact number. A human resources representative replied and had him demonstrate his English and typing skills over video. It all felt very professional. “I didn’t have any reason to suspect,” he says.

The doubts didn’t really start until after he reached Bangkok’s Suvarnabhumi Airport. After being met at arrivals by a man who spoke no English, he was left to wait. As time ticked by, it began to occur to Gavesh that he was alone, with no money, no return ticket, and no working SIM card. Finally, the Jeep arrived to pick him up.

Hours later, exhausted, he was on a boat crossing the Moei River from Thailand into Myanmar. On the far bank, a group was waiting. One man was in military uniform and carried a gun. “In my country, if we see an army guy when we are in trouble, we feel safe,” Gavesh says. “So my initial thoughts were: Okay, there’s nothing to be worried about.”

They hiked a kilometer across a sodden paddy field and emerged at the other side caked in mud. There a van was parked, and the driver took them to what he called, in broken English, “the office.” They arrived at the gate of a huge compound, surrounded by high walls topped with barbed wire. 

While some people are drawn into online scamming directly by friends and relatives, Facebook is, according to IJM’s Wasuwongse, the most common entry point for people recruited on social media. 

Meta has known for years that its platforms host this kind of content. Back in 2019, the BBC exposed “slave markets” that were running on Instagram; in 2021, the Wall Street Journal reported, drawing on documents leaked by a whistleblower, that Meta had long struggled to rein in the problem but took meaningful action only after Apple threatened to pull Instagram from its app store. 

Today, years on, ads like the one that Gavesh responded to are still easy to find on Facebook if you know what to look for.

Examples of fraudulent Facebook ads, shared by International Justice Mission.

They are typically posted in job seekers’ groups and usually seem to be advertising legitimate jobs in areas like customer service. They offer attractive wages, especially for people with language skills—usually English or Chinese. 

The traffickers tend to finish the recruitment process on encrypted or private messaging apps. In our research, many experts said that Telegram, which is notorious for hosting terrorist content, child sexual abuse material, and other communication related to criminal activity, was particularly problematic. Many spoke with a combination of anger and resignation about its apparent lack of interest in working with them to address the problem; Mina Chiang, founder of Humanity Research Consultancy, an anti-trafficking organization, accuses the app of being “very much complicit” in human trafficking and “proactively facilitating” these scams. (Telegram did not respond to a request for comment.)

But while Telegram users have the option of encrypting their messages end to end, making them almost impossible to monitor, social media companies are of course able to access users’ posts. And it’s here, at the beginning of the romance scam supply chain, where Big Tech could arguably make its most consequential intervention. 

Social media is monitored by a combination of human moderators and AI systems, which help flag users and content—ads, posts, pages—that break the law or violate the companies’ own policies. Dangerous content is easiest to police when it follows predictable patterns or is posted by users acting in distinctive and suspicious ways.


Anti-trafficking experts say the scam advertising tends to follow formulaic templates and use common language, and that they routinely report the ads to Meta and point out the markers they have identified. Their hope is that this information will be fed into the data sets that train the content moderation models. 
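The kind of pattern-matching experts describe can be illustrated with a toy screening filter. To be clear, the marker phrases and weights below are invented for illustration; real moderation systems combine many more signals, including account history and machine-learned classifiers.

```python
# Toy illustration of rule-based ad screening. The marker phrases and
# scoring weights are hypothetical, not any platform's actual rules.
SUSPICIOUS_MARKERS = {
    "no experience required": 1,
    "visa and accommodation included": 2,
    "customer service data entry": 1,
    "apply via telegram": 3,
}

def score_ad(text: str) -> int:
    """Sum the weights of every marker phrase found in the ad text."""
    lowered = text.lower()
    return sum(w for phrase, w in SUSPICIOUS_MARKERS.items() if phrase in lowered)

def should_review(text: str, threshold: int = 3) -> bool:
    """Flag the ad for human review once its score crosses the threshold."""
    return score_ad(text) >= threshold
```

In practice, flagged ads would feed a review queue rather than trigger automatic removal, since legitimate overseas job postings can use similar language.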

While individual ads may be taken down, even in big waves—last November, Meta said it had purged 2 million accounts connected to scamming syndicates over the previous year—experts say that Facebook still continues to be used in recruiting. And new ads keep appearing. 

(In response to a request for comment, a Meta spokesperson shared links to policies about bans on content or advertisements that facilitate human trafficking, as well as company blog posts telling users how to protect themselves from romance scams and sharing details about the company’s efforts to disrupt fraud on its platforms, one stating that it is “constantly rolling out new product features to help protect people on [its] apps from known scam tactics at scale.” The spokesperson also said that WhatsApp has spam detection technology, and millions of accounts are banned per month.)

Anti-trafficking experts we spoke with say that as recently as last fall, Meta was engaging with them and had told them it was ramping up its capabilities. But Chiang says there still isn’t enough urgency from tech companies. “There’s a question about speed. They might be able to say, ‘That’s the goal for the next two years.’ No. But that’s not fast enough. We need it now,” she says. “They have financial resources. You can hire the most talented coding engineers in the world. Why can’t you just find people who understand the issue properly?”

Part of the answer comes down to money, according to experts we spoke with. Scaling up content moderation and other processes that could cause users to be kicked off a platform requires not only technological staff but also legal and policy experts—which not everyone sees as worth the cost. 

“The vast majority of these companies are doing the minimum or less,” says Tower of USIP. “If not properly incentivized, either through regulatory action or through exposure by media or other forms of pressure … often, these companies will underinvest in keeping their platforms safe.”


Getting set up

Gavesh’s new “office” turned out to be one of the most infamous scamming hubs in Southeast Asia: KK Park in Myanmar’s Myawaddy region. Satellite imagery shows it as a densely packed cluster of buildings, surrounded by fields. Most of it has been built since late 2019. 

Inside, it runs like a hybrid of a company campus and a prison. 

When Gavesh arrived, he handed over his phone and passport and was assigned to a dormitory and an employer. He was allowed his own phone back only for short periods, and his calls were monitored. Security was tight. He had to pass through airport-style metal detectors when he went in or out of the office. Black-uniformed personnel patrolled the buildings, while armed men in combat fatigues watched the perimeter fences from guard posts. 

On his first full day, he was put in front of a computer with just four documents on it, which he had to read over and over—guides on how to approach strangers. On his second day, he learned to build fake profiles on social media and dating apps. The trick was to find real people on Instagram or Facebook who were physically attractive, posted often, and appeared to be wealthy and living “a luxurious life,” he says, and use their photos to build a new account: “There are so many Instagram models that pretend they have a lot of money.”

After Gavesh was trafficked into Myanmar, he was taken to KK Park. Most of the compound has been built since late 2019.
LUKE DUGGLEBY/REDUX

Next, he was given a batch of iPhone 8s—most people on his team used between eight and 10 devices each—loaded with local SIM cards and apps that spoofed their location so that they appeared to be in the US. Using male and female aliases, he set up dozens of accounts on Facebook, WhatsApp, Telegram, Instagram, and X and profiles on several dating platforms, though he can’t remember exactly which ones. 

Different scamming operations teach different techniques for finding and reaching out to potential victims, several people who worked in the compounds tell us. Some people used direct approaches on dating apps, Facebook, Instagram, or—for those targeting Chinese victims—WeChat. One worker from Myanmar sent out mass messages on WhatsApp, pretending to have accidentally messaged a wrong number, in the hope of striking up a conversation. (Tencent, which owns WeChat, declined to comment.)

Some scamming workers we spoke to were told to target white, middle-aged or older men in Western countries who seemed to be well off. Gavesh says he would pretend to be white men and women, using information found from Google to add verisimilitude to his claims of living in, say, Miami Beach. He would chat with the targets, trying to figure out from their jobs, spending habits, and ambitions whether they’d be worth investing time in.

One South African woman, trafficked to Myanmar in 2022, says she was given a script and told to pose as an Asian woman living in Chicago. She was instructed to study her assigned city and learn quotidian details about life there. “They kept on punishing people all the time for not knowing or for forgetting that they’re staying in Chicago,” she says, “or for forgetting what’s Starbucks or what’s [a] latte.” 

Fake users have, of course, been a problem on social media platforms and dating sites for years. Some platforms, such as X, allow practically anyone to create accounts and even to have them verified for a fee. Others, including Facebook, have periodically conducted sweeps to get rid of fake accounts engaged in what Meta calls “coordinated inauthentic behavior.” (X did not respond to requests for comment.)

But scam workers tell us they were advised on simple ways to circumvent detection mechanisms on social media. They were given basic training in how to avoid suspicious behavior such as adding too many contacts too quickly, which might trigger the company to review whether someone’s profile is authentic. The South African woman says she was shown how to manipulate the dates on a Facebook account “to seem as if you opened the account in 2019 or whatever,” making it easier to add friends. (Meta’s spam filters—meant to reduce the spread of unwanted content—include limits on friend requests and bulk messaging.)


Dating apps, whose users generally hope to meet other users in real life, have a particular need to make sure that people are who they say they are. But Match Group, the parent company of Tinder, ended its partnership with a company doing background checks in 2023. It now encourages users to verify their profile with a selfie and further ID checks, though insiders say these systems are often rudimentary. “They just check a box and [do] what is legally required or what will make the media get off of [their] case,” says one tech executive who has worked with multiple dating apps on safety systems, speaking on the condition of anonymity because they were not permitted to speak about their work with certain companies. 

Fangzhou Wang, an assistant professor at the University of Texas at Arlington who studies romance scams, ran a test: She set up a Tinder profile with a picture of a dog and a bio that read, “I am a dog.” It passed through the platform’s verification system without a hitch. “They are not providing enough security measures to filter out fraudulent profiles,” Wang says. “Everybody can create anything.”

Like recruitment ads, the scam profiles tend to follow patterns that should raise red flags. They use photos copied from existing users or made by artificial intelligence, and the accounts are sometimes set up using phone numbers generated by voice-over-internet-protocol services. Then there’s the scammers’ behavior: They swipe too fast, or spend too much time logged in. “A normal human doesn’t spend … eight hours on a dating app a day,” the tech executive says. 

What’s more, scammers use the same language over and over again as they reach out to potential targets. “The majority of them are using predesigned scripts,” says Wang. 

It would be fairly easy for platforms to detect these signs and either stop accounts from being created or make the users go through further checks, experts tell us. Signals of some of these behaviors “can potentially be embedded into a type of machine-learning algorithm,” Wang says. She approached Tinder a few years ago with her research into the language that scammers use on the platforms, and offered to help build data sets for its moderation models. She says the company didn’t reply. 
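The behavioral signals researchers point to lend themselves to a simple risk score. The sketch below is a minimal illustration, assuming made-up thresholds and weights; a production system would learn these from labeled data rather than hard-code them.

```python
from dataclasses import dataclass

# Hypothetical behavioral signals of the kind researchers describe.
# All thresholds and weights are illustrative, not any platform's real values.
@dataclass
class SessionStats:
    hours_online_per_day: float
    swipes_per_minute: float
    voip_phone_number: bool     # account registered with a VoIP-generated number
    script_similarity: float    # 0-1 overlap with known scam scripts

def risk_score(s: SessionStats) -> float:
    score = 0.0
    if s.hours_online_per_day > 6:   # "a normal human doesn't spend eight hours a day"
        score += 1.0
    if s.swipes_per_minute > 30:     # swiping too fast to be reading profiles
        score += 1.0
    if s.voip_phone_number:
        score += 0.5
    score += 2.0 * s.script_similarity  # reuse of predesigned scripts
    return score

def needs_extra_verification(s: SessionStats, threshold: float = 2.0) -> bool:
    """Route high-risk accounts to additional identity checks."""
    return risk_score(s) >= threshold
```

Routing flagged accounts to extra verification, rather than banning outright, limits the friction imposed on legitimate users who happen to trip one signal.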

(In a statement, Yoel Roth, vice president of trust and safety at Match Group, said that the company invests in “proactive tools, advanced detection systems and user education to help prevent harm.” He wrote, “We use proprietary AI-powered tools to help identify scammer messaging, and unlike many platforms, we moderate messages, which allows us to detect suspicious patterns early and act quickly,” adding that the company has recently worked with Reality Defender, a provider of deepfake detection tools, to strengthen its ability to detect AI-generated content. A company spokesperson reported having no record of Wang’s outreach but said that the company “welcome[s] collaboration and [is] always open to reviewing research that can help strengthen user safety.”)

A recent investigation published in The Markup found that Match Group has long possessed the tools and resources to track sex offenders and other bad actors but has resisted efforts to roll out safety protocols for fear they might slow growth. 

This tension, between the desire to keep increasing the number of users and the need to ensure that these users and their online activity are authentic, is often behind safety issues on platforms. While no platform wants to be a haven for fraudsters, identity verification creates friction for users, which stops real people as well as impostors from signing up. And again, cracking down on platform violations costs money.

According to Josh Kim, an economist who works in Big Tech, it would be costly for tech companies to build out the legal, policy, and operational teams for content moderation tools that could get users kicked off a platform—and the expense is one companies may find hard to justify in the current business climate. “The shift toward profitability means that you have to be very selective in … where you invest the resources that you have,” he says.

“My intuition here is that unless there are fines or pressure from governments or regulatory agencies or the public themselves,” he adds, “the current atmosphere in the tech ecosystem is to focus on building a product that is profitable and grows fast, and things that don’t contribute to those two points are probably being deprioritized.”


Getting online—and staying in line

At work, Gavesh wore a blue tag, marking him as belonging to the lowest rank of workers. “On top of us are the ones who are wearing the yellow tags—they call themselves HR or translators, or office guys,” he says. “Red tags are team leaders, managers … And then moving from that, they have black and ash tags. Those are the ones running the office.” Most of the latter were Chinese, Gavesh says, as were the really “big bosses,” who didn’t wear tags at all.

Within this hierarchy operated a system of incentives and punishments. Workers who followed orders and proved successful at scamming could rise through the ranks to training or supervisory positions, and gain access to perks like restaurants and nightclubs. Those who failed to meet the targets or broke the rules faced violence and humiliation. 

Gavesh says he was once beaten because he broke an unwritten rule that it was forbidden to cross your legs at work. Yawning was banned, and bathroom breaks were limited to two minutes at a time. 

rows of workers lit by their screens
KATHERINE LAM

Beatings were usually conducted in the open, though the most severe punishments at Gavesh’s company happened in a room called the “water jail.” One day a coworker was there alongside the others, “and the next day he was not,” Gavesh recalls. When the colleague was brought back to the office, he had been so badly beaten he couldn’t walk or speak. “They took him to the front, and they said: ‘If you do not listen to us, this is what will happen to you.’”

Gavesh was desperate to leave but felt there was no chance of escaping. The armed guards seemed ready to shoot, and there were rumors in the compound that some people who jumped the fence had been found drowned in the river. 

This kind of physical and psychological abuse is routine across the industry. Gavesh and others we spoke to describe working 12 hours or more a day, without days off. They faced strict quotas for the number of scam targets they had to have on the hook. If they failed to reach them, they were punished. The UN has documented cases of torture, arbitrary detention, and sexual violence in the compounds. We heard accounts of people made to perform calisthenics and being thrashed on the backside in front of other workers. 

Even if someone could escape, there is often no authority to appeal to on the outside. KK Park and other scam factories in Myanmar are situated in a geopolitical gray zone—borderlands where criminal enterprises have based themselves for decades, trafficking in narcotics and other contraband. Armed groups, some of them operating under the command of the military, are credibly believed to profit directly from the trade in people and contraband in these areas, in some cases facing international sanctions as a result. Illicit industries in Myanmar have only expanded since a military coup in 2021. By August 2023, according to UN estimates, more than 120,000 people were being held in the country for the purposes of forced scamming, making it the largest hub for the frauds in Southeast Asia. 


In at least some attempt to get a handle on this lawlessness, Thailand tried to cut off internet services for some compounds across its western border starting last May. Syndicates adapted by running fiber-optic cables across the river. When some of those were discovered, they were severed by Thai authorities. Thailand again ramped up its crackdowns on the industry earlier this year, with tactics that included cutting off internet, gas, and electricity to known scamming enclaves, following the trafficking of a Chinese celebrity through Thailand into Myanmar. 

Still, the scammers keep adapting—again, using Western technology. “We’ve started to see and hear of Starlink systems being used by these compounds,” says Eric Heintz, a global analyst at IJM.

While the military junta has criminalized the use of unauthorized satellite internet service, intercepted shipments and raids on scamming centers over the past year indicate that syndicates smuggle in equipment. The crackdowns seem to have had a limited impact—a Wired investigation published in February found that scamming networks appeared to be “widely using” Starlink in Myanmar. The journalist, using mobile-phone connection data collected by an online advertising industry tool, identified eight known scam compounds on the Myanmar-Thailand border where hundreds of phones had used Starlink more than 40,000 times since November 2024. He also identified photos that appeared to show dozens of Starlink satellite dishes on a scamming compound rooftop.

Starlink could provide another prime opportunity for systematic efforts to interrupt the scams, particularly since it requires a subscription and is able to geofence its services. “I could give you coordinates of where some of these [scamming operations] are, like IP addresses that are connecting to them,” Heintz says. “That should make a huge paper trail.” 

Starlink’s parent company, SpaceX, has previously limited access in areas of Ukraine under Russian occupation, after all. Its policies also state that SpaceX may terminate Starlink services to users who participate in “fraudulent” activities. (SpaceX did not respond to a request for comment.)

Knowing the locations of scam compounds could also allow Apple to step in: Workers rely on iPhones to make contact with victims, and these have to be associated with an Apple ID, even if the workers use apps to spoof their addresses. 

As Heintz puts it, “[If] you have an iCloud account with five phones, and you know that those phones’ GPS antenna locates those phones inside a known scam compound, then all of those phones should be bricked. The account should be locked.” 

(Apple did not provide a response to a request for comment.)

“This isn’t like the other trafficking cases that we’ve worked on, where we’re trying to find a boat in the middle of the ocean,” Heintz adds. “These are city-size compounds. We all know where they are, and we’ve watched them being built via satellite imagery. We should be able to do something location-based to take these accounts offline.”
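The location-based enforcement Heintz describes boils down to a geofence check: does a device's reported position fall inside a known compound's perimeter? A minimal sketch, with invented coordinates standing in for perimeters that would in reality be mapped from satellite imagery:

```python
# Toy geofence check. The compound name and bounding box below are made up;
# real enforcement would use mapped perimeters and many corroborating signals
# before locking an account or bricking a device.
KNOWN_COMPOUNDS = {
    "example_compound": (16.60, 16.64, 98.50, 98.55),  # lat_min, lat_max, lon_min, lon_max
}

def inside_known_compound(lat: float, lon: float) -> bool:
    """Return True if the GPS fix falls inside any known compound's bounding box."""
    return any(
        lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        for lat_min, lat_max, lon_min, lon_max in KNOWN_COMPOUNDS.values()
    )
```

A bounding box is the crudest possible perimeter; a real system would use polygons and require repeated fixes over time to avoid punishing a device that merely passed nearby.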


Getting paid

Once Gavesh developed a relationship on social media or a dating site, he was supposed to move the conversation to WhatsApp. That platform is end-to-end encrypted, meaning even Meta can’t read the content of messages—although it should be possible for the company to spot a user’s unusual patterns of behavior, like opening large numbers of WhatsApp accounts or sending numerous messages in a short span of time.

“If you have an account that is suddenly adding people in large quantities all over the world, should you immediately flag it and freeze that account or require that that individual verify his or her information?” USIP’s Tower says.

After cultivating targets’ trust, scammers would inevitably shift the conversation to the subject of money. Having made themselves out to be living a life of luxury, they would offer a chance to share in the secrets of their wealth. Gavesh was taught to make the approach as if it were an extension of an existing intimacy. “I would not show this platform to anyone else,” he says he was supposed to say. “But since I feel like you are my life partner, I feel like you are my future.”

Lower-level workers like Gavesh were only expected to get scamming targets on the hook; then they’d pass off the relationship to a manager. From there, there is some variation in the approach, but the target is sometimes encouraged to set up an account with a mainstream crypto exchange and buy some tokens. Then the scammer sends the victim—or “customer,” as some workers say they called these targets—a link to a convincing, but fake, crypto investment platform.

After the target invests an initial amount of money, the scammer typically sends fake investment return charts that seem to show the value of that stake rising and rising. To demonstrate good faith, the scammer sends a few hundred dollars back to the victim’s crypto wallet, all the while working to convince the mark to keep investing. Then, once the customer is all in, the scammer goes in for the kill, using every means possible to take more money. “We [would] pull out bigger amounts from the customers and squeeze them out of their possessions,” one worker tells us.  

The design of cryptocurrency allows some degree of anonymity, but with enough time, persistence, and luck, it’s possible to figure out where tokens are flowing. It’s also possible, though even more difficult, to discover who owns the crypto wallets.

In early 2024, University of Texas researchers John M. Griffin and Kevin Mei published a paper that followed money from crypto wallets associated with scammers. They tracked hundreds of thousands of transactions, collectively worth billions of dollars—money that was transferred in and out of mainstream exchanges, including Binance, Coinbase, and Crypto.com. 

Scam workers spend time gaining the trust of their targets, often by deploying fraudulent personas and developing romantic relationships.
REUTERS/CARLOS BARRIA

Some scam syndicates would move crypto off these big exchanges, launder it through anonymous platforms known as mixers (which can be used to obscure crypto transactions), and then come back to the exchanges to cash out into fiat currency such as dollars.

Griffin and Mei were able to identify deposit addresses on Binance and smaller platforms, including Hong Kong–based Huobi and Seychelles-based OKX, that were collectively receiving billions of dollars from suspected scams. These addresses were being used over and over again to send and receive money, “suggesting limited monitoring by crypto exchanges,” the authors wrote.

(We were unable to reach OKX for comment; Coinbase and Huobi did not respond to requests for comment. A Binance spokesperson said that the company disputes the findings of the University of Texas study, alleging that they are “misleading at best and, at worst, wildly inaccurate.” The spokesperson also said that the company has extensive know-your-customer requirements, uses internal and third-party tools to spot illicit activity, freezes funds, and works with law enforcement to help reclaim stolen assets, claiming to have “proactively prevented $4.2 billion in potential losses for 2.8 million users from scams and frauds” and “recovered $88 million in stolen or misplaced funds” last year. A Crypto.com spokesperson said that the company is “committed to security, compliance and consumer protection” and that it uses “robust” transaction monitoring and fraud detection controls, “rigorously investigates accounts flagged for potential fraudulent activity or victimization,” and has internal blacklisting processes for wallet addresses known to be linked to scams.)

But while tracking illicit payments through the crypto ecosystem is possible, it’s “messy” and “complicated” to actually pin down who owns a scam wallet, according to Griffin Hotchkiss, a writer and use-case researcher at the Ethereum Foundation who has worked on crypto projects in Myanmar and who spoke in his personal capacity. Investigators have to build models that connect users to accounts by the flows of money going through them, which involves a degree of “guesswork” and “red string and sticky notes on the board trying to trace the flow of funds,” he says.

There are, however, certain actors within the crypto ecosystem who should have a good vantage point for observing how money moves through it. The most significant of these is Tether Holdings, a company formerly based in the British Virgin Islands (it has since relocated to El Salvador) that issues tether, or USDT, a so-called stablecoin whose value is nominally pegged to the US dollar. Tether is widely used by crypto traders to park their money in dollar-denominated assets without having to convert cryptocurrencies into fiat currency. It is also widely used in criminal activity.


There is more than $140 billion worth of USDT in circulation; in 2023, TRM Labs, a firm that traces crypto fraud, estimated that $19.3 billion worth of tether transactions was associated with illicit activity. In January 2024, the UN’s Office on Drugs and Crime said that tether was a leading means of exchange for fraudsters and money launderers operating in Southeast Asia. In October, US federal investigators reportedly opened an investigation into Tether over possible sanctions violations and complicity in money laundering (though at the time, Tether Holdings’ CEO said there was “no indication” the company was under investigation).

Tech experts tell us that USDT is ever-present in the scam business, used to move money and as the main medium of exchange on anonymous marketplaces such as Cambodia-based Huione Guarantee, which has been accused of allowing romance scammers to launder the proceeds of their crimes. (Cambodia revoked the banking license of Huione Pay in March of this year. Huione, which did not respond to a request for comment, has previously denied engaging in criminal activity.)

While much of the crypto ecosystem is decentralized, USDT “does have a central authority” that could intervene, Hotchkiss says. Tether’s code has functions that allow the company to blacklist users, freeze accounts, and even destroy tokens, he adds. (Tether Holdings did not respond to requests for comment.)

In practice, Hotchkiss says, the company has frozen very few accounts—and, like other experts we spoke to, he thinks it’s unlikely to happen at scale. If it were to start acting like a regulator or a bank, the currency would lose a fundamental part of its appeal: its anonymity and independence from the mainstream of finance. The more you intervene, “the less trust people have in your coin,” he says. “The incentives are kind of misaligned.”


Getting out

Gavesh really wasn’t very good at scamming. The knowledge that the person on the other side of the conversation was working hard for money that he was trying to steal weighed heavily on him. “There was this one guy I was chatting with, [using] a girl’s profile,” he says. “He was trying to make a living. He was working in a cafe. He had a daughter who was living with [her] mother. That story was really touching. And, like, you don’t want to get these people [involved].” 

The nature of the work left him racked with guilt. “I believe in karma,” he says. “What goes around comes around.”

Twice during Gavesh’s incarceration, he was sold on from one “employer” to another, but he still struggled with scamming. In February 2023, he was put up for sale a third time, along with some other workers.

“We went to the boss and begged him not to sell [us] and to please let us go home,” Gavesh says. The boss eventually agreed but told them it would cost them. As well as forgoing their salaries, they had to pay a ransom—Gavesh’s was set at 72,000 Thai baht, more than $2,000. 

Gavesh managed to scrape the money together, and he and around a dozen others were driven to the river in a military vehicle. “We had to be very silent,” he says. They were told “not to make any sounds or anything—just to get on the boat.” They slipped back into Thailand the way they had come.

A guard counts money as a small figure wearing a blue tag waits behind.

KATHERINE LAM

To avoid checkpoints on the way to Bangkok, the smugglers took paths through the jungle and changed vehicles around 10 times.

The group barely had enough money to survive a couple of days in the city, so they stuck together, staying in a cheap hotel while figuring out what to do next. With the help of a compatriot, Gavesh got in touch with IJM, which offered to help him navigate the legal bureaucracy ahead.

The traffickers hadn’t given him back his passport, and he was in Thailand without authorization. It was April before he was finally able to board a flight home, where he faced yet more questioning from police and immigration officials. He told his family he had “a small visa issue” and that he had lost his passport in Bangkok. He has never told them about his ordeal. “It would be very hard for them to process,” he says.

Recent history shows it’s very unlikely Gavesh will get any justice. That’s part of the reason why disrupting scams’ technology supply chain is so important: It’s incredibly challenging to hold the people operating the syndicates accountable. They straddle borders and jurisdictions. They have trafficked people from more than 60 countries, according to research from USIP, and scam targets come from all over the world. Much of the stolen money is moved through crypto wallets based in secrecy jurisdictions. “This thing is really like an onion. You’ve got layer after layer after layer of it, and it’s just really difficult to see where jurisdiction starts and where jurisdiction ends,” Tower says.

Chinese authorities are often more willing to cooperate with the military junta and armed groups in Myanmar that Western governments will not deal with, and they have cracked down where they can on operations involving their nationals. Thailand has also stepped up its efforts to address the human trafficking crisis and shut down scamming operations across its border in recent months. But when it comes to regulating tech platforms, the reaction from governments has been slower. 

The few legislative efforts in the US, which are still in the earliest stages, focus on supporting law enforcement and financial institutions, not directly on ways to address the abuse of American tech platforms for scamming. And they probably won’t take that on anytime soon. Trump, who has been boosted and courted by several high-profile tech executives, has indicated that his administration opposes heavier online moderation. One executive order, signed in February, vows to impose tariffs on foreign governments if they introduce measures that could “inhibit the growth” of US companies—particularly those in tech—or compel them to moderate online content. 

The Trump White House also supports reducing regulation in the crypto industry; it has halted major investigations into crypto companies and just this month removed sanctions on the crypto mixer Tornado Cash. In what was widely seen as a nod to libertarian-leaning crypto enthusiasts, Trump pardoned Ross Ulbricht, the founder of the dark web marketplace Silk Road and one of the earliest adopters of crypto for large-scale criminal activity. The administration’s embrace of crypto could indeed have implications for the scamming industry, notes Kim, the economist: “It makes it much easier for crypto services to proliferate and have wider-spread adoption, and that might make it easier for criminal enterprises to tap into that and exploit that for their own means.”

What’s more, the new US administration has overseen the rollback of funding for myriad international aid programs, primarily programs run through the US Agency for International Development and including those working to help the people who’ve been trafficked into scam compounds. In late February, CNN reported, every one of the agency’s anti-trafficking projects was halted.

This all means it’s up to the tech companies themselves to act on their own initiative. And Big Tech has rarely acted without legislative threats or significant social or financial pressure. Companies won’t do anything if “it’s not mandatory, it’s not enforced by the government,” and most important, if companies don’t profit from it, says Wang, from the University of Texas. While a group of tech companies, including Meta, Match, and Coinbase, last year announced the formation of Tech Against Scams, a collaboration to share tips and best practices, experts tell us there are no concrete actions to point to yet. 

And at a time when more resources are desperately needed to address the growing problems on their platforms, social media companies like X, Meta, and others have laid off hundreds of people from their trust and safety departments in recent years, reducing their capacity to tackle even the most pressing issues. Since the reelection of Trump, Meta has signaled an even greater rollback of its moderation and fact checking, a decision that earned praise from the president. 

Still, companies may feel pressure given that a handful of entities and executives have in recent years been held legally responsible for criminal activity on their platforms. Changpeng Zhao, who founded Binance, the world’s largest cryptocurrency exchange, was sentenced to four months in jail last April after pleading guilty to breaking US money-laundering laws, and the company had to forfeit some $4 billion for offenses that included allowing users to bypass sanctions. Then last May, Alexey Pertsev, a Tornado Cash cofounder, was sentenced to more than five years in a Dutch prison for facilitating the laundering of money stolen by, among others, the Lazarus Group, North Korea’s infamous state-backed hacking team. And in August last year, French authorities arrested Pavel Durov, the CEO of Telegram, and charged him with complicity in drug trafficking and distribution of child sexual abuse material. 

“I think all social media [companies] should really be looking at the case of Telegram right now,” USIP’s Tower says. “At that CEO level, you’re starting to see states try to hold a company accountable for its role in enabling major transnational criminal activity on a global scale.”

Compounding all the challenges, however, is the integration of cheap and easy-to-use artificial intelligence into scamming operations. The trafficked individuals we spoke to, who had mostly left the compounds before the widespread adoption of generative AI, said that if targets suggested a video call they would deflect or, as a last resort, play prerecorded video clips. Only one described the use of AI by his company; he says he was paid to record himself saying various sentences in ways that reflected different emotions, for the purposes of feeding the audio into an AI model. Recently, reports have emerged of scammers who have used AI-powered “face swap” and voice-altering products so that they can impersonate their characters more convincingly. “Malicious actors can exploit these models, especially open-source models, to produce content at an unprecedented scale,” says Gabrielle Tran, senior analyst for technology and society at IST. “These models are purposefully being fine-tuned … to serve as convincing humans.”  

Experts we spoke with warn that if platforms don’t pick up the pace on enforcement now, they’re likely to fall even further behind. 

Every now and again, Gavesh still goes on Facebook to report pages he thinks are scams. He never hears back. 

But he is working again in the tourism industry and on the path to recovering from his ordeal. “I can’t say that I’m 100% out of the trauma, but I’m trying to survive because I have responsibilities,” he says. 

He chose to speak out because he doesn’t want anyone else to be tricked—into a scamming compound, or into giving up their life savings to a stranger. He’s seen behind the scenes into a brutal industry that exploits people’s real needs for work, connection, and human contact, and he wants to make sure no one else ends up where he did. 

“There’s a very scary world,” he says. “A world beyond what we have seen.”

Peter Guest is a journalist based in London. Emily Fishbein is a freelance journalist focusing on Myanmar.

Additional reporting by Nu Nu Lusan. 

Inside the strange limbo facing millions of IVF embryos

Lisa Holligan already had two children when she decided to try for another baby. Her first two pregnancies had come easily. But for some unknown reason, the third didn’t. Holligan and her husband experienced miscarriage after miscarriage after miscarriage.

Like many other people struggling to conceive, Holligan turned to in vitro fertilization, or IVF. The technology allows embryologists to take sperm and eggs and fuse them outside the body, creating embryos that can then be transferred into a person’s uterus.

The fertility clinic treating Holligan was able to create six embryos using her eggs and her husband’s sperm. Genetic tests revealed that only three of these were “genetically normal.” After the first was transferred, Holligan got pregnant. Then she experienced yet another miscarriage. “I felt numb,” she recalls. But the second transfer, which took place several months later, stuck. And little Quinn, who turns four in February, was the eventual happy result. “She is the light in our lives,” says Holligan.

Holligan, who lives in the UK, opted to donate her “genetically abnormal” embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn’t know what to do with it.

Should she and her husband donate it to another family? Destroy it? “It’s almost four years down the line, and we still haven’t done anything with [the embryo],” she says. The clinic hasn’t been helpful—Holligan doesn’t remember talking about what to do with leftover embryos at the time, and no one there has been in touch with her for years, she says.

Holligan’s embryo is far from the only one in this peculiar limbo. Millions—or potentially tens of millions—of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates. 

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, they’re human cells and nothing else. To others, they’re morally equivalent to children. Many feel they exist somewhere between those two extremes.

There are debates, too, over how we should classify embryos in law. Are they property? Do they have a legal status? These questions are important: There have been multiple legal disputes over who gets to use embryos, who is responsible if they are damaged, and who gets the final say over their fate. And the answers will depend not only on scientific factors, but also on ethical, cultural, and religious ones.  

The options currently available to people with leftover IVF embryos mirror this confusion. As a UK resident, Holligan can choose to discard her embryos, make them available to other prospective parents, or donate them for research. People in the US can also opt for “adoption,” “placing” their embryos with families they get to choose. In Germany, people are not typically allowed to freeze embryos at all. And in Italy, embryos that are not used by the intended parents cannot be discarded or donated. They must remain frozen, ostensibly forever. 

While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? 

Meanwhile, many of these same people are trying to find ways to bring down the total number of embryos in storage. Maintenance costs are high. Some clinics are running out of space. And the more embryos there are in storage, the more opportunities there are for human error.

The embryo boom

There are a few reasons why this has become such a conundrum. And they largely come down to an increasing demand for IVF and improvements in the way it is practiced. “It’s a problem of our own creation,” says Pietro Bortoletto, a reproductive endocrinologist at Boston IVF in Massachusetts. IVF has only become as successful as it is today by “generating lots of excess eggs and embryos along the way,” he says. 

To have the best chance of creating healthy embryos that will attach to the uterus and grow in a successful pregnancy, clinics will try to collect multiple eggs. People who undergo IVF will typically take a course of hormone injections to stimulate their ovaries. Instead of releasing a single egg that month, they can expect to produce somewhere between seven and 20 eggs. These eggs can be collected via a needle that passes through the vagina and into the ovaries. The eggs are then taken to a lab, where they are introduced to sperm. Around 70% to 80% of IVF eggs are successfully fertilized to create embryos.

The embryos are then grown in the lab. After around five to seven days an embryo reaches a stage of development at which it is called a blastocyst, and it is ready to be transferred to a uterus. Not all IVF embryos reach this stage, however—only around 30% to 50% of them make it to day five. This process might leave a person with no viable embryos. It could also result in more than 10, only one of which is typically transferred in each pregnancy attempt. In a typical IVF cycle, one embryo might be transferred to the person’s uterus “fresh,” while any others that were created are frozen and stored.

IVF success rates have increased over time, in large part thanks to improvements in this storage technology. A little over a decade ago, embryologists tended to use a “slow freeze” technique, says Bortoletto, and many embryos didn’t survive the process. Embryos are now vitrified instead, using liquid nitrogen to rapidly cool them from room temperature to -196 °C in less than two seconds. Vitrification essentially turns all the water in the embryos into a glasslike state, avoiding the formation of damaging ice crystals. 

Now, clinics increasingly take a “freeze all” approach, in which they cryopreserve all the viable embryos and don’t start transferring them until later. In some cases, this is so that the clinic has a chance to perform genetic tests on the embryo they plan to transfer.

An assortment of sperm and embryos, preserved in liquid nitrogen.
ALAMY

Once a lab-grown embryo is around seven days old, embryologists can remove a few cells for preimplantation genetic testing (PGT), which screens for genetic factors that might make healthy development less likely or predispose any resulting children to genetic diseases. PGT is increasingly popular in the US—in 2014, it was used in 13% of IVF cycles, but by 2016, that figure had increased to 27%. Embryos that undergo PGT have to be frozen while the tests are run, which typically takes a week or two, says Bortoletto: “You can’t continue to grow them until you get those results back.”

And there doesn’t seem to be a limit to how long an embryo can stay in storage. In 2022, a couple in Oregon had twins who developed from embryos that had been frozen for 30 years.

Put this all together, and it’s easy to see how the number of embryos in storage is rocketing. We’re making and storing more embryos than ever before. Combine that with the ever-growing demand for IVF, and perhaps it’s not surprising that the number of embryos sitting in storage tanks is estimated to be in the millions.

I say estimated, because no one really knows how many there are. In 2003, the results of a survey of fertility clinics in the US suggested that there were around 400,000 in storage. Ten years later, in 2013, another pair of researchers estimated that, in total, around 1.4 million embryos had been cryopreserved in the US. But Alana Cattapan, now a political scientist at the University of Waterloo in Ontario, Canada, and her colleagues found flaws in the study and wrote in 2015 that the number could be closer to 4 million.  

That was a decade ago. When I asked embryologists what they thought the number might be in the US today, I got responses between 1 million and 10 million. Bortoletto puts it somewhere around 5 million.

Globally, the figure is much higher. There could be tens of millions of embryos, invisible to the naked eye, kept in a form of suspended animation. Some for months, years, or decades. Others indefinitely.

Stuck in limbo

In theory, people who have embryos left over from IVF have a few options for what to do with them. They could donate the embryos for someone else to use. Often this can be done anonymously (although genetic tests might later reveal the biological parents of any children that result). They could also donate the embryos for research purposes. Or they could choose to discard them. One way to do this is to expose the embryos to air, causing the cells to die.

Studies suggest that around 40% of people with cryopreserved embryos struggle to make this decision, and that many put it off for five years or more. For some people, none of the options are appealing.

In practice, too, the available options vary greatly depending on where you are. And many of them lead to limbo.

Take Spain, for example, which is a European fertility hub, partly because IVF there is a lot cheaper than in other Western European countries, says Giuliana Baccino, managing director of New Life Bank, a storage facility for eggs and sperm in Buenos Aires, Argentina, and vice chair of the European Fertility Society. Operating costs are low, and there’s healthy competition—there are around 330 IVF clinics operating in Spain. (For comparison, there are around 500 IVF clinics in the US, which has a population almost seven times greater.)

Baccino, who is based in Madrid, says she often hears of foreign patients in their late 40s who create eight or nine embryos for IVF in Spain but end up using only one or two of them. They go back to their home countries to have their babies, and the embryos stay in Spain, she says. These individuals often don’t come back for their remaining embryos, either because they have completed their families or because they age out of IVF eligibility (Spanish clinics tend not to offer the treatment to people over 50). 

An embryo sample is removed from cryogenic storage.
GETTY IMAGES

In 2023, the Spanish Fertility Society estimated that there were 668,082 embryos in storage in Spain, and that around 60,000 of them were “in a situation of abandonment.” In these cases the clinics might not be able to reach the intended parents, or might not have a clear directive from them, and might not want to destroy any embryos in case the patients ask for them later. But Spanish clinics are wary of discarding embryos even when they have permission to do so, says Baccino. “We always try to avoid trouble,” she says. “And we end up with embryos in this black hole.”

This happens to embryos in the US, too. Clinics can lose touch with their patients, who may move away or forget about their remaining embryos once they have completed their families. Other people may put off making decisions about those embryos and stop communicating with the clinic. In cases like these, clinics tend to hold onto the embryos, covering the storage fees themselves.

Nowadays clinics ask their patients to sign contracts that cover long-term storage of embryos—and the conditions of their disposal. But even with those in hand, it can be easier for clinics to leave the embryos in place indefinitely. “Clinics are wary of disposing of them without explicit consent, because of potential liability,” says Cattapan, who has researched the issue. “People put so much time, energy, money into creating these embryos. What if they come back?”

Bortoletto’s clinic has been in business for 35 years, and the handful of sites it operates in the US have a total of over 47,000 embryos in storage, he says. “Our oldest embryo in storage was frozen in 1989,” he adds. 

Some people may not even know where their embryos are. Sam Everingham, who founded and directs Growing Families, an organization offering advice on surrogacy and cross-border donations, traveled with his partner from their home in Melbourne, Australia, to India to find an egg donor and surrogate back in 2009. “It was a Wild West back then,” he recalls. Everingham and his partner used donor eggs to create eight embryos with their sperm.

Everingham found the experience of trying to bring those embryos to birth traumatic. Baby Zac was stillborn. Baby Ben died at seven weeks. “We picked ourselves up and went again,” he recalls. Two embryo transfers were successful, and the pair have two daughters today.

But the fate of the rest of their embryos is unclear. India’s government banned commercial surrogacy for foreigners in 2015, and Everingham lost track of where the embryos are. He says he’s okay with that. As far as he’s concerned, those embryos are just cells.

He knows not everyone feels the same way. A few days before we spoke, Everingham had hosted a couple for dinner. They had embryos in storage and couldn’t agree on what to do with them. “The mother … wanted them donated to somebody,” says Everingham. Her husband was very uncomfortable with the idea. “[They have] paid storage fees for 14 years for those embryos because neither can agree on what to do with them,” says Everingham. “And this is a very typical scenario.”

Lisa Holligan’s experience is similar. Holligan thought she’d like to donate her last embryo to another person—someone else who might have been struggling to conceive. “But my husband and I had very different views on it,” she recalls. He saw the embryo as their child and said he wouldn’t feel comfortable with giving it up to another family. “I started having these thoughts about a child coming to me when they’re older, saying they’ve had a terrible life, and [asking] ‘Why didn’t you have me?’” she says.

After all, her daughter Quinn began as an embryo that was in storage for months. “She was frozen in time. She could have been frozen for five years like [the leftover] embryo and still be her,” she says. “I know it sounds a bit strange, but this embryo could be a child in 20 years’ time. The science is just mind-blowing, and I think I just block it out. It’s far too much to think about.”

No choice at all

Choosing the fate of your embryos can be difficult. But some people have no options at all.

This is the case in Italy, where the laws surrounding assisted reproductive technology have grown increasingly restrictive. Since 2004, IVF has been accessible only to heterosexual couples who are either married or cohabiting. Surrogacy has also been prohibited in the country for the last 20 years, and in 2024, it was made a “universal crime.” The move means Italians can be prosecuted for engaging in surrogacy anywhere in the world, a position Italy has also taken on the crimes of genocide and torture, says Sara Dalla Costa, a lawyer specializing in assisted reproduction and an IVF clinic manager at Instituto Bernabeu on the outskirts of Venice.

The law surrounding leftover embryos is similarly inflexible. Dalla Costa says there are around 900,000 embryos in storage in Italy, basing the estimate on figures published in 2021 and the number of IVF cycles performed since then. By law, these embryos cannot be discarded. They cannot be donated to other people, and they cannot be used for research. 

Even when genetic tests show that the embryo has genetic features making it “incompatible with life,” it must remain in storage, forever, says Dalla Costa. 

“There are a lot of patients that want to destroy embryos,” she says. For that, they must transfer their embryos to Spain or other countries where it is allowed.

Even people who want to use their embryos may “age out” of using them. Dalla Costa gives the example of a 48-year-old woman who undergoes IVF and creates five embryos. If the first embryo transfer happens to result in a successful pregnancy, the other four will end up in storage. Once she turns 50, this woman won’t be eligible for IVF in Italy. Her remaining embryos become stuck in limbo. “They will be stored in our biobanks forever,” says Dalla Costa.

Dalla Costa says she has “a lot of examples” of couples who separate after creating embryos together. For many of them, the stored embryos become a psychological burden. With no way of discarding them, these couples are forever connected through their cryopreserved cells. “A lot of our patients are stressed for this reason,” she says.

Earlier this year, one of Dalla Costa’s clients passed away, leaving behind the embryos she’d created with her husband. He asked the clinic to destroy them. In cases like these, Dalla Costa will contact the Italian Ministry of Health. She has never been granted permission to discard an embryo, but she hopes that highlighting cases like these might at least raise awareness about the dilemmas the country’s policies are creating for some people.

Snowflakes and embabies

In Italy, embryos have a legal status. They have protected rights and are viewed almost as children. This sentiment isn’t specific to Italy. It is shared by plenty of individuals who have been through IVF. “Some people call them ‘embabies’ or ‘freezer babies,’” says Cattapan.

It is also shared by embryo adoption agencies in the US. Beth Button is executive director of one such program, called Snowflakes—a division of Nightlight Christian Adoptions agency, which considers cryopreserved embryos to be children, frozen in time, waiting to be born. Snowflakes matches embryo donors, or “placing families,” with recipients, termed “adopting families.” Both parties share their information and essentially get to choose who they donate to or receive from. By the end of 2024, 1,316 babies had been born through the Snowflakes embryo adoption program, says Button. 

Button thinks that far too many embryos are being created in IVF labs around the US. Around 10 years ago, her agency received a donation from a couple with around 38 leftover embryos. “We really encourage [people with leftover embryos in storage] to make a decision [about their fate], even though it’s an emotional, difficult decision,” she says. “Obviously, we just try to keep [that discussion] focused on the child. Is it better for these children to be sitting in a freezer, even though that might be easier for you, or is it better for them to have a chance to be born into a loving family? That kind of pushes them to the point where they’re ready to make that decision.”

Button and her colleagues feel especially strongly about embryos that have been in storage for a long time. These embryos are usually difficult to place, because they are thought to be of poorer quality, or less likely to successfully thaw and result in a healthy birth. The agency runs a program called Open Hearts specifically to place them, along with others that are harder to match for various reasons. People who accept one but fail to conceive are given a shot with another embryo, free of charge.

These nitrogen tanks at New Hope Fertility Center in New York hold tens of thousands of frozen embryos and eggs.

“We have seen perfectly healthy children born from very old embryos, [as well as] embryos that were considered such poor quality that doctors didn’t even want to transfer them,” says Button. “Right now, we have a couple who is pregnant with [an embryo] that was frozen for 30 and a half years. If that pregnancy is successful, that will be a record for us, and I think it will be a worldwide record as well.”

Many embryologists bristle at the idea of calling an embryo a child, though. “Embryos are property. They are not unborn children,” says Bortoletto. In the best case, embryos create pregnancies around 65% of the time, he says. “They are not unborn children,” he repeats.

Person or property?

In 2020, an unauthorized person allegedly entered an IVF clinic in Alabama and pulled frozen embryos from storage, destroying them. Three sets of intended parents filed suit over their “wrongful death.” A trial court dismissed the claims, but the Alabama Supreme Court disagreed, essentially determining that those embryos were people. The ruling shocked many and was expected to have a chilling effect on IVF in the state, although within a few weeks, the state legislature granted criminal and civil immunity to IVF clinics.

But the Alabama decision is the exception. While there are active efforts in some states to endow embryos with the same legal rights as people, a move that could potentially limit access to abortion, “most of the [legal] rulings in this area have made it very clear that embryos are not people,” says Rich Vaughn, an attorney specializing in fertility law and the founder of the US-based International Fertility Law Group. At the same time, embryos are not just property. “They’re something in between,” says Vaughn. “They’re sort of a special type of property.” 

UK law takes a similar approach: The language surrounding embryos and IVF was drafted with the idea that the embryo has some kind of “special status,” although it was never made entirely clear exactly what that special status is, says James Lawford Davies, a solicitor and partner at LDMH Partners, a law firm based in York, England, that specializes in life sciences. Over the years, the language has been tweaked to encompass embryos that might arise from IVF, cloning, or other means; it is “a bit of a fudge,” says Lawford Davies. Today, the official—if somewhat circular—legal definition in the Human Fertilisation and Embryology Act reads: “embryo means a live human embryo.” 

And while people who use their eggs or sperm to create embryos might view these embryos as theirs, according to UK law, embryos are more like “a stateless bundle of cells,” says Lawford Davies. They’re not quite property—people don’t own embryos. They just have control over how they are used. 

Many legal disputes revolve around who has control. This was the experience of Natallie Evans, who created embryos with her then partner Howard Johnston in the UK in 2001. The couple separated in 2002. Johnston wrote to the clinic to ask that their embryos be destroyed. But Evans, who had been diagnosed with ovarian cancer in 2001, wanted to use them. She argued that Johnston had already consented to their creation, storage, and use and should not be allowed to change his mind. The case eventually made it to the European Court of Human Rights, and Evans lost. The case set a precedent that consent was key and could be withdrawn at any time.

In Italy, on the other hand, withdrawing consent isn’t always possible. In 2021, a case like Natallie Evans’s unfolded in the Italian courts: A woman who wanted to proceed with implantation after separating from her partner went to court for authorization. “She said that it was her last chance to be a mother,” says Dalla Costa. The judge ruled in her favor.

Dalla Costa’s clinics in Italy are now changing their policies to align with this decision. Male partners must sign a form acknowledging that they cannot prevent embryos from being used once they’ve been created.

The US situation is even more complicated, because each state has its own approach to fertility regulation. When I looked through a series of published legal disputes over embryos, I found little consistency—sometimes courts ruled to allow a woman to use an embryo without the consent of her former partner, and sometimes they didn’t. “Some states have comprehensive … legislation; some do not,” says Vaughn. “Some have piecemeal legislation, some have only case law, some have all of the above, some have none of the above.”

The meaning of an embryo

So how should we define an embryo? “It’s the million-dollar question,” says Heidi Mertes, a bioethicist at Ghent University in Belgium. Some bioethicists and legal scholars, including Vaughn, think we’d all stand to benefit from clear legal definitions. 

Risa Cromer, a cultural anthropologist at Purdue University in Indiana, who has spent years researching the field, is less convinced. Embryos exist in a murky, in-between state, she argues. You can (usually) discard them, or transfer them, but you can’t sell them. You can make claims against damages to them, but an embryo is never viewed in the same way as a car, for example. “It doesn’t fit really neatly into that property category,” says Cromer. “But, very clearly, it doesn’t fit neatly into the personhood category either.”

And there are benefits to keeping the definition vague, she adds: “There is, I think, a human need for there to be a wide range of interpretive space for what IVF embryos are or could be.”

That’s because we don’t have a fixed moral definition of what an embryo is. Embryos hold special value even for people who don’t view them as children. They hold potential as human life. They can come to represent a fertility journey—one that might have been expensive, exhausting, and traumatizing.  “Even for people who feel like they’re just cells, it still cost a lot of time, money, [and effort] to get those [cells],” says Cattapan.

“I think it’s an illusion that we might all agree on what the moral status of an embryo is,” Mertes says.

In the meantime, a growing number of embryologists, ethicists, and researchers are working to persuade fertility clinics and their patients not to create or freeze so many embryos in the first place. Early signs aren’t promising, says Baccino. The patients she has encountered aren’t particularly receptive to the idea. “They think, ‘If I will pay this amount for a cycle, I want to optimize my chances, so in my case, no,’” she says. She expects the number of embryos in storage to continue to grow.

Holligan’s embryo has been in storage for almost five years. And she still doesn’t know what to do with it. She tears up as she talks through her options. Would discarding the embryo feel like a miscarriage? Would it be a sad thing? If she donated the embryo, would she spend the rest of her life wondering what had become of her biological child, and whether it was having a good life? Should she hold on to the embryo for another decade in case her own daughter needs to use it at some point?

“The question [of what to do with the embryo] does pop into my head, but I quickly try to move past it and just say ‘Oh, that’s something I’ll deal with at a later time,’” says Holligan. “I’m sure [my husband] does the same.”

The accumulation of frozen embryos is “going to continue this way for some time until we come up with something that fully addresses everyone’s concerns,” says Vaughn. But will we ever be able to do that?

“I’m an optimist, so I’m gonna say yes,” he says with a hopeful smile. “But I don’t know at the moment.”

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. 

But all that is up for grabs. We are at a new inflection point.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way. 

Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”

AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. 

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity is doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. 

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. 

On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 

It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.”

But this isn’t just about publishers (or my own self-interest). 

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer.

But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. 

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? 


In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. 

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey.

Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. 

And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.  

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 

And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.
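The link-counting idea can be sketched in a few lines of Python. This is a toy version of the kind of iterative link analysis Google popularized, not the company’s actual algorithm; the three-page web and all names here are illustrative.

```python
# Toy link-based ranking: a page's score is shared among the pages
# it links to, and "damping" models a reader who sometimes jumps
# to a random page instead of following a link.
links = {
    "a": ["b", "c"],  # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

ranks = pagerank(links)
# "c" ends up ranked highest: both "a" and "b" link to it.
```

The scores always sum to 1, so a page’s rank can be read as the share of attention the whole web gives it—the property that made heavily cited pages float to the top of results.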

Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  

But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. 

“It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly. 

It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be. 

But once you’ve used AI Overviews a bit, you realize they are different.

Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world.

While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
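Under the hood, answers like these typically follow a retrieval-augmented generation pattern: fetch the most relevant documents for a query, then hand them to a language model as grounding context. Here is a minimal sketch of that pipeline, with a keyword-overlap retriever and a stub standing in for the model; the documents and function names are illustrative, not Google’s actual system.

```python
# Minimal retrieval-augmented generation (RAG) sketch:
# 1) retrieve the snippets most relevant to the query,
# 2) assemble them into a prompt for a language model.
# The generation step here is a stub; a real system would call an LLM.
index = {
    "doc1": "AI Overviews summarize web results using the Gemini model.",
    "doc2": "Featured snippets quote a passage directly from one source.",
    "doc3": "The Knowledge Graph stores facts about entities.",
}

def retrieve(query, index, k=2):
    """Rank documents by how many query words they contain."""
    words = set(query.lower().split())
    scored = sorted(
        index.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query, index):
    context = retrieve(query, index)
    # Stand-in for generation: a real system would send this prompt
    # to a language model and return its freshly written answer.
    return f"Context: {' '.join(context)}\nQuestion: {query}"

print(answer("how do AI Overviews summarize results", index))
```

The key difference from a snippet or knowledge panel is that last step: the retrieved text is raw material for a newly generated answer, not a quotation you can trace back to a single row in a database.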

“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”

The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) 

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” 

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 

That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. 

“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. 

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? 

I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources.

“When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”

In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. 

“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.


Search Engine

Google
The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries.

What it’s good at

Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.


Perplexity
Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries.

Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.


ChatGPT
While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search.

Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.


When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. 

“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. 

“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” 

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?  

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.  

“If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. 

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” 

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”


 “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” 

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? 

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  

According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. 

OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. 

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. 

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. 

Elizabeth Reid
“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.
WINNI WINTERMEYER/REDUX

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners.

Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” 

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. 

“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”

Indeed! 

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers. 


It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” 

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. 

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. 

Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. 

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can also create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” says Pichai, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” 

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. 

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. 

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

At first glance, the Mosphera scooter may look normal—just comically oversized. It’s like the monster truck of scooters, with a footplate seven inches off the ground that’s wide enough to stand on with your feet slightly apart—which you have to do to keep your balance, because when you flip the accelerator with a thumb, it takes off like a rocket. While the version I tried in a parking lot in Riga’s warehouse district had a limiter on the motor, the production version of the supersized electric scooter can hit 100 kilometers (62 miles) per hour on the flat. The all-terrain vehicle can also go 300 kilometers on a single charge and climb 45-degree inclines. 

Latvian startup Global Wolf Motors launched in 2020 with the hope that the Mosphera would fill a niche in micromobility. Like commuters who use scooters in urban environments, farmers and vintners could use the Mosphera to zip around their properties; miners and utility workers could use it for maintenance and security patrols; police and border guards could drive them on forest paths. And, they thought, maybe the military might want a few to traverse its bases or even the battlefield—though they knew that was something of a long shot.

When co-founders Henrijs Bukavs and Klavs Asmanis first went to talk to Latvia’s armed forces, they were indeed met with skepticism—a military scooter, officials implied, didn’t make much sense—and a wall of bureaucracy. They found that no matter how good your pitch or how glossy your promo video (and Global Wolf’s promo is glossy: a slick montage of scooters jumping, climbing, and speeding in formation through woodlands and deserts), getting into military supply chains meant navigating layer upon layer of officialdom.

Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed. In the desperate early days of the war, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas—like a military scooter—that might not have made the cut in peacetime. Asmanis knew a Latvian journalist heading to Ukraine; through the reporter’s contacts, the startup arranged to ship two Mospheras to the Ukrainian army. 

Within weeks, the scooters were at the front line—and even behind it, being used by Ukrainian special forces scouts on daring reconnaissance missions. It was an unexpected but momentous step for Global Wolf, and an early indicator of a new demand that’s sweeping across tech companies along Ukraine’s borders: for civilian products that can be adapted quickly for military use.

COURTESY OF GLOBAL WOLF

Global Wolf’s high-definition marketing materials turned out to be nowhere near as effective as a few minutes of grainy phone footage from the war. The company has since shipped out nine more scooters to the Ukrainian army, which has asked for another 68. Where Latvian officials once scoffed, the country’s prime minister went to see Mosphera’s factory in April 2024, and now dignitaries and defense officials from the country are regular visitors. 

It might have been hard a few years ago to imagine soldiers heading to battle on oversized toys made by a tech startup with no military heritage. But Ukraine’s resistance to Russia’s attacks has been a miracle of social resilience and innovation—and the way the country has mobilized is serving as both a warning and an inspiration to its neighbors. They’ve watched as startups, major industrial players, and political leaders in Ukraine have worked en masse to turn civilian technology into weapons and civil defense systems. They’ve seen Ukrainian entrepreneurs help bootstrap a military-industrial complex that is retrofitting civilian drones into artillery spotters and bombers, while software engineers become cyberwarriors and AI companies shift to battlefield intelligence. Engineers work directly with friends and family on the front line, iterating their products with incredible speed.

Their successes—often at a fraction of the cost of conventional weapons systems—have in turn awakened European governments and militaries to the potential of startup-style innovation, and awakened startups to the dual-use potential of their products: tools with legitimate civilian applications that can be modified at scale into weapons. 

This heady mix of market demand and existential threat is pulling tech companies in Latvia and the other Baltic states into a significant pivot. Companies that can find military uses for their products are hardening them and discovering ways to get them in front of militaries that are increasingly willing to entertain the idea of working with startups. It’s a turn that may only become more urgent if the US under incoming President Donald Trump becomes less willing to underwrite the continent’s defense.

But while national governments, the European Union, and NATO are all throwing billions of dollars of public money into incubators and investment funds—followed closely by private-sector investors—some entrepreneurs and policy experts who have worked closely with Ukraine warn that Europe might have only partially learned the lessons from Ukraine’s resistance.

If Europe wants to be ready to meet the threat of attack, it needs to find new ways of working with the tech sector. That includes learning how Ukraine’s government and civil society adapted to turn civilian products into dual-use tools quickly and cut through bureaucracy to get innovative solutions to the front. Ukraine’s resilience shows that military technology isn’t just about what militaries buy but about how they buy it, and about how politics, civil society, and the tech sector can work together in a crisis. 

“I think that a lot of tech companies in Europe would do what is needed to do. They would put their knowledge and skills where they’re needed,” says Ieva Ilves, a veteran Latvian diplomat and technology policy expert. But many governments across the continent are still too slow, too bureaucratic, and too worried that they might appear to be wasting money, meaning, she says, that they are not necessarily “preparing the soil for if [a] crisis comes.”

“The question is,” she says, “on a political level, are we capable of learning from Ukraine?”

Waking up the neighbors

Many Latvians and others across the Baltic nations feel the threat of Russian aggression more viscerally than their neighbors in Western Europe. Like Ukraine, Latvia has a long border with Russia and Belarus, a large Russian-speaking minority, and a history of occupation. Also like Ukraine, it has been the target of more than a decade of so-called “hybrid war” tactics—cyberattacks, disinformation campaigns, and other attempts at destabilization—directed by Moscow. 

Since Russian tanks crossed into Ukraine two-plus years ago, Latvia has stepped up its preparations for a physical confrontation, investing more than €300 million ($316 million) in fortifications along the Russian border and reinstating a limited form of conscription to boost its reserve forces. Since the start of this year, the Latvian fire service has been inspecting underground structures around the country, looking for cellars, parking garages, and metro stations that could be turned into bomb shelters.

And much like Ukraine, Latvia doesn’t have a huge military-industrial complex that can churn out artillery shells or tanks en masse. 

What it and other smaller European countries can produce for themselves—and potentially sell to their allies—are small-scale weapons systems, software platforms, telecoms equipment, and specialized vehicles. The country is now making a significant investment in tools like Exonicus, a medical technology platform founded 11 years ago by Latvian sculptor Sandis Kondrats. Users of its augmented-reality battlefield-medicine training simulator put on a virtual reality headset that presents them with casualties, which they have to diagnose and figure out how to treat. The all-digital training saves money on mannequins, Kondrats says, and on critical field resources.

“If you use all the medical supplies on training, then you don’t have any medical supplies,” he says. Exonicus has recently broken into the military supply chain, striking deals with the Latvian, Estonian, US, and German militaries, and it has been training Ukrainian combat medics.

Medical technology company Exonicus has created an augmented-reality battlefield-medicine training simulator that presents users with casualties, which they have to diagnose and figure out how to treat.
GATIS ORLICKIS/BALTIC PICTURES

There’s also VR Cars, a company founded by two Latvian former rally drivers, that signed a contract in 2022 to develop off-road vehicles for the army’s special forces. And there is Entangle, a quantum encryption company that sells widgets that turn mobile phones into secure communications devices, and has recently received an innovation grant from the Latvian Ministry of Defense.

Unsurprisingly, a lot of the focus in Latvia has been on unmanned aerial vehicles (UAVs), or drones, which have become ubiquitous on both sides fighting in Ukraine, often outperforming weapons systems that cost an order of magnitude more. In the early days of the war, Ukraine found itself largely relying on machines bought from abroad, such as the Turkish-made Bayraktar strike aircraft and jury-rigged DJI quadcopters from China. It took a while, but within a year the country was able to produce home-grown systems.

As a result, a lot of the emphasis in defense programs across Europe is on UAVs that can be built in-country. “The biggest thing when you talk to [European ministries of defense] now is that they say, ‘We want a big amount of drones, but we also want our own domestic production,’” says Ivan Tolchinsky, CEO of Atlas Dynamics, a drone company headquartered in Riga. Atlas Dynamics builds drones for industrial uses and has now made hardened versions of its surveillance UAVs that can resist electronic warfare and operate in battlefield conditions.

Agris Kipurs founded AirDog in 2014 to make drones that could track a subject autonomously; they were designed for people doing outdoor sports who wanted to film themselves without needing to fiddle with a controller. He and his co-founders sold the company to a US home security company, Alarm.com, in 2020. “For a while, we did not know exactly what we would build next,” Kipurs says. “But then, with the full-scale invasion of Ukraine, it became rather obvious.”

His new company, Origin Robotics, has recently “come out of stealth mode,” he says, after two years of research and development. Origin has built on the team’s experience in consumer drones and its expertise in autonomous flight to begin to build what Kipurs calls “an airborne precision-guided weapon system”—a guided bomb that a soldier can carry in a backpack. 

The Latvian government has invested in encouraging startups like these, as well as small manufacturers, to develop military-capable UAVs by establishing a €600,000 prize fund for domestic drone startups and a €10 million budget to create a new drone program, working with local and international manufacturers. 

VR Cars was founded by two Latvian former rally drivers and has developed off-road vehicles for the army’s special forces.

Latvia is also the architect and co-leader, with the UK, of the Drone Coalition, a multicountry initiative that’s directing more than €500 million toward building a drone supply chain in the West. Under the initiative, militaries run competitions for drone makers, rewarding high performers with contracts and sending their products to Ukraine. Its grantees are often not allowed to publicize their contracts, for security reasons. “But the companies which are delivering products through that initiative are new to the market,” Kipurs says. “They are not the companies that were there five years ago.”

Even national telecommunications company LMT, which is partly government owned, is working on drones and other military-grade hardware, including sensor equipment and surveillance balloons. It’s developing a battlefield “internet of things” system—essentially, a system that can track in real time all the assets and personnel in a theater of war. “In Latvia, more or less, we are getting ready for war,” says former naval officer Kaspars Pollaks, who heads an LMT division that focuses on defense innovation. “We are just taking the threat really seriously. Because we will be operationally alone [if Russia invades].”

The Latvian government’s investments are being mirrored across Europe: NATO has expanded its Defence Innovation Accelerator for the North Atlantic (DIANA) program, which runs startup incubators for dual-use technologies across the continent and the US, and launched a separate €1 billion startup fund in 2022. Adding to this, the European Investment Fund, a publicly owned investment company, launched a €175 million fund-of-funds this year to support defense technologies with dual-use potential. And the European Commission has earmarked more than €7 billion for defense research and development between now and 2027. 

Private investors are also circling, looking for opportunities to profit from the boom. Figures from the European consultancy Dealroom show that fundraising by dual-use and military-tech companies on the continent was just shy of $1 billion in 2023—up nearly a third over 2022, despite an overall slowdown in venture capital activity. 

Atlas Dynamics builds drones for industrial uses and now makes hardened versions that can resist electronic warfare and operate in battlefield conditions.
ATLAS AERO

When Atlas Dynamics started in 2015, funding was hard to come by, Tolchinsky says: “It’s always hard to make it as a hardware company, because VCs are more interested in software. And if you start talking about the defense market, people say, ‘Okay, it’s a long play for 10 or 20 years, it’s not interesting.’” That’s changed since 2022. “Now, what we see because of this war is more and more venture capital that wants to invest in defense companies,” Tolchinsky says.

But while money is helping startups get off the ground, to really prove the value of their products they need to get their tools in the hands of people who are going to use them. When I asked Kipurs if his products are currently being used in Ukraine, he only said: “I’m not allowed to answer that question directly. But our systems are with end users.”

Battle tested

Ukraine has moved on from the early days of the conflict, when it was willing to take almost anything that could be thrown at the invaders. But that experience has been critical in pushing the government to streamline its procurement processes dramatically to allow its soldiers to try out new defense-tech innovations. 

Origin Robotics has built on a history of producing consumer drones to create a guided bomb that a soldier can carry in a backpack. 

This system has, at times, been chaotic and fraught with risk. Fake crowdfunding campaigns have been set up to scam donors and steal money. Hackers have used open-source drone manuals and fake procurement contracts in phishing attacks in Ukraine. Some products have simply not worked as well at the front as their designers hoped, with reports of US-made drones falling victim to Russian jamming—or even failing to take off at all. 

Technology that doesn’t work at the front puts soldiers at risk, so in many cases they have taken matters into their own hands. Two Ukrainian drone makers tell me that military procurement in the country has been effectively flipped on its head: If you want to sell your gear to the armed forces, you don’t go to the general staff—you go directly to the soldiers and put it in their hands. Once soldiers start asking their senior officers for your tool, you can go back to the bureaucrats and make a deal.

Many foreign companies have simply donated their products to Ukraine—partly out of a desire to help, and partly because they’ve identified a (potentially profitable) opportunity to expose them to the shortened innovation cycles of conflict and to get live feedback from those fighting. This can be surprisingly easy as some volunteer units handle their own parallel supply chains through crowdfunding and donations, and they are eager to try out new tools if someone is willing to give them freely. One logistics specialist supplying a front line unit, speaking anonymously as he’s not authorized to talk to the media, tells me that this spring, they turned to donated gear from startups in Europe and the US to fill gaps left by delayed US military aid, including untested prototypes of UAVs and communications equipment. 

All of this has allowed many companies to bypass the traditionally slow process of testing and demonstrating their products, for better and worse.

Tech companies’ rush into the conflict zone has unnerved some observers, who are worried that by going to war, companies have sidestepped ethical and safety concerns over their tools. Clearview AI gave Ukraine access to its controversial facial recognition tools to help identify Russia’s war dead, for example, sparking moral and practical questions over accuracy, privacy, and human rights—publishing images of those killed in war is arguably a violation of the Geneva Convention. Some high-profile tech executives, including Palantir CEO Alex Karp and former Google CEO-turned-military-tech-investor Eric Schmidt, have used the conflict to try to shift the global norms for using artificial intelligence in war, building systems that let machines select targets for attacks—which some experts worry is a gateway into autonomous “killer robots.”

LMT’s Pollaks says he has visited Ukraine often since the war began. Though he declines to give more details, he euphemistically describes Ukraine’s wartime bureaucracy as “nonstandardized.” If you want to blow something up in front of an audience in the EU, he says, you have to go through a whole lot of approvals, and the paperwork can take months, even years. In Ukraine, plenty of people are willing to try out your tools.

“[Ukraine], unfortunately, is the best defense technology experimentation ground in the world right now,” Pollaks says. “If you are not in Ukraine, then you are not in the defense business.”

Jack Wang, principal at UK-based venture capital fund Project A, which invests in military-tech startups, agrees that the Ukraine “track” can be incredibly fruitful. “If you sell to Ukraine, you get faster product and tech iteration, and live field testing,” he says. “The dollars might vary. Sometimes zero, sometimes quite a bit. But you get your product in the field faster.” 

The feedback that comes from the front is invaluable. Atlas Dynamics has opened an office in Ukraine, and its representatives there work with soldiers and special forces to refine and modify their products. When Russian forces started jamming a wide band of radio frequencies to disrupt communication with the drones, Atlas designed a smart frequency-hopping system, which scans for unjammed frequencies and switches control of the drone over to them, putting soldiers a step ahead of the enemy.

At Global Wolf, battlefield testing for the Mosphera has led to small but significant iterations of the product, which have come naturally as soldiers use it. One scooter-related problem on the front turned out to be resupplying soldiers in entrenched positions with ammunition. Just as urban scooters have become last-mile delivery solutions in cities, troops found that the Mosphera was well suited to shuttling small quantities of ammo at high speeds across rough ground or through forests. To make this job easier, Global Wolf tweaked the design of the vehicle’s optional extra trailer so that it perfectly fits eight NATO standard-sized bullet boxes.

Within weeks of Russia’s full-scale invasion, Mosphera scooters were at Ukraine’s front line—and even behind it, being used by Ukrainian special forces scouts.
GLOBAL WOLF

Some snipers prefer the electric Mosphera to noisy motorbikes or quads, using the vehicles to weave between trees to get into position. But they also like to shoot from the saddle—something they couldn’t do from the scooter’s footplate. So Global Wolf designed a stable seat that lets shooters fire without having to dismount. Some units wanted infrared lights, and the company has made those, too. These types of requests give the team ideas for new upgrades: “It’s like buying a car,” Asmanis says. “You can have it with air conditioning, without air conditioning, with heated seats.”

Being battle-tested is already proving to be a powerful marketing tool. Bukavs told me he thinks defense ministers are getting closer to moving from promises toward “action.” The Latvian police have bought a handful of Mospheras, and the country’s military has acquired some, too, for special forces units. (“We don’t have any information on how they’re using them,” Asmanis says. “It’s better we don’t ask,” Bukavs interjects.) Military distributors from several other countries have also approached the company about marketing its vehicles locally.

Although they say their donations were motivated first and foremost by a desire to help Ukraine resist the Russian invasion, Bukavs and Asmanis admit that they have been paid back for their philanthropy many times over. 

Of course, all this could change soon, and the Ukraine “track” could very well be disrupted when Trump returns to office in January. The US has provided more than $64 billion worth of military aid to Ukraine since the start of the full-scale invasion. A significant amount of that has been spent in Europe, in what Wang calls a kind of “drop-shipping”—Ukraine asks for drones, for instance, and the US buys them from a company in Europe, which ships them directly to the war effort. 

Wang showed me a recent pitch deck from one European military-tech startup. In assessing the potential budgets available for its products, it compares the Ukrainian budget, which was in the tens of millions of dollars, with the “donated from everybody else” budget, which was a billion dollars. A large amount of that “everybody else” money comes from the US.

If, as many analysts expect, the Trump administration dramatically reduces or entirely stops US military aid to Ukraine, these young companies focused on military tech and dual-use tech will likely take a hit. “Ideally, the European side will step up their spending on European companies, but there will be a short-term gap,” Wang says.

A lasting change? 

Russia’s full-scale invasion exposed how significantly the military-industrial complex in Europe has withered since the Cold War. Across the continent, governments have cut back investments in hardware like ships, tanks, and shells, partly because of a belief that wars would be fought on smaller scales, and partly to trim their national budgets. 

“After decades of Europe reducing its combat capability,” Pollaks says, “now we are in the situation we are in. [It] will be a real challenge to ramp it up. And the way to do that, at least from our point of view, is real close integration between industry and the armed forces.”

This would hardly be controversial in the US, where the military and the defense industry often work closely together to develop new systems. But in Europe, this kind of collaboration would be “a bit wild,” Pollaks says. Militaries tend to be more closed off, working mainly with large defense contractors, and European investors have tended to be more squeamish about backing companies whose products could end up going to war.

As a result, despite the many positive signs for the developers of military tech, progress in overhauling the broader supply chain has been slower than many people in the sector would like.

Several founders of dual-use and military-tech companies in Latvia and the other Baltic states tell me they are often invited to events where they pitch to enthusiastic audiences of policymakers, but they never see any major orders afterward. “I don’t think any amount of VC blogging or podcasting will change how the military actually procures technology,” says Project A’s Wang. Despite what’s happening next door, Ukraine’s neighbors are still ultimately operating in peacetime. Government budgets remain tight, and even if the bureaucracy has become more flexible, layers upon layers of red tape remain.  

Soldiers of the Latvian National Defense Service learn field combat skills in a training exercise.
GATIS INDRĒVICS/LATVIAN MINISTRY OF DEFENSE

Even Global Wolf’s Bukavs laments that a caravan of political figures has visited their factory but has not rewarded the company with big contracts. Despite Ukraine’s requests for the Mosphera scooters, for instance, they ultimately weren’t included in Latvia’s 2024 package of military aid due to budgetary constraints. 

What this suggests is that European governments have learned a partial lesson from Ukraine—that startups can give you an edge in conflict. But experts worry that the continent’s politics means it may still struggle to innovate at speed. Many Western European countries have built up substantial bureaucracies to protect their democracies from corruption or external influences. Authoritarian states aren’t so hamstrung, and they, too, have been watching the war in Ukraine closely. Russian forces are reportedly testing Chinese and Iranian drones at the front line. Even North Korea has its own drone program. 

The solution isn’t necessarily to throw out the mechanisms for accountability that are part of democratic society. But the systems built up to ensure good governance have also created fragility, sometimes leading governments to worry more about the politics of procurement than about preparing for crises, according to Ilves and other policy experts I spoke to.

“Procurement problems grow bigger and bigger when democratic societies lose trust in leadership,” says Ilves, who now advises Ukraine’s Ministry of Digital Transformation on cybersecurity policy and international cooperation. “If a Twitter [troll] starts to go after a defense procurement budget, he can start to shape policy.”

That makes it hard to give financial support to a tech company whose products you don’t need now, for example, but whose capabilities might be useful to have in an emergency—a kind of merchant marine for technology, on constant reserve in case it’s needed. “We can’t push European tech to keep innovating imaginative crisis solutions,” Ilves says. “Business is business. It works for money, not for ideas.” 

Even in Riga the war can feel remote, despite the Ukrainian flags flying from windows and above government buildings. Conversations about ordnance delivery and electronic warfare held in airy warehouse conversions can feel academic, even faintly absurd. In one incubator hub I visited in April, a company building a heavy-duty tracked ATV worked next door to an accounting software startup. On the top floor, bean bag chairs were laid out and a karaoke machine had been set up for a party that evening. 

A sense of crisis is needed to jolt politicians, companies, and societies into understanding that the front line can come to them, Ilves says: “That’s my take on why I think the Baltics are ahead. Unfortunately not because we are so smart, but because we have this sense of necessity.” 

Nevertheless, she says her experience over the past few years suggests there’s cause for hope if, or when, danger breaks through a country’s borders. Before the full-scale invasion, Ukraine’s government wasn’t exactly popular among the domestic business and tech communities. “And yet, they came together and put their brains and resources behind [the war effort],” she says. “I have a feeling that our societies are sometimes better than we think.” 

Peter Guest is a journalist based in London. 

Inside Clear’s ambitions to manage your identity beyond the airport

If you’ve ever been through a large US airport, you’re probably at least vaguely aware of Clear. Maybe your interest (or irritation) has been piqued by the pods before the security checkpoints, the attendants in navy blue vests who usher clients to the front of the security line (perhaps just ahead of you), and the sometimes pushy sales pitches to sign up and skip ahead yourself. After all, is there anything people dislike more than waiting in line?

Its position in airports has made Clear Secure, with its roughly $3.75 billion market capitalization, the most visible biometric identity company in the United States. Over the past two decades, Clear has put more than 100 lanes in 58 airports across the US, and in the past decade it has entered 17 sports arenas and stadiums, from San Jose to Denver to Atlanta. Now you can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber.

And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor’s office—or anywhere else that you currently have to pull out a wallet (or, of course, wait in line). The company that has helped millions of vetted members skip airport security lines is now working to expand its “frictionless,” “face-first” line-cutting service from the airport to just about everywhere, online and off, by promising to verify that you are who you say you are and you are where you are supposed to be. In doing so, CEO Caryn Seidman Becker told investors in an earnings call earlier this year, it has designs on being no less than the “identity layer of the internet,” as well as the “universal identity platform” of the physical world.

All you have to do is show up—and show your face. 

This is enabled by biometric technology, but Clear is far more than just a biometrics company. As Seidman Becker has told investors, “biometrics aren’t the product … they are a feature.” Or, as she put it in a 2022 podcast interview, Clear is ultimately a platform company “no different than Amazon or Apple”—with dreams, she added, “of making experiences safer and easier, of giving people back their time, of giving people control, of using technology for … frictionless experiences.” (Clear did not make Seidman Becker available for an interview.)

While the company has been building toward this sweeping vision for years, it now seems the time has finally come. A confluence of factors is currently accelerating the adoption of—even necessity for—identity verification technologies: increasingly sophisticated fraud, supercharged by artificial intelligence that is making it harder to distinguish who or what is real; data breaches that seem to occur on a near daily basis; consumers who are more concerned about data privacy and security; and the lingering effects of the pandemic’s push toward “contactless” experiences. 

All of this is creating a new urgency around ways to verify information, especially our identities—and, in turn, generating a massive opportunity for Clear. For years, Seidman Becker has been predicting that biometrics will go mainstream. 

But now that biometrics have, arguably, gone mainstream, what—and who—bears the cost? Because convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. Some critics warn that not everyone will benefit from a world where identity is routed through Clear—maybe because it’s too expensive, and maybe because biometric technologies are often less effective at identifying people of color, people with disabilities, or those whose gender identity may not match what official documents say.

What’s more, says Kaliya Young, an identity expert who has advised the US government, having a single private company “disintermediating” our biometric data—especially facial data—is the wrong “architecture” to manage identity. “It seems they are trying to create a system like login with Google, but for everything in real life,” Young warns. While the single sign-on option that Google (or Facebook or Apple) provides for websites and apps may make life easy, it also poses greater security and privacy risks by putting both our personal data and the keys to it in the hands of a single profit-driven entity: “We’re basically selling our identity soul to a private company, who’s then going to be the gatekeeper … everywhere one goes.” 

Though Clear remains far less well known than Google, more than 27 million people have already helped it become that very gatekeeper—and “one of the largest private repositories of identities on the planet,” as Nicholas Peddy, Clear’s chief technology officer, put it in an interview with MIT Technology Review this summer. 

With Clear well on the way to realizing its plan for a frictionless future, it’s time to try to understand both how we got here and what we have (been) signed up for.

A new frontier in identity management

Imagine this: On a Friday morning in the near future, you are rushing to get through your to-do list before a weekend trip to New York. 

In the morning, you apply for a new job on LinkedIn. During lunch, assured that recruiters are seeing your professional profile because it’s been verified by Clear, you pop out to Home Depot, confirm your identity with a selfie, and rent a power drill for a quick bathroom repair. Then, in the midafternoon, you drive to your doctor’s office; having already verified your identity—prompted by a text message sent a few days earlier—you confirm your arrival with a selfie at a Clear kiosk. Before you go to bed, you plan your morning trip to the airport and set an alarm—but not too early, because you know that with Clear, you can quickly drop your bags and breeze through security.

Once you’re in New York, you head to Barclays Center, where you’ll be seeing your favorite singer; you skip the long queue out front to hop in the fast-track Clear line. It’s late when the show is over, so you grab an Uber home and barely need to wait for a driver, who feels more comfortable thanks to your verified rider profile. 

At no point did you pull out your driver’s license or fill out repetitive paperwork. All that was already on file. Everything was easy; everything was frictionless.

More than 27 million people have already helped Clear become “one of the largest private repositories of identities on the planet.”

This, at least, is the world that Clear is actively building toward. 

Part of Clear’s power, Seidman Becker often says, is that it can wholly replace our wallets: our credit cards, driver’s licenses, health insurance cards, perhaps even building key fobs. But you can’t just suddenly be all the cards you carry. For Clear to link your digital identity to your real-world self, you must first give up a bit of personal data—specifically, your biometric data. 

Biometrics refers to the unique physical and behavioral characteristics—faces, fingerprints, irises, voices, and gaits, among others—that identify each of us as individuals. For better or worse, they typically remain stable during our lifetimes. 

Relying on biometrics for identification can be convenient, since people are apt to misplace a wallet or forget the answer to a security question. But on the other hand, if someone manages to compromise a database of biometric information, that convenience can become dangerous: We cannot easily change our face or fingerprint to secure our data again, the way we could change a compromised password. 

On a practical level, there are generally two ways that biometrics are used to identify individuals. The first, often called “one-to-many” or “one-to-n” matching, compares one person’s biometric identifier with a database full of them. This is sometimes associated with a stereotypical idea of dystopian surveillance in which real-time facial recognition from live video could allow authorities to identify anyone walking down the street. The other, “one-to-one” matching, is the basis for Clear; it compares a biometric identifier (like the face of a live person standing before an airport agent) with a previously recorded biometric template (such as a passport photo) to verify that they match. This is usually done with the individual’s knowledge and consent, and it arguably poses a lower privacy risk. Often, one-to-one matching includes a layer of document verification, like checking that your passport is legitimate and matches a photograph you used to register with the system.
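
The distinction can be made concrete with a toy sketch. The three-number “templates,” the names, and the similarity threshold below are invented stand-ins for real biometric embeddings, which are high-dimensional vectors with carefully tuned thresholds; this illustrates the two matching modes, not any vendor’s actual algorithm.

```python
import math

THRESHOLD = 0.95  # invented similarity cutoff for "same person"

def cosine_similarity(a, b):
    """Similarity between two template vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_one_to_one(live, enrolled):
    """1:1 verification: does this live capture match one stored template?"""
    return cosine_similarity(live, enrolled) >= THRESHOLD

def identify_one_to_many(live, database):
    """1:n identification: best match across the whole database, if any."""
    best = max(database, key=lambda name: cosine_similarity(live, database[name]))
    return best if cosine_similarity(live, database[best]) >= THRESHOLD else None

db = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.5]}
capture = [0.88, 0.12, 0.41]  # a new capture, close to Alice's template

print(verify_one_to_one(capture, db["alice"]))  # True: 1:1 check passes
print(identify_one_to_many(capture, db))        # "alice": 1:n search finds her
```

The privacy difference falls out of the code: `verify_one_to_one` only ever touches the single template the person consented to compare against, while `identify_one_to_many` must sweep the entire database for every query.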

The US Congress urgently saw the need for better identity management following the September 11 terrorist attacks; 18 of the 19 hijackers used fake identity documents to board their flights. In the aftermath, the newly created Transportation Security Administration (TSA) implemented security processes that slowed down air travel significantly. Part of the problem was that “everybody was just treated the same at airports,” recalls the serial media entrepreneur Steven Brill—including, famously, former vice president Al Gore. “It sounded awfully democratic … but in terms of basic risk management and allocation of resources, it just didn’t make any sense.” 

Congress agreed, authorizing the TSA to create a program that would allow people who passed background checks to be recognized as trusted travelers and skip some of the scrutiny at the airport. 

In 2007, San Francisco’s then mayor, Gavin Newsom, had his irises scanned by Clear at San Francisco International Airport.
DAVID PAUL MORRIS/GETTY

In 2003, Brill teamed up with Ajay Amlani, a technology entrepreneur and former adviser to the Department of Homeland Security, and founded a company called Verified Identity Pass (VIP) to provide biometric identity verification in the TSA’s new program. “The vision,” says Amlani, “was a unified fast lane—similar to a toll lane.”

It appeared to be a win-win solution. The TSA had a private-sector partner for its registered-traveler program; VIP had a revenue stream from user fees; airports got a cut of the fees in exchange for leasing VIP space; and initial members—typically frequent business travelers—were happy to cut down on airport wait times. 

By 2005, VIP had launched in its first airport, Orlando International in Florida. Members—initially paying $80—received “Clear cards” that contained a cryptographic representation of their fingerprint, iris scans, and a photo of their face taken at enrollment. They could use those cards at the airport to be escorted to the front of the security lines.

The defense contracting giant Lockheed Martin, which already provided biometric capabilities to the US Department of Defense and the FBI, was responsible for deploying and providing technology for VIP’s system, with additional technical expertise from Oracle and others. This left VIP to “focus on marketing, pricing, branding, customer service, and consumer privacy policies,” as the president of Lockheed Transportation and Security Solutions, Don Antonucci, said at the time. 

By 2009, nearly 200,000 people had joined. The company had received $116 million in investments and signed contracts with about 20 airports. It all seemed so promising. But VIP had already inadvertently revealed the risks inherent in a system built on sensitive personal data.

A lost laptop and a big opportunity

From the beginning, there were concerns about the implications of VIP’s Clear card for privacy, civil liberty, and equity, as well as questions about its effectiveness at actually stopping future terrorist attacks. Advocacy groups like the Electronic Privacy Information Center (EPIC) warned that the biometrics-based system would result in a surveillance infrastructure built on sensitive personal information, but data from the Pew Research Center shows that a majority of the public at the time felt that it was generally necessary to sacrifice some civil liberties in the name of safety.

Then a security lapse sent the whole operation crumbling. 

In the summer of 2008, VIP reported that an unencrypted company laptop containing addresses, birthdays, and driver’s license and passport numbers of 33,000 applicants had gone missing from an office at San Francisco International Airport (SFO)—even though TSA’s security protocol required it to encrypt all laptops holding personal data. 

a hand reaches into drawers containing sensitive personal data from behind the user's profile image

NEIL WEBB

The laptop was found about two weeks later and the company said no data was compromised. But it was still a mess for VIP. Months later, investors pushed Brill out, and associated costs led the company to declare bankruptcy and close the following year. 

Disgruntled users filed a class action lawsuit against VIP to recoup membership fees and “punitive damages.” Some users were upset they had recently renewed their subscriptions, and others worried about what would happen to their personal information. A judge temporarily prevented the company from selling user data, but the decision didn’t hold. 

Seidman Becker and her longtime business partner Ken Cornick, both hedge fund managers, saw an opportunity. In 2010, they bought VIP—and its user data—in a bankruptcy sale for just under $6 million and registered a new company called Alclear. “I was a big believer in biometrics,” Seidman Becker told the tech journalists Kara Swisher and Lauren Goode in 2017. “I wanted to build something that made the world a better place, and Clear was that platform.” 

Initially, the new Clear followed closely in the footsteps of its predecessor: Lockheed Martin transferred the members’ information to the new company, which had acquired VIP’s hardware and continued to use Clear cards to hold members’ biometrics.

After the relaunch, Clear also started building partnerships with other companies in the travel industry—including American Express, United Airlines, Alaska Airlines, Delta Air Lines, and Hertz—to bundle its service for free or at a discount. (Clear declined to specify how many of its users have such discounts, but in earnings calls the company has stressed its efforts to reduce the number of members paying reduced rates.)

By 2014, improvements in internet latency and biometric processing speeds allowed Clear to eliminate the cards and migrate to a server-based system—without compromising data security, the company says. Clear emphasizes that it meets industry standards for keeping data secure, with methods including encryption, firewalls, and regular penetration testing by both internal and external teams. The company says it also maintains “locked boxes” around data relating to air travelers. 

Still, the reality is that every database of this kind is ultimately a target, and “almost every day there’s a massive breach or hack,” says Chris Gilliard, a privacy and surveillance researcher who was recently named co-director of the Critical Internet Studies Institute. Over the years, even apparently well-protected biometric information has been compromised. Last year, for instance, a data breach at the genetic testing company 23andMe exposed sensitive information—including geographic locations, birth years, family trees, and user-uploaded photos—from nearly 7 million customers. 

This is what Young, who helped facilitate the creation of the open-source identity management standards OpenID Connect and OAuth, means when she says that Clear has the wrong “architecture” for managing digital identity; it’s too much of a risk to keep our digital identities in a central database, cryptographically protected or not. She and many other identity and privacy experts believe that the most privacy-protecting way to manage digital identity is to “use credentials, like a mobile driver’s license, stored on people’s devices in digital wallets,” she says. “These digital credentials can have biometrics, but the biometrics in a central database are not being pinged for day-to-day use.”
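
The wallet architecture Young describes can be sketched as follows. The verifier checks the issuer’s signature on a credential presented from the user’s own device, with no lookup against any central database. Here an HMAC with a shared key stands in for the issuer’s public-key signature (real mobile driver’s licenses use asymmetric signatures); the names, claims, and key are all invented for illustration.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's real signing key

def issue_credential(claims):
    """Issuer signs a set of claims once; the result lives in the user's wallet."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_credential(cred):
    """Verifier checks the signature locally -- no central database is pinged."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

license_cred = issue_credential({"name": "A. Traveler", "over_21": True})
print(verify_credential(license_cred))  # True: signature checks out

# Any tampering with the claims breaks the signature:
tampered = {"claims": {**license_cred["claims"], "over_21": False},
            "signature": license_cred["signature"]}
print(verify_credential(tampered))  # False
```

The point of the design is that the sensitive data stays on the holder’s device; the verifier only needs the issuer’s key material to decide whether to trust what is presented.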

But it’s not just data that’s potentially vulnerable. In 2022 and 2023, Clear faced three high-profile security incidents in airports, including one in which a passenger successfully got through the company’s checks using a boarding pass found in the trash. In another, a traveler in Alabama used someone else’s ID to register for Clear and, later, to successfully pass initial security checks; he was discovered only when he tried to bring ammunition through a subsequent checkpoint. 

This spurred an investigation by the TSA, which turned up more alarming information: Nearly 50,000 photos used by Clear to enroll customers were flagged as “non-matches” by the company’s facial recognition software. Some photos didn’t even contain full faces, according to Bloomberg. (In a press release after the incident, the company disputed the reporting, describing it as “a single human error—having nothing to do with our technology” and stating that “the images in question were not relied upon during the secure, multi-layered enrollment process.”)

“How do you get to be the one?”

When I spoke to Brill this spring, he told me he’d always envisioned that Clear would expand far beyond the airport. “The idea I had was that once you had a trusted identity, you would potentially be able to use it for a lot of different things,” he said, but “the trick is to get something that is universally accepted. And that’s the battle that Clear and anybody else has to fight, which is: How do you get to be the one?”

Goode Intelligence, a market research firm that focuses on the booming identity space, estimates that by 2029, there will be 1.5 billion digital identity wallets around the world—with use for travel leading the way and generating an estimated $4.6 billion in revenue. Clear is just one player, and certainly not the biggest. ID.me, for instance, provides similar face-based identity verification and has over 130 million users, dwarfing Clear’s roughly 27 million. It’s also already in use by numerous US federal and state agencies, including the IRS. 

The reality is that every database of this kind is ultimately a target, and “almost every day there’s a massive breach or hack.”

But as Goode Intelligence CEO Alan Goode tells me, Clear’s early-mover advantage, particularly in the US, “puts it in a good space within North America … [to] be more pervasive”—or to become what Brill called “the one” that is most closely stitched into people’s daily lives. 

Clear began growing beyond travel in 2015, when it started offering biometric fast-pass access to what was then AT&T Park in San Francisco. Stadiums across California, Colorado, and Washington, and in major cities in other states, soon followed. Fans can simply download the free Clear app and scan the QR code to bypass normal lines in favor of designated Clear lanes. For a time, Clear also promoted its biometric payment systems at some venues, including two in Seattle, which could include built-in age verification. It even partnered with Budweiser for a “Bud Now” machine that used your fingerprint to verify your identity, age, and payment. (These payment programs, which a Clear representative called “pilots” in an email, have since ended; representatives for the Seattle Mariners and Seahawks did not respond to multiple requests for comment on why.) Clear’s programs for expedited event access have been popular enough to drive greater user growth than its paid airport service, according to numbers provided by the company. 

Then came the pandemic, hitting Clear (and the entire travel industry) hard. But the crisis for Clear’s primary business actually accelerated its move into new spaces with “Health Pass,” which allowed organizations to confirm the health status of employees, residents, students, and visitors who sought access to a physical space. Users could upload vaccination cards to the Health Pass section in the Clear mobile app; the program was adopted by nearly 70 partners in 110 unique locations, including NFL stadiums, the Mariners’ T-Mobile Park, and the 9/11 Memorial Museum. 

Demand for vaccine verification eventually slowed, and Health Pass shut down in March 2024. But as Jason Sherwin, Clear’s senior director of health-care business development, said in a podcast interview earlier this year, it was the company’s “first foray into health care”—the business line that currently represents its “primary focus across everything we’re doing outside of the airport.” Today, Clear kiosks for patient sign-ins are being piloted at Georgia’s Wellstar Health Systems, in conjunction with one of the largest providers of electronic health records in the United States: Epic (which is unrelated to the privacy nonprofit). 

What’s more, Health Pass enabled Clear to expand at a time when the survival of travel-focused businesses wasn’t guaranteed. In November 2020, Clear had roughly 5 million members; today, that number has grown fivefold. The company went public in 2021 and has experienced double-digit revenue growth annually. 

These doctor’s office sign-ins, in which the system verifies patient identity via a selfie, rely on what’s called Clear Verified, a platform the company has rolled out over the past several years that allows partners (health-care systems, as well as brick-and-mortar retailers, hotels, and online platforms) to integrate Clear’s identity checks into their own user-verification processes. It again seems like a win-win situation: Clear gets more users and a fee from companies using the platform, while companies confirm customers’ identity and information, and customers, in theory, get that valuable frictionless experience. One high-profile partnership, with LinkedIn, was announced last year: “We know authenticity matters and we want the people, companies and jobs you engage with everyday to be real and trusted,” Oscar Rodriguez, LinkedIn’s head of trust and privacy, said in a press release. 

All this comes together to create the foundation for what is Clear’s biggest advantage today: its network. The company’s executives often speak about its “embedded” users across various services and platforms, as well as its “ecosystem,” meaning the venues where it is used. As Peddy explains, the value proposition for Clear today is not necessarily any particular technology or biometric algorithm, but how it all comes together—and can work universally. Clear would be “wherever our consumers need us to be,” he says—it would “sort of just be this ubiquitous thing that everybody has.”

Clear CEO Caryn Seidman Becker (left) rings the bell at the New York Stock Exchange in 2021.
NYSE VIA TWITTER

A prospectus to investors from the company’s IPO makes the pitch simple: “We believe Clear enables our partners to capture not just a greater share of their customers’ wallet, but a greater share of their overall lives.” 

The more Clear is able to reach into customers’ lives, the more valuable customer data it can collect. All user interactions and experiences can be tracked, the company’s privacy policy explains. While the policy states that Clear will not sell data and will never share biometric or health information without “express consent,” it also lays out the non-health and non-biometric data that it collects and can use for consumer research and marketing. This includes members’ demographic details, a record of every use of Clear’s various products, and even digital images and videos of the user. Documents obtained by OneZero offer some further detail into what Clear has at least considered doing with customer data: David Gershgorn wrote about a 2015 presentation to representatives from Los Angeles International Airport, titled “Identity Dashboard—Valuable Marketing Data,” which “showed off” what the company had collected, including the number of sports games users had attended and with whom, which credit cards they had, their favorite airlines and top destinations, and how often they flew first class or economy. 

Clear representatives emphasized to MIT Technology Review that the company “does not share or sell information without consent,” though they “had nothing to add” in response to a question about whether Clear can or does aggregate data to derive its own marketing insights, a business model popularized by Facebook. “At Clear, privacy and security are job one,” spokesperson Ricardo Quinto wrote in an email. “We are opt-in. We never sell or share our members’ information and utilize a multilayered, best-in-class infosec system that meets the highest standards and compliance requirements.” 

Nevertheless, this influx of customer data isn’t just good for business; it’s also risky for customers. It creates “another attack surface,” Gilliard warns. “This makes us less safe, not more, as a consistent identifier across your entire public and private life is the dream of every hacker, bad actor, and authoritarian.”

A face-based future for some

Today, Clear is in the middle of another major change: replacing its use of iris scans and fingerprints with facial verification in airports—part of “a TSA-required upgrade in identity verification,” a TSA spokesperson wrote in an email to MIT Technology Review.

For a long time, facial recognition technology “for the highest security purposes” was “not ready for prime time,” Seidman Becker told Swisher and Goode back in 2017. It wasn’t operating with “five nines,” she added—that is, “99.999% from a matching and an accuracy perspective.” But today, facial recognition has “significantly improved” and the company has invested “in enhancing image quality through improved capture, focus, and illumination,” according to Quinto.

Clear says switching to facial images in airports will further decrease friction, enabling travelers to verify their identity so effortlessly it’s “almost like you don’t really break stride,” Peddy says. “You walk up, you scan your face. You walk straight to the TSA.”

The move is part of a broader shift toward facial recognition technology in US travel, bringing the country in line with practices at many international airports. The TSA began expanding facial identification from a few pilot programs this year, while airlines including Delta and United are also introducing face-based boarding, baggage drops, and even lounge access. And the International Air Transport Association, a trade group for the airline industry, is rolling out a “contactless travel” process that will allow passengers to check in, drop off their bags, and board their flights—all without showing either passports or tickets, just their faces. 

a crowd of people with their faces obscured by a bright glow

NEIL WEBB

Privacy experts worry that relying on faces for identity verification is even riskier than other biometric methods. After all, “it’s a lot easier to scan people’s faces passively than it is to scan irises or take fingerprints,” Senator Jeff Merkley of Oregon, an outspoken critic of government surveillance and of the TSA’s plans to employ facial verification at airports, said in an email. The point is that once a database of faces is built, it is potentially far more useful for surveillance purposes than, say, fingerprints. “Everyone who values privacy, freedom, and civil rights should be concerned about the increasing, unchecked use of facial recognition technology by corporations and the federal government,” Merkley wrote.

Even if Clear is not in the business of surveillance today, it could, theoretically, pivot or go bankrupt and (again) sell off its parts, including user data. Jeramie Scott, senior counsel and director of the Project on Surveillance Oversight at EPIC, says that ultimately, the “lack of federal [privacy] regulation” means that we’re just taking the promises of companies like Clear at face value: “Whatever they say about how they implement facial recognition today does not mean that that’s how they’ll be implementing facial recognition tomorrow.” 

Making this particular scenario potentially more concerning is that the images stored by this private company are “generally going to be much higher quality” than those collected by scraping the internet—which Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project (STOP), says would make its data far more useful for surveillance than that held by more controversial facial recognition companies like Clearview AI. 

Even a far less pessimistic read of Clear’s data collection reveals the challenges of using facial identification systems, which—as a 2019 report from the National Institute of Standards and Technology revealed—have been shown to work less effectively in certain populations, particularly people of African and East Asian descent, women, and elderly and very young people. NIST has also not tested identification accuracy for individuals who are transgender, but Gilliard says he expects the algorithms would fall short.

More recent testing shows that some algorithms have improved, NIST spokesperson Chad Boutin tells MIT Technology Review—though accuracy is still short of the “five nines” that Seidman Becker once said Clear was aiming for. (Quinto, the Clear representative, maintains that Clear’s recent upgrades, combined with the fact that the company’s testing involves “comparing member photos to smaller galleries, rather than the millions used in NIST scenarios,” means its technology “remains accurate and suitable for secure environments like airports.”)
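Clear’s point about gallery size reflects a general property of one-to-many (1:N) matching: if each individual comparison carries a small false-match rate, the chance of at least one false match grows with the number of enrolled templates searched. A minimal sketch of that relationship, using a hypothetical per-comparison false-match rate and assuming independent comparisons (a simplification of how real systems behave):

```python
# Illustrative only: how the chance of at least one false match in a
# 1:N identification search grows with gallery size. The per-comparison
# false-match rate (FMR) below is a hypothetical figure, not a measured
# one, and comparisons are treated as independent for simplicity.

def prob_any_false_match(fmr: float, gallery_size: int) -> float:
    """P(at least one false match) = 1 - (1 - fmr)^N."""
    return 1.0 - (1.0 - fmr) ** gallery_size

per_comparison_fmr = 1e-6  # assume one false match per million comparisons

for n in (1_000, 100_000, 10_000_000):
    p = prob_any_false_match(per_comparison_fmr, n)
    print(f"gallery of {n:>10,}: P(any false match) ~ {p:.4f}")
```

Under these assumed numbers, a thousand-person gallery produces a false match about 0.1% of the time, while a ten-million-person gallery does so almost always — which is why matching against a small member roster is an easier problem than the large-scale scenarios NIST evaluates.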

Even a very small error rate “in a system that is deployed hundreds of thousands of times a day” could still leave “a lot of people” at risk of misidentification, explains Hannah Quay-de La Vallee, a technologist at the Center for Democracy & Technology, a nonprofit based in Washington, DC. All this could make Clear’s services inaccessible to some—even if they can afford it, which is less likely given the recent increase in the subscription fee for travelers to $199 a year.
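The scale Quay-de La Vallee describes is easy to make concrete. A back-of-envelope sketch, in which both the accuracy figures and the daily verification volume are hypothetical assumptions rather than published Clear or TSA numbers:

```python
# Illustrative arithmetic only: how a small per-verification error rate
# scales with daily volume. The accuracy levels and daily volume are
# assumed values for the sake of the example.

def expected_daily_errors(accuracy: float, verifications_per_day: int) -> float:
    """Expected number of misidentified travelers per day."""
    return (1.0 - accuracy) * verifications_per_day

daily_volume = 500_000  # assumed nationwide daily verifications

# "Five nines" (99.999%) accuracy: roughly 5 affected travelers per day.
print(expected_daily_errors(0.99999, daily_volume))

# A still-impressive 99.9% accuracy: roughly 500 per day.
print(expected_daily_errors(0.999, daily_volume))
```

Even at the aspirational “five nines,” a handful of travelers would be misidentified every day; two orders of magnitude less accuracy turns that into hundreds.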

The free Clear Verified Platform is already giving rise to access problems in at least one partnership, with LinkedIn. The professional networking site encourages users to verify their identities either with an employer email address or with Clear, which marketing materials say will yield more engagement. But some LinkedIn users have expressed concerns, claiming that even after uploading a selfie, they were unable to verify their identities with Clear if they were subscribed to a smaller phone carrier or had simply not had their phone number long enough. As one Reddit user emphasized, “Getting verified is a huge deal when getting a job.” LinkedIn said it does not enable recruiters to filter, rank, or sort by whether a candidate has a verification badge, but also said that verified information does “help people make more informed decisions as they build their network or apply for a job.” Clear only said it “works with our partners to provide them with the level of identity assurance that they require for their customers” and referred us back to LinkedIn.

An opt-in future that may not really be optional 

Maybe what’s worse than waiting in line, or even being cut in front of, is finding yourself stuck in what turns out to be the wrong line—perhaps one that you never want to be in. 

That may be how it feels if you don’t use Clear and similar biometric technologies. “When I look at companies stuffing these technologies into vending machines, fast-food restaurants, schools, hospitals, and stadiums, what I see is resignation rather than acceptance—people often don’t have a choice,” says Gilliard, the privacy and surveillance scholar. “The life cycle of these things is that … even when it is ‘optional,’ oftentimes it is difficult to opt out.”

And while the stakes may seem relatively low—Clear is, after all, a voluntary membership program—they will likely grow as the system is deployed more widely. As Seidman Becker said on Clear’s latest earnings call in early November, “The lines between physical and digital interactions continue to blur. A verified identity isn’t just a check mark. It’s the foundation for everything we do in a high-stakes digital world.” Consider a job ad posted by Clear earlier this year, seeking to hire a vice president for business development; it noted that the company has its eye on a number of additional sectors, including financial services, e-commerce, P2P networking, “online trust,” gaming, government, and more. 

“Increasingly, companies and the government are making the submission of your biometrics a barrier to participation in society,” Gilliard says. 

This will be particularly true at the airport, with the increasing ubiquity of facial recognition across all security checks and boarding processes, and where time-crunched travelers could be particularly vulnerable to Clear’s sales pitch. Airports have even privately expressed concerns about these scenarios to Clear. Correspondence from early 2022 between the company and staff at SFO, released in response to a public records request, reveals that the airport “received a number of complaints” about Clear staff “improperly and deceitfully soliciting approaching passengers in the security checkpoint lanes outside of its premises,” with an airport employee calling it “completely unacceptable” and “aggressive and deceptive behavior.” 

Of course, this isn’t to say everyone with a Clear membership was coerced into signing up. Many people love it; the company told MIT Technology Review that it had a nearly 84% retention rate earlier this year. Still, for some experts, it’s worrisome to think that what Clear users are comfortable with ends up setting the ground rules for the rest of us. 

“We’re going to normalize potentially a bunch of biometric stuff but not have a sophisticated conversation about where and how we’re normalizing what,” says Young. She worries this will empower “actors who want to move toward a creepy surveillance state, or corporate surveillance capitalism on steroids.” 

“Without understanding what we’re building or how or where the guardrails are,” she adds, “I also worry that there could be major public backlash, and then legitimate uses [of biometric technology] are not understood and supported.”

But in the meantime, even superfans are grumbling about an uptick in wait times in airport Clear lines. After all, if everyone decides to cut to the front of the line, that just creates a new long line of line-cutters.

Palmer Luckey on the Pentagon’s future of mixed reality

Palmer Luckey has, in some ways, come full circle. 

His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.

Now Luckey is redirecting his energy again, to headsets for the military. In September, Anduril announced it would partner with Microsoft on the US Army’s Integrated Visual Augmentation System (IVAS), arguably the military’s largest effort to develop a headset for use on the battlefield. Luckey says the IVAS project is his top priority at Anduril.

“There is going to be a heads-up display on every soldier within a pretty short period of time,” he told MIT Technology Review in an interview last week on his work with the IVAS goggles. “The stuff that we’re building—it’s going to be a big part of that.”

Though few would bet against Luckey’s expertise in mixed reality, hardly any observers share his optimism for the IVAS program. They view it, thus far, as an avalanche of failures.

IVAS was first approved in 2018 as an effort to build state-of-the-art mixed-reality headsets for soldiers. In March 2021, Microsoft was awarded nearly $22 billion over 10 years to lead the project, but it quickly became mired in delays. Just a year later, a Pentagon audit criticized the program for not properly testing the goggles, saying its choices “could result in wasting up to $21.88 billion in taxpayer funds to field a system that soldiers may not want to use or use as intended.” The first two variants of the goggles—of which the Army purchased 10,000 units—gave soldiers nausea, neck pain, and eye strain, according to internal documents obtained by Bloomberg.

Such reports have left IVAS on a short leash with members of the Senate Armed Services Committee, which helps determine how much money should be spent on the program. In a subcommittee meeting in May, Senator Tom Cotton, an Arkansas Republican and ranking member, expressed frustration at IVAS’s slow pace and high costs, and in July the committee suggested a $200 million cut to the program. 

Meanwhile, Microsoft has for years been cutting investments into its HoloLens headset—the hardware on which the IVAS program is based—for lack of adoption. In June, Microsoft announced layoffs to its HoloLens teams, suggesting the project is now focused solely on serving the Department of Defense. The company received a serious blow in August, when reports revealed that the Army is considering reopening bidding for the contract to oust Microsoft entirely. 

This is the catastrophe that Luckey’s stepped into. Anduril’s contribution to the project will be Lattice, an AI-powered system that connects everything from drones to radar jammers to surveil, detect objects, and aid in decision-making. Lattice is increasingly becoming Anduril’s flagship offering. It’s a tool that allows soldiers to receive instantaneous information not only from Anduril’s hardware, but also from radars, vehicles, sensors, and other equipment not made by Anduril. Now it will be built into the IVAS goggles. “It’s not quite a hive mind, but it’s certainly a hive eye” is how Luckey described it to me. 

Palmer Luckey holding an autonomous drone interceptor
Anvil, seen here held by Luckey in Anduril’s Costa Mesa Headquarters, integrates with the Lattice OS and can navigate autonomously to intercept hostile drones.
PHILIP CHEUNG

Boosted by Lattice, the IVAS program aims to produce a headset that can help soldiers “rapidly identify potential threats and take decisive action” on the battlefield, according to the Army. If designed well, the device will automatically sort through countless pieces of information—drone locations, vehicles, intelligence—and flag the most important ones to the wearer in real time. 

Luckey defends the IVAS program’s bumps in the road as exactly what one should expect when developing mixed reality for defense. “None of these problems are anything that you would consider insurmountable,” he says. “It’s just a matter of if it’s going to be this year or a few years from now.” He adds that delaying a product is far better than releasing an inferior product, quoting Shigeru Miyamoto, the game director of Nintendo: “A delayed game is delayed only once, but a bad game is bad forever.”

He’s increasingly convinced that the military, not consumers, will be the most important testing ground for mixed-reality hardware: “You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense.

“The gains are so different when we talk about life-or-death scenarios. You don’t have to worry about things like ‘Oh, this is kind of dorky looking,’ or ‘Oh, you know, this is slightly heavier than I would prefer,’” he says. “Because the alternatives of, you know, getting killed or failing your mission are a lot less desirable.”

Those in charge of the IVAS program remain steadfast in the expectation that it will pay off with huge gains for those on the battlefield. “If it works,” James Rainey, commanding general of the Army Futures Command, told the Armed Services Committee in May, “it is a legitimate 10x upgrade to our most important formations.” That’s a big “if,” and one that currently depends on Microsoft’s ability to deliver. Luckey didn’t get specific when I asked if Anduril was positioning itself to bid to become IVAS’s primary contractor should the opportunity arise. 

If that happens, US troops may, willingly or not, become the most important test subjects for augmented- and virtual-reality technology as it is developed in the coming decades. The commercial sector doesn’t have thousands of individuals within a single institution who can test hardware in physically and mentally demanding situations and provide their feedback on how to improve it. 

That’s one of the ways selling to the defense sector is very different from selling to consumers, Luckey says: “You don’t actually have to convince every single soldier that they personally want to use it. You need to convince the people in charge of him, his commanding officer, and the people in charge of him that this is a thing that is worth wearing.” The iterations that eventually come from IVAS—if it keeps its funding—could signal what’s coming next for the commercial market. 

When I asked Luckey if there were lessons from Oculus he had to unlearn when working with the Department of Defense, he said there’s one: worrying about budgets. “I prided myself for years, you know—I’m the guy who’s figured out how to make VR accessible to the masses by being absolutely brutal at every part of the design process, trying to get costs down. That isn’t what the DOD wants,” he says. “They don’t want the cheapest headset in a vacuum. They want to save money, and generally, spending a bit more money on a headset that is more durable or that has better vision—and therefore allows you to complete a mission faster—is definitely worth the extra few hundred dollars.”

I asked if he’s impressed by the progress that’s been made during his eight-year hiatus from mixed reality. Since he left Facebook in 2017, Apple, Magic Leap, Meta, Snap, and a cascade of startups have been racing to move the technology from the fringe to the mainstream. Everything in mixed reality is about trade-offs, he says. Would you like more computing power, or a lighter and more comfortable headset? 

With more time at Meta, “I would have made different trade-offs in a way that I think would have led to greater adoption,” he says. “But of course, everyone thinks that.” While he’s impressed with the gains, “having been on the inside, I also feel like things could be moving faster.”

Years after leaving, Luckey remains noticeably annoyed by one specific decision he thinks Meta got wrong: not offloading the battery. Dwelling on technical details is unsurprising from someone who spent his formative years living in a trailer in his parents’ driveway posting in obscure forums and obsessing over goggle prototypes. He pontificated on the benefits of packing the heavy batteries and chips in removable pucks that the user could put in a pocket, rather than in the headset itself. Doing so makes the headset lighter and more comfortable. He says he was pushing Facebook to go that route before he was ousted, but when he left, it abandoned the idea. Apple chose to have an external battery for its Vision Pro, which Luckey praised. 

“Anyway,” he told me. “I’m still sore about it eight years later.”

Speaking of soreness, Luckey’s most public professional wound, his ouster from Facebook in 2017, was partially healed last month. The story—involving countless Twitter threads, doxxing, retractions and corrections to news articles, suppressed statements, and a significant segment in Blake Harris’s 2019 book The History of the Future—is difficult to boil down. But here’s the short version: A donation by Luckey to a pro-Trump group called Nimble America in late 2016 led to turmoil within Facebook after it was reported by the Daily Beast. That turmoil grew, especially after Ars Technica wrote that his donation was funding racist memes (the founders of Nimble America were involved in the subreddit r/The_Donald, but the organization itself was focused on creating pro-Trump billboards). Luckey left in March 2017, but Meta has never disclosed why.

This April, Oculus’s former CTO John Carmack posted on X that he regretted not supporting Luckey more. Meta’s CTO, Andrew Bosworth, argued with Carmack, largely siding with Meta. In response, Luckey said, “You publicly told everyone my departure had nothing to do with politics, which is absolutely insane and obviously contradicted by reams of internal communications.” As the exchange went on, Bosworth cautioned that there are “limits on what can be said here,” to which Luckey responded, “I am down to throw it all out there. We can make everything public and let people judge for themselves. Just say the word.”

Six months later, Bosworth apologized to Luckey for the comments. Luckey responded, writing that although he is “infamously good at holding grudges,” neither Bosworth nor current leadership at Meta was involved in the incident. 

By now Luckey has spent years mulling over how much of his remaining anger is irrational or misplaced, but one thing is clear. He has a grudge left, but it’s against people behind the scenes—PR agents, lawyers, reporters—who, from his perspective, created a situation that forced him to accept and react to an account he found totally flawed. He’s angry about the steps Facebook took to keep him from communicating his side (Luckey has said he wrote versions of a statement at the time but that Facebook threatened further escalation if he posted it).

“What am I actually angry at? Am I angry that my life went in that direction? Absolutely,” he says.

“I have a lot more anger for the people who lied in a way that ruined my entire life and that saw my own company ripped out from under me that I’d spent my entire adult life building,” he says. “I’ve got plenty of anger left, but it’s not at Meta, the corporate entity. It’s not at Zuck. It’s not at Boz. Those are not the people who wronged me.”

While various subcommittees within the Senate and House deliberate how many millions to spend on IVAS each year, what is not in question is that the Pentagon is investing to prepare for a potential conflict in the Pacific between China and Taiwan. The Pentagon requested nearly $10 billion for the Pacific Deterrence Initiative in its latest budget. The prospect of such a conflict is something Luckey considers often.

He told the authors of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War that Anduril’s “entire internal road map” has been organized around the question “How do you deter China? Not just in Taiwan, but Taiwan and beyond?”

At this point, nothing about IVAS is geared specifically toward use in the Pacific as opposed to Ukraine or anywhere else. The design is in early stages. According to transcripts of a Senate Armed Services Subcommittee meeting in May, the military was scheduled to receive the third iteration of IVAS goggles earlier this summer. If they were on schedule, they’re currently in testing. That version is likely to change dramatically before it approaches Luckey’s vision for the future of mixed-reality warfare, in which “you have a little bit of an AI guardian angel on your shoulder, helping you out and doing all the stuff that is easy to miss in the midst of battle.”

Palmer Luckey sitting on yellow metal staircase
Designs for IVAS will have to adapt amid a shifting landscape of global conflict.
PHILIP CHEUNG

But will soldiers ever trust such a “guardian angel”? If the goggles of the future rely on AI-powered software like Lattice to identify threats—say, an enemy drone ahead or an autonomous vehicle racing toward you—Anduril is making the promise that it can sort through the false positives, recognize threats with impeccable accuracy, and surface critical information when it counts most. 

Luckey says the real test is how the technology compares with the current abilities of humans. “In a lot of cases, it’s already better,” he says, referring to Lattice, as measured by Anduril’s internal tests (it has not released these, and they have not been assessed by any independent external experts). “People are fallible in ways that machines aren’t necessarily,” he adds.

Still, Luckey admits he does worry about the threats Lattice will miss.

“One of the things that really worries me is there’s going to be people who die because Lattice misunderstood something, or missed a threat to a soldier that it should have seen,” he says. “At the same time, I can recognize that it’s still doing far better than people are doing today.”

When Lattice makes a significant mistake, it’s unlikely the public will know. Asked about the balance between transparency and national security in disclosing these errors, Luckey said that Anduril’s customer, the Pentagon, will receive complete information about what went wrong. That’s in line with the Pentagon’s policies on responsible AI adoption, which require that AI-driven systems be “developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.” 

However, the policies promise nothing about disclosure to the public, a fact that’s led some progressive think tanks, like the Brennan Center for Justice, to call on federal agencies to modernize public transparency efforts for the age of AI. 

“It’s easy to say, Well, shouldn’t you be honest about this failure of your system to detect something?” Luckey says, regarding Anduril’s obligations. “Well, what if the failure was because the Chinese figured out a hole in the system and leveraged that to speed past our defenses of some military base? I’d say there’s not very much public good served in saying, ‘Attention, everyone—there is a way to get past all of the security on every US military base around the world.’ I would say that transparency would be the worst thing you could do.”

Africa fights rising hunger by looking to foods of the past

The first time the rains failed, the farmers of Kanaani were prepared for it. It was April of 2021, and as climate change had made the weather increasingly erratic, families in the eastern Kenyan village had grown used to saving food from previous harvests. But as another wet season passed with barely any rain, and then another, the community of small homesteads, just off the main road linking Nairobi to the coast of the Indian Ocean, found itself in a full-fledged hunger crisis. 

By the end of 2022, Danson Mutua, a longtime Kanaani resident, counted himself lucky that his farm still had pockets of green: Over the years, he’d gradually replaced much of his maize, the staple crop in Kenya and several other parts of Africa, with more drought-resistant crops. He’d planted sorghum, a tall grass capped with tufts of seeds that look like arrowheads, as well as protein-rich legumes like pigeon peas and green gram, which don’t require any chemical fertilizers and are also prized for fixing nitrogen in soils. Many of his neighbors’ fields were completely parched. Cows, with little to eat themselves, had stopped producing milk; some had started dying. While it was still possible to buy grain at the local market, prices had spiked, and few people had the cash to pay for it. 

Mutua, a father of two, began using his bedroom to secure the little he’d managed to harvest. “If I left it out, it would have disappeared,” he told me from his home in May, 14 months after the rains had finally returned and allowed Kanaani’s farmers to begin recovering. “People will do anything to get food when they’re starving.”

The food insecurity facing Mutua and his neighbors is hardly unique. In 2023, according to the United Nations’ Food and Agriculture Organization, or FAO, an estimated 733 million people around the world were “undernourished,” meaning they lacked sufficient food to “maintain a normal, active, and healthy life.” After falling steadily for decades, the prevalence of global hunger is now on the rise—nowhere more so than in sub-Saharan Africa, where conflicts, economic fallout from the covid-19 pandemic, and extreme weather events linked to climate change pushed the share of the population considered undernourished from 18% in 2015 to 23% in 2023. The FAO estimates that 63% of people in the region are “food insecure”—not necessarily undernourished but unable to consistently eat filling, nutritious meals.

In Africa, like anywhere, hunger is driven by many interwoven factors, not all of which are a consequence of farming practices. Increasingly, though, policymakers on the continent are casting a critical eye toward the types of crops in farmers’ plots, especially the globally dominant and climate-vulnerable grains like rice, wheat, and above all, maize. Africa’s indigenous crops are often more nutritious and better suited to the hot and dry conditions that are becoming more prevalent, yet many have been neglected by science, which means they tend to be more vulnerable to diseases and pests and yield well below their theoretical potential. Some refer to them as “orphan crops” because of this. 

Efforts to develop new varieties of many of these crops, by breeding for desired traits, have been in the works for decades—through state-backed institutions, a continent-wide research consortium, and underfunded scientists’ tinkering with hand-pollinated crosses. Now those endeavors have gotten a major boost: In 2023, the US Department of State, in partnership with the African Union, the FAO, and several global agriculture institutions, launched the Vision for Adapted Crops and Soils, or VACS, a new Africa-focused initiative that seeks to accelerate research and development for traditional crops and help revive the region’s long-depleted soils. VACS, which had received funding pledges worth $200 million as of August, marks an important turning point, its proponents say—not only because it’s pumping an unprecedented flow of money into foods that have long been disregarded but because it’s being driven by the US government, which has often promoted farming policies around the world that have helped entrench maize and other food commodities at the expense of local crop diversity.

It may be too soon to call VACS a true paradigm shift: Maize is likely to remain central to many governments’ farming policies, and the coordinated crop R&D the program seeks to hasten is only getting started. Many of the crops it aims to promote could be difficult to integrate into commercial supply chains and market to growing urban populations, which may be hesitant to start eating like their ancestors. Some worry that crops farmed without synthetic fertilizers and pesticides today will be “improved” in a way that makes farmers more dependent on these chemicals—in turn, raising farm expenses and eroding soil fertility in the long run. Yet for many of the policymakers, scientists, and farmers who’ve been championing crop diversity for decades, this high-level attention is welcome and long overdue.

“One of the things our community has always cried for is how to raise the profile of these crops and get them on the global agenda,” says Tafadzwa Mabhaudhi, a longtime advocate of traditional crops and a professor of climate change, food systems, and health at the London School of Hygiene and Tropical Medicine, who comes from Zimbabwe.

Now the question is whether researchers, governments, and farmers like Mutua can work together in a way that gets these crops onto plates and provides Africans from all walks of life with the energy and nutrition that they need to thrive, whatever climate change throws their way.

A New World addiction

Africa’s love affair with maize, which was first domesticated several thousand years ago in central Mexico, dates to a period known as the Columbian exchange, when the trans-Atlantic flow of plants, animals, metals, diseases, and people—especially enslaved Africans—dramatically reshaped the world economy. The new crop, which arrived in Africa sometime after 1500 along with other New World foods like beans, potatoes, and cassava, was tastier and required less labor than indigenous cereals like millet and sorghum, and under the right conditions it could yield significantly more calories. It quickly spread across the continent, though it didn’t begin to dominate until European powers carved up most of Africa into colonies in the late 19th century. Its uptake was greatest in southern Africa and Kenya, which both had large numbers of white settlers. These predominantly British farmers, tilling land that had often been commandeered from Africans, began adopting new maize varieties that were higher yielding and more suitable for mechanized milling—albeit less nutritious—than both native grains and the types of maize that had been farmed locally since the 16th century. 

“People plant maize, harvest nothing, and still plant maize the next season. It’s difficult to change that mindset.”

Florence Wambugu, CEO, Africa Harvest

Eager to participate in the new market economy, African farmers followed suit; when hybrid maize varieties arrived in the 1960s, promising even higher yields, the binge only accelerated. By 1990, maize accounted for more than half of all calories consumed in Malawi and Zambia and at least 20% of calories eaten in a dozen other African countries. Today, it remains omnipresent—as a flour boiled into a sticky paste; as kernels jumbled with beans, tomatoes, and a little salt; or as fermented dumplings steamed and served inside the husk. Florence Wambugu, CEO of Africa Harvest, a Kenyan organization that helps farmers adopt maize alternatives, says the crop has such cultural significance that many insist on cultivating it even where it often fails. “People plant maize, harvest nothing, and still plant maize the next season,” she says. “It’s difficult to change that mindset.”

Maize and Africa have never been a perfect match. The plant is notoriously picky, requiring nutrient-rich soils and plentiful water at specific moments. Many of Africa’s soils are naturally deficient in key elements like nitrogen and phosphorus. Over time, the fertilizers needed to support hybrid varieties, often subsidized by governments, depleted soils even further. Large portions of Africa’s inhabited areas are also dry or semi-arid, and 80% of farms south of the Sahara are occupied by smallholders, who work plots of 10 hectares or less. On these farms, irrigation is often physically impractical and rarely makes economic sense.

It would be a stretch to blame Africa’s maize addiction for its most devastating hunger crises. Research by Alex de Waal, an expert in humanitarian disasters at Tufts University, has found that more than three-quarters of global famine deaths between 1870 and 2010 occurred in the context of “conflict or political repression.” That description certainly applies to today’s worst hunger crisis, in Sudan, a country being ripped apart by rival military factions. As of September, according to the UN, more than 8.5 million people in the country were facing “emergency levels of hunger,” and 755,000 were facing conditions deemed “catastrophic.”

Ground egusi seeds, rich in protein and B vitamins, are used in a popular West African soup.
ADAM DETOUR

For most African farmers, though, weather extremes pose a greater risk than conflict. The two-year drought that affected Mutua, for example, has been linked to a narrowing of the cloud belt that straddles the equator, as well as the tendency of land to lose moisture faster in higher temperatures. According to one 2023 study, by a global coalition of meteorologists, these climatic changes made that drought—which contributed to a 22% drop in Kenya’s national maize output and forced a million people from their homes across eastern Africa—100 times more likely. The UN’s Intergovernmental Panel on Climate Change expects yields of maize, wheat, and rice in tropical regions to fall by 5%, on average, for every degree Celsius that the planet heats up. Eastern Africa could be especially hard hit. A rise in global temperatures of 1.5 degrees above preindustrial levels, which scientists believe is likely to occur sometime in the 2030s, is projected to cause maize yields there to drop by roughly one-third from where they stood in 2005.  

Food demand continues to rise: Sub-Saharan Africa’s population, 1.2 billion now, is expected to surpass 2 billion by 2050.

Food demand, at the same time, will continue to rise: Sub-Saharan Africa’s population, 1.2 billion now, is expected to surpass 2 billion by 2050, and roughly half of those new people will be born and come of age in cities. Many will grow up on Westernized diets: Young, middle-class residents of Nairobi today are more likely to meet friends for burgers than to eat local dishes like nyama choma, roasted meat typically washed down with bottles of Tusker lager. KFC, seen by many as a status symbol, has franchises in a dozen Kenyan towns and cities; those looking to splurge can dine on sushi crafted from seafood flown in specially from Tokyo. Most, though, get by on simple foods like ugali, a maize porridge often accompanied by collard greens or kale. Although some urban residents consume maize grown on family farms “upcountry,” most of them buy it; when domestic harvests underperform, imports rise and prices spike, and more people go hungry. 

A solution from science?

The push to revive Africa’s indigenous crops is a matter of nutrition as well. An overreliance on maize and other starches is a big reason that nearly a third of children under five in sub-Saharan Africa are stunted—a condition that can affect cognition and immune system functioning for life. Many traditional foods are nutrient dense and have potential to combat key dietary deficiencies, says Enoch Achigan-Dako, a professor of genetics and plant breeding at the University of Abomey-Calavi in Benin. He cites egusi as a prime example. The melon seed, used in a popular West African soup, is rich in protein and the B vitamins the body needs to convert food into energy; it is already a lifeline in many places where milk is not widely available. Breeding new varieties with shorter growth cycles, he says, could make the plant more viable in drier areas. Achigan-Dako also believes that many orphan crops hold untapped commercial potential that could help farmers combat hunger indirectly. 

Increasingly, institutions are embracing similar views. In 2013, the 55-member-state African Union launched the African Orphan Crops Consortium, or AOCC—a collaboration with CGIAR, a global coalition of 15 nonprofit food research institutions, the University of California, Davis, and other partners. The AOCC has since trained more than 150 scientists from 28 African countries in plant breeding techniques through 18-month courses held in Nairobi. It’s also worked to sequence the genomes of 101 understudied crops, in part to facilitate the use of genomic selection. This technique involves correlating observed traits, like drought or pest resistance, with plant DNA, which helps breeders make better-informed crosses and develop new varieties faster. The consortium launched another course last year to train African scientists in the popular gene-editing technique CRISPR, which enables the tweaking of plant DNA directly. While regulatory and licensing hurdles remain, Leena Tripathi, a molecular biologist at CGIAR’s International Institute of Tropical Agriculture (IITA) and a CRISPR course instructor, believes gene-editing tools could eventually play a big role in accelerating breeding efforts for orphan crops. Most exciting, she says, is the promise of mimicking genes for disease resistance that are found in wild plants but not in cultivated varieties available for crossing.

For many orphan crops, old-­fashioned breeding techniques also hold big promise. Mathews Dida, a professor of plant genetics and breeding at Kenya’s Maseno University and an alumnus of the AOCC’s course in Nairobi, has focused much of his career on the iron-rich grain finger millet. He believes yields could more than double if breeders incorporated a semi-dwarf gene—a technique first used with wheat and rice in the 1960s. That would shorten the plants so that they don’t bend and break when supplied with nitrogen-based fertilizer. Yet money for such projects, which largely comes from foreign grants, is often tight. “The effort we’re able to put in is very erratic,” he says.

VACS, the new US government initiative, was envisioned in part to help plug these sorts of gaps. Its move to champion traditional crops marks a significant pivot. The United States was a key backer of the Green Revolution that helped consolidate the global dominance of rice, wheat, and maize during the 1960s and 1970s. And in recent decades its aid dollars have tended to support programs in Africa that also emphasize the chemical-intensive farming of maize and other commercial staples.

Change, though, was afoot: In 2021, with hunger on the rise, the African Union explicitly called for “intentional investments towards increased productivity and production in traditional and indigenous crops.” It found a sympathetic ear in Cary Fowler, a longtime biodiversity advocate who was appointed US special envoy for global food security by President Joe Biden in 2022. The 74-year-old Tennessean was a co-recipient of this year’s World Food Prize, agriculture’s equivalent of the Nobel, for his role in establishing the Svalbard Global Seed Vault, a facility in the Norwegian Arctic that holds copies of more than 1.3 million seed samples from around the world. Fowler has argued for decades that the loss of crop diversity wrought by the global expansion of large-scale farming risks fueling future hunger crises.

VACS, which complements the United States’ existing food security initiative, Feed the Future, began by working with the AOCC and other experts to develop an initial list of underutilized crops that were climate resilient and had the greatest potential to boost nutrition in Africa. It pared that list down to a group of 20 “opportunity crops” and commissioned models that assessed their future productivity under different climate-change scenarios. The models predicted net yield gains for many: Carbon dioxide, including that released by burning fossil fuels, is the key input in plant photosynthesis, and in some cases the “fertilization effect” of higher atmospheric CO2 can more than offset the harmful impact of hotter temperatures.

According to Fowler’s deputy, Anna Nelson, VACS will now operate as a “broad coalition,” with funds channeled through four core implementing partners. One of them, CGIAR, is spearheading R&D on an initial seven of those 20 crops—pigeon peas, Bambara groundnuts, taro, sesame, finger millet, okra, and amaranth—through partnerships with a range of research institutions and scientists. (Mabhaudhi, Achigan-Dako, and Tripathi are all involved in some capacity.) Another, the UN’s Food and Agriculture Organization (FAO), is leading an initiative that seeks to drive improvements in soil fertility, in part through tools that help farmers decide where and what to plant on the basis of soil characteristics. While Africa remains VACS’s central focus, activities have also launched or are being planned in Guatemala, Honduras, and the Pacific Community, a bloc of 22 Pacific island states and territories. The idea, Nelson tells me, is that VACS will continue to evolve as a “movement” that isn’t necessarily tied to US funding—or to the priorities of the next occupant of the White House. “The US is playing a convening and accelerating role,” she says. But the movement, she adds, is “globally owned.”

Making farm-to-table work

In some ways, the VACS concept is a unifying one. There’s long been a big and often rancorous divide between those who believe Africa needs more innovation-driven Green Revolution–style agriculture and those promoting ecological approaches, who insist that chemically intensive commercial crops aren’t fit for smallholders. In its focus on seed science as well as crop diversity and soil, VACS has something to offer both. Still, the degree to which the movement can change the direction of Africa’s food production remains an open question. VACS’s initial funding—roughly $150 million pledged by the US and $50 million pledged by other governments as of August—is more than has ever been earmarked for traditional crops and soils at a single moment. The AOCC, by comparison, spent $6.5 million on its plant breeding academy over a decade; as of 2023, its alumni had received a total of $175 million, largely from external grants, to finance crop improvement. Yet enabling orphan crops to reach their full potential, says Allen Van Deynze, the AOCC’s scientific director, who also heads the Seed Biotechnology Center at the University of California, Davis, would require an even bigger scale-up: $1 million per year, ideally, for every type of crop being prioritized in every country, or between $500 million and $1 billion per year across the continent.

“If there are shortages of maize, there will be demonstrations. But nobody’s going to demonstrate if there’s not enough millet, sorghum, or sweet potato.”

Florence Wambugu, CEO, Africa Harvest

Despite the African Union’s support, it remains to be seen if VACS will galvanize African governments to chip in more for crop development themselves. In Kenya, the state-run Agricultural & Livestock Research Organization, or KALRO, has R&D programs for crops such as pigeon peas, green gram, sorghum, and teff. Nonetheless, Wambugu and others say the overall government commitment to traditional crops is tepid—in part because they don’t have a big impact on politics. “If there are shortages of maize, there will be demonstrations,” she says. “But nobody’s going to demonstrate if there’s not enough millet, sorghum, or sweet potato.”

Others express concern that some participants in the VACS movement, including global institutions and private companies, could co-opt long-standing efforts by locals to support traditional crops. Sabrina Masinjila, research and advocacy officer at the African Center for Biodiversity, a Johannesburg-based organization that promotes ecological farming practices and is critical of corporate involvement in Africa’s food systems, sees red flags in VACS’s partnerships with several Western companies. Most concerning, she says, is the support of Bayer, the German biotech conglomerate, for the IITA’s work developing climate-resilient varieties of banana. In 2018 Bayer purchased Monsanto, which had become a global agrochemical giant through the sale of glyphosate, a weed killer the World Health Organization calls “probably carcinogenic,” along with seeds genetically modified to resist it. Monsanto had also long attracted scrutiny for aggressively pursuing claims of seed patent violations against farmers. Masinjila, a Tanzanian, fears that VACS could open the door to multinational companies’ use of African crops’ genetic sequences for their own private interests or to develop varieties that demand application of expensive, environmentally damaging pesticides and fertilizers.

According to Nelson, no VACS-related US funding will go to crop development that results in any private-sector patents. Seeds developed through CGIAR, VACS’s primary crop R&D partner, are considered to be public goods and are generally made available to governments, researchers, and farmers free of charge. Nonetheless, Nelson does not rule out the possibility that some improved varieties might require costlier, non-organic farming methods. “At its core, VACS is about making more options available to farmers,” she says.

While most indigenous-crop advocates I’ve spoken to are excited about VACS’s potential, several cite other likely bottlenecks, including challenges in getting improved varieties to farmers. A 2023 study by Benson Nyongesa, a professor of plant genetics at the University of Eldoret in Kenya, found that 33% of registered varieties of sorghum and 47% of registered varieties of finger millet had not made it into the fields of farmers; instead, he says, they remained “sitting on the shelves of the institutions that developed them.” The problem represents a market failure: Most traditional crops are self- or open-pollinated, which means farmers can save a portion of their harvest to plant as seeds the following year instead of buying new ones. Seed companies, he and others say, are out to make a profit and are generally not interested in commercializing them.

Farmers can access seeds in other ways, sometimes with the help of grassroots organizations. Wambugu’s Africa Harvest, which receives funding from the Mastercard Foundation, provides a “starter pack” of seeds for drought-tolerant crops like sorghum, groundnuts, pigeon peas, and green gram. It also helps its beneficiaries navigate another common challenge: finding markets for their produce. Most smallholders consume a portion of the crops they grow, but they also need cash, and commercial demand isn’t always forthcoming. Part of the reason, says Pamela Muyeshi, owner of Amaica, a Nairobi restaurant specializing in traditional Kenyan fare, is that Kenyans often consider indigenous foods to be “primitive.” This is especially true for those in urban areas who face food insecurity and could benefit from the nutrients these foods offer but often feel pressure to appear modern. Lacking economies of scale, many of these foods remain expensive. To the extent they’re catching on, she says, it’s mainly among the affluent.

The global research partnership CGIAR is spearheading R&D on several drought-tolerant crops, including green gram.
ADAM DETOUR

Similar “social acceptability” barriers will need to be overcome in South Africa, says Peter Johnston, a climate scientist who specializes in agricultural adaptation at the University of Cape Town. Johnston believes traditional crops have an important role to play in Africa’s climate resilience efforts, but he notes that no single crop is fully immune to the extreme droughts, floods, and heat waves that have become more frequent and more unpredictable. Crop diversification strategies, he says, will work best if paired with “anticipatory action”—pre-agreed and pre-financed responses, like the distribution of food aid or cash, when certain weather-related thresholds are breached.

Mutua, for his part, is a testament to how better crop varieties, coupled with a little foresight, can go a long way in the face of crisis. When the drought hit in 2021, his maize didn’t stand a chance. Yields of pigeon peas and cowpeas were well below average. Birds, notorious for feasting on sorghum, were especially ravenous. The savior turned out to be green gram, better known in Kenya by its Swahili name, ndengu. Although native to India, the crop is well suited to eastern Kenya’s sandy soils and semi-arid climate, and varieties bred by KALRO to be larger and faster maturing have helped its yields improve over time. In good years, Mutua sells much of his harvest, but after the first season with barely any rain, he held onto it; soon, out of necessity, ndengu became a fixture of his family’s diet. On my visit to his farm, he pointed it out with particular reverence: a low-lying plant with slender green pods that radiate like spokes of a bicycle wheel. The crop, Mutua told me, has become so vital to this area that some people consider it their “gold.”

If the movement to revive “forgotten” crops lives up to its promise, other climate-stressed corners of Africa might soon discover their gold equivalent as well.

Jonathan W. Rosen is a journalist who writes about Africa. Evans Kathimbu assisted his reporting from Kenya.

Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Serhii “Flash” Beskrestnov hates going to the front line. The risks terrify him. “I’m really not happy to do it at all,” he says. But to perform his particular self-appointed role in the Russia-Ukraine war, he believes it’s critical to exchange the relative safety of his suburban home north of the capital for places where the prospect of death is much more immediate. “From Kyiv,” he says, “nobody sees the real situation.”

So about once a month, he drives hundreds of kilometers east in a homemade mobile intelligence center: a black VW van in which stacks of radio hardware connect to an array of antennas on the roof that stand like porcupine quills when in use. Two small devices on the dash monitor for nearby drones. Over several days at a time, Flash studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

He is, at least in an unofficial capacity, a spy. But unlike other spies, Flash does not keep his work secret. In fact, he shares the results of these missions with more than 127,000 followers—including many soldiers and government officials—on several public social media channels. Earlier this year, for instance, he described how he had recorded five different Russian reconnaissance drones in a single night—one of which was flying directly above his van.

“Brothers from the Armed Forces of Ukraine, I am trying to inspire you,” he posted on his Facebook page in February, encouraging Ukrainian soldiers to learn how to recognize enemy drone signals as he does. “You will spread your wings, you will understand over time how to understand distance and, at some point, you will save the lives of dozens of your colleagues.”

Drones have come to define the brutal conflict that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Flash has obsessed over since childhood. So while Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio.

As well as the frontline information he shares on his public channels, he runs a “support service” for almost 2,000 military communications specialists on Signal and writes guides for building anti-drone equipment on a tight budget. “He’s a celebrity,” one special forces officer recently shouted to me over the thump of music in a Kyiv techno club. He’s “like a ray of sun,” an aviation specialist in Ukraine’s army told me. Flash tells me that he gets 500 messages every day asking for help.

Despite this reputation among rank-and-file service members—and maybe because of it—Flash has also become a source of some controversy among the upper echelons of Ukraine’s military, he tells me. The Armed Forces of Ukraine declined multiple requests for comment, but Flash and his colleagues claim that some high-ranking officials perceive him as a security threat, worrying that he shares too much information and doesn’t do enough to secure sensitive intel. As a result, some refuse to support or engage with him. Others, Flash says, pretend he doesn’t exist. Either way, he believes they are simply insecure about the value of their own contributions—“because everybody knows that Serhii Flash is not sitting in Kyiv like a colonel in the Ministry of Defense,” he tells me in the abrasive fashion that I’ve come to learn is typical of his character. 

But above all else, hours of conversations with numerous people involved in Ukraine’s defense, including frontline signalmen and volunteers, have made clear that even if Flash is a complicated figure, he’s undoubtedly an influential one. His work has become vital to those fighting on the ground, and he recently received formal recognition from the military for his contributions to the fight, with two medals of commendation—one from the commander of Ukraine’s ground forces, the other from the Ministry of Defense.

With a handheld directional antenna and a spectrum analyzer, Flash can scan for hostile signals.
EMRE ÇAYLAK

Though a small number of semi-autonomous machines rely less on radio communications, the drones that saturate the skies above the battlefield will continue to depend largely on this technology for the foreseeable future. And in this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that.

“I know nothing about his background,” says “Igrok,” who works with drones in Ukraine’s 110th Mechanized Brigade and whom we are identifying by his call sign, as is standard military practice. “But I do know that most engineers and all pilots know nothing about radios and antennas. His job is definitely one of the most powerful forces keeping Ukraine’s aerial defense in good condition.”

And given the mounting evidence that both militaries and militant groups in other parts of the world are now adopting drone tactics developed in Ukraine, it’s not only his country’s fate that Flash may help to determine—but also the ways that armies wage war for years to come.

A prescient hobby

Before I can even start asking questions during our meeting in May, Flash is rummaging around in the back of the Flash-mobile, pulling out bits of gear for his own version of show-and-tell: a drone monitor with a fin-shaped antenna; a walkie-talkie labeled with a sticker from Russia’s state security service, the FSB; an approximately 1.5-meter-long foldable antenna that he says probably came from a US-made Abrams tank.

Flash has parked on a small wooded road beside the Kyiv Sea, an enormous water reservoir north of the capital. He’s wearing a khaki sweat-wicking polo shirt, combat trousers, and combat boots, with a Glock 19 pistol strapped to his hip. (“I am a threat to the enemy,” he tells me, explaining that he feels he has to watch his back.) As we talk, he moves from one side to the other, as if the electromagnetic waves that he’s studied since childhood have somehow begun to control the motion of his body.

Now 49, Flash grew up in a suburb of Kyiv in the ’80s. His father, who was a colonel in the Soviet army, recalls bringing home broken radio equipment for his preteen son to tinker with. Flash showed talent from the start. He attended an after-school radio club, and his father fixed an antenna to the roof of their apartment for him. Later, Flash began communicating with people in countries beyond the Iron Curtain. “It was like an open door to the big world for me,” he says.

Flash recalls with amusement a time when a letter from the KGB arrived at his family home, giving his father the fright of his life. His father didn’t know that his son had sent a message on a prohibited radio frequency, and someone had noticed. Following the letter, when Flash reported to the service’s office in downtown Kyiv, his teenage appearance confounded them. Boy, what are you doing here? Flash recalls an embarrassed official saying. 

Ukraine had been a hub of innovation as part of the Soviet Union. But by the time Flash graduated from military communications college in 1997, Ukraine had been independent for six years, and corruption and a lack of investment had stripped away the armed forces’ former grandeur. Flash spent just a year working in a military radio factory before he joined a private communications company developing Ukraine’s first mobile network, where he worked with technologies far more advanced than what he had used in the military. The project was called “Flash.”

A decade and a half later, Flash had risen through the ranks of the industry to become head of department at the predecessor of the telecommunications company Vodafone Ukraine. But boredom prompted him to leave and become an entrepreneur. His many projects included a successful e-commerce site for construction services and a popular video game called Isotopium: Chernobyl, which he and a friend based on the “really neat concept,” according to a PC Gamer review, of allowing players to control real robots (fitted with radios, of course) around a physical arena. Released in 2019, it also received positive reviews from Reuters and BBC News.

But within just a few years, an unexpected attack would hurl his country into chaos—and upend Flash’s life. 

“I am here to help you with technical issues,” Flash remembers writing to his Signal group when he first started offering advice. “Ask me anything and I will try to find the answer for you.”
EMRE ÇAYLAK

By early 2022, rumors were growing of a potential attack from Russia. Though he was still working on Isotopium, Flash began to organize a radio network across the northern suburbs of Kyiv in preparation. Near his home, he set up a repeater about 65 meters above ground level that could receive and then rebroadcast transmissions from all the radios in its network across a 200-square-kilometer area. Another radio amateur programmed and distributed handheld radios.

When Russian forces did invade, on February 24, they took both fiber-optic and mobile networks offline, as Flash had anticipated. The radio network became the only means of instant communications for civilians and, critically, volunteers mobilizing to fight in the region, who used it to share information about Russian troop movements. Flash fed this intel to several professional Ukrainian army units, including a unit of special reconnaissance forces. He later received an award from the head of the district’s military administration for his part in Kyiv’s defense. The head of the district council referred to Flash as “one of the most worthy people” in the region.

Yet it was another of Flash’s projects that would earn him renown across Ukraine’s military.

Despite being more than 100 years old, radio technology is still critical in almost all aspects of modern warfare, from secure communications to satellite-guided missiles. But the decline of Ukraine’s military, coupled with the movement of many of the country’s young techies into lucrative careers in the growing software industry, created a vacuum of expertise. Flash leaped in to fill it.

Within roughly a month of Russia’s incursion, Flash had created a private group called “Military Signalmen” on the encrypted messaging platform Signal, and invited civilian radio experts from his personal network to join alongside military communications specialists. “I am here to help you with technical issues,” he remembers writing to the group. “Ask me anything and I will try to find the answer for you.”

The kinds of questions that Flash and his civilian colleagues answered in the first months were often basic. Group members wanted to know how to update the firmware on their devices, reset their radios’ passwords, or set up the internal communications networks for large vehicles. Many of the people drafted as communications specialists in the Ukrainian military had little relevant experience; Flash claims that even professional soldiers lacked appropriate training and has referred to large parts of Ukraine’s military communications courses as “either nonsense or junk.” (The Korolov Zhytomyr Military Institute, where many communications specialists train, declined a request for comment.)

After Russia’s invasion of Ukraine, Flash transformed his VW van into a mobile radio intelligence center.
EMRE ÇAYLAK

He demonstrates handheld spectrum analyzers with custom Ukrainian firmware.

News of the Signal group spread by word of mouth, and it soon became a kind of 24-hour support service that communications specialists in every sector of Ukraine’s frontline force subscribed to. “Any military engineer can ask anything and receive the answer within a couple of minutes,” Flash says. “It’s a nice way to teach people very quickly.” 

As the war progressed into its second year, Military Signalmen became, to an extent, self-sustaining. Its members had learned enough to answer one another’s questions themselves. And this is where several members tell me that Flash has contributed the most value. “The most important thing is that he brought together all these communications specialists in one team,” says Oleksandr “Moto,” a technician at an EU mission in Kyiv and an expert in Motorola equipment, who has advised members of the group. (He asked to not be identified by his surname, due to security concerns.) “It became very efficient.”

Today, Flash and his partners continue to answer occasional questions that require more advanced knowledge. But over the past year, as the group demanded less of his time, Flash has begun to focus on a rapidly proliferating weapon for which his experience had prepared him almost perfectly: the drone.  

A race without end

The Joker-10 drone, one of Russia’s latest additions to its arsenal, is equipped with a hibernation mechanism, Flash warned his Facebook followers in March. This feature allows the operator to fly it to a hidden location, leave it there undetected, and then awaken it when it’s time to attack. “It is impossible to detect the drone using radio-electronic means,” Flash wrote. “If you twist and turn it in your hands—it will explode.” 

This is just one example of the frequent developments in drone engineering that Ukrainian and Russian troops are adapting to every day. 

Larger strike drones similar to the US-made Reaper have been familiar in other recent conflicts, but sophisticated air defenses have rendered them less dominant in this war. Ukraine and Russia are developing and deploying vast numbers of other types of drones—including the now-notorious “FPV,” or first-person view, drone that pilots operate by wearing goggles that stream video of its perspective. These drones, which can carry payloads large enough to destroy tanks, are cheap (costing as little as $400), easy to produce, and difficult to shoot down. They use direct radio communications to transmit video feeds, receive commands, and navigate.

A Ukrainian soldier prepares an FPV drone equipped with dummy ammunition for a simulated flight operation.
MARCO CORDONE/SOPA IMAGES/SIPA USA VIA AP IMAGES

But their reliance on radio technology is a major vulnerability, because enemies can disrupt the radio links the drones depend on—making them far less effective, if not inoperable. This form of electronic warfare—which most often involves emitting a more powerful signal at the same frequency the operator is using—is called “jamming.”

Jamming, though, is an imperfect solution. Like drones, jammers themselves emit radio signals that can enable enemies to locate them. There are also effective countermeasures to bypass jammers. For example, a drone operator can use a tactic called “frequency hopping,” rapidly jumping between different frequencies to avoid a jammer’s signal. But even this method can be disrupted by algorithms that calculate the hopping patterns.
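The cat-and-mouse logic of frequency hopping can be sketched in a few lines of code. This is only a toy illustration, not real radio firmware: the 32-channel band, the shared-seed scheme, and the single fixed-frequency jammer are all simplifying assumptions. It shows why a hopping link mostly evades a jammer parked on one frequency—and why an adversary who recovers the hop pattern (here, just the seed) can predict every jump.

```python
import random

CHANNELS = list(range(32))  # toy channel set; real drone links use far more complex bands


def hop_sequence(seed, length):
    """Pseudorandom hop pattern; pilot and drone derive it from a shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(length)]


def link_uptime(hops, jammed_channel):
    """Fraction of time slots in which the link avoids a fixed-frequency jammer."""
    clear = sum(1 for ch in hops if ch != jammed_channel)
    return clear / len(hops)


hops = hop_sequence(seed=42, length=1000)
# Against a jammer stuck on one of 32 channels, the link survives ~97% of slots.
print(f"uptime vs. fixed jammer: {link_uptime(hops, jammed_channel=7):.1%}")
# But a jammer that has recovered the pattern blocks every slot.
print(f"uptime vs. predictive jammer: {link_uptime(hops, jammed_channel=None) - 1:.0%}" if False else "predictive jammer: 0% uptime (it knows each next channel)")
```

The second print line gestures at the countermeasure the article mentions: algorithms that calculate the hopping pattern collapse the defense entirely, which is why both sides keep escalating.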

For this reason, jamming is a frequent focus of Flash’s work. In a January post on his Telegram channel, for instance, which people viewed 48,000 times, Flash explained how jammers used by some Ukrainian tanks were actually disrupting their own communications. “The cause of the problems is not direct interference with the reception range of the radio station, but very powerful signals from several [electronic warfare] antennae,” he wrote, suggesting that other tank crews experiencing the same problem might try spreading their antennas across the body of the tank. 

It is all part of an existential race in which Russia and Ukraine are constantly hunting for new methods of drone operation, drone jamming, and counter-jamming—and there’s no end in sight. In March, for example, Flash says, a frontline contact sent him photos of a Russian drone with what looks like a 10-kilometer-long spool of fiber-optic cable attached to its rear—one particularly novel method to bypass Ukrainian jammers. “It’s really crazy,” Flash says. “It looks really strange, but Russia showed us that this was possible.”

Flash’s trips to the front line make it easier for him to track developments like this. Not only does he monitor Russian drone activity from his souped-up VW, but he can study the problems that soldiers face in situ and nurture relationships with people who may later send him useful intel—or even enemy equipment they’ve seized. “The main problem is that our generals are located in Kyiv,” Flash says. “They send some messages to the military but do not understand how these military people are fighting on the front.”

Besides the advice he provides to Ukrainian troops, Flash also publishes online his own manuals for building and operating equipment that can offer protection from drones. Building their own tools can be soldiers’ best option, since Western military technology is typically expensive and domestic production is insufficient. Flash recommends buying most of the parts on AliExpress, the Chinese e-commerce platform, to reduce costs.

While all his activity suggests a close or at least cooperative relationship between Flash and Ukraine’s military, he sometimes finds himself on the outside looking in. In a post on Telegram in May, as well as during one of our meetings, Flash shared one of his greatest disappointments of the war: the military’s refusal of his proposal to create a database of all the radio frequencies used by Ukrainian forces. But when I mentioned this to an employee of a major electronic warfare company, who requested anonymity to speak about the sensitive subject, he suggested that the only reason Flash still complains about this is that the military hasn’t told him it already exists. (Given its sensitivity, MIT Technology Review was unable to independently confirm the existence of this database.) 

Flash believes that generals in Kyiv “do not understand how these military people are fighting on the front.” So even though he doesn’t like the risks they involve, he takes trips to the front line about once a month.
EMRE ÇAYLAK

This anecdote is emblematic of Flash’s frustration with a military complex that may not always want his involvement. Ukraine’s armed forces, he has told me on several occasions, make no attempt to collaborate with him in an official manner. He claims not to receive any financial support, either. “I’m trying to help,” he says. “But nobody wants to help me.”

Both Flash and Yurii Pylypenko, another radio enthusiast who helps Flash manage his Telegram channel, say military officials have accused Flash of sharing too much information about Ukraine’s operations. Flash claims to verify every member of his closed Signal groups, which he says only discuss “technical issues” in any case. But he also admits the system is not perfect and that Russians could have gained access in the past. Several of the soldiers I interviewed for this story also claimed to have entered the groups without Flash’s verification process. 

It’s ultimately difficult to determine if some senior staff in the military hold Flash at arm’s length because of his regular, often strident criticism—or whether Flash’s criticism is the result of being held at arm’s length. But it seems unlikely either side’s grievances will subside soon; Pylypenko claims that senior officers have even tried to blackmail him over his involvement in Flash’s work. “They blame my help,” he wrote to me over Telegram, “because they think Serhii is a Russian agent reposting Russian propaganda.” 

Is the world prepared?

Flash’s greatest concern now is the prospect of Russia overwhelming Ukrainian forces with the cheap FPV drones. When they first started deploying FPVs, both sides were almost exclusively targeting expensive equipment. But as production has increased, they’re now using them to target individual soldiers, too. Because of Russia’s production superiority, this poses a serious danger—both physical and psychological—to Ukrainian soldiers. “Our army will be sitting under the ground because everybody who goes above ground will be killed,” Flash says. Some reports suggest that the prevalence of FPVs is already making it difficult for soldiers to expose themselves at all on the battlefield.

To combat this threat, Flash has a grand yet straightforward idea. He wants Ukraine to build a border “wall” of jamming systems that cover a broad range of the radio spectrum all along the front line. Russia has already done this itself with expensive vehicle-based systems, but these present easy targets for Ukrainian drones, which have destroyed several of them. Flash’s idea is to use a similar strategy, albeit with smaller, cheaper systems that are easier to replace. He claims, however, that military officials have shown no interest.

Although Flash is unwilling to divulge more details about this strategy (and who exactly he pitched it to), he believes that such a wall could provide a more sustainable means of protecting Ukrainian troops. Nevertheless, it’s difficult to say how long such a defense might last. Both sides are now in the process of developing artificial-intelligence programs that allow drones to lock on to targets while still outside enemy jamming range, rendering them jammer-proof when they come within it. Flash admits he is concerned—and he doesn’t appear to have a solution.

Flash admits he is worried about Russia overwhelming Ukrainian forces with the cheap FPV drones: “Our army will be sitting under the ground because everybody who goes above ground will be killed.”
EMRE ÇAYLAK

He’s not alone. The world is entirely unprepared for this new type of warfare, says Yaroslav Kalinin, a former Ukrainian intelligence officer and the CEO of Infozahyst, a manufacturer of electronic-warfare equipment. Kalinin recalls speaking at an electronic-warfare conference in Washington, DC, last December where representatives of some Western defense companies could not recognize the basic radio signals emitted by different types of drones. “Governments don’t count [drones] as a threat,” he says. “I need to run through the streets like a prophet—the end is near!”

Nevertheless, Ukraine has become, in essence, a laboratory for a new era of drone warfare—and, many argue, a new era of warfare entirely. Ukraine’s and Russia’s soldiers are its technicians. And Flash, who sometimes sleeps curled up in the back of his van while on the road, is one of its most passionate researchers. “Military developers from all over the world come to us for experience and advice,” he says. Only time will tell whether their contributions will be enough to see Ukraine through to the other side of this war. 

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers, including Wired, the Guardian, and MIT Technology Review.

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  

DAVID BISKUP

But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground. 

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills. 

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they are always watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 

Just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.

Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 

DAVID BISKUP

Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”

“There is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

T. Makana Chock, director, the Extended Reality Lab, Syracuse University

Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, your fan gear can really shove you right back.

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)

DAVID BISKUP

By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)

DAVID BISKUP

Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine. 

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?

DAVID BISKUP

Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.

DAVID BISKUP

If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.

You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 

There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.

DHS plans to collect biometric data from migrant children “down to the infant”

The US Department of Homeland Security (DHS) plans to collect and analyze photos of the faces of migrant children at the border in a bid to improve facial recognition technology, MIT Technology Review can reveal. This includes children “down to the infant,” according to John Boyd, assistant director of the department’s Office of Biometric Identity Management (OBIM), where a key part of his role is to research and develop future biometric identity services for the government. 

As Boyd explained at a conference in June, the key question for OBIM is, “If we pick up someone from Panama at the southern border at age four, say, and then pick them up at age six, are we going to recognize them?”

Facial recognition technology (FRT) has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, and consist of either low-quality images drawn from the internet or small sample sizes with little diversity. Such limitations reflect the significant sensitivities regarding privacy and consent when it comes to minors. 

In practice, the new DHS plan could effectively solve that problem. According to Syracuse University’s Transactional Records Access Clearinghouse (TRAC), 339,234 children arrived at the US-Mexico border in 2022, the last year for which numbers are currently available. Of those children, 150,000 were unaccompanied—the highest annual number on record. If the face prints of even 1% of those children had been enrolled in OBIM’s craniofacial structural progression program, the resulting data set would dwarf nearly all existing data sets of real children’s faces used for aging research.

It’s unclear to what extent the plan has already been implemented; Boyd tells MIT Technology Review that to the best of his knowledge, the agency has not yet started collecting data under the program, but he adds that as “the senior executive,” he would “have to get with [his] staff to see.” He could only confirm that his office is “funding” it. Despite repeated requests, Boyd did not provide any additional information. 

Boyd says OBIM’s plan to collect facial images from children under 14 is possible due to recent “rulemaking” at “some DHS components,” or sub-offices, that have removed age restrictions on the collection of biometric data. US Customs and Border Protection (CBP), the US Transportation Security Administration, and US Immigration and Customs Enforcement declined to comment before publication. US Citizenship and Immigration Services (USCIS) did not respond to multiple requests for comment. OBIM referred MIT Technology Review back to DHS’s main press office. 

DHS did not comment on the program prior to publication, but sent an emailed statement afterward: “The Department of Homeland Security uses various forms of technology to execute its mission, including some biometric capabilities. DHS ensures all technologies, regardless of type, are operated under the established authorities and within the scope of the law. We are committed to protecting the privacy, civil rights, and civil liberties of all individuals who may be subject to the technology we use to keep the nation safe and secure.”

Boyd spoke publicly about the plan in June at the Federal Identity Forum and Exposition, an annual identity management conference for federal employees and contractors. But close observers of DHS that we spoke with—including a former official, representatives of two influential lawmakers who have spoken out about the federal government’s use of surveillance technologies, and immigrants’ rights organizations that closely track policies affecting migrants—were unaware of any new policies allowing biometric data collection of children under 14. 

That is not to say that all of them are surprised. “That tracks,” says one former CBP official who has visited several migrant processing centers on the US-Mexico border and requested anonymity to speak freely. He says “every center” he visited “had biometric identity collection, and everybody was going through it,” though he was unaware of a specific policy mandating the practice. “I don’t recall them separating out children,” he adds.

“The reports of CBP, as well as DHS more broadly, expanding the use of facial recognition technology to track migrant children is another stride toward a surveillance state and should be a concern to everyone who values privacy,” Justin Krakoff, deputy communications director for Senator Jeff Merkley of Oregon, said in a statement to MIT Technology Review. Merkley has been an outspoken critic of both DHS’s immigration policies and government use of facial recognition technologies.

Beyond concerns about privacy, transparency, and accountability, some experts also worry about testing and developing new technologies using data from a population that has little recourse to provide—or withhold—consent. 

Could consent “actually take into account the vast power differentials that are inherent in the way that this is tested out on people?” asks Petra Molnar, author of The Walls Have Eyes: Surviving Migration in the Age of AI. “And if you arrive at a border … and you are faced with the impossible choice of either: get into a country if you give us your biometrics, or you don’t.”

“That completely vitiates informed consent,” she adds.

This question becomes even more challenging when it comes to children, says Ashley Gorski, a senior staff attorney with the American Civil Liberties Union. DHS “should have to meet an extremely high bar to show that these kids and their legal guardians have meaningfully consented to serve as test subjects,” she says. “There’s a significant intimidation factor, and children aren’t as equipped to consider long-term risks.”

Murky new rules

The Office of Biometric Identity Management, previously known as the US Visitor and Immigrant Status Indicator Technology Program (US-VISIT), was created after 9/11 with the specific mandate of collecting biometric data—initially only fingerprints and photographs—from all non-US citizens who sought to enter the country. 

Since then, DHS has begun collecting face prints, iris and retina scans, and even DNA, among other modalities. It is also testing new ways of gathering this data—including through contactless fingerprint collection, which is currently deployed at five sites on the border, as Boyd shared in his conference presentation. 

Since 2023, CBP has been using a mobile app, CBP One, for asylum seekers to submit biometric data even before they enter the United States; users are required to take selfies periodically to verify their identity. The app has been riddled with problems, including technical glitches and facial recognition algorithms that are unable to recognize darker-skinned people. This is compounded by the fact that not every asylum seeker has a smartphone. 

Then, just after crossing into the United States, migrants must submit to collection of biometric data, including DNA. For a sense of scale, a recent report from Georgetown Law School’s Center on Privacy and Technology found that CBP has added 1.5 million DNA profiles, primarily from migrants crossing the border, to law enforcement databases since it began collecting DNA “from any person in CBP custody subject to fingerprinting” in January 2020. The researchers noted that an overrepresentation of immigrants—the majority of whom are people of color—in a DNA database used by law enforcement could subject them to over-policing and lead to other forms of bias. 

Generally, these programs only require information from individuals aged 14 to 79. DHS attempted to change this back in 2020, with proposed rules for USCIS and CBP that would have expanded biometric data collection dramatically, including by age. (USCIS’s proposed rule would have doubled the number of people from whom biometric data would be required, including any US citizen who sponsors an immigrant.) But the USCIS rule was withdrawn in the wake of the Biden administration’s new “priorities to reduce barriers and undue burdens in the immigration system.” Meanwhile, for reasons that remain unclear, the proposed CBP rule was never enacted. 

This would make it appear “contradictory” if DHS were now collecting the biometric data of children under 14, says Dinesh McCoy, a staff attorney with Just Futures Law, an immigrant rights group that tracks surveillance technologies. 

Neither Boyd nor DHS’s media office would confirm which specific policy changes he was referring to in his presentation, though MIT Technology Review has identified a 2017 memo, issued by then-Secretary of Homeland Security John F. Kelly, that encouraged DHS components to remove “age as a basis for determining when to collect biometrics.” 

DHS’s Office of Inspector General (OIG) referred to this memo as the “overarching policy for biometrics at DHS” in a September 2023 report, though none of the press offices MIT Technology Review contacted—including the main DHS press office, OIG, and OBIM, among others—would confirm whether this was still the relevant policy; we have not been able to confirm any related policy changes since then. 

The OIG audit also found a number of fundamental issues related to DHS’s oversight of biometric data collection and use—including that its 10-year strategic framework for biometrics, covering 2015 to 2025, “did not accurately reflect the current state of biometrics across the Department, such as the use of facial recognition verification and identification.” Nor did it provide clear guidance for the consistent collection and use of biometrics across DHS, including age requirements. 

But there is also another potential explanation for the new OBIM program: Boyd says it is being conducted under the auspices of DHS’s undersecretary of science and technology, the office that leads much of the agency’s research efforts. Because it is for research, rather than for use “in DHS operations to inform processes or decision making,” many of the standard restrictions on DHS use of face recognition and face capture technologies do not apply, according to a DHS directive.

Do you have any additional information on DHS’s craniofacial structural progression initiative? Please reach out with a non-work email to tips@technologyreview.com or securely on Signal at 626.765.5489. 

Some lawyers allege that changing the age limit for data collection via department policy, rather than through a federal rule, which would require a public comment period, is problematic. McCoy, for instance, says any lack of transparency here amplifies the already “extremely challenging” task of “finding [out] in a systematic way how these technologies are deployed”—even though that is key for accountability.

Advancing the field

At the identity forum and in a subsequent conversation, Boyd explained that this data collection is meant to advance the development of effective FRT algorithms. Boyd leads OBIM’s Future Identity team, whose mission is to “research, review, assess, and develop technology, policy, and human factors that enable rapid, accurate, and secure identity services” and to make OBIM “the preferred provider for identity services within DHS.” 

Driven by high-profile cases of missing children, there has long been interest in understanding how children’s faces age. At the same time, there have been technical challenges to doing so, both before facial recognition technology existed and since its advent. 

At its core, facial recognition identifies individuals by comparing the geometry of various facial features in an original face print with subsequent images. Based on this comparison, a facial recognition algorithm assigns a percentage likelihood that there is a match. 

But as children grow and develop, their bone structure changes significantly, making it difficult for facial recognition algorithms to identify them over time. (These changes tend to be even more pronounced in children under 14. In contrast, as adults age, the changes tend to be in the skin and muscle, and have less variation overall.) More data would help solve this problem, but there is a dearth of high-quality data sets of children’s faces with verifiable ages. 
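The matching step described above can be sketched in a few lines of Python. This is a toy illustration only—the vectors and the scoring function are invented for the example, not drawn from any deployed system—but it shows why aging is hard for these algorithms: a child’s changing bone structure appears as larger drift between the enrolled face print and later captures, dragging the similarity score down.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_confidence(enrolled, probe):
    """Map similarity in [-1, 1] to a 0-100 percentage-style score."""
    return round(50 * (cosine_similarity(enrolled, probe) + 1), 1)

# Toy "face prints": real systems derive high-dimensional embeddings
# from a neural network; these hand-set vectors are illustrative only.
enrolled = [0.9, 0.1, 0.4]
same_person_later = [0.85, 0.2, 0.35]  # small drift, as with adult aging
different_person = [0.1, 0.9, -0.3]

print(match_confidence(enrolled, same_person_later))  # high score
print(match_confidence(enrolled, different_person))   # much lower score
```

In a deployed system, an operational threshold on a score like this decides whether two images count as the same person; the larger the drift that growth introduces, the more often a true match falls below that threshold.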

“What we’re trying to do is to get large data sets of known individuals,” Boyd tells MIT Technology Review. That means taking high-quality face prints “under controlled conditions where we know we’ve got the person with the right name [and] the correct birth date”—or, in other words, where they can be certain about the “provenance of the data.” 

For example, one data set used for aging research consists of 305 celebrities’ faces as they aged from five to 32. But these photos, scraped from the internet, contain too many other variables—such as differing image qualities, lighting conditions, and distances at which they were taken—to be truly useful. Plus, speaking to the provenance issue that Boyd highlights, their actual ages in each photo can only be estimated. 

Another tactic is to use data sets of adult faces that have been synthetically de-aged. Synthetic data is considered more privacy-preserving, but it too has limitations, says Stephanie Schuckers, director of the Center for Identification Technology Research (CITeR). “You can test things with only the generated data,” Schuckers explains, but the question remains: “Would you get similar results to the real data?”

(Hosted at Clarkson University in New York, CITeR brings together a network of academic and government affiliates working on identity technologies. OBIM is a member of the research consortium.) 

Schuckers’s team at CITeR has taken another approach: an ongoing longitudinal study of a cohort of 231 elementary and middle school students from the area around Clarkson University. Since 2016, the team has captured biometric data every six months (save for two years of the covid-19 pandemic), including facial images. They have found that the open-source face recognition models they tested can in fact successfully recognize children three to four years after they were initially enrolled. 

But the conditions of this study aren’t easily replicable at scale. The study images are taken in a controlled environment, all the participants are volunteers, the researchers sought consent from parents and the subjects themselves, and the research was approved by the university’s Institutional Review Board. Schuckers’s research also promises to protect privacy by requiring other researchers to request access, and by providing facial datasets separately from other data that have been collected. 

What’s more, this research still has technical limitations, including that the sample is small, and it is overwhelmingly Caucasian, meaning it might be less accurate when applied to other races. 

Schuckers says she was unaware of DHS’s craniofacial structural progression initiative. 

Far-reaching implications

Boyd says OBIM takes privacy considerations seriously, and that “we don’t share … data with commercial industries.” Still, OBIM has 144 government partners with which it does share information, and it has been criticized by the Government Accountability Office for poorly documenting who it shares information with, and with what privacy-protecting measures. 

Even if the data does stay within the federal government, OBIM’s findings regarding the accuracy of FRT for children over time could nevertheless influence how—and when—the rest of the government collects biometric data, as well as whether the broader facial recognition industry may also market its services for children. (Indeed, Boyd says sharing “results,” or the findings of how accurate FRT algorithms are, is different than sharing the data itself.) 

That this technology is being tested on people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies. 

In fact, Boyd previously helped advance the Department of Defense’s biometric systems in Iraq and Afghanistan, where he acknowledged that individuals lacked the privacy protections that would have been granted in many other contexts, despite the incredibly high stakes. Biometric data collected in those war zones—in some areas, from every fighting-age male—was used to identify and target insurgents, and being misidentified could mean death. 

These projects subsequently played a substantial role in influencing the expansion of biometric data collection by the Department of Defense, which now happens globally. And architects of the program, like Boyd, have taken important roles in expanding the use of biometrics at other agencies. 

“It’s not an accident” that this testing happens in the context of border zones, says Molnar. Borders are “the perfect laboratory for tech experimentation, because oversight is weak, discretion is baked into the decisions that get made … it allows the state to experiment in ways that it wouldn’t be allowed to in other spaces.” 

But, she notes, “just because it happens at the border doesn’t mean that that’s where it’s going to stay.”

Update: This story was updated to include comment from DHS.
