Inside Chicago’s surveillance panopticon

Early on the morning of September 2, 2024, a Chicago Transit Authority Blue Line train was the scene of a random and horrific mass shooting. Four people were shot and killed on a westbound train as it approached the suburb of Forest Park. 

The police swiftly activated a digital dragnet—a surveillance network that connects thousands of cameras in the city. 

The process began with a quick review of the transit agency’s surveillance cameras, which captured the alleged gunman shooting the victims execution style. Law enforcement tracked the suspect through real-time footage across the rapid-transit system. Police officials circulated the images to transit staff and to thousands of officers. An officer in the adjacent suburb of Riverdale recognized the suspect from a previous arrest. By the time he was captured at another train station, just 90 minutes after the shooting, authorities already had his name, address, and previous arrest history.

Little of this process would come as much surprise to Chicagoans. The city has tens of thousands of surveillance cameras—up to 45,000, by some estimates. That’s among the highest numbers per capita in the US. Chicago boasts one of the largest license plate reader systems in the country, and police can access audio and video surveillance from independent agencies such as the Chicago Public Schools, the Chicago Park District, and the public transportation system, as well as from many residential and commercial security systems, including Ring doorbell cameras.

Law enforcement and security advocates say this vast monitoring system protects public safety and works well. But activists and many residents say it’s a surveillance panopticon that creates a chilling effect on behavior and violates guarantees of privacy and free speech. 

Black and Latino communities in Chicago have historically been targeted by excessive policing and surveillance, says Lance Williams, a scholar of urban violence at Northeastern Illinois University. That scrutiny has created new problems without delivering the promised safety, he suggests. In order to “solve the problem of crime or violence and make these communities safer,” he says, “you have to deal with structural problems,” such as the shortage of livable-wage jobs, affordable housing, and mental-health services across the city.

Recent years have seen some effective pushback against the surveillance. Until recently, for example, the city was the largest customer of ShotSpotter acoustic sensors, which are designed to detect gunfire and alert police. The system was introduced in a small area on the South Side in 2012. By 2018, an area of about 136 square miles—some 60% of the city—was covered by the acoustic surveillance network.

Critics questioned ShotSpotter’s effectiveness and objected that the sensors were installed largely in Black and Latino neighborhoods. Those critiques gained urgency with the fatal shooting in March 2021 of a 13-year-old, Adam Toledo, by police responding to a ShotSpotter alert. The tragedy became the touchstone of the #StopShotSpotter protest movement and one of the major issues in Brandon Johnson’s successful mayoral campaign in 2023. When he took office, Johnson followed through, ending the city’s contract with SoundThinking, the San Francisco Bay Area company behind ShotSpotter. In total, it’s estimated, the city paid more than $53 million for the system.

In response to a request for comment, SoundThinking said that ShotSpotter enables law enforcement “to reach the scene faster, render aid to victims, and locate evidence more effectively.” It said the company “plays no part in the selection of deployment areas” but added: “We believe communities experiencing the highest levels of gun violence deserve the same rapid emergency response as any other neighborhood.” 

While there has been successful resistance to police surveillance in the nation’s third-largest city, there are also countervailing forces: Governments and officials in Chicago and the surrounding suburbs are moving to expand the use of surveillance, often likewise in response to public pressure. Even the victory against acoustic surveillance might be short-lived. Early last year, the city issued a request for proposals for gun violence detection technology.

Many people in and around Chicago—digital privacy and surveillance activists, defense attorneys, law enforcement officials, and ordinary citizens—are part of this push and pull. Here are some of their stories. 


Alejandro Ruizesparza and Freddy Martinez
Cofounders, Lucy Parsons Labs

Oak Park, a quiet suburb at Chicago’s western border, is the birthplace of Ernest Hemingway. It includes the world’s largest collection of Frank Lloyd Wright–designed buildings and homes. 

Until recently, the village of Oak Park was also the center of a three-year-long campaign against an unwelcome addition to its manicured lawns and Prairie-style architecture: automated license plate readers from a company called Flock Safety. These are high-speed cameras that automatically scan license plates to look for stolen or wanted vehicles, or for drivers with outstanding warrants. 

Freddy Martinez (left) and Alejandro Ruizesparza (right) direct Lucy Parsons Labs, a charitable organization focused on digital rights.
AKILAH TOWNSEND

An Oak Park group called Freedom to Thrive—made up of parents, activists, lawyers, data scientists, and many others—suspected that this technology was not a good or equitable addition to their neighborhood. So the group engaged the Chicago-based nonprofit Lucy Parsons Labs to help navigate the often intimidating process of requesting license plate reader data under the Illinois Freedom of Information Act.

Lucy Parsons Labs, which is named for a turn-of-the-century Chicago labor organizer, investigates technologies such as license plate readers, gunshot detection systems, and police bodycams. 

LPL provides digital security and public records training to a variety of groups and is frequently called on to help community members audit and analyze surveillance systems that are targeting their neighborhoods. It’s led by two first-generation Mexican-Americans from the city’s Southwest Side. Alejandro Ruizesparza has a background in community organizing and data science. Freddy Martinez was also a community organizer and has a background in physics.

The group is now approaching its 10th year, but it was an all-volunteer effort until 2022. That’s when LPL received its first unrestricted, multi-year operational grant from a large foundation: the Chicago-based John D. and Catherine T. MacArthur Foundation, known worldwide for its “genius grants.” A grant from the Ford Foundation followed the next year.

The additional resources—a significant amount compared with the previous all-volunteer budget, acknowledges Ruizesparza—meant the two cofounders and two volunteers became full-time employees. But the group is determined not to become “too comfortable” and lose its edge. There is a tenacity to Lucy Parsons Labs’ work—a “sense of scrappiness,” they say—because “we did so much of this work with no money.” 

One of LPL’s primary strategies is filing extensive FOIA requests for raw data sets of police surveillance. The process can take a while, but it often surfaces practices that would otherwise stay hidden.

In the case of Oak Park, the FOIA requests were just one tool that Freedom to Thrive and LPL used to sort out what was going on. The data revealed that in the first 10 months of operation, the eight Flock license plate readers the town had deployed scanned 3 million plates. But only 42 scans led to an alert—a hit rate of just 0.0014%.

At the same time, the impact was disproportionate. While Oak Park’s population of about 53,000 is only 19% Black, Black drivers made up 85% of those flagged by the Flock cameras, seemingly amplifying what were already concerning racial disparities in the village’s traffic stops. Flock did not respond to a request for comment.

“We became almost de facto experts in navigating the process and the law. I think that sort of speaks to some of the DIY punk aesthetic.”

Freddy Martinez, cofounder, Lucy Parsons Labs

LPL brings a mix of radical politics and critical theory to its mission. Most surveillance technologies are “largely extensions of the plantation systems,” says Ruizesparza. 

The comparison makes sense: Many slaveholding communities required enslaved persons to carry signed documents when leaving plantations and to wear numbered badges sewn to their clothing. The group says it aims to empower local communities to push back against biased policing technologies through technical assistance, training, and litigation—and to demystify algorithms and surveillance tools in the process.

“When we talk to people, they realize that you don’t need to know how to run a regression to understand that a technology has negative implications on your life,” says Ruizesparza. “You don’t need to understand how circuits work to understand that you probably shouldn’t have all of these cameras embedded in only Black and brown regions of a city.”

The group came by some of its techniques through experimentation. “When LPL was first getting started, we didn’t really feel like FOIA would have been a good way of getting information. We didn’t know anything about it,” says Martinez. “Along the way, we were very successful in uncovering a lot of surveillance practices.” 

One covert surveillance practice uncovered by those aggressive FOIA requests, for example, was the Chicago Police Department’s use of “Stingray” equipment: portable cell-site simulators deployed to track and monitor mobile phones.

The contentious issue of Oak Park’s license plate readers was finally put to a vote in late August. The village trustees voted 5–2 to terminate the contract with Flock Safety. 

Since then, community-­based groups from across the country—as far away as California—have contacted LPL to say the Chicago collective’s work has inspired their own efforts, says Martinez: “We became almost de facto experts in navigating the process and the law. I think that sort of speaks to some of the DIY punk aesthetic.”


Brian Strockis
Chief, Oak Brook Police Department

If you drive about 20 miles west of Chicago, you’ll find Oakbrook Center, one of the nation’s leading luxury shopping destinations. The open-air mall includes Neiman Marcus, Louis Vuitton, and Gucci and attracts high-end shoppers from across the region. It’s also become a destination for retail theft crews that coordinate “smash and grabs” and often escape with thousands of dollars’ worth of inventory that can be quickly sold, such as sunglasses or luxury handbags.

In early December, police say, a Chicago man tried to lead officers on what could have been a dangerous high-speed chase from the mall. Patrol cars raced to the scene. So did a “first responder drone,” built by Flock Safety and deployed by the Oak Brook Police Department.  

The drone identified the suspect vehicle from the mall parking lot using its license plate reader and snapped high-definition photos that were texted to officers on the ground. The suspect was later tracked to Chicago, where he was arrested. 

Brian Strockis, chief of the Oak Brook Police Department, led the way in introducing drones as first responders in the state of Illinois.
AKILAH TOWNSEND

This was the type of outcome that Brian Strockis, chief of the Oak Brook Police Department, hoped for when he pioneered the “drone as first responder,” or DFR, program in Illinois. A longtime member of the force, he joined the department almost 25 years ago as a patrol officer, rose through the ranks, and was named chief in 2022.

Oak Brook was the first municipality in Illinois to deploy a drone as a first responder. One of the main reasons, says Strockis, was to reduce the number of high-speed chases, which are potentially dangerous to officers, suspects, and civilians. A drone is also a more effective and cost-efficient way to deal with suspects in fleeing vehicles, says Strockis.

Police say there was the potential for a dangerous high-speed chase. Patrol cars raced to the scene. But the first unit to arrive was a drone.

“It’s a force multiplier in that we’re able to do more with less,” says the chief, who spoke with me in his office at Oak Brook’s Village Hall. 

The department’s drone autonomously launches from the roof of the building and responds to about 10 to 12 service calls per day, at speeds up to 45 miles per hour. It arrives at crime scenes before patrol officers in nine out of every 10 cases.

Next door to Village Hall is the Oak Brook Police Department’s real-time crime center, a large room with two video walls that integrates livestreams from the first-responder drone, handheld drones, traffic cameras, license plate readers, and about a thousand private security cameras. When I visited, the two DFR operators demonstrated how the machine can fly itself or be directed to locations from a destination entered on Google Maps. They sent it off to a nearby forest preserve and then directed it to return to the rooftop base, where it docks automatically, changes batteries, and charges. After the demo, one of the drone operators logged the flight, as required by state law.

Strockis says he is aware of the privacy concerns around using this technology but that protections are in place. 

For example, the drone cannot be used for random or mass surveillance, he says, because the camera is always pointed straight ahead during flight and does not angle down until it reaches its desired location. The drone’s payload does not include facial recognition technology, which is restricted by state law, he says. 

The drone video footage is invaluable, he adds, because “you are seeing the events as they’re transpiring from an angle that you wouldn’t otherwise be privy to.” 

It’s an extra layer of protection for the public as well as for the officers, says the chief: “For every incident that an officer responds to now, you have squad car and bodycam video. You likely have cell-phone video from the public, officers, complainants, from offenders. So adding this element is probably the best video source on a scene that the police are going to anyway.”


Mark Wallace
Executive director, Citizens to Abolish Red Light Cameras

Mark Wallace wears several hats. By day he is a real estate investor and mortgage lender. But he is probably best known to many Chicagoans—especially across the city’s largely African-American communities on the South and West Sides—as a talk radio host for the station WVON and one of the leading voices against the city’s extensive network of red-light and speed cameras. 

For the past two decades, city officials have maintained that the cameras—which are officially known as “automated enforcement”—are a crucial safety measure. They are also a substantial revenue stream, generating around $150 million a year and a total of some $2.5 billion since they were installed.

Urged on by a radio listener, Mark Wallace started organizing against Chicago’s red-light and speed cameras, a substantial revenue stream for the city that has been found to disproportionately burden majority Black and Latino areas.
AKILAH TOWNSEND

“The one thing that the cameras have the ability to do is generate a lot of money,” Wallace says. He describes the tickets as a “cash grab” that disproportionately affects Black and Latino communities.

A groundbreaking 2022 analysis by ProPublica found, in fact, that households in majority Black and Latino zip codes were ticketed at much higher rates than others, in part because the cameras in those areas were more likely to be installed near expressway ramps and on wider streets, which encouraged faster speeds. The tickets, which can quickly rack up late fees, also caused more of a financial burden in such communities, the report found.

These were some of the same concerns that many people expressed on the radio and in meetings, Wallace says. 

Chicago’s automated traffic enforcement began in 2003, and it became the most extensive—and most lucrative—such program in the country. The city operates about 300 red-light cameras, plus some 200 speed cameras set up near schools and parks. The cost of the tickets can quickly double if they are not paid or contested—providing a windfall for the city.

Wallace began his advocacy against the cameras soon after arriving at the radio station in the early 2010s. A younger listener called in and said, he recalls, “that he enjoyed the information that came from WVON but that we didn’t do anything.” The comment stuck with him, especially in light of WVON’s storied history. The station was closely involved in the civil rights movement of the 1960s and broadcast Martin Luther King Jr.’s speeches during his Chicago campaign.

Wallace hoped to change the caller’s perception about the station. He had firsthand experience with red-light cameras, having been ticketed himself, and decided to take them on as a cause. He scheduled a meeting at his church for a Friday night, promoting it on his show. “More than 300 people showed up,” he remembers, chatting with me in the spacious project studio and office in the basement of his townhouse on the city’s South Side. “That said to me there are a lot of people who see this inequity and injustice.”

Wallace began using his platform on WVON—The People’s Show—to mobilize communities around social and economic justice, and many discussions revolved around the automated enforcement program. The cause gained traction after city and state officials were found to have taken thousands of dollars from technology and surveillance companies to make sure their cameras remained on the streets.

Wallace and his group, Citizens to Abolish Red Light Cameras, want to repeal the ordinances authorizing the city’s camera programs. That hasn’t happened so far, but political pressure from the group paved the way for a Chicago City Council ordinance that required public meetings before any red-light cameras are installed, removed, or relocated. The group hopes for more restrictions for speed cameras, too.

“It was never about me personally. It was about ensuring that we could demonstrate to people that you have power,” says Wallace. “If you don’t like something, as Barack Obama would say, get a pen and clipboard and go to work to fight to make these changes.” 


Jonathan Manes
Senior counsel, MacArthur Justice Center

Derick Scruggs, a 30-year-old father and licensed armed security guard, was working in the parking lot of an AutoZone on Chicago’s Southwest Side on April 19, 2021. That’s when he was detained, interrogated, and subjected to a “humiliating body search” by two Chicago police officers, Scruggs later attested. “I was just doing my job when police officers came at me, handcuffed me, and treated me like a criminal—just because I was near a ShotSpotter alert,” he says.

The officers found no evidence of a shooting and released Scruggs. But the next day, the police returned and arrested him for an alleged violation related to his security guard paperwork. Prosecutors later dismissed the charges, but he was held in custody overnight and was then fired from his job. “Because of what they did,” he says, “I lost my job, couldn’t work for months, and got evicted from my apartment.”

Jonathan Manes litigated cases related to detentions at Guantanamo Bay and the legality of drone strikes before turning his attention to Chicago’s implementation of gunshot detection technology.
AKILAH TOWNSEND

Scruggs is believed to be among thousands of Chicagoans who’ve been questioned, detained, or arrested by police because they were near the location of a ShotSpotter alert, according to an analysis by the City of Chicago Office of Inspector General. The case caught the attention of Jonathan Manes, a law professor at Northwestern and senior counsel at the MacArthur Justice Center, a public interest law firm. 

Manes previously worked in national security law, but when he joined the justice center about six years ago, he chose to focus squarely on the intersection of civil rights with police surveillance and technology. “My goal was to identify areas that weren’t well covered by other civil rights organizations but were a concern for people here in Chicago,” he says. 

“There is a need for much broader structural change to how the city chooses to use surveillance technology and then deploys it.”

Jonathan Manes, senior counsel, MacArthur Justice Center

And when he and his colleagues looked into ShotSpotter, they revealed a disturbing problem: The system generated alerts that yielded no evidence of gun-related crimes but were used by police as a pretext for other actions. There seemed to be “a pattern of people being stopped, detained, questioned, sometimes arrested, in response to a ShotSpotter alert—often resulting in charges that have nothing to do with guns,” Manes says.

The system also directed a “massive number of police deployments onto the South and West Sides of the city,” Manes says. Those regions are home to most of Chicago’s Black and Latino residents. The research showed that 80% of the city’s Black population but only 30% of its white population lived in districts covered by the system. 

Manes brought Scruggs’s case into a lawsuit that he was already developing against the city’s use of ShotSpotter. In late 2025, he and his colleagues reached a settlement that prohibits police officers from doing what they did in Scruggs’s case—stopping or searching people simply because they are near the location of a gunshot detection alert. 

Chicago had already decommissioned ShotSpotter in 2024, but the agreement will cover any future gunshot detection systems. Manes is carefully watching to see what happens next.

Though Manes is pleased with the settlement, he points out that it narrowly focused on how police resources were used after the gunshot detection system was operational. “There is a need for much broader structural change to how the city chooses to use surveillance technology and then deploys it,” he adds. He supports laws that require disclosure from local officials and law enforcement about what technologies are being proposed and how civil rights could be affected.  

More than two dozen jurisdictions nationwide have adopted surveillance transparency laws, including San Francisco, Seattle, Boston, and New York City. But so far Chicago is not on that list. 

Rod McCullom is a Chicago-based science and technology writer whose focus areas include AI, biometrics, cognition, and the science of crime and violence.  

The human work behind humanoid robots is being hidden

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In January, Nvidia’s Jensen Huang, the head of the world’s most valuable company, proclaimed that we are entering the era of physical AI, when artificial intelligence will move beyond language and chatbots into physically capable machines. (He also said the same thing the year before, by the way.)

The implication—fueled by new demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate the way humans think, learn, and adapt while they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new forms of work emerging around them.

Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to Black Mirror–esque scenarios. A worker in Shanghai, for example, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, Rest of World reported. In North America, the robotics company Figure appears to be planning something similar: It announced in September it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)

Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already beginning. The roboticist Aaron Prather told me about recent work with a delivery company that had its workers wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubts about it.” 

Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a tricky task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, looking through its cameras to iron clothes or unload the dishwasher.

This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your house through a robot. And if home humanoids are not genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.

We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social media platforms or assembling training data for AI companies often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon be able to train on its own outputs, even the best models require an awful lot of human feedback to work as desired.

These human workforces do not mean that AI is just vaporware. But when they remain invisible, the public consistently overestimates the machines’ actual capabilities.

That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for example, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman (Tesla was ordered to pay $240 million in damages). 

The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics companies remain as opaque about training and tele-operation as AI firms are about their training data. If that does not change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.

Measles cases are rising. Other vaccine-preventable infections could be next.

There’s a measles outbreak happening close to where I live. Since the start of this year, 34 cases have been confirmed in Enfield, a northern borough of London. Most of those affected are children under the age of 11. One in five have needed hospital treatment.

It’s another worrying development for an incredibly contagious and potentially fatal disease. Since October last year, 962 cases of measles have been confirmed in South Carolina. Large outbreaks (with more than 50 confirmed cases) are underway in four US states. Smaller outbreaks are being reported in another 12 states.

The vast majority of these cases have been children who were not fully vaccinated. Vaccine hesitancy is thought to be a significant reason children are missing out on important vaccines—the World Health Organization described it as one of the 10 leading threats to global health in 2019. And if we’re seeing more measles cases now, we might expect to soon see more cases of other vaccine-preventable infections, including some that can cause liver cancer or meningitis.

Some people will always argue that measles is not a big deal—that infections used to be common, and most people survived them and did just fine. It is true that in most cases kids do recover well from the virus. But not always.

Measles symptoms tend to start with a fever and a runny nose. The telltale rash comes later. In some cases, severe complications develop. They can include pneumonia, blindness, and inflammation of the brain. Some people won’t develop complications until years later. In rare cases, the disease can be fatal.

Before the measles vaccine was introduced, in 1963, measles epidemics occurred every two to three years, according to the WHO. Back then, around 2.6 million people died from measles every year. Since it was introduced, the measles vaccine is thought to have prevented almost 59 million deaths.

But vaccination rates have been lagging, says Anne Zink, an emergency medicine physician and clinical fellow at the Yale School of Public Health. “We’ve seen a slow decline in people who are willing to get vaccinated against measles for some time,” she says. “As we get more and more people who are at risk because they’re unvaccinated, the higher the chances that the disease can then spread and take off.”

Vaccination rates need to be at 95% to prevent measles outbreaks. But rates are well below that level in some regions. Across South Carolina, the proportion of kindergartners who received both doses of the MMR vaccine, which protects against measles as well as mumps and rubella, has dropped steadily over the last five years, from 94% in 2020-2021 to 91% in 2024-2025. Some schools in the state have coverage rates as low as 20%, state epidemiologist Linda Bell told reporters last month.

Vaccination rates are low in London, too. Fewer than 70% of children have received both doses of their MMR by the time they turn five, according to the UK Health Security Agency. In some boroughs, vaccination rates are as low as 58%. So perhaps it’s not surprising we’re seeing outbreaks.

The UK is one of six countries to have lost measles elimination status last month, along with Spain, Austria, Armenia, Azerbaijan, and Uzbekistan. Canada lost its elimination status last year.

Highly contagious, measles could be a bellwether for other vaccine-preventable diseases. Zink is already seeing signs. She points to a case of polio that paralyzed a man in New York in 2022. That happened when rates of polio vaccination were low, she says. “Polio is a great example of … a disease that is primarily asymptomatic, and most people don’t have any symptoms whatsoever, but for the people who do get symptoms, it can be life-threatening.”

Then there’s mumps—another disease the MMR vaccine protects against. It’s another one of those infections that can be symptom-free and harmless in some, especially children, but nasty for others. It can cause a painful swelling of the testes, and other complications include brain swelling and deafness. (From my personal experience of being hospitalized with mumps, I can attest that even “mild” infections are pretty horrible.)

Mumps is less contagious than measles, so we might expect a delay between an uptick in measles cases and the spread of mumps, says Zink. But she says that she’s more concerned about hepatitis B.

“It lives on surfaces for a long period of time, and if you’re not vaccinated against it and you’re exposed to it as a kid, you’re at a really high risk of developing liver cancer and death,” she says.

Zink was formerly chief medical officer of Alaska, a state that in the 1970s had the world’s highest rate of childhood liver cancer caused by hepatitis B. Screening and universal newborn vaccination programs eliminated the virus’s spread.

Public health experts worry that the current US administration’s position on vaccines may contribute to the decline in vaccine uptake. Last month the US Centers for Disease Control and Prevention approved changes to childhood vaccination recommendations. The agency no longer recommends the hepatitis B vaccine for all newborns. The chair of the CDC’s vaccine advisory panel has also questioned broad vaccine recommendations for polio.

Parents are even refusing vitamin injections, says Zink. A shot of vitamin K at birth can help prevent severe bleeding in some babies. But recent research suggests that parents of 5% of newborns are refusing it (up from 2.9% in 2017).

“I can’t tell you how many of my pediatric [doctor] friends have told me about having to care for a kiddo in the ICU with … bleeding into their brain because the kid didn’t get vitamin K at birth,” says Zink. “And that can kill kids, [or have] lifelong, devastating, stroke-like symptoms.”

All this paints a pretty bleak picture for children’s health. But things can change. Vaccination can still offer protection to plenty of people at risk of infection. South Carolina’s Department of Public Health is offering free MMR vaccinations to residents at mobile clinics.

“It’s easy to think ‘It’s not going to be me,’” says Zink. “Seeing kiddos who don’t have the agency to make decisions [about vaccination] being so sick from vaccine-preventable diseases, to me, is one of the most challenging things of practicing medicine.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Job titles of the future: Breast biomechanic

Twenty years ago, Joanna Wakefield-Scurr was having persistent pain in her breasts. Her doctor couldn’t diagnose the cause but said a good, supportive bra could help. A professor of biomechanics, Wakefield-Scurr thought she could do a little research and find a science-backed option. Two decades later, she’s still looking. Wakefield-Scurr now leads an 18-person team at the Research Group in Breast Health at the University of Portsmouth in the UK. Their research shows that the most effective high-impact-sports bras have underwires, padded cups, adjustable underbands and shoulder straps, and hook-and-eye closures. These bras reduce breast movement by up to 74% when compared with wearing no bra. But movement might not be the only metric that matters.

A biological rarity

Few anatomical structures hang outside of the body unsupported by cartilage, muscle, or bone—meaning there wasn’t much historical research to build on. Wakefield-Scurr’s lab was the first to find that when women run, the motion of the torso causes breasts to move in a three-dimensional pattern—swinging side to side and up and down—as well as moving forward and backward. In an hour of slow jogging, boobs can bounce approximately 10,000 times.
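The bounce figure checks out with simple arithmetic. A minimal sketch, assuming a slow-jog cadence of roughly 160 to 170 steps per minute and one bounce cycle per step (the cadence is my assumption, not a figure from the lab):

```python
# Sanity check of the "approximately 10,000 bounces per hour" figure.
# Assumption (not from the article): one bounce cycle per running step,
# at a slow-jog cadence of 160-170 steps per minute.

def bounces_per_hour(steps_per_minute: int) -> int:
    """Bounce cycles in one hour, assuming one bounce per step."""
    return steps_per_minute * 60

for cadence in (160, 170):  # plausible slow-jog cadences, steps/min
    print(f"{cadence} steps/min -> {bounces_per_hour(cadence):,} bounces/hour")
# prints 9,600 and 10,200 -- both near the article's ~10,000
```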

A sports necessity

Wearing a bra that’s too tight can limit breathing. Wearing one that’s too loose can create back, shoulder, and neck pain. Pain can also be caused by the lag between torso and breast movement, which causes what is scientifically known as “breast slap.”

The lab’s research has also found that the physical discomfort of bad bras, combined with the embarrassment of flopping around, is one of the biggest barriers to exercise for women, and that if women have a good sports bra, they’re more willing to go for a run.

An open question

Some bras function by deliberately compressing breasts. Others encapsulate and support each individual breast. But scientists still don’t know whether it’s more biomechanically important to reduce the breasts’ motion entirely, to reduce the speed at which they move, or to reduce breast slap. Will women constantly be forced to choose between the comfort of a stretchier bra and the support of a more restrictive one?

Wakefield-Scurr is excited about new materials she’s tested that tighten or stretch depending on how you move. She’s working with fabric manufacturers and clothing companies to try out their wares.

As more women take up high-impact sports, the need to understand what makes a good bra grows. Wakefield-Scurr says her lab can’t keep up with demand. Their cups runneth over.

Sara Harrison is a freelance journalist who writes about science, technology, and health.

Community service

The bird is a beautiful silver-gray, and as she dies twitching in the lasernet I’m grateful for two things: First, that she didn’t make a sound. Second, that this will be the very last time. 

They’re called corpse doves—because the darkest part of their gray plumage surrounds the lighter part, giving the impression that skeleton faces are peeking out from behind trash cans and bushes—and their crime is having the ability to carry diseases that would be compatible with humans. I open my hand, triggering the display from my imprinted handheld, and record an image to verify the elimination. A ding from my palm lets me know I’ve reached my quota for the day and, with that, the year.

I’m tempted to give this one a send-off, a real burial with holy words and some flowers, but then I hear a pack of streetrats hooting beside me. My city-issued vest is reflective and nanopainted so it projects a slight glow. I don’t know if it’s to keep us safe like they say, or if it’s just that so many of us are ex-cons working court-ordered labor, and civilians want to be able to keep an eye on us. Either way, everyone treats us like we’re invisible—everyone except children.

I switch the lasernet on the bird from electrocute to incinerate and watch as what already looked like a corpse becomes ashes.

“Hey, executioner!” says a girl.

“Executioner” is not my official title. The branch of city government we work for is called the Department of Mercy, and we’re only ever called technicians. But that doesn’t matter to the child, who can’t be more than eight but has the authority of a judge as she holds up a finger to point me out to her friends.

bird talon (photograph: Henry Horenstein)

“Guys, look!” she says, then turns her attention to me. “You hunting something big?”

I shake my head, slowly packing up my things.

“Something small?” she asks. Then her eyes darken. “You’re not a cat killer, are you?”

“No,” I say quickly. “I do horseflies.”

I don’t know why I lied, but as the suspicion leaves her face and a smile returns, I’m glad I did.

“You should come down by the docks. We’ve got flies! Make your quota in a day.”

The girl tosses her hair, making the tinfoil charms she’s wrapped around her braids tinkle like wind chimes. 

“It’s my last day. But if I get flies again for next year, I’ll swing by.”

Another lie, because we both know the city would never send anyone to the docks for flies. Flies are killed because they are a nuisance, which means people only care about clearing them out of suburbs and financial districts. They’d only send a tech down to the docks to kill something that put the city proper at risk through disease, or by using up more resources than they wanted to spare.

LeeLee is expecting me home to sit through the reassignments with her and it’s already late, so I hand out a couple of the combination warming and light sticks I get for winter to the pack of children with nowhere to go. As I walk away, the children are laughing so loud it sounds like screaming. They toss the sticks in the air like signal flares, small bright cries for help that no one will see.


LeeLee’s anxiety takes the form of caretaking, and as soon as I’ve stepped through the door I can smell bread warming and soup on the stove. I take off my muffling boots. Another day, I’d leave them on and sneak up on her just to be irritating, and she’d turn and threaten me with whatever kitchen utensil was at hand. But she’ll be extra nervous today, so I remove the shoes that let me catch nervous birds, and step hard on my way in.

Sometimes it seems impossible that I can spend a year killing every fragile and defenseless thing I’ve encountered but still take such care with Lee. But I tell myself that the killing isn’t me—it’s just my sentence, and what I do when I have a choice is the only thing that really says anything about me. For the first six months and 400 birds, I believed it.

LeeLee flicks on a smile that lasts a whole three seconds when she sees me, then clouds over again.

“Soup’s too thin. There wasn’t enough powder for a real broth.”

“I like thin soup,” I say.

“Not like this. It doesn’t even cover up the taste of the water.”

“I like the taste of the water,” I say, which breaks her out of her spiraling enough to roll her eyes.

I put my hands on her shoulder to stop her fussing. 

“The soup is going to be fine,” I say. “So will the reassignment.”

I’m not much taller than she is, but when we met in juvie she hadn’t hit her last growth spurt yet, so she still tilts her head back to look me in the eyes. “What if it’s not?”

“It will—”

“What if you get whatever assignment Jordan got?”

There it is. Because two of us didn’t leave juvie together to start community service—three of us did. But Jordan didn’t last three weeks into his assignment before he turned his implements inward.

I notice she doesn’t say What if I get what Jordan got? Because LeeLee is more afraid of being left alone than of having to kill something innocent.

“We don’t know what his assignment was,” I say.

It’s true, but we do know it was bad. Two weeks into our first stretch, a drug meant to sterilize the city’s feral cat population accidentally had the opposite effect. Everyone was pulled off their assigned duty for three days to murder litters of new kittens instead. It nearly broke me and Lee, but Jordan seemed almost grateful.

“Besides, we don’t know if his assignment had anything to do with … what he did. You’re borrowing trouble. Worry in”—I check my palm—“an hour, when you actually know there’s something to worry about.”

You’d think it would hover over us too insistently to be ignored, but after we sit down and talk about our day I’m at ease, basking in the warmth of her storytelling and the bread that’s more beige than gray today. When the notification comes in, I am well and truly happy, and I can only hope it isn’t for the last time.

We both stiffen when we hear the alert. She looks at me, and I give her a smile and a nod, and then we look down. In the time between hearing the notification and checking it, I imagine all kinds of horrors that could be in my assignment slot. I imagine a picture of kittens, reason enough for the girl I met earlier to condemn me. For a moment, just a flash, I imagine looking down and seeing my own face as my target, or LeeLee’s.

But when I finally see the file, the relief that comes over me softens my spine. It’s a plant. Faceless, and bloodless. 

I look up, and LeeLee’s eyes are dark as she leans forward, studying my face, looking for whatever crack she failed to see in Jordan. I force myself to smile wide for her.

“It’s a plant. I got a plant, Lee.”

She reaches forward and squeezes my hands. Hers are shaking.

“What did you get?” I ask.

She waves away my question. “I got rats. I can handle it. I was just worried about you.”

I spend the rest of the night unbelievably happy. For the next year, I get to kill a thing that does not scream.


“You get all that?” the man behind the desk asks, and I nod even though I didn’t.

I’ve traded in my boots and lasernet for a hazmat suit and a handheld mister with two different solutions. The man had been talking to me about how to use the solutions, but I can’t process verbal information very well. The whole reason I was sent to the correctional facility as a teen was that too many teachers mistook my processing delays for behavioral infractions. I’m planning to read the manual on my own time before I start in a few hours, but when I pick up the mister and look down the barrel, the equipment guy freaks out.

“Did you not hear me? Don’t even look at that thing without your mask on.” He takes a breath, calmer now that I’ve lowered my hands. “Look, the first solution—it’s fine. It’s keyed to the plant itself and just opens its cells up for whatever solution we put on it. You could drink the stuff. But that second? The orange vial? Don’t even put it in the mister without your mask on. It dissipates quickly, so you’re good once you’re done spraying, but not a second before.”

He looks around, then leans in. “They were supposed to add sulfur to this batch, but they didn’t. So you won’t smell it. It won’t make you cough or your eyes water. It’ll just be lights out. Good night. You got me?”

I nod again as I grab the mask I hadn’t noticed before. This time when I thank him, I mean it.


It takes me an hour to find the first plant, and when I do it’s beautiful. Lush pink on the inside and dark green on the outside, it looks hardy and primitive. Almost Jurassic. I can see why it’s only in the sewers now: it would be too easy to spot and destroy aboveground in the sea of concrete.

After putting on my mask, I activate the mister and then stand back as it sprays the plant with poison. Nothing happens. I remember the prepping solution and switch the cartridges to coat it in that first. The next time I try the poison, the plant wilts instantly, browning and shrinking like a tire deflating. I was wrong. Plants this size don’t die silently. It makes a wheezing sound, a deep sigh. By the third time I’ve heard it, I swear I can make out the word Please.

sprout (photograph: Henry Horenstein)

When I get home, LeeLee’s locked herself in the bathroom, which doesn’t surprise me. I heard that they moved to acid for rats, and the smell of a corpse dissolving is impossible to get used to and even harder to get out of your hair. I eat dinner, read, change for bed, and she’s still in the bathroom. I brush my teeth in the kitchen.


The next morning, I have to take a transport to the plant’s habitat on the other end of the city, so I spend the time looking through the file that came with the assignment. Under “Characteristics,” some city government scientist has written, “Large, dark. Resource-intensive. Stubborn.”

I stare at the last word. Its own sentence, tacked on like an afterthought. Stubborn. The same word that was written in my file when I got sent from school to the facility where I met LeeLee and Jordan. Large, dark, stubborn, and condemned. I’ve never been called resource-intensive. But I have been called a waste.

And maybe that’s why I do it.

When I get to my last plant of the day, I don’t reach for the mister. This one is small, young, the green still neon-bright and the teeth at the edges still soft. I pick it up, careful with its roots, and carry it home. I find a discarded water container along the way and place it inside. When I get home I knock on LeeLee’s door. She doesn’t answer, so I leave the plant on the floor as an offering. They aren’t proper flowers, but they smell nice and earthy. It might keep the residual odor from melted organs, fur, and bones from taking over her room.


“Killing things is a dumb job,” says the girl.

After a week of hearing the death cries of its cousins, I was moved to use some of my allowance to buy cheap fertilizer and growth serum for my plant. The girl and her friends, fewer than before, were panhandling at the megastore across the way. She ran over, braids jingling, as soon as she saw me. I thought she’d leave once I gave her more glowsticks for her friends, but she stayed in step and kept following me.

“It’s not a dumb job,” I say, even though it is. 

“What’s the point?”

I shift my bag to point at the bottom of my vest. Beneath “Mercy Dept.” the department’s slogan is written in cursive: Killing to Save! 

“See?”

She sees the text but doesn’t register it, and I have to remind myself that even getting kicked out of school is a privilege. The city had decided to stop wasting educational resources on me. They’d never even tried with her or the other streetrats.

“It just means we kill to help.”

“That doesn’t make sense.”

Suddenly, all I can think about is Jordan. “Maybe they don’t mind.”

“What?”

I think of the plants. Maybe they hadn’t been pleading. Maybe they’d been sighing with relief. I think of the birds that eventually stopped running away.

“Maybe they’re tired. The city’s right, and their existence isn’t compatible with the world we made. And that’s our fault for being stupid and cruel, but it makes their lives so hard. We’ve made it so they can only live half a life. Maybe the least we can do is finish the job.”

It’s a terrible thing to say—even worse to a kid.

Her eyes go hard. “What are you killing now, executioner?”

The question surprises me. “Sewer plants. Why?”

“I don’t believe you.”

I’d wanted her to leave me alone, but when she runs away I feel suddenly empty.


I have an issue at work when I can’t find my poison vial. I tell them it rolled away in the sewer and I couldn’t catch it in time, because I don’t want to tell them I was unobservant enough to let a street kid steal from me. After a stern warning and a mountain of forms, they issue a new vial and don’t add to my service time.

Pulling overtime to make up for the day I didn’t have my poison means it’s days before I get to fertilize my houseplant. LeeLee’s door is open, so I bring in the fertilizer and serum. She’s put the plant on her windowsill, but it prefers indirect sunlight, so I move it to the shelf next to her boxes of knickknacks and trinkets. I add the fertilizer to its soil and am about to spray it with the growth serum when I get an idea. I get the mister from my kit and set it up to spray the prepping solution on the little plant to prime it. I open the window and put on my mask, just in case, but I’m sure the man was telling the truth when he called the first liquid harmless. After its cells are open, I spray it with my store-bought growth serum.

I’m halfway through making dinner when I hear the crash and run into LeeLee’s room.

“Shit!”

The plant has grown huge, turning adult instantly, and its new weight has taken down LeeLee’s shelf. Dainty keepsake boxes are shattered on our concrete floor.

I bend to my knees quickly, so focused on fixing my mistake that I don’t register the oddness of the items I’m picking up—jacks, kids’ toys, a bow—until my fingers touch something small and shimmering. It’s a scrap of silver, still rounded in the shape of the braids it was taken from.

I got rats. I can handle it.

I’d forgotten the city has more than one kind.


I’m waiting up when Lee gets home. I don’t make her tell me. I just grab her kit and rummage through it. Where my kit has a hazmat suit, hers has a stealth mesh to render her invisible. Where I keep my mister, she has a gun loaded with vials too large for rats. I have a mini-vac to suck up excess plant matter to prevent seeds from sprouting. She has zip ties.

By the time I’m done, she’s already cracking under the weight of everything she tried to protect me from. Within moments she’s sobbing on the floor. I carry her to her bed and get in beside her. I try not to listen too closely as she recounts every horrible moment, but I’m listening at the end, when she tells me she can’t do it anymore. When she confesses that she’s the one who stole my poison, and has only been waiting to take it because she didn’t have the stomach to do to me what Jordan did to us.

I leave her for just a moment, but by the time I lie back in bed beside her I’ve figured it out.

I tell her that she won’t have to take her shift tomorrow. I tell her I’m going to go around the city with my mister and my growth serum. That I’ll move plants from sewers to the yards around City Hall and every public space and the support pylons of important people’s companies, and then spray them so they become huge. The city will freak. I tell her it will be like the kittens, but this time we’ll all be pulled off our assignments to kill plants. And maybe the serum will work too well. Maybe the city was right to fear these plants, and they will grow and grow and eat our concrete while the roots crack our foundations and cut our electricity and everything will crumble. And the people with something to lose might suffer, but the rest of us will just laugh at the perfection of rubble. I tell her how we’ll make playgrounds of dead data centers and use hoses to fill the holes where skyscrapers were, and kids will play Marco Polo swimming over a CEO’s sunken office. 

She asks if I’ll put any at our old detention center.

I tell her, Hundreds.

I talk long enough that her eyes close, and loud enough that neither of us can hear the sound of my mister blowing. The man who gave it to me was right. Even without the mask, it doesn’t smell like sulfur. It doesn’t smell like anything. 


Micaiah Johnson’s debut novel, The Space Between Worlds, a Sunday Times bestseller and New York Times Editors’ Choice pick, was named one of the best books of 2020 and one of the best science fiction books of the last decade by NPR. Her first horror novel, The Unhaunting, is due out in fall 2026.

How uncrewed narco subs could transform the Colombian drug trade

On a bright morning last April, a surveillance plane operated by the Colombian military spotted a 40-foot-long shark-like silhouette idling in the ocean just off Tayrona National Park. It was, unmistakably, a “narco sub,” a stealthy fiberglass vessel that sails with its hull almost entirely underwater, used by drug cartels to move cocaine north. The plane’s crew radioed it in, and eventually nearby coast guard boats got the order, routine but urgent: Intercept.

In Cartagena, about 150 miles from the action, Captain Jaime González Zamudio, commander of the regional coast guard group, sat down at his desk to watch what happened next. On his computer monitor, icons representing his patrol boats raced toward the sub’s coordinates as updates crackled over his radio from the crews at sea. This was all standard; Colombia is the world’s largest producer of cocaine, and its navy has been seizing narco subs for decades. And so the captain was pretty sure what the outcome would be. His crew would catch up to the sub, just a bit of it showing above the water’s surface. They’d bring it to heel, board it, and force open the hatch to find two, three, maybe four exhausted men suffocating in a mix of diesel fumes and humidity, and a cargo compartment holding several tons of cocaine.

The boats caught up to the sub. A crew boarded, forced open the hatch, and confirmed that the vessel was secure. But from that point on, things were different.

First, some unexpected details came over the radio: There was no cocaine on board. Neither was there a crew, nor a helm, nor even enough room for a person to lie down. Instead, inside the hull the crew found a fuel tank, an autopilot system and control electronics, and a remotely monitored security camera. González Zamudio’s crew started sending pictures back to Cartagena: Bolted to the hull was another camera, as well as two plastic rectangles, each about the size of a cookie sheet—antennas for connecting to Starlink satellite internet.

The authorities towed the boat back to Cartagena, where military techs took a closer look. Weeks later, they came to an unsettling conclusion: This was Colombia’s first confirmed uncrewed narco sub. It could be operated by remote control, but it was also capable of some degree of autonomous travel. The techs concluded that the sub was likely a prototype built by the Clan del Golfo, a powerful criminal group that operates along the Caribbean coast.

For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.

Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. Law enforcement around the world is just beginning to grapple with what the Tayrona sub means for the future—whether it was merely an isolated experiment or the opening move in a new era of autonomous drug smuggling at sea.


Drug traffickers love the ocean. “You can move drug traffic through legal and illegal routes,” says Juan Pablo Serrano, a captain in the Colombian navy and head of the operational coordination center for Orión, a multiagency, multinational counternarcotics effort. The giant container ships at the heart of global commerce offer a favorite approach, Serrano says. Bribe a chain of dockworkers and inspectors, hide a load in one of thousands of cargo boxes, and put it on a totally legal commercial vessel headed to Europe or North America. That route is slow and expensive—involving months of transit and bribes spread across a wide network—but relatively low risk. “A ship can carry 5,000 containers. Good luck finding the right one,” he says.

Far less legal, but much faster and cheaper, are small, powerful motorboats. Quick to build and cheap to crew, these “go-fasts” top out at just under 50 feet long and can move smaller loads in hours rather than days. But they’re also easy for coastal radars and patrols to spot.

Submersibles—or, more accurately, “semisubmersibles”—fit somewhere in the middle. They take more money and engineering to build than an open speedboat, but they buy stealth—even if a bit of the vessel rides at the surface, the bulk stays hidden underwater. That adds another option to a portfolio that smugglers constantly rebalance across three variables: risk, time, and cost. When US and Colombian authorities tightened control over air routes and commercial shipping in the early 1990s, subs became more attractive. The first ones were crude wooden hulls with a fiberglass shell and extra fuel tanks, cobbled together in mangrove estuaries, hidden from prying eyes. Today’s fiberglass semisubmersible designs ride mostly below the surface, relying on diesel engines that can push multi-ton loads for days at a time while presenting little more than a ripple and a hot exhaust pipe to radar and infrared sensors.

Most ferry between South American coasts and handoff points in Central America and Mexico, where allied criminal organizations break up the cargo and slowly funnel it toward the US. But some now go much farther. In 2019, Spanish authorities intercepted a semisubmersible after a 27-day transatlantic voyage from Brazil. In 2024, police in the Solomon Islands found the first narco sub in the Asia-Pacific region, a semisubmersible probably originating from Colombia on its way to Australia or New Zealand.

If the variables are risk, time, and cost, then the economics of a narco sub are simple. Even if they spend more time on the water than a powerboat, they’re less likely to get caught—and a relative bargain to produce. A narco sub might cost between $1 million and $2 million to build, but a kilo of cocaine costs just about $500 to make. “By the time that kilo reaches Europe, it can sell for between $44,000 and $55,000,” Serrano says. A typical semisubmersible carries up to three metric tons—cargo worth well over $160 million at European wholesale prices.
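The arithmetic behind those margins is easy to check. A back-of-the-envelope sketch using only the figures quoted in the article (treating them as point estimates is my simplification):

```python
# Narco-sub economics, using the article's numbers: up to three metric
# tons of cargo, about $500 per kilo to produce, $44,000-$55,000 per
# kilo at European wholesale, against a $1M-$2M build cost for the sub.

KILOS_PER_TON = 1_000
cargo_kilos = 3 * KILOS_PER_TON            # "up to three metric tons"
production_cost = cargo_kilos * 500        # roughly $500 per kilo to make

low, high = 44_000, 55_000                 # European wholesale, per kilo
wholesale_value = (cargo_kilos * low, cargo_kilos * high)

print(f"Production cost of the cargo: ${production_cost:,}")
print(f"European wholesale value: ${wholesale_value[0]:,} to ${wholesale_value[1]:,}")
# prints $1,500,000 against $132,000,000 to $165,000,000 -- the upper
# end is the article's "well over $160 million", dwarfing the sub itself
```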


Off-the-shelf nautical autopilots, WiFi antennas, Starlink satellite internet connections, and remote cameras are all drug smugglers need to turn semisubmersibles into drone ships.

As a result, narco subs are getting more common. Seizures by authorities tripled in the last 20 years, according to Colombia’s International Center for Research and Analysis Against Maritime Drug Trafficking (CMCON), and Serrano admits that the Orión alliance has enough ships and aircraft to catch only a fraction of what sails.

Until now, though, narco subs have had one major flaw: They depended on people, usually poor fishermen or low-level recruits sealed into stifling compartments for days at a time, steering by GPS and sight, hoping not to be spotted. That made the subs expensive to operate and a liability for traffickers if a crew was captured. Like good capitalists, the Tayrona boat’s builders seem to have been trying to eliminate labor costs with automation. No crew means more room for drugs or fuel and no sailors to pay—or to get arrested or flip if a mission goes wrong.

“If you don’t have a person or people on board, that makes the transoceanic routes much more feasible,” says Henry Shuldiner, a researcher at InSight Crime who has analyzed hundreds of narco-sub cases. It’s one thing, he notes, to persuade someone to spend a day or two going from Colombia to Panama for a big payout; it’s another to ask four people to spend three weeks sealed inside a cramped tube, sleeping, eating, and relieving themselves in the same space. “That’s a hard sell,” Shuldiner says.

Unlike a crewed vessel, which must race to its rendezvous because the people inside can endure only a few days at sea, an uncrewed sub can move more slowly and stealthily. It can wait out patrols or bad weather, loiter near a meeting point, or take longer and less well-monitored routes. And if something goes wrong—if a military plane appears or navigation fails—its owners can simply scuttle the vessel from afar.

Meanwhile, the basic technology to make all that work is getting more and more affordable, and the potential profit margins are rising. “The rapidly approaching universality of autonomous technology could be a nightmare for the U.S. Coast Guard,” wrote two Coast Guard officers in the US Naval Institute’s journal Proceedings in 2021. And as if to prove how good an idea drone narco subs are, the US Marine Corps and the weapons builder Leidos are testing a low-profile uncrewed vessel called the Sea Specter, which they describe as being “inspired” by narco-sub design.

The possibility that drug smugglers are experimenting with autonomous subs isn’t just theoretical. Law enforcement agencies on other smuggling routes have found signs the Tayrona sub isn’t an isolated case. In 2022, Spanish police seized three small submersible drones near Cádiz, on Spain’s southern coast. Two years later, Italian authorities confiscated a remote-controlled minisubmarine they believed was intended for drug runs. “The probability of expansion is high,” says Diego Cánovas, a port and maritime security expert in Spain. Tayrona, the biggest and most technologically advanced uncrewed narco sub found so far, is more likely a preview than an anomaly.


Today, the Tayrona semisubmersible sits on a strip of grass at the ARC Bolívar naval base in Cartagena. It’s exposed to the elements; rain has streaked its paint. To one side lies an older, bulkier narco sub seized a decade ago, a blue cylinder with a clumsy profile. The Tayrona’s hull looks lower, leaner, and more refined.

Up close, it is also unmistakably handmade. The hull is a dull gray-blue, the fiberglass rough in places, with scrapes and dents from the tow that brought it into port. It has no identifying marks on the exterior—nothing that would tie it to a country, a company, or a port. On the upper surface sit the two Starlink antennas, painted over in the same gray-blue to keep them from standing out against the sea.

I climb up a ladder and drop through the small hatch near the stern. Inside, the air is damp and close, the walls beaded with condensation. Small puddles of fuel have collected in the bilge. The vessel has no seating, no helm or steering wheel, and not enough space to stand up straight or lie down. It’s clear it was never meant to carry people. A technical report by CMCON found that the sub would have enough fuel for a journey of some 800 nautical miles, and the central cargo bay would hold between 1 and 1.5 tons of cocaine.

At the aft end, the machinery compartment is a tangle of hardware: diesel engine, batteries, pumps, and a chaotic bundle of cables feeding an electronics rack. All the core components are still there. Inside that rack, investigators identified a NAC-3 autopilot processor, a commercial unit designed to steer midsize boats by tying into standard hydraulic pumps, heading sensors, and rudder-feedback systems. It costs about $2,200 on Amazon.

“These are plug-and-play technologies,” says Wilmar Martínez, a mechatronics professor at the University of America in Bogotá, when I show him pictures of the inside of the sub. “Midcareer mechatronics students could install them.”


For all its advantages, an autonomous drug-smuggling submarine wouldn’t be invincible. Even without a crew on board, there are still people in the chain. Every satellite internet terminal—Starlink or not—comes with a billing address, a payment method, and a log of where and when it pings the constellation. Colombian officers have begun to talk about negotiating formal agreements with providers, asking them to alert authorities when a transceiver’s movements match known smuggling patterns. Brazil’s government has already cut a deal with Starlink to curb criminal use of its service in the Amazon.

The basic playbook for finding a drone sub will look much like the one for crewed semisubmersibles. Aircraft and ships will use radar to pick out small anomalies and infrared cameras to look for the heat of a diesel engine or the turbulence of a wake. That said, it might not work. “If they wind up being smaller, they’re going to be darn near impossible to detect,” says Michael Knickerbocker, a former US Navy officer who advises defense tech firms.


Even worse, navies already act on only a fraction of their intelligence leads because they don’t have enough ships and aircraft. The answer, Knickerbocker argues, is “robot on robot.” Navies and coast guards will need swarms of their own small, relatively cheap uncrewed systems—surface vessels, underwater gliders, and long-endurance aerial vehicles that can loiter, sense, and relay data back to human operators. Those experiments have already begun. The US 4th Fleet, which covers Latin America and the Caribbean, is experimenting with uncrewed platforms in counternarcotics patrols. Across the Atlantic, the European Union’s European Maritime Safety Agency operates drones for maritime surveillance.

Today, though, the major screens against oceangoing vessels of all kinds are coastal radar networks. Spain operates SIVE to watch over choke points like the Strait of Gibraltar, and in the Pacific, Australia’s over-the-horizon radar network, JORN, can spot objects hundreds of miles away, far beyond the range of conventional radar.

Even so, it’s not enough to just spot an uncrewed narco sub. Law enforcement also has to stop it—and that will be tricky.

To find drone subs, international law enforcement will likely have to rely on networks of surveillance systems and, someday, swarms of their own drones.
CARLOS PARRA RIOS

With a crewed vessel, Colombian doctrine says coast guard units should try to hail the boat first with lights, sirens, radio calls, and warning shots. If that fails, interceptor crews sometimes have to jump aboard and force the hatch. Officers worry that future autonomous craft could be wired to sink or even explode if someone gets too close. “If they get destroyed, we may lose the evidence,” says Víctor González Badrán, a navy captain and director of CMCON. “That means no seizure and no legal proceedings against that organization.” 

That’s where electronic warfare enters the picture—radio-frequency jamming, cyber tools, perhaps more exotic options. In the simplest version, jamming means flooding the receiver with noise so that commands from the operator never reach the vessel. Spoofing goes a step further, feeding fake signals so that the sub thinks it’s somewhere else or obediently follows a fake set of waypoints. Cyber tools might aim higher up the chain, trying to penetrate the software that runs the vessel or the networks it uses to talk to satellite constellations. At the cutting edge of these countermeasures are electromagnetic pulses designed to fry electronics outright, turning a million-dollar narco sub into a dead hull drifting at sea.

In reality, the tools that might catch a future Tayrona sub are unevenly distributed, politically sensitive, and often experimental. Powerful cyber or electromagnetic tricks are closely guarded secrets; using them in a drug case risks exposing capabilities that militaries would rather reserve for wars. Systems like Australia’s JORN radar are tightly held national security assets, their exact performance specs classified, and sharing raw data with countries on the front lines of the cocaine trade would inevitably mean revealing hints as to how they got it. “Just because a capability exists doesn’t mean you employ it,” Knickerbocker says. 

Analysts don’t think uncrewed narco subs will reshape the global drug trade, despite the technological leap. Trafficking organizations will still hedge their bets across those three variables, hiding cocaine in shipping containers, dissolving it into liquids and paints, racing it north in fast boats. “I don’t think this is revolutionary,” Shuldiner says. “But it’s a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities.”

There’s still that chance, though, that everything international law enforcement agencies know about drug smuggling is about to change. González Zamudio says he keeps getting requests from foreign navies, coast guards, and security agencies to come see the Tayrona sub. He greets their delegations, takes them out to the strip of grass on the base, and walks them around the vessel. It has become a kind of pilgrimage. Everyone who makes it worries that the next time a narco sub appears near a distant coastline, they’ll board it as usual, force the hatch—and find it full of cocaine and gadgets, but without a single human occupant. And no one knows what happens after that. 

Eduardo Echeverri López is a journalist based in Colombia.

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe. 

Meanwhile, places like the Solomon Islands and Chad—low-lying or just plain sweltering—have emitted relatively little carbon dioxide, but by dint of their latitude and history, they rank among the countries most vulnerable to the fiercest consequences of global warming. That means increasingly devastating cyclones, heat waves, famines, and floods.

Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation for the homes that will be destroyed, the shorelines that will disappear beneath rising seas, and the lives that will be cut short. By one estimate, the major economies owe a climate debt to the rest of the world approaching $200 trillion in reparations.

Legally, though, the case has been far harder to make. Even putting aside the jurisdictional problems, early climate science couldn’t trace the provenance of airborne molecules of carbon dioxide across oceans and years. Deep-pocketed corporations with top-tier legal teams easily exploited those difficulties. 

Now those tides might be turning. More climate-related lawsuits are getting filed, particularly in the Global South. Governments, nonprofits, and citizens in the most climate-exposed nations continue to test new legal arguments in new courts, and some of those courts are showing a new willingness to put nations and their industries on the hook as a matter of human rights. In addition, the science of figuring out exactly who is to blame for specific weather disasters, and to what degree, is getting better and better. 

It’s true that no court has yet held any climate emitter liable for climate-related damages. For starters, nations are generally immune from lawsuits originating in other countries. That’s why most cases have focused on major carbon producers. But they’ve leaned on a pretty powerful defense. 

While oil and gas companies extract, refine, and sell the world’s fossil fuels, most of the emissions come out of “the vehicles, power plants, and factories that burn the fuel,” as Michael Gerrard and Jessica Wentz, of Columbia Law School’s Sabin Center, note in a recent piece in Nature. In other words, companies just dig the stuff up. It’s not their fault someone else sets it on fire.

So victims of extreme weather events continue to try new legal avenues and approaches, backed by ever-more-convincing science. Plaintiffs in the Philippines recently sued the oil giant Shell over its role in driving Super Typhoon Odette, a 2021 storm that killed more than 400 people and displaced nearly 800,000. The case relies partially on an attribution study that found climate change made extreme rainfall like that seen in Odette twice as likely. 

IVAN JOESEFF GUIWANON/GREENPEACE

Overall, evidence of corporate culpability—linking a specific company’s fossil fuel to a specific disaster—is getting easier to find. For example, a study published in Nature in September was able to determine how much particular companies contributed to a series of 21st-century heat waves.

A number of recent legal decisions signal improving odds for these kinds of suits. Notably, a handful of determinations in climate cases before the European Court of Human Rights affirmed that states have legal obligations to protect people from the effects of climate change. And though it dismissed the case of a Peruvian farmer who sued a German power company over fears that a melting alpine glacier could destroy his property, a German court determined that major carbon polluters could in principle be found liable for climate damages tied to their emissions. 

At least one lawsuit has already emerged that could test that principle: Dozens of Pakistani farmers whose land was deluged during the massive flooding events of 2022 have sued a pair of major German power and cement companies.

Even if the lawsuit fails, that would be a problem with the system, not the science. Major carbon-polluting countries and companies have a disproportionate responsibility for climate-change-powered disasters. 

Wealthy nations continued to encourage business practices that pollute the atmosphere, even as the threat of climate change grew increasingly grave. And oil and gas companies remain the kingpin suppliers to a fossil-fuel-addicted world. They have operated with the full knowledge of the massive social, environmental, and human cost imposed by their business while lobbying fiercely against any rules that would force them to pay for those harms or clean up their act. 

They did it. They knew. In a civil society where rule of law matters, they should pay the price. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Microsoft has a new plan to prove what’s real and what’s AI online

AI-enabled deception now permeates our online lives. There are the high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, it slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting. 

It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what’s real online. 

An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today’s most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that can be adopted by AI companies and social media platforms.

To understand the gold standard that Microsoft is pushing, imagine you have a Rembrandt painting and you are trying to document its authenticity. You might describe its provenance with a detailed manifest of where the painting came from and all the times it changed hands. You might apply a watermark that would be invisible to humans but readable by a machine. And you could digitally scan the painting and generate a mathematical signature, like a fingerprint, based on the brush strokes. If you showed the piece at a museum, a skeptical visitor could then examine these proofs to verify that it’s an original.
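The fingerprint and manifest in the analogy map onto real techniques: cryptographic hashing and provenance metadata. Here is a minimal Python sketch of those two ideas, with the manifest fields invented for illustration rather than taken from any actual standard such as C2PA:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest of the raw bytes. Any alteration to
    the content produces a completely different digest."""
    return hashlib.sha256(content).hexdigest()

def make_manifest(content: bytes, source: str, history: list[str]) -> dict:
    """Bundle a provenance record with the content's fingerprint.
    Field names here are illustrative, not any real schema."""
    return {
        "source": source,
        "history": history,
        "sha256": fingerprint(content),
    }

def verify(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches its recorded fingerprint."""
    return fingerprint(content) == manifest["sha256"]

original = b"raw image bytes"
manifest = make_manifest(original, "camera-6b2f", ["captured", "cropped"])

assert verify(original, manifest)             # untouched content checks out
assert not verify(original + b"x", manifest)  # any edit breaks the match
```

Note that a cryptographic hash flips entirely if even one byte changes, which is useful for detecting tampering but also explains why provenance systems pair it with watermarks and perceptual fingerprints that can survive benign edits like recompression.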

All of these methods are already being used to varying degrees in the effort to vet content online. Microsoft evaluated 60 different combinations of them, modeling how each setup would hold up under different failure scenarios—from metadata being stripped to content being slightly altered or deliberately manipulated. The team then mapped which combinations produce sound results that platforms can confidently show to people online, and which ones are so unreliable that they may cause more confusion than clarification. 

The company’s chief scientific officer, Eric Horvitz, says the work was prompted by legislation—like California’s AI Transparency Act, which will take effect in August—and the speed at which AI has developed to combine video and voice with striking fidelity.

“You might call this self-regulation,” Horvitz told MIT Technology Review. But it’s clear he sees pursuing the work as boosting Microsoft’s image: “We’re also trying to be a selected, desired provider to people who want to know what’s going on in the world.”

Nevertheless, Horvitz declined to commit to Microsoft using its own recommendation across its platforms. The company sits at the center of a giant AI content ecosystem: It runs Copilot, which can generate images and text; it operates Azure, the cloud service through which customers can access OpenAI and other major AI models; it owns LinkedIn, one of the world’s largest professional platforms; and it holds a significant stake in OpenAI. But when asked about in-house implementation, Horvitz said in a statement, “Product groups and leaders across the company were involved in this study to inform product road maps and infrastructure, and our engineering teams are taking action on the report’s findings.”

It’s important to note that there are inherent limits to these tools; just as they would not tell you what your Rembrandt means, they are not built to determine if content is accurate or not. They only reveal if it has been manipulated. It’s a point that Horvitz says he has to make to lawmakers and others who are skeptical of Big Tech as an arbiter of fact.

“It’s not about making any decisions about what’s true and not true,” he said. “It’s about coming up with labels that just tell folks where stuff came from.”

Hany Farid, a professor at UC Berkeley who specializes in digital forensics but wasn’t involved in the Microsoft research, says that if the industry adopted the company’s blueprint, it would be meaningfully more difficult to deceive the public with manipulated content. Sophisticated individuals or governments can work to bypass such tools, he says, but the new standard could eliminate a significant portion of misleading material.

“I don’t think it solves the problem, but I think it takes a nice big chunk out of it,” he says.

Still, there are reasons to see Microsoft’s approach as an example of somewhat naïve techno-optimism. There is growing evidence that people are swayed by AI-generated content even when they know that it is false. And in a recent study of pro-Russian AI-generated videos about the war in Ukraine, comments pointing out that the videos were made with AI received far less engagement than comments treating them as genuine. 

“Are there people who, no matter what you tell them, are going to believe what they believe?” Farid asks. “Yes.” But, he adds, “there are a vast majority of Americans and citizens around the world who I do think want to know the truth.”

That desire has not exactly led to urgent action from tech companies. Google started adding a watermark to content generated by its AI tools in 2023, which Farid says has been helpful in his investigations. Some platforms use C2PA, a provenance standard Microsoft helped launch in 2021. But the full suite of changes that Microsoft suggests, powerful as they are, might remain only suggestions if they threaten the business models of AI companies or social media platforms.

“If the Mark Zuckerbergs and the Elon Musks of the world think that putting ‘AI generated’ labels on something will reduce engagement, then of course they’re incentivized not to do it,” Farid says. Platforms like Meta and Google have already said they’d include labels for AI-generated content, but an audit conducted by Indicator last year found that only 30% of its test posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube were correctly labeled as AI-generated.

More forceful moves toward content verification might come from the many pieces of AI regulation pending around the world. The European Union’s AI Act, as well as proposed rules in India and elsewhere, would all compel AI companies to require some form of disclosure that a piece of content was generated with AI. 

One priority from Microsoft is, unsurprisingly, to play a role in shaping these rules. The company waged a lobbying effort during the drafting of California’s AI Transparency Act, which Horvitz said made the legislation’s requirements on how tech companies must disclose AI-generated content “a bit more realistic.”

But another is a very real concern about what could happen if the rollout of such content-verification technology is done poorly. Lawmakers are demanding tools that can verify what’s real, but the tools are fragile. If labeling systems are rushed out, inconsistently applied, or frequently wrong, people could come to distrust them altogether, and the entire effort would backfire. That’s why the researchers argue that it may be better in some cases to show nothing at all than a verdict that could be wrong.

Inadequate tools could also create new avenues for what the researchers call sociotechnical attacks. Imagine that someone takes a real image of a fraught political event and uses an AI tool to change only an inconsequential share of pixels in the image. When it spreads online, it could be misleadingly classified by platforms as AI-manipulated. But combining provenance and watermark tools would mean platforms could clarify that the content was only partially AI generated, and point out where the changes were made.

California’s AI Transparency Act will be the first major test of these tools in the US, but enforcement could be challenged by President Trump’s executive order from late last year seeking to curtail state AI regulations that are “burdensome” to the industry. The administration has also generally taken a posture against efforts to curb disinformation, and last year, via DOGE, it canceled grants related to misinformation. And, of course, official government channels in the Trump administration have shared content manipulated with AI (MIT Technology Review reported that the Department of Homeland Security, for example, uses video generators from Google and Adobe to make content it shares with the public).

I asked Horvitz whether fake content from this source worries him as much as that coming from the rest of social media. He initially declined to comment, but then he said, “Governments have not been outside the sectors that have been behind various kinds of manipulative disinformation, and this is worldwide.”

The robots who predict the future

To be human is, fundamentally, to be a forecaster. Occasionally a pretty good one. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival. Indeed, as the tools of divination have changed over the centuries, from tea leaves to data sets, our conviction that the future can be known (and therefore controlled) has only grown stronger. 

Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. As I write this sentence, algorithms on some remote server are busy trying to guess my next word based on those I have already typed. If you’re reading this online, a separate set of algorithms has likely already served you an ad deemed to be one you are most likely to click. (To the die-hards reading this story on paper, congratulations! You have escaped the algorithms … for now.)


So how did all this happen? People’s desire for reliable forecasting is understandable. Still, nobody signed up for an omnipresent, algorithmic oracle mediating every aspect of their life. A trio of new books tries to make sense of our future-focused world—how we got here, and what this change means. Each has its own prescriptions for navigating this new reality, but they all agree on one thing: Predictions are ultimately about power and control.

The Means of Prediction: How AI Really Works (and Who Benefits)
Maximilian Kasy
UNIVERSITY OF CHICAGO PRESS, 2025

In The Means of Prediction: How AI Really Works (and Who Benefits), the Oxford economist Maximilian Kasy explains how most predictions in our lives are based on the statistical analysis of patterns in large, labeled data sets—what’s known in AI circles as supervised learning. Once “trained” on such data sets, algorithms for supervised learning can be presented with all kinds of new information and then deliver their best guess as to some specific future outcome. Will you violate your parole, pay off your mortgage, get promoted if hired, perform well on your college exams, be in your home when it gets bombed? More and more, our lives are shaped (and, yes, occasionally shortened) by a machine’s answer to these questions.
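The supervised-learning loop Kasy describes can be sketched in a few lines of pure Python: a toy 1-nearest-neighbor "learner" with invented loan-repayment data, not anything drawn from the book itself.

```python
# A minimal supervised learner: 1-nearest-neighbor, pure Python.
# Toy labeled training data (invented for illustration):
# features are [income in $k, number of existing debts]; the label
# records whether that past borrower repaid a loan (1) or not (0).
train = [
    ([30, 2], 0), ([85, 0], 1), ([45, 3], 0),
    ([120, 1], 1), ([25, 4], 0), ([95, 0], 1),
]

def predict(features):
    """Return the label of the closest training example: the
    machine's 'best guess' about a future outcome, made purely
    by pattern-matching against labeled past cases."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: dist(pair[0], features))
    return label

print(predict([90, 1]))  # resembles past repayers -> 1
print(predict([28, 3]))  # resembles past defaulters -> 0
```

Everything contentious lives in the training data: if the labeled past is biased or flawed, the "best guess" reproduces it, which is exactly why Kasy doubts that tweaking the algorithm alone can fix the outcomes.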

If the thought of a ubiquitous, mostly invisible predictive layer secretly grafted onto your life by a bunch of profit-hungry corporations makes you uneasy … well, same here. This arrangement is leading to a crueler, blander, more instrumentalized world, one where life’s possibilities are foreclosed, age-old prejudices are entrenched, and everyone’s brain seems to be actively turning into goo. It’s an outcome, according to Kasy, that was entirely predictable. 

AI adherents might frame those consequences as “unintended,” or mere problems of optimization and alignment. Kasy, on the other hand, argues that they represent the system working as intended. “If an algorithm selecting what you see on social media promotes outrage, thereby maximizing engagement and ad clicks,” he writes, “that’s because promoting outrage is good for profits from ad sales.” The same holds true for an algorithm that nixes job candidates “who are likely to have family-care responsibilities outside the workplace,” and the ones that “screen out people who are likely to develop chronic health problems or disabilities.” What’s good for a company’s bottom line may not be good for your job-hunting prospects or life expectancy.

Where Kasy differs from other critics is that he doesn’t think working to create less biased, more equitable algorithms will fix any of this. Trying to rebalance the scales can’t change the fact that predictive algorithms rely on past data that’s often racist, sexist, and flawed in countless other ways. And, he says, the incentives for profit will always trump attempts to eliminate harm. The only way to counter this is with broad democratic control over what Kasy calls “the means of prediction”: data, computational infrastructure, technical expertise, and energy.  

A little more than half of The Means of Prediction is devoted to explaining how this might be accomplished—through mechanisms including “data trusts” (collective public bodies that make decisions about how to process and use data on behalf of their contributors) and corporate taxing schemes that try to account for the social harm AI inflicts. There’s a lot of economist talk along the way, about how “agents of change” might help achieve “value alignment” in order to “maximize social welfare.” Reasonable, I guess, though a skeptic might point out that Kasy’s rigorous, systematic approach to building new public-serving institutions comes at a time when public trust in institutions has never been lower. Also, there’s the brain goo problem. 

To his credit, Kasy is a realist here. He doesn’t presume that any of these proposals will be easy to implement. Or that it will happen overnight, or even in the near future. The troubling question at the end of his book is: Do we have that kind of time? 

Reading Kasy’s blueprint for seizing control of the means of prediction raises another pressing question. How on earth did we reach a point where machine-mediated prediction is more or less inescapable? “Capitalism” might be Marx’s pithy response. Fine, as far as it goes, but that doesn’t explain why the same kinds of algorithms that currently model climate change are for some reason also deciding whether you get a new kidney or I get a car loan.

The Irrational Decision: How We Gave Computers the Power to Choose for Us
Benjamin Recht
PRINCETON UNIVERSITY PRESS, 2026

If you ask Benjamin Recht, author of The Irrational Decision: How We Gave Computers the Power to Choose for Us, he’d likely tell you our current predicament has a lot to do with the idea and ideology of decision theory—or what economists call rational choice theory. Recht, a polymathic professor in UC Berkeley’s Department of Electrical Engineering and Computer Science, prefers the term “mathematical rationality” to describe the narrow, statistical conception that stoked the desire to build computers, informed how they would eventually work, and influenced the kinds of problems they would be good at solving. 

This belief system goes all the way back to the Enlightenment, but in Recht’s telling, it truly took hold at the tail end of World War II. Nothing focuses the mind on risk and quick decision-making like war, and the mathematical models that proved especially useful in the fight against the Axis powers convinced a select group of scientists and statisticians that they might also be a logical basis for designing the first computers. Thus was born the idea of a computer as an ideal rational agent, a machine capable of making optimal decisions by quantifying uncertainty and maximizing utility.

Intuition, experience, and judgment gave way, says Recht, to optimization, game theory, and statistical prediction. “The core algorithms developed in this period drive the automated decisions of our modern world, whether it be in managing supply chains, scheduling flight times, or placing advertisements on your social media feeds,” he writes. In this optimization-driven reality, “every life decision is posed as if it were a round at an imaginary casino, and every argument can be reduced to costs and benefits, means and ends.”

Today, mathematical rationality (wearing its human skin) is best represented by the likes of the pollster Nate Silver, the Harvard psychologist Steven Pinker, and an assortment of Silicon Valley oligarchs, says Recht. These are people who fundamentally believe the world would be a better place if more of us adopted their analytic mindset and learned to weigh costs and benefits, estimate risks, and plan optimally. In other words, these are people who believe we should all make decisions like computers. 


It’s a ridiculous idea for multiple reasons, he says. To name just one, it’s not as if humans couldn’t make evidence-based decisions before automation. “Advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950,” Recht writes. “From the late 1800s to the early 1900s, we had world-changing scientific breakthroughs in physics, including new theories of thermodynamics, quantum mechanics, and relativity.” We also managed to build cars and airplanes without a formal system of rationality and somehow came up with societal innovations like modern democracy without optimal decision theory. 

So how might we convince the Pinkers and Silvers of the world that most decisions we face in life are not in fact grist for the unrelenting mill of mathematical rationality? Moreover, how might we demonstrate that (unquantifiable) human intuition, morality, and judgment might be better ways of addressing some of the world’s most important and vexing problems?

Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI
Carissa Véliz
DOUBLEDAY, 2026

One might start by reminding the rationalists that any prediction, computational or otherwise, is really just a wish—but one with a powerful tendency to self-fulfill. This idea animates Carissa Véliz’s wonderfully wide-ranging polemic Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI.

A philosopher at the University of Oxford, Véliz sees a prediction as “a magnet that bends reality toward itself.” She writes, “When the force of the magnet is strong enough, the prediction becomes the cause of its becoming true.” 

Take Gordon Moore. While he doesn’t come up in Prophecy, he does figure somewhat prominently in Recht’s history of mathematical rationality. A cofounder of the tech giant Intel, Moore is famous for predicting, in 1965, that the density of transistors in integrated circuits would double every year—a pace he later revised to every two years. “Moore’s Law” turned out to be true, and remains true today, although it does seem to be running out of steam thanks to the physical size limits of the silicon atom.

One story you can tell yourself about Moore’s Law is that Gordon was just a prescient guy. His now-classic 1965 opinion piece “Cramming More Components onto Integrated Circuits,” for Electronics magazine, simply extrapolated what computing trends might mean for the future of the semiconductor industry. 

Another story—the one I’m guessing Véliz might tell—is that Moore put an informed prediction out into the world, and an entire industry had a collective interest in making it come true. As Recht makes clear, there were and remain obvious financial incentives for companies to make faster and smaller computer chips. And while the industry has likely spent billions of dollars trying to keep Moore’s Law alive, it’s undoubtedly profited even more from it. Moore’s Law was a helluva strong magnet. 

Predictions don’t just have a habit of making themselves come true, says Véliz. They can also distract us from the challenges of the here and now. When an AI boomer promises that artificial general intelligence will be the last problem humanity needs to solve, it not only shapes how we think about AI’s role in our lives; it also shifts our attention away from the very real and very pressing problems of the present day—problems that in many cases AI is causing.

In this sense, the questions around predictions (Who’s making them? Who has the right to make them?) are also fundamentally about power. It’s no accident, Véliz says, that the societies that rely most heavily on prediction are also the ones that tend toward oppression and authoritarianism. Predictions are “veiled prescriptive assertions—they tell us how to act,” she writes. “They are what philosophers call speech acts. When we believe a prediction and act in accordance with it, it’s akin to obeying an order.”

As much as tech companies would like us to believe otherwise, technology is not destiny. Humans make it and choose how to use it … or not use it. Maybe the most appropriate (and human) thing we can do in the face of all the uninvited daily predictions in our lives is to simply defy them. 

Bryan Gardiner is a writer based in Oakland, California.

Welcome to the dark side of crypto’s permissionless dream

“We’re out of airspace now. We can do whatever we want,” Jean-Paul Thorbjornsen tells me from the pilot’s seat of his Aston Martin helicopter. As we fly over suburbs outside Melbourne, Australia, it’s becoming clear that doing whatever he wants is Thorbjornsen’s MO. 

Upper-middle-class homes give way to vineyards, and Thorbjornsen points out our landing spot outside a winery. People visiting for lunch walk outside. “They’re going to ask for a shot now,” he says, used to the attention drawn by his luxury helicopter, emblazoned with the tail letters “BTC” for bitcoin. The AU$5 million price tag ($3.5 million in US dollars today) was perhaps reasonable for someone who claims a previous crypto project made more than AU$400 million, although he also says those funds were tied up in the company.

Thorbjornsen is a founder of THORChain, a blockchain through which users can swap one cryptocurrency for another and earn fees from making those swaps. THORChain is permissionless, so anyone can use it without getting prior approval from a centralized authority. As a decentralized network, the blockchain is built and run by operators located across the globe, most of whom use pseudonyms. 

During its early days, Thorbjornsen himself hid behind the pseudonym “leena” and used an AI-generated female image as his avatar. But around March 2024, he revealed that he, an Australian man in his mid-30s with a rural Catholic upbringing, was the mind behind the blockchain. More or less. 

If there is a central question around THORChain, it is this: Exactly who is responsible for its operations? Blockchains as decentralized as THORChain are supposed to offer systems that operate outside the centralized leadership of corruptible governments and financial institutions. If a few people have outsize sway over this decentralized network—one of a handful that operate at such a large scale—it’s one more blemish on the legacy of bitcoin’s promise, which has already been tarnished by capitalistic political frenzy.   

Who’s responsible for THORChain matters because in January last year, its users lost access to cryptocurrency worth more than $200 million in US dollars after THORChain transactions and accounts were frozen by a single admin override, which users believed was not supposed to be possible given the decentralized structure. When the freeze was lifted, some users raced to pull their money out. The following month, a team of North Korean hackers known as the Lazarus Group used THORChain to move roughly $1.2 billion of stolen ether taken in the infamous hack of the Dubai-based crypto exchange Bybit. 

Thorbjornsen explains away THORChain’s inability to stop the movement of stolen funds, or prevent a bank run, as a function of its decentralized and permissionless nature. The lack of executive powers means that anyone can use the network for any reason, and arguably there’s no one to hold accountable even when the worst goes down.

But when the worst did go down, nearly everyone in the THORChain community, and those paying attention to it in channels like X, pointed their fingers at Thorbjornsen. A lawsuit filed by the THORChain creditors who lost millions in January 2025 names him. A former FBI analyst and North Korea specialist, reflecting on the potential repercussions for helping move stolen funds, told me he wouldn’t want to be in Thorbjornsen’s shoes.

THORChain was designed to make decisions based on votes by node operators, where two-thirds majority rules.

That’s why I traveled to Australia—to see if I could get a handle on where he sees himself and his role in relation to the network he says he founded.

According to Thorbjornsen, he should not be held responsible for either event. THORChain was designed to make decisions based on votes by node operators—people with the computer power, and crypto stake, to run a cluster of servers that process the network’s transactions. In those votes, a two-thirds majority rules.

Then there’s the permissionless part. Anyone can use THORChain to make swaps, which is why it’s been a popular way for widely sanctioned entities such as the government of North Korea to move stolen money. This principle goes back to the cypherpunk roots of bitcoin, a currency that operates outside of nation-states’ rules. THORChain is designed to avoid geopolitical entanglements; that’s what its users like about it.

But there are distinct financial motivations for moving crypto, stolen or not: Node operators earn fees from the funds running through the network. In theory, this incentivizes them to act in the network’s best interests—and, arguably, Thorbjornsen’s interests too, as many assume his wealth is tied to the network’s profits. (Thorbjornsen says it is not, and that it comes instead from “many sources,” including “buying bitcoin back in 2013.”)

Recent events have raised critical questions not just about Thorbjornsen’s outsize role in THORChain’s operations but also about the blockchain’s underlying nature.

If THORChain is decentralized, how was a single operator able to freeze its funds a month before the Bybit hack? Could someone have unilaterally decided to stop the stolen Bybit funds from coming through the network, and chosen not to? 

Thorbjornsen insists THORChain is helping realize bitcoin’s original purpose of enabling anyone to transact freely outside the reach of purportedly corrupt governments. Yet the network’s problems suggest that an alternative financial system might not be much better.

Decentralized? 

On February 21, 2025, Bybit CEO Ben Zhou got an alarming call from the company’s chief financial officer. About $1.5 billion worth of ether (ETH), the Ethereum blockchain’s native token, had been stolen from the exchange. 

The FBI attributed the theft to the Lazarus Group. Typically, criminals will want to convert ETH to bitcoin, which is much easier to convert in turn to cash. Knowing this, the FBI issued a public service announcement on February 26 to “exchanges, bridges … and other virtual asset service providers,” encouraging them to block transactions from accounts related to the hack. 

Someone posted that announcement in THORChain’s private, invite-only developer channel on Discord, a chat app used widely by software engineers and gamers. While other crypto exchanges and bridges (which facilitate transactions across different blockchains) heeded the warning, THORChain’s node operators, developers, and invested insiders debated whether to close the trading gates, a decision requiring a majority vote.

“Civil war is a very strong term, but there was a strong rift in the community,” says Boone Wheeler, a US-based crypto enthusiast. In 2021, Wheeler purchased some rune, THORChain’s Norse-mythology-themed native token, and he has been paid to write articles about the network to help advertise it. The rift formed “between people who wanted to stay permissionless,” he says, “and others who wanted to blacklist the funds.”

Wheeler, who says he doesn’t run a node or code for THORChain, fell on the side of remaining permissionless. However, others spoke up for blocking the transfers. THORChain, they argued, wasn’t decentralized enough to keep those running the network safe from law enforcement—especially when those operators were identifiable by their IP addresses, some based in the US.

“We are not the morality police,” someone with the username @Swing_Pop wrote on February 27 in the developer Discord.

THORChain’s design includes up to 120 nodes whose operators manage transactions on the network through a voting process. Anyone with hosting hardware can become an operator by funding nodes with rune as collateral, which provides the network with liquidity. Nodes can respond to a transaction by validating it or doing nothing. While individual transactions can’t be blocked, trading can be halted by a two-thirds majority vote. 

Nodes are also penalized for not participating in voting, which the system labels as “bad behavior.” Every 2.5 days, THORChain automatically “churns” nodes out to ensure that no one node gains too much control. The nodes that chose not to validate transactions from the Bybit hack were automatically “churned” out of the system. Thorbjornsen says about 20 or 30 nodes were booted from the network in this way. (Node operators can run multiple nodes, and 120 are rarely running simultaneously; at the time of writing, 55 unique IDs operated 103 nodes.)
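For readers who want the arithmetic, the two-thirds voting threshold described above can be sketched as a toy calculation. This is an illustrative model only, not THORChain’s actual code; the function name is hypothetical, and the node count is the figure cited at the time of writing:

```python
from math import ceil

def halt_passes(votes_for_halt: int, active_nodes: int) -> bool:
    """Toy model of a supermajority vote: trading halts only when
    at least two-thirds of active node operators support the halt."""
    return votes_for_halt >= ceil(active_nodes * 2 / 3)

# With 103 active nodes, a halt needs at least 69 supporting votes.
print(halt_passes(68, 103))  # False: one vote short
print(halt_passes(69, 103))  # True
```

Under this model, a minority of operators can never force a halt on their own, which is why the dissenting nodes during the Bybit episode could only abstain and get churned out.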

By February 27, some node operators were prepared to leave the network altogether. “It’s personally getting beyond my risk tolerance,” wrote @Runetard in the dev Discord. “Sorry to those of the community that feel otherwise. There are a bunch of us holding all the risk and some are getting ready to walk away.”

Even so, the financial incentive for the network operators who remained was significant. As one member of the dev Discord put it earlier that day, $3 million had been “extracted as commission” from the theft by those operating THORChain. “This is wrong!” they wrote.

Thorbjornsen weighed in on this back-and-forth, during which nodes paused and unpaused the network. He now says there was a right and wrong way for node operators to have behaved. “The correct way of doing things,” he says, was for node operators who opposed processing stolen funds to “go offline and … get [themselves] kicked out” rather than try to police who could use THORChain. He also says that while operators could discuss stopping transactions, “there was simply no design in the code that allowed [them] to do that.” However, a since-deleted post from his personal X account on March 3, 2025, stated: “I pushed for all my nodes to unhalt trading [keep trading]. Threatened to yank bond if they didn’t comply. Every single one.” (Thorbjornsen says his social media team ran this account in 2025.) 

In an Australian 7 News Spotlight documentary last June, Thorbjornsen estimated that THORChain earned between $5 million and $10 million from the heist.

When asked in that same documentary if he received any of those fees, he replied, “Not directly.” When we spoke, I asked him to elaborate. He said he’s “not a recipient” of any funds THORChain sets aside for developers or marketers, nor does he operate any nodes. He was merely speaking generally, he told me: “All crypto holders profit indirectly off economic activity on any chain.”

a character in a hooded sweatshirt at a computer station

KAGAN MCLEOD

Most important to Thorbjornsen was that, despite “huge pressure to shut the protocol down and stop servicing these swaps,” THORChain chugged along. He also notes that the hackers’ tactics, moving fast and splitting funds across multiple addresses, made it difficult to identify “bad swaps.”

Blockchain experts like Nick Carlsen, a former FBI analyst at the blockchain intelligence company TRM Labs, don’t buy this assessment. Other services similar to THORChain were identifying and rejecting these transactions. Had THORChain done the same, Carlsen adds, the stolen funds could have been contained on the Ethereum network, which “would have basically denied North Korea the ability to kick off this laundering process.” 

And while THORChain touts its decentralization, in “practical applications” like the Lazarus Group’s theft, “most de-fi [decentralized finance] protocols are centralized,” says Daren Firestone, an attorney who represents crypto industry whistleblowers, citing a 2023 US Treasury Department risk assessment on illicit finance. 

With centralization comes culpability, and in these cases, Firestone adds, that comes down to “who profits from [the protocol], so who creates it? But most importantly, who controls it?” Is there someone who can “hit an emergency off switch? … Direct nodes?”

Many answer these questions with Thorbjornsen’s name. “Everyone likes to pass the blame,” he says, even though he wasn’t alone in building THORChain. “In the end, it all comes back to me anyway.”

THORChain origins

According to Thorbjornsen, his story goes like this.

The third of 10 homeschooled children in a “traditional” Catholic household in rural Australia, he spent his days learning math, reading, writing, and studying the Bible. As he got older, he was also responsible for instructing his younger siblings. Wednesday was his day to move the solar panels that powered their home. His parents “installed” a mango and citrus orchard, more to keep nine boys busy than to reap the produce, he says.

“We lived close to a local airfield,” Thorbjornsen says, “and I was always mesmerized by these planes.” He joined the Australian air force and studied engineering, but he says the military left him feeling like “a square peg in a round hole.” He adds that doing things his own way got him frequently “pulled aside” by superiors.

“That’s when I started looking elsewhere,” he says, and in 2013, he found bitcoin. It appealed because it existed “outside the system.”

During the 2017 crypto bull run, Thorbjornsen raised AU$12 million in an initial coin offering for CanYa, a decentralized marketplace he cofounded. CanYa ultimately “died” in 2018, and Thorbjornsen pivoted to a “decentralized liquidity” project that would become THORChain.

He worked with a couple of different developer teams, and then, in 2019, he clicked with an American developer, Chad Barraford, at a hackathon in Germany. Barraford (who declined to be interviewed for this story) was an early public face of THORChain. 

Around this time, Thorbjornsen says, “a couple of us helped manage the payroll and early investment funds.” In a 2020 interview, Kai Ansaari, identified as a THORChain “project lead,” wrote, “We’re all contributors … There’s no real ‘lead,’ ‘CEO,’ ‘founder,’ etc.”

In interviews conducted since he came out from behind the “leena” account in 2024, Thorbjornsen has positioned himself as a key lead. He now says his plan had always been to hand over the account, along with command powers and control of THORChain social media accounts, once the blockchain had matured enough to realize its promise of decentralization.

In 2021, he says, he started this process, first by ceasing to use his own rune to back node operators who didn’t have enough to supply their own funding (this can be a way to influence node votes without operating a node yourself). That year, the protocol suffered multiple hacks that resulted in millions of dollars in losses. Nine Realms, a US-incorporated coding company, was brought on to take over THORChain’s development. Thorbjornsen says he passed “leena” over to “other community members” and “left crypto” in 2021, selling “a bunch of bitcoin” and buying the helicopter. 

Despite this crypto departure, he came back onto the scene with gusto in 2024 when he revealed himself as the operator of the “leena” account. “For many years, I stayed private because I didn’t want the attention,” he says now. 

By early 2024 Thorbjornsen considered the network to be sufficiently decentralized and began advertising it publicly. He started regularly posting videos on his TikTok and YouTube channels (“Two sick videos every week,” in the words of one caption) that showed him piloting his helicopter wearing shirts that read “Thor.”

By November 2024, Thorbjornsen, who describes himself as “a bit flamboyant,” was calling himself THORChain’s CEO (“chief energy officer”) and the “master of the memes” in a video from Binance Blockchain Week, an industry conference in Dubai. You need “strong memetic energy,” he says in the video, “to create the community, to create the cult.” Cults imply centralized leadership, and since outing himself as “leena,” Thorbjornsen has publicly appeared to helm the project, with one interviewer deeming him the “THORChain Satoshi” (an allusion to the pseudonymous creator of bitcoin). 

One consequence of going public as a face of the protocol: He’s received death threats. “I stirred it up. Do I regret it? Who knows?” he said when we met in Australia. “It’s caused a lot of chaos.” 

But, he added, “this is the bed that I’ve laid.” When we spoke again, months later, he backtracked, saying he “got sucked into” defending THORChain in 2024 and 2025 because he was involved from 2018 to 2021 and has “a perspective on how the protocol operates.”

Centralized? 

Ryan Treat, a retired US Army veteran, woke up one morning in January 2025 to some disturbing activity on X. “My heart sank,” he says. THORFi, the THORChain program he’d used to earn interest on the bitcoin he’d planned to save for his retirement, had frozen all accounts—but that didn’t make sense.

THORFi featured a lending and saving program said to give users “complete control” and self-custody of their crypto, meaning they could withdraw it at any time. 

Treat was no crypto amateur. He bought his first bitcoin at around “$5 apiece,” he says, and had always kept it off centralized exchanges that would maintain custody of his wallets. He liked THORChain because it claimed to be decentralized and permissionless. “I got into bitcoin because I wanted to have government-less money,” he says. 

We were told it was decentralized. Then you wake up one morning and read this guy had an admin mimir.

Many who’d used THORFi lending and saving programs felt similarly. Users I interviewed differentiated THORChain from centralized lending platforms like BlockFi and Celsius, both of which offered extraordinarily high yields before filing for bankruptcy in 2022. “I viewed THORChain as a decentralized system where it was safer,” says Halsey Richartz, a Florida-based THORFi creditor, with “vanilla, 1% passive yield.” Indeed, users I spoke with hadn’t felt the need to monitor their THORFi deposits. “Only your key can be used to withdraw your funds,” the product’s marketing materials insisted. “Savers can withdraw their position to native assets at any time.”

So on January 9, when the “leena” account announced that an admin key had been used to pause withdrawals, it took THORFi users by surprise—and seemed to contradict the marketing messaging around decentralization. “We were told that it was decentralized, and you wake up one morning and read an article that says ‘This guy, JP, had an admin mimir,’” says Treat, referring to Thorbjornsen, “and I’m like, ‘What the fuck is an admin mimir?’”

The admin mimir was one of “a bunch of hard-coded admin keys built into the base code of the system,” says Jonathan Reiter, CEO of the blockchain intelligence company ChainArgos. Those with access to the keys had the ability to make executive decisions on the blockchain—a function many THORChain users didn’t realize could supersede the more democratic decisions made by node votes. These keys had been in THORChain’s code for years and “let you control just about anything,” Reiter adds, including the decision to pause the network during the hacks in 2021 that resulted in a loss of more than $16 million in assets. 

Thorbjornsen says that one key was given to Nine Realms, while another was “shared around the original team.” He told me at least three people had them, adding, “I can neither confirm nor deny having access to that mimir key, because there’s no on-chain registry of the keys.”

Regardless of who had access, Thorbjornsen maintains that the admin mimir mechanism was “widely known within the community, and heavily used throughout THORChain’s history” and that any action taken using the keys “could be largely overruled by the nodes.” Indeed, nodes voted to open withdrawals back up shortly after the admin key was used to pause them.

By then, those burned by THORFi argue, the damage had already been done. The executive pause to withdrawals, for some, signaled that something was amiss with THORFi. This led to a bank run after the pause was lifted, until the nodes voted to freeze withdrawals permanently (which Thorbjornsen had suggested in a since-deleted post on X), separating users from crypto worth around $200 million in US dollars on January 23. THORFi users were then offered a token called TCY (THORChain Yield), which they could claim with the idea that, when its price rose to $1, they would be made whole. (The price, as of writing, sits at $0.16.)

Who used the key? Thorbjornsen maintains he didn’t do it, but he claims he knows who did and won’t say. He says he’d handed over the “leena” account and doesn’t “have access to any of the core components of the system,” nor has he for “at least three years.” He implies that whoever controlled “leena” at the time used the admin key to pause network withdrawals.

A video released by Nine Realms on February 20, 2025, names Thorbjornsen as the activator of the key, stating, “JP ended up pausing lenders and savers, preventing withdrawals so that we can work out … [a] payback plan on them.” Thorbjornsen told me the video was “not factual.”

Multiple blockchain analysts told me it would be extremely difficult to determine who used the admin mimir key. A month after it was used to pause the network, THORChain said the key had been “removed from the network.” At least you can’t find the words “admin mimir” in THORChain’s base code; I’ve looked. 

Culpability

After the debacle of the THORFi withdrawal freeze, Richartz says, he tried to file reports with the Miami-Dade Police Department, the Florida Department of Law Enforcement, the FBI, the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Federal Trade Commission, and Interpol. When we spoke in November, he still hadn’t been able to file with the city of Miami. They told him to try small claims court.

“I was like, no, you don’t understand … a post office box in Switzerland is the company address,” he says. “It underscored to me how little law enforcement even knows about these crimes.” 

As for the Bybit hack, at least one government has moved against a service that helped launder the stolen funds. Last April German authorities shut down eXch, an exchange suspected of using THORChain to process funds Lazarus stole from Bybit, says Julia Gottesman, cofounder and head of investigations at the cybersecurity group zeroShadow. Australia, she adds, where Thorbjornsen was based, has been “slow to try to engage with the crypto community, or any regulations.”

a character with his pockets turned out shrugs next to his helicopter while wearing meme sunglasses

KAGAN MCLEOD

In response to requests for comment, Australia’s Department of Home Affairs wrote that at the end of March 2026, the country’s regulatory powers will expand to include “exchanges between the same type of cryptocurrency and transfers between different types.” They did not comment on specific investigations.

Crypto and finance experts disagree about whether THORChain engaged in money laundering, defined by the UN as “the processing of criminal proceeds to disguise their illegal origin.” But some think it fits the definition.

Shlomit Wagman, a Harvard fellow and former head of Israel’s anti-money-laundering agency and its delegation to the Financial Action Task Force (FATF), thinks the Bybit activity was money laundering because THORChain helped the hackers “transfer the funds in an unsupervised manner, completely outside of the scope of regulated or supervised activity.” 

And by helping with conversions, Carlsen says, THORChain enabled bad actors to turn stolen crypto into usable currency. “People like [Thorbjornsen] have a personal degree of culpability in sustaining the North Korean government,” he says. Thorbjornsen counters that THORChain is “open-source infrastructure.”

Meanwhile, just days after the hack, Bybit issued a 10% bounty on any funds recovered. As of mid-January this year, between $100 million and $500 million worth of those funds in US dollars remain unaccounted for, according to Gottesman of zeroShadow, which was hired by Bybit to recover funds following the hack.

Thorbjornsen hacked

For Thorbjornsen, it’s just another day at the casino. That’s the comparison he made during his regrettable 7 News Spotlight interview about the Bybit heist, and he repeated it when we met. “You go to a casino, you play a few games, you expect to lose,” he told me. “When you do actually go to zero, don’t cry.”

Thorbjornsen, it should be noted, has lost at the casino himself.

In September, he says, he got a Telegram message from a friend, inviting him to a Zoom meeting. He accepted and participated in a call with people who had “American voices.”

Ultimately, Thorbjornsen describes himself as a guy who’s had a bad year, fending off “threat vectors” left and right.

After the meeting, Thorbjornsen learned that his friend’s Telegram had been hacked. Whoever was responsible had used the Zoom link to remotely install software on Thorbjornsen’s computer, which “got access to everything”—his email, his crypto wallets, a bitcoin-based retirement fund. It cost him at least $1.2 million. The blockchain sleuth known as ZachXBT traced the funds and attributed the hack to North Korea. 

ZachXBT called it “poetic.”

Ultimately, Thorbjornsen describes himself as a guy who’s had a bad year. He says he had to liquidate his crypto assets because he’s dealing with the fallout of a recent divorce. He also feels he is fending off “threat vectors” left and right. More than once, he asked if I was a private investigator masquerading as a journalist.

Still, his many contradictions don’t inspire confidence. He doesn’t have any more crypto assets, he says. However, the crypto wallet he shared with me so I could pay him back for lunch showed that it contained assets worth more than $143,000 in US dollars. He now says it wasn’t his wallet. He says he doesn’t control THORChain’s social media, but he’d also told me that he runs the @THORChain X account (later backtracking and saying the account is “delegated” to him for trickier questions).

He insists that he does not care about money. He says that in the robot future, the AI-powered hive mind will become our benevolent overlord, rendering money obsolete, so why give it much thought? Yet as we flew back from the vineyard, he pointed out his new house from the helicopter. It resembles a compound. He says he lives there with his new wife. 

Multiple people I spoke with about Thorbjornsen before I met him warned me he wasn’t trustworthy, and he’s undeniably made fishy statements. For instance, the presence of a North Korean flag in a row of decals on the tail of his helicopter suggested an affinity with the country for which THORChain has processed so much crypto. Thorbjornsen insists he had requested the flag of Australia’s Norfolk Island, calling the mix-up “a complete coincidence.” The flags were gone by the time of our flight, apparently removed during a recent repair.

“Money is a meme,” he says. “Money does not exist.” Meme or not, it’s had real repercussions for those who have interacted with THORChain, and those who wound up losing have been looking for someone to blame. 

When I spoke with Thorbjornsen again in January, he appeared increasingly concerned that he is that someone. He’s spending more time in Singapore, he told me. Singapore happens to have historically denied extraditions to the US for money-laundering prosecutions. 

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.