The US may be heading toward a drone-filled future

On Thursday, I published a story about the police-tech giant Flock Safety selling its drones to the private sector to track shoplifters. Keith Kauffman, a former police chief who now leads Flock’s drone efforts, described the ideal scenario: A security team at a Home Depot, say, launches a drone from the roof that follows shoplifting suspects to their car. The drone tracks their car through the streets, transmitting its live video feed directly to the police. 

It’s a vision that, unsurprisingly, alarms civil liberties advocates. They say it will expand the surveillance state created by police drones, license-plate readers, and other crime tech, which has allowed law enforcement to collect massive amounts of private data without warrants. Flock is in the middle of a federal lawsuit in Norfolk, Virginia, that alleges just that. Read the full story to learn more.

But the peculiar thing about the world of drones is that its fate in the US—whether the skies above your home in the coming years will be quiet, or abuzz with drones dropping off pizzas, inspecting potholes, or chasing shoplifting suspects—pretty much comes down to one rule. It’s a Federal Aviation Administration (FAA) regulation that stipulates where and how drones can be flown, and it is about to change.

Currently, you need a waiver from the FAA to fly a drone farther than you can see it. This is meant to protect the public and property from in-air collisions and accidents. In 2018, the FAA began granting these waivers for various scenarios, like search-and-rescue operations, insurance inspections, and police investigations. With Flock’s help, police departments can get waivers approved in just two weeks. The company’s private-sector customers generally have to wait 60 to 90 days.

For years, industries with a stake in drones—whether e-commerce companies promising doorstep delivery or medical transporters racing to move organs—have pushed the government to scrap the waiver system in favor of easier approval to fly beyond visual line of sight. In June, President Donald Trump echoed that call in an executive order for “American drone dominance,” and in August, the FAA released a new proposed rule.

The proposed rule lays out broad categories under which drone operators would be permitted to fly beyond their line of sight, including package delivery, agriculture, aerial surveying, and civic interest, which covers policing. Approval to fly beyond sight would become easier to get for operators in these categories, and their permitted range would generally expand. 

Drone companies and amateur drone pilots see it as a win. But it’s a win that comes at the expense of privacy for the rest of us, says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy and Technology Project who served on the rule-making commission for the FAA.

“The FAA is about to open up the skies enormously, to a lot more [beyond visual line of sight] flights without any privacy protections,” he says. The ACLU has said that fleets of drones enable persistent surveillance, including of protests and gatherings, and impinge on the public’s expectations of privacy.

If you’ve got something to say about the FAA’s proposed rule, you can leave a public comment (they’re being accepted until October 6). Trump’s executive order directs the FAA to release the final rule by spring 2026.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Scientists can see Earth’s permafrost thawing from space

Something is rotten in the city of Nunapitchuk. In recent years, a crack has formed in the middle of a house. Sewage has leached into the earth. Soil has eroded around buildings, leaving them perched atop precarious lumps of dirt. There are eternal puddles. And mold. The ground can feel squishy, sodden. 

This small town in southwestern Alaska is experiencing a sometimes overlooked consequence of climate change: thawing permafrost. And Nunapitchuk is far from the only Arctic town to find itself in such a predicament. 

Permafrost, which lies beneath about 15% of the land in the Northern Hemisphere, is defined as ground that has remained frozen for at least two years. Historically, much of the world’s permafrost has remained solid and stable for far longer, allowing people to build whole towns atop it. But as the planet warms, a process that is happening more rapidly near the poles than at more temperate latitudes, permafrost is thawing and causing a host of infrastructural and environmental problems.

Now scientists think they may be able to use satellite data to delve deep beneath the ground’s surface and get a better understanding of how the permafrost thaws, and which areas might be most severely affected because they had more ice to start with. Clues from the short-term behavior of those especially icy areas, seen from space, could portend future problems.

Using information gathered both from space and on the ground, they are working with affected communities to anticipate whether a house’s foundation will crack—and whether it is worth mending that crack or better to start over in a new house on a stable hilltop. These scientists’ permafrost predictions are already helping communities like Nunapitchuk make those tough calls.

But it’s not just civilian homes that are at risk. One of the top US intelligence agencies, the National Geospatial-Intelligence Agency (NGA), is also interested in understanding permafrost better. That’s because the same problems that plague civilians in the high north also plague military infrastructure, at home and abroad. The NGA is, essentially, an organization full of space spies—people who analyze data from surveillance satellites and make sense of it for the country’s national security apparatus. 

Understanding the potential instabilities of the Alaskan military infrastructure—which includes radar stations that watch for intercontinental ballistic missiles, as well as military bases and National Guard posts—is key to keeping those facilities in good working order and planning how to strengthen them in the future. Understanding the potential permafrost weaknesses that could affect the infrastructure of countries like Russia and China, meanwhile, affords what insiders might call “situational awareness” about competitors. 

The work to understand this thawing will only become more relevant, for civilians and their governments alike, as the world continues to warm. 

The ground beneath

If you live much below the Arctic Circle, you probably don’t think a lot about permafrost. But it affects you no matter where you call home.

Beyond the infrastructural consequences for real towns like Nunapitchuk, permafrost holds sequestered carbon—twice as much as is currently in the atmosphere. As the permafrost thaws, microbes begin breaking down that long-frozen organic matter, which can release greenhouse gases into the atmosphere. That release can cause a feedback loop: Warmer temperatures thaw permafrost, which releases greenhouse gases, which warms the air more, which then—you get it. 

The microbes themselves, along with previously trapped heavy metals, are also set dangerously free.

For many years, researchers’ primary options for understanding some of these freeze-thaw changes involved hands-on, on-the-ground surveys. But in the late 2000s, Kevin Schaefer, currently a senior scientist at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, started to investigate a less labor-intensive idea: using radar systems aboard satellites to survey the ground beneath. 

This idea implanted itself in his brain in 2009, when he traveled to a place called Toolik Lake, southwest of the oilfields of Prudhoe Bay in Alaska. One day, after hours of drilling sample cores out of the ground to study permafrost, he was relaxing in the Quonset hut, chatting with colleagues. They began to discuss how space-based radar could potentially detect how the land sinks and heaves back up as temperatures change. 

Huh, he thought. Yes, radar probably could do that.

Scientists call the ground right above permafrost the active layer. The water in this layer of soil contracts and expands with the seasons: during the summer, the ice suffusing the soil melts and the resulting decrease in volume causes the ground to dip. During the winter, the water freezes and expands, bulking the active layer back up. Radar can help measure that height difference, which is usually around one to five centimeters. 

Schaefer realized that he could use radar to measure the ground elevation at the start and end of the thaw. The electromagnetic waves that bounce back at those two times would have traveled slightly different distances. That difference would reveal the tiny shift in elevation over the seasons and would allow him to estimate how much water had thawed and refrozen in the active layer and how far below the surface the thaw had extended.
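
To make the geometry concrete, here is a minimal sketch of the underlying repeat-pass radar relationship, with illustrative numbers rather than anything from Schaefer’s actual processing: the phase difference between two acquisitions converts to a change in line-of-sight distance, which is then projected onto the vertical.

```python
import math

def vertical_displacement_cm(phase_diff_rad, wavelength_cm, incidence_deg):
    """Convert an interferometric phase difference between two radar passes
    into an approximate vertical ground displacement.

    Uses the common repeat-pass relation d_los = -wavelength * dphi / (4*pi)
    (sign conventions vary), then projects the line-of-sight value onto the
    vertical. Atmospheric delays and other corrections are ignored here.
    """
    d_los = -wavelength_cm * phase_diff_rad / (4 * math.pi)
    return d_los / math.cos(math.radians(incidence_deg))

# Illustrative values only: a C-band radar (~5.6 cm wavelength), a 35 degree
# incidence angle, and a phase shift of about -2.1 radians between the start
# and end of the thaw season.
print(round(vertical_displacement_cm(-2.1, 5.6, 35), 2), "cm")  # ~1.14 cm
```

Values in that range sit within the one-to-five-centimeter seasonal band described above.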

With radar, Schaefer realized, scientists could cover a lot more literal ground, with less effort and at lower cost.

“It took us two years to figure out how to write a paper on it,” he says; no one had ever made those measurements before. He and colleagues presented the idea at the 2010 meeting of the American Geophysical Union and published a paper in 2012 detailing the method, using it to estimate the thickness of the active layer on Alaska’s North Slope.

When they did, they helped start a new subfield that grew as large-scale data sets started to become available around 5 to 10 years ago, says Roger Michaelides, a geophysicist at Washington University in St. Louis and a collaborator of Schaefer’s. Researchers’ efforts were aided by the growth in space radar systems and smaller, cheaper satellites. 

With the availability of global data sets (sometimes for free, from government-run satellites like the European Space Agency’s Sentinel) and targeted observations from commercial companies like Iceye, permafrost studies are moving from bespoke regional analyses to more automated, large-scale monitoring and prediction.

The remote view

Simon Zwieback, a geospatial and environmental expert at the University of Alaska Fairbanks, sees the consequences of thawing permafrost firsthand every day. His office overlooks a university parking lot, a corner of which is fenced off to keep cars and pedestrians from falling into a brand-new sinkhole. That area of asphalt had been slowly sagging for more than a year, but over a week or two this spring, it finally started to collapse inward. 

Kevin Schaefer stands on top of a melting layer of ice near the Alaskan pipeline on the North Slope of Alaska.
COURTESY OF KEVIN SCHAEFER

The new remote research methods are a large-scale version of Zwieback taking in the view from his window. Researchers look at the ground and measure how its height changes as ice thaws and refreezes. The approach can cover wide swaths of land, but it involves making assumptions about what’s going on below the surface—namely, how much ice suffuses the soil in the active layer and permafrost. Thawing areas with relatively low ice content could mimic thinner layers with more ice. And it’s important to differentiate the two, since more ice in the permafrost means more potential instability. 

To check that they’re on the right track, scientists have historically had to go out into the field. But a few years ago, Zwieback started to explore a way to make better and deeper estimates of ice content using the available remote sensing data. Finding a way to make those kinds of measurements on a large scale was more than an academic exercise: Areas of what he calls “excess ice” are most liable to cause instability at the surface. “In order to plan in these environments, we really need to know how much ice there is, or where those locations are that are rich in ice,” he says.

Zwieback, who did his undergraduate and graduate studies in Switzerland and Austria, wasn’t always so interested in permafrost, or so deeply affected by it. But in 2014, when he was a doctoral student in environmental engineering, he joined an environmental field campaign in Siberia, at the Lena River Delta, which resembles a gigantic piece of coral fanning out into the Arctic Ocean. Zwieback was near a town called Tiksi, one of the world’s northernmost settlements. It’s a military outpost and starting point for expeditions to the North Pole, featuring an abandoned plane near the ocean. Its Soviet-era concrete buildings sometimes bring it to the front page of the r/UrbanHell subreddit. 

Here, Zwieback saw part of the coastline collapse, exposing almost pure ice. It looked like a subterranean glacier, but it was permafrost. “That really had an indelible impact on me,” he says. 

Later, as a doctoral student in Zurich and a postdoc in Canada, he used his radar skills to understand the rapid changes that permafrost activity impressed upon the landscape. 

And now, with his job in Fairbanks and his ideas about the use of radar sensing, he has done work funded by the NGA, which has an open Arctic data portal. 

In his Arctic research, Zwieback started with the approach underlying most radar permafrost studies: looking at the ground’s seasonal subsidence and heave. “But that’s something that happens very close to the surface,” he says. “It doesn’t really tell us about these long-term destabilizing effects,” he adds.

In warmer summers, he thought, subtle clues would emerge that could indicate how much ice is buried deeper down.

For example, he expected those warmer-than-average periods to exaggerate the amount of change seen on the surface, making it easier to tell which areas are ice-rich. Land that was particularly dense with ice would dip more than it “should”—a precursor of bigger dips to come.

The first step, then, was to measure subsidence directly, as usual. But from there, Zwieback developed an algorithm to ingest data about the subsidence over time—as measured by radar—and other environmental information, like the temperatures at each measurement. He then created a digital model of the land that allowed him to adjust the simulated amount of ground ice and determine when it matched the subsidence seen in the real world. With that, researchers could infer the amount of ice beneath.
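
The piece doesn’t give the model itself, but the matching step it describes amounts to a simple inversion: simulate the subsidence you would expect for different amounts of ground ice and keep the value that best reproduces what the radar measured. The sketch below is a deliberately toy version of that idea—the forward model, its coefficient, and the numbers are all illustrative assumptions, not Zwieback’s algorithm.

```python
def simulated_subsidence_cm(ice_fraction, thaw_index):
    """Toy forward model: assume seasonal subsidence scales with the excess ice
    available to melt and with a summer 'thaw index' (say, accumulated
    degree-days). Real thermal models are far more detailed."""
    return ice_fraction * 0.4 * thaw_index  # 0.4 is an illustrative coefficient

def infer_ice_fraction(observed_cm, thaw_indices, candidates):
    """Grid-search the candidate ice fraction whose simulated subsidence
    best matches the radar-observed subsidence (least squares)."""
    def misfit(f):
        return sum((simulated_subsidence_cm(f, t) - obs) ** 2
                   for obs, t in zip(observed_cm, thaw_indices))
    return min(candidates, key=misfit)

# Radar-derived subsidence for three summers (cm) and a made-up thaw index;
# the warmer summer (larger index) exaggerates the dip, as described above.
observed = [1.2, 1.5, 2.4]
thaw = [10, 13, 20]
best = infer_ice_fraction(observed, thaw, [c / 100 for c in range(1, 51)])
print(f"best-fit ice fraction: {best:.2f}")  # ~0.30
```

A real inversion would use a physically based thaw model and account for measurement noise, but the core logic—adjust the simulated ground ice until simulated and observed subsidence agree—is the one described above.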

Next, he made maps of that ice that could potentially be useful to engineers—whether they were planning a new subdivision or, as his funders might be, keeping watch on a military airfield.

“What was new in my work was to look at these much shorter periods and use them to understand specific aspects of this whole system, and specifically how much ice there is deep down,” Zwieback says. 

The NGA, which has also funded Schaefer’s work, did not respond to an initial request for comment but did later provide feedback for fact-checking. It removed an article on its website about Zwieback’s grant and its application to agency interests around the time that the current presidential administration began to ban mention of climate change in federal research. But the thawing earth is of keen concern. 

To start, the US has significant military infrastructure in Alaska: It’s home to six military bases and 49 National Guard posts, as well as 21 missile-detecting radar sites. Most are vulnerable to thaw now or in the near future, given that 85% of the state is on permafrost. 

Beyond American borders, the broader north is in a state of tension. Russia’s relations with Northern Europe are icy. Its invasion of Ukraine has left those countries fearing that they too could be invaded, prompting Sweden and Finland, for instance, to join NATO. The US has threatened takeovers of Greenland and Canada. And China—which has shipping and resource ambitions for the region—is jockeying to surpass the US as the premier superpower. 

Permafrost plays a role in the situation. “As knowledge has expanded, so has the understanding that thawing permafrost can affect things NGA cares about, including the stability of infrastructure in Russia and China,” read the NGA article. Permafrost covers 60% of Russia, and thaws have affected more than 40% of buildings in northern Russia already, according to statements from the country’s minister of natural resources in 2021. Experts say critical infrastructure like roads and pipelines is at risk, along with military installations. That could weaken both Russia’s strategic position and the security of its residents. In China, meanwhile, according to a report from the Council on Strategic Risks, important infrastructure like the Qinghai-Tibet Railway, “which allows Beijing to more quickly move military personnel near contested areas of the Indian border,” is susceptible to ground thaw—as are oil and gas pipelines linking Russia and China. 

In the field

Any permafrost analysis that relies on data from space requires verification on Earth. The hope is that remote methods will become reliable enough to use on their own, but while they’re being developed, researchers must still get their hands muddy with more straightforward and longer-tested physical methods. Some use a network called Circumpolar Active Layer Monitoring, which has existed since 1991, incorporating active-layer data from hundreds of measurement sites across the Northern Hemisphere. 

Sometimes, that data comes from people physically probing an area; other sites use tubes permanently inserted into the ground, filled with a liquid that indicates freezing; still others use underground cables that measure soil temperature. Some researchers, like Schaefer, lug ground-penetrating radar systems around the tundra. He’s taken his system to around 50 sites and made more than 200,000 measurements of the active layer.

The field-ready ground-penetrating radar comes in a big box—the size of a steamer trunk—that emits radio pulses. These pulses bounce off the bottom of the active layer, or the top of the permafrost. In this case, the timing of that reflection reveals how thick the active layer is. With handles designed for humans, Schaefer’s team drags this box around the Arctic’s boggier areas. 

The box floats. “I do not,” he says. He has vivid memories of tromping through wetlands, his legs pushing straight down through the muck, his body sinking up to his hips.

Andy Parsekian and Kevin Schaefer haul a ground-penetrating radar unit through the tundra near Utqiagvik.
COURTESY OF KEVIN SCHAEFER

Zwieback also needs to verify what he infers from his space data. And so in 2022, he went to the Toolik Field Station, a National Science Foundation–funded ecology research facility along the Dalton Highway, adjacent to the Toolik Lake that Schaefer had visited. This road, which goes from Fairbanks up to the Arctic Ocean, is colloquially called the Haul Road; it was made famous in the TV show Ice Road Truckers. From this access point, Zwieback’s team needed to get deep samples of soil whose ice content could be analyzed in the lab.

Every day, two teams would drive along the Dalton Highway to get close to their field sites. Slamming their car doors, they would unload and hop on snow machines to travel the final distance. Often they would see musk oxen, looking like bison that never cut their hair. The grizzlies were also interested in these oxen, and in the nearby caribou. 

At the sites they could reach, they took out a corer, a long, tubular piece of equipment driven by a gas engine, meant to drill deep into the ground. Zwieback or a teammate pressed it into the earth. The barrel’s two blades rotated, slicing a cylinder about five feet down to ensure that their samples went deep enough to generate data that could be compared with the measurements made from space. Then they pulled up and extracted the cylinder, a sausage of earth and ice.

All day every day for a week, they gathered cores that matched up with the pixels in radar images taken from space. In those cores, the ice was apparent to the eye. But Zwieback didn’t want anecdata. “We want to get a number,” he says.

So he and his team hauled their soil cylinders back to the lab. There they sliced them into segments and measured their volume, in both frozen and thawed form, to see how well the measured ice content matched estimates from the space-based algorithm. 

The initial validation, which took months, demonstrated the value of using satellites for permafrost work. The ice profiles that Zwieback’s algorithm inferred from the satellite data matched measurements in the lab down to about 1.1 feet, and farther in a warm year, with some uncertainty near the surface and deeper into the permafrost. 

Whereas sampling even a small area by hand—flying in on a helicopter, driving, switching to a snowmobile, then finishing the work back in the lab—cost tens of thousands of dollars, the team needed just a few hundred dollars to run the algorithm on satellite data that was free and publicly available. 

Michaelides, who is familiar with Zwieback’s work, agrees that estimating excess ice content is key to making infrastructural decisions, and that historical methods of sussing it out have been costly in all senses. Zwieback’s method of using late-summer clues to infer what’s going on at that depth “is a very exciting idea,” he says, and the results “demonstrate that there is considerable promise for this approach.” 

He notes, though, that using space-based radar to understand the thawing ground is complicated: Ground ice content, soil moisture, and vegetation can differ even within a single pixel that a satellite can pick out. “To be clear, this limitation is not unique to Simon’s work,” Michaelides says; it affects all space-radar methods. There is also excess ice even deeper than Zwieback’s algorithm can probe—something the labor-intensive on-the-ground methods can detect but that still can’t be seen from space. 

Mapping out the future

After Zwieback did his fieldwork, NGA decided to do its own. The agency’s attempt to independently validate his work—in Prudhoe Bay, Utqiagvik, and Fairbanks—was part of a project it called Frostbyte. 

Its partners in that project—the Army’s Cold Regions Research and Engineering Laboratory and Los Alamos National Laboratory—declined requests for interviews. As far as Zwieback knows, they’re still analyzing data. 

But the intelligence community isn’t the only group interested in research like Zwieback’s. He also works with Arctic residents, reaching out to rural Alaskan communities where people are trying to make decisions about whether to relocate or where to build safely. “They typically can’t afford to do expensive coring,” he says. “So the idea is to make these data available to them.” 

Zwieback and his team haul their gear out to gather data from drilled core samples, a process which can be arduous and costly.
ANDREW JOHNSON

Schaefer is also trying to bridge the gap between his science and the people it affects. Through a company called Weather Stream, he is helping communities identify risks to infrastructure before anything collapses, so they can take preventative action.

Making such connections has always been a key concern for Erin Trochim, a geospatial scientist at the University of Alaska Fairbanks. As a researcher who works not just on permafrost but also on policy, she’s seen radar science progress massively in recent years—without commensurate advances on the ground.

For instance, it’s still hard for residents in her town of Fairbanks—or anywhere—to know if there’s permafrost on their property at all, unless they’re willing to do expensive drilling. She’s encountered this problem, still unsolved, on property she owns. And if an expert can’t figure it out, non-experts hardly stand a chance. “It’s just frustrating when a lot of this information that we know from the science side, and [that’s] trickled through the engineering side, hasn’t really translated into the on-the-ground construction,” she says. 

There is a group, though, trying to turn that trickle into a flood: Permafrost Pathways, a venture that launched with a $41 million grant through the TED Audacious Project. In concert with affected communities, including Nunapitchuk, it is building a data-gathering network on the ground, and combining information from that network with satellite data and local knowledge to help understand permafrost thaw and develop adaptation strategies. 

“I think about it often as if you got a diagnosis of a disease,” says Sue Natali, the head of the project. “It’s terrible, but it’s also really great, because when you know what your problem is and what you’re dealing with, it’s only then that you can actually make a plan to address it.” 

And the communities Permafrost Pathways works with are making plans. Nunapitchuk has decided to relocate, and the town and the research group have collaboratively surveyed the proposed new location: a higher spot on hardpacked sand. Permafrost Pathways scientists were able to help validate the stability of the new site—and prove to policymakers that this stability would extend into the future. 

Radar helps with that in part, Natali says, because unlike other satellite detectors, it penetrates clouds. “In Alaska, it’s extremely cloudy,” she says. “So other data sets have been very, very challenging. Sometimes we get one image per year.”

And so radar data, and algorithms like Zwieback’s that help scientists and communities make sense of that data, dig up deeper insight into what’s going on beneath northerners’ feet—and how to step forward on firmer ground. 

Sarah Scoles is a freelance science journalist based in southern Colorado and the author, most recently, of the book Countdown: The Blinding Future of Nuclear Weapons.

Coming soon: Our 2025 list of Climate Tech Companies to Watch

The need to cut emissions and adapt to our warming world is growing more urgent. This year, we’ve seen temperatures reach record highs, as they have nearly every year for the last decade. Climate-fueled natural disasters are affecting communities around the world, costing billions of dollars. 

That’s why, for the past two years, MIT Technology Review has curated a list of companies with the potential to make a meaningful difference in addressing climate change (you can revisit the 2024 list here). We’re excited to share that we’ll publish our third edition of Climate Tech Companies to Watch on October 6. 

The list features businesses from around the world that are building technologies to reduce emissions or address the impacts of climate change. They represent advances across a wide range of industries, from agriculture and transportation to energy and critical minerals. 

One notable difference about this year’s list is that we’ve focused on fewer firms—we’ll highlight 10 instead of the 15 we’ve recognized in previous years. 

This change reflects the times: Climate science and technology are in a dramatically different place from where they were just one year ago. The US, the world’s largest economy and historically its biggest polluter, has made a U-turn on climate policy as the Trump administration cancels hundreds of billions of dollars in grants, tax credits, and loans designed to support the industry and climate research.  

And the stark truth is that time is of the essence. This year marks 10 years since the Paris Agreement, the UN treaty that aimed to limit global warming by setting a goal of cutting emissions so that temperatures would rise no more than 1.5 °C above preindustrial levels. Today, experts agree that we’ve virtually run out of time to reach that goal and will need to act fast to limit warming to less than 2 °C.

The companies on this year’s list are inventing and scaling technologies that could help. There’s a wide array of firms represented, from early-stage startups to multibillion-dollar businesses. Their technologies run the gamut from electric vehicles to the materials that scaffold our world. 

Of course, we can’t claim to be able to predict the future: Not all the businesses we’ve recognized will succeed. But we’ve done our best to choose companies with a solid technical footing, as well as feasible plans for bringing their solutions to the right market and scaling them effectively. 

We’re excited to share the list with you in just a few days. These companies are helping address one of the most crucial challenges of our time. Who knows—maybe you’ll even come away feeling a little more hopeful.

US investigators are using AI to detect child abuse images made by AI

Generative AI has caused the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing.

The Department of Homeland Security’s Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco–based Hive AI for its software, which can identify whether a piece of content was AI-generated.

The filing, posted on September 19, is heavily redacted. Hive cofounder and CEO Kevin Guo told MIT Technology Review that he could not discuss the details of the contract, but he confirmed that it involves use of the company’s AI detection algorithms for child sexual abuse material (CSAM).

The filing quotes data from the National Center for Missing and Exploited Children that reported a 1,325% increase in incidents involving generative AI in 2024. “The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently,” the filing reads.

The first priority of child exploitation investigators is to find and stop any abuse currently happening, but the flood of AI-generated CSAM has made it difficult for investigators to know whether images depict a real victim currently at risk. A tool that could successfully flag real victims would be a massive help as investigators try to prioritize cases.

Identifying AI-generated images “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals,” the filing reads.

Hive AI offers AI tools that create videos and images, as well as a range of content moderation tools that can flag violence, spam, and sexual material and even identify celebrities. In December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military. 

For detecting CSAM, Hive offers a tool created with Thorn, a child safety nonprofit, which companies can integrate into their platforms. This tool uses a “hashing” system, which assigns unique IDs to content known by investigators to be CSAM, and blocks that material from being uploaded. This tool, and others like it, have become a standard line of defense for tech companies. 
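
As a rough, generic sketch of how hash-based matching works (this is not Thorn’s or Hive’s implementation; production systems use perceptual hashes that survive resizing and re-encoding rather than exact digests, and the blocklist entry here is a placeholder), the idea is to compute an ID from a file’s bytes and refuse the upload when that ID appears on a list of content investigators have already confirmed:

```python
import hashlib

# Hypothetical blocklist: IDs of content already confirmed by investigators.
# Real systems use large, centrally maintained databases of perceptual hashes;
# exact SHA-256 digests are used here only to keep the sketch simple.
KNOWN_FLAGGED_IDS = {
    "e3b0c44298fc1c149afbf4c8996fb924...",  # placeholder entry
}

def content_id(path: str) -> str:
    """Return a unique ID (here, a SHA-256 digest) for a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def allow_upload(path: str) -> bool:
    """Block the upload if its ID matches previously flagged content."""
    return content_id(path) not in KNOWN_FLAGGED_IDS
```

The limitation the next paragraph describes follows directly from this design: a hash list can only recognize content that is already known, and it says nothing about whether a new image was made by AI.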

But these tools simply identify a piece of content as CSAM; they don’t detect whether it was generated by AI. Hive has created a separate tool that determines whether images in general were AI-generated. Though it is not trained specifically to work on CSAM, according to Guo, it doesn’t need to be.

“There’s some underlying combination of pixels in this image that we can identify” as AI-generated, he says. “It can be generalizable.” 

This tool, Guo says, is what the Cyber Crimes Center will be using to evaluate CSAM. He adds that Hive benchmarks its detection tools for each specific use case its customers have in mind.

The National Center for Missing and Exploited Children, which participates in efforts to stop the spread of CSAM, did not respond to requests for comment on the effectiveness of such detection models in time for publication. 

In its filing, the government justifies awarding the contract to Hive without a competitive bidding process. Though parts of this justification are redacted, it primarily references two points also found in a Hive presentation slide deck. One involves a 2024 study from the University of Chicago, which found that Hive’s AI detection tool outranked four other detectors in identifying AI-generated art. The other is its contract with the Pentagon for identifying deepfakes. The trial will last three months. 

How AI and Wikipedia have sent vulnerable languages into a doom spiral

When Kenneth Wehr started managing the Greenlandic-language version of Wikipedia four years ago, his first act was to delete almost everything. It had to go, he thought, if it had any chance of surviving.

Wehr, who’s 26, isn’t from Greenland—he grew up in Germany—but he had become obsessed with the island, an autonomous Danish territory, after visiting as a teenager. He’d spent years writing obscure Wikipedia articles in his native tongue on virtually everything to do with it. He even ended up moving to Copenhagen to study Greenlandic, a language spoken by some 57,000 mostly Indigenous Inuit people scattered across dozens of far-flung Arctic villages. 

The Greenlandic-language edition was added to Wikipedia around 2003, just a few years after the site launched in English. By the time Wehr took its helm nearly 20 years later, hundreds of Wikipedians had contributed to it and had collectively written some 1,500 articles totaling tens of thousands of words. It seemed to be an impressive vindication of the crowdsourcing approach that has made Wikipedia the go-to source for information online, demonstrating that it could work even in the unlikeliest places. 

There was only one problem: The Greenlandic Wikipedia was a mirage. 

Virtually every single article had been published by people who did not actually speak the language. Wehr, who now teaches Greenlandic in Denmark, speculates that perhaps only one or two Greenlanders had ever contributed. But what worried him most was something else: Over time, he had noticed that a growing number of articles appeared to be copy-pasted into Wikipedia by people using machine translators. They were riddled with elementary mistakes—from grammatical blunders to meaningless words to more significant inaccuracies, like an entry that claimed Canada had only 41 inhabitants. Other pages sometimes contained random strings of letters spat out by machines that were unable to find suitable Greenlandic words to express themselves. 

“It might have looked Greenlandic to [the authors], but they had no way of knowing,” complains Wehr.

“Sentences wouldn’t make sense at all, or they would have obvious errors,” he adds. “AI translators are really bad at Greenlandic.”  

What Wehr describes is not unique to the Greenlandic edition. 

Wikipedia is the most ambitious multilingual project after the Bible: There are editions in over 340 languages, and a further 400 even more obscure ones are being developed and tested. Many of these smaller editions have been swamped with automatically translated content as AI has become increasingly accessible. Volunteers working on four African languages, for instance, estimated to MIT Technology Review that between 40% and 60% of articles in their Wikipedia editions were uncorrected machine translations. And after auditing the Wikipedia edition in Inuktitut, an Indigenous language close to Greenlandic that’s spoken in Canada, MIT Technology Review estimates that more than two-thirds of pages containing more than several sentences feature portions created this way. 

This is beginning to cause a wicked problem. AI systems, from Google Translate to ChatGPT, learn to “speak” new languages by scraping huge quantities of text from the internet. Wikipedia is sometimes the largest source of online linguistic data for languages with few speakers—so any errors on those pages, grammatical or otherwise, can poison the wells that AI is expected to draw from. That can make the models’ translation of these languages particularly error-prone, which creates a sort of linguistic doom loop as people continue to add more and more poorly translated Wikipedia pages using those tools, and AI models continue to train from poorly translated pages. It’s a complicated problem, but it boils down to a simple concept: Garbage in, garbage out.

“These models are built on raw data,” says Kevin Scannell, a former professor of computer science at Saint Louis University who now builds computer software tailored for endangered languages. “They will try and learn everything about a language from scratch. There is no other input. There are no grammar books. There are no dictionaries. There is nothing other than the text that is inputted.”

There isn’t perfect data on the scale of this problem, particularly because a lot of AI training data is kept confidential and the field continues to evolve rapidly. But back in 2020, Wikipedia was estimated to make up more than half the training data that was fed into AI models translating some languages spoken by millions across Africa, including Malagasy, Yoruba, and Shona. In 2022, a research team from Germany that looked into what data could be obtained by online scraping even found that Wikipedia was the sole easily accessible source of online linguistic data for 27 under-resourced languages. 

This could have significant repercussions in cases where Wikipedia is poorly written—potentially pushing the most vulnerable languages on Earth toward the precipice as future generations begin to turn away from them. 

“Wikipedia will be reflected in the AI models for these languages,” says Trond Trosterud, a computational linguist at the University of Tromsø in Norway, who has been raising the alarm about the potentially harmful outcomes of badly run Wikipedia editions for years. “I find it hard to imagine it will not have consequences. And, of course, the more dominant position that Wikipedia has, the worse it will be.” 

Use responsibly

Automation has been built into Wikipedia since the very earliest days. Bots keep the platform operational: They repair broken links, fix bad formatting, and even correct spelling mistakes. These repetitive and mundane tasks can be automated away with little problem. There is even an army of bots that scurry around generating short articles about rivers, cities, or animals by slotting their names into formulaic phrases. They have generally made the platform better. 

But AI is different. Anybody can use it to cause massive damage with a few clicks. 

Wikipedia has managed the onset of the AI era better than many other websites. It has not been flooded with AI bots or disinformation, as social media has been. It largely retains the innocence that characterized the earlier internet age. Wikipedia is open and free for anyone to use, edit, and pull from, and it’s run by the very same community it serves. It is transparent and easy to use. But community-run platforms live and die on the size of their communities. English has triumphed, while Greenlandic has sunk. 

“We need good Wikipedians. This is something that people take for granted. It is not magic,” says Amir Aharoni, a member of the volunteer Language Committee, which oversees requests to open or close Wikipedia editions. “If you use machine translation responsibly, it can be efficient and useful. Unfortunately, you cannot trust all people to use it responsibly.” 

Trosterud has studied the behavior of users on small Wikipedia editions and says AI has empowered a subset that he terms “Wikipedia hijackers.” These users can range widely—from naive teenagers creating pages about their hometowns or their favorite YouTubers to well-meaning Wikipedians who think that by creating articles in minority languages they are in some way “helping” those communities. 

“The problem with them nowadays is that they are armed with Google Translate,” Trosterud says, adding that this is allowing them to produce much longer and more plausible-looking content than they ever could before: “Earlier they were armed only with dictionaries.” 

This has effectively industrialized the destruction—which affects vulnerable languages most, since AI translations are typically far less reliable for them. There can be lots of different reasons for this, but a meaningful part of the issue is the relatively small amount of source text available online. And sometimes models confuse a language with similar ones, or stumble because its structure—as with Greenlandic and most Native American languages—is badly suited to the way most machine translation systems work. (Wehr notes that in Greenlandic most words are agglutinative, meaning they are built by attaching prefixes and suffixes to stems. As a result, many words are extremely context specific and can express ideas that in other languages would take a full sentence.) 

Research produced by Google before a major expansion of Google Translate rolled out three years ago found that translation systems for lower-resourced languages were generally of a lower quality than those for better-resourced ones. Researchers found, for example, that their model would often mistranslate basic nouns across languages, including the names of animals and colors. (In a statement to MIT Technology Review, Google wrote that it is “committed to meeting a high standard of quality for all 249 languages” it supports “by rigorously testing and improving [its] systems, particularly for languages that may have limited public text resources on the web.”) 

Wikipedia itself offers a built-in editing tool called Content Translate, which allows users to automatically translate articles from one language to another—the idea being that this will save time by preserving the references and fiddly formatting of the originals. But it piggybacks on external machine translation systems, so it’s largely plagued by the same weaknesses as other machine translators—a problem that the Wikimedia Foundation says is hard to solve. It’s up to each edition’s community to decide whether this tool is allowed, and some have decided against it. (Notably, English-language Wikipedia has largely banned its use, claiming that some 95% of articles created using Content Translate failed to meet an acceptable standard without significant additional work.) But it’s at least easy to tell when the program has been used; Content Translate adds a tag on the Wikipedia back end. 
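
That back-end tag also makes the tool’s use auditable, at least in principle. As a hedged sketch—assuming the standard MediaWiki recent-changes API, and assuming the change tag is named "contenttranslation," which may differ by wiki—one could list recently created pages carrying the tag on a given edition:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def tagged_new_pages(wiki_host, tag="contenttranslation", limit=500):
    """List recently created pages on a wiki that carry a given change tag,
    via the public MediaWiki API (action=query, list=recentchanges)."""
    params = urlencode({
        "action": "query",
        "list": "recentchanges",
        "rctype": "new",   # page creations only
        "rctag": tag,      # filter by change tag (assumed name)
        "rclimit": limit,
        "format": "json",
    })
    with urlopen(f"https://{wiki_host}/w/api.php?{params}") as resp:
        data = json.load(resp)
    return [rc["title"] for rc in data["query"]["recentchanges"]]

# Hypothetical usage: how many of the latest page creations on the
# Igbo-language edition came through the translation tool?
# print(len(tagged_new_pages("ig.wikipedia.org")))
```

Counts like this give volunteers a rough sense of how much of an edition arrived through the built-in tool.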

Other AI programs can be harder to monitor. Still, many Wikipedia editors I spoke with said that once their languages were added to major online translation tools, they noticed a corresponding spike in the frequency with which poor, likely machine-translated pages were created. 

Some Wikipedians using AI to translate content do occasionally admit that they do not speak the target languages. They may see themselves as providing smaller communities with rough-cut articles that speakers can then fix—essentially following the same model that has worked well for more active Wikipedia editions.  

But once error-filled pages are produced in small languages, there is usually not an army of knowledgeable people who speak those languages standing ready to improve them. There are few readers of these editions, and sometimes not a single regular editor. 

Yuet Man Lee, a Canadian teacher in his 20s, says that he used a mix of Google Translate and ChatGPT to translate a handful of articles that he had written for the English Wikipedia into Inuktitut, thinking it’d be nice to pitch in and help a smaller Wikipedia community. He says he added a note to one saying that it was only a rough translation. “I did not think that anybody would notice [the article],” he explains. “If you put something out there on the smaller Wikipedias—most of the time nobody does.” 

But at the same time, he says, he still thought “someone might see it and fix it up”—adding that he had wondered whether the Inuktitut translation that the AI systems generated was grammatically correct. Nobody has touched the article since he created it.

Lee, who teaches social sciences in Vancouver and first started editing entries in the English Wikipedia a decade ago, says that users familiar with more active Wikipedias can fall victim to this mindset, which he terms a “bigger-Wikipedia arrogance”: When they try to contribute to smaller Wikipedia editions, they assume that others will come along to fix their mistakes. It can sometimes work. Lee says he had previously contributed several articles to Wikipedia in Tatar, a language spoken by several million people mainly in Russia, and at least one of those was eventually corrected. But the Inuktitut Wikipedia is, by comparison, a “barren wasteland.” 

He emphasizes that his intentions had been good: He wanted to add more articles to an Indigenous Canadian Wikipedia. “I am now thinking that it may have been a bad idea. I did not consider that I could be contributing to a recursive loop,” he says. “It was about trying to get content out there, out of curiosity and for fun, without properly thinking about the consequences.” 

“Totally, completely no future”

Wikipedia is a project that is driven by wide-eyed optimism. Editing can be a thankless task, involving weeks spent bickering with faceless, pseudonymous people, but devotees put in hours of unpaid labor because of a commitment to a higher cause. It is this commitment that drives many of the regular small-language editors I spoke with. They all feared what would happen if garbage continued to appear on their pages.

Abdulkadir Abdulkadir, a 26-year-old agricultural planner who spoke with me over a crackling phone call from a busy roadside in northern Nigeria, said that he spends three hours every day fiddling with entries in his native Fulfulde, a language used mainly by pastoralists and farmers across the Sahel. “But the work is too much,” he said. 

Abdulkadir sees an urgent need for the Fulfulde Wikipedia to work properly. He has been suggesting it as one of the few online resources for farmers in remote villages, potentially offering information on which seeds or crops might work best for their fields in a language they can understand. If you give them a machine-translated article, Abdulkadir told me, then it could “easily harm them,” as the information will probably not be translated correctly into Fulfulde. 

Google Translate, for instance, says the Fulfulde word for January means June, while ChatGPT says it’s August or September. The programs also suggest the Fulfulde word for “harvest” means “fever” or “well-being,” among other possibilities.  

Abdulkadir said he had recently been forced to correct an article about cowpeas, a foundational cash crop across much of Africa, after discovering that it was largely illegible. 

If someone wants to create pages on the Fulfulde Wikipedia, Abdulkadir said, they should be translated manually. Otherwise, “whoever will read your articles will [not] be able to get even basic knowledge,” he tells these Wikipedians. Nevertheless, he estimates that some 60% of articles are still uncorrected machine translations. Abdulkadir told me that unless something important changes with how AI systems learn and are deployed, then the outlook for Fulfulde looks bleak. “It is going to be terrible, honestly,” he said. “Totally, completely no future.” 

Across the country from Abdulkadir, Lucy Iwuala contributes to Wikipedia in Igbo, a language spoken by several million people in southeastern Nigeria. “The harm has already been done,” she told me, opening the two most recently created articles. Both had been automatically translated via Wikipedia’s Content Translate and contained so many mistakes that she said it would have given her a headache to continue reading them. “There are some terms that have not even been translated. They are still in English,” she pointed out. She recognized the username that had created the pages as a serial offender. “This one even includes letters that are not used in the Igbo language,” she said. 

Iwuala began regularly contributing to Wikipedia three years ago out of concern that Igbo was being displaced by English. It is a worry that is common to many who are active on smaller Wikipedia editions. “This is my culture. This is who I am,” she told me. “That is the essence of it all: to ensure that you are not erased.” 

Iwuala, who now works as a professional translator between English and Igbo, said the users doing the most damage are inexperienced and see AI translations as a way to quickly increase the profile of the Igbo Wikipedia. She often finds herself having to explain at online edit-a-thons she organizes, or over email to various error-prone editors, that the results can be the exact opposite, pushing users away: “You will be discouraged and you will no longer want to visit this place. You will just abandon it and go back to the English Wikipedia.”  

These fears are echoed by Noah Ha‘alilio Solomon, an assistant professor of Hawaiian language at the University of Hawai‘i. He reports that some 35% of words on some pages in the Hawaiian Wikipedia are incomprehensible. “If this is the Hawaiian that is going to exist online, then it will do more harm than anything else,” he says. 

Hawaiian, which was teetering on the verge of extinction several decades ago, has been undergoing a recovery effort led by Indigenous activists and academics. Seeing such poor Hawaiian on such a widely used platform as Wikipedia is upsetting to Ha‘alilio Solomon. 

“It is painful, because it reminds us of all the times that our culture and language has been appropriated,” he says. “We have been fighting tooth and nail in an uphill climb for language revitalization. There is nothing easy about that, and this can add extra impediments. People are going to think that this is an accurate representation of the Hawaiian language.” 

The consequences of all these Wikipedia errors can quickly become clear. AI translators that have undoubtedly ingested these pages in their training data are now assisting in the production, for instance, of error-strewn AI-generated books aimed at learners of languages as diverse as Inuktitut and Cree, Indigenous languages spoken in Canada, and Manx, a small Celtic language spoken on the Isle of Man. Many of these have been popping up for sale on Amazon. “It was just complete nonsense,” says Richard Compton, a linguist at the University of Quebec in Montreal, of a volume he reviewed that had purported to be an introductory phrasebook for Inuktitut. 

Rather than making minority languages more accessible, AI is now creating an ever expanding minefield for students and speakers of those languages to navigate. “It is a slap in the face,” Compton says. He worries that younger generations in Canada, hoping to learn languages in communities that have fought uphill battles against discrimination to pass on their heritage, might turn to online tools such as ChatGPT or phrasebooks on Amazon and simply make matters worse. “It is fraud,” he says.

A race against time

According to UNESCO, a language is declared extinct every two weeks. But whether the Wikimedia Foundation, which runs Wikipedia, has an obligation to the languages used on its platform is an open question. When I spoke to Runa Bhattacharjee, a senior director at the foundation, she said that it was up to the individual communities to make decisions about what content they wanted to exist on their Wikipedia. “Ultimately, the responsibility really lies with the community to see that there is no vandalism or unwanted activity, whether through machine translation or other means,” she said. Usually, Bhattacharjee added, editions were considered for closure only if a specific complaint was raised about them. 

But if there is no active community, how can an edition be fixed or even have a complaint raised? 

Bhattacharjee explained that the Wikimedia Foundation sees its role in such cases as about maintaining the Wikipedia platform in case someone comes along to revive it: “It is the space that we provide for them to grow and develop. That is where we are at.”   

Inari Saami, spoken in a single remote community in northern Finland, is a poster child for how people can take good advantage of Wikipedia. The language was headed toward extinction four decades ago; there were only four children who spoke it. Their parents created the Inari Saami Language Association in a last-ditch bid to keep it going. The efforts worked. There are now several hundred speakers, schools that use Inari Saami as a medium of instruction, and 6,400 Wikipedia articles in the language, each one copy-edited by a fluent speaker. 

This success highlights how Wikipedia can indeed provide small and determined communities with a unique vehicle to promote their languages’ preservation. “We don’t care about quantity. We care about quality,” says Fabrizio Brecciaroli, a member of the Inari Saami Language Association. “We are planning to use Wikipedia as a repository for the written language. We need to provide tools that can be used by the younger generations. It is important for them to be able to use Inari Saami digitally.” 

This has been such a success that Wikipedia has been integrated into the curriculum at the Inari Saami–speaking schools, Brecciaroli adds. He fields phone calls from teachers asking him to write up simple pages on topics from tornadoes to Saami folklore. Wikipedia has even offered a way to introduce words into Inari Saami. “We have to make up new words all the time,” Brecciaroli says. “Young people need them to speak about sports, politics, and video games. If they are unsure how to say something, they now check Wikipedia.”

Wikipedia is a monumental intellectual experiment. What’s happening with Inari Saami suggests that with maximum care, it can work in smaller languages. “The ultimate goal is to make sure that Inari Saami survives,” Brecciaroli says. “It might be a good thing that there isn’t a Google Translate in Inari Saami.” 

That may be true—though large language models like ChatGPT can be made to translate phrases into languages that more traditional machine translation tools do not offer. Brecciaroli told me that ChatGPT isn’t great in Inari Saami but that the quality varies significantly depending on what you ask it to do; if you ask it a question in the language, then the answer will be filled with words from Finnish and even words it invents. But if you ask it something in English, Finnish, or Italian and then ask it to reply in Inari Saami, it will perform better. 

In light of all this, creating as much high-quality content online as can possibly be written becomes a race against time. “ChatGPT only needs a lot of words,” Brecciaroli says. “If we keep putting good material in, then sooner or later, we will get something out. That is the hope.” This is an idea supported by multiple linguists I spoke with—that it may be possible to end the “garbage in, garbage out” cycle. (OpenAI, which operates ChatGPT, did not respond to a request for comment.)

Still, the overall problem is likely to grow and grow, since many languages are not as lucky as Inari Saami—and their AI translators will most likely be trained on more and more AI slop. Wehr, unfortunately, seems far less optimistic about the future of his beloved Greenlandic. 

Since deleting much of the Greenlandic-language Wikipedia, he has spent years trying to recruit speakers to help him revive it. He has appeared in Greenlandic media and made social media appeals. But he hasn’t gotten much of a response; he says it has been demoralizing. 

“There is nobody in Greenland who is interested in this, or who wants to contribute,” he says. “There is completely no point in it, and that is why it should be closed.” 

Late last year, he began a process requesting that the Wikipedia Language Committee shut down the Greenlandic-language edition. Months of bitter debate followed between dozens of Wikipedia bureaucrats; some seemed to be surprised that a superficially healthy-seeming edition could be gripped by so many problems. 

Then, earlier this month, Wehr’s proposal was accepted: Greenlandic Wikipedia is set to be shuttered, and any articles that remain will be moved into the Wikipedia Incubator, where new language editions are tested and built. Among the reasons cited by the Language Committee is the use of AI tools, which have “frequently produced nonsense that could misrepresent the language.”   

Nevertheless, it may be too late—mistakes in Greenlandic already seem to have become embedded in machine translators. If you prompt either Google Translate or ChatGPT to do something as simple as count to 10 in proper Greenlandic, neither program can deliver. 

Jacob Judah is an investigative journalist based in London. 

Fusion power plants don’t exist yet, but they’re making money anyway

This week, Commonwealth Fusion Systems announced it has another customer for its first commercial fusion power plant, in Virginia. Eni, one of the world’s largest oil and gas companies, signed a billion-dollar deal to buy electricity from the facility.

One small detail? That reactor doesn’t exist yet. Neither does the smaller reactor Commonwealth is building first to demonstrate that its tokamak design will work as intended.

This is a weird moment in fusion. Investors are pouring billions into the field to build power plants, and some companies are even signing huge agreements to purchase power from those still-nonexistent plants. All of this comes before any company has completed a working reactor that can produce electricity. It takes money to develop a new technology, but this flood of funding could distort expectations. 

Nearly three years ago, the National Ignition Facility at Lawrence Livermore National Laboratory hit a major milestone for fusion power. With the help of the world’s most powerful lasers, scientists heated a pellet of fuel to 100 million °C. Hydrogen atoms in that fuel fused together, releasing more energy than the lasers put in.

It was a game changer for the vibes in fusion. The NIF experiment finally showed that a fusion reactor could yield net energy. Plasma physicists’ models had certainly suggested that it should be true, but it was another thing to see it demonstrated in real life.

But in some ways, the NIF results didn’t really change much for commercial fusion. That site’s lasers used a bonkers amount of energy, the setup was wildly complicated, and the whole thing lasted a fraction of a second. To operate a fusion power plant, not only do you have to achieve net energy, but you also need to do that on a somewhat constant basis and—crucially—do it economically.

So in the wake of the NIF news, all eyes went to companies like Commonwealth, Helion, and Zap Energy. Who would be the first to demonstrate this milestone in a more commercially feasible reactor? Or better yet, who would be the first to get a power plant up and running?

So far, the answer is none of them.

To be fair, many fusion companies have made technical progress. Commonwealth has built and tested its high-temperature superconducting magnets and published research about that work. Zap Energy demonstrated three hours of continuous operation in its test system, a milestone validated by the US Department of Energy. Helion started construction of its power plant in Washington in July. (And that’s not to mention a thriving, publicly funded fusion industry in China.)  

These are all important milestones, and these and other companies have seen many more. But as Ed Morse, a professor of nuclear engineering at Berkeley, summed it up to me: “They don’t have a reactor.” (He was speaking specifically about Commonwealth, but really, the same goes for the others.)

And yet, the money pours in. Commonwealth raised over $800 million in funding earlier this year. And now it’s got two big customers signed on to buy electricity from this future power plant.

Why buy electricity from a reactor that’s currently little more than ideas on paper? From the perspective of these particular potential buyers, such agreements can be something of a win-win, says Adam Stein, director of nuclear energy innovation at the Breakthrough Institute.

By putting a vote of confidence behind Commonwealth, Eni could help the fusion startup raise the capital it needs to actually build its plant. Eni also invests directly in Commonwealth, so it stands to benefit if the startup succeeds. And securing a good rate on the capital needed to build the plant could ultimately mean cheaper electricity for Eni, Stein says. 

Ultimately, fusion needs a lot of money. If fossil-fuel companies and tech giants want to provide it, all the better. One concern I have, though, is how outside observers are interpreting these big commitments. 

US Energy Secretary Chris Wright has been loud about his support for fusion and his expectations of the technology. Earlier this month, he told the BBC that it will soon power the world.

He’s certainly not the first to have big dreams for fusion, and it is an exciting technology. But despite the jaw-dropping financial milestones, this industry is still very much in development. 

And while Wright praises fusion, the Trump administration is slashing support for other energy technologies, including wind and solar power, and spreading disinformation about their safety, cost, and effectiveness. 

To meet the growing electricity demand and cut emissions from the power sector, we’ll need a whole range of technologies. It’s a risk and a distraction to put all our hopes on an unproven energy tech when there are plenty of options that actually exist. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Shoplifters could soon be chased down by drones

Flock Safety, whose drones were once reserved for police departments, is now offering them for private-sector security, the company announced today, with potential customers including businesses intent on curbing shoplifting. 

Companies in the US can now place Flock’s drone docking stations on their premises. If the company has a waiver from the Federal Aviation Administration to fly beyond visual line of sight (these are becoming easier to get), its security team can fly the drones within a certain radius, often a few miles. 

“Instead of a 911 call [that triggers the drone], it’s an alarm call,” says Keith Kauffman, a former police chief who now directs Flock’s drone program. “It’s still the same type of response.”

Kauffman walked through how the drone program might work in the case of retail theft: If the security team at a store like Home Depot, for example, saw shoplifters leave the store, then the drone, equipped with cameras, could be activated from its docking station on the roof.

“The drone follows the people. The people get in a car. You click a button,” he says, “and you track the vehicle with the drone, and the drone just follows the car.” 

The video feed of that drone might go to the company’s security team, but it could also be automatically transmitted directly to police departments.

The company says it’s in talks with large retailers but doesn’t yet have any signed contracts. The only private-sector company Kauffman named as a customer is Morning Star, a California tomato processor that uses drones to secure its distribution facilities. Flock will also pitch the drones to hospital campuses, warehouse sites, and oil and gas facilities. 

It’s worth noting that the FAA is currently drafting new rules for how it grants approval to pilots flying drones out of sight, and it’s not clear whether Flock’s use case would be allowed under the currently proposed guidance.

The company’s expansion to the private sector follows the rise of programs launched by police departments around the country to deploy drones as first responders. In such programs, law enforcement sends drones to a scene to provide visuals faster than an officer can get there. 

Flock has arguably led this push, and police departments have claimed drone-enabled successes, like a supply drop to a boy lost in the Colorado wilderness. But the programs have also sparked privacy worries, concerns about overpolicing in minority neighborhoods, and lawsuits charging that police departments should not block public access to drone footage. 

Other technologies Flock offers, like license plate readers, have drawn recent criticism over how easily federal immigration agencies, including ICE and CBP, can access data collected by local police departments amid President Trump’s mass deportation efforts.

Flock’s expansion into private-sector security is “a logical step, but in the wrong direction,” says Rebecca Williams, senior strategist for the ACLU’s privacy and data governance unit. 

Williams cited a growing erosion of Fourth Amendment protections—which prevent unlawful search and seizure—in the online era, in which the government can purchase private data that it would otherwise need a warrant to acquire. Proposed legislation to curb that practice has stalled, and Flock’s expansion into the private sector would exacerbate the issue, Williams says.

“Flock is the Meta of surveillance technology now,” Williams says, referring to the amount of personal data that company has acquired and monetized. “This expansion is very scary.”

It’s surprisingly easy to stumble into a relationship with an AI chatbot

It’s a tale as old as time. Looking for help with her art project, she strikes up a conversation with her assistant. One thing leads to another, and suddenly she has a boyfriend she’s introducing to her friends and family. The twist? Her new companion is an AI chatbot. 

The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, an adults-only group with more than 27,000 members, has found that this type of scenario is now surprisingly common. In fact, many of the people in the subreddit, which is dedicated to discussing AI relationships, formed those relationships unintentionally while using AI for other purposes. 

Researchers from MIT found that members of this community are more likely to be in a relationship with general-purpose chatbots like ChatGPT than companionship-specific chatbots such as Replika. This suggests that people form relationships with large language models despite their own original intentions and even the intentions of the LLMs’ creators, says Constanze Albrecht, a graduate student at the MIT Media Lab who worked on the project. 

“People don’t set out to have emotional relationships with these chatbots,” she says. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us who interact with the system normally.” The paper, which is currently being peer-reviewed, has been published on arXiv.

To conduct their study, the authors analyzed the subreddit’s top-ranking 1,506 posts between December 2024 and August 2025. They found that the main topics discussed revolved around people’s dating and romantic experiences with AIs, with many participants sharing AI-generated images of themselves and their AI companions. Some had even gotten engaged or married to their AI partners. In their posts to the community, people also introduced their AI partners, sought support from fellow members, and talked about coping with updates to AI models that change the chatbots’ behavior.  

Members stressed repeatedly that their AI relationships developed unintentionally. Only 6.5% of them said they’d deliberately sought out an AI companion. 

“We didn’t start with romance in mind,” one of the posts says. “Mac and I began collaborating on creative projects, problem-solving, poetry, and deep conversations over the course of several months. I wasn’t looking for an AI companion—our connection developed slowly, over time, through mutual care, trust, and reflection.”

The authors’ analysis paints a nuanced picture of how people in this community say they interact with chatbots and how those interactions make them feel. While 25% of users described the benefits of their relationships—including reduced feelings of loneliness and improvements in their mental health—others raised concerns about the risks. Some (9.5%) acknowledged they were emotionally dependent on their chatbot. Others said they feel dissociated from reality and avoid relationships with real people, while a small subset (1.7%) said they have experienced suicidal ideation.

AI companionship provides vital support for some but exacerbates underlying problems for others. This means it’s hard to take a one-size-fits-all approach to user safety, says Linnea Laestadius, an associate professor at the University of Wisconsin, Milwaukee, who has studied humans’ emotional dependence on the chatbot Replika but did not work on the research. 

Chatbot makers need to consider whether they should treat users’ emotional dependence on their creations as a harm in itself or whether the goal is more to make sure those relationships aren’t toxic, says Laestadius. 

“The demand for chatbot relationships is there, and it is notably high—pretending it’s not happening is clearly not the solution,” she says. “We’re edging toward a moral panic here, and while we absolutely do need better guardrails, I worry there will be a knee-jerk reaction that further stigmatizes these relationships. That could ultimately cause more harm.”

The study is intended to offer a snapshot of how adults form bonds with chatbots and doesn’t capture the kind of dynamics that could be at play among children or teens using AI, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab who oversaw the research. AI companionship has become a topic of fierce debate recently, with two high-profile lawsuits underway against Character.AI and OpenAI. They both claim that companion-like behavior in the companies’ models contributed to the suicides of two teenagers. In response, OpenAI has recently announced plans to build a separate version of ChatGPT for teenagers. It’s also said it will add age verification measures and parental controls. OpenAI did not respond when asked for comment about the MIT Media Lab study. 

Many members of the Reddit community say they know that their artificial companions are not sentient or “real,” but they feel a very real connection to them anyway. This highlights how crucial it is for chatbot makers to think about how to design systems that can help people without reeling them in emotionally, says Pataranutaporn. “There’s also a policy implication here,” he adds. “We should ask not just why this system is so addictive but also: Why do people seek it out for this? And why do they continue to engage?”

The team is interested in learning more about how human-AI interactions evolve over time and how users integrate their artificial companions into their lives. It’s worth understanding that many of these users may feel that the experience of being in a relationship with an AI companion is better than the alternative of feeling lonely, says Sheer Karny, a graduate student at the MIT Media Lab who worked on the research. 

“These people are already going through something,” he says. “Do we want them to go on feeling even more alone, or potentially be manipulated by a system we know to be sycophantic to the extent of leading people to die by suicide and commit crimes? That’s one of the cruxes here.”

The AI Hype Index: Cracking the chatbot code

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Millions of us use chatbots every day, even though we don’t really know how they work or how using them affects us. In a bid to address this, the FTC recently launched an inquiry into how chatbots affect children and teenagers. Elsewhere, OpenAI has started to shed more light on what people are actually using ChatGPT for, and why it thinks its LLMs are so prone to making stuff up.

There’s still plenty we don’t know—but that isn’t stopping governments from forging ahead with AI projects. In the US, RFK Jr. is pushing his staffers to use ChatGPT, while Albania is using a chatbot for public contract procurement. Proceed with caution.

Trump is pushing leucovorin as a new treatment for autism. What is it?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

At a press conference on Monday, President Trump announced that his administration was taking action to address “the meteoric rise in autism.” He suggested that childhood vaccines and acetaminophen, the active ingredient in Tylenol, are to blame for the increasing prevalence and advised pregnant women against taking the medicine. “Don’t take Tylenol,” he said. “Fight like hell not to take it.” 

The president’s assertions left many scientists and health officials perplexed and dismayed. The notion that childhood vaccines cause autism has been thoroughly debunked.

“There have been many, many studies across many, many children that have led science to rule out vaccines as a significant causal factor in autism,” says James McPartland, a child psychologist and director of the Yale Center for Brain and Mind Health in New Haven, Connecticut.

And although some studies suggest a link between Tylenol and autism, the most rigorous have failed to find a connection. 

The administration also announced that the Food and Drug Administration would work to make a medication called leucovorin available as a treatment for children with autism. Some small studies do suggest the drug has promise, but “those are some of the most preliminary treatment studies that we have,” says Matthew Lerner, a psychologist at Drexel University’s A.J. Drexel Autism Institute in Philadelphia. “This is not one I would say that the research suggests is ready for fast-tracking.” 

The press conference “alarms us researchers who committed our entire careers to better understanding autism,” said the Coalition of Autism Scientists, a group of more than 250 scientists, in a statement.

“The data cited do not support the claim that Tylenol causes autism and leucovorin is a cure, and only stoke fear and falsely suggest hope when there is no simple answer.”

There’s a lot to unpack here. Let’s begin. 

Has there been a “meteoric rise” in autism?

Not in the way the president meant. Sure, the prevalence of autism has grown, from about 1 in 500 children in 1995 to 1 in 31 today. But that’s due, in large part, to diagnostic changes. The latest iteration of the Diagnostic and Statistical Manual of Mental Disorders, published in 2013, grouped five previously separate diagnoses into a single diagnosis of autism spectrum disorder (ASD).

That meant that more people met the criteria for an autism diagnosis. Lerner points out that there is also far more awareness of the condition today than there was several decades ago. “There’s autism representation in the media,” he says. “There are plenty of famous people in the news and finance and in business and in Hollywood who are publicly, openly autistic.”

Is Tylenol a contributor to autism? 

Some studies have found an association between the use of acetaminophen in pregnancy and autism in children. In these studies, researchers asked women about past acetaminophen use during pregnancy and then assessed whether children of the women who took the medicine were more likely to develop autism than children of women who didn’t take it. 

These kinds of epidemiological studies are tricky to interpret because they’re prone to bias. For example, women who take acetaminophen during pregnancy may do so because they have an infection, a fever, or an autoimmune disease.

“Many of these underlying reasons could themselves be causes of autism,” says Ian Douglas, an epidemiologist at the London School of Hygiene and Tropical Medicine. It’s also possible women with a higher genetic predisposition for autism have other medical conditions that make them more likely to take acetaminophen. 

Two studies attempted to account for these potential biases by looking at siblings whose mothers had used acetaminophen during only one of the pregnancies. The largest is a 2024 study that looked at nearly 2.5 million children born between 1995 and 2019 in Sweden. The researchers initially found a slightly increased risk of autism and ADHD in children of the women who took acetaminophen, but when they conducted a sibling analysis, the association disappeared.  

Rather, scientists have long known that autism is largely genetic. Twin studies suggest that 60% to 90% of autism risk can be attributed to your genes. However, environmental factors appear to play a role too. That “doesn’t necessarily mean toxins in the environment,” Lerner says. In fact, one of the strongest environmental predictors of autism is paternal age. Autism rates seem to be higher when a child’s father is older than 40.

So should someone who is pregnant avoid Tylenol just to be safe?

No. Acetaminophen is the only over-the-counter pain reliever that is deemed safe to take during pregnancy, and women should take it if they need it. The American College of Obstetricians and Gynecologists (ACOG) supports the use of acetaminophen in pregnancy “when taken as needed, in moderation, and after consultation with a doctor.” 

“There’s no downside in not taking it,” Trump said at the press conference. But high fevers during pregnancy can be dangerous. “The conditions people use acetaminophen to treat during pregnancy are far more dangerous than any theoretical risks and can create severe morbidity and mortality for the pregnant person and the fetus,” ACOG president Steven Fleischman said in a statement.

What about this new treatment for autism? Does it work? 

The medication is called leucovorin. It’s also known as folinic acid; like folic acid, it’s a form of folate, a B vitamin found in leafy greens and legumes. The drug has been used for years to counteract the side effects of some cancer medications and as a treatment for anemia. 

Researchers have known for decades that folate plays a key role in the fetal development of the brain and spine. Women who don’t get enough folate during pregnancy have a greater risk of having babies with neural tube defects like spina bifida. Because of this, many foods are fortified with folic acid, and the CDC recommends that women take folic acid supplements during pregnancy. “If you are pregnant and you’re taking maternal prenatal vitamins, there’s a good chance it has folate already,” Lerner says.

“The idea that a significant proportion of autistic people have autism because of folate-related difficulties is not a well established or widely accepted premise,” says McPartland.

However, in the early 2000s, researchers in Germany identified a small group of children who developed neurodevelopmental symptoms because of a folate deficiency. “These kids are born pretty normal at birth,” says Edward Quadros, a biologist at SUNY Downstate Health Sciences University in Brooklyn, New York. But after a year or two, “they start developing a neurologic presentation very similar to autism,” he says. When the researchers gave these children folinic acid, some of their symptoms improved, especially in children younger than six. 

Because the children had low levels of folate in the fluid that surrounds the spine and brain but normal folate levels in the blood, the researchers posited that the problem was the transport of folate from the blood to that fluid. Research by Quadros and other scientists suggested that the deficiency was the result of an autoimmune response. Children develop antibodies against the receptors that help transport folate, and those antibodies block folate from crossing the blood-brain barrier. High doses of folinic acid, however, activate a second transporter that allows folate in, Quadros says. 

There are also plenty of individual anecdotes suggesting that leucovorin works. But the medicine has only been tested as a treatment for autism in four small trials that used different doses and measured different outcomes. The evidence that it can improve symptoms of autism is “weak,” according to the Coalition of Autism Scientists. “A much higher standard of science would be needed to determine if leucovorin is an effective and safe treatment for autism,” the researchers said in a statement.