Three reasons why DeepSeek’s new model matters

On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that helps it handle large amounts of text more efficiently. Like DeepSeek’s previous models, V4 is open source, meaning it is available for anyone to download, use, and modify.

V4 marks DeepSeek’s most significant release since R1, the reasoning model it launched in January 2025. R1, which was trained on limited computing resources, stunned the global AI industry with its strong performance and efficiency, turning DeepSeek from a little-known research team into China’s best-known AI company almost overnight. It also helped set off a wave of open-weight model releases from other Chinese AI firms. 

DeepSeek has kept a relatively low profile since then—but earlier this month, it effectively teased V4’s release when it added “expert” and “flash” modes to the online version of its model, prompting speculation that the updates were tied to a bigger upcoming release.

While the company has become a powerful symbol of China’s AI ambitions, its big return to cutting-edge frontier models comes after months of turbulence—including major personnel departures, delays to previous model launches, and growing scrutiny from both the US and Chinese governments. 

So, will V4 shake the AI field the way R1 did? Almost certainly not, but here are three big reasons why this release matters.

1. It breaks new ground for an open-source model.

As with R1 before it, DeepSeek claims that V4’s performance rivals the best models available at a fraction of the price. This is great news for developers and for companies using the tech, because it means they can access frontier AI capabilities on their own terms, and without worrying about skyrocketing costs.

The new model comes in two versions, both of which are available on DeepSeek’s website and in its app, with API access also open to developers. V4-Pro is a larger model built for coding and complex agent tasks, and V4-Flash is a smaller version designed to be faster and cheaper to run. Both versions offer reasoning modes, in which the model can carefully parse a user’s prompt and show each step as it works through the problem.

For V4-Pro, DeepSeek charges $1.74 per million input tokens and $3.48 per million output tokens, a fraction of the cost of comparable models from OpenAI and Anthropic. V4-Flash is even cheaper, at about $0.14 per million input tokens and about $0.28 per million output tokens, making it one of the cheapest top-tier models available. This would make it a very appealing model to build applications on.
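To get a feel for what those per-token rates mean in practice, here is a quick back-of-the-envelope calculation at the published prices. The request sizes in the example are hypothetical, chosen only to illustrate the scale of the gap.

```python
# Back-of-the-envelope cost comparison at the published V4 API rates
# (USD per million tokens). The example request sizes below are hypothetical.
PRICES = {
    "V4-Pro":   {"input": 1.74, "output": 3.48},
    "V4-Flash": {"input": 0.14, "output": 0.28},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost in USD of a single API call at the listed per-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A large coding request: 200,000 tokens in, 10,000 tokens out.
print(f"{request_cost('V4-Pro', 200_000, 10_000):.4f}")    # roughly $0.38
print(f"{request_cost('V4-Flash', 200_000, 10_000):.4f}")  # roughly $0.03
```

At these rates, even a request that stuffs most of a codebase into the prompt costs well under a dollar—which is why the pricing matters as much as the benchmark numbers.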

In terms of performance, V4 is, perhaps unsurprisingly, a huge jump from R1—and it seems to be a strong alternative to just about all the latest big AI models. On the major benchmarks, according to results shared by the company, DeepSeek V4-Pro competes with leading closed-source models, matching the performance of Anthropic’s Claude-Opus-4.6, OpenAI’s GPT-5.4, and Google’s Gemini-3.1. And compared to other open-source models, such as Alibaba’s Qwen-3.5 or Z.ai’s GLM-5.1, DeepSeek V4 exceeds them all on coding, math, and STEM problems, making it one of the strongest open-source models ever released. 

DeepSeek also says that V4-Pro now ranks among the strongest open-source models on benchmarks for agentic coding tasks and performs well on other tests that measure the ability to work through multistep problems. Its writing ability and world knowledge also lead the field, according to benchmarking results shared by the company. 

In a technical report released alongside the model, DeepSeek shared results from an internal survey of 85 experienced developers: More than 90% included V4-Pro among their top model choices for coding tasks.

DeepSeek says it has specifically optimized V4 for popular agent frameworks such as Claude Code, OpenClaw, and CodeBuddy.

2. It delivers on a new approach to memory efficiency.

One of the key innovations of V4 is its long context window—the amount of text the model can process at once. Both versions can handle 1 million tokens, which is large enough to fit all three volumes of The Lord of the Rings and The Hobbit combined. The company says this context window size is now the default across all DeepSeek services, and that it matches what is offered by cutting-edge versions of models like Gemini and Claude. 

But it’s important to know not just that DeepSeek has made this leap, but how it did so. V4 departs significantly from the architecture of the company’s previous models—especially in the attention mechanism, the feature of AI models that helps them understand each part of a prompt in relation to the rest. As the prompt text gets longer, these comparisons become much more costly, making attention one of the main bottlenecks for long-context models.

DeepSeek’s innovation was to make the model more selective about what it pays attention to. Instead of treating all earlier text as equally important, V4 compresses older information and focuses on the parts most likely to matter in the present moment, while still keeping nearby text in full so it does not miss important details. 
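The general idea can be sketched in a few lines of code. This is an illustration of selective attention as the article describes it—a full-resolution recent window plus compressed summaries of older text—not DeepSeek’s actual architecture; the window and block sizes, and the relevance scores, are arbitrary stand-ins.

```python
import numpy as np

# Toy sketch of "selective" long-context attention: keep nearby text in
# full, compress older text into block summaries, and attend only to the
# most relevant blocks. Not DeepSeek's actual design.

def attended_positions(n_tokens, local_window=8, block=4, top_k=2, scores=None):
    """For a query at the last token, return (selected_older_blocks, recent):
    every token in the recent window is kept at full resolution, while older
    text is covered only by the top-k highest-scoring compressed blocks."""
    older_end = max(0, n_tokens - local_window)
    recent = list(range(older_end, n_tokens))
    older_blocks = [(s, min(s + block, older_end)) for s in range(0, older_end, block)]
    if scores is None:
        # Stand-in relevance scores; a real model would compute these from
        # the query and each block's compressed representation.
        scores = np.linspace(1.0, 0.0, num=len(older_blocks))
    keep = sorted(np.argsort(scores)[::-1][:top_k])
    return [older_blocks[int(i)] for i in keep], recent

blocks, recent = attended_positions(32)
print(blocks)  # the two highest-scoring older blocks, as (start, end) ranges
print(recent)  # positions of the full-resolution recent window
```

The savings in a real implementation come from the query comparing itself against far fewer keys; the point here is only that most older tokens never enter the attention computation at all.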

DeepSeek says this sharply reduces the cost of using long context. In a 1-million-token context, V4-Pro uses only 27% of the computing power required by its previous model, V3.2, while cutting memory use to 10%. The reduction in V4-Flash is even larger, using just 10% of the computing power and 7% of the memory. In practice, this could make it cheaper to build tools that need to work across huge amounts of material, such as an AI coding assistant that can read an entire codebase or a research agent that can analyze a long archive of documents without constantly forgetting what came before.

DeepSeek’s interest in long context windows didn’t start with V4. Over the past year and a half, the company has quietly published a series of papers on how AI models “remember” information, experimenting with compression and mathematical techniques to extend what AI models could realistically handle.

3. It marks the first steps on the hard road away from Nvidia.

V4 is DeepSeek’s first model optimized for domestic Chinese chips, such as Huawei’s Ascend—a move that has turned the launch into something of a test of whether China’s homegrown AI industry can begin to loosen its dependence on US chip giant Nvidia. 

This was largely expected: The Information reported earlier this month that DeepSeek did not give American chipmakers like Nvidia and AMD early access to V4, even though chipmakers commonly get prerelease access so they can optimize support for a new model ahead of its launch. Instead, the company reportedly gave early access only to Chinese chipmakers. 

On Friday, Huawei said its Ascend supernode products, based on the Ascend 950 series, would support DeepSeek V4. This means that companies and individuals who want to run their own modified version of DeepSeek V4 will be able to do so easily on Huawei chips.

Reuters previously reported that Chinese government officials recommended that DeepSeek integrate Huawei chips in its training process. And this pressure fits a broader pattern in China’s industrial policy: Strategic sectors are often pushed, and sometimes effectively required, to align with national self-reliance goals. But there’s a particular urgency when it comes to AI. Since 2022, US export controls have cut Chinese firms off from Nvidia’s most powerful chips, and they later also restricted access to downgraded China-market versions. Beijing’s response has been to accelerate the push for a domestic AI stack, from chips to software frameworks to data centers.

Chinese authorities have reportedly been pushing data centers and public computing projects to use more domestic chips, including through reported bans on foreign-made chips, sourcing quotas, and requirements to pair Nvidia chips with Chinese alternatives from companies such as Huawei and Cambricon. 

Still, replacing Nvidia is not as simple as swapping one chip for another. Nvidia’s advantage lies not only in its chips, but in the software ecosystem developers have spent years building around them. Moving to Huawei’s Ascend chips means adapting model code, rebuilding tools, and proving that systems built around those chips are stable enough for serious use.

To be clear, DeepSeek does not appear to have fully moved beyond Nvidia. The company’s technical report reveals that it is using Chinese chips to run the model for inference, or when someone asks the model to complete a task. But Liu Zhiyuan, a computer science professor at Tsinghua University, told MIT Technology Review that DeepSeek appears to have adapted only part of V4’s training process for Chinese chips. The report does not say whether some key long-context features were adapted to domestic chips, so Liu says V4 may still have been trained mainly on Nvidia chips. Multiple sources, who spoke on condition of anonymity due to the political sensitivity of these issues, told MIT Technology Review that Chinese chips still don’t perform as well as Nvidia chips but are better suited for inference than training.

DeepSeek is also tying the future costs of V4 to this hardware shift. The company says V4-Pro prices could fall significantly after Huawei’s Ascend 950 supernodes begin shipping at scale in the second half of this year. 

If that works, V4 could be an early sign that China is successfully building a parallel AI infrastructure.

Will fusion power get cheap? Don’t count on it.

Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap.

Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.

It’s tricky to make any predictions about the cost of a technology that doesn’t exist yet. But when there are billions of dollars of public and private funding on the line, it’s worth considering what assumptions we’re making about our future energy mix and its cost.

One crucial measure is the experience rate—the percentage by which an energy technology’s cost declines every time its cumulative capacity doubles. A higher figure means a quicker price drop and better economic gains with scaling.

Historically, the experience rate is 12% for onshore wind power, 20% for lithium-ion batteries, and 23% for solar modules. Other energy technologies haven’t gotten cheap quite as quickly—fission is at just 2%.
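The experience rate translates into a simple learning-curve model: cost falls by a fixed percentage with each doubling of cumulative capacity. A short sketch shows how far apart the historical rates pull after large-scale deployment. The starting cost of 100 and the 1,000-fold capacity growth are illustrative assumptions, not figures from the study.

```python
import math

# Learning-curve model of the experience rate: cost declines by a fixed
# fraction with every doubling of cumulative installed capacity.
# Starting cost (100) and growth factor (1,000x) are illustrative.

def cost_after_scaling(initial_cost, experience_rate, capacity_growth):
    """Cost after cumulative capacity grows by `capacity_growth`x, given a
    per-doubling decline of `experience_rate`."""
    doublings = math.log2(capacity_growth)
    return initial_cost * (1 - experience_rate) ** doublings

# 1,000-fold growth is roughly 10 doublings.
for name, rate in [("solar (23%)", 0.23), ("batteries (20%)", 0.20),
                   ("fusion, high estimate (8%)", 0.08),
                   ("fusion, low estimate (2%)", 0.02)]:
    print(f"{name}: {cost_after_scaling(100, rate, 1000):.1f}")
```

At solar-like rates, that much deployment cuts cost by more than 90%; at the low end of the fusion estimate, the same scaling leaves cost barely below where it started.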

In the new study, published in Nature Energy, researchers aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate. The team looked at three key characteristics that can correlate with experience rate: unit size, design complexity, and the need for customization. The larger and more complex a technology is, and/or the more it needs to be customized for different use cases, the lower the experience rate.

The researchers interviewed fusion experts, including public-sector researchers and those working at companies in the private sector. They had the experts evaluate fusion power plants on those characteristics and used that info to predict the experience rate. (One note here: The study focused only on magnetic confinement and laser inertial confinement, two of the leading fusion approaches, which together receive the vast majority of funding today. Other approaches could come with different cost benefits.)

Fusion plants will likely be relatively large, similar to other types of facilities (like coal and fission power plants) that rely on generating heat. They will probably need less customization than fission plants—largely because regulations and safety considerations should be simpler—but more than technologies like solar panels. And as for complexity, “there was almost unanimous agreement that fusion is incredibly complex,” says Lingxi Tang, a PhD candidate in the energy and technology policy group at ETH Zurich in Switzerland and one of the authors of the study. (Some experts said it was literally off the scale the researchers gave them.)

The final figure the researchers suggest for fusion’s experience rate is between 2% and 8%, meaning it should see a faster price reduction than nuclear fission but not as dramatic an improvement as many common energy technologies being deployed today.

That means that it would take a lot of deployment—and likely quite a long time—for the price of building a fusion reactor to drop significantly, so electricity produced by fusion plants could be expensive for a while. And it’s a much slower rate than the 8% to 20% that many modeling studies assume today.

“On the whole, I think questions should be raised about current investment levels in fusion,” Tang says. (The US allocated over $1 billion to fusion in the 2024 fiscal year, and private-sector funding totaled $2.2 billion between July 2024 and July 2025.) “If you’re talking about decarbonization of the energy system, is this really the best use of public money?”

But some experts say that looking to the past to understand the future of energy prices might be misleading. “It’s a good exercise, but we have to be humble about how much we don’t know,” says Egemen Kolemen, a professor at the Princeton Plasma Physics Laboratory.

In 2000, many analysts predicted that solar power would remain expensive—but then production exploded and prices came crashing down, largely because China went all in, he says. “People weren’t exactly wrong then,” he adds. “They were just extrapolating what they saw into the future.”

How fast prices drop depends on regulations, geopolitical dynamics, and labor cost, he says: “We haven’t built the thing yet, so we don’t know.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

3 things Michelle Kim is into right now

Isegye Idol

If you thought K-pop was weird, virtual idols—humans who perform as anime-style digital characters via motion capture—will blow your mind. My favorite is a girl group called Isegye Idol, created by Woowakgood, a Korean VTuber (a streamer who likewise performs as a digital persona). Isegye Idol’s six members are anonymous, which seems to let them deploy a rare breed of honesty and humor. They play games (League of Legends, Go, Minecraft), chitchat, and perform kitschy music that’s somewhere between anime soundtrack and video-game score. It’s very DIY—and very intimate. And the group’s wild popularity speaks to the mood of Gen Z South Koreans—famously lonely and culturally adrift, struggling to find work, giving up on dating, trying to find friendships online. Isegye Idol shows what a magical online universe people can build when reality stops working for them.

Mr. Nobody Against Putin

Pavel Talankin didn’t have the easiest life as a schoolteacher in the copper-smelting town of Karabash, Russia; UNESCO once called it the most toxic place on Earth. But video he shot, partially in secret, makes it clear he loved it—the smokestacks, the cold, the ice mustache he’d get walking around outside, and, most of all, his bright-eyed students. That makes it all the more painful when a distant, grinding war and state propaganda change the town. An antiwar progressive with a democracy flag in his classroom, Talankin had to deal with a new patriotic curriculum, mandatory parades, visits from mercenaries—and the loss of the creative space he’d built with his students. Talankin’s footage tells his story in this Oscar-winning documentary from director David Borenstein, and what struck me most is how strange it is being an adult around kids. We shape them in profound ways we might not even recognize.

Repertoire by James Acaster

I am the kind of person who will pay $150 to watch a comedian in a smelly theater in San Francisco that charges $20 for a can of water—because I am crazy enough to hope that standup will not die. In February, I saw the British comedian James Acaster perform live … and it was a mediocre show. But Repertoire, his 2018 miniseries on Netflix, is gold. Shot shortly after Acaster went through a breakup, the four-part show features him portraying, among other characters, a cop who goes undercover as a standup comedian, forgets who he is, and gets divorced. And then things get weird. “What if every relationship you’ve ever been in,” Acaster asks, “is somebody slowly figuring out they didn’t like you as much as they hoped they would?” If the best comedy comes from paying attention to the hellhole that you’re in, I wish Acaster many more pitfalls.

One town’s scheme to get rid of its geese

“Pull over!” I order my brother one sunny February afternoon. Our target is in sight: a gaggle of Canada geese, pecking at grass near the dog park. As I approach, tiptoeing over their grayish-white poop, I notice that one bird wears a white cuff around its slender black neck. It’s a GPS tracker—part of a new tech-centered campaign to drive the geese out of my hometown of Foster City, California. 

A map of the United States with a dot on the California coastline
THE PLACE: Foster City, CA, USA

About 300 geese live in this sleepy Bay Area suburb, equal to nearly 1% of our human population—and some say this town isn’t big enough for the both of us. Goose poop notoriously blanketed our middle school’s lawn, and the birds have hassled residents for generations. My own grandmother remembers when geese took over her garage for five whole minutes before waddling out. She says, “I wanted to kill them, but I thought I’d get in trouble.”

Indeed, that idea doesn’t fly here. City officials backed out of a previous plan to kill 100 geese following uproar from local environmentalists. Still, the poop creates a public health hazard; the birds need to go. 

So the city paid nearly $400,000—roughly $1,300 per goose—to Wildlife Innovations, a company that resolves conflicts between humans and wildlife, to haze the geese with gadgets. The company’s approach is “basically, making the geese less comfortable,” Dan Biteman, head of the goose management plan and senior wildlife biologist at Wildlife Innovations, tells me.

The need for such conflict resolution is on the rise as land development collides with changes in animal behavior. Though overpopulation of Canada geese is a national nuisance in the US, such tensions also surface with other species in this country and elsewhere, including grizzlies on the Montana prairies, coyotes on San Francisco streets, and savanna elephants in Tanzanian parks. 

So the people whose job it is to deal with recalcitrant critters are bringing on the gadgets.

Back in Foster City, I spot a black camera mounted to a tree trunk at Gull Park by the lagoon. Such cameras are installed in seven parks around town, programmed to snap photos every 15 minutes and transmit them back to Wildlife Innovations HQ. If they detect geese, a biologist immediately drives over to disperse the birds. One team member uses devices like lasers or drones; another brings along a goose-hating border collie named Rocky. 

An orange foam pontoon boat with yellow eyes and sharp-looking jagged teeth
Belligerent birds must grapple with the Goosinator.
ANNIKA HOM

As a special measure, staff deploy the “Goosinator,” a small, remote-controlled neon-orange pontoon boat with a fearsome dog-like mouth painted on its bow, meant to evoke geese’s fear of coyotes and bright colors. It comes with attachable wheels and can zoom around on land or water to chase birds away. Biteman tells me the company is thinking about mounting speakers on trees and flying drones that will screech the calls of goose predators like red-tailed hawks or golden eagles. 

The company received federal permits required by the Migratory Bird Treaty Act to stick GPS trackers on 10 geese, too. This way, staff can surveil the geese and research their behavior and movements. 

At local goose hangouts, signs that look like “Wanted” posters alert the public to the new plan. As I watch some culprits graze (and defecate) on a church lawn, I think to myself: Enjoy it while it lasts. 

Annika Hom is an award-winning independent journalist. She’s written for National Geographic, Wired, and more.

There is no nature anymore

When people talk about “nature,” they’re generally talking about things that aren’t made by human beings. Rocks. Reefs. Red wolves. But while there is plenty of God’s creation to go around, it is hard to think of anything on Earth that human hands haven’t affected.

Mat Honan

In the Brazilian rainforest, scientists have found microplastics in the bellies of animals ranging from red howler monkeys to manatees. In remotest Yakutia, where much of the earth remains untrodden by human feet, the carbon in the sky above melts the permafrost below. In the Arctic Ocean, artificial light from ship traffic—on the rise as the polar ice cap melts away—now disrupts the nightly journey of zooplankton to the ocean surface, one of the largest animal migrations on the planet. The remote mountain lakes of the Alps are contaminated with all kinds of synthetic chemicals. Polar bears are full of flame retardants. Cesium-137, fallout from nuclear bomb explosions, lightly rimes the entire planet. 

These examples are mostly pollution—nuclear, carbon, chemical, light—but I raise them not to highlight the ways human industry and technology degrade the environment but to note how the things humans build change it. Nobody really knows what the exact effects of all that will be, but my point is that no part of the globe is free of human fingerprints. We have literally changed the world.  

We’ve changed ourselves as well. Humans are especially adept at bending human nature. Everything about us is up for grabs—appearance, health, our very thoughts. Pharmaceuticals, surgeries, vaccines, and hormones give us longer lives, take away our pain, ease our anxiety and depression, make us faster, stronger, more resilient. We’re getting glimpses of technologies that will let us change who our children will become before they’re even born. Electrodes implanted in people’s brains let them control computers and translate thoughts into speech. Prosthetics and exoskeletons straight out of comic books restore and enhance physical abilities, while gene-editing technologies like CRISPR are rewriting our very DNA. And meanwhile, people have taken the sum total of all the information we have ever written down and poured it into vast calculating machines in an effort—at least by some—to build an intelligence greater than our own. 

So what even is nature, or natural, in this context? Is it “environmentalist,” in the conventional sense, to try to preserve what one could argue no longer exists? Should we employ technology to try to make the world more “natural”?  

Those questions led us to approach this Nature issue with humility. We try to grapple with them all the time—MIT Technology Review is, after all, a review of how people have altered and built upon nature.

And it’s a place to think about how we might repair it. Take solar geoengineering, for example—a subject we have covered with increasing frequency over the past few years. The basic idea of geoengineering is to find a technological fix for a problem technology caused: Burning fossil fuels to power the Industrial Revolution turned Earth’s atmosphere into a heat trap, fundamentally breaking the climate. Some geoengineers think that releasing particulate matter into the stratosphere would reflect sunlight back into space, thus reducing global temperatures. After years of theoretical discussions, some companies have begun to actively experiment with such technologies. This might seem like a great way to restore the world to a more natural state. It’s also fraught with controversy and peril. It could, for example, benefit some nations while harming others. It may give us license to continue burning fossil fuels and releasing greenhouse gases. The list goes on. 

Nature isn’t easy. 

In our May/June issue, we have attempted to take a hard look at nature in our unnatural world. We have stories about birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice and within ourselves—and in the far future, on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer. I don’t know if any of that will answer the questions I’ve been asking here—but we can’t help but try. It’s in our nature. 

Los Angeles is finally going underground

Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.

The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town.

These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day. 

A Metro train pulls into La Cienega station
Art by Susan Silton at the Fairfax station
Art by Eamon Ore-Giron at the La Brea station

Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above.

Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.

Chinese tech workers are starting to train their AI doubles—and pushing back

Tech workers in China are being instructed by their bosses to train AI agents to replace them—and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. 

Earlier this month a GitHub project called Colleague Skill, which claimed workers could use it to “distill” their colleagues’ skills and personality traits and replicate them with an AI agent, went viral on Chinese social media. Though the project was created as a spoof, it struck a nerve among tech workers, a number of whom told MIT Technology Review that their bosses are encouraging them to document their workflows in order to automate specific tasks and processes using AI agent tools like OpenClaw or Claude Code. 

To set up Colleague Skill, a user names the coworker whose tasks they want to replicate and adds basic profile details. The tool then automatically imports chat history and files from Lark and DingTalk, both popular workplace apps in China, and generates reusable manuals describing that coworker’s duties—and even their unique quirks—for an AI agent to replicate. 

Colleague Skill was created by Tianyi Zhou, who works as an engineer at the Shanghai Artificial Intelligence Laboratory. Earlier this week he told Chinese outlet Southern Metropolis Daily that the project was started as a stunt, prompted by AI-related layoffs and by the growing tendency of companies to ask employees to automate themselves. He didn’t respond to requests for further comment.

Internet users have found humor in the idea behind the tool, joking about automating their coworkers before themselves. However, Colleague Skill’s virality has sparked a lot of debate about workers’ dignity and individuality in the age of AI.

After seeing Colleague Skill on social media, Amber Li, 27, a tech worker in Shanghai, used it to recreate a former coworker as a personal experiment. Within minutes, the tool created a file detailing how that person did their job. “It is surprisingly good,” Li says. “It even captures the person’s little quirks, like how they react and their punctuation habits.” With this skill, Li can use an AI agent as a new “coworker” that helps debug her code and replies instantly. It felt uncanny and uncomfortable, Li says. 

Even so, replacing coworkers with agents could become the norm. Since OpenClaw became a national craze, bosses in China have been pushing tech workers to experiment with agents. 

Although AI agents can take control of your computer, read and summarize news, reply to emails, and book restaurant reservations for you, tech workers on the ground say their utility has so far proven to be limited in business contexts. Asking employees to make manuals describing the minutiae of their day-to-day jobs the way Colleague Skill does is one way to help bridge that gap. 

Hancheng Cao, an assistant professor at Emory University who studies AI and work, believes that companies have good reasons to push employees to create work blueprints like these, beyond simply following a trend. “Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment,” he says.

To employees, though, making agents or even blueprints for them can feel strange and alienating. One software engineer, who spoke with MIT Technology Review anonymously because of concerns about their job security, trained an AI (not Colleague Skill) on their workflow and found that the process felt reductive—as if their work had been flattened into modules in a way that made them easier to replace. On social media, workers have turned to bleak humor to express similar feelings. In one comment on Rednote, a user wrote that “a cold farewell can be turned into warm tokens,” quipping that if they use Colleague Skill to distill their coworkers into tasks first, they themselves might survive a little longer.

The push for creating agents has also spurred clever countermeasures. Irritated by the idea of reducing a person to a skill, Koki Xu, 26, an AI product manager in Beijing, published an “anti-distillation” skill on GitHub on April 4. The tool, which took Xu about an hour to build, is designed to sabotage the process of creating workflows for agents. Users can choose between light, medium, and heavy sabotage modes depending on how closely their boss is observing the process, and the agent rewrites the material into generic, non-actionable language that would produce a less useful AI stand-in. A video Xu posted about the project went viral, drawing more than 5 million likes across platforms.

Xu told MIT Technology Review that she has been following the Colleague Skill trend from the start and that it has made her think about alienation, disempowerment, and broader implications for labor. “I originally wanted to write an op-ed, but decided it would be more useful to make something that pushes back against it,” she says.

Xu, who has undergraduate and master’s degrees in law, said the trend also raises legal questions. While a company may be able to argue that work chat histories and materials created on a work laptop are corporate property, a skill like this can also capture elements of personality, tone, and judgment, making ownership much less clear. She said she hopes Colleague Skill prompts more discussion about how to protect workers’ dignity and identity in the age of AI. “I believe it’s important to keep up with these trends so we (employees) can participate in shaping how they are used,” she says. Xu herself is an avid AI adopter, with seven OpenClaw agents set up across her personal and work devices.

Li, the tech worker in Shanghai, says her company has not yet found a way to replace actual workers with AI tools, largely because they remain unreliable and require constant supervision. “I don’t feel like my job is immediately at risk,” she says. “But I do feel that my value is being cheapened, and I don’t know what to do about it.”

Colossal Biosciences said it cloned red wolves. Is it for real?

If you want to capture something wolflike, it’s best to embark before dawn.

So on a morning this January, with the eastern horizon still pink-hued, I drove with two young scientists into a blanket of fog. Forty miles to the west, the industrial sprawl of Houston spawned a golden glow. Tanner Broussard’s old Toyota Tacoma bumped over the levee-top roads as killdeer, flushed from their rest, flew across the beams of his headlights. 

Broussard peered into the darkness, looking for traps. “I have one over here,” he said, slowing slightly. A master’s student at McNeese State University, he was quiet and contemplative, his bearded face half-hidden under a black ball cap. “Nothing on it,” he said flatly. The truck rolled on.

Wolves and their relations—dogs, jackals, coyotes, and so on—are classed in the family Canidae, and the canid that dominated this landscape in eastern Texas was once the red wolf. But as soon as white settlers arrived on the continent, Canis rufus found itself under siege. The war on wolves “lasted 200 years,” federal researchers once put it, in a surprisingly evocative report. “The wolf lost.” By 1980, the red wolf was declared extinct in the wild, its population reduced to a small captive breeding population.

Still, for decades afterward, people noted that strange wolflike creatures persisted along the Gulf Coast. Finally, in 2018, scientists confirmed that some local coyotes were more than coyotes: They were taller, long-legged, their coats shaded with hints of cinnamon. These animals contained relict red wolf genes. They became known as the ghost wolves.

Broussard grew up in southwest Louisiana, watching coyotes trot across his parents’ ranch. The thrilling fact that these might have been not just coyotes but something more? That reset a rambling academic career. In 2023, Broussard had recently returned to college after a seven-year pause, and his budding obsession with wolves narrowed his focus. Before he finished his bachelor’s degree, he began to supply field data to a prominent conservation nonprofit.

a wolf pup chews on a terrycloth toy
The American red wolf, Canis rufus, is the most endangered wolf species in the world. This pup is one of four animals said to be clones of this native North American species.
COURTESY OF COLOSSAL BIOSCIENCES

Then, last year, just before he began his master’s studies, he woke to disconcerting news. A startup called Colossal Biosciences claimed to have resuscitated the dire wolf, a large canid that went extinct more than 10,000 years ago. Pundits debated the utility of the project and whether the clones—technically, gray wolves with some genetic tweaks—could really be called dire wolves. But what mattered to Broussard was Colossal’s simultaneous announcement that it had cloned four red wolves.  

“That surprised pretty much everybody in the wolf community,” Broussard said as we toured the wildlife refuge where he’d set his traps. The Association of Zoos and Aquariums runs a program that sustains red wolves through captive breeding; its leadership had no idea a cloning project was underway. Nor did ecologist Joey Hinton, one of Broussard’s advisors, who had trapped the canids Colossal used to source the DNA for its clones. Some of Hinton’s former partners were collaborating with the company, but he didn’t know that clones were on the table.

There was already disagreement among scientists about the entire idea of de-extinction. Now Colossal had made these mystery clones, whose location was kept secret. Even the purpose of the clones was murky to some scientists; just how they might restore red wolf populations was unclear. 

Red wolves had always been a contentious species, hard for scientists to pin down. The red wolf research community was already marked by the inevitable interpersonal tensions of a small and passionate group. Now Colossal’s clones became one more lightning rod. Perhaps the most curious question, though, was whether the company had cloned red wolves at all. 


You can think of the red wolf as the wolf of the East—an apex predator that once roamed the forests and grasslands and marshes everywhere from Texas to Illinois to New York. Smaller than a gray wolf (though a good bit larger than a coyote), this was a sleek beast, with, according to one old field guide, a “cunning fox-like appearance”: long body, long legs; clearly built to run across long distances. Its coat was smooth and flat and came in many colors: a reddish tone that comes out in the right light, yes, but also, despite the name, white and gray and, in certain regions and populations, an ominous all black.

We know these details thanks to a few notes from early naturalists. As Andrew Moore recounts in his new book, The Beasts of the East, by the time a mammalogist decided to class these eastern wolves as a standalone species in the 1930s, the red wolf had been extirpated from the East Coast and was rapidly dwindling across its range. Working with remnant skulls and other specimens, the mammalogist chose the name red wolf—which was later enshrined with the Latinate Canis rufus—because that’s what these wolves were called in the last place they survived.

The looming extinction of the red wolf turned out to be a good thing for coyotes. Canis latrans is a distant relative of wolves that split away from a common ancestor thousands of years ago and might be considered, as one canid biologist put it to me, the “wolf of the Anthropocene.” Their smaller size means they need less food and can survive in smaller and more fragmented territory, the kind that modern humans tend to build. 

Red wolves had kept coyotes out of eastern America, outcompeting them for prey. Now, as the wolves declined, the coyotes began to slip in. The last red wolves, which lived in Louisiana and Texas, decided a strange and smaller mate was preferable to no mate at all. Soon the territory became a genetic jumble, home to both wolves and coyotes and hybrids that, after several generations of intermixing, came in every shade between. Scientists call such a population a “hybrid swarm,” and it poses a genetic threat to the declining species: As more coyotes poured east, and as all the canids kept interbreeding, there would be nothing that was “purely” wolf. 

Ron Wooten surveys a location on the edge of Galveston Island State Park in Texas. In 2016, Wooten’s photographs of oversized local coyotes got the attention of Joey Hinton, then a postdoctoral researcher at the University of Georgia.
TRISTAN SPINSKI

For years, no one seemed to notice. Perhaps trappers in the region mistook the new hybrids for wolves—or were happy to take the higher bounty that a wolf pelt earned. Finally, though, by the 1960s, as the concept of endangered species first emerged, biologists began to worry for the disappearing wolf. 

The best solution they could come up with was a program of mass extermination. Over several years, trappers rounded up hundreds of canids in Texas and Louisiana. Those deemed true red wolves (on the basis of their howls and skull shape) were whisked away to breed in captivity. Most of the rest were euthanized. In 1980, the red wolf was declared extinct in the wild. To put it plainly: The red wolf was wiped out intentionally, in a roundabout effort to keep it alive.

Just 14 individuals survived this gauntlet; today’s wolves descend from 12 of those. They became the ark, the source material for the few hundred red wolves that live today. There are about 280 in the “Species Survival Plan” population, living in captivity, and another 30 or so that roam a federal refuge in coastal North Carolina, and that the government deems “nonessential” and “experimental.” According to the US Fish and Wildlife Service, to be classified as a representative of the protected entity known as Canis rufus, an animal must trace at least 87.5% of its lineage to the 12 founders. 

The scientist who led this trapping-and-breeding program understood that the federal government would be narrowing the red wolf’s gene pool precipitously—so much so that the result could be an entirely new species. None of those notably black wolves persisted in the new population, for example. But what other choice existed? A new kind of wolf, free of the taint of the invading coyote, seemed better than no wolf at all.


After I learned about Colossal’s clones, I decided to travel to eastern Texas. The clones were hidden away on an unnamed refuge, but on this coastline, I might be able to at least see the animals that provided their genetic material. I arrived in the small town of Winnie on a balmy afternoon in January and met up with Broussard and another graduate student, Patrick Cunningham, at a Tex-Mex joint to discuss the challenges of studying red wolves.

“We don’t have a good reference genome,” Cunningham said. Scientists can collect DNA from the descendants of the 12 founders, but not from the countless wolves that were killed, and it’s difficult to extract usable DNA from old samples. So the picture of what the species once looked like is limited.

Studies of the genes we do have, meanwhile, have proved controversial. When a Princeton geneticist named Bridgett vonHoldt dug into the genome of the Species Survival Plan population, she found little about their DNA that could set them apart from other wolflike American canids. In 2016, in a paper in Science Advances, vonHoldt and her coauthors wondered if there ever really was a separate southern wolf species. Perhaps the 12 founders were just coyotes carrying a modest share of wolf ancestry.

Her paper called for complex new interpretations of the Endangered Species Act. We should, she wrote, focus less on species and more on the function a group of animals performs. The red wolves deserved protection, then, as creatures that filled the same role as truly endangered wolves and carried some of their genetics. Nonetheless, for Canis rufus, the timing of the paper was bad news.

The red wolves roaming that federal reserve in North Carolina are supposed to be a first step toward the species’ return to the wild. But some locals never liked the idea of living alongside wolves. By 2016, state officials had turned against the recovery program and were requesting its termination. The wild population, which had included as many as 120 a few years earlier, was falling. But the US Fish and Wildlife Service had paused further releases of wolves. Now a group of scientists, led by vonHoldt, was saying that the red wolf showed “a lack of unique ancestry.” Why spend money, some people wondered, on a species that does not exist? 

Part of the problem was that the concept of a “species” is less sturdy than your high school biology teacher might have led you to believe. The most familiar definition is that a species consists of animals that can produce fertile offspring. But that’s a rule various species of canids violate all the time; it’s long been clear that North America’s soup of Canis genes is something less like a family tree and more like a river—one that’s broken by islands and sandbars into many braided channels that split and merge and re-split.

VonHoldt suggested that the modern red wolf is a channel in that river, part wolf and part coyote, that appeared surprisingly recently. But a year after her study came out, other researchers claimed that her data, if interpreted differently, could suggest that the red wolf braid had emerged tens of thousands of years ago, meaning this was a species that had long been on its own evolutionary journey. 

These nuances were confusing for the policymakers who oversaw actual, living animals. “Congress was just like, ‘What is going on?’” Cunningham said. “‘Why is there not just a simple explanation for what this thing is?’”

Given the policy implications, the National Academies of Sciences, Engineering, and Medicine tasked a panel of scientists with finding that simple answer. Their report, published in 2019, declared that the red wolf is, by virtue of its appearance and seemingly long-standing isolated population, a species. As their study got underway, though, a new question was arising: What to make of the strange canids on the Gulf Coast, those today called the ghost wolves?


The path to that name began in 2008, when a photographer from Galveston Island, Texas, grew obsessed with the oversized local coyotes. He began to take photos of the packs, which he distributed to scientists, seeking answers: What were they? By 2016, the photos had reached Joey Hinton, then a postdoctoral researcher at the University of Georgia.

Hinton had spent more than a decade trapping wolves and coyotes in North Carolina, and his work had long focused on live animals, especially on visual ways to distinguish red wolves from coyotes. That made him a good choice to help the photographer, Ron Wooten, figure out the status of the canids. Wooten also had tissue samples in his freezer that he’d collected from road-killed coyotes. These could be used by a geneticist to give a fuller picture of the canids’ ancestry, so vonHoldt was brought in too. The result was a 2018 paper, with Hinton as a coauthor, that identified the Galveston Island canids as at least part red wolf.

These canids were not, to be clear, actual red wolves; no canid on the Gulf Coast is descended from the government’s 12 canonical founders, so under current policy, none can be officially classified as a wolf. Subsequent studies have found that, on average, the ancestry of the region’s canids is less than half red wolf, and often far less. In scientific terms, the red wolf had introgressed into the Gulf Coast population—its genes had leaked across the species boundary and lodged themselves in a different population.

Hinton, vonHoldt, and their coauthors also noted the presence of what they called “ghost alleles”—DNA sequences unknown in any other named species. The Occam’s razor assumption was that, in these already wolfy coyotes, these sequences likely represented Canis rufus genetics that had not been captured in the sweep of the marsh that yielded the Species Survival Plan population. Since so much of the red wolf gene pool had been lost, these genes seemed to be a potential resource for the species—a way to expand its diversity. When the New York Times covered this discovery a few years later, the headline popularized the “ghost wolf” moniker that has proved so indelible. 

As it happened, a separate team, focused on canids in and around federally protected marsh in Louisiana, published a similar paper in 2018, at nearly the same time. The twin discoveries raised new questions—What should we make of these creatures, the latest branch in the canid river? What do they mean for the wolves in North Carolina?—and helped researchers secure new funding.

In 2020, vonHoldt and Kristin Brzeski, a former postdoc under vonHoldt and now a professor at Michigan Technological University, launched what they called the Gulf Coast Canine Project. Brzeski, who led the field work, hired Hinton to do much of the canid trapping and sample collection. In 2022, vonHoldt, Hinton, and Brzeski were all coauthors of another paper that identified even more red-wolf-descended canids in Louisiana and noted a positive correlation between red wolf ancestry and body mass—the more red wolf genes, the bigger the animal. The paper also suggested that given this newly discovered reservoir of red wolf DNA, “genomic technologies” could prove useful in the long-term survival of the species.

Bridgett vonHoldt (left) and Kristin Brzeski (center), with an animal control worker, visit a location where canids have been spotted.
TRISTAN SPINSKI

VonHoldt and Brzeski eventually conceived of an ambitious project. They hoped that by carefully matching the most wolf-descended canids and breeding them together, over three generations they’d increase the proportion of red wolf genes—de-introgression. “I’m expecting, based on these pairings of animals, that I can stitch together the puzzle pieces,” vonHoldt told me recently. “We are very likely to get puppies each generation that are higher and higher red wolf content”—enough wolf content, she hopes, to eventually win her permission to breed the resulting animals with the Species Survival Plan population of red wolves. They’d essentially be adding a new founder to the limited lineage.

Hinton told me he felt he’d been kept in the dark about the de-introgression idea. He was also worried, he says, to learn that Colossal Biosciences hovered in the background. (In a draft proposal for the project, vonHoldt indicated that Colossal would be in charge of “live capture.”) Hinton says he was not comfortable collecting materials for a for-profit company that has to keep its shareholders happy. 

Hinton says he reached out to state and federal officials and found they knew little about the project. (The US Fish and Wildlife Service declined to make anyone available for an interview for this story, and the Louisiana Department of Wildlife and Fisheries did not reply to requests for comment.) He knew the group’s next phone call would be difficult, and indeed it was. He wound up speaking one-on-one with vonHoldt for at least half an hour.

“We didn’t reach an agreement,” he says. After the call, he sent her a text: He was exiting the project. He believes that had Colossal not been involved, they’d all still be working as a team. Both vonHoldt and Brzeski declined to comment on what felt to them like a matter of interpersonal relationships rather than a scientific dispute. “There were challenges over time, and the tone and manner of the interactions became increasingly difficult to navigate productively,” Brzeski said in an email. 


Colossal was cofounded in 2021 by George Church, an eminent Harvard geneticist who, thanks to investors, could finally embark on a long-discussed dream. He wanted to make de-extinction a reality—using CRISPR gene-editing technology to, say, turn a modern elephant into something like the extinct woolly mammoth. The concept has drawn skepticism from the beginning—at best it would only be possible to make something like a woolly mammoth. Was there any point to that? Some scientists note that genes alone do not teach an animal how to exist in the world; indeed, since social structures affect how genes are expressed, an animal without parents may not effectively fill its ecological niche.

Harder to fault, though, was Colossal’s interest in partnering with scientists who, like vonHoldt and Brzeski, focus on extant species that are endangered. This gave more heft to Colossal’s gee-whiz de-extinction projects: They would, along the way, supply technology that could save our natural world.

For red wolves, such technologies could offer a quick way to expand the limited gene pool. Through genetic engineering, Colossal could take clones of the Gulf Coast canids and tune up the wolf, tune down the coyote. It would be a high-tech shortcut past vonHoldt and Brzeski’s careful breeding program. “You can do the same thing much more precisely, much more quickly, much more efficiently, in vitro,” says Matt James, Colossal’s chief animal officer and the executive director of the Colossal Foundation, the company’s nonprofit arm. VonHoldt notes that the old-fashioned approach, with breeding, means she has to take a few individual canids out of the wild, into captivity—never ideal but, in her view, a worthwhile price for progress. The advantage of cloning, which Colossal has managed to do with blood samples alone, is that the wild canid populations can be kept intact. 

VonHoldt has always been an advocate for wolves. Indeed, when she hypothesized that the red wolf had hybrid origins, in 2016, she’d framed it as an argument for protecting the gray wolf, which the federal government was considering removing from the Endangered Species List. (In short: If all wolves were one wolf, then it was undeniable that the species’ range had contracted precipitously.) But she’d grown frustrated with the federal government’s efforts to restore the red wolf, which after half a century had seen few meaningful successes, she says. 

VonHoldt joined Colossal’s scientific advisory board in 2023. “I love the bold, the shock and awe,” she told me, explaining her decision. She saw the fact that Colossal sparked controversy as an asset, given the problems she sees in conservation: “Get something out there. Start pushing buttons and start forcing these conversations,” she says. The red wolf was akin to a terminal patient who was ready to accept any and all therapies, however experimental. Why not embrace biotech? 

She also notes that the federal budget for endangered species conservation is incredibly limited. Rely only on that money and “we can kiss our world goodbye,” she said in an email. The $100 million raised by the Colossal Foundation is essential, then, she says. As for the samples the team had collected on the Gulf Coast, she says, limited freezer space is often devoted to animals that are officially categorized as threatened or endangered, which the Gulf Coast canids are not. Colossal could take the samples, so the team passed them along to the company.

Dr. Joey Hinton
Ecologist Joey Hinton trapped the canids that Colossal Biosciences used to source the DNA for its clones. He dismisses the clones as a way for the company to earn headlines and attract funding.
RICH SAAL

It was Hinton—a source for a previous story—who first alerted me to Colossal’s work on red wolves; he described vonHoldt and Brzeski’s de-introgression project, which won federal funding in late 2024, as nefarious-sounding work to “disappear” canids off the Gulf Coast. But he did not have all the details of the project, which had changed after he left the team. He suggested they’d be “just throwing animals together,” whereas vonHoldt described a careful program of observing the canids in the wild so she could determine which acted most wolflike, findings she’d cross-reference with their genetic data.

Colossal did not wind up participating in the de-introgression project. But the company is doing work on the red wolf that vonHoldt views as complementary: Its scientists are assembling a “pangenome” of North American canids by studying samples pulled from museums, universities, zoos, and other institutions. This data set is expected to clarify both what genetic sequences are shared across the entire canid family and what snippets differ in certain populations. The hope is that this will provide a clearer picture of the red wolf in its early days, before the coyotes arrived and the gene pool narrowed. That might shift what Colossal’s James calls the government’s arbitrary definition of the red wolf, to encompass more of the species’ full former diversity.

The pangenome, then, might allow vonHoldt’s de-introgressed canids, descended from the Gulf Coast canids, to qualify as actual red wolves. Indeed, James suggested to me that more information about historic red wolves might force the government to take a new look at the Gulf Coast canids; some individuals might have high enough red wolf ancestry to be classified as red wolves. (“That has management implications that terrify state and federal government,” he added.)

hair in Zip-Loc bags on a metal tray
Blood and tissue samples collected by the Galveston Island Humane Society from canid roadkill will be shipped to Princeton University for DNA analysis.
TRISTAN SPINSKI

The purpose of vonHoldt’s de-introgression project is to bring back certain lost red wolf genes—to create a whole new wolf lineage. But she has also pushed against the idea of “genetic purity,” which she thinks limits what we protect with conservation laws; she told me emphasizing it reminds her of the human history of eugenics and “makes every part of my soul hurt.” She cares less about what species are out there, in the landscape, than what ecological function the animals play, and she sees coyotes and red wolves as closely related animals that may have a role to play in one another’s future survival.


As for Colossal’s clones, even vonHoldt seems to describe them as something less than a conservation breakthrough. They are a “proof of principle that we, collectively, as a scientific community, know how to do it,” she told me. If an urgent need arises to clone red wolves, the groundwork has been laid. 

Hinton, meanwhile, is one of several scientists I spoke with who were skeptical that Colossal was doing good science, given that so much of its work happens behind closed doors. He implied that the clones were nothing but an empty showpiece, a way to earn headlines and attract funders. “The work is anything but symbolic,” James responded via email. “It expands the genetic toolkit available for critically endangered species, demonstrates scalable approaches to biodiversity restoration, and contributes directly to preserving imperiled lineages.” He noted that Colossal had intentionally decided to avoid the “snail’s pace” of the peer review process and suggested that the skepticism from scientists may actually be a “panicked response to being outpaced.”

Until some evidence confirms that the Gulf Coast canids—the source material for the clones—are red wolves, they can’t legally be classified as such for federal conservation purposes. Nonetheless, Colossal’s press release claimed that the company had “birthed two litters of cloned red wolves, the most critically endangered wolf in the world.” On the same day that press release dropped, Colossal’s CEO and cofounder, Ben Lamm, appeared on The Joe Rogan Experience and claimed that he had offered to create hundreds of red wolves for the federal government to use in recovery—for free! He was miffed when the government, under the Biden administration, replied that it wanted to spend several years and many millions of dollars to study the potential for cloning before it would take any action. (The company has gotten more traction with the Trump administration, Lamm said.)

When I first spoke to James at Colossal, he said that he was “cognizant” of the concerns over the names and labels and that the company’s own materials described the clones as “red ‘ghost’ wolves.” He suggested that if anyone assumed the clones were actual red wolves, that was because journalists had failed to grasp the nuances of the science. But this phrase appears so late in a long document that it was cut off in some versions. Later, over email, James indicated that further analysis had convinced him that what the company had created were red wolves, and that anyone who disagreed either could not grasp the science or is “so ideologically opposed to Colossal’s conservation revolution that they are willing to compromise their scientific integrity.”

VonHoldt has had her own issues with the company’s communications; she told me it was “stressful” when Lamm described the clones as red wolves—which, she notes, “federally, they’re not.” But she values the company’s work, she says, and “the thing that I value the most is shaking things up.” People are paying attention to red wolves. If it’s hard to decide what to call the animals on the Gulf Coast—where some heavily wolfy animals live alongside others that are more coyote—that’s just proof that our concept of a “species” does not capture the complex realities on the ground. 


In 2025, the same year as Colossal’s wolf announcement, Hinton launched the Texas-Louisiana Canid Project. He’s working in partnership with Broussard, the master’s student at McNeese, in slightly different territory from vonHoldt and Brzeski—and focusing more on the animals’ appearance and behavior than their genes. The Gulf Coast canids are stable and faring better than the North Carolina red wolves, and his hope is that if we learn why they’ve been successful for so many years, we might be able to help the official red wolf population, which is only just limping along. 

a wolf crosses a road outside of the city
Galveston locals hope that the presence of these remarkable creatures—red wolves or not—might rein in the rapid development of the island’s last stands of green.
TRISTAN SPINSKI

I had planned to join Hinton in the field, but by the time I was able to visit, he’d had to go home to his family. So I joined Broussard on his last days trapping in Texas that season. Before I’d left for Winnie, I’d told my friends I’d be out chasing the last surviving red wolves. But there, on the Gulf Coast, I came to understand that this was just as much a story about coyotes.

That’s what Broussard and Cunningham both called the creatures. Hinton does too; he considers the animals to be a specific “ecotype” of coyote, featuring an injection of wolf DNA that has helped them adapt to the local marshes. 

At vonHoldt’s behest, I drove an hour down the coast to Galveston Island, where she and Brzeski began working with the island’s animal control department; when locals find a coyote, the animal is captured so its blood can be collected and a GPS collar fitted around its neck. A small group of locals who support the project have come to call themselves the “ghost wolf team.” They hope that the presence of these remarkable creatures might rein in the rapid development of the island’s last stands of green. Still, the people I spoke to in Galveston conceded that the animals were, if special, nonetheless a form of coyote.

VonHoldt describes Galveston Island as a potential model for what conservation could look like in the future. Top-down recovery hasn’t been working, but helping more places fall in love with their local animals might. And for that to happen, we need to stop obsessing over whether or not something is a “pure” wolf. What matters, she argues, is that an animal is doing what a larger predator does in an ecosystem. She embraces the “ghost wolf” name because, more than “Gulf Coast canid,” it makes clear that there’s something special on the coast—something worth protecting. 

Her vision is enticing: Focus on function over purity. Let evolution proceed. Stop protecting the wolf of the past and consider the wolf of the future. Such rapid genetic exchange may be necessary to help predators adapt to a hotter, increasingly shattered world, she says. 

Then again, we already know what’s adapted to the world we’re building: coyotes. The argument against genetic purity can sound like giving up on wolves entirely, with the possible exception of whatever specimens we produce in cloning facilities. And there is the matter of politics: If we throw out the concept of “endangered species,” will we really protect “endangered functions” instead? Under an administration already rolling back environmental protections, the likeliest outcome may be protecting nothing at all.

I tried in Galveston, too, to see the coyotes. Ron Wooten, the local resident who helped alert scientists to this population, dropped some pins on a map, pointing me toward several likely spots. That evening, after the sun set, I chose a quiet road that passed through marshes until it reached the island’s eastern beach. It was mating season, Wooten had noted. The animals should be on the move, he said; look to the bushes. As I drove up and down the road, my headlights revealed only empty darkness. No coyote. No wolf. Fitting, perhaps—isn’t absence the essence of a ghost? But whether this was a good omen was less clear. As individuals, these animals do best by avoiding us humans. As a group, their survival—like the survival of the red wolves—depends on our knowing that they are here, and were here, and deciding that is reason enough to care.

In Winnie the next morning, I went out one last time with Broussard, and we struck out again. With no coyotes in his traps and the new semester looming, he decided to take down his game cameras. Back at the hotel, I caught at least an image of what I’d been chasing: In black and white, the animals were appropriately silver, spectral, dashing across the midnight fields. In one clip, a canid paused and howled. “That’s super cool,” Broussard said quietly, as an echoing, interweaving chorus responded from somewhere deeper in the marsh. 

Boyce Upholt is a journalist based in New Orleans and founding editor of Southlands, a magazine about Southern nature. 

The case for fixing everything

The handsome new book Maintenance: Of Everything, Part One, by the tech industry legend Stewart Brand, promises to be the first in a series offering “a comprehensive overview of the civilizational importance of maintenance.” One of Brand’s several biographers described him as a mainstay of both counterculture and cyberculture, and with Maintenance, Brand wants us to understand that the upkeep and repair of tools and systems has a profound impact on daily life. As he puts it, “Taking responsibility for maintaining something—whether a motorcycle, a monument, or our planet—can be a radical act.”

Radical how? This volume doesn’t say. In an outline for the overall work, Brand says his goal is to “end with the nature of maintainers and the honor owed them.”

The idea that maintainers are owed anything, much less honor, might surprise some readers. In fact, maintenance and repair have been hot topics in academia since the mid-2010s. I played some role in that movement as a cofounder of the Maintainers, a global, interdisciplinary network dedicated to the study of maintenance, repair, care, and all the work that goes into keeping the world going.

Brand is right, too, that maintainers haven’t gotten the laurels they deserve. Over the past few decades, scholars have shown that work from oiling tools to replacing worn parts to updating code bases tends to be lower in status than “innovation.” Maintenance gets neglected in many organizational and social settings. (Just look at some American infrastructure!) And as the right-to-repair movement has shown, companies in pursuit of greater profits have frequently locked us out of being able to do repairs or greatly reduced the maintainable life of their products. It’s hard to think of any other reason to put a computer in the door of a refrigerator.

Some of Brand’s earlier work helped inspire those insights. But his new book makes me think he doesn’t see things that way. For Brand, maintenance seems to be a solitary act, profound but more about personal success and fulfillment than tending to a shared world or making it better.


Born in 1938, Brand is 87 years old. A sense hangs over the book—with its battles against corrosion, rust, and decay, with its attempts to keep things going even as they inevitably falter—of someone looking over life and pondering its end. Maintenance: Of Everything connects to every stage of Brand’s life. It’s worth reviewing where it falls in that arc. Brand has always been interested in tools and fixing things, but rarely has he focused on the systems that need the most care. 

More than a half-century ago, Brand was a member of the Merry Pranksters, a countercultural, LSD-centered hippie collective famously led by Ken Kesey, the author of One Flew Over the Cuckoo’s Nest. In 1966, Brand co-produced the Trips Festival, where bands like the Grateful Dead and Big Brother and the Holding Company performed for thousands amid psychedelic light shows.

In some ways, the Trips Festival set a paradigm for the rest of his life’s work. Brand’s biographers have described him as a network celebrity—someone who got ahead by bringing people together, building coalitions of influential figures who could boost his signal. As Kesey put it in 1980, “Stewart recognizes power. And cleaves to it.” 

Brand applied this network logic to the undertaking he will always be best remembered for: the Whole Earth Catalog. First published in 1968 and aimed at hippies and members of the nascent back-to-the-land movement, the publication had the motto “Access to tools.” Its pages were full of Quonset huts, geodesic domes, solar panels, well pumps, water filters, and other technologies for life off the grid. It was a vision that might feel progressive or left-leaning, but the libertarian, rugged-individualist philosophy of eschewing corrupt systems and remaking civilization alone stood in contrast to the more collective movements pushing for deep social change at the time—like civil rights, feminism, and environmentalism.

That vision also led straight to the empowerment that came with new digital tools, and to Silicon Valley. In 1985, Brand published the Whole Earth Software Catalog, the last of the series, and also cofounded the WELL—the Whole Earth ’Lectronic Link, a pioneering online community famous for, among other things, facilitating the trade of Grateful Dead bootlegs. He also wrote a hagiographic book about the MIT Media Lab, known for its corporate-sponsored research into new communications tech. “The Lab would cure the pathologies of technology not with economics or politics but with technology,” Brand wrote. Again, not collective action, not policymaking: tools. And Brand then cofounded the Global Business Network, a group of pricey consulting futurists that further connected him to MIT, Stanford, and the Valley. Brand had literally helped bring about the modern digital revolution.

His attention then turned toward its upkeep. Brand’s 1994 book, How Buildings Learn: What Happens After They’re Built, argued against high-modernist architectural ideas. Nearly all buildings eventually get remade, he argued, but he especially favored cheap, simple structures that inhabitants could easily retool to suit changing needs. In some ways, Brand was recapitulating the liberated—or libertarian—philosophy of the Whole Earth Catalog: People can remake their world, if they have access to tools. In a chapter titled “The Romance of Maintenance,” he asked readers to see the beauty, value, and occasional pleasures of fixer-uppers of all kinds.

This chapter was a touchstone for many of us in the academic subfield of maintenance studies. Researchers in disciplines like history, sociology, and anthropology, as well as artists and practitioners in fields like libraries, IT, and engineering, all started trying to understand the realities and, yes, romance of maintenance and repair. Brand joined and contributed to Listservs, attended conferences, chatted with intellectual leaders. So it’s a bit uncharitable when he writes that his new book is “the first to look at maintenance in general.” He knows better. The real question, though, is what his work has to teach us that others have not said before. In this first volume, the answer is unclear.


Maintenance: Of Everything, Part One is an odd book. If so much of Brand’s thinking has been about access to tools, he now asks, in a more extended way: How are our tools maintained? But where Brand began his career with a catalog, in this volume we get … what? A digest? An almanac? An encyclopedia? Its form and riotous variety fit no genre easily. 

The book has two chapters. The first, “The Maintenance Race,” recounts the story of three men who took part in the Golden Globe, a round-the-world race for solo sailors held in 1968. Each of the sailors, Brand explains, had a different philosophy of maintenance. One neglected it and hoped for the best. He died. Another thought of and prepared for everything in advance, and while he didn’t win the race, he completed it and once held the record for the “world’s longest recorded nonstop solo sailing voyage.” The final sailor won and did so through heroic acts of perseverance; his style was “Whatever comes, deal with it,” Brand explains. Structured like a fairy tale and unremittingly romantic, the story—like most of the anecdotes in the book—focuses on the derring-do of vigorous white guys. The strategy is no secret. Brand’s outline explains: “Start with a dramatic contest of maintenance styles under life-critical conditions—a true story told as a fable.” This myth is meant to inspire. 

The second chapter, “Vehicles (and Weapons),” is over 150 pages long. It has five sections, multiple subsections, five subsections designated “digressions,” one called a “subdigression,” two “postscripts,” and several “footnotes” that are not footnotes in a formal sense but, rather, further addenda. At times, it all feels like notes for a future work. Brand makes no apology for the book’s woolliness. “All I can offer here,” he writes, “is to muse across a representative of maintenance domains and see what emerges.” Perhaps the most charitable reading of the potpourri is that it represents the return of a Merry Prankster, offering us a riotously varied light show. It’s a good book to leave on a table and occasionally open to a random page for entertainment. But it often seems as if it does not know what it wants to say or be. 

“Vehicles (and Weapons)” begins by paraphrasing two famous works of maintenance philosophy, Robert M. Pirsig’s Zen and the Art of Motorcycle Maintenance and Matthew B. Crawford’s Shop Class as Soulcraft. Maintenance involves both “problem finding” and “problem solving.” While much repair work is marked by anxiety, impatience, and boredom, it also offers positive values and outcomes. “Motorcycle maintainers take heart from what they repair for—the glory of the ride,” Brand writes. 

The beauty and triumph of cheapness is a running theme throughout the work, harking back to How Buildings Learn. Henry Ford’s Model T won out over early electric vehicles and hugely expensive luxury vehicles like Rolls-Royce’s Silver Ghost because it was cheap and easier to maintain. The three most popular cars in human history—the Ford Model T, the Volkswagen Bug, and the Lada “Classic” from Russia—all privileged cheapness, “retained their basic design for decades, and … invited repair by the owner.” Or, to be fair, maybe demanded it? For every hobbyist who delighted in being able to self-reliantly keep a VW running, there must have been thousands who appreciated how cheap it was and hated that it broke a lot. Brand never points to social research, like surveys, that might help us know people’s feelings on such matters.

Other sections recount how Americans created interchangeable parts (enabling not only cheap mass production but also easy maintenance), examine how maintenance works with assault rifles and in war, and track the history of technical manuals from the early modern period to the age of YouTube. These stories are solid, but they’re also well known to students of technology, and nearly all are recycled from the work of others, featuring many large block quotes. The volume breaks little new ground. 

Brand treats maintenance as an unalloyed good. But the field of maintenance studies has moved on, burrowing into the domain’s ironies, complexities, and difficulties. A simple example: In most cases, it is environmentally far better to retire and recycle an internal-combustion vehicle and buy an electric one than to keep the polluting beast going forever. Maintaining a gas-guzzler or a coal-burning power plant isn’t a radical act but a regressive one. Also, maintenance can become a life-breaking burden on the poor, and it falls inequitably on the shoulders of women and people of color. Keeping existing systems going can be a way of avoiding tough, necessary change—like making technological systems more accessible for people with disabilities. In this volume, Brand is uninterested in such difficult trade-offs. He avoids any question of how politics shapes these issues, or how they shape politics.

This avoidance comes out most clearly in a section of “Vehicles (and Weapons)” that talks about Elon Musk—a character of “unique mastery,” Brand informs us. He tells us that Bill Gates once shorted Tesla’s stock, only to lose $1.5 billion. The lesson is clear: Elon won. 

In what political and social vision is money the best way to keep the score? Brand rightly points out that electric vehicles have fewer moving parts and, in that sense, are more maintainable than internal-combustion vehicles. He celebrates Musk most of all because his products “have all proven to be game changers in part because they combine ingenious design with surprisingly low cost.” Again, it’s Brand’s “cheap, available tools” hypothesis. But there’s a real superficiality and lack of follow-through in the thinking here: Teslas remain luxury vehicles whose sales have slumped since federal tax subsidies disappeared. The company has faced several right-to-repair lawsuits; there’s even a law review article on the topic. Musk is in no sense a maintenance hero. Yet Brand writes that with his companies, “Musk may have done more practical world saving than any other business leader of his time.” By the time Brand was writing this book, the controversies surrounding Musk for at least flirting with antisemitism, racism, sexism, authoritarianism, and more were quite clear. About this, the book says not a word.

Maintenance: Of Everything, Part One
Stewart Brand
STRIPE PRESS, 2026

For sure, Brand needn’t agree with Musk’s critics, but failing to even broach the subject is tone deaf and out of touch. Others have argued that Silicon Valley’s “Move fast and break things” mentality undermines healthy maintenance. Brand doesn’t raise the idea—even to dismiss it. 

It could be that with Maintenance: Of Everything, Part One Brand is just getting going; that in subsequent volumes he’ll have something more coherent to say; that he’ll raise really hard questions and try to answer them. But given his track record, we might reasonably doubt it. Kesey said Brand cleaves to power; he certainly doesn’t question it. 

Lee Vinsel is an associate professor of science, technology, and society at Virginia Tech and host of Peoples & Things, a podcast about human life with technology.

How robots learn: A brief, contemporary history

Roboticists used to dream big but build small. They’d hope to match or exceed the extraordinary complexity of the human body, and then they’d spend their career refining robotic arms for auto plants. Aim for C-3PO; end up with the Roomba. 

The real ambition for many of these researchers was the robot of science fiction—one that could move through the world, adapt to different environments, and interact safely and helpfully with people. For the socially minded, such a machine could help those with mobility issues, ease loneliness, or do work too dangerous for humans. For the more financially inclined, it would mean a bottomless source of wage-free labor. Either way, a long history of failure left most of Silicon Valley hesitant to bet on helpful robots.

That has changed. The machines are yet unbuilt, but the money is flowing: Companies and investors put $6.1 billion into humanoid robots in 2025 alone, four times what was invested in 2024. 

What happened? A revolution in how machines have learned to interact with the world. 

Imagine you’d like a pair of robot arms installed in your home purely to do one thing: fold clothes. How would it learn to do that? You could start by writing rules. Check the fabric to figure out how much deformation it can tolerate before tearing. Identify a shirt’s collar. Move the gripper to the left sleeve, lift it, and fold it inward by exactly this distance. Repeat for the right sleeve. If the shirt is rotated, turn the plan accordingly. If the sleeve is twisted, correct it. Very quickly the number of rules explodes, but a complete accounting of them could produce reliable results. This was the original craft of robotics: anticipating every possibility and encoding it in advance.
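The rule explosion is easy to see in miniature. Here is a toy sketch of the hand-coded approach in Python; every function name, condition, and fold distance is invented for illustration and comes from no real system:

```python
# A toy rule-based folder: every situation must be anticipated in advance.
# All names, conditions, and distances here are illustrative, not from any real robot.

def fold_shirt(shirt):
    """Fold a shirt (described as a dict) by checking hand-coded conditions one by one."""
    steps = []
    if shirt.get("rotated", False):
        steps.append("rotate shirt to upright")            # rule: normalize pose first
    for sleeve in ("left", "right"):
        if shirt.get(f"{sleeve}_sleeve_twisted", False):
            steps.append(f"untwist {sleeve} sleeve")       # rule: fix twists before folding
        steps.append(f"fold {sleeve} sleeve inward 12cm")  # rule: fixed fold distance
    steps.append("fold bottom hem up to collar")
    return steps

print(fold_shirt({"rotated": True, "left_sleeve_twisted": True}))
```

Every new wrinkle in the real world demands another branch, which is why the rulebook grows without bound.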

Around 2015, the cutting edge started to do things differently: Build a digital simulation of the robotic arms and the clothes, and give the program a reward signal every time it folds successfully and a ding every time it fails. This way, it gets better by trying all sorts of techniques through trial and error, with millions of iterations—the same way AI got good at playing games.
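The trial-and-error recipe can itself be sketched in a few lines. In this toy version (the “simulator,” the action space, and the reward values are all invented), the program tries random grip forces, collects the reward or the ding for each, and keeps a running estimate of which actions succeed:

```python
import random

# A minimal trial-and-error loop: the toy "simulator" rewards the right grip
# force and dings everything else. Purely illustrative, not a real robotics sim.

def simulate(force):
    """Toy simulator: the fold succeeds only with a moderate grip force."""
    return 1.0 if 4 <= force <= 6 else -1.0  # reward on success, ding on failure

def train(n_trials=5000, seed=0):
    rng = random.Random(seed)
    actions = list(range(10))                # discrete grip forces 0..9
    value = {a: 0.0 for a in actions}        # running value estimate per action
    counts = {a: 0 for a in actions}
    for _ in range(n_trials):
        a = rng.choice(actions)              # explore actions at random
        r = simulate(a)
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]  # incremental mean of observed reward
    return max(value, key=value.get)         # best action found by trial and error

print(train())  # settles on a force in the rewarded 4-6 range
```

Real systems use far more sophisticated algorithms and simulators, but the loop of act, observe reward, update is the same.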

The arrival of ChatGPT in 2022 catalyzed the current boom. Trained on vast amounts of text, large language models work not through trial and error but by learning to predict what word should come next in a sentence. Similar models adapted to robotics were soon able to absorb pictures, sensor readings, and the position of a robot’s joints and predict the next action the machine should take, issuing dozens of motor commands every second.
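The next-word idea carries over almost directly. This toy sketch, in which the action tokens and demonstration sequences are invented, counts which action tends to follow which and then predicts the most common successor, roughly the way a crude language model predicts the next word:

```python
from collections import Counter, defaultdict

# Toy "next-action" predictor: like a language model, it learns which action
# token tends to follow the current one. The demonstration data is invented.

demos = [
    ["see_cup", "reach", "grip", "lift"],
    ["see_cup", "reach", "grip", "lift"],
    ["see_cup", "reach", "miss", "reach", "grip", "lift"],
]

following = defaultdict(Counter)
for seq in demos:
    for prev, nxt in zip(seq, seq[1:]):
        following[prev][nxt] += 1        # count every observed transition

def predict_next(token):
    """Predict the most frequently observed next action."""
    return following[token].most_common(1)[0][0]

print(predict_next("see_cup"))  # -> "reach"
print(predict_next("grip"))     # -> "lift"
```

A real robotics foundation model replaces these counts with a large neural network and feeds it images and joint positions as well as text, but the autoregressive framing is the same.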

This conceptual shift—to reliance on AI models that ingest large amounts of data—seems to work whether that helpful robot is supposed to talk to people, move through an environment, or even do complicated tasks. And it was paired with other ideas about how to accomplish this new way of learning, like deploying robots even if they aren’t yet perfect so they can learn from the environment they’re meant to work in. Today, Silicon Valley roboticists are dreaming big again. Here’s how that happened. 


Jibo

A movable social robot carried out conversations long before the age of LLMs.

An MIT robotics researcher named Cynthia Breazeal introduced an armless, legless, faceless robot called Jibo to the world in 2014. It looked, in fact, like a lamp. Breazeal’s aim was to create a social robot for families, and the idea pulled in $3.7 million in a crowdfunding campaign. Early preorders cost $749.

The early Jibo could introduce itself and dance to entertain kids, but that was about it. The vision was always for it to become a sort of embodied assistant that could handle everything from scheduling and emails to telling stories. It earned a number of devoted users, but ultimately the company shut down in 2019.

A crowdfunding campaign started in 2014 and drew 4,800 Jibo preorders.
COURTESY OF MIT MEDIA LAB

In retrospect, one thing that Jibo really needed was better language capabilities. It was competing against Apple’s Siri and Amazon’s Alexa, and all those technologies at the time relied on heavy scripting. In broad terms, when you spoke to them, software would translate your speech into text, analyze what you wanted, and create a response pulled from preapproved snippets. Those snippets could be charming, but they were also repetitive and simply boring, even downright robotic. That was especially a challenge for a robot that was supposed to be social and family oriented. 

What has happened since, of course, is a revolution in how machines can generate language. Voice mode from any leading AI provider is now engaging and impressive, and multiple hardware startups are trying (and failing) to build products that take advantage of it. 

But that comes with a new risk: While scripted conversations can’t really go off the rails, ones generated by AI certainly can. Some popular AI toys have, for example, talked to kids about how to find matches and knives. 


Dactyl

A robot hand trained with simulations tries to model the unpredictability and variation of the real world.

By 2018, every leading robotics lab was trying to scrap the old scripted rules and train robots through trial and error. OpenAI tried to train its robotic hand, Dactyl, virtually, with digital models of the hand and of the palm-size cubes Dactyl was supposed to manipulate. The cubes had letters and numbers on their faces; the model might set a task like “Rotate the cube so the red side with the letter O faces upward.”

Here’s the problem: A robotic hand might get really good at doing this in its simulated world, but when you take that program and ask it to work on a real version in the real world, the slight differences between the two can cause things to go awry. Colors might be slightly different, or the deformable rubber in the robot’s fingertips could turn out to be stretchier than it was in simulation.

Dactyl, part of OpenAI’s first attempt at robotics, was trained in simulation to solve Rubik’s Cubes.
COURTESY OF OPENAI

The solution is called domain randomization. You essentially create millions of simulated worlds that all vary slightly and randomly from one another. In each one the friction might be less, or the lighting more harsh, or the colors darkened. Exposure to enough of this variation means the robots will be better able to manipulate the cube in the real world. The approach worked on Dactyl, and one year later it was able to use the same core techniques to do something harder: solving Rubik’s Cubes (though it worked only 60% of the time, and just 20% when the scrambles were particularly hard). 
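At its core, domain randomization just means sampling each simulated world’s physical parameters from ranges rather than fixing them. A minimal sketch, with invented parameter names and ranges:

```python
import random

# Domain randomization in miniature: each simulated world draws its physical
# parameters at random, so no two training worlds are exactly alike.
# Parameter names and ranges here are illustrative.

def random_world(rng):
    return {
        "friction": rng.uniform(0.4, 1.2),        # how slippery the cube is
        "lighting": rng.uniform(0.5, 1.5),        # brightness multiplier
        "finger_stretch": rng.uniform(0.9, 1.1),  # fingertip rubber elasticity
    }

rng = random.Random(42)
worlds = [random_world(rng) for _ in range(10)]   # a real run samples millions
for w in worlds[:3]:
    print({k: round(v, 2) for k, v in w.items()})
```

A policy trained across enough of these perturbed worlds cannot rely on any one exact value, so the real world becomes just one more variation.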

Still, the limits of simulation mean that this technique plays a far smaller role today than it did in 2018. OpenAI shuttered its robotics effort in 2021 but has recently started the division up again, reportedly focusing on humanoids. 


RT-2

Training on images from across the internet helps robots translate language into action.

Around 2022, Google’s robotics team was up to some strange things. It spent 17 months handing people robot controllers and filming them doing everything from picking up bags of chips to opening jars. The team ended up cataloguing 700 different tasks.

The point was to build and test one of the first large-scale foundation models for robotics. As with large language models, the idea was to take in lots of data, tokenize it into a format an algorithm could work with, and then generate an output. Google’s RT-1 received input about what the robot was looking at and how the many parts of the robotic arm were positioned; then it took an instruction and translated it into motor commands to move the robot. When it had seen tasks before, it carried out 97% of them successfully; it succeeded at 76% of the instructions it hadn’t seen before. 

The model RT-2, for Robotic Transformer 2, incorporated internet data to help robots process what they were seeing.
COURTESY OF GOOGLE DEEPMIND

The second iteration, RT-2, came out the following year and went even further. Instead of training on data specific to robotics, it went broad: It trained on more general images from across the internet, like the vision-language models lots of researchers were working on at the time. That allowed the robot to interpret where certain objects were in the scene.

“All these other things were unlocked,” says Kanishka Rao, a roboticist at Google DeepMind who led work on both iterations. “We could do things now like ‘Put the Coke can near the picture of Taylor Swift.’” 

In 2025, Google DeepMind further fused the worlds of large language models and robotics, releasing a Gemini Robotics model with improved ability to understand commands in natural language. 


RFM-1

An AI model that allows robotic arms to act like coworkers.

In 2017, before OpenAI shuttered its first robotics team, a group of its engineers spun out a project called Covariant, aiming to build not sci-fi humanoids but the most pragmatic of all robots: an arm that could pick up and move things in warehouses. After building a system based on foundation models similar to Google’s, Covariant deployed this platform in warehouses like those operated by Crate & Barrel and treated it as a data collection pipeline. 

By 2024, Covariant had released a robotics model, RFM-1, that you could interact with like a coworker. If you showed an arm many sleeves of tennis balls, for example, you could then instruct it to move each sleeve to a separate area. And the robot could respond, perhaps predicting that it wouldn’t be able to get a good grip on the item and then asking for advice on which particular suction cups it should use. 

This sort of thing had been done in experiments, but Covariant was launching it at significant scale. The company now had cameras and data collection machines in every customer location, feeding back even more data for the model to train on.

A Covariant robot demonstrates “induction”—the common warehouse task of placing objects on sorters or conveyors.
COURTESY OF COVARIANT

It wasn’t perfect. In a demo in March 2024 with an array of kitchen items, the robot struggled when it was asked to “return the banana” to its original location. It picked up a sponge, then an apple, then a host of other items before it finally accomplished the task. 

It “doesn’t understand the new concept” of retracing its steps, cofounder Peter Chen told me at the time. “But it’s a good example: it might not work well yet in the places where you don’t have good training data.”

Chen and fellow founder Pieter Abbeel were soon hired by Amazon, which is currently licensing Covariant’s robotics model (Amazon did not respond to questions about how it’s being used, but the company runs an estimated 1,300 warehouses in the US alone). 


Digit

Companies are putting this humanoid to the test in real-world settings.

The new investment dollars flowing to robotics startups are aimed largely at robots shaped not like lamps or arms but like people. Humanoid robots are supposed to be able to seamlessly enter the spaces and jobs where humans currently work, avoiding the need to retool assembly lines to accommodate new shapes such as giant arms. 

It’s easier said than done. In the rare cases where humanoids appear in real warehouses, they’re often confined to test zones and pilot programs. 

Amazon and other companies are using Digit to help move shipping totes.
COURTESY OF AGILITY ROBOTICS

That said, Agility’s humanoid Digit appears to be doing some real work. The design, with exposed joints and a distinctly unhuman head, is driven more by function than by sci-fi aesthetics. Amazon, Toyota, and GXO (a logistics giant with customers like Apple and Nike) have all deployed it, making it one of the first examples of a humanoid robot that companies see as providing actual cost savings rather than novelty. Their Digits spend their days picking up, moving, and stacking shipping totes.

The current Digit is still a long way from the humanlike helper Silicon Valley is betting on, though. It can lift only 35 pounds, for example, and every time Agility makes Digit stronger, its battery gets heavier and it has to recharge more often. And standards organizations say humanoids need stricter safety rules than most industrial robots, because they’re designed to be mobile and spend time in proximity to people. 

But Digit shows that this revolution in robot training isn’t converging on a single method. Agility relies on simulation techniques like those OpenAI used to train its hand, and the company has worked with Google’s Gemini models to help its robots adapt to new environments. That’s where more than a decade of experiments have gotten the industry: Now it’s building big.