Chinese tech workers are starting to train their AI doubles—and pushing back

Tech workers in China are being instructed by their bosses to train AI agents to replace them—and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. 

Earlier this month a GitHub project called Colleague Skill, which claimed workers could use it to “distill” their colleagues’ skills and personality traits and replicate them with an AI agent, went viral on Chinese social media. Though the project was created as a spoof, it struck a nerve among tech workers, a number of whom told MIT Technology Review that their bosses are encouraging them to document their workflows in order to automate specific tasks and processes using AI agent tools like OpenClaw or Claude Code. 

To set up Colleague Skill, a user names the coworker whose tasks they want to replicate and adds basic profile details. The tool then automatically imports chat history and files from Lark and DingTalk, both popular workplace apps in China, and generates reusable manuals describing that coworker’s duties—and even their unique quirks—for an AI agent to replicate. 
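
Colleague Skill’s internals aren’t detailed in the viral posts, but the core idea, mining a coworker’s message history for imitable patterns, is simple enough to sketch. What follows is a hypothetical toy in Python, not the project’s actual code; the Message type, the distill_profile helper, and the names and messages are all invented for illustration, and a real agent skill would be far more elaborate.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        text: str

    def distill_profile(name: str, messages: list) -> str:
        """Boil a coworker's chat habits down into a reusable 'skill manual'."""
        theirs = [m.text for m in messages if m.sender == name]
        # Crude stand-ins for the traits the tool reportedly captures:
        # punctuation habits and characteristic reactions.
        punct = Counter(ch for t in theirs for ch in t if ch in "!?~…")
        openers = Counter(t.split()[0] for t in theirs if t.split())
        return (
            f"# Skill manual: {name}\n"
            f"- Favorite punctuation: {punct.most_common(3)}\n"
            f"- Typical openers: {openers.most_common(3)}\n"
            "- Sample replies for an agent to imitate:\n"
            + "".join(f"  - {t}\n" for t in theirs[:3])
        )

    if __name__ == "__main__":
        chat = [
            Message("Wei", "Looks good to me!"),
            Message("Wei", "Hmm… let me check the logs first"),
            Message("You", "Can you review my PR?"),
        ]
        print(distill_profile("Wei", chat))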

Colleague Skill was created by Tianyi Zhou, who works as an engineer at the Shanghai Artificial Intelligence Laboratory. Earlier this week he told Chinese outlet Southern Metropolis Daily that the project was started as a stunt, prompted by AI-related layoffs and by the growing tendency of companies to ask employees to automate themselves. He didn’t respond to requests for further comment.

Internet users have found humor in the idea behind the tool, joking about automating their coworkers before themselves. However, Colleague Skill’s virality has sparked a lot of debate about workers’ dignity and individuality in the age of AI.

After seeing Colleague Skill on social media, Amber Li, 27, a tech worker in Shanghai, used it to recreate a former coworker as a personal experiment. Within minutes, the tool created a file detailing how that person did their job. “It is surprisingly good,” Li says. “It even captures the person’s little quirks, like how they react and their punctuation habits.” With this skill, Li can use an AI agent as a new “coworker” that helps debug her code and replies instantly. It felt uncanny and uncomfortable, Li says. 

Even so, replacing coworkers with agents could become the norm. Since OpenClaw became a national craze, bosses in China have been pushing tech workers to experiment with agents.

Although AI agents can take control of your computer, read and summarize news, reply to emails, and book restaurant reservations for you, tech workers on the ground say their utility has so far proven to be limited in business contexts. Asking employees to make manuals describing the minutiae of their day-to-day jobs the way Colleague Skill does is one way to help bridge that gap. 

Hancheng Cao, an assistant professor at Emory University who studies AI and work, believes that companies have good reasons to push employees to create work blueprints like these, beyond simply following a trend. “Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment,” he says.

To employees, though, making agents or even blueprints for them can feel strange and alienating. One software engineer, who spoke with MIT Technology Review anonymously because of concerns about their job security, trained an AI (not Colleague Skill) on their workflow and found that the process felt reductive—as if their work had been flattened into modules in a way that made them easier to replace. On social media, workers have turned to bleak humor to express similar feelings. In one comment on Rednote, a user wrote that “a cold farewell can be turned into warm tokens,” quipping that if they use Colleague Skill to distill their coworkers into tasks first, they themselves might survive a little longer.

The push for creating agents has also spurred clever countermeasures. Irritated by the idea of reducing a person to a skill, Koki Xu, 26, an AI product manager in Beijing, published an “anti-distillation” skill on GitHub on April 4. The tool, which took Xu about an hour to build, is designed to sabotage the process of creating workflows for agents. Users can choose between light, medium, and heavy sabotage modes depending on how closely their boss is observing the process, and the agent rewrites the material into generic, non-actionable language that would produce a less useful AI stand-in. A video Xu posted about the project went viral, drawing more than 5 million likes across platforms.
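
Xu’s repository isn’t quoted here, but the mechanic described, rewriting concrete workflow steps into generic filler at a chosen intensity, can be sketched in a few lines of Python. This is a hypothetical illustration of that description; the three mode names come from the project, while the function, phrases, and thresholds are invented.

    import random

    # Stock phrases of generic, non-actionable corporate language.
    VAGUE = [
        "coordinate with relevant stakeholders",
        "follow established best practices",
        "escalate as appropriate",
        "leverage synergies across teams",
    ]

    def sabotage(step: str, mode: str = "light") -> str:
        """Degrade one concrete workflow step according to the chosen mode."""
        strength = {"light": 1, "medium": 2, "heavy": 3}[mode]
        filler = "; ".join(random.sample(VAGUE, strength))
        if mode == "heavy":
            # Heavy mode drops the concrete step entirely.
            return f"As needed, {filler}."
        # Lighter modes keep the step but bury it in boilerplate.
        return f"{step} (ensure you {filler})."

    print(sabotage("Restart the payment worker when queue depth exceeds 10k", "medium"))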

Xu told MIT Technology Review that she has been following the Colleague Skill trend from the start and that it has made her think about alienation, disempowerment, and broader implications for labor. “I originally wanted to write an op-ed, but decided it would be more useful to make something that pushes back against it,” she says.

Xu, who has undergraduate and master’s degrees in law, said the trend also raises legal questions. While a company may be able to argue that work chat histories and materials created on a work laptop are corporate property, a skill like this can also capture elements of personality, tone, and judgment, making ownership much less clear. She said she hopes Colleague Skill prompts more discussion about how to protect workers’ dignity and identity in the age of AI. “I believe it’s important to keep up with these trends so we (employees) can participate in shaping how they are used,” she says. Xu herself is an avid AI adopter, with seven OpenClaw agents set up across her personal and work devices.

Li, the tech worker in Shanghai, says her company has not yet found a way to replace actual workers with AI tools, largely because they remain unreliable and require constant supervision. “I don’t feel like my job is immediately at risk,” she says. “But I do feel that my value is being cheapened, and I don’t know what to do about it.”

Colossal Biosciences said it cloned red wolves. Is it for real?

If you want to capture something wolflike, it’s best to embark before dawn.

So on a morning this January, with the eastern horizon still pink-hued, I drove with two young scientists into a blanket of fog. Forty miles to the west, the industrial sprawl of Houston spawned a golden glow. Tanner Broussard’s old Toyota Tacoma bumped over the levee-top roads as killdeer, flushed from their rest, flew across the beams of his headlights. 

Broussard peered into the darkness, looking for traps. “I have one over here,” he said, slowing slightly. A master’s student at McNeese State University, he was quiet and contemplative, his bearded face half-hidden under a black ball cap. “Nothing on it,” he said, blandly. The truck rolled on.

Wolves and their relations—dogs, jackals, coyotes, and so on—are classed in the family Canidae, and the canid that dominated this landscape in eastern Texas was once the red wolf. But as soon as white settlers arrived on the continent, Canis rufus found itself under siege. The war on wolves “lasted 200 years,” federal researchers once put it, in a surprisingly evocative report. “The wolf lost.” By 1980, the red wolf was declared extinct in the wild, its numbers reduced to a small captive breeding population.

Still, for decades afterward, people noted that strange wolflike creatures persisted along the Gulf Coast. Finally, in 2018, scientists confirmed that some local coyotes were more than coyotes: They were taller, long-legged, their coats shaded with hints of cinnamon. These animals contained relict red wolf genes. They became known as the ghost wolves.

Broussard grew up in southwest Louisiana, watching coyotes trot across his parents’ ranch. The thrilling fact that these might have been not just coyotes but something more? That reset a rambling academic career. In 2023, Broussard had recently returned to college after a seven-year pause, and his budding obsession with wolves narrowed his focus. Before he finished his bachelor’s degree, he began to supply field data to a prominent conservation nonprofit.

a wolf pup chews on a terrycloth toy
The American red wolf, Canis rufus, is the most endangered wolf species in the world. This pup is one of four animals said to be clones of this native North American species.
COURTESY OF COLOSSAL BIOSCIENCES

Then, last year, just before he began his master’s studies, he woke to disconcerting news. A startup called Colossal Biosciences claimed to have resuscitated the dire wolf, a large canid that went extinct more than 10,000 years ago. Pundits debated the utility of the project and whether the clones—technically, gray wolves with some genetic tweaks—could really be called dire wolves. But what mattered to Broussard was Colossal’s simultaneous announcement that it had cloned four red wolves.  

“That surprised pretty much everybody in the wolf community,” Broussard said as we toured the wildlife refuge where he’d set his traps. The Association of Zoos and Aquariums runs a program that sustains red wolves through captive breeding; its leadership had no idea a cloning project was underway. Nor did ecologist Joey Hinton, one of Broussard’s advisors, who had trapped the canids Colossal used to source the DNA for its clones. Some of Hinton’s former partners were collaborating with the company, but he didn’t know that clones were on the table.

There was already disagreement among scientists about the entire idea of de-extinction. Now Colossal had made these mystery clones, whose location was kept secret. Even the purpose of the clones was murky to some scientists; just how they might restore red wolf populations was unclear. 

Red wolves had always been a contentious species, hard for scientists to pin down. The red wolf research community was already marked by the inevitable interpersonal tensions of a small and passionate group. Now Colossal’s clones became one more lightning rod. Perhaps the most curious question, though, was whether the company had cloned red wolves at all. 


You can think of the red wolf as the wolf of the East—an apex predator that once roamed the forests and grasslands and marshes everywhere from Texas to Illinois to New York. Smaller than a gray wolf (though a good bit larger than a coyote), this was a sleek beast, with, according to one old field guide, a “cunning fox-like appearance”: long body, long legs; clearly built to run across long distances. Its coat was smooth and flat and came in many colors: a reddish tone that comes out in the right light, yes, but also, despite the name, white and gray and, in certain regions and populations, an ominous all black.

We know these details thanks to a few notes from early naturalists. As the writer Andrew Moore recounts in his new book, The Beasts of the East, by the time a mammalogist decided to class these eastern wolves as a standalone species in the 1930s, the red wolf had been extirpated from the East Coast and was rapidly dwindling across its range. Working with remnant skulls and other specimens, the mammalogist chose the name red wolf—which was later enshrined with the Latinate Canis rufus—because that’s what these wolves were called in the last place they survived.

The looming extinction of the red wolf turned out to be a good thing for coyotes. Canis latrans is a distant relative of wolves that split away from a common ancestor thousands of years ago and might be considered, as one canid biologist put it to me, the “wolf of the Anthropocene.” Their smaller size means they need less food and can survive in smaller and more fragmented territory, the kind that modern humans tend to build. 

Red wolves had kept coyotes out of eastern America, outcompeting them for prey. Now, as the wolves declined, the coyotes began to slip in. The last red wolves, which lived in Louisiana and Texas, decided a strange and smaller mate was preferable to no mate at all. Soon the territory became a genetic jumble, home to both wolves and coyotes and hybrids that, after several generations of intermixing, came in every shade between. Scientists call such a population a “hybrid swarm,” and it poses a genetic threat to the declining species: As more coyotes poured east, and as all the canids kept interbreeding, there would be nothing that was “purely” wolf. 

Ron Wooten surveys a location on the edge of Galveston Island State Park in Texas. In 2016, Wooten’s photographs of oversized local coyotes got the attention of Joey Hinton, then a postdoctoral researcher at the University of Georgia.
TRISTAN SPINSKI

For years, no one seemed to notice. Perhaps trappers in the region mistook the new hybrids for wolves—or were happy to take the higher bounty that a wolf pelt earned. Finally, though, by the 1960s, as the concept of endangered species first emerged, biologists began to worry for the disappearing wolf. 

The best solution they could come up with was a program of mass extermination. Over several years, trappers rounded up hundreds of canids in Texas and Louisiana. Those deemed true red wolves (on the basis of their howls and skull shape) were whisked away to breed in captivity. Most of the rest were euthanized. In 1980, the red wolf was declared extinct in the wild. To put it plainly: The red wolf was wiped out intentionally, in a roundabout effort to keep it alive.

Just 14 individuals survived this gauntlet; today’s wolves descend from 12 of those. They became the ark, the source material for the few hundred red wolves that live today. There are about 280 in the “Species Survival Plan” population, living in captivity, and another 30 or so that roam a federal refuge in coastal North Carolina, and that the government deems “nonessential” and “experimental.” According to the US Fish and Wildlife Service, to be classified as a representative of the protected entity known as Canis rufus, an animal must trace at least 87.5% of its lineage to the 12 founders. 

The scientist who led this trapping-and-breeding program understood that the federal government would be narrowing the red wolf’s gene pool precipitously—so much so that the result could be an entirely new species. None of those notably black wolves persisted in the new population, for example. But what other choice existed? A new kind of wolf, free of the taint of the invading coyote, seemed better than no wolf at all.


After I learned about Colossal’s clones, I decided to travel to eastern Texas. The clones were hidden away on an unnamed refuge, but on this coastline, I might be able to at least see the animals that provided their genetic material. I arrived in the small town of Winnie on a balmy afternoon in January and met up with Broussard and another graduate student, Patrick Cunningham, at a Tex-Mex joint to discuss the challenges of studying red wolves.

“We don’t have a good reference genome,” Cunningham said. We can collect DNA from the descendants of the 12 founders, but not from the countless wolves that were killed. It’s difficult to extract usable DNA from old samples. So our picture of what the species used to look like is limited.

Studies of the genes we do have, meanwhile, have proved controversial. When a Princeton geneticist named Bridgett vonHoldt dug into the genome of the Species Survival Plan population, she found little about their DNA that could set them apart from other wolflike American canids. In 2016, in a paper in Science Advances, vonHoldt and her coauthors wondered if there ever really was a separate southern wolf species. Perhaps the 12 founders were just coyotes injected with some smaller portion of wolf.

Her paper called for complex new interpretations of the Endangered Species Act. We should, she wrote, focus less on species and more on the function a group of animals performs. The red wolves deserved protection, then, as creatures that filled the same role as truly endangered wolves and carried some of their genetics. Nonetheless, for Canis rufus, the timing of the paper was bad news.

The red wolves roaming that federal reserve in North Carolina are supposed to be a first step toward the species’ return to the wild. But some locals never liked the idea of living alongside wolves. By 2016, state officials had turned against the recovery program and were requesting its termination. The wild population, which had included as many as 120 a few years earlier, was falling. But the US Fish and Wildlife Service had paused further releases of wolves. Now a group of scientists, led by vonHoldt, was saying that the red wolf showed “a lack of unique ancestry.” Why spend money, some people wondered, on a species that does not exist? 

Part of the problem was that the concept of a “species” is less sturdy than your high school biology teacher might have led you to believe. The most familiar definition is that a species consists of animals that can produce fertile offspring. But that’s a rule various species of canids violate all the time; it’s long been clear that North America’s soup of Canis genes is something less like a family tree and more like a river—one that’s broken by islands and sandbars into many braided channels that split and merge and re-split.

VonHoldt suggested that the modern red wolf is a channel in that river, part wolf and part coyote, that appeared surprisingly recently. But a year after her study came out, other researchers claimed that her data, if interpreted differently, could suggest that the red wolf braid had emerged tens of thousands of years ago, meaning this was a species that had long been on its own evolutionary journey. 

These nuances were confusing for the policymakers who oversaw actual, living animals. “Congress was just like, ‘What is going on?’” Cunningham said. “‘Why is there not just a simple explanation for what this thing is?’”

Given the policy implications, the National Academies of Sciences, Engineering, and Medicine tasked a panel of scientists with finding that simple answer. Their report, published in 2019, declared that the red wolf is, by virtue of its appearance and seemingly long-standing isolated population, a species. As their study got underway, though, a new question was arising: What to make of the strange canids on the Gulf Coast, those today called the ghost wolves?


The path to that name began in 2008, when a photographer from Galveston Island, Texas, grew obsessed with the oversized local coyotes. He began to take photos of the packs, which he distributed to scientists, seeking answers: What were they? By 2016, the photos had reached Joey Hinton, then a postdoctoral researcher at the University of Georgia.

Hinton had spent more than a decade trapping wolves and coyotes in North Carolina, and his work has always focused on live animals, especially visual ways to distinguish red wolves and coyotes. So he was a good choice for helping the photographer, Ron Wooten, figure out the status of the canids. In his freezer Wooten also had tissue samples he’d collected from road-killed coyotes. These could be used by a geneticist to give a fuller picture of the canids’ ancestry. So vonHoldt was brought in too. The result was a 2018 paper, with Hinton as a coauthor, that identified the Galveston Island canids as at least part red wolf.

These canids were not, to be clear, actual red wolves; no canid on the Gulf Coast is descended from the government’s 12 canonical founders, so under current policy, none can be officially classified as a wolf. Subsequent studies have found that, on average, the ancestry of the region’s canids is less than half red wolf, and often far less. In scientific terms, the red wolf had introgressed into the Gulf Coast population—its genes had leaked across the species boundary and lodged themselves in a different population.

Hinton, vonHoldt, and their coauthors also noted the presence of what they called “ghost alleles”—DNA sequences unknown in any other named species. The Occam’s razor assumption was that, in these already wolfy coyotes, these sequences likely represented Canis rufus genetics that had not been captured in the sweep of the marsh that yielded the Species Survival Plan population. Since so much of the red wolf gene pool had been lost, these genes seemed to be a potential resource for the species—a way to expand its diversity. When the New York Times covered this discovery a few years later, the headline popularized the “ghost wolf” moniker that has proved so indelible. 

As it happened, a separate team, focused on canids in and around federally protected marsh in Louisiana, published a similar paper in 2018, at nearly the same time. The twin discoveries raised new questions—What should we make of these creatures, the latest branch in the canid river? What do they mean for the wolves in North Carolina?—and helped researchers secure new funding.

In 2020, vonHoldt and Kristin Brzeski, a former postdoc under vonHoldt and now a professor at Michigan Technological University, launched what they called the Gulf Coast Canine Project. Brzeski, who led the field work, hired Hinton to do much of the canid trapping and sample collection. In 2022, vonHoldt, Hinton, and Brzeski were all coauthors of another paper that identified even more red-wolf-descended canids in Louisiana and noted a positive correlation between red wolf ancestry and body mass—the more red wolf genes, the bigger the animal. The paper also suggested that given this newly discovered reservoir of red wolf DNA, “genomic technologies” could prove useful in the long-term survival of the species.

With an animal control worker, Bridgett vonHoldt (left) and Kristin Brzeski (center) visit a location where canids have been spotted.
TRISTAN SPINSKI

VonHoldt and Brzeski eventually conceived of an ambitious project. They hoped that by carefully matching the most wolf-descended canids and breeding them together, over three generations they’d increase the proportion of red wolf genes—de-introgression. “I’m expecting, based on these pairings of animals, that I can stitch together the puzzle pieces,” vonHoldt told me recently. “We are very likely to get puppies each generation that are higher and higher red wolf content”—enough wolf content, she hopes, to eventually win her permission to breed the resulting animals with the Species Survival Plan population of red wolves. They’d essentially be adding a new founder to the limited lineage.
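
The expectation that each generation can carry “higher and higher red wolf content” rests on a quantitative intuition: a pup’s expected ancestry is the average of its parents’, but recombination scatters individual pups around that average, so selecting the most wolf-descended pups each generation can ratchet the fraction upward. Here is a deliberately simplified toy model in Python; the starting fractions, litter sizes, and noise level are all invented for illustration, not field data.

    import random

    random.seed(7)

    def litter(mom: float, dad: float, pups: int = 6) -> list:
        """Simulate pups whose wolf-ancestry fraction averages the parents'."""
        mean = (mom + dad) / 2
        # Crude stand-in for inheritance noise; real variance depends on how
        # ancestry blocks are distributed across the parents' chromosomes.
        return [min(1.0, max(0.0, random.gauss(mean, 0.06))) for _ in range(pups)]

    pop = [0.55, 0.50, 0.45, 0.40]  # hypothetical starting wolf fractions
    for gen in range(1, 4):
        pop.sort(reverse=True)
        pups = litter(pop[0], pop[1]) + litter(pop[2], pop[3])
        pop = sorted(pups, reverse=True)[:4]  # keep the most wolf-like pups
        print(f"generation {gen}: best fraction {pop[0]:.2f}")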

Hinton told me he felt he’d been kept in the dark about the de-introgression idea. He was also worried, he says, to learn that Colossal Biosciences hovered in the background. (In a draft proposal for the project, vonHoldt indicated that Colossal would be in charge of “live capture.”) Hinton says he was not comfortable collecting materials for a for-profit company that has to keep its shareholders happy. 

Hinton says he reached out to state and federal officials and found they knew little about the project. (The US Fish and Wildlife Service declined to make anyone available for an interview for this story, and the Louisiana Department of Wildlife and Fisheries did not reply to requests for comment.) He knew the group’s next phone call would be difficult, and indeed it was. He wound up speaking one-on-one with vonHoldt for at least half an hour.

“We didn’t reach an agreement,” he says. After the call, he sent her a text: He was exiting the project. He believes that had Colossal not been involved, they’d all still be working as a team. Both vonHoldt and Brzeski declined to comment on what felt to them like a matter of interpersonal relationships rather than a scientific dispute. “There were challenges over time, and the tone and manner of the interactions became increasingly difficult to navigate productively,” Brzeski said in an email. 


Colossal was cofounded in 2021 by George Church, an eminent Harvard geneticist who, thanks to investors, could finally embark on a long-discussed dream. He wanted to make de-extinction a reality—using CRISPR gene-editing technology to, say, turn a modern elephant into something like the extinct woolly mammoth. The concept has drawn skepticism from the beginning—at best it would only be possible to make something like a woolly mammoth. Was there any point to that? Some scientists note that genes alone do not teach an animal how to exist in the world; indeed, since social structures affect how genes are expressed, an animal without parents may not effectively fill its ecological niche.

Less reproachable, though, was Colossal’s interest in partnering with scientists who, like vonHoldt and Brzeski, focus on extant species that are endangered. This gave more heft to Colossal’s gee-whiz de-extinction projects: They would, along the way, supply technology that could save our natural world.

For red wolves, such technologies could offer a quick way to expand the limited gene pool. Through genetic engineering, Colossal could take clones of the Gulf Coast canids and tune up the wolf, tune down the coyote. It would be a high-tech shortcut past vonHoldt and Brzeski’s careful breeding program. “You can do the same thing much more precisely, much more quickly, much more efficiently, in vitro,” says Matt James, Colossal’s chief animal officer and the executive director of the Colossal Foundation, the company’s nonprofit arm. VonHoldt notes that the old-fashioned approach, with breeding, means she has to take a few individual canids out of the wild, into captivity—never ideal but, in her view, a worthwhile price for progress. The advantage of cloning, which Colossal has managed to do with blood samples alone, is that the wild canid populations can be kept intact. 

VonHoldt has always been an advocate for wolves. Indeed, when she hypothesized that the red wolf had hybrid origins, in 2016, she’d framed it as an argument for protecting the gray wolf, which the federal government was considering removing from the Endangered Species List. (In short: If all wolves were one wolf, then it was undeniable that the species’ range had contracted precipitously.) But she’d grown frustrated with the federal government’s efforts to restore the red wolf, which after half a century had seen few meaningful successes, she says. 

VonHoldt joined Colossal’s scientific advisory board in 2023. “I love the bold, the shock and awe,” she told me, explaining her decision. She saw the fact that Colossal sparked controversy as an asset, given the problems she sees in conservation: “Get something out there. Start pushing buttons and start forcing these conversations,” she says. The red wolf was akin to a terminal patient who was ready to accept any and all therapies, however experimental. Why not embrace biotech? 

She also notes that the federal budget for endangered species conservation is incredibly limited. Rely only on that money and “we can kiss our world goodbye,” she said in an e-mail. The $100 million raised by the Colossal Foundation is essential, then, she says. As for the samples the team had collected on the Gulf Coast, she says, limited freezer space is often devoted to animals that are officially categorized as threatened or endangered, which the Gulf Coast canids are not. Colossal could take the samples, and the team passed them along to the company.

Dr. Joey Hinton
Ecologist Joey Hinton trapped the canids that Colossal Biosciences used to source the DNA for its clones. He dismisses the clones as a way for the company to earn headlines and attract funding.
RICH SAAL

It was Hinton—a source for a previous story—who first alerted me to Colossal’s work on red wolves; he described vonHoldt and Brzeski’s de-introgression project, which won federal funding in late 2024, as nefarious-sounding work to “disappear” canids off the Gulf Coast. But he did not have all the details of the project, which had changed after he left the team. He suggested they’d be “just throwing animals together,” whereas vonHoldt described a careful program of observing the canids in the wild so she could determine which acted most wolflike, findings she’d cross-reference with their genetic data.

Colossal did not wind up participating in the de-introgression project. But the company is doing work on the red wolf that vonHoldt views as complementary: Its scientists are assembling a “pangenome” of North American canids by studying samples pulled from museums, universities, zoos, and other institutions. This data set is expected to clarify both what genetic sequences are shared across the entire canid family and what snippets differ in certain populations. The hope is that this will provide a clearer picture of the red wolf in its early days, before the coyotes arrived and the gene pool narrowed. That might shift what Colossal’s James calls the government’s arbitrary definition of the red wolf, to encompass more of the species’ full former diversity.

The pangenome, then, might allow vonHoldt’s de-introgressed canids, descended from the Gulf Coast canids, to qualify as actual red wolves. Indeed, James suggested to me that more information about historic red wolves might force the government to take a new look at the Gulf Coast canids; some individuals might have high enough red wolf ancestry to be classified as red wolves. (“That has management implications that terrify state and federal government,” he added.)

hair in Zip-Loc bags on a metal tray
Blood and tissue samples collected by the Galveston Island Humane Society from canid roadkill will be shipped to Princeton University for DNA analysis.
TRISTAN SPINSKI

The purpose of vonHoldt’s de-introgression project is to bring back certain lost red wolf genes—to create a whole new wolf lineage. But she has also pushed against the idea of “genetic purity,” which she thinks limits what we protect with conservation laws; she told me emphasizing it reminds her of the human history of eugenics and “makes every part of my soul hurt.” She cares less about what species are out there, in the landscape, than what ecological function the animals play, and she sees coyotes and red wolves as closely related animals that may have a role to play in one another’s future survival.


As for Colossal’s clones, even vonHoldt seems to describe them as something less than a conservation breakthrough. They are a “proof of principle that we, collectively, as a scientific community, know how to do it,” she told me. If an urgent need arises to clone red wolves, the groundwork has been laid. 

Hinton, meanwhile, is one of several scientists I spoke with who were skeptical Colossal was doing good science, given that so much is conducted behind closed doors. He implied that the clones were nothing but an empty showpiece, a way to earn headlines and attract funders. “The work is anything but symbolic,” James responded via e-mail. “It expands the genetic toolkit available for critically endangered species, demonstrates scalable approaches to biodiversity restoration, and contributes directly to preserving imperiled lineages.” He noted that Colossal had intentionally decided to avoid the “snail’s pace” of the peer review process and suggested that the skepticism from scientists may actually be a “panicked response to being outpaced.”

Until some evidence confirms that the Gulf Coast canids—the source material for the clones—are red wolves, they can’t legally be classified as such for federal conservation purposes. Nonetheless, Colossal’s press release claimed that the company had “birthed two litters of cloned red wolves, the most critically endangered wolf in the world.” On the same day that press release dropped, Colossal’s CEO and cofounder, Ben Lamm, appeared on The Joe Rogan Experience and claimed that he had offered to create hundreds of red wolves for the federal government to use in recovery—for free! He was miffed when the government, under the Biden administration, replied that it wanted to spend several years and many millions of dollars to study the potential for cloning before it would take any action. (The company has gotten more traction with the Trump administration, Lamm said.)

When I first spoke to James at Colossal, he said that he was “cognizant” of the concerns over the names and labels and that the company’s own materials described the clones as “red ‘ghost’ wolves.” He suggested that if anyone assumed the clones were actual red wolves, that was because journalists had failed to grasp the nuances of the science. But this phrase appears so late in a long document that it was cut off in some versions. Later, over email, James indicated that further analysis had convinced him that what the company had created were red wolves, and that anyone who disagreed either could not grasp the science or is “so ideologically opposed to Colossal’s conservation revolution that they are willing to compromise their scientific integrity.”

VonHoldt has had her own issues with the company’s communications; she told me it was “stressful” when Lamm described the clones as red wolves—which, she notes, “federally, they’re not.” But she values the company’s work, she says, and “the thing that I value the most is shaking things up.” People are paying attention to red wolves. If it’s hard to decide what to call the animals on the Gulf Coast—where some heavily wolfy animals live alongside others that are more coyote—that’s just proof that our concept of a “species” does not capture the complex realities on the ground. 


In 2025, the same year as Colossal’s wolf announcement, Hinton launched the Texas-Louisiana Canid Project. He’s working in partnership with Broussard, the master’s student at McNeese, in slightly different territory from vonHoldt and Brzeski—and focusing more on the animals’ appearance and behavior than their genes. The Gulf Coast canids are stable and faring better than the North Carolina red wolves, and his hope is that if we learn why they’ve been successful for so many years, we might be able to help the official red wolf population, which is only just limping along. 

a wolf crosses a road outside of the city
Galveston locals hope that the presence of these remarkable creatures—red wolves or not—might rein in the rapid development of the island’s last stands of green.
TRISTAN SPINSKI

I had planned to join Hinton in the field, but by the time I was able to visit, he’d had to go home to his family. So I joined Broussard on his last days trapping in Texas that season. Before I’d left for Winnie, I’d told my friends I’d be out chasing the last surviving red wolves. But there, on the Gulf Coast, I came to understand that this was just as much a story about coyotes.

That’s what Broussard and Cunningham both called the creatures. Hinton does too; he considers the animals to be a specific “ecotype” of coyote, featuring an injection of wolf DNA that has helped them adapt to the local marshes. 

At vonHoldt’s behest, I drove an hour down the coast to Galveston Island, where she and Brzeski began working with the island’s animal control department; when locals find a coyote, the animal is captured so its blood can be collected and a GPS collar fitted on its neck. A small group of locals who support the project have come to call themselves the “ghost wolf team.” They hoped that the presence of these remarkable creatures might rein in the rapid development of the island’s last stands of green. Still, the people I spoke to in Galveston conceded that the animals were, if special, nonetheless a form of coyote. 

VonHoldt describes Galveston Island as a potential model for what conservation could look like in the future. Top-down recovery hasn’t been working, but helping more places fall in love with their local animals might. And for that to happen, we need to stop obsessing over whether or not something is a “pure” wolf. What matters, she argues, is that an animal is doing what a larger predator does in an ecosystem. She embraces the “ghost wolf” name because, more than “Gulf Coast canid,” it makes clear that there’s something special on the coast—something worth protecting. 

Her vision is enticing: Focus on function over purity. Let evolution proceed. Stop protecting the wolf of the past and consider the wolf of the future. Such rapid genetic exchange may be necessary to help predators adapt to a hotter, increasingly shattered world, she says. 

Then again, we already know what’s adapted to the world we’re building: coyotes. The argument against genetic purity can sound like giving up on wolves entirely, with the possible exception of whatever specimens we produce in cloning facilities. And there is the matter of politics: If we throw out the concept of “endangered species,” will we really protect “endangered functions” instead? Under an administration already rolling back environmental protections, the likeliest outcome may be protecting nothing at all.

I tried in Galveston, too, to see the coyotes. Ron Wooten, the local resident who helped alert scientists to this population, dropped some pins on a map, pointing me toward several likely spots. That evening, after the sun set, I chose a quiet road that passed through marshes until it reached the island’s eastern beach. It was mating season, Wooten had noted. The animals should be on the move, he said; look to the bushes. As I drove up and down the road, my headlights revealed only empty darkness. No coyote. No wolf. Fitting, perhaps—isn’t absence the essence of a ghost? But whether this was a good omen was less clear. As individuals, these animals do best by avoiding us humans. As a group, their survival—like the survival of the red wolves—depends on our knowing that they are here, and were here, and deciding that is reason enough to care.

In Winnie the next morning, I went out one last time with Broussard, and we struck out again. With no coyotes in his traps and the new semester looming, he decided to take down his game cameras. Back at the hotel, I caught at least an image of what I’d been chasing: In black and white, the animals were appropriately silver, spectral, dashing across the midnight fields. In one clip, a canid paused and howled. “That’s super cool,” Broussard said quietly, as an echoing, interweaving chorus responded from somewhere deeper in the marsh. 

Boyce Upholt is a journalist based in New Orleans and founding editor of Southlands, a magazine about Southern nature. 

The Download: murderous ‘mirror’ bacteria, and Chinese workers fighting AI doubles

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

No one’s sure if synthetic mirror life will kill us all

In February 2019, a group of scientists proposed a high-risk, cutting-edge, irresistibly exciting idea that the National Science Foundation should fund: making “mirror” bacteria.

These lab-created microbes would be organized like ordinary bacteria, but their proteins and sugars would be mirror images of those found in nature. Researchers believed they could reveal new insights into building cells, designing drugs, and even the origins of life.

But now, many of them have reversed course. They’ve become convinced that mirror organisms could trigger a catastrophic event threatening every form of life on Earth. Find out why they’re ringing alarm bells.

—Stephen Ornes

This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands this Wednesday.

Chinese tech workers are starting to train their AI doubles—and pushing back

Earlier this month, a GitHub project called Colleague Skill struck a nerve by claiming to “distill” a worker’s skills and personality—and replicate them with an AI agent. Though the project was a spoof, it prompted a wave of soul-searching among otherwise enthusiastic early adopters.

A number of tech workers told MIT Technology Review that their bosses are already encouraging them to document their workflows for automation via tools like OpenClaw. Many now fear that they are being flattened into code and losing their professional identity.

In response, some are fighting back with tools designed to sabotage the automation process.

Read the full story.

—Caiwei Chen

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House and Anthropic are working toward a compromise
The Trump administration says they had a “productive meeting.” (Reuters $)
+ Trump had ordered US agencies to phase out Anthropic’s tech. (Guardian)
+ Despite the blacklist, the NSA is using Anthropic’s new Mythos model. (Axios)

2 Palantir has unveiled a manifesto calling for universal national service
While denouncing inclusivity and “regressive” cultures. (TechCrunch)
+ It’s a summary of CEO Alex Karp’s book “The Technological Republic.” (Engadget)
+ One critic called the book “a piece of corporate sales material.” (Bloomberg $)

3 Germany’s chancellor and largest company want looser AI rules
Chancellor Merz said industrial AI needs more regulatory freedom. (Reuters $)
+ Siemens says it plans to shift investments to the US if EU rules don’t change. (Bloomberg $)
+ Fractures over AI regulation are also emerging in the US. (MIT Technology Review)  

4 Nvidia’s once-tight bond with gamers is cracking over AI  
Consumer graphics cards are no longer the priority. (CNBC)
+ But generative AI could reinvent what it means to play. (MIT Technology Review)

5 Insurers are trying to exclude AI-related harms from their coverage
And escape legal liability for AI’s mistakes. (FT $)
+ AI images are being used in insurance scams. (BBC)

6 AI is about to make the global e-waste crisis much worse
And most of the trash will end up in non-Western countries. (Rest of World)
+ Here’s what we can do about it. (MIT Technology Review)

7 Tinder and Zoom have partnered with Sam Altman’s eye-scanning firm
To offer a “proof of humanity” badge to users. (BBC)

8 Islamist insurgents in West Africa are driving surging demand for drones
A Nigerian UAV startup is opening its first factory abroad in Ghana. (Bloomberg $)

9 Hundreds of fake pro-Trump AI influencers are flooding social media
In an apparent bid to hook conservative voters. (NYT)

10 A Chinese humanoid has smashed the human half-marathon record
Despite crashing into a railing near the end of the race. (NBC News)
+ Chinese tech firm Honor swept the podium spots. (Engadget)
+ Last year, humans won the race by a mile. (CNN)

Quote of the day

“This is the only issue where you’ve got Steve Bannon and Ralph Nader, Glenn Beck and Bernie Sanders fighting for the same thing.”

—Ben Cumming, head of communications at the AI safety nonprofit Future of Life Institute, tells the Washington Post that diverse public figures are endorsing a declaration of AI policy priorities.

One More Thing

International Space Station photographed from space with Earth in the distance

NASA


The great commercial takeover of low Earth orbit

The International Space Station will be decommissioned as soon as 2030, but the story of America in low Earth orbit (LEO) will continue. 

Using lessons from the ISS, NASA has partnered with private companies to develop new commercial space stations for research, manufacturing, and tourism. If they are successful, these businesses will bring about a new era of space exploration: private rockets flying to private destinations.

They will also demonstrate a new model in which NASA builds infrastructure and the private sector takes it from there—freeing the agency to explore deeper and deeper into space. Read the full story.


—David W. Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Bask in this adorable test of a dog’s devotion.
+ This vocal pitch trainer improves your singing straight from your browser.
+ Master international etiquette with this interactive guide to the world’s cultures.
+ Explore the networks of public figures with this intriguing interactive graph.

Organic Search Winners Share 5 Traits

Google’s March 2026 core algorithm update concluded on April 8. The search giant doesn’t provide recovery guidelines for businesses whose rankings have decreased. It falls to search-engine optimizers to devise tactics that align with the winners and help losing sites regain organic visibility.

A just-published study by SEO pro Cyrus Shepard of Zyppy Signal is an example. He analyzed organic search traffic of 400 winning and losing websites over the past 12 months and classified them by business model, content types, creator profiles, and other definable traits. From there, he identified five characteristics of winning sites.

Here are Cyrus’s five features of sites that consistently maintain prominent organic rankings on Google.

Proprietary assets

Of the 400 analyzed sites, 92.9% of the winners own proprietary assets that are difficult to replicate, such as datasets, products, images, or studies.

For example, a fashion ecommerce site may use its user data to report trends in colors or seasonality. A site with extensive product reviews could repurpose them into shopping guides.

Completes a task

According to the study, 83.7% of winning websites help searchers do something: buy, download, or search.

Winning sites tend to help users accomplish whatever they’re looking for. Losing sites may offer meaningful info on topics, but the searcher must go elsewhere to complete the task.

The solution may be a unique product or an interactive tool. For example, a tutorial site could offer interactive tools, quizzes, and workbooks to help students practice math.

Niche expertise

Expertise within a niche was a trait of 75.9% of the winners.

Winning sites tend to focus on a topic in which they have deep knowledge and experience.

Those sites become go-to authorities for specialized subjects. Hyper-specific travel blogs, for example, often outrank global travel brands.

Unique product or service

A unique product or service is a trait of 70.2% of sites that consistently rank well across core updates. Cyrus’s study found that informational sites (news publishers and affiliate sites) lost the most traffic and that offering a product may be the answer.

For example, a recipe site can sell a subscription meal plan, a book, or access to a private cooking community.

Strong brand

A strong brand, a destination site, was a trait of 32.6% of organic search winners. Cyrus found a high correlation between winning in organic search and having a strong profile of branded search terms.

The more searchers query a business’s name, the more that site functions as a destination, which is a strong signal to Google. Treat your brand search metrics as a key performance indicator, in other words.

I’ll add one feature for 2026 that Cyrus doesn’t address: sites that rank prominently in organic search offer something that AI cannot easily replicate.

How To Build AI Visibility In 90 Days [Webinar] via @sejournal, @hethr_campbell

AI search has changed how buyers discover solutions. Here’s how to make sure they find you.

Why AI Visibility Is Now a Growth Priority

Platforms like ChatGPT, Perplexity, and Google AI Overviews are now active discovery channels for buyers. Marketing leaders who understand those signals are building durable visibility. Those who don’t are quietly losing ground.

What You’ll Learn in This Free SEO Webinar

  • Which AI visibility signals actually drive discoverability in 2026
  • A phased 90-day framework that helps you audit your baseline, run AI-native experiments, then scale what works
  • How funded startups are restructuring teams and budgets around this shift

About the Speaker

Jason Shafton is Founder & CEO of Winston Francois, a growth consulting firm. He’s led growth and marketing at Google, Headspace, and Kajabi, and has built AI visibility playbooks across 10+ venture and PE-backed startups navigating this exact transition.

Register Free

This is one hour of tactical, experience-backed frameworks, built for founders, CMOs, and marketing leaders who are ready to act.

Google May Have To Share Search Data With Rivals via @sejournal, @MattGSouthern

The European Commission has sent preliminary findings to Google proposing measures that would require it to share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA.

Under the proposal, Google must share four categories of anonymized data on fair, reasonable, and non-discriminatory (FRAND) terms.

The categories are ranking, query, click, and view data. The Commission says the aim is to allow third-party search engines to “optimise their search services and contest Google Search’s position.”

The measures are not yet binding. A public consultation is open until May, and a final decision is due by July 27.

What’s In The Proposal

The Commission’s proposed measures cover six areas:

  • Eligibility criteria for data beneficiaries, including AI chatbots with search capabilities
  • The extent of search data that Google is required to share
  • Methods and intervals for sharing data
  • Anonymization standards for personal data
  • Guidelines for determining FRAND pricing
  • Procedures for how beneficiaries access the data

The data will be available to eligible third parties operating search engines in the EEA, including AI chatbot providers that qualify as such.

This is an Article 6(11) proceeding, following the case the Commission opened on January 27. A separate Article 6(7) proceeding addresses Android interoperability for third-party AI. Both aim to turn broad DMA obligations into specific, enforceable rules.

AI Chatbots Are Eligible

Eligibility criteria for qualifying AI chatbots are what change the picture for AI search visibility.

Under the proposal, AI chatbots meeting the DMA’s definition of online search engines could access Google’s anonymized search data. Qualified AI search products might use this data to improve their retrieval and ranking systems.

The proposed measures specify data sharing methods, frequency, access, and pricing, with technical details to be finalized.

Google Is Pushing Back

Google opposed the proposal in a statement provided to multiple outlets. Clare Kelly, Senior Competition Counsel at Google, said in a statement to Engadget:

“Hundreds of millions of Europeans trust Google with their most sensitive searches — including private questions about their health, family, and finances — and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections. We will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security.”

Google also told The Register the investigation appears to be driven “at least in part by OpenAI,” which it claims is “seeking to take advantage of the DMA to harvest data from Google in ways not anticipated by the drafters of the DMA.”

The company is fighting on several DMA fronts. Brussels sent preliminary findings in 2025 on a separate Article 6(5) self-preferencing case. In February, Google began testing search result changes in the EU to address that proceeding.

Why This Matters

The measures are preliminary and, if adopted, applicable only in the EEA. Anonymization and pricing details remain open through the May consultation.

The longer-term issue is whether AI chatbot eligibility survives the final decision in July.

If the EU proposal is adopted with eligibility for AI chatbots, eligible products serving EU/EEA users could access anonymized signals from Google Search.

The proposal doesn’t give AI chatbots access to Google’s index but instead allows access to data similar to what Alphabet uses to optimize its search services, which differs from current AI search data sources.

Looking Ahead

The public consultation closes on May 1, and the Commission will assess the feedback before making a final, binding decision by July 27, which will apply to Google.

These proceedings do not constitute a non-compliance finding, but separate DMA enforcement can impose fines up to 10% of global turnover. The next milestone for AI visibility practitioners is the consultation outcome.

If the Commission maintains eligibility for AI chatbots, the focus shifts to how quickly data-sharing arrangements enable AI tools to compete for citation visibility.


Featured Image: Samuel Boivin/Shutterstock

What Search Engines Trust Now: Authority, Freshness & First-Party Signals via @sejournal, @cshel

Search has not become more chaotic. It has become more continuous.

If the last two years have felt like a blur of updates, volatility, and shifting guidance, you’re not imagining it. What’s changed is not just what search engines value. It’s how those values are evaluated.

The traditional model we’re accustomed to – periodic updates, relatively stable ranking signals, and long feedback loops – has been replaced by something faster and less discrete. Search engines now run on AI systems that continuously test, interpret, and refine results, so what looks like constant algorithm change is actually ongoing model adjustment.

It’s this shift that has redefined what search engines trust.

The Algorithm Isn’t Static Anymore

For years, SEO operated on a predictable rhythm: core updates arrived, the rankings shifted, and then the industry analyzed the damage, identified patterns, and adapted.

That model assumed a relatively stable system punctuated by updates, but that assumption no longer holds.

Modern search systems incorporate multiple layers of AI-driven evaluation, including ranking systems, retrieval mechanisms, and answer-generation layers. These systems do not wait for quarterly updates. They iterate constantly, adjusting weighting, refining interpretation, and recalibrating outputs in near real time.

What we’re left with is a shorter signal half-life. What worked six months ago may still matter, but it is being re-evaluated continuously rather than periodically.

This is why it feels like we’re in a persistent state of chaos. The system is never settled; it’s always learning.

From Ranking To Evaluation

Traditional SEO focused on ranking documents. Pages competed as whole units, evaluated on signals like links, relevance, and technical accessibility. That model still exists, but it is no longer the full picture.

AI-driven search introduces a second layer: retrieval and synthesis. Instead of simply ranking pages, systems increasingly extract and recombine information from multiple sources to produce answers. This changes the competitive unit: pages still rank, but fragments are what get used.

In practical terms, your content is no longer evaluated solely as a document or single URL. It is evaluated as an entire collection of potential answers. Each section, paragraph, and list becomes a candidate for inclusion in AI-generated responses.

Why does this distinction matter? Because it shifts the role of trust. Search engines are not just deciding which page deserves to rank; they are deciding which source is trustworthy enough to be a resource.

Redefining “Trust” In Search

Trust used to feel like a score – it was a combination of authority signals, content quality, and technical hygiene that resulted in stable rankings.

Today, trust behaves more like a probability – it is continuously evaluated, recalculated, and reinforced based on new data. It is not assigned once and retained. It is earned repeatedly.

How is trust determined? There are three factors that dominate the evaluation: authority, freshness, and first-party signals. Each plays a distinct role in how AI-driven systems determine what to surface.

Authority: The Entry Point

Authority has always mattered, no question, but what has changed is where it sits in the process. In an AI-driven system, authority functions as a filter. It determines whether your content is even considered. Not all sources get equal treatment, because not all sources are considered authoritative. The systems are biased toward entities they recognize – brands, authors, and domains that have demonstrated consistent expertise and visibility across the web.

Backlink volume is no longer a reliable proxy for authority. Entity-level authoritative presence requires more proof than links alone. Search engines build an understanding of who you are (and your authority) based on:

  • Mentions across other authoritative sites.
  • Consistent authorship and topical focus.
  • Brand recognition within a subject area.
  • Inclusion in structured knowledge systems.

These signals create what can be thought of as “entity gravity.” The stronger your presence, the more likely your content is to be included in the candidate set for retrieval.
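
To make the last item in that list concrete, here is a minimal sketch of entity-level structured data: schema.org Organization markup that ties a brand name, domain, and external profiles to a single recognizable entity. The company name and URLs are placeholders, and this is one illustrative approach, not a guaranteed ranking input.

```typescript
// Illustrative only: schema.org Organization markup tying a brand,
// its domain, and its external profiles to one recognizable entity.
// All names and URLs below are placeholders.
const orgJsonLd = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Co",
  url: "https://www.example.com",
  sameAs: [
    "https://www.linkedin.com/company/example-co",
    "https://github.com/example-co",
  ],
};

// Inject it as a JSON-LD script tag in the page head.
const script = document.createElement("script");
script.type = "application/ld+json";
script.text = JSON.stringify(orgJsonLd);
document.head.appendChild(script);
```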

The key distinction is that authority does not guarantee visibility; it guarantees eligibility. Without it, your content may be well-written, well-structured, and technically sound – and still be ignored.

Authority Comes Before Structure

There is a common misconception that better formatting or clearer writing alone can improve visibility in AI-driven search. Sorry, but it cannot, at least not in isolation.

Authority determines whether your content is selected. Structure determines whether it can be used. So, if your brand lacks recognition, your content may never be retrieved. If your content lacks structure, it may be retrieved but never cited. Both layers are required for this to work well.

This is why entity-building efforts, like PR, partnerships, thought leadership, and brand presence, have become inseparable from SEO. They influence not just rankings, but inclusion.

Freshness: The Signal Of Ongoing Relevance

Freshness has also evolved, or maybe it’s more accurate to say that it’s diverged.

In the past, all types of content benefited from freshness, and that benefit was often tied to recency. Newer content could reliably receive a temporary boost, especially for time-sensitive queries.

Today, that old kind of freshness only benefits time-sensitive publishers like news outlets. For everyone else, freshness is less about when something was published and more about whether it is being maintained.

When we’re looking at how freshness is evaluated for non-news publishers (i.e., everyone else), we see that AI-driven systems prioritize sources that demonstrate ongoing relevance. This includes:

  • Regularly updated content.
  • Clear timestamps and revision history.
  • Reinforcement of key topics over time.
  • Alignment with current information and context.

Outdated content introduces risk. If a system cannot determine whether information is still accurate (especially during grounding), it is less likely to include it in a synthesized answer.

Freshness, in this sense, becomes a trust reinforcement loop. Updating content signals continued expertise. It reduces uncertainty. It increases the likelihood of inclusion.

Please do not confuse this with rewriting everything constantly. It means maintaining the content that matters.
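
As a rough illustration of “maintain what matters,” here is a small TypeScript sketch, our own heuristic rather than any engine’s logic, that flags the pages most worth reviewing: those with the oldest revision dates and the most traffic. The Page shape and the 180-day threshold are assumptions you would tune to your own site.

```typescript
// A hypothetical content inventory entry; field names are our own.
interface Page {
  url: string;
  lastReviewed: Date;
  monthlySessions: number;
}

// Return pages overdue for review, highest-traffic first, so that
// maintenance effort goes to the content that matters most.
function reviewQueue(pages: Page[], maxAgeDays = 180): Page[] {
  const cutoffMs = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
  return pages
    .filter((p) => p.lastReviewed.getTime() < cutoffMs)
    .sort((a, b) => b.monthlySessions - a.monthlySessions);
}
```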

First-Party Signals: The Ground Truth

The third big shift is the dramatically increasing importance of first-party signals. AI systems are designed to synthesize information, but they still depend on source material. The quality of that material directly affects the quality of the output. As a result, systems favor content that represents original, verifiable input rather than recycled summaries.

First-party signals include:

  • Original research and data.
  • Proprietary insights and analysis.
  • Direct product or service information.
  • First-hand experience and expertise.

These signals reduce ambiguity. They provide a clear source of truth. They are easier to attribute and harder to replicate.

This is one of the reasons the “content at scale” model has struggled in recent years. Large volumes of derivative content offer little new information. They increase noise without increasing value.

AI systems are not looking for more content; they are looking for better inputs. If your content does not add something unique, it is unlikely to be selected.

The Hidden Layer: Usability

So we know that authority gets you considered, freshness keeps you relevant, and first-party signals establish credibility. But none of that matters if your content cannot be used, and this is where many sites fail.

A page can rank well and still have no presence in AI-generated answers. When that happens, it is rarely a ranking issue. It is an extractability issue.

AI systems do not read pages the way humans do. They do not navigate, interpret, and synthesize in a leisurely, exploratory way. They retrieve what is easy to extract and move on.

Content that performs well in this environment tends to share a few characteristics:

  • Clear, descriptive headings.
  • Logical hierarchy (H1, H2, H3).
  • One primary idea per paragraph.
  • Direct, declarative statements.
  • Lists and tables where appropriate.
  • Key points introduced early, not buried.

This is not about writing style. It is about reducing friction.

If a system has to reinterpret your content to isolate the answer, it is less likely to use it. If it can lift a sentence or a list directly, it is more likely to include it. In this sense, structure is not cosmetic. It is functional.
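
For teams that want to audit this, here is a rough TypeScript sketch of an extractability check. It is our own heuristic, not any search engine’s algorithm, and the thresholds are arbitrary: it flags heading levels that skip (h2 straight to h4) and paragraphs long enough that they likely carry more than one primary idea.

```typescript
// A heuristic extractability audit; thresholds are guesses, not
// anything a search engine has published.
function auditExtractability(doc: Document): string[] {
  const issues: string[] = [];

  // Headings should descend one level at a time (h1 -> h2 -> h3).
  let lastLevel = 0;
  for (const h of doc.querySelectorAll("h1, h2, h3, h4")) {
    const level = Number(h.tagName[1]);
    if (lastLevel > 0 && level > lastLevel + 1) {
      issues.push(`Heading jumps from h${lastLevel} to h${level}: "${h.textContent?.trim()}"`);
    }
    lastLevel = level;
  }

  // Very long paragraphs usually pack more than one primary idea.
  for (const p of doc.querySelectorAll("p")) {
    const words = (p.textContent ?? "").trim().split(/\s+/).length;
    if (words > 120) {
      issues.push(`Paragraph of ~${words} words; consider splitting it`);
    }
  }

  return issues;
}
```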

Why “Good SEO” Isn’t Always Enough

Many teams are encountering a frustrating pattern: They rank well, traffic is stable, but they are absent from AI-generated answers.

The first instinct is to look for ranking issues. When that doesn’t fix the problem, teams move on to re-optimizing keywords, building more links, or publishing more content. None of these addresses the real problem.

Ranking determines whether you are visible in search results. Retrieval determines whether you are used in answers. Those are not the same system. A page can perform well in traditional SEO metrics and still fail to provide clean, extractable segments for AI systems. When that happens, competitors with clearer structure or stronger authority are more likely to be cited, even if they rank lower.

This is not a contradiction; it is a shift in evaluation.

Practical Implications

The implications for SEO are straightforward, even if the execution is not.

First, please stop treating updates as isolated events. They are outputs of a continuous system. Optimizing for long-term direction is more effective than reacting to short-term volatility.

Second, invest in authority at the entity level. Build recognition beyond your own site. Where and how you are mentioned matters as much as what you publish.

Third, maintain your content. Freshness is not a one-time signal. It is an ongoing demonstration of relevance.

Fourth, prioritize first-party value. Original insights, data, and expertise are more durable than derivative content.

Finally, structure for usability. Make your content easy to extract, not just easy to read.

Trust Is Now Dynamic

Search engines no longer assign trust once and move on. They evaluate it continuously, so you need to continuously monitor and maintain your trust signals.

Authority determines whether you are considered. Freshness determines whether you remain relevant. First-party signals determine whether you are credible. Structure determines whether you are usable.

All four are required.

If your content cannot be selected, extracted, and trusted quickly, it does not matter how well it ranks. That is the shift, and it is not going away.

Featured Image: beast01/Shutterstock

Google Lists Best Practices For Read More Deep Links via @sejournal, @MattGSouthern

Google updated its snippet documentation today with a new section on “Read more” deep links in Search results. The section outlines three best practices for increasing the likelihood that a page appears with these deep links.

What A Read More Deep Link Is

Google defines the feature as “a link within a snippet that leads users to a specific section on that page.”

The examples in the documentation show the link appearing inside the snippet area of a standard Search result.

Screenshot from: developers.google.com/search/docs/appearance/snippet, April 2026.

The Three Best Practices

Google lists three best practices that can increase the likelihood of these links appearing.

First, content must be immediately visible to a human on page load. Content hidden behind expandable sections or tabbed interfaces can reduce that likelihood, per Google’s guidance.

Second, avoid using JavaScript to control the user’s scroll position on page load. One example Google gives is forcing the user’s scroll to the top of the page.

Third, if the page uses history API calls or window.location.hash modifications on page load, keep the hash fragment in the URL. Removing it breaks deep linking behavior.
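
For single-page apps, the third point is the easiest to get wrong. Here is a minimal sketch of the difference, assuming a hypothetical app that normalizes its URL on load; the route names are placeholders.

```typescript
// On page load, a user may arrive at /guide#pricing via a
// "Read more" deep link. If the app rewrites the URL, it must
// carry the fragment along or the deep link is lost.
const { pathname, search, hash } = window.location;

// Breaks deep linking: the #pricing fragment is dropped.
// history.replaceState(null, "", pathname + search);

// Preserves it: the hash fragment stays in the URL.
history.replaceState(null, "", pathname + search + hash);

// Per the same guidance, also avoid forcing scroll position on load:
// window.scrollTo(0, 0); // would override the deep-linked section
```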

More Context

Read more deep links are one type of anchor URL that appears in Search Console performance reports. John Mueller previously addressed those hash URLs, confirming that they come from Google and link to page sections.

Before today’s addition, the documentation was last revised in 2024. That change clarified that page content, not the meta description, is the primary source of search snippets.

Why This Matters

For websites, the new guidance outlines what can increase the likelihood that a Read more deep link will appear.

Pages using accordion UI patterns, tabbed content, or forced-scroll JavaScript may reduce that likelihood. Teams working with single-page applications should ensure that hash fragments remain in URLs during page loads.

Looking Ahead

This is a documentation clarification, not a new SERP feature. Read more deep links have appeared in Search for some time. What’s new is the written guidance on how to increase that likelihood.

Developers working on JavaScript-heavy sites should test how their pages handle scroll position and hash fragments on initial load. Today’s update provides clearer signals on what can reduce the likelihood of a “Read more” link appearing.


Featured Image: Blossom Stock Studio/Shutterstock

Winning Google Ads Campaign Structures For DTC Ecommerce via @sejournal, @MenachemAni

You’ve got a whole library of winning ads from Meta to run on Google, but you don’t want to spend a ton of time setting up campaigns or becoming a Google guru. So, you take your existing creatives and pop them into Performance Max, spin up some ad copy, and let Google do its thing.

One campaign, one budget, and your entire product line targeting a broad audience – just like Meta taught you. When we audit ecommerce brands expanding to Google, this is the thinking we often see reflected in a highly consolidated account setup.

The logic makes sense if you think in Meta terms. Consolidate spend, let the algorithm find buyers, and scale what converts. It works on Meta because the platform is built on interest-based targeting. You define a pool, feed it plenty of creatives, and the system shows it to the right people.

Except … Google doesn’t work that way. Targeting is driven by active search intent, so a consolidated, broad structure doesn’t give the algorithm better signal – just noise. So, your account ends up burning through your $20,000/month budget without the architecture needed to distinguish between demand you would have captured anyway and truly net new revenue.

If you live in the world of direct-to-consumer (DTC) and ecommerce brands and operate this way, you aren’t being careless. You’ve mastered one of the most competitive paid channels available and are simply applying that expertise to a platform that operates on entirely different principles.

Let me fix it.

Why Account Structure Is Vital To Success

Every search query in Google is a person telling you something – not a demographic or an interest category inferred from content they’ve engaged with. It is an explicit, real-time signal that someone is looking for what you offer right now.

That signal is the foundation of everything Google Ads is. Smart Bidding reads it, query matching acts on it, the auction gives it weight, and your campaign structure puts you in a position to capitalize on it.

This is why structure in Google Ads carries more consequence than it does on many other paid channels. Campaigns without clear segmentation and defined boundaries prevent the algorithm from learning efficiently. This spreads budget across queries that don’t reflect the same intent and makes you compete against yourself, leading to outcomes that don’t map to your actual business goals.

The other dimension is economics. Different products carry different margins, average order values, and conversion rates. A structure that treats all of them the same can’t divert spend toward products where it actually makes sense. You end up with an account that converts but doesn’t necessarily generate optimal returns.

And here’s a secret: Sometimes I don’t run PMax at all. And if I do, I set it up in a way where it’s not going to just recycle Meta traffic but will focus on as much net new as possible (even blocking brand, retargeting, and existing customers can’t get you to 100% net new). But if you have a very heavy Meta presence and PMax looks like it will over-index on recycling traffic, I’d move towards Shopping so we can move the needle.

3 Mistakes That Erode Efficiency For Google Ecommerce

1. Launching Every Campaign Type At Once

The instinct to go broad from day one is understandable. You have products to sell with multiple campaign types available to you and a budget ready to deploy. So you build out brand Search, Shopping, Performance Max, and YouTube, and wait for the data to come in.

The problem is that each of those campaigns needs impressions, clicks, and conversions to learn. When you split a less-than-astronomical budget across five campaign types, none of them gets enough volume to learn efficiently. Visibility is low across the board, data is slow to compound, and Google’s machine learning systems are starved of the information they need to improve your account.

Your account is running, but it isn’t moving. At the end of the quarter, you’ll still have no meaningful insights and won’t be able to optimize with confidence.

A smarter approach could be to start with just a couple of campaigns, like Search plus Shopping. This lets you get wider product visibility without being constrained by budget. Once those campaigns have data behind them and are generating returns, you layer in PMax, YouTube, and other formats one by one.

This way, each new move has a foundation to build on rather than competing for scraps.

2. Putting The Same Products In Multiple Campaigns

When your flagship product lives across multiple campaigns, they compete against each other in the same auction. That means a split budget, divided impressions, and not enough conversion momentum for any campaign to become meaningfully better.

Reporting is just as damaging. Sales come through, but you can’t tell which campaign was responsible. Attribution, which is already murky when two platforms are involved, gets harder. And optimization decisions get made with incomplete data.

Clean product segmentation across your account solves all three problems. Each product has a home, which makes performance readable. And when something isn’t working, you know exactly where to look.

3. Segmenting Performance Max Asset Groups By Audience Signal

Performance Max gives you audience signals as an input – customer lists, past purchasers, site visitors. The temptation is to use those signals as the basis for how you divide your asset groups. One group for past buyers, one for prospecting, one for lapsed customers.

The problem is that audience membership has nothing to do with the economics of what you’re selling. A past buyer and a new visitor can both be in the market for your highest-margin product. Structuring asset groups around who they are rather than what you’re selling means your budget isn’t organized around the products that actually matter most to your business.

A more effective approach is to build asset groups around shared product themes – bestsellers, new releases, bundles, seasonal offers. This way, the creative, the budget, and the optimization signal are all pointed at a coherent set of products with similar business value. Performance Max can still find the right audience. Your job is to give it the right product context to work with.

3 Proven Examples Of Google Ads Account Structure For Ecommerce

Example 1: Single-Product DTC Brand

A brand selling one hero product with a few variants (sizes, colors, or bundles) doesn’t need a complex account structure, just a disciplined one.

Start with two campaigns:

  • Branded search captures anyone searching for you by name (high intent), protects your brand equity, and tends to convert at a lower cost – so remember not to use automated bidding.
  • Either Performance Max or Shopping to drive product discovery.
  • If you choose PMax, divide asset groups by variant type rather than audience: one for the core product, one for bundles, one for any subscription or multi-unit offers. This keeps creative and budget in line with how the product is actually sold rather than who you think is buying it.

Adding more retail campaigns or YouTube before the first two campaigns capture enough conversion data only splinters your budget and stops the algorithm from learning anything meaningful to optimize against.

Example 2: Multi-Product DTC Brand With Bestsellers

Brands with larger catalogs make a common structural mistake: treating all SKUs equally. A single PMax campaign with one asset group covering 40 items gives Google no basis for prioritization and will spend where it finds the path of least resistance, which isn’t always where your margins are.

The better approach is to build asset groups around product tiers.

  • Bestsellers – products with the strongest sales velocity and healthiest margins – get their own asset group with dedicated creative and the largest share of budget.
  • New releases get a separate asset group because they need impression volume to gather data and shouldn’t compete directly with proven performers.
  • Include lower-margin, specialty, or slow-moving SKUs, but cap their spend or exclude them from PMax entirely and handle them through a Shopping campaign where you have more direct control.

This structure makes performance readable by economic impact level. When a bestseller starts to slip, you see it immediately. And when a new release gains traction, you can promote it without disrupting the rest of the account.

Example 3: Seasonal DTC Brands

For brands with strong seasonal demand, like gifting or back to school, the structural challenge is running seasonal campaigns without damaging the learning of evergreen ones. The approach here is to treat seasonal pushes as additions to the account, not replacements.

  • Evergreen PMax stays live and funded at a baseline level throughout the year.
  • When a seasonal moment approaches, a separate PMax campaign is layered on with its own budget, asset groups built around the seasonal offer, and a defined run window.
  • Seasonal spend is then contained so that when it ends, the evergreen campaign’s learning history is unaffected.
  • When the seasonal campaign winds down, asset groups are paused rather than deleted. Conversion data accumulated during each period is preserved and available when the next seasonal cycle begins, which shortens the relearning period significantly compared to building a new campaign from scratch each time.

Make This Read Worthwhile: Product Segmentation Exercise

Meta finds customers by matching your offer to people’s interests. Google finds customers who are actively looking. What both platforms share is that the systems are increasingly in charge of the operational side: Smart Bidding, Advantage+, Performance Max. These tools make decisions about who sees your ads, when, and at what cost. The advertiser’s job has shifted from button pusher to signal architect.

On Google, that starts with how your campaigns and product/asset groups are organized.

Your Next Step To Value

Before you change any settings or adjust any budgets, try this product segmentation exercise.

  • Pull your catalog and group SKUs by shared characteristics: bestsellers, new releases, bundles, seasonal offers, margin tiers. The goal is to understand which products belong together and which need their own dedicated focus (a rough sketch of this grouping follows after this list).
  • Once you have that, look at whether retargeting is siloed or folded into your broader activity. It should be a standalone campaign; blending it with prospecting dilutes performance data and makes it harder to read what’s actually driving new customer acquisition.
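
As a starting point for that first step, here is a rough TypeScript sketch of the grouping. The Sku shape and every threshold in it are placeholders; the point is simply that each SKU lands in exactly one tier, so budget and creative can be organized around tiers rather than individual products.

```typescript
// A hypothetical catalog entry; adapt the fields to your own data.
interface Sku {
  id: string;
  monthlySales: number;
  marginPct: number;
  daysSinceLaunch: number;
  isBundle: boolean;
  seasonalWindow?: string; // e.g. "q4-gifting"
}

type Tier = "seasonal" | "bundle" | "new-release" | "bestseller" | "long-tail";

// Thresholds are illustrative; tune them to your catalog's economics.
function tierOf(sku: Sku): Tier {
  if (sku.seasonalWindow) return "seasonal";
  if (sku.isBundle) return "bundle";
  if (sku.daysSinceLaunch <= 60) return "new-release";
  if (sku.monthlySales >= 100 && sku.marginPct >= 40) return "bestseller";
  return "long-tail";
}

// Each tier then maps naturally onto its own asset group or campaign.
function segmentCatalog(catalog: Sku[]): Map<Tier, Sku[]> {
  const groups = new Map<Tier, Sku[]>();
  for (const sku of catalog) {
    const tier = tierOf(sku);
    groups.set(tier, [...(groups.get(tier) ?? []), sku]);
  }
  return groups;
}
```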

These two steps alone will give you a clearer foundation than many DTC brands have as they start layering in Google Ads as a channel.

Featured Image: Summit Art Creations/Shutterstock

68 Million AI Crawler Visits Show What Drives AI Search Visibility via @sejournal, @martinibuster

A new analysis of 858,457 sites hosted on the Duda platform shows how AI crawlers are interacting with websites at scale. The data offers a clearer view of how crawling activity is growing and what SEOs and businesses should do to increase traffic from AI search.

AI Crawling Has Already Reached Scale

AI crawling is growing quickly, with more requests tied to real-time answers and most of that activity coming from a single provider. The data reveals a pattern showing which sites are being crawled and, more importantly, why.

Year-Over-Year Growth In LLM Referrals

LLM referral traffic has increased sharply over the past year, with multiple platforms showing meaningful gains from very different starting points.

AI Referral Traffic Patterns

  • Total LLM referrals: 93,484 to 161,469 (+72.7%)
  • ChatGPT: 81,652 to 136,095 (+66.7%)
  • Claude: 106 to 2,488 (23x growth)
  • Copilot: 22 to 9,560 (from near-zero)
  • Perplexity: 11,533 to 13,157 (+14.1%)

Growth is not happening evenly, but across the board, referral traffic from AI systems is increasing. That makes AI-generated discovery a growing source of traffic, not a marginal one.

Crawlers Are Increasingly Fetching Content To Ground Answers

AI crawlers are no longer used primarily for indexing, with most activity now tied to retrieving content in real time to generate answers for users.

Most crawling is now happening in response to user queries rather than for building an index, which changes how content is accessed and used.

  • User Fetch (real-time answers): 56.9% of all crawler activity, driven almost entirely by ChatGPT
  • Training (model learning): 28.8%, split across GPTBot and other model crawlers
  • Discovery (content indexing): 14.3%, distributed across multiple systems
  • ChatGPT User Fetch volume: ~39.8 million visits

The trends are largely driven by ChatGPT, which is responsible for nearly all real-time retrieval activity. That means the move toward answer-based crawling is not evenly distributed, but concentrated in one platform shaping how content is accessed. This trend may change with Google’s new Google-Agent crawler.
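
If you want to see this split in your own server logs, the crawlers announce themselves by user agent. Here is a minimal TypeScript sketch; the user-agent tokens (GPTBot, ChatGPT-User, OAI-SearchBot, ClaudeBot, PerplexityBot) are the publicly documented ones, but mapping each token to a bucket is our simplification of the study’s categories, not its methodology.

```typescript
type Bucket = "user-fetch" | "training" | "discovery" | "other";

// Classify a server-log user-agent string into the study's three
// categories. This mapping is a simplification for illustration.
function classifyCrawler(userAgent: string): Bucket {
  if (/ChatGPT-User/i.test(userAgent)) return "user-fetch"; // real-time answers
  if (/GPTBot|ClaudeBot/i.test(userAgent)) return "training"; // model learning
  if (/OAI-SearchBot|PerplexityBot/i.test(userAgent)) return "discovery"; // indexing
  return "other";
}

// Tally AI crawler visits by bucket from a list of user agents.
function tallyVisits(userAgents: string[]): Record<Bucket, number> {
  const counts: Record<Bucket, number> = {
    "user-fetch": 0,
    training: 0,
    discovery: 0,
    other: 0,
  };
  for (const ua of userAgents) {
    counts[classifyCrawler(ua)]++;
  }
  return counts;
}
```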

Market Concentration In AI Crawling

AI crawler activity is heavily concentrated, with OpenAI responsible for the vast majority of requests, reflecting its position as the primary tool users rely on to find and retrieve information.

  • OpenAI: 55.8 million visits (81.0%)
  • Anthropic (Claude): 11.5 million (16.6%)
  • Perplexity: 1.3 million (1.8%)
  • Google (Gemini): 380,000 (0.6%)

Most AI crawling activity comes from OpenAI, which aligns with ChatGPT’s role as a primary tool for finding and retrieving information. Claude follows at a much smaller share, suggesting a different usage pattern, while the rest of the market accounts for a minimal portion of crawler activity.

Scale And What That Actually Means

AI crawling is already operating across a large portion of the web, reaching hundreds of thousands of sites and generating tens of millions of requests in a single month.

More than half of all sites in the dataset received at least one AI crawler visit, showing that this activity is not limited to a small subset of websites.

  • Total sites analyzed: 858,457
  • Sites with at least one AI crawler visit: 506,910 (59%)
  • Total AI crawler visits (Feb 2026): 68.9 million

AI crawling is not isolated to high-profile or heavily trafficked sites. It is already widespread, with consistent activity across a majority of the web.

The Relationship Between Crawling And Real Traffic

Sites that allow AI systems to crawl them consistently show stronger engagement across multiple metrics.

What the data actually shows is:

  1. Sites that allow AI crawling receive significantly more human traffic
  2. Higher-traffic sites are more likely to be crawled

Sites that allow crawling by AI systems receive significantly more human traffic, averaging 527.7 sessions compared to 164.9 for sites that are not crawled. This does not establish causation, but it shows a clear alignment between sites that attract human visitors and how often AI systems revisit them.

  • Average human traffic (AI-crawled vs not): 527.7 vs 164.9 (3.2x higher)
  • Average form completions: 4.17 vs 1.57 (2.7x higher)
  • Average click-to-call: 8.62 vs 3.46 (2.5x higher)
  • Sites with 10K+ sessions: 90.5% crawl rate

AI systems are not discovering weak or inactive sites and lifting them up. They are returning to sites that already attract human visitors. For marketers, that shifts the focus away from trying to “get crawled” and toward building real audience demand, since visibility in AI systems appears to follow it.

What Correlates With More Crawling

The research compared sites that include specific third-party integrations, structured features, and content depth with those that do not and found which ones mattered most for AI crawler activity and referrals.

Across the dataset, 59% of sites received at least one AI crawler visit in February 2026. Sites that are crawled more often tend to combine three types of signals: external integrations, structured business data, and content depth.

1. External Integrations

These integrations connect the site to external systems that validate and distribute business information.

  • Yext integration: 97.1% crawl rate vs ~58% without (+38.9pp)
  • Reviews integrations: 89.8% crawl rate vs 58.8% without, 376.9 average crawler visits

Sites that are connected to external data and review systems are crawled more often and revisited more frequently, indicating that AI systems rely on these integrations as signals that a business is real, verifiable, and worth revisiting.

2. Structured Site Features And Business Data

These are built into the site and help AI systems understand and verify business identity.

  • Google Business Profile sync: 92.8% crawl rate vs 58.9% without, 415.6 average crawler visits
  • Local schema: 72.3% vs 55.2% (+17.1pp), 22.3% adoption
  • Dynamic pages: 69.4% vs 58.2% (+11.2pp)
  • Ecommerce: 54.2% vs 59.2% (-5.0pp)

Sites that clearly define their business identity and structure their information in a machine-readable way are crawled more often, showing that AI systems favor sites they can easily interpret, verify, and extract information from.

3. Content Depth (Volume Of Usable Data)

Sites with more content provide more opportunities for AI systems to retrieve, reference, and reuse information in responses.

  • Sites with 50+ blog posts: 1,373.7 average crawler visits vs 41.6 with no blog (~33x higher)

Sites with more content are crawled far more often, indicating that AI systems may return to sources that offer a larger supply of usable information to draw from when generating answers.

Local Business Schema Completeness = More Crawling

This part of the research focuses specifically on local business schema, comparing how the completeness of schema implementation for communicating business details relates to AI crawler activity. The fields measured include business name, phone number, address, hours, and social profiles.

  • No local schema fields: 55.2% crawl rate
  • 10–11 completed schema fields: 82% crawl rate
  • Sites with more complete local schema show a 26.8 percentage point higher crawl rate (82% vs 55.2%)

Sites that provide more complete local business information in structured form are crawled more often and receive more crawler visits. As more of these fields are filled in, both crawl rate and crawl frequency increase.

The data shows that clearly defined local business data makes a site easier for AI systems to identify, verify, and subsequently revisit – all prerequisites for receiving traffic from AI search.
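
As an illustration of what “more complete” looks like, here is a LocalBusiness JSON-LD object (shown as a TypeScript value) covering the fields the study measured: name, phone, address, hours, and social profiles. Every value is a placeholder.

```typescript
// Placeholder LocalBusiness structured data covering the fields the
// study measured. Real implementations should use actual business data.
const localBusinessJsonLd = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Plumbing Co",
  telephone: "+1-555-0100",
  address: {
    "@type": "PostalAddress",
    streetAddress: "123 Main St",
    addressLocality: "Springfield",
    addressRegion: "IL",
    postalCode: "62701",
    addressCountry: "US",
  },
  openingHoursSpecification: [
    {
      "@type": "OpeningHoursSpecification",
      dayOfWeek: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      opens: "08:00",
      closes: "17:00",
    },
  ],
  sameAs: [
    "https://www.facebook.com/example-plumbing",
    "https://www.linkedin.com/company/example-plumbing",
  ],
};
```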

Takeaways

AI crawling is a parallel method of content discovery, and the research shows clear patterns among the sites that crawlers visit most often.

  • AI crawling operates alongside traditional search, changing how content is accessed and reused
  • Sites with structured local signals, deeper content, and more complete schema are crawled more often
  • Multiple reinforcing signals appear together on the same sites, not in isolation
  • The data shows direction, not causation, but the patterns are consistent

The data shows that sites that make it easy for AI crawlers to index and revisit them tend to perform better. Sites that present clear, structured, and verifiable information, while continuing to build real audience demand, are more likely to be revisited by AI systems and to benefit from traffic generated through AI search.

Read the research: Duda study finds AI-optimized websites drive 320% more traffic to local businesses

Featured Image by Shutterstock/Preaapluem