ChatGPT Ads Now Offer CPC Bidding Between $3 And $5: Report via @sejournal, @MattGSouthern

Digiday reports that an early version of ChatGPT’s ads manager, available to a subset of pilot advertisers, now shows cost-per-click bids ranging from $3 to $5, based on screenshots reviewed and verified by the publication.

Until now, advertisers in the pilot have paid on a CPM basis, meaning a flat rate per 1,000 impressions served. CPC pricing lets buyers pay only when a user clicks. Digiday reported the option is available to marketers already testing advertising in the pilot, not as a broad rollout. OpenAI didn’t respond to Digiday’s request for comment.

Pricing Has Been Falling Since Launch

The CPC addition follows a drop in ChatGPT ad pricing since the pilot launched on February 9, 2026.

CPMs have fallen from $60 at launch to as low as $25 in some cases, per Digiday’s earlier reporting. Digiday also reported the minimum spend commitment has fallen from $250,000 at launch to $50,000, alongside the quiet release of a self-serve ads manager that gives a subset of pilot advertisers the ability to monitor impressions and clicks in real time.

What CPC Pricing Means For Buyers

CPM and CPC pricing serve different advertiser bases. Brand advertisers tend to plan around CPM. Performance marketers, who account for the majority of online ad spend, prefer to pay for clicks rather than impressions.
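To make the trade-off concrete, here is a back-of-envelope comparison of the two pricing models. All figures are hypothetical, using the price points reported elsewhere in this story; the click-through rate is an assumption, not a reported number.

```python
# Illustrative comparison: effective cost per click under CPM pricing
# vs. a direct CPC bid. All numbers are hypothetical.
cpm = 25.0      # dollars per 1,000 impressions (the low end reported)
cpc_bid = 4.0   # dollars per click (midpoint of the $3-$5 range)
ctr = 0.005     # assumed 0.5% click-through rate

# Under CPM, the implied cost per click depends entirely on CTR:
# $25 buys 1,000 impressions, which at 0.5% CTR yields 5 clicks.
implied_cpc = cpm / (1000 * ctr)  # $5.00 per click

# Break-even CTR: above this, CPM is the cheaper way to buy clicks.
break_even_ctr = cpm / (1000 * cpc_bid)  # 0.625%
```

The point for buyers: CPM-bought clicks get cheaper as CTR rises, while CPC pricing caps the cost per click regardless of how often the ad is shown.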

Adding CPC bidding opens the channel to a buyer category that has largely sat out the pilot. Nicole Greene, VP analyst at Gartner, told Digiday that the pricing change lets advertisers directly compare their results on OpenAI with those on other major platforms.

What ChatGPT clicks are worth depends on where they land relative to existing channels. According to ad agency Adthena (cited by Digiday), Meta CPCs run three to five times cheaper than Google Search, not because Meta’s inventory is worse, but because the intent behind those clicks is different. Social platform users tend to browse without a specific goal, while search users typically have one in mind.

The pricing drops ChatGPT into the same intent-and-value debate advertisers already face when comparing social clicks with search clicks.

Why This Matters

CPC bidding moves ChatGPT advertising into a territory where performance marketers can plan campaigns and compare costs directly against Google and Meta. Combined with the lower minimum spend, the channel is accessible to a wider buyer base than the enterprise tier that defined its launch.

SEJ’s Brooke Osmundson covered the implications for paid media teams in her analysis of whether ChatGPT Ads warrant real budget yet.

A CPM-only enterprise pilot has, in roughly 10 weeks, become a self-serve channel with a $50,000 minimum, lower CPMs, and now CPC pricing visible to a subset of advertisers. Each step down has opened the channel to a different category of buyer.

Looking Ahead

Paid media teams running search and social campaigns should compare ChatGPT's clicks against their existing channels on intent quality and conversion rates. Measurement tools are limited and inconsistent, so teams should plan for proxy measurement until OpenAI's reporting improves.

OpenAI is hiring its first advertising marketing science leader, per Digiday. Until that role is filled, advertisers will be evaluating ChatGPT clicks largely on faith.

Google Ads Makes Call Recording Default For AI Lead Calls via @sejournal, @MattGSouthern

Google Ads has enabled call recording by default for eligible call flows associated with AI-qualified call leads, with exceptions for prior opt-outs and certain sensitive verticals.

A new Google support page describes the feature, which uses AI to evaluate phone conversations instead of relying on call duration alone to count conversions.

What Changed

Google Ads previously classified a phone call as a conversion primarily based on its duration. Google’s documentation says the new system analyzes call recordings to identify signals of intent, such as a caller asking about specific services, scheduling a consultation, or indicating readiness to purchase.

Google describes the classification as tiered.

  • Primary signal: call recording. If recording is on, AI evaluates the conversation and only qualified calls count as conversions.
  • Secondary signal: call duration. If a call can’t be recorded, duration determines whether it counts.
  • Tertiary signal: ad interaction. If no Google forwarding number is available, ad interaction data is used.
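The tiered fallback Google describes could be sketched as follows. This is an illustrative model, not Google's implementation; the field names and the 60-second duration threshold are assumptions (Google lets advertisers adjust their own threshold).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallData:
    recording_available: bool
    ai_qualified: Optional[bool]    # AI verdict, only meaningful if recorded
    duration_seconds: Optional[int] # None if duration is unavailable
    had_ad_interaction: bool

# Hypothetical threshold; advertisers can adjust theirs in account settings.
DURATION_THRESHOLD = 60

def is_conversion(call: CallData) -> bool:
    """Sketch of the tiered classification described in Google's docs."""
    # Primary signal: if the call was recorded, trust the AI qualification.
    if call.recording_available:
        return bool(call.ai_qualified)
    # Secondary signal: fall back to call duration when no recording exists.
    if call.duration_seconds is not None:
        return call.duration_seconds >= DURATION_THRESHOLD
    # Tertiary signal: no forwarding number, so use ad interaction data.
    return call.had_ad_interaction
```

The practical consequence of this ordering: turning recording off doesn't remove conversion tracking, it just demotes the account to the duration-based tier.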

Call Details reports now include an AI-generated summary of each call and hashtags such as “#HighIntent” or “#ConsultationScheduled.”

Call Recording Defaults And Exceptions

Google’s settings page says call recording will remain off for advertisers who have already turned it off and for accounts Google has identified as operating in healthcare or financial services.

Advertisers in those categories can manually enable recording at any time, according to Google.

To turn recording off, advertisers can go to Admin > Account settings > Call ads > Call recording and select Off.

Where It Works

Call recording and AI-qualified conversions are currently limited to calls in which both the calling and receiving phone numbers are in the United States or Canada. Calls must route through a Google Forwarding Number, which requires call reporting to be enabled at the account level.

Only calls to call ads, call assets, and calls from website visits are eligible. Calls from location assets are not supported at this time.

Privacy And Compliance

Google’s settings page says callers will hear an automated message at the start of the call notifying them the conversation is being recorded for quality purposes. Advertisers agree to the Call Ads Supplemental Terms when using the feature and acknowledge they have given notice to employees or other parties who may participate in calls.

Google also says that recordings are used to evaluate lead quality, monitor spam and fraud, and improve the accuracy of conversion reporting.

Advertisers using call recording should review whether Google’s automated notification complies with their own legal obligations regarding recorded calls.

Why This Matters

Advertisers that don’t plan to use AI-qualified call leads are still producing recordings Google analyzes for lead quality, spam, and fraud, unless they turn recording off.

Smart Bidding now optimizes against AI-classified qualified calls when recording is on, and falls back to call duration when it isn’t.

Looking Ahead

Advertisers who prefer call duration as the primary signal can turn recording off in account settings. The duration threshold itself can be adjusted under Goals > Summary > Phone call leads > AI-qualified call leads.


Featured Image: El editorial/Shutterstock

The Ghost Citation Problem via @sejournal, @Kevin_Indig


When an AI answers a question using your content, it usually cites you with a source link. What it doesn’t do, 62% of the time, is say your name. The link is there. The brand mention is not. This is what I like to call a ghost citation: the AI uses your content but never says your name in the answer.

This week, I’m sharing:

  • Why being cited and being mentioned are two different outcomes that require different strategies.
  • Which LLMs name brands vs. which treat them as anonymous source material.
  • The query format and content type that produce 30x more brand mentions.

A note from Kevin: I’m a big fan of HubSpot’s Marketing Against the Grain. I had Kieran, one of the co-hosts, on my Tech Bound podcast back in 2023. Now, they launched a newsletter with smart experiments, fresh perspectives, and practical lessons on what’s working right now. So, I thought I would give a friendly shoutout: Check it out.

This analysis draws on 3,981 domains across 115 prompts, 14 countries, and four AI search engines (ChatGPT, Google AI Overviews, Gemini, AI Mode), using data from the Semrush AI Toolkit. Every appearance is tagged as “cited” (source link present) and/or “mentioned” (brand name appears in the answer text). The gap between those two states is the ghost citation problem.
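That tagging scheme translates directly into a small computation. A minimal sketch (the field names are illustrative, not the Semrush AI Toolkit's actual schema):

```python
def ghost_citation_stats(appearances):
    """Compute cited, mentioned, and ghost-citation rates.

    Each appearance is a dict with boolean 'cited' (source link present)
    and 'mentioned' (brand name appears in the answer text), mirroring
    the tagging described in the methodology.
    """
    total = len(appearances)
    cited = sum(a["cited"] for a in appearances)
    mentioned = sum(a["mentioned"] for a in appearances)
    # A ghost citation: source link present, brand name absent.
    ghosts = sum(a["cited"] and not a["mentioned"] for a in appearances)
    return {
        "cited_rate": cited / total,
        "mentioned_rate": mentioned / total,
        "ghost_share_of_citations": ghosts / cited if cited else 0.0,
    }

# Toy data: two ghost citations out of three citations.
rows = [
    {"cited": True, "mentioned": False},
    {"cited": True, "mentioned": True},
    {"cited": False, "mentioned": True},
    {"cited": True, "mentioned": False},
]
stats = ghost_citation_stats(rows)
```

In this toy set, two of the three cited appearances carry no brand mention, so the ghost share of citations is two-thirds.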

1. 62% Of Your Brand’s LLM Citations Are Functionally Invisible

Most brands assume being cited means being seen. The data says otherwise.

Image Credit: Kevin Indig

74.9% of domains were cited, and 38.3% mentioned. 61.7% of citations are ghost citations: the domain gets a source link but zero name recognition in the answer text.

Only 13.2% of appearances convert into both a citation and a mention. At the same time, no domain was left out entirely: every one was either cited, mentioned, or both.

2. Every LLM Shows A Different Behavior

The four AI engines treat citations and mentions in fundamentally different ways:

  • Gemini names brands in 83.7% of appearances, but only generates a citation link 21.4% of the time. It operates more like a conversationalist drawing on brand knowledge.
  • ChatGPT is the opposite: It cites 87.0% of the time but mentions brands in only 20.7% of answers, functioning more like an academic paper with footnotes.
  • Google AI Overviews (AIOs) sit in the middle but lean toward citation.
  • Google’s AI Mode offers about 17% more brand mentions than ChatGPT in its outputs, but also functions closer to an academic paper than its Gemini sibling.

For brands, this means Gemini visibility and ChatGPT visibility are not the same thing. (This data set showed clear evidence that there wasn’t much overlap between ChatGPT citations/mentions and Gemini citations/mentions for the same prompts.) Optimizing for one does not help with the other. There is no single “AI visibility metric.” There are at least four different behavioral systems running in parallel.

Image Credit: Kevin Indig

3. Strong Brands Get Named In The Text

A clear pattern emerges among domains appearing three or more times: Content aggregators and academic sources are cited repeatedly but almost never mentioned.

  • Medium.com was cited 16 times for the same prompts across three different engines and named zero times.
  • Wikipedia.org was cited 27 times and mentioned in only two answers, both times for the same conversational query (“What is the most dangerous creature in the world?”).
  • Wired.com, sciencedirect.com, harvard.edu: same pattern.

Consumer brands with strong public identities get mentioned in the output at a rate near 100%. The AI doesn’t feel the need to cite them; it names them outright. It knows the data about those brands came from somewhere, but doesn’t feel the need to tell users where. For publishers whose value proposition is information authority, this is a structural problem.

*A mention rate above 100% means the brand is named in the answer text even when it is not cited as a source link; the engine references the brand by name without linking to it. For this data set, being cited 10 times and mentioned 10 times equals 100%. A brand mentioned 12 times but cited only 10 times scores 120%.
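In code, the footnote's arithmetic looks like this (a sketch of the metric's definition as described above, not Semrush's implementation):

```python
def mention_rate(mentions: int, citations: int) -> float:
    """Mention rate as a percentage of citations.

    Values over 100 mean the brand was named more often than it was
    linked, i.e., it was mentioned in answers with no source link.
    """
    if citations == 0:
        return float("inf") if mentions else 0.0
    return 100.0 * mentions / citations
```

So a brand cited 10 times and mentioned 12 times scores 120%, exactly as in the footnote's example.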

Image Credit: Kevin Indig

4. LLMs Disagree On The Same Brand 22% Of The Time

454 prompt+domain combinations were tested across multiple engines. In 22% of those outputs (100 total), LLMs disagreed on whether to mention the brand:

  • Instagram.com was mentioned by ChatGPT and Gemini but only cited (not named) by Google.
  • Facebook.com was mentioned by Gemini in 3 out of 3 appearances.
  • Google AI cited Facebook 9 out of 9 times, but named it in only 1.

Image Credit: Kevin Indig

The same brand, the same query, but different engines and different outcomes. This matters for measurement: A brand can appear “visible” in one engine’s data while being completely anonymous in another. Aggregate AI visibility metrics mask this divergence.

5. In-Text Brand Mention Rates Vary By Geography

Controlling for the LLM, country-level differences in mention rates are meaningful:

  • India and Sweden show the highest mention rates (50%), suggesting more conversational or brand-forward query patterns in those markets.
  • Italy, Brazil, and the Netherlands show the lowest mention rates (18-22%), with very high citation rates (82-94%).
  • The UK and Canada are mid-range but above the global average.

*Note: the dataset uses localized prompts confirmed by Semrush, so language is not a confound.

Image Credit: Kevin Indig

Being Cited And Being Named Are Not The Same, And Require A Different Approach

From this analysis, four takeaways stood out to me the most for brands and their content strategies:

1. Being cited means an AI is drawing on your content. Being mentioned means it is naming you. We don’t yet know enough about the implications of mentions and citations, but we can say for sure that there’s a system that decides when you’re cited vs. mentioned.

2. Your strategy must be LLM-specific. A Gemini-first strategy is different from a ChatGPT-first strategy. Any AI visibility report that aggregates across LLMs is misleading.

3. Comparative content gets brands named. Informational content feeds the machine anonymously. If the goal is brand mentions, not just citations, focus your content strategy toward evaluation, comparison, and recommendation.

4. Prompt format matters. Brands should map not just which topics they want to appear in, but specifically which phrasing patterns produce mentions vs. ghost citations. Short conversational queries and long structured queries behave like different products.

Methodology

Data source: Semrush AI Toolkit: 3,981 domain appearances across 115 prompts, 14 countries, and four AI search engines (ChatGPT, Google AI Overviews, Gemini, and Google AI Mode).

Every row in the dataset represents a domain that appeared in an AI answer. Each appearance is tagged as “cited” (the domain appears as a source link) and/or “mentioned” (the brand name appears in the answer text). The gap between those two states is what this analysis calls a ghost citation: the AI used your content but did not say your name.


Featured Image: Roman Samborskyi/Shutterstock; Paulo Bobita/Search Engine Journal

What’s The Biggest Technical SEO Blind Spot From Over-Relying On Tools? – Ask An SEO via @sejournal, @HelenPollitt1

We are fortunate to have a wide range of SEO tools available, designed to help us understand how our websites might be crawled, indexed, used, and ranked. They often have a similar interface of bold charts, color-coded alerts, and a score that sums up the “health” of your website, which appeals to those of us high-achievers who love to be graded.

But these tools can be a curse as well as a blessing, so today’s question is a really important one:

“What’s the biggest technical SEO blind spot caused by SEOs over-relying on tools instead of raw data?”

It’s the false sense of completeness. The belief that the tool is showing you the full picture, when in reality, you’re only seeing a representative model of it.

Everything else (mis-prioritization, conflicting insights, and misguided fixes) flows from that single issue.

Why Technical SEO Tools “Feel Complete” But Aren’t

Technical SEO tools are a critical part of an SEO’s toolkit. They provide insight into how a website is functioning, as well as how it may be perceived by users and search bots.

A Snapshot In Time Of The State Of Your Website

With a lot of the tools currently on the market, you are presented with a snapshot of the website at the point you set the crawler or report to run. This is helpful for spot-checking issues and fixes. It can be highly beneficial in spotting technical issues that could cause problems in the future, before they have made an impact.

However, they don’t necessarily show how issues have developed over time, or what might be the root cause.

Prioritized List Of Issues

The tools often help to cut through the noise of data by providing prioritized lists of issues. They may even give you a checklist of items to address. This can be very helpful for marketers who haven’t got much experience in SEO and need a hand knowing where to start.

All of these give the illusion that the tool is showing a complete picture of how a search engine perceives your site. But it’s far from accurate.

What’s Missing From Technical SEO Tools

Every tool is constricted in some way. They apply their own crawl limits, assumptions about site structure, prioritization algorithms, and data sampling or aggregation.

Even when tools integrate with each other, they are still stitching together partial views.

By contrast, raw data shows what actually happened, not what could happen or what a tool infers.

In technical SEO, raw data can include:

  • Server log files, which show how search engine bots actually crawl your site.
  • Google Search Console crawl, indexing, and performance reports.
  • Analytics data reflecting real user behavior.

Without these, you are often diagnosing a simulation of your site and not the real thing.

Joined Up Data

These tools will often only report on data from their own crawl findings. Sometimes it is possible to link tools together, so your crawler can ingest information from Google Search Console, or your keyword tracking tool uses information from Google Analytics. However, they are largely independent of each other.

This means you may well be missing critical information about your website by only looking at one or two of the tools. For a holistic understanding of a website’s potential or actual performance, multiple data sets may be needed.

For example, looking at a crawling tool will not necessarily give you clarity over how the website is currently being crawled by the search engines, just how it potentially could be crawled. For more accurate crawl data, you would need to look at the server log files.

Non-Comparable Metrics

The reverse of this issue is that using too many of these tools in parallel can lead to confusing perspectives on what is going well or not with the website. What do you do if the tools provide conflicting priorities? Or the number of issues doesn’t match up?

Looking at the data through the lens of the tool means there can be an extra layer added to the data that makes it not comparable. For example, sampling could be occurring, or a different prioritization algorithm used. This might result in two tools giving conflicting results or recommendations.

Some Tools Give Simulations Rather Than Actual Data

The other potential pitfall is that, sometimes, the data provided through these reports is simulated rather than actual data. Simulated “lab” data is not the same as actual bot or user data. This can lead to false assumptions and incorrect conclusions being drawn.

In this context, “simulated” doesn’t mean the data is fabricated. It means the tool is recreating conditions to estimate how a page might behave, rather than measuring what actually did happen.

A common example of lab vs. real data is found in speed tests. Tools like Lighthouse simulate page load performance under controlled conditions.

For example, a Lighthouse mobile test runs under throttled network conditions simulating a slow 4G connection. That lab result might show an LCP of 4.5s. But CrUX field data, reflecting real users across all their devices and connections, might show a 75th percentile LCP of 2.8s, because many of your actual visitors are on faster connections.

The lab result is helpful for debugging, but it doesn’t reflect the distribution of real user experiences in real-world scenarios.
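The lab-vs-field gap in that example can be reproduced with a toy calculation: a throttled lab run yields a single number, while the field metric is the 75th percentile across many real sessions. All values below are invented, and the percentile is a simple nearest-rank approximation rather than CrUX's exact method.

```python
# Invented LCP samples (seconds) from real user sessions (field data).
field_lcp = sorted([1.9, 2.1, 2.3, 2.4, 2.6, 2.8, 3.0, 3.3, 4.1, 5.2])
lab_lcp = 4.5  # a single throttled, Lighthouse-style lab run

# CrUX-style reporting uses the 75th percentile of real experiences.
p75_index = int(0.75 * (len(field_lcp) - 1))  # nearest-rank approximation
p75_lcp = field_lcp[p75_index]

# Here the lab run (4.5s) looks far worse than the field p75 (3.0s),
# because most real visitors are on faster connections than the
# throttled lab profile assumes.
```

The same logic runs in reverse, too: a fast developer machine can produce a lab number far better than what the 75th percentile of real users sees.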

Why This Is Important

Understanding the difference between the false sense of completeness that tools provide and the actual experience of users and bots captured in raw data can be critical.

As an example, a crawler could flag 200 pages with missing meta descriptions. It suggests you address these missing meta descriptions as a matter of urgency.

Looking at server logs reveals something different. Googlebot only crawls 50 of those pages. The remaining 150 are effectively undiscovered due to poor internal linking. GSC data shows impressions are concentrated on a small subset of the URLs.

If you follow the tool, you spend time writing 200 meta descriptions.

If you follow the raw data, you fix internal linking, thereby unlocking crawlability for 150 pages that currently don’t have visibility in the search engines at all.
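The meta-description example amounts to a set difference between what a crawler flagged and what the logs show bots actually fetched. A minimal sketch with invented URLs:

```python
# Hypothetical data: pages a crawler flagged for missing meta
# descriptions vs. pages Googlebot actually requested, as parsed
# from server log files. All URLs are invented.
flagged_missing_meta = {f"/page-{i}" for i in range(200)}
googlebot_crawled = {f"/page-{i}" for i in range(50)}

# Flagged pages that search engine bots never fetched at all:
never_crawled = flagged_missing_meta - googlebot_crawled
print(len(never_crawled))  # prints 150
```

Those 150 pages have a discovery problem, not a meta-description problem, which is exactly the distinction the tool's prioritized checklist hides.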

The Risk Of This Completeness Blind Spot

The “completeness” blind spot caused by over-reliance on technical tools has a lot of knock-on effects. Key aspects are overlooked and, as a result, time and effort are misdirected.

Losing Your Industry Context

Tools often make recommendations without the context of your industry or organization. When SEOs rely too much on the tools and not the data, they may not put on this additional contextual overlay that is important for a high-performing technical SEO strategy.

Optimizing For The Tool, Not Users

When following the recommendations of a tool rather than looking at the raw data itself, there can be a tendency to optimize for the “green tick” of the tool, and not what’s best for users. For example, any tool that provides a scoring system for technical health can lead SEOs to make changes to the site purely so the score goes up, even if it is actually detrimental to users or their search visibility.

Ignoring The Best Way Forward By Following The Tool

For complex situations that take a nuanced approach, there is a risk that overly relying on tools rather than the raw data can lead to SEOs ignoring the complexity of a situation in favor of following the tools’ recommendations. Think of times when you have needed to ignore a tool’s alerts or recommendations because following them would lead to pages on your site being indexed that shouldn’t, or pages being crawlable that you would rather not be. Without the overall context of your strategy for the site, tools cannot possibly know when a “noindex” is good or bad. Therefore, they tend to report in a very black-and-white manner, which can go against what is best for your site.

Final Thought

Overall, there is a very real risk that by accessing all of your technical SEO data only through tools, you may well be nudged towards taking actions that are not beneficial for your overall SEO goals at best, or at worst, you may be doing harm to your site.

More Resources:


Featured Image: Paulo Bobita/Search Engine Journal

The Yoast Perspective 2026: 7 things we learned from the SEO industry 

SEO in 2026 is expanding, not changing. Traditional search still matters, but SEO now also includes AI-driven discovery, social platforms, and chatbots. The principles (clarity, structure, authority, and relevance) are the same, but the platforms are multiplying. We surveyed 59 SEOs to see how they’re handling these changes.


Some have less than a year of experience. Others have been in the field for over a decade. Their answers show an industry figuring things out. A few are ahead of the curve, but most are still catching up.

The best SEOs aren’t just reacting to AI. They’re using it to strengthen what already works: technical foundations, high-quality content, and real authority. Others are stuck debating whether SEO should even keep its name. 

Here’s what stood out, and where Yoast fits into the conversation of what SEO means in 2026.  

You can find the full results, with more questions and deeper insights from Yoast’s principal SEOs, Carolyn Shelby and Alex Moss, in a downloadable PDF. Sign up below!


1. SEO isn’t dying, but evolving 

51% of respondents consider SEO to be “evolving”. 33% say it’s “thriving”. Only 10% think it’s “declining”. 

This is an interesting divide, but it’s not random. In the results, those with 10+ years of experience say SEO is thriving, while newcomers say it is not. It might be that experts know the landscape better and see change as a constant. 

Alex Moss’s take: “SEO has always adapted to changes in the SERP, and now it’s adapting again. The traditional SERP is gone, but SEO isn’t.” 

Carolyn Shelby’s take: “SEO is evolving, but not because its fundamentals are breaking. The interfaces between users and information are changing. Search is no longer confined to ten blue links, but the need for structured, relevant, trustworthy content hasn’t diminished.” 

The Yoast Perspective: We think SEO isn’t going anywhere, but there are changes happening. Traditional search from Google and Bing still drives traffic, but AI-driven discovery from LLM-powered assistants shapes perception and discovery. Therefore, the best SEOs don’t choose sides in this fight; they are mastering both directions. 

2. Keep the name Search Engine Optimization 

39% say SEO should be relabeled “Search Everywhere Optimization”. Only 32% want to keep “Search Engine Optimization”. 

There is big support for relabeling SEO; even among veterans, 41% prefer “Search Everywhere Optimization”. Of course, popularity alone doesn’t mean we should do it.

Alex Moss’s take: “The term ‘SEO’ will stay. The role will widen to include AI and other disciplines, but the name doesn’t need to change.” 

Carolyn Shelby’s take: “The term ‘SEO’ still holds shared meaning, credibility, and market recognition. There’s no strong evidence that rebranding the discipline itself is necessary or beneficial. Responses favoring ‘Search Everywhere Optimization’ reflect where SEO outcomes now surface, not a fundamentally different practice.” 

The Yoast Perspective: We at Yoast don’t think the term SEO is broken. Yes, there is a lot of change happening, especially in search, with AI overviews, chatbots, and social media platforms, but what about the core SEO work? You still have to focus on technical foundations, content quality, brand building, and authority.  

‘Search Everywhere Optimization’ might describe where SEO happens, but it doesn’t change what SEO is. The name ‘SEO’ still works, but we just need to explain how it applies to AI and social platforms. 

[Chart: preferred label for SEO, with 28.6% choosing “Search Everywhere Optimization”]

3. Good SEO is LLM optimization 

64% agree LLM optimization is essentially the same as traditional SEO. 59% aren’t even actively optimizing for LLMs. 

You might call this laziness, but you could also call it efficiency. It oftentimes comes down to the same thing. 

There’s also the 9% who strongly disagree with this statement. These respondents say LLMs prioritize synthesis over rankings, so focusing on structured data and brand mentions makes more sense for them. Of course, they are not wrong, but they don’t contradict what others have said. LLMs don’t require new tactics; they just reward the same SEO principles more strictly.

Alex Moss’s take: “If you’re undertaking good SEO, you’re already optimizing well for LLMs. The tactics don’t change—just the audience.” 

Carolyn Shelby’s take: “The same practices that make content discoverable and trustworthy for search engines also make it usable for LLMs. The confusion arises when people treat LLMs as a completely separate system. In reality, LLM visibility rewards clarity, relevance, and authority—all long-standing SEO principles.” 

LLM optimization isn’t a separate discipline because it’s SEO for AI. The same principles apply: clarity, structure, and authority. The difference? AI systems are less forgiving of mediocre content, so the bar for quality is higher. 

[Chart: “LLM optimization is the same as traditional SEO,” with 51.8% agreeing]

4. Rankings still matter, but not like they used to 

52% say rankings are “equally important” as before. 30% say they’re “less important”. 

This is a sensible shift. Google’s AI overviews and other zero-click results mean visibility does not equal traffic. For AI systems, rankings are still an authority signal.  

Alex Moss’s take: “Traditional rankings are still important because agents still search the web to ingest information. If you aren’t visible there, it’s less likely an agent will identify and select you into their responses.” 

Carolyn Shelby’s take: “Rankings still matter, but they are no longer the end goal. They are a proxy for visibility, not a guarantee of impact.” 

The Yoast Perspective: We need to stop obsessing over ranking number one and start tracking visibility and presence. Check whether you are cited in AI-driven answers, and try to be mentioned in industry discussions. AI visibility and citations are the new rankings.

[Chart: how important rankings are as a KPI in 2026, with 51.9% saying “equally important”]

5. Organic traffic is still king, but for how long? 

55% say “organic traffic” is their top metric. Yet 49% cite “reducing organic clicks” as their biggest challenge. 

We see this as the great paradox of 2026. Traffic is down, but the value of that traffic could be up. You might get fewer visits, but the clicks that do happen carry stronger intent.

Carolyn Shelby’s take: “As AI reduces the need for some visits, success looks like being represented correctly rather than merely visited. Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source, even when users don’t click.” 

Our advice:

  • Work on AI visibility, as this is the new SEO metric. Just as rankings show your visibility in traditional search, citations in AI overviews show your authority in AI-driven discovery. Track it alongside rankings and traffic 
  • Keep an eye on branded search volume to learn whether people are looking for you by name 
  • Monitor citations to see if others are referencing your content online 

[Chart: the most important SEO metrics in 2026, with 54.5% choosing organic traffic]

6. Content saturation is a big threat 

39% say “competing with AI-generated content” is their top challenge. Only 4% cite a “talent gap.” 

We know AI can write bad content. But it’s a bigger challenge when AI writes good enough content at scale. This will flood the web with noise, making it hard to penetrate. 

Alex Moss’s take: “AI-generated content is artificial. Humans connect with stories, not regurgitated lists.” 

Carolyn Shelby’s take: “AI doesn’t change what good content is, but just raises the bar. Mediocrity doesn’t just rank lower; it disappears.” 

Our advice: 

  • Focus on building your EEAT, because AI can’t fake real-world expertise and authority 
  • Prioritize quality over quantity, as a single great piece of content can beat ten average ones 
  • Use AI, but be careful and always use it as a tool, not as a replacement 

[Chart: biggest challenges in SEO in 2026, with 49% choosing reducing organic clicks]

7. Most SEOs are ignoring a fast-growing search channel 

Traditional search (Google/Bing) is still #1. But TikTok search ranks #5, lower than Amazon. 

This might be something of a blind spot for many. Younger generations use TikTok and other video platforms for entertainment, recommendations, tutorials, and even B2B advice.  

Alex Moss’s take: “Social platforms influence how LLMs perceive freshness and authority. Ignoring them means missing out on signals that AI systems value.”

Carolyn Shelby’s take: “You don’t need to rank on TikTok, but you do need to be discoverable there. LLMs scrape social platforms for real-world signals.”

The Yoast Perspective: SEO now includes social platforms like TikTok. You don’t need to rank there, but you do need to be discoverable, because LLMs scrape these platforms for fresh, authoritative content. A great video channel can boost your authority in AI responses.  

Our advice: 

  • Repurpose content for video platforms like TikTok and YouTube  
  • Check brand mentions in these platforms 
  • Improve your video SEO in general 

[Chart: which search channels SEOs are prioritizing most in 2026, with traditional search engines at number one]

What Yoast’s experts really think 

The data shows trends, but the real wisdom comes from Yoast’s SEO leaders, Carolyn Shelby and Alex Moss. Here is a small peek at the insights they share about the various debates:

On “Search Everywhere Optimization”:  

Alex: “The term ‘SEO’ will stay. The role will widen, but the name doesn’t need to change.”

Carolyn: “Rebranding risks fragmenting understanding. ‘SEO’ is already well-established outside the industry.” 

On the future of SEO metrics: 

Alex: “As we move from being seen to being selected, visits don’t hold the same value they used to. The business goal should be the most important metric.”

Carolyn: “Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source.” 

On rankings vs. influence:  

Alex: “Rankings still matter because agents search the web to ingest information.”

Carolyn: “Rankings are a proxy for visibility, not a guarantee of impact. Focus on presence.” 

On the role of SEOs in 2026: 

Alex: “100% all three: marketers, brand builders, and SEO specialists. Brand and marketing have become intertwined with SEO as our role expands.”

Carolyn: “A blended mindset is essential. SEO can’t operate in isolation from brand, product, or communications.” 

Do you want to read the full story? 

These insights are just a small taster for you. In the full Yoast SEO report, you’ll find much more:  

  • The full answers to all 25 questions 
  • In-depth commentary from Yoast’s SEO experts, Carolyn Shelby and Alex Moss 
  • Which metrics really matter in 2026 
  • Why backlinks are losing ground to citations 

Sign up and download it right away!

Google Adds New Task-Based Search Features via @sejournal, @martinibuster

Google introduced new features for Search that continue its evolution into a more task-oriented tool, enabling users to launch AI agents directly from AI Mode and complete more tasks. This is a trend that all SEOs and online businesses need to be aware of.

Rose Yao, a product leader on Search, posted about the new features on X. The first tool is a toggle that enables users to track hotel prices directly from the search bar.

Yao explained:

“To help you save $$, today we launched hotel price tracking on Search! Use the new tracking toggle to get an email if prices drop for your dream hotel. Available now, globally”

An accompanying official blog post further explained the new tool:

“You can already track hotel prices at the city level, and launching today, you can now track prices for individual hotels, too. To get started on desktop, head to Search and look up a specific hotel by name, then tap the new price tracking toggle. On mobile, you’ll find the price tracking option under the Prices tab after you search. Either way, you’ll get an email alert if rates change significantly during your chosen dates, so you can jump on those price drops and snag a great deal.”

Agentic Search From AI Mode

Google’s CEO, Sundar Pichai, recently shared that the future of search is task-based, with a reliance on AI agents that can complete tasks for users. This announcement brings Google Search closer to that paradigm by introducing agentic search directly from AI Mode. The new feature launches an AI agent from AI Mode that will call local stores.

Yao explained:

“Agentic calling in AI Mode for finding last-minute travel gear.

When you just need that *one thing* before you leave but don’t know who’s got it in stock, you can ask AI Mode to save you the stress. Just search for what you need “near me” and Google AI will call local stores directly to get the details you need.”

This feature has been available on Google Search since November 2025, but it’s now rolling out to AI Mode.

Canvas Tool

AI Mode in Search has a Canvas tool that can accomplish planning tasks for users. The official blog post describes it:

“AI Mode in Search can transform your scattered research into a cohesive travel plan. Just head to AI Mode, select the Canvas tool from the plus (+) menu and describe your ideal trip. AI Mode will craft a custom itinerary in the Canvas side panel, including options for flights and hotels, as well as local attractions laid out on a map.”

The results can be further refined by the user. Travel planning with the Canvas tool is currently only available in the United States.

Three Featured Travel Tools

Those are the three travel-related features that Yao announced on X. The official blog post lists seven features related to travel, not all of which are new. For example, saving a boarding pass to Google Wallet is not a new feature.

Google’s Seven Travel-Related Search Features

  1. Build a custom trip plan with AI Mode in Search
  2. Save money with hotel price tracking on Search
  3. Let Google take the hassle out of booking restaurants
  4. Ask Google to call nearby stores for last-minute shopping
  5. Translate and communicate with confidence
  6. Ask Maps for the best stops on your summer trips
  7. Make airport travel easier with Google Wallet

Transformation Of Search Continues

The main takeaways are:

  • Search is on a path toward becoming task-oriented
  • Features like hotel tracking, AI calling, and Canvas show Google handling real-world actions, not just queries
  • Sundar Pichai’s “task-based” vision is already live in product features, not theoretical
  • AI Mode acts as an execution layer, turning search into a tool that does things on behalf of users
  • Local intent is becoming more actionable, with AI directly interacting with businesses
  • The traditional “ten blue links” model is being replaced by an interface that organizes and completes workflows
  • Visibility in search is increasingly tied to whether your business can be used by these systems, not just found

Google Search is becoming less about answering queries and more about helping users with their everyday tasks. In that mode, the role of a website changes from a destination into a data source and service endpoint.

For marketers, that creates an opportunity to help businesses understand these changes and prepare for them.

If AI agents are calling stores, tracking prices, and assembling plans, then the winners are not just the best-ranked pages but the ones that use accurately structured HTML elements as well as Schema.org structured markup. The winners are the businesses whose data is structured, accessible, and actionable enough for those agents to use.

What this means:

  • Treat product availability, pricing, hours, and inventory as critical inputs, not just content
  • Ensure local listings, structured data, and third-party integrations are accurate and consistent
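
For businesses preparing for agentic search, the most concrete step is publishing machine-readable data. Below is a minimal sketch of Schema.org LocalBusiness markup, generated with Python. The store name, phone number, address, and hours are hypothetical placeholders, and this is an illustration of the general idea, not a format Google prescribes for these features.

```python
import json

# Minimal Schema.org LocalBusiness record that an AI agent could parse
# when checking hours, location, or a phone number to call.
# All business details below are hypothetical placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Outdoor Gear",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
    },
    "openingHours": "Mo-Sa 09:00-18:00",
}

# Embed as JSON-LD in the page's <head> so crawlers and agents can read it.
json_ld = (
    '<script type="application/ld+json">'
    + json.dumps(local_business)
    + "</script>"
)
print(json_ld)
```

Keeping fields like `telephone` and `openingHours` accurate matters more than the markup itself: an agent placing calls or tracking prices acts on whatever values it finds.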

Google Search is transforming into a task-based user interface. Task-based agentic search is not hype; it’s real, and these new features are part of that transformation. The old ten-blue-links paradigm is steadily fading away, and what’s replacing it is the concept of search as an interface for navigating the modern world.

Read more about Google’s task-based agentic search. On a related note, research based on 68 million AI crawler visits shows what successful websites do to drive better AI search performance for local business sites.

Featured Image by Shutterstock/Sergio Reis

Chinese tech workers are starting to train their AI doubles–and pushing back

Tech workers in China are being instructed by their bosses to train AI agents to replace them—and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. 

Earlier this month a GitHub project called Colleague Skill, which claimed workers could use it to “distill” their colleagues’ skills and personality traits and replicate them with an AI agent, went viral on Chinese social media. Though the project was created as a spoof, it struck a nerve among tech workers, a number of whom told MIT Technology Review that their bosses are encouraging them to document their workflows in order to automate specific tasks and processes using AI agent tools like OpenClaw or Claude Code. 

To set up Colleague Skill, a user names the coworker whose tasks they want to replicate and adds basic profile details. The tool then automatically imports chat history and files from Lark and DingTalk, both popular workplace apps in China, and generates reusable manuals describing that coworker’s duties—and even their unique quirks—for an AI agent to replicate. 

Colleague Skill was created by Tianyi Zhou, who works as an engineer at the Shanghai Artificial Intelligence Laboratory. Earlier this week he told Chinese outlet Southern Metropolis Daily that the project was started as a stunt, prompted by AI-related layoffs and by the growing tendency of companies to ask employees to automate themselves. He didn’t respond to requests for further comment.

Internet users have found humor in the idea behind the tool, joking about automating their coworkers before themselves. However, Colleague Skill’s virality has sparked a lot of debate about workers’ dignity and individuality in the age of AI.

After seeing Colleague Skill on social media, Amber Li, 27, a tech worker in Shanghai, used it to recreate a former coworker as a personal experiment. Within minutes, the tool created a file detailing how that person did their job. “It is surprisingly good,” Li says. “It even captures the person’s little quirks, like how they react and their punctuation habits.” With this skill, Li can use an AI agent as a new “coworker” that helps debug her code and replies instantly. It felt uncanny and uncomfortable, Li says. 

Even so, replacing coworkers with agents could become the norm. Since OpenClaw became a national craze, bosses in China have been pushing tech workers to experiment with agents. 

Although AI agents can take control of your computer, read and summarize news, reply to emails, and book restaurant reservations for you, tech workers on the ground say their utility has so far proven to be limited in business contexts. Asking employees to make manuals describing the minutiae of their day-to-day jobs the way Colleague Skill does is one way to help bridge that gap. 

Hancheng Cao, an assistant professor at Emory University who studies AI and work, believes that companies have good reasons to push employees to create work blueprints like these, beyond simply following a trend. “Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment,” he says.

To employees, though, making agents or even blueprints for them can feel strange and alienating. One software engineer, who spoke with MIT Technology Review anonymously because of concerns about their job security, trained an AI (not Colleague Skill) on their workflow and found that the process felt reductive—as if their work had been flattened into modules in a way that made them easier to replace. On social media, workers have turned to bleak humor to express similar feelings. In one comment on Rednote, a user wrote that “a cold farewell can be turned into warm tokens,” quipping that if they use Colleague Skill to distill their coworkers into tasks first, they themselves might survive a little longer.

The push for creating agents has also spurred clever countermeasures. Irritated by the idea of reducing a person to a skill, Koki Xu, 26, an AI product manager in Beijing, published an “anti-distillation” skill on GitHub on April 4. The tool, which took Xu about an hour to build, is designed to sabotage the process of creating workflows for agents. Users can choose between light, medium, and heavy sabotage modes depending on how closely their boss is observing the process, and the agent rewrites the material into generic, non-actionable language that would produce a less useful AI stand-in. A video Xu posted about the project went viral, drawing more than 5 million likes across platforms.

Xu told MIT Technology Review that she has been following the Colleague Skill trend from the start and that it has made her think about alienation, disempowerment, and broader implications for labor. “I originally wanted to write an op-ed, but decided it would be more useful to make something that pushes back against it,” she says.

Xu, who has undergraduate and master’s degrees in law, said the trend also raises legal questions. While a company may be able to argue that work chat histories and materials created on a work laptop are corporate property, a skill like this can also capture elements of personality, tone, and judgment, making ownership much less clear. She said she hopes Colleague Skill prompts more discussion about how to protect workers’ dignity and identity in the age of AI. “I believe it’s important to keep up with these trends so we (employees) can participate in shaping how they are used,” she says. Xu herself is an avid AI adopter, with seven OpenClaw agents set up across her personal and work devices.

Li, the tech worker in Shanghai, says her company has not yet found a way to replace actual workers with AI tools, largely because they remain unreliable and require constant supervision. “I don’t feel like my job is immediately at risk,” she says. “But I do feel that my value is being cheapened, and I don’t know what to do about it.”

Colossal Biosciences said it cloned red wolves. Is it for real?

If you want to capture something wolflike, it’s best to embark before dawn.

So on a morning this January, with the eastern horizon still pink-hued, I drove with two young scientists into a blanket of fog. Forty miles to the west, the industrial sprawl of Houston spawned a golden glow. Tanner Broussard’s old Toyota Tacoma bumped over the levee-top roads as killdeer, flushed from their rest, flew across the beams of his headlights. 

Broussard peered into the darkness, looking for traps. “I have one over here,” he said, slowing slightly. A master’s student at McNeese State University, he was quiet and contemplative, his bearded face half-hidden under a black ball cap. “Nothing on it,” he said, blandly. The truck rolled on.

Wolves and their relations—dogs, jackals, coyotes, and so on—are classed in the family Canidae, and the canid that dominated this landscape in eastern Texas was once the red wolf. But as soon as white settlers arrived on the continent, Canis rufus found itself under siege. The war on wolves “lasted 200 years,” federal researchers once put it, in a surprisingly evocative report. “The wolf lost.” By 1980, the red wolf was declared extinct in the wild, its population reduced to a small captive breeding population.

Still, for decades afterward, people noted that strange wolflike creatures persisted along the Gulf Coast. Finally, in 2018, scientists confirmed that some local coyotes were more than coyotes: They were taller, long-legged, their coats shaded with hints of cinnamon. These animals contained relict red wolf genes. They became known as the ghost wolves.

Broussard grew up in southwest Louisiana, watching coyotes trot across his parents’ ranch. The thrilling fact that these might have been not just coyotes but something more? That reset a rambling academic career. In 2023, Broussard had recently returned to college after a seven-year pause, and his budding obsession with wolves narrowed his focus. Before he finished his bachelor’s degree, he began to supply field data to a prominent conservation nonprofit.

a wolf pup chews on a terrycloth toy
The American red wolf, Canis rufus, is the most endangered wolf species in the world. This pup is one of four animals said to be clones of this native North American species.
COURTESY OF COLOSSAL BIOSCIENCES

Then, last year, just before he began his master’s studies, he woke to disconcerting news. A startup called Colossal Biosciences claimed to have resuscitated the dire wolf, a large canid that went extinct more than 10,000 years ago. Pundits debated the utility of the project and whether the clones—technically, gray wolves with some genetic tweaks—could really be called dire wolves. But what mattered to Broussard was Colossal’s simultaneous announcement that it had cloned four red wolves.  

“That surprised pretty much everybody in the wolf community,” Broussard said as we toured the wildlife refuge where he’d set his traps. The Association of Zoos and Aquariums runs a program that sustains red wolves through captive breeding; its leadership had no idea a cloning project was underway. Nor did ecologist Joey Hinton, one of Broussard’s advisors, who had trapped the canids Colossal used to source the DNA for its clones. Some of Hinton’s former partners were collaborating with the company, but he didn’t know that clones were on the table.

There was already disagreement among scientists about the entire idea of de-extinction. Now Colossal had made these mystery clones, whose location was kept secret. Even the purpose of the clones was murky to some scientists; just how they might restore red wolf populations was unclear. 

Red wolves had always been a contentious species, hard for scientists to pin down. The red wolf research community was already marked by the inevitable interpersonal tensions of a small and passionate group. Now Colossal’s clones became one more lightning rod. Perhaps the most curious question, though, was whether the company had cloned red wolves at all. 


You can think of the red wolf as the wolf of the East—an apex predator that once roamed the forests and grasslands and marshes everywhere from Texas to Illinois to New York. Smaller than a gray wolf (though a good bit larger than a coyote), this was a sleek beast, with, according to one old field guide, a “cunning fox-like appearance”: long body, long legs; clearly built to run across long distances. Its coat was smooth and flat and came in many colors: a reddish tone that comes out in the right light, yes, but also, despite the name, white and gray and, in certain regions and populations, an ominous all black.

We know these details thanks to a few notes from early naturalists. As writer Andrew Moore writes in his new book, The Beasts of the East, by the time a mammalogist decided to class these eastern wolves as a standalone species in the 1930s, the red wolf had been extirpated from the East Coast and was rapidly dwindling across its range. Working with remnant skulls and other specimens, the mammalogist chose the name red wolf—which was later enshrined with the Latinate Canis rufus—because that’s what these wolves were called in the last place they survived. 

The looming extinction of the red wolf turned out to be a good thing for coyotes. Canis latrans is a distant relative of wolves that split away from a common ancestor thousands of years ago and might be considered, as one canid biologist put it to me, the “wolf of the Anthropocene.” Their smaller size means they need less food and can survive in smaller and more fragmented territory, the kind that modern humans tend to build. 

Red wolves had kept coyotes out of eastern America, outcompeting them for prey. Now, as the wolves declined, the coyotes began to slip in. The last red wolves, which lived in Louisiana and Texas, decided a strange and smaller mate was preferable to no mate at all. Soon the territory became a genetic jumble, home to both wolves and coyotes and hybrids that, after several generations of intermixing, came in every shade between. Scientists call such a population a “hybrid swarm,” and it poses a genetic threat to the declining species: As more coyotes poured east, and as all the canids kept interbreeding, there would be nothing that was “purely” wolf. 

Ron Wooten surveys a location on the edge of Galveston Island State Park in Texas. In 2016, Wooten’s photographs of oversized local coyotes got the attention of Joey Hinton, then a postdoctoral researcher at the University of Georgia.
TRISTAN SPINSKI

For years, no one seemed to notice. Perhaps trappers in the region mistook the new hybrids for wolves—or were happy to take the higher bounty that a wolf pelt earned. Finally, though, by the 1960s, as the concept of endangered species first emerged, biologists began to worry for the disappearing wolf. 

The best solution they could come up with was a program of mass extermination. Over several years, trappers rounded up hundreds of canids in Texas and Louisiana. Those deemed true red wolves (on the basis of their howls and skull shape) were whisked away to breed in captivity. Most of the rest were euthanized. In 1980, the red wolf was declared extinct in the wild. To put it plainly: The red wolf was wiped out intentionally, in a roundabout effort to keep it alive.

Just 14 individuals survived this gauntlet; today’s wolves descend from 12 of those. They became the ark, the source material for the few hundred red wolves that live today. There are about 280 in the “Species Survival Plan” population, living in captivity, and another 30 or so that roam a federal refuge in coastal North Carolina, and that the government deems “nonessential” and “experimental.” According to the US Fish and Wildlife Service, to be classified as a representative of the protected entity known as Canis rufus, an animal must trace at least 87.5% of its lineage to the 12 founders. 
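
That 87.5% threshold is exactly seven-eighths, the fraction a simple backcrossing model produces; the model here is an illustrative assumption, not something the agency spells out. A first-generation hybrid carries one-half founder lineage, and each subsequent pairing with a pure founder-lineage wolf halves the remaining outside fraction:

```latex
% Illustrative backcrossing arithmetic (assumed model, not official policy):
% an F1 hybrid is 1/2 founder lineage; each backcross halves the outside share.
\[
  1 - \left(\tfrac{1}{2}\right)^{3} \;=\; \tfrac{7}{8} \;=\; 87.5\%
\]
```

In other words, under this reading the cutoff admits an animal two backcrosses removed from a first-generation hybrid, but nothing more mixed than that.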

The scientist who led this trapping-and-breeding program understood that the federal government would be narrowing the red wolf’s gene pool precipitously—so much so that the result could be an entirely new species. None of those notably black wolves persisted in the new population, for example. But what other choice existed? A new kind of wolf, free of the taint of the invading coyote, seemed better than no wolf at all.


After I learned about Colossal’s clones, I decided to travel to eastern Texas. The clones were hidden away on an unnamed refuge, but on this coastline, I might be able to at least see the animals that provided their genetic material. I arrived in the small town of Winnie on a balmy afternoon in January and met up with Broussard and another graduate student, Patrick Cunningham, at a Tex-Mex joint to discuss the challenges of studying red wolves.

“We don’t have a good reference genome,” Cunningham said. We can collect DNA from the descendants of the 12 founders, but not from the countless wolves that had been killed. It’s difficult to extract usable DNA from old samples. So our picture of what the species used to look like is limited. 

Studies of the genes we do have, meanwhile, have proved controversial. When a Princeton geneticist named Bridgett vonHoldt dug into the genome of the Species Survival Plan population, she found little about their DNA that could set them apart from other wolflike American canids. In 2016, in a paper in Science Advances, vonHoldt and her coauthors wondered if there ever really was a separate southern wolf species. Perhaps the 12 founders were just coyotes injected with some smaller portion of wolf.

Her paper called for complex new interpretations of the Endangered Species Act. We should, she wrote, focus less on species and more on the function a group of animals performs. The red wolves deserved protection, then, as creatures that filled the same role as truly endangered wolves and carried some of their genetics. Nonetheless, for Canis rufus, the timing of the paper was bad news.

The red wolves roaming that federal reserve in North Carolina are supposed to be a first step toward the species’ return to the wild. But some locals never liked the idea of living alongside wolves. By 2016, state officials had turned against the recovery program and were requesting its termination. The wild population, which had included as many as 120 a few years earlier, was falling. But the US Fish and Wildlife Service had paused further releases of wolves. Now a group of scientists, led by vonHoldt, was saying that the red wolf showed “a lack of unique ancestry.” Why spend money, some people wondered, on a species that does not exist? 

Part of the problem was that the concept of a “species” is less sturdy than your high school biology teacher might have led you to believe. The most familiar definition is that a species consists of animals that can produce fertile offspring. But that’s a rule various species of canids violate all the time; it’s long been clear that North America’s soup of Canis genes is something less like a family tree and more like a river—one that’s broken by islands and sandbars into many braided channels that split and merge and re-split.

VonHoldt suggested that the modern red wolf is a channel in that river, part wolf and part coyote, that appeared surprisingly recently. But a year after her study came out, other researchers claimed that her data, if interpreted differently, could suggest that the red wolf braid had emerged tens of thousands of years ago, meaning this was a species that had long been on its own evolutionary journey. 

These nuances were confusing for the policymakers who oversaw actual, living animals. “Congress was just like, ‘What is going on?’” Cunningham said. “‘Why is there not just a simple explanation for what this thing is?’”

Given the policy implications, the National Academies of Sciences, Engineering, and Medicine tasked a panel of scientists with finding that simple answer. Their report, published in 2019, declared that the red wolf is, by virtue of its appearance and seemingly long-standing isolated population, a species. As their study got underway, though, a new question was arising: What to make of the strange canids on the Gulf Coast, those today called the ghost wolves?


The path to that name began in 2008, when a photographer from Galveston Island, Texas, grew obsessed with the oversized local coyotes. He began to take photos of the packs, which he distributed to scientists, seeking answers: What were they? By 2016, the photos had reached Joey Hinton, then a postdoctoral researcher at the University of Georgia.

Hinton had spent more than a decade trapping wolves and coyotes in North Carolina, and his work had always focused on live animals, especially visual ways to distinguish red wolves from coyotes. So he was a good choice for helping the photographer, Ron Wooten, figure out the status of the canids. In his freezer Wooten also had tissue samples he’d collected from road-killed coyotes. These could be used by a geneticist to give a fuller picture of the canids’ ancestry. So vonHoldt was brought in too. The result was a 2018 paper, with Hinton as a coauthor, that identified the Galveston Island canids as at least part red wolf.

These canids were not, to be clear, actual red wolves; no canid on the Gulf Coast is descended from the government’s 12 canonical founders, so under current policy, none can be officially classified as a wolf. Subsequent studies have found that, on average, the ancestry of the region’s canids is less than half red wolf, and often far less. In scientific terms, the red wolf had introgressed into the Gulf Coast population—its genes had leaked across the species boundary and lodged themselves in a different population.

Hinton, vonHoldt, and their coauthors also noted the presence of what they called “ghost alleles”—DNA sequences unknown in any other named species. The Occam’s razor assumption was that, in these already wolfy coyotes, these sequences likely represented Canis rufus genetics that had not been captured in the sweep of the marsh that yielded the Species Survival Plan population. Since so much of the red wolf gene pool had been lost, these genes seemed to be a potential resource for the species—a way to expand its diversity. When the New York Times covered this discovery a few years later, the headline popularized the “ghost wolf” moniker that has proved so indelible. 

As it happened, a separate team, focused on canids in and around federally protected marsh in Louisiana, published a similar paper in 2018, at nearly the same time. The twin discoveries raised new questions—What should we make of these creatures, the latest branch in the canid river? What do they mean for the wolves in North Carolina?—and helped researchers secure new funding.

In 2020, vonHoldt and Kristin Brzeski, a former postdoc under vonHoldt and now a professor at Michigan Technological University, launched what they called the Gulf Coast Canine Project. Brzeski, who led the field work, hired Hinton to do much of the canid trapping and sample collection. In 2022, vonHoldt, Hinton, and Brzeski were all coauthors of another paper that identified even more red-wolf-descended canids in Louisiana and noted a positive correlation between red wolf ancestry and body mass—the more red wolf genes, the bigger the animal. The paper also suggested that given this newly discovered reservoir of red wolf DNA, “genomic technologies” could prove useful in the long-term survival of the species.

Bridgett vonHoldt (left) and Kristin Brzeski (center) visit a location where canids have been spotted with an animal control worker.
TRISTAN SPINSKI

VonHoldt and Brzeski eventually conceived of an ambitious project. They hoped that by carefully matching the most wolf-­descended canids and breeding them together, over three generations they’d increase the proportion of red wolf genes—de-introgression. “I’m expecting, based on these pairings of animals, that I can stitch together the puzzle pieces,” vonHoldt told me recently. “We are very likely to get puppies each generation that are higher and higher red wolf content”—enough wolf content, she hopes, to eventually win her permission to breed the resulting animals with the Species Survival Plan population of red wolves. They’d essentially be adding a new founder to the limited lineage.

Hinton told me he felt he’d been kept in the dark about the de-introgression idea. He was also worried, he says, to learn that Colossal Biosciences hovered in the background. (In a draft proposal for the project, vonHoldt indicated that Colossal would be in charge of “live capture.”) Hinton says he was not comfortable collecting materials for a for-profit company that has to keep its shareholders happy. 

Hinton says he reached out to state and federal officials and found they knew little about the project. (The US Fish and Wildlife Service declined to make anyone available for an interview for this story, and the Louisiana Department of Wildlife and Fisheries did not reply to requests for comment.) He knew the group’s next phone call would be difficult, and indeed it was. He wound up speaking one-on-one with vonHoldt for at least half an hour.

“We didn’t reach an agreement,” he says. After the call, he sent her a text: He was exiting the project. He believes that had Colossal not been involved, they’d all still be working as a team. Both vonHoldt and Brzeski declined to comment on what felt to them like a matter of interpersonal relationships rather than a scientific dispute. “There were challenges over time, and the tone and manner of the interactions became increasingly difficult to navigate productively,” Brzeski said in an email. 


Colossal was cofounded in 2021 by George Church, an eminent Harvard geneticist who, thanks to investors, could finally embark on a long-discussed dream. He wanted to make de-extinction a reality—using CRISPR gene-editing technology to, say, turn a modern elephant into something like the extinct woolly mammoth. The concept has drawn skepticism from the beginning—at best it would only be possible to make something like a woolly mammoth. Was there any point to that? Some scientists note that genes alone do not teach an animal how to exist in the world; indeed, since social structures affect how genes are expressed, an animal without parents may not effectively fill its ecological niche.

Less reproachable, though, was Colossal’s interest in partnering with scientists who, like vonHoldt and Brzeski, focus on extant species that are endangered. This gave more heft to Colossal’s gee-whiz de-extinction projects: They would, along the way, supply technology that could save our natural world.

For red wolves, such technologies could offer a quick way to expand the limited gene pool. Through genetic engineering, Colossal could take clones of the Gulf Coast canids and tune up the wolf, tune down the coyote. It would be a high-tech shortcut past vonHoldt and Brzeski’s careful breeding program. “You can do the same thing much more precisely, much more quickly, much more efficiently, in vitro,” says Matt James, Colossal’s chief animal officer and the executive director of the Colossal Foundation, the company’s nonprofit arm. VonHoldt notes that the old-fashioned approach, with breeding, means she has to take a few individual canids out of the wild, into captivity—never ideal but, in her view, a worthwhile price for progress. The advantage of cloning, which Colossal has managed to do with blood samples alone, is that the wild canid populations can be kept intact. 

VonHoldt has always been an advocate for wolves. Indeed, when she hypothesized that the red wolf had hybrid origins, in 2016, she’d framed it as an argument for protecting the gray wolf, which the federal government was considering removing from the Endangered Species List. (In short: If all wolves were one wolf, then it was undeniable that the species’ range had contracted precipitously.) But she’d grown frustrated with the federal government’s efforts to restore the red wolf, which after half a century had seen few meaningful successes, she says. 

VonHoldt joined Colossal’s scientific advisory board in 2023. “I love the bold, the shock and awe,” she told me, explaining her decision. She saw the fact that Colossal sparked controversy as an asset, given the problems she sees in conservation: “Get something out there. Start pushing buttons and start forcing these conversations,” she says. The red wolf was akin to a terminal patient who was ready to accept any and all therapies, however experimental. Why not embrace biotech? 

She also notes that the federal budget for endangered species conservation is incredibly limited. Rely only on that money and “we can kiss our world goodbye,” she said in an email. The $100 million raised by the Colossal Foundation is essential, then, she says. As for the samples the team had collected on the Gulf Coast, she says, limited freezer space is often devoted to animals that are officially categorized as threatened or endangered, which the Gulf Coast canids are not. Colossal could take the samples, and the team passed them along to the company.

Dr. Joey Hinton
Ecologist Joey Hinton trapped the canids that Colossal Biosciences used to source the DNA for its clones. He dismisses the clones as a way for the company to earn headlines and attract funding.
RICH SAAL

It was Hinton—a source for an earlier story—who first alerted me to Colossal’s work on red wolves; he described vonHoldt and Brzeski’s de-introgression project, which won federal funding in late 2024, as nefarious-sounding work to “disappear” canids off the Gulf Coast. But he did not have all the details of the project, which had changed after he left the team. He suggested they’d be “just throwing animals together,” whereas vonHoldt described a careful program of observing the canids in the wild so she could determine which acted most wolflike, findings she’d cross-reference with their genetic data.

Colossal did not wind up participating in the de-introgression project. But the company is doing work on the red wolf that vonHoldt views as complementary: Its scientists are assembling a “pangenome” of North American canids by studying samples pulled from museums, universities, zoos, and other institutions. This data set is expected to clarify both what genetic sequences are shared across the entire canid family and what snippets differ in certain populations. The hope is that this will provide a clearer picture of the red wolf in its early days, before the coyotes arrived and the gene pool narrowed. That might shift what Colossal’s James calls the government’s arbitrary definition of the red wolf, to encompass more of the species’ full former diversity.

The pangenome, then, might allow vonHoldt’s de-introgressed canids, descended from the Gulf Coast canids, to qualify as actual red wolves. Indeed, James suggested to me that more information about historic red wolves might force the government to take a new look at the Gulf Coast canids; some individuals might have high enough red wolf ancestry to be classified as red wolves. (“That has management implications that terrify state and federal government,” he added.)

hair in Zip-Loc bags on a metal tray
Blood and tissue samples collected by the Galveston Island Humane Society from canid roadkill will be shipped to Princeton University for DNA analysis.
TRISTAN SPINSKI

The purpose of vonHoldt’s de-introgression project is to bring back certain lost red wolf genes—to create a whole new wolf lineage. But she has also pushed against the idea of “genetic purity,” which she thinks limits what we protect with conservation laws; she told me emphasizing it reminds her of the human history of eugenics and “makes every part of my soul hurt.” She cares less about what species are out there, in the landscape, than what ecological function the animals play, and she sees coyotes and red wolves as closely related animals that may have a role to play in one another’s future survival.


As for Colossal’s clones, even vonHoldt seems to describe them as something less than a conservation breakthrough. They are a “proof of principle that we, collectively, as a scientific community, know how to do it,” she told me. If an urgent need arises to clone red wolves, the groundwork has been laid. 

Hinton, meanwhile, is one of several scientists I spoke with who were skeptical Colossal was doing good science, given that so much is conducted behind closed doors. He implied that the clones were nothing but an empty showpiece, a way to earn headlines and attract funders. “The work is anything but symbolic,” James responded via email. “It expands the genetic toolkit available for critically endangered species, demonstrates scalable approaches to biodiversity restoration, and contributes directly to preserving imperiled lineages.” He noted that Colossal had intentionally decided to avoid the “snail’s pace” of the peer review process and suggested that the skepticism from scientists may actually be a “panicked response to being outpaced.”

Until some evidence confirms that the Gulf Coast canids—the source material for the clones—are red wolves, they can’t legally be classified as such for federal conservation purposes. Nonetheless, Colossal’s press release claimed that the company had “birthed two litters of cloned red wolves, the most critically endangered wolf in the world.” On the same day that press release dropped, Colossal’s CEO and cofounder, Ben Lamm, appeared on The Joe Rogan Experience and claimed that he had offered to create hundreds of red wolves for the federal government to use in recovery—for free! He was miffed when the government, under the Biden administration, replied that it wanted to spend several years and many millions of dollars to study the potential for cloning before it would take any action. (The company has gotten more traction with the Trump administration, Lamm said.)

When I first spoke to James at Colossal, he said that he was “cognizant” of the concerns over the names and labels and that the company’s own materials described the clones as “red ‘ghost’ wolves.” He suggested that if anyone assumed the clones were actual red wolves, that was because journalists had failed to grasp the nuances of the science. (The “ghost” phrasing, though, appears so late in a long document that it was cut off in some versions.) Later, over email, James indicated that further analysis had convinced him that what the company had created were red wolves, and that anyone who disagreed either could not grasp the science or was “so ideologically opposed to Colossal’s conservation revolution that they are willing to compromise their scientific integrity.”

VonHoldt has had her own issues with the company’s communications; she told me it was “stressful” when Lamm described the clones as red wolves—which, she notes, “federally, they’re not.” But she values the company’s work, she says, and “the thing that I value the most is shaking things up.” People are paying attention to red wolves. If it’s hard to decide what to call the animals on the Gulf Coast—where some heavily wolfy animals live alongside others that are more coyote—that’s just proof that our concept of a “species” does not capture the complex realities on the ground. 


In 2025, the same year as Colossal’s wolf announcement, Hinton launched the Texas-Louisiana Canid Project. He’s working in partnership with Broussard, the master’s student at McNeese, in slightly different territory from vonHoldt and Brzeski—and focusing more on the animals’ appearance and behavior than their genes. The Gulf Coast canids are stable and faring better than the North Carolina red wolves, and his hope is that if we learn why they’ve been successful for so many years, we might be able to help the official red wolf population, which is only just limping along. 

a wolf crosses a road outside of the city
Galveston locals hope that the presence of these remarkable creatures—red wolves or not—might rein in the rapid development of the island’s last stands of green.
TRISTAN SPINSKI

I had planned to join Hinton in the field, but by the time I was able to visit, he’d had to go home to his family. So I joined Broussard on his last days trapping in Texas that season. Before I’d left for Winnie, I’d told my friends I’d be out chasing the last surviving red wolves. But there, on the Gulf Coast, I came to understand that this was just as much a story about coyotes.

That’s what Broussard and Cunningham both called the creatures. Hinton does too; he considers the animals to be a specific “ecotype” of coyote, featuring an injection of wolf DNA that has helped them adapt to the local marshes. 

At vonHoldt’s behest, I drove an hour down the coast to Galveston Island, where she and Brzeski began working with the island’s animal control department; when locals find a coyote, the animal is captured so its blood can be collected and a GPS collar fitted on its neck. A small group of locals who support the project have come to call themselves the “ghost wolf team.” They hoped that the presence of these remarkable creatures might rein in the rapid development of the island’s last stands of green. Still, the people I spoke to in Galveston conceded that the animals were, if special, nonetheless a form of coyote. 

VonHoldt describes Galveston Island as a potential model for what conservation could look like in the future. Top-down recovery hasn’t been working, but helping more places fall in love with their local animals might. And for that to happen, we need to stop obsessing over whether or not something is a “pure” wolf. What matters, she argues, is that an animal is doing what a larger predator does in an ecosystem. She embraces the “ghost wolf” name because, more than “Gulf Coast canid,” it makes clear that there’s something special on the coast—something worth protecting. 

Her vision is enticing: Focus on function over purity. Let evolution proceed. Stop protecting the wolf of the past and consider the wolf of the future. Such rapid genetic exchange may be necessary to help predators adapt to a hotter, increasingly shattered world, she says. 


Then again, we already know what’s adapted to the world we’re building: coyotes. The argument against genetic purity can sound like giving up on wolves entirely, with the possible exception of whatever specimens we produce in cloning facilities. And there is the matter of politics: If we throw out the concept of “endangered species,” will we really protect “endangered functions” instead? Under an administration already rolling back environmental protections, the likeliest outcome may be protecting nothing at all.

I tried in Galveston, too, to see the coyotes. Ron Wooten, the local resident who helped alert scientists to this population, dropped some pins on a map, pointing me toward several likely spots. That evening, after the sun set, I chose a quiet road that passed through marshes until it reached the island’s eastern beach. It was mating season, Wooten had noted. The animals should be on the move, he said; look to the bushes. As I drove up and down the road, my headlights revealed only empty darkness. No coyote. No wolf. Fitting, perhaps—isn’t absence the essence of a ghost? But whether this was a good omen was less clear. As individuals, these animals do best by avoiding us humans. As a group, their survival—like the survival of the red wolves—depends on our knowing that they are here, and were here, and deciding that is reason enough to care.

In Winnie the next morning, I went out one last time with Broussard, and we struck out again. With no coyotes in his traps and the new semester looming, he decided to take down his game cameras. Back at the hotel, I caught at least an image of what I’d been chasing: In black and white, the animals were appropriately silver, spectral, dashing across the midnight fields. In one clip, a canid paused and howled. “That’s super cool,” Broussard said quietly, as an echoing, interweaving chorus responded from somewhere deeper in the marsh. 

Boyce Upholt is a journalist based in New Orleans and founding editor of Southlands, a magazine about Southern nature. 

The Download: murderous ‘mirror’ bacteria, and Chinese workers fighting AI doubles

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

No one’s sure if synthetic mirror life will kill us all

In February 2019, a group of scientists proposed a high-risk, cutting-edge, irresistibly exciting idea that the National Science Foundation should fund: making “mirror” bacteria.

These lab-created microbes would be organized like ordinary bacteria, but their proteins and sugars would be mirror images of those found in nature. Researchers believed they could reveal new insights into building cells, designing drugs, and even the origins of life.

But now, many of them have reversed course. They’ve become convinced that mirror organisms could trigger a catastrophic event threatening every form of life on Earth. Find out why they’re ringing alarm bells.

—Stephen Ornes

This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands this Wednesday.

Chinese tech workers are starting to train their AI doubles—and pushing back

Earlier this month, a GitHub project called Colleague Skill struck a nerve by claiming to “distill” a worker’s skills and personality—and replicate them with an AI agent. Though the project was a spoof, it prompted a wave of soul-searching among otherwise enthusiastic early adopters.

A number of tech workers told MIT Technology Review that their bosses are already encouraging them to document their workflows for automation via tools like OpenClaw. Many now fear that they are being flattened into code and losing their professional identity.

In response, some are fighting back with tools designed to sabotage the automation process.

Read the full story.

—Caiwei Chen

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House and Anthropic are working toward a compromise
The Trump administration says it had a “productive meeting.” (Reuters $)
+ Trump had ordered US agencies to phase out Anthropic’s tech. (Guardian)
+ Despite the blacklist, the NSA is using Anthropic’s new Mythos model. (Axios)

2 Palantir has unveiled a manifesto calling for universal national service
While denouncing inclusivity and “regressive” cultures. (TechCrunch)
+ It’s a summary of CEO Alex Karp’s book “The Technological Republic.” (Engadget)
+ One critic called the book “a piece of corporate sales material.” (Bloomberg $)

3 Germany’s chancellor and largest company want looser AI rules
Chancellor Merz said industrial AI needs more regulatory freedom. (Reuters $)
+ Siemens says it plans to shift investments to the US if EU rules don’t change. (Bloomberg $)
+ Fractures over AI regulation are also emerging in the US. (MIT Technology Review)  

4 Nvidia’s once-tight bond with gamers is cracking over AI  
Consumer graphics cards are no longer the priority. (CNBC)
+ But generative AI could reinvent what it means to play. (MIT Technology Review)

5 Insurers are trying to exclude AI-related harms from their coverage
And escape legal liability for AI’s mistakes. (FT $)
+ AI images are being used in insurance scams. (BBC)

6 AI is about to make the global e-waste crisis much worse
And most of the trash will end up in non-Western countries. (Rest of World)
+ Here’s what we can do about it. (MIT Technology Review)

7 Tinder and Zoom have partnered with Sam Altman’s eye-scanning firm
To offer a “proof of humanity” badge to users. (BBC)

8 Islamist insurgents in West Africa are driving surging demand for drones
A Nigerian UAV startup is opening its first factory abroad in Ghana. (Bloomberg $)

9 Hundreds of fake pro-Trump AI influencers are flooding social media
In an apparent bid to hook conservative voters. (NYT)

10 A Chinese humanoid has smashed the human half-marathon record
Despite crashing into a railing near the end of the race. (NBC News)
+ Chinese tech firm Honor swept the podium spots. (Engadget)
+ Last year, humans won the race by a mile. (CNN)

Quote of the day

“This is the only issue where you’ve got Steve Bannon and Ralph Nader, Glenn Beck and Bernie Sanders fighting for the same thing.”

—Ben Cumming, head of communications at the AI safety nonprofit Future of Life Institute, tells the Washington Post that diverse public figures are endorsing a declaration of AI policy priorities.

One More Thing

International Space Station photographed from space with Earth in the distance

NASA


The great commercial takeover of low Earth orbit

The International Space Station will be decommissioned as soon as 2030, but the story of America in low Earth orbit (LEO) will continue. 

Using lessons from the ISS, NASA has partnered with private companies to develop new commercial space stations for research, manufacturing, and tourism. If they are successful, these businesses will bring about a new era of space exploration: private rockets flying to private destinations.

They will also demonstrate a new model in which NASA builds infrastructure and the private sector takes it from there—freeing the agency to explore deeper and deeper into space. Read the full story.


—David W. Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Bask in this adorable test of a dog’s devotion.
+ This vocal pitch trainer improves your singing straight from your browser.
+ Master international etiquette with this interactive guide to the world’s cultures.
+ Explore the networks of public figures with this intriguing interactive graph.

Organic Search Winners Share 5 Traits

Google’s March 2026 core algorithm update concluded on April 8. The search giant doesn’t provide recovery guidelines for businesses whose rankings have decreased. It falls to search engine optimizers to identify what winning sites have in common and craft tactics to help losing sites regain organic visibility.

A just-published study by SEO pro Cyrus Shepard of Zyppy Signal is an example. He analyzed the organic search traffic of 400 winning and losing websites over the past 12 months and classified them by business model, content type, creator profile, and other definable traits. From there, he identified five characteristics of winning sites.

Here are Cyrus’s five features of sites that consistently maintain prominent organic rankings on Google.

Proprietary assets

Of the 400 analyzed sites, 92.9% of the winners own proprietary assets that are difficult to replicate, such as datasets, products, images, or studies.

For example, a fashion ecommerce site may use its user data to report trends in colors or seasonality. A site with extensive product reviews could repurpose them into shopping guides.

Completes a task

According to the study, 83.7% of winning websites help searchers do something: buy, download, or search.

Winning sites tend to help users accomplish whatever they’re looking for. Losing sites may offer meaningful info on topics, but the searcher must go elsewhere to complete the task.

The solution may be a unique product or an interactive tool. For example, a tutorial site could offer interactive tools, quizzes, and workbooks to help students practice math.

Niche expertise

Expertise within a niche was a trait of 75.9% of the winners.

Winning sites tend to focus on a topic in which they have deep knowledge and experience.

Those sites become go-to authorities for specialized subjects. Hyper-specific travel blogs, for example, often outrank global travel brands.

Unique product or service

A unique product or service is a trait of 70.2% of sites that consistently rank well across core updates. Cyrus’s study found that informational sites (news publishers and affiliate sites) lost the most traffic and that offering a product may be the answer.

For example, a recipe site can sell a subscription meal plan, a book, or access to a private cooking community.

Strong brand

A strong brand, meaning a site that searchers treat as a destination, was a trait of 32.6% of organic search winners. Cyrus found a high correlation between winning in organic search and having a strong profile of branded search terms.

The more searchers query a business’s name, the more that site functions as a destination, which is a strong signal to Google. Treat your brand search metrics as a key performance indicator, in other words.

I’ll add one feature for 2026 that Cyrus doesn’t address: sites that rank prominently in organic search offer something that AI cannot easily replicate.