4 Sites That Recovered From Google’s December 2025 Core Update – What They Changed via @sejournal, @marie_haynes

The December 2025 core update had a significant impact on a large number of sites. Each of the sites below that did well is either a long-term client, a past client, or a site I have done a site review for. While we can never say with certainty what changed as the result of a change to Google’s core algorithms and systems, I’ll share some observations on what I think helped these sites improve.

1. Trust Matters Immensely

This first client, a medical eCommerce site, reached out to me in mid-2024 and we started on a long-term engagement. A few days into our relationship they were strongly negatively impacted by the August 2024 core update. It was devastating.

When you are impacted by a core update, in most cases, you remain suppressed until another core update happens. It usually takes several core updates. And given that these only happen a few times a year, this site remained suppressed for quite some time.

We worked on a lot of things:

  • Improving blog post quality so it was not “commodity content”.
  • Improving page load time.
  • Optimizing images.
  • Improving FAQ content on product pages to help answer customer questions.
  • Creating helpful guides.
  • Improving product descriptions to better answer questions their customers have.
  • Adding more information about the E-E-A-T of authors.
  • Adding more authors with medical E-E-A-T.
  • Getting more reviews from satisfied customers.

While I think that all of the above helped contribute to a better assessment of quality for this site, I actually think that what helped the most had very little to do with SEO, but rather, was the result of the business working hard to truly improve upon customer service.

Core updates are tightly connected to E-E-A-T. Google says that trust is the most important aspect of E-E-A-T. The quality rater guidelines, which guide the quality raters whose assessments help Google train and evaluate the AI systems behind its search rankings, mention “trust” 191 times.

For online stores, the raters are told that reliable customer service is vitally important.

Image Credit: Marie Haynes

A few bad reviews aren’t likely to tank your rankings, but this business had previously had significant logistical problems with shipping. They had been working hard to rectify these. Yet, if I asked AI Mode to tell me about the reputation of this company compared to their competitors, it would always tell me that there were serious concerns.

Here’s an interesting prompt you can use in AI Mode:

Make a chart showing the perceived trust in [url or brand] over time.

You can see that finally in 2025 the overall trust in this brand improved.

Image Credit: Marie Haynes

My suspicion is that these trust issues were the main driver in their core update suppression. I can’t say whether it was the improvement in customer trust that made a difference, the improvements in quality we made, or perhaps both. But these results were so good to see.

Image Credit: Marie Haynes

They continue to improve. Google recommends them more often in Popular Products carousels, ranks them more highly for many important terms and, more importantly, drives far more sales for them now.

2. Original Content Takes A Lot Of Work

The next site is another site that was impacted by a core update.

This site is an affiliate site that writes about a big-ticket product. They have a lot of competition from some big players in their industry. When I reviewed their site, one thing was obvious to me. While they had a lot of content, most of it offered essentially the same value as everyone else. This was frustrating considering they actually did purchase and review these products. What they were writing about was mostly a collection of known facts about these products rather than their personal experience. And what was experiential was buried in massive walls of text that were difficult for readers to navigate.

Google’s guidance on core updates recommends that if you were impacted, you should consider rewriting or restructuring your content to make it easier for your audience to read and navigate the page.

Image Credit: Marie Haynes

This site put an incredible amount of work into improving their content quality:

  • They purchased the products they reviewed and took detailed photos of everything they discussed. And videos. Really helpful videos.
  • The blog posts were written by an expert in their field. This already was the case, but we worked on making it more clear what their expertise was and why it was helpful.
  • We brainstormed with AI to help us come up with ideas for adding helpful, unique information born of their experience and not likely to be found on other sites.
  • We used Microsoft Clarity to identify aspects on pages that were frustrating users and worked to improve them.
  • We added interactive quizzes to help readers and drive engagement.
  • We worked on improving freshness for every important post, ensuring they were up to date with the latest information.
  • We worked to really get in the shoes of a searcher and understand what they wanted to see. We made sure that this information was easy to find even if a reader was skimming.
  • We broke up large walls of text into chunks with good headings that were easy to skim and navigate.
  • We noindexed pages covering YMYL topics for which they lacked expertise.
  • We worked on improving core web vitals. (Note: I don’t think this is a huge ranking factor, but in this case the largest contentful paint was taking forever and likely frustrated users.)

Once again, it took many months of tireless work before improvements were seen! Rankings improved to the first page for many important keywords and some moved from page 4 to position #1-3.

Image Credit: Marie Haynes

3. Work To Improve User Experience

This next site was not a long-term client, but rather, a site review I did for an eCommerce site in a YMYL niche. The SEO working on this site applied many of my recommendations and made some other smart changes as well, including:

  • Improved site navigation and hierarchy.
  • Improved UX. They have a nicer, more modern font. The site looks more professional.
  • Improved the customer checkout flow, which reduced checkout abandonment.
  • Improved their About Us page to add more information demonstrating the brand’s experience and history. Note: I don’t think this matters immensely to Google’s algorithms, as most of their assessment of trust is made from off-site signals, but it may help users feel more comfortable engaging.
  • Produced content around some topics that were gaining public attention. This did help to truly earn some new links and mentions from authoritative sources.

After making these changes, the site was able to procure a knowledge panel for brand searches. And, search traffic is climbing.

Image Credit: Marie Haynes

4. First Hand Experience Can Really Help

This next site is another one that I did a site review for. It is a city guide that monetizes through affiliate links and sponsors. For every page I looked at I came to the same conclusion: There was nothing on this page that couldn’t be covered by an AI Overview. Almost every piece of information was essentially paraphrased from somewhere else on the web.

The most recent update to the rater guidelines increased the use of the word “paraphrased” from 3 mentions to 25. I think this applies to a lot of sites!

Image Credit: Marie Haynes


Image Credit: Marie Haynes


Image Credit: Marie Haynes

Yet, when I spoke with the site owner, she shared with me that they had on-site writers who were truly writing from their experience.

While I don’t know specifically what changes this site owner has made, I looked at several pages that had seen nice improvements in conjunction with the core update and noticed the following improvements:

  • They’ve added video to some posts – filmed by their team.
  • There’s original photography from their team – not taken from elsewhere on the web. Not every photo is original, but quite a few of them are.
  • Added information to help readers make their decision, like “This place is best for…” or, “Must try dishes include…”
  • They wrote about their actual experiences. Rather than just sharing what dishes were available at a restaurant, they share which ones they tried and how they felt they stood out compared to other restaurants.
  • They’ve worked to keep content updated and fresh.

This site saw some nice improvements. However, they still have ground to gain as they previously were doing much better in the days before the helpful content updates.

Image Credit: Marie Haynes

Some Thoughts For Sites That Have Not Done Well

The December 2025 core update had a devastating negative impact on many sites. If you were impacted, your answer is unlikely to lie in technical SEO fixes, disavowing links or building new links. Google’s ranking systems are a collection of AI systems that work together with one goal in mind – to present searchers with pages that they are likely to find helpful. Many components of the ranking systems are deep learning systems which means that they improve on these recommendations over time.

I’d recommend the following for you:

1. Consider Whether The Brand Has Trust Issues

You can try the AI Mode prompt I used above. A few bad reviews are not going to cause a core update suppression. But a prolonged history of repeated customer service frustrations, fraud, or anything else that significantly damages your reputation can seriously impact your ability to rank. This is especially true if you are writing on YMYL topics.

2. Look At How Your Content Is Structured

It is a helpful exercise to look at which pages Google’s algorithms are ranking for your queries. If they don’t seem to make sense to you, look at how quickly they get people to the answer they are trying to find. I have found that often sites that are impacted make their readers scroll through a lot of fluff or ads to get to the important bits. Improve your headings – not for search engines, but for readers who are skimming. Put the important parts at the top. Or, if that’s not feasible, make it really easy for people to find the “main content”.

Here’s a good exercise – Open up the rater guidelines. These are guidelines for human raters who help Google understand if the AI systems are producing good, helpful rankings. CTRL-F for “main content” and see what you can learn.

3. Really Ask Yourself Whether Your Content Is Mostly “Commodity Content”

Commodity content is information that is widely available in many places on the web. There was a time when a business could thrive by writing pages that aggregate known information on a topic. Now that Google has AI Overviews and AI Mode, this type of page is much less valuable. You will still see some pages cited in AI Overviews that essentially parrot what is already in the AIO. Usually these are authoritative sites which are helpful for readers who want to see information from an authority rather than an AI answer.

Liz Reid from Google said these interesting words in an interview with the WSJ:

“What people click on in AI Overviews is content that is richer and deeper. That surface level AI generated content, people don’t want that, because if they click on that they don’t actually learn that much more than they previously got. They don’t trust the result any more across the web. So what we see with AI Overviews is that we sort of surface these sites and get fewer, what we call bounced clicks. A bounced click is like, you click on this site and you’re like, “Ah, I didn’t want that” and you go back. And so AI Overviews give some content and then we get to surface sort of deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not AI generated.”

Here is a good exercise to try on some of the pages that declined with the core update. Give your URL to, or copy your page’s content into, your favourite LLM and use this prompt:

“What are 10 concepts that are discussed in this page? For each concept tell me whether this topic has been widely written about online. Does this content I am sharing with you add anything truly uniquely interesting and original to the body of knowledge that already exists? Your goal here is to be brutally honest and not just flatter me. I want to know if this page is likely to be considered commodity content or whether it truly is content that is richer and deeper than other pages available on the web.”

You can follow this up with this prompt:

“Give me 10 ideas that I can use to truly create content that goes deeper on these topics? How can I draw from my real world experience to produce this kind of content?”
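
If you want to run this kind of check across many pages rather than pasting them into a chat window, the same prompt can be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name, file handling, and the abridged prompt wording are my assumptions for illustration, not part of the original exercise:

```python
# Minimal sketch: run a "commodity content" audit prompt against a page's text
# using the OpenAI Python SDK. The model name and this workflow are illustrative
# assumptions, not part of the article's original exercise.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

COMMODITY_PROMPT = (
    "What are 10 concepts discussed in this page? For each concept, tell me "
    "whether the topic has been widely written about online. Does this content "
    "add anything truly unique and original to the body of knowledge that "
    "already exists? Be brutally honest and do not flatter me. Is this page "
    "likely to be considered commodity content, or is it richer and deeper "
    "than other pages available on the web?"
)

def audit_page(page_text: str) -> str:
    """Send the page text plus the audit prompt and return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "user", "content": f"{COMMODITY_PROMPT}\n\nPAGE CONTENT:\n{page_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("page.txt", encoding="utf-8") as f:  # placeholder file name
        print(audit_page(f.read()))
```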

Concluding Thoughts

I’ve been studying Google updates for a long time – since the early days of Panda and Penguin. I built a business on helping sites recover from Google update hits. However, over the years I have found it increasingly difficult for a site that is impacted by a Google update to recover. This is why today, although I do still love doing site reviews to give you ideas for improving, I generally decline work with sites that have been strongly impacted by Google updates. While recovery is possible, it generally takes a year or more of hard work, and even then, recovery is not guaranteed, as Google’s algorithms and people’s preferences are continually changing.

The sites that saw nice recovery with this Google update were sites that worked on things like:

  • Truly improving the world’s perception of their customer service.
  • Creating original and insightful content that was substantially better than other pages that exist.
  • Using their own imagery and videos in many cases.
  • Working hard to improve user experience.

If you missed it, I recently published a video that talks about what we learned about the role of user satisfaction signals in Google’s algorithms. Traditional ranking factors create an initial pool of results. AI systems rerank them, working to predict what the searcher will find most helpful. And the quality raters, as well as live users in live user tests, help fine-tune these systems.


Ultimately, Google’s systems work to reward content that users are likely to find satisfying. Your goal is to be the most helpful result there is!





Featured Image: Jack_the_sparow/Shutterstock

SEO Fundamental: Google Explains Why It May Not Use A Sitemap via @sejournal, @martinibuster

Google’s John Mueller answered a question about why Search Console was showing a sitemap fetch error even though server logs show that Googlebot successfully fetched the sitemap.

The question was asked on Reddit. The person who started the discussion shared a comprehensive list of technical checks they had done to confirm that the sitemap returns a 200 response code, uses a valid XML structure, that indexing is allowed, and so on.

The sitemap is technically valid in every way but Google Search Console keeps displaying an error message about it.
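
For context, these checks are straightforward to reproduce yourself. Here’s a minimal sketch of that kind of verification in Python, using the requests library and a placeholder sitemap URL; it’s illustrative, not the Redditor’s actual script:

```python
# Minimal sketch of the technical checks described above: confirm the sitemap
# returns a 200 and parses as valid XML with <loc> (and optional <lastmod>) tags.
# The URL is a placeholder for illustration.
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_URL, timeout=30)
print("HTTP status:", resp.status_code)  # should be 200

root = ET.fromstring(resp.content)  # raises ParseError if the XML is invalid
urls = root.findall("sm:url", NS)
print("URLs listed:", len(urls))

for url in urls[:5]:
    loc = url.findtext("sm:loc", namespaces=NS)
    lastmod = url.findtext("sm:lastmod", namespaces=NS)
    print(loc, lastmod)
```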

The Redditor explained:

“I’m encountering very tricky issue with sitemap submission immediately resulted `Couldn’t fetch` status and `Sitemap could not be read` error in the detail view. But i have tried everything I can to ensure the sitemap is accessible and also in server logs, can confirm that GoogleBot traffic successfully retrieved sitemap with 200 success code and it is a validated sitemap with URL – loc and lastmod tags.

…The configuration was initially setup and sitemap submitted in Dec 2025 and for many months, there’s no updates to sitemap crawl status – multiple submissions throughout the time all result the same immediate failure. Small # of pages were submitted manually and all were successfully crawled, but none of the rest URLs listed in sitemap.xml were crawled.”

Google’s John Mueller answered the question, implying that the error message is triggered by an issue related to the content.

Mueller responded:

“One part of sitemaps is that Google has to be keen on indexing more content from the site. If Google’s not convinced that there’s new & important content to index, it won’t use the sitemap.”

While Mueller did not use the phrase “site quality,” site quality is implied because he says that Google has to be “keen on indexing more content from the site” that is “new and important.”

That implies two things: that maybe the site doesn’t produce much new content, and that the content might not be important. The part about content being important is a very broad description that can mean a lot of things, and not all of those reasons necessarily mean that the content is low quality.

Sometimes the ranked sites are missing an important form of content or a structure that makes it easier for users to understand a topic or come to a decision. It could be an image, it could be a step-by-step guide, it could be a video; it could be a lot of things, but not necessarily all of them. When in doubt, think like a site visitor and try to imagine what would be the most helpful for them. Or it could be that the content is trivial because it’s thin or not unique. Mueller was broad, but I think circling back to what makes a site visitor happy is the way to identify ways to improve content.

Featured Image by Shutterstock/Asier Romero

Information Retrieval Part 3: Vectorization And Transformers (Not The Film)

Information retrieval systems are designed to satisfy a user. To make a user happy with the quality of their recall. It’s important we understand that. Every system and its inputs and outputs are designed to provide the best user experience.

From the training data to similarity scoring and the machine’s ability to “understand” our tired, sad bullshit – this is the third in a series I’ve titled, information retrieval for morons.

Image Credit: Harry Clarkson-Bennett

TL;DR

  1. In the vector space model, the distance between vectors represents the relevance (similarity) between the documents or items.
  2. Vectorization has allowed search engines to perform concept searching instead of word searching. It is the alignment of concepts, not letters or words.
  3. Longer documents naturally contain more (and more repeated) terms, which inflates similarity scores. To combat this, document length is normalized and relevance is prioritized.
  4. Google has been doing this for over a decade. Maybe for over a decade, you have too.

Things You Should Know Before We Start

Some concepts and systems you should be aware of before we dive in.

I don’t remember all of these, and neither will you. Just try to enjoy yourself and hope that through osmosis and consistency, you vaguely remember things over time.

  • TF-IDF stands for term frequency-inverse document frequency. It is a numerical statistic used in NLP and information retrieval to measure a term’s relevance within a document corpus (see the sketch after this list).
  • Cosine similarity measures the cosine of the angle between two vectors, ranging from -1 to 1. A smaller angle (cosine closer to 1) implies higher similarity.
  • The bag-of-words model represents text as an unordered collection of its words and their counts, ignoring grammar and word order, so it can be fed to machine learning algorithms.
  • Feature extraction/encoding models are used to convert raw text into numerical representations that can be processed by machine learning models.
  • Euclidean distance measures the straight-line distance between two points in vector space to calculate data similarity (or dissimilarity).
  • Doc2Vec is an extension of Word2Vec designed to represent whole documents as vectors, so similarity (or the lack of it) can be measured between documents rather than just words.
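
To make a couple of these definitions concrete, here’s a minimal sketch that turns three toy “documents” into TF-IDF vectors and compares them with cosine similarity. The use of scikit-learn and the toy documents are my choices for illustration:

```python
# Minimal sketch: TF-IDF vectors plus cosine similarity with scikit-learn.
# The toy "documents" are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "best cricket bat for beginners",
    "choosing a cricket bat: a beginner's guide",
    "how vampire bats hunt at night",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # each row is one document's vector

# Pairwise cosine similarity between all documents (values closer to 1 = more similar)
print(cosine_similarity(tfidf).round(2))
```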

What Is The Vector Space Model?

The vector space model (VSM) is an algebraic model that represents text documents or items as “vectors.” This representation allows systems to measure the distance between vectors.

That distance represents the similarity between terms or items.

Commonly used in information retrieval, document ranking, and keyword extraction, vector models create structure. This structured, high-dimensional numerical space enables the calculation of relevance via similarity measures like cosine similarity.

Terms are assigned values. If a term appears in the document, its value is non-zero. Worth noting that terms are not just individual keywords. They can be phrases, sentences, and entire documents.

Once queries, phrases, and sentences are assigned values, the document can be scored. It has a physical place in the vector space as chosen by the model.

In this case, words, represented on a graph to denote relationships between them (Image Credit: Harry Clarkson-Bennett)

Based on these scores, documents can be compared to one another relative to the input query. You generate similarity scores at scale. This is known as semantic similarity, where a set of documents is scored and positioned in the index based on their meaning.

Not just their lexical similarity.

I know this sounds a bit complicated, but think of it like this:

Words on a page can be manipulated. Keyword stuffed. They’re too simple. But if you can calculate meaning (of the document), you’re one step closer to a quality output.
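
One way to see the difference between matching words and matching meaning is to compare sentences that share almost no vocabulary using a sentence-embedding model. The sketch below uses the sentence-transformers library and a small open model purely as an illustration; it is not how Google builds its vectors:

```python
# Minimal sketch: semantic similarity without shared keywords, using
# sentence-transformers (an illustrative choice, not Google's stack).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

a = "How do I make my website load faster?"
b = "Tips for improving page speed performance"
c = "Best hiking trails in the Lake District"

emb = model.encode([a, b, c], convert_to_tensor=True)

print("a vs b:", util.cos_sim(emb[0], emb[1]).item())  # typically high: same meaning, few shared words
print("a vs c:", util.cos_sim(emb[0], emb[2]).item())  # typically low: unrelated topic
```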

Why Does It Work So Well?

Machines don’t just like structure. They bloody love it.

Fixed-length (or styled) inputs and outputs create predictable, accurate results. The more informative and compact a dataset, the better quality classification, extraction, and prediction you will get.

The problem with text is that it doesn’t have much structure. At least not in the eyes of a machine. It’s messy. This is why the vector space model has such an advantage over the classic Boolean retrieval model.

In Boolean Retrieval Models, documents are retrieved based on whether they satisfy the conditions of a query that uses Boolean logic. It treats each document as a set of words or terms and uses AND, OR, and NOT operators to return all results that fit the bill.
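
A toy version of Boolean retrieval really is just set operations over an inverted index, as in this illustrative sketch:

```python
# Toy Boolean retrieval: documents are treated as sets of terms, and a query like
# "cricket AND bat NOT cave" is answered with set operations. Illustrative only.
docs = {
    1: "the cricket bat was made of english willow",
    2: "the bat flew out of the cave at dusk",
    3: "cricket is played with a bat and a ball",
}

# Build an inverted index: term -> set of document IDs containing it
index = {}
for doc_id, text in docs.items():
    for term in set(text.split()):
        index.setdefault(term, set()).add(doc_id)

# cricket AND bat NOT cave
result = (index.get("cricket", set()) & index.get("bat", set())) - index.get("cave", set())
print(sorted(result))  # -> [1, 3]
```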

Its simplicity has its uses, but cannot interpret meaning.

Think of it more like data retrieval than identifying and interpreting information. We fall into the term frequency (TF) trap too often with more nuanced searches. Easy, but lazy in today’s world.

Whereas the vector space model interprets actual relevance to the query and doesn’t require exact match terms. That’s the beauty of it.

It’s this structure that creates much more precise recall.

The Transformer Revolution (Not Michael Bay)

Unlike Michael Bay’s series, the real transformer architecture replaced older, static embedding methods (like Word2Vec) with contextual embeddings.

While static models assign one vector to each word, transformers generate dynamic representations that change based on the surrounding words in a sentence.

And yes, Google has been doing this for some time. It’s not new. It’s not GEO. It’s just modern information retrieval that “understands” a page.

I mean, obviously not. But you, as a hopefully sentient, breathing being, understand what I mean. But transformers, well, they fake it:

  1. Transformers weight input data by significance.
  2. The model pays more attention to words that demand or provide extra context.

Let me give you an example.

“The bat’s teeth flashed as it flew out of the cave.”

Bat is an ambiguous term. Ambiguity is bad in the age of AI.

But transformer architecture links bat with “teeth,” “flew,” and “cave,” signaling that bat is far more likely to be a bloodsucking rodent* than something a gentleman would use to caress the ball for a boundary in the world’s finest sport.

*No idea if a bat is a rodent, but it looks like a rat with wings.
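
You can watch this disambiguation happen with an off-the-shelf BERT model: the vector produced for “bat” shifts with its neighbours. The sketch below uses Hugging Face’s transformers library and bert-base-uncased purely for illustration; it is not Google’s internal setup:

```python
# Rough sketch: the contextual embedding for "bat" differs between sentences.
# Uses Hugging Face transformers with bert-base-uncased, purely for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bat_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bat' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    idx = tokens.index("bat")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    return hidden[idx]

animal = bat_vector("The bat's teeth flashed as it flew out of the cave.")
cricket = bat_vector("He gripped the bat and drove the ball to the boundary.")
same_animal = bat_vector("A bat hung upside down from the roof of the cave.")

cos = torch.nn.functional.cosine_similarity
print("animal vs cricket:", cos(animal, cricket, dim=0).item())     # typically lower
print("animal vs animal:", cos(animal, same_animal, dim=0).item())  # typically higher
```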

BERT Strikes Back

BERT. Bidirectional Encoder Representations from Transformers. Shrugs.

This is how Google has worked for years. By applying this type of contextually aware understanding to the semantic relationships between words and documents. It’s a huge part of the reason why Google is so good at mapping and understanding intent and how it shifts over time.

More recent BERT-style models (like DeBERTa) represent each word with two vectors – one for its meaning and one for its position in the document. This is known as disentangled attention. It provides more accurate context.

Yep, sounds weird to me, too.

BERT processes the entire sequence of words simultaneously. This means context is applied from the entirety of the page content (not just the few surrounding terms).

Synonyms Baby

Launched in 2015, RankBrain was Google’s first deep learning system. Well, the first that I know of, anyway. It was designed to help the search algorithm understand how words relate to concepts.

This was kind of the peak search era. Anyone could start a website about anything. Get it up and ranking. Make a load of money. Not need any kind of rigor.

Halcyon days.

With hindsight, these days weren’t great for the wider public. Getting advice on funeral planning and commercial waste management from a spotty 23-year-old’s bedroom in Halifax.

As new and evolving queries surged, RankBrain and the subsequent neural matching were vital.

Then there was MUM: Google’s ability to “understand” text, images, and visual content across multiple languages simultaneously.

Document length was an obvious problem 10 years ago. Maybe less. Longer articles, for better or worse, always did better. I remember writing 10,000-word articles on some nonsense about website builders and sticking them on a homepage.

Even then that was a rubbish idea…

In a world where queries and documents are mapped to numbers, you could be forgiven for thinking that longer documents will always be surfaced over shorter ones.

Remember 10-15 years ago, when everyone was obsessed with every article being 2,000 words?

“That’s the optimal length for SEO.”

If you see another “What time is X” 2,000-word article, you have my permission to shoot me.

You can’t knock the fact this is a better experience (Image Credit: Harry Clarkson-Bennett)

Longer documents will – as a result of containing more terms – have higher TF values. They also contain more distinct terms. These factors can conspire to raise the scores of longer documents.

Hence why, for a while, they were the zenith of our crappy content production.

Longer documents can broadly be lumped into two categories:

  1. Verbose documents that essentially repeat the same content (hello, keyword stuffing, my old friend).
  2. Documents covering multiple topics, in which the search terms probably match small segments of the document, but not all of it.

To combat this obvious issue, a form of compensation for document length is used, known as Pivoted Document Length Normalization. This adjusts scores to counteract the natural bias longer documents have.

Pivoted normalization rescales term weights using a linear adjustment around the average document length (Image Credit: Harry Clarkson-Bennett)

The cosine distance should be used because we do not want to favour longer (or shorter) documents, but to focus on relevance. Leveraging this normalization prioritizes relevance over term frequency.

It’s why cosine similarity is so valuable. It is robust to document length. A short and long answer can be seen as topically identical if they point in the same direction in the vector space.
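
A tiny experiment makes the point: score a query against a short document and against the same document repeated five times. The raw term-frequency score balloons with length, while the cosine similarity does not move. Toy data, with scikit-learn used for convenience:

```python
# Minimal sketch: raw term-frequency scores grow with document length,
# but cosine similarity does not. Toy data, illustrative only.
# (Pivoted length normalization tackles the same bias differently, roughly by
# dividing raw weights by (1 - slope) + slope * (doc_len / avg_doc_len).)
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

short_doc = "best running shoes for flat feet"
long_doc = " ".join([short_doc] * 5)  # same content, five times longer
query = "running shoes flat feet"

vec = CountVectorizer()
m = vec.fit_transform([query, short_doc, long_doc]).toarray()
q, short_v, long_v = m

print("raw dot product, short:", np.dot(q, short_v))  # smaller
print("raw dot product, long: ", np.dot(q, long_v))   # ~5x bigger, rewards length
print("cosine, short:", cosine_similarity([q], [short_v])[0, 0].round(3))
print("cosine, long: ", cosine_similarity([q], [long_v])[0, 0].round(3))  # identical
```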

Great question.

Well, no one’s expecting you to understand the intricacies of a vector database. You don’t really need to know that databases create specialized indices to find close neighbors without checking every single record.

This is just for companies like Google to strike the right balance between performance, cost, and operational simplicity.

Kevin Indig’s latest excellent research shows that 44.2% of all citations in ChatGPT originate from the first 30% of the text. The probability of citation drops significantly after this initial section, creating a “ski ramp” effect.

Image Credit: Harry Clarkson-Bennett

Even more reason not to mindlessly create massive documents because someone told you to.

In “AI search,” a lot of this comes down to tokens. According to Dan Petrovic’s always excellent work, each query has a fixed grounding budget of approximately 2,000 words total, distributed across sources by relevance rank.

In Google, at least. And your rank determines your score. So get SEO-ing.

Position 1 gives you double the prominence of position 5 (Image Credit: Harry Clarkson-Bennett)

Metehan’s study on what 200,000 Tokens Reveal About AEO/GEO really highlights how important this is. Or will be. Not just for our jobs, but for biases and cultural implications.

As text is tokenized (compressed and converted into a sequence of integer IDs), this has cost and accuracy implications.

  • Plain English prose is the most token-efficient format, at 5.9 characters per token. Let’s call it 100% relative efficiency. A baseline. (See the token-counting sketch after this list.)
  • Turkish prose has just 3.6 characters per token. That’s 61% as efficient.
  • Markdown tables come in at 2.7, or 46% as efficient.
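
If you want to measure this for your own content, OpenAI’s tiktoken library gives a rough characters-per-token figure (it approximates OpenAI-style tokenization; the numbers above come from Metehan’s study, not from this sketch):

```python
# Minimal sketch: measure characters per token with tiktoken. This approximates
# OpenAI-style tokenization; other models use different tokenizers, and the
# sample texts here are invented for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "plain prose": "Pivoted normalization adjusts term weights around the average document length.",
    "markdown table": "| Format | Chars/token |\n|---|---|\n| Prose | 5.9 |\n| Table | 2.7 |",
}

for name, text in samples.items():
    tokens = enc.encode(text)
    print(f"{name}: {len(text)} chars / {len(tokens)} tokens = "
          f"{len(text) / len(tokens):.1f} chars per token")
```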

Languages are not created equal. In an era where capital expenditure (CapEx) is soaring and AI firms have struck deals I’m not sure they can cash, this matters.

Well, as Google has been doing this for some time, the same things should work across both interfaces.

  1. Answer the flipping question. My god. Get to the point. I don’t care about anything other than what I want. Give it to me immediately (spoken as a human and a machine).
  2. So frontload your important information. I have no attention span. Neither do transformer models.
  3. Disambiguate. Entity optimization work. Connect the dots online. Claim your knowledge panel. Authors, social accounts, structured data, building brands and profiles.
  4. Excellent E-E-A-T. Deliver trustworthy information in a manner that sets you apart from the competition.
  5. Create keyword-rich internal links that help define what the page and content are about. Part disambiguation. Part just good UX.
  6. If you want something focused on LLMs, be more efficient with your words.
    • Using structured lists can reduce token consumption by 20-40% because they remove fluff. Not because they’re more efficient*.
    • Use commonly known abbreviations to also save tokens.

*Interestingly, they are less efficient than traditional prose.

Almost all of this is about giving people what they want quickly and removing any ambiguity. In an internet full of crap, doing this really, really works.

Last Bits

There is some discussion around whether markdown for agents can help strip out the fluff from HTML on your site. So agents could bypass the cluttered HTML and get straight to the good stuff.

How much of this could be solved by having a less fucked up approach to semantic HTML, I don’t know. Anyway, one to watch.

Very SEO. Much AI.





Featured Image: Anton Vierietin/Shutterstock

Google AI Mode Link Update, Click Share Data & ChatGPT Fan-Outs – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s SEO Pulse: updates affecting how links appear in AI search results, where organic clicks are going, and which languages ChatGPT uses to find sources.

Here’s what matters for you and your work.

Google Redesigns Links In AI Overviews And AI Mode

Robby Stein, VP of Product for Google Search, announced on X that AI Overviews and AI Mode are getting a redesigned link experience on both desktop and mobile.

Key Facts: On desktop, groups of links will now appear in a pop-up when you hover over them, showing site names, favicons, and short descriptions. Google is also rolling out more descriptive and prominent link icons across desktop and mobile.

Why This Matters

This is the latest in a series of link-visibility updates Stein has announced since last summer, when he called showing more inline links Google’s “north star” for AI search. The pattern is consistent. Google keeps iterating on how links surface inside AI-generated responses.

The hover pop-up is a new interaction pattern for AI Overviews. Instead of small inline citations that are easy to miss, users now get a preview card with enough context to decide whether to click. That changes the calculus for publishers wondering how much traffic AI results actually send.

What The Industry Is Saying

SEO consultant Lily Ray (Amsive) wrote on X that she had been seeing the new link cards and was “REALLY hoping it sticks.”

Read our full coverage: Google Says Links Will Be More Visible In AI Overviews

43% Of ChatGPT Fan-Out Queries For Non-English Prompts Run In English

A report from AI search analytics firm Peec AI found that a large share of ChatGPT’s fan-out queries run in English, even when the original prompt was in another language.

Key Facts: Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across non-English prompts analyzed, 43% of the fan-out queries ran in English. Nearly 78% of non-English prompt sessions included at least one English-language fan-out query.

Why This Matters

When ChatGPT Search builds an answer, it can rewrite the user’s prompt into “one or more targeted queries,” according to OpenAI’s documentation. OpenAI does not describe how language is chosen for those rewritten queries. Peec AI’s data suggests that English gets inserted into the process even when the user and their location are clearly non-English.

SEO and content teams working in non-English markets may face a disadvantage in ChatGPT’s source selection that doesn’t map to traditional ranking signals. Language filtering appears to happen before citation signals come into play.

Read our full coverage: ChatGPT Search Often Switches To English In Fan-Out Queries: Report

Google’s Search Relations Team Can’t Say You Still Need A Website

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a definitive yes.

Key Facts: In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question. Both acknowledged that websites still offer advantages, including data sovereignty, control over monetization, and freedom from platform content moderation. But neither argued that the open web offers something irreplaceable.

Why This Matters

Google Search is built around crawling and indexing web content. The fact that Google’s own Search Relations team treats “do I need a website?” as a business decision rather than an obvious yes is worth noting.

Illyes offered the closest thing to a position. He said that if you want to make information available to as many people as possible, a website is probably still the way to go. But he called it a personal opinion, not a recommendation.

The conversation aligns with increasingly fragmented user journeys, now spanning AI chatbots, social feeds, community platforms, and traditional search. For practitioners advising clients on building websites, the answer increasingly depends on where the audience is, not where it used to be.

Read our full coverage: Google’s Search Relations Team Debates If You Still Need A Website

Theme Of The Week: The Ground Keeps Moving Under Organic

Each story this week shows a different force pulling attention, clicks, or visibility away from the organic channel as practitioners have known it.

Google is redesigning how links appear in AI responses, acknowledging the traffic concern. ChatGPT’s background queries introduce a language filter that can exclude non-English content before relevance signals even apply. And Google’s own team won’t say that websites are the default answer for visibility anymore.

These stories reinforce the idea of spreading your content across different platforms to reach more people. And track where your clicks are really coming from.



Featured Image: TippaPatt/Shutterstock; Paulo Bobita/Search Engine Journal

35-Year SEO Veteran: Great SEO Is Good GEO — But Not Everyone’s Been Doing Great SEO via @sejournal, @theshelleywalsh

As SEOs, we are used to being adaptable to changing algorithms, so LLM optimization should be a simple extension of that process.

To discuss the industry debates surrounding the differences between SEO and GEO and clarify whether they are the same or different, I spoke with SEO veteran Grant Simmons.

Grant has over 30 years of experience helping brands grow and has spent decades focused on meaning, intent, and topical authority long before LLMs entered the conversation.

I spoke with Grant about signal alignment, how Google’s latest continuation patents reveal the mechanics of LLM citations, and what SEOs are getting wrong about topical focus.

“We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.” – Grant Simmons

You can watch the full interview with Grant on IMHO below, or continue reading the article summary.

Great SEO Is Good GEO

At Google Search Live in December 2025, John Mueller said, “Good SEO is good GEO.”

I asked Grant what he thought were the differences between optimizing for search engines and for machines, and if he thought there were any overlaps.

Grant’s approach echoes what John Mueller said, but “Not everyone has been doing great SEO,” he explained. “Great SEO was always about building topical authority.”

He continued to say, “Essentially, machines (whether it’s Google or whether it’s an LLM) have to understand the underlying meaning of the content so they can present the best answer.

They have to understand the query or the prompt, then they have to send the best answer. So in that way, it’s very similar.”

Where Grant sees divergence is in how the systems evaluate content. Google has historically ranked pages, and even with passage ranking, it still considers the page and the site as a whole. LLMs operate differently.

“LLMs are looking more at that passage side, you know, something that’s easily extractable, something that has value semantically related to the query or the prompt. And so there’s that fundamental difference.”

Grant also stressed that great SEO has always been holistic, touching social media, PR, content, and brand messaging. Having brand awareness, brand visibility, and brand consistency across all channels is a significant factor in LLM representation. And this is exactly the kind of work that the best SEOs do.

“We’re marketers. We should make sure, not just from a standpoint of what we do in SEO and GEO for our clients, which is connecting a need and intent to the product or service that satisfies that intent, we’re also doing the same in our own marketing. We have to understand what our clients are looking for.

“[GEO] is the same [as SEO] if you’re doing it well. It’s not the same if you weren’t. And of course, there’s nuance.”

My thoughts are that SEOs who have been in the industry the longest are experiencing less disruption because they have seen it all before. They learned to be adaptable in the early years, when there was so much flux as we progressed from multiple search engines to just one. Anyone newer to the industry doesn’t have the same background points of reference.

Why Consensus Matters To Be Surfaced By LLMs

I went on to ask Grant about Google’s latest continuation patents, which describe two distinct systems that work together.

The first is what Grant describes as a response confidence engine. This system evaluates whether a passage can be corroborated, whether the information has consensus across the web.

“If they return a passage and they can corroborate that it is true, and when we say true, it’s true in the sense of more than one person is saying it, that doesn’t mean it’s true, but it means the consensus is there,” Grant explained. “The consensus generally wins out.”

The second system is what Grant calls a linkifying engine. Once a passage has been confirmed through consensus, this engine determines whether a specific sentence or sub-element within that passage, what Grant calls a “chunklet,” can be matched and linked to a source.

“Consensus decides whether it’s surfaced in the first place. The linkify engine actually decides whether it’s linkable, whether a citation is actually going to happen,” Grant said.

Getting mentioned by an LLM is one thing. Getting an actual link back to your content requires that the specific passage is both verifiable through consensus and uniquely attributable to your source.

Golden Knowledge Content Wins

So, what kind of content earns this kind of AI visibility? Grant described it as “golden knowledge,” content that is unique in some meaningful way.

“Generally, data-driven, your own data, your own opinion that’s proof-backed, evidence-backed. Taking a different view of things,” Grant said. “But in the same way of taking a different view, there still has to be some kind of consensus. If other people are agreeing with you, that is really important. Your content needs the uniqueness and the data-driven aspect, but it still has to align with the overall consensus on the web.”

Grant was also clear that while we often talk about writing for machines, the orientation should remain human-centered: “We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.”

This balance between uniqueness and consensus is perhaps the most actionable takeaway. Content that simply restates what everyone else is saying won’t stand out. But content that takes a position without corroboration elsewhere won’t pass the confidence threshold to be surfaced. The sweet spot is original, data-driven insight that others can and do validate.

The Biggest Mistakes SEOs Make With Topical Focus

When I asked Grant about the most common mistakes he sees with topical diversification on pages, his answer was clear: trying to be everything to everyone.

“When you think about intent, suddenly you understand that pages have a right to exist,” Grant said. “I call it path to satisfaction. Understanding who the audience is and what they need to find, you have to provide a path to that satisfaction.”

Grant pointed out that most SEOs inherit existing sites rather than building from scratch. The temptation is to focus on the surface-level optimizations, such as title tags, meta descriptions, and headers, without reviewing whether a page is actually focused on a specific intent or whether it has what he calls “drift.”

“What they won’t do is fundamentally review the page and understand whether that page is focused on a specific intent or whether it has this drift,” Grant explained. “Cleaning out those outliers, topics that you’re covering when you don’t really mean to, is essentially diffusing what the page means. Those are the things that I think SEOs miss out on.”

This ties directly back to LLM citability. If a page lacks clear topical focus, it becomes harder for AI systems to extract a self-contained passage that answers a specific query. Tightening that focus isn’t just good SEO; it’s the foundation of being visible in AI-generated responses.

Grant’s Strategy Recommendation For 2026

I finished by asking Grant what he’s recommending to his clients right now.

“Let’s double down on what’s working,” Grant said. “LLM traffic is so small today that optimizing for LLMs is important for the future but not for today’s metrics. Let’s improve our SEO. Let’s get to that great SEO level. And as we’re doing that, we are incorporating the elements that will help you show up for GEO, that will help show up on these other surfaces.”

His focus is on great content, topical authority, uniqueness, data-driven approaches, citations, and digital PR. In Grant’s words: “Getting content so good that LLMs can’t ignore you, Google can’t ignore you, and publications can’t ignore you.”

It’s the Steve Martin philosophy applied to SEO: “Be so good they can’t ignore you,” and, coincidence or not, the rule I have applied for the last 15 years in SEO.


Thank you to Grant Simmons for offering his insights and being my guest on IMHO.



Featured Image: Shelley Walsh/Search Engine Journal

Are Citations In AI Search Affected By Google Organic Visibility Changes? via @sejournal, @lilyraynyc

I recently wrote about an unconfirmed Google algorithm update that rolled out in mid-January 2026, which negatively impacted the organic search visibility of dozens of major brands. For most of the impacted sites I analyzed, the impact was disproportionately targeted at the company’s blog, or another folder containing informational articles and resources.

That same organic trajectory has continued into mid-February for all of the subfolders I analyzed, using the Sistrix U.S. Visibility Index:

Image Credit: Lily Ray

Zooming out, this is what the drops look like when you look at the visibility trends across the whole domains, not just the blogs:

Image Credit: Lily Ray

Here is another example of the visibility impact on the company blog for the biggest company in the list (in terms of both ARR and organic visibility):

Image Credit: Lily Ray

And this is what the impact looks like when you look at the company’s full domain’s visibility in organic search:

Image Credit: Lily Ray

Needless to say, these recent organic visibility drops were extreme, relative to the sites’ overall SEO trajectories over the past few years. Drilling down into 11 of the sites that saw extreme declines over the last month, I wanted to see if this new data could help answer another question:

Do drops in Google organic search visibility coincide with similar drops in AI search citations?

My working hypothesis is that these drops are no longer just isolated to traditional search. Instead, I suspect we will find that, for most LLMs, AI search citation trends mirror what happens in Google’s organic search results, for two reasons:

1. The Direct Pipeline: Google’s AI Ecosystem

For Google’s own AI products – AI Mode and Gemini – the correlation should be strongest. Presumably, Google is using its own index and top-ranking search results to formulate AI search responses; therefore, dropping in organic rankings should logically cause those pages to be cited and referenced less frequently in generative answers.

2. The Downstream Effects: Third-Party LLMs (ChatGPT & Perplexity)

The link between Google organic rankings and third-party LLMs like ChatGPT and Perplexity is more nuanced, as we don’t know exactly which search engines these LLMs are surfacing for web search.

While there is a growing body of evidence (and industry reporting) suggesting that ChatGPT likely scrapes Google during live web searches, we still technically lack official confirmation from the source. Perplexity, on the other hand, is currently believed to utilize the Brave Search API as a core part of its retrieval process, alongside its own specialized “PerplexityBot” crawler.

To test this out, I wanted to drill down into the subfolders that saw substantial visibility drops on Google in recent weeks, to see whether the trend line for AI search citations followed suit.

To start, I homed in on a list of 11 sites whose subfolders saw substantial organic traffic drops between January 20, 2026, and February 16, 2026.

I used the Ahrefs MCP server with Claude Cowork to pull in estimated global monthly organic traffic numbers for each path (subfolder) in the list. Because most of the traffic declines started around January 21, 2026, I pulled the projected monthly organic numbers for January 20, 2026 and the most recent date, February 16, 2026.

I also redacted the site names, leaving the name of the subfolder and a brief, anonymized summary about the company type and the subfolder’s purpose:

Image Credit: Lily Ray

These subfolders experienced anywhere from a -5.7% to -53.1% drop in estimated monthly organic search traffic since January 20th, 2026.

Using Ahrefs Brand Radar, you can drill down to see the number of AI search citations that a given subfolder has received across various LLMs over time. For example, here is the ChatGPT citation trend line for the first subfolder listed in the above table (U.S. data):

Image Credit: Lily Ray

This is the corresponding chart showing the organic traffic trend for this same subfolder, which began dropping around January 21, 2026:

Image Credit: Lily Ray

I used the Ahrefs MCP server with Claude Cowork to pull global traffic and citation data, and to analyze this same pattern for 11 of the subfolders that saw big drops.

Note on methodology: While 11 subfolders are a small sample size, I was specifically looking for a “clean” data set – subfolders experiencing a similar algorithmic demotion on Google during the unconfirmed January 2026 update. By narrowing the scope, I could better isolate whether a loss of traditional search visibility translates directly into a citation drop in AI search.
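
For anyone wanting to reproduce this kind of comparison with their own exports, the arithmetic is simple. Below is a minimal pandas sketch; the CSV file and column names are assumptions for illustration, since the data in this article was pulled via the Ahrefs MCP server rather than from a spreadsheet:

```python
# Minimal sketch: compare % change in organic traffic vs. % change in AI citations
# per subfolder. The CSV file and column names are placeholders for illustration.
import pandas as pd

df = pd.read_csv("subfolder_metrics.csv")
# expected columns: subfolder, organic_jan20, organic_feb16, citations_jan20, citations_feb16

df["organic_change_pct"] = (df["organic_feb16"] - df["organic_jan20"]) / df["organic_jan20"] * 100
df["citation_change_pct"] = (df["citations_feb16"] - df["citations_jan20"]) / df["citations_jan20"] * 100

print(df[["subfolder", "organic_change_pct", "citation_change_pct"]].round(1))

# How closely do the two sets of changes move together?
print("correlation:", df["organic_change_pct"].corr(df["citation_change_pct"]).round(2))
```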

Below are the high-level summaries of how organic traffic and citation counts changed across Google and various LLMs, including AI Mode, ChatGPT, Perplexity, and Gemini:

Image Credit: Lily Ray
Image Credit: Lily Ray

Findings:

  • The data shows a broad decline in both SEO traffic & AI search citations: Every subfolder in the study (11 of 11) experienced a drop in both Google organic traffic and total AI search citations, with a significant average citation decline of -22.5%.
  • Google’s AI Mode (-23.8%) and ChatGPT (-27.8%) showed the most severe declines, closely mirroring the -26.7% average drop in organic traffic.
  • While Gemini also saw broad declines (10 of 11 sites), Perplexity proved to be the most resilient, with only 4 of the 11 sites seeing a drop and a much milder average change of -2.9%.
    • This data supports the theory that Perplexity is primarily using non-Google search surfaces to generate its responses.

Looking at the changes in estimated organic traffic for each subfolder compared to total AI search citations between January 20 and February 16, 2026, the correlation is clear: Significant losses in organic search visibility are almost universally mirrored by a corresponding decline in AI search citations.

Image Credit: Lily Ray

Drilling down into specific LLMs, including Google’s AI Mode, ChatGPT, Perplexity, and Gemini, shows how the decline was nearly universal for most platforms, whereas Perplexity frequently displayed a significant divergence, showing positive citation growth for the majority of the subfolders despite their organic traffic losses.

Image Credit: Lily Ray

ChatGPT (green) consistently shows the deepest declines across almost every subfolder – often exceeding AI Mode and Gemini. This is intriguing because ChatGPT isn’t a Google product, yet it appears more sensitive to these organic ranking shifts than Google’s own Gemini.

This appears to be another clue that ChatGPT is reliant on Google’s search index during retrieval.

AI Mode and Gemini tend to move in the same direction but not the same magnitude. Despite both being Google products, AI Mode declines are generally steeper than Gemini’s. This could suggest they weight or source from Google’s organic index differently – perhaps AI Mode is more tightly coupled to live SERP rankings while Gemini draws from a broader or cached knowledge base.

The few sites where Perplexity did decline (e.g., Site J, Site K) are also the ones showing relatively smaller organic drops. So even in the cases where Perplexity tracked downward, it doesn’t appear to be correlated with the severity of the Google organic loss – further evidence that Perplexity is likely pulling from a different retrieval pipeline.

The below table shows all the organic search vs. AI search citation data in one place:

Image Credit: Lily Ray

The table reveals a clear pattern: every subfolder that lost organic visibility on Google also saw a decline in total AI search citations, with an average drop of -22.5% across all LLMs.

ChatGPT was the most severely impacted platform, with citation declines reaching as high as -42.3% (Site E) and exceeding -34% for five of the eleven subfolders – often surpassing even the organic traffic loss itself.

Google’s AI Mode followed a similar trajectory, while Gemini showed more moderate declines across the board.

The most notable outlier is Perplexity, which actually showed citation growth for 7 of the 11 subfolders, reinforcing the theory that it retrieves from a non-Google search index.

Perhaps the most interesting finding is that ChatGPT – a non-Google product – appears more tightly coupled to Google’s organic rankings than Google’s own Gemini, suggesting that ChatGPT’s web retrieval pipeline is heavily dependent on Google’s search results.

One recommendation I’ve been making since AI search entered the SEO conversation is that you shouldn’t invest in AEO/GEO tactics that could be detrimental to SEO performance. For example, using hidden prompt injections, cloaking, or self-promotional listicles (tactics that some have advocated for to boost AI search visibility) might be temporarily beneficial for AI search, but could cause massive headaches with Google and Bing’s organic search ranking algorithms down the line.

Now, we have even more evidence that AI search is fundamentally connected to SEO performance: If you drop in organic search, you can likely expect a corresponding drop in citations not only from Google’s own AI search products, but from other LLMs like ChatGPT, which appear to also be heavily reliant on Google’s search results.

The one notable exception is Perplexity, which showed citation growth for the majority of subfolders hit by the recent algorithm update. That said, it’s important to weigh this against the scale of traffic and LLM usage at stake. According to a recent article by Similarweb, ChatGPT received 5.8 billion web visits in August 2025, compared to 148.2 million for Perplexity.

To add to this, when you factor in Google’s organic search traffic – which still dwarfs all the AI search platforms combined – the vast majority of your search-driven visibility across both search engines and AI chatbots is still flowing through a pipeline where Google’s rankings dictate the outcome.

For the past year, the SEO industry has been asking how closely traditional SEO and AEO/GEO are really tied together. I think this data helps answer that question: Not only is a strong SEO foundation critical for AI search visibility, but tactics that hurt your organic rankings can have a cascading negative impact on your AI search citations as well. In other words, the fastest way to lose visibility in AI search might be to lose it in Google first.



This post was originally published on Lily Ray NYC Substack.


Featured Image: PeopleImages/Shutterstock

Enterprise SEO Operating Models That Scale In 2026 And Beyond via @sejournal, @billhunt

Most enterprises are still treating SEO as a marketing activity. That decision, whether intentional or accidental, is now a material business risk.

In the years ahead, SEO performance will not be determined by better tactics, better tools, or even better talent. It will be determined by whether leadership understands what SEO has become and restructures the organization accordingly. SEO is no longer simply a channel but an infrastructure, and infrastructure decisions are leadership decisions.

The Old SEO Question Is No Longer Relevant

For years, executives asked a familiar question: Are we doing SEO well? Or even more simply, are we ranking well in Google? 

That question assumed SEO was something you did, summed up as a collection of optimizations, audits, and campaigns applied after the fact. It made sense when search primarily ranked pages and rewarded incremental improvements. The more relevant question today is different: Is our organization structurally capable of being discovered, understood, and selected by modern search systems?

That is no longer a marketing question. It is an operating model question because AI optimization must become a team sport.

Search engines, and increasingly AI-driven systems, do not reward isolated optimizations. They reward coherence, structure, intent alignment, and machine-readable clarity across an entire digital ecosystem. Those outcomes are not created downstream. They are created by how an organization builds, governs, and scales its digital assets.

What Has Fundamentally Changed

To understand why enterprise SEO operating models must evolve, leadership first needs to understand what actually changed in search.

1. Search Systems Now Interpret Intent Before Retrieval

Modern search systems no longer treat queries as literal requests. They reinterpret ambiguous intent, expand queries through fan-out, explore multiple intent paths simultaneously, and retrieve information across formats and sources. Content no longer competes page-to-page. It competes concept-to-concept.

If an organization lacks clear intent modeling, structured topical coverage, and consistent entity representation, its content may never enter the retrieval set at all, regardless of how optimized individual pages appear.

2. Eligibility Now Precedes Ranking

This shift also changed the sequence of how visibility is earned. Ranking still matters, particularly for enterprises where much of the traffic still flows through traditional results. But ranking now occurs only after eligibility is established. As search experiences move toward synthesized answers and AI-driven surfaces, eligibility has become the prerequisite rather than the reward.

That eligibility is determined upstream by templates, data models, taxonomy, entity consistency, governance, and workflow design. These are not marketing decisions. They are organizational ones.

3. Enterprise SEO Has Crossed An Infrastructure Threshold

Enterprise SEO has always depended on infrastructure. What has changed is that modern search systems no longer compensate for structural shortcuts. In the past, rankings recovered, signals recalibrated, and messiness was often forgiven.

Today, AI-driven systems amplify inconsistency. Retrieval becomes selective, narratives persist, and structural debt compounds. Delivering results aligned to real searcher intent has shifted from a forgiving environment to a selective one, where visibility depends on how well the underlying system is designed. Taken together, these conditions define what a scalable enterprise SEO operating model actually looks like, not as a team or function, but as an organizational capability.

The Leadership Declaration: What Must Be True In 2026

Organizations that scale organic visibility in the coming years will share a small set of non-negotiable characteristics. These are not best practices. They are operating requirements.

Declaration #1: SEO Must Be Treated As Infrastructure

SEO must be treated as infrastructure. That means it moves from a downstream marketing function to a foundational digital capability. SEO requirements are embedded in platforms, standards are enforced through templates, and eligibility is designed before content is commissioned. When failures occur, they are treated like performance or security issues, not optional enhancements. If SEO depends on post-launch fixes, the operating model is already broken.

Declaration #2: SEO Must Live Upstream In Decision-Making

SEO must live upstream in decision-making. Search performance is created when decisions are made about site structure, content scope, taxonomy, product naming, localization strategy, data modeling, and internal linking frameworks. SEO cannot succeed if it only reviews outcomes; it must help shape inputs. This does not mean SEO dictates solutions. It means SEO defines non-negotiable discovery constraints, just as accessibility, performance, and security already do.

Declaration #3: SEO Requires Cross-Functional Accountability

SEO requires cross-functional accountability. Visibility depends on development, content, product, UX, legal, and localization teams working in concert, similar to a professional sports team. In most enterprises, SEO is measured on outcomes while other teams control the systems that produce them. That accountability gap must close. High-performing organizations define shared ownership of visibility, clear escalation paths, mandatory compliance standards, and executive sponsorship for search performance. Without this, SEO remains a negotiation rather than a capability.

Declaration #4: Governance Must Replace Guidelines

Governance must replace guidelines. Guidelines are optional; governance is enforceable. Scalable SEO requires mandatory standards, controlled templates, centralized entity definitions, enforced structured data policies, approved market deviations, and continuous compliance monitoring. This demands a Center of Excellence with authority, not just expertise. SEO cannot scale on influence alone.
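
To make this concrete, here is a minimal Python sketch of what an enforced structured data policy with centralized entity definitions could look like in a build pipeline. It is an illustration only: the canonical entity, required fields, and template markup below are hypothetical stand-ins for whatever a given Center of Excellence defines.

```python
import json

# Hypothetical single source of truth for the brand's Organization entity.
# In practice this would live in a governed repository, not inline in a script.
CANONICAL_ORG = {
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/example-corp"],
}

REQUIRED_KEYS = {"@type", "name", "url", "sameAs"}


def validate_org_markup(jsonld_str):
    """Return governance violations found in one template's Organization JSON-LD."""
    try:
        data = json.loads(jsonld_str)
    except json.JSONDecodeError:
        return ["JSON-LD does not parse"]

    errors = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        errors.append(f"missing required keys: {sorted(missing)}")
    for key in REQUIRED_KEYS & data.keys():
        if data[key] != CANONICAL_ORG[key]:
            errors.append(f"'{key}' deviates from the canonical entity definition")
    return errors


# A product-page template that has drifted from the central definition.
template_markup = json.dumps({
    "@type": "Organization",
    "name": "ExampleCorp",          # inconsistent brand name
    "url": "https://example.com",   # inconsistent canonical URL
})

for problem in validate_org_markup(template_markup):
    print("GOVERNANCE VIOLATION:", problem)
```

The specific checks matter less than where they run: when validation like this is wired into templates and deployment, deviations are caught automatically rather than discovered in a quarterly audit.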

Declaration #5: SEO Must Be Measured As A System

Finally, SEO must be measured as a system. Executives need to move beyond quarterly performance questions and instead assess structural eligibility across markets, intent coverage, entity coherence, template enforcement, and where – and why – visibility leaks. System-level measurement replaces page-level obsession.

This shift mirrors a broader issue I explored in a previous Search Engine Journal article on the questions CEOs should be asking about their websites, but rarely do. The core insight was that executive oversight often focuses on surface-level outcomes while missing systemic sources of risk, inefficiency, and value leakage.

SEO measurement suffers from the same blind spot. Asking how SEO “performed” this quarter obscures whether the organization is structurally capable of being discovered and represented accurately across modern search and AI-driven environments. The more meaningful questions are systemic: where visibility leaks, which teams own those failure points, and whether the underlying architecture enforces consistency at scale.

Measured this way, SEO stops being a reporting function and becomes an early warning system for digital effectiveness.

The Operating Model Divide

Enterprises will fall into two groups.

Some will remain tactical optimizers, where SEO lives in marketing, fixes happen after launch, paid media masks organic gaps, and AI visibility remains inconsistent. Others will become structural builders, embedding SEO into systems, defining requirements before creation, enforcing governance, and earning consistent retrieval and trust from AI-driven platforms.

The difference will not be effort. It will be organizational design.

The Clarifying Reality

Ranking still matters, particularly for enterprises where a significant share of traffic continues to flow through traditional results. What has changed is not its importance, but its position in the visibility chain. Before anything can rank, it must first be retrieved. Before it can be retrieved, it must be eligible. And eligibility is no longer determined by isolated optimizations, but by infrastructure – how content is structured, how entities are defined, and how consistently signals are enforced across systems.

Every enterprise already has an SEO operating model, whether it was designed intentionally or emerged by default. In the years ahead, that distinction will matter far more than most organizations expect.

SEO has become infrastructure. Infrastructure requires leadership because it shapes what the organization can reliably produce and how it is perceived at scale. The companies that win will not be the ones that optimize harder, but the ones that operate differently, by designing systems that search engines and AI-driven platforms can consistently discover, understand, and trust.

Featured Image: Anton Vierietin/Shutterstock

How To Set Up AI Prompt Tracking You Can Trust [Webinar] via @sejournal, @lorenbaker

Getting Real About AI Visibility Tracking

If you’re on the search or marketing team right now, you’ve probably been asked some version of: “Are we showing up in ChatGPT?” or “What’s our visibility in AI Overviews?”

And honestly? Most of us are still figuring that out.

Answer engines like ChatGPT, Perplexity, and Google AI Overviews have changed how people discover and evaluate solutions. Yet, we still see a lot of teams approaching AI visibility tracking the same way they’ve approached keyword tracking, and they’re just not the same.

Improper tracking leads to bad data that then gets used to make decisions. And bad decisions can be expensive.

That’s why we’re bringing in Nick Gallagher, Sr. SEO Strategy Director at Conductor, to walk through how to set up AI prompt tracking the right way. The goal is to walk away with a tracking framework you can actually trust.

What You’ll Learn

  • How AI prompt tracking works, and why the setup matters more than the volume of prompts you’re monitoring.
  • Best practices for choosing the right topics, prompts, and answer engines to track (a rough configuration sketch follows this list).
  • How to avoid common mistakes that lead to inaccurate or misleading AI visibility data.
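
As a rough illustration of why setup matters more than prompt volume, here is a minimal Python sketch of a deliberately small tracking configuration. Everything in it – the brand, topics, prompts, engines, and cadence – is hypothetical and not tied to Conductor’s product or the webinar content.

```python
# Hypothetical prompt-tracking configuration: a small, deliberately chosen set of
# topics and prompts per answer engine, rather than thousands of random prompts.
TRACKING_CONFIG = {
    "brand": "example-saas",
    "answer_engines": ["chatgpt", "perplexity", "google_ai_overviews"],
    "topics": {
        "project management software": [
            "what is the best project management software for small teams?",
            "compare project management tools for remote agencies",
        ],
        "resource planning": [
            "how do agencies forecast resource capacity?",
        ],
    },
    "runs_per_prompt_per_week": 5,  # repeat runs to smooth out answer variability
}


def expand_tracking_jobs(config):
    """Flatten the config into one job per (engine, topic, prompt) combination."""
    return [
        {"engine": engine, "topic": topic, "prompt": prompt}
        for engine in config["answer_engines"]
        for topic, prompts in config["topics"].items()
        for prompt in prompts
    ]


jobs = expand_tracking_jobs(TRACKING_CONFIG)
print(f"{len(jobs)} tracking jobs per run")  # 3 engines x 3 prompts = 9
```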

Why This Matters Right Now

A lot of the conversations I’ve been having with SEOs and in-house marketers lately come back to the same thing: they know AI search is important, but they don’t trust the data they’re getting. Nick is going to break down why that’s happening and give you a clear framework to fix it for smarter decision-making. 

If you’re trying to measure AI visibility and want to make sure you’re not building strategy on bad data, please join us.

Can’t make it live? Register anyway, and we’ll send you the on-demand recording.

4 Pillars To Turn Your “Sticky-Taped” Tech Stack Into a Modern Publishing Engine

This post was sponsored by WP Engine. The opinions expressed in this article are the sponsor’s own.

In the race for audience attention, digital marketers at media companies often have one hand tied behind their backs. The mission is clear: drive sustainable revenue, increase engagement, and stay ahead of technological disruptions such as LLMs and AI agents.

Yet, for many media organizations, execution is throttled by a “Sticky-taped stack”: a fragile patchwork of a legacy CMS and ad-hoc plugins. For a digital marketing leader, this isn’t just a technical headache; it’s a direct hit to the bottom line.

It’s time to examine the Fragmentation Tax, and why a new publishing standard is required to reclaim growth.

Fragmentation Tax: How A Siloed CMS, Disconnected Data & Tech Debt Are Costing You Growth

The Fragmentation Tax is the hidden cost of operational inefficiency. It drains budgets, burns out teams, and stunts the ability to scale. For digital marketing and growth leads, this tax is paid in three distinct “currencies”:

1. Siloed Data & Strategic Blindness.

When your ad server, subscriber database, and content tools exist as siloed work streams, you lose the ability to see the full picture of the reader’s journey.

Without integrated attribution, marketers are forced to make strategic pivots based on vanity metrics like generic pageviews rather than true business intelligence, such as conversion funnels or long-term reader retention.

2. The Editorial Velocity Gap.

In the era of breaking news, being second is often the same as being last. If an editorial team is forced into complex, manual workflows because of a fragmented tech stack, content reaches the market too late to capture peak search volume or social trends. This friction creates a culture of caution precisely when marketing needs a culture of velocity to capture organic traffic.

3. Tech Debt vs. Innovation.

Tech debt is the future cost of rework created by choosing “quick-and-dirty” solutions. This is a silent killer of marketing budgets. Every hour an engineering team spends fixing plugin conflicts or managing security fires caused by a cobbled-together infrastructure is an hour stolen from innovation.

The 4 Publishing Pillars That Improve SEO & Monetization

To stop paying this tax, media organizations are moving away from treating their workflows as a collection of disparate parts. Instead, they are adopting a unified system that eliminates the friction between engineering, editorial, and growth.

A modern publishing standard addresses these marketing hurdles through four key operational pillars:

Pillar 1: Automated Governance (Built-In SEO & Tracking Integrity)

Marketing integrity relies on consistency.

In a fragmented system, SEO metadata, tracking pixels, and brand standards are often managed manually, leading to human error.

A unified approach embeds governance directly into the workflow.

By using automated checklists, organizations ensure that no article goes live until it meets defined standards, protecting the brand and ensuring every piece of content is optimized for discovery from the moment of publication.
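
As an illustration of such a checklist, here is a minimal Python sketch of a pre-publish gate. The fields, length thresholds, and tracking requirement are hypothetical stand-ins for whatever standards a given organization enforces.

```python
from dataclasses import dataclass, field


@dataclass
class Article:
    title: str
    meta_description: str
    canonical_url: str
    has_structured_data: bool
    tracking_ids: list = field(default_factory=list)


def prepublish_checklist(article):
    """Return the list of failed checks; an empty list means the article can go live."""
    failures = []
    if not (30 <= len(article.title) <= 65):              # illustrative length policy
        failures.append("title length outside policy")
    if not (70 <= len(article.meta_description) <= 160):
        failures.append("meta description length outside policy")
    if not article.canonical_url.startswith("https://"):
        failures.append("canonical URL missing or not HTTPS")
    if not article.has_structured_data:
        failures.append("required structured data missing")
    if "GA4" not in article.tracking_ids:                  # hypothetical tracking requirement
        failures.append("analytics tracking not configured")
    return failures


draft = Article(
    title="Example Headline That Meets The Length Policy",
    meta_description="Too short.",
    canonical_url="https://www.example.com/news/example-headline",
    has_structured_data=True,
    tracking_ids=["GA4"],
)

failures = prepublish_checklist(draft)
print("Blocked:" if failures else "Cleared to publish:", failures)
```

A check like this could run inside the CMS or the deployment pipeline, so an article that fails governance simply cannot go live until it is fixed.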

Pillar 2: Fearless Iteration (Continuous SEO & CRO Optimization Without Risk)

High-traffic articles are a marketer’s most valuable asset. However, in a legacy stack, updating a live story – for instance, to add a call-to-action (CTA) – is often a high-risk maneuver that could break site layouts.

A modern unified approach allows for “staged” edits, enabling teams to draft and review iterations on live content without forcing those changes live immediately. This allows for a continuous improvement cycle that protects the user experience and site uptime.

Pillar 3: Cross-Functional Collaboration (Reducing Workflow Bottlenecks Between Editorial, SEO & Engineering)

Any type of technology disruption requires a team to collaborate in real-time. The “Sticky-taped” approach often forces teams to work in separate tools, creating bottlenecks.

A modern unified standard utilizes collaborative editing, separating editorial functions into distinct areas for text, media, and metadata. This allows an SEO specialist or a growth marketer to optimize a story simultaneously with the journalist, ensuring the content is “market-ready” the instant it’s finished.

Pillar 4: Native Breaking News Capabilities (Capturing Real-Time Search Demand)

Late-breaking or real-time events, such as global geopolitical shifts or live sports, require in-the-moment storytelling to keep audiences informed, engaged, and on-site. Traditionally, “Live Blogs” relied on clunky third-party embeds that fragmented user data and slowed page loads.

A unified standard treats breaking news as a native capability, enabling rapid-fire updates that keep the audience glued to the brand’s own domain, maximizing ad impressions and subscription opportunities.

Conclusion: Trading Toil for Agility

Ultimately, shifting to a unified standard is about reducing inefficiencies caused by “fighting the tools.” By removing the technical toil that typically hides insights in siloed tools, media organizations can finally trade operational friction for strategic agility.

When your site’s foundation is solid and fast, editors can hit “publish” without worrying about things breaking. At the same time, marketers can test new ways to grow the audience without waiting weeks for developers to update code. This setup clears the way for everyone to move faster and focus on what actually matters: telling great stories and connecting with readers.

The era of stitching software together with “sticky tape” is over. For modern media companies to thrive amid constant digital disruption, infrastructure must be a launchpad, not a hindrance. By eliminating the Fragmentation Tax, marketing leaders can finally stop surviving and start growing.

Jason Konen is director of product management at WP Engine, a global web enablement company that empowers companies and agencies of all sizes to build, power, manage, and optimize their WordPress® websites and applications with confidence.

Image Credits

Featured Image: Image by WP Engine. Used with permission.

In-Post Images: Image by WP Engine. Used with permission.

Google Text Ad Click Share Rises Sharply In Some Verticals via @sejournal, @MattGSouthern

An analysis of 16,000 U.S. search queries found that text ads gained 7 to 13 percentage points of click share between January 2025 and January 2026.

SEO consultant Aleyda Solis used Similarweb clickstream estimates to measure click share across classic organic results, SERP features, text ads, product listing ads (PLAs), and zero-click behavior.

She also tracked how often AI Overviews appeared on the page, but the dataset doesn’t attribute clicks to AI Overviews directly.

What The Data Shows

Text ads gained between 7 and 13 percentage points of click share across every vertical Solis analyzed.

In the headphones vertical (top 5,000 US queries), classic organic click share fell from 73% to 50%. Text ads grew from 3% to 16%, and PLAs grew from 13% to 20%. Combined paid results now capture 36% of clicks in that category, up from 16% a year earlier.

Jeans followed a similar pattern. Classic organic dropped from 73% to 56%, while combined paid results rose from 18% to 34%.

The online games vertical saw text ads more than quadruple, from 3% to 13%, even though the category historically had almost no ad presence.

In greeting cards, the only vertical where total clicks actually grew year over year, organic click share still fell from 88% to 75% as text ads nearly doubled.

The AI Overview presence on SERPs grew across all four verticals. Headphones saw AIO presence jump from 2.28% to 32.76%, and online games went from 0.38% to 29.80%. But the analysis measured how often AIOs appeared on the page, not how many clicks they captured or prevented.

Solis wrote:

“When I started this research, my hypothesis was that text ads and organic SERP features -not just AI Overviews- could be significant culprits behind declining organic clicks. The data confirmed this across all four verticals, and the scale of the text ad impact surprised me: they gained between +7 and +13 percentage points of click share in every vertical, making them the single biggest measurable driver of the organic decline.”

Independent Data Points To The Same Pattern

The SERP-level click data lines up with what advertisers are seeing from the other side.

Tinuiti’s Q4 2025 benchmark report found that Google text ad clicks in its dataset hit a 19-quarter high, growing 9% year over year. Overall Google search ad spend rose 13% in the quarter, up from 10% in Q3.

Google’s earnings tell a similar story. In its Q3 2025 report, Alphabet posted $102.3 billion in revenue, its first $100 billion quarter, with search ad revenue reaching $56.6 billion. CEO Sundar Pichai said AI features were expanding total query volume, including commercial queries.

More queries and more commercial intent create more ad inventory. The Similarweb data is consistent with more clicks shifting to paid placements in these verticals.

Why This Matters

The industry has spent much of the past year focused on AI Overviews as the explanation for declining organic clicks.

AIO presence is growing, and Google reported 1.5 billion monthly AIO users as of Q1 2025. But the click-share data indicates that text ads are another increasingly important factor to consider.

When diagnosing drops in organic traffic, it’s helpful to look at the SERP composition for your industry rather than assuming AI Overviews are the sole reason.

Looking Ahead

Data from different sources indicate that text ads are gaining click share.

Whether Google is actively expanding ad placements or advertisers are bidding more aggressively on existing inventory is unknown.

What you can do now is track SERP composition changes in your own vertical, using tools that measure click distribution rather than rankings alone.
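
As a starting point, here is a minimal Python sketch of that kind of monitoring. The rows below loosely mirror the headphones figures reported above, but the categories and numbers are illustrative, and real clickstream exports will differ by tool.

```python
from collections import defaultdict

# Illustrative clickstream rows: estimated clicks per (month, SERP element) for one
# vertical. The split roughly echoes the headphones example: organic 73% -> 50%,
# text ads 3% -> 16%, PLAs 13% -> 20%.
rows = [
    {"month": "2025-01", "element": "organic",  "clicks": 730},
    {"month": "2025-01", "element": "text_ads", "clicks": 30},
    {"month": "2025-01", "element": "plas",     "clicks": 130},
    {"month": "2025-01", "element": "other",    "clicks": 110},
    {"month": "2026-01", "element": "organic",  "clicks": 500},
    {"month": "2026-01", "element": "text_ads", "clicks": 160},
    {"month": "2026-01", "element": "plas",     "clicks": 200},
    {"month": "2026-01", "element": "other",    "clicks": 140},
]


def click_share_by_month(rows):
    """Return {month: {element: share_percent}} so shifts in SERP composition are visible."""
    totals = defaultdict(lambda: defaultdict(float))
    for r in rows:
        totals[r["month"]][r["element"]] += r["clicks"]
    return {
        month: {
            element: round(100 * clicks / sum(elements.values()), 1)
            for element, clicks in elements.items()
        }
        for month, elements in totals.items()
    }


for month, distribution in sorted(click_share_by_month(rows).items()):
    print(month, distribution)
```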