Google recently fixed a bug that enabled anyone to anonymously use an official Google tool to remove any URL from Google search and get away with it. The tool had the potential to be used to devastate competitor rankings by removing their URLs completely from Google’s index. Google had known about the bug since 2023 but until now had not taken action to fix it.
Tool Exploited For Reputation Management
A report by the Freedom of the Press Foundation recounted the case of a tech CEO who had employed numerous tactics to “censor” negative reporting by a journalist, ranging from legal action to identify the reporter’s sources to an “intimidation campaign” via the San Francisco city attorney and a DMCA takedown request.
Through it all, the reporter and the Freedom of the Press Foundation prevailed in court, and the article at the center of the actions remained online until it began getting removed through abuse of Google’s Remove Outdated Content tool. Restoring the web page with Google Search Console was easy, but the abuse continued. This led them to open a discussion on the Google Search Console Help Community.
The person posted a description of what was happening and asked if there was a way to block abuse of the tool. The post alleged that the attacker was choosing a word that was no longer in the original article and using it as the basis for claiming the article was outdated and should be removed from Google’s search index.
This is what the report on Google’s Help Community explained:
“We have a dozen articles that got removed this way. We can measure it by searching Google for the article, using the headline in quotes and with the site name. It shows no results returned.
Then, we go to GSC and find it has been “APPROVED” under outdated content removal. We cancel that request. Moments later, the SAME search brings up an indexed article. This is the 5th time we’ve seen this happen.”
Four Hundred Articles Deindexed
What was happening was an aggressive attack against a website, and Google apparently was unable to do anything to stop the abuse, leaving the user in a very bad position.
In a follow-up post, they explained the devastating effect of the sustained negative SEO attack:
“Every week, dozens of pages are being deindexed and we have to check the GSC every day to see if anything else got removed, and then restore that.
We’ve had over 400 articles deindexed, and all of the articles were still live and on our sites. Someone went in and submitted them through the public removal tool, and they got deindexed.”
Google Promised To Look Into It
They asked if there was a way to block the attacks, and Google’s Danny Sullivan responded:
“Thank you — and again, the pages where you see the removal happening, there’s no blocking mechanism on them.”
Danny responded to a follow-up post, saying that they would look into it:
“The tool is designed to remove links that are no longer live or snippets that are no longer reflecting live content. We’ll look into this further.”
How Google’s Tool Was Exploited
The initial report said that the negative SEO attack was leveraging changed words within the content to file successful outdated content removal requests. But it appears the victims later discovered that another attack method was being used.
Google’s Outdated Content Removal tool is case sensitive when it crawls: if you submit a URL containing an uppercase letter, the crawler will go out to specifically check for the uppercase version, and if the server returns a 404 Not Found error response, Google will remove all versions of the URL.
The Freedom of the Press Foundation writes that the tool is case insensitive, but that is only half the story. The crawl check is case sensitive, which is why the uppercase variant of a lowercase URL returns a 404. The removal step, however, is case agnostic: it removes every version of the URL, including the legitimate lowercase one that is still live. That mismatch between the case-sensitive check and the case-agnostic removal is the flaw the attacker exploited.
By the way, the victim of the attack could have created a workaround by rewriting all requests for uppercase URLs to lowercase and enforcing lowercase URLs across the entire website, so that any submitted uppercase variant would redirect instead of returning a 404.
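To illustrate that workaround (this is not Google’s fix, and not what the victim actually ran), here is a minimal sketch of a WSGI middleware that permanently redirects any mixed-case path to its lowercase equivalent, so a submitted uppercase variant never returns a 404; in practice this is usually done at the web server or CDN level instead.

```python
# Hypothetical sketch: force lowercase URL paths with a 301 redirect so an
# attacker-submitted uppercase variant (e.g., /My-Article) never 404s.
def lowercase_redirect_middleware(app):
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path != path.lower():
            query = environ.get("QUERY_STRING", "")
            location = path.lower() + (f"?{query}" if query else "")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)  # already lowercase: pass through
    return middleware
```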
Here’s how the Freedom of the Press Foundation described it:
“Our article… was vanished from Google search using a novel maneuver that apparently hasn’t been publicly well documented before: a sustained and coordinated abuse of Google’s “Refresh Outdated Content” tool.
This tool is supposed to allow those who are not a site’s owner to request the removal from search results of web pages that are no longer live (returning a “404 error”), or to request an update in search of web pages that display outdated or obsolete information in returned results.
However, a malicious actor could, until recently, disappear a legitimate article by submitting a removal request for a URL that resembled the target article but led to a “404 error.” By altering the capitalization of a URL slug, a malicious actor apparently could take advantage of a case-insensitivity bug in Google’s automated system of content removal.”
Other Sites Affected By This Exploit
Google responded to the Freedom of the Press Foundation and admitted that this exploit did, in fact, affect other sites.
They are quoted as saying the issue only impacted a “tiny fraction of websites” and that the wrongly impacted sites were reinstated.
Google responded by email to note that this bug has been fixed.
Bill Hunt is a true pioneer in the industry, with more than 25 years of experience working on the websites of some of the largest multinationals. Having built two large digital/search agencies, one of which was acquired by Ogilvy, Bill has now moved into consulting focused on repositioning search to leverage marketing for shareholder growth.
His approach is not myopic, surface-level SEO; as an enterprise specialist, he looks at what users actually want from their online experience. He connects the dots between search visibility, user experience, and business value for real results.
Bill is currently writing a series for Search Engine Journal about connecting search visibility to business value, and I spoke to him for IMHO to find out why he thinks SEO is currently not working.
“SEOs are creatures of habit. To succeed now, we need to unlearn and relearn how discovery actually works.”
The Real Problems Aren’t What You Think
I started out by asking Bill why SEO isn’t working, and his key message was not that SEO is broken, but that there is paralysis, distraction from AI hype, and a neglect of fundamentals:
“I think there are three key problems right now. One is paralysis. We see that clients put search on pause, especially organic search, because they just don’t know what to do.
The second is the distraction with all the hype around the AI thing.
I mean, there’s a different acronym every day. So, which do we do? Are we chasing answers? Are we doing LLM index files or whatever craziness comes out?
And then the third is that there’s such a distraction from all this that a lot of the fundamentals aren’t being covered. And I think that’s where the problem is.”
Bill emphasized that the impact varies significantly by business type. Information-based businesses have been significantly affected because AI now directly answers queries that previously drove traffic to their sites. However, many other businesses might not be negatively impacted if they understand what’s actually changed.
Three Fundamental Shifts To Pay Attention To
Bill went on to talk about how three core changes have reshaped search, and understanding them is crucial for adaptation:
Intent understanding has evolved: Everything is about what did they search for? What are they hoping to see?
Friction must be removed: Platforms reward the path of least resistance.
Monetization is leading the way: It’s not just about helpful, but also about profitable.
Bill used an example from his work with Absolut Vodka.
“When I was working with Absolut Vodka, we had a drink site that was really just an awareness driver, and every month we sat down and we looked at Google’s search results and said, ‘If we were Google, what would we be changing around drinks or recipes or things like that?’
And so, by looking at the results, we could see, little by little, [that] somebody [was] looking for yellow cocktails. What should Google present?”
Rather than just optimizing for rankings, his team studied Google’s interface changes and adapted their visual content accordingly.
“We started focusing on the drink, bringing it front and center, amplifying the colors, the ingredients, and more and more people clicked.
We were generating millions and millions of visits because every step that Google was making to create a different user experience, we were trying to accommodate it.”
Bill believes that the idea of intent is still crucial. Considering how users just want to get to an answer, we must think about how they discover information and how we then present information to them.
“I think that’s really it in a nutshell. All of this change has paralyzed us and distracted us, and we need to recenter and refocus.
And that’s really a key part of what this series [at SEJ] is about: How do we refocus? How do we rethink this, both from a strategic point of view, from a shareholder value standpoint, and from a simple workflow standpoint?”
As generative AI becomes embedded into how users explore and consume information, Bill warned against assuming that originality is enough to get discovered.
“AI systems synthesize consensus. If you’re saying something radically different, you won’t show up unless you connect it to what people already know.”
So, I asked Bill if you are creating this original content, how do you teach the systems to see you?
Bill’s advice is that to succeed in AI search environments, businesses need to:
Link new ideas to familiar terms.
Reflect user language and legacy concepts.
Be explicit in bridging the gap between old and new methods.
Otherwise, you risk being invisible to LLMs and answer engines that rely on summarizing well-established viewpoints.
“If you’re stating that you’re radically different, you’re not going to be shown because you’re radically different. So, you have to connect, and this is what I put in that article. You need to connect back to the consensus idea.
If you’re saying you’ve got a new way to cut bread, you have to talk about the old way to cut bread and connect it to a more efficient or easier way to do it.”
Is Your Product Even Discoverable?
The most practical insight from our conversation concerned how people can discover your brand or your product.
Historically, keyword research has been focused on connecting to searches that have existing search volume. But, if somebody doesn’t know a product exists to solve a problem, how would they search for it?
“I used to tell companies, if somebody doesn’t know a product exists to solve a problem, how would they search for it?
They would use the problem or symptoms of the problem. If they know a product exists but don’t know you exist, how would they search for it?”
Bill recommended that you run searches for problems related to your product and see if you show up. Search as if you know the solution exists, but not your brand.
And if you don’t surface, ask yourself why not?
“Take the symptoms people have, go into any tool you want, Google, Perplexity, ChatGPT, Gemini, and search and see if you come up.
If you don’t come up, the very next question you should ask is, ‘Why isn’t this product or this company in your result set?’ That’s probably the single most illuminating thing a senior executive can do…
When it tells you that you don’t have the answer, your very next step is, ‘How do we then create the answer, and then how do we get it into these?’”
This kind of query-path analysis is more revealing than traditional keyword research because it aligns with how people actually search, especially in AI environments that interpret broader queries.
Moving Forward: Back To Basics
Despite all the AI disruption, Bill recommends a return to fundamental principles. Companies need to ensure they’re indexable, crawlable, and seen as authorities in their space. These are the same core elements that have always mattered for search visibility.
“Who got cited? Who was number one? And Larry and Sergey said, ‘Well, if they’re cited most frequently as a source for a question, shouldn’t they be?’”
The key difference is that these fundamentals now operate in an AI-enhanced environment where understanding user intent and creating relevant, engaging content matter more than ever.
And if you want to find answers, ask the tools; they can tell you everything you need to know.
“I would tell everybody to go do that query and do the follow-up saying why aren’t we there? And you’d be surprised how efficient these tools are at telling you what you need to do to close that gap.”
Rather than panicking about AI destroying SEO, organizations should focus on understanding what’s actually changed and adapting their strategies accordingly.
The fundamentals remain solid; they just need to be applied in new ways.
You can watch the full interview with Bill Hunt below:
Don’t miss the new series that Bill is currently writing for SEJ about connecting the dots between search visibility, user experience, and business value. It will not only help CMOs but also help search marketers get buy-in from CMOs.
Thank you to Bill Hunt for offering his insights and being my guest on IMHO.
New research from enterprise search marketing platform BrightEdge discovered differences in how Google and ChatGPT surface content. These differences matter to digital marketers and content creators because they show how content is recommended by each system. Recognizing the split enables brands to adapt their content strategies to stay relevant across both platforms.
BrightEdge’s findings were surfaced through an analysis of B2B technology, education, healthcare, and finance queries. It’s possible to cautiously extrapolate the findings to other niches where there could be divergences in how Google and ChatGPT respond, but that’s highly speculative, so this article won’t do that.
Core Differences: Task Vs. Information Orientation
BrightEdge’s research discovered that ChatGPT and Google AI Overviews take two different approaches to helping users take action. ChatGPT is more likely to recommend tools and apps, behaving in the role of a guide for making immediate decisions. Google provides informational content that encourages users to read before acting. This difference matters for SEO because it enables content creators and online stores to understand how their content is processed and presented to users of each system.
BrightEdge explains:
“In task-oriented prompts, ChatGPT overwhelmingly suggests tools and apps directly, while Google continues to link to informational content. While Google thrives as a research assistant, ChatGPT acts like a trusted coach for decision making, and that difference shapes which tool users instinctively choose for different needs.”
Divergence On Action-Oriented Queries
ChatGPT and Google tend to show similar kinds of results when users are querying for comparisons, but the results begin to diverge when the user intent implies they want to act. BrightEdge found that prompts about credit card comparisons or learning platforms generated similar kinds of results.
Questions with an action intent, like “how to create a budget” or “learn Python,” lead to different answers. ChatGPT appears to treat action intent prompts as requiring a response with tools, while Google treats them as requiring information.
BrightEdge notes that Healthcare has the highest rate of divergence:
“At 62% divergence, healthcare demonstrates the most significant split between platforms.
When prompts pertain to symptoms or medical information, both ChatGPT and Google will mention the CDC and The Mayo Clinic.
However, when prompted to help with things like “How to find a doctor,” ChatGPT pushes users towards Zocdoc, while Google points to hospital directories.”
The B2B technology niche has the second-highest level of divergence:
“When comparing technology, such as cloud platforms, both suggest AWS and Azure.
When asked “How to deploy things (such as specific apps),” ChatGPT relies on tools like Kubernetes and the AWS CLI, while Google offers tutorials and Stack Overflow.”
Education follows closely behind B2B technology:
“At 45% divergence, education follows the same trend.
When comparing “Best online learning platforms,” both platforms surface Coursera, EdX, and LinkedIn Learning.
When a user’s prompt pertains to learning a skill such as “How to learn Python,” ChatGPT recommends Udemy, whereas Google directs users to user-generated content hubs like GitHub and Medium.”
Finance shows the lowest levels of divergence, at 39%.
BrightEdge concludes that this represents a “fundamental shift” in how AI platforms interpret intent, which means that marketers need to examine the intent behind the search results for each platform and make content strategy decisions based on that research.
Tools Versus Topics
BrightEdge uses the example of the prompt “What are some resources to help plan for retirement?” to show how Google and ChatGPT differ. ChatGPT offers calculators and tools that users can act on, while Google suggests topics for further reading.
Screenshot Of ChatGPT Responding With Financial Tools
There’s a clear difference in the search experience for users. Marketers, SEOs, and publishers should consider how to meet both types of expectations: practical, action-based responses from ChatGPT and informational content from Google.
Takeaways
Split In User Intent Interpretation: Google interprets queries as requests for information, while ChatGPT tends to interpret many of the same queries as a call for action that’s solved by tools.
Platform Roles: ChatGPT behaves like a decision-making coach, while Google acts as a research assistant.
Domain-Specific Differences: Healthcare has the highest divergence (62%), especially in task-based queries like finding a doctor. B2B Technology (47%) and Education (45%) also show significant splits in how guidance is delivered. Finance shows the least divergence (39%) in how results are presented.
Tools vs. Topics: ChatGPT recommends actionable resources; Google links to authoritative explainer content.
SEO Insight: Content strategies must reflect each platform’s interpretation of intent. For example, creating actionable responses for ChatGPT and comprehensive informational content for Google. This may even mean creating and promoting a useful tool that can surface in ChatGPT.
BrightEdge’s research shows that, for some queries, Google and ChatGPT interpret the same user intent in profoundly different ways. While Google treats action-oriented queries as a prompt to deliver informational content, ChatGPT responds by recommending tools and services users can immediately act on. This divergence means marketers and content creators need to understand when ChatGPT delivers actionable responses so they can create platform-specific content and web experiences.
This post was sponsored by Peec.ai. The opinions expressed in this article are the sponsor’s own.
The first step of any good GEO campaign is creating something that LLM-driven answer machines actually want to link out to or reference.
GEO Strategy Components
Think of experiences you wouldn’t reasonably expect to find directly in ChatGPT or similar systems:
Engaging content like a 3D tour of the Louvre or a virtual reality concert.
Live data like prices, flight delays, available hotel rooms, etc. While LLMs can integrate this data via APIs, I see the opportunity to capture some of this traffic for the time being.
Topics that require EEAT (experience, expertise, authoritativeness, trustworthiness).
LLMs cannot have first-hand experience. But users want it. LLMs are incentivized to reference sources that provide first-hand experience. That’s just one of the things to keep in mind, but what else?
We need to differentiate between two approaches: influencing foundational models versus influencing LLM answers through grounding. The first is largely out of reach for most creators, while the second offers real opportunities.
Influencing Foundational Models
Foundational models are trained on fixed datasets and can’t learn new information after training. For current models like GPT-4, it is too late – they’ve already been trained.
But this matters for the future: imagine a smart fridge stuck with o4-mini from 2025 that might – hypothetically – favor Coke over Pepsi. That bias could influence purchasing decisions for years!
Optimizing For RAG/Grounding
When LLMs can’t answer from their training data alone, they use retrieval augmented generation (RAG) – pulling in current information to help generate answers. AI Overviews and ChatGPT’s web search work this way.
As SEO professionals, we want three things:
Our content gets selected as a source.
Our content gets quoted most within those sources.
Other selected sources support our desired outcome.
Concrete Steps To Succeed With GEO
Don’t worry, it doesn’t take rocket science to optimize your content and brand mentions for LLMs. Actually, plenty of traditional SEO methods still apply, with a few new SEO tactics you can incorporate into your workflow.
Step 1: Be Crawlable
Sounds simple but it is actually an important first step. If you aim for maximum visibility in LLMs, you need to allow them to crawl your website. There are many different LLM crawlers from OpenAI, Anthropic & Co.
Some of them behave so badly that they can trigger scraping and DDoS preventions. If you are automatically blocking aggressive bots, check in with your IT team and find a way to not block LLMs you care about.
If you use a CDN, like Fastly or Cloudflare, make sure LLM crawlers are not blocked by default settings.
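As a hedged, minimal way to spot-check the robots.txt side of this (CDN or firewall blocks will not show up here), a short script with Python’s standard library can query the live file; the user-agent tokens below are common examples, so verify each vendor’s current documentation:

```python
# Minimal sketch: check whether robots.txt blocks common LLM crawlers.
# User-agent tokens are examples; confirm each vendor's current token.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # hypothetical domain: replace with yours
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for bot in AI_BOTS:
    allowed = parser.can_fetch(bot, f"{SITE}/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```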
Step 2: Continue Gaining Traditional Rankings
The most important GEO tactic is as simple as it sounds. Do traditional SEO. Rank well in Google (for Gemini and AI Overviews), Bing (for ChatGPT and Copilot), Brave (for Claude), and Baidu (for DeepSeek).
Step 3: Target the Query Fanout
The current generation of LLMs actually does a little more than simple RAG. They generate multiple queries. This is called query fanout.
For example, when I recently asked ChatGPT “What is the latest Google patent discussed by SEOs?”, it performed two web searches for “latest Google patent discussed by SEOs patent 2025 SEO forum” and “latest Google patent SEOs 2025 discussed”.
Advice: Check the typical query fanouts for your prompts and try to rank for those keywords as well.
Typical fanout patterns I see in ChatGPT are appending the term “forums” when I ask what people are discussing, and appending “interview” when I ask questions related to a person. The current year (2025) is often added as well.
Beware: fanout patterns differ between LLMs and can change over time. Patterns we see today may not be relevant anymore in 12 months.
Step 4: Keep Consistency Across Your Brand Mentions
This is something simple everyone should do – both as a person and an enterprise. Make sure you are consistently described online. On X, LinkedIn, your own website, Crunchbase, GitHub – always describe yourself the same way.
If your X and LinkedIn profiles say you are a “GEO consultant for small businesses”, don’t change it to “AIO expert” on GitHub and “LLMO Freelancer” in your press releases.
I have seen people achieve positive results within a few days on ChatGPT and Google AI Overviews by simply having a consistent self-description across the web. The same principle applies to PR coverage, covered in Step 10 below.
Step 5: Avoid JavaScript
As an SEO, I always ask for as little JavaScript usage as possible. As a GEO, I demand it!
Most LLM crawlers cannot render JavaScript. If your main content is hidden behind JavaScript, you are out.
Step 6: Embrace Social Media & UGC
Unsurprisingly, LLMs seem to rely on Reddit and Wikipedia a lot. Both platforms offer user-generated content on virtually every topic. And thanks to multiple layers of community-driven moderation, a lot of junk and spam is already filtered out.
While both can be gamed, the average reliability of their content is still far better than the internet as a whole. Both are also regularly updated.
Reddit also gives LLM labs insight into how people discuss topics online, what language they use to describe different concepts, and knowledge of obscure niche topics.
We can reasonably assume that moderated UGC found on platforms like Reddit, Wikipedia, Quora, and Stack Overflow will stay relevant for LLMs.
I do not advocate spamming these platforms. However, if you can influence how you and competitors show up there, you might want to do so.
Step 7: Create For Machine-Readability & Quotability
Write content that LLMs understand and want to cite. No one has figured this one out perfectly yet, but here’s what seems to work:
Use declarative and factual language. Instead of writing “We are kinda sure this shoe is good for our customers”, write “96% of buyers have self-reported to be happy with this shoe.”
Add schema. It has been debated many times. Recently, Fabrice Canel (Principal Product Manager at Bing) confirmed that schema markup helps LLMs to understand your content.
If you want to be quoted in an already existing AI Overview, have content with a similar length to what is already there. While you should not just copy the current AI Overview, having high cosine similarity helps (see the sketch after this list). And for the nerds: yes, given normalization, you can of course use the dot product instead of cosine similarity.
If you use technical terms in your content, explain them. Ideally in a simple sentence.
Add summaries of long text paragraphs, lists of reviews, tables, videos, and other types of difficult-to-cite content formats.
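For the cosine-similarity point above, here is a small sketch of the comparison itself; how you embed the two texts is up to you (any embedding model is an assumption here), and the normalization comment covers the dot-product aside:

```python
# Sketch of the cosine-similarity comparison; the input vectors would come
# from whatever embedding model you use (a placeholder assumption here).
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With unit-normalized vectors the norms are 1, so cosine similarity reduces
# to the plain dot product -- the "for the nerds" note in the list above.
print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 1.0, 1.0]))  # ~0.816
```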
Step 8: Optimize Your Content
The original GEO paper
If we look at GEO: Generative Engine Optimization (arXiv:2311.09735), What Evidence Do Language Models Find Convincing? (arXiv:2402.11782v1), and similar scientific studies, the answer is clear. It depends!
To be cited for some topics in some LLMs, it helps to:
Add unique words.
Have pro/cons.
Gather user reviews.
Quote experts.
Include quantitative data and name your sources.
Use easy-to-understand language.
Write with positive sentiment.
Add product text with low perplexity (predictable and well-structured).
Include more lists (like this one!).
However, for other combinations of topics and LLMs, these measures can be counterproductive.
Until broadly accepted best practices evolve, the only advice I can give is to do what is good for users and run experiments.
Step 9: Stick to the Facts
For over a decade, algorithms have extracted knowledge from text as triples like (Subject, Predicate, Object) — e.g., (Lady Liberty, Location, New York). A text that contradicts known facts may seem untrustworthy. A text that aligns with consensus but adds unique facts is ideal for LLMs and knowledge graphs.
So stick to the established facts. And add unique information.
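As a toy illustration of that triple framing (not any specific extraction system), a sketch like this shows why contradicting a stored fact reads differently from adding a unique one:

```python
# Toy illustration of (Subject, Predicate, Object) triples; the facts and
# claims below are placeholders, not a real knowledge graph.
known_facts = {
    ("Lady Liberty", "Location", "New York"),
}

claim = ("Lady Liberty", "Location", "Paris")
established = {(s, p) for s, p, _ in known_facts}

if (claim[0], claim[1]) in established and claim not in known_facts:
    print("Claim contradicts the consensus fact -- may read as untrustworthy.")
elif claim not in known_facts:
    print("Claim adds a unique fact alongside the consensus.")
```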
Step 10: Invest in Digital PR
Everything discussed here is not just true for your own website. It is also true for content on other websites. The best way to influence it? Digital PR!
The more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.
I have even seen cases where advertorials were used as sources!
Concrete GEO Workflows To Try
Before I joined Peec AI, I was a customer. Here is how I used the tool – and how I advise our customers to use it.
Learn Who Your Competitors Are
Just like with traditional SEO, using a good GEO tool will often reveal unexpected competitors. Regularly look at a list of automatically identified competitors. For those who surprise you, check in which prompts they are mentioned. Then check the sources that led to their inclusion. Are you represented properly in these sources? If not, act!
Is a competitor referenced because of their PeerSpot profile but you have zero reviews there? Ask customers for a review.
Was your competitor’s CEO interviewed by a YouTuber? Try to get on that show as well. Or publish your own videos targeting similar keywords.
Is your competitor regularly featured on top 10 lists where you never make it to the top 5? Offer the publisher who created the list an affiliate deal they cannot decline. With the next content update, you’re almost guaranteed to be the new number one.
Understand the Sources
When performing search grounding, LLMs rely on sources.
Look at the top sources for a large set of relevant prompts. Ignore your own website and your competitors for a second. You might find some of these:
A community like Reddit or X. Become part of the community and join the discussion. X is your best bet to influence results on Grok.
An influencer-driven website like YouTube or TikTok. Hire influencers to create videos. Make sure to instruct them to target the right keywords.
An affiliate publisher. Buy your way to the top with higher commissions.
A news and media publisher. Buy an advertorial and/or target them with your PR efforts. In certain cases, you might want to contact their commercial content department.
Once you have observed which searches are triggered by query fanout for your most relevant prompts, create content to target them.
On your own website. With posts on Medium and LinkedIn. With press releases. Or simply by paying for article placements. If it ranks well in search engines, it has a chance to be cited by LLM-based answer engines.
Position Yourself for AI-Discoverability
Generative Engine Optimization is no longer optional – it’s the new frontline of organic growth. At Peec AI, we’re building the tools to track, influence, and win in this new ecosystem. We currently see clients growing their LLM traffic by 100% every 2 to 3 months, sometimes with up to 20x the conversion rate of typical SEO traffic!
Whether you’re shaping AI answers, monitoring brand mentions, or pushing for source visibility, now is the time to act. The LLMs consumers will trust tomorrow are being trained today.
Google’s Gary Illyes discussed the concept of “centerpiece content,” how they go about identifying it, and why soft 404s are the most critical error that gets in the way of indexing content. The context of the discussion was the recent Google Search Central Deep Dive event in Asia, as summarized by Kenichi Suzuki.
Main Body Content
According to Gary Illyes, Google goes to great lengths to identify the main content of a web page. The phrase “main content” will be familiar to those who have read Google’s Search Quality Rater Guidelines. The concept of “main content” is first introduced in Part 1 of the guidelines, in a section that teaches how to identify main content, which is followed by a description of main content quality.
The quality guidelines define main content (aka MC) as:
“Main Content is any part of the page that directly helps the page achieve its purpose. MC can be text, images, videos, page features (e.g., calculators, games), and it can be content created by website users, such as videos, reviews, articles, comments posted by users, etc. Tabs on some pages lead to even more information (e.g., customer reviews) and can sometimes be considered part of the MC.
The MC also includes the title at the top of the page (example). Descriptive MC titles allow users to make informed decisions about what pages to visit. Helpful titles summarize the MC on the page.”
Google’s Illyes referred to main content as the centerpiece content, saying that it is used for “ranking and retrieval.” The content in this section of a web page has greater weight than the content in the footer, header, and navigation areas (including sidebar navigation).
“Google’s systems heavily prioritize the “main content” (which he also calls the “centerpiece”) of a page for ranking and retrieval. Words and phrases located in this area carry significantly more weight than those in headers, footers, or navigation sidebars. To rank for important terms, you must ensure they are featured prominently within the main body of your page.”
Content Location Analysis To Identify Main Content
This part of Illyes’ presentation is important to get right. Gary Illyes said that Google analyzes the rendered web page to locate the content so that it can assign the appropriate amount of weight to the words located in the main content.
This isn’t about identifying the position of keywords on the page. It’s about identifying where the main content sits within a web page.
Here’s what Suzuki transcribed:
“Google performs positional analysis on the rendered page to understand where content is located. It then uses this data to assign an importance score to the words (tokens) on the page. Moving a term from a low-importance area (like a sidebar) to the main content area will directly increase its weight and potential to rank.”
Insight: Semantic HTML is an excellent way to help Google identify the main content and the less important areas. Semantic HTML makes web pages less ambiguous because it uses HTML elements to identify the different areas of a web page, like the top header section, navigational areas, footers, and even to identify advertising and navigational elements that may be embedded within the main content area. This technical SEO process of making a web page less ambiguous is called disambiguation.
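To make that insight concrete, here is a hedged sketch using a hypothetical page outline and BeautifulSoup (standing in for any HTML parser) to show how semantic elements make the centerpiece trivial to isolate:

```python
# Hedged sketch: with semantic HTML, the main content is easy to isolate.
# The page outline is hypothetical; requires beautifulsoup4.
from bs4 import BeautifulSoup

html = """
<body>
  <header>Site logo and tagline</header>
  <nav>Home | Blog | Contact</nav>
  <main><h1>How To Repot A Cactus</h1><p>Use gloves and fresh soil.</p></main>
  <footer>Copyright and legal links</footer>
</body>
"""

soup = BeautifulSoup(html, "html.parser")
main = soup.find("main")  # the semantically marked main content area
print(main.get_text(" ", strip=True))
```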
Tokenization Is The Foundation Of Google’s Index
Because of the prevalence of AI technologies today, many SEOs are aware of the concept of tokenization. Google also uses tokenization to convert words and phrases into a machine-readable format for indexing. What gets stored in Google’s index isn’t the original HTML; it’s the tokenized representation of the content.
Soft 404s Are A Critical Error
This part is important because it frames soft 404s as a critical error. Soft 404s are pages that should return a 404 response but instead return a 200 OK response. This can happen when an SEO or publisher redirects a missing web page to the home page in order to conserve their PageRank. Sometimes a missing web page will redirect to an error page that returns a 200 OK response, which is also incorrect.
Many SEOs mistakenly believe that the 404 response code is an error that needs fixing. A 404 is something that needs fixing only if the URL is broken and is supposed to point to a different URL that is live with actual content.
But in the case of a URL for a web page that is gone and is likely never returning because it has not been replaced by other content, a 404 response is the correct one. If the content has been replaced or superseded by another web page, then it’s proper in that case to redirect the old URL to the URL where the replacement content exists.
The point of all this is that, to Google, a soft 404 is a critical error. That means that SEOs who try to fix a non-error event like a 404 response by redirecting the URL to the home page are actually creating a critical error by doing so.
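A quick, hedged audit script can flag candidate soft 404s by this definition; the error-hint phrases below are illustrative only and would need tuning to your own page templates:

```python
# Hedged sketch of a soft-404 audit: flag URLs that return 200 OK but whose
# body looks like an error page. Hint phrases are illustrative placeholders.
import urllib.error
import urllib.request

ERROR_HINTS = ("page not found", "no longer available", "error 404")

def looks_like_soft_404(url: str) -> bool:
    try:
        with urllib.request.urlopen(url) as resp:
            body = resp.read(50000).decode("utf-8", errors="ignore").lower()
    except urllib.error.HTTPError:
        return False  # a real 404/410 is the correct, non-soft response
    return any(hint in body for hint in ERROR_HINTS)
```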
Suzuki noted what Illyes said:
“A page that returns a 200 OK status code but displays an error message or has very thin/empty main content is considered a “soft 404.” Google actively identifies and de-prioritizes these pages as they waste crawl budget and provide a poor user experience. Illyes shared that for years, Google’s own documentation page about soft 404s was flagged as a soft 404 by its own systems and couldn’t be indexed.”
Takeaways
Main Content: Google gives priority to the main content portion of a given web page. Although Gary Illyes didn’t mention it, it may be helpful to use semantic HTML to clearly outline which parts of the page are the main content and which parts are not.
Google Tokenizes Content For Indexing: Google’s use of tokenization enables semantic understanding of queries and content. The importance for SEO is that Google no longer relies heavily on exact-match keywords, which frees publishers and SEOs to focus on writing about topics (not keywords) from the point of view of how they are helpful to users.
Soft 404s Are A Critical Error: Soft 404s are commonly thought of as something to avoid, but they’re not generally understood as a critical error that can negatively impact the crawl budget. This elevates the importance of avoiding soft 404s.
Google’s John Mueller re-posted the results of an experiment that tested if ecommerce sites were accessible by AI Agents, commenting that it may be useful to check if your ecommerce site works for AI agents that are shopping on behalf of actual customers.
AI Agent Experiment On Ecommerce Sites
Malte Polzin posted commentary on LinkedIn on an experiment he did to test if the top 50 Swiss ecommerce sites were open for business for users who are shopping online with ChatGPT agents.
He reported that most of the ecommerce stores were accessible to ChatGPT’s AI agent, but he also found that some stores were not, for a few reasons.
Reasons Why ChatGPT’s AI Agent Couldn’t Shop
A CAPTCHA prevented ChatGPT’s AI agent from shopping.
Cloudflare’s Turnstile tool, a CAPTCHA alternative, blocked access.
The store blocked access with a maintenance page.
Bot defenses blocked access.
Google’s John Mueller Offers Advice
Google’s John Mueller recommended checking if your ecommerce store is open for business to shoppers who use AI agents. It may become more commonplace that users employ agentic search for online shopping.
“Pro tip: check your ecommerce site to see if it works for shoppers using the common agents. (Or, if you’d prefer they go elsewhere because you have too much business, maybe don’t.)
Bot-detection sometimes triggers on users with agents, and it can be annoying for them to get through. (Insert philosophical discussion on whether agents are more like bots or more like users, and whether it makes more sense to differentiate by actions rather than user-agent.)”
Should SEOs Add Agentic AI Testing To Site Audits?
SEOs may want to consider adding agentic AI accessibility to their site audits for ecommerce sites. There may be other use cases where an AI agent needs to fill out forms, for example, on a local services website.
Unlike top search engines, ChatGPT does not maintain an index of global websites. It has relied instead on Bing’s index and search for training and sources. However, recent third-party tests suggest ChatGPT has switched to Google for that purpose.
An ex-Googler and web developer in India, Abhishek Iyer, summarized his test on X. He invented a meaningless word with a definition, placed both on a page that was neither linked internally nor externally, and submitted the page to Google through Search Console.
He then prompted ChatGPT to define the term. The response was “verbatim” from his web page. He searched for the same word on Bing, DuckDuckGo, and Yandex. None returned results.
Another test, by Aleyda Solís, a search engine consultant, produced similar results. But it also revealed that ChatGPT utilized Google’s search snippet to fetch information.
In a response to a Solís prompt, ChatGPT stated it used “a cached snippet via web search” to fetch the information, indicating that ChatGPT may have direct access to Google’s cache.
In short, ChatGPT appears to utilize Google’s index to find information and sources.
What Does It Mean For Visibility In ChatGPT?
ChatGPT has apparently switched from using Bing’s search index to Google’s.
Google’s Index
Both tests reveal ChatGPT’s reliance on Google’s index, like Google’s own Gemini and AI Mode. Hence being indexed by Google is a key step for visibility in generative AI platforms.
Yet Google is now aggressively removing pages from its index. It’s essential to monitor the indexation status of your important pages. “Crawled but not indexed” statuses in Search Console are more frequent. There’s little chance unindexed pages will surface in genAI responses.
If you are experiencing indexing glitches:
Know when to ignore them. All sites have unindexed pages. There’s often no problem to solve. It could be near-duplicate pages, old or outdated pages, or pages generated by internal search or filtering. Unless it’s an important product or landing page, “crawled but not indexed” is likely temporary.
Improve internal linking. A site’s navigation structure is the first step to better indexation. AI-powered tools can help, but overall, tactics such as “Related products,” “Related categories or subcategories,” and product-bundling pages elevate deeper pages.
Produce unique content. Repeated content can prevent a page from being indexed. It often occurs on sites with extensive products and manufacturer-provided descriptions. Third-party tools can create unique descriptions. Merchants can also follow Amazon’s example and include unique summaries and takeaways on product pages for additional informative content.
Beyond Indexing
Indexation by Google is fundamental, but a strategy for visibility in AI answers requires much more. I’ve seen no evidence that organic rankings impact answers in ChatGPT or Gemini. Higher organic rankings do not improve visibility.
GenAI algorithms rely on different signals than search engines, preferring pages that answer questions clearly and succinctly.
Thus ensure your pages:
Provide straightforward answers to frequent questions.
Have content easily crawled and accessed with JavaScript disabled, since AI crawlers generally cannot render JavaScript. A quick check is sketched below.
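Here is that quick check: an illustrative snippet that fetches raw HTML, roughly what a non-rendering crawler sees, and confirms a key answer is present; the URL and phrase are placeholders:

```python
# Illustrative check: fetch raw HTML (no JavaScript execution) and confirm
# a key answer is visible. URL and phrase below are hypothetical.
import urllib.request

url = "https://www.example.com/faq"          # hypothetical page
key_phrase = "free returns within 30 days"   # answer that should be visible

html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
if key_phrase.lower() in html.lower():
    print("Answer visible in raw HTML.")
else:
    print("Answer missing: it is probably rendered client-side.")
```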
Generative AI-driven search isn’t a trend; it’s the new baseline. Tools like Gemini and ChatGPT have already replaced traditional queries for millions of users.
Your audience doesn’t just search anymore: They ask. They expect answers. And those answers are being assembled, ranked, and cited by AI systems that don’t care about title tags or keyword placement. They care about trust, structure, and retrievability.
Most SEO training programs haven’t caught up. They’re still built around tactics designed for a ranking algorithm, not a generative model. The gap isn’t closing; it’s widening.
And this isn’t speculation. Research from multiple firms now shows that conversational AI is becoming a dominant discovery interface.
Microsoft, Google, Meta, OpenAI, and Amazon are all restructuring their product ecosystems around AI-powered answers, not just ranked links.
The tipping point has already passed. If your training still revolves around keyword targeting and domain authority, you’re falling behind, not gradually but right now.
The uncomfortable reality is that many marketers are now trained in a playbook from the early 2010s, while the engines have moved on to an entirely different game.
At this point, are we even optimizing for “search engines” anymore – or have they become “discovery assistants” or “search assistants” built to curate, cite, and synthesize?
How SEO Fell Behind (Historical Context)
Traditional SEO has always adapted, from Google’s Panda and Penguin algorithms, which prioritized content quality and penalized low-quality links, to Hummingbird’s semantic understanding of user intent.
But today’s generative search landscape is an entirely new paradigm. Google Gemini, ChatGPT, and other conversational interfaces don’t simply rank pages; they synthesize answers from the most retrievable chunks of content available.
This is not a gradual shift. This is the biggest leap in SEO’s history, and most training programs haven’t caught up yet.
The Old Curriculum: What We’re Still Teaching (And Shouldn’t Be)
Traditional SEO curriculums typically emphasize:
Title Tags & Meta Descriptions: Despite Google rewriting around 60-75% of them (source: Zyppy SEO study), these remain foundational to most SEO training programs.
Link Outreach & Link Building: Still focused on quantity and domain authority, even though AI-driven search systems focus more on contextual relevance and content (and author) trustworthiness.
Keyword-Focused Blogging & Content Calendars: Rigid editorial calendars and keyword-driven articles are becoming obsolete in an AI-driven search era.
Technical SEO: While it is still useful for traditional search engines, modern AI-based systems care far less about the technical structure of a webpage and more about the accessibility of the content and how it displays entities and relationships.
Example:
Take a common assignment from SEO training programs: “Write a blog post targeting the keyword ‘best hiking boots for 2025’.”
You’re taught to select a primary keyword, structure your headers around related phrases, and write a long-form post designed to rank in traditional SERPs.
That approach might still work for Google’s blue links, but in a generative AI context, it fails.
Ask Gemini or ChatGPT the same query, and your content likely won’t appear. Not because it’s low quality, but because it wasn’t structured to be retrieved.
It lacks semantic chunking, embedding alignment, and explicit trust signals.
The AI systems are selecting content blocks they can understand, rank by relevance, and cite. If your article is built to match human scan patterns instead of machine retrieval cues, it’s simply invisible.
The New SEO Work: What Actually Drives Results Now
Real SEO today revolves around structured, retrievable, semantically rich content:
1. Semantic Chunking
Creating content structured into clearly defined, self-contained chunks optimized for large language models (LLMs).
2. Vector Modeling & Embeddings
Placing content into semantic clusters inside vector databases, ensuring each piece of content is closely aligned with user intent and query vectors.
3. Trust-Signal Engineering
Implementing structured citations, schema markup, clear attribution, and credibility signals that AI-driven models trust enough to cite explicitly.
4. Retrieval Simulation & Prediction
Using tools such as RankBee, SERPRecon, and Waikay.io to actively simulate how your content surfaces within AI-driven answers.
5. RRF Tuning & Model Optimization
Fine-tuning content performance across generative models like Perplexity, Gemini, and ChatGPT, ensuring maximum retrievability in various conversational contexts.
6. Zero-Click Optimization
Optimizing content not just for clicks but to be featured directly in generative AI responses.
Backlinko’s guide on LLM Seeding introduces a practical framework for getting cited by large language models like ChatGPT and Gemini.
It emphasizes creating chunkable, trustworthy content designed to be surfaced in AI-generated answers – marking a fundamental shift from optimizing for rankings to optimizing for retrieval.
Consider leading brands engaging with AI-first discovery themes:
Zapier has published educational content on vector embeddings and how they underpin tools like ChatGPT and semantic search (source). While that article doesn’t detail their internal SEO strategies, it shows how marketing teams can start unpacking the concepts that underpin retrieval-based visibility. → Correction: An earlier version of this article suggested Zapier had implemented semantic chunking and retrieval optimization. That was an editing error on my part: there’s no public evidence to support that claim.
Shopify, meanwhile, uses its Shopify Magic tool to generate SEO-optimized product descriptions at scale, integrating generative workflows into day-to-day content ops (source). → Takeaway: Shopify ties generative tooling directly to scalable, structured content designed for discovery.
These examples don’t suggest perfect alignment – but they point to how modern teams are beginning to integrate AI thinking into real workflows. That’s the shift: from content creation to content retrieval architecture.
Why The Disconnect Exists (And Persists)
1. Educational Inertia
Updating curriculums is expensive, difficult, and risky for educators.
Many course creators and educational institutions are overwhelmed or ill-equipped to rapidly pivot their syllabi toward advanced semantic optimization and vector embeddings.
2. Hiring Practices & Organizational Habits
Job ads often still emphasize outdated skills, perpetuating the inertia by attracting talent trained in legacy SEO methods rather than future-oriented techniques.
3. Legacy Toolsets
Major SEO platforms like Moz, Semrush, and Ahrefs continue to emphasize metrics like domain authority, keyword volumes, and traditional backlink counts, reinforcing outdated optimization practices.
The Fix: An Outcome-Driven SEO Training Model
To address these problems, SEO training must now shift toward measurable KPIs, clear roles, and task-based learning:
New KPI-Driven Framework:
Embedding retrieval rate (AI-driven visibility).
GenAI attribution percentage (citations in AI outputs).
Vector presence and semantic alignment.
Trust-signal effectiveness (schema and structured data).
Re-ranking lift via Reciprocal Rank Fusion (RRF), sketched below.
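For reference, here is a minimal sketch of Reciprocal Rank Fusion as it is usually described in the retrieval literature; whether any given AI system applies it exactly this way is an assumption:

```python
# Hedged sketch of Reciprocal Rank Fusion: each retriever contributes
# 1 / (k + rank) per document; fused scores give the final order.
# k=60 is the constant commonly used in the RRF literature.
def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: fuse a keyword ranking with a vector-search ranking.
print(rrf([["page-a", "page-b", "page-c"], ["page-b", "page-a", "page-d"]]))
```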
New Roles And Responsibilities:
Digital GEOlogist: Optimizes content placement and semantic structure for retrieval. (I know, the title is a joke, but you get the point.)
Cheditor (Chunk Editor): Optimizes chunks of content specifically for LLM consumption and retrievability. If you’re an Editor, you need to be a Cheditor.
Task-Based SEO Education:
Simulate retrieval via ChatGPT/Perplexity prompt engineering.
Perform semantic embedding audits to measure content similarity against successful retrieval outputs.
Conduct regular A/B tests on chunk structures and semantic signals, evaluating real-world retrievability.
How To Take Charge: You Are The Resource Now
The reality is stark but empowering: No one’s coming to save your career. Not your company, which may move slowly, nor traditional schools, nor third-party platforms with outdated content.
You won’t find this in a course catalog. If your company hasn’t caught up (and most haven’t), it’s on you to take the lead.
Here’s a practical roadmap to start building your own AI-SEO expertise from the ground up:
Month 1: Build Your Foundation
Complete foundational AI courses:
Share key learnings internally.
Month 2: Tactical Skill-Building
Complete practical, SEO-specific courses:
Start sharing actionable tips via Slack or internal newsletters.
Month 3: Community And Collaboration
Organize “Lunch & Learns” or internal SEO Labs focused on semantic chunking, embeddings, and trust-signal engineering.
Engage actively in external communities (Discord groups, LinkedIn SEO groups, online forums like Moz Q&A) to deepen your knowledge.
Month 4: Institutionalize Your Expertise
Formally propose and launch an internal “AI-SEO Center of Excellence.”
Run practical retrieval simulations, document results, and showcase tangible improvements to secure ongoing investment and visibility internally.
Turning Learning Into Leadership
Once you’ve built momentum with personal upskilling, don’t stop at silent improvement. Make your learning visible, and valuable, by creating change around you:
Host SEO-AI Micro Sessions: Run short, focused sessions (15-20 minutes) on topics like semantic chunking, retrieval testing, or schema design. Keep them informal, repeatable, and useful.
Run Retrieval Audits: Pick three to five high-priority URLs and test them in ChatGPT, Gemini, or Perplexity. Which content blocks surface? What gets ignored? Share your findings openly.
Build a Knowledge Hub: Use Notion, Google Docs, or Confluence to create a centralized space for SEO-AI strategies, test results, tools, and templates.
Create a Weekly AI Digest: Curate key updates from the field – citations appearing in generative answers, new tools, useful prompts – and circulate them internally.
Recruit Allies: Invite collaborators to contribute retrieval tests, co-host sessions, or flag examples of your content appearing in AI answers. Leadership scales faster with support.
This is how you shift from learner to leader. You’re no longer just upskilling, you’re operationalizing AI search inside your company.
You Are the Catalyst, Take Action Now
The roles of traditional SEO specialists will shift (or fade?), replaced by experts fluent in semantic optimization and retrievability.
Become the person who educates your company because you educated yourself first.
Your role isn’t just to keep up, it’s to lead. The responsibility, and the opportunity, sit with you right now.
Don’t wait for your company to catch up or for course platforms to get current. Take action. The new discovery systems are already here, and the people who learn to work with them will define the next era of visibility.
If you teach SEO, rewrite your courses around these new KPIs and roles.
If you hire SEO talent, demand modern optimization skills: semantic embeddings knowledge, chunk structuring experience, retrieval simulation approaches.
If you practice SEO, proactively shift your efforts toward retrieval testing, embedding audits, and semantic optimization immediately.
SEO isn’t dying, it’s evolving.
And you have an opportunity, right now, to be at the forefront of this evolution.
You’ve heard the predictions: AI will replace SEO, generative search will eliminate organic traffic, and marketers should start updating their resumes.
With 73% of marketing teams using generative AI, it’s easy to assume we’re witnessing SEO’s funeral.
Here’s what’s actually happening: AI isn’t replacing SEO. It’s expanding SEO into new territories with bigger opportunities.
While Google’s AI Overviews and tools like ChatGPT are changing how people find information, they’re also creating new ways for your content to get discovered, cited, and trusted by millions of searchers.
The game isn’t ending. You just need to learn the new rules.
How AI Search Actually Works (And Where Your Content Fits)
Generative search doesn’t eliminate the need for quality content; it amplifies it.
When someone asks ChatGPT about email marketing or searches with Google’s AI features, these systems scan thousands of webpages to synthesize comprehensive answers.
Your content isn’t competing for traditional rankings anymore. You’re competing to become the authoritative source that AI systems pull from when generating responses.
The Citation Game
Here’s what most marketers miss: AI systems still cite their sources.
Google’s AI Overviews include links to referenced websites, and ChatGPT and Perplexity provide source citations.
Getting featured as a cited source can drive more qualified traffic than a traditional No. 1 ranking because users already know your content contributed to the answer they received.
Google AIO Citation Example:
Screenshot from search for [email marketing courses beginners must try], Google, July 2025
ChatGPT Citation Example:
Screenshot from ChatGPT, July 2025
What AI systems look for in sources:
Factual accuracy and reliability (they cross-reference information).
Updated content with recent statistics and insights.
Information structured in clear, scannable sections.
From Rankings To Retrieval
Traditional SEO targeted specific keyword rankings. AI search introduces “retrieval” – your content gets pulled into responses for queries you never directly optimized for.
Your comprehensive project management guide might get cited when someone asks, “How can I keep my remote team organized without micromanaging?” even though you never targeted that exact phrase.
Optimizing for retrieval requires a different mindset than traditional keyword targeting.
Create content that covers topics from multiple angles rather than focusing on single keyword phrases.
Structure your articles around the actual questions your audience asks, using headings that mirror real user queries.
Build comprehensive topic clusters that demonstrate your expertise across related subjects, showing AI systems that you’re a reliable source for broad topic coverage.
The SEO Fundamentals That Still Matter (With New Twists)
AI systems are far less forgiving than Google’s crawlers.
While Google’s bots can render JavaScript, handle errors gracefully, and work around technical issues, most AI agents simply fetch raw HTML and move on.
If they find an empty page, wrong HTTP status, or tangled markup, they won’t see your content at all.
This makes technical SEO non-negotiable for AI visibility. Server-side rendering becomes absolutely critical since AI agents won’t execute JavaScript or wait for client-side rendering.
Your content must be immediately visible in raw HTML.
Clean, semantic markup with valid HTML and proper heading hierarchy helps AI systems parse content accurately, while efficient delivery ensures AI agents don’t abandon slow or bloated sites.
AI bot requirements:
Allow AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) through robots.txt (see the verification sketch after this list).
Whitelist AI bot IP ranges rather than blocking with firewalls.
Ensure critical content loads without JavaScript dependencies.
Avoid “noindex” and “nosnippet” tags on valuable content.
Optimize server response times for efficient content retrieval.
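To verify the first item on the list above against a live site, Python’s standard urllib.robotparser can test whether a given user-agent token is allowed to fetch a URL. A minimal sketch, assuming a hypothetical domain; the tokens are the crawlers named above:

```python
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"  # hypothetical domain

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# Check each AI crawler token against a representative URL
for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    allowed = rp.can_fetch(bot, SITE + "/guide/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```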
Consider also publishing an llms.txt file, an emerging convention that could direct AI models to your best content during inference.
Place this plain text file at your domain root using proper markdown structure, including only your highest-value, well-structured content that answers specific questions.
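llms.txt is still an emerging proposal rather than a settled standard, so treat the following as a sketch: a hypothetical Python script that writes a minimal llms.txt with the expected markdown structure, an H1 title, a one-line summary, and annotated links to your strongest pages (all placeholder content):

```python
from pathlib import Path

# Placeholder content: swap in your own title, summary, and links
LLMS_TXT = """\
# Example Site

> Practical guides on email marketing and SEO for small teams.

## Guides

- [How long do SEO results take?](https://example.com/seo-timelines/): Typical timelines for new sites.
- [Email marketing basics](https://example.com/email-basics/): Setup, list building, and deliverability.
"""

# Serve the result at your domain root, e.g., https://example.com/llms.txt
Path("llms.txt").write_text(LLMS_TXT, encoding="utf-8")
```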
Content Strategy For AI Citations
Your content strategy needs a fundamental shift. Instead of writing for search engine rankings, you’re creating content that feeds AI knowledge bases.
Successful retrieval optimization means leading with clear, definitive answers to specific questions.
When addressing common queries like [how long do SEO results take?], start immediately with “SEO results typically appear within three to six months for new websites.”
Break complex topics into digestible, extractable sections that include comprehensive explanations with supporting context.
AI systems favor content that provides complete answers rather than surface-level information, so include relevant data and statistics that can be easily identified and cited.
AI systems don’t retrieve entire pages; they break content into passages or “chunks” and extract the most relevant segments.
This means each section of your content should work as a standalone snippet that’s independently understandable.
Keep each section tightly focused on a single concept.
Use structured HTML with clear H2 and H3 subheadings for every subtopic, making passages semantically tight and self-contained.
Start each section with direct, concise sentences that immediately address the core point.
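To see why self-contained sections matter, here is a rough illustration of how a retrieval pipeline might chunk a page: split at each H2/H3 boundary and treat every passage independently. This is a simplified sketch, not any specific engine’s actual algorithm:

```python
import re

def split_into_passages(html: str) -> list[str]:
    """Split page HTML at each H2/H3 heading so every passage stands alone."""
    parts = re.split(r"(?i)(?=<h[23][\s>])", html)
    return [p.strip() for p in parts if p.strip()]

# Illustrative page: each heading starts a new, independently quotable passage
page = (
    "<h2>How long do SEO results take?</h2><p>Three to six months for new sites.</p>"
    "<h2>What affects the timeline?</h2><p>Competition, crawl frequency, and content quality.</p>"
)
for i, passage in enumerate(split_into_passages(page), 1):
    print(i, passage[:70])
```

If a passage only makes sense alongside the paragraphs above it, it is a weak retrieval candidate.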
Building topical authority requires understanding how Google’s AI uses “query fan-out” techniques.
Complex queries get automatically broken into multiple related subqueries and executed in parallel, rewarding sites with both topical breadth and depth.
Create comprehensive pillar pages that summarize main topics with strategic links to deeper cluster content.
Develop cluster pages targeting specific facets of your expertise, then cross-link between related cluster pages to establish semantic relationships.
Cover diverse angles and intents to increase your content’s surface area for AI retrieval across multiple query variations.
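One way to keep the pillar-and-cluster linking systematic is to model the cluster as data and generate the internal link map from it. A hypothetical sketch (the URLs are placeholders):

```python
from itertools import permutations

# Hypothetical pillar page mapped to its cluster pages
CLUSTER = {
    "/project-management-guide/": [
        "/remote-team-organization/",
        "/async-standups/",
        "/delegating-without-micromanaging/",
    ],
}

def internal_links(cluster):
    """Yield (source, target) pairs: pillar<->cluster plus cluster<->cluster."""
    for pillar, pages in cluster.items():
        for page in pages:
            yield pillar, page  # pillar links down to each cluster page
            yield page, pillar  # each cluster page links back up
        yield from permutations(pages, 2)  # cross-links between cluster pages

for src, dst in internal_links(CLUSTER):
    print(f"{src} -> {dst}")
```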
Working With AI Systems, Not Against Them
The most successful marketers are learning to optimize for AI inclusion rather than fighting against machine-generated answers.
Optimizing For AI Summaries
Structure your content so AI systems can’t ignore it: lead with clear answers and use scannable formatting.
Include concrete data and statistics that make content citation-worthy, and implement schema markup like FAQ, how-to, and article schemas to help AI understand your content structure.
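As one example, FAQ schema is plain JSON-LD, so it can be generated programmatically. A minimal sketch using Python’s json module; FAQPage, Question, and Answer are standard schema.org types, and the question and answer text are placeholders:

```python
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How long do SEO results take?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "SEO results typically appear within three to six "
                    "months for new websites.",
        },
    }],
}

# Embed the output in the page inside a <script type="application/ld+json"> tag
print(json.dumps(faq, indent=2))
```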
Key formatting elements that AI systems prefer:
Bullet points and numbered lists for easy parsing.
Clear subheadings that mirror actual user questions.
Natural language Q&A format throughout the content.
Building citation-worthy authority requires meeting higher trust and clarity standards than basic content inclusion.
AI systems prioritize content perceived as factually accurate, up-to-date, and authoritative. Include specific, verifiable claims with source citations that link to studies and expert sources.
Refresh key content regularly with timestamps to signal updated information, and consider publishing original research, surveys, or industry studies that journalists and bloggers reference.
AI search systems increasingly retrieve and synthesize content beyond text, including images, charts, tables, and videos. This creates opportunities for more engaging, scannable answers.
Ensure images and videos are crawlable by avoiding JavaScript-only rendering, and use descriptive alt text that includes topic context for all images.
Add explanatory captions directly below or beside visual elements, and use proper HTML markup, such as real <table> elements, instead of images of tables to support AI bot parsing.
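As a small illustration of real markup instead of images, here is a sketch that renders tabular data as a semantic HTML table; the rows are placeholder values:

```python
def to_html_table(rows):
    """Render rows (first row is the header) as a semantic HTML table."""
    header, *body = rows
    thead = "".join(f"<th>{cell}</th>" for cell in header)
    tbody = "".join(
        "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
        for row in body
    )
    return f"<table><thead><tr>{thead}</tr></thead><tbody>{tbody}</tbody></table>"

# Placeholder data: an AI bot can parse this table, but not a screenshot of it
print(to_html_table([("Metric", "Value"), ("Indexed pages", "412")]))
```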
Monitor Your AI Presence
Traditional rank tracking won’t show your full search visibility anymore. You need to track how AI platforms reference your content across different systems.
Set up Google Alerts for your brand and key topics you cover to catch when AI systems cite your content in their responses.
Regularly check Perplexity AI, ChatGPT, and Google’s AI Overviews for appearances of your content, and screenshot these citations since they’re becoming your new success metrics.
Don’t just monitor your brand presence. Track how competitors appear in AI summaries to understand what type of content AI engines prefer.
This competitive intelligence helps you adjust your strategy based on what’s actually getting cited.
Pay attention to the context around your citations, too. AI engines sometimes present information differently than you intended, and that gap is valuable feedback for refining how you present information in future content.
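Citation tracking is still largely manual, but one signal you can automate is which AI crawlers actually fetch your pages. A minimal sketch that counts hits by user-agent substring in a web server access log; the file path is hypothetical, and the tokens are the bots named earlier:

```python
from collections import Counter

AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot")  # user-agent substrings
hits = Counter()

# Access log lines normally include the request's user-agent string
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1

for bot, count in hits.most_common():
    print(f"{bot}: {count} fetches")
```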
The Future Of SEO Is Bigger, Not Smaller
SEO isn’t shrinking. It’s expanding into a multi-platform opportunity. Your content can now appear in traditional search results, AI Overviews, chatbot responses, and voice search answers.
Skills That Matter Most
The SEOs thriving in this new landscape are developing expertise in data analysis to understand how different AI systems crawl and categorize content.
Multi-platform optimization has become essential, requiring the ability to write for Google, ChatGPT, Perplexity, and emerging AI tools simultaneously.
Advanced technical skills, such as implementing schema markup that actually helps AI understanding, are increasingly valuable, along with content strategy integration that aligns SEO with broader content marketing and brand positioning efforts.
As AI makes search more complex, companies need expert guidance to navigate multiple platforms and opportunities.
The brands trying to handle this evolution internally often get left behind while their competitors appear across every AI-powered search experience.
SEO leaders today aren’t just optimizing websites; they’re building strategies that work across traditional and generative search platforms, tracking brand mentions in AI search, and ensuring their companies stay visible as search continues evolving.
Your Next Steps
The shift to AI-powered search isn’t a threat; it’s a call to expand your reach.
Start by auditing your current content for AI citation potential, asking whether it answers specific questions clearly and directly.
Create topic clusters that demonstrate deep expertise in your field.
Monitor AI platforms for mentions of your brand and competitors.
Update older content with fresh data and improved structure for AI retrieval.
The brands dominating tomorrow’s search landscape are adapting now.
Your SEO skills aren’t becoming obsolete; they’re becoming more valuable as companies need experts who can navigate both traditional rankings and AI-generated responses.
The game hasn’t ended. It just got more interesting.
How Google Measures AI Overviews Satisfaction
At the recent Search Central Live Deep Dive 2025, Kenichi Suzuki asked Google’s Gary Illyes how Google measures the quality of, and user satisfaction with, traffic from AI Overviews. Illyes’ response, published by Suzuki on LinkedIn, covered multiple points.
Kenichi asked for specific data, and Gary’s answer offered an overview of how Google gathers external data to form an internal picture of how satisfied users are with AI Overviews. He said that this data informs public statements by Google, including those made by CEO Sundar Pichai.
Illyes began his answer by saying that he couldn’t share specifics about the user satisfaction data, but he went on to offer an overview.
User Satisfaction Surveys
The first data point that Illyes mentioned was user satisfaction surveys to understand how people feel about AI Overviews. Kenichi wrote that Illyes said:
“The public statements made by company leaders, such as Sundar Pichai, are validated by this internal data before being made public.”
Observed User Behavior
The second user satisfaction signal that Illyes mentioned was inferring user preference from the broader market. Kenichi wrote:
“Gary suggested that one can infer user preference by looking at the broader market. He pointed out that the rapidly growing user base for other AI tools (like ChatGPT and Copilot) likely consists of the same demographic that enjoys and finds value in AI Overviews.”
Motivated By User-Focus
This point is about putting the user first as the motivation for introducing a new feature. Illyes specifically said that causing a disruption is not Google’s motivation for AI search features.
Acknowledged The Web Ecosystem
The last point he made was that Google is still figuring out how to balance its user-focused approach with the need to maintain a healthy web ecosystem. Kenichi wrote:
“He finished by acknowledging that they are still figuring out how to balance this user-focused approach with the need to continue supporting the wider web ecosystem.”
Balancing The Needs Of The Web Ecosystem
At the dawn of modern SEO, Google did something extraordinary: it reached out to web publishers through the most popular SEO forum at the time, WebmasterWorld. Gary Illyes himself, before he joined Google, was a WebmasterWorld member. This outreach was the initiative of one Googler, Matt Cutts. Other Googlers gave interviews, but Cutts, posting under the WebmasterWorld nickname GoogleGuy, held two-way conversations with the search and publisher community.
This is no longer the case at Google, which is largely back to one-way communication accompanied by intermittent social media outreach.
The SEO community may share in the blame for this situation, as some SEOs post abusive responses on social media. Fortunately, those people are in the minority, but that behavior nonetheless puts a chill on the few opportunities provided to have a constructive dialogue.
It’s encouraging to hear Illyes mention the web ecosystem. It would be even more encouraging to hear Googlers, including the CEO, focus on how they intend to balance the needs of users with those of the creators who publish content, because many feel that Google’s current direction is not sustainable for publishers.