The Changes, Features & Signals Driving Organic Traffic Next Year
Google’s search results are evolving faster than most SEO strategies can adapt.
AI Overviews are expanding into new keyword and intent types, AI Mode is reshaping how results are displayed, and ongoing experimentation with SERP layouts is changing how users interact with search altogether. For SEO leaders, the challenge is no longer keeping up with updates but understanding which changes actually impact organic traffic.
Join Tom Capper, Senior Search Scientist at STAT Search Analytics, for a data-backed look at how Google SERPs are shifting in 2026 and where real organic opportunities still exist. Drawing from STAT’s extensive repository of daily SERP data, this session cuts through speculation to show which features and keywords are worth prioritizing now.
This webinar offers a clear, evidence-based view of how Google SERPs are changing and what those changes mean for SEO strategy. You will gain practical insights to refine keyword targeting, focus on the right SERP features, and build an organic search approach grounded in real performance data for 2026.
Register now to understand the SERP shifts shaping organic traffic in 2026.
🛑 Can’t make it live? Register anyway and we’ll send you the on-demand recording after the event.
If there’s one takeaway as we look toward SEO in 2026, it’s that visibility is no longer just about ranking pages, but about being understood by increasingly selective AI-driven systems. In 2025, SEO proved it was not disappearing, but evolving, as search engines leaned more heavily on structure, authority, and trust to interpret content beyond the click. In this article, we share SEO predictions for 2026 from Yoast SEO experts, Alex Moss and Carolyn Shelby, highlighting the shifts that will shape how brands earn visibility across search and AI-powered discovery experiences.
Key takeaways
In 2026, SEO focuses on visibility defined by clarity, authority, and trust rather than just page rankings
Structured data becomes essential for eligibility in AI-driven search and shopping experiences
Editorial quality must meet machine readability standards, as AI evaluates content based on structure and clarity
Rankings remain important as indicators of authority, but visibility now also includes citations and brand sentiment
Brands should align their SEO strategies with social presence and aim for consistency across all platforms to enhance visibility
A brief recap of SEO in 2025: what actually changed?
2025 marked a clear shift in how SEO works. Visibility stopped being defined purely by pages and rankings and began to be shaped by how well search engines and AI systems could interpret content, brands, and intent across multiple surfaces. AI-generated summaries, richer SERP features, and alternative discovery experiences made it harder to rely solely on traditional metrics, while signals such as authority, trust, and structure played a larger role in determining what was surfaced and reused.
As we outlined in our SEO in 2025 wrap-up, the brands that performed best were those with strong foundations: clear content, credible signals, and structured information that search systems could confidently understand. That shift set the direction for what was to come next.
By the end of 2025, it was clear that SEO had entered a new phase, one shaped by interpretation rather than isolated optimizations. The SEO predictions for 2026 from Yoast experts build directly on this evolution.
2026 SEO predictions by Yoast experts
The SEO predictions for 2026 shared here come from our very own Principal SEOs at Yoast, Alex Moss and Carolyn Shelby. Built on the lessons SEO revealed in 2025, these predictions focus less on reacting to individual updates and more on how search and AI systems are evolving at a foundational level, and what that means for sustainable visibility going forward.
TL;DR
SEO in 2026 is about understanding how signals such as structure, authority, clarity, and trust are now interpreted across search engines, AI-powered experiences, and discovery platforms. Each prediction below explains what is changing, why it matters, and how brands can practically adapt in the coming year.
Prediction 1: Structured data shifts from ranking enhancer to retrieval qualifier
In 2026, structured data will no longer be a competitive advantage; it will become a baseline requirement. Search engines and AI systems increasingly rely on structured data as a layer of eligibility to determine whether content, products, and entities can be confidently retrieved, compared, or surfaced in AI-powered experiences.
For ecommerce brands, this shift is especially significant. Product information such as pricing, availability, shipping details, and merchant data is now critical for visibility in AI-driven shopping agents and comparison interfaces. At the enterprise level, the move toward canonical identifiers reflects a growing need to avoid misattribution and data decay across systems that reuse information at scale.
What this means in practice:
Brands without clean, comprehensive entity and product data will not rank lower. They will simply not appear in AI-driven shopping and comparison flows at all.
Treat structured data as part of your SEO foundation, not an enhancement. Tools like Yoast SEO help standardize the implementation of structured data. The plugin’s structured data features make it easier to generate rich, meaningful schema markup, helping search engines better understand your site and giving you control over how your content is described.
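To make this concrete, here is a minimal sketch of product structured data expressed as JSON-LD. The product, values, and URL are hypothetical placeholders, and in practice a plugin like Yoast SEO or your commerce platform would generate this markup from your catalog rather than you maintaining it by hand; the point is simply that pricing, availability, and identity end up in a form machines can retrieve and compare.

```python
import json

# Hypothetical Product schema payload (JSON-LD). All values are placeholders;
# real markup should mirror your actual catalog data.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",
    "sku": "EX-TRS-001",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "description": "Lightweight trail running shoe with a grippy outsole.",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
        "url": "https://www.example.com/products/ex-trs-001",
    },
}

# Emit the JSON-LD you would place in a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```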
A smarter analysis in Yoast SEO Premium
Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!
Prediction 2: Agentic commerce becomes a visibility battleground, not a checkout feature
Agentic commerce marks a shift in how users discover and choose brands. Instead of browsing, comparing, and transacting manually, users increasingly rely on AI-driven agents to recommend, reorder, or select products and services on their behalf. In this environment, visibility is established before a checkout ever happens, often without a traditional search query.
This shift is becoming more concrete as search and commerce platforms move toward standardised ways for agents to understand and transact with merchants. Recent developments around agentic commerce protocols and Universal Commerce Protocol (UCP) highlight how AI systems are being designed to access product, pricing, availability, and merchant information more directly. As a result, platforms such as Shopify, Stripe, and WooCommerce are no longer just infrastructure. They increasingly act as distribution layers, where agent compatibility influences which brands are surfaced, recommended, or selected.
What this means in practice:
In 2026, SEO teams will be accountable for agent readiness in much the same way they were once accountable for mobile-first readiness. If agents cannot consistently interpret your brand, product data, or availability, they are more likely to default to competitors that they can understand with greater confidence.
How to act on this:
Focus on making your brand legible to automated decision systems. Ensure product information, pricing, availability, and supporting metadata are clear, structured, and consistent across your site and feeds. This is not about optimising for a single platform or protocol, but about reducing ambiguity so AI agents can accurately interpret and act on your information across emerging agent-driven discovery and commerce experiences.
Prediction 3: Editorial quality becomes a machine readability requirement
In 2026, editorial quality is no longer judged only by human readers. AI systems increasingly evaluate content based on how efficiently it can be parsed, summarized, cited, and reused. Verbosity, fluff, and circular explanations do not fail editorially. They fail functionally.
Content that is concise, clearly structured, and well-attributed has higher chances of performing well. Headings, lists, definitions, and tables directly influence how information is chunked and reused across AI-generated summaries and search experiences.
“Helpful content” is being held to higher editorial standards. Content that cannot be summarized cleanly without losing meaning becomes less useful to AI systems, even if it remains readable to human audiences.
How to act on this:
Make editorial quality measurable and machine actionable. Use tools that help you align content with modern discoverability requirements. Yoast SEO Premium’s AI features (AI Generate, AI Optimize, and AI Summarize) help you assess and improve how content is structured and optimized, supporting both search engines and AI systems in understanding your intent.
Prediction 4: Rankings still matter, but as training signals, not endpoints
Despite ongoing speculation, rankings do not disappear in 2026. Instead, their role changes. AI agents and search systems continue to rely on top-ranked, trusted pages to understand authority, relevance, and consensus within a topic.
While rankings are no longer the final KPI, abandoning them entirely creates blind spots in understanding why certain brands are included or ignored in AI-driven experiences.
What this means in practice:
Teams that stop tracking rankings altogether risk losing insight into how authority is established and reinforced across search and AI systems.
How to act on this:
Continue to use rankings as diagnostic signals, but don’t treat them as the sole indicator of success in 2026. Alongside traditional performance metrics for SEO in 2026, look at how often your brand is mentioned, cited, or summarized in AI-generated answers and recommendations.
Tools like Yoast AI Brand Insights, available as part of Yoast SEO AI+, help surface these broader visibility signals by showing how your brand appears across AI platforms, including sentiment, citation patterns, and competitive context.
See how visible your brand is in AI search
Track mentions, sentiment, and AI visibility. With AI Brand Insights and Yoast SEO AI+, you can start monitoring and improving your performance.
Prediction 5: Brand sentiment becomes a core visibility signal
Brand sentiment increasingly influences how search engines and AI systems assess credibility and trust. Mentions, whether linked or unlinked, contribute to a broader understanding of how a brand is perceived across the web. AI systems synthesize signals from reviews, forums, social platforms, media coverage, and knowledge bases to form a composite view of legitimacy and expertise.
What makes this shift more impactful is amplification. Inconsistent messaging or negative sentiment is not smoothed out over time. Instead, it becomes more apparent when systems attempt to summarize, compare, or recommend brands across search and AI-driven experiences.
What this means in practice:
SEO, brand, PR, and social teams increasingly influence the same visibility signals. When these efforts are misaligned, credibility weakens. When they reinforce one another, trust becomes easier for systems to establish and maintain.
How to act on this:
Focus on consistency across owned, earned, and shared channels. Pay attention not only to where your brand ranks, but also to how it is discussed, described, and contextualized across various platforms. As discovery expands beyond traditional search results, reputation and narrative coherence become essential inputs into how brands are surfaced and understood.
Prediction 6: Multimodal optimization becomes baseline, not optional
Search behavior is no longer text-first. Images, video, audio, and transcripts now function as retrievable knowledge objects that feed both traditional search and AI-powered experiences. In particular, video platforms continue to influence how expertise and authority are understood at scale.
Platforms like YouTube function not only as discovery engines, but also as training corpora for AI systems learning how to interpret topics, brands, and creators.
What this means in practice:
Brands with strong written content but weak visual or video assets may appear incomplete or “thin” to AI systems, even if their articles are well-optimized.
How to act on this:
Treat multimodal content as part of your SEO foundation. Support written content with relevant visuals, video, and transcripts. Clear structure and readability remain essential, and tools like Yoast SEO help ensure your core content remains accessible and well-organized as it is reused across formats.
Prediction 7: Social platforms become secondary search indexes
Discovery will increasingly happen outside traditional search engines. Platforms such as TikTok, LinkedIn, Reddit, and niche communities now act as secondary search indexes where users validate expertise and intent.
AI systems reference these platforms to verify whether a brand’s claims, expertise, and messaging are substantiated in public discourse.
What this means in practice:
Presence alone is not enough. Inconsistent or unclear messaging across platforms weakens trust signals, while focused, repeatable narratives reinforce authority.
How to act on this:
Align your SEO strategy with social and community visibility to enhance your online presence. Ensure that your expertise, terminology, and positioning remain consistent across all discussions about your brand.
Prediction 8: Email reasserts itself as the most controllable growth channel
As discovery fragments and platforms increasingly gate access to audiences, email regains importance as a high-signal, low-distortion channel. Unlike search or social platforms, email offers direct access to users without algorithmic mediation.
In 2026, email plays a supporting role in reinforcing authority, engagement, and intent signals, especially as AI systems evaluate how audiences interact with trusted sources over time.
What this means in practice:
Brands that underinvest in email become overly dependent on platforms they do not control, which increases volatility and reduces long-term resilience.
How to act on this:
Focus on relevance over volume. Segment audiences, align content with intent, and use email to reinforce expertise and trust, not just drive clicks.
Prediction 9: Authority outweighs freshness for most non-news queries
For non-news content, AI systems increasingly prioritize credible, historically consistent sources over frequent updates or constant publishing. Freshness still matters, but only when it meaningfully improves accuracy or relevance.
Long-standing domains with coherent narratives and well-maintained content benefit, provided their foundations remain clean and trustworthy.
What this means in practice:
Scaled/programmatic content strategies lose effectiveness. Publishing frequently without maintaining quality or consistency introduces noise rather than value.
How to act on this:
Invest in maintaining and improving existing content. Update thoughtfully, reinforce expertise, and ensure that your most important pages remain accurate, structured, and authoritative.
Prediction 10: SEO teams evolve into visibility and narrative stewards
In 2026, SEO will extend far beyond search engines. SEO teams are increasingly influencing how brands are perceived by both humans and machines across search, AI-generated answers, and discovery platforms.
Success is measured not by traffic alone, but also by inclusion, citation, and trust. SEO becomes a strategic function that shapes how a brand is represented and understood.
What this means in practice:
SEO teams that focus solely on production or technical fixes risk losing influence as visibility becomes a cross-channel concern.
How to act on this:
Shift focus toward clarity, consistency, and long-term trust. The most effective teams help define how a brand is understood, not just how it ranks.
What SEO is no longer about in 2026 (misconceptions to discard)
As SEO evolves in 2026, many long-standing assumptions no longer reflect how search engines and AI-driven systems actually determine visibility. The table below contrasts common SEO myths with the realities shaped by recent changes and expert insights from Yoast.
| Diminishing relevance | What actually matters in 2026 |
| --- | --- |
| SEO is mainly about ranking pages | Rankings still matter, but they serve as signals for authority and relevance, rather than the final measure of visibility |
| Structured data is optional or a ranking boost | Structured data is now a baseline requirement for eligibility in AI-driven search, shopping, and comparison experiences |
| Publishing more content leads to better performance | Authority, clarity, and maintenance of fewer strong assets outperform high-volume publishing |
| Editorial quality is subjective | Content quality is increasingly evaluated by machines based on structure, clarity, and reusability |
| Brand reputation is a PR concern, not an SEO one | Brand sentiment directly influences how AI systems interpret, trust, and recommend brands |
| Search is still primarily text-based | Images, video, audio, and transcripts are now core retrievable knowledge objects |
| SEO can be measured only through traffic | Visibility spans AI answers, social platforms, agents, and citations, requiring broader performance signals |
Looking ahead: what will shape SEO in 2026
The focus is no longer on isolated tactics or short-term wins, but on building visibility systems that search engines and AI platforms can reliably understand, trust, and reuse.
Clarity and interpretability matter more than clever optimization. Content, products, and brand narratives need to be easy for machines to interpret without ambiguity. Structured data has become foundational, not optional, determining whether brands are eligible to appear in AI-powered shopping, comparison, and answer-driven experiences.
Authority is built over time, not manufactured at scale. Search and AI systems increasingly favor sources with consistent, well-maintained narratives over those chasing volume. Visibility also extends beyond the SERP, spanning AI-generated answers, citations, recommendations, and cross-platform mentions, making it essential to look beyond traffic as the sole measure of success.
Finally, SEO in 2026 demands alignment. Brand, content, product, and platform signals all contribute to how systems interpret trust and relevance.
Automation is a part of our daily lives in marketing. If you’re in a leadership role or oversee it in some capacity, you’re hearing about it from your team doing the day-to-day work, from those within your industry, or you’re doing your own exploration.
Within search marketing, it has helped to greatly scale efforts as well as to bring new efficiencies, whether those are in our own processes or built into the platforms we use.
In just a few short years, automated bidding strategies, AI-generated content, AI-driven research, and platform-generated “insights” have changed the way we work, including the tools we use, and many of our expectations for how we do search marketing and digital marketing in a broader sense.
With all of this automation and new ways of getting things done, a gap has emerged. I’ll call it an “insights gap.” I contend that teams can see performance changes, but struggle to explain why. This can be serious and, for marketing leaders, can result in a loss of confidence in decision-making due to outcomes not being what was planned, projected, or desired.
No one at a leadership or implementation level likes to have a non-answer or mystery that can’t be solved when real leads or sales dollars are at stake.
Here’s the problem: at this point, it is a leadership challenge, not a technology issue. Automation itself isn’t the problem; the lack of strategic interpretation is.
Now, yes, search volatility is involved. It amplifies the problem through algorithm updates, SERP changes, AI Overviews, and shifts in user behavior. The automated systems we have react, but they don’t necessarily contextualize.
With stakeholder expectations rising, we can’t get by with just charts, graphs, and data tables. We have to find the insights, contextualize them, and demonstrate value. This is the impact versus activity contrast that has been around forever, but it is amplified by automation.
If we lean too far into automation and AI and don’t get the expected marketing and business outcomes, we’re likely left with weaker strategic muscles and an over-dependence on AI and automation tools and platforms. Keeping that knowledge institutional, rather than platform-specific (and locked in the AI “brains”), is a key to fixing the problem.
How Marketing Leaders Can Close The Insight Gap
1. Reinforce Strategy In Search Marketing Campaigns & Efforts
Efficiencies gained in execution should be celebrated. Tasks that were manual, done with expensive software, or not done at all just a few years ago can be done in an instant now. The hard and soft cost savings shouldn’t be overlooked.
However, we need to be clear in separating the executional efficiencies from strategic aspects and intent.
Every automated system and process needs to support a documented objective so we’re not just “doing” things, but we’re quantifying them, and they are connected to our overall strategy.
2. Build Human Review Into Automated Systems & Processes
A longstanding challenge with search marketing is that it often doesn’t have a clearly defined ending point. It is ongoing and includes iterative optimization processes. We look to the past to inform decisions for now and going forward, but we often don’t turn it all off, blow it up, and start over (and I’m not advocating for that).
Scheduling structured reviews of AI-driven decisions is important to ensure that we don’t have an insights gap.
In those reviews, even simply asking “why did this change?” before moving on to “what do we do next?” adds an intentional moment to ensure we’re not on autopilot with systems that are not connected deeply enough to our strategy.
3. Train Teams To Interpret, Not Just Monitor, Search Data
We all have dashboards and data coming to us. Or, we have go-to reports in Google Analytics 4 or our web analytics suite that we’re comfortable with. Those are important to have, and any alerts coming our way are great for tracking real-time progress.
Maintaining (or developing) analysts and strategists who can translate data, patterns, and observations into insights is important. Yes, you can create AI agents to do this, but ensure that you have oversight of the agents and that there’s enough cross-checking to ensure that business outcomes aren’t negatively impacted by assumptions that go on for too long in an automated way.
4. Treat AI Outputs As Inputs (For Humans), Not Answers
I’m being careful with my wording of “inputs” and “outputs” here. What AI gives us should be treated as output, but it shouldn’t stop there: the AI output should become “input” for humans.
Even the seemingly smartest ideas from AI should be taken as output for human review, not a definitive (a favorite AI word, by the way) answer.
Just as when humans own the full process, whatever level of AI and automation is involved, we should maintain healthy skepticism and validate the results.
5. Protect Institutional Knowledge In Search Marketing
The more automation we have, the more scattered our documentation likely becomes. It probably lives in many places, within platforms, or may be lacking overall. As we get smarter and more efficient with our tech stacks and how we use them, we can’t lose critical institutional knowledge in search marketing.
That means we need to document learnings from tests, optimization, campaigns, and changes. We don’t want to repeat mistakes when platforms, vendors, or other variables change.
6. Align Automation With Business Outcomes, Not Platform Metrics
This is not a new recommendation or news to anyone who has been in marketing leadership. However, I point it out as a word of caution: the deeper we get in turning things over to automation, the more we risk getting into the weeds and losing the ability to connect actions, activities, tactics, and work being done back to an ultimate marketing-driven business outcome.
We need the platform metrics. But, we still need to be able to translate metrics at every depth level back to something higher in the marketing and business ROI equation. Being able to automate and scale something without context can lead us to just doing more of something, doing it faster, or cheaper, but not necessarily moving the needle for ROI.
7. Reintroduce Strategic Review Into Search Marketing Cadence
I mentioned asking questions with human review earlier. More broadly, ensuring that strategic review is integrated into your search marketing cadence is important. My team has been challenging our own client reporting meetings, metrics, and flow recently.
Whether you already have a monthly or quarterly strategic review process or not, this is an opportunity to challenge what automation and AI are doing in the mix. What is it helping, hiding, or potentially distorting? How can we include this in strategic review and go beyond just the data, reports, and activity?
8. Elevate Search Reporting For Executive Audiences
At the heart of any talk about insights, we know we have to translate performance into narrative. With more automation, we need to have more translation. What we are doing matters. However, our executive peers and audiences are a degree (or more) further removed from what we do, and with new tech, are probably even less connected (no offense to the super high-tech execs I know and love).
We still must connect search behavior to customer intent and business priorities. That hasn’t changed, even if we need to layer in more or mine it out of the automation we have in place.
Wrap Up
Automation is essential, and for most, it is a big part of how our teams are scaling digital marketing and search marketing work. Plus, we’re leveraging the functions (whether by choice or not) in platforms and channels that we’re doing our work in.
Automation is incomplete, though, without insight. Strategic understanding is not just necessary, but can be a competitive advantage in search. When everyone is automating, getting above and beyond with strategic insights and leveraging them can be a difference-maker.
The goal here isn’t to slow automation. It is to advance your team’s ability to think critically while scaling implementation and execution.
Google’s Danny Sullivan and John Mueller’s Search Off The Record podcast offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.
SEO And The Next Generation Of Search
Google used to rank content based on keyword matching and PageRank was a way to extend that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.
What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”
So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are in a long-form format that answers three or more additional questions beyond the user’s initial query.
The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.
Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?
How To Write For Longform Answers
Given that Google is surfacing multi-paragraph answers, does it make sense to create content that’s organized into bite-sized chunks? How does that affect how humans read content? Will they like it or leave it?
Many SEOs are recommending that publishers break the page up into “chunks,” based on the intuition that AI understands content in chunks, dividing the page into sections. But that’s an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through the use of headings and HTML elements like ordered and unordered lists. A properly marked up and formatted web page should already have a logical structure that a human and a machine can easily understand. Duh… right?
It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers to not break their content up into chunks.
Danny said:
“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?
So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”
Danny talked about chunking with some Google engineers and his takeaway from that conversation is to recommend against chunking. The second takeaway is that their systems are set up to access content the way human readers access it and for that reason he says to craft the content for humans.
Avoids Talking About Search Referrals
But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out impacts referrals because Google is ranking a handful of pages for multiple queries for every one query that a user makes. But what compounds this situation, as you will see further on, is that the sites Google is ranking do not measure up.
Focus On The Big Picture
Danny Sullivan next discusses the downside of optimizing for a machine, explaining that systems eventually improve, which usually means that optimizations for machines stop working.
He explained:
“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.
…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”
Let’s Talk About Garbage AI Search Results
I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and that he truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.
Subject Matter Expertise Is Missing
Google is intentionally hiding expert publications in the search results, tucking them away under the More tab. To find expert content, a user has to click the More tab and then click the News tab.
How Google Hides Expert Web Pages
Google’s AI Mode Promotes Garbage And Sites Lacking Expertise
This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.
Google’s AI Mode cites the following pages:
1. An abandoned Medium Blog from 2018, that only ever had two blog posts, both of which have broken images. That’s not authoritative.
2. An article published on LinkedIn, a business social networking website. Again, that’s not authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?
3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?
Screenshot Of Google’s Garbage AI Results
Google Hides The Good Stuff In More > News Tab
Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable websites. Instead, Google hides the high-quality web pages under the More tab.
Screenshot Of Hidden High Quality Search Results
GEO Or SEO – It Doesn’t Matter
This whole thing about GEO or AEO and whether it’s all SEO doesn’t really matter. It’s all a bunch of hand waving and bluster. What matters is that Google is no longer ranking high quality sites and high quality sites are withering from a lack of traffic.
I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?
Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.
How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?
In my last post, I referenced how there is now a growing split between the “human” web and the “agentic” web, where AI agents are becoming an additional audience/profile alongside the “traditional” human visitors we have been optimizing for for years.
This shift is now becoming more aggressive, especially when it comes to the transactional web in the form of agentic commerce. 2026 will see the accelerated adoption of this method, where store owners will now have to cater to and optimize for both the human and agentic visitor concurrently.
The recent launch of Universal Commerce Protocol (UCP) from Google underlines the push towards this integration of AI and ecommerce experiences.
What Is Agentic Commerce?
Agentic commerce is when agents complete purchases autonomously on behalf of users. Now, a human can engage with a large language model platform, where the agent will browse and purchase from a site on behalf (and with the approval) of the human. Not only is the agent acting as the gatekeeper for information gain and influencing decisions, but it is also acting as the gatekeeper for the transaction itself.
This is a step beyond delegating an LLM to act as a recommendation agent or a method of validation, but now transfers authority to actually transact.
Enter ACP (Agentic Commerce Protocol)
On Sept. 29, 2025, OpenAI and Stripe announced their partnership and, within this, launched ACP, an open standard that defines how AI agents, merchants, and payment providers interact to complete agentic and programmatic purchases.
On the same day, OpenAI detailed platforms that were immediately able to benefit from agentic commerce, including Shopify and Etsy, with others following suit using the protocol, including Walmart and Instacart.
From a CMS point of view, Shopify hit the ground running by enabling ACP for over 1 million merchants from the day of the announcement. WooCommerce has followed suit more recently by announcing it will be part of Stripe’s launch of Agentic Commerce Suite, which will allow even more merchants the ability to sell products through various AI-based platforms.
But ACP was launched three months ago, and as we now know, things move fast…
UCP: Google’s Answer To The Immersive Agentic Commerce Experience
Google just announced the launch of Universal Commerce Protocol, which widens some of the boundaries applied by ACP by tackling a broader problem: giving any AI surface (like AI Mode in Search or Gemini) a common language to discover merchants, understand their capabilities, and orchestrate full journeys from discovery through order management, as well as engagement beyond the purchase (also made seamless using Google Pay). It does this in part by integrating with other existing standards, including APIs, Agent2Agent (A2A), and the Model Context Protocol (MCP).
| Aspect | ACP (OpenAI) | UCP (Google) |
| --- | --- | --- |
| Primary focus | Agent‑led commerce in ChatGPT and ACP‑aware agents. | Unified rail for many agents/surfaces talking to merchants. |
| Scope | | Discovery, checkout, discounts, fulfillment, order management, payments. |
| Driver | OpenAI + Stripe & ecosystem partners. | Google + retailers/platforms (Shopify, Etsy, Walmart, etc.). |
Here, Google adds to the possibilities of the commerce experience, where SEOs can adopt both ACP and UCP in order to accommodate both platforms and ecosystems.
This will only become more immersive as 2026 progresses. Google has the great advantage of knowing a lot about individual users, and features such as the AI capabilities inside Gmail illustrate that Google can draw on and understand much more context about individuals in order to provide an even more frictionless experience.
Why This Matters For SEOs
As SEOs, we’ve spent over a generation optimizing for humans, albeit for various personas or ICPs. While we are still required to do this, we must now include the agent as an additional consideration. This does pose another challenge: that AI agents don’t browse pages but instead query APIs, parse product feeds, and evaluate structured data.
As such, we need to optimize for this. Maybe I can give it a name…
ACO: Agentic Commerce Optimization
I don’t want to trigger you by introducing yet another acronym to what seems to be a previous year of new acronyms, but for the sake of this post, let’s pretend that ACO is something you’ve been told to do now, as well as SEO, even though this is still SEO.
What would I need to consider and optimize for to make ACO successful?
Crawlability: Agents still follow links, take journeys, and understand IA.
Format: Content needs to be concise, with less fluff but enough detail to ensure unique value has been added, and it should be consistent throughout the site as a whole.
Structured Data: Agents will become more reliant on existing standards, especially if they’re open source.
Brand Authority And Sentiment: Populating your products well is, of course, paramount, but without positive brand sentiment you face the challenge of convincing the agent to cite you as part of that discovery, and then convincing the human who will have that feedback presented to them. Third-party perspectives will become a larger contribution to some agents’ grounding procedures before any agentic commerce begins.
Sounds familiar, right? While ACP is a connector between your site and the platforms that allow agents to use it, and CMSs are out there to make that connection as seamless as possible, this isn’t just a switch that, once flipped, leaves everything automatically optimized.
ACO = SEO.
Schema.org Is The Glue
Last month at Google Search Central Live in Zurich, Pascal Fleury went into detail about structured data for Shopping, where we can see that, while “schema.org is the glue that holds [structured data] together,” there are still other industry standards, such as GS1, that add even more granular detail to products. These not only help inform agents on really specific details, but also signal that you’re a great source of information to keep ingesting from.
Product schema, pricing, availability, reviews, FAQs, shipping options and other logistics, loyalty schemes: all of this structured data will need close optimization. If it’s missing or incorrect, you’re invisible to agent-mediated discovery.
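As a rough illustration of the level of detail involved, the sketch below shows a hypothetical Product payload that layers review, shipping, and GS1-style identifier fields on top of the basics. Every value is a placeholder; the exact properties worth adding depend on your catalog, your platform, and the feeds you already maintain.

```python
import json

# Hypothetical example only: a Product payload with a placeholder GS1 identifier
# (gtin13), review signals, and shipping details that agents can use to compare
# offers. None of these values are real.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Hooded Sweatshirt",
    "sku": "EX-HS-010",
    "gtin13": "0000000000000",  # placeholder identifier
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
    "offers": {
        "@type": "Offer",
        "price": "59.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "shippingRate": {"@type": "MonetaryAmount", "value": "4.99", "currency": "GBP"},
            "shippingDestination": {"@type": "DefinedRegion", "addressCountry": "GB"},
        },
    },
}

print(json.dumps(product_jsonld, indent=2))
```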
Test The Agents
Even before your store is ACP-enabled, test how agents perceive your products. Ask platforms about products in your category. Do they surface your brand? How do they describe your products and complementary offerings? What information are they presenting, from both first-party and third-party perspectives? And more importantly, what is missing that you expected to be present?
Then, enable. What are the differences? Compare the results.
What Can I Do About It Now?
ACP
For WooCommerce and Wix, you will unfortunately need to join Stripe’s waitlist for ACS. Shopify users also have to join their own waitlist. Until full rollout, we will simply have to wait, but expect adoption to accelerate in Q1 of 2026.
If you work with a site where you have to integrate ACP directly into your CMS, early adopters may benefit from early discovery while other CMSs catch up and competition is lower. So here, while this will require more resources, you will be able to take advantage of what ACP has to offer while most wait for their CMS platform to create the solution for them.
UCP
This is extremely fresh information, but I suggest taking some time to understand it in detail, as well as experimenting where possible using the documentation and GitHub repo. I know that’s how a lot of my time will be spent in the next few weeks.
Welcome to this week’s Pulse: updates on rankings from December’s core update, platform responses to AI quality issues, and disputes that reveal tensions in AI-generated health information.
Early analysis of Google’s December core update suggests specialized sites gained visibility in several shared examples. Microsoft and Google executives reframed criticism of AI quality. The Guardian reported concerns about health-related AI Overviews, and Google pushed back on aspects of the testing.
Here’s what matters for you and your work.
December Core Update Favors Specialists Over Generalists
Early analysis of Google’s December core update suggests specialized sites gained visibility in examples shared across publishing, ecommerce, and SaaS.
Key facts: Aleyda Solís’s analysis found sites with narrower, category-specific strength appear to be gaining ground on “best of” and mid-funnel product terms.
Some publisher sites appeared to lose visibility on broader, top-of-funnel queries. In examples shared after the December 11-29 rollout, ecommerce and SaaS brands with direct category expertise appeared to outperform broader review sites and affiliate aggregators.
Why SEOs Should Pay Attention
This update highlights a trend where generalist sites face ranking pressure, especially on queries with commercial intent or specific domain knowledge. Sites covering multiple categories are affected by competition from dedicated category sites.
Google says improvements can take time to show up. Some changes can take effect in a few days, but it can take several months for its systems to confirm longer-term improvement. Google also says it makes smaller core updates that it doesn’t typically announce.
In the examples shared so far, specialization appears to outperform breadth when queries have specific intent.
What SEO Professionals Are Saying
Luke R., founder at Adexa.io, commented on LinkedIn:
“Specialists rise when search stops guessing and starts serving intent. These shifts reward brands that live one problem, one buyer.”
Ayesha Asif, social media manager and content strategist, wrote:
“Generalist pages used to win on authority, but now depth matters more than domain size.”
“This feels like the beginning of a long-anticipated transition in how search evaluates relevance and expertise.”
In that thread, several commenters argued the update favors deep, category-specific content over broad coverage, and suggested domain authority mattered less than focused expertise in the examples being discussed.
Guardian Investigation Claims AI Overview Health Inaccuracies
The Guardian reported that health organizations and experts reviewed examples of AI Overviews for medical queries and raised concerns about inaccuracies. A Google spokesperson said many examples were “incomplete screenshots.” The spokesperson also said the vast majority of AI Overviews are factual and helpful, and that Google continuously makes quality improvements.
Key facts: The Guardian said it tested health queries and shared AI Overview responses with health groups and experts for review. A Google spokesperson said many examples were “incomplete screenshots,” but added that the results linked “to well-known, reputable sources” and recommended seeking out expert advice.
Why SEOs Should Pay Attention
AI Overviews can appear at the top of results. When the topic is health, errors carry more weight. The Guardian’s reporting also highlights a practical problem. One charity leader told The Guardian the AI summary changed when repeating the same search, pulling from different sources. That can make verification harder.
Publishers have spent years investing in documented medical expertise to meet Google’s expectations around health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.
What Health Organizations Are Saying
Sophie Randall, director of the Patient Information Forum, told The Guardian:
“Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health”
Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, stated:
“If someone followed what the search result told them, they might not take in enough calories … and be unable to tolerate either chemotherapy or potentially life-saving surgery.”
The reactions reveal two concerns. First, that even when AI Overviews link to trusted sources, the summary itself can override that trust by presenting confident but incorrect guidance. Second, some reactions framed Google’s response as addressing individual examples without explaining how these errors happen or how often they occur.
Microsoft CEO And Google Engineer Reframe AI Quality Criticism
Within one week, Microsoft CEO Satya Nadella published a blog post asking the industry to “get beyond the arguments of slop vs. sophistication,” while Google Principal Engineer Jaana Dogan posted that people are “only anti new tech when they are burned out from trying new tech.”
Key facts: Nadella’s blog post characterized AI as “cognitive amplifier tools” and called for “a new equilibrium” that accounts for humans having these tools. Dogan’s X post framed anti-AI sentiment as burnout from trying new technology. In replies, some people pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable in day-to-day workflows. The timing follows Merriam-Webster naming “slop” its 2025 Word of the Year.
Why SEOs Should Pay Attention
Some readers may interpret these statements as an attempt to move the conversation away from output quality and toward user expectations. When people are urged to move past “slop vs. sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.
The practical concern is how these companies respond to user feedback versus how they frame criticism. Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related one.
What Industry Observers Are Saying
Jez Corden, managing editor at Windows Central, wrote that Nadella’s framing of AI as a “scaffolding for human potential” felt “either naively utopic, or at worse, wilfully dishonest.”
Tom Warren, senior editor at The Verge, wrote on Bluesky that Nadella wants everyone to move beyond the arguments about AI slop, calling 2026 a “pivotal year for AI.”
The commentary reveals a gap between executive messaging about AI as a transformative technology and the user experience of AI products, which feels inconsistent or forced. Some reactions suggested the request drew more attention to the term.
Each story this week reveals a tension between the quality standards applied to publishers and those applied to platforms’ own AI systems.
The December core update appears to put more weight on category expertise than broad coverage in the examples highlighted. The Guardian investigation questions whether AI Overviews meet the accuracy bar Google sets for health content. The Nadella messaging attempts to reframe quality concerns as user adjustment problems rather than product issues.
The week highlights a tension between the standards applied to websites and the way platforms defend their own AI summaries when accuracy is questioned.
When most people hear the phrase “AI bias,” their mind jumps to ethics, politics, or fairness. They think about whether systems lean left or right, whether certain groups are represented properly, or whether models reflect human prejudice. That conversation matters. But it is not the conversation reshaping search, visibility, and digital work right now.
The bias that is quietly changing outcomes is not ideological. It is structural and operational. It emerges from how AI systems are built and trained, how they retrieve and weight information, and how they are rewarded. It exists even when everyone involved is acting in good faith. And it affects who gets seen, cited, and summarized long before anyone argues about intent.
This article is about that bias. Not as a flaw or as a scandal. But as a predictable consequence of machine systems designed to operate at scale under uncertainty.
To talk about it clearly, we need a name. We need language that practitioners can use without drifting into moral debate or academic abstraction. This behavior has been studied, but what hasn’t existed is a single term that explains how it manifests as visibility bias in AI-mediated discovery. I’m calling it Machine Comfort Bias.
Why AI Answers Cannot Be Neutral
To understand why this bias exists, we need to be precise about how modern AI answers are produced.
AI systems do not search the web the way people do. They do not evaluate pages one by one, weigh arguments, or reason toward a conclusion. What they do instead is retrieve information, weight it, compress it, and generate a response that is statistically likely to be acceptable given what they have seen before, a process openly described in modern retrieval-augmented generation architectures such as those outlined by Microsoft Research.
That process introduces bias before a single word is generated.
First comes retrieval. Content is selected based on relevance signals, semantic similarity, and trust indicators. If something is not retrieved, it cannot influence the answer at all.
Then comes weighting. Retrieved material is not treated equally. Some sources carry more authority. Some phrasing patterns are considered safer. Some structures are easier to compress without distortion.
Finally comes generation. The model produces an answer that optimizes for probability, coherence, and risk minimization. It does not aim for novelty. It does not aim for sharp differentiation. It aims to sound right, a behavior explicitly acknowledged in system-level discussions of large models such as OpenAI’s GPT-4 overview.
At no point in this pipeline does neutrality exist in the way humans usually mean it. What exists instead is preference. Preference for what is familiar. Preference for what has been validated before. Preference for what fits established patterns.
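To see where those preferences creep in, here is a toy sketch of the retrieve, weight, and generate flow. The scoring is invented purely for illustration and there is no real model behind it; the shape of the pipeline is the point, since anything not retrieved or weighted highly never reaches the generation step at all.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str
    authority: float  # 0..1, e.g. derived from links and citations

CORPUS = [
    Doc("https://example.org/a", "A widely repeated explanation of the topic.", 0.9),
    Doc("https://example.org/b", "A newer, less common framing of the topic.", 0.3),
]

def retrieve(query: str, corpus: list[Doc], k: int = 5) -> list[Doc]:
    # Stand-in for semantic retrieval: keep docs sharing any query term.
    terms = set(query.lower().split())
    return [d for d in corpus if terms & set(d.text.lower().split())][:k]

def weight(docs: list[Doc]) -> list[Doc]:
    # Stand-in for trust weighting: higher-authority sources sort first.
    return sorted(docs, key=lambda d: d.authority, reverse=True)

def generate(docs: list[Doc]) -> str:
    # Stand-in for generation: only what survived the earlier stages can be used.
    if not docs:
        return "No confident answer."
    return "Answer grounded in: " + ", ".join(d.url for d in docs)

print(generate(weight(retrieve("explanation of the topic", CORPUS))))
```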
Introducing Machine Comfort Bias
Machine Comfort Bias describes the tendency of AI retrieval and answer systems to favor information that is structurally familiar, historically validated, semantically aligned with prior training, and low-risk to reproduce, regardless of whether it represents the most accurate, current, or original insight.
This is not a new behavior. The underlying components have been studied for years under different labels. Training data bias. Exposure bias. Authority bias. Consensus bias. Risk minimization. Mode collapse.
What is new is the surface on which these behaviors now operate. Instead of influencing rankings, they influence answers. Instead of pushing a page down the results, they erase it entirely.
Machine Comfort Bias is not a scientific replacement term. It is a unifying lens. It brings together behaviors that are already documented but rarely discussed as a single system shaping visibility.
Where Bias Enters The System, Layer By Layer
To understand why Machine Comfort Bias is so persistent, it helps to see where it enters the system.
Training Data And Exposure Bias
Language models learn from large collections of text. Those collections reflect what has been written, linked, cited, and repeated over time. High-frequency patterns become foundational. Widely cited sources become anchors.
This means that models are deeply shaped by past visibility. They learn what has already been successful, not what is emerging now. New ideas are underrepresented by definition. Niche expertise appears less often. Minority viewpoints show up with lower frequency, a limitation openly discussed in platform documentation about model training and data distribution.
This is not an oversight. It is a mathematical reality.
Authority And Popularity Bias
When systems are trained or tuned using signals of quality, they tend to overweight sources that already have strong reputations. Large publishers, government sites, encyclopedic resources, and widely referenced brands appear more often in training data and are more frequently retrieved later.
The result is a reinforcement loop. Authority increases retrieval. Retrieval increases citation. Citation increases perceived trust. Trust increases future retrieval. And this loop does not require intent. It emerges naturally from how large-scale AI systems reinforce signals that have already proven reliable.
Structural And Formatting Bias
Machines are sensitive to structure in ways humans often underestimate. Clear headings, definitional language, explanatory tone, and predictable formatting are easier to parse, chunk, and retrieve, a reality long acknowledged in how search and retrieval systems process content, including Google’s own explanations of machine interpretation.
Content that is conversational, opinionated, or stylistically unusual may be valuable to humans but harder for systems to integrate confidently. When in doubt, the system leans toward content that looks like what it has successfully used before. That is comfort expressed through structure.
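As a small, simplified sketch of that structural preference, the snippet below splits an HTML-ish page into heading-anchored chunks, the kind of segmentation many retrieval pipelines perform before indexing. The markup and the splitting rule are assumptions for illustration, not any particular engine’s implementation.

```python
import re

# Simplified sketch: split a document into heading-anchored chunks. Real
# pipelines also respect lists, tables, and token limits; the point is that
# clear headings give the machine obvious seams to cut along.
html = """
<h2>What is structured data?</h2>
<p>Structured data describes a page's content in a machine-readable format.</p>
<h2>Why it matters for retrieval</h2>
<p>Well-labeled sections are easier to retrieve and cite without distortion.</p>
"""

def chunk_by_headings(doc: str) -> list[dict]:
    parts = re.split(r"<h2>(.*?)</h2>", doc)  # [pre, heading, body, heading, body, ...]
    chunks = []
    for i in range(1, len(parts) - 1, 2):
        body = re.sub(r"<.*?>", " ", parts[i + 1]).strip()
        chunks.append({"heading": parts[i], "text": body})
    return chunks

for chunk in chunk_by_headings(html):
    print(chunk["heading"], "->", chunk["text"])
```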
Semantic Similarity And Embedding Gravity
Modern retrieval relies heavily on embeddings. These are mathematical representations of meaning that allow systems to compare content based on similarity rather than keywords.
Embedding systems naturally cluster around centroids. Content that sits close to established semantic centers is easier to retrieve. Content that introduces new language, new metaphors, or new framing sits farther away, a dynamic visible in production systems such as Azure’s vector search implementation.
This creates a form of gravity. Established ways of talking about a topic pull answers toward themselves. New ways struggle to break in.
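A tiny numerical sketch shows that gravity in action. The three-dimensional vectors below are invented stand-ins for embeddings (real models use hundreds or thousands of dimensions), but the effect is the same: a page phrased like the existing corpus lands close to the topic’s centroid and scores a higher cosine similarity than a page that introduces new framing, even when both are on topic.

```python
import math

# Invented "embeddings" for illustration only.
ESTABLISHED_DOCS = [
    [0.90, 0.10, 0.00],
    [0.85, 0.15, 0.05],
    [0.80, 0.20, 0.10],
]
FAMILIAR_PAGE = [0.88, 0.12, 0.02]  # phrased like the existing corpus
NOVEL_PAGE = [0.40, 0.50, 0.60]     # new framing, semantically farther away

def centroid(vectors: list[list[float]]) -> list[float]:
    # Average each dimension to find the semantic center of the cluster.
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

topic_center = centroid(ESTABLISHED_DOCS)
print("familiar page:", round(cosine(FAMILIAR_PAGE, topic_center), 3))  # ~0.99
print("novel page:   ", round(cosine(NOVEL_PAGE, topic_center), 3))    # ~0.59
```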
Safety And Risk Minimization Bias
AI systems are designed to avoid harmful, misleading, or controversial outputs. This is necessary. But it also shapes answers in subtle ways.
Sharp claims are riskier than neutral ones. Nuance is riskier than consensus. Strong opinions are riskier than balanced summaries.
When faced with uncertainty, systems tend to choose language that feels safest to reproduce. Over time, this favors blandness, caution, and repetition, a trade-off described directly in Anthropic’s work on Constitutional AI as far back as 2023.
Why Familiarity Wins Over Accuracy
One of the most uncomfortable truths for practitioners is that accuracy alone is not enough.
Two pages can be equally correct. One may even be more current or better researched. But if one aligns more closely with what the system already understands and trusts, that one is more likely to be retrieved and cited.
This is why AI answers often feel similar. It is not laziness. It is system optimization. Familiar language reduces the chance of error. Familiar sources reduce the chance of controversy. Familiar structure reduces the chance of misinterpretation, a phenomenon widely observed in mainstream analysis showing that LLM-generated outputs are significantly more homogeneous than human-generated ones.
From the system’s perspective, familiarity is a proxy for safety.
The Shift From Ranking Bias To Existence Bias
Traditional search has long grappled with bias. That work has been explicit and deliberate. Engineers measure it, debate it, and attempt to mitigate it through ranking adjustments, audits, and policy changes.
Most importantly, traditional search bias has historically been visible. You could see where you ranked. You could see who outranked you. You could test changes and observe movement.
AI answers change the nature of the problem.
When an AI system produces a single synthesized response, there is no ranking list to inspect. There is no second page of results. There is only inclusion or omission. This is a shift from ranking bias to existence bias.
If you are not retrieved, you do not exist in the answer. If you are not cited, you do not contribute to the narrative. If you are not summarized, you are invisible to the user.
That is a fundamentally different visibility challenge.
Machine Comfort Bias In The Wild
You do not need to run thousands of prompts to see this behavior. It has already been observed, measured, and documented.
Studies and audits consistently show that AI answers disproportionately mirror encyclopedic tone and structure, even when multiple valid explanations exist.
Independent analyses also reveal high overlap in phrasing across answers to similar questions. Change the prompt slightly, and the structure remains. The language remains. The sources remain.
These are not isolated quirks. They are consistent patterns.
What This Changes About SEO, For Real
This is where the conversation gets uncomfortable for the industry.
SEO has always involved bias management. Understanding how systems evaluate relevance, authority, and quality has been the job. But the feedback loops were visible. You could measure impact, and you could test hypotheses. Machine Comfort Bias now complicates that work.
When outcomes depend on retrieval confidence and generation comfort, feedback becomes opaque. You may not know why you were excluded. You may not know which signal mattered. You may not even know that an opportunity existed.
This shifts the role of the SEO: from optimizer to interpreter, from ranking tactician to system translator. That shift also reshapes where career value sits. The people who understand how machine comfort forms, how trust accumulates, and how retrieval systems behave under uncertainty become critical, not because they can game the system, but because they can explain it.
What Can Be Influenced, And What Cannot
It is important to be honest here. You cannot remove Machine Comfort Bias, nor can you force a system to prefer novelty. You cannot demand inclusion.
What you can do is work within the boundaries. You can make structure explicit without flattening voice, and you can align language with established concepts without parroting them. You can demonstrate expertise across multiple trusted surfaces so that familiarity accumulates over time. You can also reduce friction for retrieval and increase confidence for citation. The bottom line here is that you can design content that machines can safely use without misinterpretation. This shift is not about conformity; it’s about translation.
How To Explain This To Leadership Without Losing The Room
One of the hardest parts of this shift is communication. Telling an executive that “the AI is biased against us” rarely lands well. It sounds defensive and speculative.
A better framing is this: AI systems favor what they already understand and trust. Our risk is not being wrong; our risk is being unfamiliar. That is the new business risk, and it affects visibility, brand inclusion, and how markets learn about new ideas.
Once framed that way, the conversation changes. This is no longer about influencing algorithms. It is about ensuring the system can recognize and confidently represent the business.
Bias Literacy As A Core Skill For 2026
As AI intermediaries become more common, bias literacy becomes a professional requirement. It does not mean memorizing research papers; it means understanding where preference forms, how comfort manifests, and why omission happens. It means being able to look at an AI answer and ask not just “is this right,” but “why did this version of ‘right’ win.” That is a distinct skill, and it will define who thrives in the next phase of digital work.
Naming The Invisible Changes
Machine Comfort Bias is not an accusation. It is a description, and by naming it, we make it discussable. By understanding it, we make it predictable. And anything predictable can be planned for.
This is not a story about loss of control. It is a story about adaptation, about learning how systems see the world and designing visibility accordingly.
Bias has not disappeared. It has changed shape, and now that we can see it, we can work with it.
“Is there any difference between how AI systems handle JavaScript-rendered or interactively hidden content compared to traditional Google indexing? What technical checks can SEOs do to confirm that all page critical information is available to machines?”
For several years now, SEOs have been fairly encouraged by Googlebot’s improvements in being able to crawl and render JavaScript-heavy pages. However, with the new AI crawlers, this might not be the case.
In this article, we’ll look at the differences between the two crawler types, and how to ensure your critical webpage content is accessible to both.
How Does Googlebot Render JavaScript Content?
Googlebot processes JavaScript in three main stages: crawling, rendering, and indexing. At a high level, this is how each stage works:
Crawling
Googlebot will queue pages to be crawled when it discovers them on the web. Not every page that gets queued will be crawled, however, as Googlebot will check to see if crawling is allowed. For example, it will see if the page is blocked from crawling via a disallow command in the robots.txt.
If the page is not eligible to be crawled, Googlebot will skip it, forgoing an HTTP request. If a page is eligible, Googlebot will crawl it and move on to rendering the content.
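If you want to check this first gate yourself, Python's standard library includes a robots.txt parser. The URL and user-agent tokens below are placeholders; swap in your own site and the crawlers you care about.

from urllib import robotparser

# Quick crawl-eligibility check for different user agents (placeholder URLs).
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

for agent in ["Googlebot", "GPTBot", "ClaudeBot", "PerplexityBot"]:
    allowed = rp.can_fetch(agent, "https://www.example.com/some-page/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} by robots.txt")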
Rendering
Googlebot will check if the page is eligible to be indexed by ensuring there are no requests to keep it from the index, for example, via a noindex meta tag. Googlebot will queue the page to be rendered. The rendering may happen within seconds, or it may remain in the queue for a longer period of time. Rendering is a resource-intensive process, and as such, it may not be instantaneous.
In the meantime, the bot receives the initial HTML response: the content available before JavaScript is executed. This is typically the page HTML, which is available as soon as the page is crawled.
Once the JavaScript is executed, Googlebot will receive the fully constructed page, the “browser render.”
Indexing
Eligible pages and information will be stored in the Google index and made available to serve as search results at the point of user query.
How Does Googlebot Handle Interactively Hidden Content?
Not all content is available to users when they first land on a page. For example, you may need to click through tabs to find supplementary content, or expand an accordion to see all of the information.
Googlebot doesn’t have the ability to switch between tabs, or to click open an accordion. So, making sure it can parse all the page’s information is important.
The way to do this is to make sure that the information is contained within the DOM on the first load of the page. Meaning, content may be “hidden from view” on the front end before clicking a button, but it’s not hidden in the code.
Think of it like this: The HTML content is “hidden in a box”; the JavaScript is the key to open the box. If Googlebot has to open the box, it may not see that content straightaway. However, if the server has opened the box before Googlebot requests it, then it should be able to get to that content via the DOM.
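Here is a minimal, hypothetical example of an already opened box: the accordion body is present in the initial HTML (and therefore in the DOM on first load), even though a user only sees it after clicking. The markup and the Python check around it are illustrative, not a prescription.

# The accordion content is visually hidden but present in the markup the server sends.
initial_html = """
<button aria-expanded="false" aria-controls="faq-1">Do you ship to the UK?</button>
<div id="faq-1" hidden>
  Yes, we ship to the UK within 3-5 business days.
</div>
"""

# A bot that never executes JavaScript still receives this markup, so a simple
# substring check against the raw HTML confirms the content is there.
print("ship to the UK within" in initial_html)  # True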
How To Improve The Likelihood That Googlebot Will Be Able To Read Your Content
The key to ensuring that content can be parsed by Googlebot is making it accessible without the need for the bot to render the JavaScript. One way of doing this is by forcing the rendering to happen on the server itself.
Server-side rendering is the process by which a webpage is rendered on the server rather than by the browser. This means an HTML file is prepared and sent to the user’s browser (or the search engine bot), and the content of the page is accessible to them without waiting for the JavaScript to load. This is because the server has essentially created a file that has rendered content in it already; the HTML and CSS are accessible immediately. Meanwhile, JavaScript files that are stored on the server can be downloaded by the browser.
This is opposed to client-side rendering, which requires the browser to fetch and compile the JavaScript before content is accessible on the webpage. This is a much lower lift for the server, which is why it is often favored by website developers, but it does mean that bots struggle to see the content on the page without rendering the JavaScript first.
How Do LLM Bots Render JavaScript?
Given what we now know about how Googlebot renders JavaScript, how does that differ from AI bots?
The most important element to understand about the following is that, unlike Googlebot, there is no “one” governing body that represents all the bots that might be encompassed under “LLM bots.” That is, what one bot might be capable of doing won’t necessarily be the standard for all.
The bots that scrape the web to power the knowledge bases of the LLMs are not the same as the bots that visit a page to bring back timely information to a user via a search engine.
And Claude’s bots do not have the same capability as OpenAI’s.
When we are considering how to ensure that AI bots can access our content, we have to cater to the lowest-capability bots.
Less is known about how LLM bots render JavaScript, mainly because, unlike Google, the AI bots are not sharing that information. However, some very smart people have been running tests to identify how each of the main LLM bots handles it.
Back in 2024, Vercel published an investigation into the JavaScript rendering capabilities of the main LLM bots, including OpenAI’s, Anthropic’s, Meta’s, ByteDance’s, and Perplexity’s. According to their study, none of those bots were able to render JavaScript. The only crawlers that could were Gemini (which leverages Googlebot’s infrastructure), Applebot, and Common Crawl’s CCBot.
More recently, Glenn Gabe reconfirmed Vercel’s findings through his own in-depth analysis of how ChatGPT, Perplexity, and Claude handle JavaScript. He also runs through how to test your own website in the LLMs to see how they handle your content.
These are the most well-known bots, from some of the most heavily funded AI companies in this space. It stands to reason that if they are struggling with JavaScript, lesser-funded or more niche ones will be also.
How Do AI Bots Handle Interactively Hidden Content?
Not well. That is, if the interactive content requires some execution of JavaScript, they may struggle to parse it.
To ensure the bots are able to see content hidden behind tabs, or in accordions, it is prudent to ensure the content loads fully in the DOM without the need to execute JavaScript. Human visitors can still interact with the content to reveal it, but the bots won’t need to.
How To Check For JavaScript Rendering Issues
There are two very easy ways to check if Googlebot is able to render all the content on your page:
Check The DOM Through Developer Tools
The DOM (Document Object Model) is an interface for a webpage that represents the HTML page as a series of “nodes” and “objects.” It essentially links a webpage’s HTML source code to JavaScript, which enables the functionality of the webpage to work. In simple terms, think of a webpage as a family tree. Each element on a webpage is a “node” on the tree. So, a header tag (such as <h1>), a paragraph (<p>), and the body of the page itself (<body>) are all nodes on the family tree.
When a browser loads a webpage, it reads the HTML and turns it into the family tree (the DOM).
How To Check It
I’ll take you through this using Chrome’s Developer Tools as an example.
You can check the DOM of a page by going to your browser. Using Chrome, right-click and select “Inspect.” From there, make sure you’re in the “Elements” tab.
To see if content is visible on your webpage without having to execute JavaScript, you can search for it here. If you find the content fully within the DOM when you first load the page (and don’t interact with it further), then it should be visible to Googlebot and LLM bots.
Use Google Search Console
To check if the content is visible specifically to Googlebot, you can use Google Search Console.
Choose the page you want to test and paste it into the “Inspect any URL” field. Search Console will then take you to another page where you can “Test live URL.” When you test a live page, you will be presented with another screen where you can opt to “View tested page.”
How To Check If An LLM Bot Can See Your Content
As per Glenn Gabe’s experiments, you can ask the LLMs themselves what they can read from a specific webpage. For example, you can prompt them to read the text of an article. If they cannot read it because the content depends on JavaScript, they will usually say so.
Viewing The Source HTML
If we are working to the lowest common denominator, it is prudent to assume, at this point, that LLM bots can’t read content that depends on JavaScript. To be confident they can access your content, make sure it is present in the source HTML of the page. To check this, go to Chrome and right-click on the page. From the menu, select “View page source.” If you can find the text in this code, you know it’s in the source HTML of the page.
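If you want to automate that lowest-common-denominator check across many URLs, a short script that fetches the raw HTML and searches for a known phrase is enough. This sketch assumes the third-party requests library; the URL and phrase are placeholders for your own pages.

import requests  # third-party; install with pip install requests

# Does the phrase appear in the raw HTML response, before any JavaScript runs?
url = "https://www.example.com/blog/my-article/"
phrase = "the exact sentence you expect bots to read"

response = requests.get(url, timeout=10, headers={"User-Agent": "content-check-script"})
if phrase in response.text:
    print("Found in the source HTML - non-rendering bots should be able to read it.")
else:
    print("Not in the source HTML - it may only exist after JavaScript executes.")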
What Does This Mean For Your Website?
Essentially, Googlebot has been developed over the years to be much better at handling JavaScript than the newer LLM bots. However, it’s really important to understand that the LLM bots are not trying to crawl and render the web in the same way as Googlebot. Don’t assume that they will ever try to mimic Googlebot’s behavior. Don’t consider them “behind” Googlebot. They are a different beast altogether.
For your website, this means you need to check if your page loads all the pertinent information in the DOM on the first load of the page to satisfy Googlebot’s needs. For the LLM bots, to be very sure the content is available to them, check your static HTML.
Your future customers are relying on answer engines to surface a single recommendation, not a list of options.
Yet most small businesses remain invisible to AI because their Google Business Profile information is incomplete, inconsistent, or structured in ways these AI chat systems cannot confidently interpret. The result is fewer calls, missed bookings, and lost revenue.
In this upcoming webinar session, Raj Madhavni, Co-Founder, Alpha SEO Pros at Thryv, will explain how AI assistants evaluate local businesses today and which signals most influence recommendations. He will also identify the common gaps that prevent businesses from being selected and outline how to address them before 2026.
In this session, you will learn:
The ranking signals AI assistants use to select local businesses
A practical roadmap to increase AI driven visibility, trust, and conversions in 2026
Why Attend?
This webinar gives small business owners and marketers a clear framework for competing in an AI driven local search environment. You will leave with actionable guidance to close visibility gaps, strengthen trust signals, and position your business as the one AI assistants recommend when customers ask.
Register now to prepare your business for local AI search in 2026.
🛑 Can’t attend live? Register anyway, and we’ll send you the on demand recording after the session.
Google’s AI Overviews (AIO) represent a fundamental architectural shift in search. Retrieval has moved from a localized ranking-and-serving model, designed to return the most appropriate regional URL, to a semantic synthesis model, designed to assemble the most complete and defensible explanation of a topic.
This shift has introduced a new and increasingly visible failure mode: geographic leakage, where AI Overviews cite international or out-of-market sources for queries with clear local or commercial relevance.
This behavior is not the result of broken geo-targeting, misconfigured hreflang, or poor international SEO hygiene. It is the predictable outcome of systems designed to resolve ambiguity through semantic expansion, not contextual narrowing. When a query is ambiguous, AI Overviews prioritize explanatory completeness across all plausible interpretations. Sources that resolve any sub-facet with greater clarity, specificity, or freshness gain disproportionate influence – regardless of whether they are commercially usable or geographically appropriate for the user.
From an engineering perspective, this is a technical success. The system reduces hallucination risk, maximizes factual coverage, and surfaces diverse perspectives. From a business and user perspective, however, it exposes a structural gap: AI Overviews have no native concept of commercial harm. The system does not evaluate whether a cited source can be acted upon, purchased from, or legally used in the user’s market.
This article reframes geographic leakage as a feature-bug duality inherent to generative search. It explains why established mechanisms such as hreflang struggle in AI-driven experiences, identifies ambiguity and semantic normalization as force multipliers in misalignment, and outlines a Generative Engine Optimization (GEO) framework to help organizations adapt in the generative era.
The Engineering Perspective: A Feature Of Robust Retrieval
From an AI engineering standpoint, selecting an international source for an AI Overview is not an error. It is the intended outcome of a system optimized for factual grounding, semantic recall, and hallucination prevention.
1. Query Fan-Out And Technical Precision
AI Overviews employ a query fan-out mechanism that decomposes a single user prompt into multiple parallel sub-queries. Each sub-query explores a different facet of the topic – definitions, mechanics, constraints, legality, role-specific usage, or comparative attributes.
The unit of competition in this system is no longer the page or the domain. It is the fact-chunk. If a particular source contains a paragraph or explanation that is more explicit, more extractable, or more clearly structured for a specific sub-query, it may be selected as a high-confidence informational anchor – even if it is not the best overall page for the user.
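A simplified sketch helps show what it means for the fact-chunk, rather than the page, to be the unit of competition. Everything below is invented for illustration: the sub-queries, the candidate chunks, and the scoring function (plain word overlap standing in for semantic similarity). Note how one sub-query ends up grounded by an out-of-market URL simply because its chunk answers that facet most directly.

# Toy fan-out retrieval: each sub-query picks its own best chunk, independent of market.
def score(sub_query: str, chunk: str) -> float:
    """Stand-in for semantic similarity: here, just word overlap."""
    q, c = set(sub_query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q)

sub_queries = [
    "what is a heat pump",
    "heat pump installation cost",
    "are heat pumps legal in apartments",
]

chunks = {
    "brand.com/us/guide#definition": "A heat pump is a device that moves heat rather than generating it.",
    "brand.com/de/guide#kosten": "Heat pump installation cost ranges widely depending on the property.",
    "gov-site.org/rules#flats": "Heat pumps are legal in apartments subject to local planning rules.",
}

for sq in sub_queries:
    best = max(chunks, key=lambda url: score(sq, chunks[url]))
    print(f"{sq!r} -> grounded by {best}")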
2. Cross-Language Information Retrieval (CLIR)
The appearance of English summaries sourced from foreign-language pages is a direct result of Cross-Language Information Retrieval.
Modern LLMs are natively multilingual. They do not “translate” pages as a discrete step. Instead, they normalize content from different languages into a shared semantic space and synthesize responses based on learned facts rather than visible snippets. As a result, language differences no longer serve as a natural boundary in retrieval decisions.
Semantic Retrieval Vs. Ranking Logic: A Structural Disconnect
The technical disconnect observed in AI Overviews, where an out-of-market page is cited despite the presence of a fully localized equivalent, stems from a fundamental conflict between search ranking logic and LLM retrieval logic.
Traditional Google Search is designed around serving. Signals such as IP location, language, and hreflang act as strong directives once relevance has been established, determining which regional URL should be shown to the user.
Generative systems are designed around retrieval and grounding. In Retrieval-Augmented Generation pipelines, these same signals are frequently treated as secondary hints, or ignored entirely, when they conflict with higher-confidence semantic matches discovered during fan-out retrieval.
Once a specific URL has been selected as the source of truth for a given fact, downstream geographic logic has limited ability to override that choice.
The Vector Identity Problem: When Markets Collapse Into Meaning
At the core of this behavior is a vector identity problem.
In modern LLM architectures, content is represented as numerical vectors encoding semantic meaning. When two pages contain substantively identical content, even if they serve different markets, they are often normalized into the same or near-identical semantic vector.
From the model’s perspective, these pages are interchangeable expressions of the same underlying entity or concept. Market-specific constraints such as shipping eligibility, currency, or checkout availability are not semantic properties of the text itself; they are metadata properties of the URL.
During the grounding phase, the AI selects sources from a pool of high-confidence semantic matches. If one regional version was crawled more recently, rendered more cleanly, or expressed the concept more explicitly, it can be selected without evaluating whether it is commercially usable for the searcher.
Freshness As A Semantic Multiplier
Freshness amplifies this effect. Retrieval-Augmented Generation systems often treat recency as a proxy for accuracy. When semantic representations are already normalized across languages and markets, even a minor update to one regional page can unintentionally elevate it above otherwise equivalent localized versions.
Importantly, this does not require a substantive difference in content. A change in phrasing, the addition of a clarifying sentence, or a more explicit explanation can tip the balance. Freshness, therefore, acts as a multiplier on semantic dominance, not as a neutral ranking signal.
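A toy calculation illustrates how a small recency advantage can decide between two regional pages whose content has collapsed into near-identical vectors. The vectors, crawl dates, and boost formula below are all invented; real systems weight freshness differently, but the tipping effect is the same.

import numpy as np
from datetime import date

# Illustrative only: two near-duplicate regional pages, distinguished mainly by crawl date.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.70, 0.30, 0.10])
pages = {
    "brand.com/uk/product": (np.array([0.69, 0.31, 0.10]), date(2025, 3, 1)),
    "brand.com/us/product": (np.array([0.70, 0.30, 0.11]), date(2025, 11, 20)),  # rephrased recently
}

def freshness_boost(last_crawled: date, today: date = date(2025, 12, 1)) -> float:
    """Assumed boost: newer content gets at most a 5% multiplicative advantage."""
    days_old = (today - last_crawled).days
    return 1.0 + max(0.0, (365 - days_old) / 365) * 0.05

for url, (vec, crawled) in pages.items():
    print(url, round(cosine(query_vec, vec) * freshness_boost(crawled), 4))
# The US page edges out the UK page for a UK user's query, purely on recency.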
Ambiguity As A Force Multiplier In Generative Retrieval
One of the most significant, and least understood, drivers of geographic leakage is query ambiguity.
In traditional search, ambiguity was often resolved late in the process, at the ranking or serving layer, using contextual signals such as user location, language, device, and historical behavior. Users were trained to trust that Google would infer intent and localize results accordingly.
Generative retrieval systems respond to ambiguity very differently. Rather than forcing early intent resolution, ambiguity triggers semantic expansion. The system explores all plausible interpretations in parallel, with the explicit goal of maximizing explanatory completeness.
This is an intentional design choice. It reduces the risk of omission and improves answer defensibility. However, it introduces a new failure mode: as the system optimizes for completeness, it becomes increasingly willing to violate commercial and geographic constraints that were previously enforced downstream.
In ambiguous queries, the system is no longer asking, “Which result is most appropriate for this user?”
It is asking, “Which sources most completely resolve the space of possible meanings?”
Why Correct Hreflang Is Overridden
The presence of a correctly implemented hreflang cluster does not guarantee regional preference in AI Overviews because hreflang operates at a different layer of the system.
Hreflang was designed for a post-retrieval substitution model. Once a relevant page is identified, the appropriate regional variant is served. In AI Overviews, relevance is resolved upstream during fan-out and semantic retrieval.
When fan-out sub-queries focus on definitions, mechanics, legality, or role-specific usage, the system prioritizes informational density over transactional alignment. If an international or home-market page provides the “first best answer” for a specific sub-query, that page is retrieved immediately as a grounding source.
Unless a localized version provides a technically superior answer for the same semantic branch, it is simply not considered.
In short, hreflang can influence which URL is served. It cannot influence which URL is retrieved, and in AI Overviews, retrieval is where the decision is effectively made.
The Diversity Mandate: The Programmatic Driver Of Leakage
AI Overviews are explicitly designed to surface a broader and more diverse set of sources than traditional top 10 search results.
To satisfy this requirement, the system evaluates URLs, not business entities, as distinct sources. International subfolders or country-specific paths are therefore treated as independent candidates, even when they represent the same brand and product.
Once a primary brand URL has been selected, the diversity filter may actively seek an alternative URL to populate additional source cards. This creates a form of ghost diversity, where the system appears to surface multiple perspectives while effectively referencing the same entity through different market endpoints.
The Business Perspective: A Commercial Bug
The failures described below are not due to misconfigured geo-targeting or incomplete localization. They are the predictable downstream consequence of a system optimized to resolve ambiguity through semantic completeness rather than commercial utility.
1. The Commercial Blind Spot
From a business standpoint, the goal of search is to facilitate action. AI Overviews, however, do not evaluate whether a cited source can be acted upon. They have no native concept of commercial harm.
When users are directed to out-of-market destinations, conversion probability collapses. These dead-end outcomes are invisible to the system’s evaluation loop and therefore incur no corrective penalty.
2. Geographic Signal Invalidation
Signals that once governed regional relevance – IP location, language, currency, and hreflang – were designed for ranking and serving. In generative synthesis, they function as weak hints that are frequently overridden by higher-confidence semantic matches selected upstream.
3. Zero-Click Amplification
AI Overviews occupy the most prominent position on the SERP. As organic real estate shrinks and zero-click behavior increases, the few cited sources receive disproportionate attention. When those citations are geographically misaligned, opportunity loss is amplified.
The Generative Search Technical Audit Process
To adapt, organizations must move beyond traditional visibility optimization towards what we would now call Generative Engine Optimization (GEO).
Semantic Parity: Ensure absolute parity at the fact-chunk level across markets. Minor asymmetries can create unintended retrieval advantages.
Retrieval-Aware Structuring: Structure content into atomic, extractable blocks aligned to likely fan-out branches.
Utility Signal Reinforcement: Provide explicit machine-readable indicators of market validity and availability to reinforce constraints the AI does not infer reliably on its own (see the sketch below).
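As one hedged example of a utility signal, explicit structured data on the regional page can state where an offer is actually valid. The schema.org properties used below (Offer, eligibleRegion, availability, priceCurrency) are real, but there is no guarantee that any given AI system consumes them today; treat this as defensive reinforcement rather than a switch you can flip.

import json

# Illustrative JSON-LD for a regional product page, declaring where the offer applies.
offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "itemOffered": {"@type": "Product", "name": "Example Product"},
    "price": "49.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock",
    "eligibleRegion": {"@type": "Country", "name": "GB"},
}

print(f'<script type="application/ld+json">{json.dumps(offer, indent=2)}</script>')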
Conclusion: Where The Feature Becomes The Bug
Geographic leakage is not a regression in search quality. It is the natural outcome of search transitioning from transactional routing to informational synthesis.
From an engineering perspective, AI Overviews are functioning exactly as designed. Ambiguity triggers expansion. Completeness is prioritized. Semantic confidence wins.
From a business and user perspective, the same behavior exposes a structural blind spot. The system cannot distinguish between information that is factually correct and information a consumer can actually act on.
This is the defining tension of generative search: A feature designed to ensure completeness becomes a bug when completeness overrides utility.
Until generative systems incorporate stronger notions of market validity and actionability, organizations must adapt defensively. In the AI era, visibility is no longer won by ranking alone. It is earned by ensuring that the most complete version of the truth is also the most usable one.