Ask An SEO: Can AI Systems & LLMs Render JavaScript To Read ‘Hidden’ Content? via @sejournal, @HelenPollitt1

For this week’s Ask An SEO, a reader asked:

“Is there any difference between how AI systems handle JavaScript-rendered or interactively hidden content compared to traditional Google indexing? What technical checks can SEOs do to confirm that all page critical information is available to machines?”

This is a great question because beyond the hype of LLM-optimization sits a very real technical challenge: ensuring your content can actually be found and read by the LLMs.

For several years now, SEOs have been encouraged by Googlebot’s improvements in crawling and rendering JavaScript-heavy pages. With the newer AI crawlers, however, those capabilities may not carry over.

In this article, we’ll look at the differences between the two crawler types, and how to ensure your critical webpage content is accessible to both.

How Does Googlebot Render JavaScript Content?

Googlebot processes JavaScript in three main stages: crawling, rendering, and indexing. In simple terms, this is how each stage works:

Crawling

Googlebot will queue pages to be crawled when it discovers them on the web. Not every page that gets queued will be crawled, however, as Googlebot will check to see if crawling is allowed. For example, it will see if the page is blocked from crawling via a disallow command in the robots.txt.

If the page is not eligible to be crawled, then Googlebot will skip it, forgoing an HTTP request. If a page is eligible to be crawled, it will move to render the content.

Rendering

Googlebot will check if the page is eligible to be indexed by ensuring there are no requests to keep it from the index, for example, via a noindex meta tag. Googlebot will queue the page to be rendered. The rendering may happen within seconds, or it may remain in the queue for a longer period of time. Rendering is a resource-intensive process, and as such, it may not be instantaneous.

In the meantime, the bot will receive the initial HTML response, which is the content as it exists before JavaScript is executed. This is typically the page HTML, and it is available as soon as the page is crawled.

Once the JavaScript is executed, Googlebot will receive the fully constructed page, the “browser render.”

Indexing

Eligible pages and information will be stored in the Google index and made available to serve as search results at the point of user query.

How Does Googlebot Handle Interactively Hidden Content?

Not all content is available to users when they first land on a page. For example, you may need to click through tabs to find supplementary content, or expand an accordion to see all of the information.

Googlebot doesn’t have the ability to switch between tabs, or to click open an accordion. So, making sure it can parse all the page’s information is important.

The way to do this is to make sure that the information is contained within the DOM on the first load of the page. This means content may be “hidden from view” on the front end until a button is clicked, but it is not hidden in the code.

Think of it like this: The HTML content is “hidden in a box”; the JavaScript is the key to open the box. If Googlebot has to open the box, it may not see that content straightaway. However, if the server has opened the box before Googlebot requests it, then it should be able to get to that content via the DOM.

How To Improve The Likelihood That Googlebot Will Be Able To Read Your Content

The key to ensuring that content can be parsed by Googlebot is making it accessible without the need for the bot to render the JavaScript. One way of doing this is by forcing the rendering to happen on the server itself.

Server-side rendering is the process by which a webpage is rendered on the server rather than by the browser. This means an HTML file is prepared and sent to the user’s browser (or the search engine bot), and the content of the page is accessible to them without waiting for the JavaScript to load. This is because the server has essentially created a file that has rendered content in it already; the HTML and CSS are accessible immediately. Meanwhile, JavaScript files that are stored on the server can be downloaded by the browser.

This is opposed to client-side rendering, which requires the browser to fetch and compile the JavaScript before content is accessible on the webpage. This is a much lower lift for the server, which is why it is often favored by website developers, but it does mean that bots struggle to see the content on the page without rendering the JavaScript first.
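To make the distinction concrete, here is a minimal server-side rendering sketch in TypeScript using Express (the framework, route, and accordion content are illustrative assumptions, not something from the original article). The key point is that the “hidden” accordion text is already present in the HTML the server returns, so a bot that never executes JavaScript still receives it:

```typescript
// Minimal server-side rendering sketch using Express (illustrative only).
// The "shipping and returns" text is baked into the HTML the server sends,
// so crawlers that never execute JavaScript still receive it. A native
// <details> element lets human visitors expand and collapse it.
import express from "express";

const app = express();

app.get("/product", (_req, res) => {
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Example Product</title></head>
  <body>
    <h1>Example Product</h1>
    <!-- Content is "hidden from view" until clicked, but present in the HTML. -->
    <details>
      <summary>Shipping and returns</summary>
      <p>Free returns within 30 days. Ships worldwide.</p>
    </details>
  </body>
</html>`);
});

app.listen(3000, () => console.log("SSR demo listening on http://localhost:3000"));
```

Frameworks such as Next.js or Nuxt achieve the same outcome with less hand-rolling; the principle is simply that the server, not the browser, assembles the content-bearing HTML.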

How Do LLM Bots Render JavaScript?

Given what we now know about how Googlebot renders JavaScript, how does that differ from AI bots?

The most important thing to understand here is that, unlike Googlebot, there is no single governing body behind all of the crawlers that might be grouped under “LLM bots.” What one bot is capable of doing won’t necessarily be the standard for all.

The bots that scrape the web to power the knowledge bases of the LLMs are not the same as the bots that visit a page to bring back timely information to a user via a search engine.

And Claude’s bots do not have the same capability as OpenAI’s.

When we are considering how to ensure that AI bots can access our content, we have to cater to the lowest-capability bots.

Less is known about how LLM bots render JavaScript, mainly because, unlike Google, the AI bots are not sharing that information. However, some very smart people have been running tests to identify how each of the main LLM bots handles it.

Back in 2024, Vercel published an investigation into the JavaScript rendering capabilities of the main LLM bots, including OpenAI’s, Anthropic’s, Meta’s, ByteDance’s, and Perplexity’s. According to their study, none of those bots were able to render JavaScript. The only crawlers that could were Gemini (leveraging Googlebot’s infrastructure), Applebot, and Common Crawl’s CCBot.

More recently, Glenn Gabe reconfirmed Vercel’s findings through his own in-depth analysis of how ChatGPT, Perplexity, and Claude handle JavaScript. He also runs through how to test your own website in the LLMs to see how they handle your content.

These are the most well-known bots, from some of the most heavily funded AI companies in the space. It stands to reason that if they struggle with JavaScript, lesser-funded or more niche crawlers will, too.

How Do AI Bots Handle Interactively Hidden Content?

Not well. That is, if the interactive content requires some execution of JavaScript, they may struggle to parse it.

To ensure the bots are able to see content hidden behind tabs, or in accordions, it is prudent to ensure the content loads fully in the DOM without the need to execute JavaScript. Human visitors can still interact with the content to reveal it, but the bots won’t need to.

How To Check For JavaScript Rendering Issues

There are two very easy ways to check if Googlebot is able to render all the content on your page:

Check The DOM Through Developer Tools

The DOM (Document Object Model) is an interface for a webpage that represents the HTML page as a series of “nodes” and “objects.” It essentially links a webpage’s HTML source code to JavaScript, which enables the functionality of the webpage to work. In simple terms, think of a webpage as a family tree. Each element on a webpage is a “node” on the tree. So, a header tag, a paragraph, and the body of the page itself are all nodes on the family tree.

When a browser loads a webpage, it reads the HTML and turns it into the family tree (the DOM).

How To Check It

I’ll take you through this using Chrome’s Developer Tools as an example.

You can check the DOM of a page by going to your browser. Using Chrome, right-click and select “Inspect.” From there, make sure you’re in the “Elements” tab.

To see if content is visible on your webpage without having to execute JavaScript, you can search for it here. If you find the content fully within the DOM when you first load the page (and don’t interact with it further), then it should be visible to Googlebot and LLM bots.
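If you prefer a quick programmatic check over scrolling the Elements panel, a short snippet in the Console can do the same job (the phrase below is a hypothetical placeholder; swap in text from your own tabs or accordions):

```typescript
// Run in the browser Console on first load, before clicking anything.
// outerHTML reflects the current DOM, including content that is hidden
// behind tabs or collapsed accordions but was loaded with the page.
const phrase = "free returns within 30 days"; // placeholder: use your own text
console.log(document.documentElement.outerHTML.includes(phrase));
```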

Use Google Search Console

To check if the content is visible specifically to Googlebot, you can use Google Search Console.

Choose the page you want to test and paste its URL into the “Inspect any URL” field. Search Console will then take you to another page where you can “Test live URL.” When you test a live page, you will be presented with another screen where you can opt to “View tested page.” There, you can review the rendered HTML and a screenshot to confirm that your critical content appears.

How To Check If An LLM Bot Can See Your Content

As per Glenn Gabe’s experiments, you can ask the LLMs themselves what they can read from a specific webpage. For example, you can prompt them to read the text of an article. If they cannot do so because of JavaScript, they will usually say so in their response.

Viewing The Source HTML

If we are working to the lowest common denominator, it is prudent to assume, at this point, that LLM bots cannot read content that depends on JavaScript. To be confident your content is accessible to these bots, make sure it is present in the source HTML of the page. To check this, open the page in Chrome, right-click, and select “View page source.” If you can find the text in this code, you know it is in the source HTML of the page.
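You can also automate this check. The sketch below (a rough example assuming Node 18+ with its built-in fetch; the URL and phrase are placeholders) downloads the raw HTML exactly as a non-rendering bot would receive it and looks for a key phrase before any JavaScript runs:

```typescript
// Rough check: fetch the raw server response, as a non-rendering bot would,
// and confirm a key phrase is present before any JavaScript executes.
async function checkSourceHtml(url: string, phrase: string): Promise<void> {
  const res = await fetch(url, { headers: { "User-Agent": "content-check/1.0" } });
  const rawHtml = await res.text();
  console.log(
    rawHtml.includes(phrase)
      ? "Found in the source HTML: readable without JavaScript."
      : "Not in the source HTML: likely depends on JavaScript rendering."
  );
}

// Placeholder URL and phrase; replace with your own page and content.
checkSourceHtml("https://www.example.com/article", "free returns within 30 days");
```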

What Does This Mean For Your Website?

Essentially, Googlebot has been developed over the years to be much better at handling JavaScript than the newer LLM bots. However, it’s really important to understand that the LLM bots are not trying to crawl and render the web in the same way as Googlebot. Don’t assume that they will ever try to mimic Googlebot’s behavior. Don’t consider them “behind” Googlebot. They are a different beast altogether.

For your website, this means you need to check if your page loads all the pertinent information in the DOM on the first load of the page to satisfy Googlebot’s needs. For the LLM bots, to be very sure the content is available to them, check your static HTML.



Featured Image: Paulo Bobita/Search Engine Journal

Why Your Small Business’s Google Visibility in 2026 Depends on AEO [Webinar] via @sejournal, @hethr_campbell

AI Assistants Decide Which Local Businesses Get Recommended

In 2026, local visibility on SERPs is no longer controlled by traditional search rankings alone. 

AI assistants are increasingly deciding which businesses get recommended when customers ask who to call, book, or trust nearby. 

Tools like Google Gemini, ChatGPT, and Siri are shaping these decisions in ways that leave many small businesses unseen.

AI-powered search is already influencing your shoppers’ choices without a website click ever happening. 

Your future customers are relying on answer engines to surface a single recommendation, not a list of options. 

Yet most small businesses remain invisible to AI because their Google Business Profile information is incomplete, inconsistent, or structured in ways these AI chat systems cannot confidently interpret. The result is fewer calls, missed bookings, and lost revenue.

In this upcoming webinar session, Raj Madhavni, Co-Founder, Alpha SEO Pros at Thryv, will explain how AI assistants evaluate local businesses today and which signals most influence recommendations. He will also identify the common gaps that prevent businesses from being selected and outline how to address them before 2026.

What You’ll Learn

  • How to implement AEO to improve local business visibility
  • The ranking signals AI assistants use to select local businesses
  • A practical roadmap to increase AI-driven visibility, trust, and conversions in 2026

Why Attend?

This webinar gives small business owners and marketers a clear framework for competing in an AI-driven local search environment. You will leave with actionable guidance to close visibility gaps, strengthen trust signals, and position your business as the one AI assistants recommend when customers ask.

Register now to prepare your business for local AI search in 2026.

🛑 Can’t attend live? Register anyway, and we’ll send you the on-demand recording after the session.

Why Global Search Misalignment Is An Engineering Feature And A Business Bug via @sejournal, @billhunt

Google’s AI Overviews (AIO) represent a fundamental architectural shift in search. Retrieval has moved from a localized ranking-and-serving model, designed to return the most appropriate regional URL, to a semantic synthesis model, designed to assemble the most complete and defensible explanation of a topic.

This shift has introduced a new and increasingly visible failure mode: geographic leakage, where AI Overviews cite international or out-of-market sources for queries with clear local or commercial relevance.

This behavior is not the result of broken geo-targeting, misconfigured hreflang, or poor international SEO hygiene. It is the predictable outcome of systems designed to resolve ambiguity through semantic expansion, not contextual narrowing. When a query is ambiguous, AI Overviews prioritize explanatory completeness across all plausible interpretations. Sources that resolve any sub-facet with greater clarity, specificity, or freshness gain disproportionate influence – regardless of whether they are commercially usable or geographically appropriate for the user.

From an engineering perspective, this is a technical success. The system reduces hallucination risk, maximizes factual coverage, and surfaces diverse perspectives. From a business and user perspective, however, it exposes a structural gap: AI Overviews have no native concept of commercial harm. The system does not evaluate whether a cited source can be acted upon, purchased from, or legally used in the user’s market.

This article reframes geographic leakage as a feature-bug duality inherent to generative search. It explains why established mechanisms such as hreflang struggle in AI-driven experiences, identifies ambiguity and semantic normalization as force multipliers in misalignment, and outlines a Generative Engine Optimization (GEO) framework to help organizations adapt in the generative era.

The Engineering Perspective: A Feature Of Robust Retrieval

From an AI engineering standpoint, selecting an international source for an AI Overview is not an error. It is the intended outcome of a system optimized for factual grounding, semantic recall, and hallucination prevention.

1. Query Fan-Out And Technical Precision

AI Overviews employ a query fan-out mechanism that decomposes a single user prompt into multiple parallel sub-queries. Each sub-query explores a different facet of the topic – definitions, mechanics, constraints, legality, role-specific usage, or comparative attributes.

The unit of competition in this system is no longer the page or the domain. It is the fact-chunk. If a particular source contains a paragraph or explanation that is more explicit, more extractable, or more clearly structured for a specific sub-query, it may be selected as a high-confidence informational anchor – even if it is not the best overall page for the user.

2. Cross-Language Information Retrieval (CLIR)

The appearance of English summaries sourced from foreign-language pages is a direct result of Cross-Language Information Retrieval.

Modern LLMs are natively multilingual. They do not “translate” pages as a discrete step. Instead, they normalize content from different languages into a shared semantic space and synthesize responses based on learned facts rather than visible snippets. As a result, language differences no longer serve as a natural boundary in retrieval decisions.

Semantic Retrieval Vs. Ranking Logic: A Structural Disconnect

The technical disconnect observed in AI Overviews, where an out-of-market page is cited despite the presence of a fully localized equivalent, stems from a fundamental conflict between search ranking logic and LLM retrieval logic.

Traditional Google Search is designed around serving. Signals such as IP location, language, and hreflang act as strong directives once relevance has been established, determining which regional URL should be shown to the user.

Generative systems are designed around retrieval and grounding. In Retrieval-Augmented Generation pipelines, these same signals are frequently treated as secondary hints, or ignored entirely, when they conflict with higher-confidence semantic matches discovered during fan-out retrieval.

Once a specific URL has been selected as the source of truth for a given fact, downstream geographic logic has limited ability to override that choice.

The Vector Identity Problem: When Markets Collapse Into Meaning

At the core of this behavior is a vector identity problem.

In modern LLM architectures, content is represented as numerical vectors encoding semantic meaning. When two pages contain substantively identical content, even if they serve different markets, they are often normalized into the same or near-identical semantic vector.

From the model’s perspective, these pages are interchangeable expressions of the same underlying entity or concept. Market-specific constraints such as shipping eligibility, currency, or checkout availability are not semantic properties of the text itself; they are metadata properties of the URL.

During the grounding phase, the AI selects sources from a pool of high-confidence semantic matches. If one regional version was crawled more recently, rendered more cleanly, or expressed the concept more explicitly, it can be selected without evaluating whether it is commercially usable for the searcher.
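A toy illustration may help (this is not Google’s system; the four-dimensional vectors below are invented purely to show the effect). Two regional pages with near-identical copy produce near-identical embeddings, so a retrieval layer comparing meaning alone has no basis to prefer one market’s URL over the other:

```typescript
// Toy illustration of the "vector identity problem" (not Google's system).
// Two regional pages with near-identical copy get near-identical embeddings,
// so a retrieval layer that compares meaning alone cannot tell them apart.
// The vectors below are invented for demonstration.

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const ukProductPage = [0.81, 0.12, 0.44, 0.3]; // embedding of example.com/uk/widget
const deProductPage = [0.8, 0.13, 0.45, 0.29]; // embedding of example.com/de/widget

console.log(cosineSimilarity(ukProductPage, deProductPage).toFixed(4)); // ≈ 0.999+

// Market constraints (currency, shipping, checkout) live in URL metadata,
// not in these vectors, so retrieval can pick either page as "the" source.
```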

Freshness As A Semantic Multiplier

Freshness amplifies this effect. Retrieval-Augmented Generation systems often treat recency as a proxy for accuracy. When semantic representations are already normalized across languages and markets, even a minor update to one regional page can unintentionally elevate it above otherwise equivalent localized versions.

Importantly, this does not require a substantive difference in content. A change in phrasing, the addition of a clarifying sentence, or a more explicit explanation can tip the balance. Freshness, therefore, acts as a multiplier on semantic dominance, not as a neutral ranking signal.

Ambiguity As A Force Multiplier In Generative Retrieval

One of the most significant, and least understood, drivers of geographic leakage is query ambiguity.

In traditional search, ambiguity was often resolved late in the process, at the ranking or serving layer, using contextual signals such as user location, language, device, and historical behavior. Users were trained to trust that Google would infer intent and localize results accordingly.

Generative retrieval systems respond to ambiguity very differently. Rather than forcing early intent resolution, ambiguity triggers semantic expansion. The system explores all plausible interpretations in parallel, with the explicit goal of maximizing explanatory completeness.

This is an intentional design choice. It reduces the risk of omission and improves answer defensibility. However, it introduces a new failure mode: as the system optimizes for completeness, it becomes increasingly willing to violate commercial and geographic constraints that were previously enforced downstream.

In ambiguous queries, the system is no longer asking, “Which result is most appropriate for this user?”

It is asking, “Which sources most completely resolve the space of possible meanings?”

Why Correct Hreflang Is Overridden

The presence of a correctly implemented hreflang cluster does not guarantee regional preference in AI Overviews because hreflang operates at a different layer of the system.

Hreflang was designed for a post-retrieval substitution model. Once a relevant page is identified, the appropriate regional variant is served. In AI Overviews, relevance is resolved upstream during fan-out and semantic retrieval.

When fan-out sub-queries focus on definitions, mechanics, legality, or role-specific usage, the system prioritizes informational density over transactional alignment. If an international or home-market page provides the “first best answer” for a specific sub-query, that page is retrieved immediately as a grounding source.

Unless a localized version provides a technically superior answer for the same semantic branch, it is simply not considered.

In short, hreflang can influence which URL is served. It cannot influence which URL is retrieved, and in AI Overviews, retrieval is where the decision is effectively made.

The Diversity Mandate: The Programmatic Driver Of Leakage

AI Overviews are explicitly designed to surface a broader and more diverse set of sources than traditional top 10 search results.

To satisfy this requirement, the system evaluates URLs, not business entities, as distinct sources. International subfolders or country-specific paths are therefore treated as independent candidates, even when they represent the same brand and product.

Once a primary brand URL has been selected, the diversity filter may actively seek an alternative URL to populate additional source cards. This creates a form of ghost diversity, where the system appears to surface multiple perspectives while effectively referencing the same entity through different market endpoints.

The Business Perspective: A Commercial Bug

The failures described below are not due to misconfigured geo-targeting or incomplete localization. They are the predictable downstream consequence of a system optimized to resolve ambiguity through semantic completeness rather than commercial utility.

1. The Commercial Blind Spot

From a business standpoint, the goal of search is to facilitate action. AI Overviews, however, do not evaluate whether a cited source can be acted upon. They have no native concept of commercial harm.

When users are directed to out-of-market destinations, conversion probability collapses. These dead-end outcomes are invisible to the system’s evaluation loop and therefore incur no corrective penalty.

2. Geographic Signal Invalidation

Signals that once governed regional relevance – IP location, language, currency, and hreflang – were designed for ranking and serving. In generative synthesis, they function as weak hints that are frequently overridden by higher-confidence semantic matches selected upstream.

3. Zero-Click Amplification

AI Overviews occupy the most prominent position on the SERP. As organic real estate shrinks and zero-click behavior increases, the few cited sources receive disproportionate attention. When those citations are geographically misaligned, opportunity loss is amplified.

The Generative Search Technical Audit Process

To adapt, organizations must move beyond traditional visibility optimization towards what we would now call Generative Engine Optimization (GEO).

  1. Semantic Parity: Ensure absolute parity at the fact-chunk level across markets. Minor asymmetries can create unintended retrieval advantages.
  2. Retrieval-Aware Structuring: Structure content into atomic, extractable blocks aligned to likely fan-out branches.
  3. Utility Signal Reinforcement: Provide explicit machine-readable indicators of market validity and availability to reinforce constraints the AI does not infer reliably on its own (see the example markup sketched below).
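As a concrete, hedged example of the third point, market validity can be expressed with schema.org Offer properties such as priceCurrency, availability, and eligibleRegion. There is no guarantee that any given generative system honors these fields today; the sketch below (all names and URLs are placeholders) simply shows one machine-readable way to state the constraint:

```typescript
// Hypothetical JSON-LD sketch for "utility signal reinforcement": making
// market validity explicit with schema.org Offer properties. Whether any
// given AI system honors these signals is not guaranteed; treat this as
// one defensible way to expose the constraints, not a fix.
const ukOfferMarkup = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Widget",
  offers: {
    "@type": "Offer",
    url: "https://www.example.com/uk/widget",
    priceCurrency: "GBP",
    price: "49.00",
    availability: "https://schema.org/InStock",
    eligibleRegion: { "@type": "Country", name: "GB" },
    shippingDetails: {
      "@type": "OfferShippingDetails",
      shippingDestination: { "@type": "DefinedRegion", addressCountry: "GB" },
    },
  },
};

// Embed as <script type="application/ld+json"> on the UK page only.
console.log(JSON.stringify(ukOfferMarkup, null, 2));
```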

Conclusion: Where The Feature Becomes The Bug

Geographic leakage is not a regression in search quality. It is the natural outcome of search transitioning from transactional routing to informational synthesis.

From an engineering perspective, AI Overviews are functioning exactly as designed. Ambiguity triggers expansion. Completeness is prioritized. Semantic confidence wins.

From a business and user perspective, the same behavior exposes a structural blind spot. The system cannot distinguish between information that is factually correct and information that a consumer can actually act on.

This is the defining tension of generative search: A feature designed to ensure completeness becomes a bug when completeness overrides utility.

Until generative systems incorporate stronger notions of market validity and actionability, organizations must adapt defensively. In the AI era, visibility is no longer won by ranking alone. It is earned by ensuring that the most complete version of the truth is also the most usable one.



Featured Image: Roman Samborskyi/Shutterstock

How Search Engines Tailor Results To Individual Users & How Brands Should Manage It

How many times have you seen different SERP layouts and results across markets?

No two people see the same search results, as per Google’s own documentation. No two users receive identical outputs from AI platforms either, even when using the same prompt. In a time of information overload, this raises an important question for global marketers: How do we manage and leverage personalized search experiences across multiple markets?

Today, clarity and transparency matter more than ever. Users have countless choices and distractions, so they expect experiences that feel relevant, trustworthy, and aligned with their needs in the moment. Personalization is now central to how potential customers discover, evaluate, and engage with brands.

Search engines have been personalizing results for years based on language, search behavior, device type, and technical elements such as hreflang. With the quick evolution of generative artificial intelligence (AI), personalization has expanded into summarized answers on AI platforms and hyper-personalized experiences that depend on internal data flows and processes.

This shift forces marketers to rethink how they measure visibility and business impact. According to McKinsey, 76% of users feel frustrated when experiences are not personalized, which shows how closely relevance and user satisfaction are linked.

At the same time, long-tail discovery increasingly happens outside of search engines, particularly on platforms like TikTok. Statista reports that 78% of global internet users now research brands and products on social media.

All of this is happening while most users know little about how search engines or AI systems operate.

Regardless of where people search, the implications extend far beyond algorithms. Personalization affects how teams collaborate, how data moves across departments, and how global organizations define success.

This article explores what personalization means today and how global brands can turn it into a competitive advantage.

From SERPs To AI Summaries

Search engines no longer return only lists of blue links and People Also Ask (PAA) boxes. They now provide summarized information in AI Overviews and AI Mode, currently for informational queries.

Google often surfaces AI summaries first and URLs second, while continuously testing different layouts for mobile and desktop, as shown below.

Screenshot from search for [what is a nepo baby], Google, December 2025

Google’s Search Labs experiments, including features such as Preferred Sources, show how layouts and summaries change based on context, trust signals, and behavioral patterns.

Large language models (LLMs) add another layer. They adjust responses based on user context, intent, and sometimes whether the user has a free or paid account. Because users rarely get exactly what they need on the first attempt, they re-prompt the AI, creating iterative conversations where each instruction or prompt influences the next.

What prompts users to click through to a source or research it on search engines, whether it is curiosity, uncertainty, boredom, a call-to-action, or the model stating it does not know, is still unclear. Understanding this behavior will soon be as important as traditional click-through rate (CTR) analysis.

For global brands, the challenge is not simply keeping up with technology. It’s maintaining a consistent brand voice and value exchange across channels and markets when every user sees a different interpretation of the brand. Trust is now as important as visibility.

This landscape increases the importance of market research, segmentation, cultural insights, and competitive analysis. It also raises concerns about echo chambers, search inequality, and the barriers brands face when entering new markets or reaching new audiences.

Meanwhile, the long tail continues to shift to platforms like TikTok, where discovery works very differently from traditional search. And as enthusiasm for AI cools, many professionals believe we have entered the “trough of disillusionment” stage of the Gartner Hype Cycle, described by Jackie Fenn.

What Personalization Means Today

In marketing, personalization refers to tailoring content, offers, and experiences based on available data.

In search, it describes how search engines customize results and SERP features for individual users using signals such as:

  • Data patterns.
  • Inferred interests.
  • Location.
  • Search behavior.
  • Device type.
  • Language.
  • AI-driven memory (which is discussed below).

The goal of search engines is to provide relevant results and keep users engaged, especially as people now search across multiple channels and AI platforms. As a result of this, two people searching the same query rarely see identical results. For example:

  • A cuisine enthusiast searching for [apples] may see food-related content.
  • A tech-oriented user may see Apple product news.

SERP features can also vary across markets and profiles. People Also Ask (PAA) questions and filters may differ by region, language, or click behavior, and may not appear at all. For example, the query “vote of no confidence” displays different filters and different top results in Spain and the UK, and PAA does not appear in the UK version.

AI platforms push this further with session-based memory. Platforms like AI Mode, Gemini, ChatGPT, and Copilot handle context in a way that makes interactions feel like real conversations, with each prompt influencing the next. In some cases, results from earlier responses may also resurface.

A human-in-the-loop (HITL) approach is essential to evaluate, monitor, and correct outputs before using them.

How Personalization Technically Works

Personalization operates across several layers. Understanding these helps marketers see where influence is possible.

1. SERP Features And Layout

Google and Bing adapt their layouts based on history, device type, user engagement, and market signals. Featured Snippets, PAA modules, videos, forums, or Top Stories may appear or disappear depending on behavior and intent.

2. AI Overviews, AI Mode, And Bing Copilot

AI platforms can:

  • Summarize content from multiple URLs.
  • Adapt tone and depth based on user behavior.
  • Personalize follow-up suggestions.
  • Integrate patterns learnt within the session or even previous sessions.

Visibility now includes being referenced in AI summaries. Current patterns show this depends on:

  • Clear site and URL structure.
  • Factual accuracy.
  • Strong entity signals.
  • Online credibility.
  • Fresh, easily interpreted content.

3. Structured Data And Entity Consistency

When algorithms understand a brand, they can personalize results more accurately. Schema markup helps avoid entity drift, where regional websites are mistaken for separate brands.

Bing uses Microsoft Graph to connect brand data with the Microsoft ecosystem, extending the influence of structured data.
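As a minimal sketch (the brand name and URLs are placeholders), Organization markup with sameAs links is one common way to tie regional sites back to a single entity and reduce the risk of entity drift:

```typescript
// Minimal sketch: Organization markup that ties regional sites back to one
// brand entity via sameAs links, helping avoid entity drift. All names and
// URLs are placeholders.
const organizationMarkup = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Brand",
  url: "https://www.example.com/",
  logo: "https://www.example.com/logo.png",
  sameAs: [
    "https://www.example.co.uk/",
    "https://www.example.de/",
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand",
  ],
};

// Embed on every regional homepage as <script type="application/ld+json">.
console.log(JSON.stringify(organizationMarkup, null, 2));
```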

4. Context Windows And AI Memory

LLMs simulate “memory” using context windows, which is the amount of information they can consider at once. This is measured in tokens, which represent words or parts of words. It is what makes conversations feel continuous.

This has some important implications:

  • Semantic consistency matters.
  • Tone should be unified across markets.
  • Messaging needs to be coherent across content formats.

Once an AI system associates a brand with a specific theme, that context can persist for a while, although it is unclear how long for. This is probably why LLMs favor fresh content as a way to reinforce authority.

5. Recommenders

In ecommerce and content-heavy sites, recommenders show personalized suggestions based on behavior. This reduces friction and increases time on site.

Benefits Of Personalization

When personalization works, users and brands can benefit from:

  • Reduced user friction.
  • Increased user satisfaction.
  • Improved conversion rates.
  • Stronger engagement.
  • Higher CTR.

This can positively influence the customer lifetime value. However, these benefits rely on consistent and trustworthy experiences across channels.

Potential Drawbacks

Alongside the benefits, personalization brings some challenges that marketers need to be aware of. These are not reasons to avoid personalization, but important considerations when planning global strategies. Consider:

  • Filter bubbles reduce exposure to diverse viewpoints and competing brands.
  • Privacy concerns increase as platforms rely on more behavioral and demographic data.
  • Reduced result diversity makes it harder for new or smaller brands to appear.
  • Global templates lose effectiveness when markets expect local nuance.

This means that brands using the same template or unified content across markets lose even more effectiveness, because users expect cultural nuance, local context, and recognition of their different motivations. Purchase journeys also vary across markets, which is why hyper-personalization is so effective.

It is probably more important than ever that brands spend time researching and planning to gain or maintain visibility in global markets, as well as strengthening their brand perception.

Managing Personalization Across Teams And Channels

At the moment, LLMs tend to favor strong, clearly structured brands and websites. If a brand is not well understood online, it is less likely to be referenced in AI summaries.

Successful digital and SEO projects rely on strong internal processes. When teams work in isolation, inconsistencies appear in data, content, and technical implementation, which then surface as inconsistencies in personalized search.

Common issues include:

  • Weak global alignment.
  • Translations that miss local relevance.
  • Conflicting schema markup.
  • Local pages ranking for the wrong intent.
  • Important local keywords being ignored.

Below is a framework to help organizations manage personalization across markets and channels.

1. Shared Objectives And Understanding Across Teams

Many search or marketing challenges can be prevented by building a shared understanding across teams of:

  • Business and project goals.
  • Issues across markets.
  • Search developments across markets.
  • Audience segmentation.
  • Integrated insights across all channels.
  • Data flows that connect global and local teams.
  • AI developments.

2. Strengthen The Technical Elements Of Your Website

Reinforce the technical elements of your website so that search engines and LLMs can easily understand your brand across markets and avoid entity drift:

  • Website structure.
  • Schema markup on the appropriate sections.
  • Strong on-page structure.
  • Strong internal linking.
  • Appropriate hreflang.

3. Optimize For Content Clusters And User Intent, Not Keywords

Structure is everything. Organizing content into clusters helps users and search engines understand the website clearly, which supports personalization.

4. Use First-Party Data To Personalize On-Site Experiences

Internal search and logged-in user experiences are important to understand your users and build user journeys based on behavior. This helps with content relevance and stronger intent signals.

First-party data can support:

  • Personalized product recommendations.
  • Dynamic filters.
  • Auto-suggestions based on browsing behavior.

5. Maintain Cross-Channel Consistency

A coherent experience supports stronger personalization and prevents fragmented journeys, and search is only one personalized environment. Tone, structure, messaging, and data should remain consistent across:

  • Social platforms.
  • Email.
  • Mobile apps.
  • Websites and on-site search.

Clear and consistent USPs should be visible everywhere.

6. Strengthen Your Brand Perception

With so much online competition, brands whose work is referenced positively across the internet have an advantage. It is the old PR playbook: Focus on your strengths and publish well-researched work, with stats that are useful to your target users.

Conclusion: Turning Personalization Into An Advantage

Conway’s Law matters more than ever. The idea that organizations design systems that mirror their own communication structures is highly visible in search today. If teams operate in silos, those silos often show up in fragmented content, inconsistent signals, and mixed user experiences. Personalization then amplifies these gaps even further, whether through missed citations on AI platforms or through the wrong information being spread.

Understanding how personalization works and how it shapes visibility, trust, and user behavior helps brands deliver experiences that feel coherent rather than confusing.

Success is no longer just about optimizing for Google. It is about understanding how people search, how AI interprets and summarizes content, how brands are referenced across the web, and how teams collaborate across channels to present a unified message.

Where every search result is unique, the brands that succeed will be the ones that coordinate, connect, and communicate clearly, both internally and across global markets, to help strengthen the perception of their brand.



Featured Image: Master1305/Shutterstock

The State of AEO & GEO in 2026 [Webinar] via @sejournal, @hethr_campbell

How AI Search Is Reshaping Visibility & Strategy

AI search is rapidly changing how brands are discovered and how visibility is earned. 

As AI Overviews, ChatGPT, Perplexity, and other answer engines take center stage, traditional SERP rankings are no longer the only measure of success. 

For enterprise SEO leaders, the focus has shifted to understanding where to invest, which strategies actually move the needle, and how to prepare for 2026.

Join Pat Reinhart, VP of Services and Thought Leadership at Conductor, and Lindsay Boyajian Hagan, VP of Marketing at Conductor, as they unpack key insights from The State of AEO and GEO in 2026 Report. This session provides a clear look at how enterprise teams are adapting to AI-driven discovery and where AEO and GEO strategies are headed next.

What You’ll Learn

Why Attend?

This webinar offers data-backed clarity on what is working in AI search today and what to prioritize moving forward. You will gain actionable insights to refine your strategy, focus resources effectively, and stay competitive as AI continues to reshape search in 2026.

Register now to access the latest guidance on growing AI visibility in 2026.

🛑 Can’t make it live? Register anyway, and we’ll send you the recording.

State Of AI Search Optimization 2026 via @sejournal, @Kevin_Indig


Every year, after the winter holidays, I spend a few days ramping up by gathering the context from last year and reminding myself of where my clients are at. I want to use the opportunity to share my understanding of where we are with AI Search, so you can quickly get back into the swing of things.

As a reminder, the vibe around ChatGPT turned a bit sour at the end of 2025:

  • Google released the superior Gemini 3, causing Sam Altman to announce a Code Red (ironically, three years after Google did the same at the launch of ChatGPT 3.5).
  • OpenAI made a series of circular investments that raised eyebrows and questions about how to finance them.
  • ChatGPT, which sends the majority of all LLM referral traffic, reaches at most 4% of current organic (mostly Google) referral traffic.

Most of all, we still don’t know the value of a mention in an AI response. However, the topic of AI and LLMs couldn’t be more important because the Google user experience is turning from a list of results to a definitive answer.

A big “thank you” to Dan Petrovic and Andrea Volpini for reviewing my draft and adding meaningful concepts.

AI Search Optimization
Image Credit: Kevin Indig

Retrieved → Cited → Trusted

Optimizing for AI search visibility follows a pipeline similar to the classic “crawl, index, rank” for search engines:

  1. Retrieval systems decide which pages enter the candidate set.
  2. The model selects which sources to cite.
  3. Users decide which citation to trust and act on.

Caveats:

  1. A lot of the recommendations overlap strongly with common SEO best practices. Same tactics, new game.
  2. I don’t pretend to have an exhaustive list of everything that works.
  3. Controversial factors like schema or llms.txt are not included.

Consideration: Getting Into The Candidate Pool

Before any content enters the model’s consideration (grounding) set, it must be crawled, indexed, and fetchable within milliseconds during real-time search.

The factors that drive consideration are:

  • Selection Rate and Primary Bias.
  • Server response time.
  • Metadata relevance.
  • Product feeds (in ecommerce).

1. Selection Rate And Primary Bias

  • Definition: Primary bias measures the brand-attribute associations a model holds before grounding in live search results. Selection Rate measures how frequently the model chooses your content from the retrieval candidate pool.
  • Why it matters: LLMs are biased by training data. Models develop confidence scores for brand-attribute relationships (e.g., “cheap,” “durable,” “fast”) independent of real-time retrieval. These pre-existing associations influence citation likelihood even when your content enters the candidate pool.
  • Goal: Understand which attributes the model associates with your brand and how confident it is in your brand as an entity. Systematically strengthen those associations through targeted on-page and off-page campaigns.

2. Server Response Time

  • Definition: The time between a crawler request and the server’s first byte of response data (TTFB = Time To First Byte).
  • Why it matters: When models need web results to ground answers (RAG), they must retrieve the content much like a search engine crawler. Even though retrieval is mostly index-based, faster servers help with rendering, agentic workflows, freshness, and compound query fan-out. LLM retrieval operates under tight latency budgets during real-time search. Slow responses prevent pages from entering the candidate pool because they miss the retrieval window. Consistently slow response times also trigger crawl rate limiting.
  • Goal: Maintain server response times under 200 ms (see the sketch below for a quick check). Sites with load times under 1 second receive 3x more Googlebot requests than sites slower than 3 seconds. For LLM crawlers (GPTBot, Google-Extended), retrieval windows are even tighter than in traditional search.
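The sketch below is a rough TTFB check, assuming Node 18+ with its built-in fetch; the URLs are placeholders, and real crawler latency budgets may be measured differently. fetch() resolves once response headers arrive, which approximates time to first byte closely enough for day-to-day monitoring:

```typescript
// Rough TTFB check (Node 18+). fetch() resolves when response headers arrive,
// which approximates time-to-first-byte for monitoring purposes.
// URLs are placeholders.
async function approximateTtfb(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url, { headers: { "User-Agent": "ttfb-check/1.0" } });
  const ttfb = performance.now() - start;
  await res.arrayBuffer(); // drain the body so the connection closes cleanly
  return ttfb;
}

async function main(): Promise<void> {
  const pages = ["https://www.example.com/", "https://www.example.com/blog/"];
  for (const url of pages) {
    const ms = await approximateTtfb(url);
    console.log(`${url} -> ${ms.toFixed(0)} ms ${ms < 200 ? "(within budget)" : "(slow)"}`);
  }
}

main();
```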

3. Metadata Relevance

  • Definition: Title tags, meta descriptions, and URL structure that LLMs parse when evaluating page relevance during live retrieval.
  • Why it matters: Before picking content to form AI answers, LLMs parse titles for topical relevance, descriptions as document summaries, and URLs as context clues for page relevance and trustworthiness.
  • Goal: Include target concepts in titles and descriptions (!) to match user prompt language. Create keyword-descriptive URLs, potentially even including the current year to signal freshness.

4. Product Feed Availability (Ecommerce)

  • Definition: Structured product catalogs submitted directly to LLM platforms with real-time inventory, pricing, and attribute data.
  • Why it matters: Direct feeds bypass traditional retrieval constraints and enable LLMs to answer transactional shopping queries (“where can I buy,” “best price for”) with accurate, current information.
  • Goal: Submit merchant-controlled product feeds to ChatGPT’s merchant program (chatgpt.com/merchants) in JSON, CSV, TSV, or XML format with complete attributes (title, price, images, reviews, availability, specs), and implement ACP (Agentic Commerce Protocol) for agentic shopping. A sketch of a single feed entry follows below.
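For illustration only, a single feed entry might look like the sketch below. The field names here are hypothetical; the exact schema and required attributes are defined by each platform’s merchant program, so always follow the official specification when submitting:

```typescript
// Hypothetical product feed entry illustrating the attribute types listed
// above (title, price, images, reviews, availability, specs). Field names
// are illustrative; each platform's merchant program defines the real schema.
const feedEntry = {
  id: "SKU-12345",
  title: "Example Widget, Stainless Steel, 500 ml",
  description: "Leak-proof stainless steel widget with a 500 ml capacity.",
  link: "https://www.example.com/widget",
  image_link: "https://www.example.com/images/widget.jpg",
  price: { amount: "49.00", currency: "USD" },
  availability: "in_stock",
  review_rating: { average: 4.6, count: 318 },
  specs: { material: "stainless steel", capacity_ml: 500, weight_g: 280 },
};

// Serialize the catalog as JSON (one of the accepted formats) for submission.
console.log(JSON.stringify([feedEntry], null, 2));
```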

Relevance: Being Selected For Citation

“The Attribution Crisis in LLM Search Results” (Strauss et al., 2025) reports low citation rates even when models access relevant sources.

  • 24% of ChatGPT (4o) responses are generated without explicitly fetching any online content.
  • Gemini provides no clickable citation in 92% of answers.
  • Perplexity visits about 10 relevant pages per query but cites only three to four.

Models can only cite sources that enter the context window. Pre-training mentions often go unattributed. Live retrieval adds a URL, which enables attribution.

5. Content Structure

  • Definition: The semantic HTML hierarchy, formatting elements (tables, lists, FAQs), and fact density that make pages machine-readable.
  • Why it matters: LLMs extract and cite specific passages. Clear structure makes pages easier to parse and excerpt. Since prompts average 5x the length of keywords, structured content answering multi-part questions outperforms single-keyword pages.
  • Goal: Use semantic HTML with clear H-tag hierarchies, tables for comparisons, and lists for enumeration. Increase fact and concept density to maximize snippet contribution probability.

6. FAQ Coverage

  • Definition: Question-and-answer sections that mirror the conversational phrasing users employ in LLM prompts.
  • Why it matters: FAQ formats align with how users query LLMs (“How do I…,” “What’s the difference between…”). This structural and linguistic match increases citation and mention likelihood compared to keyword-optimized content.
  • Goal: Build FAQ libraries from real customer questions (support tickets, sales calls, community forums) that capture emerging prompt patterns. Monitor FAQ freshness through lastReviewed or dateModified schema properties (see the markup sketch below).
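A minimal FAQPage markup sketch with a dateModified freshness signal might look like this (questions, answers, and dates are placeholders; treat it as one way to make Q&A content and its freshness machine-readable, not a guarantee of citation):

```typescript
// Minimal FAQPage markup sketch with a dateModified freshness signal.
// Questions and dates are placeholders drawn from hypothetical support tickets.
const faqMarkup = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  dateModified: "2026-01-15",
  mainEntity: [
    {
      "@type": "Question",
      name: "How do I connect the widget to my account?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Open Settings, choose Connections, and follow the pairing prompts.",
      },
    },
    {
      "@type": "Question",
      name: "What is the difference between the Basic and Pro plans?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Pro adds multi-user access and priority support; Basic covers a single user.",
      },
    },
  ],
};

// Embed as <script type="application/ld+json"> alongside the on-page FAQ.
console.log(JSON.stringify(faqMarkup, null, 2));
```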

7. Content Freshness

  • Definition: Recency of content updates as measured by “last updated” timestamps and actual content changes.
  • Why it matters: LLMs parse last-updated metadata to assess source recency and prioritize recent information as more accurate and relevant.
  • Goal: Update content within the past three months for maximum performance. Over 70% of pages cited by ChatGPT were updated within 12 months, but content updated in the last three months performs best across all intents.

8. Third-Party Mentions (”Webutation”)

  • Definition: Brand mentions, reviews, and citations on external domains (publishers, review sites, news outlets) rather than owned properties.
  • Why it matters: LLMs weigh external validation more heavily than self-promotion the closer user intent comes to a purchase decision. Third-party content provides independent verification of claims and establishes category relevance through co-mentions with recognized authorities. These mentions also increase a brand’s entityhood inside large context graphs.
  • Goal: Earn contextual backlinks from authoritative domains and maintain complete profiles on category review platforms; 85% of brand mentions in AI search for high-purchase-intent prompts come from third-party sources.

9. Organic Search Position

  • Definition: Page ranking in traditional search engine results pages (SERPs) for relevant queries.
  • Why it matters: Many LLMs use search engines as retrieval sources. Higher organic rankings increase the probability of entering the LLM’s candidate pool and receiving citations.
  • Goal: Rank in Google’s top 10 for fan-out query variations around your core topics, not just head terms. Since LLM prompts are conversational and varied, pages ranking for many long-tail and question-based variations have higher citation probability. Pages in the top 10 show a strong correlation (~0.65) with LLM mentions, and 76% of AI Overview citations pull from these positions. Caveat: Correlation varies by LLM. For example, overlap is high for AI Overviews but low for ChatGPT.

User Selection: Earning Trust And Action

Trust is critical because we’re dealing with a single answer in AI search, not a list of search results. Optimizing for trust is similar to optimizing for click-through rates in classic search, just that it takes longer and is harder to measure.

10. Demonstrated Expertise

  • Definition: Visible credentials, certifications, bylines, and verifiable proof points that establish author and brand authority.
  • Why it matters: AI search delivers single answers rather than ranked lists. Users who click through require stronger trust signals before taking action because they’re validating a definitive claim.
  • Goal: Display author credentials, industry certifications, and verifiable proof (customer logos, case study metrics, third-party test results, awards) prominently. Support marketing claims with evidence.

11. User-Generated Content Presence

  • Definition: Brand representation in community-driven platforms (Reddit, YouTube, forums) where users share experiences and opinions.
  • Why it matters: Users validate synthetic AI answers against human experience. When AI Overviews appear, clicks on Reddit and YouTube grow from 18% to 30% because users seek social proof.
  • Goal: Build positive presence in category-relevant subreddits, YouTube, and forums. YouTube and Reddit are consistently in the top 3 most cited domains across LLMs.

From Choice To Conviction

Search is moving from abundance to synthesis. For two decades, Google’s ranked list gave users a choice. AI search delivers a single answer that compresses multiple sources into one definitive response.

The mechanics differ from early 2000s SEO:

  • Retrieval windows replace crawl budgets.
  • Selection rate replaces PageRank.
  • Third-party validation replaces anchor text.

The strategic imperative is identical: earn visibility in the interface where users search. Traditional SEO remains foundational, but AI visibility demands different content strategies:

  • Conversational query coverage matters more than head-term rankings.
  • External validation matters more than owned content.
  • Structure matters more than keyword density.

Brands that build systematic optimization programs now will compound advantages as LLM traffic scales. The shift from ranked lists to definitive answers is irreversible.


Featured Image: Paulo Bobita/Search Engine Journal

Google’s Recommender System Breakthrough Detects Semantic Intent via @sejournal, @martinibuster

Google published a research paper about helping recommender systems understand what users mean when they interact with them. The goal of this new approach is to overcome the limitations of current state-of-the-art recommender systems in order to gain a finer, more detailed understanding of what users want to read, listen to, or watch at the level of the individual.

Personalized Semantics

Recommender systems predict what a user would like to read or watch next. YouTube, Google Discover, and Google News are examples of recommender systems that suggest content to users. Shopping recommendations are another kind of recommender system.

Recommender systems generally work by collecting data about the kinds of things a user clicks on, rates, buys, and watches and then using that data to suggest more content that aligns with a user’s preferences.

The researchers referred to those kinds of signals as primitive user feedback because they are poorly suited to capturing an individual’s subjective judgments about what’s funny, cute, or boring.

The intuition behind the research is that the rise of LLMs presents an opportunity to leverage natural language interactions to better understand what a user wants through identifying semantic intent.

The researchers explain:

“Interactive recommender systems have emerged as a promising paradigm to overcome the limitations of the primitive user feedback used by traditional recommender systems (e.g., clicks, item consumption, ratings). They allow users to express intent, preferences, constraints, and contexts in a richer fashion, often using natural language (including faceted search and dialogue).

Yet more research is needed to find the most effective ways to use this feedback. One challenge is inferring a user’s semantic intent from the open-ended terms or attributes often used to describe a desired item. This is critical for recommender systems that wish to support users in their everyday, intuitive use of natural language to refine recommendation results.”

The Soft Attributes Challenge

The researchers explained that hard attributes are something that recommender systems can understand because they are objective ground truths like “genre, artist, director.” What they had problems with were other kinds of attributes, called “soft attributes,” which are subjective and cannot be reliably matched to movies, content, or product items.

The research paper states the following characteristics of soft attributes:

  • “There is no definitive “ground truth” source associating such soft attributes with items
  • The attributes themselves may have imprecise interpretations
  • And they may be subjective in nature (i.e., different users may interpret them differently)”

The problem of soft attributes is the problem that the researchers set out to solve and why the research paper is called Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors.

Novel Use Of Concept Activation Vectors (CAVs)

Concept Activation Vectors (CAVs) are a way to probe AI models to understand the mathematical representations (vectors) the models use internally. They provide a way for humans to connect those internal vectors to concepts.

So the standard direction of the CAV is interpreting the model. What the researchers did was to change that direction so that the goal is now to interpret the users, translating subjective soft attributes into mathematical representations for recommender systems. The researchers discovered that adapting CAVs to interpret users enabled vector representations that helped AI models detect subtle intent and subjective human judgments that are personalized to an individual.

As they write:

“We demonstrate … that our CAV representation not only accurately interprets users’ subjective semantics, but can also be used to improve recommendations through interactive item critiquing.”

For example, the model can learn that users mean different things by “funny” and be better able to leverage those personalized semantics when making recommendations.

The problem the researchers are solving is figuring out how to bridge the semantic gap between how humans speak and how recommender systems “think.”

Humans think in concepts, using vague or subjective descriptions (called soft attributes).

Recommender systems “think” in math: They operate on vectors (lists of numbers) in a high-dimensional “embedding space”.

The problem then becomes making the subjective human speech less ambiguous but without having to modify or retrain the recommender system with all the nuances. The CAVs do that heavy lifting.

The researchers explain:

“…we infer the semantics of soft attributes using the representation learned by the recommender system model itself.”

They list four advantages of their approach:

“(1) The recommender system’s model capacity is directed to predicting user-item preferences without further trying to predict additional side information (e.g., tags), which often does not improve recommender system performance.

(2) The recommender system model can easily accommodate new attributes without retraining should new sources of tags, keywords or phrases emerge from which to derive new soft attributes.

(3) Our approach offers a means to test whether specific soft attributes are relevant to predicting user preferences. Thus, we are able focus attention on attributes most relevant to capturing a user’s intent (e.g., when explaining recommendations, eliciting preferences, or suggesting critiques).

(4) One can learn soft attribute/tag semantics with relatively small amounts of labelled data, in the spirit of pre-training and few-shot learning.”

They then provide a high-level explanation of how the system works:

“At a high-level, our approach works as follows. we assume we are given:

(i) a collaborative filtering-style model (e.g., probabilistic matrix factorization or dual encoder) which embeds items and users in a latent space based on user-item ratings; and

(ii) a (small) set of tags (i.e., soft attribute labels) provided by a subset of users for a subset of items.

We develop methods that associate with each item the degree to which it exhibits a soft attribute, thus determining that attribute’s semantics. We do this by applying concept activation vectors (CAVs) —a recent method developed for interpretability of machine-learned models—to the collaborative filtering model to detect whether it learned a representation of the attribute.

The projection of this CAV in embedding space provides a (local) directional semantics for the attribute that can then be applied to items (and users). Moreover, the technique can be used to identify the subjective nature of an attribute, specifically, whether different users have different meanings (or tag senses) in mind when using that tag. Such a personalized semantics for subjective attributes can be vital to the sound interpretation of a user’s true intent when trying to assess her preferences.”
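The paper does not ship reference code, but the pipeline described above can be sketched roughly as follows. This is an illustrative approximation rather than the authors' implementation: the frozen item embeddings, the "funny" tag, the tagged-versus-random split, and every variable name here are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: item embeddings from an already-trained collaborative
# filtering model, plus a small set of items one user has tagged "funny".
rng = np.random.default_rng(42)
n_items, dim = 1000, 32
item_embeddings = rng.normal(size=(n_items, dim))       # frozen CF embeddings
tagged = rng.choice(n_items, size=20, replace=False)    # items tagged "funny"
untagged = rng.choice(np.setdiff1d(np.arange(n_items), tagged),
                      size=20, replace=False)           # random counterexamples

X = np.vstack([item_embeddings[tagged], item_embeddings[untagged]])
y = np.array([1] * len(tagged) + [0] * len(untagged))

# A concept activation vector is the direction in embedding space that
# separates "concept" items from random items; a linear classifier's
# weight vector gives that direction, with no retraining of the recommender.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Projecting every item onto the CAV yields a degree-of-"funniness" score.
funny_scores = item_embeddings @ cav
most_funny = np.argsort(-funny_scores)[:10]
```

Because a direction like this can be fit per user (or per group of users who use a tag the same way), the same word can map to different directions for different people, which is the personalized semantics the researchers describe.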

Does This System Work?

One of the interesting findings is that their test of an artificial tag (odd year) showed the system’s accuracy rate was barely above random selection, which corroborated their hypothesis that “CAVs are useful for identifying preference related attributes/tags.”

They also found that using CAVs in recommender systems was useful for understanding “critiquing-based” user behavior and improved those kinds of recommender systems.

The researchers listed four benefits:

“(i) using a collaborative filtering representation to identify attributes of greatest relevance to the recommendation task;

(ii) distinguishing objective and subjective tag usage;

(iii) identifying personalized, user-specific semantics for subjective attributes; and

(iv) relating attribute semantics to preference representations, thus allowing interactions using soft attributes/tags in example critiquing and other forms of preference elicitation.”

They found that their approach improved recommendations in situations where discovering soft attributes is important. Applying it where hard attributes are more the norm, such as product shopping, is a future area of study to see whether soft attributes would also aid product recommendations.

Takeaways

The research paper was published in 2024, and I had to dig around to find it, which may explain why it went largely unnoticed in the search marketing community.

Google tested some of this approach with an algorithm called WALS (Weighted Alternating Least Squares), using actual production code that ships as a Google Cloud product for developers.

Two notes, one in a footnote and one in the appendix, explain:

“CAVs on MovieLens20M data with linear attributes use embeddings that were learned (via WALS) using internal production code, which is not releasable.”

“…The linear embeddings were learned (via WALS, Appendix A.3.1) using internal production code, which is not releasable.”

“Production code” refers to software that is currently running in Google’s user-facing products, in this case Google Cloud. It is likely not the underlying engine for Google Discover, but it is important to note because it shows how easily the approach can be integrated into an existing recommender system.

They tested the system using MovieLens20M, a public dataset of 20 million ratings, with some of the tests run against Google’s proprietary recommendation engine (WALS). This lends credibility to the inference that the approach can be applied to a live system without having to retrain or modify it.
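Google’s WALS implementation is internal production code, but the underlying algorithm, weighted alternating least squares for matrix factorization, is well documented. Below is a toy sketch of the idea in Python; the ratings matrix, weights, and hyperparameters are made up for illustration, and this is not the code referenced in the paper.

```python
import numpy as np

# Toy weighted alternating least squares (WALS): factor a ratings matrix
# into user and item embeddings, alternating closed-form ridge solves.
def wals(ratings, weights, dim=3, reg=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
    V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings
    eye = reg * np.eye(dim)
    for _ in range(iters):
        for u in range(n_users):                     # fix items, solve each user
            W = np.diag(weights[u])
            U[u] = np.linalg.solve(V.T @ W @ V + eye, V.T @ W @ ratings[u])
        for i in range(n_items):                     # fix users, solve each item
            W = np.diag(weights[:, i])
            V[i] = np.linalg.solve(U.T @ W @ U + eye, U.T @ W @ ratings[:, i])
    return U, V   # embeddings that a CAV could later be fit against

# 4 users x 5 items; weight 1 where a rating was observed, 0 otherwise.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 0],
              [1, 1, 0, 5, 4],
              [0, 0, 5, 4, 0]], dtype=float)
U, V = wals(R, (R > 0).astype(float))
print(np.round(U @ V.T, 2))   # reconstructed preference estimates
```

The point of the sketch is simply that the CAV step sits on top of embeddings like U and V; the factorization itself does not need to change.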

The takeaway I see in this research paper is that it makes it possible for recommender systems to leverage semantic data about soft attributes. Google regards Discover as a subset of Search, and search patterns are among the data the system uses to surface content. Google doesn’t say whether it is using this kind of method, but given the positive results, it is possible that this approach could be used in Google’s recommender systems. If that’s the case, Google’s recommendations may become more responsive to users’ subjective semantics.

The research paper credits Google Research (60% of the credits), along with Amazon, Midjourney, and Meta AI.

The PDF is available here:

Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors

Featured Image by Shutterstock/Here

December Core Update: More Brands Win “Best Of” Queries via @sejournal, @MattGSouthern

Google’s December core update ran from December 11 to December 29. Early analysis shared after the rollout points to a familiar pattern. Sites with narrower, category-specific strength appear to be gaining ground against broader, generalist pages in several verticals.

Aleyda Solís, International SEO Consultant and Founder at Orainti, published an analysis on LinkedIn breaking down the update’s impact across publications, ecommerce, and SaaS categories.

What Changed

Based on the examples Solís shared, the update appears to reward pages that match the query with direct category expertise. The effect shows up most clearly on “best of” and mid-funnel product terms.

Publications

Publication sites lost rankings for “best of” and broader queries that Google had previously treated as informational. Brands and commercial sites with direct product authority now rank better for these terms.

Solís cited Games Radar guides dropping for queries like “Best Steam Deck Games,” “Best Coop Games,” and “Upcoming Video Games.” Nintendo and Epic Games catalog pages increased for the same queries.

Ecommerce

Broader retailers lost ground on mid-funnel product queries to specialized retailers and brands showing specific authority in product categories.

Macy’s decreased for “winter boots women,” “winter coats,” and “men’s cologne.” Columbia, The North Face, and Fragrance Market increased for those same terms.

SaaS

Non-specialized SaaS platforms and publications dropped for software-related queries. More specialized software sites gained with targeted landing pages and resource content.

Zapier, Adobe, and CNBC decreased for queries like “Accounting Software for Small business” and “sole trader accounting software.” Freshbooks and Xero increased with dedicated landing pages.

Solís called the update “yet another iteration to reward specialization, expertise and showcase more commercially oriented content from brands or specialized retailers, rather than generic ecommerce platforms or publications.”

News Publishers Hit Hard

News publishers saw heavy volatility during the update.

Will Flannigan, Senior SEO Editor for The Wall Street Journal, shared SISTRIX data showing India-based news publishers lost visibility on U.S. search results. Hindustan Times, India Times, and Indian Express all showed downward trajectories.

Glenn Gabe, President of G-Squared Interactive, tracked movement across news sites throughout the rollout. He noted impacts across Discover, Google News, and Top Stories.

“There was a ton of volatility with news publishers with the December broad core update,” Gabe wrote on LinkedIn. “And it’s not just India-based publishers… it’s news publishers across many countries (including a number of large publishers here in the US dropping or surging heavily).”

During the rollout, some publishers reported steep Discover declines. Glenn Gabe wrote that publishers he spoke with “lost a ton of Discover visibility/traffic.”

For news specifically, this is worth tracking alongside Google’s Topic Authority system. That system surfaces expert sources for certain “newsy” queries in specialized topic areas.

We covered Topic Authority when it launched. The December volatility suggests Google continues to lean into depth signals for news, even if the mechanics differ by surface and query type.

Why This Matters

This update adds to a trend generalist sites have felt for years. Holding broad, non-specialized rankings gets harder when brands and specialist sites publish pages that map cleanly to the product category.

In NewzDash data shared by John Shehata, Google Web Search’s share of traffic from Google surfaces to news publishers fell from about 51% to about 27% over two years, while Discover’s share increased.

That doesn’t explain why Google made changes, but it helps explain why Discover volatility hits harder when a core update rolls through.

Additionally, the pattern suggests Google may be reclassifying “best of” queries as having commercial rather than informational intent.

In ecommerce, specialized retailers are outranking larger platforms on mid-funnel queries because they demonstrate category authority. Publishers creating product recommendation content now face direct competition from the brands themselves.

For news publishers, the volatility in Discover creates a planning problem. When updates hit this channel, the traffic loss can be swift for publishers who lack a specific niche focus.

Looking Ahead

The December core update completed on December 29 after an 18-day rollout.

Sites affected by the update can review Google’s guidance on core updates. For sites hit by the specialization tilt, the path forward likely involves showing deeper expertise in narrower topic areas rather than competing on breadth.


Featured Image: PJ McDonnell/Shutterstock

3 SEO Predictions for 2026

I’ve been a professional search engine optimizer since 2005. Never have I experienced the speed and magnitude of the current web changes. Generative AI is accelerating and progressively dominating search results pages via AI Overviews and AI Mode. Many traditional optimization tactics are ineffective.

Here are my search engine predictions for 2026.

Zero-Click Discovery

Consumers will increasingly discover and research products without clicking an organic listing. Commercial websites have experienced traffic declines for years. The trend will accelerate in 2026, as genAI platforms will research and recommend products based on shoppers’ prompts.

For instance, I queried ChatGPT with the prompt “best hiking boots for winter.” The platform ran its own searches, identified the best options, and then compared products across multiple criteria, including snow, insulation, warmth, and price.

Screenshot: ChatGPT shopping interface showing three hiking boot options with product images and comparison lines pointing to specific features.

For a prompt of “best hiking boots for winter,” ChatGPT ran its own searches, identified the best options, and then compared products.

The process could have taken me an hour or more searching, clicking, and then discovering each option. I would have read reviews and product comparisons. Instead, ChatGPT took less than a minute and required no additional clicks.

The next genAI evolution is enabling users to purchase products in the chat dialog, i.e., without leaving the platform. ChatGPT does this with “Instant Checkout”; Google’s version is “Agentic Checkout.”

All of this upends organic visibility for merchants, who face the double whammy of less traffic and few (if any) reliable attribution metrics for the traffic they do have.

Indeed, a top hurdle with optimizing for LLMs is the absence of data. We rely on third-party tools, which, in my experience, are unreliable. Google provides no AI Mode visibility data in Search Console, and ChatGPT offers analytics only to partners.

GenAI Monetization

No genAI platform is anywhere near profitable. Expect a flood of revenue-generating add-ons from ChatGPT, Perplexity, Claude, and more. Even Google is testing pay-per-click ads inside its AI Mode answers.

This could help SEO. Once they sell sponsorships and ads, LLM platforms will likely provide performance metrics, which could include organic visibility.

Optimization strategies will then become more informed and easier to plan.

AI Chats Replace Search

To date, consumers have not abandoned traditional search despite flocking to ChatGPT and similar platforms.

But the trend remains: More people are using genAI, especially for information gathering and instructions. Only technical help and writing assistance are trending down, per a September 2025 OpenAI report (PDF).

Google, too, is contributing by integrating AI Mode everywhere in search. AI Overviews now include invitations for searchers to converse in AI Mode rather than query further. Searchers can also access AI Mode from Google’s home page.

Screenshot: Google AI Overviews results for “best hiking boots for winter,” showing recommendations including the Merrell Moab 3 Mid GTX, HOKA Kaha 3 GTX, and La Sportiva Ultra Raptor II Mid GTX, with a “Dive deeper in AI Mode” button.

In AI Overviews, Google now invites searchers to converse in AI Mode rather than query further.

In short, I expect AI-powered search and LLM-driven answers to replace traditional search at an accelerating pace. Changes in consumer behavior, declines in traffic, and new LLM visibility features will occur in 2026 at least as rapidly as they did in 2025.

Diversify traffic, retain customers, and emphasize direct relationships, such as email. Study how LLMs discover and recommend products. That’s my advice for 2026.

The Psychology of AI SERPs and Shopping

AI-generated search result summaries have changed how consumers query for answers and products. The rise of “zero-click” search engine result pages may signal the coming effect of AI shopping and agentic commerce on product discovery and decision-making.

In March 2025, 900 U.S. adults shared their browsing behavior with the Pew Research Center. Roughly 58% of those adults encountered an AI Overview when searching on Google. Only 8% then clicked a traditional listing. Conversely, 42% of Google searchers received no AI Overview; 15% then clicked on a listing.

The immediate impact — 8% vs. 15% — is material and measurable. According to eMarketer, zero-click searches have reduced traffic to many websites by 25% or more. For ecommerce marketers, fewer clicks and visits already pose a significant challenge that will likely intensify in 2026.

But declining traffic is not the only issue.

The same psychological forces driving zero-click searches may also shape how shoppers behave when AI recommends and completes their purchases.

Satisficing

The idea behind the Pew data is simple enough. Folks stop searching when they receive a (presumably) clear, readable answer. There is no reason to keep looking. The AI answer is satisfying. It is also psychologically “satisficing”: accepting the first answer that meets a minimum criterion rather than searching for the best possible one.

When AI answers are “good enough,” why would someone keep searching?

The key is whether satisficing will shape future AI shopping, as it now shapes search. When it evaluates options, compares prices, and recommends a single item, does an AI agent end the shopper’s journey?

If so, the winning product may be the first to meet the agent’s criteria.

Cognitive Ease

AI summaries dramatically reduce cognitive load.

The perceived benefit of digging deeper into many product-related queries (shipping times, return policies, basic comparisons) might not outweigh the mental effort. Shoppers can think less when they accept the AI response.

As it leads people to accept AI-generated answers, cognitive ease may also influence their decisions in agentic commerce, making effortless acceptance the norm.

When it summarizes options, filters trade-offs, and recommends a purchase, an AI shopping agent eliminates not just clicks but also cognitive work. The shopper no longer compares specifications, reads reviews, or weighs alternatives; the decision feels effortless.

Authority Bias

Google users trust its search results and AI answers. The structured tone, neutral language, and top placement add an air of authority, even when users do not scrutinize or review the sources.

Psychologists call it “authority bias,” wherein people defer to perceived institutional expertise. In practice, Google’s voice becomes the expert. That broader tendency to trust experts could carry over into AI shopping, with shoppers more likely to view AI recommendations as definitive guidance.

When an AI agent recommends a purchase, shoppers often treat it as expert advice rather than just a machine-generated suggestion. The platform’s authority and apparent sophistication signal trust and discourage second-guessing.

Completion Bias

Traditional search results suggest unfinished work. Effort is required to click the links and then study the ensuing pages. AI summaries, in contrast, signal completion.

Searchers’ motivation drops sharply when they think a task is complete.

Shoppers conclude the process when an AI agent evaluates the options, narrows the choices, and then recommends a product. Alternatives remain, but the urge to keep searching ends.

Hence completion bias could spur AI shopping.

Ecommerce Marketing

Taken together, satisficing, cognitive ease, and authority and completion biases suggest that AI shopping will shortcut the shopping journey and decision-making.

This has the potential to move ecommerce competition upstream.

Product data accuracy, pricing consistency, fulfillment performance, reviews, and policy transparency become inputs into an AI agent’s logic, not just reassurance for humans. Thus success with AI selling may depend less on winning clicks and more on being legible, credible, and “good enough” at the precise moment a search is complete.