APAC Search Strategy Goes Beyond Google & Baidu via @sejournal, @motokohunt

If you approach the Asia-Pacific search strategy as simply an extension of your U.S. or European Google strategy, you will miss how discovery actually works across the region. Google is still dominant in many markets. But the landscape is far more fragmented than most global teams assume.

Japan is a clear example. Bing holds 31.63% of search share alongside Google’s 59.58%, which is enough to materially influence both SEO and paid performance.

South Korea tells a different story, but leads to the same conclusion. Google (46.81%) and Naver (43.96%) operate at near parity, making any Google-only strategy incomplete by design.

Even in Southeast Asia, where Google is often assumed to be universal, local engines still matter. In Vietnam, CocCoc holds a meaningful 5.34% share of the market, which is enough to influence visibility in some competitive categories.

These are not anomalies; they are signs of a broader shift.

Information discovery is changing with AI-driven interfaces shortening the path from question to decision. Super-apps and platform ecosystems are also changing where that discovery happens. In many cases, users are no longer moving through the web step by step. They are interacting with systems that interpret, summarize, and guide decisions within a single experience. Put together, fragmentation and interface change are creating a very different competitive landscape.

The advantage in APAC is no longer about understanding a single algorithm or the top-ranking factors. It is about understanding how distribution works across multiple systems, each with its own logic, constraints, and opportunities. That shift requires a different mindset. Not how do we rank? But where do we need to exist?

The Forces Reshaping Discovery In APAC

To understand how search is evolving in APAC, it helps to step back from individual search engines and look at broader behavior patterns. Across Asian markets, four patterns are consistently changing how discovery happens.

The First Is The Rise Of AI-Driven Answer Systems

Search used to require effort. Users entered a query, reviewed results, compared options, and formed their own conclusions. That process is being compressed. A question goes in, and a synthesized answer comes back, often with built-in recommendations.

Visibility changes significantly in this new environment. Simply ranking in SERPs is no longer enough. Future-state content needs to be structured so it can be selected, understood, and cited.

The Second Force Is The Role Of Super-Apps

In markets like South Korea and Japan, discovery is not limited to a browser. It happens inside messaging platforms, content ecosystems, and integrated services. KakaoTalk and LINE are not just communication tools. They are environments where users search, evaluate, and act.

In Japan, it is common to see TV commercials directing users to a LINE account rather than a standalone app or website. For many brands, LINE has become the primary interface for engagement, offering promotions, customer service, and loyalty programs in one place.

That shift matters because users are not always navigating to a site or downloading a brand app. They are interacting within platforms they already use daily. For brands, being present on the web is no longer enough if the decision is made elsewhere.

The Third Force Is Distribution Through Telcos

This is one of the least discussed but most impactful changes. When telecom providers bundle AI tools into their offerings, they accelerate adoption at a scale that traditional product growth cannot match.

In India, Bharti Airtel partnered with Perplexity to provide its Pro offering to roughly 360 million users.

Reliance Jio has taken a similar approach, distributing access to Google’s Gemini AI across more than 500 million users through bundled plans.

In South Korea, SK Telecom also partnered with Perplexity to bring AI-powered search directly into its ecosystem, positioning it alongside traditional engines rather than outside them.

In these cases, adoption is not driven by users seeking new tools. It’s happening because those tools are already there. Pre-installed, bundled, or built into services people use every day.

Instead of gradual growth, usage can scale almost overnight, significantly changing the adoption curve. And because these tools are positioned as assistants rather than search engines, they reshape how users interact with information without requiring them to consciously change behavior.

For search teams, this creates a different kind of competitive dynamic. It’s no longer just about ranking in search engines. The real competition is for inclusion in systems being rolled out to millions of users via existing platforms they are already comfortable using.

The Fourth Force Is The Evolution Of Portal-Based Search

In South Korea and Japan, portals like Naver and Yahoo! are popular and function more like structured environments, with commerce modules, local listings, media, and knowledge panels built directly into the experience. Increasingly, these platforms are adding AI-generated summaries to answer questions without sending users elsewhere.

It changes what success looks like. Ranking still matters, but it’s not the whole story anymore. You also need to show up within these environments.

Once you look at it that way, the objective shifts. It’s less about visibility in one engine and more about being present wherever people are actually finding answers.

Market Realities That Change The Playbook

Once you recognize that APAC is a distributed landscape, the idea of a single “regional strategy” starts to break down. Each market introduces its own constraints and opportunities, and those differences materially affect how search should be approached.

Japan often gets rolled into global strategy, but the numbers don’t really support that. Bing’s share is high enough to affect both organic and paid performance, driven in part by default browser settings and enterprise environments. It’s one of those gaps that’s easy to miss until you look for it.

South Korea is a different kind of challenge. Naver sits at the center of how people discover content, and it doesn’t behave like a typical search engine. The formats, the way results are surfaced, and even what users expect to see all differ. If you approach it with a Google-first mindset, things start to break down pretty quickly.

Vietnam shows a different kind of opening. CocCoc’s share isn’t huge, but it doesn’t need to be. If competitors ignore it, that alone creates room to gain visibility. In markets like this, where local behavior doesn’t fully match global assumptions, those gaps tend to get picked up quickly.

India and Indonesia don’t follow the same pattern. Google still dominates, but something else is happening alongside it. AI tools are picking up faster than most teams expect. In some cases, that push isn’t coming from users at all. It’s coming through telco partnerships, bundled access, and tools showing up inside services people already use.

So the way discovery shifts in these markets can feel uneven. It doesn’t necessarily track with what we’ve seen in more mature regions.

The common thread across these markets is that the opportunity is not just in understanding each engine, but in recognizing where competitors are underinvesting.

Where Most SEO Strategies Fall Short

In APAC, the issue usually isn’t a lack of optimization knowledge. It’s how that knowledge gets applied. Most global teams are set up around a centralized model. The tools, processes, and reporting tend to revolve around Google by default. Regional differences are recognized, but they don’t always make it into how work actually gets done.

That’s where things start to drift.

Alternative engines are often pushed aside. Even when the data shows a meaningful share, they’re treated as secondary priorities. Over time, that creates an opening. Teams that do invest, even at a basic level, can pick up visibility that others leave behind.

Additionally, structured data and technical capabilities are not adapted to local ecosystems. What works for Google is assumed to work everywhere, even in environments where search behaves very differently.

Often experimentation is limited or not allowed. Many of the platforms that matter in APAC provide APIs, feeds, and tooling that enable more advanced strategies. These capabilities often go unused because they fall outside standard workflows.

None of these gaps is particularly complex to address. But they require a shift in how teams think about ownership and execution.

The Shift To Answer-Layer Visibility

One of the more subtle but important changes is the emergence of what can be described as the answer layer. Users are increasingly interacting with systems that provide direct responses rather than lists of options. In these environments, visibility is determined by whether your content is selected as a source, not just whether it ranks.

This changes how content should be created, requiring information to be structured in a way that is easy to extract and interpret. Clear definitions, comparisons, and step-by-step explanations become more valuable because they align with how AI systems assemble answers. At the same time, attribution becomes more important. Content that is well-organized, clearly sourced, and easy to validate is more likely to be used and cited.

This is not a replacement for traditional SEO. It is an extension of it. But it does require a different level of intentionality in how content is designed.

Measurement Needs To Catch Up

One of the challenges in adapting to this landscape is that measurement has not kept pace with behavior. Many teams still report on organic search as a single channel. In APAC, that approach obscures more than it reveals.

At a minimum, performance should be segmented by engine and by discovery type. Google, Bing, portal ecosystems, local engines, and AI-driven referrals each behave differently and should be evaluated separately.

Without this level of visibility, it becomes difficult to justify investment or identify where opportunities exist. This is particularly important as AI-driven traffic grows. Early data suggest that referrals from AI systems are increasing rapidly, but in many cases, they are not being tracked or attributed correctly, if at all.

The result is a blind spot in performance reporting at the exact moment when new discovery channels are emerging.

Regulation As A Strategic Constraint And Opportunity

Regulation is increasingly shaping how search and discovery operate across APAC.

Privacy laws in markets like Japan, South Korea, India, and Vietnam are tightening what teams can collect and how they can use it. At the same time, countries like Australia are putting more pressure on AI systems, especially regarding age verification and platform responsibility.

Most organizations still treat this as a compliance task. Something to deal with once it becomes unavoidable. But it doesn’t really work that way anymore.

The teams that plan for these constraints early tend to move faster. Their measurement holds up. Their content strategies translate more easily across markets. They don’t have to keep reworking things every time a new requirement shows up.

Others take a different path. They push harder on data collection, lean into short-term gains, and then end up rebuilding parts of their stack under pressure.

So regulation ends up doing more than limiting what’s possible. It quietly separates the teams that can adapt from the ones that will struggle to adapt to this new ecosystem.

What To Do Next

For teams trying to adapt, the next steps don’t need to be dramatic. Most of the gains come from getting the basics right in the markets that matter.

A good place to start is how you define search. In some markets, Bing deserves to be integrated as a primary channel given its market share. In South Korea, Naver needs to be approached as its own ecosystem rather than an alternative to Google. And in places like Vietnam, it’s worth taking a closer look at platforms like CocCoc to understand whether they contribute meaningful visibility for your category.

At the same time, begin building content that is designed for extraction and citation. This does not require a complete overhaul of your content strategy, but it does require more intentional structuring of key information.

Content that performs well in AI-driven environments tends to be clear, well-organized, and easy to interpret. Definitions, comparisons, step-by-step guidance, and well-supported claims are more likely to be selected and reused because they align with how answer systems assemble responses.

This is where many global teams overlook a significant advantage. Rather than creating entirely new content for each APAC market, there is often an opportunity to extend what already works in the U.S. or Europe. Content that has earned visibility, links, and engagement in one market has already demonstrated its value. When that content is adapted thoughtfully, not just translated, it can carry those strengths into new markets.

The key is in how that adaptation happens. Instead of treating localization as a linguistic exercise, it should be treated as a structural one. Core concepts, definitions, and frameworks can remain consistent, while local relevance is introduced through examples, regulatory context, and market-specific details.

This approach does two things.

First, it accelerates content development by building on proven assets rather than starting from scratch.

Second, it increases the likelihood that content will be recognized, interpreted, and cited across markets, particularly in AI-driven systems that prioritize clarity, consistency, and corroboration.

In a landscape where visibility increasingly depends on being selected as a source, not just ranked, this becomes a meaningful competitive advantage.

Finally, recognize that distribution is now a core part of the market-specific and regional search strategy. Whether through platforms, partnerships, or new interfaces, where your content appears is becoming just as important as how it ranks.

Closing Thought

APAC is often described as complex. That is true, but complexity is not the most important characteristic. Search is no longer defined by a single engine or a single interface. It is shaped by a network of systems that influence how users discover, evaluate, and act.

The teams that succeed will not be the ones that adapt their Google strategy to new markets but the ones that understand how discovery actually works and build their presence accordingly.


Ensuring continuous discoverability with agentic AI for SEO

In our Rethinking SEO in the age of AI article, we briefly explored how AI might move beyond simple prompt-and-response interactions. One emerging direction is agentic AI. Systems that can take action, not just generate answers. While this space is still evolving, we’re already seeing early signs of tools that can identify gaps, suggest improvements, and adapt to changing trends with minimal input. If these capabilities continue to develop, they could reshape how we think about maintaining continuous discoverability in SEO.

Key takeaways

  • Agentic AI for SEO represents a shift from traditional visibility and ranking to being trusted and understood by AI systems
  • The web’s structure remains stable, but interaction through AI agents changes how content is accessed and consumed
  • SEO must evolve to focus on being structured, reliable, and adaptable for AI interpretation
  • Challenges include data quality, integration complexity, and balancing automation with human judgment
  • The future of discoverability in an agent-driven web emphasizes collaboration between AI and human insight, expanding SEO’s role beyond just ranking

Understanding the coexistence of web and AI agents

Before understanding agentic SEO, let’s first look at the role of AI in shaping the web. Is it staying the same, or quietly changing?

For a long time, the web has been more than just a collection of pages. It has functioned as an interconnected graph of entities: websites representing people, businesses, ideas, and concepts, all linked together through content, context, and trust. This structure, often referred to as the open web, has remained relatively stable for decades. Humans created content, users discovered it through search or links, and meaning was formed through exploration.

What seems to be shifting now is not the structure itself, but how that web is accessed and consumed.

Earlier, discovery was largely a direct interaction between humans and websites. You searched, clicked, read, compared, and formed your own conclusions. Today, AI systems are increasingly stepping into that journey. They sit between the user and the web, interpreting, summarizing, and sometimes even deciding which information to surface.

This is where the idea of AI agents begins to emerge. Not just as tools that generate responses, but as systems that can navigate the web, retrieve information, and potentially act on it. Early examples, such as experiments in natural language interfaces like NLWeb, hint at a web that can be interacted with more conversationally, without losing its openness and interconnectedness.

Some refer to this shift as the beginning of an “agentic web.” But it’s important to see it less as a complete transformation and more as a layer forming on top of the existing web. The open web still exists, content is still created by people, and links still matter. What’s evolving is how that content is discovered, interpreted, and used.

And that shift in interaction is where things start to get interesting for SEO.

Read more: Yoast collaborates with Microsoft to help AI understand Open Web

What will SEO mean in an agentic web?

If AI agents are starting to reshape how people interact with the web, it naturally raises a follow-up question: where does that leave SEO?

For years, SEO has largely been about helping users find your content. You optimized for rankings, improved visibility on search engines, and relied on users to click, read, and navigate. But if AI agents begin to mediate that journey, not just retrieving information but interpreting and acting on it, then SEO may need to expand its role.

Not necessarily replace what exists, but build on top of it.

From ranking pages to being selected by systems

In a more agent-driven environment, discoverability may no longer depend solely on where you rank, but also on whether your content is selected, trusted, and used by AI systems.

That introduces a subtle but important shift:

  • It’s not just about being visible
  • It’s about being understandable, reliable, and usable by machines

AI agents don’t browse the web the way humans do. They:

  • Parse structured and unstructured data
  • Look for clear signals of authority and accuracy
  • Combine information from multiple sources before presenting it

So instead of optimizing only for clicks, SEO may also involve optimizing for inclusion in AI-generated responses and workflows.

What stays, what evolves, what gets added

Let’s ground this a bit. Traditional SEO doesn’t disappear. Many of its fundamentals still apply, but their role may shift.

What stays relevant

  • High-quality, original content
  • Clear site structure and internal linking
  • Strong technical SEO foundations
  • Authority and trust signals (E-E-A-T)

These remain essential because AI systems still rely on the web as their source of truth.

What evolves

  • Keywords → Intent modeling: Less about exact-match phrases, more about covering topics deeply and contextually
  • Rankings → Presence across surfaces: Visibility may extend beyond SERPs into AI summaries, assistants, and agent outputs
  • Clicks → Influence: Users may not always visit your site, but your content can still shape their decisions

What gets added

  • Structured, machine-readable content: Schema, clean formatting, and semantic clarity become even more important
  • Content designed for extraction: Clear answers, definitions, step-by-step explanations
  • Topical authority at the entity level: Being recognized as a trusted source for a subject, not just ranking for a keyword
  • Freshness and adaptability: Content that evolves as trends and information change

So, what does SEO really become?

It starts to look less like a discipline focused purely on rankings and more like one focused on continuous discoverability.

Or, as Alex Moss puts it in his article The Same But Different: Evolving Your Strategy For AI-Driven Discovery, the web itself may be evolving into two parallel experiences:

This has created a split from a completely open web into two – the ‘human’ web and the ‘agentic’ web… SEOs will have to consider both sides of the web and how to serve both.

That framing makes the shift clearer.

Your content still needs to rank. But it also needs to work at a second layer of the web, where AI systems interpret, select, and sometimes act on information before a human ever sees it.

So now, your content needs to be:

  • Understood without ambiguity
  • Trusted enough to be referenced
  • Structured well enough to be reused

In that sense, SEO doesn’t disappear in an agentic web. It stretches.

From helping users find information…

to helping systems choose it.

Role of agentic AI in SEO

If the web is gradually being experienced through both humans and AI agents, then it’s worth asking what role these agents might begin to play in SEO itself. Not as a replacement for SEO teams, but as a new layer within how SEO work gets done.

What we’re starting to see is a shift from SEO as a set of periodic tasks to something more continuous, assisted, and adaptive. Some early tools already hint at this. They don’t just analyze data, they suggest actions. In some cases, they even implement changes. If this direction continues, agentic AI could become less of a tool you use and more of a system you collaborate with.

Let’s break down where this role might start to take shape.

How agentic AI may reshape SEO workflows

Each shift below contrasts the traditional SEO approach (how it typically works today) with the emerging agentic direction.

Audits → Always-on optimization

  • Traditional approach: SEO teams run audits at set intervals (monthly, quarterly) using tools such as site crawlers. Issues such as broken links, missing metadata, or slow pages are identified and then manually fixed over time. Improvements often depend on when the audit is conducted.
  • With agentic AI: Systems continuously monitor site performance, flag issues as they arise, and may suggest or implement fixes in real time. Optimization becomes ongoing rather than dependent on manually scheduled audits.

Reacting → Anticipating

  • Traditional approach: Actions are usually triggered by visible changes. For example, a drop in rankings leads to an investigation, or an algorithm update prompts content revisions. SEO is often a response to what has already happened.
  • With agentic AI: AI systems analyze patterns in search behavior and performance data to detect early signals. This could mean identifying emerging topics, shifting intent, or declining engagement before it significantly impacts performance.

Manual execution → Guided systems

  • Traditional approach: Tasks such as keyword research, clustering, content optimization, and internal linking are performed manually or with tools. SEO specialists interpret the data and execute changes step by step.
  • With agentic AI: AI assists with these tasks by identifying keyword opportunities, grouping topics, suggesting optimizations, and even applying specific changes. SEOs shift toward guiding strategy, reviewing outputs, and setting priorities.

Static content → Adaptive content

  • Traditional approach: Content is created, published, and revisited occasionally. Updates are often triggered by performance drops, outdated information, or scheduled content refresh cycles.
  • With agentic AI: Content evolves more dynamically. Systems can recommend updates based on performance, refine sections for clarity, or restructure content to better match user intent and AI consumption patterns.

Generic UX → Contextual journeys

  • Traditional approach: Most users experience the same content and navigation structure. Personalization is limited or rule-based, such as basic recommendations or segmented landing pages.
  • With agentic AI: Experiences become more contextual. Content, navigation, and recommendations can adapt based on user behavior, intent, or journey stage, creating more relevant and engaging interactions.

Technical maintenance → Intelligent infrastructure

  • Traditional approach: Technical SEO involves periodic checks for issues such as crawl errors, indexing problems, and schema gaps. Fixes are prioritized manually based on impact and resources.
  • With agentic AI: AI systems continuously monitor technical health, automatically prioritize issues, suggest fixes, and, in some cases, implement them. Structured data, internal linking, and site architecture can be dynamically optimized.

A quick example: structuring content for machines, not just humans

If agentic systems rely on structured, connected, and machine-readable content, then this isn’t entirely new territory for SEO.

In many ways, we’ve already been moving in this direction through structured data and schema. What’s changing is how important and foundational it may become.

For example, features like schema aggregation in Yoast SEO bring together different pieces of structured data across a site and connect them into a more unified graph. Instead of treating pages as isolated units, they help search engines better understand how entities, content types, and relationships fit together.

This might seem like a technical detail, but it reflects a broader shift.

If AI agents are parsing, combining, and interpreting content across multiple sources, then clarity and connection at the data level become more important. Not just for visibility in search results, but for how content is understood and reused.

So while agentic AI may feel like a new layer, some of the foundational work, like structuring content, defining entities, and building semantic relationships, is already part of modern SEO. It just becomes more critical in this context.

So, where does this leave SEO teams?

If there’s one pattern across all of this, it’s not replacement, but redistribution.

Agentic AI may take on:

  • Repetitive tasks
  • Data-heavy analysis
  • Continuous monitoring

Which leaves humans to focus more on brand-building aspects like:

  • Strategy and positioning
  • Editorial judgment and brand voice
  • Deciding what should be done, not just what can be done

In that sense, agentic AI doesn’t redefine SEO overnight. But it does start to reshape how it’s practiced.

Understanding the risks and challenges of agentic AI for SEO

So far, agentic AI might sound like a natural evolution of SEO. But, as with most shifts in technology, it may also come with trade-offs.

Not because the technology is inherently problematic, but because it introduces new dependencies, new layers of complexity, and new decisions for SEO teams to navigate. In that sense, adopting agentic AI isn’t just about adding a new capability. It may also involve rethinking how much control to delegate and where human judgment continues to play a critical role.

Here are some of the challenges that could emerge as this space evolves:

1. High technical and integration complexity

Agentic systems are unlikely to operate in isolation. They may need to connect with your CMS, analytics tools, and multiple data sources.

This could introduce challenges such as:

  • Managing integrations across platforms
  • Ensuring consistent and reliable data flow
  • Defining clear workflows across systems

For many teams, this might not be plug-and-play. It could require time, experimentation, and coordination across different roles.

2. Data quality and dependency

Agentic AI may be heavily dependent on the quality of data it receives. If the data is:

  • Outdated
  • Incomplete
  • Poorly structured

Then the outputs could reflect those gaps.

At scale, even small inconsistencies might influence multiple recommendations or decisions. Which is why maintaining clean, reliable data sources may become even more important in an agent-driven setup.

3. Risk amplification and the need for governance

One of the strengths of agentic AI is speed. But that same speed might also amplify unintended outcomes.

Without clear guardrails:

  • Content updates could introduce inaccuracies
  • Technical changes might lead to issues like broken links or indexing errors
  • Best practices may not always be consistently followed

This is where governance frameworks and approval checkpoints may become essential, not to slow things down, but to keep them aligned.

4. Hallucinations and accuracy considerations

AI systems can sometimes generate outputs that sound plausible but aren’t entirely accurate.

In an SEO context, this might look like:

  • Misinterpreted data
  • Inaccurate keyword insights
  • Fabricated or blended information

The challenge is that these outputs can be difficult to spot at a glance. This suggests that validation and source-checking may remain an ongoing part of the workflow.

5. Limited understanding of nuance

SEO often goes beyond data and structure. It includes tone, context, and intent. Agentic systems may not always fully capture:

  • Brand voice and positioning
  • Legal or compliance nuances
  • Subtle differences in user intent

This could result in outputs that are technically sound, but not always contextually aligned. Human input may still play a key role here.

6. Balancing automation with human judgment

A broader question that may arise is how much to automate.

  • Too much automation might reduce control over strategy or brand
  • Too little might limit efficiency and scalability

Most teams may find themselves balancing the two. Using agentic AI to extend their capabilities, while still guiding direction and decision-making.

7. High initial investment and learning curve

While agentic systems may offer long-term efficiency, getting started could take time. This might involve:

  • Learning how the systems work
  • Setting up workflows and integrations
  • Aligning outputs with business goals

There’s also a level of uncertainty here. The technology is still evolving, and so are the tools built around it. Which means costs, capabilities, and best practices may continue to shift.

For many teams, adoption may not be immediate. It could happen gradually, through testing, iteration, and figuring out what actually works in practice.

8. Zero-click experiences and shifting traffic patterns

As AI systems become more involved in surfacing information, zero-click experiences may become more common.

Users might:

  • Get answers directly within AI interfaces
  • Interact without visiting the original source

This doesn’t necessarily reduce the importance of SEO, but it may shift how success is measured. Visibility and influence could become just as relevant as traffic.

What discoverability might look like in an agent-driven web?

Agentic AI may open up new possibilities for how SEO is done. But alongside that, it may also introduce new considerations.

It could require:

  • Stronger data foundations
  • Clear governance and review processes
  • A thoughtful balance between automation and human input

In many ways, the goal may not be full automation. It may be a better collaboration.

Even if agents take on more execution, the responsibility for direction, accuracy, and trust is likely to remain human. And maybe that’s the more interesting shift here. Not whether AI agents will “take over” SEO, but how they might reshape what good SEO looks like.

If discoverability is no longer just about ranking, but also about being selected, interpreted, and reused by systems, then the role of SEO starts to expand. It becomes less about optimizing for a single interface and more about preparing content to exist across multiple layers of the web.

So the question isn’t just:

“How do we rank?”

It might slowly become:

  • How do we stay understandable across multiple LLMs?
  • Do we remain trustworthy enough to be referenced?
  • How do we design content that works for both humans and machines?

We don’t have all the answers yet. And maybe that’s okay.

Because this isn’t a fixed destination. It’s something that’s still taking shape.

And as it does, SEO may continue to evolve alongside it. Not disappearing, not being replaced, but adapting to a web that is becoming more dynamic, more layered, and a little less predictable.

GenAI Citations, Explained

ChatGPT, Claude, Gemini, and other generative AI platforms occasionally include source links in their responses. The links are called citations and may appear within an answer or in a separate panel, often to the right.

Understanding the citation algorithms is key to creating an optimization strategy. Here’s what we know about genAI citations.

Citations in ChatGPT often appear in a right-side panel, with inline source badges in the answer linking to the corresponding entries in the Sources panel.

Citation Methods

No leading genAI platform has explained its citation algorithm or provided optimization guidance. Yet it’s clear from analyzing citations and traditional search rankings that Google powers ChatGPT, Gemini, AI Mode, and Grok, while Brave drives Claude and Perplexity.

Thus maintaining high rankings on both of those search engines increases the likelihood of being cited. An exception is ChatGPT’s practice of citing its publication partners regardless of external rankings.

Citations and Sources

From patents and independent studies, we know of four types of citations.

Grounded citations influence the answer itself. The platforms run searches, crawl the content of indexed pages, and quote those sources in the response.

Ungrounded citations support and confirm the platforms’ existing training data without influencing the answer directly. I call these “reverse” citations for that reason. Presumably ungrounded citations exist to foster accuracy and objectivity from known, reliable companies.

The frequency of ungrounded citations is largely a mystery. However, a recent New York Times article referred to an analysis by Oumi, an AI development firm, that found “more than half” of the citations on AI Overviews (powered by Gemini) are ungrounded.

Ghost citations are links in answers that lack a source name. This presumably occurs because the source didn’t explain how its product or service solved the query. According to a study published this month by search optimizer Kevin Indig, 61.7% of answers include a ghost citation.

Invisible citations are not, in fact, citations; they are instances of genAI using a site’s info without mentioning or linking to it. A study released this month by Ahrefs found 50.2% of URLs retrieved by ChatGPT remain uncited. Moreover, in my experience, Reddit threads often influence answers, but very few are cited.

GEO Strategy

Knowing how genAI citations work can help elevate your business’s visibility.

Influencing an answer is different from being cited in it. Still, appearing in any answer is better than not, especially if it includes your products.

Training data is fundamental. GenAI platforms may answer queries from varied sources, but start with training data. The platforms may search Google or Brave only after creating an answer, or they may provide an answer exclusively from external pages.

Regardless, direct or indirect associations with prompts expose a brand to the platform. That’s the priority.

GoDaddy Transferred A Domain By Mistake And Refused To Fix It via @sejournal, @martinibuster

GoDaddy is alleged to have transferred a domain name away from its longtime registrant without proper authorization or the required documentation. The victim spent nearly ten hours with customer service only to be told that there was nothing GoDaddy could do to fix the problem.

Domain Transfer Happened On A Saturday

Interestingly, the rogue domain transfer happened on a Saturday, which could be an important detail because some domain registrars outsource their customer service on weekends, and I have heard of other occasions where mistakes occurred because of looser quality control. I know of a case where high-value domain names worth six to seven figures were stolen on a weekend: an attacker manipulated the weekend customer service into changing the email address on the account, enabling the thief to transfer all of the one- and two-word domains to another account.

What happened with this specific domain was not a case of robbery but something worse. A weekend customer service person made a mistake while processing a legitimate domain name change for another GoDaddy customer, and instead of initiating the change on the correct domain, they transferred the victim’s domain.

Compounding the error, GoDaddy’s weekend customer service failed to follow their own protocol for preventing unauthorized transfers, thereby allowing the domain to be transferred to someone else.

32 Calls And Nearly 10 Hours Of Phone Calls

The process of getting GoDaddy to reverse its mistake was a bureaucratic nightmare. The victim placed thirty-two phone calls and spent 9.6 hours on the phone with GoDaddy’s customer service.

“Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did but did not receive any type of response when emailing that address. Of course Lee didn’t really feel like this was the appropriate level of urgency for this issue. He asked for a supervisor who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.

On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.”

Compounding the problem, every time the victim called GoDaddy, the call generated a new case number, and none of the case numbers was tied to any of the previous ones.

GoDaddy’s Response

After four days of trying to get through to someone at GoDaddy to get the problem resolved, GoDaddy finally responded with the following resolution:

“After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.”

GoDaddy’s response contained links explaining how to dispute a domain name change with ICANN, the global organization that manages the domain name system, instructions on how to look up domain name registration information, and a customer support page about contacting legal representation.

That’s it.

Error Fixed, But Not By GoDaddy

The person who wrote about the issue said that they contacted a friend within GoDaddy who was then able to have the matter properly dealt with. Ultimately the error was not fixed by GoDaddy but by the innocent person who discovered someone else’s domain name in their GoDaddy account.

As previously stated, the entire fiasco began with a mistake by GoDaddy on a legitimate domain change request: GoDaddy applied the change to the victim’s domain name instead of the domain specified in the request. The person who ended up with the victim’s domain name in their account contacted the victim, and between the two of them they began the process of transferring the domain back to the rightful registrant.

Domain Name Ownership Is Non-Existent

A common mistake made by many developers and business owners is believing that they own a domain name. That is incorrect; nobody owns a domain name. Domain names are registered but never owned. The registration entitles the registrant to use the domain name, but they never actually own it. That is how the domain name system works, and it is part of the reason why this issue played out the way it did. However, the problem in this case was due solely to a mistake by GoDaddy.

The post that detailed the nightmare refers to GoDaddy’s “domain ownership protection” services, but that is not actually what it is called. There is no such thing as domain name ownership protection.

What GoDaddy sells is a Domain Protection service that protects against unauthorized transfers and accidental expiration. The victim paid for that protection, but because the error was GoDaddy’s own mistake, the protection did nothing: the domain change went through without the proper documentation.

Read the blog post about how GoDaddy made a mistake, failed to fix the problem, and never even acknowledged that it had made a mistake:

GoDaddy Gave a Domain to a Stranger Without Any Documentation


Google’s AI Overviews Cut Organic Clicks 38%, Field Study Finds via @sejournal, @MattGSouthern

A randomized field experiment finds Google’s AI Overviews reduce organic clicks to external websites by 38% on queries where they appear, while self-reported search satisfaction stays nearly unchanged when the summaries are removed.

The working paper by researchers at the Indian School of Business and Carnegie Mellon University was posted to SSRN this month. Authors Saharsh Agarwal and Ananya Sen describe it as the first randomized field experiment to test how AI Overviews affect user behavior in a real browsing environment.

How The Experiment Worked

Agarwal and Sen built a Chrome extension that randomly assigned 1,065 U.S. participants to one of three groups. People were recruited from Prolific and used Chrome on desktop. They also had to meet minimum browsing-history thresholds, so the sample reflects active desktop Chrome users rather than all Google users.

The control group saw Google Search normally. A “Hide AIO” group had the extension remove AI Overviews in real time. A third group was redirected to Google’s AI Mode for all searches. The study ran for two weeks per participant between January and February 2026.

Researchers pre-registered the experiment with the AEA RCT Registry before data collection. Over 95% of users in the Hide AIO group did not detect any changes during the study.

What The Researchers Found

AI Overviews appeared on 42% of queries, and removing them increased outbound clicks from 0.38 to 0.61 per search. They reduced outbound organic clicks by 38% on triggered queries, with zero-click search rising from 54% to 72%.

Effects were strongest when AI Overviews appeared at the top of the page, which occurred 85% of the time. Removing top-position AI Overviews nearly doubled outbound clicks, but lower ones had no effect.

Sponsored clicks and search frequency remained steady, indicating substitution between AI Overviews and organic visits.

The User Experience Finding

The endline survey used a 1-to-5 Likert scale to assess participants’ search experience. Responses from the control and Hide AIO groups were nearly identical across all measures, including satisfaction, information quality, and ease of finding information.

The researchers wrote that AI Overviews “divert traffic away from publishers without delivering measurable improvements in user experience.”

How AI Mode Compared

Participants directed to AI Mode had lower outbound click rates, higher zero-click rates, and lower satisfaction at endline compared to other groups.

The authors note that these results are exploratory, as higher attrition, some uninstalling of the extension, or finding workarounds may have influenced the outcomes.

Why This Matters

Independent measurements of the impact of AI Overviews on traffic have mostly been correlational. Pew Research found users click 8% of the time with AI Overviews, compared to 15% without. Ahrefs analyzed GSC data and reported a 58% drop in click-through rate for top-ranking pages when AI Overviews appeared.

This experiment adds a different approach by randomly assigning users to see AI Overviews or not, isolating the causal effect.

Google VP Liz Reid claims AI Overviews cut “bounce clicks,” but provides no data backing the user-benefit side. The Agarwal and Sen paper tested a related question with a randomized design, finding no measurable change in satisfaction or ease of finding information.

Looking Ahead

The paper is a draft on SSRN and is not peer-reviewed. The authors will add more results, and we will provide an update if the findings change.

The Technical SEO Audit Needs A New Layer via @sejournal, @slobodanmanic

The standard technical SEO audit checks crawlability, indexability, website speed, mobile-friendliness, and structured data. That checklist was designed for one consumer: Googlebot.

This is how it’s always been.

In 2026, your website has at least a dozen additional non-human consumers. AI crawlers like GPTBot, ClaudeBot, and PerplexityBot train models and power AI search results. User-triggered agents like the newly announced Google-Agent, or its “siblings” Claude-User and ChatGPT-User, browse websites on behalf of specific humans in real time. A Q1 2026 analysis across Cloudflare’s network found that 30.6% of all web traffic now comes from bots, with AI crawlers and agents making up a growing share. Your technical audit needs to account for all of them.

Here are the five layers to add to your existing technical SEO audit.

Layer 1: AI Crawler Access

Your robots.txt was probably written for Googlebot, Bingbot, and maybe a few scrapers. AI crawlers need their own robots.txt rules, and they need to be separate from Googlebot and Bingbot.

What To Check

Review your robots.txt for rules targeting AI-specific user agents: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider, AppleBot-Extended, CCBot, and ChatGPT-User. If none of these appear, you’re running on defaults, and those defaults might not reflect what you actually want. Never accept the defaults unless you know they are exactly what you need.

The key is making a conscious decision per crawler rather than blanket allowing or blocking everything. Not all AI crawlers serve the same purpose. AI crawler traffic can be split into three categories: training crawlers that collect data for model training (89.4% of AI crawler traffic according to Cloudflare data), search crawlers that power AI search results (8%), and user-triggered agents like Google-Agent and ChatGPT-User that browse on behalf of a specific human in real time (2.2%). Each category warrants a different robots.txt decision.

Cloudflare Radar data showing traffic volume by crawl purpose (Q1 2026); screenshot by author, April 2026.

The crawl-to-referral ratios from Cloudflare’s Radar report can make this an informed decision for you. Anthropic’s ClaudeBot crawls 20.6 thousand pages for every single referral it returns. OpenAI’s ratio is 1,300:1. Meta sends no referrals. Blocking OpenAI’s OAI-SearchBot or PerplexityBot reduces your visibility in ChatGPT Search and Perplexity’s AI answers. Blocking training-focused crawlers like CCBot or Meta’s crawler prevents data extraction from a provider that sends zero traffic back. The crawl-to-referral ratios tell you who is taking without giving.

There is one crawler that requires special attention. Google added Google-Agent to its official list of user-triggered fetchers on March 20, 2026. Google-Agent identifies requests from AI systems running on Google infrastructure that browse websites on behalf of users. Unlike traditional crawlers, Google-Agent ignores robots.txt. Google’s position is that since a human initiated the request, the agent acts as a user proxy rather than an autonomous crawler. Blocking Google-Agent requires server-side authentication, not robots.txt rules. This is both interesting and important for the future, even if it is not within the scope of this article.

Official documentation for each crawler:

Layer 2: JavaScript Rendering

Googlebot renders JavaScript using headless Chromium. There is nothing new about that. What is new and different is that virtually every major AI crawler does not render JavaScript.

Crawler                  Renders JavaScript
GPTBot (OpenAI)          No
ClaudeBot (Anthropic)    No
PerplexityBot            No
CCBot (Common Crawl)     No
AppleBot                 Yes
Googlebot                Yes

AppleBot (which uses a WebKit-based renderer) and Googlebot are the only major crawlers that render JavaScript. Four of the six major web crawlers (GPTBot, ClaudeBot, PerplexityBot, and CCBot) fetch static HTML only, making server-side rendering a requirement for AI search visibility, not an optimization. If your content lives in client-side JavaScript, it is invisible to the crawlers training OpenAI, Anthropic, and Perplexity’s models and powering their AI search products.

What To Check

Run curl -s [URL] on your critical pages and search the output for key content like product names, prices, or service descriptions. If that content isn’t in the curl response, GPTBot, ClaudeBot, and PerplexityBot can’t see it either. Alternatively, use View Source in your browser (not Inspect Element, which shows the rendered DOM after JavaScript execution) and check whether the important information is present in the raw HTML.

Curl fetch of No Hacks homepage (image from author, April 2026).

Single-page applications (SPAs) built with React, Vue, or Angular are particularly at risk unless they use server-side rendering (SSR) or static site generation (SSG). A React SPA that renders product descriptions, pricing, or key claims entirely on the client side is sending AI crawlers a blank page with a link to the JavaScript bundle.

The fix isn’t complicated. Server-side rendering (SSR), static site generation (SSG), or pre-rendering solves this for every major framework. Next.js supports SSR and SSG natively for React, Nuxt provides the same for Vue, and Angular Universal handles server rendering for Angular applications. The audit just needs to flag which pages depend on client-side JavaScript for critical content.

Layer 3: Structured Data For AI

Structured data has been part of technical SEO audits for years, but the evaluation criteria need updating. The question is no longer just “does this page have schema markup?” It’s “does this markup help AI systems understand and cite this content?”

What To Check

  • JSON-LD implementation (preferred over Microdata and RDFa for AI parsing).
  • Schema types that go beyond the basics: Organization, Article, Product, FAQ, HowTo, Person.
  • Entity relationships: sameAs, author, publisher connections that link your content to known entities.
  • Completeness: are all relevant properties populated, or are you just checking a box using skeleton schemas with name and URL?

Why This Matters Now

Microsoft’s Bing principal product manager Fabrice Canel confirmed in March 2025 that schema markup helps LLMs understand content for Copilot. The Google Search team stated in April 2025 that structured data gives an advantage in search results.

No, you can’t win with schema alone. Yes, it can help.

The data density angle matters too. The GEO research paper by Princeton, Georgia Tech, the Allen Institute for AI, and IIT Delhi (presented at ACM KDD 2024, first to publicly use the term “GEO”) found that adding statistics to content improved AI visibility by 41%. Yext’s analysis found that data-rich websites earn 4.3x more AI citations than directory-style listings. Structured data contributes to data density by giving AI systems machine-readable facts rather than requiring them to extract meaning from prose.

An important caveat: No peer-reviewed academic studies exist yet on schema’s impact on AI citation rates specifically. The industry data is promising and consistent, but treat these numbers as indicators rather than guarantees.

W3Techs reports that approximately 53% of the top 10 million websites use JSON-LD as of early 2026. If your website isn’t among them, you’re missing signals that both traditional and AI search systems use to understand your content.

Duane Forrester, who helped build Bing Webmaster Tools and co-launched Schema.org, argues that schema markup is only step one. As AI agents continue moving from simply interpreting pages to making decisions, brands will also need to publish operational truth (pricing, policies, constraints) in machine-verifiable formats with versioning and cryptographic signatures. Publishing machine-verifiable source packs is beyond the scope of a standard audit today, but auditing structured data completeness and accuracy is the foundation verified source packs build on.

Layer 4: Semantic HTML And The Accessibility Tree

The first three layers of the AI-readiness audit cover crawler access (robots.txt), JavaScript rendering, and structured data. The final two address how AI agents actually read your pages and what signals help them discover and evaluate your content.

Most SEOs evaluate HTML for search engine consumption. Agentic browsers like ChatGPT Atlas, Chrome with auto browse, and Perplexity Comet don’t parse pages the way Googlebot does. They read the accessibility tree instead.

The accessibility tree is a parallel representation of your page that browsers generate from your HTML. It strips away visual styling, layout, and decoration, keeping only the semantic structure: headings, links, buttons, form fields, labels, and the relationships between them. Screen readers like VoiceOver and NVDA have used the accessibility tree for decades to make websites usable for people with visual impairments. AI agents now use the same tree to understand and interact with web pages.

And the reason is simple: efficiency. Processing screenshots is both more expensive and slower than working with the accessibility tree.

This is what an accessibility tree looks like in Google Chrome (image from author, April 2026).

This matters because the accessibility tree exposes what your HTML actually communicates, not what your CSS (or JS) makes it look like. A <div> styled to look like a button doesn’t appear as a button in the accessibility tree. An image without alt text means nothing. A heading hierarchy that skips from H1 to H4 creates a broken structure that both screen readers and AI agents will struggle to navigate.

Microsoft’s Playwright MCP, the standard tool for connecting AI models to browser automation, uses accessibility snapshots rather than raw HTML or screenshots. Playwright MCP’s browser_snapshot function returns an accessibility tree representation because it’s more compact and semantically meaningful for LLMs. OpenAI’s documentation states that ChatGPT Atlas uses ARIA tags to interpret page structure when browsing websites.

Web accessibility and AI agent compatibility are now the same discipline. Proper heading hierarchy (H1-H6) creates meaningful sections that AI systems use for content extraction. Semantic elements like <nav>, <main>, <article>, and <footer> tell machines what role each content block plays. Form labels and descriptive button text make interactive elements understandable to agents that parse the accessibility tree instead of rendering visual design.

What To Check

  • Heading hierarchy: logical H1-H6 structure that machines can use to understand content relationships.
  • Semantic elements: nav, main, article, section, aside, header, footer, used appropriately.
  • Form inputs: every input has a label, every button has descriptive text.
  • Interactive elements: clickable things use <a> or <button>, not a <div> with a click handler.

  • Accessibility tree: run a Playwright MCP snapshot or test with VoiceOver/NVDA to see what agents actually see.

Somehow, things are getting worse on this front. The WebAIM Million 2026 report found that the average web page now has 56.1 accessibility errors, up 10.1% from 2025.

ARIA (Accessible Rich Internet Applications) usage increased 27% in a single year. ARIA is a set of HTML attributes that add extra semantic information to elements, telling screen readers and AI agents things like “this div is actually a dialog” or “this list functions as a menu.” But what’s critical is this: pages with ARIA present had significantly more errors (59.1 on average) than pages without ARIA (42 on average). Adding ARIA without understanding it makes things worse, not better, because incorrect ARIA overrides the browser’s default accessibility tree interpretation with wrong information. Start with proper semantic HTML. Add ARIA only when native elements aren’t sufficient.

Technical SEOs do not need to become accessibility experts. But treating accessibility as someone else’s problem is no longer viable when the same tree that screen readers parse is now the primary interface between AI agents and your website.

Sidenote: The Markdown Shortcut Doesn’t Work

Serving raw markdown files to AI crawlers instead of HTML can result in a 95% reduction in token usage per page. However, Google Search Advocate John Mueller called this “a stupid idea” in February 2026 on Bluesky. Mueller’s argument was this: “Meaning lives in structure, hierarchy and context. Flatten it and you don’t make it machine-friendly, you make it meaningless.” LLMs were trained on normal HTML pages from the beginning and have no problems processing them. The answer isn’t to create a flat, simplified version for machines. It’s to make the HTML itself properly structured. Well-written semantic HTML already is the machine-readable format. Besides, that simplified version already exists in the accessibility tree, and it is what AI agents already use.

Layer 5: AI Discoverability Signals

The final layer covers signals that don’t fit neatly into traditional audit categories but directly affect how AI systems discover and evaluate your website.

llms.txt (dishonorable mention). Listed first for one reason only: ask any LLM what you should do to make your website more visible to AI systems, and llms.txt will be at or near the top of the list. It’s their world, I guess. The llms.txt specification provides a simple markdown file that helps AI agents understand your website’s purpose, structure, and key content. No large-scale adoption data has been published yet, and its actual impact on AI citations is unproven. But LLMs consistently recommend it, which means AI-powered audit tools and consultants will flag its absence. It takes minutes to create and costs nothing to maintain.
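
For reference, the file itself is short. A minimal sketch following the llms.txt convention (an H1 title, a blockquote summary, then sections of annotated links); every name and URL below is a placeholder:

```
# Example Company

> Example Company sells widgets and publishes widget maintenance guides.

## Docs

- [Product catalog](https://www.example.com/products.md): full widget list with specs
- [Support policies](https://www.example.com/support.md): returns, warranty, contact options

## Optional

- [Company history](https://www.example.com/about.md)
```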

OK, now that we’ve got that out of the way, let’s look at what might really matter.

AI crawler analytics. Are you monitoring AI bot traffic? Cloudflare’s AI Audit dashboard shows which AI crawlers visit, how often, and which pages they hit. If you’re not on Cloudflare, check server logs for Google-Agent, ChatGPT-User, and ClaudeBot user agent strings. Google publishes a user-triggered-agents.json file containing IP ranges that Google-Agent uses, so you can verify whether incoming requests are genuinely from Google rather than spoofed user agent strings.
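
If you are working from raw access logs, even a rough count of claimed user agents is a useful starting point. A minimal Python sketch follows; the log path and bot list are placeholders, and it counts claimed user agents only, without verifying requests against published IP ranges.

```python
from collections import Counter
from pathlib import Path

# Substrings to look for; extend with other AI crawlers relevant to your stack.
AI_BOTS = ["ChatGPT-User", "ClaudeBot", "Google-Agent"]

def count_ai_bot_hits(log_path: str) -> Counter:
    """Count log lines whose user agent field mentions a known AI bot."""
    hits = Counter()
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

print(count_ai_bot_hits("/var/log/nginx/access.log"))  # placeholder path
```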

Entity definition. Does your website clearly define what the business is, who runs it, and what it does? Not in marketing copy, but in structured, machine-parseable markup. Organization schema should include name, URL, logo, founding date, and sameAs links to verified profiles on LinkedIn, Crunchbase, and Wikipedia. Person schema for key people should connect them to the organization via author and employee properties. AI systems need to resolve your identity as a distinct entity before they can confidently recommend you over competitors with similar names or offerings. Don’t slap this on top of your website when your designer is done with their work. Start here; it will make your life easier.
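
As a sketch of the fields involved, the following Python builds a connected Organization and Person graph and prints the JSON-LD; every name, URL, and profile link is a placeholder, and the output belongs in a script block of type application/ld+json on your site.

```python
import json

# Placeholder entity data; replace with your real organization and people.
organization = {
    "@type": "Organization",
    "@id": "https://www.example.com/#organization",
    "name": "Example Company",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    "foundingDate": "2015-01-01",
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.crunchbase.com/organization/example",
    ],
    # Connect key people to the organization so machines can resolve the entity.
    "employee": [{"@id": "https://www.example.com/#jane-doe"}],
}
founder = {
    "@type": "Person",
    "@id": "https://www.example.com/#jane-doe",
    "name": "Jane Doe",
    "worksFor": {"@id": "https://www.example.com/#organization"},
}

graph = {"@context": "https://schema.org", "@graph": [organization, founder]}
print(json.dumps(graph, indent=2))
```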

Content position. Where you place information on the page directly affects whether AI systems cite it. Kevin Indig’s analysis of 98,000 ChatGPT citation rows across 1.2 million responses found that 44.2% of all AI citations come from the top 30% of a page. The bottom 10% earns only 2.4-4.4% of citations regardless of industry. Duane Forrester calls this “dog-bone thinking”: strong at the beginning and end, weak in the middle, a pattern Stanford researchers have confirmed as the “lost in the middle” phenomenon. Audit your key pages: are the most important claims and data points in the first 30%, or buried in the middle?
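
One rough way to run that audit at scale is to measure where a key claim starts as a fraction of a page’s visible text. A minimal Python sketch, with the URL and claim as placeholders; the 30% and 90% cutoffs mirror the figures above.

```python
import requests
from bs4 import BeautifulSoup

def claim_position(url: str, claim: str):
    """Return where a claim starts in the page's visible text, as a 0-1 fraction."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    text = soup.get_text(" ", strip=True)
    index = text.find(claim)
    return None if index == -1 or not text else index / len(text)

pos = claim_position("https://www.example.com/pricing", "Plans start at $29 per month")
if pos is None:
    print("Claim not found in visible text")
else:
    zone = "top 30%" if pos <= 0.3 else ("bottom 10%" if pos >= 0.9 else "middle")
    print(f"Claim starts at {pos:.0%} of the page text ({zone})")
```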

Content extractability. Pull any key claim from your page and read it in isolation. Does it still make sense without the surrounding paragraphs? AI retrieval systems like ChatGPT, Perplexity, and Google AI Overviews extract and cite individual passages, and sentences that rely on “this,” “it,” or “the above” for meaning become unusable once pulled out of their original context. Ramon Eijkemans’ excellent utility-writing framework maps these principles to documented retrieval mechanisms: self-contained sentences, explicit entity relationships, and quotable anchor statements that AI systems can confidently cite without additional inference.

The Audit Checklist

Each item lists the check, the tool or method, and what you’re looking for.

  • AI crawler robots.txt (manual review): conscious per-crawler decisions.
  • JavaScript rendering (curl, View Source, Lynx browser): critical content present in static HTML.
  • Structured data (Schema validator, Rich Results Test): complete, connected JSON-LD.
  • Semantic HTML (axe DevTools, Lighthouse): proper elements and heading hierarchy.
  • Accessibility tree (Playwright MCP snapshot, screen reader): what agents actually see.
  • AI bot traffic (Cloudflare, server logs): volume, pages hit, patterns.
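
For the JavaScript rendering item above, the underlying question is whether critical content exists in the HTML before any scripts run. A minimal Python sketch of that check, assuming the page is publicly fetchable; the URL and phrases are placeholders.

```python
import requests

def check_static_html(url: str, phrases: list[str]) -> dict[str, bool]:
    """Check which critical phrases appear in the unrendered HTML source."""
    html = requests.get(url, timeout=30, headers={"User-Agent": "audit-script"}).text
    return {phrase: phrase in html for phrase in phrases}

results = check_static_html(
    "https://www.example.com/product",  # placeholder URL
    ["Free shipping on orders over $50", "Add to cart"],  # placeholder critical content
)
for phrase, found in results.items():
    print(f"{'OK' if found else 'MISSING'}: {phrase}")
```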

From Audit To Action

This audit identifies gaps. Fixing them requires a sequence, because some fixes depend on others. Optimizing content structure before establishing a machine-readable identity means agents can extract your information, but can’t confidently attribute it to your brand. I wrote Machine-First Architecture to provide that sequence: identity, structure, content, interaction, each pillar building on the previous one.

Why Technical SEO Audit Is Where This Belongs

None of this is technically SEO. Robots.txt rules for AI crawlers don’t affect Google rankings. Accessibility tree optimization doesn’t move keyword positions. Content position scoring has nothing to do with search indexing.

But most of it did grow out of technical SEO. Crawl management, structured data, semantic HTML, JavaScript rendering, server log analysis: these are skills technical SEOs already have. The audit methodology transfers directly. The consumer it serves is what changed.

The websites that get cited in AI responses, that work when Chrome auto browse visits them, and that show up when someone asks ChatGPT for a recommendation won’t be the ones with the best content alone. They’ll be the ones whose technical foundation made that content accessible to machines. Technical SEOs are the people best equipped to build that foundation. The old audit template just needs a new section to reflect it.

Featured Image: Anton Vierietin/Shutterstock

AI Overview CTR Fell 61%, But Clicks Didn’t Collapse via @sejournal, @MattGSouthern

Brand-cited AI Overview CTR fell 61% from Q3 to Q4, according to a new report from Seer Interactive, but the clicks on those pages barely moved.

The drop looks alarming on a dashboard, but it isn’t quite what it seems. Seer’s analysis of 5.47 million queries across 53 brands shows what’s actually happening.

What Happened In Q4

In September, brand-cited pages in AI Overviews received 15.8 million impressions and 398,798 clicks, with a CTR of 2.52%.

In October, impressions doubled to 33.1 million, and clicks increased slightly to 400,271, but CTR dropped to 1.21% as rapid impression growth outpaced clicks.

This isn’t a performance collapse but an arithmetic effect: impressions grew much faster than clicks, so the ratio fell.

November Is A Different Story

November’s impressions rose to 39.5 million, but clicks dropped to 301,783, and CTR fell to 0.76%.

Something pulled clicks down while visibility increased, and Seer’s data can’t explain why. For Q4 as a whole, the two patterns combine into the 61% figure, which is why it’s important to analyze months separately in Search Console data.

What Seer Can’t Tell You

The agency is clear on one limit: it can’t determine whether the October impression surge was due to Google serving AI Overviews for more queries where brands were already cited, or because the brands earned citations through their SEO. Both explanations fit, and neither can be confirmed without a detailed analysis of the account.

Websites with similar data face the same ambiguity. Growing impressions are good if earned, but noise if they result from Google’s decisions. Your dashboard might not clarify this without account-level query analysis.

How This Fits With Past AIO CTR Coverage

Several studies show lower CTRs when AI Overviews appear. Ahrefs analyzed 146 million results and found a 20.5% AIO trigger rate, which was higher for informational and question queries.

A SISTRIX analysis in Germany reported a 59% drop in CTR at position one with AIOs, and Pew Research found that U.S. users clicked 8% of the time with AIOs versus 15% without.

Seer’s October data raises the question of whether a falling CTR on cited pages always means fewer clicks or can indicate greater visibility with the same click count.

Other Findings Worth Noting

Brand-cited pages get about 120% more clicks per impression than uncited pages on AIO SERPs, but cited pages still lag no-AIO pages by 38%. A citation helps, but it doesn’t restore the click-through rates pages earned before AI Overviews.

Seer reports that organic CTR on AIO SERPs rose from 1.3% in December 2025 to 2.4% in February 2026, but calls this a leveling off rather than a recovery and advises against forecasting based on two months’ data.

Why This Matters

A falling CTR in your Q4 data doesn’t necessarily mean you’re losing clicks; check impressions for the same period before assuming there’s a problem.

Benchmarks show general trends, but your data tells your specific story. If clicks stay flat or grow faster than impressions, it’s a different issue than actual decline.

Looking Ahead

The main thing to watch is whether added AI Overview visibility starts driving more clicks, or whether cited pages continue absorbing more impressions without much traffic upside.

If that pattern holds, the value of being cited may look different from what CTR alone suggests. You may need to separate visibility, clicks, and citation coverage before deciding whether AI Overview exposure is helping or simply changing how performance gets measured.


Featured Image: TaniaKitura/Shutterstock

Google Pushes “Bounce Clicks” Explanation For AI Overview Traffic Loss via @sejournal, @MattGSouthern

Google’s head of Search, Liz Reid, told Bloomberg’s Odd Lots podcast that AI Overviews are reducing “bounce clicks” from publisher pages, continuing an argument she has made in public appearances since last year.

Reid appeared on the April 23 episode of Odd Lots. Hosts Joe Weisenthal and Tracy Alloway asked how AI Overviews affect publisher traffic and ad revenue.

What Reid Said

Reid described what she called “bounce clicks” as the category of clicks AI Overviews are reducing.

She said users who quickly click and return to search no longer need to visit the page because they get the fact from the Overview. Those wanting to read longer still click through. She acknowledged fewer ad clicks for some queries but said increased query volume balances this. The argument aligns with Reid’s points in other public appearances.

The Pattern

Reid published a Google blog post in August stating that organic click volume from Google Search to websites was “relatively stable” year-over-year and that “quality clicks,” defined as visits where users don’t quickly click back, had increased.

In an October Wall Street Journal interview, she explicitly used the phrase “bounced clicks” and said that ad revenue with AI Overviews had been relatively stable.

The Bloomberg appearance makes the same basic case Reid made in August, describing some lost clicks as low-value visits where users would have quickly returned to Search.

What Reid Didn’t Say

In none of those three appearances has Reid provided supporting data.

Her August blog post included no charts, percentages, or year-over-year comparisons. On Bloomberg, she told Weisenthal and Alloway that Google tracks whether people come to search more often as one of its key signals, without providing numbers.

Weisenthal and Alloway asked about traffic and monetization, but the interview didn’t include follow-up questions requesting evidence for Reid’s explanation.

Google has not publicly shared data that would let outside observers test that distinction.

What Independent Data Shows

Chartbeat data published in the Reuters Institute’s Journalism and Technology Trends and Predictions 2026 report found that global publisher Google search traffic dropped by roughly a third. Google Discover referrals fell 21% year-over-year across more than 2,500 publisher websites.

Seer Interactive’s analysis found that organic click-through rate for queries with AI Overviews fell from 1.76% in 2024 to 0.61% in 2025, a 61% drop. Seer noted those queries tend to be informational searches that historically had lower CTRs.

Pew Research Center’s study of 68,000 real search queries found users clicked on results 8% of the time when AI Overviews appeared, compared with 15% when they did not.

Digital Content Next, a trade body whose members include the New York Times, Condé Nast, and Vox, reported a median 10% year-over-year decline in Google search referrals across 19 member publishers between May and June 2025. DCN CEO Jason Kint said at the time that the member data offered “ground truth” about what was happening to publisher traffic.

Why This Matters

Reid’s “bounce clicks” description answers a question the data raises, but it answers it without data of its own. That’s worth keeping in mind when evaluating any public claim from a platform that controls the measurements.

A business owner can’t verify from Reid’s Bloomberg appearance whether AI Overviews are cutting only low-value clicks or cutting across query types. The independent data measures total clicks and click-through rates, not the subset of clicks Reid describes as low-value. If Google has internal data that separates the two, it hasn’t shared it in the eight months since the August blog post.

Looking Ahead

Reid said that Google measures how often people return to Search. That signal tracks Google’s retention. Publishers need a traffic metric, but Google hasn’t shared one. Until it does, “bounce clicks” should be treated as a claim rather than a finding.

Google’s Updates Push Search Further Into Task Completion via @sejournal, @MattGSouthern

Google announced three updates to Search and AI Mode this week, which Roger Montti reported for SEJ. Reading his article motivated me to examine these updates, the broader pattern, and their implications for search this year.

Looking at this in detail, it appears the updates push more of what used to be a results-page experience into task completion.

What Google Announced

Google launched individual hotel price tracking in Search, now available globally for signed-in users searching in English and Spanish. Email alerts notify users of rate changes during selected dates.

Additionally, in March, Canvas trip planning in AI Mode moved from Labs preview to general U.S. availability, allowing users to describe trips and receive custom itineraries with flights, hotels, and attractions that save automatically. Agent-powered store calling, first introduced in classic Search, will soon roll out to AI Mode, enabling Google’s AI to call nearby stores and check inventory using Gemini models and Duplex.

Rose Yao, Product Leader in Search, posted the updates on X. Additional detail sits in Google’s blog post.

The Pattern

These updates reflect Google’s product direction seen in research, patents, and executive statements since January.

In January, Google published the SAGE research paper on training agents for reasoning chains over four steps, laying groundwork for multi-step tasks in Search.

Pichai’s April interview made the language public. Pichai said, “A lot of what are just information-seeking queries will be agentic in Search.” Our deep dive tracked how his language shifted from “search will change” to specific descriptions of task completion.

Earlier this month, Montti argued that task-based agentic search was already changing SEO, citing Google’s global rollout of agentic restaurant booking as evidence that the future tense in Pichai’s language was already past tense in product.

A week ago, the U.S. Patent Office published a Google continuation patent titled “Autonomously providing search results post-facto” (our coverage). The filing describes a system that waits for answers when none are immediately available, then delivers them later through assistant interactions.

These updates continue in the same direction. Canvas moves from Labs preview to broader U.S. availability, approximately five months after its initial launch in November. Store calling has been introduced in AI Mode following its debut in Search last November. Additionally, hotel price tracking is now available in Search at the single-property level.

Microsoft’s recent news fits the same pattern. Sumit Chauhan, President of Microsoft’s Office Product Group, wrote in a company blog post that Copilot’s agentic capabilities are now generally available in Word, Excel, and PowerPoint:

“Copilot creates the most value when it performs the work—formatting, restructuring, building visuals, and transforming data—rather than just suggesting steps.”

The features are the default for Microsoft 365 Copilot and Premium subscribers, and available to Personal and Family plans. It’s unclear whether businesses will receive similar reporting for agent-driven surfaces, a point not addressed in Microsoft’s post.

The Vocabulary Hasn’t Settled

Google uses “agentic” in its product language and announcements, describing features like store calling and AI Mode as task-oriented. A SeatGeek partnership was called “Google’s Agentic AI Search Experience.” Other companies use similar agent-framing language.

Pichai envisions a future in which Search becomes an “agent manager” overseeing various tasks. That framing positions Google as an orchestration layer on top of agents rather than a direct competitor to them.

Montti has used “task-based agentic search” in his recent SEJ coverage, sometimes shortened to TBAS. That’s his shorthand for this beat, not industry-standard terminology.

“Agentic” describes the capability. “Agent manager” refers to a specific architectural role that Google is claiming. “Task-based” centers the user’s goal. When three different labels show up in one month, the market is still working out what to call this.

Why This Matters For Search Professionals

Features introduced this week change the meaning of visibility across several business categories.

Local retailers now encounter a new discovery surface. When store calling arrives in AI Mode, Google’s agents, rather than users, will contact businesses to verify stock and details. Google hasn’t disclosed which stores its agents will contact first, how eligibility is decided, or whether specific business information influences the process.

An analysis of 68 million AI crawler visits across 858,457 Duda-hosted sites shows that sites with connections to Yext, Google Business Profile, and review systems were crawled more often than those without. These findings describe crawler behavior, not agent calls. It’s unknown if similar signals influence which stores are contacted.

Hotels and travel businesses now face individual-property price monitoring. Trip itineraries are based on Canvas’s selection logic. No report shows if a hotel appeared in a Canvas plan, triggered an alert, or was named in an AI Mode response.

Publishers face continued pressure from AI-driven summarization. Index Exchange analyzed 1,200 publishers on its exchange platform, finding that 69% experienced year-over-year declines in ad opportunities, with an average drop of 14%.

Declines varied across verticals. Health and careers publishers saw 40-50% ad drops, while news and politics publishers saw only 7% declines.

Vanessa Otero, Founder and CEO of Ad Fontes Media, told Index Exchange for the same piece:

“When it’s important enough that you want to be accurately and fully informed about some big international, national, or local event, a quality news site is still a much better experience than asking an AI chatbot, which may give a genericized or inaccurate answer. AI users already know this, which is why most news consumers still go direct to their trusted sites. News has always performed well for advertisers, and if the trend of news site resilience holds, this inventory will likely become the most valuable on the open web of the future.”

Travel publishers face pressure as Canvas compiles itineraries without citing sources, making it impossible for publications to know if their coverage influences trip plans.

Ecommerce retailers lack visibility into which stores get called, so they can’t determine if inventory feeds, listing accuracy, or Google Business Profile signals are effective.

Multi-platform coverage complicates strategy. Google’s agents favor structured data and verified profiles. Perplexity Comet routes across 19 models with diverse retrieval preferences. ChatGPT Atlas scrapes browser content directly. OpenAI’s Operator uses GUI vision to interact with rendered pages.

One business has multiple discovery mechanisms with varying technical needs. Single-strategy optimization no longer covers all surfaces.

What’s Still Invisible

Since our coverage flagged the measurement gap, it has widened.

Search professionals still can’t see whether their business was included in a Canvas trip plan. They can’t see whether an agent called them. They can’t see whether their hotel was surfaced in a price-tracking alert. And they can’t see how often their content was used to assemble someone else’s itinerary.

No new reporting surfaces were shipped alongside the updates. Alphabet reported $63.1 billion in Google Search & Other advertising revenue for Q4 2025, up 17% year-over-year, with management crediting Search and Cloud acceleration and AI usage gains. No new reporting tools have arrived to help businesses track their role in AI-mediated search.

The pattern holds across platforms. ChatGPT referral data is limited to what OpenAI shares. Perplexity citation visibility is inside Perplexity. Google’s agent surfaces don’t cleanly map to Search Console.

Academic research on agent training continues to advance. Two April 2026 papers on arXiv show the pace. CW-GRPO, from Junzhe Wang and colleagues, proposes reinforcement-learning improvements for multi-turn search agents. SKILL0, developed by Zhengxi Lu and colleagues at Zhejiang University, trains agents to internalize skill packages. The result is agents that operate without instruction overhead during inference.

The training pipeline is evolving faster than the measurement pipeline businesses depend on. Search professionals can’t close that gap alone. Google, OpenAI, Perplexity, and Anthropic would all need to provide equivalent agent-surface reporting. None has publicly committed to doing so.

Looking Ahead

Pichai said that 2027 would be “an important inflection point for certain things.” He cited non-engineering workflows and some agentic business processes. Our coverage walked through that timeline.

May brings Google I/O and Microsoft Build. Both companies are likely to expand their agentic surfaces at those events, making reporting the most urgent thing to watch. If businesses can’t see their role in task-based search, they can’t optimize for it or argue about who should pay for it.

Two longer-running questions sit behind that. Pay-per-click worked when users clicked links. Store calling, Canvas planning, and price tracking don’t produce clicks, and no platform has described a replacement. Schema.org was designed for search engine crawling, not for agents that need real-time inventory, booking availability, and action endpoints. Standards for agent-readable business data haven’t caught up either.

What happens next depends on whether any platform builds the reporting alongside the capability. So far, none has described how it would. Until that changes, businesses will be optimizing for surfaces they can’t see. Next signals land at I/O and Build in three weeks.

Google Won’t Act On Spam Reports If They Contain Personal Information via @sejournal, @martinibuster

Google updated their spam reporting documentation to make it clearer that spam reports are not wholly confidential and that it’s possible for personally identifiable information to be shared with the sites receiving a manual action.

Change In Response To Feedback

Google’s changelog noted that the update was prompted by feedback about personal information in spam reports being shared with spammy sites that receive a manual action (formerly known as a penalty).

The update contains a new notice that spam reports containing personal information will not be processed.

The changelog noted:

“Clarifying when and why we may take manual action based on spam reports
What: Further clarified when and why we may take manual action based on spam reports.
Why: To address feedback we received about the change on using spam reports to take manual action.”

Google removed the following from their documentation:

“If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

The above wording was replaced with the following:

“Don’t include any personally identifying information in your submission. To comply with regulations, we must send the submission text to the site owner to help them understand the context of a manual action, if one is issued.

Because of this, we won’t process your submission if we determine it contains personally identifying information to protect privacy. Not including such information fully ensures your information is safe and prevents your submission from being discarded.”

Action Moving Forward

It’s good that Google won’t proceed with a manual action if the report contains personal information, but it also means those reports get discarded. So if you’re submitting spam reports to Google, don’t name your site, your business, yourself, or anything else you don’t want the affected spammer to know.

Read the updated documentation here:

Report spam, phishing, or malware

Learn more about Google’s spam reporting tool: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Featured Image by Shutterstock/andre_dechapelle