Google Says What Content Gets Clicked On AI Overviews via @sejournal, @martinibuster

Google’s Liz Reid, Vice President of Search, recently said that AI Overviews shows what kind of content makes people click through to visit a site. She also said that Google expanded the concept of spam to include content that does not bring the creator’s perspective and depth.

People’s Preferences Drive What Search Shows

Liz Reid affirmed that user behavior tells Google what kinds of content people want to see, such as short-form videos. That behavior prompts Google to show more of it, and the system itself begins to learn and adjust to the kinds of content (forums, text, video, etc.) that users prefer.

She said:

“…we do have to respond to who users want to hear from, right? Like, we are in the business of both giving them high quality information, but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.

…You see it from users, right? Like we do everything from user research to we run an experiment. And so you take feedback from what you hear, from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

The important insight is that user preferences play an active role in shaping what appears in AI search results. Google’s ranking systems are designed to respond not just to quality but to the types of content users seek out and engage with. This means that shifts in user behavior related to content preferences directly influence what is surfaced. The system continuously adapts based on real-world feedback. The takeaway here is that SEOs and creators should actively gauge what kind of content users are engaging with and be ready to pivot in response to changes.

The conversation builds toward the point where Reid explains exactly what kinds of content engage users, based on the feedback Google gets through user behavior.

AI-Generated Content Is Not Always Spam

Reid next turned to AI-generated content, essentially confirming that the bar Google uses to decide what’s high and low quality is agnostic to whether the content was created by a human or an AI.

She said:

“Now, AI generated content doesn’t necessarily equal spam.

But oftentimes when people are referring to it, they’re referring to the spam version of it, right? Or the phrase AI slop, right? This content that feels extremely low value across, okay? And we really want to make an effort that that doesn’t surface.”

Her point is clear: all content is judged by the same standard. If content is judged to be low quality, that judgment is based on the merits of the content, not its origin.

People Click On Rich Content

At this point in the interview, Reid stops talking about low-quality content and turns to the kind of content that makes people click through to a website. She said that user behavior tells Google that users don’t want superficial content; click patterns show that more people click through to content that has depth and expresses a unique perspective rather than mirroring what everyone else is saying. This is the kind of content that engages users and gets clicks in AI search.

Reid explained:

“But what we see is people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?

That surface-level AI generated content, people don’t want that because if they click on that, they don’t actually learn that much more than they previously got. They don’t trust the result anymore.

So what we see with AI Overviews is that we surface these sites and get fewer what we call bounce clicks. A bounce click is like you click on your site, Yeah, I didn’t want that, and you go back.

AI Overviews gives some content, and then we get to surface deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not the AI generated.”

Reid’s comments suggest that, based on click patterns, content offering a distinct perspective or insight derived from experience performs better than low-effort content. There is an intention within AI Overviews not to amplify generic output and to uprank content that demonstrates firm knowledge of the topic.

Google’s Ranking Weights

Here’s an interesting part that explains what gets up-ranked and down-ranked, expressed in a way I’ve not seen before. Reid said that they’ve extended the concept of spam to also include content that repeats what’s already well known. She also said that they are giving more ranking weight to content that brings a unique perspective or expertise to the content.

Here Reid explains the downranking:

“Now, it is hard work, but we spend a lot of time and we have a lot of expertise built on this such that we’ve been able to take the spam rate of what actually shows up, down.

And as well as we’ve sort of expanded beyond this concept of spam to sort of low-value content, right? This content that doesn’t add very much, kind of tells you what everybody else knows, it doesn’t bring it…”

And this is the part where she says Google is giving more ranking weight to content that contains expertise:

“…and tried to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaways

How To Get Upranked On AI Overviews

1. Create “Richer and Deeper” Content

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?”

Takeaway:
Publish content that shows original thought, unique insights, and depth rather than echoing what’s already widely said. In my opinion, using software that analyzes the content that’s already ranking or using a skyscraper/10x content strategy is setting yourself up for doing exactly the opposite of what Liz Reid is recommending. A creator will never express a unique insight by echoing what a competitor has already done.

2. Reflect Human Perspective

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it.”

Takeaway: Incorporate your own analysis, experiences, or firsthand understanding so that the content is authentic and expresses expertise.

3. Demonstrate Expertise and Craft

Reid shared that Google is trying “to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaway:
Effort, originality, and subject-matter knowledge are the qualities Google is up-weighting, so content that demonstrates them should perform better within AI Overviews.

Reid draws a clear distinction between content that repeats what is already widely known and content that adds unique value through perspective or expertise. Google treats superficial content like spam and down-weights it in rankings to reduce its visibility, while actively “upweighting” content that demonstrates effort and insight, what she termed “craft.” Craft means skill, expertise, and mastery of something. The message is that originality and actual expertise are important for ranking well, particularly in AI Overviews, and I would think the same applies to AI Mode.

Watch the interview from about the 18 minute mark:

Google Reminds SEOs How The URL Removals Tool Works via @sejournal, @martinibuster

Google’s John Mueller answered a question about removing hacked URLs that are showing in the index. He explained how to remove the pages from appearing in the search results and then discussed the nuances involved in dealing with this specific situation.

Removing Hacked Pages From Google’s SERPs

The person asking the question was the victim of a Japanese hack attack, so called because the attackers create hundreds or even thousands of rogue Japanese-language web pages. The person had dealt with the issue and removed the spammy infected web pages, leaving 404 pages that are still referenced in Google’s search results.

They now want to remove them from Google’s search index so that the site is no longer associated with those pages.

They asked:

“My site recently got a Japanese attack. However, I shifted that site to a new hosting provider and have removed all data from there.

However, the fact is that many Japanese URLs have been indexed.

So how do I deindex those thousands of URLs from my website?”

The question reflects a common problem in the aftermath of a Japanese hack attack, where hacked pages stubbornly remain indexed long after the pages were removed. This shows that site recovery is not complete once the malicious content is removed; Google’s search index needs to clear the pages, and that can take a frustratingly long time.

How To Remove Japanese Hack Attack Pages From Google

Google’s John Mueller recommended using the URL Removals Tool found in Search Console. Contrary to the implication inherent in the name of the tool, it doesn’t remove a URL from the search index; it just removes it from showing in Google’s search results faster if the content has already been removed from the site or blocked from Google’s crawler. Under normal circumstances, Google will remove a page from the search results after the page is crawled and noted to be blocked or gone (404 error response).

Prerequisites For The URL Removals Tool

Any one of the following must be true of the URL:

  1. The page is removed and returns a 404 or 410 server response code.
  2. The URL is blocked from indexing by a robots meta tag.
  3. The URL is prevented from being crawled by a robots.txt file.
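
Before filing removals, you can spot-check that each hacked URL actually meets one of those conditions. Here is a minimal sketch in Python using the requests library; the URL is a placeholder, and the noindex check is a crude substring test rather than a full HTML parse:

```python
import requests

# Placeholder URL; substitute the hacked URLs still showing in Google's results.
urls = ["https://example.com/rogue-japanese-page.html"]

for url in urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    gone = resp.status_code in (404, 410)  # condition 1: the page is gone
    # Condition 2 (rough check): a noindex directive in the X-Robots-Tag
    # header or somewhere in the returned HTML (e.g., a robots meta tag).
    noindex = (
        "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
        or (resp.status_code == 200 and "noindex" in resp.text.lower())
    )
    print(f"{url}: status={resp.status_code}, gone={gone}, noindex={noindex}")
```

URLs that still return 200 with no noindex signal (and aren’t blocked by robots.txt) won’t drop out on their own, so fix those first.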

Google’s Mueller responded:

“You can use the URL removal tool in search console for individual URLs (also if the URLs all start with the same thing). I’d use that for any which are particularly visible (check the performance report, 24 hours).

This doesn’t remove them from the index, but it hides them within a day. If the pages are invalid / 404 now, they’ll also drop out over time, but the removal tool means you can stop them from being visible “immediately”. (Redirecting or 404 are both ok, technically a 404 is the right response code)”

Mueller clarified that the URL Removals Tool does not delete URLs from Google’s index but instead hides them from search results, faster than natural recrawling would. His explanation is a reminder that the tool has a temporary search visibility effect and is not a way to permanently remove a URL from Google’s index itself. The actual removal from the search index happens after Google verifies that the page is actually gone or blocked from crawling or indexing.

Featured Image by Shutterstock/Asier Romero

Your Brand Is Being Cited By AI. Here’s How To Measure It via @sejournal, @DuaneForrester

Search has never stood still. Every few years, a new layer gets added to how people find and evaluate information. Generative AI systems like ChatGPT, Copilot Search, and Perplexity haven’t replaced Google or Bing. They’ve added a new surface where discovery happens earlier, and where your visibility may never show up in analytics.

Call it Generative Engine Optimization, call it AI visibility work, or just call it the next evolution of SEO. Whatever the label, the work is already happening. SEO practitioners are already tracking citations, analyzing which content gets pulled into AI responses, and adapting strategies as these platforms evolve weekly.

This work doesn’t replace SEO; rather, it builds on top of it. Think of it as the “answer layer” above the traditional search layer. You still need structured content, clean markup, and good backlinks, among the other usual aspects of SEO. That’s the foundation assistants learn from. The difference is that assistants now re-present that information to users directly inside conversations, sidebars, and app interfaces.

If your work stops at traditional rankings, you’ll miss the visibility forming in this new layer. Tracking when and how assistants mention, cite, and act on your content is how you start measuring that visibility.


Image Credit: Duane Forrester

Perplexity explains that every answer it gives includes numbered citations linking to the original sources. OpenAI’s ChatGPT Search rollout confirms that answers now include links to relevant sites and supporting sources. Microsoft’s Copilot Search does the same, pulling from multiple sources and citing them inside a summarized response. And Google’s own documentation for AI Overviews makes it clear that eligible content can be surfaced inside generative results.

Each of these systems now has its own idea of what a “citation” looks like. None of them report it back to you in analytics.

That’s the gap. Your brand can appear in multiple generative answers without you knowing. These are the modern zero-click impressions that don’t register in Search Console. If we want to understand brand visibility today, we need to measure mentions, impressions, and actions inside these systems.

But there’s yet another layer of complexity here: content licensing deals. OpenAI has struck partnerships with publishers including the Associated Press, Axel Springer, and others, which may influence citation preferences in ways we can’t directly observe. Understanding the competitive landscape, not just what you’re doing, but who else is being cited and why, becomes essential strategic intelligence in this environment.

In traditional SEO, impressions and clicks tell you how often you appeared and how often someone acted. Inside assistants, we get a similar dynamic, but without official reporting.

  • Mentions are when your domain, name, or brand is referenced in a generative answer.
  • Impressions are when that mention appears in front of a user, even if they don’t click.
  • Actions are when someone clicks, expands, or copies the reference to your content.

These are not replacements for your SEO metrics. They’re early indicators that your content is trusted enough to power assistant answers.

If you read last week’s piece, where I discussed how 2026 is going to be an inflection year for SEOs, you’ll remember the adoption curve. During 2026, assistants are projected to reach around 1 billion daily active users, embedding themselves into phones, browsers, and productivity tools. But that doesn’t mean they’re replacing search. It means discovery is happening before the click. Measuring assistant mentions is about seeing those first interactions before the analytics data ever arrives.

Let’s be clear. Traditional search is still the main driver of traffic. Google handles over 3.5 billion searches per day. In May 2025, Perplexity processed 780 million queries in a full month. That’s roughly what Google handles in about five hours.

The data is unambiguous. AI assistants are a small, fast-growing complement, not a replacement (yet).

But if your content already shows up in Google, it’s also being indexed and processed by the systems that train and quote inside these assistants. That means your optimization work already supports both surfaces. You’re not starting over. You’re expanding what you measure.

Search engines rank pages. Assistants retrieve chunks.

Ranking is an output-aligned process. The system already knows what it’s trying to show and chooses the best available page to match that intent. Retrieval, on the other hand, is pre-answer-aligned. The system is still assembling the information that will become the answer and that difference can change everything.

When you optimize for ranking, you’re trying to win a slot among visible competitors. When you optimize for retrieval, you’re trying to be included in the model’s working set before the answer even exists. You’re not fighting for position as much as you’re fighting for participation.

That’s why clarity, attribution, and structure matter so much more in this environment. Assistants pull only what they can quote cleanly, verify confidently, and synthesize quickly.

When an assistant cites your site, it’s doing so because your content met three conditions:

  1. It answered the question directly, without filler.
  2. It was machine-readable and easy to quote or summarize.
  3. It carried provenance signals the model trusted: clear authorship, timestamps, and linked references.

Those aren’t new ideas. They’re the same best practices SEOs have worked with for years, just tested earlier in the decision chain. You used to optimize for the visible result. Now you’re optimizing for the material that builds the result.

One critical reality to understand: citation behavior is highly volatile. Content cited today for a specific query may not appear tomorrow for that same query. Assistant responses can shift based on model updates, competing content entering the index, or weighting adjustments happening behind the scenes. This instability means you’re tracking trends and patterns, not guarantees (not that rankings were ever guaranteed, but they are typically more stable). Set expectations accordingly.

Not all content has equal citation potential, and understanding this helps you allocate resources wisely. Assistants excel at informational queries (“how does X work?” or “what are the benefits of Y?”). They’re less relevant for transactional queries like “buy shoes online” or navigational queries like “Facebook login.”

If your content serves primarily transactional or branded navigational intent, assistant visibility may matter less than traditional search rankings. Focus your measurement efforts where assistant behavior actually impacts your audience and where you can realistically influence outcomes.

The simplest way to start is manual testing.

Run prompts that align with your brand or product, such as:

  • “What is the best guide on [topic]?”
  • “Who explains [concept] most clearly?”
  • “Which companies provide tools for [task]?”

Use the same query across ChatGPT Search, Perplexity, and Copilot Search. Document when your brand or URL appears in their citations or answers.

Log the results. Record the assistant used, the prompt, the date, and the citation link if available. Take screenshots. You’re not building a scientific study here; you’re building a visibility baseline.

Once you’ve got a handful of examples, start running the same queries weekly or monthly to track change over time.

You can even automate part of this. Some platforms now offer API access for programmatic querying, though costs and rate limits apply. Tools like n8n or Zapier can capture assistant outputs and push them to a Google Sheet. Each row becomes a record of when and where you were cited. (To be fair, it’s more complicated than 2 short sentences make it sound, but it’s doable by most folks, if they’re willing to learn some new things.)
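
Whether you stay manual or automate, the logging core is the same. Here’s a minimal Python sketch; the fetch step is left as a stub because each platform’s API differs, and the brand terms are placeholders:

```python
import csv
from datetime import date

BRAND_TERMS = ["example.com", "Example Brand"]  # placeholders; use your own domain/brand

def fetch_answer(assistant: str, prompt: str) -> str:
    # Stub: replace with your platform's API call, or paste answers in manually.
    return input(f"Paste the {assistant} answer for '{prompt}': ")

def log_result(assistant: str, prompt: str, answer: str, path: str = "citations.csv") -> None:
    # Record whether any brand term appeared in the answer, one row per check.
    cited = any(term.lower() in answer.lower() for term in BRAND_TERMS)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), assistant, prompt, "yes" if cited else "no"]
        )

prompt = "What is the best guide on [topic]?"
log_result("Perplexity", prompt, fetch_answer("Perplexity", prompt))
```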

This is how you can create your first “AI citation baseline” report, even if you stay fully manual in your approach.

But don’t stop at tracking yourself. Competitive citation analysis is equally important. Who else appears for your key queries? What content formats do they use? What structural patterns do their cited pages share? Are they using specific schema markup or content organization that assistants favor? This intelligence reveals what assistants currently value and where gaps exist in the coverage landscape.

We don’t have official impression data yet, but we can infer visibility.

  • Look at the types of queries where you appear in assistants. Are they broad, informational, or niche?
  • Use Google Trends to gauge search interest for those same queries. The higher the volume, the more likely users are seeing AI answers for them.
  • Track assistant responses for consistency. If you appear across multiple assistants for similar prompts, you can reasonably assume high impression potential.

Impressions here don’t mean analytics views. They mean assistant-level exposure: your content seen in an answer window, even if the user never visits your site.

Actions are the most difficult layer to observe, but not because assistant ecosystems hide all referrer data. The tracking reality is more nuanced than that.

Most AI assistants (Perplexity, Copilot, Gemini, and ChatGPT for paid users) do send referrer data, which appears in Google Analytics 4 as perplexity.ai / referral or chatgpt.com / referral. You can see these sources in your standard GA4 Traffic Acquisition reports.

The real challenges are:

Free-tier users don’t send referrers. Free ChatGPT traffic arrives as “Direct” in your analytics, making it impossible to distinguish from bookmark visits, typed URLs, or other referrer-less traffic sources.

No query visibility. Even when you see the referrer source, you don’t know what question the user asked the AI that led them to your site. Traditional search gives you some query data through Search Console. AI assistants don’t provide this.

Volume is still small but growing. AI referral traffic typically represents 0.5% to 3% of total website traffic as of 2025, making patterns harder to spot in the noise of your overall analytics.

Here’s how to improve tracking and build a clearer picture of AI-driven actions:

  1. Set up dedicated AI traffic tracking in GA4. Create a custom exploration or channel group using regex filters to isolate all AI referral sources in one view. Use a pattern like the excellent example in this Orbit Media article to capture traffic from major platforms ( ^https://(www.meta.ai|www.perplexity.ai|chat.openai.com|claude.ai|gemini.google.com|chatgpt.com|copilot.microsoft.com)(/.*)?$ ). This separates AI referrals from generic referral traffic and makes trends visible.
  2. Add identifiable UTM parameters when you control link placement. In content you share to AI platforms, in citations you can influence, or in public-facing URLs. Even platforms that send referrer data can benefit from UTM tagging for additional attribution clarity.
  3. Monitor “Direct” traffic patterns. Unexplained spikes in direct traffic, especially to specific landing pages that assistants commonly cite, may indicate free-tier AI users clicking through without referrer data.
  4. Track which landing pages receive AI traffic. In your AI traffic exploration, add “Landing page + query string” as a dimension to see which specific pages assistants are citing. This reveals what content AI systems find valuable enough to reference.
  5. Watch for copy-paste patterns in social media, forums, or support tickets that match your content language exactly. That’s a proxy for text copied from an assistant summary and shared elsewhere.

Each of these tactics helps you build a more complete picture of AI-driven actions, even without perfect attribution. The key is recognizing that some AI traffic is visible (paid tiers, most platforms), some is hidden (free ChatGPT), and your job is to capture as much signal as possible from both.
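
If you want to sanity-check the regex from tactic 1 outside GA4, here’s a quick Python sketch. The pattern is the one quoted above; note that its unescaped dots also match literal dots, so it behaves as intended here. The sample referrers are illustrative:

```python
import re

# The pattern quoted in tactic 1 (from the Orbit Media example).
AI_REFERRER = re.compile(
    r"^https://(www.meta.ai|www.perplexity.ai|chat.openai.com|claude.ai"
    r"|gemini.google.com|chatgpt.com|copilot.microsoft.com)(/.*)?$"
)

referrers = [
    "https://www.perplexity.ai/search/some-query",  # AI referral
    "https://www.google.com/",                      # ordinary search referral
    "https://chatgpt.com",                          # AI referral, no path
]

for ref in referrers:
    label = "AI referral" if AI_REFERRER.match(ref) else "other"
    print(f"{ref} -> {label}")
```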

Machine-Validated Authority (MVA) isn’t visible to us as it’s an internal trust signal used by AI systems to decide which sources to quote. What we can measure are the breadcrumbs that correlate with it:

  • Frequency of citation
  • Presence across multiple assistants
  • Stability of the citation source (consistent URLs, canonical versions, structured markup)

When you see repeat citations or multi-assistant consistency, you’re seeing a proxy for MVA. That consistency is what tells you the systems are beginning to recognize your content as reliable.

Perplexity reports almost 10 billion queries a year across its user base. That’s meaningful visibility potential even if it’s small compared to search.

Microsoft’s Copilot Search is embedded in Windows, Edge, and Microsoft 365. That means millions of daily users see summarized, cited answers without leaving their workflow.

Google’s rollout of AI Overviews adds yet another surface where your content can appear, even when no one clicks through. Their own documentation describes how structured data helps make content eligible for inclusion.

Each of these reinforces a simple truth: SEO still matters, but it now extends beyond your own site.

Start small. A basic spreadsheet is enough.

Columns:

  • Date.
  • Assistant (ChatGPT Search, Perplexity, Copilot).
  • Prompt used.
  • Citation found (yes/no).
  • URL cited.
  • Competitor citations observed.
  • Notes on phrasing or ranking position.

Add screenshots and links to the full answers for evidence. Over time, you’ll start to see which content themes or formats surface most often.

If you want to automate, set up a workflow in n8n that runs a controlled set of prompts weekly and logs outputs to your sheet. Even partial automation will save time and let you focus on interpretation, not collection. Use this sheet and its data to augment what you can track in sources like GA4.

Before investing heavily in assistant monitoring, consider resource allocation carefully. If assistants represent less than 1% of your traffic and you’re a small team, extensive tracking may be premature optimization. Focus on high-value queries where assistant visibility could materially impact brand perception or capture early-stage research traffic that traditional search might miss.

Manual quarterly audits may suffice until the channel grows to meaningful scale. This is about building baseline understanding now so you’re prepared when adoption accelerates, not about obsessive daily tracking of negligible traffic sources.

Executives prefer dashboards to debates about visibility layers, so show them real-world examples. Put screenshots of your brand cited inside ChatGPT or Copilot next to your Search Console data. Explain that this is not a new algorithm update but a new front end for existing content. It’s up to you to help them understand this critical difference.

Frame it as additive reach. You’re showing leadership that the company’s expertise is now visible in new interfaces before clicks happen. That reframing keeps support for SEO strong and positions you as the one tracking the next wave.

It’s worth noting that citation practices exist within a shifting legal landscape. Publishers and content creators have raised concerns about copyright and fair use as AI systems train on and reproduce web content. Some platforms have responded with licensing agreements, while legal challenges continue to work through courts.

This environment may influence how aggressively platforms cite sources, which sources they prioritize, and how they balance attribution with user experience. The frameworks we build today should remain flexible as these dynamics evolve and as the industry establishes clearer norms around content usage and attribution.

AI assistant visibility is not yet a major traffic source. It’s a small but growing signal of trust.

By measuring mentions and citations now, you build an early-warning system. You’ll see when your content starts appearing in assistants long before any of your analytics tools do. This means that when 2026 arrives and assistants become a daily habit, you won’t be reacting to the curve. You’ll already have data on how your brand performs inside these new systems.

If you extend the concept of “data” to a more meta level, you could say it’s already telling us that the growth is starting, it’s explosive, and it’s about to have an impact on consumer behavior. So now is the moment to take that knowledge, focus it on your day-to-day work, and start planning for how those changes affect that daily work.

Traditional SEO remains your base layer. Generative visibility sits above it. Machine-Validated Authority lives inside the systems. Watching mentions, impressions, and actions is how we start making what’s in the shadows measurable.

We used to measure rankings because that’s what we could see. Today, we can measure retrieval for the same reason. This is just the next evolution of evidence-based SEO. Ultimately, you can’t fix what you can’t see. We cannot see how trust is assigned inside the system, but we can see the outputs of each system.

The assistants aren’t replacing search (yet). They’re simply showing you how visibility behaves when the click disappears. If you can measure where you appear in those layers now, you’ll know when the slope starts to change and you’ll already be ahead of it.



Featured Image: Anton Vierietin/Shutterstock


This post was originally published on Duane Forrester Decodes.

Google Says It Surfaces More Video, Forums, And UGC via @sejournal, @MattGSouthern

Google says it has adjusted rankings to surface more short-form video, forums, and user-generated content in response to how people search.

Liz Reid, VP and head of Google Search, discussed the changes in a Wall Street Journal Bold Names podcast interview.

What Reid Said

Reid described a shift in where people go for certain questions, especially among younger users:

“There’s a behavioral shift that is happening in conjunction with the move to AI, and that is a shift of who people are going to for a set of questions. And they are going to short-form video, they are going to forums, they are going to user-generated content a lot more than traditional sites.”

She added:

“We do have to respond to who users want to hear from. We are in the business of both giving them high quality information but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.”

To illustrate the behavior change, she gave a lifestyle example:

“Where are you getting your cooking? Are you getting your cooking recipes from a newspaper? Are you getting your cooking recipes from YouTube?”

Reid also highlighted a pattern with search updates:

“One of the things that’s always true about Google Search is that you make changes and there are winners and losers. That’s true on any ranking update.”

Ads And Query Mix

Reid said the impact of AI Overviews on ads is offset by people running more searches overall:

“The revenue with AI Overviews has been relatively stable… some queries may get less clicks on ads, but also it grows overall queries so people do more searches. And so those two things end up balancing out.”

She noted most queries have no ads:

“Most queries don’t have any ads at all… that query is sort of unaffected by ads.”

Reid also described how lowering friction (e.g., Lens, multi-page answers via AI Overviews) increases total searches.

Attribution & Personalization

Reid highlighted work on link prominence and loyal-reader connections:

“We’ve started doing more with inline links that allows you to say according to so-and-so with a big link for whoever the so-and-so is… building both the brand, as well as the click through.”

Quality Signals & Low-Value Content

On quality and spam posture:

“We’ve… expanded beyond this concept of spam to sort of low-value content.”

She said richer, deeper material tends to drive the clicks from AI experiences.

How Google Tests Changes

Asked whether there is a “push” as well as a “pull,” Reid described the evaluate-and-learn loop:

“You take feedback from what you hear from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

Why This Matters

In certain cases, your pages may face increased competition from forum threads and short videos.

That means improvements in quality and technical SEO alone might not fully account for traffic fluctuations if the distribution of formats has changed.

If hit by a Google update, teams should examine where visibility decreases and identify which query types are impacted. From there, determine if competing results have shifted to forum threads or short videos.

Open Questions

Reid didn’t provide timing for when the adjustments began or metrics indicating how much weighting changed.

It’s unclear which categories are most affected or whether the impact will expand further.

Looking Ahead

Reid’s comments confirm that Google has adjusted ranking to reflect evolving user behavior.

Given this, it makes sense to consider creating complementary formats like short videos while continuing to invest in in-depth expertise where traditional pages still win.


Featured Image: Michael Vi/Shutterstock

Google’s AI Mode SEO Impact | AI Mode User Behavior Study [Part 2] via @sejournal, @Kevin_Indig

Last week, I shared the largest usability study of AI Mode, and it revealed how users interact with the new search surface:

They focus on the AI Mode text first 88% of the time, ignore link icons, and rarely click out.

This week, for Part 2, I’m covering what’s measurable, what’s guesswork, and what’s possibly next for visibility, trust, and monetization in AI Mode.

If you have questions about the study methodology or initial findings, make sure to check out What Our AI Mode User Behavior Study Reveals about the Future of Search to get up to speed.

Because this week, we’re jumping right in.

Which AI Mode Elements Can You “Optimize” For?

Before we dive into additional findings that I didn’t have room to cover last week, first, we need to get on the same page about your brand’s visibility opportunities in AI Mode.

There are a few distinct visibility opportunities, each with different functions:

  • Inline text links or inline links: A hyperlink directly in the AI Mode output copy that opens a feature in the right side panel for user exploration; extremely rarely, an AI Mode inline text link may open an external page in a new tab.
  • Link icons: Grey link icon that displays citations in the right sidebar.
  • Citation listings side panel/sidebar: List of external links (with an image thumbnail) the AI Mode is sourcing from; appears in the right column. The link icon “shuffles” this list when clicked.
  • Shopping packs: These appear similar to shopping carousels within classic organic search, and they occur in the left panel within the AI Mode text output.
  • Local packs: These are similar to the local packs paired with the embedded map within classic organic search, and they occur in the left panel within the AI Mode text output (very similar to the Shopping packs above).
  • Merchant card: Once a selection is made in the shopping pack, it opens a merchant card for further inspection.
  • Google Business Profile (GBP) Card: This appears on the right when a merchant card from a local pack is clicked. Once clicked, the GBP Card opens for further inspection.
  • Map embed: Embedded local map displaying solutions to the prompt/search need in the area.

Our AI Mode usability study collected data from 37 participants across seven specific search tasks, resulting in 250 unique tasks that provided robust insight into how people navigate the different elements within AI Mode.

The data showed that some of these visibility opportunities are more valuable than others, and it might not be the ones you think.

Let me level with you: I will not pretend I have the answers to exactly how you can earn appearance in each of the above AI Mode visibility opportunities (yet – I’m studying this intently as AI Mode rolls out globally across my clients and user adoption increases).

I would argue that none of us have enough data – at least, as of right now – to give exact plays and tactics to earn reliable, recurring visibility in new AI-chat-based search systems.

But what I can tell you is that high-quality, holistic SEO and brand authority practices have influence on AIO and AI Mode visibility outcomes.

Brand Trust Is The No. 1 Influence Factor In AI Mode

If it feels like I keep saying this repeatedly over the past few months – that brand trust and authority matter more than ever in AI Mode and AI Overviews – it’s because it’s true and underrated.

Similar to the UX study of AI Overviews I published in May 2025, the AI Mode study I published last week also confirms:

If AI Mode is a game of influence, then trust has the biggest impact on user decisions.

Your goal is to ensure your brand is (1) trusted by your target audience and (2) visible in AI Mode output text.

I’ll explain.

Study participants took on the following seven tasks:

  1. What do people say about Liquid Death, the beverage company? Do their drinks appeal to you?
  2. Imagine you’re going to buy a sleep tracker and the only two available are the Oura Ring 3 or the Apple Watch 9. Which would you choose, and why?
  3. You’re getting insights about the perks of a Ramp credit card vs. a Brex Card for small businesses. Which one seems better? What would make a business switch from another card: fee detail, eligibility fine print, or rewards?
  4. In the “Ask Anything” box in AI Mode, enter “Help me purchase a waterproof canvas bag.” Select one that best fits your needs and you would buy (for example, a camera bag, tote bag, duffel bag, etc.).
    • Proceed to the seller’s page. Click to add to the shopping cart and complete this task without going further.
  5. Compare subscription language apps to free language apps. Would you pay, and in what situation? Which product would you choose?
  6. Suppose you are visiting a friend in a large city and want to go to either: 1. A virtual reality arcade OR 2. A smart home showroom. What’s the name of the city you’re visiting?
  7. Suppose you work at a small desk and your cables are a mess. In the “Ask anything” box in AI Mode, enter: “The device cables are cluttering up my desk space. What can I buy today to help?” Then choose the one product you think would be the best solution. Put it in the shopping cart on the external website and end this task.

Look at these quotes from users as they made shopping decisions:

“If I were to choose one, I would probably just choose Duolingo just because I’ve used it. … I’m not too certain about the others.”

“Okay, we’re going with REI, that’s a good brand.”

“I don’t know the brand … that’s why I’m hesitant.”

“I trust Rosetta Stone more.”

Unless we’re talking about utility goods (like cables), where users decide by price and availability, brand makes a huge difference.

Participants’ reactions were strongly shaped by how familiar they were with the product and how complex it seemed.

With simple, familiar items like cable organizers or canvas bags, people could lean on prior knowledge and make choices confidently, even when AI Mode wasn’t perfectly clear.

But with less familiar or more abstract categories – like Liquid Death, language apps, or Ramp vs. Brex – user hesitation spiked, and participants often defaulted to a brand they already recognized.

Image Credit: Kevin Indig

Our AI Mode usability study showed that when brand familiarity is absent, shoppers default to marketplaces – or they keep reading the output.

Speaking of reading through the AI Mode output, the overwhelming majority of tasks (221 out of 248, ~89%) showed AI Mode text as the first thing participants noticed and engaged with.

This cannot be stressed enough.

It suggests the AI Mode output text itself is by far the most attention-grabbing entry point, ahead of any visual elements.

Inline Text Links Beat Link Icons

Recently, VP Product Search at Google, Robby Stein, said on X:

“We’ve found that people really prefer and are more likely to click links that are embedded within AI Mode responses, when they have more context on what they’re clicking and where they want to dig deeper.”

We can validate why Google made this choice with data.

But before you dive in below, here’s some additional context:

  • The inline text links are what we call the actual URL hyperlinks within the AI Mode copy, which is what Robby Stein is referring to above in his quote.
  • The grey link icon users hover over is what we call (in this study) the link icon.
  • The rich snippet on the right side of AI mode is what we refer to as the side panel or sidebar.
Image Credit: Kevin Indig

We found that inline text links draw about 27% more clicks than the right side panel of citations.

Inline links sit within the copy or claim users are trying to verify, while the link icons feel detached and demand a context switch of sorts. People are used to clicking text or buttons for navigation, not icons.

Image Credit: Kevin Indig

This is notable because if Google were to adopt inline links as the default, it could raise the number of click-outs in AI Mode.

The biggest takeaway from this?

Getting a citation/inclusion within a link icon isn’t as valuable as an inline text link in the body of AI Mode.

It’s important to mention this, because many SEOs/marketers could assume that getting some kind of visibility within the link icon citations is valuable for our brands or clients.

Of course, I’d argue that any hard-won organic visibility is worth something in this era of search. But this usability study indicates that inclusion in a link icon citation likely has no real impact on visitors. So correcting this assumption amongst our industry – and our clients – is wise to do.

Local Packs, Maps, And GBP Cards Need More Data

Another interesting find?

Only 9.6% of valid tasks performed by study participants showed a Local Pack, and the Google Business Profile (GBP) card was effectively absent in nearly all test scenarios.

Only 3% of search tasks for the study showed a GBP card presence in any form.

Image Credit: Kevin Indig

Most notably: Though not always present, GBP cards played a curious and important role in driving on-SERP engagement. Users tended to scan them quickly, but also click them often.

Their presence appears to compete effectively with external links and merchant cards, which were used much less in the same contexts.

While the user behavior observed here is valid and notable enough for sharing, we must acknowledge that only one search task in the study had a specific localized or geographical intent.

More data is needed to solidify behavioral patterns across search tasks with geographical intent, and SEOs can also take into account that well-optimized GBP cards would be incredibly valuable based on high engagement with that feature.

Ecommerce SEOs Rest Easy: Shopping Tasks Take The External Clicks

In last week’s memo, I highlighted the following:

Clicks are rare and mostly transactional. The median number of external clicks per task was zero. Yep. You read that right. Ze-ro. And 77.6% of sessions had zero external visits.

Here, I’m going to expand on that finding. It’s more nuanced than “users rarely click at all.”

External clicks depend on transactional vs non-transactional tasks. And when the search task was shopping-related, the chance of an external click was 100%.

Shopping Packs appeared in 26% of tasks within this study. When they did appear, as in the screenshot, they were clicked 34 of 65 times.

Image Credit: Kevin Indig

Keep in mind, study participants were directed to take all steps to move through a shopping selection, including making a decision on an item and adding it to cart – just like a high-purchase-intent user would outside of the study environment.

However, when the search task was informational and non-transactional, the number of external clicks to sources outside the AI Mode output was nearly zero across all tasks in this study.

There were common sequences to user behavior when the search task was shopping-related:

  • Shopping Pack clicked → Merchant Card pop-up opened (occurrence: 28 times).
  • Inline Text Link clicked → Merchant Card pop-up opened (occurrence: 17 times).
  • Right panel clicked only (occurrence: 15 times).

Shopping packs are popular elements that people click on when they want to buy. Remember, clicking on one item or product in a pack brings up the detailed view of that one item (a Merchant Card).

One logical reason? They have images (common UX wisdom says people click where there’s an image).

Questions Are The New Search Habit – And Reveal An Interesting Behavior Pattern

It’s no mystery that users have increasingly turned to conversational search since the advent of ChatGPT & Co.

This AI Mode study verified this once again, but the data also surfaced an interesting finding.

Out of 250 tasks, 88.8% of the prompts were framed as AI chatbot queries, or conversational prompts, while 11.2% resembled search-style queries, like classic search keywords. However, it’s important to note that we only analyzed the first initial query of the user and not subsequent follow-ups.

This data validation means users are overwhelmingly leaning toward conversational (chatbot-like) interactions rather than “search-like” phrasing of the past.

But here’s the unusual pattern we spotted in the data:

Users who phrased queries conversationally were much more likely to click out to external websites.

This is interesting because this behavior pattern may also mean “experienced” AI-based search or AI chat users click more to validate or explore information.

This is one hypothesis of why this pattern occurs. Another idea?

If a user takes the time to write a question, they are more careful in their approach to finding information, and therefore, they also want to look outside the “walled garden” of AI mode. This behavior could influence any search personas you develop for your brand.

Our data did not point to an entirely clear reason why the longer conversational phrasing was correlated to a higher likelihood of external website clicks, but it’s noteworthy nonetheless.


Featured Image: Paulo Bobita/Search Engine Journal

Are LLM Visibility Trackers Worth It?

TL;DR

  1. When it comes to LLM visibility, not all brands are created equal. For some, it matters far more than others.
  2. LLMs give different answers to the same question. Trackers combat this by simulating prompts repeatedly to get an average visibility/citation score.
  3. While simulating the same prompts isn’t perfect, secondary benefits like sentiment analysis address issues that are not SEO-specific. Which right now is a good thing.
  4. Unless a visibility tracker offers enough scale at a reasonable price, I would be wary. But if the traffic converts well and you need to know more, get tracking.
(Image Credit: Harry Clarkson-Bennett)

A small caveat to start. This really depends on how your business makes money and whether LLMs are a fundamental part of your audience journey. You need to understand how people use LLMs and what it means for your business.

Brands that sell physical products have a different journey from publishers that sell opinion or SaaS companies that rely more deeply on comparison queries than anyone else.

Or a coding company destroyed by one snidey Reddit moderator with a bone to pick…

For example, Ahrefs made public some of its conversion rate data from LLMs. 12.1% of their signups came from LLMs from just 0.5% of their total traffic. Which is huge.

AI search visitors convert 23x better than traditional organic search visitors for Ahrefs. (Image Credit: Harry Clarkson-Bennett)

But for us, LLM traffic converts significantly worse. It is a fraction of a fraction.

Honestly, I think LLM visibility trackers at this scale are a bit here today and gone tomorrow. If you can afford one, great. If not, don’t sweat it. Take it all with a pinch of salt. AI search is just a part of most journeys, and tracking the same prompts day in, day out has obvious flaws.

They’re just aggregating what someone said about you on Reddit while they’re taking a shit in 2016.

What Do They Do?

Trackers like Profound and Brand Radar are designed to show you how your brand is framed and recommended in AI answers. Over time, you can measure your and your competitors’ visibility across the platforms.

Image Credit: Harry Clarkson-Bennett

But LLM visibility is smoke and mirrors.

Ask a question, get an answer. Ask the same question, to the same machine, from the same computer, and get a different answer. A different answer with different citations and businesses.

It has to be like this, or else we’d never use the boring ones.

To combat the inherent variance determined by their temperature setting, LLM trackers simulate prompts repeatedly throughout the day. In doing so, you get an average visibility and citation score alongside some other genuinely useful add-ons like your sentiment score and some competitor benchmarking.

“Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

OpenAI Documentation

Simulate a prompt 100 times. If your content was used in 70 of the responses and you were cited seven times, you would have a 70% visibility score and a 7% citation score.
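
In code, the math behind those scores is nothing more than averaging over runs. Here’s a toy Python sketch with made-up run results that reproduce the example above:

```python
# Toy run log: one entry per simulated run of the same prompt.
# "used" = your content appeared in the answer; "cited" = you got a linked citation.
runs = (
    [{"used": True, "cited": False}] * 63   # used but not cited
    + [{"used": True, "cited": True}] * 7   # used and cited
    + [{"used": False, "cited": False}] * 30  # absent entirely
)

visibility = sum(r["used"] for r in runs) / len(runs) * 100   # 70.0
citation = sum(r["cited"] for r in runs) / len(runs) * 100    # 7.0
print(f"Visibility: {visibility:.0f}%  Citation: {citation:.0f}%")
```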

Trust me, that’s much better than it sounds… These engines do not want to send you traffic.

In Brian Balfour’s excellent words, they have identified the moat and the gates are open. They will soon shut. As they shut, monetization will be hard and fast. The likelihood of any referral traffic, unless it’s monetized, is low.

Like every tech company ever.

If you aren’t flush with cash, I’d say most businesses just do not need to invest in them right now. They’re a nice-to-have rather than a necessity for most of us.

How Do They Work?

As far as I can tell, there are two primary models.

  1. Pay for a tool that tracks specific synthetic prompts that you add yourself.
  2. Purchase an enterprise-like tool that tracks more of the market at scale.

Some tools, like Profound, offer both. The cheaper model (the price point is not for most businesses) lets you track synthetic prompts under topics and/or tags. The enterprise model gives you a significantly larger scale.

Tools like Ahrefs’ Brand Radar, by contrast, provide a broader view of the entire market. As the prompts are all synthetic, there are some fairly large holes. But I prefer broad visibility.

I have not used it yet, but I believe Similarweb have launched their own LLM visibility tracker, which includes real user prompts from Clickstream data.

This makes for a far more useful version of these tools IMO and goes some way toward addressing the synthetic elephant in the room. And it helps you understand the role LLMs play in the user journey. Which is far more valuable.

The Problem

Does doing good SEO improve your chances of improving your LLM visibility?

Certainly looks like it…

GPT-5 no longer needs to train on more information. It is as well-versed as its overlords now want to pay for. It’s bored of ravaging the internet’s detritus and instead reaches out to a search index, using RAG to verify a response it does not quite have the confidence to generate on its own.

But I’m sure your strategy will need modifying somewhat if your primary goal is to increase LLM visibility, with increased expenditure on TOFU and digital PR campaigns being a notable point.

Image Credit: Harry Clarkson-Bennett

Right now, LLMs have an obvious spam problem. One I don’t expect they’ll be willing to invest in solving anytime soon. The AI bubble and gross valuation of these companies will dictate how they drive revenue. And quickly.

It sure as hell won’t be sorting out their spam problem. When you have a $300 billion contract to pay and revenues of $12 billion, you need some more money. Quickly.

So anyone who pays for best page link inclusions or adds hidden and footer text to their websites will benefit in the short term. But most of us should still build things for actual, breathing, snoring people.

With the new iterations of LLM trackers calling search instead of formulating an answer for prompts based on learned ‘knowledge’, it becomes even harder to create an ‘LLM optimization strategy.’

As a news site, I know that most prompts we would vaguely show up in would trigger the web index. So I just don’t quite see the value. It’s very SEO-led.

If you don’t believe me, Wil Reynolds is an inarguably better source of information (Image Credit: Harry Clarkson-Bennett)

How You Can Add Value With Sentiment Analysis

I found almost zero value to be had from tracking prompts in LLMs at a purely answer level. So, let’s forget all that for a second and use them for something else. Let’s start with some sentiment analysis.

These trackers give us access to:

  • A wider online sentiment score.
  • Review sources LLMs called upon (at a prompt level).
  • Sentiment scores by topics.
  • Prompts and links to on and off-site information sources.

You can identify where some of these issues start. Which, to be fair, is basically Trustpilot and Reddit.

I won’t go through everything, but a couple of quick examples:

  1. LLMs may be referencing some not-so-recently defunct podcasts and newsletters as “reasons to subscribe.”
  2. Your cancellation process may be cited as the most serious issue for most customers.

Unless you have explicitly stated that these podcasts and newsletters have finished, it’s all fair game. You need to tighten up your product marketing and communications strategy.

For people first. Then for LLMs.

These are not SEO-specific projects. We’re moving into an era where purely SEO projects will be difficult to get pushed through. A fantastic way of getting buy-in is to highlight projects with benefits outside of search.

Highlighting serious business issues – poor reviews, inaccurate or out-of-date information, and so on – can help get C-suite attention and support for some key brand reputation projects.

Profound’s sentiment analysis tab (Image Credit: Harry Clarkson-Bennett)
Here it is broken down by topic. You can see individual prompts and responses to each topic (Image Credit: Harry Clarkson-Bennett)

To me, this has nothing to do with LLMs. Or what our audience might ask an ill-informed answer engine. They are just the vessel.

It is about solving problems. Problems that drive real value to your business. In your case, this could be about increasing the LTV of a customer. Increasing their retention rate, reducing churn, and increasing the chance of a conversion by providing an improved experience.

If you’ve worked in SEO for long enough, someone will have floated the idea of improving your online sentiment and reviews past you.

“But will this improve our SEO?”

Said Jeff, a beleaguered business owner.

Who knows, Jeff. It really depends on what is holding you back compared to your competition. And like it or not, search is not very investible right now.

But that doesn’t matter in this instance. This isn’t a search-first project. It’s an audience-first project. It encompasses everyone. From customer service to SEO and editorial. It’s just the right thing to do for the business.

A quick hark back to the Google Leak shows you just how many review and sentiment-focused metrics may affect how you rank.


There are nine alone that mention review or sentiment in the title (Image Credit: Harry Clarkson-Bennett)

For a long time, search has been about brands and trust. Branded search volume, outperforming expected CTR (a Bayesian type predictive model), direct traffic, and general user engagement and satisfaction.

This isn’t because Google knows better than people. It’s because they have stored how we feel about pages and brands in relation to queries and used that as a feedback loop. Google trusts brands because we do.

Most of us have never had to worry about reviews and sentiment. But this is a great time to fix any issues you may have under the guise of AEO, GEO, SEO, or whatever you want to call it.

Lars Lofgren’s article titled How a Competitor Crippled a $23.5M Bootcamp By Becoming a Reddit Moderator is an incredible look at how Codesmith was nobbled by negative PR. Negative PR started and maintained by one Reddit Mod. One.

So keeping tabs on your reputation and identifying potentially serious issues is never a bad thing.

Could I Just Build My Own?

Yep. For starters, you’d need an estimation of monthly LLM API costs based on the number of monthly tokens required. Let’s use Profound’s lower-end pricing tier as an estimate and our old friend Gemini to figure out some estimated costs.

  • 200 prompts × 10 runs × 12 days (approx.) = 24,000 monthly runs per model (×3 models).
  • 24,000 runs × 1,000 tokens/query (conservative est.) = 24,000,000 tokens per model.

Based on this, here’s a (hopefully) accurate cost estimate per model from our robot pal.

Image Credit: Harry Clarkson-Bennett
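If you’d rather sanity-check the maths yourself than trust the robot, the sums are simple enough to script. A minimal sketch, with placeholder per-million-token prices (these are not real rates; swap in current pricing for the models you actually run):

    # Minimal monthly cost estimator for a DIY LLM visibility tracker.
    # The per-million-token prices are illustrative placeholders, not
    # real rates - check each provider's current pricing page.
    PROMPTS = 200
    RUNS_PER_PROMPT = 10
    RUN_DAYS = 12            # approx. days per month the tracker runs
    TOKENS_PER_RUN = 1_000   # conservative estimate (prompt + response)

    monthly_runs = PROMPTS * RUNS_PER_PROMPT * RUN_DAYS   # 24,000, split across models
    monthly_tokens = monthly_runs * TOKENS_PER_RUN        # 24,000,000

    # Hypothetical blended price per 1M tokens for each model (placeholders).
    price_per_million = {"model_a": 0.50, "model_b": 1.00, "model_c": 2.00}

    tokens_per_model = monthly_tokens / len(price_per_million)
    total = 0.0
    for model, price in price_per_million.items():
        cost = tokens_per_model / 1_000_000 * price
        total += cost
        print(f"{model}: ~${cost:.2f}/month")
    print(f"Total: ~${total:.2f}/month")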

Right then. You now need some back-end functionality, data storage, and some front-end visualization. I’ll tot up as we go, starting with the LLM API calls costed above:

$21 per month

Back-End

  • A Scheduler/Runner like Render VPS to execute 800 API calls per day.
  • A data orchestrator. Essentially, some Python code to parse raw JSON and extract the relevant citation and visibility data (a minimal sketch follows below).

$10 per month
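The orchestration layer really is just a parsing job. Here’s a minimal sketch of that Python, assuming a hypothetical response shape; every field name below is made up, so adjust it to whatever your chosen APIs actually return:

    import json

    def extract_metrics(raw_response: str, brand: str) -> dict:
        """Parse one raw LLM API response into citation/visibility metrics."""
        data = json.loads(raw_response)
        answer = data.get("answer", "")         # assumed field name
        citations = data.get("citations", [])   # assumed list of URLs

        return {
            "brand_mentioned": brand.lower() in answer.lower(),
            "brand_cited": any(brand.lower() in url.lower() for url in citations),
            "citation_count": len(citations),
            "cited_domains": sorted({url.split("/")[2] for url in citations if "//" in url}),
        }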

Data Storage

  • A database, like Supabase (which you can integrate directly through Lovable), to store raw responses and structured metrics.
  • Data storage (which should be included as part of your database).

$15 per month
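For what it’s worth, writing a parsed run into Supabase is only a few lines with the supabase-py client. A sketch, assuming placeholder credentials and a “runs” table you’ve already created with matching columns:

    from supabase import create_client

    # Placeholder URL and key; assumes a "runs" table already exists
    # with columns matching the dict below.
    supabase = create_client("https://your-project.supabase.co", "your-service-key")

    row = {
        "prompt": "best llm visibility tools",
        "model": "model_a",
        "brand_mentioned": True,
        "citation_count": 4,
    }
    supabase.table("runs").insert(row).execute()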

Front-End Visualization

  • A web dashboard to create interactive, shareable dashboards. I unironically love Lovable. It’s easy to connect directly to databases. I have also used Streamlit previously. Lovable looks far sleeker but has its own challenges.
  • You may also need a visualization library to help generate time series charts and graphs. Some dashboards have this built in.

$50 per month
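If you go the Streamlit route, a basic time-series visibility chart is about ten lines. A sketch, assuming your stored metrics can be exported to a CSV with date, model, and a visibility score (column names here are illustrative):

    import pandas as pd
    import streamlit as st

    st.title("LLM Visibility Over Time")

    # Assumes your stored metrics export to a CSV with columns:
    # date, model, visibility_score (column names are illustrative).
    df = pd.read_csv("runs_export.csv", parse_dates=["date"])

    pivot = df.pivot_table(index="date", columns="model",
                           values="visibility_score", aggfunc="mean")
    st.line_chart(pivot)

Save it as app.py and launch it with streamlit run app.py.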

That’s $96 all in, and the likelihood is it’s closer to $50 than $100 with no scrimping. Even at the higher end of budgets for the tools I use (Lovable) and some estimates from Gemini, we’re talking about a tool that will cost under $100 a month to run and function very well.

This isn’t a complicated project or setup. It is, IMO, an excellent project to learn the vibe coding ropes. Which I will say is not all sunshine and rainbows.

So, Should I Buy One?

If you can afford it, I would get one. For at least a month or two. Review your online sentiment. See what people really say about you online. Identify some low-lift wins around product marketing and review/reputation management, and review how your competitors fare.

This might be the most important part of LLM visibility. Set up a tracking dashboard via Google Analytics (or whatever dreadful analytics provider you use) and see a) how much traffic you get and b) whether it’s valuable.

The more valuable it is, the more value there will be in tracking your LLM visibility.
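If your analytics provider doesn’t break out LLM referrals natively, a simple referrer filter over exported session data gets you most of the way. A sketch; the domain list is illustrative, incomplete, and will need to be kept current:

    import re

    # Illustrative, incomplete list of LLM referrer domains - keep it current.
    LLM_REFERRERS = re.compile(
        r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
        r"gemini\.google\.com|copilot\.microsoft\.com",
        re.IGNORECASE,
    )

    def is_llm_traffic(referrer: str) -> bool:
        """True if a session's referrer looks like an LLM/answer engine."""
        return bool(LLM_REFERRERS.search(referrer or ""))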

You could also make one. The joy of making one is a) you can learn a new skill and b) you can make other things for the same cost.

Frustrating, yes. Fun? Absolutely.

This post was originally published on Leadership In SEO.


Featured Image: Viktoriia_M/Shutterstock

Google Answers What To Do For AEO/GEO via @sejournal, @martinibuster

Google’s VP of Product, Robby Stein, recently answered the question of what people should think about in terms of AEO/GEO. He provided a multi-part answer that began with how Google’s AI creates answers and ended with guidance on what creators should consider.

Foundations Of Google AI Search

The question asked was about AEO/GEO, which the podcast host characterized as the evolution of SEO. Robby Stein answered by suggesting people think about how AI answers are constructed.

This is the question that was asked:

“What’s your take on this whole rise of AEO, GEO, which is kind of this evolution of SEO?

I’m guessing your answer is going to be just create awesome stuff and don’t worry about it, but you know, there’s a whole skill of getting to show up in these answers. Thoughts on what people should be thinking about here?”

Stein began his answer by describing the foundations of how Google’s AI search works:

“Sure. I mean, I can give you a little bit of under the hood, like how this stuff works, because I do think that helps people understand what to do.

When our AI constructs a response, it’s actually trying to, it does something called query fan-out, where the model uses Google search as a tool to do other querying.

So maybe you’re asking about specific shoes. It’ll add and append all of these other queries, like maybe dozens of queries, and start searching basically in the background. And it’ll make requests to our data kind of backend. So if it needs real-time information, it’ll go do that.

And so at the end of the day, actually something’s searching. It’s not a person, but there’s searches happening.”

Robby Stein shows that Google’s AI still relies on conventional search engine retrieval; it’s just scaled and automated. The system performs dozens of background searches and evaluates the same quality signals that guide ordinary search rankings.

That means that “answer engine optimization” is basically the same as SEO because the underlying indexing, ranking and quality factors inherent to traditional SEO principles still apply to queries that the AI itself issues as part of the query fan-out process.

For SEOs, the insight is that visibility in AI answers depends less on gaming a new algorithm and more on producing content that satisfies intent so thoroughly that Google’s automated searches treat it as the best possible answer. As you’ll see later in this article, originality also plays a role.
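Google hasn’t published the mechanics, but conceptually query fan-out looks something like the sketch below. It’s a hypothetical illustration, not Google’s actual pipeline:

    # Conceptual illustration of query fan-out. This is not Google's code;
    # expand(), search(), and synthesize() are hypothetical stand-ins.
    def fan_out(user_query: str, expand, search, synthesize) -> str:
        sub_queries = expand(user_query)   # one question becomes dozens of searches

        results = []
        for q in sub_queries:
            results.extend(search(q))      # same ranking/quality signals as normal search

        return synthesize(user_query, results)  # answer grounded in retrieved pages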

Role Of Traditional Search Signals

An interesting part of this discussion centers on the kinds of quality signals that Google describes in its Quality Rater Guidelines. Stein talks about the originality of the content, for example.

Here’s what he said:

“And then each search is paired with content. So if for a given search, your webpage is designed to be extremely helpful.

And then you can look up Google’s human rater guidelines and read… what makes great information? This is something Google has studied more than anyone.

And it’s like:

  • Do you satisfy the user intent of what they’re trying to get?
  • Do you have sources?
  • Do you cite your information?
  • Is it original or is it repeating things that have been repeated 500 times?

And there’s these best practices that I think still do largely apply because it’s going to ultimately come down to an AI is doing research and finding information.

And a lot of the core signals, is this a good piece of information for the question, they’re still valid. They’re still extremely valid and extremely useful. And that will produce a response where you’re more likely to show up in those experiences now.”

Although Stein is describing AI Search results, his answer shows that Google’s AI Search still values the same underlying quality factors found in traditional search. Originality, source citations, and satisfying intent remain the foundation of what makes information “good” in Google’s view. AI has changed the interface of search and encouraged more complex queries, but the ranking factors continue to be the same recognizable signals related to expertise and authoritativeness.

More On How Google’s AI Search Works

The podcast host, Lenny, followed up with another question about how Google’s AI Search differs from a strictly chatbot-style approach.

He asked:

“It’s interesting your point about how it goes in searches. When you use it, it’s like searching a thousand pages or something like that. Is that a just a different core mechanic to how other popular chatbots work because the others don’t go search a bunch of websites as you’re asking.”

Stein answered with more details about how AI search works, going beyond query fan-out to identify the factors it uses to surface what Google considers the best answers. For example, he mentions parametric memory. Parametric memory is the knowledge that an AI has as part of its training. It’s essentially the knowledge stored within the model, not fetched from external sources.

Stein explained:

“Yeah, this is something that we’ve done uniquely for our AI. It obviously has the ability to use parametric memory and thinking and reasoning and all the things a model does.

But one of the things that makes it unique for designing it specifically for informational tasks, like we want it to be the best at informational needs. That’s what Google’s all about.

  • And so how does it find information?
  • How does it know if information is right?
  • How does it check its work?

These are all things that we built into the model. And so there is a unique access to Google. Obviously, it’s part of Google search.

So it’s Google search signals, everything from spam, like what’s content that could be spam and we don’t want to probably use in a response, all the way to, this is the most authoritative, helpful piece of information.

We’re going link to it and we’re going to explain, hey, according to this website, check out that information and you’re going to probably go see that yourself.

So that’s how we’ve thought about designing this.”

Stein’s explanation makes it clear that Google’s AI Search is not designed to mimic the conversational style of general chatbots but to reinforce the company’s core goal of delivering trustworthy information that’s authoritative and helpful.

By relying on signals from Google Search, such as spam detection and helpfulness, the system grounds its AI-generated answers in the same evaluation and ranking framework inherent in regular search ranking.

This approach positions AI Search as less a standalone version of search and more like an extension of Google’s information-retrieval infrastructure, where reasoning and ranking work together to surface factually accurate answers.
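To make the distinction between parametric memory and live retrieval concrete, here’s a toy illustration. It is purely hypothetical, not how Gemini or AI Mode is actually built:

    # Toy contrast between parametric memory and live retrieval. Purely
    # illustrative; "model" and "search" are hypothetical stand-ins.
    def answer(question: str, model, search) -> str:
        draft = model.generate(question)       # parametric: knowledge baked into the weights

        if model.needs_fresh_data(question):   # e.g. prices, opening hours, news
            evidence = search(question)        # retrieval: live search results
            draft = model.generate(question, context=evidence)  # grounded rewrite

        return draft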

Advice For Creators

Stein at one point acknowledges that creators want to know what to do for AI Search. He essentially advises thinking about the questions people are asking. In the old days, that meant thinking about which keywords searchers were using. He explains that’s no longer the case because people now use long, conversational queries.

He explained:

“I think the only thing I would give advice to would be, think about what people are using AI for.

I mentioned this as an expansionary moment, …that people are asking a lot more questions now, particularly around things like advice or how to, or more complex needs versus maybe more simple things.

And so if I were a creator, I would be thinking, what kind of content is someone using AI for? And then how could my content be the best for that given set of needs now?
And I think that’s a really tangible way of thinking about it.”

Stein’s advice doesn’t add anything new, but it does reframe the basics of SEO for the AI Search era. Instead of optimizing for isolated keywords, creators should anticipate the fuller intent and informational journey inherent in conversational questions. That means structuring content to directly satisfy complex informational needs, especially “how to” or advice-driven queries that users increasingly pose to AI systems rather than traditional keyword search.

Takeaways

  • AI Search Is Still Built on Traditional SEO Signals
    Google’s AI Search relies on the same core ranking principles as traditional search: intent satisfaction, originality, and citation of sources.
  • How Query Fan-Out Works
    AI Search issues dozens of background searches per query, using Google Search as a tool to fetch real-time data and evaluate quality signals.
  • Integration of Parametric Memory and Search Signals
    The model blends stored knowledge (parametric memory) with live Google Search data, combining reasoning with ranking systems to ensure factual accuracy.
  • Google’s AI Search Is Like An Extension of Traditional Search
    AI Search isn’t a chatbot; it’s a search-based reasoning system that reinforces Google’s informational trust model rather than replacing it.
  • Guidance for Creators in the AI Search Era
    Optimizing for AI means understanding user intent behind long, conversational queries—focusing on advice- and how-to-style content that directly satisfies complex informational needs.

Google’s AI Search builds on the same foundations that have long defined traditional search, using retrieval, ranking, and quality signals to surface information that demonstrates originality and trustworthiness. By combining live search signals with the model’s own stored knowledge, Google has created a system that explains information and cites the websites that provided it. For creators, this means that success now depends on producing content that fully addresses the complex, conversational questions people bring to AI systems.

Watch the podcast segment starting at about the 15:30 minute mark:

Featured Image by Shutterstock/PST Vector

How Leaders Are Using AI Search to Drive Growth [Webinar] via @sejournal, @hethr_campbell

Turn Data Into an Actionable AI Search Strategy

AI search is transforming consumer behavior faster than any shift in the past 20 years. Many teams are chasing visibility, but few understand what the data actually means for their business or how to act on it.

Join Mark Traphagen, VP of Product Marketing and Training at seoClarity, and Tania German, VP of Marketing at seoClarity, for a live webinar designed for SEOs, digital leaders, and executives. You’ll learn how to interpret AI search data and apply it to your strategy to drive real business results.

What You’ll Learn

  • Why consumer discovery is changing so rapidly.
  • How visibility drives revenue with Instant Checkout in ChatGPT.
  • What Google’s AI Overviews and AI Mode mean for your brand’s presence.
  • Tactics to improve mentions, citations, and visibility on AI search engines.

Why Attend

This webinar gives you the clarity and measurement framework needed to confidently answer, “What’s our AI search strategy?” Walk away with a playbook you can use to lead your organization through the AI search shift successfully.

Register now to secure your seat and get a clear, data-backed framework for AI search strategy.

🛑 Can’t attend live? Register anyway, and we’ll send the full recording.

The AI Search Effect: What Agencies Need To Know For Local Search Clients

This post was sponsored by GatherUp. The opinions expressed in this article are the sponsor’s own.

Local Search Has Changed: From “Found” to “Chosen”

Not long ago, showing up in a Google search was enough. A complete Google Business Profile (GBP) and a steady stream of reviews could put your client in front of the right customers.

But today’s local search looks very different. It’s no longer just about being found; it’s about being chosen.

That shift has only accelerated with the rise of AI-powered search. Instead of delivering a list of links, engines like ChatGPT, Google’s Gemini, and Perplexity now generate instant summaries. These summaries change the way consumers interact with search results, and they determine whether or not your client’s business gets seen at all.

Reality Check: if listings aren’t accurate, consistent, and AI-ready, businesses risk invisibility.

AI Search Is Reshaping Behavior & Brand Visibility

AI search is already reshaping behavior.

Only 8% of users click a traditional link when an AI summary appears. That means the majority of your clients’ potential customers are making decisions without ever leaving the AI-generated response.

So, how does AI decide which businesses to include in its answers? Two categories of signals matter most: listing signals (accurate, consistent, and complete business information across directories) and reputation signals (authentic, recent reviews and genuine engagement).

Put simply, if a client’s listings are messy, incomplete, or outdated, AI is far less likely to surface them in a summary. And that’s a problem, considering more than 4 out of 5 people use search engines to find local businesses.

The Hidden Dangers of Neglected Listings

Agencies know the pain of messy listings firsthand. But your clients may not realize just how damaging it can be:

  • Trust erosion: 80% of consumers lose trust in businesses with incorrect or inconsistent information.
  • Lost visibility: Roughly a third of local organic results now come from business directories. If listings are incomplete, that’s a third of opportunities gone.
  • Negative perception: A GBP with outdated hours or broken URLs communicates neglect, not professionalism.

Consider “Mary,” a marketing director overseeing 150+ locations. Without automation, her team spends hours chasing duplicate profiles, correcting seasonal hours, and fighting suggested edits. Updates lag behind reality. Customers’ trust slips. And every inconsistency is another signal to search engines, and now AI, that the business isn’t reliable.

For many agencies, the result is more than frustrated clients. It’s a high churn risk.

Why This Matters More Than Ever to Consumers

Consumers expect accuracy at every touchpoint, and they’re quick to lose confidence when details don’t add up.

  • 80% of consumers lose trust in a business with incorrect or inconsistent information, like outdated hours, wrong addresses, or broken links.
  • A Google Business Profile with missing fields or duplicate entries signals neglect.
  • When AI engines surface summaries, they pull from this data. Inconsistencies make it less likely your client’s business will appear at all.

Reviews still play a critical role, but they work best when paired with clean, consistent listings. 99% of consumers read reviews before choosing a business, and 68% prioritize recent reviews over overall star ratings. If the reviews say “great service” but the business shows the wrong phone number or closed hours, that trust is instantly broken.

In practice, this means agencies must help clients maintain both accurate listings and authentic reviews. Together, they signal credibility to consumers and to AI search engines deciding which businesses make the cut.

Real-World Data: The ROI of Getting Listings Right

Agencies that take listings seriously are already seeing outsized returns:

  • A healthcare agency managing 850+ locations saved 132 hours per month and reduced costs by $21K annually through listings automation, delivering a six-figure annual ROI.
  • A travel brand optimizing global listings recorded a 200% increase in Google visibility and a 30x rise in social engagement.
  • A retail chain improving profile completeness saw a 31% increase in revenue attributed to local SEO improvements.

The proof is clear: accurate, consistent, and scalable listings management is no longer optional. It’s a revenue driver.

Actionable Steps Agencies Can Take Right Now

AI search is moving fast, but agencies don’t have to be caught flat-footed. Here are five practical steps to protect your clients’ visibility and trust.

1.  Audit Listings for Accuracy and Consistency

Start with a full audit of your clients’ GBPs and directory listings. Look for mismatches in hours, addresses, URLs, and categories. Even small discrepancies send negative signals to both consumers and AI search engines.

I know you updated your listings last year, and not much has changed, but unless your business is a time capsule, your customers expect real-time accuracy.
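At its simplest, an accuracy audit is a field-by-field comparison against a single source of truth. A toy sketch, assuming you’ve exported each listing to a dict; the normalization is deliberately naive:

    # Toy NAP (name, address, phone) consistency check. The normalization
    # is deliberately naive; real audits need fuzzier matching.
    def normalize(value: str) -> str:
        return "".join(ch for ch in value.lower() if ch.isalnum())

    def audit(truth: dict, listing: dict,
              fields=("name", "address", "phone", "hours")) -> list:
        return [f for f in fields
                if normalize(truth.get(f, "")) != normalize(listing.get(f, ""))]

    gbp = {"name": "Mary's Bakery", "address": "12 High St",
           "phone": "555-0100", "hours": "9-5"}
    directory = {"name": "Marys Bakery", "address": "12 High Street",
                 "phone": "555-0100", "hours": "9-5"}
    print(audit(gbp, directory))  # ['address'] - "St" vs "Street" trips naive matching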

2.  Eliminate Duplicates

Duplicate listings aren’t just confusing to customers; they actively hurt SEO. Suppress duplicates across directories and consolidate data at the source to prevent aggregator overwrites. Google penalized 6.1% of business listings flagged for duplicate or spam entries in Q1 alone, underscoring how seriously platforms are taking accuracy enforcement.

3.  Optimize for Engagement

Encourage clients to respond authentically to reviews. Research shows 73% of consumers will give a business a second chance if they receive a thoughtful response to a negative review. Engagement isn’t just customer service; it’s a ranking signal.

4.  Create AI-Readable Content

AI thrives on structured, educational content. Encourage clients to build out their web presence with FAQs, descriptive product or service pages, and customer-centric content that mirrors natural language. This makes it easier for AI to pull them into summaries.
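One concrete, low-effort version of “AI-readable” is FAQPage structured data from schema.org. A sketch that generates the JSON-LD; the questions and answers are placeholders:

    import json

    # Placeholder questions and answers; swap in the client's real FAQs.
    faqs = [
        ("What are your opening hours?", "We're open 9 a.m. to 5 p.m., Monday to Saturday."),
        ("Do you offer free parking?", "Yes, there's a free lot behind the building."),
    ]

    faq_schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in faqs
        ],
    }

    # Paste the output into a <script type="application/ld+json"> tag.
    print(json.dumps(faq_schema, indent=2))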

5.  Automate at Scale

Manual updates don’t cut it for multi-location brands. Implement automation for bulk publishing, data synchronization, and ongoing updates. This ensures accuracy and saves agencies countless hours of low-value labor.

The AI Opportunity: Agencies as Strategic Partners

For agencies, the rise of AI search is both a threat and an opportunity. Yes, clients who ignore their listings risk becoming invisible. But agencies that lean in can position themselves as strategic partners, helping businesses adapt to a disruptive new era.

That means reframing listings management not as “background work,” but as the foundation of trust and visibility in AI-powered search.

As GatherUp’s research concludes, “In the AI-driven search era, listings are no longer background work; they are the foundation of visibility and trust.”

The Time to Act Is Now

AI search is here, and it’s rewriting the rules of local visibility. Agencies that fail to help their clients adapt risk irrelevance.

But those that act now can deliver measurable growth, stronger client relationships, and defensible ROI.

The path forward is clear: audit listings, eliminate duplicates, optimize for engagement, publish AI-readable content, and automate at scale.

And if you want to see where your clients stand today, GatherUp offers a free listings audit to help identify gaps and opportunities.

👉 Run a free listings audit and see how your business measures up.

Image Credits

Featured Image: Image by GatherUp. Used with permission.

In-Post Images: Image by GatherUp. Used with permission.

5 SEO Tactics to Be Seen & Trusted on AI Search [Webinar] via @sejournal, @duchessjenm

Is your brand ready for AI-driven SERPs?

Search is evolving faster than ever. AI-driven engines like ChatGPT, Google SGE, and Bing Copilot are changing how users discover and trust brands. Traditional SEO tactics alone may no longer guarantee visibility or authority in Answer Engines.

Discover five proven tactics to protect your SERP presence and maintain trust in AI search.

What You’ll Learn

Craig Smith, Chief Strategy Officer at Outerbox, will show exactly how to adapt your SEO strategy for generative search and answer engines. 

You’ll walk away with actionable steps you can put into practice straight away.

Register now to get the SEO playbook your competitors wish they had.

Why You Can’t Miss This Webinar

AI Overviews are already impacting traffic. Brands that adapt now will dominate visibility and authority while others fall behind.

🛑 Can’t attend live? Register anyway and we’ll send you the recording so you can watch at your convenience.