Google’s AI Mode SEO Impact | AI Mode User Behavior Study [Part 2] via @sejournal, @Kevin_Indig

Last week, I shared the largest usability study of AI Mode, and it revealed how users interact with the new search surface:

They focus on the AI Mode text first 88% of the time, ignore link icons, and rarely click out.

This week, for Part 2, I’m covering what’s measurable, what’s guesswork, and what’s possibly next for visibility, trust, and monetization in AI Mode.

If you have questions about the study methodology or initial findings, make sure to check out What Our AI Mode User Behavior Study Reveals about the Future of Search to get up to speed.

Because this week, we’re jumping right in.

Which AI Mode Elements Can You “Optimize” For?

Before we dive into additional findings that I didn’t have room to cover last week, first, we need to get on the same page about your brand’s visibility opportunities in AI Mode.

There are a few distinct visibility opportunities, each with different functions:

  • Inline text links or inline links: A hyperlink directly in the AI Mode output copy that opens a feature in the right side panel for user exploration; extremely rarely, an AI Mode inline text link may open an external page in a new tab.
  • Link icons: Grey link icon that displays citations in the right sidebar.
  • Citation listings side panel/sidebar: List of external links (with an image thumbnail) that AI Mode is sourcing from; appears in the right column. The link icon “shuffles” this list when clicked.
  • Shopping packs: These appear similar to shopping carousels within classic organic search, and they occur in the left panel within the AI Mode text output.
  • Local packs: These are similar to the local packs paired with the embedded map within classic organic search, and they occur in the left panel within the AI Mode text output (very similar to the Shopping packs above).
  • Merchant card: Once a selection is made in the shopping pack, it opens a merchant card for further inspection.
  • Google Business Profile (GBP) Card: This appears on the right when a merchant card from a local pack is clicked, opening the GBP Card for further inspection.
  • Map embed: Embedded local map displaying solutions to the prompt/search need in the area.

Our AI Mode usability study collected data from 37 participants across seven specific search tasks, resulting in 250 unique tasks that provided robust insight into how people navigate the different elements within AI Mode.

The data showed that some of these visibility opportunities are more valuable than others, and they might not be the ones you think.

Let me level with you: I will not pretend I have the answers to exactly how you can earn appearance in each of the above AI Mode visibility opportunities (yet – I’m studying this intently as AI Mode rolls out globally across my clients and user adoption increases).

I would argue that none of us have enough data – at least, as of right now – to give exact plays and tactics to earn reliable, recurring visibility in new AI-chat-based search systems.

But what I can tell you is that high-quality, holistic SEO and brand authority practices influence AIO and AI Mode visibility outcomes.

Brand Trust Is The No. 1 Influence Factor In AI Mode

If it feels like I’ve been repeating this over the past few months – that brand trust and authority matter more than ever in AI Mode and AI Overviews – it’s because it’s true and underrated.

Similar to the UX study of AI Overviews I published in May 2025, the AI Mode study I published last week also confirms:

If AI Mode is a game of influence, then trust has the biggest impact on user decisions.

Your goal is to ensure your brand is (1) trusted by your target audience and (2) visible in AI Mode output text.

I’ll explain.

Study participants took on the following seven tasks:

  1. What do people say about Liquid Death, the beverage company? Do their drinks appeal to you?
  2. Imagine you’re going to buy a sleep tracker and the only two available are the Oura Ring 3 or the Apple Watch 9. Which would you choose, and why?
  3. You’re getting insights about the perks of a Ramp credit card vs. a Brex Card for small businesses. Which one seems better? What would make a business switch from another card: fee detail, eligibility fine print, or rewards?
  4. In the “Ask Anything” box in AI Mode, enter “Help me purchase a waterproof canvas bag.” Select one that best fits your needs and you would buy (for example, a camera bag, tote bag, duffel bag, etc.).
    • Proceed to the seller’s page. Click to add to the shopping cart and complete this task without going further.
  5. Compare subscription language apps to free language apps. Would you pay, and in what situation? Which product would you choose?
  6. Suppose you are visiting a friend in a large city and want to go to either: 1. A virtual reality arcade OR 2. A smart home showroom. What’s the name of the city you’re visiting?
  7. Suppose you work at a small desk and your cables are a mess. In the “Ask anything” box in AI Mode, enter: “The device cables are cluttering up my desk space. What can I buy today to help?” Then choose the one product you think would be the best solution. Put it in the shopping cart on the external website and end this task.

Look at these quotes from users as they made shopping decisions:

“If I were to choose one, I would probably just choose Duolingo just because I’ve used it. … I’m not too certain about the others.”

“Okay, we’re going with REI, that’s a good brand.”

“I don’t know the brand … that’s why I’m hesitant.”

“I trust Rosetta Stone more.”

Unless we’re talking about utility goods (like cables), where users decide by price and availability, brand makes a huge difference.

Participants’ reactions were strongly shaped by how familiar they were with the product and how complex it seemed.

With simple, familiar items like cable organizers or canvas bags, people could lean on prior knowledge and make choices confidently, even when AI Mode wasn’t perfectly clear.

But with less familiar or more abstract categories – like Liquid Death, language apps, or Ramp vs. Brex – user hesitation spiked, and participants often defaulted to a brand they already recognized.

Image Credit: Kevin Indig

Our AI Mode usability study showed that when brand familiarity is absent, shoppers default to marketplaces – or they keep reading the output.

Speaking of reading on through the AI Mode output: In the overwhelming majority of tasks (221 out of 248, ~89%), the AI Mode text was the first thing participants noticed and engaged with.

This cannot be stressed enough.

It suggests the AI Mode output text itself is by far the most attention-grabbing entry point, ahead of any visual elements.

Inline Text Links Beat Link Icons

Recently, Robby Stein, VP of Product for Search at Google, said on X:

“We’ve found that people really prefer and are more likely to click links that are embedded within AI Mode responses, when they have more context on what they’re clicking and where they want to dig deeper.”

We can validate why Google made this choice with data.

But before you dive in below, here’s some additional context:

  • The inline text links are what we call the actual URL hyperlinks within the AI Mode copy, which is what Robby Stein is referring to above in his quote.
  • The grey link icon users hover over is what we call (in this study) the link icon.
  • The rich snippet on the right side of AI Mode is what we refer to as the side panel or sidebar.
Image Credit: Kevin Indig

We found that inline text links draw about 27% more clicks than the right side panel of citations.

Inline links sit within the copy or claim users are trying to verify, while the link icons feel detached and demand a context switch of sorts. People are used to clicking text or buttons for navigation, not icons.

Image Credit: Kevin Indig

This is notable because if Google were to adopt inline links as the default, it could raise the number of click-outs in AI Mode.

The biggest takeaway from this?

Getting a citation/inclusion within a link icon isn’t as valuable as an inline text link in the body of AI Mode.

It’s important to mention this, because many SEOs/marketers could assume that getting some kind of visibility within the link icon citations is valuable for our brands or clients.

Of course, I’d argue that any hard-won organic visibility is worth something in this era of search. But this usability study indicates that inclusion in a link icon citation likely has no real impact on visitors. So correcting this assumption amongst our industry – and our clients – is wise to do.

Local Packs, Maps, And GBP Cards Need More Data

Another interesting find?

Only 9.6% of valid tasks performed by study participants showed a Local Pack, and the Google Business Profile (GBP) card was effectively absent in nearly all test scenarios.

Only 3% of search tasks for the study showed a GBP card presence in any form.

Image Credit: Kevin Indig

Most notably: Though not always present, GBP cards played a curious and important role in driving on-SERP engagement. Users tended to scan them quickly, but also click them often.

Their presence appears to compete effectively with external links and merchant cards, which were used much less in the same contexts.

While the user behavior observed here is valid and notable enough for sharing, we must acknowledge that only one search task in the study had a specific localized or geographical intent.

More data is needed to solidify behavioral patterns across search tasks with geographical intent, but given the high engagement observed with this feature, SEOs can assume that well-optimized GBP cards would be incredibly valuable.

Ecommerce SEOs Rest Easy: Shopping Tasks Take The External Clicks

In last week’s memo, I highlighted the following:

Clicks are rare and mostly transactional. The median number of external clicks per task was zero. Yep. You read that right. Ze-ro. And 77.6% of sessions had zero external visits.

Here, I’m going to expand on that finding. It’s more nuanced than “users rarely click at all.”

External clicks depend on whether the task is transactional or non-transactional. When the search task was shopping-related, the chance of an external click was 100%.

Shopping Packs appeared in 26% of tasks within this study. When they did appear (as in the screenshot), they were clicked in 34 of 65 instances.

Image Credit: Kevin Indig

Keep in mind, study participants were directed to take all steps to move through a shopping selection, including making a decision on an item and adding it to cart – just like a high-purchase-intent user would outside of the study environment.

However, when the search task was informational and non-transactional, the number of external clicks to sources outside the AI Mode output was nearly zero across all tasks in this study.

There were common sequences to user behavior when the search task was shopping-related:

  • Shopping Pack clicked → Merchant Card pop-up opened (occurrence: 28 times).
  • Inline Text Link clicked → Merchant Card pop-up opened (occurrence: 17 times).
  • Right panel clicked only (occurrence: 15 times).

Shopping packs are popular elements that people click on when they want to buy. Remember, clicking on one item or product in a pack brings up the detailed view of that one item (a Merchant Card).

One logical reason? They have images (common UX wisdom says people click where there’s an image).

Questions Are The New Search Habit – And Reveal An Interesting Behavior Pattern

It’s no mystery that users have been leaning further into conversational search since the advent of ChatGPT & Co.

This AI Mode study verified this once again, but the data also surfaced an interesting finding.

Out of 250 tasks, 88.8% of the prompts were framed as conversational, chatbot-style prompts, while 11.2% resembled classic keyword-style search queries. However, it’s important to note that we only analyzed each user’s initial query, not subsequent follow-ups.

This means users are overwhelmingly leaning toward conversational (chatbot-like) interactions rather than the “search-like” phrasing of the past.

But here’s the unusual pattern we spotted in the data:

Users who phrased queries conversationally were much more likely to click out to external websites.

This is interesting because this behavior pattern may also mean “experienced” AI-based search or AI chat users click more to validate or explore information.

This is one hypothesis of why this pattern occurs. Another idea?

If a user takes the time to write out a question, they are more careful in their approach to finding information, and therefore they also want to look outside the “walled garden” of AI Mode. This behavior could influence any search personas you develop for your brand.

Our data did not point to an entirely clear reason why the longer conversational phrasing was correlated to a higher likelihood of external website clicks, but it’s noteworthy nonetheless.


Featured Image: Paulo Bobita/Search Engine Journal

Are LLM Visibility Trackers Worth It?

TL;DR

  1. When it comes to LLM visibility, not all brands are created equal. For some, it matters far more than others.
  2. LLMs give different answers to the same question. Trackers combat this by simulating prompts repeatedly to get an average visibility/citation score.
  3. While simulating the same prompts isn’t perfect, secondary benefits like sentiment analysis address issues that aren’t SEO-specific. Which, right now, is a good thing.
  4. Unless a visibility tracker offers enough scale at a reasonable price, I would be wary. But if the traffic converts well and you need to know more, get tracking.
(Image Credit: Harry Clarkson-Bennett)

A small caveat to start. This really depends on how your business makes money and whether LLMs are a fundamental part of your audience journey. You need to understand how people use LLMs and what it means for your business.

Brands that sell physical products have a different journey from publishers that sell opinion or SaaS companies that rely more deeply on comparison queries than anyone else.

Or a coding company destroyed by one snidey Reddit moderator with a bone to pick…

For example, Ahrefs made public some of its conversion rate data from LLMs: 12.1% of their signups came from LLMs, from just 0.5% of their total traffic. Which is huge.

AI search visitors convert 23x better than traditional organic search visitors for Ahrefs. (Image Credit: Harry Clarkson-Bennett)

But for us, LLM traffic converts significantly worse. It is a fraction of a fraction.

Honestly, I think LLM visibility trackers at this scale are a bit here today and gone tomorrow. If you can afford one, great. If not, don’t sweat it. Take it all with a pinch of salt. AI search is just a part of most journeys, and tracking the same prompts day in, day out has obvious flaws.

They’re just aggregating what someone said about you on Reddit while they’re taking a shit in 2016.

What Do They Do?

Trackers like Profound and Brand Radar are designed to show you how your brand is framed and recommended in AI answers. Over time, you can measure your and your competitors’ visibility across the platforms.

Image Credit: Harry Clarkson-Bennett

But LLM visibility is smoke and mirrors.

Ask a question, get an answer. Ask the same question, to the same machine, from the same computer, and get a different answer. A different answer with different citations and businesses.

It has to be like this, or else we’d never use the boring ones.

To combat the inherent variance determined by their temperature setting, LLM trackers simulate prompts repeatedly throughout the day. In doing so, you get an average visibility and citation score alongside some other genuinely useful add-ons like your sentiment score and some competitor benchmarking.

“Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

OpenAI Documentation

Simulate a prompt 100 times. If your content was used in 70 of the responses and you were cited seven times, you would have a 70% visibility score and a 7% citation score.
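That arithmetic is trivial to reproduce yourself. Here’s a minimal sketch of how a tracker might turn repeated runs into those two scores; the response format and brand name are hypothetical stand-ins, not any real tracker’s schema:

```python
# Sketch: compute visibility and citation scores from repeated prompt runs.
# Each run is a hypothetical dict: {"text": answer text, "citations": [urls]}.

def score_runs(runs, brand):
    """Return visibility % (brand mentioned) and citation % (brand linked)."""
    b = brand.lower()
    appearances = sum(1 for r in runs if b in r["text"].lower())
    citations = sum(1 for r in runs if any(b in u.lower() for u in r["citations"]))
    n = len(runs)
    return {
        "visibility_pct": 100 * appearances / n,
        "citation_pct": 100 * citations / n,
    }

# 100 simulated runs: the brand appears in 70 answers and is cited in 7.
runs = (
    [{"text": "Acme is a top pick", "citations": ["https://acme.com/guide"]}] * 7
    + [{"text": "Acme is a top pick", "citations": []}] * 63
    + [{"text": "Other brands only", "citations": []}] * 30
)
print(score_runs(runs, "Acme"))  # {'visibility_pct': 70.0, 'citation_pct': 7.0}
```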

Trust me, that’s much better than it sounds… These engines do not want to send you traffic.

In Brian Balfour’s excellent words, they have identified the moat and the gates are open. They will soon shut. As they shut, monetization will be hard and fast. The likelihood of any referral traffic, unless it’s monetized, is low.

Like every tech company ever.

If you aren’t flush with cash, I’d say most businesses just do not need to invest in them right now. They’re a nice-to-have rather than a necessity for most of us.

How Do They Work?

As far as I can tell, there are two primary models.

  1. Pay for a tool that tracks specific synthetic prompts that you add yourself.
  2. Purchase an enterprise-like tool that tracks more of the market at scale.

Some tools, like Profound, offer both. The cheaper model (the price point is not for most businesses) lets you track synthetic prompts under topics and/or tags. The enterprise model gives you a significantly larger scale.

Tools like Ahrefs Brand Radar, by contrast, provide a broader view of the entire market. As the prompts are all synthetic, there are some fairly large holes. But I prefer broad visibility.

I have not used it yet, but I believe Similarweb have launched their own LLM visibility tracker, which includes real user prompts from Clickstream data.

This makes for a far more useful version of these tools IMO and goes some way to answering the synthetic elephant in the room. And it helps you understand the role LLMs play in the user journey. Which is far more valuable.

The Problem

Does doing good SEO improve your chances of improving your LLM visibility?

Certainly looks like it…

GPT-5 no longer needs to train on more information. It is as well-versed as its overlords now want to pay for. Bored of ravaging the internet’s detritus, it reaches out to a search index using RAG to verify a response it does not quite have the confidence to give on its own.

But I’m sure the playbook will need modifying somewhat if your primary goal is to increase LLM visibility. Increased expenditure on TOFU and digital PR campaigns is a notable example.

Image Credit: Harry Clarkson-Bennett

Right now, LLMs have an obvious spam problem. One I don’t expect they’ll be willing to invest in solving anytime soon. The AI bubble and gross valuation of these companies will dictate how they drive revenue. And quickly.

It sure as hell won’t be sorting out their spam problem. When you have a $300 billion contract to pay and revenues of $12 billion, you need some more money. Quickly.

So anyone who pays for best page link inclusions or adds hidden and footer text to their websites will benefit in the short-term. But most of us should still build things for actual, breathing, snoring people.

With the new iterations of LLM trackers calling search instead of formulating an answer for prompts based on learned ‘knowledge’, it becomes even harder to create an ‘LLM optimization strategy.’

As a news site, I know that most prompts we would vaguely show up in would trigger the web index. So I just don’t quite see the value. It’s very SEO-led.

If you don’t believe me, Will Reynolds is an inarguably better source of information (Image Credit: Harry Clarkson-Bennett)

How You Can Add Value With Sentiment Analysis

I found almost zero value to be had from tracking prompts in LLMs at a purely answer level. So, let’s forget all that for a second and use them for something else. Let’s start with some sentiment analysis.

These trackers give us access to:

  • A wider online sentiment score.
  • Review sources LLMs called upon (at a prompt level).
  • Sentiment scores by topics.
  • Prompts and links to on and off-site information sources.

You can identify where some of these issues start. Which, to be fair, is basically Trustpilot and Reddit.

I won’t go through everything, but a couple of quick examples:

  1. LLMs may be referencing some not-so-recently defunct podcasts and newsletters as “reasons to subscribe.”
  2. Your cancellation process may be cited as the most serious issue for most customers.

Unless you have explicitly stated that these podcasts and newsletters have finished, it’s all fair game. You need to tighten up your product marketing and communications strategy.

For people first. Then for LLMs.

These are not SEO-specific projects. We’re moving into an era where SEO-only projects will be difficult to get pushed through. A fantastic way of getting buy-in is to highlight projects with benefits outside of search.

Highlighting serious business issues – poor reviews, inaccurate or out-of-date information, and the like – can help get C-suite attention and support for some key brand reputation projects.

Profound’s sentiment analysis tab (Image Credit: Harry Clarkson-Bennett)
Here it is broken down by topic. You can see individual prompts and responses to each topic (Image Credit: Harry Clarkson-Bennett)

To me, this has nothing to do with LLMs. Or what our audience might ask an ill-informed answer engine. They are just the vessel.

It is about solving problems. Problems that drive real value to your business. In your case, this could be about increasing the LTV of a customer. Increasing their retention rate, reducing churn, and increasing the chance of a conversion by providing an improved experience.

If you’ve worked in SEO for long enough, someone will have floated the idea of improving your online sentiment and reviews past you.

“But will this improve our SEO?”

Said Jeff, a beleaguered business owner.

Who knows, Jeff. It really depends on what is holding you back compared to your competition. And like it or not, search is not very investible right now.

But that doesn’t matter in this instance. This isn’t a search-first project. It’s an audience-first project. It encompasses everyone. From customer service to SEO and editorial. It’s just the right thing to do for the business.

A quick hark back to the Google Leak shows you just how many review and sentiment-focused metrics may affect how you rank.

There are nine alone that mention review or sentiment in the title (Image Credit: Harry Clarkson-Bennett)

For a long time, search has been about brands and trust. Branded search volume, outperforming expected CTR (a Bayesian type predictive model), direct traffic, and general user engagement and satisfaction.

This isn’t because Google knows better than people. It’s because they have stored how we feel about pages and brands in relation to queries and used that as a feedback loop. Google trusts brands because we do.

Most of us have never had to worry about reviews and sentiment. But this is a great time to fix any issues you may have under the guise of AEO, GEO, SEO, or whatever you want to call it.

Lars Lofgren’s article titled How a Competitor Crippled a $23.5M Bootcamp By Becoming a Reddit Moderator is an incredible look at how Codesmith was nobbled by negative PR. Negative PR started and maintained by one Reddit Mod. One.

So keeping tabs on your reputation and identifying potentially serious issues is never a bad thing.

Could I Just Build My Own?

Yep. For starters, you’d need an estimation of monthly LLM API costs based on the number of monthly tokens required. Let’s use Profound’s lower-end pricing tier as an estimate and our old friend Gemini to figure out some estimated costs.

  • 200 prompts × 10 runs × 12 days (approx.) = 24,000 monthly runs, spread across 3 models.
  • 24,000 runs × 1,000 tokens/query (conservative est.) = 24,000,000 tokens.
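Taking those 24,000 runs and 24M tokens at face value, the monthly API bill is a one-liner per model. The per-million-token prices below are placeholders, not real provider pricing; substitute current rates for whichever models you track:

```python
# Back-of-envelope cost model for a DIY tracker's LLM API spend.
# Prices per million tokens are PLACEHOLDERS -- check current provider pricing.

PROMPTS, RUNS_PER_PROMPT, DAYS = 200, 10, 12
runs_per_month = PROMPTS * RUNS_PER_PROMPT * DAYS          # 24,000 runs
tokens_per_run = 1_000                                     # conservative estimate
tokens_per_month = runs_per_month * tokens_per_run         # 24,000,000 tokens

price_per_million = {"model_a": 0.50, "model_b": 1.00, "model_c": 2.00}  # hypothetical
for model, price in price_per_million.items():
    cost = tokens_per_month / 1_000_000 * price
    print(f"{model}: ${cost:.2f}/month")
```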

Based on this, here’s a (hopefully) accurate cost estimate per model from our robot pal.

Image Credit: Harry Clarkson-Bennett

Right then. You now need some back-end functionality, data storage, and some front-end visualization. I’ll tot up as we go.

$21 per month

Back-End

  • A Scheduler/Runner like Render VPS to execute 800 API calls per day.
  • A data orchestrator. Essentially, some Python code to parse raw JSON and extract relevant citation and visibility data.

$10 per month
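The orchestrator really is just a parsing step. A minimal sketch, assuming a hypothetical raw-response shape (real provider payloads differ, so the field names here are illustrative):

```python
import json

# Hypothetical raw response as the runner might store it;
# real LLM API payloads have different, provider-specific shapes.
raw = json.dumps({
    "prompt": "best waterproof canvas bags",
    "model": "model_a",
    "answer": "Acme and Globex both make solid waterproof bags.",
    "sources": ["https://acme.com/bags", "https://reviews.example/canvas"],
})

def extract_metrics(raw_json, brand_domain):
    """Flatten one raw response into the row a dashboard would store."""
    r = json.loads(raw_json)
    brand_name = brand_domain.split(".")[0]  # crude: "acme.com" -> "acme"
    return {
        "prompt": r["prompt"],
        "model": r["model"],
        "brand_mentioned": brand_name in r["answer"].lower(),
        "brand_cited": any(brand_domain in s for s in r["sources"]),
        "n_sources": len(r["sources"]),
    }

row = extract_metrics(raw, "acme.com")
print(row)
```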

Data Storage

  • A database, like Supabase (which you can integrate directly through Lovable), to store raw responses and structured metrics.
  • Data storage (which should be included as part of your database).

$15 per month

Front-End Visualization

  • A web dashboard to create interactive, shareable dashboards. I unironically love Lovable. It’s easy to connect directly to databases. I have also used Streamlit previously. Lovable looks far sleeker but has its own challenges.
  • You may also need a visualization library to help generate time series charts and graphs. Some dashboards have this built in.

$50 per month

$96 all in, though the likelihood is it’s closer to $50 than $100. No scrimping: even at the higher end of budgets for tools I use (Lovable), with some estimates from Gemini, we’re talking about a tool that will cost under $100 a month to run and function very well.

This isn’t a complicated project or setup. It is, IMO, an excellent project to learn the vibe coding ropes. Which I will say is not all sunshine and rainbows.

So, Should I Buy One?

If you can afford it, I would get one. For at least a month or two. Review your online sentiment. See what people really say about you online. Identify some low-lift wins around product marketing and review/reputation management, and review how your competitors fare.

This might be the most important part of LLM visibility. Set up a tracking dashboard via Google Analytics (or whatever dreadful analytics provider you use) and see a) how much traffic you get and b) whether it’s valuable.

The more valuable it is, the more value there will be in tracking your LLM visibility.
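If your analytics tool doesn’t segment this for you, classifying referrers yourself is simple. A sketch with a hand-maintained list of LLM referrer domains (the list is illustrative, not exhaustive, and new assistants appear constantly):

```python
from urllib.parse import urlparse

# Illustrative, hand-maintained list -- extend as new assistants appear.
LLM_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_llm_referral(referrer_url):
    """True if the referrer hostname matches a known LLM domain (or subdomain)."""
    host = urlparse(referrer_url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in LLM_REFERRERS)

print(is_llm_referral("https://chatgpt.com/"))           # True
print(is_llm_referral("https://www.google.com/search"))  # False
```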

You could also make one. The joy of making one is a) you can learn a new skill and b) you can make other things for the same cost.

Frustrating, yes. Fun? Absolutely.

This post was originally published on Leadership In SEO.


Featured Image: Viktoriia_M/Shutterstock

Google Answers What To Do For AEO/GEO via @sejournal, @martinibuster

Google’s VP of Product, Robby Stein, recently answered the question of what people should think about in terms of AEO/GEO. He provided a multi-part answer that began with how Google’s AI creates answers and ended with guidance on what creators should consider.

Foundations Of Google AI Search

The question asked was about AEO/GEO, which the podcast host characterized as the evolution of SEO. Robby Stein’s answer suggested thinking about the context of AI answers.

This is the question that was asked:

“What’s your take on this whole rise of AEO, GEO, which is kind of this evolution of SEO?

I’m guessing your answer is going to be just create awesome stuff and don’t worry about it, but you know, there’s a whole skill of getting to show up in these answers. Thoughts on what people should be thinking about here?”

Stein began his answer by describing the foundations of how Google’s AI search works:

“Sure. I mean, I can give you a little bit of under the hood, like how this stuff works, because I do think that helps people understand what to do.

When our AI constructs a response, it’s actually trying to, it does something called query fan-out, where the model uses Google search as a tool to do other querying.

So maybe you’re asking about specific shoes. It’ll add and append all of these other queries, like maybe dozens of queries, and start searching basically in the background. And it’ll make requests to our data kind of backend. So if it needs real-time information, it’ll go do that.

And so at the end of the day, actually something’s searching. It’s not a person, but there’s searches happening.”

Robby Stein shows that Google’s AI still relies on conventional search engine retrieval; it’s just scaled and automated. The system performs dozens of background searches and evaluates the same quality signals that guide ordinary search rankings.

That means that “answer engine optimization” is basically the same as SEO because the underlying indexing, ranking and quality factors inherent to traditional SEO principles still apply to queries that the AI itself issues as part of the query fan-out process.

For SEOs, the insight is that visibility in AI answers depends less on gaming a new algorithm and more on producing content that satisfies intent so thoroughly that Google’s automated searches treat it as the best possible answer. As you’ll see later in this article, originality also plays a role.
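Conceptually, the fan-out Stein describes looks something like the toy sketch below. To be clear, `expand()` and `search()` are stand-ins for Google’s internal systems, and the tiny index is fabricated for illustration; only the shape of the process comes from Stein’s description:

```python
# Toy illustration of query fan-out: one user question is expanded into
# several background searches whose results are pooled and ranked.

def expand(query):
    # In the real system an LLM generates these; hard-coded here.
    return [query, query + " reviews", query + " vs alternatives", "best " + query]

def search(query):
    # Stand-in for a real search backend: return (url, relevance) pairs.
    fake_index = {
        "trail running shoes": [("https://example.com/guide", 0.9)],
        "trail running shoes reviews": [("https://example.com/reviews", 0.8)],
    }
    return fake_index.get(query, [])

def fan_out(user_query):
    results = {}
    for sub in expand(user_query):          # dozens of sub-queries in practice
        for url, score in search(sub):
            results[url] = max(score, results.get(url, 0))
    # Highest-scoring pages ground the AI-written answer.
    return sorted(results, key=results.get, reverse=True)

print(fan_out("trail running shoes"))
```

The practical point for SEOs: a page only enters the answer if it ranks for at least one of those background sub-queries, which is why conventional ranking signals still matter.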

Role Of Traditional Search Signals

An interesting part of this discussion is centered on the kinds of quality signals that Google describes in its Quality Raters Guidelines. Stein talks about originality of the content, for example.

Here’s what he said:

“And then each search is paired with content. So if for a given search, your webpage is designed to be extremely helpful.

And then you can look up Google’s human rater guidelines and read… what makes great information? This is something Google has studied more than anyone.

And it’s like:

  • Do you satisfy the user intent of what they’re trying to get?
  • Do you have sources?
  • Do you cite your information?
  • Is it original or is it repeating things that have been repeated 500 times?

And there’s these best practices that I think still do largely apply because it’s going to ultimately come down to an AI is doing research and finding information.

And a lot of the core signals, is this a good piece of information for the question, they’re still valid. They’re still extremely valid and extremely useful. And that will produce a response where you’re more likely to show up in those experiences now.”

Although Stein is describing AI Search results, his answer shows that Google’s AI Search still values the same underlying quality factors found in traditional search. Originality, source citations, and satisfying intent remain the foundation of what makes information “good” in Google’s view. AI has changed the interface of search and encouraged more complex queries, but the ranking factors continue to be the same recognizable signals related to expertise and authoritativeness.

More On How Google’s AI Search Works

The podcast host, Lenny, followed up with another question about how Google’s AI Search might differ from a strictly chatbot approach.

He asked:

“It’s interesting your point about how it goes in searches. When you use it, it’s like searching a thousand pages or something like that. Is that just a different core mechanic to how other popular chatbots work, because the others don’t go search a bunch of websites as you’re asking?”

Stein answered with more details about how AI search works, going beyond query fan-out, identifying factors it uses to surface what they feel to be the best answers. For example, he mentions parametric memory. Parametric memory is the knowledge that an AI has as part of its training. It’s essentially the knowledge stored within the model and not fetched from external sources.

Stein explained:

“Yeah, this is something that we’ve done uniquely for our AI. It obviously has the ability to use parametric memory and thinking and reasoning and all the things a model does.

But one of the things that makes it unique for designing it specifically for informational tasks, like we want it to be the best at informational needs. That’s what Google’s all about.

  • And so how does it find information?
  • How does it know if information is right?
  • How does it check its work?

These are all things that we built into the model. And so there is a unique access to Google. Obviously, it’s part of Google search.

So it’s Google search signals, everything from spam, like what’s content that could be spam and we don’t want to probably use in a response, all the way to, this is the most authoritative, helpful piece of information.

We’re going link to it and we’re going to explain, hey, according to this website, check out that information and you’re going to probably go see that yourself.

So that’s how we’ve thought about designing this.”

Stein’s explanation makes it clear that Google’s AI Search is not designed to mimic the conversational style of general chatbots but to reinforce the company’s core goal of delivering trustworthy information that’s authoritative and helpful.

Google’s AI Search does this by relying on signals from Google Search, such as spam detection and helpfulness. The system grounds its AI-generated answers in the same evaluation and ranking framework used in regular search ranking.

This approach positions AI Search as less a standalone version of search and more like an extension of Google’s information-retrieval infrastructure, where reasoning and ranking work together to surface factually accurate answers.

Advice For Creators

Stein at one point acknowledges that creators want to know what to do for AI Search. His advice is essentially to think about the questions people are asking. In the old days, that meant thinking about which keywords searchers were using. He explains that keyword-level thinking no longer applies because people now use long, conversational queries.

He explained:

“I think the only thing I would give advice to would be, think about what people are using AI for.

I mentioned this as an expansionary moment, …that people are asking a lot more questions now, particularly around things like advice or how to, or more complex needs versus maybe more simple things.

And so if I were a creator, I would be thinking, what kind of content is someone using AI for? And then how could my content be the best for that given set of needs now?
And I think that’s a really tangible way of thinking about it.”

Stein’s advice doesn’t add anything new, but it does reframe the basics of SEO for the AI Search era. Instead of optimizing for isolated keywords, creators should anticipate the fuller intent and informational journey inherent in conversational questions. That means structuring content to directly satisfy complex informational needs, especially “how to” or advice-driven queries that users increasingly pose to AI systems rather than traditional keyword search.

Takeaways

  • AI Search Is Still Built on Traditional SEO Signals
    Google’s AI Search relies on the same core ranking principles as traditional search: intent satisfaction, originality, and citation of sources.
  • How Query Fan-Out Works
    AI Search issues dozens of background searches per query, using Google Search as a tool to fetch real-time data and evaluate quality signals.
  • Integration of Parametric Memory and Search Signals
    The model blends stored knowledge (parametric memory) with live Google Search data, combining reasoning with ranking systems to ensure factual accuracy.
  • Google’s AI Search Is Like An Extension of Traditional Search
    AI Search isn’t a chatbot; it’s a search-based reasoning system that reinforces Google’s informational trust model rather than replacing it.
  • Guidance for Creators in the AI Search Era
    Optimizing for AI means understanding the user intent behind long, conversational queries, focusing on advice- and how-to-style content that directly satisfies complex informational needs.

Google’s AI Search builds on the same foundations that have long defined traditional search, using retrieval, ranking, and quality signals to surface information that demonstrates originality and trustworthiness. By combining live search signals with the model’s own stored knowledge, Google has created a system that explains information and cites the websites that provided it. For creators, this means that success now depends on producing content that fully addresses the complex, conversational questions people bring to AI systems.

Watch the podcast segment starting at about the 15:30 minute mark:

Featured Image by Shutterstock/PST Vector

How Leaders Are Using AI Search to Drive Growth [Webinar] via @sejournal, @hethr_campbell

Turn Data Into an Actionable AI Search Strategy

AI search is transforming consumer behavior faster than any shift in the past 20 years. Many teams are chasing visibility, but few understand what the data actually means for their business or how to act on it.

Join Mark Traphagen, VP of Product Marketing and Training at seoClarity, and Tania German, VP of Marketing at seoClarity, for a live webinar designed for SEOs, digital leaders, and executives. You’ll learn how to interpret AI search data and apply it to your strategy to drive real business results.

What You’ll Learn

  • Why consumer discovery is changing so rapidly.
  • How visibility drives revenue with Instant Checkout in ChatGPT.
  • What Google’s AI Overviews and AI Mode mean for your brand’s presence.
  • Tactics to improve mentions, citations, and visibility on AI search engines.

Why Attend

This webinar gives you the clarity and measurement framework needed to confidently answer, “What’s our AI search strategy?” Walk away with a playbook you can use to lead your organization through the AI search shift successfully.

Register now to secure your seat and get a clear, data-backed framework for AI search strategy.

🛑 Can’t attend live? Register anyway, and we’ll send the full recording.

The AI Search Effect: What Agencies Need To Know For Local Search Clients

This post was sponsored by GatherUp. The opinions expressed in this article are the sponsor’s own.

Local Search Has Changed: From “Found” to “Chosen”

Not long ago, showing up in a Google search was enough. A complete Google Business Profile (GBP) and a steady stream of reviews could put your client in front of the right customers.

But today’s local search looks very different. It’s no longer just about being found; it’s about being chosen.

That shift has only accelerated with the rise of AI-powered search. Instead of delivering a list of links, engines like ChatGPT, Google’s Gemini, and Perplexity now generate instant summaries. These summaries are changing the way consumers interact with search results, and they determine whether or not your client’s business gets seen at all.

Reality Check: if listings aren’t accurate, consistent, and AI-ready, businesses risk invisibility.

AI Search Is Reshaping Behavior & Brand Visibility

AI search is already reshaping behavior.

Only 8% of users click a traditional link when an AI summary appears. That means the majority of your clients’ potential customers are making decisions without ever leaving the AI-generated response.

So, how does AI decide which businesses to include in its answers? Two categories of signals matter most: accurate, consistent business listings and authentic, recent customer reviews.

Put simply, if a client’s listings are messy, incomplete, or outdated, AI is far less likely to surface them in a summary. And that’s a problem, considering more than 4 out of 5 people use search engines to find local businesses.

The Hidden Dangers of Neglected Listings

Agencies know the pain of messy listings firsthand. But your clients may not realize just how damaging it can be:

  • Trust erosion: 80% of consumers lose trust in businesses with incorrect or inconsistent information.
  • Lost visibility: Roughly a third of local organic results now come from business directories. If listings are incomplete, that’s a third of opportunities gone.
  • Negative perception: A GBP with outdated hours or broken URLs communicates neglect, not professionalism.

Consider “Mary,” a marketing director overseeing 150+ locations. Without automation, her team spends hours chasing duplicate profiles, correcting seasonal hours, and fighting suggested edits. Updates lag behind reality. Customers’ trust slips. And every inconsistency is another signal to search engines, and now AI, that the business isn’t reliable.

For many agencies, the result is more than frustrated clients. It’s a high churn risk.

Why This Matters More Than Ever to Consumers

Consumers expect accuracy at every touchpoint, and they’re quick to lose confidence when details don’t add up.

  • 80% of consumers lose trust in a business with incorrect or inconsistent information, like outdated hours, wrong addresses, or broken links.
  • A Google Business Profile with missing fields or duplicate entries signals neglect.
  • When AI engines surface summaries, they pull from this listings data. Inconsistencies make it less likely your client’s business will appear at all.

Reviews still play a critical role, but they work best when paired with clean, consistent listings. 99% of consumers read reviews before choosing a business, and 68% prioritize recent reviews over overall star ratings. If the reviews say “great service” but the business shows the wrong phone number or closed hours, that trust is instantly broken.

In practice, this means agencies must help clients maintain both accurate listings and authentic reviews. Together, they signal credibility to consumers and to AI search engines deciding which businesses make the cut.

Real-World Data: The ROI of Getting Listings Right

Agencies that take listings seriously are already seeing outsized returns:

  • A healthcare agency managing 850+ locations saved 132 hours per month and reduced costs by $21K annually through listings automation, delivering a six-figure annual ROI.
  • A travel brand optimizing global listings recorded a 200% increase in Google visibility and a 30x rise in social engagement.
  • A retail chain improving profile completeness saw a 31% increase in revenue attributed to local SEO improvements.

The proof is clear: accurate, consistent, and scalable listings management is no longer optional. It’s a revenue driver.

Actionable Steps Agencies Can Take Right Now

AI search is moving fast, but agencies don’t have to be caught flat-footed. Here are five practical steps to protect your clients’ visibility and trust.

1.  Audit Listings for Accuracy and Consistency

Start with a full audit of your clients’ GBPs and directory listings. Look for mismatches in hours, addresses, URLs, and categories. Even small discrepancies send negative signals to both consumers and AI search engines.

I know you updated your listings last year, and not much has changed, but unless your business is a time capsule, your customers expect real-time accuracy.

2.  Eliminate Duplicates

Duplicate listings aren’t just confusing to customers; they actively hurt SEO. Suppress duplicates across directories and consolidate data at the source to prevent aggregator overwrites. Google penalized 6.1% of business listings flagged for duplicate or spam entries in Q1 alone, underscoring how seriously platforms are taking accuracy enforcement.

3.  Optimize for Engagement

Encourage clients to respond authentically to reviews. Research shows 73% of consumers will give a business a second chance if they receive a thoughtful response to a negative review. Engagement isn’t just customer service; it’s a ranking signal.

4.  Create AI-Readable Content

AI thrives on structured, educational content. Encourage clients to build out their web presence with FAQs, descriptive product or service pages, and customer-centric content that mirrors natural language. This makes it easier for AI to pull them into summaries.
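One common way to make FAQ content explicitly machine-readable is FAQPage structured data. The roofing business and question below are hypothetical, and structured data doesn’t guarantee inclusion in AI summaries; treat this as a sketch of the markup pattern rather than a promise of results:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do you offer roof cleaning?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. We offer roof cleaning alongside our repair and inspection services."
    }
  }]
}
```

Markup like this is typically embedded in a page inside a `<script type="application/ld+json">` tag, mirroring the question-and-answer content already visible to visitors.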

5.  Automate at Scale

Manual updates don’t cut it for multi-location brands. Implement automation for bulk publishing, data synchronization, and ongoing updates. This ensures accuracy and saves agencies countless hours of low-value labor.

The AI Opportunity: Agencies as Strategic Partners

For agencies, the rise of AI search is both a threat and an opportunity. Yes, clients who ignore their listings risk becoming invisible. But agencies that lean in can position themselves as strategic partners, helping businesses adapt to a disruptive new era.

That means reframing listings management not as “background work,” but as the foundation of trust and visibility in AI-powered search.

As GatherUp’s research concludes, “In the AI-driven search era, listings are no longer background work; they are the foundation of visibility and trust.”

The Time to Act Is Now

AI search is here, and it’s rewriting the rules of local visibility. Agencies that fail to help their clients adapt risk irrelevance.

But those that act now can deliver measurable growth, stronger client relationships, and defensible ROI.

The path forward is clear: audit listings, eliminate duplicates, optimize for engagement, publish AI-readable content, and automate at scale.

And if you want to see where your clients stand today, GatherUp offers a free listings audit to help identify gaps and opportunities.

👉 Run a free listings audit and see how your business measures up.

Image Credits

Featured Image: Image by GatherUp. Used with permission.

In-Post Images: Image by GatherUp. Used with permission.

5 SEO Tactics to Be Seen & Trusted on AI Search [Webinar] via @sejournal, @duchessjenm

Is your brand ready for AI-driven SERPs?

Search is evolving faster than ever. AI-driven engines like ChatGPT, Google SGE, and Bing Copilot are changing how users discover and trust brands. Traditional SEO tactics alone may no longer guarantee visibility or authority in Answer Engines.

Discover five proven tactics to protect your SERP presence and maintain trust in AI search.

What You’ll Learn

Craig Smith, Chief Strategy Officer at Outerbox, will show exactly how to adapt your SEO strategy for generative search and answer engines. 

You’ll walk away with actionable steps you can put into practice immediately.

Register now to get the SEO playbook your competitors wish they had.

Why You Can’t Miss This Webinar

AI Overviews are already impacting traffic. Brands that adapt now will dominate visibility and authority while others fall behind.

🛑 Can’t attend live? Register anyway and we’ll send you the recording so you can watch at your convenience.

Google Explains Next Generation Of AI Search via @sejournal, @martinibuster

Google’s Robby Stein, VP of Product, explained that Google Search is converging with AI in a new way that builds on three pillars. The implications for online publishers, SEOs, and ecommerce stores are profound.

Three Pillars Of AI Search

Google’s Stein said that there are three essential components to the “next generation” of Google Search:

  1. AI Overviews
  2. Multimodal search
  3. AI Mode

AI Overviews provide natural language search. Multimodal search introduces new ways of searching with images, enabled by Google Lens. AI Mode harnesses web content and structured knowledge to provide a conversational, turn-based way of discovering information and learning. Stein indicates that all three of these components will converge as the next step in the evolution of search. This is coming.

Stein explained:

“I can tell you there’s kind of three big components to how we can think about AI search and kind of the next generation of search experiences. One is obviously AI overviews, which are the quick and fast AI you get at the top of the page many people have seen. And that’s obviously been something growing very, very quickly. This is when you ask a natural question, you put it into Google, you get this AI now. It’s really helpful for people.

The second is around multimodal. This is visual search and lens. That’s the other big piece. You go to the camera in the Google app, and that’s seeing a bunch of growth.

And then with AI mode, it brings it all together. It creates an end-to-end frontier search experience on state-of-the-art models to really truly let you ask anything of Google search.”

AI Mode Triggered By Complex Queries

Screenshot showing how a complex two-sentence query automatically triggers an AI Mode preview.

The above screenshot shows a complex two-sentence search query entered into Google’s search box. The complex query automatically triggers an AI Mode preview with a “Show more” link that leads to an immersive AI Mode conversational search experience. Publishers who wish to be cited need to think about how their content will fit into this kind of context.

Next Generation Of Google: AI Mode Is Like A Brain

Stein described the next frontier of search as something radically different from what we know as Google Search. Many SEOs still think of search as a ranking paradigm built around ten blue links. That paradigm hasn’t quite existed since Google debuted Featured Snippets back in 2014, meaning the concept of ten blue links has been out of step with the reality of Google’s search results for eleven years.

What Stein goes on to describe completely does away with the concept of ten blue links, replacing it with the concept of a brain that users can ask questions of and interact with. SEOs, merchants, and other publishers need to let go of the mental model of ten blue links and focus on surfacing content within an interactive natural language environment that sits completely outside of traditional search.

Stein explained this new concept of a brain in the context of AI Mode:

“You can go back and forth. You can have a conversation. And it taps into and is specially designed for search. So what does that mean? One of the cool things that I think it does is it’s able to understand all of this incredibly rich information that’s within Google.

  • So there’s 50 billion products in the Google Shopping Graph, for instance. They’re updated 2 billion times an hour by merchants with live prices.
  • You have 250 million places and maps.
  • You have all of the finance information.
  • And not to mention, you have the entire context of the web and how to connect to it so that you can get context, but then go deeper.

And you put all of that into this brain that is effectively this way to talk to Google and get at this knowledge.

That’s really what you can do now. So you can ask anything on your mind and it’ll use all of this information to hopefully give you super high quality and informed information as best as we can.”

Stein’s description shows that Google’s long-term direction is to move beyond retrieval toward an interactive, turn-based mode of information discovery. The “brain” metaphor signals that search will increasingly be less about locating web pages and more about generating informed responses built from Google’s own structured data, knowledge graphs, and web content. This represents a fundamental change, and as you’ll see in the following paragraphs, this change is happening right now.

AI Mode Integrates Everything

Stein describes how Google is increasingly triggering AI Mode as the next evolution of how users find answers to questions and discover information about the world immediately around them. This goes beyond asking “what’s the best kayak” and becomes more of a natural language conversation, an information journey that can encompass images, videos, and text, just like in real life. It’s an integrated experience that goes way beyond a simple search box and ten links.

Stein provided more information about what this will look like:

“And you can use it directly at this google.com/ai, but it’s also been integrated into our core experiences, too. So we announced you can get to it really easily. You can ask follow-up questions of AI overviews right into AI mode now.

Same for the lens stuff, take a picture, takes it to AI mode. So you can ask follow-up questions and go there, too. So it’s increasingly an integrated experience into the core part of the product.”

How AI Will Converge Into One Interface

At this point the host of the podcast asked for a clearer explanation of how all of these things will be integrated.

He asked:

“I imagine much of this is… wait and see how people use it. But what’s the vision of how all these things connect?

Is the idea to continue having this AI mode on the side, AI overviews at the top, and then this multimodal experience? Or is there a vision of somehow pushing these together even more over time?”

Stein answered that all of these modes of information discovery will converge together. Google will be able to detect by the query whether to trigger AI Mode or just a simple search. There won’t be different interfaces, just the one.

Stein explained:

“I think there’s an opportunity for these to come closer together. I think that’s what AI Mode represents, at least for the core AI experiences. But I think of them as very complementary to the core search product.

And so you should be able to not have to think about where you’re asking a question. Ultimately, you just go to Google.

And today, if you put in whatever you want, we’re actually starting to use much of the power behind AI mode, right in AI Overviews. So you can just ask really hard, you could put a five-sentence question right into Google search.

You can try it. And then it should trigger AI at the top, it’s a preview. And then you can go deeper into AI mode and have this back and forth. So that’s how these things connect.

Same for your camera. So if you take a picture of something, like, what’s this plant? Or how do I buy these shoes? It should take you to an AI little preview. And then if you go deeper, again, it’s powered by AI mode. You can have that back and forth.

So you shouldn’t have to think about that. It should feel like a consistent, simple product experience, ultimately. But obviously, this is a new thing for us. And so we wanted to start it in a way that people could use and give us feedback with something like a direct entry point, like google.com/AI.”

Stein’s answer shows that Google is moving from separate AI features toward one unified search system that interprets intent and context automatically.

  • For users, that means typing, speaking, or taking a picture will all connect to the same underlying process that decides how to respond.
  • For publishers and SEOs, it means visibility will depend less on optimizing for keywords and more on aligning content with how Google understands and responds to different kinds of questions.

How Content Can Fit Into AI Triggered Search Experiences

Google is transitioning users out of the traditional ten blue links paradigm into a blended AI experience. Users can already enter questions consisting of multiple sentences, and Google will automatically transition into an AI Mode question-and-answer experience. The answer is a preview with an option to trigger a deeper back-and-forth conversation.

Robby Stein indicated that the AI Search experience will converge even more, depending on user feedback and how people interact with it.

These are profound changes that demand publishers ask deep questions about their content:

  • Should you consider how curating unique images, useful video content, and step-by-step tutorials may fit into your content strategy?
  • Information discovery is increasingly conversational. Does your content fit into that context?
  • Information discovery may increasingly include camera snapshots. Will your content fit into that kind of search?

These are examples of the kinds of questions publishers, SEOs and store owners should be thinking about.

Watch the podcast interview with Robby Stein

Inside Google’s AI turnaround: AI Mode, AI Overviews, and vision for AI-powered search | Robby Stein

Featured image/Screenshot of Lenny’s Podcast video

Google’s AI Reshapes Organic Listings

Google is quickly changing organic search results as it integrates AI. Thus far, the new features have caused traffic losses to most external sites, necessitating new search engine tactics and priorities.

Here’s how AI is impacting traditional organic search visibility to date.

AI Overviews

AI Overviews are AI-generated answers to search queries. They typically summarize and cite top-ranking pages, which correlates traditional SEO performance with visibility in Overviews.

AI Overviews:

  • Eliminate searchers’ need to click. If your target query triggers an Overview, the result is likely fewer clicks, even if the Overview cites your page.
  • Cite pages that then often appear at an average position of 1 in Google Search Console with abnormally low click-throughs. Thus, the average position in the Performance tab may improve while click-throughs decrease.

‘People also ask’

Traditional SEO typically recommends answering “People also ask” questions in content to generate clicks. However, Google now serves occasional AI-generated answers to “People also ask” queries, which decreases clicks both in that section and in organic listings.


Google often serves AI-generated answers to “People also ask” queries, such as this example for “oil stains.”

Suggested topics

Google now provides a search-result section that I call “suggested topics.” It functions similarly to a fan-out result, wherein Google suggests related topics for queries with multiple intents. For example, a search for “roof repair” could trigger suggestions exploring the symptoms and causes of roof damage.

Clicking on any of these suggestions produces an AI-generated answer, which is unlikely to generate traffic to an external source.


AI-generated search snippets

Google is apparently testing AI-generated search snippets, forgoing the practice of using publishers’ meta descriptions or body text.

Google also reportedly enhances some snippets with additional info, which can increase clicks.

Google’s testing of AI-generated search snippets replaces or enhances publishers’ meta descriptions or body text.

Local search

Google is integrating AI in blended results, especially local packs. Reportedly, Google’s AI now invites users to learn more about a local business and will even suggest related fan-out-style questions.


Google’s AI suggests fan-out-style questions, such as “Do they offer roof cleaning?”

The feature mimics what Google’s URL bar does now: encourage users to learn more about any page.

Hence local businesses should focus on providing on-site details of products or services, encouraging customer reviews, answering questions, and more.

Google is also integrating AI actions into local packs, following the practice in AI Mode. For example, for a “car tires near me” search, Google might suggest having AI check prices.


A “car tires near me” search might include a suggestion, such as “Have AI check prices.”

I once feared generative AI platforms would replace organic search. Instead, search engines are adopting AI themselves, making organic results less predictable, less trackable, and less traffic-generating.

We know what is happening. The key is adjusting tactics and traffic expectations accordingly.

Google Quietly Signals NotebookLM Ignores Robots.txt via @sejournal, @martinibuster

Google has quietly updated its list of user-triggered fetchers with new documentation for Google NotebookLM. The importance of this seemingly minor change is that it makes clear that Google NotebookLM will not obey robots.txt.

Google NotebookLM

NotebookLM is an AI research and writing tool. Users add a web page URL, and NotebookLM processes the content, enabling them to ask a range of questions and generate summaries based on that content.

Google’s tool can automatically create an interactive mind map that organizes topics from a website and extracts takeaways from it.

User-Triggered Fetchers Ignore Robots.txt

Google User-Triggered Fetchers are web agents that are triggered by users and by default ignore the robots.txt protocol.

According to Google’s User-Triggered Fetchers documentation:

“Because the fetch was requested by a user, these fetchers generally ignore robots.txt rules.”

Google-NotebookLM Ignores Robots.txt

The purpose of robots.txt is to give publishers control over bots that index web pages. But agents like the Google-NotebookLM fetcher aren’t indexing web content; they’re acting on behalf of users who are interacting with the website content through Google’s NotebookLM.

How To Block NotebookLM

Google uses the Google-NotebookLM user agent when extracting website content. So publishers wishing to block this access can create rules that automatically block that user agent. For example, a simple solution for WordPress publishers is to use Wordfence to create a custom rule blocking all website visitors that use the Google-NotebookLM user agent.

Another way to do it is with .htaccess using the following rule:


# Return 403 Forbidden for any request whose user agent
# contains "Google-NotebookLM" (case-insensitive match)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Google-NotebookLM [NC]
RewriteRule .* - [F,L]
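
For sites running on nginx, which doesn’t support .htaccess, a roughly equivalent rule can go in the relevant server block. This is a minimal sketch; test it against your own configuration before deploying:

```
# Return 403 to requests identifying as Google-NotebookLM (case-insensitive match)
if ($http_user_agent ~* "Google-NotebookLM") {
    return 403;
}
```
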
AI Survival Strategies For Publishers

The rise of AI technologies has brought a level of panic to the publishing industry not seen since the birth of the world wide web. We all know AI is changing how people engage with websites, but we’re not sure yet what these changes will lead to.

The fact that AI is just past its peak in the hype cycle isn’t helping us form any clarity. The bubble is about to burst, but things will not go back to “normal” – whatever that normal may have been pre-AI.

There will be a new post-bubble status quo that will eventually materialize. I strongly believe publishing as a whole will end up in a healthier state, but also that some individual publishers will suffer – and may even cease to exist.

In this article, I’ll outline what I believe are several key survival strategies that online publishers can adopt to help them weather the storm and emerge stronger and more resilient.

1. Search Lives On

Despite the bleatings of a number of misguided LinkedIn influencers who I mercifully shall not name, there is no evidence that search as a channel is dying. Google is still by far the largest driver of traffic to websites, and news is no exception.

What we do need to realize is that “Peak Search” has already happened, and the total number of clicks that Google sends to the web will not substantially grow.

The Google traffic curve has flattened (Image Credit: Barry Adams)

Publishers that are what I call “SEO mature,” with strong SEO and audience growth tactics in place for many years, will not be able to grow their readership through search alone. For these publishers, search traffic is – at best – leveling off.

That doesn’t mean search cannot be a growth channel. Many publishers are nowhere near “SEO mature” and have plenty of scope for growth in search clicks. In fact, I daresay most of the publishers I work with have not yet maximized their search potential, and can achieve strong gains from improved editorial and technical SEO.

But search as a whole is now a flat channel. We cannot expect to see the same consistent increase in search clicks that have been foundational to many publishers’ growth strategies for the last two decades.

Search is now a zero-sum game. When you get a click, a competitor isn’t. The pie has ceased to grow, which means we have to fight harder for our slice. Good SEO is even more crucial. You cannot rely on Google’s own growth to get your share; you need to wrestle it out of the hands of your competitors.

AI Is An Accelerant

While publishers are rightfully skeptical of Google’s claims about the impact of AI on traffic, the flattening of the search traffic curve began long before AI appeared on the scene.

There have been many warning signs that the endless growth of Google clicks wasn’t endless after all. Zero-click search was a concept years before ChatGPT launched.

But most publishers failed to heed these warnings and continued to rely on Google as their primary growth channel, making no effort to diversify their audience strategies.

AI didn’t cause the search curve to flatten, but it did serve as a bucket of gasoline on that particular fire. AI has accelerated zero-click, and sped up the rising discomfort many publishers were already experiencing.

Featured snippets and other intrusive search elements, fragmented online user behavior, algorithm updates, audience fatigue, bad user experiences, and many more factors formed the foundation of zero-click. AI merely hastened the trend by offering a channel for users to engage with the web’s content without the friction imposed by visiting a website.

1. Maximizing Search

Apparently realizing its precarious position as the gateway to the web, and its direct responsibility for the health of the publishing ecosystem, Google is throwing us a few bones to help us build audience loyalty.

One of these is “Preferred sources,” a new feature in Top Stories on Google’s search results that allows a user to set preferred news sources.

When a Top Stories box is shown on a Google search page, if the user’s preferred sources have a relevant article, Google will give that source a spot in Top Stories.

Top Stories on Google.com for the ‘fernando alonso’ query with GPFans.com as a preferred source (Image Credit: Barry Adams)

Setting a preferred source can be done by clicking the relevant icon at the top of a Top Stories box and searching for your preferred publication, or directly with a link:

https://www.google.com/preferences/source?q=yourdomain.com

So this link will set SEOforGoogleNews.com as a preferred source in Top Stories:

https://www.google.com/preferences/source?q=seoforgooglenews.com
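If you want to generate this opt-in link for your own domain (for a CTA button, footer link, or email template), the URL shape above is easy to build programmatically. A minimal sketch, assuming the `q` parameter simply carries the bare domain name as shown in the examples above:

```python
from urllib.parse import urlencode

def preferred_source_link(domain: str) -> str:
    """Build a Google 'Preferred sources' opt-in link for a publisher domain."""
    base = "https://www.google.com/preferences/source"
    # The query string carries the domain in the 'q' parameter.
    return f"{base}?{urlencode({'q': domain})}"

print(preferred_source_link("seoforgooglenews.com"))
# https://www.google.com/preferences/source?q=seoforgooglenews.com
```

Using `urlencode` rather than plain string concatenation keeps the link valid even if a domain ever needs escaping.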

You can encourage your readers to add your site as a preferred source with a call to action. Google even provides an image for this that you can add to your website:

(Image credit: Barry Adams)

This new feature does raise some concerns about filter bubbles and echo chambers. But that’s a station the Google train long since passed, especially with the fully personalized and filter-bubbled content feeds that billions around the world engage with daily.

2. Discover Is Growing

Where search is flattening, Discover is on the rise. Most publishers I work with see good numbers coming from the Discover feed. For many, Discover is growing to such an extent that it more than compensates for diminished search traffic.

Despite this, I’m not a fan of building an audience strategy around Discover. There are several reasons for this, some of which I’ve outlined in a piece on Press Gazette and others which are detailed by David Buttle (also on Press Gazette).

Summarizing those objections here:

  1. Discover strategies encourage bad habits: clickbait, churnalism, sensationalism, and low information gain.
  2. Discover traffic is volatile, unlikely to generate a consistent traffic profile, and highly susceptible to algorithm updates.
  3. Discover is not a core service that Google offers, and they can kill it without any meaningful loss for them.
  4. Reliance on Discover gives Google enormous power over publishers.
  5. Discover lacks the regulatory scrutiny that search is subjected to.

Yet I cannot deny the reality that Discover is a huge source of traffic, and publishers need to optimize for it to some degree.

In addition to the known Discover optimization strategies, which I will cover in an upcoming newsletter, Google has also given us a new feature:

Follow Publishers In Discover

Similar to setting preferred sources in Top Stories, the new Follow feature in Discover allows publishers to encourage audience loyalty. Once a user follows a publisher, they will see more of that publisher’s content in their Discover feed.

A user can click on a publisher name in their Discover feed and end up on that publisher’s dedicated Discover page. Tapping the “Follow on Google” button there will add the publisher to the user’s followed list, and ensure more articles from that publisher will be shown in the user’s Discover feed.

The follow feature is not yet rolled out globally. As I’m on an iPhone and in the UK, I haven’t yet been able to see it for myself. So, here is a screenshot from the Discover page for Barry Schwartz’s Search Engine Roundtable:

SERoundtable.com’s Discover page with Follow feature (Image credit: Barry Adams)

I’ll be dedicating an upcoming newsletter to Discover optimization strategies, and I’ll include what I learn about the Follow feature there.

In the same announcement where this Follow feature was introduced, Google also said that Discover will start showing more social media content like YouTube videos, Instagram posts, and even X posts.

This brings me to the third strategy for publishers to embrace:

3. Multimedia Content

For many years now, online news has not been consumed in written form alone. It should come as no surprise that your audience wants to engage with your news in many different formats on different platforms.

As Discover is now integrating social posts into its feed, this presents additional opportunities for publishers to create content in various formats and publish these on popular platforms.

So, you should be doing YouTube videos, especially Shorts. And Instagram posts and videos. Though I cannot recommend you stay active on X (Twitter) – I personally have gotten up from that table, and you should too.

Podcasts are another obvious format that enjoys great popularity. News podcasts dominate the top rankings on most podcast platforms, and news publishers are especially well-placed to carve out audiences for themselves there.

Email newsletters are enjoying a resurgence in popularity (one that I have taken advantage of myself, as you can see), though I would argue that email has never really gone out of fashion. It just lost the spotlight for a while, but always kept on delivering for those that do it right.

It’s never too late to start experimenting with multimodal content. If you have a great piece of journalism, it doesn’t take much to turn that into a podcast. That podcast should be recorded with a camera, and voila, you have a YouTube video. You can then turn that video into a series of YouTube shorts, which can also populate your Instagram feed, etc.

(Image credit: Barry Adams)

The barrier to entry is low, and you won’t need a massive audience to make your multimodal adventures pay off.

At the upcoming NESS conference, a few sessions will dig into channel diversification. I’m especially looking forward to Steve Wilson-Beales’s session on cracking the YouTube algorithm.

However, none of the above is going to save your publishing site in the long term if you don’t do the most important thing:

4. Become Unforgettable

This has been a bit of a mantra for me for a while now. I strongly believe that if your news website is interchangeable with others in your topical area, you are going to have a Very Bad Time in the next few years.

It’s a tough reality to face for many websites, and I see the ostrich approach all too often. Many publishers are incapable of being honest with themselves and seeing the truth of their commoditization, clinging to some vague perception of uniqueness and value add.

But the fact is that probably about half of all news websites are perfectly forgettable. They don’t have anything that makes them sufficiently distinct. These publishers don’t have loyal audiences; they have a cohort of habitual readers that can just as easily switch to a competing website.

The reason many publishers don’t understand their own place in their market is because they don’t really understand their readers.

I’m going to quote my friend and former colleague Andi Jarvis here, who runs a successful marketing strategy consultancy:

“Talk to your customers.”

Such a simple thing, yet so very rarely done. When was the last time you talked to your readers? Asked them what they liked about your website, and what they didn’t? Asked them what they wanted to see more of, and what else you could be doing?

That’s an exercise you should regularly be engaging in. I can guarantee you, your own perception and your audience’s perception of your site will be very different indeed.

Andi has a questionnaire out at the moment that I recommend you take a few minutes to fill in, even if you currently don’t talk with your customers. And definitely check out his Strategy Sessions podcast.

When you talk to your audience and understand what they want from you, it allows you to make the right decisions for your publication’s long-term health. You’ll know where your real value-add is, and whether or not that’s worth a subscription fee (and how much you can charge).

It enables you to find the most popular channels and platforms your audience uses, so you can post there too. It tells you which creators’ content they enjoy, so you can reach out to them for partnerships. (Sparktoro is a great audience research tool that can help with this.)

There will be so much you will learn from just talking to your audience that it’s hard to overstate its importance. I know “marketing” is a dirty word for many publishers, but it genuinely is the critical ingredient.

Most importantly, it’ll give you the insights you need to really nail down your publication’s USP and deliver the kind of value to your visitors that transforms them from casual readers into a loyal audience.

That’s where the key to survival lies. A loyal audience immunizes you from whatever the ketamine-addled Silicon Valley tech bros next dream up. It ensures your continued success, independent of platforms and algorithms.

And that’s something worth striving for.

What About AEO/GEO/LLMO?

You’ll have noticed I didn’t mention optimizing for AI Search as a survival tactic. That’s because it’s not. LLMs are great at many things, but generating traffic for websites isn’t one of them.

For websites that have a transaction pipeline driven by search traffic, such as ecommerce or travel booking sites, optimizing for LLMs has some added value. Visibility in LLM-generated responses can generate conversions for these sites.

However, for content delivery websites like news publishers, there’s significantly less value in optimizing for LLM visibility. Citations in LLM responses don’t lead to clicks in any meaningful way, so the traffic opportunity there is non-existent.

However, if you adopt the survival strategies I summarized above, ironically, you’ll also do better in LLMs. As it stands, 99.9% of LLM optimization aligns with proper SEO, and the last 0.1% falls under the remit of what we’d call “good marketing,” which is what becoming unforgettable is all about.

I know, those suits in boardrooms want to be reassured that you have this AI Optimization thing in hand. When you do SEO well, you will. Don’t let AI hype get in the way of good business decisions.

Not coincidentally, several sessions at NESS 2025 will be dedicated to AI and its impact on publishers.

When you use the code barry2025 at checkout, you get 20% off the ticket price. Grab yours while you can!

That’s it for another edition. As always, thanks for reading and subscribing, and I’ll see you at the next one.


This post was originally published on SEO for Google News.


Featured Image: Stokkete/Shutterstock