Ironman, Not Superman via @sejournal, @DuaneForrester

I recently became frustrated while working with Claude, and it led me to an interesting exchange with the platform, which in turn led me to examine my own expectations, actions, and behavior…and that was eye-opening. The short version is I want to keep thinking of AI as an assistant, like a lab partner. In reality, it needs to be seen as a robot in the lab – capable of impressive things, given the right direction, but only within a solid framework. There are still so many things it’s not capable of, and we, as practitioners, sometimes forget this and make assumptions based on what we wish a platform were capable of, instead of grounding it in the reality of its limits.

And while the capabilities of AI today are truly impressive, they pale in comparison to what people are capable of. Do we sometimes overlook this difference and ascribe human characteristics to AI systems? I bet we all have at one point or another. We’ve assumed accuracy and taken direction. We’ve taken for granted “this is obvious” and expected the answer to “include the obvious.” And we’re upset when it fails us.

AI sometimes feels human in how it communicates, yet it does not behave like a human in how it operates. That gap between appearance and reality is where most confusion, frustration, and misuse of large language models actually begins. Research into human-computer interaction shows that people naturally anthropomorphize systems that speak, respond socially, or mirror human communication patterns.

This is not a failure of intelligence, curiosity, or intent on the part of users. It is a failure of mental models. People, including highly skilled professionals, often approach AI systems with expectations shaped by how those systems present themselves rather than how they truly work. The result is a steady stream of disappointment that gets misattributed to immature technology, weak prompts, or unreliable models.

The problem is none of those. The problem is expectation.

To understand why, we need to look at two different groups separately. Consumers on one side, and practitioners on the other. They interact with AI differently. They fail differently. But both groups are reacting to the same underlying mismatch between how AI feels and how it actually behaves.

The Consumer Side, Where Perception Dominates

Most consumers encounter AI through conversational interfaces. Chatbots, assistants, and answer engines speak in complete sentences, use polite language, acknowledge nuance, and respond with apparent empathy. This is not accidental. Natural language fluency is the core strength of modern LLMs, and it is the feature users experience first.

When something communicates the way a person does, humans naturally assign it human traits. Understanding. Intent. Memory. Judgment. This tendency is well documented in decades of research on human-computer interaction and anthropomorphism. It is not a flaw. It is how people make sense of the world.

From the consumer’s perspective, this mental shortcut usually feels reasonable. They are not trying to operate a system. They are trying to get help, information, or reassurance. When the system performs well, trust increases. When it fails, the reaction is emotional. Confusion. Frustration. A sense of having been misled.

That dynamic matters, especially as AI becomes embedded in everyday products. But it is not where the most consequential failures occur.

Those show up on the practitioner side.

Defining Practitioner Behavior Clearly

A practitioner is not defined by job title or technical depth. A practitioner is defined by accountability.

If you use AI occasionally for curiosity or convenience, you are a consumer. If you use AI repeatedly as part of your job, integrate its output into workflows, and are accountable for downstream outcomes, you are a practitioner.

That includes SEO managers, marketing leaders, content strategists, analysts, product managers, and executives making decisions based on AI-assisted work. Practitioners are not experimenting. They are operationalizing.

And this is where the mental model problem becomes structural.

Practitioners generally do not treat AI like a person in an emotional sense. They do not believe it has feelings or consciousness. Instead, they treat it like a colleague in a workflow sense. Often like a capable junior colleague.

That distinction is subtle, but critical.

Practitioners tend to assume that a sufficiently advanced system will infer intent, maintain continuity, and exercise judgment unless explicitly told otherwise. This assumption is not irrational. It mirrors how human teams work. Experienced professionals regularly rely on shared context, implied priorities, and professional intuition.

But LLMs do not operate that way.

What looks like anthropomorphism in consumer behavior shows up as misplaced delegation in practitioner workflows. Responsibility quietly drifts from the human to the system, not emotionally, but operationally.

You can see this drift in very specific, repeatable patterns.

Practitioners frequently delegate tasks without fully specifying objectives, constraints, or success criteria, assuming the system will infer what matters. They behave as if the model maintains stable memory and ongoing awareness of priorities, even when they know, intellectually, that it does not. They expect the system to take initiative, flag issues, or resolve ambiguities on its own. They overweight fluency and confidence in outputs while under-weighting verification. And over time, they begin to describe outcomes as decisions the system made, rather than choices they approved.

None of this is careless. It is a natural transfer of working habits from human collaboration to system interaction.

The issue is that the system does not own judgment.

Why This Is Not A Tooling Problem

When AI underperforms in professional settings, the instinct is to blame the model, the prompts, or the maturity of the technology. That instinct is understandable, but it misses the core issue.

LLMs are behaving exactly as they were designed to behave. They generate responses based on patterns in data, within constraints, without goals, values, or intent of their own.

They do not know what matters unless you tell them. They do not decide what success looks like. They do not evaluate tradeoffs. They do not own outcomes.

When practitioners assign thinking tasks that still belong to humans, failure is not a surprise. It is inevitable.

This is where thinking of Ironman and Superman becomes useful. Not as pop culture trivia, but as a mental model correction.

Ironman, Superman, And Misplaced Autonomy

Superman operates independently. He perceives the situation, decides what matters, and acts on his own judgment. He stands beside you and saves the day.

That is how many practitioners implicitly expect LLMs to behave inside workflows.

Ironman works differently. The suit amplifies strength, speed, perception, and endurance, but it does nothing without a pilot. It executes within constraints. It surfaces options. It extends capability. It does not choose goals or values.

LLMs are Ironman suits.

They amplify whatever intent, structure, and judgment you bring to them. They do not replace the pilot.

Once you see that distinction clearly, a lot of frustration evaporates. The system stops feeling unreliable and starts behaving predictably, because expectations have shifted to match reality.

Why This Matters For SEO And Marketing Leaders

SEO and marketing leaders already operate inside complex systems. Algorithms, platforms, measurement frameworks, and constraints you do not control are part of daily work. LLMs add another layer to that stack. They do not replace it.

For SEO managers, this means AI can accelerate research, expand content, surface patterns, and assist with analysis, but it cannot decide what authority looks like, how tradeoffs should be made, or what success means for the business. Those remain human responsibilities.

For marketing executives, this means AI adoption is not primarily a tooling decision. It is a responsibility placement decision. Teams that treat LLMs as decision makers introduce risk. Teams that treat them as amplification layers scale more safely and more effectively.

The difference is not sophistication. It is ownership.

The Real Correction

Most advice about using AI focuses on better prompts. Prompting matters, but it is downstream. The real correction is reclaiming ownership of thinking.

Humans must own goals, constraints, priorities, evaluation, and judgment. Systems can handle expansion, synthesis, speed, pattern detection, and drafting.

When that boundary is clear, LLMs become remarkably effective. When it blurs, frustration follows.

The Quiet Advantage

Here is the part that rarely gets said out loud.

Practitioners who internalize this mental model consistently get better results with the same tools everyone else is using. Not because they are smarter or more technical, but because they stop asking the system to be something it is not.

They pilot the suit, and that’s their advantage.

AI is not taking control of your work. You are not being replaced. What is changing is where responsibility lives.

Treat AI like a person, and you will be disappointed. Treat it like a system, and you will be limited. Treat it like an Ironman suit, and YOU will be amplified.

The future does not belong to Superman. It belongs to the people who know how to fly the suit.

This post was originally published on Duane Forrester Decodes.


Featured Image: Corona Borealis Studio/Shutterstock

Microsoft Explains How Duplicate Content Affects AI Search Visibility via @sejournal, @MattGSouthern

Microsoft has shared new guidance on duplicate content that’s aimed at AI-powered search.

The post on the Bing Webmaster Blog discusses which URL serves as the “source page” for AI answers when several similar URLs exist.

Microsoft describes how “near-duplicate” pages can end up grouped together for AI systems, and how that grouping can influence which URL gets pulled into AI summaries.

How AI Systems Handle Duplicates

Fabrice Canel and Krishna Madhavan, Principal Product Managers at Microsoft AI, wrote:

“LLMs group near-duplicate URLs into a single cluster and then choose one page to represent the set. If the differences between pages are minimal, the model may select a version that is outdated or not the one you intended to highlight.”

If multiple pages are interchangeable, the representative page might be an older campaign URL, a parameter version, or a regional page you didn’t mean to promote.

Microsoft also notes that many LLM experiences are grounded in search indexes. If the index is muddied by duplicates, that same ambiguity can show up downstream in AI answers.

How Duplicates Can Reduce AI Visibility

Microsoft lays out several ways duplication can get in the way.

One is intent clarity. If multiple pages cover the same topic with nearly identical copy, titles, and metadata, it’s harder to tell which URL best fits a query. Even when the “right” page is indexed, the signals are split across lookalikes.

Another is representation. If the pages are clustered, you’re effectively competing with yourself for which version stands in for the group.

Microsoft also draws a line between real page differentiation and cosmetic variants. A set of pages can make sense when each one satisfies a distinct need. But when pages differ only by minor edits, they may not carry enough unique signals for AI systems to treat them as separate candidates.

Finally, Microsoft links duplication to update lag. If crawlers spend time revisiting redundant URLs, changes to the page you actually care about can take longer to show up in systems that rely on fresh index signals.

Categories Of Duplicate Content Microsoft Highlights

The guidance calls out a few repeat offenders.

Syndication is one. When the same article appears across sites, identical copies can make it harder to identify the original. Microsoft recommends asking partners to use canonical tags that point to the original URL and to use excerpts instead of full reprints when possible.
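
For reference, the canonical tag Microsoft describes is a single line in the <head> of the syndicated copy pointing back to the original article. A minimal sketch, with hypothetical URLs:

  <!-- Placed in the <head> of the partner's reprint -->
  <link rel="canonical" href="https://www.original-publisher.example/ai-search-guide/" />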

Campaign pages are another. If you’re spinning up multiple versions targeting the same intent and differing only slightly, Microsoft recommends choosing a primary page that collects links and engagement, then using canonical tags for the variants and consolidating older pages that no longer serve a distinct purpose.

Localization comes up in the same way. Nearly identical regional pages can look like duplicates unless they include meaningful differences. Microsoft suggests localizing with changes that actually matter, such as terminology, examples, regulations, or product details.

Then there are technical duplicates. The guidance lists common causes such as URL parameters, HTTP and HTTPS versions, uppercase and lowercase URLs, trailing slashes, printer-friendly versions, and publicly accessible staging pages.
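
A sketch of the standard defense for those technical variants, using a hypothetical URL: serve a canonical tag on every variant (parameter, protocol, case, and trailing-slash versions alike) that points at the one clean URL.

  <!-- Served on https://www.example.com/shoes?utm_campaign=spring and
       any other variant of the page, as well as the clean URL itself -->
  <link rel="canonical" href="https://www.example.com/shoes" />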

The Role Of IndexNow

Microsoft points to IndexNow as a way to shorten the cleanup cycle after consolidating URLs.

When you merge pages, change canonicals, or remove duplicates, IndexNow can help participating search engines discover those changes sooner. Microsoft links that faster discovery to fewer outdated URLs lingering in results, and fewer cases where an older duplicate becomes the page that’s used in AI answers.
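
As a sketch of how a submission works under the public IndexNow protocol: after consolidating, you POST a JSON payload listing the changed URLs to a participating endpoint such as https://api.indexnow.org/indexnow. The host, key, and URLs below are placeholders:

  {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
      "https://www.example.com/consolidated-page",
      "https://www.example.com/old-duplicate-now-redirected"
    ]
  }

The key is a token you also host at the keyLocation URL, which proves you control the domain; the same call covers added, updated, and removed URLs.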

Microsoft’s Core Principle

Canel and Madhavan wrote:

“When you reduce overlapping pages and allow one authoritative version to carry your signals, search engines can more confidently understand your intent and choose the right URL to represent your content.”

The message is consolidation first, technical signals second. Canonicals, redirects, hreflang, and IndexNow help, but they work best when you’re not maintaining a long tail of near-identical pages.

Why This Matters

Duplicate content isn’t a penalty by itself. The downside is weaker visibility when signals are diluted and intent is unclear.

Syndicated articles can keep outranking the original if canonicals are missing or inconsistent. Campaign variants can cannibalize each other if the “differences” are mostly cosmetic. Regional pages can blend together if they don’t clearly serve different needs.

Routine audits can help you catch overlap early. Microsoft points to Bing Webmaster Tools as a way to spot patterns such as identical titles and other duplication indicators.

Looking Ahead

As AI answers become a more common entry point, the “which URL represents this topic” problem becomes harder to ignore.

Cleaning up near-duplicates can influence which version of your content gets surfaced when an AI system needs a single page to ground an answer.

Sam Altman Explains OpenAI’s Bet On Profitability via @sejournal, @martinibuster

In an interview with the Big Technology Podcast, Sam Altman seemed to struggle to answer tough questions about OpenAI’s path to profitability.

At about the 36-minute mark, the interviewer asked the big question about revenues and spending. Sam Altman said OpenAI’s losses are tied to continued increases in training costs while revenue is growing. He said the company would be profitable much earlier if it were not continuing to grow its training spend so aggressively.

Altman said concern about OpenAI’s spending would be reasonable only if the company reached a point where it had large amounts of computing it could not monetize profitably.

The interviewer asked:

“Let’s, let’s talk about numbers since you brought it up. Revenue’s growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are OpenAI is supposed to lose something like 120 billion between now and 2028, 29, where you’re going to become profitable.

So talk a little bit about like, how does that change? Where does the turn happen?”

Sam Altman responded:

“I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that’s the plan. Spend a lot of money training, but make more and more.

If we weren’t continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we’re making is to invest very aggressively in training these big models.”

At this point the interviewer pressed Altman harder about the path to profitability, this time mentioning the $1.4 trillion spending commitment versus the $20 billion in revenue. This was not a softball question.

The interviewer pushed back:

“I think it would be great just to lay it out for everyone once and for all how those numbers are gonna work.”

Sam Altman’s first attempt to answer seemed to stumble in a word salad kind of way: 

“It’s very hard to like really, I find that one thing I certainly can’t do it and very few people I’ve ever met can do it.

You know, you can like, you have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to do a good quick mental framework on.

Like for whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn’t seem to be one of them.”

Altman then regained his footing with a more coherent answer:

“The thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. And everything we see right now continues to indicate that we cannot do it if we don’t have the compute.

Again, we’re so compute constrained, and it hits the revenue line so hard that I think if we get to a point where we have like a lot of compute sitting around that we can’t monetize on a profitable per unit of compute basis, it’d be very reasonable to say, okay, this is like a little, how’s this all going to work?

But we’ve penciled this out a bunch of ways. We will of course also get more efficient on like a flops per dollar basis, as you know, all of the work we’ve been doing to make compute cheaper comes to pass.

But we see this consumer growth, we see this enterprise growth. There’s a whole bunch of new kinds of businesses that, that we haven’t even launched yet, but will. But compute is really the lifeblood that enables all of this.

We have always been in a compute deficit. It has always constrained what we’re able to do.

I unfortunately think that will always be the case, but I wish it were less the case, and I’d like to get it to be less of the case over time, because I think there’s so many great products and services that we can deliver, and it’ll be a great business.”

The interviewer then sought to clarify the answer, asking:

“And then your expectation is through things like this enterprise push, through things like people being willing to pay for ChatGPT through the API, OpenAI will be able to grow revenue enough to pay for it with revenue.”

Sam Altman responded:

“Yeah, that is the plan.”

Altman’s comments define a specific threshold for evaluating whether OpenAI’s spending is a problem. He points to unused or unmonetizable computing power as the point at which concern would be justified, rather than current losses or large capital commitments.

In his explanation, the limiting factor is not willingness to pay, but how much computing capacity OpenAI can bring online and use. The follow-up question makes that explicit, and Altman’s confirmation makes clear that the company is relying on revenue growth from consumer use, enterprise adoption, and additional products to cover its costs over time.

Altman’s path to profitability rests on a simple bet: that OpenAI can keep finding buyers for its computing as fast as it can build it. Eventually, that bet either keeps winning or the chips run out.

Watch the interview starting at about the 36-minute mark:

Featured Image/Screenshot

Google’s Robby Stein Names 5 SEO Factors For AI Mode via @sejournal, @martinibuster

Robby Stein, Vice President of Product for Google Search, recently sat down for an interview where he answered questions about how Google’s AI Mode handles quality, how Google evaluates helpfulness, and how it leverages its experience with search to identify which content is helpful, including metrics like clicks. He also outlined five quality SEO-related factors used for AI Mode.

How Google Controls Hallucinations

Stein answered a question about hallucinations, where an AI confidently presents false information as fact. He said that the quality systems within AI Mode are based on everything Google has learned about quality from 25 years of experience with classic search. The systems that determine which links to show and whether content is good are encoded within the model, drawing on that same experience.

The interviewer asked:

“These models are non-deterministic and they hallucinate occasionally… how do you protect against that? How do you make sure the core experience of searching on Google remains consistent and high quality?”

Robby Stein answered:

“Yeah, I mean, the good news is this is not new. While AI and generative AI in this way is frontier, thinking about quality systems for information is something that’s been happening for 20, 25 years.

And so all of these AI systems are built on top of those. There’s an incredibly rigorous approach to understanding, for a given question, is this good information? Are these the right links? Are these the right things that a user would value?

What’s all the signals and information that are available to know what the best things are to show someone. That’s all encoded in the model and how the model’s reasoning and using Google search as a tool to find you information.

So it’s building on that history. It’s not starting from scratch because it’s able to say, oh, okay, Robbie wants to go on this trip and is looking up cool restaurants in some neighborhood.

What are the things that people who are doing that have been relying on on Google for all these years? We kind of know what those resources are we can show you right there. And so I think that helps a lot.

And then obviously the models, now that you release the constraint on layout, obviously the models over time have also become just better at instruction following as well. And so you can actually just define, hey, here are my primitives, here are my design guidelines. Don’t do this, do this.

And of course it makes mistakes at times, but I think just the quality of the model has gotten so strong that those are much less likely to happen now.”

Stein’s explanation makes clear that AI Mode is encoded with everything learned from Google’s classic search systems rather than being a rebuild from scratch or a break from them. The risk of hallucinations is managed by grounding AI answers in the same relevance, trust, and usefulness signals that have underpinned classic search for decades. Those signals continue to determine which sources are considered reliable and which information users have historically found valuable. Accuracy in AI search follows from that continuity, with model reasoning guided by longstanding search quality signals rather than operating independently of them.

How Google Evaluates Helpfulness In AI Mode

The next question is about the quality signals that Google uses within AI Mode. Robby Stein’s answer explains that the way AI Mode determines quality is very much the same as with classic search.

The interviewer asked:

“And Robbie, as search is evolving, as the results are changing and really, again, becoming dynamic, what signals are you looking at to know that the user is not only getting what they want, but that is the best experience possible for their search?”

Stein answered:

“Yeah, there’s a whole battery of things. I mean, we look at, like we really study helpfulness and if people find information helpful.

And you do that through evaluating the content kind of offline with real people. You do that online by looking at the actual responses themselves.

And are people giving us thumbs up and thumbs downs?

Are they appreciating the information that’s coming?

And then you kind of like, you know, are they using it more? Are they coming back? Are they voting with their feet because it’s valuable to you.

And so I think you kind of triangulate, any one of those things can lead you astray.

There’s lots of ways that, interestingly, in many products, if the product’s not working, you may also cause you to use it more.

In search, it’s an interesting thing.

We have a very specific metric that manages people trying to use it again and again for the same thing.

We know that’s a bad thing because it means that they can’t find it.

You got to be really careful.

I think that’s how we’re building on what we’ve learned in search, that we really feel good that the things that we’re shipping are being found useful by people.”

Stein’s answer shows that AI Mode evaluates success using the same core signals used for search quality, even as the interface becomes more dynamic. Usefulness is not inferred from a single engagement signal but from a combination of human evaluation, explicit feedback, and behavioral patterns over time.

Importantly, Stein notes that heavy usage alone is not treated as success: repeated attempts to answer the same query, presumably within a single session, indicate failure rather than satisfaction. The takeaway is that AI Mode’s success is judged by whether users are satisfied, using quality signals designed to detect friction and confusion as much as positive engagement. This carries continuity over from classic search rather than redefining what usefulness means.

Five Quality Signals For AI Search

Lastly, Stein answered a question about the ranking of AI-generated content and whether SEO best practices still help for ranking in AI. His answer includes five factors used to determine whether a website meets Google’s quality and helpfulness standards.

Stein answered:

“The core mechanic is the model takes your question and reasons about it, tries to understand what you’re trying to get out of this.

It then generates a fan-out of potentially dozens of queries that are being Googled under the hood. That’s approximating what information people have found helpful for those questions.

There’s a very strong association to the quality work we’ve done over 25 years.

Is this piece of content about this topic?

Has someone found it helpful for the given question?

That allows us to surface a broader diversity of content than traditional Search, because it’s doing research for you under the hood.

The short of it is the same things apply.

  1. Is your content directly answering the user’s question?
  2. Is it high quality?
  3. Does it load quickly?
  4. Is it original?
  5. Does it cite sources?

If people click on it, value it, and come back to it, that content will rank for a given question and it will rank in the AI world as well.”

Watch the interview starting at about the one-hour, twenty-three-minute mark:

Google Says What To Tell Clients Who Want SEO For AI via @sejournal, @martinibuster

Google’s Danny Sullivan offered advice to SEOs who have clients asking for updates on what they’re going to do for AI SEO. He acknowledged it’s easier to give the advice than it is to have to actually tell clients, but he also said that advancements in content management systems drive technical SEO into the background, enabling SEOs and publishers to focus on the content.

What To Tell Clients

Danny Sullivan acknowledged that SEOs are in a tough spot with clients. He didn’t suggest specifics for how to rank better in AI search (although later in the podcast he did offer such suggestions).

But he did offer suggestions for what to tell clients.

Danny explained:

“And the other thing is, and I’ve seen a number of people remark on this, is this concern that, well, I’ve been doing SEO, but now I’m getting clients or people saying to me, but I need the new stuff. I need the new stuff. And I can’t just tell them it’s the same old stuff.

So I don’t know if you feel like you need to dress it up a bit more, but I think the way you dress it up is to say, These are continuing to be the things that are going to make you successful in the long-term. I get you want the fancy new type of thing, but the history is that the fancy new type of thing doesn’t always stick around if we go off and do these particular types of things…

I’m keeping an eye on it, but right now, the best advice I can tell you when it comes to how we’re going to be successful with our AEO is that we continue on doing the stuff that we’ve been doing because that is what it’s built on.

Which is easy for me to say ’cause I don’t got someone banging on the door to say, Well, actually we do. And so we are doing that.

So that’s why, as part of the podcast, it’s just to kind of reassure that, look, just because the formats are changing didn’t mean you have to change everything that you had to do and that everything you had to shift around.”

Downside Of Prioritizing AEO/GEO For AI Search Visibility

There are many in the SEO community who are suggesting fairly spammy things to do to rank better in AI chatbots like ChatGPT, such as creating listicles that recommend their own brands as the best option. Others are doing things like tweaking keyword phrases, the kind of thing SEOs stopped doing by 2005 or 2006.

The problem with making dramatic changes to content in order to rank better in chatbots is that ChatGPT, Perplexity, and Anthropic Claude each hold only a fraction of a percent of search traffic share, with Claude close to zero and ChatGPT estimated at 0.2%–0.5%.

So it absolutely makes zero sense to prioritize AEO/GEO over Google and Bing search at this point because the return on the investment is close to zero. It’s a different story when it comes to Google AI Overviews and AI Mode, but the underlying ranking systems for both AI interfaces remain Google’s classic search.

Danny shared that focusing on things that are specific to AI risks complicating what should be simple.

Google’s Danny Sullivan shared:

“And in fact, that the more that you dramatically shift things around, and start doing something completely different, or the more that you start thinking I need to do two different things, the more that you may be making things far more complicated, not necessarily successful in the long term as you think they are.”

Technical SEO Is Needed Less?

John Mueller followed up by mentioning that the advanced state of content management systems today means that SEOs and publishers no longer have to spend as much time on technical SEO issues because most CMSs have the basics of SEO handled virtually out of the box. Danny Sullivan said that this frees up SEOs and creators to focus on their content, which he insisted will be helpful for ranking in AI search surfaces.

John Mueller commented:

“I think that makes a lot of sense. I think one of the things that perhaps throws SEOs off a little bit is that in the early days, there was a lot of almost like a technical transition where people initially had to do a lot of technical specific things to make their site even kind of accessible in search. And at some point nowadays, I think if you’re using a popular CMS like WordPress or Wix or any of them, basically you don’t have to worry about any of those technical details.

So it’s almost like that technical side of things is a lot less in the foreground now, and you can really focus on the content, and that’s really what users are looking for. So it’s like that, almost like a transition from technical to content side with regards to SEO.”

This echoed a previous statement from earlier in the podcast where Danny remarked on how some people have begun worrying less about SEO and focusing on content.

Danny said:

“But we really just want you to focus on your content and not really worry about this. If your content is on the web and generally accessible as most people’s content is, that’s it.

I’ve actually been heartened that I’ve seen a number of people saying things like: I don’t even want to think about this SEO stuff anymore. I’m just getting back into the joy of writing blogs.

I’m like, yes, great. That’s what we want you to do. That’s where we think you’re going to find your most success.”

Listen to Danny Sullivan’s remarks at about the 8-minute mark:

Featured Image by Shutterstock/Just dance

How Will AI Mode Impact Local SEO? via @sejournal, @JRiddall

In organic search, disruption has always been the norm, but the integration of AI into Google Search – with AI Overviews and now AI Mode – is not an incremental change; it is a fundamental restructuring. For marketers overseeing single or multi-location SEO strategies, the transition from the traditional blue-link environment to a conversational, synthesized search experience carries high stakes.

The initial manifestation of this shift, the AI Overview (AIO), which claims the premium “Position 0” real estate on a search engine results page (SERP), provided the initial shockwave. However, the long-term competitive reality is defined by AI Mode, a full conversational ecosystem where users can engage in multi-stage dialogue with AI. This interactive mode anticipates a user’s entire “information journey” by mapping out potential subsequent inquiries, known as latent questions or query fan-out, negating the need for users to click through for additional information.

The implications for local SEO are profound. Data confirms that when an AIO is present and a business’s content is not cited, organic click-through rates (CTR) can plummet by as much as 61%.

The priority for local marketing has irrevocably shifted: Success is no longer defined by securing Position 1 in the traditional organic listings, but by achieving inclusion and citation within the Position 0 AI Overview and the expanded AI Mode. Some are of the belief Google could go full AI Mode at any moment.

This blueprint outlines eight strategic imperatives for marketers to ensure resilient local visibility and drive high-intent conversions in the AI Mode era to come.

The Paradigm Shift: From Blue Links To Entity Authority

The mechanics of AI Mode fundamentally alter local search competition. For high-intent, local or transactional queries (e.g., “best walking tour in Chicago”), the AI often replaces the traditional Google 3-Pack with an expanded, enhanced local AI Mode display including Google Business Profile (GBP) cards.

Screenshot from Google search for [best walking tours in New Orleans], November 2025

A limited study conducted in May 2025 found AI Overviews (now typically accompanied by AI Mode) appeared for local search queries 57% of the time and were particularly dominant for informational, as opposed to local/commercial, intent queries.

A more recent behavioral study of travel booking in AI Mode found Google Business Profiles to be among the most highly displayed and engaged content for searchers booking local accommodations and experiences. This is likely the case for any locally oriented search. This creates new opportunities, but demands a strategic overhaul to ensure top-tier visibility.

The AI’s choice of businesses for this enhanced local pack leans heavily on Entity Authority. LLMs synthesize business summaries and attributes by drawing information from diverse, omni-channel sources. This reliance on verified, consistent facts across the entire web makes the digital ecosystem, rather than just the website’s content or backlink profile, the primary ranking vector.

In this new environment, traditional SEO and link acquisition strategies must be rebalanced with unique fact provision and entity authority strategies.

8 Local SEO Recommendations For Visibility In AI Mode

To command a dominant position in the conversational search environment, local marketers must execute a comprehensive strategy focusing on local authority, data integrity, technical compliance, and an answer-first content structure.

1. Fortify Your Google Business Profile (GBP) As The Verified Core

GBP has been identified as generative AI’s most critical source of verified local data. Full optimization and consistent verification are non-negotiable gatekeepers for inclusion and visibility within AI Mode.

Non-Negotiable GBP Optimization:

Primary And Secondary Category Selection
Choose the most relevant and appropriate primary category for the business, along with a limited number of secondary categories. Do not select generic or non-relevant categories as a means of being included or found via AI search. Far too many businesses make the mistake of choosing as many categories as they think are even tangentially related to the services they offer, often diluting their primary area of expertise.

Comprehensive Service Listings
Ensure accurate and comprehensive listings of all services offered, aligning them perfectly with the services listed on the website and within schema markup. Here again, do not over-extend into generic or non-relevant service offerings.

Verified Hours And Attributes
Maintain current, verified hours of operation, paying special attention to temporary or seasonal closures. A newly important factor in organic and AI search visibility is whether or not a business is physically open when a search is being conducted.

Fill out all relevant business attributes, including payment types accepted, amenities (e.g., parking) available, and anything else which may set the business apart.

Active Engagement Signals
Behavioral signals, such as in-store visits tracked by Google Maps, and engagement signals on the GBP are increasing in importance, suggesting the AI weights profiles demonstrating real-world activity. Responding promptly to reviews and questions posed via GBP is critical, as is regularly posting photos, offers, updates, and other helpful content for your target audience.

Recommendation: The GBP must be treated as a live, mission-critical data feed, not a static listing. Any change to a service, hour, or attribute must be propagated across the GBP first, then the website, and finally any other third-party local or industry-specific directories.

2. Mandate Technical Precision With Schema

Structured data can support AI search visibility. Large Language Models (LLMs), in part, use schema markup to categorize, verify, and ingest factual information directly. Failure to comply with stringent technical specifications may render an entity ineligible for expanded, visually rich AI results.

Required Technical Specifications:

LocalBusiness Schema And Service Schema
These must be implemented meticulously, defining the business type (e.g., Dentist, Vacation Rental Operator) and precisely describing the services offered using the Service and makesOffer properties.

Geographical Precision
The geo property (latitude and longitude) must be included in the LocalBusiness schema to satisfy the AI’s need for hyper-local accuracy in “near me” and navigational queries.

Visual Asset Compliance
To qualify for visually enhanced AI results, websites must provide multiple relevant service, product, and location-specific images. All images require relevant descriptive filenames and alt text, which must include pertinent keywords, where applicable.

Recommendation: Implement all schema using JSON-LD for simplified maintenance and validation via Google’s Rich Results Test and Schema.org markup validator, keeping the technical markup separate from page design.
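
Pulling those specifications together, here is a minimal JSON-LD sketch for a hypothetical single-location business; every name, address, coordinate, and service below is a placeholder to replace with verified data:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Dentist",
    "name": "Example Dental Care",
    "url": "https://www.example.com/",
    "telephone": "+1-312-555-0100",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "123 Main St",
      "addressLocality": "Chicago",
      "addressRegion": "IL",
      "postalCode": "60601",
      "addressCountry": "US"
    },
    "geo": {
      "@type": "GeoCoordinates",
      "latitude": 41.8781,
      "longitude": -87.6298
    },
    "makesOffer": {
      "@type": "Offer",
      "itemOffered": {
        "@type": "Service",
        "name": "Teeth Whitening"
      }
    }
  }
  </script>

Note how the geo and makesOffer properties map directly to the specifications above; run the result through the Rich Results Test before deploying.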

3. Achieve Omnichannel Entity Consistency (NAP Harmony)

Generative AI systems rely on consistency and verifiability of a business’s factual data across multiple sources. Any conflict in Name, Address, and Phone (NAP) details, or service descriptions, across primary and third-party sources introduces ambiguity. AI models, like organic search algorithms preceding them, are programmed to reject or hesitate to cite conflicting data points, significantly degrading a business’s trustworthiness.

The Data Harmonization Mandate:

GBP Vs. Website
If a business lists four specific services on its website, but six on its Google Business Profile (GBP), the AI may not be able to provide a definitive, confident summary of service offerings.

Comprehensive Auditing
Invest in robust, real-time auditing and monitoring tools to ensure 100% NAP consistency across the corporate website, all individual location pages, GBPs, and major third-party directories (e.g., Yelp, Tripadvisor).

Recommendation: Treat your structured data and GBP as the single source of truth, and enforce a technical and content compliance mandate across all third-party listings and local data aggregators to eliminate signal dilution. Local authority is now synonymous with holistic entity management.

4. Harness The Power Of Authentic Review Sentiment (E-E-A-T)

Within AI search, Google continues to emphasize the E-E-A-T framework (Experience, Expertise, Authoritativeness, and Trustworthiness). For local entities, this can in part be demonstrated through verifiable user interactions, authentic customer feedback, and structured review data. The AI synthesizes customer reviews into concise, attribute-level summaries serving as the user’s immediate decision cue.

Shifting Review Strategy To Influence The AI Summary:

Attribute-Level Prompting
The strategy must shift from merely gathering high star ratings to encouraging customers to mention desirable operational attributes (e.g., “fast service,” “knowledgeable staff,” “great atmosphere”). This provides the AI with positive attributes to feature prominently in the generated summary, which acts as a primary conversion trigger.

Review Schema Implementation
Implementing Review and AggregateRating schema is critical for providing the AI model with a structured roadmap to quickly identify recurring sentiment themes.
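
A minimal sketch of that markup with invented rating values, nested in the same LocalBusiness entity as above:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Dental Care",
    "aggregateRating": {
      "@type": "AggregateRating",
      "ratingValue": "4.8",
      "reviewCount": "214"
    },
    "review": {
      "@type": "Review",
      "reviewRating": { "@type": "Rating", "ratingValue": "5" },
      "author": { "@type": "Person", "name": "Jane D." },
      "reviewBody": "Fast service and knowledgeable staff."
    }
  }
  </script>

Search engines apply eligibility rules to self-published review markup, so confirm your implementation against current guidelines before relying on it.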

Proactive Management
Active, prompt management of and response to both positive and negative reviews, focusing on service attributes, further establishes the ‘A’ (authoritativeness) and ‘T’ (trustworthiness) in E-E-A-T.

5. Adopt Answer Engine Optimization (AEO) And Query Fan-Out Mapping

Content strategy must transition from traditional keyword SEO to Answer Engine Optimization (AEO). AI Mode prioritizes highly informative, concise content specifically structured to answer user queries directly. Query fan-out refers to the process of not only answering the first query submitted, but also anticipating and providing answers to a range of subsequent related questions users have.

Content Strategy For Conversational Search:

Map Latent Questions
Since complex queries often trigger AI Overviews, and AI Mode builds on the same multi-step reasoning systems, Google’s LLMs attempt to map the user’s broader information journey by predicting the follow-up questions they are likely to ask. Content therefore needs to address not only the initial ‘head query’ but also the latent questions that make up the next steps in that journey.

Structure For Extraction
Content inclusion is assessed partly by structure. Utilize clear formatting elements easy for the AI to extract and cite:

  • Hierarchical Headings: Implement a clean, tiered heading structure to guide LLMs through content based on its hierarchical importance.
  • Answer First Content: Incorporate semantically related questions and answers tied to perceived user intent naturally into body content.
  • FAQs/Q&A Formatting: Use structured Q&A formats along with FAQPage schema (see the sketch after this list).
  • Ordered Lists: Present verifiable facts in easily digestible formats like bulleted and numbered lists.
  • Short, Concise Paragraphs: Ensure maximum readability and extraction suitability for the LLM.
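
As referenced in the list above, a minimal FAQPage sketch with hypothetical questions and answers; the visible on-page Q&A should match the markup exactly:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
      {
        "@type": "Question",
        "name": "Do you offer emergency appointments?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Yes, we hold same-day slots for dental emergencies at our Chicago office."
        }
      },
      {
        "@type": "Question",
        "name": "Which insurance plans do you accept?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "We accept most major PPO plans; call to confirm your provider."
        }
      }
    ]
  }
  </script>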

Implement A Dual Content Strategy

  • Tier 1 (Informational/AEO): Unique, helpful, experience-backed content optimized for AIO citation (FAQs, guides) to establish E-E-A-T and secure brand visibility.
  • Tier 2 (Transactional/CRO): Core service pages and hyper-local pages focused on high-intent, bottom-of-the-funnel queries (“emergency plumber near me”), prioritizing clear calls-to-action and conversion architecture.

6. Diversify Entity Authority: Chase Branded Web Mentions

The AI’s holistic approach to entity authority means links are less important than they once were, while branded mentions are experiencing a resurgence. Research indicates a strong correlation between brands cited in AI Overviews/AI Mode and the frequency of their mention across the broader web, including social media, blogs, and forums like Reddit. In AI SEO, brand mentions (linked or not) are the new link.

Strategy For Earning “The AI Vote”:

Omnichannel Entity Acquisition
Proactively pursue high-quality, non-linked citations from authoritative local news sources, industry blogs, and high-quality directories. The goal is to maximize the sheer volume of high-quality, reinforcing brand mentions AI can reference.

Social & Video Integration
Leverage social media platforms and, critically, YouTube content. LLMs scrape video and social channels for entity information and context, making these verifiable sources of service and brand attribute data.

Recommendation: Shift resources from low-value link-building activities toward Digital PR and Content Distribution campaigns designed to earn non-linked brand mentions and reinforce local expertise across third-party industry and media sites.

7. Optimize For High-Velocity Conversions (CRO)

The inevitable decline in raw organic traffic is accompanied by an efficiency challenge. The traffic successfully navigating from AI Mode to the website should typically be more qualified and higher-intent, as the AI has already satisfied low-intent informational needs. The traffic remaining is typically the commercially valuable “bottom-of-the-funnel” user.

The Conversion Imperative:

CRO Over Traffic Generation
Resources should be strategically reallocated away from mass traffic generation toward maximizing the conversion potential of the qualified users who land on the website.

One interesting finding from the aforementioned AI Mode behavioral study was the number of users who expected to simply be able to complete their transaction once they left AI Mode, i.e., just click Book Now and pay. While this may be coming in the form of future Google integrations, the current transactional workflow requires users to start their booking from the beginning.

While the percentage of traffic from AI search may initially be less than 1%, the potential volume – with 1% of a trillion searches equating to 10 billion opportunities – justifies a dedicated focus on conversion for this high-value segment.

Perfecting Conversion Architecture
The final click from AI Mode to the website must lead to a seamless, high-velocity user experience. This involves:

  • Above-the-Fold CTAs: Ensuring clear, single-focus calls-to-action (CTAs) are immediately visible on landing pages.
  • Minimal Friction: Reducing form fields and providing one-click access to the most high-intent action (e.g., “Request a Quote,” “Book Now,” “Call Us”).
  • KPI Recalibration: Focus key performance indicators (KPIs) on high-value, direct actions tracked through Google Business Insights and Search Console, emphasizing direct calls, requests for driving directions, and specific booking actions, rather than low-intent clicks. Visibility in AI Mode becomes a more meaningful success metric than a singular keyword rank.

8. Future-Proofing: Un-hide Content And Prioritize Accessibility

A foundational requirement for AI Mode visibility is ensuring technical accessibility of content for the LLM’s consumption.

Accessibility As A Generative Requirement:

Un-hide Critical Content
Content crucial to establishing entity authority (e.g., licenses, certifications, key service attributes, location details) must not be hidden within toggles, tabs, accordions, or behind JavaScript that requires a user click to reveal.

Plain Text And HTML
While visuals are important, the core factual assertions must be rendered in clean, accessible HTML any machine can easily read and interpret.
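
A sketch of the difference, with hypothetical element IDs, a hypothetical /license-details endpoint, and invented credential details. In the first pattern the facts exist only after a click; in the second they ship in the initial HTML that any machine can read:

  <!-- Risky: credentials are fetched only on click, so a crawler that
       does not execute JavaScript never sees them. -->
  <button id="show-creds">Show credentials</button>
  <div id="fine-print"></div>
  <script>
    document.getElementById("show-creds").addEventListener("click", async () => {
      const res = await fetch("/license-details");
      document.getElementById("fine-print").textContent = await res.text();
    });
  </script>

  <!-- Safer: the same facts are present in the server-rendered HTML. -->
  <section id="credentials">
    <h2>Licenses And Certifications</h2>
    <p>Illinois Dental License #123456 · ADA member since 2015</p>
  </section>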

Proactive Monitoring
Use LLM analysis tools (or reverse question-answering prompts) to regularly audit which questions your site is answering and which critical facts are not being found by the AI, ensuring your core message is what actually gets crawled and indexed.

The Generative Mandate For Local SEO In The AI Era

Google AI Mode represents the definitive passing of the torch from traditional link-based SEO to a sophisticated strategy centered on fact provision and entity validation. For marketers, the shift is not one to debate, but one to embrace immediately.

The future of local search visibility is a high-stakes competition for the top-tier real estate of the AI Overview and AI Mode. The required investment is a mandate across the entire digital portfolio:

  1. Technical Compliance: Adhering to strict schema and content specifications to gain eligibility.
  2. Data Integrity: Enforcing omnichannel consistency to build undeniable entity trust.
  3. Content Refinement: Adopting Answer Engine Optimization to answer the full spectrum of user queries.
  4. Linked Or Unlinked Branded Mentions: Earn and establish visibility in relatively high-authority local and industry-relevant places.

This strategic pivot – away from mass-traffic keyword pursuits and toward precise entity authority management – is the only way to mitigate the risk of CTR collapse and capitalize on the high-quality, high-intent traffic AI Mode will deliver. Your business must now be structured as an impeccable source of verified, structured facts for AI to cite. The time for strategic adaptation is now.


Featured Image: Koupei Studio/Shutterstock

Google Explains How To Rank In AI Search via @sejournal, @martinibuster

Google’s John Mueller and Danny Sullivan discussed their thoughts on AI search and what SEOs and creators should be doing to make sure their content is surfaced. Danny showed some concern for folks who were relying on commodity content that is widely available.

What Creators Should Focus On For AI

John Mueller asked Danny Sullivan what publishers should be focusing on right now that’s specific to AI. Danny answered by explaining what kind of content you should not focus on and what kind of content creators should be focusing on.

He explained that the kind of content that creators should not focus on is commodity content. Commodity content is web content that consists of information that’s widely available and offers no unique value, no perspective, and requires no expertise. It is the kind of content that’s virtually interchangeable with any other site’s content because they are all essentially generic.

While Danny Sullivan did not mention recipe sites, his discussion about commodity content immediately brought recipe sites to mind because those kinds of sites seemingly go out of their way to present themselves as generically as possible, from the way the sites look, the “I’m just a mom of two kids” bio, and the recipes they provide. In my opinion, what Danny Sullivan said should make creators consider what they bring to the web that makes them notable.

To explain what he meant by commodity content, Danny used the example of publishers who used to optimize a web page for the time that the Super Bowl game began. His description of the long preamble they wrote before giving the generic answer of what time the Super Bowl starts reminded me again of recipe sites.

At about the twelve-minute mark, John Mueller asked Danny:

“So what would you say web creators should focus on nowadays with all of the AI?”

Danny answered:

“A key thing is to really focus on is the original aspect. Not a new thing.

These are not new things beyond search, but if you’re really trying to reframe your mind about what’s important, I think that on one hand, there’s a lot of content that is just kind of commodity content, factual information, and I think that the… LLM, AI systems are doing a good job of presenting that sort of stuff.

And it’s not originating from any type of thing.

So the classic example, as you know, will make people laugh, …but every year we have this little American football thing called the Super Bowl, which is our big event.

…But no one ever can seem to remember what time it’s on.

…Multiple places would then all write their “what time does the Super Bowl start in 2011?” post. And then they would write these giant long things.

…So, you know, and then at some point, we could see enough information and we have data feeds and everything else that we just kind of said, you do a search and …the Super Bowl is going to be at 3:30.

…I think the vast majority of people say, that’s a good thing. Thank you for just telling me the time of the Super Bowl.

It wasn’t super original information.”

Commodity Content Is Not Your Strength

Next, Danny considered some of the content people are publishing today, encouraging creators to examine the generic nature of their content and to give some thought to how they can share something more original and unique.

Danny continued his answer:

“I think that is a thing people need to understand, is that more of this sort of commodity stuff, it isn’t going to necessarily be your strength.

And I do worry that some people, even with traditional SEO, focus on it too much.

There are a number of sites I know from the research and things that I’ve done that get a huge amount of traffic for the answer to various popular online word-solving games.

It’s just every day I’m going to give you the answer to it. …and that is great. Until the system shifts or whatever, and it’s common enough, or we’re pulling it from a feed or whatever, and now it’s like, here’s the answer.”

Bring Your Expertise To AI

Danny next suggested that people who are concerned about showing up in AI should start exploring how to express their authentic experience or expertise. He said this advice applies not just to text content but also to video and podcast content.

He continued:

“Your original voice is that thing that only you can provide. It’s your particular take.

And so that’s what we think was our number one thing when we’re telling people is like, this is what we think your strength is going to be.

As we go into this new world, is already what you should be doing, but this is what your strength that you should be doing is focus on that original content.

I think related to that is this idea that people are also seeking original content that’s, …authentic to them, which typically means it’s a video, it’s a podcast…

…And you’ve seen that in the search we’ve already done, where we brought in more social, more experiential content.  Not to take away from the expert takes, it’s just that people want that.

Sometimes you’re just wanting to know someone’s firsthand experience alongside some expert take on it as well.

But if you are providing those expert takes, you’re doing reviews or whatever, and you’ve done that in the written form, you still have the opportunity to be doing those in videos and podcasts and so on.  Those are other opportunities.

So those are things that, again, it’s not unique to the AI formats, but they just may be, as you’re thinking about, how do I reevaluate what I’m doing overall in this era, that these are things you may want to be considering with it from there.”

John Mueller agreed that it makes sense to bring your unique voice to content in order to make it stand out. Danny’s point treats visibility in AI-driven search as a matter of differentiation rather than optimization. The emphasis is not on adapting content to a new format, but on creating a recognizable voice and perspective with which to stand out. Given that AI search is still classic search under the hood, it makes sense to stand out from competitors with unique content that people will recognize and recommend.

Listen to the passage at around the twelve-minute mark:

Featured Image by Shutterstock/Asier Romero

Google’s AI Mode Personal Context Features “Still To Come” via @sejournal, @MattGSouthern

Seven months after Google teased “personal context” for AI Mode at Google I/O, Nick Fox, Google’s SVP of Knowledge and Information, says the feature still is not ready for a public rollout.

In an interview with the AI Inside podcast, Fox framed the delay as a product and permissions issue rather than a model-capability issue. As he put it: “It’s still to come.”

What Google Promised At I/O

At Google I/O, Google said AI Mode would “soon” incorporate a user’s past searches to improve responses. It also said you would be able to opt in to connect other Google apps, starting with Gmail, with controls to manage those connections.

The idea was that you wouldn’t need to restate context in every query if you wanted Google to use relevant details already sitting in your account.

On timing, Fox said some internal testing is underway, but he did not share a public rollout date:

“Some of us are testing this internally and working through it, but you know, still to come in terms of the in terms of the public roll out.”

You can hear the question and Fox’s response in the video below starting around the 37-minute mark:

AI Mode Growth Continues Without Personal Context

Even without that personalization layer, Fox pointed to rapid adoption, describing AI Mode as having “grown to 75 million daily active users worldwide.”

The bigger change may be in how people phrase queries. Fox described questions that are “two to three times as long,” with more explicit first-person context.

Instead of relying on AI Mode to infer intent, people are writing the context into the prompt, Fox says:

“People are trying to put put the right context into the query”

That matters because the “personal context” feature was designed to reduce that manual effort.

Geographic Patterns In Adoption

Adoption also appears uneven by market, with the strongest traction in regions that received AI Mode first. Fox described the U.S. as the most “mature” market because the product has had more time to become part of people’s routines.

He also pointed to strong adoption in markets where the web is less developed in certain languages or regions, naming India, Brazil, and Indonesia. The argument there is that AI Mode can stitch together information across languages and borders in ways traditional search results may not have for those markets.

Younger users, he added, are adopting the experience faster across regions.

Publisher Relationship Updates

The interview also included updates tied to how AI Mode connects people back to publisher content.

Preferred Sources is one of them. The feature lets you choose specific publications you want to see more prominently in Google’s Top Stories unit, and Google describes it as available worldwide in English.

Fox also described ongoing work on links in AI experiences, including increasing the number of links shown and adding more context around them:

“We’re actually improving the links within our within our AI experience, increasing the number of them…”

On the commercial side, he noted Google has partnerships with “over 3,000 organizations” across “50 plus countries.”

Technical Updates

Fox talked through product and infrastructure changes now powering AI Mode and related experiences.

One was shipping Gemini 3 Pro in Search on day one, which he described as the first time Google has shipped a “frontier model” in Search on launch day.

He also described “generative layouts,” where the model can generate UI code on the fly for certain queries.

To keep the experience fast, he emphasized model routing, where simpler queries go to smaller, faster models and heavier work is reserved for more complex prompts.
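
As a rough illustration of that routing pattern (this is a sketch, not Google’s implementation; the model names and the complexity heuristic below are invented for the example), the idea can be expressed in a few lines of Python:

# Illustrative sketch of query-to-model routing. Not Google's system;
# the model names and the complexity heuristic are invented.

def estimate_complexity(query: str) -> float:
    # Toy heuristic: longer, multi-clause questions score higher.
    words = query.split()
    score = len(words) / 50
    score += 0.3 * sum(query.count(c) for c in (",", ";", "?"))
    return min(score, 1.0)

def route(query: str) -> str:
    # Simple queries go to a small, fast model; complex ones to a larger one.
    if estimate_complexity(query) < 0.4:
        return "small-fast-model"
    return "large-reasoning-model"

print(route("weather today"))  # small-fast-model
print(route("Compare the tax implications of selling a rental property, "
            "versus keeping it, over a 15-year holding period?"))  # large-reasoning-model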

Why This Matters

A version of AI Mode that personalizes answers using opt-in Gmail context is still not available and doesn’t have a public timeline.

In the meantime, people appear to be compensating by typing more context into their queries. If that becomes the norm, it may push publishers toward satisfying longer, more situation-specific questions.

Looking Ahead

While AI Mode is still in its early stages, the 75 million daily active users figure suggests the audience is large enough to be worth monitoring for visibility.


Featured Image: Jackpress/Shutterstock

Google Gemini 3 Flash Becomes Default In Gemini App & AI Mode via @sejournal, @MattGSouthern

Google released Gemini 3 Flash, expanding its Gemini 3 model family with a faster model that’s now the default in the Gemini app.

Gemini 3 Flash is also rolling out globally as the default model for AI Mode in Search.

The release builds on Google’s recent Gemini 3 rollout, which introduced Gemini 3 Pro in preview and also announced Gemini 3 Deep Think as an enhanced reasoning mode.

What’s New

Gemini 3 Flash replaces Gemini 2.5 Flash as the default model in the Gemini app globally, which means free users get the Gemini 3 experience by default.

In Search, Gemini 3 Flash is rolling out globally as AI Mode’s default model starting today.

For developers, Gemini 3 Flash is available in preview via the Gemini API, including access through Google AI Studio, Google Antigravity, Vertex AI, Gemini Enterprise, plus tools such as Gemini CLI and Android Studio.

Pricing

Gemini 3 Flash pricing is listed at $0.50 per million input tokens and $3.00 per million output tokens in Google’s Gemini API pricing documentation.

On the same pricing page, Gemini 2.5 Flash is listed at $0.30 per million input tokens and $2.50 per million output tokens.

Google says Gemini 3 Flash uses 30% fewer tokens on average than Gemini 2.5 Pro for typical tasks, and cites third-party benchmarking for a “3x faster” comparison versus 2.5 Pro.
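
To make those rates concrete, here is a minimal cost calculation in Python using the per-million-token prices listed above; the workload figures (request volume and tokens per request) are hypothetical:

# Worked cost comparison using the per-1M-token rates quoted above.
# The workload numbers below are hypothetical, chosen only for illustration.

PRICES = {
    "gemini-3-flash": {"input": 0.50, "output": 3.00},    # USD per 1M tokens
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Hypothetical workload: 1M requests/month, 800 input + 400 output tokens each.
inp = 1_000_000 * 800
out = 1_000_000 * 400

for model in PRICES:
    print(f"{model}: ${monthly_cost(model, inp, out):,.2f}/month")
# gemini-3-flash: $1,600.00/month
# gemini-2.5-flash: $1,240.00/month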

Why This Matters

The default model in the Gemini app has changed, and users get the upgrade at no extra cost.

If you build on Gemini, Gemini 3 Flash offers a new option for high-volume workflows, priced well below Pro-tier rates.

Looking Ahead

Gemini 3 Flash is rolling out now. In Search, Gemini 3 Pro is also available in the U.S. via the AI Mode model menu.

Google AI Mode & AI Overviews Cite Different URLs, Per Ahrefs Report via @sejournal, @MattGSouthern

Google’s AI Mode and AI Overviews can produce answers with similar meaning while citing different sources, according to new data from Ahrefs.

The report, published on the Ahrefs blog, analyzed September 2025 U.S. data from Ahrefs’ Brand Radar tool and compared AI Mode and AI Overview responses for the same queries.

The authors looked at 730,000 query pairs for content similarity and 540,000 query pairs for citation and URL analysis.

What The Study Found

Ahrefs reports that AI Mode and AI Overviews cited the same URLs only 13% of the time. When comparing only the top three citations in each response, overlap increased to 16%.

The language in the responses also varied. Ahrefs reports 16% overlap in unique words and states that AI Mode and AI Overviews share the exact same first sentence only 2.5% of the time.

Ahrefs reported strong semantic alignment, with an average semantic similarity score of 86%, and 89% of response pairs scoring above 0.8 on a scale where 1.0 indicates identical meaning.

Despina Gavoyannis, Senior SEO Specialist at Ahrefs, writes:

“Put simply: 9 out of 10 times, AI Mode and AI Overview agreed on what to say. They just said it differently and cited different sources.”
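
Ahrefs does not publish the exact formulas behind these numbers, but a citation-overlap measurement of this kind is straightforward to sketch. The Python below uses a Jaccard-style ratio over placeholder URLs; the semantic similarity figures would typically come from comparing text embeddings, which is an assumption here rather than a detail from the report:

# Illustrative sketch of measuring citation overlap between two responses.
# Not Ahrefs' methodology; the URLs are placeholders.

def url_overlap(citations_a: list[str], citations_b: list[str]) -> float:
    # Share of unique URLs appearing in both lists (Jaccard similarity).
    a, b = set(citations_a), set(citations_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

ai_mode_urls = [
    "https://example.com/guide",
    "https://en.wikipedia.org/wiki/Topic",
    "https://www.quora.com/example-question",
]
ai_overview_urls = [
    "https://example.com/guide",
    "https://www.youtube.com/watch?v=example",
]

print(f"{url_overlap(ai_mode_urls, ai_overview_urls):.0%}")  # 25% for this toy pair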

Different Source Preferences

Ahrefs reports differences in which websites and content types each feature tends to cite.

For example, Wikipedia appears in 28.9% of AI Mode citations compared to 18.1% in AI Overviews. The data also finds that AI Mode cited Quora 3.5x more often and cited health sites at roughly double the rate of AI Overviews.

AI Overviews, by contrast, leaned more heavily on video content. YouTube was the most frequently cited source for AI Overviews, whereas Reddit was cited at similar rates in both AI Mode and AI Overviews.

Ahrefs also reports that AI Overviews cited videos and core pages (such as homepages) nearly twice as often as AI Mode. At the same time, both features showed a strong preference for article-format pages overall.

Entity And Brand Mentions

Ahrefs found AI Mode responses were about four times longer than AI Overviews on average and included more entities.

In the dataset, AI Mode averaged 3.3 entity mentions per response compared to 1.3 for AI Overviews. Approximately 61% of the time, AI Mode included all entities mentioned in the AI Overview response and then added additional entities.

Many responses didn’t include brands or entities. Ahrefs reports that 59.41% of AI Overview responses and 34.66% of AI Mode responses contained no mentions of persons or brands, which the authors associate with informational queries in which named entities are not typically part of the answer.

Citation Gaps

The data finds that AI Mode was more likely to include citations than AI Overviews.

Only 3% of AI Mode responses lacked sources, compared to 11% of AI Overviews. Ahrefs reports that missing citations typically occur in cases such as calculations, sensitive queries, help center redirects, or unsupported languages.

Why This Matters

This report suggests that AI Mode and AI Overviews can differ in the sources they credit, even when they reach similar conclusions for the same query.

For monitoring purposes, this can affect how you interpret “visibility” across experiences. A citation (or a mention) in AI Overviews does not necessarily imply you will be cited in AI Mode for the same query, and AI Mode’s longer responses may include additional entities and competitors compared to the shorter AI Overview format.

Google’s documentation states that both AI Overviews and AI Mode may use “query fan-out,” which issues multiple related searches across subtopics and data sources while a response is being generated.

Google also notes that AI Mode and AI Overviews may use different models and techniques, so the responses and links they display will vary.
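
Google has not published fan-out internals, but the basic pattern, expanding one query into several related sub-queries and merging the results, is easy to sketch. In the Python below, the expansion rules and the search backend are stand-ins, not anything Google has described:

# Illustrative sketch of the "query fan-out" pattern. The expansion rules
# and search function are stand-ins, not Google's internals.

def expand_query(query: str) -> list[str]:
    # Stand-in expansion: derive related sub-queries for different subtopics.
    return [
        query,
        f"{query} pros and cons",
        f"{query} pricing",
        f"best alternatives to {query}",
    ]

def search(sub_query: str) -> list[str]:
    # Stand-in for a real search backend; returns placeholder result URLs.
    return [f"https://example.com/results/{sub_query.replace(' ', '-')}"]

def fan_out(query: str) -> list[str]:
    # Issue every sub-query and merge results, de-duplicating URLs.
    seen, merged = set(), []
    for sq in expand_query(query):
        for url in search(sq):
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

print(fan_out("project management software"))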

Looking Ahead

Ahrefs notes this analysis compares single generations of AI Mode and AI Overview responses. In related research, Ahrefs reported that 45.5% of AI Overview citations change when AI Overviews update, suggesting that overlap can appear different across repeated runs.

Even with that caveat, the low overlap observed in this dataset indicates that AI Mode and AI Overviews frequently select different URLs as supporting sources for the same query.


Featured Image: hafakot/Shutterstock