Trust Still Lives In Blue Links via @sejournal, @Kevin_Indig

I’ve been extremely antsy to publish this study. Consider it the AIO usability study 1.5, with new insights. You’ll also want to stay tuned for our first AI Mode usability study! It’s coming in a few weeks (make sure to subscribe so you don’t miss it).


Since March, everyone’s been asking the same question: “Are AI Overviews killing our conversions?”

Our 2025 usability study gives a clearer answer than the hot takes you’ll see on LinkedIn and X (Twitter).

In May 2025, I published significant findings from the first comprehensive UX study of AI Overviews (AIOs). Today, I’m presenting you with new insights from that study based on a cutting-edge RAG system that analyzed more than 90,000 words of transcription.

The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time.

Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.

Net-net: AIO is a preview layer. Blue links still close. Before we dive in, you need to hear these insights from Garrett French, CEO of Xofu, who financed this study:

“What lit me up most from this latest work from Kevin: We have direct insight now into an ‘anchor pattern’ of AIO behavior.

In this usability study, we discovered that users rarely voice distrust of AI Overviews directly – instead they hesitate, refine, or click out.

Therefore, hesitation itself is the loudest signal to us.

We see the same in complex, transition-enabling purchase-committee buying (B2B and B2C): Procurement stalls without lifecycle clarity, engineers stall without specs, IT stalls without validation.

These aren’t complaints. They’re unresolved, unanswered, and even unknown questions that have NEVER shown themselves in keyword demand.

As content marketers, we have never held ourselves systematically accountable to answering them.

Customer service logs – as an example of one surface for discovering friction – expose the same hesitations in traceable form through repeated chats, escalations, deployment blocks, etc.

Customer service logs are one surface; AIOs are another.

But the real source of truth is always contextual audience friction.

Answering these ‘friction-inducing, unasked latent questions’ gives us a way to read those signals and design content that truly moves decisions forward.”

What The Study Actually Found:

  • Organic results are the most trusted and most consistently successful destination across tasks.
  • Sponsored results are noticed but actively skipped due to low trust.
  • In-SERP answers quickly resolved roughly 85% of straightforward factual questions.
  • Users often use AIO as a preview or shortcut, then click out to finish or validate (on brand sites, YouTube, coupon portals, and the like).
  • Shopping carousels aid discovery more than closure. Expect reassessment clicks.
  • Trust splits by stakes: Low-stakes search journeys often end in the AIO, while finance or health pushes people to known authorities like PayPal, NIH, or Mayo Clinic.
  • Age and device matter. Younger users, especially on smartphones, accept AIOs faster; older cohorts favor blue links and authority domains.
  • When the AIO is wrong or feels generic, people bail. We logged 12 unique “AIO is misleading/wrong” flags in higher-stakes contexts.

(Interested in diving deeper into the first findings from this study or need a refresher? Read the first full iteration of the UX study of AIOs.)

Why This Matters For The Bottom Line

In my earlier analysis, I argued that top-of-funnel visibility had more downstream impact than our marketing analytics ever credited. I also argued that demand doesn’t just disappear because clicks shrink.

This study’s behavior patterns support that: AIO satisfies quick lookup intent, but purchase intent still routes through external validation and brand trust – aka clicks. Participants in this study shared thoughts aloud, like:

  • “There’s the AI results, but I’d rather go straight to PayPal’s own site.”
  • “Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”

And that preserves downstream conversions (when you show up in the right places and have earned authority).

Image Credit: Kevin Indig

Deeper Insights: Secondary Findings You Need To See

Recently, I worked with Eric Van Buskirk (the research director of the study) and his team over at Clickstream Solutions to do a deeper analysis of the May 2025 findings.

Using an advanced RAG-driven AI system, we analyzed all 91,559 (!) words of the transcripts from recorded user sessions across 275 task instances.

This is important to understand: We were able to find new insights from this study because Eric has built cutting-edge technology.

Our new RAG system analyzes structured fields like SERP features, AIO satisfaction, or user reactions from transcriptions and annotations. It creates a retrieval layer and uses GPT-5 for semantic search.

The result is faster, more rigorous, and more transparent research. Every claim can be traced to data rows and transcript quotes, patterns are checked across the full dataset, and visual evidence is a query away.

(To sum that all up in plain language: Eric’s custom-built advanced RAG-driven AI system is wildly cool and extremely effective.)

Practical benefits:

  • Auditable insights: Conclusions map back to exact data slices.
  • Speed: Test a hypothesis in minutes instead of re-reading sessions.
  • Scale: Triangulate transcripts, coded fields, and outcomes across all participants.
  • Fit for the AI era: Clean structure and trustworthy signals mirror how retrieval systems pick sources, which aligns with our broader stance on visibility and trust.

Here’s what we found:

  1. The data verified four distinct AIO Intent Patterns.
  2. Certain SERP features drove far more engagement than others.
  3. Recognizable brands shaped trust in AIOs.

About The New RAG System

We rebuilt the analysis on a retrieval-augmented system so answers come from the study data, not model guesswork. The backbone lives on structured fields with full transcripts and annotations, indexed in a lightweight database and paired with bucketed data for cohort filtering and cross-checks.

Core components:

  • Dataset ingestion and cleaning.
  • Retrieval layer based on hybrid keyword + semantic search (see the sketch after this list).
  • Auto-coded sentiment to turn speech into consistent, queryable signals.
  • Validation loop to minimize hallucination.
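The study’s production system isn’t public, but the hybrid retrieval idea behind that second component is easy to illustrate. Below is a minimal sketch, assuming scikit-learn for the keyword side and a stand-in embed() function where a real embedding model would go; the transcripts, weights, and function names are illustrative, not the study’s actual code.

```python
# Minimal sketch of a hybrid keyword + semantic retrieval layer.
# Illustrative only: the study's system is not public, and embed()
# is a stand-in for a real embedding model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcripts = [
    "I trust Mayo Clinic more than an AI summary.",
    "The AI overview gave me the answer right away.",
    "I clicked the PayPal link because it came from the official site.",
]

# Keyword side: TF-IDF vectors over the transcript snippets.
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(transcripts)

def embed(texts):
    # Stand-in: a real system would call an embedding model here.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 32))

semantic_matrix = embed(transcripts)

def hybrid_search(query, alpha=0.5):
    # Blend both similarity scores; alpha is an assumed tuning weight.
    kw = cosine_similarity(vectorizer.transform([query]), tfidf_matrix)[0]
    sem = cosine_similarity(embed([query]), semantic_matrix)[0]
    return sorted(zip(alpha * kw + (1 - alpha) * sem, transcripts), reverse=True)

for score, snippet in hybrid_search("distrust of AI Overviews"):
    print(f"{score:.2f}  {snippet}")
```

The reason for blending: keyword scores catch exact terms like brand names, while embeddings catch paraphrases (a user who says “hesitate” rather than “distrust”), which is what makes transcript search like this useful.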


Which AIO Intent Patterns Were Verified Through The Data

One of the biggest secondary findings from the AIO usability study is that the AIO Intent Patterns aren’t just “gut feelings” anymore – they’re statistically validated, built from measurable behavior.

Before some of you roll your eyes and declare, “here’s yet another newly created SEO/marketing buzzword,” know this: The patterns we discovered in the data weren’t exactly search personas, and they weren’t exactly search intents, either.

Therefore, we’re using the phrase “AIO Intent Pattern” to distinguish these concepts from one another.

Here’s how I define them: AIO Intent Patterns are statistically validated clusters of user behavior – like dwell, scroll, refinements, and sentiment – that define how people respond to AIOs. They’re recurring, measurable behaviors that describe how people interact with AI Overviews, whether they accept, validate, compare, or reject them.

And, again, these patterns aren’t exactly search intents or queries, but they’re not exactly user profiles either.

Instead, these patterns represent a set of behaviors (that appeared throughout our data) carried out by users to validate AIOs in different and distinct ways. So that’s why we’ve called the individual behavioral patterns “validations” below.

By running a RAG-driven coding pass across 250+ task instances, we were able to quantify four different behavioral patterns of engagement with AIOs:

  1. Efficiency-first validations that reward clean, extractable facts (accept AIOs).
  2. Trust-driven validations that convert only with credibility (validate AIOs).
  3. Comparative validations that use AIOs but compare with multiple sources.
  4. Skeptical rejections that automatically distrust AIOs for high-stakes queries.

What matters most here is that these aren’t arbitrary labels.

Statistical tests showed the differences in dwell time, scrolling, and refinements between the four groups were far too large to be random.

To put it plainly: These are real behavioral segments of AIO use – intent patterns you can plan for.
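To make “too large to be random” concrete: A test like Kruskal-Wallis asks whether the four groups’ dwell times could plausibly come from a single distribution. Here’s a minimal sketch with scipy using made-up numbers – illustrative only, not the study’s data or its exact test.

```python
# Illustrative only: synthetic dwell-time samples, not the study's data.
# Kruskal-Wallis tests whether all four groups could share one
# distribution; a tiny p-value means the gaps are unlikely to be random.
from scipy.stats import kruskal

dwell_seconds = {
    "efficiency_first": [12, 15, 14, 13, 16, 11],
    "trust_driven":     [52, 61, 55, 58, 60, 57],
    "comparative":      [44, 49, 47, 51, 46, 48],
    "skeptical":        [5, 7, 4, 6, 8, 5],
}

stat, p_value = kruskal(*dwell_seconds.values())
print(f"H = {stat:.1f}, p = {p_value:.5f}")  # small p => real differences
```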

Let’s look at each one.

1. Efficiency-First Validations

These are validations where users are after a shortcut. Users dip into AIOs for fast fact lookups, skim for one answer, and move on.

Efficiency-driven validations thrive on content that’s concise, scannable, and fact-rich. Typical queries that are resolved directly in the AIO include:

  • “1 cup in ml”
  • “how to take a screenshot on Mac”
  • “UTC to CET converter”
  • “what is robots.txt”
  • “email regex example”

Below, you can check out two examples of “efficiency-first validation” task actions from the study.

“Okay, so I like the summary at the top. And I would go ahead and follow these instructions and only come back to a search if they didn’t work.”

“I just had to go straight to the AI overview… and I liked that answer. It gave me the information I needed, organized and clear. Found it.”

Our data shows an average dwell time of just 14 seconds for this group overall, with almost no scrolling or refinements.

Users with an efficiency-first intent for their queries have a neutral-to-positive sentiment toward AIOs – with no hesitation flags – because AIOs scratch the efficiency itch quickly.

For this behavioral pattern, the AIO is often the final answer – especially on mobile – and if users do click, it’s usually to the first clear, extractable source.

👉 Optimization tips for this validation group:

  • Compress key facts into crisp TLDRs, FAQs, and schema so AIO can surface them (see the schema sketch after this list).
  • Place definitions, checklists, and example blocks near the top of your page.
  • Use simple tables and step lists that can be lifted cleanly.
  • Ensure brand mentions and key facts appear high on the page for visibility.
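To illustrate the schema tip above, here’s a minimal sketch of FAQPage JSON-LD for one extractable Q&A block, emitted via Python. The question and answer text are placeholders, and markup alone doesn’t guarantee AIO inclusion.

```python
# Minimal sketch: emit FAQPage JSON-LD so a crisp Q&A is machine-liftable.
# The question/answer text is placeholder content, not a guaranteed recipe.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is robots.txt?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A plain-text file that tells crawlers which URLs they may fetch.",
            },
        }
    ],
}

print(f'<script type="application/ld+json">{json.dumps(faq, indent=2)}</script>')
```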

2. Trust-Driven Validations

These validations are full of caution. Users with trust-driven intents engage with AIOs but rarely stop there.

They’ll skim the overview, hesitate, and then click out to an authority domain to validate what they saw, like in this example below:

The user shares that “…at the top, it gave me a really good description on how to transfer money. But I still clicked the PayPal link because it was directly from the official site. That’s what I went with – I trust that information to be more accurate.”

Typical queries that trigger this validation pattern include:

  • “PayPal buyer protection rules”
  • “Mayo Clinic strep symptoms”
  • “Is creatine safe long term”
  • “Stripe refund timeline”
  • “GDPR consent requirements example”

And our data from the study verifies that users in trust-driven mode scroll more (2.7x on average), dwell longer (~57s), and often flag uncertainty. What they want is authority.

These users have a high rate of hesitation flags in their search sessions. Their sentiment is mixed – often neutral, sometimes anxious or frustrated – and their confidence is only medium to low.

For these searches, the AIO is a starting point, not the destination. They’ll click out to Mayo Clinic, PayPal, Stripe, or other trusted domains to validate.

👉 Optimization tips for this validation group:

  • Reinforce trust scaffolding on your landing pages: expert reviewers, citations, and last-reviewed dates.
  • Mirror official terminology and link to primary sources.
  • Add “What to do next” boxes that align with authority guidance.
  • Build strong E-E-A-T signals since credibility is the conversion lever here.

3. Comparative Validations

Users with this intent actively lean into the AIO for classic comparative queries (think “Ahrefs vs Semrush for content teams”) or to compare informational resources and get clarity on the “best” of something. They expand, scroll, refine, and use interactive features – but they don’t stop there.

Instead, they explore across multiple sources, hopping to YouTube reviews, Reddit threads, and vendor sites before making a decision.

Example queries that reveal AIO comparative validation behavior:

  • “Notion vs Obsidian for teams”
  • “Best mirrorless camera under 1000”
  • “How to change a bike tire”
  • “Standing desk benefits vs risks”
  • “Programmatic SEO examples B2B”
  • “How to install a nest thermostat”

Here’s an example using a “how to” search, where the user is comparing sources for the best way to receive the most accurate information:

“The AI Overview gave me clear step-by-step instructions that matched what I expected. But since it was a physical DIY task, I still preferred to branch out to watch a video for confirmation.”

On average, searchers looking for comparative validations in the AIO dwell for 45+ seconds, scroll 4-5 times, and often open multiple tabs.

Their AIO sentiment is positive, and their confidence is high, but they still want to compare.

If this feels familiar – like classic transactional or commercial search intents – it’s because it is related.

If you’ve been doing SEO for any length of time, it’s likely you’ve created some of these “versus” or “comparison” pages. You’ve also likely created step-by-step “how to” content, like how to install a flatscreen TV on your wall.

Before AIOs, your target users would land on those pages if you ranked well in search.

But now, the AIO frames the landscape first, and the decision comes after weighing pros and cons across information sources to find the best solution.

👉 Optimization tips for this validation group:

  • Publish structured comparison pages with decision tables and use-case breakdowns.
  • Pair each page with short demo videos, social proof, and credible community posts to echo your takeaways.
  • Include “Who it is for” and “Who it isn’t for” sections to reduce ambiguity.
  • Seed content in YouTube and forums that AIOs (and users) can pick up.

4. Skeptical Rejections

Searchers with a make-or-break intent? They’re the outright AIO skeptical rejectors.

When stakes are high – health, finance, or legal … the typical YMYL (Your Money or Your Life) stuff – they don’t trust AIO to get it right.

Users may scan the summary briefly, but they quickly move to authoritative sources like government sites, hospitals, or financial institutions.

Common queries where this rejection pattern shows up:

  • “Metformin dosage for PCOS”
  • “How to file taxes as a freelancer in Germany”
  • “Credit card chargeback rights EU”
  • “Infant fever when to go to ER”
  • “LLC vs GmbH legal liability”

For this search intent, the dwell time in an AIO is short or nonexistent, and their sentiment often skews negative.

They show determination to bypass the AI layer in favor of direct authority validation.

👉 Optimization tips for this validation group:

  • Prioritize citations and mentions from highly trusted domains so AIOs lean on you indirectly.
  • Align your pages with the language and categories used by official sources.
  • Add explicit disclaimers and clear subheadings to strengthen authority signals.
  • For YMYL topics, focus on being cited rather than surfaced as the final answer.

SERP Features That Drove Engagement

Our RAG-driven analysis of the usability data verified that not all SERP features are created equal.

When we cut the data down to only features with meaningful engagement – which our study defined as ≥5 seconds of dwell time across at least 10 instances – only four SERP feature findings stood out.

(I’ll give you a moment to take a few wild guesses regarding the outcomes … and then you’ll see if you’re right.)

Drumroll please. 🥁🥁🥁

(Okay, moment over. Here we go.)

1. Organic Results Are Still The Backbone

Whenever our study participants gave the classic blue links more than a passing glance, they almost always found success.

Transcripts from the study make it explicit: Users trusted official sites, government domains, and familiar authority brands, as one participant’s quote demonstrates:

“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”

What about social or community sites that showed up in the organic blue-link results?

Reddit and YouTube were the social or community platforms in the SERP mentioned most by study participants.

Reddit had 45 unique mentions across the entire study. Overall, sentiment toward Reddit results in organic listings was mostly positive, with some users neutral about its inclusion and very few negative comments.

YouTube had 20 unique mentions across the entire study. Sentiment toward YouTube in the SERP was overwhelmingly positive (19 of 20 instances), with flagged emotions including happy/satisfied and curious/exploring.

There was a very clear theme across the study that appeared when social or community sites popped up in organic results:

  • Reddit was invoked when participants wanted community perspective, usually in comparison tasks. Confidence was high because Reddit validated nuance, but AIO trust was weak (users bypassed AIOs to Reddit instead).
  • YouTube was used as a visual validator, especially in product or technical comparison tasks. Users expressed positive sentiment and high satisfaction, even when explicit trust wasn’t verbalized. They treated YouTube as a natural step after the AIOs/organic SERP results.

2. Sponsored Results Barely Register

People saw them, but rarely acted on them. “I don’t like going to sponsored sites” was a common refrain.

High visibility, but low trust.

3. Shopping Carousels Aid Discovery But Not Closure

Participants clicked into Shopping carousels for product ideas, but often bounced back out to reassess with external sites.

The carousel works as a catalog – not a closer.

4. Featured Snippets Continue To Punch Above Their Weight

For straightforward factual lookups, Featured Snippets had an ~85% engagement success rate.

They were efficient and final for fact-based queries like [1 cup in ml] and [how to take a screenshot on Mac].

⚠️ Important note: Even though Google is replacing Featured Snippets with AIOs, it’s clear that this method of receiving information within the SERP drives high engagement. While the SERP feature may be in the process of being discontinued, the data shows users like engaging with snippets. The takeaway: If you often appeared in Featured Snippets and now often appear in AIO citations, keep up the good work, because that visibility still matters.

SERP Features x AIO Intent Patterns

When you layer the intent patterns onto different persona groups, the search behaviors get sharper:

  • Younger users on mobile leaned heavily on AIO and snippets, often stopping there if the stakes were low. → That’s the hallmark of efficiency-first validations (quick fact lookups) and comparative validations (scrolling, refining, and treating AIO as the main lens).
  • Older users consistently bypassed AI elements in favor of organic authority results. → This is classic behavior for trust-driven validations, when users click out to brands like PayPal or the Mayo Clinic, and skeptical rejections, when users distrust AIO altogether for high-stakes tasks.
  • Transactional queries – money, health, booking – nearly always pushed people toward trusted brands, regardless of what AIO or ads surfaced. → This connects directly to trust-driven validations (users who need authority reinforcement to fulfill their search intent) and skeptical rejections (users who reject AIO in YMYL contexts because AIOs don’t meet the intent behind the behavior).

What this shows is that, for SEOs, the priority isn’t about chasing every feature and “winning them all.”

Take this as an example:

“The AI overview didn’t pop up, so I used the search results. These were mostly weird websites, but CNBC looked trustworthy. They had a comparison of different platforms like CardCash and GCX, so I went with CNBC because they’re a trusted source.”

Your job is to match intent (as always):

  • Earn extractable presence in AIOs for quick facts,
  • Reinforce trust scaffolding on authority-driven organic pages, and
  • Treat Shopping and Sponsored slots as visibility and awareness plays rather than conversion levers.

Which Brands Shaped Trust In AIOs

AIOs don’t stand on their own; they borrow credibility from the brands they surface – whether you like it or not.

(Google truly seems to be cannibalizing itself while devouring all of us, too.)

When participants validated or rejected an AI answer, it often hinged on whether a familiar or authoritative brand was mentioned.

Our RAG-coded study data surfaced clear winners:

  • Institutional authorities like PayPal, NIH, and government sites consistently shaped trust, even without clicks.
  • Ecommerce and retail giants (Amazon, Walmart, Groupon) carried positive associations from brand familiarity.
  • Financial and tax prep services (H&R Block, Jackson Hewitt, CPA mentions) were trusted anchors in transactional searches.
  • Car rental brands (Budget, Avis, Dollar, Kayak, Zipcar, Turo) dominated travel-related tasks.
  • Emerging platforms (Raise, CardCash, GameFlip, Kade Pay) gained traction primarily because an AIO surfaced them, not because of prior awareness.

👉 Why it matters: Brand trust is the glue between AIO exposure and user action.

Here’s a quick paraphrase of this user’s exploration: We’re looking for places to sell gift cards for instant payment. Platforms like Raise, Gift Card Granny, or CardCash come up. On CardCash, I tried a $10 7-Eleven card, and the offer was $8.30. So they ‘tax’ you for selling. That’s good to know – but it shows you can sell gift cards for cash, and CardCash is one option.

In this instance, the AIO surfaced CardCash. The user didn’t know about it before this search. They explored it in detail, but trust friction (“they tax you”) shaped whether they’d actually use it.

For SEOs, this means three plays running in tandem:

  1. Win mentions in AIOs by ensuring your content is structured, scannable, and extractable.
  2. Strengthen authority off-site so when users validate (or reject the AIO), they land on your pages with confidence.
  3. Build topical authority in your niche through comprehensive persona-based topic coverage and valuable information gain across your topics. (This can be a powerful entry point or opportunity for teams competing against larger brands.)

What does this all mean for your own tactical optimizations?

Here’s the most crucial thing to take away from this analysis today:

With this information in mind, you can now go to your stakeholders and guide them to look at all your prompts, queries, and topics with fresh eyes.

You need to determine:

  • Which of the target queries/topics are quick answers?
  • Which of the target queries/topics are instances where people need more trust and assurance?
  • When do your ideal users expect to explore more, based on the target queries/topics?

This will help you set expectations accordingly and measure success over time.


Featured Image: Paulo Bobita/Search Engine Journal

Who Owns Web Performance? Building A Framework For Digital Accountability via @sejournal, @billhunt

In my previous article, “Closing the Digital Performance Gap,” I made the case that web effectiveness is a business issue, not a marketing metric. The website is no longer just a reflection of your brand – it is your brand. If it’s not delivering measurable business results, that’s a leadership problem, not a team problem.

But there’s a deeper issue underneath that: Who actually owns web performance?

The truth is, many companies don’t have a good answer. Or they think they do until something breaks. The SEO team doesn’t own the infrastructure. The dev team isn’t briefed on platform changes. The content team isn’t looped in until after a redesign. Visibility drops, conversions dip, and someone asks, “Why isn’t our SEO team performing?”

Because they don’t own the full system – no one does.

If we want to close the digital performance gap, we must address this root problem: lack of accountability.

The Fallacy Of Distributed Ownership

The idea that “everyone owns the website” likely stems from early digital transformation initiatives, where cross-functional collaboration was encouraged to break down departmental silos. The intent was to foster shared responsibility across departments – but the unintended consequence was diffused accountability.

It sounds collaborative, but in practice, it often means no one is fully accountable for performance.

Here’s how it typically breaks down:

  • IT owns infrastructure and hosting.
  • Marketing owns content and campaigns.
  • SEO owns visibility – but not implementation.
  • UX owns experience – but not findability.
  • Legal owns compliance – but limits usability.
  • Product owns the content management system (CMS) – but doesn’t track SEO.

Each group is doing its job, often with excellence. But the result? Disconnected execution. Strategy gets lost in translation, and performance stalls.

Case in point: For a global alcohol brand, a site refresh had legal requirements mandating an age verification gate before users could access the site. That was the extent of their specification. IT built the gate exactly to spec: a page prompting visitors to enter their birthdate via three pull-down menus for month, day, and year, then checking that date against the U.S. legal drinking age. UX and creative delayed launch for weeks while debating the optimal wording, positioning, and color scheme.

Once launched, website traffic – both direct and organic search – dropped to zero, for several key reasons:

  1. Analytics were not set up to track visits before and after the age gate.
  2. Search engines can’t input a birthdate, so they were blocked.
  3. The age requirement was set to the U.S. standard, rejecting younger, yet legal visitors from other countries.

Because everything was done in silos, no one had considered these critical details.

When we finally got all stakeholders in a room, agreed on the issues, and sorted through them, we redesigned the system:

  • Search engines were recognized and bypassed the age requirement (see the sketch after this list).
  • The age requirement and date format were adapted to the user’s location.
  • UX developed multiple variations and tested abandonment.
  • Analytics captured pre- and post-gate performance.
  • UX used the data to validate new landing page formats.
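To illustrate the first fix, here’s a minimal sketch of crawler recognition, assuming the reverse-then-forward DNS check Google documents for verifying Googlebot; the function name and the calling web framework are placeholders, not this brand’s actual implementation.

```python
# Minimal sketch: let verified search engine crawlers bypass an age gate.
# Reverse-then-forward DNS mirrors Google's documented way to verify
# Googlebot; the calling web framework is assumed, not shown.
import socket

def is_verified_googlebot(ip, user_agent):
    if "Googlebot" not in user_agent:
        return False
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS lookup
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return socket.gethostbyname(host) == ip  # forward-confirm the host
    except OSError:
        return False

# e.g., if is_verified_googlebot(request_ip, request_ua): skip the gate
```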

The result? A compliant, user-friendly, and search-accessible module that could be reused globally. Visibility, conversions, and compliance all increased exponentially. But we lost months and millions in potential traffic simply because no one owned the whole picture.

Without centralized accountability, the site was optimized in parts but underperforming as a whole.

The AI Era Raises The Stakes

This kind of siloed ownership might have been manageable in the old “10 blue links” era. But in an AI-first world – where Google and other platforms synthesize content into answers, summarize brands, and bypass traditional click paths – every decision across your digital operation impacts your visibility, trust, and conversion.

Search visibility today depends on structured data, crawlable infrastructure, content relevance, and citation-worthiness. If even one of these is out of alignment, you lose shelf space in the AI-driven SERP. And chances are, the team responsible for the weak link doesn’t even know they’re part of the problem.

Why Most SEO Advice Falls Short

I’ve seen well-meaning advice to “improve your SEO strategy” fall flat – because it assumes the SEO team has control over all the necessary elements. They don’t.

  • You can’t fix crawl issues if you can’t talk to the dev team.
  • You can’t win AI citations if your content team doesn’t structure or enrich their pages.
  • You can’t build authority if your legal or PR teams strip bios and outbound references.

What’s needed isn’t better tactics. It’s organizational clarity.

The Case For Centralized Digital Ownership

To create sustained performance, companies need to designate real ownership over web effectiveness. That doesn’t mean centralizing every task – but it does mean centralizing accountability.

Here are three practical approaches:

1. Establish A Digital Center Of Excellence (CoE)

A CoE provides governance, guidance, and support across business units and regions. It ensures that:

  • Standards are defined and enforced.
  • Platforms are chosen and maintained with shared goals.
  • Learnings are captured and distributed.
  • Key performance indicators (KPIs) are consistent and comparable.

2. Appoint A Digital Effectiveness Officer (DEO)

Think of this like a Commissioning Authority in construction – a role that ensures every component works together to meet the original performance spec. A DEO:

  • Connects the dots between dev, SEO, UX, and content.
  • Tracks impact beyond traffic (revenue, leads, brand trust).
  • Advocates for platform investment and cross-team prioritization.

3. Build Shared KPIs Across Departments

Most teams optimize for what they’re measured on. If the SEO team is judged on rankings but not revenue, and the content team is judged on output but not visibility, you get misaligned efforts. Create chained KPIs that reflect end-to-end performance.

Characteristics Of A Performance-Driven Model

Companies that close the accountability gap tend to share these traits:

  • Unified Taxonomy and Tagging – so content is findable and trackable.
  • Structured Governance – clear roles and escalation paths across teams.
  • Shared Dashboards – everyone sees the same numbers, not vanity metrics.
  • Tech Stack Discipline – fewer, better tools with cross-functional usage.
  • Scenario Planning – AI, zero-click SERPs, and platform volatility are modeled, not ignored.

Final Thought: Performance Requires Ownership

If you’re serious about web effectiveness, you need more than skilled people and good tools. You need a system where someone is truly accountable for how the site performs – across traffic, visibility, UX, conversion, and AI resilience.

This doesn’t mean a top-down mandate. It means orchestrated ownership with clear roles, measurable outcomes, and a strategic anchor.

It’s time to stop asking the SEO team to fix what they don’t control.

It’s time to build a framework where the web is everyone’s responsibility – and someone’s job.

Let’s make web performance a leadership priority, not a guessing game.


Featured Image: SFIO CRACHO/Shutterstock

Google Uses Infinite 301 Redirect Loops For Missing Documentation via @sejournal, @martinibuster

Google removed outdated structured data documentation, but instead of returning a 404 response, they have chosen to redirect the old URLs to a changelog that links to the old URL, thereby causing an infinite loop between the two pages. Although that is technically not a soft 404, it is an interesting use of a 301 redirect for a missing web page and not how SEOs typically handle missing web pages and 404 server responses. Did Google make a mistake?

Google Removed Structured Data Documentation

Google quietly published a changelog note announcing it had removed obsolete structured data documentation. The removal was announced three months ago, in June, and today the obsolete documentation is finally gone.

The missing pages are for the following structured data that is no longer supported:

  • Course info
  • Estimated salary
  • Learning video
  • Special announcement
  • Vehicle listing

Those pages are completely missing. Gone, and likely never coming back. The usual procedure in that kind of situation is to return a 404 Page Not Found server response. But that’s not what is happening.

Instead of a 404 response, Google is returning a 301 redirect back to the changelog. What makes this setup somewhat weird is that Google is linking back to the missing web page from the changelog, which then redirects back to the changelog, creating an infinite loop between the two pages.

Screenshot Of Changelog

In the above screenshot, I’ve underlined in red the link to the Course Info structured data.

The words “course info” are a link to this URL:
https://developers.google.com/search/docs/appearance/structured-data/course-info

Which redirects right back to the changelog here:
https://developers.google.com/search/updates#september-2025

Which of course contains the links to the five URLs that no longer exist, essentially causing an infinite loop.
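You can trace this yourself. Below is a minimal sketch using the requests library that follows Location headers manually; for these URLs you’d see a single 301 to the changelog, with the “loop” closing through the changelog’s HTML link rather than another HTTP redirect, though the script also flags true redirect loops if it hits one.

```python
# Minimal sketch: trace a redirect chain by hand and flag a repeated URL.
from urllib.parse import urljoin
import requests

def trace_redirects(url, max_hops=10):
    seen = set()
    for _ in range(max_hops):
        if url in seen:
            print(f"Redirect loop detected at {url}")
            return
        seen.add(url)
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(resp.status_code, url)
        location = resp.headers.get("Location")
        if not location:
            return  # final destination reached
        url = urljoin(url, location)  # Location may be relative

trace_redirects(
    "https://developers.google.com/search/docs/appearance/structured-data/course-info"
)
```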

It’s not a good user experience and it’s not good for crawlers. So the question is, why did Google do that? 

301 redirects are an option for pages that are missing, so Google is technically correct to use a 301 redirect. However, 301 redirects are generally used to point “to a more accurate URL,” which typically means a redirect to a replacement page, one that serves the same or similar purpose.

Technically, they didn’t create a soft 404. But the way they handled the missing pages creates a loop that sends crawlers back and forth between a missing web page and the changelog. It would have been a better user and crawler experience to instead link to the June 2025 blog post that explains why these structured data types are no longer supported, rather than create an infinite loop.

I don’t think it’s anything most SEOs or publishers would do, so why does Google think it’s a good idea?

Featured Image by Shutterstock/Kues

AI Is Changing Local Search Faster Than You Think [Webinar] via @sejournal, @hethr_campbell

For multi-location brands, local search has always been competitive. But 2025 has introduced a new player: AI.

From AI Overviews to Maps Packs, how consumers discover your stores is evolving, and some brands are already pulling ahead.

Robert Cooney, VP of Client Strategy at DAC, and Kyle Harris, Director of Local Optimization, have spent months analyzing enterprise local search trends. Their findings reveal clear gaps between brands that merely appear and those that consistently win visibility across hundreds of locations.

The insights are striking:

  • Some queries favor Maps Packs, others AI Overviews. Winning in both requires strategy, not luck.
  • Multi-generational search habits are shifting. Brands that align content to real consumer behavior capture more attention.
  • The next wave of “agentic search” is coming, and early preparation is the key to staying relevant.

This webinar is your chance to see these insights in action. Walk away with actionable steps to protect your visibility, optimize local presence, and turn AI-driven search into a growth engine for your stores.

📌 Register now to see how enterprise brands are staying ahead of AI in local search. Can’t make it live? Sign up and we’ll send the recording straight to your inbox.

Structured Data’s Role In AI And AI Search Visibility via @sejournal, @marthavanberkel

The way people find and consume information has shifted. We, as marketers, must think about visibility across AI platforms and Google.

The challenge is that we don’t have the same ability to control and measure success as we do with Google and Microsoft, so it feels like we’re flying blind.

Earlier this year, Google, Microsoft, and OpenAI each commented on how structured data can help LLMs better understand your digital content.

Structured data can give AI tools the context they need to understand content through entities and relationships. In this new era of search, you could say that context, not content, is king.

Schema Markup Helps To Build A Data Layer

By translating your content into Schema.org and defining the relationships between pages and entities, you are building a data layer for AI. This schema markup data layer, or what I like to call your “content knowledge graph,” tells machines what your brand is, what it offers, and how it should be understood.
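As a minimal illustration of that data layer, here’s a JSON-LD fragment (emitted via Python) that defines two entities and the relationship between them with @id references. The organization, service, and URLs are placeholders.

```python
# Minimal sketch of a "content knowledge graph" fragment: two entities
# linked by @id references. All names and URLs are placeholders.
import json

graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example Co",
            "url": "https://example.com/",
        },
        {
            "@type": "Service",
            "@id": "https://example.com/services/audits#service",
            "name": "Schema Markup Audits",
            # The provider property ties the service back to the org entity.
            "provider": {"@id": "https://example.com/#org"},
        },
    ],
}

print(json.dumps(graph, indent=2))
```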

This data layer is how your content becomes accessible and understood across a growing range of AI capabilities, including:

  • AI Overviews
  • Chatbots and voice assistants
  • Internal AI systems

Through grounding, structured data can contribute to visibility and discovery across Google, ChatGPT, Bing, and other AI platforms. It also prepares your web data to accelerate your internal AI initiatives.

The same week that Google and Microsoft announced they were using structured data for their generative AI experiences, Google and OpenAI announced their support of the Model Context Protocol.

What Is Model Context Protocol?

In November 2024, Anthropic introduced Model Context Protocol (MCP), “an open protocol that standardizes how applications provide context to LLMs,” which was subsequently adopted by OpenAI and Google DeepMind.

You can think of MCP as the USB-C connector for AI applications and agents or an API for AI. “MCP provides a standardized way to connect AI models to different data sources and tools.”

Since we are now thinking of structured data as a strategic data layer, the problem Google and OpenAI need to solve is how to scale their AI capabilities efficiently and cost-effectively. Combining the structured data on your website with MCP would allow accurate inferencing at scale.
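For a sense of what that standardization looks like on the wire, here’s a rough sketch of an MCP-style JSON-RPC 2.0 tool call. The “tools/call” method name comes from the public MCP spec; the tool name and arguments are invented placeholders for illustration.

```python
# Rough sketch of an MCP-style JSON-RPC 2.0 tool call. "tools/call" is
# from the public MCP spec; the tool name and arguments are invented.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_entity",              # hypothetical tool
        "arguments": {"entity": "Example Co"},  # hypothetical input
    },
}

print(json.dumps(request, indent=2))
```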

Structured Data Defines Entities And Relationships

LLMs generate answers based on the content they are trained on or connected to. While they primarily learn from unstructured text, their outputs can be strengthened when grounded in clearly defined entities and relationships, for example, via structured data or knowledge graphs.

Structured data can be used as an enhancer that allows enterprises to define key entities and their relationships.

When implemented using Schema.org vocabulary, structured data:

  • Defines the entities on a page: people, products, services, locations, and more.
  • Establishes relationships between those entities.
  • Can reduce hallucinations when LLMs are grounded in structured data through retrieval systems or knowledge graphs.

When schema markup is deployed at scale, it builds a content knowledge graph, a structured data layer that connects your brand’s entities across your site and beyond. 

A recent study by BrightEdge demonstrated that schema markup improved brand presence and perception in Google’s AI Overviews, noting higher citation rates on pages with robust schema markup.

Structured Data As An Enterprise AI Strategy

Enterprises can shift their view of structured data beyond the basic requirements for rich result eligibility to managing a content knowledge graph.

According to Gartner’s 2024 AI Mandates for the Enterprise Survey, participants cite data availability and quality as the top barrier to successful AI implementation.

By implementing structured data and developing a robust content knowledge graph, you can contribute to both external search performance and internal AI enablement.

A scalable schema markup strategy requires:

  • Defined relationships between content and entities: Schema markup properties connect all content and entities across the brand. All page content is connected in context.
  • Entity Governance: Shared definitions and taxonomies across marketing, SEO, content, and product teams.
  • Content Readiness: Ensuring your content is comprehensive, relevant, representative of the topics you want to be known for, and connected to your content knowledge graph.
  • Technical Capability: Cross-functional tools and processes to manage schema markup at scale and ensure accuracy across thousands of pages.

For enterprise teams, structured data is a cross-functional capability that prepares web data to be consumed by internal AI applications.

What To Do Next To Prepare Your Content For AI

Enterprise teams can align their content strategies with AI requirements. Here’s how to get started:

1. Audit your current structured data to identify gaps in coverage and whether schema markup is defining relationships within your website. This context is critical for AI inferencing.
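A minimal sketch of that audit step: pull a page’s JSON-LD blocks with requests and BeautifulSoup and list the entity types they declare. The URL is a placeholder, and a production audit would validate far more than this.

```python
# Minimal sketch for step 1 of a structured-data audit: list the
# JSON-LD blocks a page declares. The URL is a placeholder.
import json
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for tag in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(tag.string or "")
    except json.JSONDecodeError:
        continue  # malformed block: worth flagging in a real audit
    for item in data if isinstance(data, list) else [data]:
        if isinstance(item, dict):
            print(item.get("@type"), "->", sorted(item.keys())[:6])
```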

2. Map your brand’s key entities, such as products, services, people, and core topics, and ensure they are clearly defined and consistently marked up with schema markup across your content. This includes identifying the main page that defines an entity, known as the entity home.

3. Build or expand your content knowledge graph by connecting related entities and establishing relationships that AI systems can understand.

4. Integrate structured data into AI budget and planning, alongside other AI investments, whether that content is intended for AI Overviews, chatbots, or internal AI initiatives.

5. Operationalize schema markup management by developing repeatable workflows for creating, reviewing, and updating schema markup at scale.

By taking these steps, enterprises can ensure that their data is AI-ready, inside and outside the enterprise.

Structured Data Provides A Machine-Readable Layer

Structured data doesn’t assure placement in AI Overviews or directly control what large language models say about your brand. LLMs are still primarily trained on unstructured text, and AI systems weigh many signals when generating answers.

What structured data does provide is a strategic, machine-readable layer. When used to build a knowledge graph, schema markup defines entities and the relationships between them, creating a reliable framework that AI systems can draw from. This reduces ambiguity, strengthens attribution, and makes it easier to ground outputs in fact-based content when structured data is part of a connected retrieval or grounding system.

By investing in semantic, large-scale schema markup and aligning it across teams, organizations position themselves to be as discoverable in AI experiences as possible.


Featured Image: Koto Amatsukami/Shutterstock

Google’s Antitrust Ruling: What The Remedies Really Mean For Search, SEO, And AI Assistants via @sejournal, @gregjarboe

When Judge Amit P. Mehta issued his long-awaited remedies decision in the Google search antitrust case, the industry exhaled a collective sigh of relief. There would be no breakup of Google, no forced divestiture of Chrome or Android, and no user-facing “choice screen” like the one that reshaped Microsoft’s browser market two decades ago. But make no mistake – this ruling rewrites the playbook for search distribution, data access, and competitive strategy over the next six years.

This article dives into what led to the decision, what it actually requires, and – most importantly – what it means for SEO, PPC, publishers, and the emerging generation of AI-driven search assistants.

What Led To The Decision

The Department of Justice and a coalition of states sued Google in 2020, alleging that the company used exclusionary contracts and massive payments to cement its dominance in search. In August 2024, Judge Mehta ruled that Google had indeed violated antitrust law, writing, “Google is a monopolist, and it has acted as one to maintain its monopoly.” The question then became: what remedies would actually restore competition?

The DOJ and states pushed for sweeping measures – including a breakup of Google’s Chrome browser or Android operating system, and mandatory choice screens on devices. Google countered that such steps would harm consumers and innovation. By the time remedies hearings wrapped, generative AI had exploded into the mainstream, shifting the court’s sense of what competition in search could look like.

What The Court Decided

Judge Mehta’s ruling, issued September 2, 2025, imposed a mix of behavioral remedies:

  • Exclusive contracts banned. Google can no longer strike deals that make it the sole default search engine on browsers, phones, or carriers. That means Apple, Samsung, Mozilla, and mobile carriers can now entertain offers from rivals like Microsoft Bing or newer AI entrants.
  • Payments still allowed. Crucially, the court did not ban Google from paying for placement. Judge Mehta explained that removing payments altogether would “impose substantial harms on distribution partners.” In other words, the checks will keep flowing – but without exclusivity.
  • Index and data sharing. Google must share portions of its search index and some user interaction data with “qualified competitors” on commercial terms. Ads data, however, is excluded. This creates a potential on-ramp for challengers, but it doesn’t hand them the secret sauce of Google’s ranking systems.
  • No breakup, no choice screen. Calls to divest Chrome or Android were rejected as overreach. Similarly, the court declined to mandate a consumer-facing choice screen. Change will come instead through contracts and UX decisions by distribution partners.
  • Six-year oversight. Remedies will be overseen by a technical committee for six years. A revised judgment is due September 10, with remedies taking effect roughly 60 days after final entry.

As Judge Mehta put it, “Courts must… craft remedies with a healthy dose of humility,” noting that generative AI has already “changed the course of this case.”

How The Market Reacted

Investors immediately signaled relief. Alphabet shares jumped ~8% after hours, while Apple gained ~4%. The lack of a breakup, and the preservation of lucrative search placement payments, reassured Wall Street that Google’s search empire was not being dismantled overnight.

But beneath the relief lies a new strategic reality: Google’s moat of exclusivity has been replaced with a marketplace for defaults.

Strategic Insights: Beyond The Headlines

Most coverage of the decision has focused on what didn’t happen – the absence of a breakup or a choice screen. But the deeper story is how distribution, data, and AI will interact under the new rules.

1. Defaults Move From Moat To Marketplace

Under the old model, Google’s exclusive deals ensured it was the default on Safari, Android, and beyond. Now, partners can take money from multiple providers. That turns the default position into a marketplace, not a moat.

Apple, in particular, gains leverage. Court records revealed that Google paid Apple $20 billion in 2022 to remain Safari’s default search engine, and paid $26.3 billion for default placements overall in 2021 – the latter figure isn’t to any one company, but Apple likely represents the largest recipient. Without exclusivity, Apple can entertain bids from Microsoft, OpenAI, or others – potentially extracting even more money by selling multiple placements or rotating defaults.

We may see new UX experiments: rotating search tiles, auction-based setup flows, or AI assistant shortcuts integrated into operating systems. Distribution partners like Samsung or Mozilla could pilot “multi-home defaults,” where Google, Bing, and an AI engine all coexist in visible slots.

2. Data Access Opens An On-Ramp For Challengers

Index-sharing and limited interaction data access lower barriers for rivals. Crawling the web is expensive; licensing Google’s index could accelerate challengers like Bing, Perplexity, or OpenAI’s rumored search product.

But it’s not full parity. Without ads data and ranking signals, competitors must still differentiate on product experience. Think faster answers, vertical specialization, or superior AI integration. As I like to put it: Index access gives challengers legs, not lungs.

Much depends on how “qualified competitor” is defined. A narrow definition could limit access to a token few; a broad one could empower a new wave of vertical and AI-driven search entrants.

3. AI Is Already Shifting The Game

The court acknowledged that generative AI reshaped its view of competition. Assistants like Copilot, Gemini, or Perplexity are increasingly acting as intent routers – answering directly, citing sources, or routing users to transactions without a traditional SERP.

That means the battle for distribution may shift from browsers and search bars to AI copilots embedded in operating systems, apps, and devices. If users increasingly ask their assistant instead of typing a query, exclusivity deals matter less than who owns the assistant.

For SEO and SEM professionals, this accelerates the shift toward zero-click answers, assistant-ready content, and schema that supports citations.

4. Financial Dynamics: Relief Today, Pressure Tomorrow

Yes, investors cheered. But over time, Google could face rising traffic acquisition costs (TAC) as Apple, Samsung, and carriers auction off default positions. Defending its distribution may get more expensive, eating into margins.

At the same time, without a choice screen, search market share is likely to shift gradually, not collapse. Expect Google’s U.S. query share to remain in the high 80s in the near term, with only single-digit erosion as rivals experiment with new models.

5. Knock-On Effects: The Ad-Tech Case Looms

Don’t overlook the second front: the DOJ’s separate antitrust case against Google’s ad-tech stack, now moving toward remedies hearings in Virginia. If that case results in structural changes – say, forcing Google to separate its publisher ad server from its exchange – it could reshape how search ads are bought, measured, and monetized.

For publishers, both cases matter. If rivals gain traction with AI-driven assistants, referral traffic could diversify – but also become more volatile, depending on how assistants handle citations and click-throughs.

What Happens Next

  • September 10, 2025: DOJ and Google file a revised judgment.
  • ~60 days later: Remedies begin taking effect.
  • Six years: Oversight period, with ongoing compliance monitoring.

Key Questions To Watch:

  • How will Apple implement non-exclusive search defaults in Safari?
  • Who qualifies as a “competitor” for index/data access, and on what terms?
  • Will rivals like Microsoft, Perplexity, or OpenAI buy into distribution slots aggressively?
  • How will AI assistants evolve as distribution front doors?

What This Means For SEO And PPC

This ruling isn’t just about contracts in Silicon Valley – it has practical consequences for marketers everywhere.

  • Distribution volatility planning. SEM teams should budget for a world where Safari queries become more contestable. Test Bing Ads, Copilot Ads, and assistant placements.
  • Assistant-ready content. Optimize for concise, cite-worthy answers with schema markup. Publish FAQs, data tables, and source-friendly content that large language models (LLMs) like to quote.
  • Syndication hedge. If new index-sharing programs emerge, explore partnerships with vertical search startups. Early pilots could deliver traffic streams outside the Google ecosystem.
  • Attribution resilience. As assistants mediate more traffic, referral strings will get messy. Double down on UTM governance, server-side tracking, and marketing mix models to parse signal from noise.
  • Creative testing. Build two-tier content: a punchy, fact-dense abstract that assistants can lift, and a deeper explainer for human readers.

Market Scenarios

  • Base Case (Most Likely): Google retains high-80s market share. TAC costs rise gradually. AI assistants siphon a modest share of informational queries by 2027. Impact: margin pressure more than market share loss.
  • Upside for Rivals: If index access is broad and AI assistants nail UX, Bing, Perplexity, and others could win five to 10 points combined in specific verticals. Impact: SEM arbitrage opportunities emerge, and SEO adapts to answer-first surfaces.
  • Regulatory Cascade: If the ad-tech remedies impose structural changes, Google’s measurement edge narrows, and OEMs test choice-like UX voluntarily. Impact: more fragmentation, more testing for marketers.

Final Takeaway

Judge Mehta summed up the challenge well: “Courts must craft remedies with a healthy dose of humility.” The ruling doesn’t topple Google, but it does force the search giant to compete on more open terms. Exclusivity is gone; auctions and assistants are in.

For marketers, the message is clear: Don’t wait for regulators to rebalance the playing field. Diversify now – across engines, assistants, and ad formats. Optimize for answerability as much as for rankings. And be ready: The real competition for search traffic is just beginning.


Featured Image: beast01/Shutterstock

The Problem With Always-On SEO: Why You Need Sprints, Not Checklists via @sejournal, @coreydmorris

There’s a lot that goes into SEO. And now, more broadly, there’s a lot that goes into being found online and overall online visibility, whether we’re talking about an organic result in a search engine, an AI Overview, or a large language model (LLM).

SEO is a discipline that often takes a long time (compared to ads and some other channels and platforms), with a large amount of complexity, technical nuance, contradictions about how it works, and even outright disagreements. It has to be organized in a way that can actually be implemented.

Over the years and decades, this has resulted in the acceptance of specific “best practices,” along with the fact that it is a longer-term commitment. That, ultimately, has led to the use of checklists and specific cadences to accomplish what is typically seen as an “ongoing” and never-ending discipline.

In full disclosure, you’ll find articles written by me that talk about checklists and ways to structure the work that is important to be visible and found online. I’m not saying we have to throw them out, but we can’t simply check off the list of activities.

“Always-on SEO” sounds great in theory: ongoing optimization, constant monitoring, and steady progress. But in reality, it often becomes a nebulous set of tasks without priority, strategy, or momentum.

This article challenges the default mindset of treating SEO as a perpetual checklist and proposes a sprint-based approach, where work is grouped into focused time blocks with measurable goals.

By approaching SEO in strategic sprints, teams can prioritize, measure, adapt, and improve – all while staying aligned with larger business goals.

The Problem With Perpetual SEO Checklists

What I often see with SEO checklists is a lack of prioritization. Everything becomes a task, but nothing is deemed critical.

The checklist might have “right” and “good” things in it, but it isn’t weighted or prioritized based on any level of strategic approach or potential level of impact.

And, when there’s a lack of direction, we often can end up with a set of actions, activities, or tactics that have no clear end or evaluation defined. This ends up getting us into a place of just “doing SEO” without being able to objectively say what the result was or how things were improved.

Like any digital marketing channel, activity without the right anchor or foundation, in SEO, can result in wasted effort.

Technical fixes and content updates may not support meaningful business goals and can be a huge investment of time and money that ultimately doesn’t impact the business. And activity without results or clear direction can drive SEO teams and professionals to boredom or burnout.

I’ve taken over a number of situations where, due to stakeholder confusion, a business thought SEO didn’t work for them or that the team wasn’t competent enough.

When activity doesn’t generate results and you find that out a year into an investment, it is hard to recover, especially when no one really knows what “done” or success looks like in the effort.

I say all of this not to bring up pain, say that checklists aren’t good, or even that the ongoing tactics aren’t right. I’m simply saying we have to have a deeper understanding and meaning behind what we’re doing in SEO.

What Sprint-Based SEO Looks Like

SEO sprints are focused and time-bound (e.g., four weeks) efforts with specific goals tied to strategy. Rather than working on everything at once, you work on the highest-impact priorities in chunks.

Common sprint types:

  • Content optimization sprints.
  • Technical SEO fix sprints.
  • Internal linking improvement sprints.
  • New content creation sprints.
  • Authority/link building sprints.

You can also combine types into a custom sprint. Regardless of whether you stay within one category or blend themes and tactics, your first sprint needs to be anchored to an initial strategy, plan, or audit.

Each sprint ends with measurable outputs, documented outcomes, and clear learnings. The first one might be rooted in an initial plan, but each subsequent sprint will include a retrospective review from the previous one to help fuel continuous learning, efficiencies, improvements, and ultimate impact.

Benefits Of SEO Sprints

A quick-win benefit is focus. Pivoting away from a generic checklist to a sprint structure means solving a defined problem, not tackling a vague backlog.

As noted earlier, sprints are time-bound as well. By choosing the right length – not so short that the sample size is too small, and not so long that you keep repeating tactics that aren’t effective – you gain agility along with an adaptable longer-term approach.

Agility in sprints allows you to adjust based on performance and new insights. Checklists, by contrast, are not only generic and often disconnected from strategy; they also go out of date constantly as the sources and methods of online visibility shift.

Accountability and team clarity come more naturally as well. It’s easier to report on and justify value with clear before/after comparisons and to keep people engaged and in the know on what’s happening now and what’s next.

This matters for aligning key performance indicators (KPIs) with the overall business, and for not getting lost in jargon, technical detail, and “hope” for return on investment (ROI), versus seeing shorter-term, higher-impact efforts.

Sprints can be tied directly to goals (revenue, lead generation, funnel support) rather than just rankings or other upstream KPIs that are further removed from business outcomes. And shorter-term expectations take the pressure off long waits for something to happen.

How To Implement Sprint-Based SEO

Start with strategy. Identify what matters to the business and where SEO fits. Define sprint themes and objectives, and make them specific enough to be meaningful and measurable.

Example: “Improve organic conversions for top 5 services pages” vs. “Improve rankings.”

Build a backlog or tactics plan, but don’t treat it like a checklist. Use it to feed sprint plans without overwhelming day-to-day work.

In short:

  • Plan your first sprint: Choose one clear objective, timeline, and outcome.
  • Track and review: Report on progress, document what was done, and define what’s next.
  • Iterate: Use learnings from each sprint to improve the next.

When (And Where) “Always-On” SEO Still Applies

Certain things do need continuous attention. I’m not saying that 100% of your sprints should be 100% custom.

There are recurring items that could, and likely should, go into sprints or be monitored and maintained through regular or routine audits or checklists: crawl errors, broken links, technical issues, and the like.

But this maintenance work shouldn’t be the SEO strategy; it should support it. Use “always-on” work as infrastructure, not direction. The checklist isn’t the strategy – if you have one, it is a planning tool, not your tactical roadmap to ultimate SEO ROI.

Why It’s Time To Rethink “Always-On” SEO

I’ve hit on it enough, but I will wrap up by reminding you that endless to-do lists don’t move the needle.

Checklists can be good things, full of the “right” tactics. However, they often lack strategy, do little to hold a team’s attention, and don’t allow for enough agility.

Sprint-based SEO helps teams be more strategic, productive, and aligned with the business overall – with room to implement prioritized tactics tied to overall goals and to adjust to market and business conditions.

Shifting your team from “always-on” to “intentionally paced” is a move to start seeing results and not just activity.

Featured Image: wenich_mit/Shutterstock

Google Antitrust Case: AI Overviews Use FastSearch, Not Links via @sejournal, @martinibuster

A sharp-eyed search marketer discovered why Google’s AI Overviews showed spammy web pages. A passage in the recent Memorandum Opinion in the Google antitrust case offers a clue as to why that happened, and he speculates that it reflects Google’s move away from links as a prominent ranking factor.

Ryan Jones, founder of SERPrecon, called attention to a passage in the recent Memorandum Opinion that shows how Google grounds its Gemini models.

Grounding Generative AI Answers

The passage occurs in a section about grounding answers with search data. Ordinarily, it’s fair to assume that links play a role in ranking the web pages an AI model retrieves when it sends a search query to an internal search engine. So when someone asks Google’s AI Overviews a question, the system queries Google Search and then creates a summary from those search results.

But apparently, that’s not how it works at Google. Google has a separate algorithm that retrieves fewer web documents and does so at a faster rate.

The passage reads:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. Rem. Tr. at 3509:23–3511:4 (Reid). FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. Id. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Ryan Jones shared these insights:

“This is interesting and confirms both what many of us thought and what we were seeing in early tests. What does it mean? It means for grounding Google doesn’t use the same search algorithm. They need it to be faster but they also don’t care about as many signals. They just need text that backs up what they’re saying.

…There’s probably a bunch of spam and quality signals that don’t get computed for fastsearch either. That would explain how/why in early versions we saw some spammy sites and even penalized sites showing up in AI overviews.”

He goes on to share his opinion that links aren’t playing a role here because the grounding uses semantic relevance.
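If the retrieve-then-summarize flow is new to you, here is a toy sketch of the general pattern in Python – not Google’s implementation. The corpus and the naive term-overlap scoring below are placeholders standing in for FastSearch’s RankEmbed-based retrieval:

# Toy illustration of the retrieve-then-summarize ("grounding") pattern.
# This is NOT Google's implementation; the corpus and scoring are stand-ins.

CORPUS = [
    "FastSearch returns abbreviated ranked results for grounding.",
    "RankEmbed is a deep-learning ranking model trained on search logs.",
    "An unrelated page about cooking pasta.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive term overlap and keep only a few.
    A grounding retriever trades ranking depth for speed."""
    terms = set(query.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def grounded_answer(query: str) -> str:
    """Stand-in for the generation step: a real system would hand the
    retrieved snippets to an LLM as context for its answer."""
    evidence = retrieve(query)
    return f"Answer to {query!r}, grounded in: {evidence}"

print(grounded_answer("how does fastsearch grounding work"))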

What Is FastSearch?

Elsewhere the Memorandum shares that FastSearch generates limited search results:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

Now the question is, what’s the RankEmbed model?

The Memorandum explains that RankEmbed is a deep-learning model. In simple terms, a deep-learning model identifies patterns in massive datasets and can, for example, identify semantic meanings and relationships. It does not understand anything in the same way that a human does; it is essentially identifying patterns and correlations.

The Memorandum has a passage that explains:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets. …(Allan)

…Google has developed various “top-level” signals that are inputs to producing the final score for a web page. Id. at 2793:5–2794:9 (Allan) (discussing RDXD-20.018). Among Google’s top-level signals are those measuring a web page’s quality and popularity. Id.; RDX0041 at -001.

Signals developed through deep-learning models, like RankEmbed, also are among Google’s top-level signals.”

User-Side Data

RankEmbed uses “user-side” data. The Memorandum, in a section about the kind of data Google should provide to competitors, describes RankEmbed (which FastSearch is based on) in this manner:

“User-side Data used to train, build, or operate the RankEmbed model(s); “

Elsewhere it shares:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: _____% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

Then:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms. PXR0171 at -086 (“Embedding based retrieval is effective at semantic matching of docs and queries”);

…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…RankEmbed particularly helped Google improve its answers to long-tail queries.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.

…The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.

…RankEmbedBERT needs to be retrained to reflect fresh data…”

A New Perspective On AI Search

Is it true that links do not play a role in selecting web pages for AI Overviews? Google’s FastSearch prioritizes speed. Ryan Jones theorizes that it could mean Google uses multiple indexes, with one specific to FastSearch made up of sites that tend to get visits. That may be a reflection of the RankEmbed part of FastSearch, which is said to be a combination of “click-and-query data” and human rater data.

Regarding human rater data: With billions or trillions of pages in an index, it would be impossible for raters to manually rate more than a tiny fraction. So it follows that the human rater data is used to provide quality-labeled examples for training. Labeled data are examples a model is trained on so that the patterns that distinguish a high-quality page from a low-quality one become more apparent.

Featured Image by Shutterstock/Cookie Studio

8 Generative Engine Optimization (GEO) Strategies For Boosting AI Visibility in 2025 via @sejournal, @samanyougarg

This post was sponsored by Writesonic. The opinions expressed in this article are the sponsor’s own.

AI search now makes the first decision.

When? Before a buyer hits your website.

If you’re not part of the AI answer, you’re not part of the deal. In fact, 89% of B2B buyers use AI platforms like ChatGPT for research.

Picture this:

  • A founder at a 12-person SaaS asks, “best CRM for a 10-person B2B startup.”
  • The AI answer cites:
    a TechRadar roundup,
    an r/SaaS thread,
    a fresh comparison.
    Not you.
  • Your brand is missing.
  • They book demos with two rivals.
  • You never hear about it.

Here’s why: AI search works on intent, not keywords.

It reads content, then grounds answers with sources. It leans on third-party citations, community threads, and trusted publications. It trusts what others say about you more than what you say about yourself.

Most Generative Engine Optimization (GEO) tools stop at the surface. They track mentions, list prompts you missed, and ship dashboards. They do not explain why you are invisible or what to fix. Brands get reports, not steps.

We went hands-on. We analyzed millions of conversations and ran controlled tests. The result is a practical playbook: eight strategies that explain the why, give a quick diagnostic, and end with actions you can ship this week.

Off-Page Authority Builders For AI Search Visibility

1. Find & Fix Your Citation Gaps

Citation gaps are the highest-leverage strategy most brands miss.

Translation: This is an easy win for you.

What Is A Citation Gap?

A citation gap is when AI platforms cite web pages that mention your competitors but not you. These cited pages become the sources AI uses to generate its answers.

Think of it like this:

  • When someone asks ChatGPT about CRMs, it pulls information from specific web pages to craft its response.
  • If those source pages mention your competitors but not you, AI recommends them instead of your brand.

Finding and fixing these gaps means getting your brand mentioned on the exact pages AI already trusts and cites as sources.

Why You Need Citations In Answer Engines

If you’re not cited in an answer engine, you are essentially invisible.

Let’s break this down.

TechRadar publishes “21 Best Collaboration Tools for Remote Teams” mentioning:

  • Asana.
  • Monday.
  • Notion.

When users ask ChatGPT about remote project management, AI cites this TechRadar article.

Your competitors appear in every response. You don’t.

How To Fix Citation Gaps

That TechRadar article gets cited for dozens of queries, including “best remote work tools,” “Monday alternatives,” “startup project management.”

Get mentioned in that article, and you appear in all those AI responses. One placement creates visibility across multiple search variations.

Contact the TechRadar author with genuine value, such as:

  • Exclusive data about remote productivity.
  • Unique use cases they missed.
  • Updated features that change the comparison.

The beauty? It’s completely scalable.

Quick Win:

  1. Identify 50 high-authority articles where competitors are mentioned but you’re not (one way to script that first pass is sketched below).
  2. Get into even 10 of them, and your AI visibility multiplies exponentially.
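Here’s a minimal sketch of that first pass in Python, assuming you’ve already gathered candidate URLs. The brand names, competitor list, and URL below are placeholders, and requests and BeautifulSoup are the assumed libraries:

# A minimal citation-gap check. Brand names and URLs are placeholders.
import requests
from bs4 import BeautifulSoup

YOUR_BRAND = "YourBrand"                      # placeholder
COMPETITORS = ["Asana", "Monday", "Notion"]   # placeholders

candidate_urls = [
    "https://example.com/best-collaboration-tools",  # placeholder URL
]

for url in candidate_urls:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    mentions_rival = any(name.lower() in text for name in COMPETITORS)
    mentions_you = YOUR_BRAND.lower() in text
    if mentions_rival and not mentions_you:
        # A citation gap: a page AI may already cite covers rivals, not you.
        print(f"Citation gap: {url}")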

2. Engage In The Reddit & UGC Discussions That AI References

Social platforms | Image created by Writesonic, August 2025

AI trusts real user conversations over marketing content.

Reddit citations in AI overviews surged from 1.3% to 7.15% in just three months, a 450% increase. User-generated content now makes up 21.74% of all AI citations.

Why You Should Add Your Brand To Reddit & UGC Conversations

Put Reddit, Quora, LinkedIn Pulse, and industry forums together, and you’ve found where AI gets most of its trusted information.

If you show up as “trusted” information, your visibility increases.

How To Inject Your Brand Into AI-Sourced Conversations

Let’s say a Reddit thread titled “Best project management tool for a startup with 10 people?” gets cited whenever users ask about startup tools.

Since AI already cites these threads, if you enter the conversation with a thoughtful contribution, it can get included in future AI answers.

Pro Tip #1: Don’t just promote your brand. Share genuine insights, such as:

  • Hidden costs.
  • Scaling challenges.
  • Migration tips.

Quick Win:

Find and join the discussions AI seems to trust:

  • Reddit threads with 50+ responses.
  • High-upvote Quora answers in your industry.
  • LinkedIn Pulse articles from recognized experts.
  • Active forum discussions with detailed experiences.

Pro Tip #2: Finding which articles get cited and which Reddit threads AI trusts takes forever manually. GEO platforms automate this discovery, showing you exactly which publications to pitch and which discussions to join.

On-Page Optimization For GEO

3. Study Which Topics Get Cited Most, Then Write Them

Something we’re discovering: when AI gives hundreds of citations for a topic, it’s not just citing one amazing article.

Instead, AI pulls from multiple sites covering that same topic.

If you haven’t written about that topic at all, you’re invisible while competitors win.

Consider Topic Clusters To Get Cited

Let’s say you’re performing a content gap analysis for GEO.

You notice these articles all getting 100+ AI citations:

  • “Best Project Management Software for Small Teams”
  • “Top 10 Project Management Tools for Startups”
  • “Project Management Software for Teams Under 20”

Different titles, same intent: small teams need project management software.

When users ask, “PM tool for my startup,” AI might cite 2-3 of these articles together for a comprehensive answer.

Ask “affordable project management,” and AI pulls different ones. The point is that these topics cluster around the same user need.

How To Outperform Competitors In AI-Generated Search Answers

Identify intent clusters for your topic and create one comprehensive piece on your own website, so that your content is what gets cited.

In this example, we’d suggest writing “Best Project Management Software for Small Teams (Under 50 People).”

It should cover startups, SMBs, and budget considerations all in one authoritative guide.

Quick Win:

  • Find 20 high-citation topic clusters you’re missing.
  • Create comprehensive content for each cluster.
  • Study what makes the top versions work, such as structure, depth, and comparison tables.
  • Then make yours better with fresher data and broader coverage.

4. Update Content Regularly To Maintain AI Visibility

AI platforms heavily favor recent content.

Content from the past two to three months dominates AI citations, with freshness being a key ranking factor. If your content appears outdated, AI tends to overlook it in favor of newer alternatives.

Why You Should Keep Your Content Up To Date For GEO Visibility

Let’s say your “Email Marketing Best Practices” from 2023 used to get AI citations.

Now it’s losing to articles with 2025 data. AI sees the date and chooses fresher content every time.

How To Keep Your Content Fresh Enough To Be Cited In AIOs

Weekly refresh for top 10 pages:

  • Add two to three new statistics.
  • Include a recent case study.
  • Update “Last Modified” date prominently.
  • Add one new FAQ.
  • Add “(Updated August 2025)” to the title.

Bi-weekly, on less important pages:

  • Replace outdated examples.
  • Update internal links.
  • Rewrite the weakest section.
  • Add seasonal relevance.

Pro Tip: Track your content’s AI visibility systematically. Certain advanced GEO tools alert you when pages lose citations, so you know exactly what to refresh and when.

5. Create “X vs Y” And “X vs Y vs Z” Comparison Pages

Users constantly ask AI to help them choose between options. AI platforms love comparison content. They even prompt users to compare features and create comparison tables.

Pages that deliver these structured comparisons dominate AI search results.

Common questions flooding AI platforms:

  • “Slack vs Microsoft Teams for remote work”
  • “HubSpot vs Salesforce for small business”
  • “Asana or Monday for creative agencies”

AI can’t answer these without citing detailed comparisons. Generic blog posts don’t work. Promotional content gets ignored.

Create comprehensive comparisons like: “Asana vs Monday vs ClickUp: Project Management for Creative Teams.”

How To Create Comparisons That Have High Visibility On SERPs

Use a content structure that wins:

  • Quick decision matrix upfront.
  • Pricing breakdown by team size.
  • Feature-by-feature comparison table.
  • Integrations.
  • Learning curve and onboarding time.
  • Best for: specific use cases.

Make it genuinely balanced:

  • Asana: “Overwhelming for teams under 5”
  • Monday: “Gets expensive with add-ons”
  • ClickUp: “Steep learning curve initially”

Include your product naturally in the comparison. Be honest about limitations while highlighting genuine advantages.

AI prefers citing fair comparisons over biased reviews. Include real limitations, actual pricing (not just “starting at”), and honest trade-offs. This builds trust that gets you cited repeatedly.

Technical GEO To Do Right Now

6. Fix Robots.txt Blocking AI Crawlers

Most websites accidentally block the very bots they want to attract. It’s like putting a “Do Not Enter” sign on your store while wondering why customers aren’t coming in.

ChatGPT uses three bots:

  • ChatGPT-User: Main bot serving actual queries (your money maker).
  • OAI-SearchBot: Activates when users click search toggle.
  • GPTBot: Collects training data for future models.

Strategic decision: Publications worried about content theft might block GPTBot. Product companies should allow it, however, because you want future AI models trained on your content for long-term visibility.

Essential bots to allow:

  • Claude-Web (Anthropic).
  • PerplexityBot.
  • GoogleOther (Gemini).

Add to robots.txt:

User-agent: ChatGPT-User
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: PerplexityBot
Allow: /

Verify it’s working: Check server logs for these user agents actively crawling your content. No crawl activity means no AI visibility.
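If you want to script that check, here’s a minimal Python sketch that counts requests per AI user agent in a standard access log. The log path is a placeholder, and the match is a simple substring check:

# Count AI-crawler requests per user-agent token. Log path is a placeholder.
from collections import Counter

AI_BOTS = ["ChatGPT-User", "OAI-SearchBot", "GPTBot",
           "Claude-Web", "PerplexityBot", "GoogleOther"]

hits = Counter()
with open("/var/log/nginx/access.log") as log:   # placeholder path
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1

for bot in AI_BOTS:
    # Zero hits for a bot you've allowed suggests it never reaches your site.
    print(f"{bot}: {hits[bot]} requests")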

7. Fix Broken Pages For AI Crawlers

Just like Google Search Console shows Googlebot errors, you need visibility for AI crawlers. But AI bots behave differently and can be aggressive.

Monitor AI bot-specific issues:

  • 404 errors on important pages.
  • 500 server errors during crawls.
  • Timeout issues when bots access content.

If your key product pages error when ChatGPT crawls them, you’ll never appear in AI responses.

Common problems:

  • AI crawlers triggering DDoS protection.
  • CDN security blocking legitimate bots.
  • Rate limiting preventing full crawls.

Fix: Whitelist AI bots in your CDN (Cloudflare, Fastly). Set up server-side tracking to differentiate AI crawlers from regular traffic. No errors = AI can cite you.
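As one way to approximate that tracking, here’s a companion sketch that surfaces AI-bot errors from a combined-format access log. The path and bot tokens are placeholders; adjust the parsing to your log format:

# Surface AI-bot errors from a combined-format access log (placeholders below).
from collections import Counter

AI_BOTS = ("ChatGPT-User", "OAI-SearchBot", "Claude-Web", "PerplexityBot")
errors = Counter()

with open("/var/log/nginx/access.log") as log:   # placeholder path
    for line in log:
        try:
            parts = line.split('"')
            path = parts[1].split()[1]           # from "GET /path HTTP/1.1"
            status = int(parts[2].split()[0])    # status code after request
            user_agent = parts[5]
        except (IndexError, ValueError):
            continue                             # skip malformed lines
        if status >= 400 and any(bot in user_agent for bot in AI_BOTS):
            errors[(status, path)] += 1

# Pages that error for AI crawlers can't be read, so they can't be cited.
for (status, path), count in errors.most_common(10):
    print(f"{status} x{count}: {path}")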

8. Avoid JavaScript For Main Content

Most AI crawlers can’t execute JavaScript. If your content loads dynamically, you’re invisible to AI.

Quick test: Disable JavaScript in your browser. Visit key pages. Can you see the main content, product descriptions, and key information?

Blank page = AI sees nothing.
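To run the same test at scale, here’s a minimal sketch that fetches raw HTML the way a non-JS crawler would and checks for a known content snippet. The URL and marker string are placeholders:

# Check whether key content exists in raw HTML (no JavaScript execution).
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/key-product-page"   # placeholder
MARKER = "Product description text"            # placeholder content snippet

html = requests.get(URL, timeout=10).text
visible_text = BeautifulSoup(html, "html.parser").get_text(" ")

if MARKER in visible_text:
    print("Content is present in raw HTML; AI crawlers can read it.")
else:
    print("Content missing without JS; AI crawlers likely see a blank page.")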

Solutions:

  • Server-side rendering (Next.js, Nuxt.js).
  • Static site generators (Gatsby, Hugo).
  • Progressive enhancement (core content works without JS).

Bottom line: If it needs JavaScript to display, AI can’t read it. Fix this or stay invisible.

Take Action Now

People ask ChatGPT, Claude, and Perplexity for recommendations every day. If you’re missing from those answers, you’re missing deals.

These eight strategies boil down to three moves: get mentioned where AI already looks (high-authority sites and Reddit threads), create content AI wants to cite (comparisons and fresh updates), and fix the technical blocks keeping AI out (robots.txt and JavaScript issues).

You can do all of this manually – track mentions in spreadsheets, find citation gaps by hand, and update content weekly. It works at a small scale, but it consumes time and requires a larger team.

Writesonic provides you with a GEO platform that goes beyond tracking to giving you precise actions to boost visibility – create new content, refresh existing pages, or reach out to sites that mention competitors but not you.

Plus, get real AI search volumes to prioritize high-impact prompts.


Image Credits

Featured Image: Image by Writesonic. Used with permission.

In-Post Image: Image by Writesonic. Used with permission.

What To Expect At NESS 2025: Surviving The AI-First Era via @sejournal, @NewsSEO_

This post was sponsored by NESS. The opinions expressed in this article are the sponsor’s own.

For anyone who isn’t paying attention to news SEO because they feel it isn’t relevant to their niche – think again.

The foundations of SEO are underpinned by publishing content. Therefore, news SEO is relevant to all SEO. We are all publishers online.

John Shehata and Barry Adams are the experts within this vertical and, between them, have experience working with most of the top news publications worldwide.

Together, they founded the News and Editorial SEO Summit (NESS) in 2021, and in the last four years, the SEO industry has seen the most significant and rapid changes since it began 30 years ago.

I spoke to both John and Barry to get their insights into some of the current issues SEOs face and how SEO can survive this AI-first era, and to get a preview of the topics to be discussed at their upcoming fifth NESS event, held on October 21-22, 2025.

You can watch the full interview at the end of this article.

SEO Repackaged For The AI Era

I started out by commenting that, recently at Google Search Central Live in Thailand, Gary Illyes said that there is no difference between GEO, AEO, and SEO. I asked Barry what he thought about this, and whether the introduction of AI Mode is going to continue taking away publisher traffic.

Surprisingly, Barry agreed with Google, saying: “It’s SEO. It’s just SEO. I fully agree with what the Googlers are saying on this front, and it’s not often that I fully agree with Googlers.”

He went on to say, “I have yet to find any LLM optimization strategy that is not also an SEO strategy. It’s just SEO repackaged for the AI era so that agencies can charge more money without actually creating any more added value.”

AI Mode Is A Threat To Publisher Traffic

While AI Overviews have drawn significant attention, Barry identifies AI Mode as a more serious threat to publisher traffic.

Unlike AI Overviews, which still display traditional search results alongside AI-generated summaries, AI Mode creates an immersive conversational experience that encourages users to continue their search journey within Google’s ecosystem.

Barry warns that if AI Mode becomes the default search experience, it could be “insanely damaging for the web because it’s just going to make a lot of traffic evaporate without any chance of recovery.”

He added that “If you can maintain your traffic from search at the moment, you’re already doing better than most.”

Moving Up The Value Chain

At NESS, John will be speaking about how to survive this AI-first era, and I asked him for a preview of how SEOs can survive what is happening right now.

John highlighted a major issue: “Number one, I think SEOs need to move up the value chain. And I have been saying this for a long time, SEOs cannot be only about keywords and rankings. It has to be much bigger than that.”

He then went on to talk about three key areas as solutions: building topical authority, traffic diversification, and direct audience relationships.

“They [news publishers] need to think about revenue diversification as well as going back to some traditional revenue streams, such as events or syndication. They also need to build their own direct relationships with users, either through apps or newsletters. And newsletters never got the attention they deserve in any of the different brands I’m familiar with, but now it’s gaining more traction. It’s extremely important.”

Quality Journalism Is Crucial For Publishers

Despite the AI disruption, both John and Barry stress that technical SEO fundamentals remain important – but only up to a point.

“You have to make sure the foundations are in place,” Barry notes, but he believes the technical can only take you so far. After that, investment in content is critical.

“When those foundations are at the level where there’s not much value in getting further optimization, then the publisher has to do the hard work of producing the content that builds the brand. The foundation can only get you so far. But if you don’t have the foundation, you are building a house on quicksand and you’re not going to be able to get much traction anyway.”

John also noted that “it’s important to double down on technical elements of the site.” He went on to say, “While I think you need to look at your schema, your speed, all of the elements, the plumbing, just to make sure that whatever channel you work with has good access and good understanding of your data.”

Barry concluded by reaffirming the importance of content quality. “The content is really what needs to shine. And if you don’t have that in place, if you don’t have that unique brand voice, that quality journalism, then why are you in business in the first place?”

The AI Agents Question

James Carson and Marie Haynes are both speaking about AI agents at NESS 2025, and when I asked Barry and John about the introduction of AI agents into newsrooms, the conversation was both optimistic and cautious.

John sees significant potential for AI to handle research tasks, document summarization, and basic content creation for standardized reporting like market updates or sports scores.

“A lot of SEO teams are using AI to recommend Google Discover headlines that intrigue curiosity, checking certain SEO elements on the site and so on. So I think more and more we have seen AI integrated not to write the content itself, but to guide the content and optimize the efficiency of the whole process,” John commented.

However, Barry remains skeptical about current AI agent reliability for enterprise environments.

“You cannot give an AI agent your credit card details to start shopping on your behalf, and then it just starts making things up and ends up spending thousands of your dollars on the wrong things … The AI agents are nowhere near that maturity level yet and I’m not entirely sure they will ever be at that maturity level because I do think the current large language model technology has fundamental limitations.”

John countered that “AI agents can save us hundreds of hours, hundreds.” He went on to say, “These three elements together, automation, AI agents, and human supervision together can be a really powerful combination, but not AI agent completely solo. And I agree with Barry, it can lead to disastrous consequences.”

Looking Forward

The AI-first era demands honest acknowledgment of changed realities. Easy search traffic growth is over, but opportunities exist for publishers willing to adapt strategically.

Success requires focusing on unique value propositions, building direct audience relationships, and maintaining technical excellence while accepting that traditional growth metrics may no longer apply.

The future belongs to publishers who understand that survival means focusing on the audiences who value their specific perspective and expertise – and building authentic connections with them.

Watch the full interview below.


If you’re a news publisher or an SEO, you cannot afford to miss the fifth NESS on October 21-22, 2025.

SEJ readers have a special 20% discount on tickets. Just use the code “SEJ2025” at the checkout here.

Headline speakers include Marie Haynes, Mike King, Lily Ray, Kevin Indig, and of course John Shehata and Barry Adams.

Over two days, there are 20 speakers representing top news publishers, including Carly Steven (Daily Mail), Maddie Shepherd (CBS), Christine Liang (The New York Times), and Jessie Willms (The Guardian), among others.

Check out the full schedule here.


Featured Image: Shelley Walsh/Search Engine Journal/ NESS