Google Retiring Core Web Vitals CrUX Dashboard via @sejournal, @martinibuster

Google has announced that the CrUX Dashboard, the Looker Studio-based visualization tool for CrUX data, will be retired at the end of November 2025. The reason given for the deprecation is that it was not designed for “wide-scale” use and that Google has developed more scalable alternatives.

Why The CrUX Dashboard Is Being Retired

The CrUX Dashboard was built in Looker Studio to summarize monthly CrUX data. It gained popularity as Core Web Vitals became the de facto standard for how developers and SEOs measured performance.

Behind the scenes, however, the tool struggled to keep up with demand. According to the official Chrome announcement, it suffered “frequent outages, especially around the second Tuesday of each month when new data was published.”

The Chrome team concluded that while the dashboard showed the value of CrUX data, it was not built on the right technology.

Transition To Better Alternatives

To address these issues, Google launched the CrUX History API, which delivered weekly instead of monthly data, allowing more frequent monitoring of trends. The History API was faster and more scalable, leading to adoption by third-party tools.
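For teams that want to replicate dashboard views programmatically, the History API can be queried directly over HTTPS. Here is a minimal Python sketch using the requests library; the origin, metric choice, and response parsing are illustrative, so verify the fields against the official CrUX API reference.

```python
# Minimal sketch: pull weekly LCP p75 values from the CrUX History API.
# Assumes a CrUX API key is set in the CRUX_API_KEY environment variable.
import os
import requests

API_KEY = os.environ["CRUX_API_KEY"]
ENDPOINT = (
    "https://chromeuxreport.googleapis.com/v1/"
    f"records:queryHistoryRecord?key={API_KEY}"
)

payload = {
    "origin": "https://www.example.com",       # site to inspect (placeholder)
    "metrics": ["largest_contentful_paint"],   # limit the response to LCP
}

resp = requests.post(ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()
record = resp.json().get("record", {})

# Each metric carries a timeseries of p75 values, one entry per
# collection period (weekly in the History API).
lcp = record.get("metrics", {}).get("largest_contentful_paint", {})
p75s = lcp.get("percentilesTimeseries", {}).get("p75s", [])
print("Weekly LCP p75 values:", p75s)
```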

In 2024, Google introduced CrUX Vis, a faster and more scalable visualization tool. Today, in 2025, CrUX Vis has four to five times as many users as the CrUX Dashboard, showing that users are steadily moving to the newer tool.

What the Change Means for Users

Chrome will shut down the CrUX Connector to BigQuery in late November 2025. When this connector is removed, dashboards that depend on it will stop updating. Users who want to keep the old dashboard will need to connect directly to BigQuery with their own credentials. The announcement explains that the CrUX Connector infrastructure is unreliable and requires too much monitoring to maintain, which is why investment has shifted to the History API and CrUX Vis.
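For those taking the direct BigQuery route, a query against the public dataset might look like the sketch below, run under your own Google Cloud credentials. The table and column names shown (materialized.metrics_summary, date, origin, p75_lcp) are assumptions to verify against the dataset's current schema.

```python
# Minimal sketch: query the public CrUX dataset in BigQuery with your own
# credentials. Confirm table/column names in the dataset's schema first.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # authenticates with your own GCP credentials

query = """
    SELECT date, origin, p75_lcp
    FROM `chrome-ux-report.materialized.metrics_summary`
    WHERE origin = @origin
    ORDER BY date DESC
    LIMIT 12
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter(
            "origin", "STRING", "https://www.example.com"  # placeholder origin
        )
    ]
)

# Print the last year of monthly p75 LCP values for the origin.
for row in client.query(query, job_config=job_config).result():
    print(row.date, row.origin, row.p75_lcp)
```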

Some users have asked Google to postpone the shutdown until 2026, but the announcement makes it clear that this is not an option. Although the dashboard and its connector will be retired, the underlying BigQuery dataset will continue to be updated and supported. Google stated that it sees BigQuery as a valuable, longer-term public dataset.

Check out the CrUX Vis tool here.

Read the original announcement:

CrUX Dashboard deprecation

GEO: How To Position Your Agency As An AI Search Authority

This post was sponsored by Visto. The opinions expressed in this article are the sponsor’s own.

Clients keep asking a new question: “Are we visible in AI search?”

This is the reality: Google’s AI Overviews are reducing organic traffic by 30-70% for many businesses.

In fact, we’re seeing that SEO agencies that incorporate GEO (Generative Engine Optimization) tactics into their SEO strategy and offerings are charging $4,000/month for these add-on services.

However, when it comes to GEO, a new and still-evolving branch of SEO, answering the AI visibility question is:

  • Less about grand strategy.
  • More about a quick field check.

But if you skip the check and jump straight to fixes, you risk solving the wrong problem.

Phase 1. Perform An AI Visibility Audit To Confirm If There Is A Visibility Gap

Start with a simple AI Visibility Audit:

  1. Select five to 10 key phrases that align with the business’s goals.
  2. Search those phrases across Google’s AI Overviews, Bing Copilot, Perplexity, and ChatGPT.
  3. Look at the AI answer first, not the classic blue links.
  4. Do you show up? Are you cited? Which competitors are visible and cited? Note this for each phrase.
  5. Record which competitors are cited and where any links point; take screenshots to showcase in any presentations.

Once you identify which phrases you appear for and which you do not, you can begin to build a comprehensive audit, repeating the steps as you would for keyword research or, traditionally, People Also Ask research.
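To keep that audit repeatable from month to month, log each phrase-by-engine observation in a structured file as you go. Below is a hypothetical Python helper for doing so; every field name is illustrative rather than part of any particular tool.

```python
# Hypothetical helper: append AI Visibility Audit observations to a CSV
# so the same checks can be rerun and compared over time.
import csv
from datetime import date

FIELDS = ["checked_on", "phrase", "engine", "brand_visible", "brand_cited",
          "competitors_cited", "screenshot_file"]

def log_observation(path, phrase, engine, visible, cited, competitors, screenshot):
    """Record one phrase-by-engine check in the audit CSV."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:          # write the header on first use
            writer.writeheader()
        writer.writerow({
            "checked_on": date.today().isoformat(),
            "phrase": phrase,
            "engine": engine,
            "brand_visible": visible,
            "brand_cited": cited,
            "competitors_cited": ";".join(competitors),
            "screenshot_file": screenshot,
        })

# Example usage with placeholder values:
log_observation("ai_visibility_audit.csv", "best crm for nonprofits",
                "Google AI Overviews", True, False, ["CompetitorA"], "aio_crm.png")
```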

The Easy Way: Use this AI Visibility audit and bring the snapshot to your next client call. It gets you out of the “we think” zone and into “here’s what we saw today.”

Phase 2. Interpret Your AI Visibility From The Audit Results

Once you have your audit results in hand, it’s time to determine where you stand:

  • Highly visible: Your brand is named inside the answer. Great. Assess what’s working, and expand upon it.
  • Partially visible: Your content fuels the answer, but the brand is missing. That erodes authority over time.
  • Absent: The answer engines are leaning on other sources. That’s your gap, and your opportunity.

Notice how some of this is traditional ranking talk, and other facets are new.

So, it’s time for a new lens here.

Look at GEO as more of a traffic channel, as opposed to a new technique: Do we show up in the answer people actually read?

This is where agencies need to act fast. If you’re not helping clients with GEO now, they’ll find someone who will.

Phase 3. Showcase The Real Problem Behind Falling Organic Traffic

In this step, it’s time to connect the dots for everyone outside of your SEO team.

How will clients or bosses handle a change to your reporting?

What is the best way to convince a stakeholder that they need additional SEO services to stay ahead during the GEO boom?

How To Clarify The AI Addition To SEO For Clients & Stakeholders

This is how to turn a vague “traffic is down” conversation into “here’s where we’re missing in the answer and what we’ll fix.”

Within your audit presentation, the AI Search findings should follow this structure:

  1. Rule out serving issues that can tank crawl or clicks. Do not include these in the report during this part of the conversation.
  2. Split branded from non-branded terms, as AI answers often cluster around certain intents. Display this information broken out.

Pro Tip: Leverage a side-by-side comparison. The left side could include the AI answer with your brand’s status; the right side, a quick look at on-site metrics for those same topics.

Phase 4. Consider The Perfect Mix Of Traditional SEO & GEO

Once your audit is approved, and a contract is in place to expand your SEO offerings to include GEO techniques, it’s time to apply the perfect mix of traditional SEO and GEO to improve visibility in the areas you’ve identified in the audit.

From a high level, there are two constraints that change the game, especially when adding GEO tactics to your SEO offerings:

  • Speed (“time to first token”). AI systems have to answer fast. Crawlers are impatient, so pages that surface the right answer early tend to win the tie.
  • Context window. Models skim and compress. Think skim-friendly, middle-school clarity: straightforward headings, unambiguous entities, and no padding.

That’s why old habits can backfire. You’re optimizing for clarity, entities, and extractability, not density.

How Do I Approach SEO & GEO The Right Way?

The way we think about it is this: if SEO is about ranking for keywords, GEO is about showing up for prompts.

How Does A Prompt Differ From Keywords?

When someone types a prompt, modern AI doesn’t just “look up” one thing. It:

  1. Breaks the prompt into sub-questions.
  2. Runs background searches.
  3. Shortlists a small set of pages worth crawling right now.

From our perspective, that’s the bridge between SEO and GEO: your classic search visibility still matters, but only as a feeder into which sources the AI decides to read.
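For intuition only, here is a toy Python sketch of that fan-out pattern. It is not any vendor’s actual pipeline; the decomposition and search steps are stubs standing in for a model and a search index.

```python
# Toy illustration of the fan-out pattern: prompt -> sub-questions ->
# background searches -> a small shortlist of pages to read.

def fan_out(prompt):
    # Stub decomposition; real systems use a model for this step.
    return [f"{prompt} definition", f"{prompt} comparisons", f"{prompt} reviews"]

def search(query):
    # Stub search: pretend each sub-question returns a ranked list of URLs.
    slug = query.replace(" ", "-")
    return [f"https://example.com/{slug}/{rank}" for rank in range(1, 4)]

def shortlist(prompt, per_query=1):
    """Collect the top result(s) for each sub-question into a reading list."""
    pages = []
    for sub_question in fan_out(prompt):
        pages.extend(search(sub_question)[:per_query])
    return pages

print(shortlist("best crm for nonprofits"))
```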

What To Focus On When Incorporating GEO Into Your SEO Strategies

You will see overlaps here; that’s because there are slight changes to traditional methods that you’ll need to consider when optimizing for answer engines.

What to focus on, from a traditional SEO angle:

  • On-page SEO: answer-first structure, clean headings, scannable evidence.
  • Technical SEO (or GEO for Answer Engines): Fast paths to answers; crawlability that supports quick fetches.
  • Content gaps your competitors are filling in AI answers. We’re consistently surprised by how often the “nearly there” pages win. If the AI crawler already understands a page, one sharp paragraph and a clearer H1 can push it over the top.
  • Link analysis to strengthen credible citations.
  • Competitor analysis of who’s being named in answers (and why).
  • Sentiment analysis to catch how your brand is described when it’s mentioned.

What to focus on, from the GEO perspective:

  • The semantic space AI explores vs. the entity mapping in your content.
  • Technical GEO (or SEO for Answer Engines): Fast paths to answers; crawlability that supports quick fetches.
  • Content gaps your competitors are filling in AI answers.

The Easy Way: Visto can consolidate these checks into a single workflow, allowing you to baseline quickly and track progress without needing a dozen tools.

Phase 5. Implement GEO Tactics Into Your SEO Strategy To Regain & Grow Visibility

Step 1. Provide Answers Upfront

Within traditional SEO, this refers to improving readability.

Your goal here is to give the answer engine what it needs as quickly as a good support team would:

  • Lead your most important pages with the plain-English answer your buyer is after.
  • One or two sentences up top, then the detail and sources.

If the reader needs to scroll to find the point, the crawler will likely give up at that same point.

Step 2. Strengthen Entity Clarity

Next, make the page unambiguous with consistent:

  • Product names.
  • Categories.
  • Specs.
  • Simple schema to help the system map your entity to the right concepts.

Think of this as labeling the shelves in a small shop. If the labels are clear, the model finds what it came for without guessing.
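As a concrete illustration, here is a minimal Python sketch that emits schema.org Product markup as JSON-LD. Every value is a placeholder; adapt the type and properties to your own pages.

```python
# Minimal sketch: generate JSON-LD Product markup with consistent names,
# categories, and specs. All values below are hypothetical.
import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Standing Desk Pro",     # consistent product name
    "category": "Standing Desks",         # consistent category label
    "brand": {"@type": "Brand", "name": "Acme"},
    "additionalProperty": [               # key specs, plainly labeled
        {"@type": "PropertyValue", "name": "Height range", "value": "60-125 cm"},
        {"@type": "PropertyValue", "name": "Max load", "value": "120 kg"},
    ],
}

# Embed the output in the page's <head> inside a
# <script type="application/ld+json"> tag.
print(json.dumps(product_schema, indent=2))
```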

Step 3. Implement Technical GEO

Then handle the technical side of GEO. AI crawlers care about time to the first useful token, so shorten the path to the answer.

Tighten titles and H1s, move key facts above the fold, and keep interstitials from blocking the first read. The AI crawler has a limited context window and reads fast. Help it skim the right lines.

Step 4. Assess Comparison Coverage

If your customers compare options, publish a straightforward comparison that highlights only the differences people ask about.

What we’ve seen is that honest tables and short “who it’s for” notes get cited more than glossy positioning.

Step 5. Manage Links & Sentiment

Finally, reinforce what supports the page. Link credible sources to the version you want cited. Check how your brand is described in the existing answers. If the tone is off, correct the original source you’re referencing.

Then, regularly review your metrics: presence, named mentions, and competitor share. GEO isn’t a set-and-forget channel, so a light monthly review helps prevent drift.

Visto’s platform automates much of this tracking, giving agencies the tools to prove value with measurable, prompt-level insights and easy-to-share reports.

Examples: Learn From Early GEO Adopters Who Are Rebuilding Traffic

“In the first two quarters, we have seen an 88% year-over-year increase in organic traffic and a 42% YoY increase in unique pageviews from organic traffic.”

Agencies using a platform like Visto’s see their clients’ brands referenced more in AI answers after tightening entities and updating a handful of high-value pages.

The agencies succeeding are those positioning themselves as AI search authorities now, not waiting to see how things shake out.

Get Started With Visto

Visto helps agencies measure AI visibility and manage the work.

Built specifically for marketing agencies, the platform shows where your brand appears in AI answers, summarizes citations across engines, and highlights the pages most likely to move the needle.

Visto provides:

  • Direct access to GEO experts who understand agency needs.
  • Consistent product updates aligned with the latest AI search trends.
  • The ability to influence the roadmap with your input.
  • Education and support to confidently lead your clients through the AI shift.
  • Sales enablement tools that are purpose-built for marketing agencies to prospect clients.
  • A focus on actionability and optimization, in addition to visibility and analytics.

Don’t wait for your clients to ask why they’re invisible in AI search. Position your agency as the AI search authority they need right now.

Special Offer: For SEJ readers, sign up for three months of free access and start prospecting and serving clients.


Image Credits

Featured Image: Image by Visto. Used with permission.

Google Ads Rolls Out New Creative & Omnichannel Tools via @sejournal, @MattGSouthern

Google is rolling out creative and omnichannel updates across Ads and YouTube.

The tools are designed to help you keep assets fresh, connect store and online demand, and plan spend across key shopping windows.

What’s New

Creative: Asset Studio, Product Studio, And Imagen 4

A new suite of generative tools is coming to Asset Studio, with asset generation in Performance Max and Demand Gen powered by Imagen 4.

In Product Studio, you’ll be able to swap product scenes at scale, replace backgrounds, turn images or text into short videos, and get proactive campaign concept suggestions.

See an example of a campaign concept suggestion below:

Image Credit: Google

Google says the new tools can speed up testing while keeping brand direction intact.

Omnichannel & YouTube

Demand Gen can now optimize for total sales across online, in-app, and in-store conversions. You can also use local offers to show nearby shoppers in-store promotions.

On YouTube, a Creator partnerships hub is meant to simplify brand-creator collaborations, and the YouTube Masthead is now shoppable so you can feature specific products tied to your goals.

Insights And Budgets: Plan 3–90 Day Bursts

New AI-powered insights in Google Merchant Center aim to surface actionable tips. Google is also expanding campaign total budgets from Demand Gen and YouTube to include Search, Performance Max, and Shopping.

You can set a start date, end date, and a total budget for periods between 3 and 90 days, and Google’s systems will pace spend to match peaks in demand.

Loyalty: Member-Only Offers

Google is introducing loyalty features that let you display member-only pricing and shipping benefits, with retention goals available in loyalty mode for Performance Max or Standard Shopping.

Looking Ahead

If your holiday plan spans multiple bursts, these tools can help you keep creative fresh, capture store demand, and avoid end-of-month pacing surprises.

Start by aligning product feeds and assets, then test omnichannel optimization and short budget windows around your key dates.

Google AI Max For Search Goes Global In Beta via @sejournal, @MattGSouthern

Google’s AI Max for Search campaigns is now available worldwide in beta across Google Ads, Google Ads Editor, Search Ads 360, and the Google Ads API.

AI Max packages Google’s AI features as a one-click suite inside Search campaigns. New built-in experiments allow you to test the impact with minimal setup.

Image Credit: Google

What’s New

One-Click Experiments

AI Max is positioned as a faster path to smarter optimization inside Search campaigns.

New one-click experiments are integrated in the campaign flow, so you can compare performance without rebuilding campaigns.

Availability spans all major surfaces, including the API for teams that automate workflows.

How The Built-In Experiments Work

AI Max experiments are run within the same Search campaign by splitting traffic between a control (with AI Max off) and a trial (with AI Max on).

Since the test doesn’t clone the campaign, you’ll avoid sync errors and can ramp up faster. Once the experiment ends, review the performance and decide whether to apply the change or discard it.

Controls You Can Tweak During A Test

By default, your experiment starts with Search term matching and Asset optimization enabled, but it’s easy to customize these settings.

You can choose to turn off Search term matching at the ad group level or disable Asset optimization at the campaign level if that better suits your goals.

For more control over your landing pages, consider using URL exclusions at the campaign level and URL inclusions at the ad group level.

Brand controls are also available for added flexibility: you can set brand inclusions or exclusions at the campaign level, and specify brand inclusions within ad groups.

The “locations of interest” feature at the ad group level offers more geographic targeting precision.

Reporting Surfaces

Results appear under Experiments with an expanded Experiment summary.

AI Max also adds transparency across reports. These include “AI Max” match-type indicators in Search terms and Keywords reports, plus combined views that show the matched term, headlines, and landing URLs.

Auto-Apply Option

If you want, you can set the experiment to auto-apply when results are favorable. Otherwise, apply manually from the Experiments table or enable AI Max from Campaign settings after the test concludes.

Setup Limits To Know

You can’t create an AI Max experiment via this flow if the campaign:

  • Has legacy features like text customization (old ACA), brand inclusions/exclusions, or ad-group location inclusion already configured
  • Targets the Display Network
  • Uses a Portfolio bid strategy
  • Uses Shared budgets

Coming Soon: Text Guidelines

Google is working on a feature that will provide text guidelines to help AI create brand-safe content that meets your business needs.

This will be available to more advertisers this fall for both AI Max and Performance Max. In the meantime, stick to your usual brand approvals and policy checks.

Getting Started

Google recommends checking out a best-practices guide and Think Week materials if you’re interested in getting started with AI Max.

If you’re already handling Search at scale, the API support simplifies standardizing experiments and comparing results to your existing setup.

Looking Ahead

Expect more controls around creative and safety as text guidelines roll out. Until then, low-lift experiments let you measure AI Max without committing your entire account.

Trust Still Lives In Blue Links via @sejournal, @Kevin_Indig

I’ve been extremely antsy to publish this study. Consider it the AIO Usability study 1.5, with new insights. You’ll also want to stay tuned for our first AI Mode usability study! It’s coming in a few weeks (make sure to subscribe so you don’t miss it).

Since March, everyone’s been asking the same question: “Are AI Overviews killing our conversions?”

Our 2025 usability study gives a clearer answer than the hot takes you’ll see on LinkedIn and X (Twitter).

In May 2025, I published significant findings from the first comprehensive UX study of AI Overviews (AIOs). Today, I’m presenting new insights from that study, based on a cutting-edge RAG system that analyzed more than 90,000 words of transcription.

The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time.

Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.

Net-net: AIO is a preview layer. Blue links still close. Before we dive in, you need to hear these insights from Garrett French, CEO of Xofu, who financed this study:

“What lit me up most from this latest work from Kevin: We have direct insight now into an “anchor pattern” of AIO behavior.

In this usability study, we discovered that users rarely voice distrust of AI Overviews directly – instead they hesitate, refine, or click out.

Therefore, hesitation itself is the loudest signal to us.

We see the same in complex, transition-enabling purchase-committee buying (B2B and B2C): Procurement stalls without lifecycle clarity, engineers stall without specs, IT stalls without validation.

These aren’t complaints. They’re unresolved, unanswered, and even unknown questions that have NEVER shown themselves in KW demand.

As content marketers, we have never held ourselves systematically accountable to answering them.

Customer service logs – as an example of one surface for discovering friction – expose the same hesitations in traceable form through repeated chats, escalations, deployment blocks, etc.

Customer service logs are one surface; AIOs are another.

But the real source of truth is always contextual audience friction.

Answering these “friction-inducing,” unasked, latent questions gives us a way to read those signals and design content that truly moves decisions forward.”

What The Study Actually Found

  • Organic results are the most trusted and most consistently successful destination across tasks.
  • Sponsored results are noticed but actively skipped due to low trust.
  • In-SERP answers quickly resolved roughly 85% of straightforward factual questions.
  • Users often use AIO as a preview or shortcut, then click out to finish or validate (on brand sites, YouTube, coupon portals, and the like).
  • Shopping carousels aid discovery more than closure. Expect reassessment clicks.
  • Trust splits by stakes: Low-stakes search journeys often end in the AIO, while finance or health pushes people to known authorities like PayPal, NIH, or Mayo Clinic.
  • Age and device matter. Younger users, especially on smartphones, accept AIOs faster; older cohorts favor blue links and authority domains.
  • When the AIO is wrong or feels generic, people bail. We logged 12 unique “AIO is misleading/wrong” flags in higher-stakes contexts.

(Interested in diving deeper into the first findings from this study or need a refresher? Read the first full iteration of the UX study of AIOs.)

Why This Matters For The Bottom Line

In my earlier analysis, I argued that top-of-funnel visibility had more downstream impact than our marketing analytics ever credited. I also argued that demand doesn’t just disappear because clicks shrink.

This study’s behavior patterns support that: AIO satisfies quick lookup intent, but purchase intent still routes through external validation and brand trust – aka clicks. Participants in this study shared thoughts aloud, like:

  • “There’s the AI results, but I’d rather go straight to PayPal’s own site.”
  • “Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”

And that preserves downstream conversions (when you show up in the right places and have earned authority).

Image Credit: Kevin Indig

Deeper Insights: Secondary Findings You Need To See

Recently, I worked with Eric Van Buskirk (the research director of the study) and his team over at Clickstream Solutions to do a deeper analysis of the May 2025 findings.

Using an advanced RAG-driven AI system, we analyzed all 91,559 (!) words of the transcripts from recorded user sessions across 275 task instances.

This is important to understand: We were able to find new insights from this study because Eric has built cutting-edge technology.

Our new RAG system analyzes structured fields like SERP Features, AIO satisfaction, or user reactions from transcriptions and annotations. It creates a retrieval layer and uses ChatGPT-5 for semantic search.

The result is faster, more rigorous, and more transparent research. Every claim can be traced to data rows and transcript quotes, patterns are checked across the full dataset, and visual evidence is a query away.

(To sum that all up in plain language: Eric’s custom-built advanced RAG-driven AI system is wildly cool and extremely effective.)

Practical benefits:

  • Auditable insights: Conclusions map back to exact data slices.
  • Speed: Test a hypothesis in minutes instead of re-reading sessions.
  • Scale: Triangulate transcripts, coded fields, and outcomes across all participants.
  • Fit for the AI era: Clean structure and trustworthy signals mirror how retrieval systems pick sources, which aligns with our broader stance on visibility and trust.

Here’s what we found:

  1. The data verified four distinct AIO Intent Patterns.
  2. Key SERP features drove more engagement than others.
  3. Core brands shape trust in AIOs.

About The New RAG System

We rebuilt the analysis on a retrieval-augmented system so answers come from the study data, not model guesswork. The backbone lives on structured fields with full transcripts and annotations, indexed in a lightweight database and paired with bucketed data for cohort filtering and cross-checks.

Core components:

  • Dataset ingestion and cleaning.
  • Retrieval layer based on hybrid keyword + semantic search (see the sketch after this list).
  • Auto-coded sentiment to turn speech into consistent, queryable signals.
  • Validation loop to minimize hallucination.
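For intuition, here is a toy Python sketch of the hybrid keyword + semantic retrieval idea. TF-IDF stands in for the embedding model so the example is self-contained; none of this is the study’s actual code, and in practice embed() would call a real embedding model.

```python
# Toy sketch: blend keyword and "semantic" similarity over transcript
# snippets into a single ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcripts = [
    "I trust Mayo Clinic more than an AI summary.",
    "The AI overview gave me the answer right away.",
    "I clicked the PayPal link because it was the official site.",
]

def embed(texts):
    # Stand-in for a real embedding model; returns dense vectors.
    return TfidfVectorizer().fit_transform(texts).toarray()

def hybrid_search(query, docs, alpha=0.5):
    """Blend keyword and 'semantic' similarity into one ranking."""
    tfidf = TfidfVectorizer().fit(docs + [query])
    kw_scores = cosine_similarity(
        tfidf.transform([query]), tfidf.transform(docs)
    )[0]
    vectors = embed(docs + [query])
    sem_scores = cosine_similarity([vectors[-1]], vectors[:-1])[0]
    blended = alpha * kw_scores + (1 - alpha) * sem_scores
    return sorted(zip(docs, blended), key=lambda pair: -pair[1])

for doc, score in hybrid_search("trust in official brand sites", transcripts):
    print(f"{score:.3f}  {doc}")
```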


Which AIO Intent Patterns Were Verified Through The Data

One of the biggest secondary findings from the AIO usability study is that the AIO Intent Patterns aren’t just “gut feelings” anymore – they’re statistically validated, built from measurable behavior.

Before some of you roll your eyes and declare “here’s yet another newly created SEO/marketing buzzword,” the patterns we discovered in the data weren’t exactly search personas, and they weren’t exactly search intents, either.

Therefore, we’re using the phrase “AIO Intent Pattern” to distinguish these concepts from one another.

Here’s how I define AIO Intent Patterns: AIO Intent Patterns represent statistically validated clusters of user behavior – like dwell, scroll, refinements, and sentiment – that define how people respond to AIOs. They’re recurring, measurable behaviors that describe how people interact with AI Overviews, whether they accept, validate, compare, or reject them.

And, again, these patterns aren’t exactly search intents or queries, but they’re not exactly user profiles either.

Instead, these patterns represent a set of behaviors (that appeared throughout our data) carried out by users to validate AIOs in different and distinct ways. So that’s why we’ve called the individual behavioral patterns “validations” below.

By running a RAG-driven coding pass across 250+ task instances, we were able to quantify four different behavioral patterns of engagement with AIOs:

  1. Efficiency-first validations that reward clean, extractable facts (accepting of AIOs).
  2. Trust-driven validations that convert only with credibility (validate AIOs).
  3. Comparative validations that use AIOs but compare with multiple sources.
  4. Skeptical rejections that automatically distrust AIOs for high-stakes queries.

What matters most here is that these aren’t arbitrary labels.

Statistical tests showed the differences in dwell time, scrolling, and refinements between the four groups were far too large to be random.

To put it plainly: These are real AIO use behavioral segments or AIO use intents you can plan for.
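To make “far too large to be random” concrete: the standard test for this kind of comparison is a one-way ANOVA across the four groups. The sketch below uses made-up dwell times purely for illustration; it does not reproduce the study’s figures.

```python
# Illustrative one-way ANOVA on dwell time across four behavioral groups.
# All numbers are fabricated for the example, not taken from the study.
from scipy import stats

dwell_efficiency = [12, 14, 15, 13, 16]   # seconds, illustrative only
dwell_trust      = [55, 60, 52, 58, 61]
dwell_compare    = [44, 47, 50, 46, 49]
dwell_skeptical  = [5, 8, 6, 7, 4]

f_stat, p_value = stats.f_oneway(dwell_efficiency, dwell_trust,
                                 dwell_compare, dwell_skeptical)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
# A tiny p-value means between-group differences this large are very
# unlikely to be random noise.
```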

Let’s look at each one.

1. Efficiency-First Validations

These are validations where users intend to seek a shortcut. Users dip into AIOs for fast fact lookups, skim for one answer, and move on.

Efficiency-driven validations thrive on content that’s concise, scannable, and fact-rich. Typical queries that are resolved directly in the AIO include:

  • “1 cup in ml”
  • “how to take a screenshot on Mac”
  • “UTC to CET converter”
  • “what is robots.txt”
  • “email regex example”

Below, you can check out two examples of “efficiency-first validation” task actions from the study.

“Okay, so I like the summary at the top. And I would go ahead and follow these instructions and only come back to a search if they didn’t work.”

“I just had to go straight to the AI overview… and I liked that answer. It gave me the information I needed, organized and clear. Found it.”

Our data shows an average dwell time of just 14 seconds for this group overall, with almost no scrolling or refinements.

Users who have an efficiency-first intent for their queries have a neutral to positive sentiment toward AIOs – with no hesitation flags – because AIOs scratch the efficiency-intent itch quickly.

For this behavioral pattern, the AIO often is the final answer – especially on mobile – and if they do click, it’s usually the first clear, extractable source.

👉 Optimization tips for this validation group:

  • Compress key facts into crisp TLDRs, FAQs, and schema so AIO can surface them.
  • Place definitions, checklists, and example blocks near the top of your page.
  • Use simple tables and step lists that can be lifted cleanly.
  • Ensure brand mentions and key facts appear high on the page for visibility.

2. Trust-Driven Validations

These validations are full of caution. Users with trust-driven intents engage with AIOs but rarely stop there.

They’ll skim the overview, hesitate, and then click out to an authority domain to validate what they saw, like in this example below:

The user shares that “…at the top, it gave me a really good description on how to transfer money. But I still clicked the PayPal link because it was directly from the official site. That’s what I went with – I trust that information to be more accurate.”

Typical queries that trigger this validation pattern include:

  • “PayPal buyer protection rules”
  • “Mayo Clinic strep symptoms”
  • “Is creatine safe long term”
  • “Stripe refund timeline”
  • “GDPR consent requirements example”

And our data from the study verifies that users scroll more (2.7x on average), dwell longer (~57s), and often flag uncertainty in trust-driven mode. What they want is authority.

These users have a high rate of hesitation flags in their search experiments. Their sentiment is mixed – often neutral, sometimes anxious or frustrated – and their confidence is only medium to low.

For these searches, the AIO is a starting point, not the destination. They’ll click out to Mayo Clinic, PayPal, Stripe, or other trusted domains to validate.

👉 Optimization tips for this validation group:

  • Reinforce trust scaffolding on your landing pages: expert reviewers, citations, and last-reviewed dates.
  • Mirror official terminology and link to primary sources.
  • Add “What to do next” boxes that align with authority guidance.
  • Build strong E-E-A-T signals since credibility is the conversion lever here.

3. Comparative Validations

Users with this intent actively lean into the AIO for classic comparative queries (think “Ahrefs vs Semrush for content teams”), or to compare informational resources and get clarity on the “best” of something. They expand, scroll, refine, and use interactive features – but they don’t stop there.

Instead, they explore across multiple sources, hopping to YouTube reviews, Reddit threads, and vendor sites before making a decision.

Example queries that reveal AIO comparative validation behavior:

  • “Notion vs Obsidian for teams”
  • “Best mirrorless camera under 1000”
  • “How to change a bike tire”
  • “Standing desk benefits vs risks”
  • “Programmatic SEO examples B2B”
  • “How to install a nest thermostat”

Here’s an example using a “how to” search, where the user is comparing sources for the best way to receive the most accurate information:

“The AI Overview gave me clear step-by-step instructions that matched what I expected. But since it was a physical DIY task, I still preferred to branch out to watch a video for confirmation.”

On average, searchers looking for comparative validations in the AIO dwell for 45+ seconds, scroll 4-5 times, and often open multiple tabs.

Their AIO sentiment is positive, and their confidence is high, but they still want to compare.

If this feels familiar – like classic transactional or commercial search intents – it’s because it is related.

If you’ve been doing SEO for any time, it’s likely you’ve created some of these “versus” or “comparison” pages. You also have likely created “how to” content with step-by-step how-to guidance, like how to install a flatscreen TV on your wall.

Before AIOs, your target users would find themselves there if you ranked well in search.

But now, the AIO frames the landscape first, and the decision comes after weighing pros and cons across information sources to find the best solution.

👉 Optimization tips for this validation group:

  • Publish structured comparison pages with decision tables and use-case breakdowns.
  • Pair each page with short demo videos, social proof, and credible community posts to echo your takeaways.
  • Include “Who it is for” and “Who it isn’t for” sections to reduce ambiguity.
  • Seed content in YouTube and forums that AIOs (and users) can pick up.

4. Skeptical Rejections

Searchers with a make-or-break intent? They’re the outright AIO skeptical rejectors.

When stakes are high – health, finance, or legal … the typical YMYL (Your Money, Your Life) stuff – they don’t trust AIO to get it right.

Users may scan the summary briefly, but they quickly move to authoritative sources like government sites, hospitals, or financial institutions.

Common queries where this rejection pattern shows up:

  • “Metformin dosage for PCOS”
  • “How to file taxes as a freelancer in Germany”
  • “Credit card chargeback rights EU”
  • “Infant fever when to go to ER”
  • “LLC vs GmbH legal liability”

For this search intent, the dwell time in an AIO is short or nonexistent, and their sentiment often skews negative.

They show determination to bypass the AI layer in favor of direct authority validation.

👉 Optimization tips for this validation group:

  • Prioritize citations and mentions from highly trusted domains so AIOs lean on you indirectly.
  • Align your pages with the language and categories used by official sources.
  • Add explicit disclaimers and clear subheadings to strengthen authority signals.
  • For YMYL topics, focus on being cited rather than surfaced as the final answer.

SERP Features That Drove Engagement

Our RAG-driven analysis of the usability data verified that not all SERP features are created equal.

When we cut the data down to only features with meaningful engagement – which our study defined as ≥5 seconds of dwell time across at least 10 instances – only four SERP feature findings stood out.

(I’ll give you a moment to take a few wild guesses regarding the outcomes … and then you’ll see if you’re right.)

Drumroll please. 🥁🥁🥁

(Okay, moment over. Here we go.)

1. Organic Results Are Still The Backbone

Whenever our study participants gave the classic blue links more than a passing glance, they almost always found success.

Transcripts from the study make it explicit: Users trusted official sites, government domains, and familiar authority brands, as one participant’s quote demonstrates:

“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”

What about social or community sites that showed up in the organic blue-link results?

Reddit and YouTube were the social or community platforms found in the SERP that were mentioned most by study participants.

Reddit had 45 unique mentions across the entire study. Overall, seeing a Reddit result in organic results produces a user sentiment that is mostly positive, with some users feeling neutral toward the inclusion of Reddit in search, and very few negative comments about Reddit results.

YouTube had 20 unique mentions across the entire study. The sentiment toward YouTube inclusion in SERP results was overwhelmingly positive (19 out of 20 of those instances had a positive user sentiment). The emotions flagged from the study participants around YouTube results included happy/satisfied or curious/exploring.

There was a very clear theme across the study that appeared when social or community sites popped up in organic results:

  • Reddit was invoked when participants wanted community perspective, usually in comparison tasks. Confidence was high because Reddit validated nuance, but AIO trust was weak (users bypassed AIOs to Reddit instead).
  • YouTube was used as a visual validator, especially in product or technical comparison tasks. Users expressed positive sentiment and high satisfaction, even when explicit trust wasn’t verbalized. They treated YouTube as a natural step after the AIOs/organic SERP results.

2. Sponsored Results Barely Register

People saw them, but rarely acted on them. “I don’t like going to sponsored sites” was a common refrain.

High visibility, but low trust.

3. Shopping Carousels Aid Discovery But Not Closure

Participants clicked into Shopping carousels for product ideas, but often bounced back out to reassess with external sites.

The carousel works as a catalog – not a closer.

4. Featured Snippets Continue To Punch Above Their Weight

For straightforward factual lookups, Snippets had a success rate of roughly 85%.

They were efficient and final for quick, fact-based queries.

⚠️ Important note: Even though Google is replacing Featured Snippets with AIOs, it’s clear that this method of receiving information within the SERP has a high engagement. While the SERP feature may be in the process of being discontinued, the data shows users like engaging with snippets. The takeaway here is that if you were often appearing for featured snippets and you’re now often appearing for AIO citations, keep up the good work to continue earning visibility there, because it still matters.

SERP Features x AIO Intent Patterns

When you layer the intent patterns onto different persona groups, the search behaviors come into sharper focus:

  • Younger users on mobile leaned heavily on AIO and snippets, often stopping there if the stakes were low. → That’s the hallmark of efficiency-first validations (quick fact lookups) and comparative validations (scrolling, refining, and treating AIO as the main lens).
  • Older users consistently bypassed AI elements in favor of organic authority results. → This is classic behavior for trust-driven validations, when users click out to brands like PayPal or the Mayo Clinic, and skeptical rejections, when users distrust AIO altogether for high-stakes tasks.
  • Transactional queries – money, health, booking – nearly always pushed people toward trusted brands, regardless of what AIO or ads surfaced. → This connects directly to trust-driven validations (users who need authority reinforcement to fulfill their search intent) and skeptical rejections (users who reject AIO in YMYL contexts because AIOs don’t meet the intent behind the behavior).

What this shows is that, for SEOs, the priority isn’t about chasing every feature and “winning them all.”

Take this as an example:

“The AI overview didn’t pop up, so I used the search results. These were mostly weird websites, but CNBC looked trustworthy. They had a comparison of different platforms like CardCash and GCX, so I went with CNBC because they’re a trusted source.”

Your job is to match intent (as always):

  • Earn extractable presence in AIOs for quick facts,
  • Reinforce trust scaffolding on authority-driven organic pages, and
  • Treat Shopping and Sponsored slots as visibility and awareness plays rather than conversion levers.

Which Brands Shaped Trust In AIOs

AIOs don’t stand on their own; they borrow credibility from the brands they surface – whether you like it or not.

(Google truly seems to be cannibalizing itself while devouring all of us, too.)

When participants validated or rejected an AI answer, it often hinged on whether a familiar or authoritative brand was mentioned.

Our RAG-coded study data surfaced clear winners:

  • Institutional authorities like PayPal, NIH, and government sites consistently shaped trust, even without clicks.
  • Ecommerce and retail giants (Amazon, Walmart, Groupon) carried positive associations from brand familiarity.
  • Financial and tax prep services (H&R Block, Jackson Hewitt, CPA mentions) were trusted anchors in transactional searches.
  • Car rental brands (Budget, Avis, Dollar, Kayak, Zipcar, Turo) dominated travel-related tasks.
  • Emerging platforms (Raise, CardCash, GameFlip, Kade Pay) gained traction primarily because an AIO surfaced them, not because of prior awareness.

👉 Why it matters: Brand trust is the glue between AIO exposure and user action.

Here’s a quick paraphrase of this user’s exploration: We’re looking for places to sell gift cards for instant payment. Platforms like Raise, Gift Card Granny, or CardCash come up. On CardCash, I tried a $10 7-Eleven card, and the offer was $8.30. So they ‘tax’ you for selling. That’s good to know – but it shows you can sell gift cards for cash, and CardCash is one option.

In this instance, the AIO surfaced CardCash. The user didn’t know about it before this search. They explored it in detail, but trust friction (“they tax you”) shaped whether they’d actually use it.

For SEOs, this means three plays running in tandem:

  1. Win mentions in AIOs by ensuring your content is structured, scannable, and extractable.
  2. Strengthen authority off-site so when users validate (or reject the AIO), they land on your pages with confidence.
  3. Build topical authority in your niche through comprehensive persona-based topic coverage and valuable information gain across your topics. (This can be a powerful entry point or opportunity for teams competing against larger brands.)

What does this all mean for your own tactical optimizations?

But here’s the most crucial thing to take away from this analysis today:

With this information in mind, you can now go to your stakeholders and guide them to look at all your prompts, queries, and topics with fresh eyes.

You need to determine:

  • Which of the target queries/topics are quick answers?
  • Which of the target queries/topics are instances where people need more trust and assurance?
  • When do your ideal users expect to explore more, based on the target queries/topics?

This will help you set expectations accordingly and measure success over time.


Featured Image: Paulo Bobita/Search Engine Journal

Who Owns Web Performance? Building A Framework For Digital Accountability via @sejournal, @billhunt

In my previous article, “Closing the Digital Performance Gap,” I made the case that web effectiveness is a business issue, not a marketing metric. The website is no longer just a reflection of your brand – it is your brand. If it’s not delivering measurable business results, that’s a leadership problem, not a team problem.

But there’s a deeper issue underneath that: Who actually owns web performance?

The truth is, many companies don’t have a good answer. Or they think they do until something breaks. The SEO team doesn’t own the infrastructure. The dev team isn’t briefed on platform changes. The content team isn’t looped in until after a redesign. Visibility drops, conversions dip, and someone asks, “Why isn’t our SEO team performing?”

Because they don’t own the full system, no one does.

If we want to close the digital performance gap, we must address this root problem: lack of accountability.

The Fallacy Of Distributed Ownership

The idea that “everyone owns the website” likely stems from early digital transformation initiatives, where cross-functional collaboration was encouraged to break down departmental silos. The intent was to foster shared responsibility across departments – but the unintended consequence was diffused accountability.

It sounds collaborative, but in practice, it often means no one is fully accountable for performance.

Here’s how it typically breaks down:

  • IT owns infrastructure and hosting.
  • Marketing owns content and campaigns.
  • SEO owns visibility – but not implementation.
  • UX owns experience – but not findability.
  • Legal owns compliance – but limits usability.
  • Product owns the content management system (CMS) – but doesn’t track SEO.

Each group is doing its job, often with excellence. But the result? Disconnected execution. Strategy gets lost in translation, and performance stalls.

Case in point: For a global alcohol brand, a site refresh had legal requirements mandating an age verification gate before users could access the site. That was the extent of the specification. IT built the gate exactly to spec: a page prompting visitors to enter their birthdate via three pull-down menus for Month, Day, and Year, then checking that date against the U.S. legal drinking age. UX and creative delayed launch for weeks while debating the optimal wording, positioning, and color scheme.

Once launched, the website traffic, both direct and organic search, dropped to zero. This was due to several key reasons:

  1. Analytics were not set up to track visits before and after the age gate.
  2. Search engines can’t input a birthdate, so they were blocked.
  3. The age requirement was set to the U.S. standard, rejecting visitors who were younger but of legal age in their own countries.

Because everything was done in silos, no one had considered these critical details.

When we finally got all stakeholders in a room, agreed on the issues, and sorted through them, we redesigned the system:

  • Search engines were recognized and allowed to bypass the age requirement.
  • The age requirement and date format were adapted to the user’s location.
  • UX developed multiple variations and tested abandonment.
  • Analytics captured pre- and post-gate performance.
  • UX used the data to validate new landing page formats.

The result? A compliant, user-friendly, and search-accessible module that could be reused globally. Visibility, conversions, and compliance all increased exponentially. But we lost months and millions in potential traffic simply because no one owned the whole picture.
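For illustration, the redesigned bypass-and-adapt logic might look like the following minimal Flask sketch. This is not the brand’s actual code: the crawler check and geo lookup are stubs, and a production system should verify bots via reverse DNS and use a real geo-IP service.

```python
# Minimal sketch of an age gate that lets verified crawlers through and
# adapts the drinking-age threshold to the visitor's location.
from flask import Flask, request, redirect

app = Flask(__name__)

DRINKING_AGE = {"US": 21, "DE": 16, "GB": 18}  # illustrative subset

def is_verified_crawler(req):
    # Stub: match known bot user agents; verify via reverse DNS in production.
    ua = req.headers.get("User-Agent", "").lower()
    return "googlebot" in ua or "bingbot" in ua

def visitor_country(req):
    # Stub: in production, resolve the client IP with a geo-IP database.
    return req.headers.get("X-Country-Code", "US")

@app.route("/")
def home():
    if is_verified_crawler(request):
        return render_home()            # crawlers bypass the gate entirely
    if request.cookies.get("age_verified") == "1":
        return render_home()            # returning verified visitors
    min_age = DRINKING_AGE.get(visitor_country(request), 18)
    # Humans see a gate whose threshold matches their location.
    return redirect(f"/age-gate?min_age={min_age}")

def render_home():
    return "<h1>Welcome</h1>"
```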

Without centralized accountability, the site was optimized in parts but underperforming as a whole.

The AI Era Raises The Stakes

This kind of siloed ownership might have been manageable in the old “10 blue links” era. But in an AI-first world – where Google and other platforms synthesize content into answers, summarize brands, and bypass traditional click paths – every decision across your digital operation impacts your visibility, trust, and conversion.

Search visibility today depends on structured data, crawlable infrastructure, content relevance, and citation-worthiness. If even one of these is out of alignment, you lose shelf space in the AI-driven SERP. And chances are, the team responsible for the weak link doesn’t even know they’re part of the problem.

Why Most SEO Advice Falls Short

I’ve seen well-meaning advice to “improve your SEO strategy” fall flat – because it assumes the SEO team has control over all the necessary elements. They don’t.

  • You can’t fix crawl issues if you can’t talk to the dev team.
  • You can’t win AI citations if your content team doesn’t structure or enrich their pages.
  • You can’t build authority if your legal or PR teams strip bios and outbound references.

What’s needed isn’t better tactics. It’s organizational clarity.

The Case For Centralized Digital Ownership

To create sustained performance, companies need to designate real ownership over web effectiveness. That doesn’t mean centralizing every task – but it does mean centralizing accountability.

Here are three practical approaches:

1. Establish A Digital Center Of Excellence (CoE)

A CoE provides governance, guidance, and support across business units and regions. It ensures that:

  • Standards are defined and enforced.
  • Platforms are chosen and maintained with shared goals.
  • Learnings are captured and distributed.
  • Key performance indicators (KPIs) are consistent and comparable.

2. Appoint A Digital Effectiveness Officer (DEO)

Think of this like a Commissioning Authority in construction – a role that ensures every component works together to meet the original performance spec. A DEO:

  • Connects the dots between dev, SEO, UX, and content.
  • Tracks impact beyond traffic (revenue, leads, brand trust).
  • Advocates for platform investment and cross-team prioritization.

3. Build Shared KPIs Across Departments

Most teams optimize for what they’re measured on. If the SEO team is judged on rankings but not revenue, and the content team is judged on output but not visibility, you get misaligned efforts. Create chained KPIs that reflect end-to-end performance.

Characteristics Of A Performance-Driven Model

Companies that close the accountability gap tend to share these traits:

  • Unified Taxonomy and Tagging – so content is findable and trackable.
  • Structured Governance – clear roles and escalation paths across teams.
  • Shared Dashboards – everyone sees the same numbers, not vanity metrics.
  • Tech Stack Discipline – fewer, better tools with cross-functional usage.
  • Scenario Planning – AI, zero-click SERPs, and platform volatility are modeled, not ignored.

Final Thought: Performance Requires Ownership

If you’re serious about web effectiveness, you need more than skilled people and good tools. You need a system where someone is truly accountable for how the site performs – across traffic, visibility, UX, conversion, and AI resilience.

This doesn’t mean a top-down mandate. It means orchestrated ownership with clear roles, measurable outcomes, and a strategic anchor.

It’s time to stop asking the SEO team to fix what they don’t control.

It’s time to build a framework where the web is everyone’s responsibility – and someone’s job.

Let’s make web performance a leadership priority, not a guessing game.


Featured Image: SFIO CRACHO/Shutterstock

5 Ways Content Marketers Can Build Consumer Trust Through Responsible Personalization And AI via @sejournal, @rio_seo

In a digital-first era, customer loyalty is no longer a given. It can’t be bought or bribed; it must be earned through intentional action. Yet content marketers can build consumer trust when given the right framework and strategy.

Undoubtedly, technology will continue to evolve, and as it does, so will customer expectations. Content marketing leaders are put in a tough position, where they must navigate a delicate balance between leveraging technology innovations while still ensuring human connection remains at the forefront.

Your customers crave human-centric connection, and new research reveals consumers are rewarding the businesses that prioritize transparency, personalization, and ethical AI usage. The brands that put their customers at the heart of their business and truly understand what motivates them to take action will win.

Recent research from Forsta, surveying more than 4,000 consumers across the U.S. and UK, highlights a rising trend: Customers are increasingly willing to pay more, stay longer, and advocate for brands they trust.

Trust isn’t just a soft metric that’s nice to sporadically review. Instead, it’s becoming one of the most prominent ways to assess business performance and drive long-term value. For content marketing leaders, this marks a shift in the playbook, which we’ll delve into throughout this post.

Using research-backed insights, we’ll examine five strategies to build consumer trust in an increasingly competitive environment to drive growth and forge stronger customer relationships.

How To Build Trust Through Content Marketing

Cost effectiveness is no longer as persuasive as it once was. In fact, according to the aforementioned study, 71% of consumers (U.S. – 71%, UK – 72%) would rather choose a business they trust with their data over one that’s more affordable.

That staggering figure alone highlights a notable shift in what drives purchasing decisions. Slashing prices doesn’t move the needle; trust does.

For content marketing leaders, a significant opportunity is within reach. Consumers are telling us exactly what they want, dispelling any preconceived notions. They want to buy from businesses that respect their privacy, communicate openly, and personalize their experiences in a way that resonates with them individually.

Trust has evolved to become the cornerstone of modern brand-building, and content marketers should adapt and evolve to earn business.

1. Personalize With Purpose

Content marketers understand the importance of personalizing customer experiences. For example, sending a mass email to your audience without proper segmentation or targeting is about as useless as shouting into a void.

Additionally, given the astounding rise and usage of AI, personalization is now easier than ever to achieve. And since personalization remains a top demand, it’s no longer a nice-to-have. It’s a must.

However, consumers aren’t giving away their personal information in exchange for custom-tailored experiences. They’re becoming more attuned to how businesses use their data and, in turn, have become more selective when sharing personal information.

If the value exchange isn’t obvious, transparent, or respectful, consumers may second-guess engaging with your business.

The study asked respondents what mattered most when it came to personalization, and the answer may surprise you: The majority stated efficiency.

The most appreciated personalized experience isn’t targeted ads or dynamic pricing; it goes back to the basics. Consumers want personalization that’s efficient and responsive when they seek help. They want to feel heard and supported without being passed from agent to agent.

This finding flips traditional personalization logic on its head. Instead of focusing solely on selling products or services, content marketing leaders must also examine how personalized support can reduce friction and enhance the customer journey.

Key Takeaway: Shift how you think about personalization. It’s no longer about “attention-grabbing” but rather “value-delivering.”

Use both structured and unstructured data to identify where your greatest opportunities lie, from examining your reviews to your chat logs. Then, write content that addresses those concerns to educate and empower your target audience.

2. Be Transparent About AI Usage

AI is already redefining how businesses operate and how they engage with consumers. From leveraging AI tools to create search engine-optimized content outlines to performing keyword research to ensure content aligns with search intent, AI enables scale and speed humans simply can’t match.

But customers are still wary of what’s AI and what’s not. When they feel deceived, trust erodes, and so too can revenue. The study found that 38% of consumers (U.S. – 38%, UK – 40%) would lose trust in a brand if they discovered AI-generated content or interactions weren’t disclosed.

This doesn’t mean AI usage should be abolished. Instead, it reinforces that transparency is non-negotiable.

Customers want to know when and where AI is being used, and this information shouldn’t be hidden in plain sight. Your AI policies should be front and center, easily located on your landing pages and in your website’s privacy policy.

Key Takeaway: AI isn’t a replacement for human writers, but should rather be viewed as a helpful assistant. Brands must clearly disclose AI usage, offer opt-outs when appropriate, and stay away from using AI to fully draft content.

3. Ensure Every Experience Is A Positive One

Customer loyalty is fragile. Negative experiences are remembered, and businesses may not get a second chance to right their wrongs, as evidenced by the following finding.

More than 60% of consumers (U.S. – 63%, UK – 62%) said they would stop buying from a brand after just one or two negative experiences. This leaves little opportunity for error before customers take their hard-earned money elsewhere.

This begs the question: What types of mistakes are unforgivable? It’s often not the major mistakes that you’d expect, but rather the accumulation of small grievances.

Over half of consumers (U.S. 53%, UK – 51%) said that inconveniences like long checkout lines or slow customer service can do more damage than something you’d expect to be more catastrophic, like sending out an email for a sale that’s no longer active.

The little things add up, and customers are quick to move on even if it happens just once.

Key Takeaway: Marketing and customer experience leaders must build feedback loops to catch and fix small annoyances before they become a bigger issue, like affecting your business’s bottom line.

Both teams should stay aligned to ensure nothing falls through the cracks, such as a faulty form on a gated content’s landing page or a broken call-to-action (CTA) link in an ebook.

4. Focus On Human Connection

Despite the rise of digital tools, the data is clear: Consumers still want and value human interaction. A chatbot may help solve a quick issue, but many customers still want to speak to and engage with an actual human. If that isn’t an option, your business risks creating a trust deficit with potential customers.

Unsurprisingly, over half (58%) of U.S. respondents said they value the ability to talk to a real person when they need support. Customers don’t want to get stuck in a phone tree; they want real support in real time.

This doesn’t mean abandoning digital transformation; it means balancing it with empathy. Human connection is valued throughout all stages of the customer journey, whether a customer is engaging with a social post or responding to a promotional email. Make human connection seamless and simple.

Key Takeaway: Digital tools can be helpful for enabling quick support, but they shouldn’t eliminate the option for human connection, especially when escalation is necessary. Invest in omnichannel experiences that offer the best of both worlds.

5. Ensure Value In Exchange For Data

Consumers are still willing to share their data, but only if they believe they’ll get something worthwhile out of it.

Banks, for example, are largely seen as trustworthy, with 69% of U.S. and 81% of UK consumers agreeing they trust banks to handle their data responsibly.

In contrast, social media platforms and AI tools (like ChatGPT, Gemini, Perplexity, and more) rank lowest when it comes to trust.

For content marketing leaders, this adds a layer of complexity to personalization strategies. We know customers want personalized experiences, but that desire comes with conditions. They expect brands to use their data only for meaningful interactions, not for pure profit or intrusive profiling.

The value exchange must be evident, which means content standards must be set high. Content can no longer be drafted just to meet a quota or to stuff keywords.

In addition to drafting relevant, helpful content that matches search intent, marketers should clearly disclose:

  • What data you collect.
  • Why you collect it.
  • How you protect it.
  • What customers get in exchange.

Key Takeaway: Make data transparency a part of your brand promise. Clearly disclose the benefit consumers will receive in exchange for their personal information. Create content that resonates with your audience, solves their pain points, and offers them clear value.

Framework For Turning Trust Into A Strategic Asset

To truly operationalize trust, marketing leaders must move beyond surface-level gestures and embed it into every layer of their customer journey. Trust must no longer be treated as a compliance issue but rather as a growth strategy.

Brands that build a reputation for responsible data use, transparent AI disclosure, exceptional customer experiences, and genuine human connection will stand out in today’s marketplace.

Key actions for content marketing leaders to take include:

  • Audit CX for friction: Map key points of failure across your digital journey. Understand the types of content that are converting best and what needs reassessment. Continually measure content marketing performance to identify what’s landing well with your audience.
  • Be radically transparent: From AI disclosures to privacy policies, it’s better to overcommunicate to your audience. Share how and when AI is used.
  • Use AI responsibly: AI simply can’t match the expertise, strength, and emotion of human writers. Therefore, it should be used as an aid rather than a crutch when it comes to drafting content.
  • Reframe personalization: Personalization is a must, but not at the cost of frustrating customers. Use personalization strategically, ensuring it serves utility over novelty.
  • Empower cross-functional teams: Every team should have visibility into shared trust key performance indicators (KPIs) so each team understands how they can help grow consumer trust.

The future of marketing isn’t just about accelerating AI, personalization, or even digital transformation. It’s about trust.

Trust is what turns first-time buyers into lifelong advocates. It’s what enables brands to charge a premium, recover from mistakes, and stand out in crowded markets. In an era of high consumer skepticism, trust must be earned at every stage of the customer journey, from first click to final payment.

For content marketing leaders, the takeaway is clear: Trust is your brand’s most valuable asset. Invest in it wisely.

Featured Image: DILA CREATIONS/Shutterstock

Google Uses Infinite 301 Redirect Loops For Missing Documentation via @sejournal, @martinibuster

Google removed outdated structured data documentation, but instead of returning a 404 response, it chose to redirect the old URLs to a changelog that links back to those same URLs, creating an infinite loop between the two pages. Although that is technically not a soft 404, it is an unusual use of a 301 redirect for a missing web page, and not how SEOs typically handle removed pages and 404 server responses. Did Google make a mistake?

Google Removed Structured Data Documentation

Google quietly published a changelog note announcing that it had removed obsolete structured data documentation. The removal was first announced three months ago, in June, and the obsolete pages have now been taken down.

The missing pages are for the following structured data that is no longer supported:

  • Course info
  • Estimated salary
  • Learning video
  • Special announcement
  • Vehicle listing

Those pages are completely missing. Gone, and likely never coming back. The usual procedure in that kind of situation is to return a 404 Page Not Found server response. But that’s not what is happening.

Instead of a 404 response, Google is returning a 301 redirect back to the changelog. What makes this setup somewhat weird is that Google links to the missing web page from the changelog, which then redirects back to the changelog, creating an infinite loop between the two pages.

Screenshot Of Changelog

In the screenshot above, I’ve underlined in red the link to the Course Info structured data.

The words “course info” are a link to this URL:
https://developers.google.com/search/docs/appearance/structured-data/course-info

Which redirects right back to the changelog here:
https://developers.google.com/search/updates#september-2025

Which of course contains the links to the five URLs that no longer exist, essentially causing an infinite loop.
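You can verify behavior like this yourself. Here is a minimal Python sketch, assuming the `requests` library; the URLs come from the example above, and the responses may of course change if Google revises the setup:

```python
import requests
from urllib.parse import urljoin

OLD_URL = ("https://developers.google.com/search/docs/appearance/"
           "structured-data/course-info")

# Step 1: the removed doc should answer with a redirect, not a 404.
resp = requests.get(OLD_URL, allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location"))

# Step 2: fetch the redirect target (the changelog) and check whether
# its HTML still links back to the removed page -- the loop described above.
target = urljoin(OLD_URL, resp.headers.get("Location", ""))
changelog = requests.get(target, timeout=10)
print("Changelog links back to removed page:",
      "structured-data/course-info" in changelog.text)
```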

It’s not a good user experience and it’s not good for crawlers. So the question is, why did Google do that? 

301 redirects are an option for missing pages, so Google is technically correct to use one. However, a 301 is generally used to point “to a more accurate URL,” which usually means a replacement page, one that serves the same or a similar purpose.

Technically, they didn’t create a soft 404. But the way they handled the missing pages creates a loop that sends crawlers back and forth between a missing web page and the changelog. It would have been a better user and crawler experience to instead point the old URLs to the June 2025 blog post that explains why these structured data types are no longer supported, rather than create an infinite loop.
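For contrast, here is a hypothetical sketch of the conventional handling, written in Flask with invented routes and mappings purely for illustration: removed pages answer 410 Gone (a plain 404 also works), and a 301 is reserved for pages that have a genuine replacement.

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Invented examples for illustration only.
REMOVED = {"/docs/course-info", "/docs/special-announcement"}
REPLACED = {"/docs/old-guide": "/docs/new-guide"}

@app.route("/docs/<path:slug>")
def docs(slug):
    path = f"/docs/{slug}"
    if path in REPLACED:
        # 301 only when a genuinely equivalent page exists.
        return redirect(REPLACED[path], code=301)
    if path in REMOVED:
        # 410 Gone signals a permanent, intentional removal to crawlers.
        abort(410)
    return f"Serving {path}"
```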

I don’t think it’s anything most SEOs or publishers would do, so why does Google think it’s a good idea?

Featured Image by Shutterstock/Kues

AI Is Changing Local Search Faster Than You Think [Webinar] via @sejournal, @hethr_campbell

For multi-location brands, local search has always been competitive. But 2025 has introduced a new player: AI.

From AI Overviews to Maps Packs, how consumers discover your stores is evolving, and some brands are already pulling ahead.

Robert Cooney, VP of Client Strategy at DAC, and Kyle Harris, Director of Local Optimization, have spent months analyzing enterprise local search trends. Their findings reveal clear gaps between brands that merely appear and those that consistently win visibility across hundreds of locations.

The insights are striking:

  • Some queries favor Maps Packs, others AI Overviews. Winning in both requires strategy, not luck.
  • Multi-generational search habits are shifting. Brands that align content to real consumer behavior capture more attention.
  • The next wave of “agentic search” is coming, and early preparation is the key to staying relevant.

This webinar is your chance to see these insights in action. Walk away with actionable steps to protect your visibility, optimize local presence, and turn AI-driven search into a growth engine for your stores.

📌 Register now to see how enterprise brands are staying ahead of AI in local search. Can’t make it live? Sign up and we’ll send the recording straight to your inbox.

Help! My therapist is secretly using ChatGPT

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted. 

It’s my favorite AI story as of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found. 

I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?

In all the cases mentioned in the piece, the therapist hadn’t disclosed to their patients in advance how they were using AI. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when and how they’re going to use AI (if they plan to use it at all). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.

In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?

Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.

There is some evidence AI can deliver more standardized, “manualized” therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.

What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?

At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.

OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?

I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things. 

Read the full story from Laurie Clarke

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.