Google I/O 2025: The Background Google Didn’t Tell You via @sejournal, @MordyOberstein

On March 29, 2025, the New York Yankees hit a franchise record nine home runs in one game versus the Milwaukee Brewers.

To accomplish this feat, they used a bat that will probably change baseball forever (or not).

In doing so, they handed Google an opportunity to oversell its AI abilities, hoping that no one would be familiar with baseball, torpedo bats, and Google Search – all at the same time.

I am that person.

The Background Google Didn’t Tell You

That same day, I was sitting at the very desk I am writing this article on, ordering my groceries online and watching Nestor Cortes pitch against his former team, the New York Yankees.

Cortes, a personal favorite of mine (and of all Yankees fans), picked up right where he left off in 2024 … giving up home runs (he gave up a grand slam to the Dodgers in the World Series that I am still hurting from) – dinger (that’s a word for a home run) after dinger.

As I was watching the Yankees crush Cortes (my 7-year-old was on cloud nine), I noticed one of the players’ bats was oddly shaped (that player was Austin Wells, note this for later). I thought it must have been my eyes, but then, player after player, it was the same thing.

The shape of the bat was different. What the Yankees did was custom-load the bulk of the wood of the bat to where the player (per advanced analytics) makes contact most often (so that when they did make contact, it would be harder). Not every player, but a good chunk of the lineup, was using these bats.

You do marketing, not baseball. Why do you care?

Because, as the Yankees hit a franchise record number of home runs in this game, the entire baseball world went bonkers.

Is this bat legal? What is this thing? Who is using it? Since when?

If you are in the baseball world, you know this was an absolutely huge story.

If you’re not familiar with baseball, the term “torpedo bat” sounds entirely obscure, which is what Google was counting on when it used this example at Google I/O 2025 to show how advanced its AI abilities are.

What Did Google Say Exactly?

Rajan Patel, who is the VP of Search Engineering at Google, got on stage at Google I/O and said he was a huge baseball fan.

Screenshot from Google I/O 2025 livestream, May 2025

So what?

Rajan used a query related to baseball to show how Google’s AI Mode was analyzing complex data.

Specifically, he ran the following query (prompt?): “Show the batting average and OBP for this season and last for notable players who currently use a torpedo bat.”

It seems really complex, which is exactly how Rajan packaged this to the audience, saying: “Think about it, there are so many parts to that question.”

It does seem like a really niche sort of topic that you’d have to have very specific knowledge about.

It’s got multiple layers to it. It’s got data and acronyms wrapped up in it. It’s got everything you need to think that this is a seriously complex question that only advanced AI could answer. It’s even asking for two years’ worth of data.

The reality is that you could find this information in three very easy searches, without ever having to click on a website.

Shall we?

Why What Google Said Is Not ‘Advanced’

This whole “torpedo bats” thing seems extremely niche, which, again, from an optics and perception point of view, is exactly what Google wants you to think.

In reality, as I mentioned, this was a hot story in baseball for a good while.

Moreover, the question of who uses these bats was a big deal. People thought, initially, that these players were cheating.

It was a semi-scandal, which means there is a ton of content from big-name websites that specifically lists which players are using this type of bat.

Here you go, it’s right in the featured snippet:

Screenshot from search for [who uses torpedo bats], Google, May 2025

The last player mentioned above, Dansby Swanson, is who Google featured in its talk:

Screenshot from Google I/O 2025 livestream, May 2025

Compare the list Google shows to the one from Yahoo Sports.

Four of the seven players Google lists are right there in the featured snippet (Austin Wells, Jazz Chisholm [Jr.], Anthony Volpe, and Dansby Swanson).

For the record, three out of those four play for the Yankees, and Wells was the first person we saw use the bat type in 2025 (some players experimented using it in 2024).

Not hard information to access. It’s right there – so are the players who are “notable.”

Rajan makes a point of saying Google needs to know who the notable players are to answer this “complex” question. In other words, he portrayed this as being “complex.”

Maybe in 2015, not in 2025.

It’s been stored in the Knowledge Graph for years.

Just Google [best mlb players]:

Screenshot from search for [best players mlb], Google, May 2025

Boop. A carousel of the best MLB players in the league right now (pretty good list, too!).

So, the complex thing that Google’s AI Mode had to do was pull information from a Yahoo Sports page (or wherever) and combine that with information it’s been using in the Knowledge Graph for years?

If that’s hard for Google and complex for AI, then we have problems.

There were, in essence, three parts to Google’s “query” here:

  1. Answering who in the league uses a torpedo bat.
  2. Identifying notable current players.
  3. Pulling the stats (this year’s and last year’s) of these notable players using this new bat.

The data part seems complex. Stats? Current stats? Seems like a lot.

There are two things you need to know:

The first thing is that baseball is famous for its stats. Fans and teams have been tracking stats for 100 years – number of home runs, batting averages, earned run averages, runs batted in, strikeouts, walks, doubles, and triples (we haven’t even gotten into spin rates and launch angles).

That’s correct, baseball teams track how many times the ball spun between when the pitcher threw the ball and when the catcher caught the ball.

Today, the league is dominated by advanced analytics.

Guess who powers it all?

I bet you guessed: Google.

Screenshot from search for [who powers mlb statcast], Google, May 2025

The second thing you need to know is that Google has been collecting stats on specific players in its Knowledge Graph for a good while.

Forget that the stats on specific players can be found on dozens upon dozens of websites; Google itself collects them.

Here’s a search for the league’s best player (no, he does not use a torpedo bat):

Screenshot from search for [aaron judge stats], Google, May 2025

Did you notice the stats? Of course, you did; it’s a tab in the Knowledge Panel.

It’s information that might seem incredibly vast or complex, but it’s literally stored by Google.

What I’m saying is, Google created a “complex” scenario that was nothing more than combining two things it stores in the Knowledge Graph with one thing that is spread all over the web (i.e., the list of players using this type of bat).

Is that really that complex for Google, or was it engineered to look complex for the optics?

What Is The Best Way To Talk About AI Products?

I love the graphs. Taking the data and the information and creating a custom graph with AI?

Love that. That’s amazing. That’s so useful.

Google, you don’t need to oversell it; it’s awesome without you doing that.

Google is not going to listen to me (it will read this article, but it will not listen to my advice). I’m not writing this for Google.

I am writing this for you. If you are a small or a medium-sized marketing team and you’re looking at how Google and other big brands market their AI products as a beacon for your own marketing … don’t.

Don’t feel you have to. Decide on your own what is the best way to talk about AI products.

Is the best way really to overstate the complexity? To try to “package” something as more than it really is?

I get the temptation, but people are not stupid. They will start to see through the smoke and mirrors.

It may take them time. It may take them more time than you might think – but it will happen.

I’ll end with a personal story.

My wife is a nurse. She was recently sent to a seminar where they talked about “what’s happening with all the AI stuff.”

My wife came home and was taken aback by what’s going on out there and how people are using AI, as well as how good the AI was (or wasn’t).

My wife is now a thousand times more skeptical about AI.

What happens if you’re following what these brands are doing and oversell AI when your target audience eventually has the experience my wife did?

Featured Image: Master1305/Shutterstock

Google’s Official Advice On Optimizing For AI Overviews & AI Mode via @sejournal, @MattGSouthern

Google has released new guidelines for website owners who want to excel in AI-powered search.

In a blog post, Search Advocate John Mueller shared tips for ranking in AI Overviews and AI Mode.

This guidance comes as Google moves beyond traditional “blue links” to offer more AI-driven search features.

AI Is Changing Search Behavior

Google noted that users now ask longer questions and follow-up queries through these new interfaces, which creates challenges and opportunities for publishers.

Mueller writes:

“The underpinnings of what Google has long advised carries across to these new experiences. Focus on your visitors and provide them with unique, satisfying content.”

Content Quality Remains Paramount

Google says creating “unique, non-commodity content” is still the foundation for success in all search formats, including AI.

The company recommends focusing on content that meets user needs instead of trying to trick the algorithm.

Google points out that AI search users ask more specific questions and follow-ups. This suggests that thorough, detailed content works especially well in these new search environments.

Technical Requirements and Page Experience

Beyond good content, Google stressed the importance of technical access.

This includes ensuring that:

  • Googlebot isn’t blocked
  • Pages load correctly
  • Content can be indexed

Also focus on user experience factors like mobile-friendly design, fast loading speeds, and clear main content.
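As a minimal sketch of those first two points, a robots.txt that keeps crawlers out of a private area without accidentally blocking Googlebot from the rest of the site (the /internal/ path is purely illustrative):

```text
# robots.txt — allow crawling of the whole site,
# except an illustrative private directory
User-agent: *
Disallow: /internal/

Sitemap: https://www.example.com/sitemap.xml
```

A single stray `Disallow: /` under `User-agent: *` or `User-agent: Googlebot` would keep your content out of both classic and AI search, so this file is worth rechecking after any site migration.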

Mueller writes in the blog post:

“Even the best content can be disappointing to people if they arrive at a page that’s cluttered, difficult to navigate or makes it hard to find the main information they’re seeking. Ensure that you’re providing a good page experience for those who arrive either from classic or AI search results…”

Managing Content Visibility In AI Experiences

Google confirms that current content controls work for AI search.

Publishers can use the following tags to control how their content appears:

  • nosnippet
  • data-nosnippet
  • max-snippet
  • noindex

The more restrictive these settings are, the less visible your content will be in AI results.
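As a sketch of how these controls are applied in practice (the values here are illustrative):

```html
<!-- Page-level controls, placed in the <head> -->
<meta name="robots" content="max-snippet:160">  <!-- cap snippet length at 160 characters -->
<!-- or: <meta name="robots" content="nosnippet"> to block snippets entirely -->
<!-- or: <meta name="robots" content="noindex"> to keep the page out of the index -->

<!-- Inline control, placed in the <body>: exclude a single passage from snippets -->
<p>This paragraph may be quoted in snippets and AI previews.</p>
<p data-nosnippet>This paragraph is excluded from snippets.</p>
```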

Multimedia Content For Multimodal Search

Google’s blog post stressed the growing importance of images and videos as Google’s AI improves.

With multimodal search, you can upload images and ask questions about them. Google recommends adding high-quality visuals to support your text content.

Ecommerce businesses should keep their Merchant Center and Business Profile information updated for better performance in visual searches.

Rethinking Success Metrics

Google shared insights about user behavior with AI search results, suggesting publishers may need to reconsider how they measure success:

“We’ve seen that when people click to a website from search results pages with AI Overviews, these clicks are higher quality, where users are more likely to spend more time on the site.”

Google suggests AI results provide better context about topics, potentially sending more engaged website visitors.

Mueller encourages site owners to look beyond just clicks and focus on more meaningful metrics like sales, signups, and engagement.

What This Means

This guidance shows that while search looks different now, Google’s main ranking principles haven’t changed.

Unique content, technical quality, and user experience still define success, even as AI changes how people use search.

The key takeaways are:

  • Ensure your website meets the technical requirements for Google Search
  • Optimize your images and videos
  • Review your meta directives
  • Rethink how you measure traffic quality from AI search rather than just counting clicks.

Google’s full guidance, along with additional resources on AI features and generative AI content, can be found on the Google Search Central blog.


Featured Image: bluestork/Shutterstock

Google Gemini Upgrades: New AI Capabilities Announced At I/O via @sejournal, @MattGSouthern

Google has announced updates to its Gemini AI platform at Google I/O, introducing features that could transform how search and marketing professionals analyze data and interact with digital tools.

The new capabilities focus on enhanced reasoning, improved interface interactions, and more efficient workflows.

Gemini 2.5 Models Get Performance Upgrades

Google highlights that Gemini 2.5 Pro leads the WebDev Arena leaderboard with an Elo score of 1420. It ranks first in all categories on the LMArena leaderboard, which measures human preferences for AI models.

The model features a one-million-token context window for processing large content inputs, effectively supporting both long text analysis and video understanding.

Meanwhile, Gemini 2.5 Flash has been updated to enhance performance in reasoning, multimodality, code, and long context processing.

Google reports it now utilizes 20-30% fewer tokens than previous versions. The updated Flash model is currently available in the Gemini app and will be generally available for production in Google AI Studio and Vertex AI in early June.

Gemini Live: New Camera and Screen Sharing Capabilities

The expanded Gemini Live feature is a significant addition to the Gemini ecosystem, now available on Android and iOS devices.

Google reports that Gemini Live conversations are, on average, five times longer than text-based interactions.

The updated version includes:

  • Camera and screen sharing capabilities, allowing users to point their phones at objects for real-time visual help.
  • Integration with Google Maps, Calendar, Tasks, and Keep (coming in the next few weeks).
  • The ability to create calendar events directly from conversations.

These features enable marketers to demonstrate products, troubleshoot issues, and plan campaigns through natural conversations with AI assistance.

Deep Think: Enhanced Reasoning for Complex Problems

The experimental “Deep Think” mode for Gemini 2.5 Pro uses research techniques that enable the model to consider multiple solutions before responding.

Google is making Deep Think available to trusted testers through the Gemini API to gather feedback prior to a wider release.

New Developer Tools for Marketing Applications

Several enhancements to the developer experience include:

  • Thought Summaries: Both 2.5 Pro and Flash will now provide structured summaries of their reasoning process in the Gemini API and Vertex AI.
  • Thinking Budgets: This feature is expanding to 2.5 Pro, enabling developers to manage token usage for thinking prior to responses, which impacts costs and performance.
  • MCP Support: The introduction of native support for the Model Context Protocol in the Gemini API allows for integration with open-source tools.

Here are examples of what thought summaries and thinking budgets look like in the Gemini interface:

Image Credit: Google
Image Credit: Google

Gemini in Chrome & New Subscription Plans

Gemini is being integrated into Chrome, rolling out to Google AI subscribers in the U.S. This feature allows users to ask questions about content while browsing websites.

You can see an example of this capability in the image below:

Image Credit: Google

Google also announced two subscription plans: Google AI Pro and Google AI Ultra.

The Ultra plan costs $249.99/month (with 50% off the first three months for new users) and provides access to Google’s advanced models with higher usage limits and early access to experimental AI features.

Looking Ahead

These updates to Gemini signify notable advancements in AI that marketers can integrate into their analytical workflows.

As these features roll out in the coming months, SEO and marketing teams can assess how these tools fit with their current strategies and technical requirements.

The incorporation of AI into Chrome and the upgraded conversational abilities indicate ongoing evolution in how consumers engage with digital content, a trend that search and marketing professionals must monitor closely.

Google Expands AI Features in Search: What You Need to Know via @sejournal, @MattGSouthern

At its annual I/O developer conference, Google announced upgrades to its AI-powered Search tools, making features like AI Mode and AI Overviews available to more people.

These updates, which Search Engine Journal received an advance look at during a preview event, show Google’s commitment to creating interactive search experiences.

Here’s what’s changing and what it means for digital marketers.

AI Overviews: Improved Accuracy, Global Reach

AI Overviews, launched last year, are now available in over 200 countries and more than 40 languages.

Google reports that this feature is transforming how people utilize Search, with a 10% increase in search activity for queries displaying AI Overviews in major markets like the U.S. and India.

At the news preview, Liz Reid, Google’s VP and Head of Search, addressed concerns regarding AI accuracy.

She acknowledged that there have been “edge cases” where AI Overviews provided incorrect or even harmful information. Reid explained that these issues were taken seriously, corrections were made, and continuous AI training has led to improved results over time.

Expect Google to continue enhancing how AI ensures accuracy and reliability.

AI Mode: Now Available to More Users

AI Mode is now rolling out to all users in the U.S. without the need to sign up for Search Labs.

Previously, only testers could try AI Mode. Now, anyone in the U.S. will see a new tab for AI Mode in Search and in the Google app search bar.

How AI Mode Works

AI Mode uses a “query fan-out” system that breaks big questions into smaller parts and runs many searches at once.
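As an illustration only (this is not Google’s actual implementation, and the sub-queries below are made up), the fan-out idea can be sketched as splitting one broad question into narrower sub-queries, running them concurrently, and merging the partial results:

```python
# Hypothetical sketch of a "query fan-out": one broad question becomes
# several narrower sub-queries that run concurrently; the partial results
# are then merged for synthesis into a single answer.
import asyncio


async def mock_search(sub_query: str) -> str:
    """Stand-in for a single search request (illustrative only)."""
    await asyncio.sleep(0)  # simulate network I/O
    return f"results for: {sub_query}"


async def fan_out(question: str, sub_queries: list[str]) -> dict[str, str]:
    """Run all sub-queries concurrently and collect results in order."""
    results = await asyncio.gather(*(mock_search(q) for q in sub_queries))
    return dict(zip(sub_queries, results))


if __name__ == "__main__":
    sub = [
        "which MLB players use a torpedo bat",
        "notable MLB players 2025",
        "batting average and OBP 2024 vs 2025",
    ]
    merged = asyncio.run(fan_out("torpedo bat stats", sub))
    for q, r in merged.items():
        print(q, "->", r)
```

The interesting part is the merge step: the assistant synthesizes one answer from the combined results, which is why a single AI response can draw on many underlying searches.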

Users can also ask follow-up questions and get links to helpful sites within the search results.

Google is using AI Mode and AI Overviews as testing grounds for new features, like the improved Gemini 2.5 AI model. User feedback will help shape what becomes part of the main Search experience.

New Tools: Deep Search, Live Visual Search, and AI-Powered Agents

Deep Search: Research Made Easy

Deep Search in AI Mode helps users dig deeper. It can run hundreds of searches at once and build expert-level, fully-cited reports in minutes.

Image Credit: Google
Image Credit: Google

Live Visual Search With Project Astra

Google is updating how users can search visually. With Search Live, you can use your phone’s camera to talk with Search about what you see.

For example, point your camera at something, ask a question, and get quick answers and links. This feature can boost local searches, visual shopping, and on-the-go learning.

Image Credit: Google

AI Agents: Getting Tasks Done for You

Google is adding agentic features, which are AI tools capable of managing multi-step tasks.

Initially, AI Mode will assist users in purchasing event tickets, reserving restaurant tables, and scheduling appointments. The AI evaluates hundreds of options and completes forms, but users always finalize the purchase.

Partners such as Ticketmaster, StubHub, Resy, and Vagaro are already onboard.

Image Credit: Google
Image Credit: Google

Smarter Shopping: Try On Clothes and Buy With Confidence

AI Mode is enhancing the shopping experience. The new tools use Gemini and Google’s Shopping Graph and include:

  • Personalized Visuals: Product panels show items based on your style and needs.
  • Virtual Try-On: Upload a photo to see how clothing looks on you, powered by Google’s fashion AI.
  • Agentic Checkout: Track prices, get sale alerts, and let Google’s AI buy for you via Google Pay when the price drops.
  • Custom Charts: For sports and finance, AI Mode can build charts and graphs using live data.
Image Credit: Google

Personalization and Privacy Controls

Soon, AI Mode will offer more personalized results by using your past searches and, if you opt in, data from other Google apps like Gmail.

For example, if you’re planning a trip, AI Mode can suggest restaurants or events based on your bookings and interests. Google says you’ll always know when your personal info is used and can manage your privacy settings anytime.

Google’s View: Search Use Cases Are Growing

CEO Sundar Pichai addressed how AI is reshaping search during the preview event.

He described the current transformation as “far from a zero sum moment,” noting that the use cases for Search are “dramatically expanding.”

Pichai highlighted increasing user excitement and conveyed optimism, stating that “all of this will keep getting better” as AI capabilities mature.

Looking Ahead

Google’s latest announcements signal a continued push toward AI as the core of the search experience.

With AI Mode rolling out in the U.S. and global expansion of AI Overviews, marketers should proactively adapt their strategies to meet the evolving expectations of both users and Google’s algorithms.

From Search To Discovery: Why SEO Must Evolve Beyond The SERP via @sejournal, @alexmoss

The search landscape is undergoing its biggest shift in a generation.

If you’ve been in SEO long enough to remember the glory days of the all-organic search engine results pages (SERP), you’ll know how much of this real estate has been gradually taken over by paid ads, other first-party products, and rich snippets.

Now, the most aggressive transition of all: AI Overviews (as well as search-based large language model platforms).

At BrightonSEO last month, I explored how this evolution is forcing us to rethink what SEO means and why discoverability, not just ranking, is the new north star.

The “Dawn” Of The Zero-Click Isn’t Just Over – It’s Now Assumed

We’ve been reading about the rise of zero-click searches for some time now, but this “takeover” has been much more noticeable over the past 12 months.

I recently searched [how to teach my child to tell the time], and after scrolling through a parade of paid product ads, Google-owned assets, and the AI Overview summaries, I scrolled a good three pages down the SERP.

Google and other search and discovery platforms want to keep users in their ecosystems. For SEO pros, this means traditional metrics such as click-through rate (CTR) are becoming less valuable by the day.

From Answer Engines To Assistant Engines

LLMs have changed not just how a result is displayed to the user, but also turned the traditional browser-born search flow into a multi-step flow that the native SERP simply cannot support in the same way.

The research process is collapsing into a single, seamless exchange.

Traditional flow vs. multi-step flow. Image used with permission from Alain Schlesser, May 2025

But as technology accelerates, our own curiosity and research skills are at risk of declining, or disappearing completely.

Assistant engines and wider LLMs are the new gatekeepers between our content and the person discovering that content – our potential “new audience.”

They parse, consume, understand, and then synthesize content, and that synthesis is the deciding factor in what they mention to whoever (or whatever) they interact with.

Structured data is still crucial, as context, transparency, and sentiment matter more than ever.

Personal LLM agent flow diagram by Alain Schlesser, used with permission, May 2025

Challenges Are Different, But Also The Same

As SEOs, our challenges with this new behavior affect the way we do – and report on – our jobs.

In reality, many are just old headaches in shiny new wrappers:

  • Attribution is a mess: With AI Overviews and LLMs synthesizing content, it’s harder than ever to see where your traffic comes from – or if you’re getting any at all. Some tools out there do monitor this, but it’s early days and no standard has emerged. Even Google has said it has no plans to add AI Overview insights to Search Console.
  • Traffic is fragmenting (again): We saw this with social media platforms at the beginning, where discovery happened outside the organic SERPs. Discovery is now happening everywhere, all at once. With attribution also harder to ascertain, this is a bigger challenge today.
  • Budgets are under scrutiny from fear, uncertainty, and doubt (FUD): The native SERP is changing too much, so some may assume there’s less (or no) value in doing SEO much anymore (untrue!).

The Shift Of Success Metrics

Our current success metrics are losing their value. The days of vanity-led metrics are coming to an end.

Similar to how our challenges are the same but different, this also applies to how we redefine success metrics:

| Old Hat | New Hat |
| --- | --- |
| Content | Context + sentiment |
| Keywords | Intent |
| Brand | Brand + sentiment |
| Rankings | Mentions |
| Links from external sources | Citations across various channels |
| SERP monopoly | Share of voice |
| E-E-A-T | Still E-E-A-T |
| Structured data | Entities, knowledge graph & vector embeds |
| Answering | Assisting |

What Can You Do About It?

Information can be aggregated, but personality can’t. This is why it’s still our responsibility to help “assist the assistant” to consider and include you as part of that aggregated information and synthesized answer.

  • Stick to the fundamentals: Never neglect SEO 101.
  • Third-party perspective is increasingly important, so ensure this is maintained and managed well to ensure positive brand sentiment.
  • Embrace structured data: Even if some say it’s becoming less crucial for LLMs to understand entities, structured data is being used right now inside major LLMs to output structured data within responses, giving them an established and standardised way to understand your content.
  • Educate stakeholders: Shift the conversation from rankings and clicks to discoverability and brand presence. The branded, unlinked mention suddenly has more value than “acquiring X followed, non-branded anchor text links per calendar month.”
  • Experiment with your content: Try new ways to produce and market your content beyond the traditional written word. Here, video is useful not only for humans but also for LLMs, which are now “watching” and understanding videos to inform their responses.
  • Create helpful, unique content: To add to the above, don’t produce for the sake of production.
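To make the structured-data point concrete, here is a minimal JSON-LD sketch (the organization name and URLs are placeholders) that would sit inside a `<script type="application/ld+json">` tag and give both search engines and LLMs an unambiguous entity to anchor on:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Agency",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example-agency",
    "https://x.com/exampleagency"
  ]
}
```

The `sameAs` links are what tie your entity to its profiles elsewhere on the web, which supports the third-party-perspective point above.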

LLMs.txt: The Potential To Be The New Standard

Keep an eye on emerging standards proposals, such as llms.txt, which is one way some are adapting and contributing to how LLMs ingest our content beyond our traditional approaches offered with robots.txt and XML sitemaps.

While some are skeptical about this standard, I believe it is worth implementing now, even if its true benefits only become apparent in the future.

There is (virtually) non-existent risk in implementing something that doesn’t take too much time or resources to produce, so long as you’re doing so with a white hat approach.
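For reference, the llms.txt proposal is a markdown file served from the site root; a minimal sketch (all paths and descriptions are placeholders):

```markdown
# Example Agency

> One-line summary of what the site is about.

## Docs

- [Services](https://www.example.com/services.md): What we offer
- [Blog](https://www.example.com/blog.md): Latest articles

## Optional

- [About](https://www.example.com/about.md): Company background
```

Because it’s plain markdown at a fixed path, producing it takes very little time or resources, which is exactly the risk/reward calculation described above.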

Conclusion: Embrace Discoverability And New Metrics

SEO isn’t dead. It’s expanding, but at a rate we haven’t experienced before.

Discoverability is the new go-to success metric, but it’s not without flaws, especially as the way we search continues to change.

This is no longer about “ranking well.” This is now about being understood, surfaced, trusted, and discovered across every platform and assistant that matters.

Embrace and adapt to the changes, as it’s going to continue for some time.

Featured Image: PeopleImages.com – Yuri A/Shutterstock

Does Google’s AI Overviews Violate Its Own Spam Policies? via @sejournal, @martinibuster

Search marketers assert that Google’s new long-form AI Overviews answers have become the very thing Google’s documentation advises publishers against: scraped content lacking originality or added value, at the expense of content creators who are seeing declining traffic.

Why put the effort into writing great content if it’s going to be rewritten into a complete answer that removes the incentive to click the cited source?

Rewriting Content And Plagiarism

Google previously showed Featured Snippets, which were excerpts from published content that users could click on to read the rest of the article. Google’s AI Overviews (AIO) expands on that by presenting entire articles that answer a user’s questions and sometimes anticipates follow-up questions and provides answers to those, too.

And it’s not an AI providing answers. It’s an AI repurposing published content. That action is called plagiarism when a student does the same thing by repurposing an existing essay without adding unique insight or analysis.

The thing about AI is that it is incapable of unique insight or analysis, so there is zero value-add in Google’s AIO, which in an academic setting would be called plagiarism.

Example Of Rewritten Content

Lily Ray recently published an article on LinkedIn drawing attention to a spam problem in Google’s AIO. Her article explains how SEOs discovered how to inject answers into AIO, taking advantage of the lack of fact checking.

Lily subsequently checked on Google, presumably to see if her article was ranking and discovered that Google had rewritten her entire article and was providing an answer that was almost as long as her original.

She tweeted:

“It re-wrote everything I wrote in a post that’s basically as long as my original post “

Did Google Rewrite Entire Article?

One technique search engines and LLMs may use to analyze content is determining which questions the content answers. The content can then be annotated according to the answers it provides, making it easier to match a query to a web page.

I used ChatGPT to analyze Lily’s content and AIO’s answer. The number of questions answered by the two documents was almost identical: Lily’s article answered 13 questions, while AIO answered 12.

Both articles answered five similar questions:

  • Spam Problem In AI Overviews
    AIO: “Is there a spam problem affecting Google AI Overviews?”
    Lily Ray: What types of problems have been observed in Google’s AI Overviews?
  • Manipulation And Exploitation of AI Overviews
    AIO: How are spammers manipulating AI Overviews to promote low-quality content?
    Lily Ray: What new forms of SEO spam have emerged in response to AI Overviews?
  • Accuracy And Hallucination Concerns
    AIO: Can AI Overviews generate inaccurate or contradictory information?
    Lily Ray: Does Google currently fact-check or validate the sources used in AI Overviews?
  • Concern About AIO In The SEO Community
    AIO: What concerns do SEO professionals have about the impact of AI Overviews?
    Lily Ray: Why is the ability to manipulate AI Overviews so concerning?
  • Deviation From Principles of E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness)
    AIO: What kind of content is Google prioritizing in response to these issues?
    Lily Ray: How does the quality of information in AI Overviews compare to Google’s traditional emphasis on E-E-A-T and trustworthy content?

Plagiarizing More Than One Document

Google’s AIO system is designed to answer follow-up and related questions, “synthesizing” answers from more than one original source, and that’s the case with this specific answer.

Whereas Lily’s content argues that Google isn’t doing enough, AIO rewrote the content from another document to say that Google is taking action to prevent spam. Google’s AIO differs from Lily’s original by answering five additional questions with answers that are derived from another web page.

This gives the appearance that Google’s AIO answer for this specific query is “synthesizing” or “plagiarizing” from two documents to answer Lily Ray’s search query, “spam in ai overview google.”

Takeaways

  • Google’s AI Overviews is repurposing web content to create long-form content that lacks originality or added value.
  • Google’s AIO answers mirror the content they summarize, copying the structure and ideas to answer identical questions inherent in the articles.
  • Google’s AIO arguably deviates from Google’s own quality standards, using rewritten content in a manner that mirrors Google’s own definitions of spam.
  • Google’s AIO features apparent plagiarism of multiple sources.

The quality and trustworthiness of AIO responses may not reach the quality levels set by Google’s principles of Experience, Expertise, Authoritativeness, and Trustworthiness, because AI lacks experience and there is apparently no mechanism for fact-checking.

The fact that Google’s AIO system provides essay-length answers arguably removes any incentive for users to click through to the original source and may help explain why many in the search and publisher communities are seeing less traffic. The perception of AIO traffic is so bad that one search marketer quipped on X that ranking #1 on Google is the new place to hide a body, because nobody would ever find it there.

Google could be said to plagiarize content because AIO answers are rewrites of published articles that lack unique analysis or added value, placing AIO firmly within most people’s definition of a scraper spammer.

Featured Image by Shutterstock/Luis Molinero

Create Your Own ChatGPT Agent For On-Page SEO Audits via @sejournal, @makhyan

ChatGPT is more than just a prompting and response platform. You can send prompts to ask for help with SEO, but it becomes more powerful the moment that you make your own agent.

I conduct many SEO audits – it’s a necessity for an enterprise site – so I was looking for a way to streamline some of these processes.

How did I do it? By creating a ChatGPT agent that I’m going to share with you so that you can customize and change it to meet your needs.

I’ll keep things as “untechnical” as possible, but just follow the instructions, and everything should work.

I’m going to explain the following steps:

  1. Configuration of your own ChatGPT.
  2. Creating your own Cloudflare code to fetch a page’s HTML data.
  3. Putting your SEO audit agents to work.

At the end, you’ll have a bot that provides you with information, such as:

Custom ChatGPT for SEO (Image from author, May 2025)

You’ll also receive a list of actionable steps to take to improve your SEO based on the agent’s findings.

Creating A Cloudflare Pages Worker For Your Agent

Cloudflare Pages workers help your agent gather information from the website you’re trying to parse and assess its current state of SEO.

You can use a free account to get started, and you can register by doing the following:

  1. Going to http://pages.dev/
  2. Creating an account

I used Google to sign up because it’s easier, but choose the method you’re most comfortable with. You’ll end up on a screen that looks something like this:

Cloudflare Dashboard (Screenshot from Cloudflare, May 2025)

Navigate to Add > Workers.

Add a Cloudflare Worker (Screenshot from Cloudflare, May 2025)

You can then select a template, import a repository, or start with Hello World! I chose the Hello World option, as it’s the easiest one to use.

Selecting Cloudflare Worker (Screenshot from Cloudflare, May 2025)

Go through the next screen and hit “Deploy.” You’ll end up on a screen that says, “Success! Your project is deployed to Region: Earth.”

Don’t click off this page.

Instead, click on “Edit code,” remove all of the existing code, and enter the following code into the editor:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { searchParams } = new URL(request.url);
  const targetUrl = searchParams.get('url');
  const userAgentName = searchParams.get('user-agent');

  if (!targetUrl) {
    return new Response(
      JSON.stringify({ error: "Missing 'url' parameter" }),
      { status: 400, headers: { 'Content-Type': 'application/json' } }
    );
  }

  const userAgents = {
    googlebot: 'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.6167.184 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
    samsung5g: 'Mozilla/5.0 (Linux; Android 13; SM-S901B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Mobile Safari/537.36',
    iphone13pmax: 'Mozilla/5.0 (iPhone14,3; U; CPU iPhone OS 15_0 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/19A346 Safari/602.1',
    msedge: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246',
    safari: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
    bingbot: 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/',
    chrome: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36',
  };

  const userAgent = userAgents[userAgentName] || userAgents.chrome;

  const headers = {
    'User-Agent': userAgent,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Encoding': 'gzip',
    'Cache-Control': 'no-cache',
    'Pragma': 'no-cache',
  };


  try {
    let redirectChain = [];
    let currentUrl = targetUrl;
    let finalResponse;

    // Follow redirects
    while (true) {
      const response = await fetch(currentUrl, { headers, redirect: 'manual' });

      // Add the current URL and status to the redirect chain only if it's not already added
      if (!redirectChain.length || redirectChain[redirectChain.length - 1].url !== currentUrl) {
        redirectChain.push({ url: currentUrl, status: response.status });
      }

      // Check if the response is a redirect
      if (response.status >= 300 && response.status < 400 && response.headers.get('location')) {
        const redirectUrl = new URL(response.headers.get('location'), currentUrl).href;
        currentUrl = redirectUrl; // Follow the redirect
      } else {
        // No more redirects; capture the final response
        finalResponse = response;
        break;
      }
    }

    if (!finalResponse.ok) {
      throw new Error(`Request to ${targetUrl} failed with status code: ${finalResponse.status}`);
    }

    const html = await finalResponse.text();

    // Robots.txt
    const domain = new URL(targetUrl).origin;
    const robotsTxtResponse = await fetch(`${domain}/robots.txt`, { headers });
    const robotsTxt = robotsTxtResponse.ok ? await robotsTxtResponse.text() : 'robots.txt not found';
    const sitemapMatches = robotsTxt.match(/Sitemap:\s*(https?:\/\/[^\s]+)/gi) || [];
    const sitemaps = sitemapMatches.map(sitemap => sitemap.replace('Sitemap: ', '').trim());

    // Metadata
    const titleMatch = html.match(/<title[^>]*>\s*(.*?)\s*<\/title>/i);
    const title = titleMatch ? titleMatch[1] : 'No Title Found';

    const metaDescriptionMatch = html.match(/<meta name="description" content="(.*?)"/i);
    const metaDescription = metaDescriptionMatch ? metaDescriptionMatch[1] : 'No Meta Description Found';

    const canonicalMatch = html.match(/<link rel="canonical" href="(.*?)"/i);
    const canonical = canonicalMatch ? canonicalMatch[1] : 'No Canonical Tag Found';

    // Open Graph and Twitter Info
    const ogTags = {
      ogTitle: (html.match(/<meta property="og:title" content="(.*?)"/i) || [])[1] || 'No Open Graph Title',
      ogDescription: (html.match(/<meta property="og:description" content="(.*?)"/i) || [])[1] || 'No Open Graph Description',
      ogImage: (html.match(/<meta property="og:image" content="(.*?)"/i) || [])[1] || 'No Open Graph Image',
    };

    const twitterTags = {
      twitterTitle: (html.match(/<meta (name|property)="twitter:title" content="(.*?)"/i) || [])[2] || 'No Twitter Title',
      twitterDescription: (html.match(/<meta (name|property)="twitter:description" content="(.*?)"/i) || [])[2] || 'No Twitter Description',
      twitterImage: (html.match(/<meta (name|property)="twitter:image" content="(.*?)"/i) || [])[2] || 'No Twitter Image',
      twitterCard: (html.match(/<meta (name|property)="twitter:card" content="(.*?)"/i) || [])[2] || 'No Twitter Card Type',
      twitterCreator: (html.match(/<meta (name|property)="twitter:creator" content="(.*?)"/i) || [])[2] || 'No Twitter Creator',
      twitterSite: (html.match(/<meta (name|property)="twitter:site" content="(.*?)"/i) || [])[2] || 'No Twitter Site',
      twitterLabel1: (html.match(/<meta (name|property)="twitter:label1" content="(.*?)"/i) || [])[2] || 'No Twitter Label 1',
      twitterData1: (html.match(/<meta (name|property)="twitter:data1" content="(.*?)"/i) || [])[2] || 'No Twitter Data 1',
      twitterLabel2: (html.match(/<meta (name|property)="twitter:label2" content="(.*?)"/i) || [])[2] || 'No Twitter Label 2',
      twitterData2: (html.match(/<meta (name|property)="twitter:data2" content="(.*?)"/i) || [])[2] || 'No Twitter Data 2',
      twitterAccountId: (html.match(/<meta (name|property)="twitter:account_id" content="(.*?)"/i) || [])[2] || 'No Twitter Account ID',
    };

    // Headings
    const headings = {
      h1: [...html.matchAll(/<h1[^>]*>(.*?)<\/h1>/gis)].map(match => match[1]),
      h2: [...html.matchAll(/<h2[^>]*>(.*?)<\/h2>/gis)].map(match => match[1]),
      h3: [...html.matchAll(/<h3[^>]*>(.*?)<\/h3>/gis)].map(match => match[1]),
    };

    // Images
    const imageMatches = [...html.matchAll(/<img[^>]*src="(.*?)"[^>]*>/gi)];
    const images = imageMatches.map(img => img[1]);
    const imagesWithoutAlt = imageMatches.filter(img => !/alt=".*?"/i.test(img[0])).length;

    // Links
    const linkMatches = [...html.matchAll(/<a[^>]*href="(.*?)"[^>]*>/gi)];
    const links = {
      internal: linkMatches.filter(link => link[1].startsWith(domain)).map(link => link[1]),
      external: linkMatches.filter(link => !link[1].startsWith(domain) && link[1].startsWith('http')).map(link => link[1]),
    };

    // Schemas (JSON-LD)
    const schemaJSONLDMatches = [...html.matchAll(/<script type="application\/ld\+json"[^>]*>(.*?)<\/script>/gis)];
    const schemas = schemaJSONLDMatches.map(match => match[1]);

    // Return everything as a JSON report
    return new Response(
      JSON.stringify(
        {
          redirectChain,
          robotsTxt,
          sitemaps,
          title,
          metaDescription,
          canonical,
          ogTags,
          twitterTags,
          headings,
          images,
          imagesWithoutAlt,
          links,
          schemas,
        },
        null,
        2
      ),
      { headers: { 'Content-Type': 'application/json' } }
    );
  } catch (err) {
    return new Response(
      JSON.stringify({ error: err.message }),
      { status: 500, headers: { 'Content-Type': 'application/json' } }
    );
  }
}
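Once the worker is deployed, you call it by appending the `url` and `user-agent` query parameters to your worker’s address. A small helper like the following builds that request URL; the `seo-audit.example.workers.dev` subdomain is a placeholder for whatever name Cloudflare assigns your own deployment.

```javascript
// Hypothetical worker address; substitute your own *.workers.dev subdomain.
const WORKER_BASE = 'https://seo-audit.example.workers.dev/';

function buildAuditUrl(targetUrl, userAgentName = 'chrome') {
  const url = new URL(WORKER_BASE);
  url.searchParams.set('url', targetUrl);            // page to audit
  url.searchParams.set('user-agent', userAgentName); // key from the userAgents map
  return url.toString();
}

console.log(buildAuditUrl('https://example.com/', 'googlebot'));
// In the browser or Node 18+, fetch the JSON report like so:
// fetch(buildAuditUrl('https://example.com/')).then(r => r.json()).then(console.log);
```

The `user-agent` value must be one of the keys in the worker’s `userAgents` map (`googlebot`, `bingbot`, `chrome`, etc.); anything else falls back to the Chrome string.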
The First-Ever UX Study Of Google’s AI Overviews: The Data We’ve All Been Waiting For via @sejournal, @Kevin_Indig

One thing I need you to understand about the groundbreaking data I’m about to show you is that no one has ever done this kind of analysis before.

Ever.

To our knowledge, no other independent usability study has explored a major web platform at this scale.

AI changes everything, and search is at the forefront.

Together with Eric van Buskirk and his team, I conducted a behavioral study that provides us with unique and mission-critical insights into how people use Google, especially AI Overviews (AIOs).

This data allows us all to better understand how people actually use the new feature and, therefore, better optimize for this new world of search.

We captured screen recordings + think-aloud sessions on 70 people (≈ 400 AIO encounters) to see what really happens when Google shows an AIO.

We tracked their scrolls, hovers, dwells, comments, and even their emotions!

The effort to gather and evaluate this data was high. It required:

  • A solid five-figure USD investment.
  • A team of six people.
  • Combing through 13,500 words of annotations.
  • Sifting through 29 hours of recordings.
  • So many hours we lost count.

I want to call out that the study was directed by Eric Van Buskirk.

We designed the questions, focus points, and the method together, but Eric hired collaborators, ran the study, and delivered the results. Once the study was finished, we interpreted the data together.

Here’s a three-minute video summary of the results:

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Executive Summary

Our usability study puts hard numbers behind what many SEO pros have sensed anecdotally:

  • Traffic drain is real and measurable. Desktop outbound click-through rate (CTR) can fall by two-thirds the moment an AIO appears; mobile fares better, but still loses almost half its clicks.
  • Attention stays up-screen. Seven in 10 searchers never read past the first third of an AIO; trust and visibility are won – or lost – inside a few lines.
  • Demographics define user behavior. Younger mobile users embrace AI answers and social proof; older searchers still dig for blue links and authority sites. Query intents with high-risk outcomes (like Your Money or Your Life searches) also cause users to dig more into search for validation.
  • The decision filter has changed. Brand/authority is now the first gate, search intent relevance the second; snippet wording only matters once trust is secured.
  • Residual clicks follow community proof and video. Reddit threads, YouTube demos, and forum posts soak up roughly a third of the traffic that AIO leaves behind.

Together, these findings show that visibility, not raw referral traffic, is becoming the main currency of organic search.

Key Takeaways

Before you dig into the overall findings, here are the high notes:

  1. AIOs kill clicks, especially on desktop: External click rates drop when an AIO block appears.
  2. Most users skim only the top third of the panel: Citations or mentions for your brand must surface early to be seen. Median scroll = 30% of panel height; only a minority of users scroll past 75%.
  3. Trust is earned through depth: Scroll-depth and stated trust move together (ρ = 0.38). Clear sources high up accelerate both trust and scroll-stop rate.
  4. Age and device shape engagement: 25 to 34-year-olds on mobile are the power users: They pick AIO as the final answer in 1 of 2 queries.
  5. Community and video matter post-AIO: When users do leave the SERP, many outbound clicks go to Reddit, YouTube, or forum posts – social proof seals decisions.
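The ρ in takeaway 3 is a Spearman rank correlation (correlation of ranks rather than raw values). As a reminder of what the statistic measures, here is a small sketch that computes it on invented scroll-depth and trust data, not the study’s raw numbers; it also simplifies tie handling, which a proper implementation would average.

```javascript
// Convert values to 1-based ranks (ties get arbitrary but stable order;
// fine for an illustration, though real Spearman averages tied ranks).
function ranks(values) {
  const sorted = values.map((v, i) => [v, i]).sort((a, b) => a[0] - b[0]);
  const r = new Array(values.length);
  sorted.forEach(([, originalIndex], rank) => { r[originalIndex] = rank + 1; });
  return r;
}

// Pearson correlation of two equal-length arrays.
function pearson(x, y) {
  const n = x.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(x), my = mean(y);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    dx += (x[i] - mx) ** 2;
    dy += (y[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Spearman = Pearson correlation applied to the ranks.
const spearman = (x, y) => pearson(ranks(x), ranks(y));

// Invented data: scroll depth (%) and trust rating (1-5) for eight users.
const scrollDepth = [10, 20, 25, 30, 45, 60, 80, 100];
const trustRating = [2, 1, 3, 2, 4, 3, 5, 4];
console.log(spearman(scrollDepth, trustRating)); // positive: deeper scrollers report more trust
```

A positive ρ, as in the study’s 0.38, means users who scroll deeper tend to rank higher on stated trust; it says nothing about which causes which.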

Methodology Summary

You’ll find a detailed methodology at the end of the article (and a methodology deep dive from Eric here), but here’s a short summary of how the data was collected:

We asked 70 U.S. searchers (42 on mobile, 27 on desktop) to complete eight real-world Google queries – six that trigger an AIO and two that do not – while UXtweak recorded their screens, scrolls, clicks, and think-aloud commentary.

Over 525 task videos (≈ 400 AIO encounters) were frame-by-frame coded by three analysts who logged scroll depth, dwell time, internal vs. external clicks, trust statements, and emotional reactions for every SERP element that held attention for at least five seconds.

The resulting 408 annotated results provide the quantitative spine – and the qualitative color – behind the findings you’re about to read.

We asked participants to complete these eight tasks:

  1. Using Google Search, find a tax accountant in your area by searching as you typically would.
  2. What are the best months to buy a new car?
  3. Find a portable charger for phones under $15. Search as you typically would.
  4. Find out how to transfer from PayPal to a bank.
  5. Search Google for “promo code for a car rental.”
  6. Search for two or three reasons why artificial sweeteners might cause health problems.
  7. Search Google for “sell gift cards for an instant payment,” and imagine you have to choose one of the services mentioned.
  8. Search Google for “how to waterproof fabric boots at home.”

Image Credit: Kevin Indig

1. How Do Users Actually Read AIO Content?

Several analyses have examined the impact of AIOs on click-through rates and organic traffic.

But no one has yet looked into how users actually engage with AIOs – until now.

In our analysis, we captured how far down the AIO users scroll, when they click the “show more” button, and where they dwell on the page.

Key Stats:

An overwhelming 88% of users clicked “show more” to expand truncated AIOs.

We measured how far down* the participants scrolled who looked at the AIO result for at least five seconds:

  • Average scroll depth: 75%.
  • Median scroll depth: 30%.

*0% = they never scrolled inside the box; 100% = they reached the very bottom at least once.

Image Credit: Kevin Indig

A few outliers skew the average.

The median is much more telling: Most users stop reading AIOs after the top third.
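The gap between those two numbers is easy to reproduce: a handful of readers who scroll to 100% drag the mean far above the typical user. A toy calculation (with made-up scroll depths, not the study’s raw data) shows the effect:

```javascript
// Made-up scroll depths (%) for ten users: most stop near the top third,
// a few outliers read to the very bottom.
const scrollDepths = [20, 25, 30, 30, 30, 35, 40, 100, 100, 100];

const mean = arr => arr.reduce((sum, x) => sum + x, 0) / arr.length;

const median = arr => {
  const sorted = [...arr].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
};

console.log(mean(scrollDepths));   // 51  - pulled up by the three full reads
console.log(median(scrollDepths)); // 32.5 - closer to typical behavior
```

Whenever mean and median diverge this sharply, report the median: it describes the typical user, while the mean describes the outliers.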

In total, 86% of participants “skimmed quickly,” meaning they didn’t take much time to read everything in the AIO but scavenged for key insights.

Dwell times averaged between 30-45 seconds, indicating meaningful user engagement rather than superficial interactions.

Eric, director of the study, found that 40% of sessions end with statements like “I usually don’t go past this” or “AIO answers all my questions.”

The remainder, almost a third of the sessions, show people scanning AIOs, then choosing a brand site, video, Reddit thread, or .gov/.edu result instead. (“I like AIO, but I still prefer Reddit,” was a sentiment we heard.)

But who scrolls further down the AIO?

  • Young people: Ages 25-34 years.
  • Mobile users: An average of 54% mobile users vs. 29% desktop users keep scrolling the AIO.
  • Searchers with an intent that reflects high stakes: Think tasks that involve financial or medical queries. Low-stakes searches, like coupon codes, are the opposite. Here’s a look at the average scroll depth across intents:
    • Health YMYL – 52%.
    • DIY or how-to – 54%.
    • Financial YMYL – 46%.
    • Decision timing (“best month to buy…”) – 41%.
    • Promo code queries – 34%.

When we asked participants about how much they trust AI-generated summaries, we got an average of 3.4 – quite high!

Image Credit: Kevin Indig

Why It Matters:

Similar to classic search results, aim to be cited as high up the AIO as possible to be the most visible.

When optimizing, we also need to consider the stakes of each individual search query and what it might take for a person to verify a claim or find a trustworthy solution, whether the search is in a YMYL topic or a general, traditionally low-risk topic.

This is more practical than the YMYL framework we’ve been using for a long time. The more a user has to lose when making the wrong decision, the more likely they are to engage deeply with AIOs.

Ultimately, our study shows that users engage more with an AIO out of skepticism. The higher the stakes are for a decision, the more they question the AIO. And the more they work to validate the AIO with sources outside of it.

Insight:

Users treat an AIO as a fact sheet: quick scan, expand if needed, minimal internal navigation.

You can see this in the difference between average and median scroll depth. Only a few users scrolled down to 75% of the AIO.

Users who end the task saying they trust the AIO are the same ones who have scrolled far enough to read citations or expanded paragraphs. Authoritative sources showing up high in the AIO accelerate trust.

Practical Takeaways:

  1. Most people will never reach the bottom of the AIO, so mentions and citations are only valuable up high, similar to how classic search results work.
  2. Similar to optimizing for Featured Snippets, when targeting AIOs, keep answers in content blocks concise, to the point, and simple.
  3. Invest in your positioning, messaging, and becoming an authoritative source in your area of expertise. That way, users recognize your brand in the SERPs – and ideally before they search.

I’m dropping more insights and guidance on how to apply these learnings for paid subscribers later this week. Make sure you don’t miss it. Upgrade here.

2. What’s The Click-Through Behavior Like When AIOs Are Present?

AIOs give users answers before they click on web results.

Therefore, the logical question is how much less traffic can websites expect when AI Overviews show up?

Key Stats:

Everyone obviously wants to know how AIOs impact click-through rates. But clicks are just a proxy for completed user journeys.

While these are usually hard to track, we were able to figure out exactly when participants completed their journeys based on their commentary and screen tracking.

Image Credit: Kevin Indig

The remaining ~80% of queries were answered using:

  • Organic and sponsored results.
  • Community forums.
  • Videos.
  • Map packs.
  • Other prominent SERP Features, like People Also Ask.

This observation actually fits Google’s narrative of AIOs being a “jumping-off point,” but I want to be clear that AIOs also kill a lot of clicks in the process, and our tasks require a higher level of skepticism than many highly searched queries.1

Of course, there’s a difference between commercial and informational queries that we must keep in mind.

Notably, 4 out of 5 users progressed past the AIO, so ranking in the first organic or paid slots remains critical for monetizable queries.

Most answers (81% on desktop and 78% on mobile) for transactional and/or commercial queries came from other non-AIO SERP elements, such as:

  • Organic links.
  • Discussions and forums.
  • Featured snippets.
  • Promo-code aggregators.
  • Sponsored results.

But for AIO actions that took place in the Overview itself, here’s what we found:

  • On mobile, 19% of participants clicked a citation-related element within the AIO panel, such as a link icon or hyperlinked text (excluding “show more” clicks).
  • On desktop, users clicked internally within an AIO just 7.4% of the time.

Overall, our main AIO blocks contained few hyperlinks, and on desktop, these links were nearly absent. The primary click out of the main panels was the (somewhat confusing) link icon.

The data we gathered from this part of the study confirms a few things:

  • Don’t expect too much traffic, even when you’re cited high up the AIO. Traffic loss is inevitable and probably impossible to compensate for. (But there is hope.)
  • Revenue models tied to sessions are suffering and will suffer more. (For example: Sites that rely on ads and affiliate models.)
  • Marketing dashboards that track only visits are under-valuing visibility wins or hiding looming losses.
  • The SERP battleground is shifting from rank to AIO presence. Budgets and optimization practices have to follow.

Insight:

The data shows that users treat AIO primarily as a read-only summary.

Users read, decide, and stay put.

Outbound traffic is the exception, not the rule. When AIOs are absent, outbound click rates rise to an average of 28% on desktop and 38% on mobile.

Notice how simple questions don’t require click-throughs in the following clip from the study:

And yet, there are cases in which organic results convince users that they are better than the AIO:

Practical Takeaways:

Optimize content for AIO citations, but don’t measure success by referral traffic (i.e., clicks).

Instead, measure visibility by monitoring the following:

  • Impressions (easy but fuzzy).
  • Citation rank (how high up the AIO you’re cited).
  • Share of Voice (how often you’re cited, how high, and how you show up in the organic results).
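One way to track those signals over time is to log, for each monitored query, whether you were cited in the AIO and at what position. The snapshot format and domains below are hypothetical; this sketch just shows the arithmetic behind a citation-rank and share-of-voice report:

```javascript
// Hypothetical AIO snapshots for a set of tracked queries. Each snapshot
// lists the cited domains in the order they appear in the panel.
const snapshots = [
  { query: 'how to waterproof boots', citations: ['example.com', 'other.com'] },
  { query: 'best months to buy a car', citations: ['other.com', 'example.com', 'third.com'] },
  { query: 'sell gift cards instantly', citations: ['third.com'] },
];

function visibilityReport(domain, snaps) {
  const cited = snaps.filter(s => s.citations.includes(domain));
  const citationRanks = cited.map(s => s.citations.indexOf(domain) + 1); // 1-based rank
  return {
    shareOfVoice: cited.length / snaps.length, // how often you're cited at all
    avgCitationRank: citationRanks.length
      ? citationRanks.reduce((a, b) => a + b, 0) / citationRanks.length // how high up
      : null,
  };
}

console.log(visibilityReport('example.com', snapshots));
// example.com is cited in 2 of 3 AIOs, at ranks 1 and 2 (average 1.5)
```

Run against daily SERP captures, these two numbers trend your AIO visibility even when referral clicks stay flat.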

You also need to immediately communicate to leaders, stakeholders, partners, and clients that organic traffic is already or about to drop significantly.

For those who are subscribed to the paid version of The Growth Memo, we have a prepped slide deck to help you communicate these changes to your stakeholders coming out this week.

3. How Do People React To SERPs Emotionally?

Emotions drive decisions more than rationale.

As I wrote in Messy Middle:

“We’re more emotional animals and make more decisions from our gut than we like to admit.”

Besides engagement, we also wanted to know how users feel about the results they’re seeing.

Why? Because emotions have an impact on our decisions, from clicks to purchases.

Key Stats:

Image Credit: Kevin Indig

Why It Matters:

Emotion is tied to risk. Searchers are internally asking, “What’s at stake?” when deciding whether to trust a result.

And as a result, high-stakes niches – or even expensive products – receive more skepticism and scrutiny from users.

This skepticism plays out in the form of clicks – a.k.a. your opportunities to convince people that you’re trustworthy.

The good news: Users don’t rely on AIOs only for YMYL queries; they also validate and verify with classic results.

In low-risk niches where the threat of picking a wrong answer is low – like coupon codes or certain informational queries – brands can focus on page speed and price.

Overall, here’s what stuck out the most from this segment of the study:

  • Hesitation or confusion spikes on medical or money queries when AIOs cite unknown brands.
  • Reassurance-seeking (opening a second organic link “to be sure”) appears in 38% of sessions where an AIO is present.
  • No-reaction silent scans dominate product or local-intent tasks.

For high-stakes queries, users care about authoritative sources, as you can see in this clip from the study:

But organic results can still win if they signal better relevance.

  1. Sites in the health and finance spaces have a higher chance of seeing lower traffic losses from AIOs.
  2. Aim to get mentioned or linked from highly authoritative sources, like .gov sites.
  3. Prioritize trust-building in your on-page experience to catch those double-check clicks. You can do this with visible editorial guidelines, expert authors + reviewers, and high-effort content production (original graphics, etc.).

4. What Influences The Type Of Result A User Chooses As Their Final Answer?

Up until now, my mental model of search – and I would argue the industry’s as well – was that users pick results by relevance: “Does it answer my question?”

But that has changed, and I think AI is a big reason.

We grouped over 550 think-aloud comments into four recurring themes to explain the new user behavior in the search results:

Source Trustworthiness (= Primary Click Motivator)

Whenever a recognized brand, authority site, .gov, or .edu appeared, it was chosen first in 58% of the cases where such a link was present.

Comparison/Validation (= Secondary Driver)

After reading an AIO or Blue Link, 18% of users still opened a Reddit thread, YouTube video, or second organic result “just to double-check.”

Snippet/Preview Relevance (= Speeds Decision)

After clearing the trust gate, users scanned the two-line snippet, bolded query terms, or AIO phrasing. When the snippet looked off-topic, users skipped even trusted domains.

Top-Of-Page Visibility (= Skews The Decision)

Limited viewport and thumb ergonomics make “position-0” features (AIO, featured snippet) and rank-1 organic vastly more influential on phones.

First-screen links were chosen 71% of the time. Users only scroll when the topic feels risky.

Image Credit: Kevin Indig

Why It Matters:

It’s not just about matching the intent of the query anymore. The old notion of “search intent relevance only” is outdated.

Brand authority and trustworthiness compound: Once you’re trusted, you likely outrank unknown rivals – even without richer snippets.

Of course, placement matters, and SERP real estate above the fold is scarce … and skews user decisions.

Trust is the core ingredient when it comes to anything AI. Search is no exception.

Insight:

Users apply a rapid two-step filter that looks like this:

“Do I trust this result?” → “Does this result answer my question?”

Look at these two clips from the study and notice how the participant selects results he explicitly trusts:

You’ll hear the participant state:

  • “Yelp is a good resource that I use a lot, so I’d probably click Yelp.”
  • “I’ll try to find one that has decent reviews and that’s nearby.”
  • “I trust Yelp.”

You’ll also hear this in Clip No. 2 above:

  • “US News and World Reports is trustworthy. Edmonds is trustworthy.”
  • “I picked this ’cause US News and World Reports is a trusted source, and they have a clear answer right here in the key takeaways.” (Note: They make information easy to find.)

Of course, there is nuance to this two-step filter.

The director of this study, Eric, adds the following observations:

  1. How-to and evergreen intents (like waterproofing boots, selling gift cards, or coupon hunting) are easiest to satisfy for AIOs. Users feel the AI is “tried and tested” and “super helpful” for these stable facts.
  2. Location-sensitive or personal-risk queries trigger more skepticism. One study participant shared aloud that, “It only says New York … that doesn’t help me,” and another shared, “I’d go straight to PayPal for accuracy.”
  3. Medical-risk examples show a mixed approach: Some users praise the concise summary, others insist on cross-checking with authoritative sources like the Mayo Clinic or the NHS.

Ultimately, we’ve noticed that the more time users spend reading the AIO, the higher their chance of trusting the answer and being influenced by it.

This is a priming effect: Once a brand or concept appears in the AIO, it remains top-of-mind.

Practical Takeaways:

  1. Trust is the gatekeeper. One of the biggest drivers of SEO success is how well you’ve earned “share of mind” before someone even sees your brand in an AIO or search result. Maybe that was always true, but AIOs make it non-negotiable now.
  2. Being present in the AIO (ideally high up) is valuable because it leaves an impression on users. That’s where impressions as a metric become more valuable.

5. How Do Demographics Influence Search Behavior And Interactions With AIOs?

We often talk about user intent in SEO, but completely ignore demographics.

History has shown that technical jumps, like what we’re living through with AI right now, have a bigger impact on younger demographics.

The same is true of Search.

Our research found stark differences in how people of different ages engage with the search results.

Below, you’ll see the percentage of the time an organic result was chosen, with and without an AIO present.

Image Credit: Kevin Indig

Why It Matters:

One-size-fits-all SEO practices don’t work anymore.

Just like for other social or content platforms, segmentation by demographic becomes as critical as segmentation by keyword intent.

Insight:

AIO adoption is generational.

Older audiences are still relying heavily on classic organic results. Younger demographics are more likely to focus on the AIO and validate with Reddit.

Practical Takeaways:

Prioritize content and SERP Feature bets by age segment.

For brands that target an older audience, double down on classic organic search. Don’t over-index on AIOs. The exception here would be queries with a local intent or online shopping searches.

In these cases, user intent overpowers age preferences.

Quick reminder here: Premium subscribers get expanded info later this week. Upgrade to paid.

6. How Do Devices Impact User Behavior?

Devices reflect search context.

Mobile devices are used more often on the go, which is why mobile searches are more likely to have local intent and SERP Features like local packs.

Mobile users are also more restrained in their behavior due to smaller screen real estate. These factors are also reflected in how users engage with AIOs.

Image Credit: Kevin Indig

Why It Matters:

You need to ensure mobile snippets and structured data are flawless; they get more scrutiny.

Mobile is now the primary remaining source of incremental Google traffic. Optimize for it first.

Insight:

Vertical scrolling and thumb ergonomics make mobile users dig deeper and click out more.

And when an AIO is missing, users revert to classic “blue link” behavior, especially on mobile, where more than one-third of searches produced a click to a non-Google site.

Practical Takeaways:

You need to track and compare mobile and desktop SERPs.
We missed this in classic SEO, and now it’s so much more important.

To prioritize which format you optimize for, you must validate that you get more mobile users to your site first (use Google Search Console).

If mobile is important to your target audience, regularly run separate mobile rank and snippet audits. And optimize the above-the-fold experience, in addition to:

  • Making the page skimmable.
  • Shortening time-to-value on the page (essentially, the time it takes to resolve the query or reach an insight from your site).
  • Simplifying navigation on the page and site.

7. How And When Do Users Engage With Community-Based, Video-Based, And Shopping Carousel Content?

The controversial rise of Reddit often leaves us wondering why Google gives community content so much prominence across all topics and verticals.

Our study explains what users really do.

Key Stats:

We looked at where clicks go when users leave Google or want to validate answers:

Image Credit: Kevin Indig

Keep in mind that SERP Features and corresponding user behavior vary by question or task performed.

In this study, only one task surfaced video results: “how to waterproof fabric boots at home.”

And here’s how users in this study responded to video results:

  • Users watched the preview frames, hovered for autoplay, then clicked through to YouTube in 5 of the 7 cases.
  • Although videos made up less than 2% of all logged elements, their 37-second dwell time exceeds AIO dwell time itself (31 seconds).
  • Users linger to watch autoplay previews or scroll thumbnails before deciding to click through.

For shopping-related tasks, we noticed the following:

  • 30% of clicks went to local packs.
  • 26.4% of clicks went to shopping modules (product grids).
  • 13.2% of clicks went to text ads.
  • 40% of clicks went to paid results overall (text ads + PLAs).
  • 7 out of 10 clicks bypassed classic organic links in favor of Google-curated verticals or ads.

By the way, Amazon was a huge competitor to the shopping carousel.

Many people said, “I would just go to Amazon” (see clip below):

This study participant states: “Typically, I go to Amazon … scroll past the sponsored results and look for something with a lot of reviews.”

Why It Matters:

Social proof platforms (Reddit, YouTube) absorb the demand that AIOs can’t satisfy. Be present there.

Insight:

Community proof-points matter. When users leave the SERP after looking at an AIO, community links receive a lot of those clicks (18% when AIOs are not present).

People – especially the younger cohort that trusts AIO the most – use forums to get a (validating) voice from another human. Users in their 20s to 30s clicked Reddit or YouTube far more than older cohorts.

For some queries, like how-tos, users skip the AIO intentionally because they expect richer media, like videos.

Practical Takeaways:

  1. Invest in Organic Reddit (or the most relevant forum in your industry) when and if it appears for your most relevant queries. Seek both citations and social proof, as they reinforce each other.
  2. Optimize video thumbnails and the first 15 seconds. Users decide whether or not to click from the autoplay preview; if the opening doesn’t show the task in action, they skip.

Conclusion: Welcome To The New World Of Search

You made it to the end! Congratulations to you and your attention span (or did you just scroll here 🤔?).

To summarize everything you just (hopefully) read: If your brand isn’t surfaced in the first third of an AIO, it’s effectively invisible.

Search has flipped from a click economy to a visibility economy.

And within that economy, the new currency is authority, which now outranks search intent relevance.

Users ask, “Do I trust this brand?” before they even consider the answer.

If I had to boil the findings down to one sentence, it would be this: Users treat an AIO as a fact sheet: They quickly scan, expand if needed, and use minimal internal navigation.

Top Takeaways For Operators:

  1. Shift KPIs from clicks to presence. Track how often, how high, and for which queries your brand appears in AIO.
  2. Lead with authority. Invest in expert endorsements, .gov/.edu links, and PR that earns immediate trust.
  3. Package answers for skimmers. Key-fact boxes, bullets, and schema matter more than ever.
  4. Own the validation click. Seed Reddit threads, video demos, and comparison guides – users still seek a second opinion.
  5. Segregate desktop and mobile strategy. Treat desktop as a branding surface; fight for mobile if you need traffic.

Top Takeaways For Decision Makers:

  1. Expect – and budget for – a structural drop in organic sessions. AIOs cut outbound clicks roughly in half on desktop and by a third on mobile; revenue models tied to sessions (ads, affiliate) need hedging strategies.
  2. Shift KPIs and tooling from “rank” to “share of voice in AIO.” Track how often, how high, and for which queries your brand appears in the panel; classic position-tracking alone masks looming losses. Keep in mind we’re still refining the new metrics model.
  3. Invest in authority signals that secure trust instantly. Recognition by .gov, .edu, expert reviewers, or high-profile PR sways 58% of users to choose a cited source first. Brand trust precedes relevance in the new decision filter.
  4. Allocate resources to validation channels – Reddit, YouTube, forums – where many residual clicks go after an AIO. Owning the follow-up click preserves influence even when Google keeps the first.

Open Questions That Still Matter

  • Citation mechanics. How does Google choose which sources surface in the collapsed AIO, and in what order?
  • Attribution leakage. Will Search Console or GA ever expose AIO-driven impressions so brands can value “on-SERP” exposure?
  • Monetization models. If outbound traffic keeps shrinking, how will publishers, affiliates, and SaaS products replace lost session-based revenue?
  • Personalization vs. authority. Will future AIOs weigh personal history over global trust signals – and can brands influence that balance?
  • Regulatory impact. Could antitrust or copyright actions force Google to show more outbound links – or fewer?
  • Behavior over time. Do users acclimate to AIOs and eventually click less (or more) as trust grows?


Additional Resources

Other primary research that puts the qualitative data into perspective:

Methodology

Study Design And Objective

We conducted a mixed-methods, within-subjects usability study to quantify how Google’s AI Overviews (AIO) change user behavior.

Each participant completed eight live Google searches: six queries that consistently triggered an AIO and two that did not. This arrangement lets us isolate the incremental effect of AIO while holding person-level variables constant.

Participants And Recruitment

Sixty-nine English-speaking U.S. adults were recruited on Prolific between 22 March and 8 April 2025.

Eligibility required a ≥ 95% Prolific approval rate, a Chromium-based browser (for the recording extension), and a functioning microphone.

Participants chose their own device; 42 used mobile (61%) and 27 used desktop (39%).

Age distribution was: 18-24 yrs 29%, 25-34 yrs 30%, 35-44 yrs 12%, 45-54 yrs 17%, 55-64 yrs 3%, 65+ yrs 3%.

A pilot with eight users refined instructions; 18 further sessions were excluded for technical failure and four for non-compliance. The final dataset contains 525 valid task videos.

Task Protocol

Each session ran in UXtweak’s Remote Moderated mode.

After reading a task prompt, the participant navigated to google.com, searched, and spoke thoughts aloud. They declared a final answer (“I’m selecting this because…”) before clicking “Done” in an overlay.

Task set:

  1. Local service (“find a tax accountant near you”) – no AIO.
  2. Decision timing (“best month to buy a car”) – AIO.
  3. Low-cost product (“portable charger < $15”) – no AIO.
  4. Transactional YMYL (“transfer PayPal to bank”) – AIO.
  5. Coupon/deal (“car-rental promo code”) – AIO.
  6. Health YMYL (“why artificial sweeteners might cause health problems”) – AIO.
  7. Finance YMYL (“sell gift cards for instant payment”) – AIO.
  8. DIY how-to (“how to waterproof fabric boots”) – AIO.

Capture Stack

UXtweak recorded full-screen video (1080p desktop or device resolution mobile), cursor paths, scroll events, and audio. Recordings averaged 25 min; incentives were $8 USD.

Annotation Procedure

Three trained coders reviewed every video in parallel and logged one row per SERP element that held attention for ≈ 5 seconds or longer. Twenty-three variables were captured, grouped as:

  • Structural – participant-ID, task-ID, device, query.
  • Feature – element type (AIO, organic link, map pack, sponsored, video pack, shopping carousel, forum, etc.).
  • Engagement – scroll depth inside AIO (0/25/50/75/100%), number of scroll gestures, dwell-time (s), internal clicks, outbound clicks.
  • Behavioral – spoken reaction (hesitation, confusion, reassurance-seeking, none), reading style (skim, re-read, etc.), AIO button used (show-more, citation click, carousel chip).
  • Outcome – final answer, satisfaction flag, explicit trust flag.
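One logged row under this scheme might look like the following sketch. The field names and values are hypothetical illustrations of the five variable groups, not the study's actual export schema:

```python
# One illustrative annotation row (field names are invented for
# illustration), grouped as in the study: structural, feature,
# engagement, behavioral, and outcome variables.
annotation_row = {
    # Structural
    "participant_id": "P017",
    "task_id": 8,                   # DIY how-to task
    "device": "mobile",
    "query": "how to waterproof fabric boots",
    # Feature
    "element_type": "AIO",
    # Engagement
    "aio_scroll_depth": 0.75,       # one of 0 / 0.25 / 0.50 / 0.75 / 1.00
    "scroll_gestures": 4,
    "dwell_time_s": 31,
    "internal_clicks": 1,
    "outbound_clicks": 0,
    # Behavioral
    "spoken_reaction": "reassurance-seeking",
    "reading_style": "skim",
    "aio_button_used": "show-more",
    # Outcome
    "final_answer": True,
    "satisfied": True,
    "explicit_trust": True,
}
```

With one such row per attended SERP element, the dataset aggregates naturally by participant, task, device, or element type.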

The research director (Eric van Buskirk) spot-checked 10% of videos. Inter-coder agreement: dwell-time SD ± 3 s; Cohen’s κ on trust category = 0.79 (substantial).

Data Processing And Metrics

Annotations were exported to Python/pandas 2.2. Scroll values entered as whole numbers were normalised to fractions (e.g., 80 → 0.80).

The 99th percentile of dwell was Winsorised to dampen outliers. This produced 408 evaluated SERP elements and ≈ 350 valid AIO observations.
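The two cleaning steps read roughly as follows in a minimal pandas sketch; column names and values here are illustrative stand-ins, not the study's actual export:

```python
import numpy as np
import pandas as pd

# Hypothetical annotation export: one row per attended SERP element.
df = pd.DataFrame({
    "scroll_pct": [80, 0.5, 100, 25, 0.75],          # mixed whole numbers and fractions
    "dwell_s":    [12.0, 31.0, 300.0, 8.0, 45.0],    # dwell time in seconds
})

# Normalise scroll values entered as whole numbers to fractions (80 -> 0.80).
df["scroll_frac"] = np.where(df["scroll_pct"] > 1,
                             df["scroll_pct"] / 100,
                             df["scroll_pct"])

# Winsorise dwell time at the 99th percentile to dampen outliers.
cap = df["dwell_s"].quantile(0.99)
df["dwell_w"] = df["dwell_s"].clip(upper=cap)
```

Capping at the 99th percentile (rather than dropping rows) keeps every observation while preventing a few very long dwells from dominating the means.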

Statistical Analysis

Descriptives (means, medians, proportions) were stratified by device, age, and query intent.

Spearman rank correlations tested monotonic relationships among scroll %, dwell, trust, and query-refinement counts (power >.8 to detect ρ ≥ .25).

Welch t-tests compared mobile vs desktop means; McNemar χ² compared click-through incidence with vs without AIO.
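As a sketch of what two of these tests look like in code, with synthetic data standing in for the study's measurements (scipy is assumed available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-element measures: deeper scrolls
# tend to come with longer dwell times.
scroll = rng.uniform(0, 1, 350)                    # scroll depth inside the AIO
dwell = 20 + 40 * scroll + rng.normal(0, 5, 350)   # dwell time (s)

# Spearman rank correlation: monotonic relationship between scroll and dwell.
rho, p_rho = stats.spearmanr(scroll, dwell)

# Welch t-test (unequal variances): mobile vs. desktop dwell means,
# splitting roughly 61/39 to mirror the device mix.
mobile, desktop = dwell[:214], dwell[214:]
t, p_t = stats.ttest_ind(mobile, desktop, equal_var=False)
```

Spearman is the right choice here because dwell and scroll need only move together monotonically, not linearly; Welch's variant of the t-test avoids assuming the two device groups have equal variance.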

Reliability And Power

With n ≈ 350 AIO rows, the 95% margin of error for a proportion of .50 is ≈ .05; correlations ≥ .30 are significant at α = .05. Cross-coder checks ensured temporal metrics and categorical judgements were consistent.
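As a quick arithmetic check, assuming a simple binomial standard error: the raw SE at n ≈ 350 works out to ≈ .027, and the ≈ .05 figure corresponds to the 95% margin of error (1.96 × SE):

```python
import math

n = 350    # approximate number of valid AIO observations
p = 0.50   # proportion under test (worst case for variance)

se = math.sqrt(p * (1 - p) / n)   # binomial standard error
moe95 = 1.96 * se                 # 95% margin of error

print(round(se, 3), round(moe95, 3))  # -> 0.027 0.052
```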

Limitations

Sample skews young (58% ≤ 34 yrs) and U.S.-based; think-aloud may lengthen dwell by ~5-10 s. Coder-judged trust/emotion involves subjectivity despite reliability checks.

Study window overlaps Google’s March 2025 core update; SERP UI was in flux. Findings generalise to Chromium browsers; Safari/Firefox users were not sampled.

Ethical Compliance

Participants gave informed consent; recordings stored encrypted; no personally identifying data retained. Study conforms to Prolific’s ethics policy and UXtweak TOS.

This narrative supplies sufficient procedural and statistical detail for replication or secondary analysis.

1 AI Overviews: About last week 


SEJ’s Content & SEO Strategist Shelley Walsh prerecorded an interview with Kevin before the launch to talk about his research. For more explanation about his findings, watch below.


Featured Image: Paulo Bobita/Search Engine Journal

Google Links To Itself: 43% Of AI Overviews Point Back To Google via @sejournal, @MattGSouthern

New research shows that Google’s AI Overviews often link to Google, contributing to the walled garden effect that encourages users to stay longer on Google’s site.

A study by SE Ranking examined Google’s AI Overviews in five U.S. states. It found that 43% of these AI answers contain links redirecting users to Google’s search results. Each answer typically includes 4-6 links to Google.

This aligns with recent data indicating that Google users make 10 clicks before visiting other websites. These patterns suggest that Google is working to keep users within its ecosystem for longer periods.

Google Citing Itself in AI Answers

The SE Ranking study analyzed 100,013 keywords across five states: Colorado, Texas, California, New York, and Washington, D.C.

It tracked how Google’s AI summaries function in different regions. Although locations showed slight variance, the study found that Google.com is the most-cited website in AI Overviews.

Google appears in about 44% of all AI answers, significantly ahead of the next most-cited sources (YouTube, Reddit, Quora, and Wikipedia), which appear in about 13% of answers.

The research states:

“Based on the data combined from all five states (141,507 total AI Overview appearances), our data analysis shows that 43.42% (61,437 times) of AI Overview responses contain links to Google organic results, while 56.58% of responses do not.”

Image Credit: SE Ranking

Building on the Walled Garden Trend

These findings complement a recent analysis from Momentic, which found that Google’s “pages per visit” has reached 10, indicating users spend significantly more clicks on Google before visiting other sites.

Overall, this research reveals Google is creating a more self-contained search experience:

  • AI Overviews appear in approximately 30% of all searches
  • Nearly half of these AI answers link back to Google itself
  • Users now make 10 clicks within Google before leaving
  • Longer, more specific queries trigger AI Overviews more frequently

Google still drives substantial traffic outward: 175.5 million visits in March, according to Momentic.

However, it’s less effective at sending users away than ChatGPT. Google produces just 0.6 external visits per user, while ChatGPT generates 1.4 visits per user.

More Key Stats from the Study

The SE Ranking research uncovered several additional findings:

  • AI Overviews almost always appear alongside other SERP features (99.25% of the time), most commonly with People Also Ask boxes (98.5%)
  • The typical AI Overview consists of about 1,766 characters (roughly 254 words) and cites an average of 13.3 sources
  • Medium-difficult keywords (21-40 on the difficulty scale) most frequently trigger AI Overviews (33.4%), whereas highly competitive terms (81-100) rarely generate them (just 3.7%)
  • Keywords with CPC values between $2 and $5 produce the highest rate of AI Overviews (32%), while expensive keywords ($10+) yield them the least (17.3%)
  • Fashion and Beauty has the lowest AI Overview appearance rate (just 1.4%), followed by E-Commerce (2.1%) and News/Politics (3.8%)
  • The longer an AI Overview’s answer, the more sources it cites. Responses under 600 characters cite about five sources, while those over 6,600 characters cite around 28 sources.

These statistics further emphasize how Google’s AI Overviews are reshaping search behavior.

This data stresses the need to optimize for multiple traffic sources while remaining visible within Google’s results pages.