From Search To Discovery: Why SEO Must Evolve Beyond The SERP via @sejournal, @alexmoss

The search landscape is undergoing its biggest shift in a generation.

If you’ve been in SEO long enough to remember the glory days of the all-organic search engine results page (SERP), you’ll know how much of that real estate has gradually been taken over by paid ads, other first-party products, and rich snippets.

Now comes the most aggressive transition of all: AI Overviews (along with search-based large language model platforms).

At BrightonSEO last month, I explored how this evolution is forcing us to rethink what SEO means and why discoverability, not just ranking, is the new north star.

The “Dawn” Of The Zero-Click Isn’t Just Over – It’s Now Assumed

We’ve been reading about the rise of zero-click searches for some time now, but this “takeover” has been much more noticeable over the past 12 months.

I recently searched [how to teach my child to tell the time], and after a parade of paid product ads, Google-owned assets, and the AI Overview summaries, I had scrolled a good three pages down the SERP.

Google and other search and discovery platforms want to keep users in their ecosystems. For SEO pros, this means traditional metrics such as click-through rate (CTR) are becoming less valuable by the day.

From Answer Engines To Assistant Engines

LLMs have changed not just the way a result is displayed to the user, but also the traditional search flow born within the browser, turning it into a multi-step flow that the native SERP simply cannot support in the same way.

The research process is collapsing into a single, seamless exchange.

Traditional flow vs. multi-step flow (Image used with permission from Alain Schlesser, May 2025)

But as technology accelerates, our own curiosity and research skills are at risk of declining, or disappearing completely.

Assistant engines and wider LLMs are the new gatekeepers between our content and the person discovering that content – our potential “new audience.”

They parse, consume, understand, and then synthesize content, and that synthesis is the deciding factor in what gets mentioned to whichever person or system they interact with.

Structured data is still crucial, as context, transparency, and sentiment matter more than ever.

Personal LLM agent flow diagram by Alain Schlesser, used with permission, May 2025

Challenges Are Different, But Also The Same

As SEOs, our challenges with this new behavior affect the way we do – and report on – our jobs.

In reality, many are just old headaches in shiny new wrappers:

  • Attribution is a mess: With AI Overviews and LLMs synthesizing content, it’s harder than ever to see where your traffic comes from – or whether you’re getting any at all. Some tools do monitor this, but it’s too early for a standard to have emerged. Even Google has said it has no plans to add AI Overview insights to Search Console.
  • Traffic is fragmenting (again): We saw this with social media platforms at the beginning, where discovery happened outside the organic SERPs. Discovery is now happening everywhere, all at once. With attribution also harder to ascertain, this is a bigger challenge today.
  • Budgets are under scrutiny from fear, uncertainty, and doubt (FUD): The native SERP is changing too much, so some may assume there’s less (or no) value in doing SEO much anymore (untrue!).
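On the attribution point, one stopgap until a standard emerges is to segment sessions by referrer hostname yourself. A minimal sketch; the hostname lists are illustrative, not exhaustive:

```javascript
// Bucket a session's referrer into ai / search / direct / other.
// The hostname lists below are examples only, not a complete inventory.
const AI_REFERRERS = ['chatgpt.com', 'chat.openai.com', 'perplexity.ai', 'gemini.google.com'];
const SEARCH_REFERRERS = ['www.google.com', 'www.bing.com', 'duckduckgo.com'];

function classifyReferrer(referrerUrl) {
  if (!referrerUrl) return 'direct';
  const host = new URL(referrerUrl).hostname;
  // Match the host exactly or as a subdomain of a known AI platform
  if (AI_REFERRERS.some(h => host === h || host.endsWith('.' + h))) return 'ai';
  if (SEARCH_REFERRERS.includes(host)) return 'search';
  return 'other';
}
```

Feeding these buckets into your analytics tool at least separates assistant-driven visits from classic organic ones while the industry settles on a reporting standard.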

The Shift Of Success Metrics

The days of our current success metrics – vanity-led metrics above all – are coming to an end.

Similar to how our challenges are the same but different, this also applies to how we redefine success metrics:

Old Hat → New Hat

  • Content → Context + sentiment
  • Keywords → Intent
  • Brand → Brand + sentiment
  • Rankings → Mentions
  • Links from external sources → Citations across various channels
  • SERP monopoly → Share of voice
  • E-E-A-T → Still E-E-A-T
  • Structured data → Entities, knowledge graph & vector embeds
  • Answering → Assisting

What Can You Do About It?

Information can be aggregated, but personality can’t. This is why it’s still our responsibility to help “assist the assistant” to consider and include you as part of that aggregated information and synthesized answer.

  • Stick to the fundamentals: Never neglect SEO 101.
  • Manage third-party perception: It is increasingly important, so ensure it is maintained and managed well to keep brand sentiment positive.
  • Embrace structured data: Even if some say it’s becoming less crucial for LLMs to understand entities, structured data is being used right now inside major LLMs to output structured data within responses, giving them an established and standardised way to understand your content.
  • Educate stakeholders: Shift the conversation from rankings and clicks to discoverability and brand presence. The branded, unlinked mention suddenly has more value than “acquiring X followed non-branded anchor text links pcm.”
  • Experiment with your content: Try new ways to produce and market your content beyond the traditional word. Here, video is useful not only for humans but also for LLMs, which are now “watching” and understanding videos to aid their responses.
  • Create helpful, unique content: To add to the above, don’t produce for the sake of production.

LLMs.txt: The Potential To Be The New Standard

Keep an eye on emerging standards proposals, such as llms.txt, which is one way some are adapting and contributing to how LLMs ingest our content beyond our traditional approaches offered with robots.txt and XML sitemaps.

While some are skeptical about this standard, I believe it is still worth implementing now, and that its true benefits will come in the future.

There is (virtually) non-existent risk in implementing something that doesn’t take too much time or resources to produce, so long as you’re doing so with a white hat approach.
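For reference, llms.txt as proposed at llmstxt.org is a plain markdown file served from the site root: an H1 with the site name, a blockquote summary, then link sections. A minimal sketch, with placeholder names and URLs:

```markdown
# Example Brand

> One-paragraph summary of what the site offers and who it serves.

## Guides

- [How to choose a portable charger](https://example.com/guides/portable-chargers): buying guide with specs explained

## Optional

- [Company history](https://example.com/about): background an agent can skip when context is limited
```

The “Optional” section signals content an LLM can drop first when its context window is tight.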

Conclusion: Embrace Discoverability And New Metrics

SEO isn’t dead. It’s expanding, but at a rate we haven’t experienced before.

Discoverability is the new go-to success metric, but it’s not without flaws, especially as the way we search continues to change.

This is no longer about “ranking well” anymore. This is now about being understood, surfaced, trusted, and discovered across every platform and assistant that matters.

Embrace and adapt to the changes, as it’s going to continue for some time.

Featured Image: PeopleImages.com – Yuri A/Shutterstock

Does Google’s AI Overviews Violate Its Own Spam Policies? via @sejournal, @martinibuster

Search marketers assert that Google’s new long-form AI Overviews answers have become the very thing Google’s documentation advises publishers against: scraped content lacking originality or added value, at the expense of content creators who are seeing declining traffic.

Why put the effort into writing great content if it’s going to be rewritten into a complete answer that removes the incentive to click the cited source?

Rewriting Content And Plagiarism

Google previously showed Featured Snippets, which were excerpts from published content that users could click on to read the rest of the article. Google’s AI Overviews (AIO) expands on that by presenting entire articles that answer a user’s questions and sometimes anticipates follow-up questions and provides answers to those, too.

And it’s not an AI providing original answers; it’s an AI repurposing published content. When a student does the same thing – repurposing an existing essay without adding unique insight or analysis – it’s called plagiarism.

The thing about AI is that it is incapable of unique insight or analysis, so there is zero value-add in Google’s AIO – which, in an academic setting, would be called plagiarism.

Example Of Rewritten Content

Lily Ray recently published an article on LinkedIn drawing attention to a spam problem in Google’s AIO. Her article explains how SEOs discovered how to inject answers into AIO, taking advantage of the lack of fact checking.

Lily subsequently checked Google, presumably to see if her article was ranking, and discovered that Google had rewritten her entire article and was providing an answer almost as long as her original.

She tweeted:

“It re-wrote everything I wrote in a post that’s basically as long as my original post.”

Did Google Rewrite Entire Article?

One way search engines and LLMs may analyze content is to determine which questions the content answers. The content can then be annotated according to the answers it provides, making it easier to match a query to a web page.

I used ChatGPT to analyze Lily’s content as well as AIO’s answer. The number of questions answered by the two documents was almost exactly the same: Lily’s article answered 13 questions, while AIO answered 12.

Both articles answered five similar questions:

  • Spam Problem In AI Overviews
    AIO: Is there a spam problem affecting Google AI Overviews?
    Lily Ray: What types of problems have been observed in Google’s AI Overviews?
  • Manipulation And Exploitation of AI Overviews
    AIO: How are spammers manipulating AI Overviews to promote low-quality content?
    Lily Ray: What new forms of SEO spam have emerged in response to AI Overviews?
  • Accuracy And Hallucination Concerns
    AIO: Can AI Overviews generate inaccurate or contradictory information?
    Lily Ray: Does Google currently fact-check or validate the sources used in AI Overviews?
  • Concern About AIO In The SEO Community
    AIO: What concerns do SEO professionals have about the impact of AI Overviews?
    Lily Ray: Why is the ability to manipulate AI Overviews so concerning?
  • Deviation From Principles of E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness)
    AIO: What kind of content is Google prioritizing in response to these issues?
    Lily Ray: How does the quality of information in AI Overviews compare to Google’s traditional emphasis on E-E-A-T and trustworthy content?
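The kind of question-set comparison described above can be approximated in a few lines. This word-overlap sketch is an illustration only – the threshold, normalization, and sample questions are assumptions, not the actual analysis method used:

```javascript
// Approximate "do these two documents answer the same questions?" by
// word-level Jaccard overlap between question strings.
function questionWords(question) {
  return new Set(
    question.toLowerCase().replace(/[^a-z0-9\s]/g, '').split(/\s+/).filter(Boolean)
  );
}

function jaccard(a, b) {
  const intersection = [...a].filter(word => b.has(word)).length;
  return intersection / (a.size + b.size - intersection);
}

// Questions from A that have a close counterpart in B.
// The 0.5 threshold is an arbitrary illustrative choice.
function sharedQuestions(questionsA, questionsB, threshold = 0.5) {
  return questionsA.filter(qa =>
    questionsB.some(qb => jaccard(questionWords(qa), questionWords(qb)) >= threshold)
  );
}
```

In practice, an embedding-based similarity would catch paraphrases (like the pairs above) that plain word overlap misses.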

Plagiarizing More Than One Document

Google’s AIO system is designed to answer follow-up and related questions, “synthesizing” answers from more than one original source and that’s the case with this specific answer.

Whereas Lily’s content argues that Google isn’t doing enough, AIO rewrote the content from another document to say that Google is taking action to prevent spam. Google’s AIO differs from Lily’s original by answering five additional questions with answers that are derived from another web page.

This gives the appearance that Google’s AIO answer for this specific query is “synthesizing” – or “plagiarizing” – from two documents to answer Lily Ray’s search query, “spam in ai overview google.”

Takeaways

  • Google’s AI Overviews repurposes web content to create long-form content that lacks originality or added value.
  • Google’s AIO answers mirror the content they summarize, copying the structure and ideas to answer the same questions those articles answer.
  • Google’s AIO arguably deviates from Google’s own quality standards, using rewritten content in a manner that mirrors Google’s own definitions of spam.
  • Google’s AIO features apparent plagiarism of multiple sources.

The quality and trustworthiness of AIO responses may not reach the quality levels set by Google’s principles of Experience, Expertise, Authoritativeness, and Trustworthiness, because AI lacks experience and there is apparently no mechanism for fact-checking.

The fact that Google’s AIO system provides essay-length answers arguably removes any incentive for users to click through to the original source and may help explain why many in the search and publisher communities are seeing less traffic. The perception of AIO traffic is so bad that one search marketer quipped on X that ranking #1 on Google is the new place to hide a body, because nobody would ever find it there.

Google could be said to plagiarize content because AIO answers are rewrites of published articles that lack unique analysis or added value, placing AIO firmly within most people’s definition of a scraper spammer.

Featured Image by Shutterstock/Luis Molinero

Create Your Own ChatGPT Agent For On-Page SEO Audits via @sejournal, @makhyan

ChatGPT is more than just a prompting and response platform. You can send prompts to ask for help with SEO, but it becomes more powerful the moment that you make your own agent.

I conduct many SEO audits – it’s a necessity for an enterprise site – so I was looking for a way to streamline some of these processes.

How did I do it? By creating a ChatGPT agent that I’m going to share with you so that you can customize and change it to meet your needs.

I’ll keep things as “untechnical” as possible, but just follow the instructions, and everything should work.

I’m going to explain the following steps:

  1. Configuration of your own ChatGPT.
  2. Creating your own Cloudflare code to fetch a page’s HTML data.
  3. Putting your SEO audit agents to work.

At the end, you’ll have a bot that provides you with information, such as:

Custom ChatGPT for SEO (Image from author, May 2025)

You’ll also receive a list of actionable steps to take to improve your SEO based on the agent’s findings.

Creating A Cloudflare Pages Worker For Your Agent

Cloudflare Pages workers help your agent gather information from the website you’re trying to parse and assess its current SEO state.

You can use a free account to get started, and you can register by doing the following:

  1. Going to http://pages.dev/
  2. Creating an account

I used Google to sign up because it’s easier, but choose the method you’re most comfortable with. You’ll end up on a screen that looks something like this:

Cloudflare Dashboard (Screenshot from Cloudflare, May 2025)

Navigate to Add > Workers.

Add a Cloudflare Worker (Screenshot from Cloudflare, May 2025)

You can then select a template, import a repository, or start with Hello World! I chose the Hello World option, as it’s the easiest one to use.

Selecting Cloudflare Worker (Screenshot from Cloudflare, May 2025)

Go through the next screen and hit “Deploy.” You’ll end up on a screen that says, “Success! Your project is deployed to Region: Earth.”

Don’t click off this page.

Instead, click on “Edit code,” remove all of the existing code, and enter the following code into the editor:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { searchParams } = new URL(request.url);
  const targetUrl = searchParams.get('url');
  const userAgentName = searchParams.get('user-agent');

  if (!targetUrl) {
    return new Response(
      JSON.stringify({ error: "Missing 'url' parameter" }),
      { status: 400, headers: { 'Content-Type': 'application/json' } }
    );
  }

  const userAgents = {
    googlebot: 'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.6167.184 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
    samsung5g: 'Mozilla/5.0 (Linux; Android 13; SM-S901B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Mobile Safari/537.36',
    iphone13pmax: 'Mozilla/5.0 (iPhone14,3; U; CPU iPhone OS 15_0 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/19A346 Safari/602.1',
    msedge: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246',
    safari: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
    bingbot: 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/',
    chrome: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36',
  };

  const userAgent = userAgents[userAgentName] || userAgents.chrome;

  const headers = {
    'User-Agent': userAgent,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Encoding': 'gzip',
    'Cache-Control': 'no-cache',
    'Pragma': 'no-cache',
  };


  try {
    let redirectChain = [];
    let currentUrl = targetUrl;
    let finalResponse;

    // Follow redirects
    while (true) {
      const response = await fetch(currentUrl, { headers, redirect: 'manual' });

      // Add the current URL and status to the redirect chain only if it's not already added
      if (!redirectChain.length || redirectChain[redirectChain.length - 1].url !== currentUrl) {
        redirectChain.push({ url: currentUrl, status: response.status });
      }

      // Check if the response is a redirect
      if (response.status >= 300 && response.status < 400 && response.headers.get('location')) {
        const redirectUrl = new URL(response.headers.get('location'), currentUrl).href;
        currentUrl = redirectUrl; // Follow the redirect
      } else {
        // No more redirects; capture the final response
        finalResponse = response;
        break;
      }
    }

    if (!finalResponse.ok) {
      throw new Error(`Request to ${targetUrl} failed with status code: ${finalResponse.status}`);
    }

    const html = await finalResponse.text();

    // Robots.txt
    const domain = new URL(targetUrl).origin;
    const robotsTxtResponse = await fetch(`${domain}/robots.txt`, { headers });
    const robotsTxt = robotsTxtResponse.ok ? await robotsTxtResponse.text() : 'robots.txt not found';
    const sitemapMatches = robotsTxt.match(/Sitemap:\s*(https?:\/\/[^\s]+)/gi) || [];
    const sitemaps = sitemapMatches.map(sitemap => sitemap.replace('Sitemap: ', '').trim());

    // Metadata
    // (Simple, attribute-order-sensitive patterns; adjust them if the
    // audited pages order their attributes differently.)
    const titleMatch = html.match(/<title[^>]*>\s*(.*?)\s*<\/title>/i);
    const title = titleMatch ? titleMatch[1] : 'No Title Found';

    const metaDescriptionMatch = html.match(/<meta\s+name="description"\s+content="(.*?)"/i);
    const metaDescription = metaDescriptionMatch ? metaDescriptionMatch[1] : 'No Meta Description Found';

    const canonicalMatch = html.match(/<link\s+rel="canonical"\s+href="(.*?)"/i);
    const canonical = canonicalMatch ? canonicalMatch[1] : 'No Canonical Tag Found';

    // Open Graph and Twitter Info
    const ogTags = {
      ogTitle: (html.match(/<meta\s+property="og:title"\s+content="(.*?)"/i) || [])[1] || 'No Open Graph Title',
      ogDescription: (html.match(/<meta\s+property="og:description"\s+content="(.*?)"/i) || [])[1] || 'No Open Graph Description',
      ogImage: (html.match(/<meta\s+property="og:image"\s+content="(.*?)"/i) || [])[1] || 'No Open Graph Image',
    };

    // Group 1 matches name= or property=; group 2 captures the content value.
    const twitterTags = {
      twitterTitle: (html.match(/<meta\s+(name|property)="twitter:title"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Title',
      twitterDescription: (html.match(/<meta\s+(name|property)="twitter:description"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Description',
      twitterImage: (html.match(/<meta\s+(name|property)="twitter:image"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Image',
      twitterCard: (html.match(/<meta\s+(name|property)="twitter:card"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Card Type',
      twitterCreator: (html.match(/<meta\s+(name|property)="twitter:creator"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Creator',
      twitterSite: (html.match(/<meta\s+(name|property)="twitter:site"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Site',
      twitterLabel1: (html.match(/<meta\s+(name|property)="twitter:label1"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Label 1',
      twitterData1: (html.match(/<meta\s+(name|property)="twitter:data1"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Data 1',
      twitterLabel2: (html.match(/<meta\s+(name|property)="twitter:label2"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Label 2',
      twitterData2: (html.match(/<meta\s+(name|property)="twitter:data2"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Data 2',
      twitterAccountId: (html.match(/<meta\s+(name|property)="twitter:account_id"\s+content="(.*?)"/i) || [])[2] || 'No Twitter Account ID',
    };

    // Headings
    const headings = {
      h1: [...html.matchAll(/<h1[^>]*>(.*?)<\/h1>/gis)].map(match => match[1]),
      h2: [...html.matchAll(/<h2[^>]*>(.*?)<\/h2>/gis)].map(match => match[1]),
      h3: [...html.matchAll(/<h3[^>]*>(.*?)<\/h3>/gis)].map(match => match[1]),
    };

    // Images
    const imageMatches = [...html.matchAll(/<img[^>]*src="(.*?)"[^>]*>/gi)];
    const images = imageMatches.map(img => img[1]);
    const imagesWithoutAlt = imageMatches.filter(img => !/alt=".*?"/i.test(img[0])).length;

    // Links
    const linkMatches = [...html.matchAll(/<a[^>]*href="(.*?)"[^>]*>/gi)];
    const links = {
      internal: linkMatches.filter(link => link[1].startsWith(domain)).map(link => link[1]),
      external: linkMatches.filter(link => !link[1].startsWith(domain) && link[1].startsWith('http')).map(link => link[1]),
    };

    // Schemas (JSON-LD)
    const schemaJSONLDMatches = [...html.matchAll(/<script[^>]*type="application\/ld\+json"[^>]*>(.*?)<\/script>/gis)];
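Once the worker is deployed, your agent queries it with the two URL parameters the handler reads: url (the page to audit) and user-agent (one of the keys defined above). A minimal sketch of building that request URL – the workers.dev subdomain is a placeholder:

```javascript
// Build the request URL for the deployed worker.
// 'workerBase' is whatever subdomain Cloudflare assigned you (hypothetical here).
function buildAuditUrl(workerBase, targetUrl, userAgentName) {
  const url = new URL(workerBase);
  url.searchParams.set('url', targetUrl);           // page to audit (required)
  if (userAgentName) {
    url.searchParams.set('user-agent', userAgentName); // e.g. 'googlebot'
  }
  return url.toString();
}

// Example:
// buildAuditUrl('https://seo-audit.example.workers.dev/', 'https://example.com/', 'googlebot')
```

Omitting user-agent falls back to the chrome string, per the `userAgents[userAgentName] || userAgents.chrome` line in the worker.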
The First-Ever UX Study Of Google’s AI Overviews: The Data We’ve All Been Waiting For via @sejournal, @Kevin_Indig

One thing I need you to understand about the groundbreaking data I’m about to show you is that no one has ever done this kind of analysis before.

Ever.

To our knowledge, no other independent usability study has explored a major web platform at this scale.

AI changes everything, and search is at the forefront.

Together with Eric van Buskirk and his team, I conducted a behavioral study that provides us with unique and mission-critical insights into how people use Google, especially AI Overviews (AIOs).

This data allows us all to better understand how people actually use the new feature and, therefore, better optimize for this new world of search.

We captured screen recordings + think-aloud sessions on 70 people (≈ 400 AIO encounters) to see what really happens when Google shows an AIO.

We tracked their scrolls, hovers, dwells, comments, and even their emotions!

The effort to gather and evaluate this data was high. It required:

  • A solid five-figure USD investment.
  • A team of six people.
  • Combing through 13,500 words of annotations.
  • Sifting through 29 hours of recordings.
  • So many hours we lost count.

I want to call out that the study was directed by Eric Van Buskirk.

We designed the questions, focus points, and the method together, but Eric hired collaborators, ran the study, and delivered the results. Once the study was finished, we interpreted the data together.

Here’s a three-minute video summary of the results:


Executive Summary

Our usability study puts hard numbers behind what many SEO pros have sensed anecdotally:

  • Traffic drain is real and measurable. Desktop outbound click-through rate (CTR) can fall by two-thirds the moment an AIO appears; mobile fares better, but still loses almost half its clicks.
  • Attention stays up-screen. Seven in 10 searchers never read past the first third of an AIO; trust and visibility are won – or lost – inside a few lines.
  • Demographics define user behavior. Younger mobile users embrace AI answers and social proof; older searchers still dig for blue links and authority sites. Query intents with high-risk outcomes (like Your Money or Your Life searches) also cause users to dig more into search for validation.
  • The decision filter has changed. Brand/authority is now the first gate, search intent relevance the second; snippet wording only matters once trust is secured.
  • Residual clicks follow community proof and video. Reddit threads, YouTube demos, and forum posts soak up roughly a third of the traffic that AIO leaves behind.

Together, these findings show that visibility, not raw referral traffic, is becoming the main currency of organic search.

Key Takeaways

Before you dig into the overall findings, here are the high notes:

  1. AIOs kill clicks, especially on desktop: External click rates drop when an AIO block appears.
  2. Most users skim only the top third of the panel: Citations or mentions for your brand must surface early to be seen. Median scroll = 30% of panel height; only a minority of users scroll past 75%.
  3. Trust is earned through depth: Scroll-depth and stated trust move together (ρ = 0.38). Clear sources high up accelerate both trust and scroll-stop rate.
  4. Age and device shape engagement: 25 to 34-year-olds on mobile are the power users: They pick AIO as the final answer in 1 of 2 queries.
  5. Community and video matter post-AIO: When users do leave the SERP, many outbound clicks go to Reddit, YouTube, or forum posts – social proof seals decisions.

Methodology Summary

You’ll find a detailed methodology at the end of the article (and a methodology deep dive from Eric here), but here’s a short summary of how the data was collected:

We asked 70 U.S. searchers (42 on mobile, 27 on desktop) to complete eight real-world Google queries – six that trigger an AIO and two that do not – while UXtweak recorded their screens, scrolls, clicks, and think-aloud commentary.

Over 525 task videos (≈ 400 AIO encounters) were frame-by-frame coded by three analysts who logged scroll depth, dwell time, internal vs. external clicks, trust statements, and emotional reactions for every SERP element that held attention for at least five seconds.

The resulting 408 annotated results provide the quantitative spine – and the qualitative color – behind the findings you’re about to read.

We asked participants to complete these eight tasks:

  1. Using Google Search, find a tax accountant in your area by searching as you typically would.
  2. What are the best months to buy a new car?
  3. Find a portable charger for phones under $15. Search as you typically would.
  4. Find out how to transfer from PayPal to a bank.
  5. Search Google for “promo code for a car rental.”
  6. Search for two or three reasons why artificial sweeteners might cause health problems.
  7. Search Google for “sell gift cards for an instant payment,” and imagine you have to choose one of the services mentioned.
  8. Search Google for “how to waterproof fabric boots at home.”
Image Credit: Kevin Indig

1. How Do Users Actually Read AIO Content?

Several analyses have examined the impact of AIOs on click-through rates and organic traffic.

But no one has yet looked into how users actually engage with AIOs – until now.

In our analysis, we captured how far down the AIO users scroll, when they click the “show more” button, and where they dwell on the page.

Key Stats:

An overwhelming 88% of users clicked “show more” to expand truncated AIOs.

We measured how far down* the participants scrolled who looked at the AIO result for at least five seconds:

  • Average scroll depth: 75%.
  • Median scroll depth: 30%.

*0% = they never scrolled inside the box; 100% = they reached the very bottom at least once.

Image Credit: Kevin Indig

A few outliers skew the average.

The median is much more telling: Most users stop reading AIOs after the top third.
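The gap between the two numbers is ordinary outlier behavior: a handful of readers who reach the bottom of the panel pull the average up while the median stays put. A quick illustration with hypothetical scroll-depth samples (not the study’s raw data):

```javascript
// Hypothetical scroll depths (fraction of the AIO panel reached).
// Most readers stop near the top; a few read to the very bottom.
const depths = [0.1, 0.2, 0.3, 0.3, 1.0, 1.0, 1.0];

function mean(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// mean(depths) lands well above median(depths): the bottom-readers
// drag the average up while the typical reader still stops at 30%.
```

This is why the median, not the average, is the number to plan content placement around.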

In total, 86% of participants “skimmed quickly,” meaning they didn’t take much time to read everything in the AIO but scavenged for key insights.

Dwell times averaged between 30-45 seconds, indicating meaningful user engagement rather than superficial interactions.

Eric, director of the study, found that 40% of sessions end with statements like “I usually don’t go past this” or “AIO answers all my questions.”

The remainder, almost a third of the sessions, show people scanning AIOs, then choosing a brand site, video, Reddit thread, or .gov/.edu result instead. (“I like AIO, but I still prefer Reddit,” was a sentiment we heard.)

But who scrolls further down the AIO?

  • Young people: Ages 25-34 years.
  • Mobile users: An average of 54% of mobile users vs. 29% of desktop users keep scrolling the AIO.
  • Searchers with an intent that reflects high stakes: Think tasks that involve financial or medical queries. Low-stakes searches, like coupon codes, are the opposite. Here’s a look at the average scroll depth across intents:
    • Health YMYL – 52%.
    • DIY or how-to – 54%.
    • Financial YMYL – 46%.
    • Decision timing (“best month to buy…”) – 41%.
    • Promo code queries – 34%.

When we asked participants about how much they trust AI-generated summaries, we got an average of 3.4 – quite high!

Image Credit: Kevin Indig

Why It Matters:

Similar to classic search results, aim to be cited as high up the AIO as possible to be the most visible.

When optimizing, we also need to consider the stakes of each individual search query and what it might take for a person to verify a claim or find a trustworthy solution, whether the search is in a YMYL topic or a general, traditionally low-risk topic.

This is more practical than the YMYL framework we’ve been using for a long time. The more a user has to lose when making the wrong decision, the more likely they are to engage deeply with AIOs.

Ultimately, our study shows that users engage more with an AIO out of skepticism. The higher the stakes are for a decision, the more they question the AIO. And the more they work to validate the AIO with sources outside of it.

Insight:

Users treat an AIO as a fact sheet: quick scan, expand if needed, minimal internal navigation.

You can see this in the difference between average and median scroll depth. Only a few users scrolled down to 75% of the AIO.

Users who end the task saying they trust the AIO are the same ones who have scrolled far enough to read citations or expanded paragraphs. Authoritative sources showing up high in the AIO accelerate trust.

Practical Takeaways:

  1. Most people will never reach the bottom of the AIO, so mentions and citations are only valuable up high, similar to how classic search results work.
  2. Similar to optimizing for Featured Snippets, when targeting AIOs, keep answers in content blocks concise, to the point, and simple.
  3. Invest in your positioning, messaging, and becoming an authoritative source in your area of expertise. That way, users recognize your brand in the SERPs – and ideally before they search.


2. What’s The Click-Through Behavior Like When AIOs Are Present?

AIOs give users answers before they click on web results.

Therefore, the logical question is how much less traffic can websites expect when AI Overviews show up?

Key Stats:

Everyone obviously wants to know how AIOs impact click-through rates. But clicks are just a proxy for completed user journeys.

While these are usually hard to track, we were able to figure out exactly when participants completed their journeys based on their commentary and screen tracking.

Image Credit: Kevin Indig

The remaining ~80% of queries were answered using:

  • Organic and sponsored results.
  • Community forums.
  • Videos.
  • Map packs.
  • Other prominent SERP Features, like People Also Ask.

This observation actually fits Google’s narrative of AIOs being a “jumping-off point,” but I want to be clear that AIOs also kill a lot of clicks in the process, and our tasks require a higher level of skepticism than many highly searched queries.

Of course, there’s a difference between commercial and informational queries that we must keep in mind.

Notably, 4 out of 5 users progressed past the AIO, so ranking in the first organic or paid slots remains critical for monetizable queries.

Most answers (81% on desktop and 78% on mobile) for transactional and/or commercial queries came from other, non-AIO SERP elements, such as:

  • Organic links.
  • Discussions and forums.
  • Featured snippets.
  • Promo‑code aggregators.
  • Sponsored results.

But for AIO actions that took place in the Overview itself, here’s what we found:

  • On mobile, 19% of participants clicked a citation-related element within the AIO panel, such as a link icon or hyperlinked text (excluding “show more” clicks).
  • On desktop, users clicked internally within an AIO just 7.4% of the time.

Overall, the main AIO blocks contained few hyperlinked text elements, and on desktop, these links were nearly absent. The primary click out of the main panels was the (somewhat confusing) link icon.

The data we gathered from this part of the study confirms a few things:

  • Don’t expect too much traffic, even when you’re cited high up the AIO. Traffic loss is inevitable and probably impossible to compensate for. (But there is hope.)
  • Revenue models tied to sessions are suffering and will suffer more. (For example: Sites that rely on ads and affiliate models.)
  • Marketing dashboards that track only visits are under-valuing visibility wins or hiding looming losses.
  • The SERP battleground is shifting from rank to AIO presence. Budgets and optimization practices have to follow.

Insight:

The data shows that users treat AIO primarily as a read-only summary.

Users read, decide, and stay put.

Outbound traffic is the exception, not the rule. When AIOs are absent, outbound click rates rise to an average of 28% on desktop and 38% on mobile.

Notice how simple questions don’t require click-throughs in the following clip from the study:

And yet, there are cases in which organic results prove more convincing to users than AIOs:

Practical Takeaways:

Optimize content for AIO citations, but don’t measure referral traffic (i.e., clicks) for success.

Instead, measure visibility by monitoring the following:

  • Impressions (easy but fuzzy).
  • Citation rank (how high up the AIO you’re cited).
  • Share of Voice (how often you’re cited, how high, and how you show up in the organic results).
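Share of Voice can be approximated from whatever citation data you track. Here is a minimal sketch, assuming a hypothetical mapping of query → domains cited in that query's AIO (the data structure and function name are illustrative, not a specific tool's API):

```python
def share_of_voice(aio_citations: dict[str, list[str]], domain: str) -> float:
    """Fraction of tracked queries whose AIO cites `domain`.

    aio_citations maps each tracked query to the list of domains
    cited in its AI Overview (illustrative structure).
    """
    if not aio_citations:
        return 0.0
    hits = sum(domain in cited for cited in aio_citations.values())
    return hits / len(aio_citations)
```

For example, if your domain is cited in one of two tracked AIOs, your Share of Voice for that query set is 0.5. Citation rank could be layered on top by recording the position of each citation rather than just its presence.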

You also need to immediately communicate to leaders, stakeholders, partners, and clients that organic traffic is already or about to drop significantly.

For those who are subscribed to the paid version of The Growth Memo, we have a prepped slide deck to help you communicate these changes to your stakeholders coming out this week.

3. How Do People React To SERPs Emotionally?

Emotions drive decisions more than reason does.

As I wrote in Messy Middle:

“We’re more emotional animals and make more decisions from our gut than we like to admit.”

Besides engagement, we also wanted to know how users feel about the results they’re seeing.

Why? Because emotions have an impact on our decisions, from clicks to purchases.

Key Stats:

Image Credit: Kevin Indig

Why It Matters:

Emotion is tied to risk. Searchers internally ask, “What’s at stake?” when deciding whether to trust a result.

And as a result, high-stakes niches – or even expensive products – receive more skepticism and scrutiny from users.

This skepticism plays out in the form of clicks – a.k.a. your opportunities to convince people that you’re trustworthy.

The good news: For YMYL queries, users don’t rely on AIOs alone; they also validate and verify with classic results.

In low-risk niches where the threat of picking a wrong answer is low – like coupon codes or certain informational queries – brands can focus on page speed and price.

Overall, here’s what stuck out the most from this segment of the study:

  • Hesitation or confusion spikes on medical or money queries when AIOs cite unknown brands.
  • Reassurance-seeking (opening a second organic link “to be sure”) appears in 38% of sessions where an AIO is present.
  • No-reaction silent scans dominate product or local-intent tasks.

For high-stakes queries, users care about authoritative sources, as you can see in this clip from the study:

But organic results can still win if they signal better relevance.

  1. Sites in the health and finance spaces are more likely to see smaller traffic losses from AIOs.
  2. Aim to get mentioned or linked from highly authoritative sources, like .gov sites.
  3. Prioritize trust-building in your on-page experience to catch those double-check clicks. You can do this with visible editorial guidelines, expert authors + reviewers, and high-effort content production (original graphics, etc.).

4. What Influences The Type Of Result A User Chooses As Their Final Answer?

Up until now, my mental model of search – and I would argue the industry’s as well – was that users pick results by relevance: “Does it answer my question?”

But that has changed, and I think AI is a big reason.

We grouped over 550 think-aloud comments into four recurring themes to explain the new user behavior in the search results:

Source Trustworthiness (= Primary Click Motivator)

Whenever a recognized brand, authority site, .gov, or .edu appeared, it was chosen first in 58% of the cases where such a link was present.

Comparison/Validation (= Secondary Driver)

After reading an AIO or Blue Link, 18% of users still opened a Reddit thread, YouTube video, or second organic result “just to double-check.”

Snippet/Preview Relevance (= Speeds Decision)

After clearing the trust gate, users scanned the two-line snippet, bolded query terms, or AIO phrasing. When the snippet looked off-topic, users skipped even trusted domains.

Top-Of-Page Visibility (= Skews The Decision)

Limited viewport and thumb ergonomics make “position-0” features (AIO, featured snippet) and rank-1 organic vastly more influential on phones.

First-screen links were chosen 71% of the time. Users only scroll when the topic feels risky.

Image Credit: Kevin Indig

Why It Matters:

It’s not just about matching the intent of the query anymore. The old notion of “search intent relevance only” is outdated.

Brand authority and trustworthiness compound: Once you’re trusted, you likely outrank unknown rivals – even without richer snippets.

Of course, placement matters, and SERP real estate above the fold is scarce … and skews user decisions.

Trust is the core ingredient when it comes to anything AI. Search is no exception.

Insight:

Users apply a rapid two-step filter that looks like this:

“Do I trust this result?” → “Does this result answer my question?”

Look at these two clips from the study and notice how the participant selects results he explicitly trusts:

You’ll hear the participant state:

  • “Yelp is a good resource that I use a lot, so I’d probably click Yelp.”
  • “I’ll try to find one that has decent reviews and that’s nearby.”
  • “I trust Yelp.”

You’ll also hear this in Clip No. 2 above:

  • “US News and World Reports is trustworthy. Edmonds is trustworthy.”
  • “I picked this ’cause US News and World Reports is a trusted source, and they have a clear answer right here in the key takeaways.” (Note: They make information easy to find.)

Of course, there is nuance to this two-step filter.

The director of this study, Eric, adds the following observations:

  1. How-to and evergreen intents (like waterproofing boots, selling gift cards, or coupon hunting) are easiest to satisfy for AIOs. Users feel the AI is “tried and tested” and “super helpful” for these stable facts.
  2. Location-sensitive or personal-risk queries trigger more skepticism. One study participant shared aloud that, “It only says New York … that doesn’t help me,” and another shared, “I’d go straight to PayPal for accuracy.”
  3. Medical-risk examples show a mixed approach: Some users praise the concise summary, others insist on cross-checking with authoritative sources like the Mayo Clinic or the NHS.

Ultimately, we’ve noticed that the more time users spend reading the AIO, the higher their chance of trusting the answer and being influenced by it.

This is a priming effect: Once a brand or concept appears in the AIO, it remains top-of-mind.

Practical Takeaways:

  1. Trust is the gatekeeper. One of the biggest drivers of SEO success is how well you’ve earned “share of mind” before someone even sees your brand in an AIO or search result. Maybe that was always true, but AIOs make it non-negotiable now.
  2. Being present in the AIO (ideally high up) is valuable because it leaves an impression on users. That’s where impressions as a metric become more valuable.

5. How Do Demographics Influence Search Behavior And Interactions With AIOs?

We often talk about user intent in SEO, but completely ignore demographics.

History has shown that technological leaps, like the one we’re living through with AI right now, have a bigger impact on younger demographics.

The same is true of Search.

Our research found stark differences in how people of different ages engage with the search results.

Below, you’ll see the percentage of cases in which an organic result was chosen, depending on whether or not an AIO was present in the user’s results.

Image Credit: Kevin Indig

Why It Matters:

One-size-fits-all SEO practices don’t work anymore.

Just like for other social or content platforms, segmentation by demographic becomes as critical as segmentation by keyword intent.

Insight:

AIO adoption is generational.

Older audiences are still relying heavily on classic organic results. Younger demographics are more likely to focus on the AIO and validate with Reddit.

Practical Takeaways:

Prioritize content and SERP Feature bets by age segment.

For brands that target an older audience, double down on classic organic search. Don’t over-index on AIOs. The exception here would be queries with a local intent or online shopping searches.

In these cases, user intent overpowers age preferences.

Quick reminder here: Premium subscribers get expanded info later this week. Upgrade to paid.

6. How Do Devices Impact User Behavior?

Devices reflect search context.

Mobile devices are used more often on the go, which is why mobile searches are more likely to have local intent and SERP Features like local packs.

Mobile users are also more restrained in their behavior due to smaller screen real estate. These factors are also reflected in how users engage with AIOs.

Image Credit: Kevin Indig

Why It Matters:

You need to ensure mobile snippets and structured data are flawless; they get more scrutiny.

Mobile is now the primary remaining source of incremental Google traffic. Optimize for it first.

Insight:

Vertical scrolling and thumb ergonomics make mobile users dig deeper and click out more.

And when an AIO is missing, users revert to classic “blue link” behavior, especially on mobile, where more than one-third of searches produced a click to a non-Google site.

Practical Takeaways:

You need to track and compare mobile and desktop SERPs. We missed this in classic SEO, and now it’s much more important.

To prioritize which format you optimize for, you must validate that you get more mobile users to your site first (use Google Search Console).

If mobile is important to your target audience, regularly run separate mobile rank and snippet audits. And optimize the above-the-fold experience, in addition to:

  • Making the page skimmable.
  • Shortening time-to-value on the page (essentially, the time it takes to resolve the query or reach an insight from your site).
  • Simplifying navigation on the page and site.

7. How And When Do Users Engage With Community-Based, Video-Based, And Shopping Carousel Content?

The controversial rise of Reddit often leaves us wondering why Google gives community content so much prominence across all topics and verticals.

Our study explains what users really do.

Key Stats:

We looked at where clicks go when users leave Google or want to validate answers:

Image Credit: Kevin Indig

Keep in mind that SERP Features and corresponding user behavior vary by question or task performed.

In this study, only one task surfaced video results: “how to waterproof fabric boots at home.”

And here’s how users in this study responded to video results:

  • Users watched the preview frames, hovered for autoplay, then clicked through to YouTube in 5 of the 7 cases.
  • Although videos made up less than 2% of all logged elements, their 37-second dwell time exceeds AIO dwell time itself (31 seconds).
  • Users linger to watch autoplay previews or scroll thumbnails before deciding to click through.

For shopping-related tasks, we noticed the following:

  • 30% of clicks went to local packs.
  • 26.4% of clicks went to shopping modules (product grids).
  • 13.2% of clicks went to text ads.
  • 40% of clicks went to paid-organic results (text ads + PLAs).
  • 7 out of 10 clicks bypassed classic organic links in favor of Google-curated verticals or ads.

By the way, Amazon was a huge competitor to the shopping carousel.

Many people said, “I would just go to Amazon” (see clip below):

This study participant states: “Typically, I go to Amazon … scroll past the sponsored results and look for something with a lot of reviews.”

Why It Matters:

Social proof platforms (Reddit, YouTube) absorb the demand that AIOs can’t satisfy. Be present there.

Insight:

Community proof-points matter. When users leave the SERP after looking at an AIO, community links receive a lot of those clicks (18% when AIOs are not present).

People – especially the younger cohort that trusts AIO the most – use forums to get a (validating) voice from another human. Users in their 20s to 30s clicked Reddit or YouTube far more than older cohorts.

For some queries, like how-tos, users skip the AIO intentionally because they expect richer media, like videos.

Practical Takeaways:

  1. Invest in Organic Reddit (or the most relevant forum in your industry) when and if it appears for your most relevant queries. Seek both citations and social proof, as they reinforce each other.
  2. Optimize video thumbnails and the first 15 seconds. Users decide whether or not to click from the autoplay preview; if the opening doesn’t show the task in action, they skip.

Conclusion: Welcome To The New World Of Search

You made it to the end! Congratulations to you and your attention span (or did you just scroll here 🤔?).

To summarize everything you just (hopefully) read: If your brand isn’t surfaced in the first third of an AIO, it’s effectively invisible.

Search has flipped from a click economy to a visibility economy.

And within that economy, the new currency is authority, which now outranks search intent relevance.

Users ask, “Do I trust this brand?” before they even consider the answer.

If I had to boil the findings down to one sentence, it would be this: Users treat an AIO as a fact sheet: They quickly scan, expand if needed, and use minimal internal navigation.

Top Takeaways For Operators:

  1. Shift KPIs from clicks to presence. Track how often, how high, and for which queries your brand appears in AIO.
  2. Lead with authority. Invest in expert endorsements, .gov/.edu links, and PR that earns immediate trust.
  3. Package answers for skimmers. Key-fact boxes, bullets, and schema matter more than ever.
  4. Own the validation click. Seed Reddit threads, video demos, and comparison guides – users still seek a second opinion.
  5. Segregate desktop and mobile strategy. Treat desktop as a branding surface; fight for mobile if you need traffic.

Top Takeaways For Decision Makers:

  1. Expect – and budget for – a structural drop in organic sessions. AIOs cut outbound clicks roughly in half on desktop and by a third on mobile; revenue models tied to sessions (ads, affiliate) need hedging strategies.
  2. Shift KPIs and tooling from “rank” to “share of voice in AIO.” Track how often, how high, and for which queries your brand appears in the panel; classic position-tracking alone masks looming losses. Keep in mind we’re still refining the new metrics model.
  3. Invest in authority signals that secure trust instantly. Recognition by .gov, .edu, expert reviewers, or high-profile PR sways 58% of users to choose a cited source first. Brand trust precedes relevance in the new decision filter.
  4. Allocate resources to validation channels – Reddit, YouTube, forums – where many residual clicks go after an AIO. Owning the follow-up click preserves influence even when Google keeps the first.

Open Questions That Still Matter

  • Citation mechanics. How does Google choose which sources surface in the collapsed AIO, and in what order?
  • Attribution leakage. Will Search Console or GA ever expose AIO-driven impressions so brands can value “on-SERP” exposure?
  • Monetization models. If outbound traffic keeps shrinking, how will publishers, affiliates, and SaaS products replace lost session-based revenue?
  • Personalization vs. authority. Will future AIOs weigh personal history over global trust signals – and can brands influence that balance?
  • Regulatory impact. Could antitrust or copyright actions force Google to show more outbound links  – or fewer?
  • Behavior over time. Do users acclimate to AIOs and eventually click less (or more) as trust grows?

Hint: Paid subscribers can get answers to these questions (and can send me any question that’s top of mind!) related to this study.

Additional Resources

Other primary research that puts the qualitative data into perspective:

Methodology

Study Design And Objective

We conducted a mixed-methods, within-subjects usability study to quantify how Google’s AI Overviews (AIO) change user behavior.

Each participant completed eight live Google searches: six queries that consistently triggered an AIO and two that did not. This arrangement lets us isolate the incremental effect of AIO while holding person-level variables constant.

Participants And Recruitment

Sixty-nine English-speaking U.S. adults were recruited on Prolific between 22 March and 8 April 2025.

Eligibility required a ≥ 95% Prolific approval rate, a Chromium-based browser (for the recording extension), and a functioning microphone.

Participants chose their own device; 42 used mobile (61%) and 27 used desktop (39%).

Age distribution was: 18-24 yrs 29%, 25-34 yrs 30%, 35-44 yrs 12%, 45-54 yrs 17%, 55-64 yrs 3%, 65+ yrs 3%.

A pilot with eight users refined instructions; 18 further sessions were excluded for technical failure and four for non-compliance. The final dataset contains 525 valid task videos.

Task Protocol

Each session ran in UXtweak’s Remote Moderated mode.

After reading a task prompt, the participant navigated to google.com, searched, and spoke thoughts aloud. They declared a final answer (“I’m selecting this because…”) before clicking “Done” in an overlay.

Task set:

  1. Local service (“find a tax accountant near you”) – no AIO.
  2. Decision timing (“best month to buy a car”) – AIO.
  3. Low-cost product (“portable charger < $15”) – no AIO.
  4. Transactional YMYL (“transfer PayPal to bank”) – AIO.
  5. Coupon/deal (“car-rental promo code”) – AIO.
  6. Health YMYL (“why artificial sweeteners might cause health problems”) – AIO.
  7. Finance YMYL (“sell gift cards for instant payment”) – AIO.
  8. DIY how-to (“how to waterproof fabric boots”) – AIO.

Capture Stack

UXtweak recorded full-screen video (1080p desktop or device resolution mobile), cursor paths, scroll events, and audio. Recordings averaged 25 minutes; participants were paid $8 each.

Annotation Procedure

Three trained coders reviewed every video in parallel and logged one row per SERP element that held attention for ≈5 seconds or longer. Twenty-three variables were captured, grouped as:

  • Structural – participant-ID, task-ID, device, query.
  • Feature – element type (AIO, organic link, map pack, sponsored, video pack, shopping carousel, forum, etc.).
  • Engagement – scroll depth inside AIO (0/25/50/75/100%), number of scroll gestures, dwell-time (s), internal clicks, outbound clicks.
  • Behavioral – spoken reaction (hesitation, confusion, reassurance-seeking, none), reading style (skim, re-read, etc.), AIO button used (show-more, citation click, carousel chip).
  • Outcome – final answer, satisfaction flag, explicit trust flag.
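The grouped variables above amount to one record per attended SERP element. A hypothetical sketch of that record as a simple type (the field names are illustrative, not the study's actual schema):

```python
from dataclasses import dataclass

@dataclass
class SerpAnnotation:
    """One logged row per SERP element that held a participant's attention.

    Field names are illustrative; the study captured 23 variables in total.
    """
    participant_id: str
    task_id: str
    device: str            # "mobile" or "desktop"
    element_type: str      # "AIO", "organic", "map_pack", "video", ...
    scroll_depth: float    # fraction of the AIO scrolled, 0.0-1.0
    dwell_s: float         # dwell time in seconds
    outbound_clicks: int
    trust_flag: bool       # coder-judged explicit trust
```

A row-per-element schema like this is what makes the later stratified analysis (by device, age, and intent) straightforward.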

The research director (Eric van Buskirk) spot-checked 10% of videos. Inter-coder agreement: dwell-time SD ± 3 s; Cohen’s κ on trust category = 0.79 (substantial).

Data Processing And Metrics

Annotations were exported to Python/pandas 2.2. Scroll values entered as whole numbers were normalized to fractions (e.g., 80 → 0.80).

Dwell times were Winsorized at the 99th percentile to dampen outliers. This produced 408 evaluated SERP elements and ≈ 350 valid AIO observations.
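This cleanup step can be sketched in pandas. This is a minimal illustration, assuming hypothetical column names (`scroll_pct`, `dwell_s`), not the study's actual pipeline:

```python
import pandas as pd

def clean_annotations(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize scroll values to fractions and cap extreme dwell times."""
    df = df.copy()
    # Scroll depths entered as whole numbers (e.g., 80) become fractions (0.80);
    # values already in 0-1 are left alone.
    df["scroll_pct"] = df["scroll_pct"].where(
        df["scroll_pct"] <= 1, df["scroll_pct"] / 100
    )
    # Winsorize dwell time: clip everything above the 99th percentile.
    cap = df["dwell_s"].quantile(0.99)
    df["dwell_s"] = df["dwell_s"].clip(upper=cap)
    return df
```

Winsorizing (clipping, rather than dropping, outliers) keeps every observation in the dataset while preventing a handful of very long dwells from distorting the means.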

Statistical Analysis

Descriptives (means, medians, proportions) were stratified by device, age, and query intent.

Spearman rank correlations tested monotonic relationships among scroll %, dwell, trust, and query-refinement counts (power >.8 to detect ρ ≥ .25).

Welch t-tests compared mobile vs desktop means; McNemar χ² compared click-through incidence with vs without AIO.
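Libraries such as SciPy provide these tests directly (e.g., `scipy.stats.spearmanr`, `ttest_ind(..., equal_var=False)`). As a NumPy-only sketch of the two underlying formulas (ignoring rank ties, which a production analysis should handle):

```python
import numpy as np

def spearman_rho(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks.

    Simplified sketch that ignores tied values.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t statistic for two samples with unequal variances."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(va + vb))
```

Spearman correlations suit this data because relationships like scroll depth vs. dwell time are expected to be monotonic but not necessarily linear; Welch's test avoids assuming mobile and desktop dwell times share a variance.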

Reliability And Power

With n ≈ 350 AIO rows, the standard error for a proportion of .50 is ≈ .05; correlations ≥ .30 are significant at α =.05. Cross-coder checks ensured temporal metrics and categorical judgements were consistent.

Limitations

Sample skews young (58% ≤ 34 yrs) and U.S.-based; think-aloud may lengthen dwell by ~5-10 s. Coder-judged trust/emotion involves subjectivity despite reliability checks.

Study window overlaps Google’s March 2025 core update; SERP UI was in flux. Findings generalize to Chromium browsers; Safari/Firefox users were not sampled.

Ethical Compliance

Participants gave informed consent; recordings stored encrypted; no personally identifying data retained. Study conforms to Prolific’s ethics policy and UXtweak TOS.

This narrative supplies sufficient procedural and statistical detail for replication or secondary analysis.

1 AI Overviews: About last week 


SEJ’s Content & SEO Strategist Shelley Walsh prerecorded an interview with Kevin before the launch to talk about his research. For more explanation about his findings, watch below.

Featured Image: Paulo Bobita/Search Engine Journal

Google Links To Itself: 43% Of AI Overviews Point Back To Google via @sejournal, @MattGSouthern

New research shows that Google’s AI Overviews often link to Google, contributing to the walled garden effect that encourages users to stay longer on Google’s site.

A study by SE Ranking examined Google’s AI Overviews in five U.S. states. It found that 43% of these AI answers contain links redirecting users to Google’s search results. Each answer typically includes 4-6 links to Google.

This aligns with recent data indicating that Google users make 10 clicks before visiting other websites. These patterns suggest that Google is working to keep users within its ecosystem for longer periods.

Google Citing Itself in AI Answers

The SE Ranking study analyzed 100,013 keywords across five states: Colorado, Texas, California, New York, and Washington, D.C.

It tracked how Google’s AI summaries function in different regions. Although locations showed slight variance, the study found that Google.com is the most-cited website in AI Overviews.

Google appears in about 44% of all AI answers, significantly ahead of the next most-cited sources (YouTube, Reddit, Quora, and Wikipedia), which appear in about 13%.

The research states:

“Based on the data combined from all five states (141,507 total AI Overview appearances), our data analysis shows that 43.42% (61,437 times) of AI Overview responses contain links to Google organic results, while 56.58% of responses do not.”

Image Credit: SE Ranking

Building on the Walled Garden Trend

These findings complement a recent analysis from Momentic, which found that Google’s “pages per visit” has reached 10, indicating users spend significantly more clicks on Google before visiting other sites.

Overall, this research reveals Google is creating a more self-contained search experience:

  • AI Overviews appear in approximately 30% of all searches
  • Nearly half of these AI answers link back to Google itself
  • Users now make 10 clicks within Google before leaving
  • Longer, more specific queries trigger AI Overviews more frequently

Google still drives substantial traffic outward: 175.5 million visits in March, according to Momentic.

However, it’s less effective at sending users away than ChatGPT. Google produces just 0.6 external visits per user, while ChatGPT generates 1.4 visits per user.

More Key Stats from the Study

The SE Ranking research uncovered several additional findings:

  • AI Overviews almost always appear alongside other SERP features (99.25% of the time), most commonly with People Also Ask boxes (98.5%)
  • The typical AI Overview consists of about 1,766 characters (roughly 254 words) and cites an average of 13.3 sources
  • Medium-difficult keywords (21-40 on the difficulty scale) most frequently trigger AI Overviews (33.4%), whereas highly competitive terms (81-100) rarely generate them (just 3.7%)
  • Keywords with CPC values between $2-$5 produce the highest rate of AI Overviews (32%), while expensive keywords ($10+) yield them the least (17.3%)
  • Fashion and Beauty has the lowest AI Overview appearance rate (just 1.4%), followed by E-Commerce (2.1%) and News/Politics (3.8%)
  • The longer an AI Overview’s answer, the more sources it cites. Responses under 600 characters cite about five sources, while those over 6,600 characters cite around 28 sources.

These statistics further emphasize how Google’s AI Overviews are reshaping search behavior.

This data stresses the need to optimize for multiple traffic sources while remaining visible within Google’s results pages.

U.S. Copyright Office Cites Legal Risk At Every Stage Of Generative AI via @sejournal, @martinibuster

The United States Copyright Office released a pre-publication version of a report on the use of copyrighted materials for training generative AI, outlining a legal and factual case that identifies copyright risks at every stage of generative AI development.

The report was created in response to public and congressional concern about the use of copyrighted content, including pirated versions, by AI systems without first obtaining permission. While the Copyright Office doesn’t make legal rulings, the reports it creates offer legal and technical guidance that can influence legislation and court decisions.

The report offers four reasons AI technology companies should be concerned:

  1. The report states that many acts of data acquisition, the process of creating datasets from copyrighted work, and training could “constitute prima facie infringement.”
  2. It challenges the common industry defense that training models does not involve “copying,” noting that the process of creating datasets involves the creation of multiple copies, and that improvements in model weights can also contain copies of those works. The report cites reports of instances where AI reproduces copyrighted works, either word for word or “near identical” copies.
  3. It states that the training process implicates the right of reproduction, one of the exclusive rights granted to copyright holders, and emphasizes that memorization and regurgitation of copyrighted content by models may constitute infringement, even if unintended.
  4. Transformative use, where it adds a new meaning to an original work, is an important consideration in fair use analysis. The report acknowledges that “some uses of copyrighted works in AI training are likely to be transformative,” but it “disagrees” with the argument that AI training is transformative simply because it resembles “human learning,” such as when a person reads a book and learns from it.

Copyright Implications At Every Stage of AI Development

Perhaps the most damning part of the report is where it says that there may be copyright issues at every stage of the AI development and lists each stage of development and what may be wrong with it.

A. Data Collection and Curation

The steps required to produce a training dataset containing copyrighted works clearly implicate the right of reproduction…

B. Training

The training process also implicates the right of reproduction. First, the speed and scale of training requires developers to download the dataset and copy it to high-performance storage prior to training.96 Second, during training, works or substantial portions of works are temporarily reproduced as they are “shown” to the model in batches.

Those copies may persist long enough to infringe the right of reproduction,160 depending on the model at issue and the specific hardware and software implementations used by developers.

Third, the training process—providing training examples, measuring the model’s performance against expected outputs, and iteratively updating weights to improve performance—may result in model weights that contain copies of works in the training data. If so, then subsequent copying of the model weights, even by parties not involved in the training process, could also constitute prima facie infringement.

C. RAG

RAG also involves the reproduction of copyrighted works.110 Typically, RAG works in one of two ways. In one, the AI developer copies material into a retrieval database, and the generative AI system can later access that database to retrieve relevant material and supply it to the model along with the user’s prompt.111 In the other, the system retrieves material from an external source (for example, a search engine or a specific website).181 Both methods involve making reproductions, including when the system copies retrieved content at generation time to augment its response.

D. Outputs

Generative AI models sometimes output material that replicates or closely resembles copyrighted works. Users have demonstrated that generative AI can produce near exact replicas of still images from movies,112 copyrightable characters,113 or text from news stories.114 Such outputs likely infringe the reproduction right and, to the extent they adapt the originals, the right to prepare derivative works.”

The report finds infringement risks at every stage of generative AI development, and while its findings are not legally binding, they could be used to create legislation and serve as guidance for courts.

Takeaways

  • AI Training And Copyright Infringement:
    The report argues that both data acquisition and model training can involve unauthorized copying, possibly constituting “prima facie infringement.”
  • Rejection Of Industry Defenses:
    The Copyright Office disputes common AI industry claims that training does not involve copying and that AI training is analogous to human learning.
  • Fair Use And Transformative Use:
    The report disagrees with the broad application of transformative use as a defense, especially when based on comparisons to human cognition.
  • Concern About All Stages Of AI Development:
    Copyright concerns are identified at every stage of AI development, from data collection, training, retrieval-augmented generation (RAG), and model outputs.
  • Memorization and Model Weights:
    The Office warns that AI models may retain copyrighted content in weights, meaning even use or distribution of those weights could be infringing.
  • Output Reproduction and Derivative Works:
    The ability of AI to generate near-identical outputs (e.g., movie stills, characters, or articles) raises concerns about violations of both reproduction and derivative work rights.
  • RAG-Specific Infringement Risk:
    Both methods of RAG, copying content into a database or retrieving from external sources, are described as involving potentially infringing reproductions.

The U.S. Copyright Office report describes multiple ways that generative AI development may infringe copyright law, challenging the legality of using copyrighted data without permission at every technical stage, from dataset creation to model outputs. It rejects the use of the analogy of human learning as a defense and the industry’s broad application of fair use. Although the report doesn’t have the same force as a judicial finding, the report can be used as guidance for lawmakers and courts.

Featured Image by Shutterstock/Treecha

Google Ads AI Vs. Third-Party AI Tools: Comparison For Google Ads Creatives

Every day, marketing teams face a crucial decision: Should they rely on Google’s built-in AI tools or invest in custom solutions for specific ad campaign tasks?

I’ve watched this debate play out countless times with clients.

Google continues adding more AI features for tasks like ad copy generation, headline creation, image generation, and product feed optimization.

Meanwhile, specialized tools and custom solutions are thriving, while Google’s native AI features have yet to deliver a clear breakthrough.

Recent research supports this tension.

Gherheș et al. (2025) found that while AI-generated content can outperform human-created alternatives in certain contexts, the quality varies significantly depending on implementation and purpose.

Their study revealed that over 50% of users preferred AI-generated informative content over sensationalized approaches, suggesting that how AI is deployed matters more than the technology itself.

But which approach actually delivers better results? And at what cost?

As Pavlik (2024) notes in his analysis of AI in journalism, tools like ChatGPT don’t simply replace human creativity but rather present opportunities for “improving the quality and effectiveness” of creative work when properly integrated into existing workflows.

A recent study by Ameet Khabra compared the performance of human-written versus AI-generated ad copy in Google Ads campaigns.

Over an eight-week period with a $500 budget, human-crafted ads significantly outperformed AI-created content from Copy AI, achieving 60% more clicks, a 1.33% higher click-through rate, and a lower cost per click ($4.85 vs. $6.05).

The study attributed human copywriters’ superior performance to their ability to understand audience emotions, employ creativity and emotional appeal, adapt to specific contexts, and leverage cultural nuances that AI still struggles to replicate.

While acknowledging AI’s evolving capabilities and potential value as a supplementary tool, the study emphasizes the enduring importance of human creativity in crafting compelling advertising messages that drive engagement and conversions.

Regardless of these mixed research findings, one thing is certain: AI is increasingly embedded in creative processes across marketing, and its integration is inevitable.

The question isn’t whether AI will play a role in advertising creation, but rather how marketers can best leverage these tools to enhance their campaigns.

As AI capabilities evolve rapidly, today’s limitations may be tomorrow’s strengths. With this inevitability in mind, marketers need practical guidance on navigating the current landscape of available solutions.

This article compares Google Ads integrated AI tools against third-party and custom solutions for creative and optimization tasks specifically.

AI-Generated Ad Copy

Google AI Automatically Created Assets

Google’s AI text generator aims to streamline the ad creation process by converting basic product descriptions into campaign-ready assets.

The platform encourages advertisers to input unique selling propositions and key product features to generate contextually relevant ad copy.

Upon testing this tool with a simulated video game business specializing in refurbished PlayStation 5 consoles and games, the performance fell notably short of expectations.

The output quality was inconsistent, but more concerning were the significant compliance issues observed.

In one particularly problematic instance, the system generated the phrase “Welcome to the Amazon® Website” as suggested ad text, presenting a clear trademark infringement risk and potential legal exposure for advertisers.

Such critical errors highlight a fundamental limitation in Google’s native AI solution: While offering workflow convenience, it demonstrates inadequate safeguards for brand compliance and legal protection.

The system also produced contextually inappropriate messaging, such as “PlayStation 5 Problems Solved,” which misaligned with sales-oriented campaign objectives by suggesting repair services or technical support rather than product offerings.

Without careful human review, these problems make the tool risky to use, especially for businesses in competitive markets where mistaken identity or inaccurate representations could lead to serious legal issues and damage to your reputation.

Image from author, April 2025

When generating longer headlines, there were far fewer results.

Only three ad suggestions appeared, one of which included free shipping information for orders over $50, which was a hallucination, as this information was never disclosed in the prompt or the landing page.

Image from author, April 2025

Creating descriptions was even worse: There was only one ad suggestion, and it wasn’t even a good one from a copywriting perspective.

Image from author, April 2025

After trying different prompts, I was able to get at least five new descriptions out of Google AI.

Still, the results were quite disappointing. The ad copy contained hallucinations such as “free shipping over 100 USD,” and used the placeholder business name “Example Video Games” instead of the account’s business name or one extracted from the landing page or URL.

Overall, these are underwhelming results, considering Google is one of the largest companies on earth and owns the biggest online advertising platform.

Image from author, April 2025

Third-Party Ad Copy Creation

While Google’s AI text generator struggles with brand accuracy and contextual relevance, several general-purpose AI models offer more sophisticated ad copy creation capabilities that balance automation with quality control.

Leading general AI assistants like Claude, ChatGPT, and Gemini represent compelling alternatives for marketers seeking higher-quality ad copy generation.

Unlike Google’s more constrained system, these platforms offer greater flexibility in handling nuanced prompting and brand-specific requirements.

Image by author, April 2025

In testing with our video game business scenario, we prompted each model to create headlines for refurbished PlayStation 5 consoles.

The results demonstrated significant advantages over Google’s native offering:

  • Claude 3.7 produced premium-positioned headlines like “Save On Certified PS5 Consoles,” “Quality PS5 | Full Warranty,” and “Premium PS5 | Fast Shipping” that emphasize both value and quality assurance. Claude’s headlines maintained strong brand positioning while highlighting availability (“PS5 Consoles In Stock Now”) and price advantages (“PS5 Consoles 30% Off Retail”) without sacrificing perceived value.
  • ChatGPT (o3-mini) focused more on emotional appeal and deal framing with options such as “PS5 Deals You’ll Love,” “Get More, Spend Less PS5,” and “Budget PS5, Premium Fun.” ChatGPT’s approach effectively balanced affordability messaging with aspirational elements, potentially appealing to both value-conscious and experience-focused consumers.
  • Gemini 2.0 took a more direct value-oriented approach with straightforward headlines like “Refurbished PS5 Deals,” “Cheap Used PS5,” and “Discount PS5 Titles.” While less nuanced in positioning, Gemini’s headlines clearly communicate the core offering and may perform well for price-sensitive segments or direct response campaigns.

All three models demonstrated superior context awareness compared to Google’s native tool, with each showcasing different strategic approaches to headline creation.

They successfully avoided the hallucinations and brand confusion issues observed in Google’s Ad tool, while providing greater headline variety tailored to different marketing objectives.

The key advantage these general AI assistants offer is their adaptability and more refined understanding of marketing language.

By providing detailed prompting with brand guidelines, target audience information, and specific messaging requirements, marketers can achieve significantly better results than with Google’s more limited integrated tool.

For businesses where ad copy directly impacts conversion rates, leveraging these more sophisticated AI options can yield higher-quality creative assets that better represent brand positioning and speak more effectively to customer needs.

Despite Gemini’s relevant headline ideas, it struggled to adhere to the 30-character limit for some prompts on Google Ads headlines – a surprising limitation given that Gemini is Google’s own AI model and would be expected to understand Google Ads guidelines inherently – while Claude and ChatGPT consistently produced properly sized headlines without major additional editing or truncation.
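The 30-character headline limit mentioned above (and the 90-character limit for descriptions) is easy to enforce programmatically before assets ever reach the Ads editor. Here is a minimal sketch; the headline strings are hypothetical examples, not output from any specific model:

```python
# Google Ads responsive search ads allow headlines up to 30 characters
# and descriptions up to 90 characters.
HEADLINE_LIMIT = 30
DESCRIPTION_LIMIT = 90

def validate_assets(headlines, descriptions):
    """Split AI-generated assets into usable and oversized lists."""
    ok_h = [h for h in headlines if len(h) <= HEADLINE_LIMIT]
    bad_h = [h for h in headlines if len(h) > HEADLINE_LIMIT]
    ok_d = [d for d in descriptions if len(d) <= DESCRIPTION_LIMIT]
    bad_d = [d for d in descriptions if len(d) > DESCRIPTION_LIMIT]
    return ok_h, bad_h, ok_d, bad_d

# Hypothetical AI-generated headlines for the refurbished PS5 example.
headlines = [
    "Refurbished PS5 Deals",                       # 21 chars: fine
    "Certified Refurbished PlayStation 5 Consoles" # 44 chars: too long
]
ok, bad, _, _ = validate_assets(headlines, [])
print(bad)  # flags the oversized headline for editing or truncation
```

A check like this is cheap insurance when batch-generating copy with any model, not just Gemini.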

Image Generation

Google AI Image Generation

Image generation is another area where AI can really shine and reduce the workload.

Images are a core asset in ecommerce, not only used for product images, but also for category pages, shop banners, display ads, and more.

For our virtual video game business, I tried to create some images matching our PlayStation 5 asset group. The results were interesting, to say the least.

The first generated image looks very similar to an Xbox: specifically, an Xbox One S or the Xbox Series S, which is the latest model.

Granted, no logos or trademarks are visible, and the form factor is a little off.

AI-generated image by author, April 2025

Even more interesting, depending on the exact prompt, Google AI shows an error message related to branded items and content restrictions.

Image from author, April 2025

Another generated image looks a little more like a PlayStation, though not the PlayStation 5 that was described and advertised, but rather the older PlayStation 4 model.

Again, the content restrictions are most likely responsible for the results.

AI-generated image by author, April 2025

While the image results are somewhat disappointing for those branded items, it is understandable that Google AI follows content restrictions and brand guidelines to avoid any legal issues, as the PlayStation is a trademark of Sony, and the Xbox is a trademark of Microsoft.

It’s interesting to see that Google AI tries to work around this limitation and still creates an image, but in that specific case, the image is more or less useless, as there is little value in showcasing a non-existent video game console.

A question here would be why the content restrictions and guidelines did not apply to text creation when the text asset “Welcome to the Amazon® Website” was created.

To check the image creation quality, I tried a different approach for non-branded items in the dog food category.

The image is good at first glance since multiple products are shown with a dog in the picture, supporting the category, but some things are off.

The text in the image is still a mess for Google AI. Plus, the proportions are wrong. The dog is way too small, considering the cans of dog food displayed, which are small items.

AI-generated image by author, April 2025

Better than video game consoles, but still not good enough to rely solely on Google AI without any backup or alternative.

Third-Party Image Generation

ChatGPT

Using the same prompts to create images, ChatGPT delivers amazing results compared to the Google Ads integrated image creator.

Visually, it was able to recreate a PlayStation video game console with a gaming controller.

ChatGPT even got details right, except the brand logo, which might be due to some brand protection measures.

AI-generated image by author, April 2025

Also, the latest Xbox model was created with in-depth details.

This time, even the Xbox logo was created, perhaps because of some arrangement between OpenAI and Microsoft, or because the trademark restrictions have loopholes.

AI-generated image by author, April 2025

At last, the creation of the dog food image was also a success.

The prompt included the brand to be named “Doug’s Best Dog Food,” which was perfectly written on the product, along with a nicely placed bowl full of pellets in front of a golden retriever.

In comparison, Google AI was able to create a decent image, but upon closer inspection, issues with rendering text were apparent, something ChatGPT handled perfectly.

AI-generated image by author, April 2025

Qwen

Qwen is an AI model family with image generation capabilities from Alibaba, a Chinese tech company.

AI-generated image by author, April 2025

The image from Qwen clearly had an “AI” look compared to the ChatGPT or Google AI image.

However, it got the brand name “Doug’s Best Dog Food” right. With some prompt refinement, Qwen can produce decent images, if you are okay with a more digital look.

Google AI was able to create a more real-life looking image, with the downside of not being able to display the brand name correctly.

Video Creation Tool

Google Ads Video Creation Tool

Google’s built-in video creation tool aims to make video advertising accessible to marketers without production resources.

The tool covers multiple marketing objectives – from brand awareness and consideration to direct sales and app promotion – accommodating various business categories, including apps, products, and services.

It offers flexibility with vertical, square, and horizontal formats in lengths ranging from quick six-second spots to 15-second and longer videos.

However, the tool’s output quality reveals its limitations.

Most videos are essentially slideshows, stitching together static images, logos, and text overlays rather than fluid video content.

While this approach democratizes video ad creation, the results often lack the polish and engagement power of properly produced video content.

For many marketers, this represents the fundamental tradeoff of Google’s native tools: accessibility and integration vs. creative limitations that might impact performance.

Image from author, April 2025

At best, marketers get a nice-looking slideshow; calling it a serious advertising video would be a stretch.

The better templates are mostly for app-related videos, where at least some kind of animation is included with a finger doing some phone touch gestures.

Overall, the native video creation tool serves as a backup for marketers who need a video immediately and don’t have any tools on hand.

In any other case, it’s best to postpone video creation and start with a more capable tool.

Third-Party Video Creation

Canva Video Creation

Screenshot from Canva, April 2025

Canva makes much better videos than Google Ads’ built-in tools with almost the same effort.

Google mostly creates basic slideshows, but Canva gives you thousands of professional templates, animations, and stock videos to use in your marketing.

The simple drag-and-drop design lets you make engaging videos with smooth transitions and text effects that keep viewers engaged.

Unlike Google’s static slideshows, Canva creates flowing video content that looks professionally made.

If you spend just a few more minutes using Canva instead of Google’s tool, your videos will look much more professional and likely perform better with your audience.

Qwen

Alibaba’s Qwen is a strong competitor to Google Ads’ basic video tools, giving marketers better videos without needing special skills.

While Google just makes simple slideshows, Qwen uses AI to turn your images and text into dynamic videos with smooth movements and professional transitions.

The tool is great at automatically creating cohesive visual stories even from minimal input, adding motion to still images in ways that look professional.

What stands out is how Qwen creates animations that actually match your product type and brand style, avoiding the one-size-fits-all look of Google’s templates.

Though not as well-known as Canva in the West, Qwen’s AI approach produces polished videos that look intentionally designed rather than template-made, making it a great choice for marketers who want better videos than what Google offers.

Image by author, April 2025
Image by author, April 2025

For the example of a dog food brand, Qwen delivered exceptional results.

With the first prompt, Qwen created a five-second clip of golden retrievers playing around and approaching a human hand to eat dog food from it.

Not only did the video look pretty close to real life compared to the “AI look” of Qwen’s image generation, but Qwen also did this as a free tool, at no cost.

Compared to Google’s video creation tool, which is basically a PowerPoint presentation, Qwen performs remarkably well.

Sora

Another great video tool is Sora from OpenAI.

Since Sora is included in the $20 Premium membership, you can generate videos at almost no cost, though with some limitations on video quality and length.

Still, few other tools out there can generate decent AI video output at that cost.

Product Image Improvements

Product Studio

The Product Studio for Merchant Center Next is a beta image optimization tool within the Merchant Center, also accessible via the Google App within Shopify.

It allows for creating product images in various scenes, as well as removing backgrounds and increasing image quality.

Image from author, April 2025
Image from author, April 2025

These are two attempts to display a gaming controller in a scene.

Although the quality of the product image has remained reasonably good, the scenes are barely usable.

The image processing prompt was “Showcase this controller in a living room, in front of a TV with neon lighting.”

In practice, the desired scene was not even remotely depicted. The controller in front of notebooks or pens is out of place; the second attempt resulted in three black backgrounds and a fiery background.

Free Alternatives To Google’s Product Studio

Unlike Google’s Product Studio, which struggles with accurate scene generation as shown in the gaming controller example, several free tools offer more reliable image optimization capabilities.

Canva’s free tier includes a background removal tool that produces clean cutouts with remarkable accuracy.

While scene creation is more limited in the free version, you can still place products on various pre-designed backgrounds or use their extensive template library to create more contextually appropriate product displays than what I experienced with Google’s tool.

For background removal, remove.bg is a specialized tool that focuses exclusively on that task, with impressive results even for complex products like the gaming controller.

The free version has size limitations but delivers professional-quality cutouts that can then be placed into scenes using other tools.

For everything more complex, GIMP is a free and capable tool. This open-source image editor provides robust capabilities for both background removal and scene composition.

Though it has a steeper learning curve, GIMP offers precise control over image quality enhancement and realistic product placement.

Final Thoughts

Google’s native AI tools, while conveniently integrated into their advertising platform, consistently underperform compared to third-party alternatives.

The evidence is clear and concerning. Google’s AI ad copy generator produced legally problematic content with brand infringement risks and hallucinated product details.

Its image generation produced visually inaccurate representations. The video creation tool delivered little more than basic slideshows rather than engaging video content.

Meanwhile, third-party solutions or Google’s own Gemini model used externally demonstrated superior capabilities across all creative functions:

  • General-purpose AI assistants like Claude and ChatGPT produced more compelling, accurate, and compliant ad copy.
  • Specialized tools like Canva, remove.bg, and GIMP offered vastly superior image manipulation options.
  • Video creation platforms like Canva and Qwen delivered professional-quality animation and transitions impossible with Google’s basic tools.

This performance gap reveals the fundamental tradeoff marketers face: convenience of integration vs. creative quality and performance.

Google’s in-platform AI tools provide workflow efficiency but at the significant cost of creative limitations, brand safety concerns, and potential legal exposure.

For marketers serious about campaign performance, the path forward is clear: Leverage external AI solutions for creative development, then import these higher-quality assets into Google’s advertising platform.

This hybrid approach maintains the advantage of Google’s targeting and delivery mechanisms while avoiding the substantial limitations of their creative AI tools.

As AI continues to evolve in marketing, successful advertisers will be those who strategically select the right tools for each specific function rather than defaulting to in-platform options for convenience alone.

The evidence suggests that, for now, the marketing advantage lies decidedly with those willing to look beyond Google’s native AI for their creative development needs.

Featured Image: KinoMasterskaya/Shutterstock

The Triple-P Framework: AI & Search Brand Presence, Perception & Performance

As brands compete for market share across a whole range of AI platforms, each with its own way of presenting information, brands are on red alert.

The three pillars of presence, perception, and performance that I discuss in this article may help marketers navigate this shift. This is especially true as search and AI undergo their biggest makeover ever.

What’s driving this change?

AI isn’t just retrieving information anymore – it’s actively evaluating, framing, and recommending brands before prospects even click a link.

It’s happening now, and it’s accelerating.

Think about it. Today, in many ways, ChatGPT has become just as synonymous with AI as Google was when it launched core search.

More and more users and marketers are experimenting with and utilizing Google AIO, ChatGPT, Perplexity, and more.

According to a recent BrightEdge survey, over 53% of marketers use multiple (two or more) AI search platforms weekly.

AI Is Reshaping How Brands Are Presented And Perceived

Consider how buyers research options today: A traveler planning a Barcelona vacation once needed dozens of separate Google searches, each representing an opportunity for visibility.

Now? They ask one question to an AI assistant and receive a complete itinerary, compressing what once took 50 touchpoints into a single interaction.

AI is no longer a passive search engine. It’s an active evaluator, interpreting intent, forming opinions, and determining which brands deserve attention.

In enterprise SEO and B2B contexts, the shift is even more pronounced. AI is effectively writing the request for proposal (RFP), establishing evaluation criteria, and creating shortlists without brands having direct input.

Take enterprise software evaluation, for instance. When a CIO asks an AI about the “best enterprise resource planning solutions,” the AI’s response typically features:

  • A curated shortlist of vendors.
  • Evaluation criteria that the AI deems relevant.
  • Strengths and limitations of each solution.
  • Recommendations based on various scenarios.

These responses don’t just inform decisions. They frame the entire evaluation process before a vendor’s content is visited.

The question isn’t whether this transformation is happening. It’s whether your brand is prepared for it.

Read more: 5 Key Enterprise SEO And AI Trends For 2025

The Triple-P Framework For AI Search Success

After analyzing thousands of AI search responses using our BrightEdge Generative Parser™, I’ve developed the Triple-P framework (Presence, Perception, and Performance) as a strategic compass for navigating this new landscape.

Let’s break down each component.

Presence: Beyond Traditional Rankings

While Google still commands 89.71% of search market share, the ecosystem is diversifying rapidly:

  • ChatGPT: 19% monthly traffic growth.
  • Perplexity: 12% monthly traffic growth.
  • Claude: 166% monthly traffic surge.
  • Grok: 266% early-stage spike.

(Source: BrightEdge Generative Parser™ April 2025)

Our research shows that the presence of AI Overviews has nearly doubled since June 2024, with comparison features growing by 70-90% and product visualization features by 45-50% in B2B sectors.

Image from author, May 2025

For enterprise marketers, Google is always your starting point. However, it’s not just about ranking on Google anymore; it’s about showing up wherever AI models showcase your brand.

For example, consider these industry-specific implications:

  • For CPG brands: When consumers ask about product sustainability, AI doesn’t just list eco-friendly options; it evaluates authenticity based on consistent messaging across digital touchpoints.
  • For SaaS companies: Buyers researching integration capabilities receive AI-curated assessments that either position you as a compatibility leader or exclude you entirely.
  • For healthcare providers: Patient questions about treatment options trigger AI responses that cite the most authoritative content, not necessarily the highest-ranking websites.

We are in an era of compressed decision-making. Invisibility equals elimination.

Perception: When AI Forms Opinions

The most revealing insight from our research is that only 31% of AI-generated brand mentions are positive; of those, just 20% include direct recommendations.

Source: BrightEdge AI Catalyst and Generative Parser ™, May 2025

This is a wake-up call for all marketers, especially those managing a brand.

Even when your brand appears in AI results, how it’s framed varies dramatically depending on the AI model, training data, and interpretive logic.

In some AI engines, your brand may appear as the industry leader. In others, you may be completely absent.

What The Data Shows:

  • Brands with strong pre-existing recognition receive more positive mentions in AI responses.
  • Consistent messaging across digital touchpoints makes brands more likely to be cited positively.
  • AI systems appear to “average” brand signals across the web when forming perceptions.

When we analyzed sentiment distribution (April 2025) in AI responses by industry, we saw significant variation that maps to vertical-specific factors. For example:

  • Finance: Positive mentions aligned around good content on regulatory compliance and security.
  • Healthcare: Positive mentions aligned around good content with accuracy and credibility as key factors.
  • Retail: Positive mentions aligned around good customer experience and shopping.
  • Technology: Positive mentions aligned around content on innovation and reliability as primary criteria.

The implications are clear: Perception management is now as crucial as presence.

How does this play out in practice?

When brands implement coordinated perception management strategies across multiple channels, they see improvements in AI sentiment within 60-90 days.

Performance: New Metrics That Matter

The final P (Performance) requires entirely new measurement approaches.

When AI Overviews appear in search results, click-through rates often drop by up to 50%, according to internal BrightEdge data. Yet, conversion rates typically remain strong, suggesting AI qualifies leads before they reach your site.

“We’re entering an era where impressions will be high, click-through rates may drop, but conversions will increase,” I explained at our recent quarterly briefing. AI filters options and delivers buyers who are closer to decisions.

The impact varies dramatically by query type:

  • Informational queries: Reduction in clicks, minimal conversion impact.
  • Navigational queries: Reduction in clicks, negligible conversion impact.
  • Commercial queries: Reduction in clicks, higher conversion rates.
  • Transactional queries: Reduction in clicks, higher conversion rates.

This pattern suggests AI is most effective at qualifying commercial intent, delivering more purchase-ready traffic.

And impressions matter now – they are a new brand metric.

Five Essential AI Search Metrics:

  1. AI Presence Rate: Percentage of target queries where your brand appears in AI responses.
  2. Citation Authority: How consistently you are cited as the primary source.
  3. Share Of AI Conversation: Your semantic real estate in AI answers versus competitors.
  4. Prompt Effectiveness: How well your content answers natural language prompts.
  5. Response-To-Conversion Velocity: How quickly AI-influenced prospects convert.
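To make the first and third metrics concrete, here is a minimal sketch of how they could be computed from logged AI responses. The query log and brand names are hypothetical placeholders; in practice, the responses would come from each platform’s API or a monitoring tool:

```python
def ai_presence_rate(responses, brand):
    """Metric 1: share of target queries whose AI response mentions the brand."""
    hits = sum(1 for text in responses.values() if brand.lower() in text.lower())
    return hits / len(responses)

def share_of_ai_conversation(responses, brands):
    """Metric 3: each brand's share of total brand mentions across responses."""
    counts = {b: 0 for b in brands}
    for text in responses.values():
        for b in brands:
            counts[b] += text.lower().count(b.lower())
    total = sum(counts.values()) or 1  # avoid division by zero
    return {b: counts[b] / total for b in counts}

# Hypothetical AI responses captured for three target queries.
responses = {
    "best ERP solutions": "AcmeERP and NovaSuite lead the market; AcmeERP excels at...",
    "ERP for manufacturing": "NovaSuite is often recommended for manufacturers...",
    "cloud ERP pricing": "Pricing varies; AcmeERP publishes transparent tiers...",
}
print(ai_presence_rate(responses, "AcmeERP"))  # appears in 2 of 3 queries
print(share_of_ai_conversation(responses, ["AcmeERP", "NovaSuite"]))
```

Simple substring matching is only a starting point; a production version would need entity disambiguation and per-platform segmentation.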

Position within AI responses matters as much as position in traditional SERPs once did.

Monthly reporting cycles are becoming obsolete. AI-generated results can shift within hours, demanding real-time monitoring capabilities.

The DNA Of AI-Optimized Content

In my experience, content is more likely to be cited by AI with:

  • Comprehensive coverage: Content addressing multiple related questions outperforms narrow content.
  • Structured data implementation: Pages with robust schema markup see higher citation rates.
  • Expert validation: Content with clear expert authorship signals receives more citations.
  • Multi-format delivery: Topics presented in multiple formats (text, video, data visualizations) earn more citations.
  • First-party data inclusion: Original research and proprietary data increase citation likelihood.

These patterns suggest AI systems are increasingly sophisticated in their ability to identify genuinely authoritative content versus content merely optimized for traditional ranking factors.

In my last article, I discussed how Google AIO, ChatGPT, and Perplexity differ and where they share some common optimization traits.

Five Actionable Strategies For Triple-P Success

Based on our extensive research, here are five implementation strategies aligned with this framework:

1. Adopt Entity-Based SEO

AI prioritizes content from known, trusted entities. Stop optimizing for fragmented keywords and start building comprehensive topic authority.

Our data shows that authoritative content is three times more likely to be cited in AI responses than narrowly focused pages.

Implementation Steps:

  • Perform an entity audit: Identify how search engines currently understand your brand as an entity.
  • Develop topical maps: Create comprehensive coverage of core topics rather than isolated keywords.
  • Implement entity-based schema: Use structured data to explicitly define your brand’s relationship to key topics.
  • Build consistent entity references: Ensure name, address, and phone (NAP) consistency across all digital properties.
  • Cultivate authoritative connections: Earn mentions and links from recognized authorities in your space.

Enterprise brands implementing entity-based SEO will see an uplift in AI citations.

2. Implement Perception Management

With 69% of AI brand mentions not explicitly positive, you must actively shape sentiment.

Image from author, May 2025

Brands that implement proactive sentiment management strategies will see success.

Implementation Steps:

  • Monitor AI sentiment tracking: Establish baseline sentiment across AI platforms.
  • Identify perception gaps: Compare AI perceptions against desired brand positioning.
  • Address criticism proactively: Create content that honestly addresses common concerns.
  • Amplify authentic strengths: Develop evidence-based content highlighting genuine advantages.
  • Build consistent messaging: Align key messages across all digital touchpoints.

3. Integrate Real-Time Citation Monitoring

Tracking AI citations regularly is now vital to improve mention rates.

This requires capability beyond traditional rank tracking or Google Search Console analysis.

Implementation Steps:

  • Deploy continuous monitoring: Track AI responses for priority queries across platforms.
  • Implement competitor citation alerts: Get notified when competitors gain or lose citations.
  • Conduct prompt variation testing: Analyze how different user phrasings affect your brand’s inclusion.
  • Track citation position: Monitor where within AI responses your brand appears.
  • Measure citation authority: Assess whether you’re positioned as a primary or secondary source.
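At its core, the competitor citation alert step boils down to diffing which brands each day’s AI responses cite. Here is a minimal sketch; the daily snapshots and brand names are hypothetical, and collecting them would require querying each AI platform:

```python
def citation_changes(yesterday, today):
    """Return brands that gained or lost citations between two snapshots.

    Each snapshot maps a priority query to the set of brands cited
    in its AI response.
    """
    changes = {}
    for query in yesterday.keys() & today.keys():
        gained = today[query] - yesterday[query]
        lost = yesterday[query] - today[query]
        if gained or lost:
            changes[query] = {"gained": gained, "lost": lost}
    return changes

# Hypothetical daily snapshots of cited brands per priority query.
yesterday = {"best ERP solutions": {"AcmeERP", "NovaSuite"}}
today = {"best ERP solutions": {"AcmeERP", "OrbitWorks"}}
print(citation_changes(yesterday, today))
# NovaSuite lost its citation, OrbitWorks gained one
```

Because AI-generated results can shift within hours, a diff like this would run continuously rather than on a monthly reporting cycle.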

4. Deploy Across Core Search And AI Platforms

Companies that take an integrated approach across traditional search and multiple AI platforms will see higher return on investment (ROI) on search investments.

The future belongs to unified measurement frameworks that connect traditional SEO metrics with emerging AI citation patterns.

Implementation Steps:

  • Build unified dashboards: Integrate traditional search metrics with AI citation data.
  • Map keyword-to-prompt relationships: Connect traditional keywords to conversational AI prompts.
  • Analyze traffic source shifts: Track changing patterns between direct search and AI-referred traffic.
  • Segment by AI platform: Monitor performance variations across different AI search environments.
  • Connect to business outcomes: Tie AI presence metrics directly to conversion and revenue data.
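A unified dashboard can start as a simple join of the two data sources on topic. The metric names and values below are illustrative, not drawn from any specific tool:

```python
# Hypothetical per-topic metrics from two sources: classic SEO tooling
# and an AI citation tracker. Field names are placeholders.
seo_metrics = {
    "crm software": {"rank": 4, "clicks": 1200},
    "sales automation": {"rank": 9, "clicks": 310},
}
ai_metrics = {
    "crm software": {"citation_rate": 0.42},
    "sales automation": {"citation_rate": 0.08},
}

def unified_view(seo: dict, ai: dict) -> dict:
    """Join both metric sets on topic so one dashboard row covers both channels."""
    topics = seo.keys() | ai.keys()
    return {t: {**seo.get(t, {}), **ai.get(t, {})} for t in sorted(topics)}

dashboard = unified_view(seo_metrics, ai_metrics)
print(dashboard["crm software"])
# {'rank': 4, 'clicks': 1200, 'citation_rate': 0.42}
```

Once both channels share one row per topic, segmenting by AI platform or tying rows to conversion data is a matter of adding columns rather than building a second reporting stack.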

5. Use AI To Win At AI

This isn’t theoretical. It’s delivering measurable results:

  • BrightEdge Autopilot users averaged a 65% performance improvement.
  • BrightEdge Copilot users saved 1.2 million content research hours.

The brands succeeding most in AI search leverage AI in their workflows.

Implementation Steps:

  • Automate content research: Use AI to identify comprehensive topic coverage opportunities.
  • Implement AI-driven schema markup: Systematically structure data for machine interpretation.
  • Deploy prompt effectiveness testing: Continuously test how well content answers real user prompts.
  • Create AI-optimized content briefs: Define exactly what comprehensive coverage means for each topic.
  • Analyze AI citation patterns: Identify what characteristics make competitor content citation-worthy.
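Prompt effectiveness testing can begin with something as crude as term overlap between a content draft and real user prompts. The prompts and draft below are invented examples, and a serious version would use semantic similarity rather than token matching:

```python
def coverage_score(content: str, prompts: list[str]) -> float:
    """Fraction of distinct prompt terms the draft covers (naive token overlap)."""
    clean = lambda w: w.strip(".,:;!?").lower()
    content_terms = {clean(w) for w in content.split()}
    prompt_terms = {clean(w) for p in prompts for w in p.split()}
    if not prompt_terms:
        return 0.0
    return len(prompt_terms & content_terms) / len(prompt_terms)

# Hypothetical priority prompts and a content draft to score against them.
prompts = ["how to migrate crm data", "crm data migration checklist"]
draft = "our guide to crm data migration: how to migrate safely"
print(round(coverage_score(draft, prompts), 2))  # 0.86
```

Scores below 1.0 point at prompt terms the draft never addresses – here, "checklist" – which is exactly the gap an AI-optimized content brief should close.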

Teams using AI for AI optimization gain higher productivity and improved performance – the competitive edge that search demands today.

What’s Coming Next: AI-To-AI Marketing

Looking ahead two to three years, expect AI to evolve from an information assistant into a trusted advisor that buyers rely on for evaluation, comparison, and vendor selection.

We’re already seeing early indicators of AI-to-AI marketing, where procurement teams use AI agents to automate research and vendor vetting.

Emerging Trends:

  • Digital twin marketplaces: Buyers will interact with simulated versions of B2B solutions before speaking with vendors.
  • Vertical-specific AI companions: Industry-specialized models for cybersecurity, manufacturing, and healthcare.
  • AI agent purchasing: Autonomous systems that not only research but also complete transactions on users’ behalf.
  • Continuous entity validation: AI systems continuously monitor brand claims against real-world evidence.
  • Multi-modal search experiences: Voice, image, and text-based AI interactions requiring omnichannel optimization.

Read more: As Chatbots And AI Search Engines Converge: Key Strategies For SEO

The Trust Premium In AI Search

Consumers are consistently more likely to trust brands they already recognize.

  • AI functions as a trust bridge.
  • When consumers delegate decision-making to AI, pre-existing brand familiarity becomes disproportionately influential.
  • The impact is most pronounced in high-consideration purchases.

This creates both a challenge and an opportunity. Established brands must protect their advantage, while emerging brands must strategically build recognition signals detectable by AI.

Organizational Structure For AI Search Success

Leading organizations are already creating “collaborative intelligence” roles – specialists managing the interplay between human creativity and AI amplification.

Successful teams typically include:

  • AI Search Strategists: Focus on overall presence, perception, and performance.
  • Prompt Engineers: Specialize in understanding how users phrase requests to AI.
  • Content Scientists: Develop evidence-based approaches to comprehensive coverage.
  • AI Citation Analysts: Monitor and optimize for inclusion in AI responses.
  • Schema Specialists: Ensure that the machine-readable structure enhances entity understanding.

These cross-functional teams integrate with traditional SEO, content marketing, analytics, and business intelligence functions.

The Bottom Line

In this new landscape, the question isn’t whether your website ranks. It’s whether AI recommends your brand when it matters most.

The Triple-P framework gives you the structure to navigate this future with confidence.

Here’s how I recommend getting started:

  • Conduct an AI presence audit: Understand where your brand appears in AI responses across key platforms.
  • Analyze sentiment distribution: Assess not just if you’re mentioned, but how you’re portrayed in AI-generated content.
  • Connect AI metrics to business results: Start tracking the relationship between AI presence and conversion patterns.
  • Identify entity perception gaps: Compare how AI systems understand your brand versus your desired positioning.
  • Deploy real-time monitoring: Implement systems to track citation changes as they happen.
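The presence audit in the first step can be sketched as a mention-rate rollup per platform, assuming a hand-collected probe log. The platform names, prompts, and results below are made up:

```python
# Hypothetical audit log: one row per (platform, prompt) probe, recording
# whether the brand was mentioned in the AI answer.
probes = [
    ("assistant-a", "best crm", True),
    ("assistant-a", "crm pricing", False),
    ("assistant-b", "best crm", True),
    ("assistant-b", "crm pricing", True),
]

def presence_by_platform(rows):
    """Share of probed prompts in which the brand appeared, per platform."""
    totals, hits = {}, {}
    for platform, _prompt, mentioned in rows:
        totals[platform] = totals.get(platform, 0) + 1
        hits[platform] = hits.get(platform, 0) + int(mentioned)
    return {p: hits[p] / totals[p] for p in totals}

print(presence_by_platform(probes))  # {'assistant-a': 0.5, 'assistant-b': 1.0}
```

Even a small probe set like this surfaces platform-level gaps – here the brand appears in only half of the answers on one platform – which tells you where to focus the sentiment and citation work above.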

The branded AI search revolution isn’t coming – it’s already here.

The brands that embrace the Triple-P framework today will be the ones AI recommends tomorrow.

Note: In March 2025, BrightEdge surveyed over 1,000 of its customers who are marketers. Findings from this survey are referenced above.

Featured Image: Moon Safari/Shutterstock

Do More With Less: How To Build An AI Search Strategy With Limited Resources [Webinar] via @sejournal, @hethr_campbell

Feeling overwhelmed by AI in search?

Working with limited time, tools, or a small team?

You’re not alone. As search engines evolve, it’s becoming harder to keep up, especially if your resources are stretched thin.

Join us for “Do More With Less: How To Build an AI Search Strategy With Limited Resources,” a practical webinar designed to help small teams create a strong, AI-powered SEO strategy that actually works.

Why This Webinar Is Worth Your Time:

You don’t need a big budget or a large team to get results. You just need a smart plan and the right tools to help you stay ahead.

In this session, you’ll learn how to:
✅ Build a step-by-step SEO roadmap that uses AI effectively.
✅ Prioritize what matters through smarter audits and tools.
✅ Keep up with the latest changes in AI-powered search.

Presented by Vincent Moreau, SEO Consultant at Botify, this session will give you practical steps you can use right away.

What Makes This Session Different:

We’re focused on real solutions for real constraints. If you’re looking to grow with limited resources, this is your chance to learn how.

Let’s simplify your strategy and make AI work for your SEO goals.

Can’t make it live? No problem. Sign up anyway, and we’ll send you the full recording.