SEO for Paws is a live-streamed fundraiser founded by Anton Shulke, an expert at organizing events, to help a charity close to his heart.
Anton has tirelessly continued his support for his favorite charity, which aids the many pets that were left behind in Kyiv after war broke out in Ukraine. The previous event in March generated approximately $7,000 for the worthy cause, with all funds going straight to the shelters where they are needed.
Anton is well-known for his love of cats. Dynia, who traveled across Europe with Anton’s family after escaping Kyiv, is a regular feature on his social media channels.
Image from Anton Shulke, September 2025
One Cat Turned Into A Shelter Of 50
Among the many pet shelters that SEO For Paws has helped is an apartment run by Alya, who cares for up to 50 animals.
Alya has always cared for animals, and meeting an old, sick cat she called Fox was the start of becoming an organized shelter.
In 2016, she started with five cats living in her apartment, and today has 50 alongside 15 of her grandmother’s cats.
There’s a lot involved in caring for this many animals, including feeding, cleaning, washing litter boxes, replacing litter, and performing hygiene or medical procedures when needed.
Running a home-based shelter is not easy. Sometimes it’s sad, sometimes it’s exhausting. But Alya says that looking around at all the little whiskered faces, the furry bodies sprawled across the furniture, makes it worth it. Giving them a life of warmth, food, and love is worth every challenge.
To keep supporting individuals like Alya, we need your help. You can donate via Anton’s Buy Me a Coffee.
SEO For Paws – Cat Lovers, Dog Lovers, And SEO
The upcoming “SEO for Paws” livestream aims to continue fundraising efforts. The event, which runs from 12:00 p.m. to 4:30 p.m. ET, will offer actionable SEO and digital marketing advice from experts while raising money for the animal shelters.
Headline speakers who have donated their time to support the cause include Andrey Lipattsev, David Carrasco, Olga Zarr, Judith Lewis, James Wirth, Zach Chahalis, Jamie Indigo, and Lee Elliott.
Attendance is free, but participants are encouraged to donate to help the charity.
Event Highlights
Date and Time: September 25, 2025, from 12:00 p.m. to 4:30 p.m. ET.
Access: Free registration with the option to join live and participate in Q&A sessions; a recording will be made available on YouTube.
Speakers: The live stream will feature SEO and digital marketing experts, who will share actionable insights.
How To Make A Difference
The “SEO for Paws” live stream is an opportunity to make a meaningful difference while listening to excellent speakers.
All money raised is donated to help cats and dogs in Ukraine.
Digital marketers are providing more evidence that Google’s disabling of the num=100 search parameter correlates with changes in Google Search Console impression counts. What looked like reliable data may, in fact, have been a distorted picture shaped by third-party SERP crawlers. It’s becoming clear that squeezing meaning from the top 100 search results is increasingly a thing of the past, and that this development may be a good thing for SEO.
Num=100 Search Parameter
Google recently disabled the use of a search parameter that caused web searches to display 100 organic search results for a given query. Search results keyword trackers depended on this parameter for efficiently crawling Google’s search results. By eliminating the search parameter, Google is forcing data providers into an unsustainable position that requires them to scale their crawling by ten times in order to extract the top 100 search results.
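For reference, the parameter was simply appended to the results URL. For example, https://www.google.com/search?q=example&num=100 returned 100 listings in a single request instead of the default 10, which let a rank tracker capture the full top 100 with one fetch rather than paginating through ten pages of results.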
Rank Tracking: Fighting To Keep It Alive
Mike Roberts, founder of SpyFu, wrote a defiant post saying that they will find a way to continue bringing top 100 data to users.
His post painted an image of an us-versus-them moment:
“We’re fighting to keep it alive. But this hits hard – delivering is very expensive.
We might even lose money trying to do this… but we’re going to try anyway.
If we do this alone, it’s not sustainable. We need your help.
This isn’t about SpyFu vs. them.
If we can do it – the way the ecosystem works – all your favorite tools will be able to do it. If nothing else, then by using our API (which has 100% of our keyword and ranking data).”
Rank Tracking: Where The Wind Is Blowing
Tim Soulo, CMO of Ahrefs, sounded more pragmatic about the situation, tweeting that the future of ranking data will inevitably be focused on the Top 20 search results.
“Ramping up the data pulls by 10x is just not feasible, given the scale at which all SEO tools operate.
So the question is:
‘Do you need keyword data below Top 20?’
Because most likely it’s going to come at a pretty steep premium going forward.
Personally, I see it this way:
▪️ Top 10 – is where all the traffic is at. Definitely a must-have.
▪️ Top 20 – this is where “opportunity” is at, both for you and your competitors. Also must-have.
▪️ Top 21-100 – IMO this is merely an indication that a page is “indexed” by Google. I can’t recall any truly actionable use cases for this data.”
Many of the responses to his tweet were in agreement, as am I. Anything below the top 20, as Tim suggested, only tells you that a site is indexed. The big picture, in my opinion, is that it doesn’t matter whether a site is ranked in position 21 or 91; either way, it’s suffering from serious quality or relevance issues that need to be worked out. Any competitors in those positions shouldn’t be something to worry about because they are not up-and-coming; they’re just limping through the darkness of page three and beyond.
Page two positions, however, provide actionable and useful information because they show that a page is relevant for a given keyword term but that the sites ranked above it are better in terms of quality, user experience, and/or relevance. They could even be as good as what’s on page one but, in my experience, it’s less about links and more often it’s about user preference for the sites in the top ten.
Distorted Search Console Data
It’s becoming clear that search results scraping distorted Google’s Search Console data. Users are reporting that Search Console keyword impression data is significantly lower since Google blocked the num=100 search parameter. Impressions are the times when Google shows a web page in the search results, meaning that the site is ranking for a given keyword phrase.
SEO and web developer Tyler Gargula (LinkedIn profile) posted the results of an analysis of over three hundred Search Console properties, showing that 87.7% of the sites experienced drops in impressions. 77.6% of the sites in the analysis experienced losses in query counts, losing visibility for unique keyword phrases.
“Keyword Length: Short-tail and mid-tail keywords experienced the largest drops in impressions, with single word keywords being much lower than I anticipated. This could be because short and mid-tail keywords are popular across the SEO industry and easier to track/manage within popular SEO tracking tools.
Keyword Ranking Positions: There have been reductions in keywords ranking on page 3+, and in turn an increase in keywords ranking in the top 3 and page 1. This suggests keywords are now more representative of their actual ranking position, versus receiving skewed positions from num=100.”
Google Is Proactively Fighting SERP Scraping
Disabling the num=100 search parameter is just the prelude to a bigger battle. Google is hiring an engineer to assist in statistical analysis of SERP patterns and to work together with other teams to develop models for combating scrapers. It’s obvious that this activity negatively affects Search Console data, which in turn makes it harder for SEOs to get an accurate reading on search performance.
What It Means For The Future
The num=100 parameter was turned off in a direct attack on the scraping that underpinned the rank-tracking industry. Its removal is forcing the search industry to reconsider the value of data beyond the top 20 results. This may be a turning point toward better attribution and clearer measures of relevance.
Google has been doing what it always does: constantly iterating to keep its product the best it can be.
Large language models (LLMs) and generative AI chatbots are a new reality in SEO, and to keep up, Google is evolving its interface to try to cross the divide between AI and search. What we should all remember, though, is that Google has already been integrating AI into its algorithms for years.
Continuing my IMHO series and speaking to experts to gain their valuable insights, I spoke with Ray Grieselhuber, CEO of Demand Sphere and organizer of Found Conference. We explored AI search vs. traditional search, grounding data, the influence of schema, and what it all means for SEO.
“There is not really any such thing anymore as traditional search versus AI search. It’s all AI search. Google pioneered AI search more than 10 years ago.”
Scroll to the end of this article if you want to watch the full interview.
Why Grounding Data Matters More Than The LLM Model
“It’s the grounding data that matters, far more than the model itself. The models will be trained to achieve certain results but, as always, the index/datasets are the prize.”
I asked him to expand on why grounding data is so important. Ray explained, “Unless something radically changes in how LLMs work, we’re not going to have infinite context windows. If you need up-to-date, grounded data, you need indexed data, and it has to come from somewhere.”
Earlier this year, Ray and his team analyzed ChatGPT’s citation patterns, comparing them to search results from both Google and Bing. Their research revealed that ChatGPT’s results overlap with Google search results about 50% of the time, compared to only 15-20% overlap with Bing.
“It’s been known that Bing has a historical relationship with OpenAI,” Ray expanded, “but they don’t have Google’s data, index size, or coverage. So eventually, you’re going to source Google data one way or another.”
He went on to say, “That’s what I mean by the index being the prize. Google still has a massive data and index advantage.”
Interestingly, when Ray first presented these findings at Brighton SEO in April, the response was mixed. “I had people who seemed appalled that OpenAI would be using Google results,” Ray recalled.
Maybe the anger stems from the wishful idea that AI would render Google irrelevant, but Google’s dataset remains central to search.
“Humans are searchers, always have been, always will be. It’s just a question of the experience, behavior, and the tools they use. Focus on search as a primitive and being found and you can ignore pointless debates about what to call it.”
I asked him where he thinks that SEOs go wrong in their approach to the introduction of GEO/LLM visibility, and Ray responded by saying that in the industry, we often have a dialectical tension.
“We have this weird tendency in our industry to talk about how something is either dead and dying. Or, this is the new thing and you have to just rush and forget everything that you learned up until now.”
Ray thinks what we should really be focusing on is human behavior:
“These things don’t make sense in the context of what’s happening overall because I always go back to what is the core instinctual human behavior? If you’re a marketer your job is to attract human attention through their search behavior and that’s really what matters.”
“The major question is what is the experience that’s going to mediate that human behavior and their attention mechanisms versus what you have to offer, you know, as a marketer.
“There is not really any such thing anymore as traditional search versus AI search. It’s all AI search. Google pioneered AI search more than 10 years ago. They’ve been doing it for the last 10 years and now for some reason everyone’s just figuring out that now it’s AI search.”
Ray concluded, “Human behavior is the constant; experiences evolve.”
On the question of whether structured data influences LLMs, Ray’s analysis reveals that the truth is nuanced. LLMs don’t directly process schema in their training data, but structured data has some limited influence through retrieval layers when LLMs use search results as grounding data.
Ray explained that Google has essentially trained the entire internet to optimize its semantic understanding through schema markup. The reason they did this is not just for users.
“Google used Core Web Vitals to get the entire internet to optimize itself so that Google wouldn’t have to spend so much money crawling the internet, and they kind of did the same thing with building their semantic layer that enabled them to create an entire new level of richness in the results.”
Ray stressed that schema is only used as a hint, and the question shouldn’t be “does this work or not – should we implement schema to influence results?” Instead, SEOs should focus on the impact on user and human behavior.
Attract Human Attention Through Search Behavior
Binary thinking, such as SEO is dead, or LLMs are the new SEO, misses the reality that search behavior remains fundamentally unchanged. Humans are searchers who want to find information efficiently, and this underlying need remains constant.
Ray said that what really matters and underpins SEO is attracting human attention through search behavior.
“I think people will be forced to become the marketers they should have been all along, instead of ignoring the user,” he predicted.
My prediction is that in a few years, we will look back on this time as a positive change. I think search will be better as a result of SEOs having to embrace marketing skills and become creative.
Ray believes that we need to use our own data more and encourage a culture of experimenting with it and learning from our users and customers. Broad studies are useful for direction, but not for execution.
“If you’re selling airline tickets, it doesn’t really matter how people are buying dog food,” he added.
An Industry Built For Change
Despite the disruption, Ray sees opportunity. SEOs are uniquely positioned to adapt.
“We’re researchers and builders by nature; that’s why this industry can embrace change faster than most,” he said.
Success in the age of AI-powered search isn’t about mastering new tools or chasing the latest optimization techniques. It’s about understanding how people search for information, what experiences they expect, and how to provide genuine value throughout their journey, principles that have always defined effective marketing.
He believes that some users will eventually experience AI exhaustion, returning to Google’s familiar search experience. But ultimately, people will navigate across both generative AI and traditional search. SEOs will have to meet them where they are.
Google is hiring a new anti-scraping czar, whose job will be to analyze search traffic to identify the patterns of search scrapers, assess the impact, and work with engineering teams to develop new anti-scraping models for improving anti-scraping defenses.
Search Results Scraping
SEOs rely on SERP tracking companies to provide search results data for understanding search ranking trends, enabling competitive intelligence, and other keyword-related research and analysis.
Many of these companies conduct massive amounts of automated crawling of Google’s search results to take a snapshot of ranking positions and data related to search features triggered by keyword phrases. This scraping is suspected of causing significant changes to what’s reported in Google Search Console.
In the early days of SEO, there was a free keyword data source: Yahoo’s Overture, its PPC service. Many SEOs searched on Yahoo so often that their own searches would unintentionally inflate keyword volumes. Smart SEOs knew better than to optimize for those keyword phrases.
I have suspected that some SEOs may also have intentionally scraped Yahoo’s search results using fake keyword phrases to generate keyword volumes for those queries, misleading competitors into optimizing for phantom search queries.
&num=100 Results Parameter
There is a growing suspicion, backed by Google Search Console data, that search result scraping may have inflated the official keyword impression data, and that this may be why Search Console data appears to show that AI search results aren’t sending traffic while Google’s internal data shows the opposite.

Here is how the job listing describes the anti-scraping role:
“Investigate and analyze patterns of abuse on Google Search, utilizing data-motivated insights to develop countermeasures and enhance platform security. Analyze datasets to identify trends, patterns, and anomalies that may indicate abuse within Google Search.
Develop and track metrics to measure scraper impact and the effectiveness of anti-scraping defenses. Collaborate with engineering teams to design, test, and launch new anti-scraper rules, models, and system enhancements.
Investigate proof-of-concept attacks and research reports that identify blind spots and guide the engineering team’s development priorities. Evaluate the effectiveness of existing and proposed detection mechanisms, understanding the impact on scrapers and real users.
Contribute to the development of signals and features for machine learning models to detect abusive behavior. Develop and maintain threat intelligence on scraper actors, motivations, tactics and the scraper ecosystem.”
What Does It Mean?
There hasn’t been an official statement from Google, but it’s fairly apparent that Google is moving to put a stop to search results scrapers. This should result in more accurate Search Console data, so that’s a plus.
Your impressions might be up, but the traffic isn’t following. Competitors are showing up in AI search while your brand remains invisible.
How do you measure success when ChatGPT or Gemini doesn’t show traditional rankings? How do you define “winning” in a world where every query can produce a different answer?
Learn the SEO & GEO strategies enterprise brands are using to secure visibility in AI Overviews and large language models.
AI Mode is growing fast. Millions of users are turning to AI engines for answers, and brand visibility is now the single most important metric.
In this webinar, Tom Capper, Sr. Search Scientist at STAT Search Analytics, will guide you through how enterprise SEOs can adapt, measure, and thrive in this new environment.
The Core Web Vitals Technology Report by the open source HTTPArchive community ranks content management systems by how well they perform on Google’s Core Web Vitals (CWV). The July update shows that every major platform has improved since June, but not all gains were equal. Joomla posted the largest month-over-month increase, while Duda ranked first in July with 84.96% of sites passing CWV.
Why Core Web Vitals Matter
Core Web Vitals (CWV) are metrics created by Google to measure how fast, stable, and responsive a website feels to users. Websites that load quickly and respond smoothly keep visitors engaged, while sites that fall short frustrate users and increase bounce rates. For businesses and publishers, CWV scores reflect the user experience and competitiveness online.
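For context, CWV currently consists of three metrics: Largest Contentful Paint (LCP) for loading speed, Interaction to Next Paint (INP) for responsiveness, and Cumulative Layout Shift (CLS) for visual stability. A site generally counts as passing when it meets Google’s “good” thresholds (2.5 seconds for LCP, 200 milliseconds for INP, and 0.1 for CLS) at the 75th percentile of real-user page loads.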
How the Data Is Collected
The CWV Technology Report combines two public datasets:
Chrome UX Report (CrUX): Data from Chrome users who opt in to share performance statistics as they browse. This reflects how real users experience websites.
HTTP Archive: Lab-based tests that analyze how sites are built and whether they follow performance best practices.
Together, these sources provide a consistent picture of how different website platforms perform on Core Web Vitals in the real world.
Percentage Change from June to July
#1 Joomla — largest gain (+3.23%).
#2 Wix — +2.61%.
#3 Drupal — +1.47%.
#4 Duda — +1.33%.
#5 Squarespace — +1.27%.
#6 WordPress — smallest gain (+0.90%).
This ranking shows which platforms advanced most in July. Joomla experienced the highest growth, while WordPress improved the least. Wix’s month-over-month CWV improvement was a notable 2.61%.
Ranking by July CWV Score
Duda is once again the Core Web Vitals champion, ranked by the percentage of websites that have a good CWV score.
#1 Duda — 84.96%
#2 Wix — 73.37%
#3 Squarespace — 68.93%
#4 Drupal — 60.54%
#5 Joomla — 54.78%
#6 WordPress — 44.34%
Joomla showed the fastest growth, but it still ranked fifth in July. Duda led with the highest overall performance.
Why the Numbers Matter
Core Web Vitals scores translate into real differences in how users experience websites. Platforms with higher CWV scores offer faster, smoother interactions, while those at the bottom frustrate users with slower performance. While all six platforms in the comparison are improving month to month, what matters most is the actual experience users get right now.
Duda is the Core Web Vitals champion for July, with 84.96% of websites built on the Duda platform having a good CWV score.
Joomla had the largest gain, but still ranked near the bottom with only 54.78% of sites showing a good CWV score.
Wix and Squarespace ranked second and third, showing strong performance, but both trailed Duda by more than ten percentage points.
WordPress ranked last, both in July scores and in the month over month rate of improvement.
Do Content Management Systems Matter For Ranking?
I have seen discussions online about whether the choice of content management system has an impact on rankings. Some people assert that plugins make WordPress easier to rank in Google.
There is also a perception that WordPress is faster than Wix, Duda, and Squarespace. The facts, of course, show that the opposite is true. WordPress is the slowest of the content management systems in this comparison.
The percentage of sites built with Duda that had a good Core Web Vitals score is 84.96%. The percentage of WordPress sites with a good CWV score is 44.34%. That means Duda’s share of sites with good CWV scores is about 92% higher than WordPress’s (84.96 ÷ 44.34 ≈ 1.92).
Another issue with WordPress is that it has a considerable amount of technical debt, something that private content management systems do not have to struggle with to the same degree. Technical debt refers to the accumulation of outdated code and design decisions that make it harder to maintain, update, or improve the platform over time. It is not unique to WordPress, but it is an issue because of how WordPress is built and how its ecosystem works.
Some reasons for WordPress’s technical debt:
WordPress was originally conceived as a blogging platform and has evolved into a full CMS, able to be extended as virtually any kind of website.
Adding new features on top of legacy code means workarounds must be made for backward compatibility, which creates complexity and slows down innovation.
High volunteer demands with insufficient systemic support.
Lack of equitable financial remuneration or stipends for ongoing work.
Pressure to maintain legacy systems and innovate new features leads to overwhelming workloads.
Consequences
Loss of institutional knowledge and experienced contributors.
Increasing technical debt and slowed innovation cycles.
Threat to WordPress’s long-term ecosystem health.
WordPress has recently moved to a slower annual release cycle, and one of the benefits of that change (summarized by WordPress here) is that it gives the project time to address the issue of technical debt.
The point is that if the content management system did have an effect on the ability to rank, WordPress sites would probably struggle to rank because of the relatively poor performance scores and the slower pace of development when compared to private content management systems like Wix. But that’s not the case.
WordPress websites rank very well despite all the issues with the platform, including security. So it may be that the choice of CMS does not necessarily matter for SEO, especially since private solutions like Wix and Duda are purposely built with SEO in mind. Nevertheless, performance is important for things that matter, such as conversions and the user experience, and the fact is that the HTTPArchive Technology Comparison Report ranks WordPress last for Core Web Vitals performance in July.
Bias in search isn’t always negative. It’s easy to frame it as something sinister, but bias shows up for structural reasons, behavioral reasons, and sometimes as a deliberate choice. The real task for marketers and communicators is recognizing when it’s happening, and what that means for visibility, perception, and control.
Two recent pieces got me thinking more deeply about this. The first is Dejan’s exploration of Selection Rate (SR), which highlights how AI systems favor certain sources over others. The second is Bill Hartzer’s upcoming book “Brands on the Ballot,” which introduces the concept of non-neutral branding in today’s polarized market. Put together, these show how bias isn’t just baked into algorithms; it’s also unavoidable in how brands are interpreted by audiences.
Image Credit: Duane Forrester
Selection Rate And Primary Bias
Selection Rate can be thought of as the percentage of times a source is chosen out of the available options (selections ÷ options × 100). It’s not a formal standard, but a useful way to illustrate primary bias in AI retrieval. Dejan points out that when an AI system is asked a question, it often pulls from multiple grounding sources. But not all sources are selected equally. Over time, some get picked again and again, while others barely show up.
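To make the arithmetic concrete, here is the ratio as a tiny Python function. The counts in the example are hypothetical, purely to illustrate how an SR figure is read; they are not from Dejan’s data.

```python
# Selection Rate as sketched above: selections / options * 100.
def selection_rate(times_selected: int, times_available: int) -> float:
    """Percentage of times a source was chosen when it was an available option."""
    return times_selected / times_available * 100

# A hypothetical domain that appeared among an AI system's grounding options
# 40 times and was actually cited in 6 of those answers:
print(f"SR = {selection_rate(6, 40):.1f}%")  # SR = 15.0%
```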
That’s primary bias at work.
For marketers, the implication is clear: If your content is rarely chosen as a grounding source, you’re effectively invisible inside that AI’s output ecosystem. If it’s selected frequently, you gain authority and visibility. High SR becomes a self-reinforcing signal.
This isn’t just theoretical. Tools like Perplexity, Bing Copilot, and Gemini surface both answers and their sources. Frequent citation enhances your brand’s visibility and perceived authority. Researchers even coined a term for how this feedback loop can lock in dominance: neural howlround. In an LLM, certain highly weighted inputs can become entrenched, creating response patterns that are resistant to correction, even when new training data or live prompts are introduced.
This concept isn’t new. In traditional search, higher-ranked pages earn more clicks. Those clicks send engagement signals back into the system, which can help sustain ranking position. It’s the same feedback loop, just through a different lens. SR doesn’t create bias; it reveals it, and whether you benefit depends on how well you’ve structured your presence to be retrieved in the first place.
Branding And The Reality Of Interpretation
Brands on the Ballot frames this as non-neutral branding: Companies can’t avoid being interpreted. Every decision, big or small, is read as a signal. That’s bias at the level of perception.
We see this constantly. When Nike featured Colin Kaepernick, some people doubled down on loyalty while others publicly cut ties. When Bud Light partnered with a trans influencer, backlash dominated national news. Disney’s disputes with Florida politicians over cultural policy became a corporate identity story overnight.
None of these were just “marketing campaigns.” Each was read as a cultural stance. Even decisions that seem operational (which platforms you advertise on, which sponsorships you accept, which suppliers you choose) are interpreted as signals of alignment.
Neutrality doesn’t land as neutral anymore, which means PR and marketing teams alike need to plan for interpretation as part of their day-to-day reality.
Directed Bias As A Useful Lens
Marketers already practice deliberate exclusion through ICP targeting and positioning. You decide who you want to reach and, by extension, who you don’t. That’s not new.
But when you view those choices through the lens of bias, it sharpens the point: Positioning is bias with intent. It’s not hidden. It’s not accidental. It’s a deliberate narrowing of focus.
That’s where the idea of directed bias comes in. You can think of it as another way to describe ICP targeting or market positioning. It’s not a doctrine, just a lens. The value in naming it this way is that it connects what marketers already do to the broader conversation about how search and AI systems encode bias.
Bias isn’t confined to branding or AI. We’ve known for years that search rankings can shape behavior.
A 2024 PLOS study showed that simply altering the order of results can shift opinions by as much as 30%. People trust higher-ranked results more, even when the underlying information is the same.
Filter bubbles amplify this effect. By tailoring results based on history, search engines reinforce existing views and limit exposure to alternatives.
Beyond those behavioral biases lie structural ones. Search engines reward freshness, meaning sites crawled and updated more frequently often gain an edge in visibility, especially for time-sensitive queries. Country-code top-level domains (ccTLDs) like .fr or .jp can signal regional relevance, giving them preference in localized searches. And then there’s popularity and brand bias: Established or trusted brands are often favored in rankings, even when their content isn’t necessarily stronger, which makes it harder for smaller or newer competitors to break through.
For marketing and PR professionals, the lesson is the same: Input bias (what data is available about you) and process bias (how systems rank and present it) directly shape what audiences believe to be true.
Bias In LLM Outputs
Large language models introduce new layers of bias.
Training data is rarely balanced. Some groups, voices, or perspectives can be over-represented while others are missing. That shapes the answers these systems give. Prompt design adds another layer: Confirmation bias and availability bias can creep in depending on how the question is asked.
Recent research shows just how messy this can get.
MIT researchers found that even the order of documents fed into an LLM can change the outcome.
A 2024 Nature paper catalogued the different types of bias showing up in LLMs, from representation gaps to cultural framing.
A PNAS study confirmed that even after fairness tuning, implicit biases still persist.
LiveScience reported that newer chatbots tend to oversimplify scientific studies, glossing over critical details.
These aren’t fringe findings. They show that bias in AI isn’t an edge case; it’s the default. For marketers and communicators, the point isn’t to master the science; it’s to understand that outputs can misrepresent you if you’re not shaping what gets pulled in the first place.
Pulling The Threads Together
Selection Rate shows us bias at work inside AI retrieval systems. Branding shows us how bias works in the marketplace of perception. Directed bias is a way to connect those realities, reminding us that not all bias is accidental. Sometimes it’s chosen.
The key isn’t to pretend bias doesn’t exist; of course, it does. It’s to recognize whether it’s happening to you passively, or whether you’re applying it actively and strategically. Both marketers and PR specialists have a role here: one in building retrievable assets, the other in shaping narrative resilience. (PS: An AI cannot really replace a human for this work.)
So what should you do with this?
Understand Where Bias Is Exposed
In search, bias is revealed through studies, audits, and SEO testing. In AI, it’s uncovered by researchers probing outputs with structured prompts. In branding, it’s revealed in customer reaction. The key is knowing that bias always shows itself somewhere, and if you’re not looking for it, you’re missing critical signals about how you’re being perceived or retrieved.
Recognize Who Hides Bias
Search engines and LLM providers don’t always disclose how selections are weighted. Companies often claim neutrality even when their choices say otherwise. Hiding bias doesn’t make it go away; it makes it harder to address and creates more risk when it eventually surfaces. If you aren’t transparent about your stance, someone else may define it for you.
Treat Bias As Clarity
You don’t need to frame your positioning as “our directed bias.” But you should acknowledge that when you pick an ICP, craft messaging, or optimize content for AI retrieval, you’re making deliberate choices about inclusion and exclusion. Clarity means accepting those choices, measuring their impact, and owning the direction you’ve set. That’s the difference between bias shaping you and you shaping bias.
Apply Discipline To Your AI Footprint
Just as you shape brand positioning with intent, you need to decide how you want to appear in AI systems. That means publishing content in ways that are retrievable, structured with trust markers, and aligned with your desired stance. If you don’t manage this actively, AI will still make choices about you; they just won’t be choices you controlled.
A Final Danger To Consider
Bias isn’t really a villain. Hidden bias is.
In search engines, in AI systems, and in the marketplace, bias is the default. The mistake isn’t having it. The mistake is letting it shape outcomes without realizing it’s there. You can either define your bias with intent or leave it to chance. One path gives you control. The other leaves your brand and business at the mercy of how others decide to interpret you.
And here’s a thought that occurred to me while working through this: What if bias itself could be turned into an attack vector? I’m certain this isn’t a fresh idea, but let’s walk through it anyway. Imagine a competitor seeding enough content to frame your company in a certain light, so that when an LLM compresses those inputs into an answer, their version of you is what shows up. They wouldn’t even need to name you directly. Just describe you well enough that the system makes the connection. No need to cross any legal lines here either, as today’s LLMs are really good at guessing a brand when you just describe their logo or a well-known trait in common language.
The unsettling part is how plausible that feels. LLMs don’t fact-check in the traditional sense; they compress patterns from the data available to them. If the patterns are skewed because someone has been deliberately shaping the narrative, the outputs can reflect that skew. In effect, your competitor’s “version” of your brand could become the “default” description users see when they ask the system about you.
Now imagine this happening at scale. A whisper campaign online doesn’t need to trend to have impact. It just needs to exist in enough places, in enough variations, that an AI model treats it as consensus. Once it’s baked into responses, users may have a hard time finding your side of the story.
I don’t know if that’s an actual near-term risk or just an edge-case thought experiment, but it’s worth asking: Would you be prepared if someone tried to redefine your business that way?
New data from the Audience Key content marketing platform indicates that Amazon’s visibility has suffered a significant drop. The decline follows two changes Amazon made to its presence in Google Shopping, although it is uncertain whether those changes are direct or indirect causes.
The first change was the discontinuation of its paid Shopping ads, and the second was the consolidation of its three merchant store names (Amazon, Amazon.com, and Amazon.com – Seller) into a single store identity, “Amazon.” These changes appear to have had a measurable effect on how often Amazon product cards appear in Google’s organic Shopping results.
Audience Key is a content marketing platform that fills a gap in competitive intelligence by tracking and reporting on Google’s organic product grid rankings at scale. The product recently rolled out.
According to Audience Key:
“Across 79,000+ keywords, Audience Key’s first-of-its-kind tracking showed the effects of Amazon’s changes to its merchant feed — the approach initially wiped out 31% of its organic product card rankings. Weeks later, Amazon has now disappeared completely — creating a seismic shift that is immediately reshaping e-commerce SERPs and freeing up prime shelf space for rivals.”

Tom Rusling, founder of Audience Key, notified me today that Amazon has subsequently dropped out of the organic search results completely, beginning on August 18th.
Anecdotally, I’ve seen Amazon completely drop out of Google’s organic product grids, including for search queries I know for certain it used to rank for; it is now completely gone from those search engine results pages (SERPs).
Overall Impact
The most immediate change was the overall scale of Amazon’s presence. Before July 25, Amazon’s listings appeared in 428,984 organic product cards. After the change, that presence dropped to 294,983.
Before July 25: 428,984 product cards
After July 25: 294,983 product cards
Net change: -134,001 cards (31% decline)
This shows that Amazon’s move was not just a brand consolidation but also a large reduction in visibility. It is possible that the brand consolidation triggered a temporary drop in visibility because it’s such a wide-scale change.
Category-Level Changes
The reduction was not spread evenly. Some product categories were hit harder than others. Apparel had the steepest losses, while categories like Home Goods and Laptop Computers also fell sharply.
Smaller categories such as Tires and Indoor Decor declined more moderately, but all showed the same downward trend.
Apparel Category Experiences The Largest Declines
Apparel stands out as the category where Amazon saw the steepest reductions, with its presence cut by more than half across several tracked segments.
Below is the data I currently have; I’m waiting for clarification from Audience Key about whether the following apparel rows represent more specific sub-segments:
Apparel: 4,571 → 1,804 (-60%)
Apparel: 4,503 → 1,859 (-59%)
Apparel: 31,852 → 13,632 (-57%)
Apparel: 6,932 → 3,029 (-56%)
Several Other Major Categories Affected
The losses were also large in high-volume categories. Home Goods, Laptop Computers, and Outdoor Furnishings all saw reductions, while Business Supplies and Technology products also suffered visibility declines.
Business Supplies: 12,510 → 9,786 (-22%)
Home Goods: 133,717 → 73,833 (-45%)
Laptop Computers: 30,520 → 19,615 (-36%)
Outdoor Furnishings: 58,416 → 41,995 (-28%)
Scientific and Technology: 58,880 → 50,666 (-14%)
Smaller Categories Also Affected
Even niche verticals were affected, though the percentage losses were less severe than in Apparel or Home Goods. These declines show Amazon’s reductions were spread across both major and smaller categories.
Structures: 6,241 → 4,229 (-32%)
Tires: 3,063 → 2,609 (-15%)
Indoor Decor: 23,634 → 19,789 (-16%)
Indoor Decor (variant): 6,626 → 5,926 (-11%)
Merchant Store Consolidation
Another change came from how Amazon presented itself in Shopping results. Before July 25, the company appeared under three names: Amazon, Amazon.com, and Amazon.com – Seller. Afterward, only the unified “Amazon” label remained.
Total before consolidation (all three names): 428,984 product cards
After consolidation (single “Amazon”): 294,980 product cards
This simplified Amazon’s presence by unifying it under one name, but it also coincided with a decline in overall coverage.
Where Amazon Is Today
Even with the July drops, Amazon remained the most visible merchant in Google Shopping, albeit with a smaller footprint than before. But that’s no longer the case; the situation for Amazon appears to have worsened.
Audience Key speculated on what is going on:
“We thought the first chapter of this story was complete, but just as we prepared this study for publication, everything changed. Again. Our latest U.S. search data reveals a stunning shift: Amazon vanished from the organic product grids.
Whether this is a short-term anomaly or a more permanent new normal, only time will tell. We will continue to monitor and report on our findings. The sudden removal leaves us — and the industry — asking one big question: WHY???
That is certainly a topic for speculation.”
Audience Key speculates that Amazon may be withholding their product feed from Google or that this is a technical or strategic change on Amazon’s part.
One thing that we know about Google organic search is that large-scale changes can have a dramatic impact on search visibility. Audience Key has a unique product that is focused on tracking Google’s product grid, something that many ecommerce companies may find useful. They are apparently well-positioned to notice this kind of change.
Read Audience Key’s blog post about these changes:
Google has announced that the CrUX Dashboard, the Looker Studio-based visualization tool for CrUX data, will be retired at the end of November 2025. The reason given for the deprecation is that it was not designed for “wide-scale” use and that Google has developed more scalable alternatives.
Why The CrUX Dashboard Is Being Retired
The CrUX Dashboard was built in Looker Studio to summarize monthly CrUX data. It gained popularity as Core Web Vitals became the de facto standard for how developers and SEOs measured performance.
Behind the scenes, however, the tool struggled to keep up with demand. According to the official Chrome announcement, it suffered “frequent outages, especially around the second Tuesday of each month when new data was published.”
The Chrome team concluded that while the dashboard showed the value of CrUX data, it was not built on the right technology.
Transition To Better Alternatives
To address these issues, Google launched the CrUX History API, which delivered weekly instead of monthly data, allowing more frequent monitoring of trends. The History API was faster and more scalable, leading to adoption by third-party tools.
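For readers who want to try it, here is a minimal sketch of a History API call in Python. The API key and origin are placeholders, and the response handling is simplified, so check Google’s CrUX documentation for the exact payload shape.

```python
# Query the CrUX History API for weekly p75 LCP values for one origin.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: create a key in Google Cloud Console
ENDPOINT = (
    "https://chromeuxreport.googleapis.com/v1/"
    f"records:queryHistoryRecord?key={API_KEY}"
)

payload = {
    "origin": "https://www.example.com",      # hypothetical origin
    "formFactor": "PHONE",
    "metrics": ["largest_contentful_paint"],  # request only LCP history
}

resp = requests.post(ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()

# percentilesTimeseries holds one p75 value per weekly collection period;
# adjust the parsing if the payload shape differs from this sketch.
lcp = resp.json()["record"]["metrics"]["largest_contentful_paint"]
print(lcp["percentilesTimeseries"]["p75s"][-5:])  # last five weeks of p75 LCP
```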
In 2024, Google introduced CrUX Vis, a faster and more scalable visualization tool. Today, in 2025, CrUX Vis receives four to five times more users than the CrUX Dashboard, showing that users are steadily moving to the newer tool.
What The Change Means For Users
Chrome will shut down the CrUX Connector to BigQuery in late November 2025. When this connector is removed, dashboards that depend on it will stop updating. Users who want to keep the old dashboard will need to connect directly to BigQuery with their own credentials. The announcement explains that the CrUX Connector infrastructure is unreliable and requires too much monitoring to maintain, which is why investment has shifted to the History API and CrUX Vis.
Some users have asked Google to postpone the shutdown until 2026, but the announcement makes it clear that this is not an option. Although the dashboard and its connector will be retired, the underlying BigQuery dataset will continue to be updated and supported. Google stated that it sees BigQuery as a valuable, longer-term public dataset.
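For those who do plan to connect directly, a query against the public dataset might look roughly like the sketch below. The table and column names come from the dataset’s materialized summary tables and are worth verifying against the current schema before relying on them.

```python
# Pull monthly p75 Core Web Vitals for one origin straight from BigQuery,
# using your own Google Cloud credentials instead of the retired connector.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # authenticates with your own project

query = """
    SELECT yyyymm, p75_lcp, p75_cls
    FROM `chrome-ux-report.materialized.device_summary`
    WHERE origin = 'https://www.example.com'  -- hypothetical origin
      AND device = 'phone'
    ORDER BY yyyymm DESC
    LIMIT 12  -- the last 12 monthly datasets
"""

for row in client.query(query).result():
    print(row.yyyymm, row.p75_lcp, row.p75_cls)
```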
I’ve been extremely antsy to publish this study. Consider it AIO Usability Study 1.5, with new insights. You’ll also want to stay tuned for our first AI Mode usability study: It’s coming in a few weeks (make sure to subscribe so you don’t miss it).
Since March, everyone’s been asking the same question: “Are AI Overviews killing our conversions?”
Our 2025 usability study gives a clearer answer than the hot takes you’ll see on LinkedIn and X (Twitter).
In May 2025, I published significant findings from the first comprehensive UX study of AI Overviews (AIOs). Today, I’m presenting you with new insights from that study based on a cutting-edge RAG system that analyzed over 100,000 words of transcription.
The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time.
Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.
Net-net: AIO is a preview layer. Blue links still close. Before we dive in, you need to hear these insights from Garrett French, CEO of Xofu, who financed this study:
“What lit me up most from this latest work from Kevin: We have direct insight now into an “anchor pattern” of AIO behavior.
In this usability study, we discovered that users rarely voice distrust of AI Overviews directly – instead they hesitate, refine, or click out.
Therefore, hesitation itself is the loudest signal to us.
We see the same in complex, transition-enabling purchase-committee buying (B2B and B2C): Procurement stalls without lifecycle clarity, engineers stall without specs, IT stalls without validation.
These aren’t complaints. They’re unresolved, unanswered, and even unknown questions that have NEVER shown themselves in KW demand.
As content marketers, we have never held ourselves systematically accountable to answering them.
Customer service logs – as an example of one surface for discovering friction – expose the same hesitations in traceable form through repeated chats, escalations, deployment blocks, etc.
Customer service logs are one surface; AIOs are another.
But the real source of truth is always contextual audience friction.
Answering these friction-inducing, unasked latent questions gives us a way to read those signals and design content that truly moves decisions forward.”
What The Study Actually Found:
Organic results are the most trusted and most consistently successful destination across tasks.
Sponsored results are noticed but actively skipped due to low trust.
In-SERP answers quickly resolved roughly 85% of straightforward factual questions.
Users often use AIO as a preview or shortcut, then click out to finish or validate (on brand sites, YouTube, coupon portals, and the like).
Shopping carousels aid discovery more than closure. Expect reassessment clicks.
Trust splits by stakes: Low-stakes search journeys often end in the AIO, while finance or health pushes people to known authorities like PayPal, NIH, or Mayo Clinic.
Age and device matter. Younger users, especially on smartphones, accept AIOs faster; older cohorts favor blue links and authority domains.
When the AIO is wrong or feels generic, people bail. We logged 12 unique “AIO is misleading/wrong” flags in higher-stakes contexts.
(Interested in diving deeper into the first findings from this study or need a refresher? Read the first full iteration of the UX study of AIOs.)
Why This Matters For The Bottom Line
In my earlier analysis, I argued that top-of-funnel visibility had more downstream impact than our marketing analytics ever credited. I also argued that demand doesn’t just disappear because clicks shrink.
This study’s behavior patterns support that: AIO satisfies quick lookup intent, but purchase intent still routes through external validation and brand trust – aka clicks. Participants in this study shared thoughts aloud, like:
“There’s the AI results, but I’d rather go straight to PayPal’s own site.”
“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”
And that preserves downstream conversions (when you show up in the right places and have earned authority).
Image Credit: Kevin Indig
Deeper Insights: Secondary Findings You Need To See
Recently, I worked with Eric Van Buskirk (the research director of the study) and his team over at Clickstream Solutions to do a deeper analysis of the May 2025 findings.
Using an advanced RAG-driven AI system, we analyzed all 91,559 (!) words of the transcripts from recorded user sessions across 275 task instances.
This is important to understand: We were able to find new insights from this study because Eric has built cutting-edge technology.
Our new RAG system analyzes structured fields like SERP features, AIO satisfaction, and user reactions from transcriptions and annotations. It creates a retrieval layer and uses ChatGPT-5 for semantic search.
The result is faster, more rigorous, and more transparent research. Every claim can be traced to data rows and transcript quotes, patterns are checked across the full dataset, and visual evidence is a query away.
(To sum that all up in plain language: Eric’s custom-built advanced RAG-driven AI system is wildly cool and extremely effective.)
Practical benefits:
Auditable insights: Conclusions map back to exact data slices.
Speed: Test a hypothesis in minutes instead of re-reading sessions.
Scale: Triangulate transcripts, coded fields, and outcomes across all participants.
Fit for the AI era: Clean structure and trustworthy signals mirror how retrieval systems pick sources, which aligns with our broader stance on visibility and trust.
Here’s what we found:
The data verified four distinct AIO Intent Patterns.
Key SERP features drove more engagement than others.
Core brands shape trust in AIOs.
About The New RAG System
We rebuilt the analysis on a retrieval-augmented system so answers come from the study data, not model guesswork. The backbone lives on structured fields with full transcripts and annotations, indexed in a lightweight database and paired with bucketed data for cohort filtering and cross-checks.
Core components (see the sketch after this list):
Dataset ingestion and cleaning.
Retrieval layer based on hybrid keyword + semantic search.
Auto-coded sentiment to turn speech into consistent, queryable signals.
Validation loop to minimize hallucination.
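As a rough illustration of that retrieval layer (not the study’s actual code), a hybrid keyword + semantic search can be sketched in a few lines of Python. The library choices, the sample transcript snippets, and the equal weighting of the two scores are all assumptions.

```python
# Hybrid retrieval: blend BM25 keyword scores with dense-embedding similarity.
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

transcripts = [  # stand-ins for coded transcript rows
    "There's the AI results, but I'd rather go straight to PayPal's own site.",
    "I just had to go straight to the AI overview and I liked that answer.",
    "Mayo Clinic at the top of results, that's where I'd go.",
]

bm25 = BM25Okapi([t.lower().split() for t in transcripts])  # keyword side
model = SentenceTransformer("all-MiniLM-L6-v2")             # semantic side
doc_vecs = model.encode(transcripts, normalize_embeddings=True)

def hybrid_search(query: str, k: int = 2):
    kw = bm25.get_scores(query.lower().split())
    kw = kw / (kw.max() or 1.0)  # scale keyword scores to [0, 1]
    sem = doc_vecs @ model.encode([query], normalize_embeddings=True)[0]
    blended = 0.5 * kw + 0.5 * sem  # assumed equal weighting of both signals
    top = np.argsort(blended)[::-1][:k]
    return [(transcripts[i], float(blended[i])) for i in top]

for text, score in hybrid_search("users who trust official brand sites"):
    print(f"{score:.2f}  {text}")
```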
Which AIO Intent Patterns Were Verified Through The Data
One of the biggest secondary findings from the AIO usability study is that the AIO Intent Patterns aren’t just “gut feelings” anymore – they’re statistically validated, built from measurable behavior.
Before some of you roll your eyes and annoyingly declare “here’s yet another newly created SEO/marketing buzzword,” the patterns we discovered in the data weren’t exactly search personas, and they weren’t exactly search intents, either.
Therefore, we’re using the phrase “AIO Intent Pattern” to distinguish these concepts from one another.
Here’s how I define AIO Intent Patterns: AIO Intent Patterns represent statistically validated clusters of user behavior – like dwell, scroll, refinements, and sentiment – that define how people respond to AIOs. They’re recurring, measurable behaviors that describe how people interact with AI Overviews, whether they accept, validate, compare, or reject them.
And, again, these patterns aren’t exactly search intents or queries, but they’re not exactly user profiles either.
Instead, these patterns represent a set of behaviors (that appeared throughout our data) carried out by users to validate AIOs in different and distinct ways. So that’s why we’ve called the individual behavioral patterns “validations” below.
By running a RAG-driven coding pass across 250+ task instances, we were able to quantify four different behavioral patterns of engagement with AIOs:
Efficiency-first validations that reward clean, extractable facts (accepting of AIOs).
Trust-driven validations that convert only with credibility (validate AIOs).
Comparative validations that use AIOs but compare with multiple sources.
Skeptical rejections that automatically distrust AIOs for high-stakes queries.
What matters most here is that these aren’t arbitrary labels.
Statistical tests showed the differences in dwell time, scrolling, and refinements between the four groups were far too large to be random.
To put it plainly: These are real AIO use behavioral segments or AIO use intents you can plan for.
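The write-up doesn’t name the exact test used, but a Kruskal-Wallis H-test is one standard way to check whether a metric like dwell time differs across four groups by more than chance; the sample values below are invented for illustration only.

```python
# Test whether dwell time differs across the four AIO Intent Patterns.
from scipy.stats import kruskal

dwell_seconds = {  # invented samples, not the study's data
    "efficiency_first": [12, 14, 9, 16, 13],
    "trust_driven":     [55, 61, 49, 58, 60],
    "comparative":      [44, 47, 52, 41, 46],
    "skeptical":        [4, 6, 3, 7, 5],
}

h_stat, p_value = kruskal(*dwell_seconds.values())
# A tiny p-value means the group differences are very unlikely to be random.
print(f"H = {h_stat:.1f}, p = {p_value:.4f}")
```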
Let’s look at each one.
1. Efficiency-First Validations
These are validations where users are seeking a shortcut. Users dip into AIOs for fast fact lookups, skim for one answer, and move on.
Efficiency-driven validations thrive on content that’s concise, scannable, and fact-rich. Typical queries that are resolved directly in the AIO include:
“1 cup in ml”
“how to take a screenshot on Mac”
“UTC to CET converter”
“what is robots.txt”
“email regex example”
Below, you can check out two examples of “efficiency-first validation” task actions from the study.
“Okay, so I like the summary at the top. And I would go ahead and follow these instructions and only come back to a search if they didn’t work.”
“I just had to go straight to the AI overview… and I liked that answer. It gave me the information I needed, organized and clear. Found it.”
Our data shows an average dwell time of just 14 seconds for this group overall, with almost no scrolling or refinements.
Users who have an efficiency-first intent for their queries hold a neutral-to-positive sentiment toward AIOs – with no hesitation flags – because AIOs scratch the efficiency itch quickly.
For this behavioral pattern, the AIO often is the final answer – especially on mobile – and if they do click, it’s usually the first clear, extractable source.
👉 Optimization tips for this validation group:
Compress key facts into crisp TLDRs, FAQs, and schema so AIO can surface them.
Place definitions, checklists, and example blocks near the top of your page.
Use simple tables and step lists that can be lifted cleanly.
Ensure brand mentions and key facts appear high on the page for visibility.
2. Trust-Driven Validations
These validations are full of caution. Users with trust-driven intents engage with AIOs but rarely stop there.
They’ll skim the overview, hesitate, and then click out to an authority domain to validate what they saw, like in this example below:
The user shares that “…at the top, it gave me a really good description on how to transfer money. But I still clicked the PayPal link because it was directly from the official site. That’s what I went with – I trust that information to be more accurate.”
Typical queries that trigger this validation pattern include:
“PayPal buyer protection rules”
“Mayo Clinic strep symptoms”
“Is creatine safe long term”
“Stripe refund timeline”
“GDPR consent requirements example”
And our data from the study verifies that, in trust-driven mode, users scroll more (2.7x on average), dwell longer (~57s), and often flag uncertainty. What they want is authority.
These users have a high rate of hesitation flags in their search experiments. Their sentiment is mixed – often neutral, sometimes anxious or frustrated – and their confidence is only medium to low.
For these searches, the AIO is a starting point, not the destination. They’ll click out to Mayo Clinic, PayPal, Stripe, or other trusted domains to validate.
👉 Optimization tips for this validation group:
Reinforce trust scaffolding on your landing pages: expert reviewers, citations, and last-reviewed dates.
Mirror official terminology and link to primary sources.
Add “What to do next” boxes that align with authority guidance.
Build strong E-E-A-T signals since credibility is the conversion lever here.
3. Comparative Validations
Users in this group actively lean into the AIO for classic comparative queries (think “Ahrefs vs Semrush for content teams”), either to fulfill the comparison directly or to weigh informational resources and get clarity on the “best” of something. They expand, scroll, refine, and use interactive features – but they don’t stop there.
Instead, they explore across multiple sources, hopping to YouTube reviews, Reddit threads, and vendor sites before making a decision.
Example queries that reveal AIO comparative validation behavior:
“Notion vs Obsidian for teams”
“Best mirrorless camera under 1000”
“How to change a bike tire”
“Standing desk benefits vs risks”
“Programmatic SEO examples B2B”
“How to install a nest thermostat”
Here’s an example using a “how to” search, where the user is comparing sources for the best way to receive the most accurate information:
“The AI Overview gave me clear step-by-step instructions that matched what I expected. But since it was a physical DIY task, I still preferred to branch out to watch a video for confirmation.”
On average, searchers looking for comparative validations in the AIO dwell for 45+ seconds, scroll 4-5 times, and often open multiple tabs.
Their AIO sentiment is positive, and their confidence is high, but they still want to compare.
If this feels familiar – like classic transactional or commercial search intents – it’s because it is related.
If you’ve been doing SEO for any time, it’s likely you’ve created some of these “versus” or “comparison” pages. You also have likely created “how to” content with step-by-step how-to guidance, like how to install a flatscreen TV on your wall.
Before AIOs, your target users would find themselves there if you ranked well in search.
But now, the AIO frames the landscape first, and the decision comes after weighing pros and cons across information sources to find the best solution.
👉 Optimization tips for this validation group:
Publish structured comparison pages with decision tables and use-case breakdowns (a decision-table sketch follows this list).
Pair each page with short demo videos, social proof, and credible community posts to echo your takeaways.
Include “Who it is for” and “Who it isn’t for” sections to reduce ambiguity.
Seed content in YouTube and forums that AIOs (and users) can pick up.
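Here’s a minimal sketch of the decision-table idea, generated with Python; the tools, criteria, and verdicts are hypothetical stand-ins for your own comparison:

```python
# Hypothetical decision table for a "Tool A vs Tool B" comparison page.
rows = [
    ("Best for", "Solo creators", "Content teams"),
    ("Pricing model", "Free tier available", "Per-seat subscription"),
    ("Collaboration", "Limited", "Real-time"),
]

# Render a simple HTML table that both users and AIOs can lift cleanly.
print("<table>")
print("  <tr><th>Criterion</th><th>Tool A</th><th>Tool B</th></tr>")
for criterion, tool_a, tool_b in rows:
    print(f"  <tr><td>{criterion}</td><td>{tool_a}</td><td>{tool_b}</td></tr>")
print("</table>")
```

A flat, consistent table like this is far easier for an AIO to extract than prose comparisons buried in paragraphs.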
4. Skeptical Rejections
Searchers with a make-or-break intent? They’re the outright AIO skeptical rejectors.
When stakes are high – health, finance, or legal … the typical YMYL (Your Money or Your Life) stuff – they don’t trust the AIO to get it right.
Users may scan the summary briefly, but they quickly move to authoritative sources like government sites, hospitals, or financial institutions.
Common queries where this rejection pattern shows up:
“Metformin dosage for PCOS”
“How to file taxes as a freelancer in Germany”
“Credit card chargeback rights EU”
“Infant fever when to go to ER”
“LLC vs GmbH legal liability”
For this search intent, dwell time in an AIO is short or nonexistent, and user sentiment often skews negative.
They show determination to bypass the AI layer in favor of direct authority validation.
👉 Optimization tips for this validation group:
Prioritize citations and mentions from highly trusted domains so AIOs lean on you indirectly.
Align your pages with the language and categories used by official sources.
Add explicit disclaimers and clear subheadings to strengthen authority signals (a page scaffold sketch follows this list).
For YMYL topics, focus on being cited rather than surfaced as the final answer.
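As a rough illustration of the disclaimer-plus-subheadings tip, here’s a hypothetical YMYL page scaffold; the wording and section names are placeholders, not guidance from the study:

```python
# Hypothetical YMYL page scaffold: explicit disclaimer plus clear subheadings.
disclaimer = (
    "This article is for general information only and is not medical, "
    "legal, or financial advice. Consult a licensed professional."
)

# Subheadings that mirror how official sources organize the topic.
sections = [
    "What the official guidance says",
    "When to seek professional help",
    "Sources and further reading",
]

print(f'<p class="disclaimer">{disclaimer}</p>')
for heading in sections:
    print(f"<h2>{heading}</h2>")
```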
SERP Features That Drove Engagement
Our RAG-driven AI analysis of the usability data verified that not all SERP features are created equal.
When we cut the data down to only features with meaningful engagement – which our study defined as ≥5 seconds of dwell time across at least 10 instances – only four SERP feature findings stood out.
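For context, that engagement filter is simple to express in code; here’s a rough pandas sketch with hypothetical column names and toy rows (the real study data is, of course, much larger):

```python
import pandas as pd

# Toy event-level data: one row per user interaction with a SERP feature.
events = pd.DataFrame({
    "feature": ["organic", "organic", "snippet", "ads", "carousel"],
    "dwell_seconds": [42.0, 12.5, 8.0, 1.2, 6.3],
})

# Keep only engaged interactions (>= 5 seconds of dwell)...
engaged = events[events["dwell_seconds"] >= 5]

# ...then keep only features with at least 10 such interactions.
counts = engaged["feature"].value_counts()
meaningful = counts[counts >= 10].index.tolist()

# With the full study data, four features cleared this bar.
print(meaningful)
```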
(I’ll give you a moment to take a few wild guesses regarding the outcomes … and then you’ll see if you’re right.)
Drumroll please. 🥁🥁🥁
(Okay, moment over. Here we go.)
1. Organic Results Are Still The Backbone
Whenever our study participants gave the classic blue links more than a passing glance, they almost always found success.
Transcripts from the study make it explicit: Users trusted official sites, government domains, and familiar authority brands, as one participant’s quote demonstrates:
“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”
What about social or community sites that showed up in the organic blue-link results?
Reddit and YouTube were the social or community platforms found in the SERP that were mentioned most by study participants.
Reddit had 45 unique mentions across the entire study. Sentiment toward Reddit results in the organic listings was mostly positive, with some neutral reactions and very few negative comments.
YouTube had 20 unique mentions across the entire study. Sentiment toward YouTube results was overwhelmingly positive (19 of the 20 instances), with flagged emotions including happy/satisfied and curious/exploring.
A very clear theme appeared whenever social or community sites popped up in organic results:
Reddit was invoked when participants wanted community perspective, usually in comparison tasks. Confidence was high because Reddit validated nuance, but AIO trust was weak (users bypassed AIOs to Reddit instead).
YouTube was used as a visual validator, especially in product or technical comparison tasks. Users expressed positive sentiment and high satisfaction, even when explicit trust wasn’t verbalized. They treated YouTube as a natural step after the AIOs/organic SERP results.
2. Sponsored Results Barely Register
People saw them, but rarely acted on them. “I don’t like going to sponsored sites” was a common refrain.
High visibility, but low trust.
3. Shopping Carousels Aid Discovery But Not Closure
Participants clicked into Shopping carousels for product ideas, but often bounced back out to reassess with external sites.
The carousel works as a catalog – not a closer.
4. Featured Snippets Continue To Punch Above Their Weight
For straightforward factual lookups, snippets had an ~85% engagement success rate.
They were efficient and final for fact-based queries like [example] and [example].
⚠️ Important note: Even though Google is replacing Featured Snippets with AIOs, this method of receiving information within the SERP clearly drives high engagement. While the feature may be on its way out, the data shows users like engaging with snippets. The takeaway: if you often appeared in featured snippets and now often appear in AIO citations, keep up the good work, because that visibility still matters.
SERP Features x AIO Intent Patterns
Layering the intent patterns onto different persona groups brings the search behaviors into sharper focus:
Younger users on mobile leaned heavily on AIO and snippets, often stopping there if the stakes were low. → That’s the hallmark of efficiency-first validations (quick fact lookups) and comparative validations (scrolling, refining, and treating AIO as the main lens).
Older users consistently bypassed AI elements in favor of organic authority results. → This is classic behavior for trust-driven validations, when users click out to brands like PayPal or the Mayo Clinic, and skeptical rejections, when users distrust AIO altogether for high-stakes tasks.
Transactional queries – money, health, booking – nearly always pushed people toward trusted brands, regardless of what AIO or ads surfaced. → This connects directly to trust-driven validations (users who need authority reinforcement to fulfill their search intent) and skeptical rejections (users who reject AIO in YMYL contexts because AIOs don’t meet the intent behind the behavior).
What this shows is that, for SEOs, the priority isn’t about chasing every feature and “winning them all.”
Take this as an example:
“The AI overview didn’t pop up, so I used the search results. These were mostly weird websites, but CNBC looked trustworthy. They had a comparison of different platforms like CardCash and GCX, so I went with CNBC because they’re a trusted source.”
Your job is to match intent (as always):
Earn extractable presence in AIOs for quick facts,
Reinforce trust scaffolding on authority-driven organic pages, and
Treat Shopping and Sponsored slots as visibility and awareness plays rather than conversion levers.
Which Brands Shaped Trust In AIOs
AIOs don’t stand on their own; they borrow credibility from the brands they surface – whether you like it or not.
Emerging platforms (Raise, CardCash, GameFlip, Kade Pay) gained traction primarily because an AIO surfaced them, not because of prior awareness.
👉 Why it matters: Brand trust is the glue between AIO exposure and user action.
Here’s a quick paraphrase of one user’s exploration: We’re looking for places to sell gift cards for instant payment. Platforms like Raise, Gift Card Granny, or CardCash come up. On CardCash, I tried a $10 7-Eleven card, and the offer was $8.30. So they ‘tax’ you for selling. That’s good to know – but it shows you can sell gift cards for cash, and CardCash is one option.
In this instance, the AIO surfaced CardCash. The user didn’t know about it before this search. They explored it in detail, but trust friction (“they tax you”) shaped whether they’d actually use it.
For SEOs, this means three plays running in tandem:
Win mentions in AIOs by ensuring your content is structured, scannable, and extractable.
Strengthen authority off-site so when users validate (or reject the AIO), they land on your pages with confidence.
Build topical authority in your niche through comprehensive persona-based topic coverage and valuable information gain across your topics. (This can be a powerful entry point or opportunity for teams competing against larger brands.)
What does this all mean for your own tactical optimizations? Here’s the most crucial takeaway from this analysis:
With this information in mind, you can now go to your stakeholders and guide them to look at all your prompts, queries, and topics with fresh eyes.
You need to determine:
Which of the target queries/topics are quick answers?
Which of the target queries/topics are instances where people need more trust and assurance?
When do your ideal users expect to explore more, based on the target queries/topics?
This will help you set expectations accordingly and measure success over time.
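One way to operationalize that triage is a simple intent map that pairs each query bucket with the metric that matters for it. This sketch is hypothetical – the buckets echo the validation patterns above, but the example queries and KPI choices are illustrative, not prescriptions from the study:

```python
# Hypothetical intent map: query buckets from the validation patterns,
# each paired with an illustrative success metric.
intent_map = {
    "quick_answer": {
        "example_queries": ["how many ounces in a cup"],
        "kpi": "AIO citation / snippet presence",
    },
    "trust_validation": {
        "example_queries": ["PayPal buyer protection rules"],
        "kpi": "clicks from the AIO through to your authority pages",
    },
    "comparative_exploration": {
        "example_queries": ["Notion vs Obsidian for teams"],
        "kpi": "multi-touch engagement across SERP, video, and community",
    },
    "skeptical_rejection": {
        "example_queries": ["infant fever when to go to ER"],
        "kpi": "citations and mentions on trusted domains",
    },
}

for bucket, details in intent_map.items():
    print(bucket, "->", details["kpi"])
```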
Featured Image: Paulo Bobita/Search Engine Journal