Google’s Mueller Advises Testing Ecommerce Sites For Agentic AI via @sejournal, @martinibuster

Google’s John Mueller re-posted the results of an experiment that tested whether ecommerce sites are accessible to AI agents, commenting that it may be useful to check whether your ecommerce site works for AI agents that shop on behalf of actual customers.

AI Agent Experiment On Ecommerce Sites

Malte Polzin posted commentary on LinkedIn about an experiment he ran to test whether the top 50 Swiss ecommerce sites were open for business to users shopping online with ChatGPT agents.

He reported that most of the ecommerce stores were accessible to ChatGPT’s AI agent, but found that some were not, for a few reasons.

Reasons Why ChatGPT’s AI Agent Couldn’t Shop

  • CAPTCHA challenges prevented ChatGPT’s AI agent from shopping
  • Cloudflare’s Turnstile tool, a CAPTCHA alternative, blocked access
  • A maintenance page blocked access to the store
  • Bot defenses blocked access

Google’s John Mueller Offers Advice

Google’s John Mueller recommended checking whether your ecommerce store is open for business to shoppers who use AI agents, as it may become more common for users to employ agentic search for online shopping.

He wrote:

“Pro tip: check your ecommerce site to see if it works for shoppers using the common agents. (Or, if you’d prefer they go elsewhere because you have too much business, maybe don’t.)

Bot-detection sometimes triggers on users with agents, and it can be annoying for them to get through. (Insert philosophical discussion on whether agents are more like bots or more like users, and whether it makes more sense to differentiate by actions rather than user-agent.)”

Should SEOs Add Agentic AI Testing To Site Audits?

SEOs may want to consider adding agentic AI accessibility to their site audits for ecommerce sites. There may be other use cases where an AI agent needs to fill out forms, for example on a local services website.
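One simple first check is to confirm that your robots.txt isn’t turning away the documented agent user agents. Below is a minimal sketch assuming OpenAI’s published user-agent tokens (ChatGPT-User for user-initiated fetches and OAI-SearchBot for its search crawler); the rules are illustrative and your setup will differ:

    # robots.txt (hypothetical example)
    # Allow OpenAI's user-driven agent and search crawler
    User-agent: ChatGPT-User
    Allow: /

    User-agent: OAI-SearchBot
    Allow: /

Keep in mind that robots.txt only governs bots that honor it. CAPTCHAs, Cloudflare Turnstile, maintenance pages, and bot-defense rules, the blockers found in the experiment above, operate at other layers and need to be reviewed in your bot-management tooling.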

Which Marketing Jobs Are Most Affected by AI? via @sejournal, @MattGSouthern

New research from Microsoft reveals that marketing and sales professionals are among the most affected by generative AI, based on an analysis of 200,000 real workplace conversations with Bing Copilot.

The research examined nine months of anonymized data from January to September 2024, offering a large-scale look at how professionals use AI in their daily tasks.

AI’s Role In Marketing & Sales Work

Microsoft calculated an “AI applicability score” to measure how often AI is used to complete or assist with job-related tasks and how effectively it performs those tasks.

Sales representatives received one of the highest scores (0.46), followed closely by writers and authors (0.45), customer service representatives (0.44), and other marketing-related roles like:

  • Technical Writers (0.38)
  • Public Relations Specialists (0.36)
  • Advertising Sales Agents (0.36)
  • Market Research Analysts (0.35)

Overall, “Sales and Related” occupations ranked highest in AI impact across all major job categories, followed by computing and administrative roles.

As Microsoft researchers note:

“The current capabilities of generative AI align most strongly with knowledge work and communication occupations.”

Tasks Where AI Performs Well

The study found AI is particularly effective at:

  • Gathering information
  • Writing and editing content
  • Communicating information to others
  • Supporting ongoing learning in a specific field

These tasks often show high success and satisfaction rates among users.

However, the study also found that in 40% of conversations, the AI performed tasks that differed from what the user initially requested. For example, when someone asked for help with research, the AI might explain research methods rather than deliver the information itself.

This reflects AI’s role as more of a helper than a replacement. As the researchers put it:

“The AI often acts in a service role to the human as a coach, advisor, or teacher.”

Areas Where Humans Still Excel

Some marketing tasks remain resistant to AI. These include:

  • Visual design and creative work
  • Strategic data analysis
  • Roles that require physical presence or in-person interaction, such as event marketing or client-based sales

These activities consistently scored lower for AI satisfaction and task completion.

Education, Wages & Job Security

The study found a weak correlation between AI impact and wages. The correlation coefficient was 0.07, indicating that AI is reshaping tasks across income levels, not just automating low-paying jobs.

For roles requiring a Bachelor’s degree, the average AI applicability score was slightly higher (0.27), compared to 0.19 for jobs with lower education requirements. This suggests knowledge work may see more AI involvement, but not necessarily replacement.

The researchers caution against assuming automation leads to job loss:

“This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive.”

What You Can Do

The data supports a clear takeaway: AI is here to stay, but it’s not taking over every aspect of marketing work.

Digital anthropologist Giles Crouch, quoted in coverage of the study, said:

“The conversation has gone from this fear of massive job loss to: How can we get real benefit from these tools? How will it make our work better?”

There are a few ways marketing professionals can adapt, such as:

  • Sharpening skills in areas where AI falls short, such as visual creativity and strategic interpretation
  • Using AI as a productivity booster for content creation and information gathering
  • Positioning themselves as AI collaborators rather than competitors

Looking Ahead

AI is reshaping marketing by changing how work gets done, not by eliminating roles.

As with past technological changes, those who adapt and integrate these tools into their workflow may find themselves better positioned for long-term success.

The full report includes a detailed breakdown of occupations and task types across the U.S. workforce.


Featured Image: Roman Samborskyi/Shutterstock

Google Warns: CSS Background Images Aren’t Indexed via @sejournal, @MattGSouthern

In a recent Search Off the Record podcast, Google’s Search Relations team cautioned developers against using CSS for all website images.

While CSS background images can enhance visual design, they’re invisible to Google Image Search. This could lead to missed opportunities in image indexing and search visibility.

Here’s what Google’s Search Advocates advise.

The CSS Image Problem

During the episode, John Mueller shared a recurring issue:

“I had someone ping me I think last week or a week before on social media: “It looks like my developer has decided to use CSS for all of the images because they believe it’s better.” Does this work?”

According to the Google team, this approach stems from a misunderstanding of how search engines interpret images.

When visuals are added via CSS background properties instead of standard HTML image tags, they may not appear in the page’s DOM, and therefore can’t be indexed.

As Martin Splitt explained:

“If you have a content image, if the image is part of the content… you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like ah so this page has this image that is not just decoration. It is part of the content and then image search can pick it up.”

Content vs. Decoration

The difference between a content image and a decorative image is whether it adds meaning or is purely cosmetic.

Decorative images, such as patterned backgrounds, atmospheric effects, or animations, can be safely implemented using CSS.

When the image conveys meaning or is referenced in the content, CSS is a poor fit.

Splitt offered the following example:

“If I have a blog post about this specific landscape and I want to like tell people like look at this amazing panoramic view of the landscape here and then it’s a background image… the problem is the content specifically references this image, but it doesn’t have the image as part of the content.”

In such cases, placing the image in HTML using the img or picture tag ensures it’s understood as part of the page’s content and eligible for indexing in Google Image Search.
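As a minimal sketch (the file names, class name, and alt text are illustrative), the difference looks like this:

    <!-- Content image: present in the DOM and eligible for Google Image Search -->
    <img src="/images/alpine-panorama.jpg"
         alt="Panoramic view of the alpine landscape described in the post">

    <!-- Decorative image: applied via CSS, invisible to image indexing -->
    <div class="hero" style="background-image: url('/images/texture.png')">
      ...
    </div>

The first image is part of the content and can be indexed; the second exists only as styling and is treated as decoration.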

What Makes CSS Images Invisible?

Splitt explained why this happens:

“For a user looking at the browser, what are you talking about, Martin? The image is right there. But if you look at the DOM, it absolutely isn’t there. It is just a CSS thing that has been loaded to style the page.”

Because Google parses the DOM to determine content structure, images styled purely through CSS are often overlooked, especially if they aren’t included as actual HTML elements.

This distinction reflects a broader web development principle.

Splitt adds:

“There is ideally a separation between the way the site looks and what the content is.”

What About Stock Photos?

The team addressed the use of stock photos, which are sometimes added for visual appeal rather than original content.

Splitt said:

“The meaning is still like this image is not mine. It’s a stock image that we bought or licensed but it is still part of the content.”

While these images may not rank highly due to duplication, implementing them in HTML still helps ensure proper indexing and improves accessibility.

Why This Matters

The team highlighted several examples where improper implementation could reduce visibility:

  • Real estate listings: Home photos used as background images won’t show up in relevant image search queries.
  • News articles: Charts or infographics added via CSS can’t be indexed, weakening discoverability.
  • E-commerce sites: Product images embedded in background styles may not appear in shopping-related searches.

What To Do Next

Google’s comments indicate that you should follow these best practices:

  • Use HTML img or picture tags for any image that conveys content or is referenced on the page.
  • Reserve CSS backgrounds for decorative visuals that don’t carry meaning.
  • Keep any image users might expect to find via search in the HTML.
  • Remember that proper implementation helps not only with SEO, but also with accessibility tools and screen readers.

Looking Ahead

Publishers should be mindful of how images are implemented.

While CSS is a powerful tool for design, using it to deliver content-related images may conflict with best practices for indexing, accessibility, and long-term SEO strategy.

Featured Image: Roman Samborskyi/Shutterstock

Google On Balancing Needs Of Users And The Web Ecosystem via @sejournal, @martinibuster

At the recent Search Central Live Deep Dive 2025, Kenichi Suzuki asked Google’s Gary Illyes how Google measures the quality and user satisfaction of traffic from AI Overviews. Illyes’ response, published by Suzuki on LinkedIn, covered multiple points.

Kenichi asked for specific data, and Gary’s answer offered an overview of how Google gathers external data to form internal opinions on how users perceive AI Overviews in terms of satisfaction. He said that the data informs public statements by Google, including those made by CEO Sundar Pichai.

Illyes began his answer by saying that he couldn’t share specifics about the user satisfaction data, but he went on to offer an overview.

User Satisfaction Surveys

The first data point that Illyes mentioned was user satisfaction surveys to understand how people feel about AI Overviews. Kenichi wrote that Illyes said:

“The public statements made by company leaders, such as Sundar Pichai, are validated by this internal data before being made public.”

Observed User Behavior

The second user satisfaction data point that Illyes mentioned was inferring from the broader market. Kenichi wrote:

“Gary suggested that one can infer user preference by looking at the broader market. He pointed out that the rapidly growing user base for other AI tools (like ChatGPT and Copilot) likely consists of the same demographic that enjoys and finds value in AI Overviews.”

Motivated By User-Focus

This point frames putting the user first as the motivation for introducing a new feature. Illyes specifically said that causing a disruption is not Google’s motivation for AI search features.

Acknowledged The Web Ecosystem

The last point he made was to explain that Google’s still figuring out how to balance their user-focused approach with the need to maintain a healthy web ecosystem.

Kenichi wrote that Illyes said:

“He finished by acknowledging that they are still figuring out how to balance this user-focused approach with the need to continue supporting the wider web ecosystem.”

Balancing The Needs Of The Web Ecosystem

At the dawn of modern SEO, Google did something extraordinary: they reached out to web publishers through the most popular SEO forum at the time, WebmasterWorld. Gary Illyes himself, before he joined Google, was a WebmasterWorld member. This outreach by Google was the initiative of one Googler, Matt Cutts. Other Googlers provided interviews, but Matt Cutts, under the WebmasterWorld nickname of GoogleGuy, held two-way conversations with the search and publisher community.

This is no longer the case at Google, which is largely back to one-way communication accompanied by intermittent social media outreach.

The SEO community may share in the blame for this situation, as some SEOs post abusive responses on social media. Fortunately, those people are in the minority, but that behavior nonetheless puts a chill on the few opportunities provided to have a constructive dialogue.

It’s encouraging to hear Illyes mention the web ecosystem, and it would be even more encouraging to hear Googlers, including the CEO, focus on how they intend to balance the needs of users with those of the creators who publish content, because many feel that Google’s current direction is not sustainable for publishers.

Featured Image by Shutterstock/1000 Words

Why A Site Deindexed By Google For Programmatic SEO Bounced Back via @sejournal, @martinibuster

A company founder shared their experience with programmatic SEO, which they credited for initial success until it was deindexed by Google, calling it a big mistake they won’t repeat. The post, shared on LinkedIn, received scores of supportive comments.

The website didn’t receive a manual action; Google deindexed the web pages due to poor content quality.

Programmatic SEO (pSEO)

Programmatic SEO (aka pSEO) is a phrase that encompasses a wide range of tactics with automation at their heart. Some of it can be very useful, like automating sitewide meta descriptions, titles, and alt text for images.

pSEO is also the practice of using AI automation to scale content creation sitewide, which is what this person did. They created fifty thousand pages targeting long-tail phrases (phrases that are not commonly queried). The site initially received hundreds of clicks and millions of impressions, but the success was not long-lived.

According to the post by Miquel Palet (LinkedIn Profile):

“Google flagged our domain. Pages started getting deindexed. Traffic plummeted overnight.

We learned the hard way that shortcuts don’t scale sustainably.

It was a huge mistake, but also a great lesson.

And it’s one of the reasons we rebranded to Tailride.”

Thin AI Content Was The Culprit

A follow-up post explained that they believe the AI-generated content backfired because it was thin content, which makes sense. Thin content, regardless of how it was authored, can be problematic.

One of the posts by Palet explained:

“We’re not sure, but probably not because AI. It was thin content and probably duplicated.”

Rasmus Sørensen (LinkedIn profile), an experienced digital marketer, shared his opinion that some marketers are pushing shady practices under the banner of pSEO:

“Thanks for sharing and putting some real live experiences forward. Programmatic SEO had been touted as the next best thing in SEO. It’s not and I’ve seen soo much garbage published the last few months and agencies claiming that their pSEO is the silver bullet.
It very rarely is.”

Joe Youngblood (LinkedIn profile) shared that SEO trends can be abused and implied that it is a viable strategy if done correctly:

“I would always do something like pSEO under the supervision of a seasoned SEO consultant. This tale happens all too frequently with an SEO trend…”

What They Did To Fix The Site

The company founder shared that they rebranded the website on a new domain, redirected the old domain to the new one, and focused the site on higher-quality content that’s relevant to users.

They explained:

“Less pages + more quality”

A site: search for their domain shows that Google is now indexing their content, indicating that they are back on track.

Takeaways

Programmatic SEO can be useful if approached with an understanding of where the line is between good quality and “not-quality” content.

Featured Image by Shutterstock/Cast Of Thousands

Why Is SureRank WordPress SEO Plugin So Popular? via @sejournal, @martinibuster

A new SEO plugin called SureRank, by Brainstorm Force, makers of the popular Astra theme, is rapidly growing in popularity. In beta for a few months, it was announced in July and has amassed over twenty thousand installations. That’s a pretty good start for an SEO plugin that has only been out of beta for a few weeks.

One possible reason that SureRank is quickly becoming popular is that it’s created by a trusted brand, much loved for its Astra WordPress theme.

SureRank By Brainstorm Force

SureRank is the creation of the publisher of many highly popular plugins and themes installed on millions of websites, such as the Astra theme, Ultimate Addons for Elementor, Spectra Gutenberg Blocks – Website Builder for the Block Editor, and Starter Templates – AI-Powered Templates for Elementor & Gutenberg, to name a few.

Why Another SEO Plugin?

The goal of SureRank is to provide an easy-to-use SEO solution that includes only the features every site needs, avoiding feature bloat. It positions itself as an SEO assistant that guides the user through an intuitive user interface.

What Does SureRank Do?

SureRank has an onboarding process that walks a user through the initial optimizations and setup. It then performs an analysis and offers suggestions for site-level improvements.

It currently enables users to handle the basics like:

  • Edit titles and meta descriptions
  • Write custom social media titles, descriptions, and featured images
  • Tweak home page and archive page metadata
  • Set meta robots directives, canonicals, and sitemaps
  • Add schema structured data
  • Run site-level and page-level SEO analysis
  • Generate image alt text automatically
  • Integrate with Google Search Console
  • Integrate with WooCommerce

SureRank also provides a built-in tool for migrating settings from other popular SEO plugins like Rank Math, Yoast, and AIOSEO.

Check out the SureRank SEO plugin at the official WordPress.org repository:

SureRank – SEO Assistant with Meta Tags, Social Preview, XML Sitemap, and Schema

Featured Image by Shutterstock/Roman Samborskyi

Google Confirms CSS Class Names Don’t Influence SEO via @sejournal, @MattGSouthern

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller clarified how CSS affects SEO.

While some aspects of CSS have no bearing on SEO, others can directly influence how search engines interpret and rank content.

Here’s what matters and what doesn’t.

Class Names Don’t Matter For Rankings

One of the clearest takeaways from the episode is that CSS class names have no impact on Google Search.

Splitt stated:

“I don’t think it does. I don’t think we care because the CSS class names are just that. They’re just assigning a specific somewhat identifiable bit of stylesheet rules to elements and that’s it. That’s all. You could name them all “blurb.” It would not make a difference from an SEO perspective.”

Class names, they explained, are used only for applying visual styling. Because they’re not considered part of the page’s content, they’re ignored by Googlebot and other HTML parsers when extracting meaningful information.

Even if you’re feeding HTML into a language model or a basic crawler, class names won’t factor in unless your system is explicitly designed to read those attributes.

Why Content In Pseudo Elements Is A Problem

While class names are harmless, the team warned against placing meaningful content in CSS pseudo-elements like ::before and ::after.

Splitt stated:

“The idea again—the original idea—is to separate presentation from content. So content is in the HTML, and how it is presented is in the CSS. So with before and after, if you add decorative elements like a little triangle or a little dot or a little light bulb or like a little unicorn—whatever—I think that is fine because it’s decorative. It doesn’t have meaning in the sense of the content. Without it, it would still be fine.”

Adding visual flourishes is acceptable, but inserting headlines, paragraphs, or any user-facing content into pseudo-elements breaks a core principle of web development: separating presentation from content.

That content becomes invisible to search engines, screen readers, and any other tools that rely on parsing the HTML directly.

Mueller shared a real-world example of how this can go wrong:

“There was once an escalation from the indexing team that said we should contact the site and tell them to stop using before and after… They were using the before pseudo class to add a number sign to everything that they considered hashtags. And our indexing system was like, it would be so nice if we could recognize these hashtags on the page because maybe they’re useful for something.”

Because the hashtag symbols were added via CSS, they were never seen by Google’s systems.

Splitt tested it live during the recording and confirmed:

“It’s not in the DOM… so it doesn’t get picked up by rendering.”
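A minimal sketch of the pattern Mueller describes (the class name and markup are illustrative): the visible “#” lives only in the stylesheet, so any system parsing the HTML never encounters it.

    <style>
      /* The "#" is drawn at render time; it never appears in the DOM text */
      .hashtag::before { content: "#"; }
    </style>

    <!-- Parsers see only "seo", while users see "#seo" -->
    <span class="hashtag">seo</span>

Moving the character into the HTML itself as a real text node would make it visible to indexing systems.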

Oversized CSS Can Hurt Performance

The episode also touched on performance issues related to bloated stylesheets.

According to data from the HTTP Archive’s 2022 Web Almanac, the median size of a CSS file had grown to around 68 KB for mobile and 72 KB for desktop.

Mueller stated:

“The Web Almanac says every year we see CSS grow in size, and in 2022 the median stylesheet size was 68 kilobytes or 72 kilobytes. … They also mentioned the largest one that they found was 78 megabytes. … These are text files.”

That kind of bloat can negatively impact Core Web Vitals and overall user experience, which are two areas that do influence rankings. Frameworks and prebuilt libraries are often the cause.

While developers can mitigate this with minification and unused rule pruning, not everyone does. This makes CSS optimization a worthwhile item on your technical SEO checklist.

Keep CSS Crawlable

Despite CSS’s limited role in ranking, Google still recommends making CSS files crawlable.

Mueller joked:

“Google’s guidelines say you should make your CSS files crawlable. So there must be some kind of magic in there, right?”

The real reason is more technical than magical. Googlebot uses CSS files to render pages the way users would see them.

Blocking CSS can affect how your pages are interpreted, especially for layout, mobile-friendliness, or elements like hidden content.
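In practice, a common culprit is a robots.txt rule that blocks asset directories. A minimal sketch, with hypothetical paths, of keeping stylesheets fetchable:

    # robots.txt (hypothetical example; paths vary by site)
    User-agent: *
    Allow: /assets/css/

    # A blanket rule like the following would also block stylesheets:
    # Disallow: /assets/

If stylesheets are blocked, Googlebot renders the page without them, which can distort how layout and mobile-friendliness are evaluated.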

Practical Tips For SEO Pros

Here’s what this episode means for your SEO practices:

  • Stop optimizing class names: Keywords in CSS classes won’t help your rankings.
  • Check pseudo-elements: Any real content, like text meant to be read, should live in HTML, not in ::before or ::after.
  • Audit stylesheet size: Large CSS files can hurt page speed and Core Web Vitals. Trim what you can.
  • Ensure CSS is crawlable: Blocking stylesheets may disrupt rendering and impact how Google understands your page.

The team also emphasized the importance of using proper HTML tags for meaningful images:

“If the image is part of the content and you’re like, ‘Look at this house that I just bought,’ then you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like, ah, so this page has this image that is not just decoration.”

Use CSS for styling and HTML for meaning. This separation helps both users and search engines.


ChatGPT Appears To Use Google Search As A Fallback via @sejournal, @martinibuster

Aleyda Solís conducted an experiment to test how quickly ChatGPT could find a newly published web page and unexpectedly discovered that ChatGPT appears to use Google’s search results as a fallback for web pages that it cannot access or that are not yet indexed by Bing.

According to Aleyda:

“I’ve run a simple but straightforward to follow test that confirms the reliance of ChatGPT on Google SERPs snippets for its answers.”

Created A New Web Page, Not Yet Indexed

Aleyda created a brand-new page (titled “LLMs.txt Generators”) on her website, LearningAISearch.com. She immediately tested ChatGPT (with web search enabled) to see if it could access or locate the page, but ChatGPT failed to find it, responding with the suggestion that the URL was not publicly indexed or possibly outdated.

She then asked Google Gemini about the web page, which successfully fetched and summarized the live page content.

Submitted Web Page For Indexing

She next submitted the web page for indexing via Google Search Console and Bing Webmaster Tools. Google successfully indexed the web page, but Bing had problems with it.

After several hours elapsed, Google started showing results for the page via the site: operator and a direct search for the URL, but Bing continued to have trouble indexing the page.

Checked ChatGPT Until It Used Google Search Snippet

Aleyda went back to ChatGPT, and after several tries it gave her an incomplete summary of the page content, mentioning just one tool that was listed on it. When she asked ChatGPT for the origin of that incomplete snippet, it responded that it was using a “cached snippet via web search,” likely from “search engine indexing.”

She confirmed that the snippet shown by ChatGPT matched Google’s search result snippet, not Bing’s (which still hadn’t indexed it).

Aleyda explained:

“A snippet from where?

When I followed up asking where was that snippet they grabbed the information being shown, the answer was that it had “located a cached snippet via web search that previews the page content – likely from search engine indexing.”

But I knew the page wasn’t indexed yet in Bing, so it had to be … Google search results? I went to check.

When I compared the text snippet provided by ChatGPT vs the one shown in Google Search Results for the specific Learning AI Search LLMs.txt Generators page, I could confirm it was the same information…”

Not An Isolated Incident

Aleyda’s article on her finding (Confirmed: ChatGPT uses Google SERP Snippets for its Answers [A Test with Proof]) links to someone else’s web page that summarizes a similar experience where ChatGPT used a Google snippet. So she’s not the only one to experience this.

Proof That Traditional SEO Remains Relevant For AI Search

Aleyda also documented what happened in a LinkedIn post, where Kyle Atwater Morley shared his observation:

“So ChatGPT is basically piggybacking off Google snippets to generate answers?

What a wake-up call for anyone thinking traditional SEO is dead.”

Stéphane Bureau shared his opinion on what’s going on:

“If Bing’s results are insufficient, it appears to fall back to scraping Google SERP snippets.”

He elaborated on his post with more details later on in the discussion:

“Based on current evidence, here’s my refined theory:

When browsing is enabled, ChatGPT sends search requests via Bing first (as seen in DevTools logs).

However, if Bing’s results are insufficient or outdated, it appears to fall back to scraping Google SERP snippets—likely via an undocumented proxy or secondary API.

This explains why some replies contain verbatim Google snippets that never appear in Bing API responses.

I’ve seen multiple instances that align with this dual-source behavior.”

Takeaway

ChatGPT was initially unable to access the page directly, and it was only after the page began to appear in Google’s search results that it was able to respond to questions about the page. Once the snippet appeared in Google’s search results, ChatGPT began referencing it, revealing a reliance on publicly visible Google Search snippets as a fallback when the same data is unavailable in Bing.

It would be interesting to see whether the server logs hold a clue as to whether ChatGPT attempted to crawl the page and, if so, what error code was returned. It’s curious that ChatGPT was unable to retrieve the page; although that probably has no bearing on the conclusions, having that last bit of information would make them feel more complete.

Nevertheless, this appears to be yet more proof that standard SEO is still applicable for AI-powered search, including ChatGPT Search. It adds to recent comments by Gary Illyes confirming that there is no need for specialized GEO or AEO in order to rank well in Google AI Overviews and AI Mode.

Featured Image by Shutterstock/Krakenimages.com

Validity of Pew Research On Google AI Search Results Challenged via @sejournal, @martinibuster

Questions about the methodology used by the Pew Research Center suggest that its conclusions about Google’s AI summaries may be flawed. Facts about how AI summaries are created, the sample size, and statistical reliability challenge the validity of the results.

Google’s Official Statement

A spokesperson for Google reached out with an official statement and a discussion about why the Pew research findings do not reflect actual user interaction patterns related to AI summaries and standard search.

The main points of Google’s rebuttal are:

  • Users are increasingly seeking out AI features
  • They’re asking more questions
  • AI usage trends are increasing visibility for content creators
  • The Pew research used flawed methodology

Google shared:

“People are gravitating to AI-powered experiences, and AI features in Search enable people to ask even more questions, creating new opportunities for people to connect with websites.

This study uses a flawed methodology and skewed queryset that is not representative of Search traffic. We consistently direct billions of clicks to websites daily and have not observed significant drops in aggregate web traffic as is being suggested.”

Sample Size Is Too Low

I discussed the Pew Research with Duane Forrester (formerly of Bing, LinkedIn profile), and he suggested that the sample size of the research was too low to be meaningful (900+ adults and 66,000 search queries). Duane shared the following opinion:

“Out of almost 500 billion queries per month on Google and they’re extracting insights based on 0.0000134% sample size (66,000+ queries), that’s a very small sample.

Not suggesting that 66,000 of something is inconsequential, but taken in the context of the volume of queries happening on any given month, day, hour or minute, it’s very technically not a rounding error and were it my study, I’d have to call out how exceedingly low the sample size is and that it may not realistically represent the real world.”

How Reliable Are Pew Center Statistics?

The Methodology page for the statistics lists how reliable they are for the following age groups:

  • Ages 18–29: plus or minus 13.7 percentage points, a low level of reliability.
  • Ages 30–49: plus or minus 7.9 percentage points, moderately reliable, but still a fairly wide range.
  • Ages 50–64: plus or minus 8.9 percentage points, a moderate-to-low level of reliability.
  • Ages 65+: plus or minus 10.2 percentage points, firmly in the low range of reliability.

The above reliability scores are from Pew Research’s Methodology page. Overall, all of these results have a high margin of error, making them statistically unreliable. At best, they should be seen as rough estimates, although as Duane says, the sample size is so low that it’s hard to justify it as reflecting real-world results.
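For context, the margin of error for a simple random sample shrinks with the square root of the sample size, which is why these age-group estimates are so much wider than a figure based on all respondents would be. Using the standard approximation at a 95% confidence level (an illustration that assumes simple random sampling, which understates the error of weighted survey designs):

    MOE = z * sqrt( p(1 - p) / n ),  with z = 1.96

Taking the worst case p = 0.5, a margin of plus or minus 13.7 points implies an effective subgroup of only about (1.96 / 0.137)^2 * 0.25, roughly 51 respondents, which illustrates how thin the data behind the age-group breakdowns is.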

Pew Research Compares Results From Different Months

After reviewing the methodology, one aspect that stood out is that the researchers compared actual user search queries from the month of March with the same queries that they themselves ran during one week in April.

That’s problematic because Google’s AI summaries change from month to month. For example, the kinds of queries that trigger an AI Overview change, with AIOs becoming more prominent for certain niches and less so for other topics. Additionally, user trends may affect what gets searched, which itself could trigger a temporary freshness update to the search algorithms that prioritizes videos and news.

The takeaway is that comparing search results from different months is problematic for both standard search and AI summaries.

Pew Research Ignores That AI Search Results Are Dynamic

With respect to AI Overviews and summaries, these are even more dynamic, subject to change not just between users but for the same user.

Searching for a query in AI Overviews and then repeating the query in an entirely different browser will result in a different AI summary and a completely different set of links.

The point is that the Pew Research Center’s methodology of comparing user queries with scraped queries a month later is flawed because the two sets of queries and results cannot be compared; each is inherently different because of time, updates, and the dynamic nature of AI summaries.

The following screenshots show the links returned for the query “What is the RLHF training in OpenAI?”

Google AIO Via Vivaldi Browser

Screenshot shows links to Amazon Web Services, Medium, and Kili Technology

Google AIO Via Chrome Canary Browser

Screenshot shows links to OpenAI, Arize AI, and Hugging Face

Not only are the links on the right-hand side different, but the AI summary content and the links embedded within it are also different.

Could This Be Why Publishers See Inconsistent Traffic?

Publishers and SEOs are used to static ranking positions in search results for a given query, but Google’s AI Overviews and AI Mode show dynamic search results: both the content and the links that appear change, surfacing a wide range of sites in the top three positions for the exact same queries. SEOs and publishers have asked Google to show a broader range of websites, and that, apparently, is what Google’s AI features are doing. Is this a case of “be careful what you wish for”?

Featured Image by Shutterstock/Stokkete