Google On Balancing Needs Of Users And The Web Ecosystem via @sejournal, @martinibuster

At the recent Search Central Live Deep Dive 2025, Kenichi Suzuki asked Google’s Gary Illyes how Google measures high quality and user satisfaction of traffic from AI Overviews. Illyes’ response, published by Suzuki on LinkedIn, covered multiple points.

Kenichi asked for specific data, and Gary’s answer offered an overview of how Google gathers external data to form an internal view of user satisfaction with AI Overviews. He said that this data informs public statements by Google, including those made by CEO Sundar Pichai.

Illyes began his answer by saying that he couldn’t share specifics about the user satisfaction data, but he went on to offer an overview.

User Satisfaction Surveys

The first data point that Illyes mentioned was user satisfaction surveys to understand how people feel about AI Overviews. Kenichi wrote that Illyes said:

“The public statements made by company leaders, such as Sundar Pichai, are validated by this internal data before being made public.”

Observed User Behavior

The second user satisfaction data point that Illyes mentioned was inferring user preference from the broader market. Kenichi wrote:

“Gary suggested that one can infer user preference by looking at the broader market. He pointed out that the rapidly growing user base for other AI tools (like ChatGPT and Copilot) likely consists of the same demographic that enjoys and finds value in AI Overviews.”

Motivated By User-Focus

This point refers to putting the user first as the motivation for introducing a new feature. Illyes specifically said that causing disruption is not Google’s motivation for AI search features.

Acknowledged The Web Ecosystem

The last point he made was that Google is still figuring out how to balance its user-focused approach with the need to maintain a healthy web ecosystem.

Kenichi wrote that Illyes said:

“He finished by acknowledging that they are still figuring out how to balance this user-focused approach with the need to continue supporting the wider web ecosystem.”

Balancing The Needs Of The Web Ecosystem

At the dawn of modern SEO, Google did something extraordinary: they reached out to web publishers through the most popular SEO forum at the time, WebmasterWorld. Gary Illyes himself, before he joined Google, was a WebmasterWorld member. This outreach by Google was the initiative of one Googler, Matt Cutts. Other Googlers provided interviews, but Matt Cutts, under the WebmasterWorld nickname of GoogleGuy, held two-way conversations with the search and publisher community.

This is no longer the case at Google, which is largely back to one-way communication accompanied by intermittent social media outreach.

The SEO community may share in the blame for this situation, as some SEOs post abusive responses on social media. Fortunately, those people are in the minority, but that behavior nonetheless puts a chill on the few opportunities provided to have a constructive dialogue.

It’s encouraging to hear Illyes mention the web ecosystem, and it would be even more encouraging to hear Googlers, including the CEO, focus more on how they intend to balance the needs of users with those of the creators who publish content. Many feel that Google’s current direction is not sustainable for publishers.

Featured Image by Shutterstock/1000 Words

Why A Site Deindexed By Google For Programmatic SEO Bounced Back via @sejournal, @martinibuster

A company founder shared their experience with programmatic SEO, which they credited for initial success until it was deindexed by Google, calling it a big mistake they won’t repeat. The post, shared on LinkedIn, received scores of supportive comments.

The website didn’t receive a manual action; Google deindexed the web pages due to poor content quality.

Programmatic SEO (pSEO)

Programmatic SEO (aka pSEO) is a phrase that encompasses a wide range of tactics with automation at their core. Some of it can be very useful, like automating sitewide meta descriptions, titles, and alt text for images.
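
To illustrate the benign end of that spectrum, here is a minimal Python sketch of template-driven meta tag generation from existing page data. The product data, field names, and display-length limits are hypothetical; a real implementation would pull from a CMS or product database.

```python
# A minimal sketch of the "useful" side of programmatic SEO: filling in
# sitewide titles and meta descriptions from structured data.
# All data and field names here are hypothetical.

products = [
    {"name": "Trail Running Shoes", "brand": "Acme", "category": "Footwear"},
    {"name": "Waterproof Jacket", "brand": "Acme", "category": "Outerwear"},
]

def build_meta(product: dict) -> dict:
    """Build a title and meta description from data already on the page."""
    title = f"{product['name']} by {product['brand']} | {product['category']}"
    description = (
        f"Shop the {product['name']} from {product['brand']}. "
        f"Browse our full {product['category'].lower()} range."
    )
    # Truncate to common display limits (~60 chars title, ~155 description).
    return {"title": title[:60], "description": description[:155]}

for product in products:
    print(build_meta(product))
```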

pSEO is also the practice of using AI automation to scale content creation sitewide, which is what the person did. They created fifty thousand pages targeting long-tail phrases, which are phrases that are not commonly queried. The site initially received hundreds of clicks and millions of impressions, but the success was not long-lived.

According to the post by Miquel Palet (LinkedIn Profile):

“Google flagged our domain. Pages started getting deindexed. Traffic plummeted overnight.

We learned the hard way that shortcuts don’t scale sustainably.

It was a huge mistake, but also a great lesson.

And it’s one of the reasons we rebranded to Tailride.”

Thin AI Content Was The Culprit

A follow-up post explained that they believe the AI-generated content backfired because it was thin content, which makes sense. Thin content, regardless of how it was authored, can be problematic.

One of the posts by Palet explained:

“We’re not sure, but probably not because AI. It was thin content and probably duplicated.”

Rasmus Sørensen (LinkedIn profile), an experienced digital marketer, shared his opinion that he’s seen some marketers pushing shady practices under the banner of pSEO:

“Thanks for sharing and putting some real live experiences forward. Programmatic SEO had been touted as the next best thing in SEO. It’s not and I’ve seen soo much garbage published the last few months and agencies claiming that their pSEO is the silver bullet.
It very rarely is.”

Joe Youngblood (LinkedIn profile) shared that SEO trends can be abused and implied that it is a viable strategy if done correctly:

“I would always do something like pSEO under the supervision of a seasoned SEO consultant. This tale happens all too frequently with an SEO trend…”

What They Did To Fix The Site

The company founder shared that they rebranded the website on a new domain, redirected the old domain to the new one, and refocused the site on higher-quality content that’s relevant to users.

They explained:

“Less pages + more quality”

A site: search for their domain shows that Google is now indexing their content, indicating that they are back on track.

Takeaways

Programmatic SEO can be useful if approached with an understanding of where the line is between good quality and “not-quality” content.

Featured Image by Shutterstock/Cast Of Thousands

Why Is SureRank WordPress SEO Plugin So Popular? via @sejournal, @martinibuster

A new SEO plugin called SureRank, by Brainstorm Force, makers of the popular Astra theme, is rapidly growing in popularity. In beta for a few months, it was announced in July and has amassed over twenty thousand installations. That’s a pretty good start for an SEO plugin that has only been out of beta for a few weeks.

One possible reason that SureRank is quickly becoming popular is that it’s created by a trusted brand, much loved for its Astra WordPress theme.

SureRank By Brainstorm Force

SureRank is the creation of the publishers of many highly popular plugins and themes installed on millions of websites, such as the Astra theme, Ultimate Addons for Elementor, Spectra Gutenberg Blocks – Website Builder for the Block Editor, and Starter Templates – AI-Powered Templates for Elementor & Gutenberg, to name a few.

Why Another SEO Plugin?

The goal of SureRank is to provide an easy-to-use SEO solution that includes only the necessary features every site needs in order to avoid feature bloat. It positions itself as an SEO assistant that guides the user with an intuitive user interface.

What Does SureRank Do?

SureRank has an onboarding process that walks a user through the initial optimizations and setup. It then performs an analysis and offers suggestions for site-level improvements.

It currently enables users to handle the basics like:

  • Edit titles and meta descriptions
  • Write custom social media titles, descriptions, and featured images
  • Tweak home page and archive page metadata
  • Meta robot directives, canonicals, and sitemaps
  • Schema structured data
  • Site and page level SEO analysis
  • Automatic image alt text generation
  • Google Search Console integration
  • WooCommerce integration

SureRank also provides a built-in tool for migrating settings from other popular SEO plugins like Rank Math, Yoast, and AIOSEO.

Check out the SureRank SEO plugin at the official WordPress.org repository:

SureRank – SEO Assistant with Meta Tags, Social Preview, XML Sitemap, and Schema

Featured Image by Shutterstock/Roman Samborskyi

Google Confirms CSS Class Names Don’t Influence SEO via @sejournal, @MattGSouthern

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller clarified how CSS affects SEO.

While some aspects of CSS have no bearing on SEO, others can directly influence how search engines interpret and rank content.

Here’s what matters and what doesn’t.

Class Names Don’t Matter For Rankings

One of the clearest takeaways from the episode is that CSS class names have no impact on Google Search.

Splitt stated:

“I don’t think it does. I don’t think we care because the CSS class names are just that. They’re just assigning a specific somewhat identifiable bit of stylesheet rules to elements and that’s it. That’s all. You could name them all “blurb.” It would not make a difference from an SEO perspective.”

Class names, they explained, are used only for applying visual styling. They’re not considered part of the page’s content. So they’re ignored by Googlebot and other HTML parsers when extracting meaningful information.

Even if you’re feeding HTML into a language model or a basic crawler, class names won’t factor in unless your system is explicitly designed to read those attributes.
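
As a rough illustration of why, here is a minimal Python sketch using the third-party beautifulsoup4 library: the text a parser extracts is identical no matter what the classes are named.

```python
# Sketch: a typical HTML parser extracting page text ignores class names.
from bs4 import BeautifulSoup

html = (
    '<div class="blurb"><h1 class="blurb">Best Hiking Trails</h1>'
    '<p class="blurb">Our top picks for 2025.</p></div>'
)

soup = BeautifulSoup(html, "html.parser")

# Renaming every class to "blurb" changes nothing about the extracted text.
print(soup.get_text(" ", strip=True))  # Best Hiking Trails Our top picks for 2025.
```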

Why Content In Pseudo Elements Is A Problem

While class names are harmless, the team warned about placing meaningful content in CSS pseudo elements like :before and :after.

Splitt stated:

“The idea again—the original idea—is to separate presentation from content. So content is in the HTML, and how it is presented is in the CSS. So with before and after, if you add decorative elements like a little triangle or a little dot or a little light bulb or like a little unicorn—whatever—I think that is fine because it’s decorative. It doesn’t have meaning in the sense of the content. Without it, it would still be fine.”

Adding visual flourishes is acceptable, but inserting headlines, paragraphs, or any user-facing content into pseudo elements breaks a core principle of web development: separating content from presentation.

That content becomes invisible to search engines, screen readers, and any other tools that rely on parsing the HTML directly.

Mueller shared a real-world example of how this can go wrong:

“There was once an escalation from the indexing team that said we should contact the site and tell them to stop using before and after… They were using the before pseudo class to add a number sign to everything that they considered hashtags. And our indexing system was like, it would be so nice if we could recognize these hashtags on the page because maybe they’re useful for something.”

Because the hashtag symbols were added via CSS, they were never seen by Google’s systems.

Splitt tested it live during the recording and confirmed:

“It’s not in the DOM… so it doesn’t get picked up by rendering.”
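
Mueller’s hashtag example can be reproduced with a toy sketch: the “#” lives only in the stylesheet, so any tool working from the HTML never sees it. This is an illustration, not a depiction of Google’s actual pipeline.

```python
# Sketch: content injected via a CSS pseudo element never appears in the
# HTML itself, so an HTML parser cannot extract it.
from bs4 import BeautifulSoup

html = """
<style>.hashtag::before { content: "#"; }</style>
<span class="hashtag">seo</span>
"""

soup = BeautifulSoup(html, "html.parser")
span = soup.find("span")
print(span.get_text())  # prints "seo" -- the "#" exists only in the CSS rule
```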

Oversized CSS Can Hurt Performance

The episode also touched on performance issues related to bloated stylesheets.

According to data from the HTTP Archive’s 2022 Web Almanac, the median size of a CSS file had grown to around 68 KB for mobile and 72 KB for desktop.

Mueller stated:

“The Web Almanac says every year we see CSS grow in size, and in 2022 the median stylesheet size was 68 kilobytes or 72 kilobytes. … They also mentioned the largest one that they found was 78 megabytes. … These are text files.”

That kind of bloat can negatively impact Core Web Vitals and overall user experience, which are two areas that do influence rankings. Frameworks and prebuilt libraries are often the cause.

While developers can mitigate this with minification and unused rule pruning, not everyone does. This makes CSS optimization a worthwhile item on your technical SEO checklist.
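
For a sense of what minification does, here is a deliberately naive Python sketch that strips comments and collapses whitespace. Production sites would use a dedicated minifier plus unused-rule pruning; this only shows where the size savings come from.

```python
# Naive CSS minifier: drop comments, collapse whitespace. Illustrative only.
import re

def minify_css(css: str) -> str:
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # remove comments
    css = re.sub(r"\s+", " ", css)                        # collapse whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)          # trim around syntax chars
    return css.strip()

css = """
/* hero section */
.hero h1 {
    color: #333;
    margin: 0 auto;
}
"""
print(minify_css(css))  # .hero h1{color:#333;margin:0 auto;}
```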

Keep CSS Crawlable

Despite CSS’s limited role in ranking, Google still recommends making CSS files crawlable.

Mueller joked:

“Google’s guidelines say you should make your CSS files crawlable. So there must be some kind of magic in there, right?”

The real reason is more technical than magical. Googlebot uses CSS files to render pages the way users would see them.

Blocking CSS can affect how your pages are interpreted, especially for layout, mobile-friendliness, or elements like hidden content.
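
The typical mistake is a robots.txt rule that blocks the directory holding stylesheets. One quick way to audit this is Python’s built-in robots.txt parser; the rule and URLs below are hypothetical.

```python
# Check whether a robots.txt rule would block a crawler from a stylesheet.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /assets/css/
""".splitlines())

# A blocked stylesheet can change how the rendered page appears to Googlebot.
print(rp.can_fetch("Googlebot", "https://example.com/assets/css/site.css"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))           # True
```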

Practical Tips For SEO Pros

Here’s what this episode means for your SEO practices:

  • Stop optimizing class names: Keywords in CSS classes won’t help your rankings.
  • Check pseudo elements: Any real content, like text meant to be read, should live in HTML, not in :before or :after.
  • Audit stylesheet size: Large CSS files can hurt page speed and Core Web Vitals. Trim what you can.
  • Ensure CSS is crawlable: Blocking stylesheets may disrupt rendering and impact how Google understands your page.

The team also emphasized the importance of using proper HTML tags for meaningful images:

“If the image is part of the content and you’re like, ‘Look at this house that I just bought,’ then you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like, ah, so this page has this image that is not just decoration.”

Use CSS for styling and HTML for meaning. This separation helps both users and search engines.

Listen to the full podcast episode below:

ChatGPT Appears To Use Google Search As A Fallback via @sejournal, @martinibuster

Aleyda Solís conducted an experiment to test how fast ChatGPT indexes a web page and unexpectedly discovered that ChatGPT appears to use Google’s search results as a fallback for web pages that it cannot access or that are not yet indexed on Bing.

According to Aleyda:

“I’ve run a simple but straightforward to follow test that confirms the reliance of ChatGPT on Google SERPs snippets for its answers.”

Created A New Web Page, Not Yet Indexed

Aleyda created a brand-new page (titled “LLMs.txt Generators”) on her website, LearningAISearch.com. She immediately tested ChatGPT (with web search enabled) to see if it could access or locate the page, but ChatGPT failed to find it. ChatGPT responded with the suggestion that the URL was not publicly indexed or possibly outdated.

She then asked Google Gemini about the web page, which successfully fetched and summarized the live page content.

Submitted Web Page For Indexing

She next submitted the web page for indexing via Google Search Console and Bing Webmaster Tools. Google successfully indexed the web page but Bing had problems with it.

After several hours elapsed, Google started showing results for the page with the site: operator and with a direct search for the URL. But Bing continued to have trouble indexing the web page.

Checked ChatGPT Until It Used Google Search Snippet

Aleyda went back to ChatGPT, and after several tries it gave her an incomplete summary of the page content, mentioning just one tool that was listed on it. When she asked ChatGPT for the origin of that incomplete snippet, it responded that it was using a “cached snippet via web search,” likely from “search engine indexing.”

She confirmed that the snippet shown by ChatGPT matched Google’s search result snippet, not Bing’s (which still hadn’t indexed it).

Aleyda explained:

“A snippet from where?

When I followed up asking where was that snippet they grabbed the information being shown, the answer was that it had “located a cached snippet via web search that previews the page content – likely from search engine indexing.”

But I knew the page wasn’t indexed yet in Bing, so it had to be … Google search results? I went to check.

When I compared the text snippet provided by ChatGPT vs the one shown in Google Search Results for the specific Learning AI Search LLMs.txt Generators page, I could confirm it was the same information…”

Not An Isolated Incident

Aleyda’s article on her finding (Confirmed: ChatGPT uses Google SERP Snippets for its Answers [A Test with Proof]) links to someone else’s web page that summarizes a similar experience where ChatGPT used a Google snippet. So she’s not the only one to experience this.

Proof That Traditional SEO Remains Relevant For AI Search

Aleyda also documented what happened on a LinkedIn post where Kyle Atwater Morley shared his observation:

“So ChatGPT is basically piggybacking off Google snippets to generate answers?

What a wake-up call for anyone thinking traditional SEO is dead.”

Stéphane Bureau shared his opinion on what’s going on:

“If Bing’s results are insufficient, it appears to fall back to scraping Google SERP snippets.”

He elaborated on his post with more details later on in the discussion:

“Based on current evidence, here’s my refined theory:

When browsing is enabled, ChatGPT sends search requests via Bing first (as seen in DevTools logs).

However, if Bing’s results are insufficient or outdated, it appears to fall back to scraping Google SERP snippets—likely via an undocumented proxy or secondary API.

This explains why some replies contain verbatim Google snippets that never appear in Bing API responses.

I’ve seen multiple instances that align with this dual-source behavior.”

Takeaway

ChatGPT was initially unable to access the page directly, and it was only after the page began to appear in Google’s search results that it was able to respond to questions about the page. Once the snippet appeared in Google’s search results, ChatGPT began referencing it, revealing a reliance on publicly visible Google Search snippets as a fallback when the same data is unavailable in Bing.

It would be interesting to see whether the server logs hold a clue as to whether ChatGPT attempted to crawl the page and, if so, what error code was returned when it failed to retrieve the data. It’s curious that ChatGPT was unable to retrieve the page, and though the answer probably has no bearing on the conclusions, having that last bit of information crossed off would make them feel more complete.

Nevertheless, it appears that this is yet more proof that standard SEO is still applicable for AI-powered search, including ChatGPT Search. This adds to recent comments by Gary Illyes confirming that there is no need for specialized GEO or AEO in order to rank well in Google AI Overviews and AI Mode.

Featured Image by Shutterstock/Krakenimages.com

Validity of Pew Research On Google AI Search Results Challenged via @sejournal, @martinibuster

Questions about the methodology used by the Pew Research Center suggest that its conclusions about Google’s AI summaries may be flawed. Facts about how AI summaries are created, the sample size, and statistical reliability challenge the validity of the results.

Google’s Official Statement

A spokesperson for Google reached out with an official statement and a discussion about why the Pew research findings do not reflect actual user interaction patterns related to AI summaries and standard search.

The main points of Google’s rebuttal are:

  • Users are increasingly seeking out AI features
  • They’re asking more questions
  • AI usage trends are increasing visibility for content creators
  • The Pew research used flawed methodology

Google shared:

“People are gravitating to AI-powered experiences, and AI features in Search enable people to ask even more questions, creating new opportunities for people to connect with websites.

This study uses a flawed methodology and skewed queryset that is not representative of Search traffic. We consistently direct billions of clicks to websites daily and have not observed significant drops in aggregate web traffic as is being suggested.”

Sample Size Is Too Low

I discussed the Pew Research with Duane Forrester (formerly of Bing, LinkedIn profile), and he suggested that the sample size of the research was too low to be meaningful (900+ adults and 66,000 search queries). Duane shared the following opinion:

“Out of almost 500 billion queries per month on Google and they’re extracting insights based on 0.0000134% sample size (66,000+ queries), that’s a very small sample.

Not suggesting that 66,000 of something is inconsequential, but taken in the context of the volume of queries happening on any given month, day, hour or minute, it’s very technically not a rounding error and were it my study, I’d have to call out how exceedingly low the sample size is and that it may not realistically represent the real world.”
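
As a quick sanity check on that percentage (assuming roughly 500 billion queries per month and about 67,000 sampled queries):

$$\frac{6.7\times10^{4}}{5\times10^{11}} = 1.34\times10^{-7} = 0.0000134\%$$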

How Reliable Are Pew Center Statistics?

The methodology page for the statistics lists how reliable they are for the following age groups:

  • Ages 18–29 were ranked at plus/minus 13.7 percentage points, a low level of reliability.
  • Ages 30–49 were ranked at plus/minus 7.9 percentage points, which is moderate: somewhat reliable, but still a fairly wide range.
  • Ages 50–64 were ranked at plus/minus 8.9 percentage points, a moderate to low level of reliability.
  • Ages 65+ were ranked at plus/minus 10.2 percentage points, which is firmly in the low range of reliability.

The above reliability scores are from Pew Research’s Methodology page. Overall, all of these results have a high margin of error, making them statistically unreliable. At best, they should be seen as rough estimates, although as Duane says, the sample size is so low that it’s hard to justify it as reflecting real-world results.
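
For context, the margin of error reported for a subgroup implies roughly how many respondents it contained. Under textbook assumptions (95% confidence, worst-case proportion p = 0.5, simple random sampling, and ignoring Pew’s design effects), the ±13.7-point figure for ages 18–29 would correspond to only about fifty respondents:

$$n = \frac{z^{2}\,p(1-p)}{\mathrm{MOE}^{2}} = \frac{1.96^{2}\times0.25}{0.137^{2}} \approx 51$$

The actual subgroup sizes may differ, but intervals this wide are consistent with very small subsamples.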

Pew Research Results Compare Results In Different Months

After thinking about it overnight and reviewing the methodology, I noticed an aspect of the Pew Research methodology that stands out: the researchers compared actual search queries from users during the month of March with the same queries that they themselves ran during one week in April.

That’s problematic because Google’s AI summaries change from month to month. For example, the kinds of queries that trigger an AI Overview change, with AIOs becoming more prominent for certain niches and less so for other topics. Additionally, user trends may affect what gets searched, which itself could trigger a temporary freshness update to the search algorithms that prioritizes videos and news.

The takeaway is that comparing search results from different months is problematic for both standard search and AI summaries.

Pew Research Ignores That AI Search Results Are Dynamic

With respect to AI Overviews and summaries, these are even more dynamic, subject to change not just from user to user but also for the same user.

Searching for a query in AI Overviews, then repeating the query in an entirely different browser, will result in a different AI summary and a completely different set of links.

The point is that the Pew Research Center’s methodology, which compares user queries with queries scraped a month later, is flawed because the two sets of queries and results cannot be compared; each is inherently different because of time, updates, and the dynamic nature of AI summaries.

The following screenshots show the links displayed for the query “What is the RLHF training in OpenAI?”

Google AIO Via Vivaldi Browser

Screenshot shows links to Amazon Web Services, Medium, and Kili Technology

Google AIO Via Chrome Canary Browser

Screenshot shows links to OpenAI, Arize AI, and Hugging Face

Not only are the links on the right-hand side different, but the AI summary content and the links embedded within that content are also different.

Could This Be Why Publishers See Inconsistent Traffic?

Publishers and SEOs are used to static ranking positions in search results for a given search query. But Google’s AI Overviews and AI Mode show dynamic search results. The content in the search results and the links that are shown are dynamic, showing a wide range of sites in the top three positions for the exact same queries. SEOs and publishers have asked Google to show a broader range of websites, and that, apparently, is what Google’s AI features are doing. Is this a case of “be careful what you wish for”?

Featured Image by Shutterstock/Stokkete

Web Guide: Google’s New AI Search Experiment via @sejournal, @MattGSouthern

Google has launched Web Guide, an experimental feature in Search Labs that uses AI to reorganize search results pages.

The goal is to help you find information by grouping related links together based on the intent behind your query.

What Is Web Guide?

Web Guide replaces the traditional list of search results with AI-generated clusters. Each group focuses on a different aspect of your query, making it easier to dive deeper into specific areas.

According to Austin Wu, Group Product Manager for Search at Google, Web Guide uses a custom version of Gemini to understand both your query and relevant web content. This allows it to surface pages you might not find through standard search.

Here are some examples provided by Google:

Screenshots from labs.google.com/search/experiment/34, July 2025.

How It Works

Behind the scenes, Web Guide uses the familiar “query fan-out” technique.

Instead of running one search, it issues multiple related queries in parallel. It then analyzes and organizes the results into categories tailored to your search intent.

This approach gives you a broader overview of a topic, helping you learn more without needing to refine your query manually.
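
Google hasn’t published implementation details, but the behavior described, running several intent-derived sub-queries concurrently and grouping the merged results, can be sketched roughly as follows. The search() stub, the aspect labels, and the grouping shortcut are all hypothetical.

```python
# Rough sketch of a query fan-out: issue related sub-queries in parallel,
# then group results by the aspect of the query each one addresses.
import asyncio

async def search(query: str) -> list[str]:
    await asyncio.sleep(0.1)  # stand-in for a call to a real search backend
    return [f"result for '{query}' #{i}" for i in range(2)]

async def fan_out(user_query: str) -> dict[str, list[str]]:
    # Sub-queries derived from the user's intent (hypothetical examples).
    aspects = {
        "Transportation": f"{user_query} rail passes",
        "Accommodation": f"{user_query} where to stay",
        "Etiquette": f"{user_query} customs and etiquette",
    }
    results = await asyncio.gather(*(search(q) for q in aspects.values()))
    return dict(zip(aspects.keys(), results))

groups = asyncio.run(fan_out("solo travel in Japan"))
for aspect, links in groups.items():
    print(aspect, links)
```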

When Web Guide Helps

Google says Web Guide is most useful in two situations:

  • Exploratory searches: For example, “how to solo travel in Japan” might return clusters for transportation, accommodations, etiquette, and must-see places.
  • Multi-part questions: A query like “How to stay close with family across time zones?” could bring up tools for scheduling, video calls, and relationship tips.

In both cases, Web Guide aims to support deeper research, not just quick answers.

How To Try It

Web Guide is available through Search Labs for users who’ve opted in. You can access it by selecting the Web tab in Search and switching back to standard results anytime.

Over time, Google plans to test AI-organized results in the All tab and other parts of Search based on user feedback.

How Web Guide Differs From AI Mode

While Web Guide and AI Mode both use Google’s Gemini model and similar technologies like query fan-out, they serve different functions within Search.

  • Web Guide is designed to reorganize traditional search results. It clusters existing web pages into groups based on different aspects of your query, helping you explore a topic from multiple angles without generating new content.
  • AI Mode provides a conversational, AI-generated response to your query. It can break down complex questions into subtopics, synthesize information across sources, and present a summary or interactive answer box. It also supports follow-up questions and features like Deep Search for more in-depth exploration.

In short, Web Guide focuses on how results are presented, while AI Mode changes how answers are generated and delivered.

Looking Ahead

Web Guide reflects Google’s continued shift away from the “10 blue links” model. It follows features like AI Overviews and the AI Mode, which aim to make search more dynamic.

Because Web Guide is still a Labs feature, its future depends on how people respond to it. Google is taking a gradual rollout approach, watching how it affects the user experience.

If adopted more broadly, this kind of AI-driven organization could reshape how people find your content, and how you need to optimize for it.


Featured Image: Screenshot from labs.google.com/search/experiment/34, July 2025. 

Google Launches AI-Powered Virtual Try-On & Shopping Tools via @sejournal, @MattGSouthern

Google unveiled three new shopping features today that use AI to enhance the way people discover and buy products.

The updates include a virtual try-on tool for clothing, more flexible price tracking alerts, and an upcoming visual style inspiration feature powered by AI.

Virtual Try-On Now Available Nationwide

Following a limited launch in Search Labs, Google’s virtual try-on tool is now available to all U.S. searchers.

The feature lets you upload a full-length photo and use AI to see how clothing items might look on your body. It works across Google Search, Shopping, and even product results in Google Images.

Tap the “try it on” icon on an apparel listing, upload a photo, and you’ll receive a visualization of yourself wearing the item. You can also save favorite looks, revisit past try-ons, and share results with others.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

The tool draws from billions of apparel items in its Shopping Graph, giving shoppers a wide range of options to explore.

Smarter Price Alerts

Google is also rolling out an enhanced price tracking feature for U.S. shoppers.

You can now set alerts based on specific criteria like size, color, and target price. This update makes it easier to track deals that match your exact preferences.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

AI-Powered Style Inspiration Arrives This Fall

Later in 2025, Google plans to launch a new shopping experience within AI Mode, offering outfit and room design inspiration based on your query.

This feature uses Google’s vision match technology and taps into 50 billion products indexed in the Shopping Graph.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

What This Means for E-Commerce Marketers

These updates carry a few implications for marketers and online retailers:

  • Improve Product Images: With virtual try-on now live, high-quality and standardized apparel images are more likely to be included in AI-driven displays.
  • Competitive Pricing Matters: The refined price alert system could influence purchase behavior, especially as consumers gain more control over how they track product deals.
  • Optimize for Visual Search: The upcoming inspiration features suggest a growing role for visual-first shopping. Retailers should ensure their product feeds contain rich attribute data that helps Google’s systems surface relevant items.

Looking Ahead

Google’s suite of AI-powered shopping features can help create more personalized and interactive retail experiences.

For search marketers, these tools offer new ways to engage, but also raise the bar in terms of presentation and data quality.

For e-commerce teams, staying competitive may require rethinking how products are priced, presented, and positioned within Google’s growing suite of AI-enhanced tools.


Featured Image: Roman Samborskyi/Shutterstock

Google Says You Don’t Need AEO Or GEO To Rank In AI Overviews via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI Search does not require specialized optimization, saying that “AI SEO” is not necessary and that standard SEO is all that is needed for both AI Overviews and AI Mode.

AI Search Is Everywhere

Standard search, in the way it used to be with link algorithms playing a strong role, no longer exists. AI is embedded within every step of the organic search results, from crawling to indexing and ranking. AI has been a part of Google Search for ten years, beginning with RankBrain and expanding from there.

Google’s Gary Illyes made it clear that AI is embedded within every step of today’s search ranking process.

Kenichi Suzuki (LinkedIn Profile) posted a detailed summary of what Illyes discussed, covering four main points:

  1. AI search optimization = SEO; AI search features use the same infrastructure as traditional search
  2. Google’s focus is on content quality and is agnostic as to how it was created
  3. AI is deeply embedded into every stage of search
  4. Generative AI has unique features to ensure reliability

There’s No Need For AEO Or GEO

The SEO community has tried to wrap their minds around AI search, with some insisting that ranking in AI search requires an approach to optimization so distinct from SEO that it warrants its own acronym. Other SEOs, including an SEO rockstar, have insisted that optimizing for AI search is fundamentally the same as standard search. I’m not saying that one group of SEOs is right and another is wrong. The SEO community collectively discussing a topic and reaching different conclusions is one of the few things that doesn’t change in search marketing.

According to Google, ranking in AI Overviews and AI Mode requires only standard SEO practices.

Suzuki shared why AI search doesn’t require different optimization strategies:

“Their core message is that new AI-powered features like AI Overviews and AI Mode are built upon the same fundamental processes as traditional search. They utilize the same crawler (Googlebot), the same core index, and are influenced by the same ranking systems.

They repeatedly emphasized this with the phrase “same as above” to signal that a separate, distinct strategy for “AI SEO” is unnecessary. The foundation of creating high-quality, helpful content remains the primary focus.”

Content Quality Is Not About How It’s Created

The second point that Google made was that their systems are tuned to identify content quality and that identifying whether the content was created by a human or AI is not part of that quality assessment.

Gary Illyes is quoted as saying:

“We are not trying” to differentiate based on origin.

According to Kenichi, the objective is to:

“…identify and reward high-quality, helpful, and reliable content, regardless of whether it was created by a human or with the assistance of AI.”

AI Is Embedded Within Every Stage Of Search

The third point that Google emphasized is that AI plays a role at every stage of search: crawling, indexing, and ranking.

Regarding the ranking part, Suzuki wrote:

“RankBrain helps interpret novel queries, while the Multitask Unified Model (MUM) understands information across various formats (text, images, video) and 75 different languages.”

Unique Processes Of Generative AI Features

The fourth point that Google emphasized acknowledges that AI Overviews does two different things at the ranking stage:

  1. Query Fan-Out
    Generates multiple related queries in order to provide deeper answers, using the query fan-out technique.
  2. Grounding
    AI Overviews checks the generated answers against online sources to make sure that they are factually accurate, a process called grounding (illustrated below).

Suzuki explains:

“It then uses a process called “grounding” to check the generated text against the information in its search index, a crucial step designed to verify facts and reduce the risk of AI ‘hallucinations.’”
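
Google hasn’t said how grounding is implemented, but the core idea, checking each generated statement against retrieved sources, can be illustrated with a toy overlap check. Real systems use far more sophisticated verification; everything below is hypothetical.

```python
# Toy illustration of grounding: flag generated sentences that share few
# words with any retrieved source snippet.

def supported(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    words = set(sentence.lower().split())
    for source in sources:
        overlap = words & set(source.lower().split())
        if words and len(overlap) / len(words) >= threshold:
            return True
    return False

sources = ["RLHF fine-tunes a model using human feedback on its outputs."]
answer = [
    "RLHF fine-tunes a model using human feedback.",
    "RLHF was invented in 1970.",  # unsupported claim, would be flagged
]
for sentence in answer:
    print(sentence, "->", "grounded" if supported(sentence, sources) else "needs review")
```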

Takeaways

AI SEO vs. Traditional SEO

  • Google explicitly states that specialized “AI SEO” is not necessary.
  • Standard SEO practices remain sufficient to rank in AI-driven search experiences.

Integration of AI in Google Search

  • AI technology is deeply embedded across every stage of Google’s organic search: crawling, indexing, and ranking.
  • Technologies like RankBrain and the Multitask Unified Model (MUM) are foundational to Google’s current search ranking system.

Google’s Emphasis on Content Quality

  • Content quality assessment by Google is neutral regarding whether humans or AI produce the content.
  • The primary goal remains identifying high-quality, helpful, and reliable content.

Generative AI-Specific Techniques

  • Google’s AI Overviews employ specialized processes like “query fan-out” to answer queries thoroughly.
  • A technique called “grounding” is used to ensure factual accuracy by cross-checking generated content against indexed information.

Google clarified that there’s no need for AEO/GEO for Google AI Overviews and AI Mode. Standard search engine optimization is all that’s needed to rank across both standard and AI-based search. Content quality remains an important part of Google’s algorithms, and they made a point to emphasize that they don’t check whether content is created by a human or AI.

Featured Image by Shutterstock/Luis Molinero