Google On Scaled Content: “It’s Going To Be An Issue” via @sejournal, @martinibuster

Google’s John Mueller and Danny Sullivan discussed why AI-generated content is problematic, citing the newly updated quality rater guidelines and sharing examples of how AI can be used in positive ways that add value.

Danny Sullivan, known as Google Search Liaison, spoke about the topic in more detail, providing an example of a high-quality use of AI-generated content to contrast with what isn’t a good use of it.

Update To The Quality Rater Guidelines

The quality rater guidelines (QRG) is a book created by Google to guide the third-party quality raters who evaluate tests of changes to Google’s search results. It was recently updated and now includes guidance about AI-generated content, folded into a section about content created with little effort or originality.

Mueller discussed AI generated content in the context of scaled content abuse, noting that the quality raters are taught to rate that kind of content as low quality.

The new section of the QRG advises the raters:

“The lowest rating applies if all or almost all of the MC on the page (including text, images, audio, videos, etc) is copied, paraphrased, embedded, auto or AI generated, or reposted from other sources with little to no effort, little to no originality, and little to no added value for visitors to the website. Such pages should be rated Lowest, even if the page assigns credit for the content to another source.”

Doesn’t Matter How It’s Scaled: It’s Going To Be An Issue

Danny Sullivan started his part of the discussion by saying that, to Google, AI-generated content is no different from scaled content tactics of the past, comparing it to the spam tactics of 2005, when Google used statistical analysis and other methods to catch scaled content. He also emphasized that it doesn’t matter how the content was scaled.

According to my notes, here’s a paraphrase of what he said:

“The key things are, large amounts of unoriginal content and also no matter how it’s created.

Because like, ‘What are you going to do about AI? How are you going to deal with all the AI explosion? AI can generate thousands of pages?’

Well 2005 just called, it’d like to explain to you how human beings can generate thousands of pages overnight that look like they’re human generated because they weren’t human generated and etcetera, etcetera, etcetera.

If you’ve been in the SEO space for a long time, you well understand that scaled content is not a new type of thing. So we wanted to really stress: we don’t really care how you’re doing this scaled content, whether it’s AI, automation, or human beings. It’s going to be an issue.

So those are things that you should consider if you’re wondering about the scaled content abuse policy and you want to avoid being caught by it.”

How To Use AI In A Way That Adds Value

A helpful thing about Danny’s session is that he offered an example of a positive use of AI, citing how retailers offer a summary of actual user reviews that gives shoppers an overall sense of sentiment about a product without having to read every review. This is an example of AI providing added value as opposed to being the entire main content.

This is from my notes of what he said:

“When I go to Amazon, I skip down to the reviews and the reviews have a little AI-generated thing at the top that tells me what the users generally think, and I’m like, this is really helpful.

And the thing that’s really helpful to me about it is, it’s AI applied to original content, the reviews, to give me a summary. That was added value for me and unique value for me. I liked it.”

As Long As It’s High Quality…

Danny next discussed how Google tried to put out a detailed policy about AI-generated content, but he said it was misconstrued by some parts of the SEO community to mean that AI-generated content was fine as long as it was quality AI-generated content.

In my 25 years of SEO experience, let me tell you, whenever an SEO tells you that an SEO tactic is fine “as long as it’s quality,” run. The “as long as it’s quality” excuse has been used to justify low-quality SEO practices like reciprocal links, directory links, paid links, and guest posts. If it’s not already an SEO joke, it should be.

Danny continued:

“And then people’s differentiation of what’s quality is all messed up. And they say, ‘Google doesn’t care if it’s AI!’ And that is not really what we said.

We didn’t say that.”

Don’t Mislead Yourself About Quality Of Scaled Content

Danny advised that anyone using artificially generated content should think about two things to use as tests for whether it’s a good idea:

  1. The motivation for mass generated content.
  2. Unoriginality of the scaled content.

Traffic Motivated Content

The motivation shouldn’t be that it will bring more traffic. The motivation should be that there’s a value-add for site visitors.

This is how Danny Sullivan explained it, according to my notes:

“Any method that you undertake to mass generate content, you should be carefully thinking about it. There’s all sorts of programmatic things, maybe they’re useful. Maybe they’re not. But you should think about it.

And the things to especially think about is if you’re primarily going into it to game search traffic.

Like, if the primary intent of the content was, ‘I’m going to get that traffic’ and not, ‘some user actually expected it’ if they ever came to my website directly. That’s one of the many things you can use to try to determine it.”

Originality Of Scaled Content

SEOs who praise their AI-generated content lose their enthusiasm when the content is about a topic they’re actually expert in, and will concede that it’s not as smart as they are. What’s going on is that if you are not an expert, you lack the expertise to judge the credibility of the AI-generated content.

AI is trained to crank out the next likeliest word in a series of words, a level of unoriginality so extreme that only a computer can accomplish it.

Sullivan next offered a critique of the originality of AI-generated content:

“The other thing is, is it unoriginal?

If you are just using the tool saying, ‘Write me 100 pages on the 100 different topics that I got because I ran some tool that pulled all the People Also Asked questions off of Google and I don’t know anything about those things and they don’t have any original content or any value. I just kind of think it’d be nice to get that traffic.’

You probably don’t have anything original.

You’re not necessarily offering anything with really unique value with it there.

A lot of AI tools or other tools are very like human beings because they’ve read a lot of human being stuff like this as well. Write really nice generic things that read very well as if they are quality and that they answer what I’m kind of looking for, but they’re not necessarily providing value.

And sometimes people’s idea of quality differ, but that’s not the key point of it when it comes to the policy that we have with it from there, that especially because these days some people would tell you that it’s quality.”

Takeaways:

  • Google doesn’t “care how you’re doing this scaled content, whether it’s AI, automation, or human beings. It’s going to be an issue.”
  • The QRG explicitly includes AI-generated content in its criteria for ‘Lowest’ quality ratings, signaling that this is something Google is concerned about.
  • Ask if the motivation for using AI-generated content is primarily to drive search traffic or to help users.
  • Originality and value-add are important qualities of content to consider.

Google’s Martin Splitt Reveals 3 JavaScript SEO Mistakes & Fixes via @sejournal, @MattGSouthern

Google’s Martin Splitt recently shared insights on how JavaScript mistakes can hurt a website’s search performance.

His talk comes as Google Search Advocate John Mueller also urges SEO pros to learn more about modern client-side technologies.

Mistake 1: Rendered HTML vs. Source HTML

During the SEO for Paws Conference, a live-streamed fundraiser by Anton Shulke, Splitt drew attention to a trend he’s noticing.

Many SEO professionals still focus on the website’s original source code even though Google uses the rendered HTML for indexing. Rendered HTML is what you see after JavaScript has finished running.

Splitt explains:

“A lot of people are still looking at view source. That is not what we use for indexing. We use the rendered HTML.”

This is important because JavaScript can change pages by removing or adding content. Understanding this can help explain some SEO issues.
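
To see the difference for yourself outside of Search Console, you can compare the raw source with the DOM after rendering. Here’s a minimal sketch of that comparison (not code from Splitt’s talk), assuming Node.js 18+ with the puppeteer package installed; the URL is just a placeholder:

```typescript
import puppeteer from "puppeteer";

async function compareSourceVsRendered(url: string): Promise<void> {
  // Raw source HTML: what "view source" or a plain fetch returns.
  const source = await (await fetch(url)).text();

  // Rendered HTML: the DOM after JavaScript has run, which is what Google indexes.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content();
  await browser.close();

  console.log(`Source HTML length:   ${source.length}`);
  console.log(`Rendered HTML length: ${rendered.length}`);
  // A large gap between the two usually means JavaScript is adding or removing content.
}

compareSourceVsRendered("https://example.com").catch(console.error);
```

If the rendered version contains content the source lacks (or vice versa), that gap is where rendering-related SEO issues tend to hide.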

Mistake 2: Error Pages Being Indexed

Splitt pointed out a common error with single-page applications and JavaScript-heavy sites: they often return a 200 OK status for error pages.

This happens because the server sends a 200 response for every URL and leaves it to JavaScript to decide whether the page actually exists.

Splitt explains:

“Instead of responding with 404, it just responds with 200 … always showing a page based on the JavaScript execution.”

When error pages get a 200 code, Google indexes them like normal pages, hurting your SEO.

Splitt advises checking server settings to handle errors properly, even when using client-side rendering.
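
As a rough illustration of that advice (not code from the talk), a server for a client-side-rendered app can return a real 404 for URLs the app doesn’t know about, instead of always sending the app shell with a 200. This sketch assumes a Node.js/Express setup; the route list and file names are hypothetical:

```typescript
import express from "express";

const app = express();

// Hypothetical list of paths the single-page app can actually render.
const knownRoutes = new Set(["/", "/products", "/contact"]);

// Static assets (JS bundles, CSS, images) are served normally.
app.use(express.static("dist"));

// Fallback for everything that isn't a static file.
app.use((req, res) => {
  if (knownRoutes.has(req.path)) {
    // Real pages get the app shell with a 200.
    res.status(200).sendFile("index.html", { root: "dist" });
  } else {
    // Unknown URLs get a genuine 404 so error pages aren't indexed as normal pages.
    res.status(404).sendFile("404.html", { root: "dist" });
  }
});

app.listen(3000);
```

The same idea applies to any framework: the HTTP status code has to be set by the server (or at the edge), because client-side JavaScript can’t change it after the response has been sent.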

Mistake 3: Geolocation Request Issue

Another problem arises when sites ask users for location or other permissions.

Splitt says Googlebot always refuses such requests, so a site that relies on geolocation (or similar permissions) without a backup plan has a problem.

Splitt explains:

“Googlebot does not say yes on that popup. It says no on all these requests … so if you request geolocation, Googlebot says no.”

Without alternative content, the page can appear blank to Googlebot, meaning nothing gets indexed. This can turn into a serious SEO mistake.
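
For illustration only (not code from the talk), the usual fix is to render useful default content whenever the permission is unavailable or denied, so Googlebot and permission-declining users still see something indexable. A minimal sketch, where the #stores element and the content strings are hypothetical:

```typescript
// Render location-specific content only when the permission is granted.
function showNearbyStores(position: GeolocationPosition): void {
  const { latitude, longitude } = position.coords;
  document.querySelector("#stores")!.textContent =
    `Stores near ${latitude.toFixed(2)}, ${longitude.toFixed(2)}`;
}

// Fallback content that doesn't depend on a granted permission.
// This is what Googlebot (which declines the prompt) ends up seeing.
function showDefaultStores(): void {
  document.querySelector("#stores")!.textContent =
    "Browse all store locations by region";
}

if ("geolocation" in navigator) {
  // If the user (or Googlebot) declines, the error callback shows the default view.
  navigator.geolocation.getCurrentPosition(showNearbyStores, showDefaultStores);
} else {
  showDefaultStores();
}
```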

How to Debug JavaScript for SEO

Splitt shared a few steps to help diagnose and fix JavaScript issues:

  1. Start with Search Console: Use the URL Inspection tool to view the rendered HTML.
  2. Check the Content: Verify if the expected content is there.
  3. Review HTTP Codes: Look at the status codes in the “More info” > “Resources” section.
  4. Use Developer Tools: Open your browser’s developer tools. Check the “initiator” column in the Network tab to see which JavaScript added specific content.

Splitt adds:

“The initiator is what loaded it. If it’s injected by JavaScript, you can see which part of the code did it.”

Following these steps can help you find the problem areas and work with your developers to fix them.

See Splitt’s full talk in the recording below:

A Shift in SEO Skills

Splitt’s advice fits with Mueller’s call for SEOs to broaden their skill set.

Mueller recently suggested that SEO professionals learn about client-side frameworks, responsive design, and AI tools.

Mueller stated:

“If you work in SEO, consider where your work currently fits in … if your focus was ‘SEO at server level,’ consider that the slice has shrunken.”

Modern JavaScript techniques create new challenges that old SEO methods cannot solve alone. Splitt’s real-world examples show why understanding these modern web practices is now critical.

What This Means For SEO Professionals

Both Google Advocates point to a clear trend: SEO now requires more technical skills. As companies look for professionals who can blend SEO and web development, the demand for these modern skills is growing.

To keep up, SEO pros should:

  • Learn How JavaScript Affects Indexing: Know the difference between source and rendered HTML.
  • Master Developer Tools: Use tools like Search Console and browser developer tools to spot issues.
  • Collaborate with Developers: Work together to build sites that serve users and search engines well.
  • Broaden Your Skillset: Add client-side techniques to your traditional SEO toolkit.

Looking Ahead

As the web evolves, so must the skills of SEO professionals. However, leveling up your knowledge doesn’t have to be intimidating.

This fresh look at JavaScript’s role in SEO shows that even simple changes can have a big impact.


Featured Image: BestForBest/Shutterstock

AI Researchers Warn: Hallucinations Persist In Leading AI Models via @sejournal, @MattGSouthern

A report from the Association for the Advancement of Artificial Intelligence (AAAI) reveals a disconnect between public perceptions of AI capabilities and the reality of current technology.

Factuality remains a major unsolved challenge for even the most advanced models.

The AAAI’s “Presidential Panel on the Future of AI Research” report draws on input from 24 experienced AI researchers and survey responses from 475 participants.

Here are the findings that directly impact search and digital marketing strategies.

Leading AI Models Fail Basic Factuality Tests

Despite billions in research investment, AI factuality remains largely unsolved.

According to the report, even the most advanced models from OpenAI and Anthropic “correctly answered less than half of the questions” on new benchmarks like SimpleQA, a collection of straightforward factual questions.

The report identifies three main techniques being deployed to improve factuality:

  • Retrieval-augmented generation (RAG): Gathering relevant documents using traditional information retrieval before generating answers (see the sketch after this list).
  • Automated reasoning checks: Verifying outputs against predefined rules to cull inconsistent responses.
  • Chain-of-thought (CoT): Breaking questions into smaller units and prompting AI to reflect on tentative conclusions.
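
To make the first technique concrete, here’s a toy sketch of the retrieval step in RAG (my illustration, not from the report): score a small set of documents against the question, keep the most relevant ones, and pack them into the prompt the model answers from. The documents, the keyword-overlap scoring, and the model call mentioned in the comments are all simplified stand-ins; production systems typically retrieve with vector embeddings.

```typescript
// Toy corpus standing in for a real document store.
const documents = [
  "The Eiffel Tower was completed in 1889 and is about 330 meters tall.",
  "Retrieval-augmented generation grounds model answers in retrieved text.",
  "The Louvre is the most visited museum in the world.",
];

// Naive relevance score: count question terms that appear in the document.
function score(question: string, doc: string): number {
  const terms = question.toLowerCase().split(/\W+/);
  return terms.filter((t) => t.length > 3 && doc.toLowerCase().includes(t)).length;
}

// Assemble a prompt from the top-k retrieved documents.
function buildPrompt(question: string, topK = 2): string {
  const context = [...documents]
    .sort((a, b) => score(question, b) - score(question, a))
    .slice(0, topK)
    .join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}

console.log(buildPrompt("How tall is the Eiffel Tower?"));
// The assembled prompt would then be sent to an LLM; the retrieved context is what
// gives the model something factual to ground its answer in.
```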

However, these techniques show limited success, with 60% of AI researchers expressing pessimism that factuality issues will be “solved” in the near future.

This suggests you should prepare for continuous human oversight to ensure content and data accuracy. AI tools may speed up routine tasks, but full autonomy remains risky.

The Reality Gap: AI Capabilities vs. Public Perception

The report highlights a concerning perception gap, with 79% of AI researchers surveyed disagreeing or strongly disagreeing that “current perception of AI capabilities matches the reality.”

The report states:

“The current Generative AI Hype Cycle is the first introduction to AI for perhaps the majority of people in the world and they do not have the tools to gauge the validity of many claims.”

As of November, Gartner placed Generative AI just past its peak of inflated expectations, now heading toward the “trough of disillusionment” in its Hype Cycle framework.

For those in SEO and digital marketing, this cycle can provoke boom-or-bust investment patterns. Decision-makers might overcommit resources based on AI’s short-term promise, only to experience setbacks when performance fails to meet objectives.

Perhaps most concerning, 74% of researchers believe research directions are driven by hype rather than scientific priorities, potentially diverting resources from foundational issues like factuality.

Dr. Henry Kautz, chair of the Factuality & Trustworthiness section of the report, notes that “many of the public statements of people quite new to the field are out of line with reality,” suggesting that even expert commentary should be evaluated cautiously.

Why This Matters for SEOs & Digital Marketing

Adopting New Tools

The pressure to adopt AI tools can overshadow their limitations. Since issues of factual accuracy remain unresolved, marketers should use AI responsibly.

Conducting regular audits and seeking expert reviews can help reduce the risks of misinformation, particularly in YMYL (Your Money or Your Life) categories such as finance and healthcare.

The Impact On Content Quality

AI-based content generation can introduce inaccuracies that directly harm user trust and brand reputation. Search engines may demote websites that publish unreliable or deceptive material produced by AI.

Taking a human-plus-AI approach, where editors meticulously fact-check AI outputs, is recommended.

Navigating the Hype

Beyond content creation challenges, leaders must adopt a clear-eyed view to navigate the hype cycle. The report warns that hype can misdirect resources and overshadow more sustainable gains.

Search professionals who understand AI’s capabilities and limitations will be best positioned to make strategic decisions that deliver real value.

For more details, read the full report (PDF link).


Featured Image: patpitchaya/Shutterstock

Google Cautions SEOs & Creators About Filler Content via @sejournal, @martinibuster

Google’s John Mueller cautioned publishers and SEOs about filler content, which is generally created with the apparent goal of reaching a word-count threshold without concern for the user experience. Although recipe sites prompted this warning, it’s the kind of thing that all SEOs and publishers should be concerned about.

Probable Origin Of Filler Content

What Mueller warned about, filler content, probably has its origins in the idea that “content is king,” which gave rise to the SEO practice of creating content similar to what Google is already ranking, except ten times better, so that it towers over the competition (the 10x and skyscraper content strategies).

John Mueller Warns About Filler Content

Mueller’s observations about filler content were in the context of an overview of recent changes in the Quality Rater Guidelines (QRG), a book that Google created to bring more objective standards to how third party raters rate search results that are being tested.

Mueller said that filler content is low quality content that’s designed to make a page longer. Speaking informally and loosely, he said that filler content is problematic because users can find it “annoying.”

This is, according to my notes, what he said:

“Recently, quality rater guidelines, there are few things that I think are interesting for some sites that we have mentioned in the quality rater guidelines now which weren’t in there before.

And so this is the kind of thing which I think is important for sites to realize. On the one hand we’ve written about filler content, which is the kind of fluff that some websites put on their pages to make the pages longer. And sometimes they have good reasons to make the pages longer.

But for us this is sometimes problematic and users sometimes find it annoying. So we have that mentioned in the quality rater guidelines.”

Filler Content Is A Poor User Experience

What Mueller is referring to is the new section 5.2.2 of the QRG which lays out how to objectively judge whether a page has filler content. Filler content previously was nestled within section 5.2 but it’s now broken out into its own section. The main takeaway is that filler content is a user experience issue.

Here’s what the new section of the QRG says:

“5.2.2 Filler as a Poor User Experience

The main content (MC) of a page should support its purpose. Web site owners and content creators should place the most helpful and essential MC near the top of the page so that visitors can immediately access it.

A high quality page has the most helpful MC placed most prominently. Content that supports the page purpose without directly contributing to the primary goal can be included, but it should be shown lower on the page in a less prominent position. For example, on recipe pages, the recipe itself and important supporting content directly related to the recipe should be prominently displayed near the top of the webpage.

Sometimes, MC includes “filler” – low-effort content that adds little value and doesn’t directly support the purpose of the page. Filler can artificially inflate content, creating a page that appears rich but lacks content website visitors find valuable. Filler can result in a poor experience for people who visit the page, especially if placed prominently ahead of helpful content for the purpose of the page.

Important: Content that supports the page purpose without directly contributing to its primary goal can still be valuable if placed appropriately. Filler refers to low-effort content that occupies valuable and prominent space without providing value or without being helpful or satisfying for the primary purpose of the page.

A Low rating is appropriate if the page

● Contains a large amount of low quality and unhelpful filler, causing a poor experience for people visiting the page

● Contains a large amount of visually prominent filler that makes it difficult to find the helpful MC, causing frustration for people visiting the page”

Content Filler – Not Just For Recipe Sites

The new quality rater guideline section about filler content specifically mentions recipe sites, likely because of their notoriously long filler content that’s so bad they have to add a link to skip to the part that’s useful.

Some SEOs generally dislike change and this will probably make some people angry, but in my opinion, any publisher that feels they need to add a link to “skip to recipe” should probably consider that they’re doing something wrong. As Google says, important supporting content should be up front. If users have to skip the content to get to it then whatever it is that’s being skipped is not useful.

Recipe sites may be the worst offenders for filler content, but that doesn’t mean publishers in other niches can ignore the policy. All SEOs and content creators should recognize that filler content is problematic.

Imitating the top-ranked content is a practice that content publishers need to reconsider. It works against what Google is actually trying to rank and can lead to artificial word count targets instead of focusing on the user’s needs.

Reddit Introduces Faster Ad Setup and Pixel Integration via @sejournal, @brookeosmundson

Starting today, Reddit is rolling out a series of updates aimed at making it easier for small and medium-sized businesses (SMBs) to advertise on the platform.

The changes focus on simplifying the ad creation process, improving signal quality, and helping advertisers move campaigns from other platforms like Meta with fewer headaches.

These updates follow Reddit’s continued push to make its Ads Manager more accessible, especially for smaller businesses that may not have the luxury of dedicated ad ops teams or outside agencies.

Launching Campaigns Faster With New Tools

In the Reddit Ads update, the platform announced two new tools to streamline campaign creation:

  • Campaign Import.
  • Simplified Campaign Quality Assurance (QA).

The first of the additions is Campaign Import, a tool that lets advertisers bring campaigns over from Meta directly into Reddit Ads Manager.

The process is straightforward — after connecting their Meta account, advertisers can select an existing campaign, import it, and make any necessary adjustments to suit Reddit’s environment.

This isn’t just a time-saver; it gives brands a quick way to leverage proven creative and targeting strategies while adapting them to Reddit’s unique audiences.

Another welcomed update is Reddit’s new Campaign Quality Assurance (QA) system. Instead of clicking back and forth between settings pages, advertisers now get a consolidated review page summarizing all key campaign details.

If something looks off — budget, targeting, placements, or creative — users can jump directly to the relevant section and make fixes before going live.

It may seem small, but anyone who’s fumbled through nested ad platforms under tight deadlines knows how much this improves workflow.

Improved Quality Signals In Reddit Ads

In addition to the streamlined campaign creation tools, Reddit also announced two features to improve the quality of audience and user behavior signals:

  • 1-click Google Tag Manager integration for Reddit Pixel.
  • Event Manager Quality Assurance (QA).

The platform now offers a 1-click integration with Google Tag Manager (GTM) for the Reddit Pixel, dramatically reducing the friction of installing and configuring conversion tags.

Advertisers can now fire up GTM, install the Reddit Pixel in minutes, and start sending conversion data without needing to pull in a developer. This update alone will make performance-focused advertisers breathe a little easier.

Reddit also upgraded its Event Manager QA tools. The revamped Events Overview now gives a clearer breakdown of conversion events coming from both the Reddit Pixel and the Conversions API (CAPI).

Advertisers can spot data discrepancies faster and ensure their lower-funnel campaigns are set up for success.

Jim Squires, EVP of Business Marketing and Growth at Reddit, noted that SMBs have always been an essential part of the platform’s community and advertising base.

“We continue to make improvements to the Reddit Ads Manager that make it easier to launch and manage campaigns, so they can focus on what matters most: growing and running their businesses.”

Reddit Ads Continues To Push Forward

With these latest updates, Reddit continues refining its ad platform for a broader range of advertisers, with particular attention to reducing friction for growing businesses.

Advertisers who have been looking for more streamlined ways to import, optimize, and measure campaigns will likely find these tools helpful as they plan their next steps on Reddit.

Have you already tried out Reddit Ads? Will these updates make you lean towards testing a new platform next quarter?

Google Confirms You Can’t Add EEAT To Your Web Pages via @sejournal, @martinibuster

Google’s John Mueller offered an overview of EEAT (Experience, Expertise, Authoritativeness, and Trustworthiness) at the Search Central Live NYC event and affirmed why it matters for some sites and why it’s not something SEOs can add to a website.

EEAT’s Relation To Quality Raters And YMYL Websites

John Mueller started this part of his discussion by explicitly tying the concept to its use as a way for the third-party quality raters to provide a more objective judgment about the quality of the search results. He did not say that EEAT was created for SEOs to use as a ranking factor guide; in fact, he expressly said that’s not how it works.

What is especially notable is that Mueller says that EEAT comes into play algorithmically for sites that are in topics that affect health or finance, what Google terms Your Money Or Your Life (YMYL) topics.

This is what he said, according to my notes, which contains some paraphrasing:

“EEAT is one of the ways that we look at page quality. EEAT is experience, expertise, authoritativeness and trustworthiness. And this is something that we tell the third party quality raters to watch out for when they’re doing page quality evaluation and something that we take into account when we think the query or a set of pages is on a specific topic where it’s more critical, where we call them your money, your life pages. Where we think that the user actually needs to have something that they can rely on and some signs that they can rely on the content that is present.”

EEAT Is Not Something You Add To Web Pages

In his follow-up statements he dismissed the idea that an SEO can add EEAT to their web pages. EEAT is not something you can add to a website. That’s not how it works. So if adding EEAT is part of what you do for SEO, stop. That’s not SEO.

This is what Mueller said:

“Sometimes SEOs come to us or like mention that they’ve added EEAT to their web pages. That’s not how it works. Sorry, you can’t sprinkle some experiences on your web pages. It’s like, that’s that doesn’t make any sense.”

Photo From Google Search Central Live NYC

EEAT Is Not Needed On Non-YMYL Pages

Lastly, Mueller repeated the point that EEAT is not something they’re looking for on run-of-the-mill websites. Obviously it’s great if the content has expertise and trustworthiness and so on, but he said it’s not something they’re algorithmically alert for on those kinds of sites, specifically naming recipe sites.

This is what he said:

“From a practical point of view, it’s important to look at this, especially if you’re publishing things on these critical topics, and to look at how you can highlight what it is that you’re already doing so that it’s clear for users.

But if you’re creating a recipe for cookies, you don’t need to have the sidebar with like, ‘this author has created cookies for 27 years.’ I think most people will be able to understand.”

Takeaways

EEAT’s Purpose and Scope

EEAT is used by third-party quality raters to assess search result quality. It was not created by Google as a list of ranking factors for an SEO checklist.

EEAT’s Role in YMYL Topics

Google algorithmically considers EEAT for pages that affect users’ finances or health, which the Quality Raters Guidelines refer to as Your Money or Your Life (YMYL) topics. These are the topic areas where reliability and trust are critical for user safety and confidence, and where Google is especially concerned that those qualities are expressed in some way, internally and/or externally, about those sites. Google doesn’t say what those signals are.

Misconceptions About EEAT in SEO

John Mueller emphasized that EEAT is not something SEOs can “add” to a website the way they might add keywords or internal links. Attempting to “add EEAT” is a misunderstanding of how the concept works within search.

EEAT and Non-YMYL Websites

EEAT is not something that is required in an algorithmic context for non-YMYL sites, such as recipe blogs or other kinds of non-critical content. While it’s useful in a general or marketing sense to reflect expertise and trust, it’s not a ranking focus for most topics.

EEAT is explicitly created for the third party quality raters to use as a more objective benchmark. That fact gets lost in all the conversations by SEOs about the topic of EEAT. It is also something that’s not particularly important for sites that are outside of YMYL topics. Lastly, EEAT is not something that an SEO can add to their page. Creating a bio with an AI generated image, linking it to a fake LinkedIn profile and then calling it EEAT is not a thing. Trustworthiness, for example, is something that is earned and results in people making recommendations (which doesn’t mean that SEOs should create fake social media profiles and start talking about an author at a website). Nobody really knows what the EEAT signals are.

Featured Image by Shutterstock/RealPeopleStudio

Google Updates Unfair Advantage Policy, Advertisers React via @sejournal, @brookeosmundson

On Friday, Google sent out a subtle but impactful policy update to advertisers, confirming changes to its long-standing “Unfair Advantage Policy”.

While the official enforcement date is April 14, 2025, the conversation has already started — and it’s anything but quiet.

The PPC community is buzzing with opinions, questions, and concerns. But this update didn’t come out of nowhere.

About a month ago, Google quietly laid the groundwork for this change without most people noticing.

Let’s unpack exactly what’s happening, why it matters, and how advertisers are reacting.

What Did Google Change?

The core of the update is about limiting how many ads a business, app, or site can show in a single ad location. Here’s Google’s new language:

Google email to advertisers about Unfair Advantage policy update.

The new language is crucial to understand.

The focus isn’t on restricting brands from showing multiple ads across different placements—it’s about stopping advertisers from stacking multiple ads in the same slot, which would effectively block competition and inflate dominance.

It’s not a total ban on multiple ads from the same advertiser showing on a single page, but rather a limit within a specific ad location.

However, as with many Google Ads policies, the phrase “single ad location” is doing a lot of heavy lifting—and advertisers are left wondering how Google will interpret and enforce it in practice.

One notable detail: Google says violations won’t lead to instant account suspensions. Advertisers will receive a warning and at least seven days to address any violations before facing suspension.

This is important. Google seems to be trying to strike a balance between tightening policy and giving advertisers room to adapt.

The Breadcrumb Many Missed – February Auction Documentation Update

Interestingly, this isn’t the first time Google has hinted at this shift.

Back in February 2025, advertisers noticed that Google updated its documentation on “How the Google Ads Auction Works”.

The update clarified that Google runs separate auctions for each ad location, meaning that the auction for the first position is distinct from the auction for the second, third, and so on.

Ginny Marvin, Google Ads Liaison, even acknowledged the change directly in LinkedIn discussions. This detail flew under the radar for many but now seems like a foundational piece for this official Unfair Advantage update.

Effectively, Google was setting the table a month ago. This policy update simply formalizes how those auctions will now prevent advertisers from “double-serving” or stacking ads in the same position.

Why Google Is Doing This, And Why Now

Google’s goal here appears twofold:

  1. Auction Fairness — Google wants to prevent scenarios where advertisers, affiliates, or large multi-account setups game the system by occupying multiple positions within a single auction.

  2. Affiliate Abuse Control — This rule directly calls out affiliates who break affiliate program rules, a growing concern in Google’s search ecosystem.

Of course, some advertisers suspect there’s a third goal: protecting the user experience and, more directly, protecting Google’s own long-term revenue by encouraging more advertisers to compete rather than allowing the largest players to squeeze others out.

Advertisers Give Mixed Reactions to Google Update

While this update was emailed to advertisers on Friday afternoon, marketers didn’t waste time sharing their takes on the update.

Andrea Atzori, who also received the email from Google, took to LinkedIn to provide his take on the update.

Atzori highlighted that this change is more about clarification than transformation, as he’d seen the same advertiser in multiple locations previously.

Navah Hopkins also took to LinkedIn with a more brief update, eager to hear thoughts from fellow marketers on the Unfair Advantage policy.

Hopkins and others noted that while the update may sound fair in theory, the proof will come in how it affects impression share, Auction Insights, and real-world campaign performance.

From the comments on Hopkins’ post, early reactions seem to lean toward skepticism and questions:

Chris Chambers commented:

This is going to be wild from a metric reporting standpoint since it seems like right now it counts as 2 impressions and also affects your impression share and position in Auction Insights (same with competitors).

But it also seems like now the advertisers with the most to spend in each niche will get even more real estate and be able to show twice, potentially cutting out smaller competitors completely from the first page.

Steve Gerencser had a similar take to Chambers:

I wonder how they are going to count people that pogo from one ad right back to the next and then back to something else? I can see a lot of wasted ad spend, or an opportunity for someone with deep pockets to dominate.

Some worry that well-funded advertisers will still find ways to dominate, while smaller brands hope this levels the playing field.

What Advertisers Should Watch For

While the policy may not seem earth-shattering at first glance, it does come with a few things advertisers should actively monitor.

First, smaller and mid-sized advertisers may stand to benefit, at least in theory. By limiting how many ads a single business can show in one location, Google could slightly reduce the dominance of big-budget brands that have historically owned the top of the page through multiple placements.

This could open up space for other players to get visibility where previously they were pushed out.

But, as several PPC pros pointed out on LinkedIn, the big question is how Google defines and enforces a single ad location in practice.

Google clarified last month that each ad location runs its own auction, meaning it’s technically possible for a brand to show up in multiple places on the same page—just not in the exact same slot.

So, while the policy aims to limit dominance, it doesn’t necessarily mean fewer total appearances for advertisers with deep pockets.

This also has potential ripple effects on Auction Insights reports. If Google starts filtering or limiting how often multiple ads from the same business appear in a given location, expect impression share metrics and overlap rates to behave differently—maybe even unexpectedly.

Advertisers will need to watch Auction Insights and Impression Share trends closely post-April to see if any patterns emerge.

Additionally, affiliate marketers and businesses using aggressive multi-account or multi-site strategies should be especially careful. The updated policy makes it clear that affiliates must play by their program’s rules and can no longer try to sneak multiple ads for the same offer into the same auction.

While Google says you’ll get a warning before any suspension, it’s probably wise to get ahead of this now, rather than risk a compliance issue later.

And finally, there’s still some ambiguity about multi-brand or franchise setups. If you’re managing a brand with multiple sub-brands, sister companies, or franchisees, the question remains: will Google treat you as one business under this policy or multiple?

That detail could make a big difference, especially for large organizations or verticals like automotive, real estate, or hospitality.

Final Thoughts: Is This Really a Game-Changer?

Honestly? It’s hard to call this a monumental shift yet. The update feels more like a formalization of existing enforcement patterns than a radical new rulebook.

That said, the PPC community is right to question what this will look like in Auction Insights and daily performance reports. Whether this is a minor tweak or the start of stricter anti-duplication policing will become clearer as advertisers see real-world data throughout Q2 and beyond.

Either way, it’s worth watching—especially if you’ve ever benefitted from, or competed against, someone taking up too much SERP real estate.

Google Says They Launch Thousands Of Updates Every Year via @sejournal, @martinibuster

Google’s John Mueller explained during a session of the Search Central Live NYC event that Google runs over 700,000 tests per year in order to keep up with user expectations. His explanation of why Google performs so many tests and launches thousands of changes should give SEOs an idea of the pace of change at Google, and should inspire publishers and SEOs to consider ways they, too, can anticipate user expectations and roll out changes to satisfy them.

Updates Are Not Done In Isolation

The first thing that Mueller said about updates is that they’re not done in isolation; rather, Google uses third-party raters, a fresh pair of eyes, to evaluate its tests and new updates to its algorithms.

Mueller explained:

“So there is a lot of activity happening on the web and we kind of have to keep up with that as well.

How we look at things when it comes to updates, I think this is maybe a bit of a jarring transition here, but essentially when we work on changes with regards to search, one of the things that is core to all of the changes that we do is that we don’t do them in isolation just because we think they’re good, but rather that we find ways to actually test to make sure that they are actually good.

And one of the ways that we do that is we work together with so-called quality raters.”

Number Of Tests And Launched Updates

Google conducts a staggering number of tests every year and launches thousands of changes (updates).

Photo Showing Number of Google Updates Per Year

John Mueller said (includes paraphrasing):

“When it comes to changes that we do this number is from 2023. I imagine the number from last year is similar. We’ve made over 4,700 launches. And these launches come from over 700,000 tests and we make tests all the time. You can try to calculate like how many tests are running every day. If you assume that a test maybe runs for two weeks over the course of the day, like there are lots of tests that are happening in parallel.”

Google Says It’s All About User Expectations

Mueller offered an explanation of what motivates Google to run so many tests and launch thousands of updates to the search results. He said it’s all about meeting user expectations.

This is what he said:

“And that also means that when people look at the search results, they see things that are sometimes a bit different.

From our point of view it’s not so much that we’re doing all of this work to keep making changes to make it hard for people to keep up, but rather because we see that users have very high expectations of the web and we want to make sure that whatever expectations they have tomorrow we can still kind of fulfill.”

Takeaway For Publishers And SEOs

Google’s not running hundreds of thousands of tests a year to confuse SEOs and publishers. They’re doing it to stay ahead of what users want before users even know they want it.

SEO has historically been reactive, which means that search marketers and publishers wait until Google announces an update and then run back to their websites to “fix” whatever they think is broken. SEO eyes are always on Google when they should really be thinking ahead about how their consumers or site visitors are aging out, no longer reading blogs, or changing their habits. Do you have to wait until Google announces an accessibility update before you test whether your site is usable for visitors on screen readers? Are client sites usable for people who are color blind, or are you going to wait for an update? That’s reactive.

One of the reasons Google is number one in many things is because they didn’t wait for someone else to do it first. Before Gmail, email providers gave their users storage measured in megabytes. Google leapfrogged its competition by offering users a full gigabyte of free space.

So maybe SEOs and publishers should scroll up and re-read the reasons that John Mueller gave to explain why Google does hundreds of thousands of tests and launches thousands of updates every year. If you’re not already being proactive then I really think that this is the year you start thinking about ways to do that.

Takeaways:

Google’s Testing Volume and Frequency

  • Google performs over 700,000 tests annually.
  • In 2023 alone, these tests led to over 4,700 changes to Search.
  • Tests often run in parallel, with many active at the same time.
  • This volume reflects a continuous, high-speed development cycle.

Why Google Runs So Many Tests

  • Google’s motivation for running so many tests is to anticipate user expectations.
  • Despite its setbacks with AI, the volume of tests and changes is one reason why Google remains a formidable competitor.

Implications for SEOs and Publishers

  • Search marketers and publishers who want to keep up with Google should consider emulating Google’s approach to users and look for ways to anticipate user behavior, expectations, and trends.
  • Start testing and improving now rather than waiting for a Google update before accounting for shortcomings.
  • Consider a site audit by a fresh pair of eyes.

Google Shares Valuable SEO Takeaway About Quality Raters Guidelines via @sejournal, @martinibuster

At the recent Search Central Live NYC, Google’s John Mueller discussed the third-party quality raters they use to evaluate changes to Google’s search algorithms. Although it wasn’t stated explicitly, the nuance was implied: keeping a human in the loop remains an important factor in fine-tuning your SEO decisions.

Third Party Quality Raters

Hopefully by now everyone knows that Google employs third-party quality raters to review algorithm changes and provide feedback that can be used to judge the success of various algorithm updates and tests. The raters don’t actually affect the rankings of individual websites; their judgments are about the effectiveness of the algorithms, which themselves affect millions of sites across the Internet.

Ordinarily, judgment calls about whether a site is useful or not are highly subjective (a matter of personal opinion). That’s why Google created a set of guidelines for the quality raters: to standardize the criteria the raters use and make their judgments more objective (closer to considering facts that are either true or false).

Here is, according to my notes, how John Mueller explained it:

“And one of the ways that we do that is we work together with so-called quality raters. These are external people who review the quality of search results, who review the quality of web pages to let us know, are we in a good place? Are we going in the right direction? Are the changes that we are working on actually making sense and acceptable for you?”

What’s notable about that exchange is that the whole point of judging the algorithms is whether or not they are acceptable to humans.

Mueller next introduced the topic of the quality raters guidelines and why it’s important for SEOs and publishers to read. In fact, he calls it important and encourages anyone concerned about ranking better to at least give it a scan for topics relevant to them.

He continued:

“So we have a set of guidelines that we published for these quality raters, which I think is actually surprisingly important. It’s a gigantic book, something I don’t know, 180 pages long. But it’s a lot of guidelines where we kind of draw out what we think makes sense for quality raters to review with regards to the content. And this is publicly available. You can look at it yourself as well.

I think for most websites it makes sense at least have gone through it, or maybe control F and search through it for keywords that you care about just so that you have a sense of what Google is thinking when they’re making changes.”

The Quality Raters Guidelines Is Not A Handbook Of Ranking Factors

There are many SEOs who have spread the misinformation that the quality raters guidelines offer a peek into what Google uses for ranking websites. That’s false.

Mueller continues (my paraphrase):

“Obviously quality rater guidelines is not a document that says like, this is how we do ranking, but more just, this is how we review things on the web when we ask for input from these quality raters.

They do a number of different tasks for us and so one of them is page quality where they tell us like, is this a high quality page or not? Another one is to evaluate whether the pages that we show in the search results meet the needs of a user. Which is highly subjective sometimes, but we give them information on what they can do there and the other one is A/B testing, side-by-side testing where we present quality raters with a set of pages before and a set of pages afterwards, and they tell us which one of these is actually better.”

Humans In The Loop

The important takeaway from Mueller’s discussion about the quality raters and the guidelines they use is that how humans react to the search results is at the heart of what Google is doing with their algorithms. Some people tend to think of Google’s algorithms as mechanical machines cranking out search results, and that’s pretty much what they are, but they’re also emulating human judgment about what is and is not spam, and what is and is not a high-quality search result.

Rote SEO is highly focused on feeding the machine, but the machine itself is emulating human judgment. SEO today is more than ever about considering how every choice made about a site affects humans, and less about worrying whether you’ve got enough entity keywords on a page or whether the H1 heading is missing.

Human Judgment Is Core to Google’s Algorithm Development

Quality raters are used to judge whether algorithm changes make search results better for people. Algorithms are adjusted based on human reactions, not machine metrics alone.

Quality Raters Guidelines Reflect Google’s Values

Google’s Quality Rater Guidelines are not a ranking manual. They define what Google considers useful, high-quality content. They can serve as a mirror that business owners and SEOs can hold up to their own content to see how it aligns with Google’s criteria for high quality.

SEO Today Is About Human Experience

The deeper message buried in what Mueller was talking about is that Google’s algorithms are trying to emulate human judgment, so SEOs should focus on user experience and usefulness, not checklists or busy-work like adding author bios with superfluous information that does nothing for site visitors.

Google Revisits 15% Unseen Queries Statistic In Context Of AI Search via @sejournal, @martinibuster

At Search Central Live NYC, Google’s John Mueller revisited the statistic that 15% of the search queries Google sees are completely new and have never been encountered before. He also addressed what impact the advent of AI has had on the number of novel search queries users are making today.

Understanding Language

Understanding language is important for serving relevant search results. That means Google needs to understand the nuances of what people mean when they search, which could include unconventional use of specific words or outright misuse of words. BERT is an example of a technology Google uses to understand user queries.

Google’s introduction to BERT explains:

“We see billions of searches every day, and 15 percent of those queries are ones we haven’t seen before–so we’ve built ways to return results for queries we can’t anticipate.

…Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.”

Are There More Unknown Search Queries?

In the context of an overview of Google Search, John Mueller briefly discussed the statistic about new queries Google Search sees and whether LLMs have had any impact on it.

This is, according to my notes, what he said:

“15% of all queries are new every day. This is something that I’m surprised is still the case. I would have thought at some point most of the searches would have been made, people just ask the same thing over and over again.

But when we recalculate these metrics, it’s always around 15%. I imagined maybe with LLMs and the AI systems that it would be a bit higher in recent years, but it’s still hovering around that number.”

He then speculated on why that 15% number remains the same, attributing it to the fact that things are always changing and that life is not static.

My paraphrase of what Mueller observed:

“It’s fantastic to see because it means to me that people keep going to search and looking for something new… and if people would stop going to search or stop searching new things, and to me that would be a sign that maybe something is wrong here. So this is a great number.”

Curious Outcome

It’s remarkable that innovations as groundbreaking as AI search and the ability to search visually would presumably add more complex searches that Google has never seen before, yet that 15% number keeps holding steady.