How Google Protects Searchers From Scams: Updates Announced via @sejournal, @MattGSouthern

Google has announced improvements to its security systems, revealing that AI now plays a crucial role in protecting users from scams.

Additionally, Google has released a report detailing the effectiveness of AI in combating scams in search results.

Google’s AI-Powered Defense Strategy

Google’s report highlights its progress in spotting scams. Its AI systems block hundreds of millions of harmful search results daily.

Google claims it can now catch 20 times more scammy pages before they appear in search results compared to three years ago. This comes from investments in AI systems designed to spot fraud patterns.

Google explains in its report:

“Advancements in AI have bolstered our scam-fighting technologies — enabling us to analyze vast quantities of text on the web, identify coordinated scam campaigns and detect emerging threats — staying one step ahead to keep you safe on Search.”

How Google’s AI Identifies Sophisticated Scams

Google’s systems can now spot networks of fake websites that might look real when viewed alone. This broader view helps catch coordinated scam campaigns that used to slip through the cracks.

Google says its AI is most effective in two areas:

  1. Fake customer service: After spotting a rise in fake airline customer service scams, Google added protections that cut these scams by more than 80% in search results.
  2. Fake official sites: New protections launched in 2024 reduced scams pretending to be government services by over 70%.

Cross-Platform Protection Extends Beyond Search

Google is expanding its scam-fighting to Chrome and Android, too.

Chrome’s Enhanced Protection with Gemini Nano

Chrome’s Enhanced Protection mode now uses Gemini Nano, an AI model that works right on your device. It analyzes websites in real time to spot dangers.

Jasika Bawa, Group Product Manager for Chrome, says:

“The on-device approach provides instant insight on risky websites and allows us to offer protection, even against scams that haven’t been seen before.”

Android’s Expanded Defenses

For mobile users, Google has added:

  • AI warnings in Chrome for Android that flag suspicious notifications
  • Scam detection in Google Messages and Phone by Google that spots call and text scams

Multilingual Protection Through Language Models

Google is improving its ability to fight scams across languages. Using large language models, Google can find a scam in one language and then protect users searching in other languages.

This matters for international SEO specialists and marketers with global audiences. It shows that Google is getting better at analyzing content in different languages.

What This Means

As Google enhances its ability to detect deceptive content, the standard for quality keeps rising for all websites.

Google now views security as an interconnected system across all its products, rather than as separate features.

Maintaining high transparency, accuracy, and user focus remains the best strategy for long-term search success.

10Web Releases API For Scaled White Label AI Website Building via @sejournal, @martinibuster

10Web has launched an AI Website Builder API that turns text prompts into fully functional WordPress websites hosted on 10Web’s infrastructure, enabling platforms to embed AI website creation into their product workflows. Designed for SaaS tools, resellers, developers, and agencies, the API delivers business-ready sites with ecommerce features, AI-driven customization, and full white-label support to help entrepreneurs launch quickly and at scale.

Developer And Platform Focused API

The 10Web AI Website Builder API was designed for developers and platforms that serve entrepreneurs, enabling them to embed website creation into their own tools so that non-technical users (entrepreneurs and small business owners) can launch websites with zero coding or technical knowledge.

10Web describes its product capabilities:

“Text-to-website AI: Generates structure, content, sections, and visuals

Plugin presets: Define default tools per client, project, or vertical

Drag-and-drop editing: Built-in Elementor-based editor for post-generation control

Managed WordPress infrastructure: Hosting, SSL, staging, backups, and DNS

Dashboards & sandbox: Analytics, developer tools, and real-time preview”
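
To make the integration model concrete, here is a minimal sketch of what a platform-side call to an AI website builder API might look like: a server-side script posts a text prompt plus basic business details and reads back the URL of the generated site. The endpoint URL, payload fields, and authentication header are assumptions for illustration only, not 10Web’s documented interface; consult the official API docs for the actual request shape.

<?php
// Hypothetical sketch of a platform calling an AI website builder API.
// The endpoint, payload fields, and auth header are illustrative assumptions,
// not 10Web's documented interface.

$apiKey  = getenv( 'WEBSITE_BUILDER_API_KEY' ); // stored server-side, never exposed to the browser
$payload = json_encode( array(
    'prompt'      => 'A minimalist online store for handmade ceramics',
    'business'    => array( 'name' => 'Clay & Co.', 'type' => 'ecommerce' ),
    'white_label' => true, // hide builder branding for reseller scenarios
) );

$ch = curl_init( 'https://api.example.com/v1/ai-websites' ); // placeholder URL
curl_setopt_array( $ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array(
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey,
    ),
) );

$response = curl_exec( $ch );
curl_close( $ch );

// A successful response would typically include an ID and the URL of the
// generated WordPress site, which the platform can show to its user.
$site = json_decode( $response, true );
echo $site['site_url'] ?? 'Generation request submitted.';

The design point is that generation, hosting, SSL, backups, and DNS stay on 10Web’s managed WordPress infrastructure, so the integrating platform only orchestrates requests and surfaces the result to its users.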

Learn more at 10Web:

Integrate the #1 AI Website Builder API into your platform

Featured Image by Shutterstock/Surf Ink

New AI Models Make More Mistakes, Creating Risk for Marketers via @sejournal, @MattGSouthern

The newest AI tools, built to be smarter, make more factual errors than older versions.

As The New York Times highlights, tests show errors as high as 79% in advanced systems from companies like OpenAI.

This can create problems for marketers who rely on these tools for content and customer service.

Rising Error Rates in Advanced AI Systems

Recent tests reveal a trend: newer AI systems are less accurate than their predecessors.

OpenAI’s latest system, o3, got facts wrong 33% of the time when answering questions about people. That’s twice the error rate of their previous system.

Its o4-mini model performed even worse, with a 48% error rate on the same test.

For general questions, the results (PDF link) were:

  • OpenAI’s o3 made mistakes 51% of the time
  • The o4-mini model was wrong 79% of the time

Similar problems appear in systems from Google and DeepSeek.

Amr Awadallah, CEO of Vectara and former Google executive, tells The New York Times:

“Despite our best efforts, they will always hallucinate. That will never go away.”

Real-World Consequences For Businesses

These aren’t just abstract problems. Real businesses are facing backlash when AI gives wrong information.

Last month, Cursor (a tool for programmers) faced angry customers when its AI support bot falsely claimed users couldn’t use the software on multiple computers.

This wasn’t true. The mistake led to canceled accounts and public complaints.

Cursor’s CEO, Michael Truell, had to step in:

“We have no such policy. You’re of course free to use Cursor on multiple machines.”

Why Reliability Is Declining

Why are newer AI systems less accurate? According to a New York Times report, the answer lies in how they’re built.

Companies like OpenAI have used most of the available internet text for training. Now they’re using “reinforcement learning,” which involves teaching AI through trial and error. This approach helps with math and coding, but seems to hurt factual accuracy.

Researcher Laura Perez-Beltrachini explained:

“The way these systems are trained, they will start focusing on one task—and start forgetting about others.”

Another issue is that newer AI models “think” step-by-step before answering. Each step creates another chance for mistakes.

These findings are concerning for marketers using AI for content, customer service, and data analysis.

AI content with factual errors could hurt your search rankings and brand.

Pratik Verma, CEO of Okahu, tells the New York Times:

“You spend a lot of time trying to figure out which responses are factual and which aren’t. Not dealing with these errors properly basically eliminates the value of AI systems.”

Protecting Your Marketing Operations

Here’s how to safeguard your marketing:

  • Have humans review all customer-facing AI content
  • Create fact-checking processes for AI-generated material
  • Use AI for structure and ideas rather than facts
  • Consider AI tools that cite sources (called retrieval-augmented generation)
  • Create clear steps to follow when you spot questionable AI information

The Road Ahead

Researchers are working on these accuracy problems. OpenAI says it’s “actively working to reduce the higher rates of hallucination” in its newer models.

Marketing teams need their own safeguards while still using AI’s benefits. Companies with strong verification processes will better balance AI’s efficiency with the need for accuracy.

Finding this balance between speed and correctness will remain one of digital marketing’s biggest challenges as AI continues to evolve.


Featured Image: The KonG/Shutterstock

Google Disputes News That Search Engine Use Is Falling via @sejournal, @martinibuster

Google took the unusual step of issuing a response to news reports that AI search engines and chatbots were causing a decline in traditional search engine use, directly contradicting testimony given by an Apple executive in the ongoing U.S. government antitrust lawsuit against Google.

Apple Testimony That Triggered Stock Sell-Off

Google’s stock price took a steep dive on the news that people were turning away from traditional search engines, dropping by 7.51% on Wednesday. What triggered the stock sell-off was testimony by Eddy Cue, Apple’s senior vice president of services, who testified that search engine use by users of Apple’s Safari browser declined for the first time last month, expressing his opinion that a technological shift is underway that is undercutting the use of traditional search engines.

Early AI Adopters Turning Away From Google?

There is a view in Silicon Valley that Google Search is legacy technology. A recent episode of the Y Combinator show featured the host sharing that their Google search traffic has dropped by 15%, which he attributes to AI use in both Google and chatbots. He explained that if you want to see the future, you look to the early adopters, commenting that everyone he knows in Silicon Valley uses ChatGPT to get answers and that Google Search is de facto legacy technology.

The host described how 25 years ago the early adopters were using Google but that now, Google Search feels weird to him.

He said:

“People are now switching their behavior to where your default action if you’re looking for information is, you know ChatGPT or perplexity, or one of these things, and even just, you know, observing my own behavior. I’ll use Google mostly for kind of navigational. Like, if I’m just looking for a specific website and I know it’s going to give the same thing, but it’s starting to have that weird kind of, like legacy website, like I’m using eBay or something.”

Google’s Statement

Google’s statement was short and to the point, with no accompanying images to dress it up as a blog post. It could even be read as terse.

Here’s what Google published:

“Here’s our statement on this morning’s press reports about Search traffic.

We continue to see overall query growth in Search. That includes an increase in total queries coming from Apple’s devices and platforms. More generally, as we enhance Search with new features, people are seeing that Google Search is more useful for more of their queries — and they’re accessing it for new things and in new ways, whether from browsers or the Google app, using their voice or Google Lens. We’re excited to continue this innovation and look forward to sharing more at Google I/O.”

AI Revolution: What Nobody Else Is Seeing

Here’s the video of the Y Combinator show that offers a peek at how people in Silicon Valley relate to Google Search. The part I quoted is at about the 24-minute mark.

Featured Image by Shutterstock/Framalicious

Apple May Add AI Search Engines to Safari As Google Use Drops via @sejournal, @MattGSouthern

Apple is reportedly planning to redesign Safari to focus on AI search engines.

According to recent testimony in the Google antitrust case, this comes as the company prepares for possible changes to its profitable Google deal.

Apple Signals Shift In Search Strategy

Eddy Cue, Apple’s senior vice president of services, testified that Safari searches dropped for the first time last month.

He believes users are choosing AI tools over regular search engines. This change happens as courts decide what to do after Google lost its antitrust case in August.

Per a report from Bloomberg, Cue testified:

“You may not need an iPhone 10 years from now as crazy as it sounds. The only way you truly have true competition is when you have technology shifts. Technology shifts create these opportunities. AI is a new technology shift, and it’s creating new opportunities for new entrants.”

AI Search Providers May Replace Traditional Search

Cue believes AI search providers such as OpenAI, Perplexity AI, and Anthropic will eventually replace traditional search engines like Google.

“We will add them to the list — they probably won’t be the default,” Cue said, noting Apple has already talked with Perplexity.

Currently, Apple offers ChatGPT as an option in Siri and plans to add Google’s Gemini later this year.

Cue admitted that these AI search tools need to improve their search indexes. However, he said their other features are “so much better that people will switch.”

“There’s enough money now, enough large players, that I don’t see how it doesn’t happen,” he said about the shift from standard search to AI-powered options.

Context: Google’s Antitrust Battle Timeline

This testimony comes during a key moment in the case against Google:

  • August 2024: Judge Mehta ruled Google broke antitrust law through exclusive search deals
  • October 2024: DOJ proposed remedies targeting search distribution, data usage, search results, and advertising
  • December 2024: Google offered counter-proposals to loosen search deals
  • March 2025: DOJ filed revised proposals, including possibly forcing Google to sell Chrome

The $20 Billion Question

The core issue is Google’s deal with Apple, reportedly worth $20 billion per year, which makes Google the default search engine on Safari.

While expecting changes to this deal, Cue admitted he has “lost sleep over the possibility of losing the revenue share from their agreement.”

We learned about this payment during the trial. In 2022, Google paid Apple $20 billion to be Safari’s default search engine.

Last year, they expanded their partnership to add Google Lens to the Visual Intelligence feature on new iPhones.

Proposed Remedies & Responses

The DOJ’s latest filing suggests several significant changes:

  • Making Google sell off Chrome
  • Limiting Google’s payments for default search placement
  • Stopping Google from favoring its products in search results
  • Making Google’s advertising practices more transparent

Google has criticized these proposals, calling them a “radical interventionist agenda” that would “break a range of Google products.”

Instead, Google suggests letting browser companies deal with multiple search engines and giving device makers more freedom about which search options are preloaded.

What This Means

If Apple shifts Safari toward AI, prepare for significant changes in search.

It’s not a stretch to say the outcome could reshape search competition and digital marketing for years.


Featured Image: Bendix M/Shutterstock

ChatGPT Leads AI Search Race While Google & Others Slip, Data Shows via @sejournal, @MattGSouthern

ChatGPT leads the AI search race with an 80.1% market share, according to fresh data from Similarweb.

Over the last six months, OpenAI’s tool has maintained a strong lead despite ups and downs.

Meanwhile, traditional search engines are struggling to grow as AI tools reshape how people find information online.

AI Search Market Share: Today’s Picture

The latest numbers show ChatGPT’s market share rebounding to 80.1%, up from 77.6% a month ago.

Here’s how the competition stacks up:

  • DeepSeek: 6.5% (down from 7.6% last month)
  • Google’s AI tools: 5.6% (up slightly from 5.5% last month)
  • Perplexity: 1.5% (down from 1.9% last month)
  • Grok: 2.6% (down from 3.2% last month)

These numbers are part of Similarweb’s bigger “AI Global” report (PDF link).

Traditional Search Engines Losing Ground

The most important finding may be that traditional search engines aren’t growing:

  • Google: -2% year-over-year
  • Bing: -18% year-over-year (a big drop from +18% in January)
  • Yahoo: -11% year-over-year
  • DuckDuckGo: -6% year-over-year
  • Baidu: -12% year-over-year

Traditional search overall shows a steady decline of 1% to 2% compared to last year. It’s important to note, however, that Google has seven times the user base of ChatGPT.

Which AI Categories Are Growing Fastest

While AI is changing search, some AI categories are growing faster than others:

  • DevOps & Code Completion: +103% (over 12 weeks)
  • General AI tools: +34%
  • Music Generation: +12%
  • Voice Generation: +8%

On the other hand, some AI areas are shrinking, including Writing and Content Generation (-12%), Customer Support (-11%), and Legal AI (-70%).

Beyond Search: Other Affected Industries

AI’s impact goes beyond just search engines. Other digital sectors facing big changes include:

  • EdTech: -28% year-over-year (with Chegg down 66% and CourseHero down 69%)
  • Website Builders: -13% year-over-year
  • Freelance Platforms: -19% year-over-year

Design platforms are still growing at +10% year over year, suggesting that AI might be helping rather than replacing these services.

What This Means

Traditional SEO still matters, but it isn’t enough. As traditional search traffic drops, you need to branch out.

Similarweb’s data shows consistent negative growth for traditional search engines alongside ChatGPT’s dominant market position, indicating a significant shift in information discovery patterns.

The takeaway for search marketers is to adapt to AI-driven search while keeping up with practices that work in traditional search. This balanced approach will be key to success in 2025 and beyond.


Featured Image: Fajri Mulia Hidayat/Shutterstock

WordPress WooCommerce Bug Causing Sites To Crash via @sejournal, @martinibuster

A WooCommerce bug is causing WordPress ecommerce sites to crash with a fatal error. The problem originates from a single line of code. A workaround has been created, and the WooCommerce team is aware of the issue and is working on a permanent fix in the form of a patch.

WooCommerce Sites Crashing

Someone posted about the error in the WordPress.org support forums, and others replied that they were experiencing the same problem. Most of those responding reported that they had not recently changed anything on their sites and that the sites crashed all of a sudden.

The person who initially reported the bug offered a workaround for getting websites back up and running: editing a single line of code in BlockPatterns.php, a WooCommerce file.

The file is located here:

wp-content/plugins/woocommerce/src/Blocks/BlockPatterns.php

Others reported receiving the same fatal error message:

“Uncaught Error: strpos(): Argument #1 ($haystack) must be of type string, null given in /var/www/site/data/www/site.com.br/wp-content/plugins/woocommerce/src/Blocks/BlockPatterns.php on line 251”
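
The quoted error shows strpos() being called with null instead of a string for its $haystack argument. The snippet below is a minimal illustration of the kind of one-line null guard commenters applied around line 251, assuming the failing call inspects a pattern value that can come back empty from the cache; it is not the official patch, and the array shape and variable names are hypothetical.

<?php
// Illustrative only, not the official WooCommerce patch.
// strpos() fatals here because its $haystack argument is null, which is what
// happens when the cached pattern data holds a bad value. Guarding the value
// before calling strpos() avoids the crash. Array shape and variable names
// below are hypothetical.

$pattern = array(
    'title'  => 'Example pattern',
    'source' => null, // a bad cache value like this triggers the fatal error
);

$source = $pattern['source'] ?? '';

// Safe check: only inspect the value when it is a non-empty string.
if ( is_string( $source ) && '' !== $source && false !== strpos( $source, 'woocommerce' ) ) {
    // ... the original logic that inspects the pattern source would run here ...
}

Edits like this are only a stopgap to get a site back online; the permanent fix is the patch release the WooCommerce team describes below.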

One of the commenters on the discussion posted:

“Same issue here.

It occurred in version 9.8.2, and upgrading to 9.8.3 didn’t resolve it. Downgrading to 9.7.1 didn’t help either.

The problem happened without any interaction with plugins or recent updates. Replacing the code at line 251 worked as a temporary workaround.

We’ll need to find a more stable solution until the WooCommerce team releases an official patch.”

Others reported that they received the error after updating their plugins but that rolling back the update didn’t solve the problem, while others reported that they hadn’t done anything prior to experiencing the crash.

Someone from WooCommerce support responded to say that the WooCommerce team is aware of the problem and is working to address it:

“Thank you for reporting this. It’s a known issue, and a temporary workaround has been shared here: https://github.com/woocommerce/woocommerce/issues/57760#issuecomment-2854510504

You can track progress and updates on the GitHub thread: https://github.com/woocommerce/woocommerce/issues/57760, as the team is aware and actively addressing it.”

Discussion On GitHub

The official WooCommerce GitHub repository has this note:

“Some sites might see a fatal error around class BlockPatterns.php, with the website not loading. This was due a bad response from Woo pattern repository. A fix was deployed to the repository but certain sites might still have a bad cache value.”

They also wrote:

“The issue has been fixed from the cache source side but certain sites were left with a bad cache value, we will be releasing patch updates to fix that.”

Featured Image by Shutterstock/Kues

It’s Official: Google Launches AI Max for Search Campaigns via @sejournal, @brookeosmundson

Google Ads has announced a major update to Search campaigns. The new AI Max campaign setting will roll out globally in beta starting later this month.

Per Google’s announcement, advertisers who enable AI Max in their Search campaigns can expect stronger performance through improved query matching, dynamic creative, and better control features.

According to Google, early testing shows advertisers see an average of 14% more conversions or conversion value at a similar CPA or ROAS. Campaigns still using mostly exact or phrase match keywords see even greater uplifts, around 27%.

This update follows months of closed beta testing with large brands already reporting positive results.

Let’s take a deeper look at what AI Max brings and why it matters to paid search marketers.

What is AI Max for Search Campaigns?

If you’ve been hearing the term “Search Max” in the wild lately, the official name for it is AI Max for Search.

AI Max is not a new campaign type. Instead, it’s a one-click upgrade available within existing Search campaign settings.

Once activated, it layers in three core enhancements:

  • Search term matching: Uses AI to extend keyword matching into relevant, high-performing queries your current keywords might miss.

  • Text customization: Rebrands the former Automatically Created Assets (ACA) tool. Dynamically generates new headlines and descriptions based on your landing pages, existing ads, and keywords.

  • Final URL expansion: Sends users to the most relevant pages on your site based on query intent.

Advertisers can opt out of text customization or final URL expansion at the campaign level, and opt out of search term matching at the ad group level. However, Google recommends using all three together for maximum performance.

AI Max is designed to complement, not replace, keyword match types. If a user’s search exactly matches a keyword in your campaign, that will always take priority.

Why is Google Introducing AI Max?

Search behavior is changing fast. As Google integrates more AI-powered experiences like AI Overviews and Google Lens into Search, people are using more complex, conversational, and even visual queries.

Advertisers have also voiced concerns about losing transparency and control as campaign automation expands.

AI Max aims to address both.

  • Advertisers keep access to existing Search reports and controls while layering in new targeting and creative tools.
  • More granular reporting is rolling out, including search terms by asset and improved URL parameters for detailed tracking.

Essentially, it’s Google’s answer to increasing demand for flexible automation, but with guardrails in place for marketers.

Are There Controls For Brand Safety?

Google added several controls to address a frequent advertiser concern: automation overreaching into irrelevant or risky placements.

Here’s what’s included with the AI Max for Search rollout:

  • Brand controls: Choose which brands your ads appear alongside (or exclude specific brands).
  • Location of interest controls: Target based on user geo intent at the ad group level (great for multi-location businesses).
  • Creative asset controls: Remove generated assets or block them entirely if they don’t meet brand guidelines.

One note of caution: as of now, AI-generated assets will go live before advertisers have the chance to review them.

Advertisers will need to monitor and react quickly to any compliance issues.

Are There Updates Coming to Reporting?

While AI Max integrates into existing Search reporting, the functionality is bringing new insights:

  • Search terms reporting will now show associated headlines and URLs.
  • Asset reports will measure performance not just by impressions, but by spend and conversions.
  • A new URL parameter will offer deeper visibility into search queries and performance across match types.

These reporting improvements will start in the Google Ads online interface as the feature rolls out.

Support for API, Report Editor, and Desktop Editor access is slated for later in 2025.

How Does AI Max Compare to Performance Max or Dynamic Search Ads?

Many marketers are asking how AI Max fits alongside other Google campaign types.

Here’s how AI Max currently differs from, and overlaps with, other campaign types:

  • Performance Max and AI Max for Search may be eligible for the same Search auctions. However, if a user’s search query exactly matches a keyword in your Search campaign, Search will always take priority.
  • Dynamic Search Ads (DSA) remain available. AI Max is not a direct replacement, though it does overlap in some areas like final URL expansion and keywordless matching.
  • Optimized Targeting for audiences could be seen as a similar concept to AI Max’s query expansion, but applied to audiences rather than keywords.

Additionally, AI Max for Search can be A/B tested against traditional Search setups using drafts and experiments. More customized testing tools are in development.

Who is AI Max Not Ideal For?

While AI Max offers clear benefits worth testing, the new setting may not suit every advertiser or vertical.

If you’re an advertiser or a brand in any of the following scenarios, I’d recommend using caution when testing AI Max for Search.

  • Advertisers with strict creative guidelines or sensitive content policies.
  • Brands needing pinning for ad assets (since final URL expansion does not support pinning).
  • Businesses with websites that change frequently, making automated creative risky or inaccurate.

For industries like legal or healthcare, where lead quality and content compliance are crucial, AI Max may require careful testing before wide adoption.

What This Means for Search Marketers

AI Max represents a significant shift in how Google Search campaigns can scale.

It brings the adaptive reach and creative flexibility of Performance Max without requiring a new campaign type or sacrificing keyword control.

For advertisers already embracing broad match and automated bidding, AI Max may feel like a natural progression.

For those still relying on exact and phrase match keywords, it offers an opportunity to expand cautiously while maintaining key controls.

The rollout also signals Google’s direction: automation will continue to evolve, but advertiser input and oversight remain essential.

Marketers who test AI Max thoughtfully by balancing automation with strategy are likely to gain a competitive edge as search behavior grows more complex.

Google’s Walled Garden: Users Make 10 Clicks Before Leaving via @sejournal, @MattGSouthern

New data shows Google keeps users on its site longer. Visitors now make 10 clicks on Google’s site before leaving for another website.

This finding comes from a 13-month study comparing Google and ChatGPT traffic patterns.

Google Keeps Users In, ChatGPT Sends Them Out

Tyler Einberger of Momentic analyzed Similarweb data showing that Google’s “pages per visit” metric has climbed to 10 as of March, a big jump from before.

Image Credit: Momentic.

What does this mean? Users are spending more of their clicks within Google’s search results before moving on to other websites.

The report explains:

“Increasing ‘Pages per Visit’ for Google.com is an indicator that users are spending more clicks within Google’s search results (SERPs). Since most SERP interactions—like interacting with SERP features, paging, refining searches, or clicking images—change the URL but keep visitors on Google’s domain.”

Google still sends the most overall traffic to external websites.

Google generated 175.5 million outgoing visits in March compared to ChatGPT’s 57.6 million. This represents a 66.4% increase for Google compared to last year.

The Efficiency Gap

ChatGPT is more efficient at sending people to other websites.

The numbers tell the story:

  • ChatGPT generates 1.4 external website visits per user
  • Google produces just 0.6 visits per user

This means ChatGPT users are 2.3 times more likely to visit external websites than Google users, even though Google’s audience is about 6.8 times larger.

The SERP Retention Strategy

Google’s increasing in-platform clicks match its strategy of expanding search features. These features provide immediate answers without requiring users to visit other websites.

Google is succeeding at two goals:

  1. Remaining the web’s primary traffic source
  2. Keeping users on Google’s properties longer

While Google sent more outgoing traffic in early 2025, its audience barely grew. This shows a complex relationship between keeping users and referring them elsewhere.

What This Means

For SEO pros and marketers, this trend creates new challenges and opportunities:

  • With users spending more time on Google’s interfaces, capturing attention in the first screen view matters more than ever.
  • Focus on appearing in featured snippets, knowledge panels, and other SERP elements to maintain visibility as traditional organic clicks become harder to get.
  • Consider ChatGPT and other AI platforms as additional traffic sources since they refer more visitors per user.
  • Users now interact with multiple SERP features before clicking a website, requiring better attribution models and content strategies.

The Broader AI Search Market

While Google and ChatGPT lead the conversation, other AI search platforms are growing fast.

Perplexity grew 110.7% month-over-month in March. Grok grew 48.1% and Claude grew 23%.

These newer platforms could change current traffic patterns as they gain users, though the report doesn’t analyze their referral efficiency in detail.

Google remains the biggest traffic source overall. However, its growing “walled garden” approach means marketers should watch these trends and diversify where their traffic comes from.


Featured Image: Here Now/Shutterstock

Google’s Updated Raters Guidelines Target Fake EEAT Content via @sejournal, @martinibuster

A major update to Google’s Search Quality Raters Guidelines (QRG) clarifies and expands on multiple forms of deception that Google wants its quality raters to identify. This change continues the trend of refining the guidelines so that quality raters become better at spotting increasingly granular forms of quality issues.

TL;DR

Authenticity should be the core principle for any SEO and content strategy.

Quality Guidelines Section 4.5.3

Section 4.5.3 has essentially been rewritten to be clearer and easier to understand, but, most importantly, it has been expanded to cover more kinds of deception. One can speculate that quality raters were overlooking certain kinds of website deception and that these changes address that shortcoming. This could also signal that Google’s algorithms may in the near future become more adept at spotting the described kinds of deception.

The change in the heading of section 4.5.3 reflects the scope of the changes, with greater detail than the original version.

The section title changed from this:

“4.5.3 Deceptive Page Purpose and Deceptive MC Design”

To this:

“4.5.3 Deceptive Page Purpose, Deceptive Information about the Website, Deceptive Design”

The entire section was lightly rewritten and reorganized for greater clarity. It’s not necessarily a new policy but rather a more detailed and nuanced version of it, with a few parts that are brand new.

Deceptive Purpose

The following is a new paragraph about deceptive purpose:

“Deceptive purpose:

● A webpage with deliberately inaccurate information to promote products in order to make money from clicks on monetized links. Examples include a product recommendation page on a website falsely impersonating a celebrity blog, or a product recommendation based on a false claim of personal, independent testing when no such testing was conducted.”

Google very likely has algorithmic signals and processes to detect and remove sites with these kinds of deceptive content. While one wouldn’t expect a little faking to be enough to cause a sudden drop in rankings, why take the chance? Focusing on authenticity is always the safest approach.

To be clear, the focus of this section isn’t just about putting fake information on a website but rather it’s about deceptive purpose. The opposite of a deceptive purpose is a purpose rooted in authenticity, with authentic intent.

Deceptive EEAT Content

There is now a brand new section about fake EEAT (Experience, Expertise, Authoritativeness, and Trustworthiness) content on a website. A lot of SEOs talk about adding EEAT to their web pages, but EEAT is not something one adds to a website. EEAT is a quality inherent in the overall experience of researching a site, learning about it, and consuming its content, an experience that can generate signals from site visitors about the website.

Here’s the guidance about fake EEAT content:

“● A webpage or website with deceptive business information. For example, a website may claim to have a physical “brick and mortar” store but in fact only exists online. While there is nothing wrong with being an online business, claiming to have a physical “brick and mortar” (e.g. fake photo, fake physical store address) is deceptive.

● A webpage or website with “fake” owner or content creator profiles. For example, AI generated content with made up “author” profiles (AI generated images or deceptive creator descriptions) in order to make it appear that the content is written by people.

● Factually inaccurate and deceptive information about creator expertise. For example, an author or creator profile inaccurately claims to have credentials or expertise (e.g. the content creator claims falsely to be a medical professional) to make the content appear more trustworthy than it is.”

Deceptive Content, Buttons, And Links

The new quality raters guidelines also go after sites that use deceptive practices to get users to take actions they didn’t intend. This is an extreme level of deception that shouldn’t be a concern for any normal site.

The following are additions to the section about deceptive design:

“● Pages with deceptively designed buttons or links . For example, buttons or links on pop ups, interstitials or on the page are designed to look like they do one thing (such as close a pop up) but in fact have a different result which most people would not expect, e.g. download an app.

● Pages with a misleading title or a title that has nothing to do with the content on the page. People who come to the page expecting content related to the title will feel tricked or deceived.”

Takeaways

There are three important takeaways from the updates to section 4.5.3 of Google’s Search Quality Raters Guidelines:

1. Expanded Definition Of Deceptive Purpose

  • Section 4.5.3 now explicitly includes new examples of deceptive page intent, such as fake endorsements or falsified product testing.
  • The revision emphasizes that deceptive purpose goes beyond misinformation—it includes misleading motivations behind the content.

2. Focus On Deceptive EEAT Content

A new subsection addresses deceptive representations of EEAT, including:

  • Fake business details (e.g., pretending to have a physical store).
  • Made-up author profiles or AI-generated personas.
  • False claims of creator expertise, such as unearned professional credentials.

3. Deceptive Design and UI Practices

The raters guidelines call attention to manipulative interface elements, such as:

  • Buttons that pretend to close popups but trigger downloads instead.
  • Misleading page titles that don’t match the content.

Google’s January 2025 update to the Search Quality Raters Guidelines significantly expands how raters should identify deceptive web content. The update clarifies deceptive practices involving page purpose, false EEAT (Experience, Expertise, Authoritativeness, Trustworthiness) content, and misleading design elements. The purpose of the update is to help raters better recognize manipulation that could mislead users or inflate rankings, and it may indicate the kinds of low quality that Google is focusing on.

Featured Image by Shutterstock/ArtFamily