ChatGPT Often Retrieves But Rarely Cites Reddit Pages, Data Shows via @sejournal, @MattGSouthern

An Ahrefs analysis of 1.4 million ChatGPT prompts found that pages from a dedicated Reddit source were rarely cited in ChatGPT responses, even though they were often retrieved.

Ahrefs highlights this pattern in a new report.

What The Report Looked At

Ahrefs examined 1.4 million ChatGPT 5.2 prompts, tracking which pages were retrieved and later cited in the final response. About half of the retrieved pages were cited overall.

The citation rate varied by source, with pages from general web searches cited most frequently. In contrast, pages from a dedicated Reddit source, as Ahrefs describes it, were cited only 1.93% of the time. This is the Reddit gap: the Reddit source was often retrieved but rarely appeared as a visible citation.

The Reddit Finding

Of all the pages retrieved but not cited in Ahrefs’ dataset, 67.8% originated from the specific Reddit source Ahrefs identified.

Ahrefs writes that ChatGPT “is using Reddit extensively to understand topics, gauge consensus, and build context—but it almost never gives Reddit the credit.”

One point to clarify is that Reddit pages can still be cited by ChatGPT when they appear in standard web search results. The 1.93% figure refers to what Ahrefs calls a separate Reddit source, distinct from general web searches. In May 2024, OpenAI and Reddit announced a data partnership granting OpenAI access to Reddit’s data.

What Does Help A Page Get Cited

Ahrefs examined how closely page titles and URLs aligned with the specific sub-questions generated by ChatGPT during the search process. To do this, Ahrefs used open-source tools to compute similarity scores, approximating ChatGPT’s internal matching process. Pages with higher scores for matching those sub-questions were cited more frequently in the dataset.

When ChatGPT Search responds to a prompt, it often breaks the prompt down into several narrower queries and searches for pages related to each. In Ahrefs’ data, titles and URLs matching these narrower queries had a stronger correlation with citations than pages that only broadly matched the original prompt. URL structure also played a role. Pages with clear, descriptive URL slugs were cited about 89.78% of the time they appeared in search results, compared to 81.11% for pages with less descriptive URLs. This aligns with SE Ranking’s analysis, which found that ChatGPT tends to favor URLs describing broader topics over those focused on a single keyword.
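
To make the similarity method concrete, here is a minimal sketch of that kind of scoring using the open-source sentence-transformers library. The model, sub-queries, title, and slug below are illustrative assumptions; Ahrefs hasn't published its exact setup.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; Ahrefs hasn't disclosed which open-source tools it used.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical fan-out sub-queries ChatGPT Search might generate from one prompt.
sub_queries = [
    "best running shoes for flat feet",
    "arch support in running shoes",
]
title = "10 Best Running Shoes for Flat Feet, Tested by Podiatrists"
slug = "best-running-shoes-flat-feet".replace("-", " ")  # normalize slug to words

query_embs = model.encode(sub_queries, convert_to_tensor=True)
for label, text in [("title", title), ("slug", slug)]:
    emb = model.encode(text, convert_to_tensor=True)
    scores = util.cos_sim(emb, query_embs)[0]  # cosine similarity per sub-query
    for q, s in zip(sub_queries, scores):
        print(f"{label}: {float(s):.2f} vs {q!r}")
```

Higher scores against the narrower sub-queries, not just the original prompt, are what correlated with citations in Ahrefs' data.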

Why This Matters

Ahrefs' data indicates that Reddit's impact on answer development differs from what businesses might anticipate. Reddit appears to shape answers indirectly without being explicitly cited. That influence still matters, but it's an upstream effect rather than direct citation credit.

For clear citation credit, Ahrefs’ data shows the best indicator is whether your page titles and URLs align with the specific sub-queries that ChatGPT Search produces from a prompt. Simply matching the broad keyword doesn’t suffice.

Looking Ahead

The study evaluated ChatGPT 5.2 on desktop in February 2025. Since then, OpenAI has launched several model updates, such as the GPT-5.3 Instant transition, which Resoneo links to a 20% decrease in the number of cited domains per ChatGPT response. It's uncertain whether the Reddit gap and title-matching patterns observed by Ahrefs still apply to these newer models.


Featured Image: Koshiro K/Shutterstock

Search Ad Growth Slows As Social & Video Gain Faster via @sejournal, @MattGSouthern

Search advertising is one of the largest digital ad categories, but its growth is slowing as social media and video post faster gains, according to IAB’s annual report, conducted by PwC.

What The Data Shows

In 2025, digital advertising revenue reached $294 billion, reflecting a 13% increase from the previous year. The report uses self-reported revenue data from companies selling advertising online. PwC says it does not audit the information or provide assurance.

Search advertising, including AI search, generated $114 billion, making it one of the largest segments in the report, though IAB’s category definitions overlap.

Search revenue grew 11% year-over-year, slower than the 15% growth in 2024. Social media grew faster, with ad revenue totaling $117 billion, a 32% ($29 billion) increase. The IAB attributed this to the creator economy, deeper commerce integration, and improved targeting and measurement.

Digital video grew 25% to $78 billion, up from 19% growth the previous year, a sign that video is attracting a growing share of ad spending. Commerce media hit $63 billion, up 18%, while programmatic advertising increased 20% to $162 billion.

In its 2026 outlook, IAB said creator advertising reached $37 billion in 2025, with projections of $44 billion in 2026, noting a move from campaign-based influencer marketing to continuous creator programs.

A note on the data: categories like social, search, video, display, and commerce media overlap in the $294 billion total, so a single ad, such as a social video ad, could be counted in multiple categories.

Why This Matters

The slowdown in search growth warrants attention alongside other recent indicators. Google’s Q4 2025 earnings reported a 17% increase in Search revenue, but this reflects just a single quarter for one company.

In contrast, the IAB data covers the entire year across a broad industry dataset, with growth rates falling from 15% to 11%, indicating the overall category is expanding more slowly than the competing channels vying for the same budgets. This doesn’t imply search is shrinking; it still generated $114 billion in revenue, even though social and video ads grew at a faster pace. Commerce media, at $63 billion, now accounts for over 20% of total digital ad revenue.

Looking Ahead

IAB will host a webinar on April 21 at 1 p.m. ET with experts from IAB, PwC, and Madison & Wall to discuss the findings.

Google’s Patent On Autonomous Search Results via @sejournal, @martinibuster

The United States Patent Office recently published a continuation of a Google patent for a search system that detects when there is no satisfactory answer for a query and then automatically delivers the answer later, once one becomes available.

Search And AI Assistant

The patent, published in February 2026, is a continuation of an older patent, with the main change being that the invention is now applied within the context of an AI assistant. The invention addresses the problem of answering a question when no actual answer is available at the time a user makes the query. The system waits until there's a satisfactory answer, at which point it circles back to the user with the answer, without them having to ask again.

The patent is titled, Autonomously providing search results post-facto, including in assistant context. Although the patent mentions quality thresholds, those thresholds are defined in terms of whether the answer meets the user's needs.

The patent describes six scenarios that would trigger the invention:

  1. When no search results meet defined quality or authoritative-answer criteria.
  2. When results exist but fail to provide a definitive or authoritative answer that satisfies those criteria.
  3. When no results meet quality criteria because the information is not yet available.
  4. When a query seeks a specific answer and no result satisfies the required criteria.
  5. When a resource later satisfies the defined criteria after previously lacking required information.
  6. When a previously available resource is refined or updated so that it now meets the criteria.

Useful And Complete Answers

Google's patent says that the invention is a solution for times when there are no useful or complete answers, because the information does not yet exist or is not good enough, forcing users to keep searching repeatedly.

The system checks whether results meet:

  • A quality standard
  • An authoritativeness standard
  • A completeness standard

If the current answers don't meet those standards, the system stores the query and monitors for new or updated information. Once a satisfactory answer becomes available, it sends the results to the user without them having to search again.
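
Here is a minimal sketch of that store-and-monitor pattern. The numeric threshold, data structures, and function names are illustrative assumptions; the patent describes the behavior, not an implementation.

```python
import random
from dataclasses import dataclass

# Hypothetical numeric threshold; the patent defines quality in terms of
# whether an answer satisfies the user's need, not as a score.
QUALITY_THRESHOLD = 0.8

@dataclass
class PendingQuery:
    user_id: str
    query: str

pending: list[PendingQuery] = []

def search(query: str) -> tuple[str, float]:
    """Stand-in for a search backend: returns (best answer, quality score).
    Randomness simulates information improving over time."""
    return (f"answer for {query!r}", random.random())

def deliver(user_id: str, answer: str) -> None:
    # In the patent, delivery can be a push notification or output surfaced
    # in a later, even unrelated, assistant conversation.
    print(f"to {user_id}: {answer}")

def handle_query(user_id: str, query: str) -> None:
    answer, quality = search(query)
    if quality >= QUALITY_THRESHOLD:
        deliver(user_id, answer)
    else:
        # No satisfactory answer yet: store the query and keep monitoring.
        pending.append(PendingQuery(user_id, query))

def monitor_once() -> None:
    """Run periodically: re-check stored queries, deliver when satisfied."""
    for pq in list(pending):
        answer, quality = search(pq.query)
        if quality >= QUALITY_THRESHOLD:
            deliver(pq.user_id, answer)  # proactive; no follow-up question needed
            pending.remove(pq)
```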

Follow-Up Questions Are Not Necessary

What is novel about the invention is that it enables follow-up delivery of results after the original query without requiring a new follow-up question. It also surfaces search results proactively in notifications or assistant conversations.

At a later time, when new or updated information becomes available that satisfies the criteria, the system proactively delivers that information to the user. This delivery can occur through notifications, within an unrelated interaction, or during a later conversation with an automated assistant.

The system may also optionally notify the user that no good results are currently available and ask if they want to be informed when better results appear.

This transforms search from a one-time, user-initiated action into a persistent, ongoing process in which the system continues working in the background and updates the user when meaningful information becomes available.

Cross-Device Continuity

An interesting feature of this invention is that it can reach out to the user across multiple devices.

Here is where it’s outlined:

“[0012] In some implementations, the query is received on an additional computing device that is in addition to the computing device for which the content is provided for presentation to the user.”

This capability is highlighted again in section [0067]:

“For example, the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device.”

The output can be visual and/or audible, delivered across devices or through an automated assistant, and the information can be presented while the user is interacting with the assistant in a different context, within what the patent describes as an “ecosystem” of devices.

Lastly, the patent explains that the information can be surfaced when the user is interfacing with the automated assistant in a completely different context:

“[0040] …the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device. The content may be provided for presentation in various forms. For example, the content may be provided as a visual and/or audible push notification on a mobile computing device of the user, and may be surfaced independent of the user again submitting the query and/or another query.

Also, for example, the content may be presented as visual and/or audible output of an automated assistant during a dialog session between the user and the automated assistant, where the dialog session is unrelated to the query and/or another query seeking similar information.”

Takeaways

The patent (Autonomously providing search results post-facto, including in assistant context) is in line with Google's vision of task-based agentic search, where AI assistants help users accomplish things. This patent could be applied to an AI agent that is asked for tickets to an event when the tickets aren't yet available, or to making restaurant reservations when the dates open up. Both of those scenarios are related to task-based agentic search (TBAS).

Here are seven takeaways:

  1. The system stores data associated with the user about unresolved queries, allowing it to track unanswered information needs over time rather than treating each search as a one-off event.
  2. It delivers results within future interactions, including unrelated assistant conversations, not just through standalone notifications.
  3. The notifications can happen across an ecosystem of devices.
  4. A lack of results is defined by failing to meet quality criteria, which can mean the information is absent, the answer is not yet available, or the answer is not available from authoritative sources.
  5. The system focuses on queries that seek specific answers, rather than general informational searches.
  6. It supports cross-device continuity, enabling a query on one device to be fulfilled later on another.
  7. The design reduces repeated searches by eliminating the need for users to check back; the system autonomously circles back when the information is available.

Featured Image by Shutterstock/uyabdami

Google Is Replacing Dynamic Search Ads With AI Max via @sejournal, @brookeosmundson

Google just announced the deprecation of Dynamic Search Ads (DSA) and is officially moving its legacy capabilities into AI Max.

Starting in September, eligible campaigns using Dynamic Search Ads (DSA), automatically created assets (ACA), and campaign-level broad match settings will automatically upgrade to AI Max.

While advertisers have speculated about this change for months, the update is now official.

If you’re running Dynamic Search Ads, automatically created assets (ACA), and/or campaign-level broad match settings, keep reading to understand how your campaigns will be affected.

DSA Features Migrating Into AI Max

Beginning in September, advertisers will no longer be able to create new DSA campaigns through Google Ads, Google Ads Editor, or the Google Ads API. Existing eligible campaigns will be migrated automatically.

Google positions AI Max as the next generation of DSA.

Historically, DSA helped advertisers capture additional search demand beyond their keyword lists by using website content to generate headlines and choose landing pages. That made it useful for large sites, inventory-heavy businesses, and advertisers looking for broader query coverage.

AI Max keeps that concept but adds more signals and controls.

According to Google, AI Max combines advertiser assets, landing page content, and broader intent signals to help match ads to more relevant queries. It also adds controls such as:

  • Brand controls
  • Location controls
  • Text guidelines
  • Search term matching
  • Text customization
  • Final URL expansion
Image credit: Google, April 2026

Google says campaigns using the full AI Max feature suite see an average of 7% more conversions or conversion value at a similar CPA or ROAS compared with using search term matching alone.

Google is also splitting the transition into two phases.

Phase 1: Voluntary Upgrades

Google announced that upgrade tools for existing DSA users are rolling out this week.

DSA advertisers will receive tools to move historical settings and data into new standard ad groups. ACA and campaign-level broad match users may see in-platform prompts to upgrade to AI Max.

Phase 2: Automatic Upgrades

Starting in September, remaining eligible campaigns with legacy settings will be upgraded automatically.

Google says all eligible upgrades are expected to finish by the end of September.

It’s important to note how legacy settings will be automatically migrated over to AI Max settings:

  • DSA users will have all three AI Max features enabled by default (search term matching, text customization, final URL expansion)
  • ACA users will have two AI Max features enabled by default (search term matching and text customization)
  • Campaign-level broad match users will have just search term matching enabled by default

What Advertisers Can Do To Prepare For The AI Max Transition

If you still rely on Dynamic Search Ads, now is the time to review where those campaigns sit in your account and how much value they drive.

Some advertisers use DSA as a core growth lever. Others use it as a low-maintenance catch-all for incremental growth. Your next steps may differ depending on that role.

#1. Review Your DSA Performance Now

Before the automatic upgrades begin, pull recent performance data for your DSA campaigns.

Look at conversions, assisted conversions, search terms, landing pages, and efficiency metrics. That baseline will help you judge whether performance changes after migration are positive, neutral, or negative.
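
One lightweight way to capture that baseline is to snapshot a pre-migration export now and compare it to a post-migration export later. This sketch assumes CSV exports from the Google Ads UI; the file and column names are illustrative.

```python
import pandas as pd

# Hypothetical file and column names; adjust to match your actual exports.
before = pd.read_csv("dsa_before_migration.csv")
after = pd.read_csv("aimax_after_migration.csv")

metrics = ["Clicks", "Conversions", "Cost"]
summary = pd.DataFrame({
    "before": before[metrics].sum(),
    "after": after[metrics].sum(),
})
summary["change_%"] = (summary["after"] / summary["before"] - 1) * 100
print(summary.round(2))
```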

#2. Upgrade On Your Timeline Before Automatic Upgrades

Google is encouraging advertisers to move early, and there is a practical reason for that.

A voluntary upgrade gives you more control over settings, structure, and testing than waiting for an automatic migration.

If DSA is important to your business, it makes sense to evaluate the upgrade before September.

#3. Test AI Max Impact

Google recommends using one-click experiments because they give advertisers a cleaner way to compare performance before making a full rollout decision. While I haven’t tried this yet, I will be testing it myself in the coming months.

Even if AI Max improves results on average, averages do not guarantee results in every account. Lead generation, e-commerce, local services, and B2B advertisers may all see different outcomes.

Run controlled tests where possible and compare against your existing baseline.

#4. Lean Into Additional Controls

Many advertisers asked for more steering options in search automation, and Google has listened to our feedback. AI Max includes more controls than legacy DSA.

Spend time understanding brand settings, location controls, and text guidance. Those inputs may matter as much as the automation itself.

#5. Watch Search Match and Landing Page Quality

Once you've migrated your DSAs to AI Max, watch closely which search terms your campaigns now match. How do they compare to past DSA performance?

You'll also want to pay attention to the landing pages used (if final URL expansion is turned on), lead quality, and conversion paths.

Looking Ahead

Dynamic Search Ads have helped advertisers scale beyond their current keyword lists for years. Now, Google is folding that capability into its broader AI Max framework.

The clearest next step is to review where DSA is still active in your account and decide whether to migrate on your own timeline or wait for the automatic upgrade.

The real focus should be protecting performance during the transition and understanding where AI Max improves results, or where it needs tighter management control.

Google Just Made It Easy For SEOs To Kick Out Spammy Sites via @sejournal, @martinibuster

Google updated their report spam documentation to make it clear that they may use reported spam to initiate manual actions against websites that are found to be spamming. This is a change in policy that makes it easier for site owners and SEOs to report actual spam.

Change In Spam Report Policy

The spam report documentation previously said that Google would not use the reports to take action against websites.

This wording was mostly removed:

“While Google does not use these reports to take direct action against violations, these reports still play a significant role in helping us understand how to improve our spam detection systems that protect our search results.”

That passage was narrowed to say that submitted spam reports help improve Google's spam detection systems:

“These reports help us understand how to improve the spam detection systems that protect our search results.”

More Aggressive Approach To Spam

Google also added new wording to make it clear that it may use the spam reports to take manual actions against websites. Google used to describe manual actions in terms of penalization, but the word “penalization” carries connotations of punishment, which isn't what Google is doing when it removes a site from the index. It's not a punishment, just a removal from the index.

Google's new wording makes it clear that taking manual action against reported sites is now an option:

“Google may use your report to take manual action against violations. If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

Everything else about the page is the same, including the button for filing a spam report.

Screenshot: Spam Report Button

Clicking the “Report spam” button leads to a form whose submissions can now result in a manual action:

Screenshot: Spam Report Form

Is This Good News For SEOs?

Site owners and SEOs who are sick of seeing spammy sites dominate the search results may want to check out the new page and start reporting actual spammy websites. Nobody really enjoys spam, and now there's something users can do about it.

Featured Image by Shutterstock/NLshop

New Google Search Console Message Glitch Gives SEOs A Scare via @sejournal, @martinibuster

Google Search Console erroneously sent out emails to site owners advising them that Google had just started to record impressions beginning on April 12. The implication of the message is that Search Console had not previously been collecting those impressions, which is incorrect.

Search Console Impressions

The Search Console impressions report shows how often a site appeared in Google's search results, regardless of whether or not users clicked. The impression count by itself is not the metric to pay attention to; the meaningful metrics are the associated queries and their positions in the search results. This enables an SEO to identify high-value keyword performance and make better decisions about addressing performance shortcomings.

The report breaks impressions down by the following dimensions (a scripted example of pulling this data follows the list):

1. Queries (What people searched)

2. Pages (Which URLs showed up)

3. Countries (Where searchers were located geographically)

4. Devices (Desktop, Mobile, and Tablet)

5. Search Appearance (whether the impressions came from Rich Results, Videos, Web Light, or Merchant Listings)
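
For monitoring at scale, the same data is available programmatically through the Search Console API. A minimal sketch, where the site URL, date range, and credentials file are placeholders:

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file and site URL; substitute your own.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2026-03-01",  # illustrative date range
        "endDate": "2026-04-12",
        "dimensions": ["query", "page"],
        "rowLimit": 100,
    },
).execute()

# Each row carries clicks, impressions, CTR, and average position.
for row in response.get("rows", []):
    query, page = row["keys"]
    print(f"{row['impressions']:>6} impressions  pos {row['position']:.1f}  {query}  {page}")
```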

Actual Search Console Reporting Errors

Google sent the following message to Search Console users:

“Google systems confirm that on April 12, 2026 we started collecting Google Search impressions for your website in Search Console. This means that pages from your website are now appearing in Google search results for some queries. Here’s how you can monitor your site’s Search performance using Search Console.”

This is an interesting message because it comes after it was disclosed that Google had been incorrectly reporting impressions since May 13, 2025. A note in a Google Support page from April 3 explained:
https://support.google.com/webmasters/answer/6211453#performance-reports-search-results-discover-google-news&zippy=%2Cperformance-reports-search-results-discover-google-news

“A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”

Is today’s erroneous note related to any fixes made to the impressions report? Google’s John Mueller described it as just a glitch.

Mueller posted remarks on Bluesky about the message in response to a query about it:

“Sorry – this is just a normal glitch, unrelated to anything else.”

Still, it's curious: the impression-reporting error and this erroneous message arrived close together. Are they related, or is it just a glitch?

Google Chrome Skills Turn Gemini Prompts Into Reusable Workflows via @sejournal, @MattGSouthern

Google announced Skills in Chrome, a new Gemini in Chrome feature that lets you save prompts and rerun them as one-click tools across selected pages and tabs.

What’s New

Skills turn a prompt you’ve already written into a saved tool you can trigger again later. After running a prompt in Gemini’s Chrome side panel, you can save it as a Skill from your chat history. The next time you need it, type a forward slash or click the plus sign in Gemini in Chrome, select the Skill, and it runs on whatever page you’re viewing.

The feature also works across tabs. You can select additional open tabs when running a Skill, which means a single saved prompt can pull information from multiple pages at once.

Google is launching a library of prebuilt Skills that includes workflows for breaking down product ingredients, comparing specs across tabs, and cross-referencing a gift budget with a recipient’s interests. You can add any library Skill to your saved collection and edit the underlying prompt to customize it.

Why This Matters

This update changes how Chrome’s AI features work together. Over the past year, Google has added page-aware prompts and multi-tab context, connected apps like Gmail and Calendar, and auto-browse for multi-step tasks. Skills add reusability to those capabilities.

A saved prompt that reads a page, compares it against two other open tabs, and drafts a summary email through a connected app is closer to a lightweight automated workflow than a chatbot conversation.

How It Helps

For SEO and marketing work, the multi-tab capability creates several possibilities. You could save a Skill that compares competitor pages against yours, or one that extracts structured data from product pages you’re auditing. A repeatable prompt that checks title tags, meta descriptions, and heading structure across client sites would save time during routine audits.
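
For comparison, here is what a scripted version of that audit might look like. This is not how Skills work (a Skill is a saved natural-language prompt); it's a hypothetical Python equivalent, and the URLs are placeholders.

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

urls = ["https://www.example.com/", "https://www.example.com/pricing/"]  # placeholders

for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else "(missing)"
    meta = soup.find("meta", attrs={"name": "description"})
    desc = meta["content"].strip() if meta and meta.has_attr("content") else "(missing)"
    headings = [h.name for h in soup.find_all(["h1", "h2", "h3"])]
    print(f"{url}\n  title: {title}\n  description: {desc}\n  headings: {headings}")
```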

The launch categories focus on shopping, productivity, and wellness rather than developer or enterprise tools. That suggests Skills are intended more as a consumer productivity feature than a power-user API.

Looking Ahead

Skills is the latest in a series of Chrome updates that have upgraded the browser’s AI capabilities.

Taken together, they point to Chrome becoming a more persistent AI assistant rather than a one-off side panel.


Featured Image: Google, 2026. 

Google Lists 9 Scenarios That Explain How It Picks Canonical URLs via @sejournal, @martinibuster

Google’s John Mueller answered a question on Reddit about why Google picks one web page over another when multiple pages have duplicate content, also explaining why Google sometimes appears to pick the wrong URL as the canonical.

Canonical URLs

The word canonical was historically used mostly in the religious sense, to describe which writings or beliefs were recognized as authoritative. In the SEO community, the word refers to which URL is the true web page when multiple web pages share the same or similar content.

Google enables site owners and SEOs to hint at which URL is the canonical with an HTML attribute called rel=canonical. SEOs often refer to rel=canonical as an HTML element, but it's not. Rel=canonical is an attribute of the link element. An HTML element is a building block for a web page; an attribute is markup that modifies the element.
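
In markup, the canonical hint is a link element in the page's head whose rel attribute is set to canonical (the URL here is illustrative):

```html
<link rel="canonical" href="https://www.example.com/red-panda/">
```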

Why Google Picks One URL Over Another

A person on Reddit asked Mueller to provide a deeper dive on the reasons why Google picks one URL over another.

They asked:

“Hey John, can I please ask you to go a little deeper on this? Let’s say I want to understand why Google thinks two pages are duplicate and it chooses one over the other and the reason is not really in plain sight. What can one do to better understand why a page is chosen over another if they cover different topics? Like, IDK, red panda and “regular” panda 🐼. TY!!”

Mueller answered with about nine different reasons why Google chooses one page over another, including the technical reasons why Google appears to get it wrong when, in reality, it's sometimes due to something the site owner or SEO overlooked.

Here are the nine reasons he cited for canonical choices:

  1. Exact duplicate content
    The pages are fully identical, leaving no meaningful signal to distinguish one URL from another.
  2. Substantial duplication in main content
    A large portion of the primary content overlaps across pages, such as the same article appearing in multiple places.
  3. Too little unique main content relative to template content
    The page’s unique content is minimal, so repeated elements like navigation, menus, or layout dominate and make pages appear effectively the same.
  4. URL parameter patterns inferred as duplicates
    When multiple parameterized URLs are known to return the same content, Google may generalize that pattern and treat similar parameter variations as duplicates (a sketch of this grouping follows the list).
  5. Mobile version used for comparison
    Google may evaluate the mobile version instead of the desktop version, which can lead to duplication assessments that differ from what is manually checked.
  6. Googlebot-visible version used for evaluation
    Canonical decisions are based on what Googlebot actually receives, not necessarily what users see.
  7. Serving Googlebot alternate or non-content pages
    If Googlebot is shown bot challenges, pseudo-error pages, or other generic responses, those may match previously seen content and be treated as duplicates.
  8. Failure to render JavaScript content
    When Google cannot render the page, it may rely on the base HTML shell, which can be identical across pages and trigger duplication.
  9. Ambiguity or misclassification in the system
    In some cases, a URL may be treated as duplicate simply because it appears “misplaced” or due to limitations in how the system interprets similarity.
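
To illustrate the parameter-pattern inference in point 4, here is a minimal sketch (not Google's algorithm) that groups URLs by path plus parameter names, using the same example URLs Mueller gives below:

```python
from collections import defaultdict
from urllib.parse import parse_qsl, urlparse

def parameter_signature(url: str) -> tuple[str, tuple[str, ...]]:
    """Group key: the path plus sorted parameter names (values ignored)."""
    parsed = urlparse(url)
    names = tuple(sorted(k for k, _ in parse_qsl(parsed.query)))
    return parsed.path, names

urls = [
    "https://example.com/page?tmp=1234",
    "https://example.com/page?tmp=3458",
    "https://example.com/page?tmp=1234&city=detroit",
    "https://example.com/page?tmp=2123&city=chicago",
]

groups: dict[tuple, list[str]] = defaultdict(list)
for url in urls:
    groups[parameter_signature(url)].append(url)

# If every URL in a group returns the same content, a system could infer
# that new URLs matching the same signature are duplicates too.
for sig, members in groups.items():
    print(sig, "->", members)
```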

Here’s Mueller’s complete answer:

“There is no tool that tells you why something was considered duplicate – over the years people often get a feel for it, but it’s not always obvious. Matt’s video “How does Google handle duplicate content?” is a good starter, even now.

Some of the reasons why things are considered duplicate are (these have all been mentioned in various places – duplicate content about duplicate content if you will :-)): exact duplicate (everything is duplicate), partial match (a large part is duplicate, for example, when you have the same post on two blogs; sometimes there’s also just not a lot of content to go on, for example if you have a giant menu and a tiny blog post), or – this is harder – when the URL looks like it would be duplicate based on the duplicates found elsewhere on the site (for example, if /page?tmp=1234 and /page?tmp=3458 are the same, probably /page?tmp=9339 is too — this can be tricky & end up wrong with multiple parameters, is /page?tmp=1234&city=detroit the same too? how about /page?tmp=2123&city=chicago ?).

Two reasons I’ve seen people get thrown off are: we use the mobile version (people generally check on desktop), and we use the version Googlebot sees (and if you show Googlebot a bot-challenge or some other pseudo-error-page, chances are we’ve seen that before and might consider it a duplicate). Also, we use the rendered version – but this means we need to be able to render your page if it’s using a JS framework for the content (if we can’t render it, we might take the bootstrap HTML page and, chances are it’ll be duplicate).

It happens that these systems aren’t perfect in picking duplicate content, sometimes it’s also just that the alternative URL feels obviously misplaced. Sometimes that settles down over time (as our systems recognize that things are really different), sometimes it doesn’t.

If it’s similar content then users can still find their way to it, so it’s generally not that terrible. It’s pretty rare that we end up escalating a wrong duplicate – over the years the teams have done a fantastic job with these systems; most of the weird ones are unproblematic, often it’s just some weird error page that’s hard to spot.”

Takeaway

Mueller offered a deep dive into the reasons why Google chooses canonicals. He described the process as a fuzzy sorting system built from overlapping signals, with Google comparing content, URL patterns, rendered output, and crawler-visible versions, while borderline classifications (“weird ones”) get a pass because they don't pose a problem.

Featured Image by Shutterstock/Garun .Prdt

New Google Spam Policy Targets Back Button Hijacking via @sejournal, @MattGSouthern

Google added a new section to its spam policies designating “back button hijacking” as an explicit violation under the malicious practices category. Enforcement begins on June 15, giving websites two months to make changes.

Google published a blog post explaining the policy. It also updated the spam policies documentation to list back-button hijacking alongside malware and unwanted software as a malicious practice.

What Is Back Button Hijacking

Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Google’s blog post describes several ways this can happen.

Users might be sent to pages they never visited. They might see unsolicited recommendations or ads. Or they might be unable to navigate back at all.

Google wrote in the blog post:

“When a user clicks the ‘back’ button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation.”

Why Google Is Acting Now

Google said it’s seen an increase in this behavior across the web. The blog post noted that Google has previously warned against inserting deceptive pages into browser history, referencing a 2013 post on the topic, and said the behavior “has always been against” Google Search Essentials.

Google wrote:

“People report feeling manipulated and eventually less willing to visit unfamiliar sites.”

What Enforcement Looks Like

Sites involved in back button hijacking risk manual spam penalties or automated demotions, both of which can lower their visibility in Google Search results.

Google is giving a two-month grace period before enforcement starts on June 15. This follows a similar pattern to the March 2024 spam policy expansion, which also gave sites two months to comply with the new site reputation abuse policy.

Third-Party Code As A Source

Google’s blog post acknowledges that some back-button hijacking may not originate from the site owner’s code.

Google wrote:

“Some instances of back button hijacking may originate from the site’s included libraries or advertising platform.”

Google’s wording indicates sites can be affected even if issues come from third-party libraries or ad platforms, placing responsibility on websites to review what runs on their pages.

How This Fits Into Google’s Spam Policy Framework

The addition falls under Google’s category of malicious practices. That section discusses behaviors causing a gap between user expectations and experiences, including malware distribution and unwanted software installation. Google expanded the existing spam policy category instead of creating a new one.

The March 2026 spam update completed its rollout less than three weeks ago. That update enforced existing policies without adding new ones. Today’s announcement adds new policy language ahead of the June 15 enforcement date.

Why This Matters

Sites using advertising scripts, content recommendation widgets, or third-party engagement tools should audit those integrations before June 15. Any script that manipulates browser history or prevents normal back-button navigation is now a potential spam violation.
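
One quick way to start that audit is to scan bundled scripts for history-manipulation calls. This is a rough heuristic only: these browser APIs have many legitimate uses, so matches are leads to review, not verdicts, and the directory path is a placeholder.

```python
import pathlib
import re

# History-related browser APIs worth reviewing in third-party code.
PATTERN = re.compile(r"history\.(pushState|replaceState)|popstate")

for path in pathlib.Path("dist").rglob("*.js"):  # adjust to your build output
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        if PATTERN.search(line):
            print(f"{path}:{lineno}: {line.strip()[:120]}")
```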

The two-month window is the compliance period. After June 15, Google can take manual or automated action.

Sites that receive a manual action can submit a reconsideration request through Search Console after fixing the issue.

Looking Ahead

Google hasn’t indicated whether enforcement will come through a dedicated spam update or through ongoing SpamBrain and manual review.