Bing AI Dashboard Maps Grounding Queries To Cited Pages via @sejournal, @MattGSouthern
  • Bing Webmaster Tools added a new mapping feature to the AI Performance dashboard.
  • You can now click a grounding query to see which pages are cited for it.
  • Or click a page to see which grounding queries drive its citations.

Bing’s AI Performance dashboard now maps grounding queries to cited pages, letting you connect AI citation data to specific URLs on your site.

Google Responds To Error That Causes Old Branding To Persist In SERPs via @sejournal, @martinibuster

Google’s John Mueller answered a question about Google rewriting title tags to show the old brand of a site that rebranded in 2015. Apparently everything was updated to the new brand name, but Google’s search results stubbornly persist in showing the old branding.

Old Brand Name Shown In Title Tags

The person asking the question on Bluesky related that a company updated its entire site with the new branding, but Google ignores the change in favor of showing the old branding in the search results.

They posted:

“Hey @johnmu.com, curious about Site Name persistence. Treatwell (UK) is still showing as “Wahanda” in results – a rebrand that happened in 2015! Is there a specific “legacy” signal that might override current SiteName structured data for such a long period in one country only? “

Google’s Mueller was puzzled by the situation and didn’t have an answer as to why it was happening. Perhaps it’s one of those rare cases where a bug keeps a part of the index from updating. But he did suggest using the domain name as an alternate site name.

Mueller referred the person to one of Google’s developer pages, “What to do if your preferred site name isn’t selected.”

He responded:

“That’s a bit odd – I’ll pass it on to the team. FWIW what generally works in cases like this is to use the domain name as an alternate site name – developers.google.com/search/docs/… – but it would be nice if that weren’t needed.”

The site itself does not appear to contain on-page instances of the rogue branding, and the old domain correctly 301 redirects to the new domain. However, some footer links contain referral codes that reference the old branding, and the sitemap links to 404 pages that contain the old branding. Those may not be the cause of the branding mismatch in Google’s search results, but it’s good SEO practice to keep your sitemaps tidy and remove outdated links.

These kinds of rare errors are interesting because they provide a sneak peek into a part of Google’s indexing that isn’t normally in view, like a crack in a wall. What insights do you derive from this anomalous situation?

Featured Image by Shutterstock/SsCreativeStudio

Is WordPress Too Complex For Most Sites? via @sejournal, @martinibuster

Joost de Valk, the co-founder of the Yoast SEO plugin, provoked a discussion and some controversy with a recent blog post that posited that the concept of needing a content management system (CMS) to publish a website is increasingly outdated. This insight came to him after migrating his site to a static Astro-based website with the help of AI.

Joost wrote that the reality today is that many businesses and individuals need nothing more complicated than a static website and that a CMS is overkill for those simple needs.

He affirmed that CMSs are vital for building complex websites, but he also made the case that the complexity problem a CMS solves is not representative of the needs of most websites:

“Let me be clear: there are real use cases where a CMS earns its complexity. …These aren’t edge cases. They represent a lot of websites.

But they don’t represent most websites. Most websites are a handful of pages and maybe a blog.”

His article shares eight key observations:

  1. Creating a website was never exclusively a conversation about a CMS.
  2. Yet CMS options are more widespread than ever.
  3. The current trend is moving away from the CMS.
  4. Joost de Valk joined that trend by moving to Astro.
  5. Static HTML websites are as SEO-friendly as CMS-based websites.
  6. Simplicity outperforms complexity for many needs.
  7. Content management systems remain the best choice for complex requirements.
  8. The case for a CMS will become less relevant once users can publish content by chatting with an AI.

Joost explained that last point:

“I built this entire Astro site with AI assistance. The next step, editing content through conversation, is not a big leap. It’s a small one.

…When editing a static site becomes as easy as sending a message, the CMS’s core advantage for the majority of websites disappears.”

For some, it might be difficult to imagine publishing a website without a CMS, and others believe that WordPress SEO plugins provide an advantage over other platforms. But those of us who have been in SEO for a long time know from experience that static HTML sites are generally faster than any CMS-based website.

Before WordPress became viable, I used to spin up static HTML and PHP-based sites from components I hand-coded. Those sites ranked exceptionally well and easily handled DDoS-level traffic. Although I didn’t have to deal with Schema structured data because it hadn’t been invented yet, automating title tags and meta descriptions across a website was relatively trivial. No plugins are necessary to SEO a static HTML website, and this is one of the insights de Valk discovered after transitioning his blog away from WordPress.

He shared:

“I built Yoast SEO, so you’d think this is where a static site falls short. It doesn’t. Everything Yoast SEO does on WordPress, I can do in Astro. XML sitemaps, meta tags, canonical URLs, Open Graph tags, structured data with full JSON-LD schema graphs, auto-generated social share images: it’s all there. In fact, it’s easier to get right on a static site because you control the entire HTML output. There’s no theme or plugin conflict messing with your head tags. No render-blocking resources injected by something you forgot you installed. What you build is what gets served.

The SEO features that a CMS plugin provides aren’t magic. They’re HTML output. And any modern static site generator can produce that same HTML, often cleaner.”
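De Valk’s point that plugin SEO features “aren’t magic… they’re HTML output” is easy to sketch. Below is a hypothetical Python build step that emits the head tags he lists (title, meta description, canonical, Open Graph, JSON-LD). The field names and structure are illustrative only, not Astro’s actual API:

```python
import json

def render_head(page):
    """Render the SEO-relevant <head> tags for one page of a static site.

    Illustrative sketch: a static generator's "SEO plugin" is just a
    function that prints the right HTML, with nothing else injected.
    """
    # Minimal JSON-LD schema graph for the page (hypothetical fields).
    schema = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": page["title"],
        "url": page["url"],
    }
    tags = [
        f'<title>{page["title"]}</title>',
        f'<meta name="description" content="{page["description"]}">',
        f'<link rel="canonical" href="{page["url"]}">',
        f'<meta property="og:title" content="{page["title"]}">',
        f'<script type="application/ld+json">{json.dumps(schema)}</script>',
    ]
    return "\n".join(tags)

# Example page record, as a front-matter dict might provide it.
page = {
    "title": "Healthy Doubt",
    "description": "On skepticism in SEO.",
    "url": "https://example.com/healthy-doubt/",
}
print(render_head(page))
```

Because the function is the entire pipeline, "what you build is what gets served": no theme or plugin can rewrite these tags after the fact.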

It’s true: the web pages Joost’s blog serves today are a fraction of the size they were when published with WordPress. One URL on de Valk’s website that I checked (/healthy-doubt) went from over 1,400 lines of code to only 180. Furthermore, something de Valk didn’t mention is that the Astro-based HTML rendered with only eight minor HTML validation issues. WordPress sites tend to render with scores or even hundreds of invalid HTML issues.

Although Google can crawl and index the code that underlies the average WordPress website, invalid HTML nevertheless runs counter to the most fundamental goal of SEO: to make it easy for search engines to crawl, parse, and understand the content.

Article Provoked Controversy

Many developers pushed back against Joost’s article, but many others agreed with him.

Dipak Gajjar (@dipakcgajjar) tweeted:

“A properly configured WordPress site with object cache and a CDN in front is already near-static in terms of delivery. You just get the CMS on top for free.

Good luck @jdevalk convincing a non-technical client to push markdown files to Git just to publish a blog post. WordPress exists because content management is a real problem. Static tools solve the developer experience, not the client experience.”

@cameronjonesweb asked:

“Hands up who thinks it’s a great idea to make their clients update their website content by committing markdown files to GitHub…”

@andrewhoyer pushed back on Joost’s article:

“Blogs would never have become popular without software. Only a tiny fraction of people can edit HTML and CSS by hand. Just because a few of us can doesn’t make static sites a good option.”

But it wasn’t all verbal tomatoes getting thrown at Joost, there were some roses tossed his way, too.

Alex Schneider (@Aslex) agreed that AI is lowering the barrier to creating and maintaining static websites.

Schneider tweeted:

“Static sites aren’t just for people who know HTML anymore. AI tools already let anyone generate and publish content to static sites with zero coding. And let’s be honest, traditional blogs are dying anyway.”

@LusciousPotate shared their opinion that WordPress is outdated:

“Constant WordPress updates, constant plug-in updates, constant security issues. It’s old, the tech stack is outdated; it needs to be put out to pasture.”

Is WordPress Still Relevant?

Generating a static site with Astro still requires some technical knowledge, and it’s nowhere near as easy as using WordPress to get online. Many hosting platforms simplify the process of creating websites with WordPress, including with the use of AI. WordPress 7.0 looks set to bring some of the most profound changes yet to WordPress, quite likely making it even easier for anyone to publish a website.

So yes, a strong case can be made for the continued relevance of content management systems, especially WordPress. Yet static site generator platforms may become a serious alternative in the near future.

Read de Valk’s blog post here: Do you need a CMS?

Featured Image by Shutterstock/TierneyMJ

Google Tested AI Headlines In Discover. Now It’s Testing Them In Search via @sejournal, @MattGSouthern

When Google started rewriting headlines with AI in Discover last year, it called the test “small.” By the following month, it was reclassified as a feature.

Now the same pattern is showing up in traditional search results.

Google confirmed to The Verge (subscription required) that it’s testing AI-generated headline rewrites in Search. The company described the test as “small and narrow.” It’s similar language to what Google used before reclassifying AI headlines in Discover as a feature.

What’s Happening In Search

Multiple Verge staff members spotted rewritten headlines over the past few months. In one case, “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” appeared in results as “‘Cheat on everything’ AI tool.” Another article was rewritten to “Copilot Changes: Marketing Teams at it Again,” phrasing the article never used.

The test isn’t limited to news sites. Google said it affects other types of websites too.

None of the rewrites included any disclosure that Google had changed the original headline.

Google told The Verge the goal is to “identify content on a page that would be a useful and relevant title to a users’ query.” The company said the test aims at “better matching titles to users’ queries and facilitating engagement with web content.”

Any broader launch may not use generative AI, the company said, but it didn’t explain what the alternative would look like. The test hasn’t been approved for wider rollout.

How Discover’s AI Headlines Became A Feature

We’ve been tracking Google’s treatment of Discover through several changes this year. Here’s how the headline experiment played out.

In December, Google called AI-generated headlines in Discover “a small UI experiment for a subset of Discover users.” By January, Google reclassified the feature. It now “performs well for user satisfaction,” according to Nieman Lab’s reporting.

That’s about a month from test to reclassified feature.

During that period, Google revised its Discover guidelines alongside the February Discover core update and rolled out AI previews that show short AI-generated summaries with links. Each change added another layer of AI-mediated content between publishers and readers in Discover.

The Search test follows the same opening move. Google describes it as small, narrow, and not approved for broader rollout.

How This Differs From Existing Title Rewrites

Title tag rewrites in search results aren’t new. Google has been doing this for years using rule-based systems. An analysis of over 80,000 title tags found Google changed 61% of them. A follow-up study put that number at 76%.

Those existing rewrites pull from elements already on the page. According to Google’s title link documentation, the system draws from title elements, H1 headings, og:title meta tags, anchor text, and other on-page sources.

The new test is different. In the Copilot example, the rewritten headline used phrasing that didn’t exist anywhere in the article. That’s generative AI creating new text.

Why This Matters

An analysis of over 400 publishers found Discover’s share of Google-sourced traffic had climbed from 37% to roughly 68%. For publishers relying so heavily on Discover, AI headline rewrites becoming a feature in Search would mean losing headline control across both of their primary Google traffic sources.

Google’s title link documentation describes inputs Google may use to generate titles but doesn’t include a publisher control for opting out of rewrites. And because Google doesn’t disclose when a headline has been rewritten, you may not know it’s happening to your content unless you check manually.

Sean Hollister, senior editor at The Verge, wrote:

“This is like a bookstore ripping the covers off the books it puts on display and changing their titles.”

Louisa Frahm, SEO director at ESPN, wrote on LinkedIn:

“After 10+ years in news SEO, I’ve come to find that a headline is the most prominent element for attracting readers in timely windows, to provide a targeted synopsis that elevates your brand voice. If that vision gets altered and facts are misrepresented, long-term audience trust will be compromised.”

Looking Ahead

Publishers monitoring their search visibility should check whether their headlines are appearing as written in Google results. There’s no tool for this, so it requires manual spot-checking.
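Since there’s no off-the-shelf tool, one place to start is recording your pages’ as-published titles so you can compare them against what Google displays. A minimal stdlib sketch (a hypothetical helper that assumes you supply the page HTML; collecting the SERP side is left to manual checking):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Pull the <title> and og:title out of a page's HTML so the
    as-published headline can be logged for later comparison against
    what appears in search results."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.og_title = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("property") == "og:title":
            self.og_title = attrs.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Example: parse a page's HTML (here an inline sample for illustration).
html = '<head><title>As Published</title><meta property="og:title" content="As Shared"></head>'
p = TitleExtractor()
p.feed(html)
print(p.title, p.og_title)
```

A log of these values per URL gives you a baseline, so a spot-check only has to ask whether the displayed headline matches either tag.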


Featured Image: elenabsl/Shutterstock

AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

A BuzzStream report analyzing 4 million AI citations found that press releases distributed through syndication channels barely appear in AI-generated answers.

Background

Press release distribution services have been marketing AI visibility as a selling point.

For example, ACCESS Newswire offers an “AI Visibility Checklist” for press releases. eReleases published a guide positioning press releases as tools for AI search visibility. Business Wire has written about optimizing releases for answer engine discovery.

BuzzStream’s data offers a different perspective.

What They Found

The report’s authors used XOFU, a citation monitoring tool from Citation Labs, to track where AI platforms pull their sources across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini. BuzzStream ran 3,600 prompts across 10 industries and collected data for one week.

Overall, news publications accounted for 14% of all citations in the dataset. But within that news category, the numbers drop off quickly for syndicated and distributed content.

Press releases published through syndication channels like Yahoo and MSN accounted for 0.32% of news citations and 0.04% of the entire dataset.

Direct citations from newswire services like PRNewswire made up 0.21% of the full dataset. They appeared most often in exploratory and informational prompts, but even there they only reached 0.37%.

Syndicated news content overall, including articles republished through MSN and Yahoo networks, accounted for 6.2% of news citations and 0.9% of the total dataset.
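The category-level and whole-dataset shares are mutually consistent; a quick arithmetic check, with the figures transcribed from the report as quoted above:

```python
news_share = 0.14            # news = 14% of all citations
pr_of_news = 0.0032          # press releases = 0.32% of news citations
syndicated_of_news = 0.062   # syndicated news = 6.2% of news citations

# A share of the news category times the news share of all citations
# gives that content's share of the entire dataset, in percent.
pr_of_total = round(pr_of_news * news_share * 100, 2)          # 0.04
syndicated_of_total = round(syndicated_of_news * news_share * 100, 2)  # 0.87

print(pr_of_total, syndicated_of_total)
```

The 0.87% result rounds to the report’s 0.9% figure for syndicated content overall.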

To identify syndicated content, BuzzStream cross-referenced author names against publications using its ListIQ tool and manually confirmed cases where the author name didn’t match the publication. The company acknowledged this method has limits, since some sites repost press releases without labeling them as such.

What The Data Shows About What Works

The report’s more interesting finding is what does get cited.

Original editorial content made up 81% of news citations in the dataset. Affiliate and review content accounted for the rest. The split held across prompt types, though affiliate content had its strongest showing in evaluative prompts at 39%.

The report broke prompts into three categories. Evaluative prompts like “Is Sony better than Bose?” generated the most news citations at 18% of all citations. Brand awareness prompts like “What is Chase known for?” generated the fewest at 7%. Informational prompts fell in between.

Editorial content that appeared most often in evaluative citations included head-to-head comparisons and cost analysis from outlets like Reuters, CNBC, and CNET.

The ChatGPT Newsroom Exception

One platform-level finding stood out. Internal press releases and newsroom content on company-owned domains accounted for 18% of ChatGPT’s citations in the dataset.

On Google’s AI platforms, that number dropped to around 3%.

BuzzStream cited examples including Iberdrola’s corporate press room and Target’s corporate subdomain. When prompted about Iberdrola’s role in renewables, ChatGPT cited a press release from Iberdrola’s own website. When asked about Target’s products, ChatGPT cited a 2015 press release from Target’s corporate domain.

BuzzStream said most earlier trends looked fairly uniform across platforms, with newsroom content on ChatGPT standing out as a clearer exception.

Why This Matters

The data challenges a premise that press release distribution services have been promoting. Multiple distribution platforms now market press releases as a path to AI visibility.

BuzzStream’s data suggests the distributed version of a press release, the one that lands on Yahoo Finance or MSN through a wire service, rarely becomes the version AI platforms cite. Original editorial coverage and owned newsroom content performed better by wide margins.

This connects to patterns we’ve been tracking. A BuzzStream report we covered in January found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots. Hostinger’s analysis of 66 billion bot requests showed AI training crawlers losing access while search bots expanded their reach.

The citation data suggests that even when syndicated content is accessible to AI crawlers, it rarely gets cited.

Google’s VP of Product for Search, Robby Stein, said in an interview we covered that being mentioned by other sites could help with AI recommendations, comparing AI’s behavior to how a human might research a question. That comparison favors earned editorial coverage over distributed press releases.

Adam Riemer made a related point in his Ask an SEO column, drawing a line between digital PR that builds brand coverage in publications and link building that focuses on placement metrics. BuzzStream’s data suggests that line extends to AI citations too.

For transparency, BuzzStream sells outreach and digital PR tools, so the finding that earned media outperforms distribution aligns with its business model. The company partnered with Citation Labs and used Citation Labs’ XOFU monitoring tool for the data collection.

Looking Ahead

This is part one of a multi-part analysis from BuzzStream. The single-week data window and large-brand focus are limits worth noting. Smaller brands with less existing editorial coverage may see different results.

Businesses investing in digital PR may want to look more closely at how different distribution channels perform in their category. The data suggests the channel you use can affect where your brand gets cited.


Featured Image: Cagkan Sayin/Shutterstock

Vibe Coding Plugins? Validate With Official WordPress Plugin Checker via @sejournal, @martinibuster

Vibe coding WordPress plugins with AI can raise concerns about whether a plugin follows best practices for compatibility and security. WordPress.org’s Plugin Check Plugin offers a solution for those who wish to check whether a plugin conforms to the official standards. The latest version can now connect to AI.

The plugin is developed by WordPress.org and is meant as a tool for plugin authors to test their own plugins with the same kinds of tests used by the official WordPress plugin repository, which can also help speed up the process of getting accepted into the repository.

According to the official plugin description:

“Plugin Check is a tool for testing whether your plugin meets the required standards for the WordPress.org plugin directory. With this plugin you will be able to run most of the checks used for new submissions, and check if your plugin meets the requirements.

Additionally, the tool flags violations or concerns around plugin development best practices, from basic requirements like correct usage of internationalization functions to accessibility, performance, and security best practices.”

The Plugin Check plugin also has a Plugin Namer feature that checks whether a plugin’s name is similar to another plugin’s, may violate a trademark, complies with WordPress naming guidelines, or is too generic or broad.

The latest version of the plugin is 1.9.0, and it adds the following new features:

  • Support for the new WordPress 7.0 AI connectors, so the plugin can work with the WordPress AI infrastructure.
  • An updated block compatibility check for WordPress 7.0.
  • A check for external URLs in top-level admin menus to avoid admin issues.
  • Additional tweaks, enhancements, and improvements.

User reviews share positive experiences:

“This plugin helped me identify areas of my plugin that I thought I had taken care of. When developing my first plugin. I learned a lot through the feedback given and was able to re-run and eventually remove of all errors.”

“Useful tool for catching issues early. If you’re serious about plugin development, this is a must-have.”

Download the official WordPress Plugin Checker Tool here:

Plugin Check (PCP) By WordPress.org

Google AI Mode’s Personal Intelligence Now Free In U.S. via @sejournal, @MattGSouthern

Google is opening Personal Intelligence to free-tier users in the U.S. Previously limited to paid AI Pro and AI Ultra subscribers, the feature is now expanding to users with personal Google accounts.

What’s New

Announced in a blog post, the expansion covers AI Mode in Search, the Gemini app, and Gemini in Chrome. AI Mode access is available today, while the Gemini app and Chrome rollouts are starting now.

Personal Intelligence connects a user’s Gmail and Google Photos to AI-powered search and chat responses. When enabled, AI Mode and Gemini can reference email confirmations, travel bookings, and photo memories to answer questions without the user providing that context manually.

What Changed

When Google first launched Personal Intelligence in January, you needed a subscription to try it. Today’s expansion removes that paywall for U.S. users on personal Google accounts.

The feature still isn’t available for Google Workspace business, enterprise, or education accounts.

You can opt in by connecting apps through your Search or Gemini settings, and you can turn connections on or off at any time.

What Google Says About Training Data

The blog post includes a disclosure about how data from connected accounts is handled.

According to the post, Gemini and AI Mode don’t train directly on your Gmail inbox or Google Photos library. Google describes the training as limited to “specific prompts in Gemini or AI Mode and the model’s responses.”

That means prompts generated while using Personal Intelligence could include details drawn from connected apps, even though Google says it doesn’t train directly on raw Gmail or Photos data.

Why This Matters

The move from paid to free changes the scale of this feature. When Personal Intelligence required a Pro or Ultra subscription, it reached a smaller audience of paying users. Opening it to anyone with a personal Google account in the U.S. puts it in front of a much larger base.

Increased personalization means AI Mode responses could vary more from user to user. Two people searching the same query may get different results if one has connected their Gmail and the other hasn’t. That makes it harder to benchmark what AI Mode shows for a given topic.

This feature could also change how people type queries into AI Mode. If Google already has the necessary context about a person, we might see searches become shorter. That’s an idea I explored in a video back when Google originally launched the feature.

Looking Ahead

No expansion beyond the U.S. or to Workspace accounts has been announced. Moving from paid to free in less than two months suggests Google is confident in this feature. How people respond to the linking of personal data to search will likely shape future rollout plans.

Google Removes ‘What People Suggest,’ Expands Health AI Tools via @sejournal, @MattGSouthern

Google has removed “What People Suggest,” a search feature that used AI to organize health perspectives from online discussions. The confirmation came as Google held its annual Check Up event, where it announced new AI health features for YouTube.

A Google spokesperson confirmed the removal to The Guardian, calling it part of a “broader simplification” of the search results page. The spokesperson said the decision was unrelated to the quality or safety of the feature. The Guardian also reported, citing three people familiar with the matter, that the feature was pulled after a trial run.

“What People Suggest” launched on mobile devices in the U.S. last year at Google’s annual health event, The Check Up. At the time, Karen DeSalvo, then Google’s chief health officer, said people value hearing from others who have experienced similar health conditions. DeSalvo retired in August and was succeeded by Dr. Michael Howell, who led this year’s Check Up announcements.

What Google Announced At The Check Up

At its 2026 Check Up event, Google announced AI health features across YouTube, Fitbit, and clinician education.

Google says health-related videos on YouTube have surpassed 1 trillion views globally. The company is adding an AI-powered “Ask” button on eligible health videos that lets viewers interact with the content.

Separately, Google is experimenting with AI to organize peer-reviewed scientific information and help present complex topics to broader audiences.

In the blog post, Howell said a central challenge has been connecting people to the right health information at the right time.

Google.org is committing $10 million to fund organizations that will reimagine clinician education for AI. The Council of Medical Specialty Societies and the American Academy of Nursing are the first partners.

Why This Matters

AI features in search results for health-related topics keep changing. Google pulled back one feature that showed forum-style perspectives and put new investment into medical education and structured video tools.

YouTube’s growing role in health-related AI Overviews is already documented. SE Ranking’s study of German health queries found YouTube was the most-cited domain in health AI Overviews, appearing more often than medical or government sites. Adding interactive AI on top of those videos could reinforce that pattern.

How We Got Here

Google’s AI features for health queries have faced pressure over the past year.

In January, the Guardian published an investigation that found health experts considered some AI Overview responses misleading for medical queries. Google disputed elements of the reporting but later removed AI Overviews for some specific health searches, including queries about liver function tests.

“What People Suggest” launched during the same period Google was expanding AI Overviews to thousands more health topics. Ahrefs data from November showed medical YMYL queries triggered AI Overviews 44.1% of the time, the highest rate among YMYL categories.

Looking Ahead

The pattern over the past year points to tighter guardrails around some health AI experiences. Whether that direction holds is less certain.

The removal of “What People Suggest,” and YouTube’s continued citation visibility in AI Overviews, could point that way. But Google’s track record with health-related AI features also shows these decisions can change quickly.


Featured Image: Mamun_Sheikh/Shutterstock

Google AI Overviews Cut Germany’s Top Organic CTR By 59% via @sejournal, @MattGSouthern

AI Overviews cut the click-through rate on Germany’s top organic position by 59%, according to a SISTRIX analysis of more than 100 million keywords.

The data, published by founder Johannes Beus, puts numbers on a pattern that multiple studies have now documented across different markets. The dataset stands out for its size and for offering category-level detail in Germany.

What The Data Shows

SISTRIX found that AI Overviews appear on roughly 20% of all keywords in German search results. That’s close to SE Ranking’s finding of about 21% in the US market from November, though the datasets cover different markets and use different methodologies.

When AIOs are present, the CTR at position 1 drops from 27% to 11%. Across all positions, a typical search leads to an organic click 57% of the time without an AIO. With one, that falls to 33%.

About 79% of AIOs in German results appear above the organic listings. The rest show up further down the page, after the first few organic results.

SISTRIX estimates the total cost at 265 million lost organic clicks per month across the German market. Averaged across all keywords, including those without AIOs, that works out to a 6.6% click loss.
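SISTRIX’s blended figure is drawn from its volume-weighted keyword data, but the rough shape of the math can be sketched from the numbers above. Assuming (unrealistically) equal search volume across all keywords, the loss comes out somewhat higher than the published 6.6%:

```python
aio_prevalence = 0.20   # AIOs appear on ~20% of German keywords
ctr_without = 0.57      # organic click probability without an AIO
ctr_with = 0.33         # organic click probability with an AIO

# Blended click rate across all keywords under the equal-volume assumption.
blended = (1 - aio_prevalence) * ctr_without + aio_prevalence * ctr_with

# Relative click loss versus a world with no AIOs at all.
loss = (ctr_without - blended) / ctr_without
print(f"{loss:.1%}")  # ~8.4% under the equal-volume assumption
```

The gap between this ~8.4% sketch and the published 6.6% would be consistent with AIOs skewing toward lower-volume informational keywords.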

Impact Varies By Category

SISTRIX broke down the data by category, and the gap between the most-affected and the least-affected is large.

Parenting and baby content sites lost over 24% of their organic clicks. The health and home improvement categories also showed losses well above average.

At the other end, recipe sites like Chefkoch lost about 1%. News and media sites lost 7.37%. Shopping and travel booking sites were barely affected.

SISTRIX’s Beus wrote that informational queries are hit hardest. Transactional searches, where people need to do something that an AI summary can’t replace, are mostly spared.

Biggest Losers

In raw numbers, Wikipedia leads with an estimated 31.6 million lost clicks per month in Germany, representing about 5% of its Google traffic in that market. DocCheck (4.8 million), AOK (4 million), ADAC (3.1 million), and Pons (3.1 million) follow.

By percentage, specialized health portals are hit hardest. SISTRIX data shows lumedis.de losing 30% of its organic clicks, ratgeber-herzinsuffizienz.de losing 29%, and herzstiftung.de losing 29%.

Sites with the smallest losses include wetter.com (0.18%), Booking.com (0.46%), Idealo (0.85%), and Amazon (1.73%).

How This Compares To Other Markets

The German data aligns with other regions, but comparisons are limited by differing methods and keywords.

A Pew Research Center study of US searches found that users clicked 8% of the time when an AIO was present, compared to 15% without one. That’s a 47% relative reduction. A GrowthSRC analysis found a 32% drop at position 1 in the US.

The German numbers (59% loss at position 1) are steeper. Whether that reflects actual differences between the markets or differences in measurement methodology isn’t clear from the available data.

Why This Matters

The category-level breakdown is the most useful part of this data if you’re managing organic search in European markets. A blended 6% average click loss sounds manageable, but losing 24% of clicks in your specific vertical isn’t.

SISTRIX’s data shows search volume alone doesn’t reliably predict traffic where AIOs are active. Whether an AIO appears and impacts CTR in your category must now be part of keyword analysis.

Looking Ahead

SISTRIX previously reported 17% AIO prevalence in Germany in August; it now stands at 20%. Growth has slowed, but the feature's presence in German search results continues to expand.

SISTRIX is a commercial SEO analytics provider. The data in this analysis is drawn from their proprietary keyword database.


Featured Image: Lana Sham/Shutterstock

Search Referral Traffic Down 60% For Small Publishers, Data Shows via @sejournal, @MattGSouthern

Search referral traffic to small publishers dropped 60% over two years, according to Chartbeat data reported exclusively by Axios.

That’s nearly three times the decline at large publishers. The analytics firm, which tracks traffic across thousands of client websites globally, segmented its network by size. Mid-sized publishers (10,000 to 100,000 daily page views) lost 47%, and large publishers (over 100,000 daily page views) lost 22%.

What’s New

Aggregate search traffic data from Chartbeat isn’t new; our January Reuters Institute coverage cited Chartbeat figures showing a 33% global decline in Google Search referrals. What’s new is the breakdown by publisher size, which shows the losses are concentrated at the bottom.

Page views from Google Search fell 34% between December 2024 and December 2025, per the Chartbeat data. Google Discover, the other top referral source, fell 15% over the same period.

ChatGPT referrals grew more than 200% during that window, but chatbots still account for less than 1% of all publisher page view referrals. Growth in chatbot traffic hasn’t come close to replacing what search lost.

How Larger Publishers Are Compensating

Larger publishers appear to be finding alternative traffic sources to partially offset search losses. News and media sites in particular are seeing growth in direct and internal traffic as a share of referrals.

Email and app referrals are also growing among news publishers, per the Axios report. Our Reuters Institute coverage in January found the same pattern, with publishers saying they planned to invest more in owned channels.

Overall weekly page views across all publishers in Chartbeat’s network dropped 6% between 2024 and 2025. The firm attributed that to factors outside search, including a quieter election cycle, though that’s their interpretation, not a measured cause.

AI Referral Engagement Varies By Site Type

One finding that stands out for content strategy is that news and media sites get the highest total page views from AI chatbot referrals, but the lowest engagement per article.

Axios reports that this pattern suggests readers use news citations in chatbots for quick fact-checks or context, not deeper reading.

The other category in the data is “utilitarian sites,” meaning publishers offering health advice or gardening tips. Those publishers see fewer total referrals from AI platforms but more page views per article.

Methodology Notes

Chartbeat sells analytics tools to publishers and has tracked traffic across its client network for close to two decades. Its data covers thousands of websites globally but skews toward news and media publishers.

Small publishers in this data average 1,000 to 10,000 daily page views, medium is 10,000 to 100,000, and large is over 100,000.

Axios received the data exclusively, and Chartbeat hasn’t published it independently.

Why This Matters

Search referral traffic loss is hitting sites with the fewest resources to build alternative traffic.

Most reporting on search traffic declines has treated publishers as a single group. This Chartbeat data breaks the losses down by publisher size. For anyone working with smaller publishers, these numbers should change the conversation.

AI chatbot users click to news sites for quick checks but spend more time on how-to content. That means the value of an AI referral depends on what you publish.

Looking Ahead

We’ll be watching for Chartbeat to publish the full dataset. The variation in chatbot referral engagement by site type is early data worth tracking.


Featured Image: fizkes/Shutterstock