Google Begins Rolling Out The March 2026 Spam Update via @sejournal, @MattGSouthern

Google started rolling out the March 2026 spam update today, according to the Google Search Status Dashboard.

The update is global and in all languages, with a rollout that may take a few days.

What’s New

The Search Status Dashboard listed the update as an incident affecting ranking at 12:00 PM PDT on March 24, with the release note posted at 12:18 PM PDT.

Google’s description reads:

“Released the March 2026 spam update, which applies globally and to all languages. The rollout may take a few days to complete.”

Google hasn’t published a blog post or announced new spam policies with this rollout. So far, it seems to be a standard spam update, not a broader policy change like the March 2024 update, which added categories such as content abuse, expired domain abuse, and site reputation abuse.

How Spam Updates Work

Google describes spam updates as improvements to spam-prevention systems like SpamBrain, targeting sites violating spam policies, which can lead to lower rankings or removal from search results.

Spam updates differ from core updates, which re-assess content quality. Spam updates enforce policies against violations like cloaking, link spam, and content abuse.

Sites affected by a spam update can recover, but recovery takes time. Google states improvements may only appear once automated systems detect compliance over months.

Context

This is Google’s first spam update since the August 2025 spam update, which ran from August 26 to September 22, about 27 days. SISTRIX characterized that update as penalty-only: affected spammy domains lost visibility, but there were no broad ranking changes.

Google’s estimated timeline of “a few days” for the March 2026 update suggests a shorter rollout than recent spam updates, though timelines can stretch. The December 2024 spam update completed in seven days. The August 2025 update took nearly four weeks.

The March 2026 spam update comes about three weeks after the February Discover update finished rolling out.

Why This Matters

Ranking changes during spam update rollouts can happen quickly. Monitoring Search Console data over the next few days will help distinguish spam-related drops from normal fluctuation.

Google hasn’t announced new spam policy categories with this update, so the existing spam policies remain the relevant framework for evaluating any impact.

Looking Ahead

Google will update the Search Status Dashboard when the rollout is complete. Search Engine Journal will report on the completion and any observed effects.


Featured Image: Hurunaga Yuuka/Shutterstock

Google Adds AI & Bot Labels To Forum, Q&A Structured Data via @sejournal, @MattGSouthern

Google updated its Discussion Forum and Q&A Page structured data documentation, adding several new supported properties to both markup types.

The most notable addition is digitalSourceType, a property that lets forum and Q&A sites indicate when content was created by a trained AI model or another automated system.

Content Source Labeling Comes To Forum Markup

The new digitalSourceType property uses IPTC digital source enumeration values to indicate how content was created. Google supports two values:

  • TrainedAlgorithmicMediaDigitalSource for content created by a trained model, such as an LLM.
  • AlgorithmicMediaDigitalSource for content created by a simpler algorithmic process, such as an automatic reply bot.

The property is listed as recommended, not required, for both the DiscussionForumPosting and Comment types in the Discussion Forum docs, and for Question, Answer, and Comment types in the Q&A Page docs.
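As a sketch, a forum post generated by an automated reply bot might declare its origin like this. The URL-free value string follows the enumeration names listed above; the headline, author, date, and text are hypothetical, and the exact accepted value format should be confirmed against Google’s documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "Thread automatically marked as resolved",
  "author": {
    "@type": "Organization",
    "name": "ExampleForum Bot"
  },
  "datePublished": "2026-03-24T12:00:00-07:00",
  "text": "This thread has been marked as resolved.",
  "digitalSourceType": "AlgorithmicMediaDigitalSource"
}
```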

Google already uses similar IPTC source type values in its image metadata documentation to identify how images were created. The update extends that concept to text-based forum and Q&A content.

New Comment Count Property

Google added commentCount as a recommended property across both documentation pages. It lets sites declare the total number of comments on a post or answer, even when not all comments appear in the markup.

The Q&A Page documentation includes a new formula: answerCount + commentCount should equal the total number of replies of any type. This gives Google a clearer picture of thread activity on pages where comments are paginated or truncated.
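The formula can be made concrete with a minimal sketch. Here a question has two posted answers and three comments, for five total replies; the question name and text are made up:

```json
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "How do I reset my password?",
    "text": "I can't find the reset option in settings.",
    "answerCount": 2,
    "commentCount": 3
  }
}
```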

Expanded Shared Content Support

The Discussion Forum documentation expanded its sharedContent property. Previously, sharedContent accepted a generic CreativeWork type. The updated docs now explicitly list four supported subtypes:

  • WebPage for shared links.
  • ImageObject for posts where an image is the primary content.
  • VideoObject for posts where a video is the primary content.
  • DiscussionForumPosting or Comment for quoted or reposted content from other threads.

The addition of DiscussionForumPosting and Comment as accepted types is new. Google’s updated documentation includes a code example showing how to mark up a referenced comment with its URL, author, date, and text.
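A minimal sketch of that pattern follows, with a hypothetical URL, author, date, and text; Google’s own example may differ in detail:

```json
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "Replying to an earlier thread",
  "sharedContent": {
    "@type": "Comment",
    "url": "https://example.com/thread/42#comment-7",
    "author": {
      "@type": "Person",
      "name": "Example User"
    },
    "datePublished": "2026-03-20T09:30:00-07:00",
    "text": "The quoted comment text from the other thread."
  }
}
```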

The image property description was also updated across both docs with a note about link preview images. Google now recommends placing link preview images inside the sharedContent field’s attached WebPage rather than in the post’s image field.

Why This Matters

For sites that publish a mix of human and machine-generated content, the digitalSourceType addition provides a structured way to communicate that to Google. The new properties are optional, and no existing implementations will break.

Google has not said how it will use the digitalSourceType data in its ranking or display systems. The documentation only describes it as a way to indicate content origin.

Looking Ahead

The update does not include changes to required properties, so existing forum and Q&A structured data implementations remain valid. Sites that want to adopt the new properties can add them incrementally.

Research: “You Are An Expert” Prompts Can Damage Factual Accuracy via @sejournal, @martinibuster

“You are an expert” persona prompting can harm performance as much as it helps. A new study shows that persona prompting improves alignment with human expectations but can reduce factual accuracy on knowledge-heavy tasks, with effects varying by task type and model. The takeaway is that persona prompting works better on some kinds of tasks than on others.

Persona Prompting

Persona prompting is a common way to shape how large language models respond, especially in applications where tone and alignment with human expectations matter. It is widely used because it improves how outputs read and feel. Given how widespread persona prompting is, it may come as a surprise that its actual effect on performance remains unclear: prior research shows inconsistent results, leaving it an open question whether the technique helps or harms.

The researchers concluded that persona prompting is neither broadly beneficial nor harmful, and that its efficacy depends on the type of task.

They found:

  • It improves alignment-related outputs such as tone, formatting, and safety behavior
  • Persona prompting degrades performance on tasks that rely on factual accuracy and reasoning

Based on this, the authors introduce a method called PRISM (Persona Routing via Intent-based Self-Modeling), which applies personas selectively, using intent-based routing instead of treating personas as a default setting. Their findings show that persona prompting works best as a conditional tool and provide a better understanding of when it helps and when it should be avoided.

Managing Behavioral Signals

In section three of the paper, the researchers say that expert personas have “useful behavioral signals” but that naïve use of persona prompting damages as much as it helps. They say this raises the question of whether those benefits can be separated from the harms and applied only where they improve results.

Behavioral signals influence LLM output. These signals are the reason persona prompting works. They drive improvements in tone, structure, safety behavior, and how well responses match expectations. Without them, there would be no benefit to persona prompting.

Yet, in a seeming paradox, the paper shows that those same signals interfere with tasks that depend on factual accuracy and reasoning. That is why the paper treats them as something to manage, not maximize.

These signals include:

  • Stylistic adaptation and tone matching: Adopting a professional or creative voice.
  • Structured formatting: Providing step-by-step or technical layouts.
  • Format adherence: Helping the model follow complex structures, like professional emails or step-by-step STEM explanations.
  • Intent following: Focusing the model on the user’s underlying goal, especially in tasks like data extraction.
  • Safety refusal: Identifying and declining harmful requests more effectively by adopting a “Safety Monitor” role.

Persona Prompt Wins

The paper found that persona prompts were a win in five out of eight categories of tasks:

  1. Extraction: +0.65 score increase.
  2. STEM: +0.60 score increase.
  3. Reasoning: +0.40 score increase.
  4. Writing: Improved through better stylistic adaptation.
  5. Roleplaying a domain expert: Improved through better tone matching.

Persona prompting won in the above categories because they depend more on style and clarity than on whether the answer is factually correct. The researchers also found that the longer and more detailed the persona prompt, the stronger the alignment and safety behaviors become.

Persona Prompt Failures

Conversely, the expert persona consistently degraded performance in the remaining three of the eight categories, which rely on precise fact retrieval or strict logic rather than style and clarity. The performance drop occurs because a detailed expert persona essentially “distracts” the model by activating an “instruction-following mode” that prioritizes tone and style.

Activating an expert persona comes at the expense of “factual recall.” The model is so focused on trying to act like an expert that it forgets information it learned during its initial training. That explains the drops in accuracy for facts and math.

Persona expert prompts performed worse in the following three categories:

  1. Math
  2. Coding
  3. Humanities (memorized factual knowledge)

The paper notes that on one of the knowledge benchmarks (MMLU), accuracy dropped from a 71.6% baseline to 68.0% even with the “minimum” persona, and fell further to 66.3% with the “long” persona.

They explained the safety improvements:

“More detailed persona descriptions provide richer alignment information, amplifying instruction-tuning behaviors proportionally.”

And showed why factual accuracy takes a hit:

“Persona Damages Pretraining Tasks
During pretraining, language models acquire capabilities such as factual knowledge memorization, classification, entity relationship recognition, and zero-shot reasoning. These abilities can be accessed without relying on instruction-tuning, and can be damaged by extra instruction-following context, such as expert persona prompts.”

Conclusions Reached

The researchers conclude that persona prompting consistently improves alignment-dependent tasks such as writing, roleplay, and safety behavior, while degrading performance on tasks that rely on pretraining-based knowledge, including math, coding, and general knowledge benchmarks.

They also found that a model’s sensitivity to personas scales with its training. Models that are more optimized to follow instructions are more “steerable,” which means they get the biggest boost in safety and tone, but they also suffer the largest drops in factual accuracy.

Takeaways

1. Be selective about using persona prompts:

  • Do not default to “You are an expert” prompts
  • Treat persona prompting as situational. Using it everywhere introduces hidden accuracy risks.

2. Persona prompting is effective for:

  • Writing quality
  • Tone
  • Formatting and organization
  • Readability

3. Tasks that don’t benefit from persona prompting and should instead use neutral prompting to preserve accuracy:

  • Fact-checking
  • Statistics
  • Technical explanations
  • Logic-heavy outputs
  • Research
  • SEO analysis

4. Remember these three findings:

  • Use persona prompting to generate content, then switch to a non-persona prompt (or a stricter mode) to verify facts.
  • Highly detailed “expert” prompts strengthen tone and clarity but reduce factual and knowledge accuracy.
  • “You are an expert” prompts may cause a model to prioritize sounding correct over actually being correct.

5. Match your prompts to the task:

  • Content creation: Persona helps
  • Analysis and validation: Persona hurts

The most effective approach is not one prompt, but a workflow that switches prompts depending on the task, similar to the researchers’ PRISM approach.
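The PRISM system itself is not public code, but the routing idea can be sketched in a few lines: classify the task first, then attach a persona only for alignment-heavy tasks. The category sets and the persona string below are illustrative, not taken from the paper:

```python
# Illustrative sketch of intent-based persona routing: apply a
# persona only to task types where it tends to help.

# Task categories where the study found personas helped vs. hurt.
PERSONA_HELPS = {"extraction", "stem", "reasoning", "writing", "roleplay"}
PERSONA_HURTS = {"math", "coding", "humanities"}

EXPERT_PERSONA = "You are an expert assistant with deep domain knowledge.\n"

def route_prompt(task_type: str, user_prompt: str) -> str:
    """Prefix a persona only for alignment-heavy task types."""
    if task_type in PERSONA_HELPS:
        return EXPERT_PERSONA + user_prompt
    # Fact- and logic-heavy tasks get a neutral prompt to protect accuracy.
    return user_prompt

# Example: writing gets the persona, math does not.
print(route_prompt("writing", "Draft a product announcement."))
print(route_prompt("math", "What is the derivative of x**3?"))
```

In a production workflow the `task_type` would itself come from an intent classifier rather than being passed in by hand.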

Read the research paper:
Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM

Featured Image by Shutterstock/ImageFlow

Bing AI Dashboard Maps Grounding Queries To Cited Pages via @sejournal, @MattGSouthern
  • Bing Webmaster Tools added a new mapping feature to the AI Performance dashboard.
  • You can now click a grounding query to see which pages are cited for it.
  • Or click a page to see which grounding queries drive its citations.

Bing’s AI Performance dashboard now maps grounding queries to cited pages, letting you connect AI citation data to specific URLs on your site.

Google Responds To Error That Causes Old Branding To Persist In SERPs via @sejournal, @martinibuster

Google’s John Mueller answered a question about Google rewriting title tags to show the old brand of a site that rebranded in 2015. Apparently everything was updated to the new brand name, but Google’s search results stubbornly persist in showing the old branding.

Old Brand Name Shown In Title Tags

The person asking the question on Bluesky related that a company updated their entire site with its new branding, but Google ignores it in favor of showing the old branding in the search results.

They posted:

“Hey @johnmu.com, curious about Site Name persistence. Treatwell (UK) is still showing as “Wahanda” in results – a rebrand that happened in 2015! Is there a specific “legacy” signal that might override current SiteName structured data for such a long period in one country only? “

Google’s Mueller was puzzled by the situation and didn’t have an answer as to why it was happening. Perhaps it’s one of those rare cases where a bug keeps a part of the index from updating. But he did suggest using the domain name as an alternate site name.

Mueller referred the person to one of Google’s developer pages, “What to do if your preferred site name isn’t selected.”

He responded:

“That’s a bit odd – I’ll pass it on to the team. FWIW what generally works in cases like this is to use the domain name as an alternate site name – developers.google.com/search/docs/… – but it would be nice if that weren’t needed.”
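The workaround Mueller points to uses WebSite structured data with an alternateName. A sketch of what that could look like for this case, with illustrative values:

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Treatwell",
  "alternateName": "treatwell.co.uk",
  "url": "https://www.treatwell.co.uk/"
}
```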

The site itself does not appear to contain on-page instances of the rogue branding. The old domain is correctly 301 redirecting to the new domain. However, there are some links in the footer that contain referral codes with the old branding on them, and the sitemap contains links to 404 pages that contain the old branding. Although those may not be the cause of the branding mismatch in the Google search results, it’s a good SEO practice to be tidy about what’s in your sitemaps and to remove outdated links.

These kinds of rare errors are interesting because they kind of provide a sneak peek into a part of Google’s indexing that isn’t normally in view, like a crack in a wall. What insights do you derive from this anomalous situation?

Featured Image by Shutterstock/SsCreativeStudio

Is WordPress Too Complex For Most Sites? via @sejournal, @martinibuster

Joost de Valk, the co-founder of the Yoast SEO plugin, provoked a discussion and some controversy with a recent blog post that posited that the concept of needing a content management system (CMS) to publish a website is increasingly outdated. This insight came to him after migrating his site to a static Astro-based website with the help of AI.

Joost wrote that the reality today is that many businesses and individuals need nothing more complicated than a static website and that a CMS is overkill for those simple needs.

He affirmed that CMSs are vital for building complex websites, but he also makes the case that the complexity problem that a CMS solves is not representative of the needs of most websites:

“Let me be clear: there are real use cases where a CMS earns its complexity. …These aren’t edge cases. They represent a lot of websites.

But they don’t represent most websites. Most websites are a handful of pages and maybe a blog.”

His article shares eight key observations:

  1. Creating a website was never exclusively a conversation about a CMS.
  2. Yet CMS options are more widespread than ever.
  3. The growing trend right now is away from the CMS.
  4. Joost de Valk joined the trend away from a CMS to Astro.
  5. Static HTML websites are as SEO-friendly as CMS-based websites.
  6. Simplicity outperforms complexity for many needs.
  7. Content management systems remain the best choice for complex requirements.
  8. The case for a CMS will become less relevant once users can chat with an AI to publish content.

Joost explained that last point:

“I built this entire Astro site with AI assistance. The next step, editing content through conversation, is not a big leap. It’s a small one.

…When editing a static site becomes as easy as sending a message, the CMS’s core advantage for the majority of websites disappears.”

For some, it might be difficult to imagine publishing a website without a CMS, and others believe that WordPress SEO plugins provide an advantage over other platforms. But for those of us who have been in SEO for a long time, we know from experience that static HTML sites are generally faster than any CMS-based website.

Before WordPress existed and became viable, I used to spin up static HTML sites from components I hand coded, including PHP-based websites. Those sites ranked exceptionally well and easily handled DDoS-level traffic. Although I didn’t have to deal with Schema structured data because it hadn’t been invented yet, automating title tags and meta descriptions across a website was a relatively trivial thing to do. No plugins are necessary to SEO a static HTML website, and this is one of the insights that de Valk discovered after transitioning his blog away from WordPress.
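That kind of automation is still trivial today. A minimal sketch of generating title tags and meta descriptions for a static build, with made-up page records and formatting:

```python
# Minimal sketch: generate <title> and meta description tags for a
# static build from a list of page records.
from html import escape

pages = [
    {"slug": "contact", "name": "Contact Us", "summary": "How to reach our team."},
    {"slug": "pricing", "name": "Pricing", "summary": "Plans and costs at a glance."},
]

def head_tags(page: dict, site_name: str = "Example Site") -> str:
    """Return the <title> and meta description markup for one page."""
    title = escape(f"{page['name']} | {site_name}")
    desc = escape(page["summary"])
    return f'<title>{title}</title>\n<meta name="description" content="{desc}">'

for page in pages:
    print(head_tags(page))
```

A static site generator runs the equivalent of this loop at build time, which is why no plugin is needed.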

He shared:

“I built Yoast SEO, so you’d think this is where a static site falls short. It doesn’t. Everything Yoast SEO does on WordPress, I can do in Astro. XML sitemaps, meta tags, canonical URLs, Open Graph tags, structured data with full JSON-LD schema graphs, auto-generated social share images: it’s all there. In fact, it’s easier to get right on a static site because you control the entire HTML output. There’s no theme or plugin conflict messing with your head tags. No render-blocking resources injected by something you forgot you installed. What you build is what gets served.

The SEO features that a CMS plugin provides aren’t magic. They’re HTML output. And any modern static site generator can produce that same HTML, often cleaner.”

It’s true, the web pages Joost’s blog serves today are a fraction of the size of what they were when published using WordPress. One URL on de Valk’s website that I checked (/healthy-doubt) went from over 1,400 lines of code to only 180 lines of code. Furthermore, something de Valk didn’t mention is that the Astro-based HTML rendered with only eight minor HTML validation issues. WordPress sites tend to render with scores and even hundreds of invalid HTML issues.

Although Google can crawl and index the code that underlies the average WordPress website, invalid HTML nevertheless runs counter to the most fundamental goal of SEO: to make it easy for search engines to crawl, parse, and understand the content.

Article Provoked Controversy

Many developers responded against Joost’s article but many others agreed with him.

Dipak Gajjar (@dipakcgajjar) tweeted:

“A properly configured WordPress site with object cache and a CDN in front is already near-static in terms of delivery. You just get the CMS on top for free.

Good luck @jdevalk convincing a non-technical client to push markdown files to Git just to publish a blog post. WordPress exists because content management is a real problem. Static tools solve the developer experience, not the client experience.”

@cameronjonesweb asked:

“Hands up who thinks it’s a great idea to make their clients update their website content by committing markdown files to GitHub…”

@andrewhoyer pushed back on Joost’s article:

“Blogs would never have become popular without software. Only a tiny fraction of people can edit HTML and CSS by hand. Just because a few of us can doesn’t make static sites a good option.”

But it wasn’t all verbal tomatoes getting thrown at Joost, there were some roses tossed his way, too.

Alex Schneider (@Aslex) agreed that AI is lowering the barrier to creating and maintaining static websites.

Schneider tweeted:

“Static sites aren’t just for people who know HTML anymore. AI tools already let anyone generate and publish content to static sites with zero coding. And let’s be honest, traditional blogs are dying anyway.”

@LusciousPotate shared their opinion that WordPress is outdated:

“Constant WordPress updates, constant plug-in updates, constant security issues. It’s old, the tech stack is outdated; it needs to be put out to pasture.”

Is WordPress Still Relevant?

Generating a static site with Astro still requires some technical knowledge, and at this point in time it’s nowhere near as easy as using WordPress to get online. Many hosting platforms simplify the process of creating websites with WordPress, including with the use of AI. WordPress 7.0 looks to be the start of the most profound changes to WordPress, quite likely making it even easier for anyone to publish a website.

So yes, a strong case can be made for the continued relevance of content management systems, especially WordPress. Yet static website generator platforms may well become a thing in the near future.

Read de Valk’s blog post here: Do you need a CMS?

Featured Image by Shutterstock/TierneyMJ

Google Tested AI Headlines In Discover. Now It’s Testing Them In Search via @sejournal, @MattGSouthern

When Google started rewriting headlines with AI in Discover last year, it called the test “small.” By the following month, it was reclassified as a feature.

Now the same pattern is showing up in traditional search results.

Google confirmed to The Verge (subscription required) that it’s testing AI-generated headline rewrites in Search. The company described the test as “small and narrow.” It’s similar language to what Google used before reclassifying AI headlines in Discover as a feature.

What’s Happening In Search

Multiple Verge staff members spotted rewritten headlines over the past few months. In one case, “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” appeared in results as “‘Cheat on everything’ AI tool.” Another article was rewritten to “Copilot Changes: Marketing Teams at it Again,” phrasing the article never used.

The test isn’t limited to news sites. Google said it affects other types of websites too.

None of the rewrites included any disclosure that Google had changed the original headline.

Google told The Verge the goal is to “identify content on a page that would be a useful and relevant title to a users’ query.” The company said the test aims at “better matching titles to users’ queries and facilitating engagement with web content.”

Any broader launch may not use generative AI, the company said, but it didn’t explain what the alternative would look like. The test hasn’t been approved for wider rollout.

How Discover’s AI Headlines Became A Feature

We’ve been tracking Google’s treatment of Discover through several changes this year. Here’s how the headline experiment played out.

In December, Google called AI-generated headlines in Discover “a small UI experiment for a subset of Discover users.” By January, Google reclassified the feature. It now “performs well for user satisfaction,” according to Nieman Lab’s reporting.

That’s about a month from test to reclassified feature.

During that period, Google revised its Discover guidelines alongside the February Discover core update and rolled out AI previews that show short AI-generated summaries with links. Each change added another layer of AI-mediated content between publishers and readers in Discover.

The Search test follows the same opening move. Google describes it as small, narrow, and not approved for broader rollout.

How This Differs From Existing Title Rewrites

Title tag rewrites in search results aren’t new. Google has been doing this for years using rule-based systems. An analysis of over 80,000 title tags found Google changed 61% of them. A follow-up study put that number at 76%.

Those existing rewrites pull from elements already on the page. According to Google’s title link documentation, the system draws from title elements, H1 headings, og:title meta tags, anchor text, and other on-page sources.

The new test is different. In the Copilot example, the rewritten headline used phrasing that didn’t exist anywhere in the article. That’s generative AI creating new text.

Why This Matters

An analysis of over 400 publishers found Discover’s share of Google-sourced traffic had climbed from 37% to roughly 68%. For publishers relying so heavily on Discover, AI headline rewrites becoming a feature in Search would mean losing headline control across both of their primary Google traffic sources.

Google’s title link documentation describes inputs Google may use to generate titles but doesn’t include a publisher control for opting out of rewrites. And because Google doesn’t disclose when a headline has been rewritten, you may not know it’s happening to your content unless you check manually.

Sean Hollister, senior editor at The Verge, wrote:

“This is like a bookstore ripping the covers off the books it puts on display and changing their titles.”

Louisa Frahm, SEO director at ESPN, wrote on LinkedIn:

“After 10+ years in news SEO, I’ve come to find that a headline is the most prominent element for attracting readers in timely windows, to provide a targeted synopsis that elevates your brand voice. If that vision gets altered and facts are misrepresented, long-term audience trust will be compromised.”

Looking Ahead

Publishers monitoring their search visibility should check whether their headlines are appearing as written in Google results. There’s no tool for this, so it requires manual spot-checking.


Featured Image: elenabsl/Shutterstock

AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

A BuzzStream report analyzing 4 million AI citations found that press releases distributed through syndication channels barely appear in AI-generated answers.

Background

Press release distribution services have been marketing AI visibility as a selling point.

For example, ACCESS Newswire offers an “AI Visibility Checklist” for press releases. eReleases published a guide positioning press releases as tools for AI search visibility. Business Wire has written about optimizing releases for answer engine discovery.

BuzzStream’s data offers a different perspective.

What They Found

The report’s authors used XOFU, a citation monitoring tool from Citation Labs, to track where AI platforms pull their sources across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini. BuzzStream ran 3,600 prompts across 10 industries and collected data for one week.

Overall, news publications accounted for 14% of all citations in the dataset. But within that news category, the numbers drop off quickly for syndicated and distributed content.

Press releases published through syndication channels like Yahoo and MSN accounted for 0.32% of news citations and 0.04% of the entire dataset.

Direct citations from newswire services like PRNewswire made up 0.21% of the full dataset. They appeared most often in exploratory and informational prompts, but even there they only reached 0.37%.

Syndicated news content overall, including articles republished through MSN and Yahoo networks, accounted for 6.2% of news citations and 0.9% of the total dataset.

To identify syndicated content, BuzzStream cross-referenced author names against publications using its ListIQ tool and manually confirmed cases where the author name didn’t match the publication. The company acknowledged this method has limits, since some sites repost press releases without labeling them as such.

What The Data Shows About What Works

The report’s more interesting finding is what does get cited.

Original editorial content made up 81% of news citations in the dataset. Affiliate and review content accounted for the rest. The split held across prompt types, though affiliate content had its strongest showing in evaluative prompts at 39%.

The report broke prompts into three categories. Evaluative prompts like “Is Sony better than Bose?” generated the most news citations at 18% of all citations. Brand awareness prompts like “What is Chase known for?” generated the fewest at 7%. Informational prompts fell in between.

Editorial content that appeared most often in evaluative citations included head-to-head comparisons and cost analysis from outlets like Reuters, CNBC, and CNET.

The ChatGPT Newsroom Exception

One platform-level finding stood out. Internal press releases and newsroom content on company-owned domains accounted for 18% of ChatGPT’s citations in the dataset.

On Google’s AI platforms, that number dropped to around 3%.

BuzzStream cited examples including Iberdrola’s corporate press room and Target’s corporate subdomain. When prompted about Iberdrola’s role in renewables, ChatGPT cited a press release from Iberdrola’s own website. When asked about Target’s products, ChatGPT cited a 2015 press release from Target’s corporate domain.

BuzzStream said most earlier trends looked fairly uniform across platforms, with newsroom content on ChatGPT standing out as a clearer exception.

Why This Matters

The data challenges a premise that press release distribution services have been promoting. Multiple distribution platforms now market press releases as a path to AI visibility.

BuzzStream’s data suggests the distributed version of a press release, the one that lands on Yahoo Finance or MSN through a wire service, rarely becomes the version AI platforms cite. Original editorial coverage and owned newsroom content performed better by wide margins.

This connects to patterns we’ve been tracking. A BuzzStream report we covered in January found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots. Hostinger’s analysis of 66 billion bot requests showed AI training crawlers losing access while search bots expanded their reach.

The citation data suggests that even when syndicated content is accessible to AI crawlers, it rarely gets cited.

Google’s VP of Product for Search, Robby Stein, said in an interview we covered that being mentioned by other sites could help with AI recommendations, comparing AI’s behavior to how a human might research a question. That comparison favors earned editorial coverage over distributed press releases.

Adam Riemer made a related point in his Ask an SEO column, drawing a line between digital PR that builds brand coverage in publications and link building that focuses on placement metrics. BuzzStream’s data suggests that line extends to AI citations too.

For transparency, BuzzStream sells outreach and digital PR tools, so the finding that earned media outperforms distribution aligns with its business model. The company partnered with Citation Labs and used Citation Labs’ XOFU monitoring tool for the data collection.

Looking Ahead

This is part one of a multi-part analysis from BuzzStream. The single-week data window and large-brand focus are limits worth noting. Smaller brands with less existing editorial coverage may see different results.

Businesses investing in digital PR may want to look more closely at how different distribution channels perform in their category. The data suggests the channel you use can affect where your brand gets cited.


Featured Image: Cagkan Sayin/Shutterstock

Vibe Coding Plugins? Validate With Official WordPress Plugin Checker via @sejournal, @martinibuster

Vibe coding WordPress plugins with AI can raise concerns about whether a plugin follows best practices for compatibility and security. WordPress.org’s Plugin Check Plugin offers a solution for those who wish to check whether a plugin conforms to the official standards. The latest version can now connect to AI.

The plugin is developed by WordPress.org as a tool for plugin authors to test their own plugins against the same kinds of checks used by the official WordPress plugin repository, which can also help speed up the process of getting accepted into the repository.

According to the official plugin description:

“Plugin Check is a tool for testing whether your plugin meets the required standards for the WordPress.org plugin directory. With this plugin you will be able to run most of the checks used for new submissions, and check if your plugin meets the requirements.

Additionally, the tool flags violations or concerns around plugin development best practices, from basic requirements like correct usage of internationalization functions to accessibility, performance, and security best practices.”

The Plugin Check Plugin also has a Plugin Namer feature that will check whether a plugin’s name is too similar to another plugin’s, may violate a trademark, complies with WordPress naming guidelines, and whether the name is too generic or broad.

The latest version of the plugin, 1.9.0, adds the following new features:

  • Support for the new WordPress 7.0 AI connectors, so the plugin can work with the WordPress AI infrastructure.
  • Updated block compatibility check for WordPress 7.0.
  • Checks for external URLs in top-level admin menus to avoid admin issues.
  • Additional tweaks, enhancements, and improvements.
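Beyond the admin screen, Plugin Check also ships with WP-CLI integration, which is useful for running the same checks in a script or CI step. A minimal sketch, assuming WP-CLI and the Plugin Check plugin are both installed and active on the site (the slug `my-plugin` is a placeholder, and flag support may vary by Plugin Check version):

```shell
# Run Plugin Check against an installed plugin by its slug
# ("my-plugin" is a hypothetical placeholder).
wp plugin check my-plugin

# The same run with machine-readable output, if your Plugin Check
# version supports the --format flag, e.g. for piping into CI tooling.
wp plugin check my-plugin --format=json
```

Consult the plugin’s own documentation for the exact flags your installed version accepts.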

User reviews share positive experiences:

“This plugin helped me identify areas of my plugin that I thought I had taken care of when developing my first plugin. I learned a lot through the feedback given and was able to re-run and eventually remove all errors.”

“Useful tool for catching issues early. If you’re serious about plugin development, this is a must-have.”

Download the official WordPress Plugin Checker Tool here:

Plugin Check (PCP) By WordPress.org

Google AI Mode’s Personal Intelligence Now Free In U.S. via @sejournal, @MattGSouthern

Google is opening Personal Intelligence to free-tier users in the U.S. Previously limited to paid AI Pro and AI Ultra subscribers, the feature is now expanding to users with personal Google accounts.

What’s New

Announced in a blog post, the expansion covers AI Mode in Search, the Gemini app, and Gemini in Chrome. AI Mode access is available today, while the Gemini app and Chrome rollouts are starting now.

Personal Intelligence connects a user’s Gmail and Google Photos to AI-powered search and chat responses. When enabled, AI Mode and Gemini can reference email confirmations, travel bookings, and photo memories to answer questions without the user providing that context manually.

What Changed

When Google first launched Personal Intelligence in January, you needed a subscription to try it. Today’s expansion removes that paywall for U.S. users on personal Google accounts.

The feature still isn’t available for Google Workspace business, enterprise, or education accounts.

You can opt in by connecting apps through your Search or Gemini settings, and you can turn connections on or off at any time.

What Google Says About Training Data

The blog post includes a disclosure about how data from connected accounts is handled.

According to the post, Gemini and AI Mode don’t train directly on your Gmail inbox or Google Photos library. Google describes the training as limited to “specific prompts in Gemini or AI Mode and the model’s responses.”

That means prompts generated while using Personal Intelligence could include details drawn from connected apps, even though Google says it doesn’t train directly on raw Gmail or Photos data.

Why This Matters

The move from paid to free changes the scale of this feature. When Personal Intelligence required a Pro or Ultra subscription, it reached a smaller audience of paying users. Opening it to anyone with a personal Google account in the U.S. puts it in front of a much larger base.

Increased personalization means AI Mode responses could vary more from user to user. Two people searching the same query may get different results if one has connected their Gmail and the other hasn’t. That makes it harder to benchmark what AI Mode shows for a given topic.

This feature could also change how people type queries into AI Mode. If Google already has the necessary context about a person, we might see searches become shorter. That’s an idea I explored in a video back when Google originally launched the feature.

Looking Ahead

No expansion beyond the U.S. or to Workspace accounts has been announced. Moving from paid to free in less than two months suggests Google is confident in this feature. How people respond to the linking of personal data to search will likely shape future rollout plans.