You’re Not Scaling Content. You’re Scaling Disappointment

Every few years, the SEO industry discovers a new way to mass-produce content and convinces itself that this time it’ll work. That the sheer volume of pages will overwhelm Google’s ability to assess quality. That if you just publish enough, the numbers will carry you.

It never works. It has never worked. And the people selling you these approaches know it has never worked. They just need it to work long enough to collect the invoice.

The Pattern Has A Name. It’s Called “Not Learning”

Let’s walk through the timeline, because apparently, we need to do this again.

2008-2011: Content Spinning

The pitch was simple: Take one article, run it through software that swaps synonyms, and suddenly you have 50 “unique” articles. The word “unique” was doing a lot of heavy lifting in that sentence. These articles read like someone had fed a dictionary through a blender. But even if the output had been polished, the premise was broken.

Here’s what the content spinners never grasped, and what their successors still don’t: Uniqueness is trivially easy to produce. A monkey mashing a keyboard produces unique content. The string of characters has never existed before – congratulations, it’s original. The hard part was never uniqueness. It was producing uniqueness that’s worth something. Unique and valuable are not synonyms, and the gap between them is where every scaling strategy falls apart.

Google tolerated it for a while. Its systems simply hadn’t caught up yet. Then Panda arrived in February 2011, hit nearly 12% of all search queries, and content farms watched their traffic evaporate overnight … I was “fortunate” enough to watch it happen in real time. Demand Media, the poster child of the content-farm model, reported a $6.4 million loss the following year.

The lesson was supposed to be clear: You cannot industrialize quality. Volume without substance is a liability with a longer tail than most budgets can absorb.

2015-2022: Programmatic SEO

The pitch evolved. Instead of spinning existing articles, you’d build templates and fill them with structured data. “Best [X] in [City]” pages, generated by the thousand, each one a thin wrapper around a database query. Some of these actually provided value – if the underlying data was good and the template served genuine user needs. Most didn’t. Most were just doorway pages wearing a better outfit. Google spent years refining its ability to detect and demote templated content that existed primarily for indexing purposes rather than for humans.

The lesson was supposed to be reinforced: scale works when there’s substance underneath. Without it, you’re just building a bigger target.

2023-Present: AI-Generated Content At Scale

And here we are again. Same pitch, shinier tools. “We can produce 500 articles a month!” Wonderful. Can you produce 500 articles a month that are worth reading? That contain something a reader couldn’t get from the results already in the index? That demonstrate any form of expertise, experience, or original thought?

No? Then you’re not scaling content. You’re scaling crawl budget waste.

And the pattern recognition failures are stunning. (This wasn’t subtle. Several of us noticed. No, we weren’t impressed.)

I recently came across an AI visibility tool – one that sells itself on helping you get discovered by AI systems – that had generated hundreds of pages following the pattern “best SEO agencies in {city}.” Déjà vu. Anyone who lived through programmatic SEO recognizes this immediately – it’s the 2017 playbook, except now the copy is written by an LLM. The template got a grammar upgrade and an “it’s AEO” stamp. The strategy didn’t.

Lily Ray flagged a similar case: a resume site with 500+ programmatic pages for “resume examples for {career}.” Every title following the exact same formula. Near-identical page templates. Misused AggregateRating schema. Obvious AI content throughout. Her summary was three words: “Worked until it didn’t.”

Image Credit: Pedro Dias

That phrase should be tattooed on every content scaling pitch deck. Worked until it didn’t. It always does. And then it doesn’t.

The irony of an AI optimization tool using mass-generated doorway pages to build its own visibility would be funny if it weren’t so perfectly on-brand for this industry.

The Qualitative Wall Doesn’t Move

Here’s what every generation of content scalers fails to understand: Google doesn’t evaluate content in isolation. It evaluates content relative to everything else in the index on the same topic.

Publishing 500 AI-generated articles about mortgage rates doesn’t make you an authority on mortgage rates. It makes you the 500th source saying the same thing in slightly different words. And Google already has 499 of those. It doesn’t need yours.

The qualitative wall is this: There is a minimum threshold of genuine value – original insight, lived experience, specific expertise, something the reader cannot get elsewhere – below which no amount of volume helps you. You can publish a million pages below that threshold. You’ll rank for nothing that matters.

And it gets worse. For the people scaling AI content specifically to gain visibility in AI-powered answer systems, the volume strategy doesn’t just fail; it actively backfires. A 2025 paper on retrieval evaluation for LLM-era systems introduces a metric that measures both helpful and distracting passages in retrieval. The finding that matters here: Low-utility content doesn’t sit quietly in the index waiting to be ignored. It can pull retrieval models off-track, degrading the quality of answers those systems produce. Your 500 thin articles aren’t just invisible. They’re noise. And if your site also has genuinely useful pages buried in that noise, congratulations – you’ve built your own interference pattern. The volume you thought would help discovery is actively drowning the pages that might have earned it.
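
To make the interference point concrete, here’s a deliberately crude sketch – a toy lexical retriever in Python, not the paper’s metric or any production system – of how a pile of near-duplicate thin pages can outscore the one genuinely useful page for a query, so the useful page never reaches the answer model:

```python
# Toy model: a crude lexical-overlap "retriever." Real systems use learned
# embeddings, but the crowding-out effect is the same in kind.
from collections import Counter

def overlap_score(query: str, doc: str) -> int:
    """Count query terms matched in the document (with multiplicity)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(count, d[term]) for term, count in q.items())

query = "current mortgage rates explained"

pages = {"useful": "lender survey data shows how current mortgage rates vary by credit score"}
for i in range(500):  # the 500 scaled articles, all stuffed with the same phrasing
    pages[f"thin_{i}"] = "current mortgage rates explained mortgage rates current guide"

top_10 = sorted(pages, key=lambda p: overlap_score(query, pages[p]), reverse=True)[:10]
print(top_10)  # all thin_* pages; the useful page is buried below the cutoff
```

Every slot a thin page occupies in that top-k is a slot the substantive page cannot.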

This isn’t a new insight. It’s the same insight that content spinners ignored in 2010, that programmatic SEO factories ignored in 2018, and that AI content mills are ignoring right now. The tools got better at producing text. The text still has nothing to say.

Google Told You. Repeatedly

Google’s spam policies define scaled content abuse as generating pages “for the primary purpose of search rankings and not helping users.” They explicitly list “using generative AI tools or other similar tools to generate many pages without adding value for users” as an example. This is not subtext. It’s text.

In June 2025, Google began issuing manual actions specifically for scaled content abuse, targeting sites that had been mass-publishing AI-generated content. Sites across the UK, US, and EU received Search Console notifications citing “aggressive spam techniques, such as large-scale content abuse.” Complete visibility drops. Pages didn’t slide down the rankings; they vanished.

The August 2025 spam update continued the enforcement. Subsequent core updates have kept tightening the screws. Each time, the same profile gets hit: high volume, low substance, no editorial oversight.

And each time, the affected site owners acted surprised. As if Google hadn’t been telling them this for 15 years.

‘But Our Content Is Ranking Well’

This is my favorite delusion. I’ve seen it at every stage of this cycle. “Our AI content is ranking, so it must be fine.” But low-value content that ranks well is precisely what algorithmic improvements and manual actions exist to correct. If your low-value content is ranking, the system hasn’t gotten to you yet. That’s all it means.

Google aggregates signals at the site level, not just the page level. You can have individual pages performing while the overall quality signal of your site degrades. And when the enforcement catches up (algorithmically or manually), it doesn’t pick off pages one by one. It hits the lot.

This is the content spinner’s fallacy, recycled: “It’s working right now, so it must be a strategy.” Demand Media’s content was ranking too. Right up until it wasn’t.

Lily captured this perfectly: “The case study: scaling AI content is working! The reality:” – followed by the traffic cliff that inevitably arrives. Every scaling success story is a snapshot taken before the correction. Nobody publishes the sequel.

Image Credit: Pedro Dias

The Economics Don’t Even Make Sense

Set aside the risk for a moment. Let’s talk about what you’re actually producing.

Five hundred AI-generated articles a month. Each one needs to be reviewed for accuracy – because LLMs hallucinate, and publishing incorrect information is a liability that extends well beyond SEO. Each one needs to be checked for originality – because if it reads like everything else in the index, it provides no added value; no competitive advantage. Each one needs editorial oversight to ensure it actually serves the audience you claim to serve.

If you’re doing all of that, the cost just moved – and possibly increased – while you convinced yourself you were being efficient. The “efficiency” of AI content generation evaporates the moment you apply the quality standards the content actually needs to meet.

And if you’re not doing any of that? You’re publishing unreviewed, unoriginal, potentially inaccurate content at scale under your brand name. I genuinely do not understand how anyone signs off on that.

Same Mistake, Better Tools

Content spinning. Programmatic SEO. AI-generated content at scale. Three different tools, one identical mistake: treating content as a manufacturing problem.

Manufacturing produces identical outputs at scale – that’s the point. Content derives its value from the opposite: from being specific, from being informed by experience, from saying something the rest of the index doesn’t. Every attempt to industrialize it crashes into that contradiction.

You can’t automate specificity. You can’t template experience. You can’t generate original thought by running a prompt through an LLM and hoping something useful comes out. And these constraints won’t be solved by the next model release. They’re baked into what makes content worth reading in the first place.

The people who keep chasing scale are optimizing for the wrong variable. They see “more content” as an input that produces “more traffic” as an output. But the function is not linear. It never was. It’s gated by quality, and no amount of volume bypasses the gate.

The Only Question That Matters

Before you publish anything (AI-assisted or otherwise), ask one question: What does this page offer that the reader cannot already get?

If the answer is “nothing, but we’ll have more pages indexed,” you’re not building a content strategy. You’re building a liability. And you’re doing it with the confidence of someone who has apparently never heard of Panda, never looked at what happened to programmatic SEO sites in 2022, and never read Google’s own spam policies.

You can convince yourself for as long as you want. But you’ll only fool everyone else for a while.

The wall is still there. It’s always been there. The tools keep changing. The wall doesn’t.

This post was originally published on The Inference.


Featured Image: Roman Samborskyi/Shutterstock

AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

A BuzzStream report analyzing 4 million AI citations found that press releases distributed through syndication channels barely appear in AI-generated answers.

Background

Press release distribution services have been marketing AI visibility as a selling point.

For example, ACCESS Newswire offers an “AI Visibility Checklist” for press releases. eReleases published a guide positioning press releases as tools for AI search visibility. Business Wire has written about optimizing releases for answer engine discovery.

BuzzStream’s data offers a different perspective.

What They Found

The report’s authors used XOFU, a citation monitoring tool from Citation Labs, to track where AI platforms pull their sources across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini. BuzzStream ran 3,600 prompts across 10 industries and collected data for one week.

Overall, news publications accounted for 14% of all citations in the dataset. But within that news category, the numbers drop off quickly for syndicated and distributed content.

Press releases published through syndication channels like Yahoo and MSN accounted for 0.32% of news citations and 0.04% of the entire dataset.

Direct citations from newswire services like PRNewswire made up 0.21% of the full dataset. They appeared most often in exploratory and informational prompts, but even there they only reached 0.37%.

Syndicated news content overall, including articles republished through MSN and Yahoo networks, accounted for 6.2% of news citations and 0.9% of the total dataset.
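
Those within-category and whole-dataset figures are consistent with each other – a quick arithmetic check, using only the numbers reported above:

```python
# Cross-checking the report's shares: (share within news) x (news share of all).
news_share_of_all = 0.14        # news publications: 14% of all citations

pr_within_news = 0.0032         # syndicated press releases: 0.32% of news citations
print(f"{pr_within_news * news_share_of_all:.3%}")  # 0.045%, matching the reported 0.04%

syndicated_within_news = 0.062  # syndicated news overall: 6.2% of news citations
print(f"{syndicated_within_news * news_share_of_all:.2%}")  # 0.87%, matching the reported 0.9%
```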

To identify syndicated content, BuzzStream cross-referenced author names against publications using its ListIQ tool and manually confirmed cases where the author name didn’t match the publication. The company acknowledged this method has limits, since some sites repost press releases without labeling them as such.

What The Data Shows About What Works

The report’s more interesting finding is what does get cited.

Original editorial content made up 81% of news citations in the dataset. Affiliate and review content accounted for the rest. The split held across prompt types, though affiliate content had its strongest showing in evaluative prompts at 39%.

The report broke prompts into three categories. Evaluative prompts like “Is Sony better than Bose?” generated the most news citations at 18% of all citations. Brand awareness prompts like “What is Chase known for?” generated the fewest at 7%. Informational prompts fell in between.

Editorial content that appeared most often in evaluative citations included head-to-head comparisons and cost analysis from outlets like Reuters, CNBC, and CNET.

The ChatGPT Newsroom Exception

One platform-level finding stood out. Internal press releases and newsroom content on company-owned domains accounted for 18% of ChatGPT’s citations in the dataset.

On Google’s AI platforms, that number dropped to around 3%.

BuzzStream cited examples including Iberdrola’s corporate press room and Target’s corporate subdomain. When prompted about Iberdrola’s role in renewables, ChatGPT cited a press release from Iberdrola’s own website. When asked about Target’s products, ChatGPT cited a 2015 press release from Target’s corporate domain.

BuzzStream said most earlier trends looked fairly uniform across platforms, with newsroom content on ChatGPT standing out as a clearer exception.

Why This Matters

The data challenges a premise that press release distribution services have been promoting. Multiple distribution platforms now market press releases as a path to AI visibility.

BuzzStream’s data suggests the distributed version of a press release, the one that lands on Yahoo Finance or MSN through a wire service, rarely becomes the version AI platforms cite. Original editorial coverage and owned newsroom content performed better by wide margins.

This connects to patterns we’ve been tracking. A BuzzStream report we covered in January found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots. Hostinger’s analysis of 66 billion bot requests showed AI training crawlers losing access while search bots expanded their reach.

The citation data suggests that even when syndicated content is accessible to AI crawlers, it rarely gets cited.

Google’s VP of Product for Search, Robby Stein, said in an interview we covered that being mentioned by other sites could help with AI recommendations, comparing AI’s behavior to how a human might research a question. That comparison favors earned editorial coverage over distributed press releases.

Adam Riemer made a related point in his Ask an SEO column, drawing a line between digital PR that builds brand coverage in publications and link building that focuses on placement metrics. BuzzStream’s data suggests that line extends to AI citations too.

For transparency, BuzzStream sells outreach and digital PR tools, so the finding that earned media outperforms distribution aligns with its business model. The company partnered with Citation Labs and used Citation Labs’ XOFU monitoring tool for the data collection.

Looking Ahead

This is part one of a multi-part analysis from BuzzStream. The single-week data window and large-brand focus are limits worth noting. Smaller brands with less existing editorial coverage may see different results.

Businesses investing in digital PR may want to look more closely at how different distribution channels perform in their category. The data suggests the channel you use can affect where your brand gets cited.


Featured Image: Cagkan Sayin/Shutterstock

How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

AI search has changed the rules, but has your tracking? 

How do you measure visibility without rankings?

Which prompts actually reflect real buyer intent?

And how do you avoid AI tracking data that looks useful, but isn’t?

Learn how to set up AI prompt tracking you can trust for smarter decisions.

ChatGPT, Google AI Overviews & Perplexity Are Reshaping Discoverability

In this on-demand webinar, Nick Gallagher, Sr. SEO Strategy Director at Conductor, breaks down how AI prompt tracking really works, why topics matter more than individual prompts, and how to avoid common mistakes that skew insights.

You’ll leave with a clear framework for measuring AI visibility in a way that reflects real user behavior and supports smarter search and content strategies.

You’ll Learn:

  • How AI prompt tracking works, and why setup matters more than volume
  • Best practices for choosing topics, prompts, and answer engines
  • Common mistakes that lead to inaccurate or misleading AI visibility data

Watch on-demand and learn how to set up AI prompt tracking you can trust for smarter search and content decisions.

Check out the full webinar for all the details.

Vibe Coding Plugins? Validate With Official WordPress Plugin Checker via @sejournal, @martinibuster

Vibe coding WordPress plugins with AI can raise concerns about whether a plugin follows best practices for compatibility and security. WordPress.org’s Plugin Check Plugin offers a solution for those who wish to check whether a plugin conforms to the official standards. The latest version can now connect to AI.

The plugin is developed by WordPress.org, and it’s meant as a tool for plugin authors to test their own plugins using the same kinds of tests run against new submissions to the official WordPress plugin repository, which can also help speed up the process of getting accepted into the repository.

According to the official plugin description:

“Plugin Check is a tool for testing whether your plugin meets the required standards for the WordPress.org plugin directory. With this plugin you will be able to run most of the checks used for new submissions, and check if your plugin meets the requirements.

Additionally, the tool flags violations or concerns around plugin development best practices, from basic requirements like correct usage of internationalization functions to accessibility, performance, and security best practices.”

The Plugin Check Plugin also has a Plugin Namer feature that checks whether a plugin’s name is too similar to another plugin’s, whether it may violate a trademark, whether it complies with WordPress naming guidelines, and whether the name is too generic or broad.

The latest version of the plugin, 1.9.0, adds the following new features:

  • Support for the new WordPress 7.0 AI connectors, so the plugin can work with the WordPress AI infrastructure.
  • An updated block compatibility check for WordPress 7.0.
  • Checks for external URLs in top-level admin menus to avoid admin issues.
  • Additional tweaks, enhancements, and improvements.
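
If you want these checks in an automated workflow, Plugin Check also exposes them through WP-CLI. Here’s a minimal sketch of a CI gate in Python – assuming WP-CLI is installed, the Plugin Check plugin is active on the test site, and “my-plugin” stands in for your plugin slug (verify the command syntax against the plugin’s documentation):

```python
# Sketch: run Plugin Check's WP-CLI command and fail the build on errors.
# "my-plugin" is a placeholder slug; adjust to your plugin.
import subprocess
import sys

result = subprocess.run(
    ["wp", "plugin", "check", "my-plugin"],
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    sys.exit("Plugin Check reported issues - fix them before submitting.")
```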

User reviews share positive experiences:

“This plugin helped me identify areas of my plugin that I thought I had taken care of. When developing my first plugin. I learned a lot through the feedback given and was able to re-run and eventually remove of all errors.”

“Useful tool for catching issues early. If you’re serious about plugin development, this is a must-have.”

Download the official WordPress Plugin Checker Tool here:

Plugin Check (PCP) By WordPress.org

PPC Automation Layering: How Smart Advertisers Combine Automation With Strategy via @sejournal, @brookeosmundson

Automation has been part of PPC management for longer than many marketers realize.

Bid adjustments, keyword expansion, and audience targeting have been guided by machine learning inside platforms like Google Ads for years. What has changed is the depth of automation now influencing campaign performance.

Smart Bidding, automated assets, dynamic targeting, and recommendation engines now handle many tasks that used to require daily manual management.

That shift has changed the job of a PPC manager.

This is where PPC automation layering becomes useful. Instead of relying on a single automated feature, marketers combine multiple tools and signals to shape how campaigns perform.

Read on to learn more about automation layering and helpful use cases to make your job easier.

What Is Automation Layering?

PPC automation layering is the strategic use of multiple automation tools and rules to manage and optimize PPC campaigns.

The main goal of PPC automation layering is to improve the efficiency and effectiveness of your PPC efforts.

Instead of relying on one automated feature, advertisers use several layers of automation working together. Each layer contributes different inputs, signals, or guardrails.

Some examples of automation layering include:

  • Smart Bidding strategies: Ad platforms take care of keyword bidding based on goals input within campaign settings. Examples of Smart Bidding include target CPA, target ROAS, maximize conversions, and more.
  • Automated PPC rules: Ad platforms can run specific account rules on a schedule based on the goal of the rule. An example would be to have Google Ads pause time-sensitive sale ads on a specific day and time.
  • PPC scripts: These are blocks of code that give ad platforms certain parameters to look out for and then have the platform take a specific action if those parameters are met (see the sketch after this list).
  • Google Ads Recommendations tab: Google reviews campaign performance and puts together recommendations for PPC marketers to either take action on or dismiss if irrelevant.
  • Third-party automation tools: Tools such as Google Ads Editor, Optmyzr, Adalysis, and more can help take PPC management to the next level with their automated software and additional insights.
  • AI-powered analysis tools: Platforms like ChatGPT, Copilot, Claude, and Gemini all have different capabilities, from campaign analysis to keyword research, that can streamline your workflow and efficiency.
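
To make the “PPC scripts” layer concrete: real Google Ads scripts are JavaScript that runs inside the platform, but the check-parameters-then-act pattern is the same in any language. A minimal sketch in Python, with hypothetical campaign fields, mirroring the pause-the-sale-ads rule above:

```python
# Pattern sketch only (real Google Ads scripts are platform-hosted JavaScript):
# if the watched parameters are met, take the configured action.
from datetime import datetime

SALE_ENDS = datetime(2026, 3, 31, 23, 59)  # hypothetical promotion end

def enforce_sale_window(campaign: dict) -> dict:
    """Pause time-sensitive sale campaigns once the promotion window closes."""
    if campaign["label"] == "spring-sale" and datetime.now() > SALE_ENDS:
        campaign["status"] = "PAUSED"  # in a real script: campaign.pause()
    return campaign

campaigns = [
    {"name": "Spring Sale - Shoes", "label": "spring-sale", "status": "ENABLED"},
    {"name": "Evergreen - Brand", "label": "evergreen", "status": "ENABLED"},
]
print([enforce_sale_window(c) for c in campaigns])
```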

See the pattern here?

Automation and machine learning turn the inputs PPC marketers provide into management outputs – and the quality of those inputs determines the quality of the campaign results.

How Has Automation Changed PPC Management?

Automation has gradually reshaped how paid media accounts are managed.

Ten to fifteen years ago, many PPC managers (including myself) spent most of their time adjusting bids, expanding keyword lists and negatives, and refining campaign structures. Success often came from tightly controlling every lever in the account.

Today, many of those levers are controlled by algorithms and automation.

Platforms automatically adjust bids in real time, assemble ad combinations dynamically, and expand targeting beyond the parameters advertisers originally set. These systems are designed to find conversions more efficiently than manual management.

In many cases, they do.

But automation introduces a new challenge. Algorithms are only as effective as the signals they receive.

For example, a few automation features built into the Google Ads platform include:

  • Keyword and campaign bid management.
  • Audience expansion.
  • Automated ad asset creation.
  • Keyword expansion.
  • And much more.

Automation has essentially taken over many of the day-to-day management tasks that PPC advertisers were used to doing.

While everyone can agree that easier paid media management sounds great, the learning curve for marketers has been far from perfect.

This leads us to the next big question: Will automation replace PPC marketers?

Does Automation Replace PPC Experts?

Job layoffs and restructuring due to automation are certainly a sensitive topic.

In reality, automation has already replaced many repetitive tasks that once filled a marketer’s day. Bid adjustments, keyword expansion, and ad rotation are increasingly handled by machine learning systems.

But it’s time to settle this debate once and for all.

Automation will not replace the need for PPC marketers.

What we have, and will continue to see, is a shift in the role of PPC experts.

Since automation and machine learning take the role of day-to-day management, PPC experts will spend more time doing things such as:

  • Analyzing data and data quality.
  • Strategic decision making.
  • Reviewing and optimizing outputs from automation.
  • Identifying growth opportunities.

Automation and machines are great at pulling levers, making overall campaign management more efficient.

But automation tools alone cannot replace human touch in creating a story based on data and insights.

This is the beauty of PPC automation layering.

Lean into what automation tools have to offer, which leaves you more time to become a more strategic PPC marketer.

PPC Automation Layering Use Cases

There are many ways that PPC marketers and automation technologies can work together for optimal campaign results.

Below are just a few examples of how to use automation layering to your advantage.

1. Make The Most Of Smart Bidding Capabilities

As mentioned earlier in this guide, Smart Bidding is one of the most useful PPC automation tools.

Google Ads has developed its own automated bidding strategies to take the guesswork out of manual bid management. These have been around since 2016, so this isn’t necessarily a “new” automation tool compared to others.

However, Smart Bidding is not foolproof and certainly not a “set and forget” strategy.

Smart Bidding outputs can only be as effective as the inputs given to the machine learning system.

So, how should you use automation layering for Smart Bidding?

First, pick a Smart Bidding strategy that best fits an individual campaign goal. You can choose from strategies such as target CPA, target ROAS, and maximize conversions, covered earlier in this guide.

Whenever starting a Smart Bidding strategy, it’s important to put some safeguards in place to reduce the volatility in campaign performance.

This could mean setting up an automated rule to alert you whenever significant volatility is reported, such as:

  • Spike in cost per click (CPC) or cost.
  • Dip in impressions, clicks, or cost.

Either of these scenarios could be due to learning curves in the algorithm, or it could be an indicator that your bids are too low or too high.

For example, say a campaign has a set target CPA goal of $25, but then all of a sudden, impressions and clicks fall off a cliff.

This could mean that the target CPA is set too low, and the algorithm has throttled ad serving to show ads only to the individual users it thinks are most likely to purchase.

Without having an alert system in place, campaign volatility could go unnoticed for hours, days, or even weeks if you’re not checking performance in a timely manner.
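
What that alert rule can look like in practice – a minimal monitoring sketch, assuming a daily metrics export with these hypothetical field names rather than any particular ad platform API:

```python
# Flag spikes or dips of more than 40% against a trailing 7-day baseline.
# Field names ("cpc", "cost", ...) are placeholders for your export's columns.
def volatility_alerts(today: dict, baseline: dict, threshold: float = 0.40) -> list:
    alerts = []
    for metric in ("cpc", "cost", "impressions", "clicks"):
        base = baseline[metric]
        change = (today[metric] - base) / base if base else 0.0
        if abs(change) > threshold:
            direction = "spike" if change > 0 else "dip"
            alerts.append(f"{metric}: {direction} of {change:+.0%} vs. 7-day average")
    return alerts

baseline = {"cpc": 1.20, "cost": 300.00, "impressions": 25_000, "clicks": 250}
today = {"cpc": 1.15, "cost": 90.00, "impressions": 6_000, "clicks": 70}

for alert in volatility_alerts(today, baseline):
    print(alert)  # e.g., "impressions: dip of -76% vs. 7-day average"
```

In the target CPA scenario above, the impressions and clicks dips would fire the same day instead of going unnoticed for weeks.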

2. Interact With Recommendations & Insights To Improve Automated Outputs

The goal of the ad algorithms is to get smarter every day and improve campaign performance.

But again, automated outputs are only as good as the input signals they were given at the beginning.

Many experienced PPC marketers tend to write off the Google Ads Recommendations or Insights tab due to perceptions of receiving irrelevant suggestions.

However, these systems are meant to learn from marketer input so they can optimize more effectively.

Just because a recommendation is given on the platform does not mean you have to implement it.

The beauty of this tool is you have the ability to dismiss the opportunity and then tell Google why you’re dismissing it.

There’s even an option for “this is not relevant.”

Be willing to interact with the Recommendations and Insights tab on a weekly or bi-weekly basis to help better train the algorithms for optimizing performance based on what you signal as important.

Regularly reviewing recommendations, rather than ignoring them completely, creates another layer of automation feedback inside the account.

3. Automate Competitor Analysis With Tools

It’s one thing to ensure your ads and campaigns are running smoothly at all times.

Next-level strategy is using automation to keep track of your competitors and what they’re doing.

Multiple third-party tools have competitor analysis features that alert you to items such as:

  • Keyword coverage.
  • Content marketing.
  • Social media presence.
  • Market share.
  • And more.

Keep in mind that these tools require a paid subscription, but many are useful in automation areas beyond competitor analysis.

Some of these tools include Moz, Google Trends, and Klue.

The goal is not simply to keep up with your competitors and copy what they’re doing.

Setting up automated competitor analysis helps you stay informed, so you can reinforce your market positioning or react in ways that set you apart from competitor content.

4. Using LLM Platforms To Accelerate PPC Analysis

A newer layer of automation is emerging through large language model platforms such as ChatGPT, Claude, Gemini, and Copilot.

It’s important to note that these platforms do not control campaign delivery. Instead, they help marketers process and interpret information faster.

LLM platforms can assist with tasks such as reviewing exported performance data, identifying patterns across campaigns, or summarizing performance changes between reporting periods.

For example, marketers can upload campaign reports and ask targeted questions about cost trends, conversion performance, or impression share shifts. The model can quickly highlight patterns that might otherwise require significant manual analysis.
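
As an illustration of that workflow, here’s a minimal sketch using the OpenAI Python SDK – any LLM platform with an API works similarly, and the report filename and model name are placeholders:

```python
# Feed an exported campaign report to an LLM and ask targeted questions.
# Assumes the OpenAI SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

with open("campaign_report.csv") as f:  # placeholder export filename
    report = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a PPC performance analyst."},
        {"role": "user", "content": (
            "Summarize cost trends, conversion performance, and impression "
            "share shifts between reporting periods in this export:\n\n" + report
        )},
    ],
)
print(response.choices[0].message.content)
```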

LLMs can also support areas like keyword expansion, creative brainstorming, and reporting summaries. When paired with platform automation features such as Smart Bidding or responsive ad formats, this approach helps advertisers produce stronger inputs for the algorithm to evaluate.

These tools should not replace human analysis, but they can accelerate many of the workflows surrounding campaign management.

In Summary

Automation now shapes nearly every part of paid media management.

Because of this, the role of the PPC practitioner continues to evolve.

Instead of managing every setting manually, marketers increasingly guide how automation systems operate. That guidance comes through better signals, stronger inputs, and thoughtful campaign structures.

Automation layering helps bring those elements together.

By combining platform automation, scripts, rules, external tools, and AI-driven analysis, advertisers can create a system where automation improves efficiency without losing control over their accounts.

The platforms may be running the mechanics of campaign delivery, but the direction still comes from the marketer.

Featured Image: Anton Vierietin/Shutterstock

FAQ

What are some key benefits of PPC automation layering?

PPC automation layering enhances the efficiency and effectiveness of PPC campaign management. It combines multiple automation tools and strategies like Smart Bidding, automated PPC rules, PPC scripts, and third-party platforms. By leveraging these technologies, marketers can focus on higher-level strategic tasks while the system manages routine tasks, such as keyword bidding, campaign bid management, and data analysis.

Will automation replace the need for PPC experts?

Automation will not replace PPC experts, but it will shift their role over time. While automation can handle many day-to-day management tasks like bid adjustments and ad scheduling, PPC experts should shift their focus to strategic decision-making, data analysis, and optimizing the outputs from automation tools. Human oversight remains crucial for effective campaign management.

What are some practical use cases for PPC automation layering?

Practical use cases for PPC automation layering include:

  • Smart Bidding strategies: Choosing the best bidding strategy (e.g., Target CPA, Target ROAS) and setting up rules to monitor performance volatility.
  • Recommendations & Insights: Regularly interacting with the Google Ads Recommendations and Insights tab to refine automated outputs.
  • Competitor Analysis: Using third-party tools like Semrush, Moz, or Google Trends to automate competitor analysis, staying informed on market positioning without manually tracking competitors.

These strategies help optimize campaign results while allowing more time for strategic analysis and decision-making.

What’s Hot, What’s Not: AI Search Changes In Q1 2026 [Recap] via @sejournal, @MattGSouthern

SEJ Live’s opening panel covered three months of AI search changes from three angles. I covered the news, SEJ Founder Loren Baker covered the business case, and Managing Editor Shelley Walsh covered content strategy. The on-demand recording is available here.

The session was called “What’s Hot, What’s Not,” and our goal was to identify the Q1 changes worth acting on in Q2, and what steps you can start taking today.

AI Overviews Are Costing Clicks, But Not All Of Them

The headline number from Q1 is that clicks drop when AI Overviews appear, but the loss varies by query type. Google’s VP of Product for Search, Robby Stein, said that when people scroll past an AI Overview without engaging, Google pulls it back for that query. The pages losing traffic are the ones answering simple questions. If someone searches for store hours or a return policy, the AI answers it, and nobody clicks through.

Shelley pointed to data from Amsive showing that branded queries with AI Overviews see an 18% increase in click-through rates. When people trust a source, they click through even when a summary is available.

She also pointed out that between half and three-quarters of all queries don’t trigger an AI Overview at all, depending on whose data you use. BrightEdge puts it at about half. Conductor puts it higher. Either way, there are entire categories of queries where you can still compete without an AI Overview in the way.

AI Mode And ChatGPT Are Both Selling Ads Now

AI Mode crossed 100 million monthly active users in the U.S. and India, with 75 million using it daily. During Q1, Google expanded how it monetizes AI-powered search, including Direct Offers in AI Mode, which lets businesses place promotions inside AI responses.

OpenAI began testing ads in ChatGPT for logged-in adult users on the Free and Go tiers. Industry reports put the early pricing at about $60 CPM with a $200,000 minimum commitment. OpenAI said the ads use the current conversation context for targeting.

Between Google and OpenAI, there are now multiple ways to place ads inside AI-generated answers. That wasn’t the case a few months ago.

Start tracking how often your brand gets mentioned in ChatGPT and AI Mode responses. You’ll want to know where you stand before deciding whether paid placement makes sense.

Replaceable Content Is What AI Threatens

Shelley’s segment drew a line between replaceable and valuable content. AI can summarize “what is SEO” or “how to change a bike chain” as well as any page that restates common knowledge. If your content is built on answering those kinds of questions, you’re competing directly with AI.

But content based on original research and firsthand experience is different. Shelley called this “golden knowledge,” borrowing a phrase from SEO veteran Grant Simmons. It’s your data and your experience. LLMs can’t generate it from training data.

Shelley said this looks like video interviews and original research, plus opinionated commentary from practitioners. She pointed to SEJ’s own changes as an example. SEJ has moved editorial toward experience-first formats and shifted revenue from programmatic to sponsorship and downloadable assets. Growing a direct audience is now the top priority.

The question to ask, she said, is why someone would click through from an AI summary to your site. If your content is a summary, there’s no reason. If it has depth, case studies, implementation detail, or nuance the summary can’t contain, that’s what drives the click.

Schema Markup Now Trains LLMs Across Platforms

Loren’s segment made the case that structured data has more value now than at any point in the last decade. Schema markup has always helped with rich snippets in Google. Now it also trains LLMs across platforms.

He gave an example of a client whose CEO had a common name: searching for that name plus “CEO” surfaced executives from other companies. Loren implemented organization and person schema. As soon as it went live, the correct CEO appeared in AI Overviews.
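
The shape of that fix, sketched with placeholder names and URLs rather than the client’s actual markup – person schema tied to an organization, serialized as the JSON-LD a page would embed:

```python
# Placeholder data: Person schema with jobTitle + worksFor disambiguates
# which executive with a common name belongs to which company.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Smith",            # placeholder executive name
    "jobTitle": "CEO",
    "worksFor": {
        "@type": "Organization",
        "name": "Example Corp",      # placeholder organization
        "url": "https://www.example.com",
    },
    "sameAs": ["https://www.linkedin.com/in/jane-smith-example"],
}

markup = f'<script type="application/ld+json">\n{json.dumps(person, indent=2)}\n</script>'
print(markup)  # embed in the page <head> or render via your CMS/tag manager
```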

Loren ranked the structured data signals AI systems respond to. Schema markup was at the top, followed by clean heading hierarchy and semantic HTML. He put llms.txt as an emerging standard worth watching.

On markdown, Loren noted that Cloudflare had announced a new /crawl endpoint that same morning. The feature renders sites in clean HTML and markdown for LLMs, plus structured JSON. Loren’s point was that if Cloudflare is building this at the platform level, and LLMs learn from markdown, then the tooling to serve it is growing.

Getting Schema Off The Dev Backlog

Loren’s most relatable point was about internal buy-in. Anyone who’s worked with development teams knows schema tends to sit in the backlog behind other priorities. But the conversation changes when you tie technical SEO work to AI visibility.

Tell a client that AI answers depend on structured data, and that ticket moves up the sprint board. He connected this to broader executive buy-in. C-suite leaders are seeing AI Overviews and ChatGPT answers about their companies, and they’re asking questions. That attention creates an opening to secure funding for technical work that would have stalled in previous years.

For ecommerce specifically, Loren recommended the Shopify Knowledge Base App, which crawls product content and generates question-and-answer pairs.

Looking Ahead

During Q&A, the panel was asked about AI-generated content. Shelley confirmed that Search Engine Journal’s content is human-written, and we plan to keep it that way. All three of us agreed that AI works best as an augmentation tool for writers who already know their subject.

The full session, including the Q&A, is available on demand. The other two sessions from the event are also available. CallRail’s Emily Popson covered AI search KPIs in Session 2, and Forrester’s Nikhil Lai covered answer engine strategy in Session 3.

Featured Image: Search Engine Journal

Google AI Mode’s Personal Intelligence Now Free In U.S. via @sejournal, @MattGSouthern

Google is opening Personal Intelligence to free-tier users in the U.S. Previously limited to paid AI Pro and AI Ultra subscribers, the feature is now expanding to users with personal Google accounts.

What’s New

Announced in a blog post, the expansion covers AI Mode in Search, the Gemini app, and Gemini in Chrome. AI Mode access is available today, while the Gemini app and Chrome rollouts are starting now.

Personal Intelligence connects a user’s Gmail and Google Photos to AI-powered search and chat responses. When enabled, AI Mode and Gemini can reference email confirmations, travel bookings, and photo memories to answer questions without the user providing that context manually.

What Changed

When Google first launched Personal Intelligence in January, you needed a subscription to try it. Today’s expansion removes that paywall for U.S. users on personal Google accounts.

The feature still isn’t available for Google Workspace business, enterprise, or education accounts.

You can opt in by connecting apps through your Search or Gemini settings, and you can turn connections on or off at any time.

What Google Says About Training Data

The blog post includes a disclosure about how data from connected accounts is handled.

According to the post, Gemini and AI Mode don’t train directly on your Gmail inbox or Google Photos library. Google describes the training as limited to “specific prompts in Gemini or AI Mode and the model’s responses.”

That means prompts generated while using Personal Intelligence could include details drawn from connected apps, even though Google says it doesn’t train directly on raw Gmail or Photos data.

Why This Matters

The move from paid to free changes the scale of this feature. When Personal Intelligence required a Pro or Ultra subscription, it reached a smaller audience of paying users. Opening it to anyone with a personal Google account in the U.S. puts it in front of a much larger base.

Increased personalization means AI Mode responses could vary more from user to user. Two people searching the same query may get different results if one has connected their Gmail and the other hasn’t. That makes it harder to benchmark what AI Mode shows for a given topic.

This feature could also change how people type queries into AI Mode. If Google already has the necessary context about a person, we might see searches become shorter. That’s an idea I explored in a video back when Google originally launched the feature.

Looking Ahead

No expansion beyond the U.S. or to Workspace accounts has been announced. Moving from paid to free in less than two months suggests Google is confident in this feature. How people respond to the linking of personal data to search will likely shape future rollout plans.

Google Removes ‘What People Suggest,’ Expands Health AI Tools via @sejournal, @MattGSouthern

Google has removed “What People Suggest,” a search feature that used AI to organize health perspectives from online discussions. The confirmation came as Google held its annual Check Up event, where it announced new AI health features for YouTube.

A Google spokesperson confirmed the removal to The Guardian, calling it part of a “broader simplification” of the search results page. The spokesperson said the decision was unrelated to the quality or safety of the feature. The Guardian also reported, citing three people familiar with the matter, that the feature was pulled after a trial run.

“What People Suggest” launched on mobile devices in the U.S. last year at Google’s annual health event, The Check Up. At the time, Karen DeSalvo, then Google’s chief health officer, said people value hearing from others who have experienced similar health conditions. DeSalvo retired in August and was succeeded by Dr. Michael Howell, who led this year’s Check Up announcements.

What Google Announced At The Check Up

At its 2026 Check Up event, Google announced AI health features across YouTube, Fitbit, and clinician education.

Google says health-related videos on YouTube have surpassed 1 trillion views globally. The company is adding an AI-powered “Ask” button on eligible health videos that lets viewers interact with the content.

Separately, Google is experimenting with AI to organize peer-reviewed scientific information and help present complex topics to broader audiences.

In the blog post, Howell said a central challenge has been connecting people to the right health information at the right time.

Google.org is committing $10 million to fund organizations that will reimagine clinician education for AI. The Council of Medical Specialty Societies and the American Academy of Nursing are the first partners.

Why This Matters

AI features in search results for health-related topics keep changing. Google pulled back one feature that showed forum-style perspectives and put new investment into medical education and structured video tools.

YouTube’s growing role in health-related AI Overviews is already documented. SE Ranking’s study of German health queries found YouTube was the most-cited domain in health AI Overviews, appearing more often than medical or government sites. Adding interactive AI on top of those videos could reinforce that pattern.

How We Got Here

Google’s AI features for health queries have faced pressure over the past year.

In January, the Guardian published an investigation that found health experts considered some AI Overview responses misleading for medical queries. Google disputed elements of the reporting but later removed AI Overviews for some specific health searches, including queries about liver function tests.

“What People Suggest” launched during the same period Google was expanding AI Overviews to thousands more health topics. Ahrefs data from November showed medical YMYL queries triggered AI Overviews 44.1% of the time, the highest rate among YMYL categories.

Looking Ahead

The pattern over the past year points to tighter guardrails around some health AI experiences. Whether that direction holds is less certain.

The removal of “What People Suggest,” and YouTube’s continued citation visibility in AI Overviews, could point that way. But Google’s track record with health-related AI features also shows these decisions can change quickly.


Featured Image: Mamun_Sheikh/Shutterstock

Google AI Overviews Cut Germany’s Top Organic CTR By 59% via @sejournal, @MattGSouthern

AI Overviews cut the click-through rate on Germany’s top organic position by 59%, according to a SISTRIX analysis of more than 100 million keywords.

The data, published by founder Johannes Beus, puts numbers on a pattern that multiple studies have now documented across different markets. The dataset stands out for its size and for offering category-level detail in Germany.

What The Data Shows

SISTRIX found that AI Overviews appear on roughly 20% of all keywords in German search results. That’s close to SE Ranking’s finding of about 21% in the US market from November, though the datasets cover different markets and use different methodologies.

When AIOs are present, the CTR at position 1 drops from 27% to 11%. Across all positions, a typical search leads to an organic click 57% of the time without an AIO. With one, that falls to 33%.

About 79% of AIOs in German results appear above the organic listings. The rest show up further down the page, after the first few organic results.

SISTRIX estimates the total cost at 265 million lost organic clicks per month across the German market. Averaged across all keywords, including those without AIOs, that works out to a 6.6% click loss.
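
The headline figures are relative reductions, which is worth keeping straight when comparing studies. Reproducing them from the CTRs reported above:

```python
# Relative CTR reduction: (ctr_without_aio - ctr_with_aio) / ctr_without_aio.
def relative_drop(without_aio: float, with_aio: float) -> float:
    return (without_aio - with_aio) / without_aio

print(f"{relative_drop(0.27, 0.11):.0%}")  # position 1: 27% -> 11% = 59% drop
print(f"{relative_drop(0.57, 0.33):.0%}")  # any organic click: 57% -> 33% = 42% drop
print(f"{relative_drop(0.15, 0.08):.0%}")  # Pew's US figures cited below: 47% drop
```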

Impact Varies By Category

SISTRIX broke down the data by category, and the gap between the most-affected and the least-affected is large.

Parenting and baby content sites lost over 24% of their organic clicks. The health and home improvement categories also showed losses well above average.

At the other end, recipe sites like Chefkoch lost about 1%. News and media sites lost 7.37%, below the average. Shopping and travel booking sites were barely affected.

SISTRIX’s Beus wrote that informational queries are hit hardest. Transactional searches, where people need to do something that an AI summary can’t replace, are mostly spared.

Biggest Losers

In raw numbers, Wikipedia leads with an estimated 31.6 million lost clicks per month in Germany, representing about 5% of its Google traffic in that market. DocCheck (4.8 million), AOK (4 million), ADAC (3.1 million), and Pons (3.1 million) follow.

By percentage, specialized health portals are hit hardest. SISTRIX data shows lumedis.de losing 30% of its organic clicks, ratgeber-herzinsuffizienz.de losing 29%, and herzstiftung.de losing 29%.

Sites with the smallest losses include wetter.com (0.18%), Booking.com (0.46%), Idealo (0.85%), and Amazon (1.73%).

How This Compares To Other Markets

The German data aligns with other regions, but comparisons are limited by differing methods and keywords.

A Pew Research Center study of US searches found that users clicked 8% of the time when an AIO was present, compared to 15% without one. That’s a 47% relative reduction. A GrowthSRC analysis found a 32% drop at position 1 in the US.

The German numbers (59% loss at position 1) are steeper. Whether that reflects actual differences between the markets or differences in measurement methodology isn’t clear from the available data.

Why This Matters

The category-level breakdown is the most useful part of this data if you’re managing organic search in European markets. A blended 6% average click loss sounds manageable, but losing 24% of clicks in your specific vertical isn’t.

SISTRIX’s data shows search volume alone doesn’t reliably predict traffic where AIOs are active. Whether an AIO appears and impacts CTR in your category must now be part of keyword analysis.

Looking Ahead

SISTRIX previously reported 17% AIO prevalence in Germany in August, and that’s now 20%. Growth slowed, but the feature’s presence in German search results continues expanding.

SISTRIX is a commercial SEO analytics provider. The data in this analysis is drawn from their proprietary keyword database.


Featured Image: Lana Sham/Shutterstock

Search Referral Traffic Down 60% For Small Publishers, Data Shows via @sejournal, @MattGSouthern

Search referral traffic to small publishers dropped 60% over two years, according to Chartbeat data reported exclusively by Axios.

That’s nearly three times the decline at large publishers. The analytics firm, which tracks traffic across thousands of client websites globally, segmented its network by size. Mid-sized publishers (10,000 to 100,000 daily page views) lost 47%, and large publishers (over 100,000 daily page views) lost 22%.

What’s New

Aggregate search traffic data from Chartbeat isn’t new. Our January Reuters Institute coverage cited Chartbeat data showing a 33% global decline in Google Search referrals. What’s new is the size breakdown: earlier Chartbeat figures were aggregates, while this data shows the losses are concentrated at the bottom.

Page views from Google Search fell 34% between December 2024 and December 2025, per the Chartbeat data. Google Discover, the other top referral source, fell 15% over the same period.

ChatGPT referrals grew more than 200% during that window, but chatbots still account for less than 1% of all publisher page view referrals. Growth in chatbot traffic hasn’t come close to replacing what search lost.

How Larger Publishers Are Compensating

Larger publishers appear to be finding alternative traffic sources to partially offset search losses. News and media sites in particular are seeing growth in direct and internal traffic as a share of referrals.

Email and app referrals are also growing among news publishers, per the Axios report. Our Reuters Institute coverage in January found the same pattern, with publishers saying they planned to invest more in owned channels.

Overall weekly page views across all publishers in Chartbeat’s network dropped 6% between 2024 and 2025. The firm attributed that to factors outside search, including a quieter election cycle, though that’s their interpretation, not a measured cause.

AI Referral Engagement Varies By Site Type

One finding that stands out for content strategy is that news and media sites get the highest total page views from AI chatbot referrals, but the lowest engagement per article.

Axios reports that this pattern suggests readers use news citations in chatbots for quick fact-checks or context, not deeper reading.

The other category in the data is “utilitarian sites,” meaning publishers offering health advice or gardening tips. Those publishers see fewer total referrals from AI platforms but more page views per article.

Methodology Notes

Chartbeat sells analytics tools to publishers and has tracked traffic across its client network for close to two decades. Its data covers thousands of websites globally but skews toward news and media publishers.

Small publishers in this data average 1,000 to 10,000 daily page views, medium is 10,000 to 100,000, and large is over 100,000.

Axios received the data exclusively, and Chartbeat hasn’t published it independently.

Why This Matters

Search referral traffic loss is hitting sites with the fewest resources to build alternative traffic.

Most reporting on search traffic declines has treated publishers as a single group. This Chartbeat report breaks the numbers down by publisher size. For anyone working with smaller publishers, these numbers should change the conversation.

AI chatbot users click to news sites for quick checks but spend more time on how-to content. That means the value of an AI referral depends on what you publish.

Looking Ahead

We’ll be watching for Chartbeat to publish the full data set. How chatbot referral engagement differs by site type is still early data worth tracking.


Featured Image: fizkes/Shutterstock