PPC Automation Layering: How Smart Advertisers Combine Automation With Strategy via @sejournal, @brookeosmundson

Automation has been part of PPC management for longer than many marketers realize.

Bid adjustments, keyword expansion, and audience targeting have been guided by machine learning inside platforms like Google Ads for years. What has changed is the depth of automation now influencing campaign performance.

Smart Bidding, automated assets, dynamic targeting, and recommendation engines now handle many tasks that used to require daily manual management.

That shift has changed the job of a PPC manager.

This is where PPC automation layering becomes useful. Instead of relying on a single automated feature, marketers combine multiple tools and signals to shape how campaigns perform.

Read on to learn more about automation layering and helpful use cases to make your job easier.

What Is Automation Layering?

PPC automation layering is the strategic use of multiple automation tools and rules to manage and optimize PPC campaigns.

The main goal of automation layering is to improve both the efficiency and effectiveness of your PPC efforts.

Instead of relying on one automated feature, advertisers use several layers of automation working together. Each layer contributes different inputs, signals, or guardrails.

Some examples of automation layering include:

  • Smart Bidding strategies: Ad platforms take care of keyword bidding based on goals input within campaign settings. Examples of Smart Bidding include target CPA, target ROAS, maximize conversions, and more.
  • Automated PPC rules: Ad platforms can run specific account rules on a schedule based on the goal of the rule. An example would be to have Google Ads pause time-sensitive sale ads on a specific day and time.
  • PPC scripts: These are blocks of code that give ad platforms certain parameters to look out for and then have the platform take a specific action if those parameters are met.
  • Google Ads Recommendations tab: Google reviews campaign performance and puts together recommendations for PPC marketers to either take action on or dismiss if irrelevant.
  • Third-party automation tools: Tools such as Google Ads Editor, Optmyzr, Adalysis, and more can help take PPC management to the next level with their automated software and additional insights.
  • AI-powered analysis tools: Platforms like ChatGPT, Copilot, Claude, and Gemini all have different capabilities, from campaign analysis to keyword research, that can streamline your workflow and improve your efficiency.
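To make the PPC scripts and automated rules bullets concrete, here is the kind of check such a layer encodes. Real Google Ads scripts run in JavaScript against the platform's API; this standalone Python sketch only mirrors the logic, and the campaign data and CPA threshold are hypothetical:

```python
# Illustrative only: mirrors the logic of a "flag overspending campaigns" rule.
# Actual Google Ads scripts run in JavaScript inside the ad platform.

def campaigns_to_pause(campaigns, max_cpa):
    """Return names of enabled campaigns whose cost per conversion exceeds max_cpa."""
    flagged = []
    for c in campaigns:
        # Skip paused campaigns and campaigns with no conversions to divide by.
        if c["status"] != "ENABLED" or c["conversions"] == 0:
            continue
        cpa = c["cost"] / c["conversions"]
        if cpa > max_cpa:
            flagged.append(c["name"])
    return flagged

# Hypothetical campaign data, as if exported from the platform.
campaigns = [
    {"name": "Brand", "status": "ENABLED", "cost": 120.0, "conversions": 10},
    {"name": "Sale - Weekend", "status": "ENABLED", "cost": 300.0, "conversions": 4},
    {"name": "Old Promo", "status": "PAUSED", "cost": 50.0, "conversions": 1},
]

print(campaigns_to_pause(campaigns, max_cpa=25.0))  # Sale - Weekend has a $75 CPA
```

In the live platform, the equivalent script would take the action (pausing, labeling, or emailing an alert) rather than just returning a list.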

See the pattern here?

Automation and machine learning turn the inputs PPC marketers provide into management outputs, and better inputs produce better campaign results.

How Has Automation Changed PPC Management?

Automation has gradually reshaped how paid media accounts are managed.

Ten to fifteen years ago, many PPC managers (including myself) spent most of their time adjusting bids, expanding keyword lists and negatives, and refining campaign structures. Success often came from tightly controlling every lever in the account.

Today, many of those levers are controlled by algorithms and automation.

Platforms automatically adjust bids in real time, assemble ad combinations dynamically, and expand targeting beyond the parameters advertisers originally set. These systems are designed to find conversions more efficiently than manual management.

In many cases, they do.

But automation introduces a new challenge. Algorithms are only as effective as the signals they receive.

For example, a few automation features built into the Google Ads platform include:

  • Keyword and campaign bid management.
  • Audience expansion.
  • Automated ad asset creation.
  • Keyword expansion.
  • And much more.

Automation has essentially taken over many of the day-to-day management tasks that PPC advertisers were used to doing.

While everyone can agree that easier paid media management sounds great, the learning curve for marketers has been steep.

This leads us to the next big question: Will automation replace PPC marketers?

Does Automation Replace PPC Experts?

Job layoffs and restructuring due to automation are certainly a sensitive topic.

In reality, automation has already replaced many repetitive tasks that once filled a marketer’s day. Bid adjustments, keyword expansion, and ad rotation are increasingly handled by machine learning systems.

But it’s time to settle this debate once and for all.

Automation will not replace the need for PPC marketers.

What we have, and will continue to see, is a shift in the role of PPC experts.

Since automation and machine learning take the role of day-to-day management, PPC experts will spend more time doing things such as:

  • Analyzing data and data quality.
  • Strategic decision making.
  • Reviewing and optimizing outputs from automation.
  • Identifying growth opportunities.

Automation and machines are great at pulling levers, making overall campaign management more efficient.

But automation tools alone cannot replace the human touch needed to turn data and insights into a story.

This is the beauty of PPC automation layering.

Lean into what automation tools have to offer; doing so frees up time for you to become a more strategic PPC marketer.

PPC Automation Layering Use Cases

There are many ways that PPC marketers and automation technologies can work together for optimal campaign results.

Below are just a few examples of how to use automation layering to your advantage.

1. Make The Most Of Smart Bidding Capabilities

As mentioned earlier in this guide, Smart Bidding is one of the most useful PPC automation tools.

Google Ads has developed its own automated bidding strategies to take the guesswork out of manual bid management. These have been around since 2016, so this isn’t necessarily a “new” automation tool compared to others.

However, Smart Bidding is not foolproof and certainly not a “set and forget” strategy.

Smart Bidding outputs can only be as effective as the inputs given to the machine learning system.

So, how should you use automation layering for Smart Bidding?

First, pick a Smart Bidding strategy that best fits an individual campaign goal. Options include Target CPA, Target ROAS, Maximize Conversions, and Maximize Conversion Value.

Whenever starting a Smart Bidding strategy, it’s important to put some safeguards in place to reduce the volatility in campaign performance.

This could mean setting up an automated rule to alert you whenever significant volatility is reported, such as:

  • Spike in cost per click (CPC) or cost.
  • Dip in impressions, clicks, or cost.

Either of these scenarios could be due to the algorithm's learning period, or it could be an indicator that your bids are set too low or too high.

For example, say a campaign has a set target CPA goal of $25, but then all of a sudden, impressions and clicks fall off a cliff.

This could mean that the target CPA is set too low, and the algorithm has throttled ad serving, showing ads only to the users it predicts are most likely to purchase.

Without having an alert system in place, campaign volatility could go unnoticed for hours, days, or even weeks if you’re not checking performance in a timely manner.
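The alert logic described above can be sketched in a few lines. This is a minimal illustration, not a platform feature: the 50% threshold, the 7-day baseline, and the metric values are all hypothetical choices you would tune for your own account:

```python
def volatility_alerts(history, today, threshold=0.5):
    """Flag metrics that moved more than `threshold` (e.g., 50%) vs. the trailing average."""
    alerts = []
    for metric, value in today.items():
        baseline = sum(day[metric] for day in history) / len(history)
        if baseline == 0:
            continue  # No baseline to compare against.
        change = (value - baseline) / baseline
        if abs(change) > threshold:
            direction = "spike" if change > 0 else "dip"
            alerts.append(f"{metric}: {direction} of {change:+.0%} vs. 7-day average")
    return alerts

# Hypothetical trailing-7-day baseline vs. today's numbers.
history = [{"impressions": 1000, "clicks": 50, "cpc": 1.20} for _ in range(7)]
today = {"impressions": 300, "clicks": 14, "cpc": 2.60}

for alert in volatility_alerts(history, today):
    print(alert)
```

Inside Google Ads, the same idea is configured as an automated rule or script that emails you when the condition fires, so volatility surfaces within hours instead of weeks.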

2. Interact With Recommendations & Insights To Improve Automated Outputs

The goal of the ad algorithms is to get smarter every day and improve campaign performance.

But again, automated outputs are only as good as the input signals they're given at the start.

Many experienced PPC marketers tend to write off the Google Ads Recommendations or Insights tab due to perceptions of receiving irrelevant suggestions.

However, these systems are designed to learn from marketer input so they can optimize more effectively over time.

Just because a recommendation is given on the platform does not mean you have to implement it.

The beauty of this tool is you have the ability to dismiss the opportunity and then tell Google why you’re dismissing it.

There’s even an option for “this is not relevant.”

Be willing to interact with the Recommendations and Insights tab on a weekly or bi-weekly basis to help better train the algorithms for optimizing performance based on what you signal as important.

Regularly reviewing recommendations, rather than ignoring them completely, creates another layer of automation feedback inside the account.

3. Automate Competitor Analysis With Tools

It’s one thing to ensure your ads and campaigns are running smoothly at all times.

Next-level strategy is using automation to keep track of your competitors and what they’re doing.

Multiple third-party tools have competitor analysis features to alert you on items such as:

  • Keyword coverage.
  • Content marketing.
  • Social media presence.
  • Market share.
  • And more.

Keep in mind that most of these tools require a paid subscription, though many are useful in automation areas beyond competitor analysis.

Some of these tools include Semrush, Moz, and Klue, along with the free Google Trends.

The goal is not simply to keep up with your competitors and copy what they’re doing.

Setting up automated competitor analysis helps you stay informed, so you can reinforce your market positioning or differentiate your content from competitors'.

4. Use LLM Platforms To Accelerate PPC Analysis

A newer layer of automation is emerging through large language model platforms such as ChatGPT, Claude, Gemini, and Copilot.

It’s important to note that these platforms do not control campaign delivery. Instead, they help marketers process and interpret information faster.

LLM platforms can assist with tasks such as reviewing exported performance data, identifying patterns across campaigns, or summarizing performance changes between reporting periods.

For example, marketers can upload campaign reports and ask targeted questions about cost trends, conversion performance, or impression share shifts. The model can quickly highlight patterns that might otherwise require significant manual analysis.

LLMs can also support areas like keyword expansion, creative brainstorming, and reporting summaries. When paired with platform automation features such as Smart Bidding or responsive ad formats, this approach helps advertisers produce stronger inputs for the algorithm to evaluate.
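Before handing an export to an LLM, it can help to compute the period-over-period deltas yourself so the model reasons over clean numbers rather than raw rows. A small sketch of that preparation step (the column names, campaigns, and figures are hypothetical):

```python
import csv
import io

# Hypothetical performance export: one row per campaign per reporting period.
export = """campaign,period,cost,conversions
Brand,previous,900,60
Brand,current,1100,62
Generic,previous,1500,30
Generic,current,1450,18
"""

def period_summary(csv_text):
    """Summarize CPA shifts between periods, e.g., as a preamble to an LLM prompt."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    by_campaign = {}
    for r in rows:
        by_campaign.setdefault(r["campaign"], {})[r["period"]] = r
    lines = []
    for name, periods in sorted(by_campaign.items()):
        prev, cur = periods["previous"], periods["current"]
        cpa_prev = float(prev["cost"]) / int(prev["conversions"])
        cpa_cur = float(cur["cost"]) / int(cur["conversions"])
        lines.append(f"{name}: CPA moved from ${cpa_prev:.2f} to ${cpa_cur:.2f}")
    return "\n".join(lines)

print(period_summary(export))
```

A summary like this, pasted above your question ("why did Generic's CPA rise?"), gives the model verified numbers to interpret instead of asking it to do the arithmetic itself.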

These tools should not replace human analysis, but they can accelerate many of the workflows surrounding campaign management.

In Summary

Automation now shapes nearly every part of paid media management.

Because of this, the role of the PPC practitioner continues to evolve.

Instead of managing every setting manually, marketers increasingly guide how automation systems operate. That guidance comes through better signals, stronger inputs, and thoughtful campaign structures.

Automation layering helps bring those elements together.

By combining platform automation, scripts, rules, external tools, and AI-driven analysis, advertisers can create a system where automation improves efficiency without losing control over their accounts.

The platforms may be running the mechanics of campaign delivery, but the direction still comes from the marketer.

Featured Image: Anton Vierietin/Shutterstock

FAQ

What are some key benefits of PPC automation layering?

PPC automation layering enhances the efficiency and effectiveness of PPC campaign management. It combines multiple automation tools and strategies like Smart Bidding, automated PPC rules, PPC scripts, and third-party platforms. By leveraging these technologies, marketers can focus on higher-level strategic tasks while the system manages routine tasks, such as keyword bidding, campaign bid management, and data analysis.

Will automation replace the need for PPC experts?

Automation will not replace PPC experts, but it will shift their role over time. While automation can handle many day-to-day management tasks like bid adjustments and ad scheduling, PPC experts should shift their focus to strategic decision-making, data analysis, and optimizing the outputs from automation tools. Human oversight remains crucial for effective campaign management.

What are some practical use cases for PPC automation layering?

Practical use cases for PPC automation layering include:

  • Smart Bidding strategies: Choosing the best bidding strategy (e.g., Target CPA, Target ROAS) and setting up rules to monitor performance volatility.
  • Recommendations & Insights: Regularly interacting with the Google Ads Recommendations and Insights tab to refine automated outputs.
  • Competitor Analysis: Using third-party tools like Semrush, Moz, or Google Trends to automate competitor analysis, staying informed on market positioning without manually tracking competitors.

These strategies help optimize campaign results while allowing more time for strategic analysis and decision-making.

What’s Hot, What’s Not: AI Search Changes In Q1 2026 [Recap] via @sejournal, @MattGSouthern

SEJ Live’s opening panel covered three months of AI search changes from three angles. I covered the news, SEJ Founder Loren Baker covered the business case, and Managing Editor Shelley Walsh covered content strategy. An on-demand recording of the session is available.

The session was called “What’s Hot, What’s Not,” and our goal was to identify the Q1 changes worth acting on in Q2, and what steps you can start taking today.

AI Overviews Are Costing Clicks, But Not All Of Them

The headline number from Q1 is that clicks drop when AI Overviews appear, but the loss varies by query type. Google’s VP of Search, Robby Stein, said that when people scroll past an AI Overview without engaging, Google pulls it back for that query. The pages losing traffic are the ones answering simple questions. If someone searches for store hours or a return policy, the AI answers it, and nobody clicks through.

Shelley pointed to data from Amsive showing that branded queries with AI Overviews see an 18% increase in click-through rates. When people trust a source, they click through even when a summary is available.

She also pointed out that between half and three-quarters of all queries don’t trigger an AI Overview at all, depending on whose data you use. BrightEdge puts it at about half. Conductor puts it higher. Either way, there are entire categories of queries where you can still compete without an AI Overview in the way.

AI Mode And ChatGPT Are Both Selling Ads Now

AI Mode crossed 100 million monthly active users in the U.S. and India, with 75 million using it daily. During Q1, Google expanded how it monetizes AI-powered search, including Direct Offers in AI Mode, which lets businesses place promotions inside AI responses.

OpenAI began testing ads in ChatGPT for logged-in adult users on the Free and Go tiers. Industry reports put the early pricing at about $60 CPM with a $200,000 minimum commitment. OpenAI said the ads use the current conversation context for targeting.

Between Google and OpenAI, there are now multiple ways to place ads inside AI-generated answers. That wasn’t the case a few months ago.

Start tracking how often your brand gets mentioned in ChatGPT and AI Mode responses. You’ll want to know where you stand before deciding whether paid placement makes sense.

Replaceable Content Is What AI Threatens

Shelley’s segment drew a line between replaceable and valuable content. AI can summarize “what is SEO” or “how to change a bike chain” as well as any page that restates common knowledge. If your content is built on answering those kinds of questions, you’re competing directly with AI.

But content based on original research and firsthand experience is different. Shelley called this “golden knowledge,” borrowing a phrase from SEO veteran Grant Simmons. It’s your data and your experience. LLMs can’t generate it from training data.

Shelley said this looks like video interviews and original research, plus opinionated commentary from practitioners. She pointed to SEJ’s own changes as an example. SEJ has moved editorial toward experience-first formats and shifted revenue from programmatic to sponsorship and downloadable assets. Growing a direct audience is now the top priority.

The question to ask, she said, is why someone would click through from an AI summary to your site. If your content is a summary, there’s no reason. If it has depth, case studies, implementation detail, or nuance the summary can’t contain, that’s what drives the click.

Schema Markup Now Trains LLMs Across Platforms

Loren’s segment made the case that structured data has more value now than at any point in the last decade. Schema markup has always helped with rich snippets in Google. Now it also trains LLMs across platforms.

He shared an example of a client whose CEO shared a common name, and searching for that name plus “CEO” surfaced executives from other companies. Loren implemented organization and person schema. As soon as it went live, the correct CEO appeared in AI Overviews.
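Markup like what Loren describes is typically expressed as JSON-LD embedded in the page head. A minimal sketch that generates Person-plus-Organization markup (the name, title, and URL here are placeholders, not the client's actual details):

```python
import json

def person_schema(name, job_title, org_name, org_url):
    """Build schema.org Person markup tying an executive to their organization."""
    return {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "worksFor": {
            "@type": "Organization",
            "name": org_name,
            "url": org_url,
        },
    }

# Placeholder values; the serialized result is embedded in the page
# inside a <script type="application/ld+json"> tag.
markup = person_schema("Jane Doe", "CEO", "Example Corp", "https://www.example.com")
print(json.dumps(markup, indent=2))
```

The `worksFor` link is what disambiguates a common name: it explicitly ties the person to one organization rather than leaving search systems to guess.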

Loren ranked the structured data signals AI systems respond to. Schema markup was at the top, followed by clean heading hierarchy and semantic HTML. He put llms.txt as an emerging standard worth watching.

On markdown, Loren noted that Cloudflare had announced a new /crawl endpoint that same morning. The feature renders sites in clean HTML and markdown for LLMs, plus structured JSON. Loren’s point was that if Cloudflare is building this at the platform level, and LLMs learn from markdown, then the tooling to serve it is growing.

Getting Schema Off The Dev Backlog

Loren’s most relatable point was about internal buy-in. Anyone who’s worked with development teams knows schema tends to sit in the backlog behind other priorities. But the conversation changes when you tie technical SEO work to AI visibility.

Tell a client that AI answers depend on structured data, and that ticket moves up the sprint board. He connected this to broader executive buy-in. C-suite leaders are seeing AI Overviews and ChatGPT answers about their companies, and they’re asking questions. That attention creates an opening to secure funding for technical work that would have stalled in previous years.

For ecommerce specifically, Loren recommended the Shopify Knowledge Base App, which crawls product content and generates question-and-answer pairs.

Looking Ahead

During Q&A, the panel was asked about AI-generated content. Shelley confirmed that Search Engine Journal’s content is human-written, and we plan to keep it that way. All three of us agreed that AI works best as an augmentation tool for writers who already know their subject.

The full session, including the Q&A, is available on demand. The other two sessions from the event are also available. CallRail’s Emily Popson covered AI search KPIs in Session 2, and Forrester’s Nikhil Lai covered answer engine strategy in Session 3.

Featured Image: Search Engine Journal

Google AI Mode’s Personal Intelligence Now Free In U.S. via @sejournal, @MattGSouthern

Google is opening Personal Intelligence to free-tier users in the U.S. Previously limited to paid AI Pro and AI Ultra subscribers, the feature is now expanding to users with personal Google accounts.

What’s New

Announced in a blog post, the expansion covers AI Mode in Search, the Gemini app, and Gemini in Chrome. AI Mode access is available today, while the Gemini app and Chrome rollouts are starting now.

Personal Intelligence connects a user’s Gmail and Google Photos to AI-powered search and chat responses. When enabled, AI Mode and Gemini can reference email confirmations, travel bookings, and photo memories to answer questions without the user providing that context manually.

What Changed

When Google first launched Personal Intelligence in January, you needed a subscription to try it. Today’s expansion removes that paywall for U.S. users on personal Google accounts.

The feature still isn’t available for Google Workspace business, enterprise, or education accounts.

You can opt in by connecting apps through Search or Gemini settings, and you can turn connections on or off at any time.

What Google Says About Training Data

The blog post includes a disclosure about how data from connected accounts is handled.

According to the post, Gemini and AI Mode don’t train directly on your Gmail inbox or Google Photos library. Google describes the training as limited to “specific prompts in Gemini or AI Mode and the model’s responses.”

That means prompts generated while using Personal Intelligence could include details drawn from connected apps, even though Google says it doesn’t train directly on raw Gmail or Photos data.

Why This Matters

The move from paid to free changes the scale of this feature. When Personal Intelligence required a Pro or Ultra subscription, it reached a smaller audience of paying users. Opening it to anyone with a personal Google account in the U.S. puts it in front of a much larger base.

Increased personalization means AI Mode responses could vary more from user to user. Two people searching the same query may get different results if one has connected their Gmail and the other hasn’t. That makes it harder to benchmark what AI Mode shows for a given topic.

This feature could also change how people type queries into AI Mode. If Google already has the necessary context about a person, we might see searches become shorter. That’s an idea I explored in a video when Google originally launched the feature.

Looking Ahead

No expansion beyond the U.S. or to Workspace accounts has been announced. Moving from paid to free in less than two months suggests Google is confident in this feature. How people respond to the linking of personal data to search will likely shape future rollout plans.

Google Removes ‘What People Suggest,’ Expands Health AI Tools via @sejournal, @MattGSouthern

Google has removed “What People Suggest,” a search feature that used AI to organize health perspectives from online discussions. The confirmation came as Google held its annual Check Up event, where it announced new AI health features for YouTube.

A Google spokesperson confirmed the removal to The Guardian, calling it part of a “broader simplification” of the search results page. The spokesperson said the decision was unrelated to the quality or safety of the feature. The Guardian also reported, citing three people familiar with the matter, that the feature was pulled after a trial run.

“What People Suggest” launched on mobile devices in the U.S. last year at Google’s annual health event, The Check Up. At the time, Karen DeSalvo, then Google’s chief health officer, said people value hearing from others who have experienced similar health conditions. DeSalvo retired in August and was succeeded by Dr. Michael Howell, who led this year’s Check Up announcements.

What Google Announced At The Check Up

At its 2026 Check Up event, Google announced AI health features across YouTube, Fitbit, and clinician education.

Google says health-related videos on YouTube have surpassed 1 trillion views globally. The company is adding an AI-powered “Ask” button on eligible health videos that lets viewers interact with the content.

Separately, Google is experimenting with AI to organize peer-reviewed scientific information and help present complex topics to broader audiences.

In the blog post, Howell said a central challenge has been connecting people to the right health information at the right time.

Google.org is committing $10 million to fund organizations that will reimagine clinician education for AI. The Council of Medical Specialty Societies and the American Academy of Nursing are the first partners.

Why This Matters

AI features in search results for health-related topics keep changing. Google pulled back one feature that showed forum-style perspectives and put new investment into medical education and structured video tools.

YouTube’s growing role in health-related AI Overviews is already documented. SE Ranking’s study of German health queries found YouTube was the most-cited domain in health AI Overviews, appearing more often than medical or government sites. Adding interactive AI on top of those videos could reinforce that pattern.

How We Got Here

Google’s AI features for health queries have faced pressure over the past year.

In January, the Guardian published an investigation that found health experts considered some AI Overview responses misleading for medical queries. Google disputed elements of the reporting but later removed AI Overviews for some specific health searches, including queries about liver function tests.

“What People Suggest” launched during the same period Google was expanding AI Overviews to thousands more health topics. Ahrefs data from November showed medical YMYL queries triggered AI Overviews 44.1% of the time, the highest rate among YMYL categories.

Looking Ahead

The pattern over the past year points to tighter guardrails around some health AI experiences. Whether that direction holds is less certain.

The removal of “What People Suggest,” and YouTube’s continued citation visibility in AI Overviews, could point that way. But Google’s track record with health-related AI features also shows these decisions can change quickly.


Featured Image: Mamun_Sheikh/Shutterstock

Google AI Overviews Cut Germany’s Top Organic CTR By 59% via @sejournal, @MattGSouthern

AI Overviews cut the click-through rate on Germany’s top organic position by 59%, according to a SISTRIX analysis of more than 100 million keywords.

The data, published by founder Johannes Beus, puts numbers on a pattern that multiple studies have now documented across different markets. The dataset stands out for its size and for offering category-level detail in Germany.

What The Data Shows

SISTRIX found that AI Overviews appear on roughly 20% of all keywords in German search results. That’s close to SE Ranking’s finding of about 21% in the US market from November, though the datasets cover different markets and use different methodologies.

When AIOs are present, the CTR at position 1 drops from 27% to 11%. Across all positions, a typical search leads to an organic click 57% of the time without an AIO. With one, that falls to 33%.

About 79% of AIOs in German results appear above the organic listings. The rest show up further down the page, after the first few organic results.

SISTRIX estimates the total cost at 265 million lost organic clicks per month across the German market. Averaged across all keywords, including those without AIOs, that works out to a 6.6% click loss.
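As a rough check on how a blended figure like this is constructed, here is a simplified, equal-weight version of the calculation using the headline numbers above. SISTRIX's published 6.6% figure is presumably weighted by actual search volume, so the result won't match exactly:

```python
# Simplified, equal-weight estimate of the market-wide click loss.
# SISTRIX's published 6.6% figure is volume-weighted, so this won't match exactly.

aio_share = 0.20        # share of German keywords showing an AI Overview
ctr_without_aio = 0.57  # probability a search leads to an organic click, no AIO
ctr_with_aio = 0.33     # the same probability when an AIO is present

# Expected click rate across all keywords, with AIOs in the mix.
blended_with = aio_share * ctr_with_aio + (1 - aio_share) * ctr_without_aio
relative_loss = (ctr_without_aio - blended_with) / ctr_without_aio

print(f"Blended click rate: {blended_with:.1%}")    # 52.2%
print(f"Relative click loss: {relative_loss:.1%}")  # 8.4%
```

The gap between this naive 8.4% and SISTRIX's 6.6% suggests AIOs skew toward lower-volume informational keywords, which is consistent with the category breakdown below.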

Impact Varies By Category

SISTRIX broke down the data by category, and the gap between the most-affected and the least-affected is large.

Parenting and baby content sites lost over 24% of their organic clicks. The health and home improvement categories also showed losses well above average.

At the other end, recipe sites like Chefkoch lost about 1%. News and media sites lost 7.37%, below the average. Shopping and travel booking sites were barely affected.

SISTRIX’s Beus wrote that informational queries are hit hardest. Transactional searches, where people need to do something that an AI summary can’t replace, are mostly spared.

Biggest Losers

In raw numbers, Wikipedia leads with an estimated 31.6 million lost clicks per month in Germany, representing about 5% of its Google traffic in that market. DocCheck (4.8 million), AOK (4 million), ADAC (3.1 million), and Pons (3.1 million) follow.

By percentage, specialized health portals are hit hardest. SISTRIX data shows lumedis.de losing 30% of its organic clicks, ratgeber-herzinsuffizienz.de losing 29%, and herzstiftung.de losing 29%.

Sites with the smallest losses include wetter.com (0.18%), Booking.com (0.46%), Idealo (0.85%), and Amazon (1.73%).

How This Compares To Other Markets

The German data aligns with other regions, but comparisons are limited by differing methods and keywords.

A Pew Research Center study of US searches found that users clicked 8% of the time when an AIO was present, compared to 15% without one. That’s a 47% relative reduction. A GrowthSRC analysis found a 32% drop at position 1 in the US.

The German numbers (59% loss at position 1) are steeper. Whether that reflects actual differences between the markets or differences in measurement methodology isn’t clear from the available data.

Why This Matters

The category-level breakdown is the most useful part of this data if you’re managing organic search in European markets. A blended 6% average click loss sounds manageable, but losing 24% of clicks in your specific vertical isn’t.

SISTRIX’s data shows search volume alone doesn’t reliably predict traffic where AIOs are active. Whether an AIO appears and impacts CTR in your category must now be part of keyword analysis.

Looking Ahead

SISTRIX previously reported 17% AIO prevalence in Germany in August, and that’s now 20%. Growth slowed, but the feature’s presence in German search results continues expanding.

SISTRIX is a commercial SEO analytics provider. The data in this analysis is drawn from their proprietary keyword database.


Featured Image: Lana Sham/Shutterstock

Search Referral Traffic Down 60% For Small Publishers, Data Shows via @sejournal, @MattGSouthern

Search referral traffic to small publishers dropped 60% over two years, according to Chartbeat data reported exclusively by Axios.

That’s nearly three times the decline at large publishers. The analytics firm, which tracks traffic across thousands of client websites globally, segmented its network by size. Mid-sized publishers (10,000 to 100,000 daily page views) lost 47%, and large publishers (over 100,000 daily page views) lost 22%.

What’s New

Aggregate search traffic data from Chartbeat isn’t new. Our January Reuters Institute coverage cited Chartbeat data showing a 33% global decline in Google Search referrals. What’s new is the breakdown by publisher size, which shows the losses are concentrated among the smallest sites.

Page views from Google Search fell 34% between December 2024 and December 2025, per the Chartbeat data. Google Discover, the other top referral source, fell 15% over the same period.

ChatGPT referrals grew more than 200% during that window, but chatbots still account for less than 1% of all publisher page view referrals. Growth in chatbot traffic hasn’t come close to replacing what search lost.

How Larger Publishers Are Compensating

Larger publishers appear to be finding alternative traffic sources to partially offset search losses. News and media sites in particular are seeing growth in direct and internal traffic as a share of referrals.

Email and app referrals are also growing among news publishers, per the Axios report. Our Reuters Institute coverage in January found the same pattern, with publishers saying they planned to invest more in owned channels.

Overall weekly page views across all publishers in Chartbeat’s network dropped 6% between 2024 and 2025. The firm attributed that to factors outside search, including a quieter election cycle, though that’s their interpretation, not a measured cause.

AI Referral Engagement Varies By Site Type

One finding that stands out for content strategy is that news and media sites get the highest total page views from AI chatbot referrals, but the lowest engagement per article.

Axios reports that this pattern suggests readers use news citations in chatbots for quick fact-checks or context, not deeper reading.

The other category in the data is “utilitarian sites,” meaning publishers offering health advice or gardening tips. Those publishers see fewer total referrals from AI platforms but more page views per article.

Methodology Notes

Chartbeat sells analytics tools to publishers and has tracked traffic across its client network for close to two decades. Its data covers thousands of websites globally but skews toward news and media publishers.

Small publishers in this data average 1,000 to 10,000 daily page views, medium is 10,000 to 100,000, and large is over 100,000.

Axios received the data exclusively, and Chartbeat hasn’t published it independently.

Why This Matters

Search referral traffic loss is hitting sites with the fewest resources to build alternative traffic.

Most reporting on search traffic declines has treated publishers as a single group. This Chartbeat release breaks the numbers down by publisher size. For anyone working with smaller publishers, these numbers should change the conversation.

AI chatbot users click to news sites for quick checks but spend more time on how-to content. That means the value of an AI referral depends on what you publish.

Looking Ahead

We’ll be watching for Chartbeat to publish the full data set. The way chatbot referral engagement differs by site type is still early data, but it’s worth tracking.


Featured Image: fizkes/Shutterstock

Google Explains Why HTTPS Migration May Negatively Impact SEO via @sejournal, @martinibuster

Google’s John Mueller answered a question about moving to HTTPS, explaining why the process of making a site secure is actually a major undertaking that can have a negative impact on rankings.

Loss Of Top 3 Google Rankings

A person asked on Reddit why they lost their top 3 rankings in Google after making their site secure with HTTPS. They also replaced their old WordPress theme and updated their content.

They explained their situation and asked for advice:

“We have a 15 year old financial website hosted with godaddy deluxe plan, suddenly disappeared in google after moving https. We replaced our wordpress old theme and updated new content. Our old http site scored top 3 in google. We implemented 301 using real simple ssl few days ago so far rankings not recovered. Some of the http links still not crawled and updated by google.

Do you think going back to http would recover our rankings? We feel all is lost. Any chance of recovery.”

HTTPS Migration

There are multiple things that stand out as possible reasons for the lost rankings, but John Mueller focused exclusively on the HTTPS migration as the likely cause.

Mueller responded:

“Moving to HTTPS is a bit like a site migration, all the URLs have to be recognized, recrawled, and reprocessed individually. So especially if this move was made a few days ago, you need to give it time to recover (in particular, don’t use the URL removal tool to try to get rid of the HTTP URLs, since it will also remove/hide the HTTPS URLs). (I won’t touch upon finally moving to HTTPS after so many years, but I guess I just did :))”

All Is Not Lost

I have had several occasions to test how quickly Google can return an entire website to its former rankings, and I have been pleasantly surprised at how fast it can process a major site change or recover from being offline for as long as a month.

The person is rightfully having a freakout about losing their rankings, but it’s only been a few days. Mueller said to give it some time, and based on my own experiences, I would agree.
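While waiting for a recovery, it’s worth confirming that the redirects themselves are clean: each old HTTP URL should answer with a single 301 straight to its HTTPS twin, with no chains or 302s. Below is a minimal Python sketch (stdlib only, with placeholder URLs) for spot-checking that first redirect hop; it is an illustration, not the method Mueller described:

```python
# Minimal sketch: inspect the first redirect hop of an HTTP URL after an
# HTTPS migration. A healthy hop is one 301 straight to an https:// URL.
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so we can inspect the hop."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def first_hop(url, timeout=10):
    """Return (status, Location header) for the first response only."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url, timeout=timeout)
        return resp.status, resp.headers.get("Location")
    except urllib.error.HTTPError as e:
        # Returning None from redirect_request makes urllib raise here.
        return e.code, e.headers.get("Location")

def is_clean_301(status, location):
    """True when the hop is a single 301 pointing at an https:// URL."""
    return status == 301 and (location or "").startswith("https://")

# Example (live check, requires network; example.com is a placeholder):
#   status, location = first_hop("http://example.com/")
#   print(status, location, is_clean_301(status, location))
```

A 302 or a multi-hop chain here isn’t fatal, but a clean 301 gives Google the clearest signal while it reprocesses the URLs.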

Featured Image by Shutterstock/Anton Vierietin

SEO Test Shows It’s Trivial To Rank Misinformation On Google via @sejournal, @martinibuster

An SEO crafting a newsletter with AI spotted a hallucination about a March 2026 Google core update and decided to publish it as an experiment to see how misinformation spreads. While search marketing industry publications ignored the fake news, some independent SEOs picked it up and ran with it without first checking its factual accuracy.

Mistake Leads To A Double Take

The person who did the experiment, Jon Goodey (LinkedIn profile), published a LinkedIn article that purposely contained an AI hallucination about a non-existent March 2026 Google core update. He explained, in a subsequent LinkedIn post, that his AI workflow includes human quality control to catch AI mistakes, and when he spotted this one, he decided to publish it anyway to see if anyone would dispute or challenge the false information.

Google Ranks Misinformation

Goodey explained that it was Google itself that fueled the misinformation about the fake core algorithm update as his LinkedIn newsletter ranked for the phrase Google March Update 2026. The fake news ranked in Google’s classic search and in AI Overviews.

He explained:

“My LinkedIn article began ranking on the first page of Google for “Google March update 2026.” Not buried on page three. Right there, visible to anyone searching for information about recent Google algorithm changes.

…Google’s own AI Overview feature picked up the fabricated information and presented it as fact.”

Google’s fact checking in the search results is basically non-existent, so it’s not surprising that Google’s search engine would rank the fake information, especially for anything related to SEO. Using Google for SEO queries is like playing a slot machine: you have no idea if the information will be right or a total fabrication.

Searching for information about a dubious black hat tactic (like Google stacking) may cause Google to actually validate it, potentially misleading an honest business person who wouldn’t know better.

Screenshot Of Google Recommending A Black Hat SEO Tactic

This is a longstanding black spot on Google’s search results and is why it’s not surprising to see Google spew out misinformation about a fake Google update.

Websites Echo Misinformation

The result is that SEO websites began repeating the false update information because, of course, Google core updates are a traffic magnet and a way some SEOs attract potential clients. There’s a long history in the SEO community of stirring up noise about non-existent updates, so again, it’s not surprising to see SEO agencies pick up this ball and run with it.

Goodey shared:

“Multiple websites published detailed, authoritative-sounding articles about the “March 2026 Core Update,” treating it as confirmed fact. These weren’t throwaway blog posts. They were detailed pieces with specific claims about Gemini 4.0 Semantic Filters, Information Gain metrics, and recovery strategies.”

Most News Sites Ignored The Fake Update

SEJ and our competitors ignored the fake March update news. But a technology site apparently did not, with Goodey calling them out about it.

He wrote:

“Another site, TechBytes, went even further with a piece by Dillip Chowdary headlined “Google March 2026 Core Update: Cracking Down on ‘Agentic Slop’.” (Oh, the irony…).

This article invented specific technical details including claims about a “Gemini 4.0 Semantic Filter,” a “Zero Information Gain” classification system, and a “Discover 2.0 Engine” prioritising long-form technical narratives.”

Google Has A Policy About Fact Checking

I recall Google’s Danny Sullivan talking about how Google doesn’t do fact checking, but I couldn’t find his tweet or statement. There is, however, an Axios news report related to fact checking in which a Google spokesperson affirms that Google will not abide by an EU law that requires fact checking.

According to the news article:

“In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google’s global affairs president Kent Walker said the fact-checking integration required by the Commission’s new Disinformation Code of Practice “simply isn’t appropriate or effective for our services” and said Google won’t commit to it.

The code would require Google to incorporate fact-check results alongside Google’s search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google’s current approach to content moderation works and pointed to successful content moderation during last year’s “unprecedented cycle of global elections” as proof.
He said a new feature added to YouTube last year that enables some users to add contextual notes to videos “has significant potential.” (That program is similar to X’s Community Notes feature, as well as new program announced by Meta last week.)”

Takeaways

Jon Goodey had multiple takeaways, with the most important one being that people should fact check what they read online.

Other takeaways are:

  • AI workflows should have validations built into them.
  • Most readers don’t fact check (only a few commenters disputed the false claims).
  • AI overviews and search amplify misinformation.
  • One article gets echoed across the internet, with other sites repeating and embellishing the original false information.

Featured Image by Shutterstock/Rawpixel.com

How To Use AI To Streamline Time-Consuming SEO Tasks via @sejournal, @coreydmorris

SEO, like most organic (non-paid) channels in digital marketing, is labor-intensive. Yes, there are software suites, analytics platforms, research tools, and a number of other things that help in the tech stack.

We all have our favorites, and no one is (or should be) doing SEO like I was in 2008 (despite my desire sometimes to just do something manually where I can see the inputs and outputs and have more control, but I digress).

In the midst of constant noise about new platforms, new ranking factors, ways to become visible in AI, and everything else, it can be hard at times to keep going with the tasks that still require a human at some level. Whether it is gaining efficiency, scaling efforts, doing more with less, or a combination of these, I’m sharing human-involved ways to streamline time-consuming tasks so you can gain time (and maybe money).

1. Generating Meta Descriptions, Page Titles, Alt Text

I could have started with something more high-level or strategic, but I’m getting this one out of the way right now.

The basic blocking and tackling of ensuring you have unique, helpful, and topically relevant meta descriptions, page titles, and image alt text can be a huge investment of time on a large website or across sites if you own tactical SEO for multiple sites or clients.

While there are ways to have these tags auto-generated dynamically by a database or CMS, we know that, in a lot of cases, there’s still a manual process or intervention to audit and ensure that the tags are written to best practices and strategic positioning.

Also, I know that there’s plenty of discussion or debate on whether there’s even value in creating titles and meta descriptions. I’m not going there. But I will say that, if you have any areas where you need to create them and they are on your task list, you can spend a lot of hours (or the cost of outsourced resources) for a minimal return.

Leverage tools based on what you’re already paying for or what tech ecosystem you’re in, like Screaming Frog + OpenAI API + a WordPress plugin, which can save thousands of dollars and many dozens of hours.

Putting It Into Action

Steps for generating alt text at scale:

  1. Get your OpenAI API key:
    • In your OpenAI dashboard at platform.openai.com, go to API keys.
    • Create a new secret key and name it something you’ll remember, like Screaming Frog.
    • Make sure you have credits in your account (a few dollars can go a long way).
  2. Set up your Screaming Frog crawl:
    • Set up your OpenAI configuration by going to Configuration > API Access > AI. Enter your API Key into the field. Press Connect.
    • Set up a prompt to generate alt text by going to the Prompt Configuration tab. Click Add from Library > System > Generate alt text for images.
    • Set up your crawl configuration and don’t forget to go to Spider > Rendering and change the rendering mode from Text Only to JavaScript. Then, go to Extraction and, under HTML, check Store HTML and Store Rendered HTML.
    • Run a test crawl on one URL to ensure the output works for you. Tweak the prompt if you’d like.
  3. Run the crawl.
  4. Export to a CSV.
  5. Format the file with two columns: image URL, alt text.
  6. Add this plugin to the site: https://wordpress.org/plugins/alt-text-updater/.
  7. Upload the file.
  8. Crawl your site and do manual checks to test that images have alt text.
  9. Deactivate and uninstall the plugin.
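Step 5 above can be scripted. Here’s a hedged Python sketch that reshapes a Screaming Frog export into the two-column file the plugin expects; the column names are assumptions, so match them to the actual headers in your export:

```python
# Sketch: reshape a crawl export into a two-column CSV (image URL, alt
# text) for bulk upload. Default column names below are assumptions --
# check them against your real export headers before running.
import csv

def format_alt_text_csv(export_path, out_path,
                        url_col="Address", alt_col="Generate Alt Text 1"):
    """Write (image URL, alt text) rows, skipping images with no text."""
    with open(export_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        for row in reader:
            alt = (row.get(alt_col) or "").strip()
            if alt:  # drop rows where the crawl produced no alt text
                writer.writerow([row[url_col], alt])
```

Skipping empty rows here also gives you a natural checkpoint: anything missing from the output file still needs a manual pass.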

2. Structuring Content Outlines

This might be one of the most common things we do when starting SEO or in periodic content organization, expansion projects, or ongoing content creation. With content being what I call the “fuel” of SEO (and also visibility in AI search), it is still as important as ever to organize it well and present it in a way that makes sense to site visitors and the machines that are also learning it.

While you might not be able to automate this out of the box or in a single prompt in your favorite LLM, you can definitely speed up the process and surface connections between content themes that you might not make on your own (my favorite bonus).

Whether you’re working on a single article, a longer-term content calendar, reorganizing evergreen content, or other content-specific tasks, mastering the art of prompt creation, coaching the AI agent, ensuring the output is good, and using project folders (with brand style guides) in ChatGPT can ensure the quality and speed the more you produce.

Putting It Into Action

Example Prompt

You are an expert SEO who specializes in content writing for [industry]. Your task is to create an outline for an article for [topic]. The article outline should cover the following subtopics: 

[subtopic 1], 

[subtopic 2], 

[subtopic 3]. 

The article should target the following keywords: 

[keyword]

[keyword]

[keyword]

Attached are the HTML files of pages currently ranking well in Google search results to use as guidance. Review the HTML files and generate a content outline. 

3. Creating Project Briefs

Going a little higher level into organizing the work we do, connecting desired outcomes to strategies and ultimately to tactics, project briefs are something you might not do every day.

I like to think about SEO in projects or sprints as a way to break up the big nature of ongoing and long-term work that requires short-term progress and tactics. Regardless of how you organize the work, you likely have a lot of varying documentation and information. Whether in sheets, documents, decks, or other sources, you have information that you can feed together into your LLM of choice to have AI organize and sort out.

Whether you’re doing this formally to produce a report deliverable or informally to help your team or yourself organize the minutiae of SEO information, I can point to examples of my team using Gemini to read through a bunch of documents, including meeting notes, personal notes, transcripts, AI transcripts, agendas, competitor lists, research, emails, and more.

This can be helpful for a number of uses, including putting together a document that can be helpful for personal reference, team reference, onboarding, and articulation of the overall knowledge base for stakeholders.

Putting It Into Action

Example Prompt

You are an experienced Senior Marketing Strategist and you’re onboarding your team for [describe project]. Your task is to create a comprehensive project brief for [name of campaign or project].

Ensure the project brief takes into account the following project details:

Objective: [what is the overarching goal of the project]

Target audience: [overview of the demographics]

Key messaging: [provide details about campaign messaging]

Channels: [what channels will be incorporated into the campaign/project]

For the deliverable, the output should include the following:

Project Overview: Include a 1-2 sentence summary of the project

Success Metrics: [provide KPIs]

Budget: [provide financials]

Timeline: [provide deadlines and milestones]

Generate the project brief as a professional, internal-facing document.

Classifying Keywords

Prompt for using the AI function in Google Sheets to classify keywords by search intent, segment, branded/non-branded, etc.

=ai("Act as an SEO Specialist. Classify the following Keyword into exactly one of these Categories: [Informational, Navigational, Commercial, Transactional].

Rules:

Informational: User is looking for an answer or guide.

Commercial: User is researching products/services before buying.

Transactional: User has high intent to buy/convert now.

Navigational: User is looking for a specific website/brand.

Keyword: [Cell Reference, e.g., A2]

Result: Return only the category name with no extra text or punctuation.")

4. Segmenting Keywords

In SEO today, we’re not necessarily focused on granular keywords. However, they are still important in our research and strategy planning, along with more tactical work in guiding content topic building and creation.

When you do your research and have your list of keywords from any source, you can utilize the Google Sheets AI function to categorize them by topic, pillar, branded/non-branded, localized or not, search intent, etc.

You can also run keywords through an LLM and have it categorize them, export the output, import that back into your spreadsheet, and align it to the data using a VLOOKUP function (a recommendation, as my team thinks the Google Sheet AI function isn’t where we want it to be yet).

While the method I noted might still feel manual (and not where we’ll want it to be eventually, with better AI and tooling), it is still much better than doing everything by hand. I encourage you to use your own spreadsheet logic or regular expressions (regex) to categorize as much as you can efficiently before going to AI, especially if your dataset is extensive.
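The “regex first, AI second” idea can be sketched in a few lines of Python. The rules and brand terms below are placeholders to adapt; only keywords no rule catches would go on to the LLM:

```python
# Sketch: rule-based pre-classification so only ambiguous keywords are
# sent to an LLM. Brand names and patterns here are placeholders.
import re

RULES = [
    ("Branded",       re.compile(r"\bacme\b", re.I)),
    ("Transactional", re.compile(r"\b(buy|price|pricing|coupon|discount)\b", re.I)),
    ("Informational", re.compile(r"\b(how|what|why|guide|tips|ideas)\b", re.I)),
    ("Navigational",  re.compile(r"\b(login|log in|sign in|contact)\b", re.I)),
]

def pre_classify(keyword):
    """Return a category from the rules, or None if the LLM should decide."""
    for label, pattern in RULES:
        if pattern.search(keyword):
            return label
    return None

keywords = ["buy trail shoes", "how to clean suede", "acme login", "trail shoes"]
classified = {k: pre_classify(k) for k in keywords}
needs_llm = [k for k, label in classified.items() if label is None]
```

Rule order matters: branded terms are checked first so that a query like “acme login” lands in the branded bucket rather than navigational.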

5. Documenting Competitor Outlines

While I have to admit that I like to visually check out competitor websites for my first impression and a quick, informal sophistication check, automating this is a huge time-saver.

For example, Gemini is really good at outlining the content structure of a webpage, so my team likes to feed three or four competitor URLs that are ranking well or have high visibility for a topic that we’re building a strategy for, and it can give us an outline of each page. That includes messaging, targeting, and providing baseline content blocks that each page has that we can use when we do content development on our side.

Disclaimer: Just like in the olden days, don’t copy directly and don’t steal. Verify that what you’re getting back out of the tool you’re using isn’t ripping someone off. That’s on us to validate.

Putting It Into Action

Example Prompt

You’re an expert SEO strategist and you’re conducting a competitive content analysis of your client’s page against pages currently outranking it in Google for the search term [keyword]. The client is a [describe client and industry]. The page is [describe purpose of the page and topic].

I’ve attached the HTML files of the client’s page, as well as the HTML files for the competitor pages. Your tasks are to provide me:

An outline for each page of the content blocks present in the HTML

An overview of the messaging, tone, voice

A list of outgoing internal links in the content

Content gaps between the client's page and the competitors 

6. Conducting SERP Analysis

We can’t afford to waste impressions or visibility by showing up for the wrong topics. SEO now is about quality, and we can’t miss the mark on search intent.

An example that is a big time-saver is to build your seed keyword list using Ahrefs and then export the keyword list with SERP data. Then, feed that spreadsheet into Gemini and have it provide a breakdown of organic competitors per keyword, intent of ranking organic pages per keyword, etc. This example is a good way to save time from having to review hundreds and hundreds of rows. My team usually filters out AI Overviews and ad placement data to condense it a bit.

This type of work has been helpful in figuring out informational versus commercial intent SERPs at scale so that we’re targeting the right keywords with the right content. It has also been helpful in understanding the level of competition within a topic, so we know what to avoid and what long-tail keywords may represent realistic opportunities.

I will emphasize, though, that the SERPs aren’t 100% accurate, and localization and personalization will change what users actually see. But it’s helpful for comparing keywords against each other. We also do SERP reviews manually to confirm findings. Again, validate as a human what you’re getting from tools.
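The condensing step described above can be sketched in Python. The column names (“Keyword”, “URL”, “Type”) and placement labels are assumptions, so adjust them to match your actual export:

```python
# Sketch: condense a SERP export before handing it to an LLM -- drop AI
# Overview and paid rows, then list the organic domains per keyword.
# Column names and "Type" values are assumptions; match your export.
import csv
from collections import defaultdict
from urllib.parse import urlparse

SKIP_TYPES = {"AI Overview", "Paid"}  # placements to filter out

def competitors_per_keyword(path):
    """Map each keyword to the set of organic domains ranking for it."""
    domains = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("Type") in SKIP_TYPES:
                continue
            domains[row["Keyword"]].add(urlparse(row["URL"]).netloc)
    return dict(domains)
```

Collapsing URLs to domains is what shrinks hundreds of rows into a summary an LLM (or a human) can scan for competitive overlap.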

In Closing

There’s a lot of time and money you can reclaim by leveraging automation, deeper tool use, and AI for SEO. And you probably detected a theme: in pretty much everything you do, solid inputs are required to get useful outputs, and those outputs still require human validation and experience to trust.

Regardless of where you are with automation, aiming to do more with less, scale your efforts, and eliminate manual tasks with low return on investment is a great way to decide where to lean more on technology and less on manual work.


Featured Image: ArtEternal/Shutterstock

How To Build An SEO Commissioning Workflow: From Tickets To Requirements via @sejournal, @billhunt

Enterprise SEO doesn’t fail because teams lack knowledge. It fails because they’re invited too late.

In most large organizations, SEO still operates in a reactive posture. Teams review pages after launch, run audits, document issues, file tickets, and then wait, often for months, for other teams to implement changes. Modern search visibility is no longer shaped by tweaks. It is shaped by what gets built upstream.

High-performing organizations have responded by changing SEO’s role entirely. Instead of treating SEO as a cleanup function, they’ve repositioned it as a commissioning function, one that defines the exact requirements digital assets must meet before they are ever created. This article explains how enterprises can formalize that shift by building an SEO commissioning workflow: a structured, repeatable process that embeds search requirements into digital creation at the moment decisions are made.

The Problem With Ticket-Based SEO

In the traditional enterprise model, SEO is integrated into the workflow after launch. Content is created or revised without input from SEO, and the resulting changes often harm search performance. The SEO team investigates the decline, identifies the new or updated content or templates, and creates tickets to recover what was lost (or, in the case of new content, what was never gained). Those tickets are then placed into development queues alongside revenue initiatives, product launches, and executive priorities.

What follows is predictable. Fixes are delayed. Implementation is partial. Some issues are addressed, others are deferred, and many recur in the next release because the underlying cause was never addressed. This model creates three chronic failures.

  • First, SEO is perpetually behind. It is reacting to outcomes rather than shaping them.
  • Second, SEO relies on persuasion rather than process.
  • Third, structural mistakes multiply faster than they can be fixed. Every new page, template, or market rollout becomes another opportunity to replicate the same issues at scale.

When SEO lives downstream, every asset is a potential liability. The organization becomes very good at discovering problems and very bad at preventing them. Progress depends on relationships and goodwill rather than enforceable requirements. Commissioning exists to flip that dynamic.

What SEO Commissioning Actually Means

Instead of reviewing pages after they are launched, leading organizations have begun moving SEO to the moment digital assets are conceived.

At that stage, the question is no longer whether a page can be optimized later. The question becomes whether the asset is designed so that search systems can understand it from the start. Content structure, template behavior, entity representation, internal linking roles, and market alignment are all determined before production begins. When those decisions are made upstream, discoverability becomes a property of the system rather than a series of corrections applied after launch.

A useful analogy comes from high-rise construction. On complex projects, builders often assign a dedicated commissioning agent whose job is not to install anything directly but to ensure that all the independent systems going into the building, including HVAC, elevators, electrical systems, glass, fire controls, and dozens of other components, work together as a coherent whole. Without that coordination, the building may be technically complete yet fail to function as a system.

SEO plays a similar role in digital environments. Instead of diagnosing problems after launch, SEO helps define the requirements that must be satisfied before assets move forward. Those requirements shape how content is commissioned, how templates behave, how entities are represented, and how information is structured so that search engines and AI systems can interpret it correctly.

When SEO participates at the design stage, teams stop asking, “How do we fix this later?” and start asking a more useful question: What must be true before this asset should exist at all?  In that environment, SEO stops behaving like a repair function and becomes part of the design discipline that ensures digital systems work as intended from the beginning.

The SEO Commissioning Lifecycle

Organizations that operationalize SEO commissioning tend to follow the same lifecycle, even if they don’t label it explicitly. The difference is that high-performing teams make these stages intentional, documented, and enforceable.

1. Define Intent Before Creation

Every asset should begin with clarity about why it should exist from a search perspective.

At this stage, SEO identifies how users actually search for the topic or product, how intent is distributed across informational, commercial, and navigational needs, and what search systems typically surface for eligibility. This prevents a common enterprise failure mode: Well-written content that is structurally misaligned with how demand expresses itself.

Commissioning forces an uncomfortable but necessary question early in the process: Why would a search engine or AI system ever select this asset?

If that question cannot be answered clearly, the asset should not move forward.

2. Define Eligibility Signals

Before development or content production begins, SEO specifies the signals that must exist for eligibility.

This includes decisions about schema usage, page classification, metadata structures, heading hierarchies, internal linking roles, entity associations, media requirements, and – when relevant – market and language signals. The key distinction is timing. These decisions are not retrofitted later. They are defined before work begins, ensuring assets are born eligible rather than hoping eligibility can be added after the fact.

Eligibility becomes a prerequisite, not a gamble.

3. Define Structural Requirements

Commissioning also applies to platforms and templates, not just content.

This is where SEO moves closest to product and engineering teams, shaping the structures that determine discoverability at scale. URL rules, template architecture, rendering accessibility, navigation placement, internal linking frameworks, and content modules for depth are all defined here. These are not tactical SEO opinions. They are structural requirements that influence how thousands of pages will be interpreted by machines over time.

When SEO is incorporated at this stage, discoverability becomes a property of the system rather than the result of manual intervention.

4. Pre-Launch Validation (Search QA)

Before release, SEO validates that commissioning requirements were actually implemented.

This includes confirming crawlability, indexability, structured data integrity, entity consistency, internal linking alignment, market targeting, and content completeness relative to intent. This step is often misunderstood as “SEO QA,” but it is fundamentally different from traditional bug fixing. The purpose is not to discover surprises. It is to confirm compliance with requirements already agreed upon.

When commissioning is done correctly, this stage is fast and predictable.
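As a toy illustration of what “confirming compliance” can look like in code, here is a Python sketch that scans a page’s rendered HTML for a few agreed eligibility signals. The checklist is illustrative, not a complete Search QA suite, and real validation would cover far more (crawlability, market targeting, entity consistency):

```python
# Toy sketch: scan rendered HTML for a few commissioning requirements.
# The signal list below is illustrative, not a full Search QA checklist.
from html.parser import HTMLParser

class SignalScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.signals = {
            "title": False,
            "meta_description": False,
            "canonical": False,
            "json_ld": False,
        }

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self.signals["title"] = True
        elif tag == "meta" and a.get("name") == "description":
            self.signals["meta_description"] = True
        elif tag == "link" and a.get("rel") == "canonical":
            self.signals["canonical"] = True
        elif tag == "script" and a.get("type") == "application/ld+json":
            self.signals["json_ld"] = True

def missing_signals(html):
    """Return the names of required signals absent from the HTML."""
    scanner = SignalScanner()
    scanner.feed(html)
    return [name for name, ok in scanner.signals.items() if not ok]
```

The point of a check like this is that it runs against requirements agreed upfront, so a non-empty result is a compliance failure, not a surprise discovered after launch.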

5. Post-Launch Monitoring & Feedback

Commissioning does not end at launch.

SEO monitors performance relative to expectations, including visibility patterns, SERP feature capture, AI citation presence, market alignment, and template behavior at scale. Real-world query data then feeds back into future commissioning rules. This creates a virtuous cycle. SEO evolves from a reactive repair function into a continuous upstream optimization system that improves with each release.

Where Commissioning Lives In The Enterprise Workflow

For commissioning to work, it must live where decisions are made.

That means being embedded into product requirement documents, content briefs, CMS template design, sprint planning, market rollout processes, and governance checkpoints. SEO becomes a required approval step before assets move forward, not an optional reviewer afterward.

This is the difference between SEO as a service and SEO as infrastructure.

Why This Model Changes Everything

Ticket-based SEO creates backlogs and dependencies; commissioning-based SEO creates leverage and prevention. The benefits compound quickly.

Assets launch search-ready the first time, increasing speed rather than slowing it. Structural failures decline because mistakes are prevented upstream. Compliance scales automatically across thousands of pages. Content and entities are structured for machine retrieval from day one. And SEO stops fighting for attention because it is embedded directly into how work gets done.

Most importantly, commissioning aligns incentives. SEO success is no longer dependent on favors, persuasion, or heroics. It becomes a predictable outcome of a well-designed system.

The Hard Truth

Most enterprise SEO pain is self-inflicted. Organizations built workflows where SEO arrives late, lacks authority, fixes rather than defines, and is measured by outcomes shaped by others. Commissioning removes those structural handicaps.

It moves SEO to the point where search success is actually created: the moment decisions are made.

Coming Next

Commissioning solves timing; it does not solve ownership. In the next article, we’ll examine why SEO still fails without clear cross-functional accountability and how enterprises must redefine ownership if commissioning is going to scale.


Featured Image: Summit Art Creations/Shutterstock