Google Tested AI Headlines In Discover. Now It’s Testing Them In Search via @sejournal, @MattGSouthern

When Google started rewriting headlines with AI in Discover last year, it called the test “small.” By the following month, it was reclassified as a feature.

Now the same pattern is showing up in traditional search results.

Google confirmed to The Verge (subscription required) that it’s testing AI-generated headline rewrites in Search. The company described the test as “small and narrow.” It’s similar language to what Google used before reclassifying AI headlines in Discover as a feature.

What’s Happening In Search

Multiple Verge staff members spotted rewritten headlines over the past few months. In one case, “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” appeared in results as “‘Cheat on everything’ AI tool.” Another article was rewritten to “Copilot Changes: Marketing Teams at it Again,” phrasing the article never used.

The test isn’t limited to news sites. Google said it affects other types of websites too.

None of the rewrites included any disclosure that Google had changed the original headline.

Google told The Verge the goal is to “identify content on a page that would be a useful and relevant title to a users’ query.” The company said the test aims at “better matching titles to users’ queries and facilitating engagement with web content.”

Any broader launch may not use generative AI, the company said, but it didn’t explain what the alternative would look like. The test hasn’t been approved for wider rollout.

How Discover’s AI Headlines Became A Feature

We’ve been tracking Google’s treatment of Discover through several changes this year. Here’s how the headline experiment played out.

In December, Google called AI-generated headlines in Discover “a small UI experiment for a subset of Discover users.” By January, Google reclassified the feature. It now “performs well for user satisfaction,” according to Nieman Lab’s reporting.

That’s about a month from test to reclassified feature.

During that period, Google revised its Discover guidelines alongside the February Discover core update and rolled out AI previews that show short AI-generated summaries with links. Each change added another layer of AI-mediated content between publishers and readers in Discover.

The Search test follows the same opening move. Google describes it as small, narrow, and not approved for broader rollout.

How This Differs From Existing Title Rewrites

Title tag rewrites in search results aren’t new. Google has been doing this for years using rule-based systems. An analysis of over 80,000 title tags found Google changed 61% of them. A follow-up study put that number at 76%.

Those existing rewrites pull from elements already on the page. According to Google’s title link documentation, the system draws from title elements, H1 headings, og:title meta tags, anchor text, and other on-page sources.

The new test is different. In the Copilot example, the rewritten headline used phrasing that didn’t exist anywhere in the article. That’s generative AI creating new text.

Why This Matters

An analysis of over 400 publishers found Discover’s share of Google-sourced traffic had climbed from 37% to roughly 68%. For publishers relying so heavily on Discover, AI headline rewrites becoming a feature in Search would mean losing headline control across both of their primary Google traffic sources.

Google’s title link documentation describes inputs Google may use to generate titles but doesn’t include a publisher control for opting out of rewrites. And because Google doesn’t disclose when a headline has been rewritten, you may not know it’s happening to your content unless you check manually.

Sean Hollister, senior editor at The Verge, wrote:

“This is like a bookstore ripping the covers off the books it puts on display and changing their titles.”

Louisa Frahm, SEO director at ESPN, wrote on LinkedIn:

“After 10+ years in news SEO, I’ve come to find that a headline is the most prominent element for attracting readers in timely windows, to provide a targeted synopsis that elevates your brand voice. If that vision gets altered and facts are misrepresented, long-term audience trust will be compromised.”

Looking Ahead

Publishers monitoring their search visibility should check whether their headlines are appearing as written in Google results. There’s no tool for this, so it requires manual spot-checking.
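
To make the spot-checking a little faster, a minimal sketch like the one below (Python with the requests and beautifulsoup4 packages; the URL list is a placeholder for your own articles) pulls each page's title tag, og:title, and first H1 so you have the as-published headlines in one place to compare against what Google displays:

import requests
from bs4 import BeautifulSoup

# Placeholder list; replace with the article URLs you want to spot-check.
ARTICLE_URLS = [
    "https://www.example.com/your-article/",
]

for url in ARTICLE_URLS:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "headline-spot-check"})
    soup = BeautifulSoup(resp.text, "html.parser")

    title_tag = soup.title.get_text(strip=True) if soup.title else ""
    og = soup.find("meta", property="og:title")
    og_title = og["content"].strip() if og and og.has_attr("content") else ""
    h1 = soup.find("h1")
    h1_text = h1.get_text(strip=True) if h1 else ""

    print(url)
    print("  <title>:  ", title_tag)
    print("  og:title: ", og_title)
    print("  <h1>:     ", h1_text)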


Featured Image: elenabsl/Shutterstock

From SEO And CRO To Agentic AI Optimization (AAIO): Why Your Website Needs To Speak To Machines via @sejournal, @slobodanmanic

For 25 years, we’ve built websites for humans who click, scroll, and browse. That era is ending. I’ve been in website optimization for 15+ years, and this is the biggest shift I’ve seen since mobile. And honestly, I think it’s way bigger than that.

The internet is undergoing its most significant transformation since it began. Your website now has two audiences: humans and AI agents. The agents are already here, shopping, researching, booking, and making decisions. The question is whether your website can serve them.

This is the first article in a five-part series on optimizing websites for the agentic web. We’ll cover discovery, citation, technical implementation, and the new commerce protocols that let AI complete purchases on your behalf. Throughout this series, we’ll draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

But first, we need to understand how we got here and why December 2025 changed everything.

The Evolution: SEO To AEO To GEO To AAIO

The alphabet soup of optimization acronyms tells a story about how the web has changed.

SEO (Search Engine Optimization) dominated from the mid-1990s through the 2010s. The goal was simple: rank higher on Google. You optimized keywords, built backlinks, and structured your site so crawlers could index it. Success meant appearing on page one when someone searched for your topic.

AEO (Answer Engine Optimization) emerged as AI systems started answering questions directly. When Google introduced featured snippets, then AI Overviews, the game changed. Ranking wasn’t enough anymore. You needed to be the source that AI systems cited when generating answers. AEO focuses on structuring content so it gets selected and quoted, becoming the definitive answer rather than just a search result.

GEO (Generative Engine Optimization) expanded this further. Systems like ChatGPT, Claude, and Perplexity don’t just cite sources. They synthesize information from multiple places into comprehensive responses. GEO ensures your content appears in these synthesized answers, ensuring your expertise gets woven into the AI’s response even when you’re not the primary citation.

AAIO (Agentic AI Optimization) is the latest evolution, and it represents a fundamental shift. AAIO isn’t about being found or cited. It’s about being usable by AI agents that act autonomously on behalf of humans.

A research paper published in April 2025 by Luciano Floridi and colleagues formalized this distinction. As they put it, AAIO “explicitly optimises content for autonomous artificial agents, simultaneously addressing both human and machine interpretability.” Unlike SEO, which enhanced discoverability for humans through search engines, AAIO prepares websites for AI systems that initiate digital interactions independently.

Agent Experience Optimization (AXO) is the umbrella term that encompasses all of these practices. Just as UX focuses on human users and SEO focuses on search crawlers, AXO focuses on AI systems that interact with websites. It includes discovery (being found), citation (being referenced), and action (being usable). I cover AXO in depth in What is Agent Experience Optimization.

The progression is straightforward: SEO asks “How do I rank?” AEO asks “How do I get cited?” GEO asks “How do I get included?” AAIO asks “How do I enable agents to complete tasks on my site?”

The relationship between website optimization and AI effectiveness creates a virtuous cycle, similar to what happened with SEO and search engines in the early 2000s. When websites implement AAIO practices, AI agents perform better, which encourages more websites to adopt these practices, which makes agents more useful, which drives adoption further.

December 2025: The HTML Moment For AI

On Dec. 9, 2025, something significant happened. The Linux Foundation announced the Agentic AI Foundation (AAIF), a vendor-neutral governance body for agentic AI standards.

Eight platinum members anchored the foundation: Amazon Web Services, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. What’s remarkable here isn’t the technology. It’s that OpenAI, Anthropic, Google, and Microsoft are building shared infrastructure instead of competing standards. This is a strong signal that the industry sees agentic AI as foundational, not a feature war.

Three key projects were contributed:

  • Model Context Protocol (MCP) from Anthropic: a universal standard for connecting AI systems to tools and data sources, now with over 10,000 published servers and adoption by Claude, ChatGPT, Gemini, VS Code, and Microsoft Copilot
  • AGENTS.md from OpenAI: a standardized specification for providing AI coding agents consistent project guidance across repositories
  • goose from Block: an open-source, local-first agent framework combining language models with extensible tools

This matters because it mirrors what happened with the early web. In the 1990s, competing browser vendors and incompatible standards fragmented the internet. The W3C brought order by establishing shared protocols like HTML and CSS. The Agentic AI Foundation aims to do the same for AI agents, creating the shared infrastructure that lets agents from different companies work together and interact with websites consistently.

As Linux Foundation Executive Director Jim Zemlin put it, the foundation enables development “with the transparency and stability that only open governance provides.”

We’re watching the TCP/IP moment for agents. The protocols being established now will define how AI interacts with the web for the next decade: MCP for tool integration, A2A for agent-to-agent communication, NLWeb for making websites queryable.

I realize that sounds hyperbolic. It isn’t. We’re in the early months of a decade-long transformation.

Discovery, Citation, And Action

These three concepts form the framework for this entire series:

  • Discovery is about being found by AI systems. AI crawlers like GPTBot, ClaudeBot, and PerplexityBot index the web for their respective platforms. If you’re blocking these crawlers, or if your content isn’t accessible to them, you’re invisible to AI systems. Discovery is the foundation. Nothing else matters if agents can’t find you. (A quick crawler-access check is sketched after this list.)
  • Citation is about being selected as a source. When an AI system generates a response, it chooses which sources to reference. Getting cited requires content that AI systems recognize as authoritative, accurate, and relevant. This involves structured data, clear information hierarchy, and demonstrable expertise. Microsoft has published detailed guidance on what makes content citable.
  • Action is about enabling agents to use your site. This is where AAIO diverges from earlier optimization approaches. An agent visiting your site might need to click buttons, fill forms, navigate menus, compare options, and complete transactions. If your site breaks when an agent tries to interact with it, you lose the business to competitors whose websites work.
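
Here is the crawler-access check mentioned in the Discovery item above: a minimal sketch, using only the Python standard library, that asks your own robots.txt whether the major AI crawlers may fetch the site at all. The domain is a placeholder and the user-agent list is illustrative, not exhaustive.

from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder; use your own domain
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

for agent in AI_CRAWLERS:
    allowed = parser.can_fetch(agent, f"{SITE}/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} for {SITE}/")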

The stakes escalate at each level. Failing at discovery means invisibility. Failing at citation means your competitors get referenced instead. Failing at action means losing transactions that would have happened on your site.

Why This Matters Now

Two converging trends make 2026 the year to act.

Agentic browsers are reaching consumers.

The first wave of AI browsers launched in 2025, and 2026 is bringing them to mainstream users. For a complete breakdown, see The Agentic Browser Landscape in 2026.

Perplexity’s Comet combines search-focused AI with full browser capabilities. ChatGPT Atlas from OpenAI includes Agent Mode for autonomous multi-step tasks. Chrome’s auto browse feature, powered by Gemini, is shipping to Google AI subscribers.

Chrome alone represents 3 billion potential users. If you’re wondering whether to take this seriously: Google doesn’t ship features to 3 billion users on a whim.

When the world’s most popular browser can autonomously scroll, click, type, and navigate on your behalf, the implications for website owners are profound. Websites that work well with these agents get included in agentic workflows. Websites that don’t get skipped.

As DigitalOcean’s analysis notes, “This shift forces websites to redesign for both human and AI users,” requiring cleaner navigation, API-first strategies, and optimization for agent functionality beyond visual presentation.

Commerce is shifting.

Stripe, Shopify, and OpenAI are building infrastructure for AI agents to complete purchases. The Agentic Commerce Protocol enables secure, agent-initiated transactions. Brands like URBN, Etsy, Glossier, and SKIMS are already implementing these systems.

Checkout is no longer a page. It’s an API endpoint. The agent researches, selects, and purchases on behalf of the user, who never visits your website at all.

What’s Coming In This Series

This article established the “why.” The rest of the series covers the “how”:

Part 2: Answer Engine Optimization dives into getting your content cited in AI responses. How AI systems parse content differently than search engines, the structure that gets cited, which schema markup matters, and how to measure your AI visibility.

Part 3: The Agentic Web Protocols explores MCP, A2A, NLWeb, and AGENTS.md, the standards powering the agentic web. These protocols are complementary, not competing, and together they form the infrastructure layer that enables everything else.

Part 4: How AI Agents See Your Website provides the implementation guide. How agents “see” websites, why semantic HTML matters more than ever, the role of accessibility standards, and what to tell your developers.

Part 5: Selling to AI covers agentic commerce. Stripe’s Agentic Commerce Suite, Shopify’s Universal Commerce Protocol, secure payment tokens, fraud detection for agent traffic, and how to get started.

Key Takeaways

  • The web is shifting from pages for humans to content for AI agents. Your website now serves two audiences, and optimizing for both is becoming necessary.
  • The evolution runs from SEO to AEO to GEO to AAIO. Each builds on the last: ranking, then citation, then inclusion, then enabling autonomous action.
  • December 2025 was the turning point. The Agentic AI Foundation launch established shared standards, moving agentic AI from experimentation to infrastructure.
  • Three levels matter: discovery, citation, and action. Being found, being referenced, and being usable by AI agents.
  • The business case is concrete. Agentic browsers are reaching billions of users. Commerce protocols are enabling agent-initiated purchases. Websites that work with agents capture this opportunity; those that don’t lose business to competitors.

Traditional SEO asked: “How do I rank on Google?” The new question is: “How do I become the answer, and how do I let AI complete transactions on my site without a human ever visiting?”

I’m writing this series because I believe most websites are getting this wrong and will keep getting it wrong. They’ll treat it as an SEO tweak or a CRO experiment when it’s an architectural shift.

The infrastructure is being built now. The standards are being established. The agents are already browsing.

The question is whether your website is ready for them.



This post was originally published on No Hacks.


Featured Image: Collagery/Shutterstock

Google AI Mode Goes Personal, Crawl Limits Clarified – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: updates covering how Google personalizes AI Mode, what Googlebot’s crawl limits look like in practice, and what new data shows about AIO click behavior and publisher traffic.

Here’s what matters for you and your work.

Google Personal Intelligence Now Free For US Users

Google expanded Personal Intelligence from paid AI Pro and Ultra subscribers to all free US users on personal Google accounts. The feature connects Gmail and Google Photos to AI Mode.

Key Facts: AI Mode access is available now. Gemini app and Chrome rollouts are starting. When enabled, AI Mode can reference email confirmations, travel bookings, and photo context to personalize responses. No expansion beyond the US or to Workspace accounts has been announced.

Why This Matters

Paid-to-free means a much larger user base gets access to personalized AI Mode results. People searching the same query could see different AI Mode responses depending on what’s in their Gmail. That makes it harder to benchmark what AI Mode shows for any given topic.

Read our full coverage: Google AI Mode’s Personal Intelligence Now Free In U.S.

Google Reveals Googlebot’s Crawl Limits Are Flexible

Google’s Gary Illyes and Martin Splitt discussed how Googlebot’s crawl limits work. The commonly cited limits aren’t as fixed as most people assume.

Key Facts: Google has long cited a 15 megabyte limit for its crawlers, but Illyes said internal teams can override it. Google Search works with a smaller 2 megabyte threshold in practice. The limits can be increased or decreased depending on what’s being crawled and why.

Why This Matters

The 15MB number has been treated as a hard ceiling in technical SEO guidance for years. Google Search working with a smaller 2MB threshold adds useful context to the long-cited 15MB figure. Most pages are well under 2MB, but pages with heavy inline scripts, large data objects, or extensive embedded content could be affected.
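
For a rough sense of where a page sits relative to these figures, a minimal sketch (Python with requests; the URL is a placeholder) measures the size of the raw HTML response:

import requests

URL = "https://www.example.com/big-page/"  # placeholder

resp = requests.get(URL, timeout=15, headers={"User-Agent": "payload-size-check"})
size_mb = len(resp.content) / (1024 * 1024)

print(f"{URL}: {size_mb:.2f} MB of HTML")
if size_mb > 2:
    # ~2 MB is the practical threshold discussed above; 15 MB is the long-cited ceiling.
    print("Above the ~2 MB threshold Google Search reportedly works with.")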

Read our full coverage: Google Shares More Information On Googlebot Crawl Limits

AI Overviews Cut Germany’s Top Organic Position CTR By 59%

SISTRIX analyzed over 100 million German keywords and found AI Overviews cut the position one click rate from 27% to 11%.

Key Facts: AI Overviews appear on about 20% of German keywords, up from 17% in August. SISTRIX estimates the total cost at 265 million lost organic clicks per month across the German market. Averaged across all keywords, including those without AIOs, that works out to a 6.6% click loss.

Why This Matters

The German data is directionally similar to US findings. Position one loses more than half its clicks when an AIO appears, and informational content takes the biggest hit. This suggests the pattern is not limited to the US.

What People Are Saying

Barry Adams, founder of Polemic Digital, wrote on LinkedIn:

“Citations in AIOs don’t matter, people don’t click. If you want to keep thriving on Google, you need to offer something AI can’t replicate. For publishers, breaking news is the golden goose.”

Read our full coverage: Google AI Overviews Cut Germany’s Top Organic CTR By 59%

Search Referral Traffic Down 60% For Small Publishers

Chartbeat shared new data that breaks down search referral traffic losses by publisher size. Most previous reporting on search traffic declines treated publishers as a single group.

Key Facts: Small publishers lost 60% of search referral traffic over two years. Mid-sized publishers lost 47%, and large publishers lost 22%. Google Discover referrals fell 15% over the same period. Larger publishers are partially offsetting losses through direct traffic, email, and app referrals.

Why This Matters

ChatGPT referrals grew over 200% in this data, and they still account for less than 1% of publisher page views. The growth rate sounds impressive until you compare it to what search took away. Chatbot traffic is still too small to offset those losses in this data.

What People Are Saying

Steven Waldman, founder of Rebuild Local News and Report for America, called the data “incredibly important” in a LinkedIn post, noting that larger publishers are more insulated because of stronger brand recognition and direct-to-consumer products.

Layne Bruce, Executive Director of the Mississippi Press Association, wrote on LinkedIn:

“Each week brings some new advancement in technology that’s great for consumers but threatening the ecosystem that generates the flow of information in the first place.”

Read our full coverage: Search Referral Traffic Down 60% For Small Publishers, Data Shows

Theme Of The Week: General Benchmarks Are Getting Less Useful

Each story this week shows a number that used to mean one thing but now means something different depending on context.

AIO click losses in Germany are directionally similar to those in the US. The 15MB crawl limit isn’t 15MB in practice. And Personal Intelligence makes AI Mode results vary by user, so checking what “shows up” for a query depends on what personal Google services that person has connected.

This week’s stories show data is more useful when you read it against your own vertical, your own site size, and your own audience.



Featured Image: [Credit]/Shutterstock

You’re Not Scaling Content. You’re Scaling Disappointment

Every few years, the SEO industry discovers a new way to mass-produce content and convinces itself that this time it’ll work. That the sheer volume of pages will overwhelm Google’s ability to assess quality. That if you just publish enough, the numbers will carry you.

It never works. It has never worked. And the people selling you these approaches know it has never worked. They just need it to work long enough to collect the invoice.

The Pattern Has A Name. It’s Called “Not Learning”

Let’s walk through the timeline, because apparently, we need to do this again.

2008-2011: Content Spinning

The pitch was simple: Take one article, run it through software that swaps synonyms, and suddenly you have 50 “unique” articles. The word “unique” was doing a lot of heavy lifting in that sentence. These articles read like someone had fed a dictionary through a blender. But even if the output had been polished, the premise was broken.

Here’s what the content spinners never grasped, and what their successors still don’t: Uniqueness is trivially easy to produce. A monkey dropping its hands on a keyboard produces unique content. The string of characters has never existed before – congratulations, it’s original. The hard part was never uniqueness. It was producing uniqueness that’s worth something. Unique and valuable are not synonyms, and the gap between them is where every scaling strategy falls apart.

Google tolerated it for a while. Its systems simply hadn’t caught up yet. Then Panda arrived in February 2011, hit nearly 12% of all search queries, and content farms watched their traffic evaporate overnight … I was “fortunate” enough to watch it happen in real time. Demand Media, the poster child of the content-farm model, reported a $6.4 million loss the following year.

The lesson was supposed to be clear: You cannot industrialize quality. Volume without substance is a liability with a longer tail than most budgets can absorb.

2015-2022: Programmatic SEO

The pitch evolved. Instead of spinning existing articles, you’d build templates and fill them with structured data. “Best [X] in [City]” pages, generated by the thousand, each one a thin wrapper around a database query. Some of these actually provided value – if the underlying data was good and the template served genuine user needs. Most didn’t. Most were just doorway pages wearing a better outfit. Google spent years refining its ability to detect and demote templated content that existed primarily for indexing purposes rather than for humans.

The lesson was supposed to be reinforced: scale works when there’s substance underneath. Without it, you’re just building a bigger target.

2023-Present: AI-Generated Content At Scale

And here we are again. Same pitch, shinier tools. “We can produce 500 articles a month!” Wonderful. Can you produce 500 articles a month that are worth reading? That contain something a reader couldn’t get from the results already in the index? That demonstrate any form of expertise, experience, or original thought?

No? Then you’re not scaling content. You’re scaling your crawl budget waste.

And the pattern recognition failures are stunning. (This wasn’t subtle. Several of us noticed. No, we weren’t impressed.)

I recently came across an AI visibility tool – one that sells itself on helping you get discovered by AI systems – that had generated hundreds of pages following the pattern “best SEO agencies in {city}.” Déjà vu. Anyone who lived through programmatic SEO recognizes this immediately – it’s the 2017 playbook, except now the copy is written by an LLM. The template got a grammar upgrade and an “it’s AEO” stamp. The strategy didn’t.

Lily Ray flagged a similar case: a resume site with 500+ programmatic pages for “resume examples for {career}.” Every title following the exact same formula. Near-identical page templates. Misused AggregateRating schema. Obvious AI content throughout. Her summary was three words: “Worked until it didn’t.”

Image Credit: Pedro Dias

That phrase should be tattooed on every content scaling pitch deck. Worked until it didn’t. It always does. And then it doesn’t.

The irony of an AI optimization tool using mass-generated doorway pages to build its own visibility would be funny if it weren’t so perfectly on-brand for this industry.

The Qualitative Wall Doesn’t Move

Here’s what every generation of content scalers fails to understand: Google doesn’t evaluate content in isolation. It evaluates content relative to everything else in the index on the same topic.

Publishing 500 AI-generated articles about mortgage rates doesn’t make you an authority on mortgage rates. It makes you the 500th source saying the same thing in slightly different words. And Google already has 499 of those. It doesn’t need yours.

The qualitative wall is this: There is a minimum threshold of genuine value – original insight, lived experience, specific expertise, something the reader cannot get elsewhere – below which no amount of volume helps you. You can publish a million pages below that threshold. You’ll rank for nothing that matters.

And it gets worse. For the people scaling AI content specifically to gain visibility in AI-powered answer systems, the volume strategy doesn’t just fail; it actively backfires. A 2025 paper on retrieval evaluation for LLM-era systems introduces a metric that measures both helpful and distracting passages in retrieval. The finding that matters here: Low-utility content doesn’t sit quietly in the index waiting to be ignored. It can pull retrieval models off-track, degrading the quality of answers those systems produce.

Your 500 thin articles aren’t just invisible. They’re noise. And if your site also has genuinely useful pages buried in that noise, congratulations – you’ve built your own interference pattern. The volume you thought would help discovery is actively drowning the pages that might have earned it.

This isn’t a new insight. It’s the same insight that content spinners ignored in 2010, that programmatic SEO factories ignored in 2018, and that AI content mills are ignoring right now. The tools got better at producing text. The text still has nothing to say.

Google Told You. Repeatedly

Google’s spam policies define scaled content abuse as generating pages “for the primary purpose of search rankings and not helping users.” They explicitly list “using generative AI tools or other similar tools to generate many pages without adding value for users” as an example. This is not subtext. It’s text.

In June 2025, Google began issuing manual actions specifically for scaled content abuse, targeting sites that had been mass-publishing AI-generated content. Sites across the UK, US, and EU received Search Console notifications citing “aggressive spam techniques, such as large-scale content abuse.” Complete visibility drops. Pages didn’t slide down the rankings; they vanished.

The August 2025 spam update continued the enforcement. Subsequent core updates have kept tightening the screws. Each time, the same profile gets hit: high volume, low substance, no editorial oversight.

And each time, the affected site owners acted surprised. As if Google hadn’t been telling them this for 15 years.

‘But Our Content Is Ranking Well’

This is my favorite delusion. I’ve seen it at every stage of this cycle. “Our AI content is ranking, so it must be fine.” The sites claiming “this is ranking well” are often precisely the ones that Google’s algorithmic improvements and manual actions end up targeting. If your low-value content is ranking, the system hasn’t gotten to you yet. That’s all it means.

Google aggregates signals at the site level, not just the page level. You can have individual pages performing while the overall quality signal of your site degrades. And when the enforcement catches up (algorithmically or manually), it doesn’t pick off pages one by one. It hits the lot.

This is the content spinner’s fallacy, recycled: “It’s working right now, so it must be a strategy.” Demand Media’s content was ranking too. Right up until it wasn’t.

Lily captured this perfectly: “The case study: scaling AI content is working! The reality:” – followed by the traffic cliff that inevitably arrives. Every scaling success story is a snapshot taken before the correction. Nobody publishes the sequel.

Image Credit: Pedro Dias

The Economics Don’t Even Make Sense

Set aside the risk for a moment. Let’s talk about what you’re actually producing.

Five hundred AI-generated articles a month. Each one needs to be reviewed for accuracy – because LLMs hallucinate, and publishing incorrect information is a liability that extends well beyond SEO. Each one needs to be checked for originality – because if it reads like everything else in the index, it provides no added value and no competitive advantage. Each one needs editorial oversight to ensure it actually serves the audience you claim to serve.

If you’re doing all of that, the cost just moved – and possibly increased – while you convinced yourself you were being efficient. The “efficiency” of AI content generation evaporates the moment you apply the quality standards the content actually needs to meet.

And if you’re not doing any of that? You’re publishing unreviewed, unoriginal, potentially inaccurate content at scale under your brand name. I genuinely do not understand how anyone signs off on that.

Same Mistake, Better Tools

Content spinning. Programmatic SEO. AI-generated content at scale. Three different tools, one identical mistake: treating content as a manufacturing problem.

Manufacturing produces identical outputs at scale – that’s the point. Content derives its value from the opposite: from being specific, from being informed by experience, from saying something the rest of the index doesn’t. Every attempt to industrialize it crashes into that contradiction.

You can’t automate specificity. You can’t template experience. You can’t generate original thought by running a prompt through an LLM and hoping something useful comes out. And these constraints won’t be solved by the next model release. They’re baked into what makes content worth reading in the first place.

The people who keep chasing scale are optimizing for the wrong variable. They see “more content” as an input that produces “more traffic” as an output. But the function is not linear. It never was. It’s gated by quality, and no amount of volume bypasses the gate.

The Only Question That Matters

Before you publish anything (AI-assisted or otherwise), ask one question: What does this page offer that the reader cannot already get?

If the answer is “nothing, but we’ll have more pages indexed,” you’re not building a content strategy. You’re building a liability. And you’re doing it with the confidence of someone who has apparently never heard of Panda, never looked at what happened to programmatic SEO sites in 2022, and never read Google’s own spam policies.

You can convince yourself for as long as you want. But you’ll only fool everyone else for a while.

The wall is still there. It’s always been there. The tools keep changing. The wall doesn’t.



This post was originally published on The Inference.


Featured Image: Roman Samborskyi/Shutterstock

What’s Hot, What’s Not: AI Search Changes In Q1 2026 [Recap] via @sejournal, @MattGSouthern

SEJ Live’s opening panel covered three months of AI search changes from three angles. I covered the news, SEJ Founder Loren Baker covered the business case, and Managing Editor Shelley Walsh covered content strategy. The on-demand recording is available here.

The session was called “What’s Hot, What’s Not,” and our goal was to identify the Q1 changes worth acting on in Q2 and the steps you can start taking today.

AI Overviews Are Costing Clicks, But Not All Of Them

The headline number from Q1 is that clicks drop when AI Overviews appear, but the loss varies by query type. Google’s VP of Search, Robby Stein, said that when people scroll past an AI Overview without engaging, Google pulls it back for that query. The pages losing traffic are the ones answering simple questions. If someone searches for store hours or a return policy, the AI answers it, and nobody clicks through.

Shelley pointed to data from Amsive showing that branded queries with AI Overviews see an 18% increase in click-through rates. When people trust a source, they click through even when a summary is available.

She also pointed out that between half and three-quarters of all queries don’t trigger an AI Overview at all, depending on whose data you use. BrightEdge puts it at about half. Conductor puts it higher. Either way, there are entire categories of queries where you can still compete without an AI Overview in the way.

AI Mode And ChatGPT Are Both Selling Ads Now

AI Mode crossed 100 million monthly active users in the U.S. and India, with 75 million using it daily. During Q1, Google expanded how it monetizes AI-powered search, including Direct Offers in AI Mode, which lets businesses place promotions inside AI responses.

OpenAI began testing ads in ChatGPT for logged-in adult users on the Free and Go tiers. Industry reports put the early pricing at about $60 CPM with a $200,000 minimum commitment. OpenAI said the ads use the current conversation context for targeting.

Between Google and OpenAI, there are now multiple ways to place ads inside AI-generated answers. That wasn’t the case a few months ago.

Start tracking how often your brand gets mentioned in ChatGPT and AI Mode responses. You’ll want to know where you stand before deciding whether paid placement makes sense.

Replaceable Content Is What AI Threatens

Shelley’s segment drew a line between replaceable and valuable content. AI can summarize “what is SEO” or “how to change a bike chain” as well as any page that restates common knowledge. If your content is built on answering those kinds of questions, you’re competing directly with AI.

But content based on original research and firsthand experience is different. Shelley called this “golden knowledge,” borrowing a phrase from SEO veteran Grant Simmons. It’s your data and your experience. LLMs can’t generate it from training data.

Shelley said this looks like video interviews and original research, plus opinionated commentary from practitioners. She pointed to SEJ’s own changes as an example. SEJ has moved editorial toward experience-first formats and shifted revenue from programmatic to sponsorship and downloadable assets. Growing a direct audience is now the top priority.

The question to ask, she said, is why someone would click through from an AI summary to your site. If your content is a summary, there’s no reason. If it has depth, case studies, implementation detail, or nuance the summary can’t contain, that’s what drives the click.

Schema Markup Now Trains LLMs Across Platforms

Loren’s segment made the case that structured data has more value now than at any point in the last decade. Schema markup has always helped with rich snippets in Google. Now it also trains LLMs across platforms.

He shared an example of a client whose CEO has a common name; searching for that name plus “CEO” surfaced executives from other companies. Loren implemented organization and person schema, and as soon as it went live, the correct CEO appeared in AI Overviews.
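
As an illustration of what that markup can look like (names, URLs, and social profiles below are placeholders, not the client in Loren's example), here is a minimal Python sketch that builds Organization and Person schema and prints it as JSON-LD, ready to publish inside a script type="application/ld+json" tag:

import json

schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://www.example.com/#org",
            "name": "Example Corp",
            "url": "https://www.example.com/",
        },
        {
            "@type": "Person",
            "name": "Jane Doe",
            "jobTitle": "Chief Executive Officer",
            "worksFor": {"@id": "https://www.example.com/#org"},
            "sameAs": ["https://www.linkedin.com/in/janedoe/"],
        },
    ],
}

# Validate the output with a rich results or schema testing tool before publishing.
print(json.dumps(schema, indent=2))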

Loren ranked the structured data signals AI systems respond to. Schema markup was at the top, followed by clean heading hierarchy and semantic HTML. He flagged llms.txt as an emerging standard worth watching.

On markdown, Loren noted that Cloudflare had announced a new /crawl endpoint that same morning. The feature renders sites in clean HTML and markdown for LLMs, plus structured JSON. Loren’s point was that if Cloudflare is building this at the platform level, and LLMs learn from markdown, then the tooling to serve it is growing.

Getting Schema Off The Dev Backlog

Loren’s most relatable point was about internal buy-in. Anyone who’s worked with development teams knows schema tends to sit in the backlog behind other priorities. But the conversation changes when you tie technical SEO work to AI visibility.

Tell a client that AI answers depend on structured data, and that ticket moves up the sprint board. He connected this to broader executive buy-in. C-suite leaders are seeing AI Overviews and ChatGPT answers about their companies, and they’re asking questions. That attention creates an opening to secure funding for technical work that would have stalled in previous years.

For ecommerce specifically, Loren recommended the Shopify Knowledge Base App, which crawls product content and generates question-and-answer pairs.

Looking Ahead

During Q&A, the panel was asked about AI-generated content. Shelley confirmed that Search Engine Journal’s content is human-written, and we plan to keep it that way. All three of us agreed that AI works best as an augmentation tool for writers who already know their subject.

The full session, including the Q&A, is available on demand. The other two sessions from the event are also available. CallRail’s Emily Popson covered AI search KPIs in Session 2, and Forrester’s Nikhil Lai covered answer engine strategy in Session 3.



Featured Image: Search Engine Journal

Search Referral Traffic Down 60% For Small Publishers, Data Shows via @sejournal, @MattGSouthern

Search referral traffic to small publishers dropped 60% over two years, according to Chartbeat data reported exclusively by Axios.

That’s nearly three times the decline at large publishers. The analytics firm, which tracks traffic across thousands of client websites globally, segmented its network by size. Mid-sized publishers (10,000 to 100,000 daily page views) lost 47%, and large publishers (over 100,000 daily page views) lost 22%.

What’s New

Aggregate search traffic data from Chartbeat isn’t new. Our January Reuters Institute coverage cited Chartbeat data showing a 33% global decline in Google Search referrals. What’s new is the size breakdown. Previous Chartbeat figures were aggregates; this data shows the losses are concentrated among the smallest publishers.

Page views from Google Search fell 34% between December 2024 and December 2025, per the Chartbeat data. Google Discover, the other top referral source, fell 15% over the same period.

ChatGPT referrals grew more than 200% during that window, but chatbots still account for less than 1% of all publisher page view referrals. Growth in chatbot traffic hasn’t come close to replacing what search lost.

How Larger Publishers Are Compensating

Larger publishers appear to be finding alternative traffic sources to partially offset search losses. News and media sites in particular are seeing growth in direct and internal traffic as a share of referrals.

Email and app referrals are also growing among news publishers, per the Axios report. Our Reuters Institute coverage in January found the same pattern, with publishers saying they planned to invest more in owned channels.

Overall weekly page views across all publishers in Chartbeat’s network dropped 6% between 2024 and 2025. The firm attributed that to factors outside search, including a quieter election cycle, though that’s their interpretation, not a measured cause.

AI Referral Engagement Varies By Site Type

One finding that stands out for content strategy is that news and media sites get the highest total page views from AI chatbot referrals, but the lowest engagement per article.

Axios reports that this pattern suggests readers use news citations in chatbots for quick fact-checks or context, not deeper reading.

The other category in the data is “utilitarian sites,” meaning publishers offering health advice or gardening tips. Those publishers see fewer total referrals from AI platforms but more page views per article.

Methodology Notes

Chartbeat sells analytics tools to publishers and has tracked traffic across its client network for close to two decades. Its data covers thousands of websites globally but skews toward news and media publishers.

Small publishers in this data average 1,000 to 10,000 daily page views, medium is 10,000 to 100,000, and large is over 100,000.

Axios received the data exclusively, and Chartbeat hasn’t published it independently.

Why This Matters

Search referral traffic loss is hitting sites with the fewest resources to build alternative traffic.

Most reporting on search traffic declines has treated publishers as a single group. This Chartbeat data breaks the numbers down by publisher size. For anyone working with smaller publishers, these numbers should change the conversation.

AI chatbot users click to news sites for quick checks but spend more time on how-to content. That means the value of an AI referral depends on what you publish.

Looking Ahead

We’ll be watching for Chartbeat to publish the full data set. How chatbot referral engagement differs by site type is still early data worth tracking.


Featured Image: fizkes/Shutterstock

Google Explains Why HTTPS Migration May Negatively Impact SEO via @sejournal, @martinibuster

Google’s John Mueller answered a question about moving to HTTPS, explaining why the process of making a site secure is actually a major undertaking that can have a negative impact on rankings.

Loss Of Top 3 Google Rankings

A person asked on Reddit why they lost their top 3 rankings in Google after making their site secure with HTTPS. They also replaced their old WordPress theme and updated their content.

They explained their situation and asked for advice:

“We have a 15 year old financial website hosted with godaddy deluxe plan, suddenly disappeared in google after moving https. We replaced our wordpress old theme and updated new content. Our old http site scored top 3 in google. We implemented 301 using real simple ssl few days ago so far rankings not recovered. Some of the http links still not crawled and updated by google.

Do you think going back to http would recover our rankings? We feel all is lost. Any chance of recovery.”

HTTPS Migration

There are multiple things that stand out as possible reasons for losing their rankings. But John Mueller focused exclusively on the HTTPS migration as the likely reason for losing their rankings.

Mueller responded:

“Moving to HTTPS is a bit like a site migration, all the URLs have to be recognized, recrawled, and reprocessed individually. So especially if this move was made a few days ago, you need to give it time to recover (in particular, don’t use the URL removal tool to try to get rid of the HTTP URLs, since it will also remove/hide the HTTPS URLs). (I won’t touch upon finally moving to HTTPS after so many years, but I guess I just did :))”

All Is Not Lost

I have had several occasions to test how quickly Google can return an entire website to its former rankings, and I have been pleasantly surprised at how fast it processes a major site change or recovers a site that has been offline for as long as a month.

The person is rightfully having a freakout about losing their rankings, but it’s only been a few days. Mueller said to give it some time, and based on my own experiences, I would agree.
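
While waiting, it is worth confirming that the redirects themselves are behaving. A minimal sketch (Python with requests; the URLs are placeholders for your own top pages) checks that each old HTTP URL answers with a 301 pointing at its HTTPS equivalent:

import requests

# Placeholder list; replace with your most important legacy HTTP URLs.
HTTP_URLS = [
    "http://www.example.com/",
    "http://www.example.com/about/",
]

for url in HTTP_URLS:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location.startswith("https://")
    print(f"{url} -> {resp.status_code} {location} {'OK' if ok else 'CHECK'}")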

Featured Image by Shutterstock/Anton Vierietin

SEO Test Shows It’s Trivial To Rank Misinformation On Google via @sejournal, @martinibuster

An SEO crafting a newsletter with AI spotted a hallucination about a March 2026 Google Core Update and decided to publish it as an experiment to see how misinformation spreads. While search marketing industry publications ignored the fake news, some independent SEOs picked it up and ran with it without first checking its factual accuracy.

Mistake Leads To A Double Take

The person who did the experiment, Jon Goodey (LinkedIn profile), published a LinkedIn article that purposely contained an AI hallucination about a non-existent March 2026 Google Core update. He explained, in a subsequent LinkedIn post, that his AI workflow includes human quality control to catch AI mistakes, and when he spotted this one, he decided to go ahead and publish it to see if anyone would dispute or challenge the false information.

Google Ranks Misinformation

Goodey explained that it was Google itself that fueled the misinformation about the fake core algorithm update as his LinkedIn newsletter ranked for the phrase Google March Update 2026. The fake news ranked in Google’s classic search and in AI Overviews.

He explained:

“My LinkedIn article began ranking on the first page of Google for “Google March update 2026.” Not buried on page three. Right there, visible to anyone searching for information about recent Google algorithm changes.

…Google’s own AI Overview feature picked up the fabricated information and presented it as fact.”

Google’s fact checking in the search results is basically non-existent, so it’s not surprising that Google’s search engine would rank the fake information, especially for anything related to SEO. Using Google for SEO queries is like playing a slot machine: you have no idea if the information will be right or a total fabrication.

Searching for information about a dubious black hat tactic (like Google stacking) may cause Google to actually validate it, potentially misleading an honest business person who wouldn’t know better.

Screenshot Of Google Recommending A Black Hat SEO Tactic

This is a longstanding black spot on Google’s search results and is why it’s not surprising to see Google spew out misinformation about a fake Google update.

Websites Echo Misinformation

The result is that SEO websites began repeating the false update information because, of course, Google core updates are a traffic magnet and a way some SEOs attract potential clients. There’s a long history in the SEO community of stirring up noise about non-existent updates, so again, it’s not surprising to see SEO agencies pick up this ball and run with it.

Goodey shared:

“Multiple websites published detailed, authoritative-sounding articles about the “March 2026 Core Update,” treating it as confirmed fact. These weren’t throwaway blog posts. They were detailed pieces with specific claims about Gemini 4.0 Semantic Filters, Information Gain metrics, and recovery strategies.”

Most News Sites Ignored The Fake Update

SEJ and our competitors ignored the fake March update news. But a technology site apparently did not, with Goodey calling them out about it.

He wrote:

“Another site, TechBytes, went even further with a piece by Dillip Chowdary headlined “Google March 2026 Core Update: Cracking Down on ‘Agentic Slop’.” (Oh, the irony…).

This article invented specific technical details including claims about a “Gemini 4.0 Semantic Filter,” a “Zero Information Gain” classification system, and a “Discover 2.0 Engine” prioritising long-form technical narratives.”

Google Has A Policy About Fact Checking

I recall Google’s Danny Sullivan talking about how Google doesn’t do fact checking, but I couldn’t find his tweet or statement. There is, however, an Axios news report on fact checking in which a Google spokesperson affirms that Google will not abide by an EU law that requires fact checking.

According to the news article:

“In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google’s global affairs president Kent Walker said the fact-checking integration required by the Commission’s new Disinformation Code of Practice “simply isn’t appropriate or effective for our services” and said Google won’t commit to it.

The code would require Google to incorporate fact-check results alongside Google’s search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google’s current approach to content moderation works and pointed to successful content moderation during last year’s “unprecedented cycle of global elections” as proof.
He said a new feature added to YouTube last year that enables some users to add contextual notes to videos “has significant potential.” (That program is similar to X’s Community Notes feature, as well as new program announced by Meta last week.)”

Takeaways

Jon Goodey had multiple takeaways, with the most important one being that people should fact check what they read online.

Other takeaways are:

  • AI workflows should have validations built into them.
  • Most readers don’t fact check (only a few commenters disputed the false claims).
  • AI overviews and search amplify misinformation.
  • One article gets echoed across the internet, with other sites repeating and embellishing the original false information.

Featured Image by Shutterstock/Rawpixel.com

How To Use AI To Streamline Time-Consuming SEO Tasks via @sejournal, @coreydmorris

SEO, like most organic (non-paid) channels in digital marketing, is labor-intensive. Yes, there are software suites, analytics platforms, research tools, and a number of other things in the tech stack that help.

We all have our favorites, and no one is (or should be) doing SEO like I was in 2008 (despite my occasional desire to just do something manually where I can see the inputs and outputs and have more control, but I digress).

In the midst of constant noise about new platforms, new ranking factors, ways to become visible in AI, and everything else, it can be hard at times to keep going with the tasks that still require a human at some level. Whether it is gaining efficiency, scaling efforts, doing more with less, or a combination of these, I’m sharing human-involved ways to streamline time-consuming tasks so you can gain time (and maybe money).

1. Generating Meta Descriptions, Page Titles, Alt Text

I could have started with something more high-level or strategic, but I’m getting this one out of the way right now.

The basic blocking and tackling of ensuring you have unique, helpful, and topically relevant meta descriptions, page titles, and image alt text can be a huge investment of time on a large website or across sites if you own tactical SEO for multiple sites or clients.

While there are ways to have these tags auto-generated programmatically by a database or CMS, we know that, in a lot of cases, there’s still a manual process or intervention to audit the tags and ensure they are written to best practices and strategic positioning.

Also, I know there’s plenty of discussion or debate on whether there’s even value in creating titles and meta descriptions. I’m not going there. But I will say that, if creating them is on your task list, you can spend a lot of hours (and the cost of those hours or outsourced resources) for a minimal return.

Leverage tools based on what you’re already paying for or what tech ecosystem you’re in, like Screaming Frog + OpenAI API + a WordPress plugin, which can save thousands of dollars and many dozens of hours.

Putting It Into Action

Steps for generating alt text at scale:

  1. Get your OpenAI API key:
    • In your OpenAI dashboard at platform.openai.com, go to API keys.
    • Create a new secret key and name it something you’ll remember, like Screaming Frog.
    • Make sure you have credits in your account (a few dollars can go a long way).
  2. Set up your Screaming Frog crawl:
    • Set up your OpenAI configuration by going to Configuration > API Access > AI. Enter your API Key into the field. Press Connect.
    • Set up a prompt to generate alt text by going to the Prompt Configuration tab. Click Add from Library > System > Generate alt text for images.
    • Set up your crawl configuration and don’t forget to go to Spider > Rendering and change the rendering mode from Text Only to JavaScript. Then, go to Extraction and, under HTML, check Store HTML and Store Rendered HTML.
    • Run a test crawl on one URL to ensure the output works for you. Tweak the prompt if you’d like.
  3. Run the crawl.
  4. Export to a CSV.
  5. Format the file with two columns: image URL, alt text (a small reshaping sketch follows these steps).
  6. Add this plugin to the site: https://wordpress.org/plugins/alt-text-updater/.
  7. Upload the file.
  8. Crawl your site and do manual checks to test that images have alt text.
  9. Deactivate and uninstall the plugin.
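
For step 5, a minimal reshaping sketch (Python standard library only; the file names and column names are assumptions, so adjust them to match your actual Screaming Frog export) turns the crawl export into the two-column file the plugin expects:

import csv

INPUT_FILE = "screaming_frog_export.csv"    # assumed export file name
OUTPUT_FILE = "alt_text_upload.csv"          # two-column file for the plugin
URL_COLUMN = "Address"                       # image URL column (assumed name)
ALT_COLUMN = "Generated Alt Text"            # AI output column (assumed name)

with open(INPUT_FILE, newline="", encoding="utf-8") as src, \
     open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as dest:
    reader = csv.DictReader(src)
    writer = csv.writer(dest)
    for row in reader:
        url = row.get(URL_COLUMN, "").strip()
        alt = row.get(ALT_COLUMN, "").strip()
        if url and alt:
            writer.writerow([url, alt])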

2. Structuring Content Outlines

This might be one of the most common things we do when starting SEO or in periodic content organization, expansion projects, or ongoing content creation. With content being what I call the “fuel” of SEO (and also visibility in AI search), it is still as important as ever to organize it well and present it in a way that makes sense to site visitors and the machines that are also learning it.

While you might not be able to automate this out of the box or in a single prompt in your favorite LLM, you can definitely speed up the process and gain some insights into connections you might not make on content themes on your own (my favorite bonus).

Whether you’re working on a single article, a longer-term content calendar, reorganizing evergreen content, or other content-specific tasks, mastering the art of prompt creation, coaching the AI agent, ensuring the output is good, and using project folders (with brand style guides) in ChatGPT can ensure the quality and speed the more you produce.

Putting It Into Action

Example Prompt

You are an expert SEO who specializes in content writing for [industry]. Your task is to create an outline for an article for [topic]. The article outline should cover the following subtopics: 

[subtopic 1], 

[subtopic 2], 

[subtopic 3]. 

The article should target the following keywords: 

[keyword]

[keyword]

[keyword]

Attached are the HTML files of pages currently ranking well in Google search results to use as guidance. Review the HTML files and generate a content outline. 

3. Creating Project Briefs

Going a little higher level into organizing the work we do, connecting desired outcomes to strategies and ultimately to tactics, project briefs are something you might not do every day.

I like to think about SEO in projects or sprints as a way to break up the big nature of ongoing and long-term work that requires short-term progress and tactics. Regardless of how you organize the work, you likely have a lot of varying documentation and information. Whether in sheets, documents, decks, or other sources, you have information that you can feed together into your LLM of choice to have AI organize and sort out.

Whether you’re doing this formally to produce a report deliverable or informally to help your team or yourself organize the minutiae of SEO information, I can point to examples of my team using Gemini to read through a bunch of documents, including meeting notes, personal notes, transcripts, AI transcripts, agendas, competitor lists, research, emails, and more.

This can be helpful for a number of uses, including putting together a document that can be helpful for personal reference, team reference, onboarding, and articulation of the overall knowledge base for stakeholders.

Putting It Into Action

Example Prompt

You are an experienced Senior Marketing Strategist and you’re onboarding your team for [describe project]. Your task is to create a comprehensive project brief for [name of campaign or project].

Ensure the project brief takes into account the following project details:

Objective: [what is the overarching goal of the project]

Target audience: [overview of the demographics]

Key messaging: [provide details about campaign messaging]

Channels: [what channels will be incorporated into the campaign/project]

For the deliverable, the output should include the following:

Project Overview: Include a 1-2 sentence summary of the project

Success Metrics: [provide KPIs]

Budget: [provide financials]

Timeline: [provide deadlines and milestones]

Generate the project brief as a professional, internal-facing document.

Classifying Keywords

Prompt for using the AI function in Google Sheets to classify keywords by search intent, segment, branded/non-branded, etc. Swap A2 for the cell that contains the keyword, then fill the formula down the column.

=ai("Act as an SEO Specialist. Classify the following Keyword into exactly one of these Categories: [Informational, Navigational, Commercial, Transactional].

Rules:

Informational: User is looking for an answer or guide.

Commercial: User is researching products/services before buying.

Transactional: User has high intent to buy/convert now.

Navigational: User is looking for a specific website/brand.

Keyword: " & A2 & "

Result: Return only the category name, with no extra text or punctuation.")

4. Segmenting Keywords

In SEO today, we’re not necessarily focused on granular keywords. However, they are still important in research and strategy planning, along with more tactical work in guiding content topic building and creation.

When you do your research and have your list of keywords from any source, you can utilize the Google Sheets AI function to categorize them by topic, pillar, branded/non-branded, localized or not, search intent, etc.

You can also run keywords through an LLM and have it categorize them, export the output, import it back into your spreadsheet, and align it to your data using a VLOOKUP function (our recommended route, as my team doesn’t think the Google Sheets AI function is where we want it to be yet).

While this method might still feel manual compared with where better AI and tooling will eventually take us, it is still much faster than doing everything by hand. I encourage you to use your own spreadsheet logic or regular expressions (regex) to categorize as much as you can efficiently before going to AI, especially if your dataset is extensive.
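
Here is a minimal pandas sketch of that pre-categorize-then-merge approach, standing in for the spreadsheet VLOOKUP step. The brand terms, the localization pattern, and both file names are assumptions to adapt to your own data.

# Pre-categorize keywords with regex, then align LLM intent labels back to the
# keyword list (the VLOOKUP equivalent). File names and patterns are assumptions.
import re
import pandas as pd

keywords = pd.read_csv("keywords.csv")               # expects a "keyword" column
llm_labels = pd.read_csv("llm_classifications.csv")  # expects "keyword" and "intent" columns

BRAND_PATTERN = re.compile(r"\b(acme|acme corp)\b", re.IGNORECASE)    # hypothetical brand terms
LOCAL_PATTERN = re.compile(r"\b(near me|in [a-z ]+)\b", re.IGNORECASE)

# Handle the easy, rule-based categories before involving AI.
keywords["branded"] = keywords["keyword"].str.contains(BRAND_PATTERN).map({True: "branded", False: "non-branded"})
keywords["localized"] = keywords["keyword"].str.contains(LOCAL_PATTERN).map({True: "local", False: "not local"})

# Merge the LLM's intent labels back onto the keyword data.
merged = keywords.merge(llm_labels, on="keyword", how="left")
merged.to_csv("keywords_segmented.csv", index=False)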

5. Documenting Competitor Outlines

While I have to admit that I like to visually check out competitor websites for my first impression and a quick, informal sophistication check, automating this is a huge time-saver.

For example, Gemini is really good at outlining the content structure of a webpage, so my team likes to feed it three or four competitor URLs that are ranking well or have high visibility for a topic we’re building a strategy for, and it gives us an outline of each page. That includes the messaging, the targeting, and the baseline content blocks each page has, which we can use when we do content development on our side.

Disclaimer: Just like in the olden days, don’t copy directly and don’t steal. Verify that what you’re getting back out of the tool you’re using isn’t ripping someone off. That’s on us to validate.

Putting It Into Action

Example Prompt

You’re an expert SEO strategist and you’re conducting a competitive content analysis of your client’s page against pages currently outranking it in Google for the search term [keyword]. The client is a [describe client and industry]. The page is [describe purpose of the page and topic].

I’ve attached the HTML file of the client’s page, as well as the HTML files for the competitor pages. Your tasks are to provide me with:

An outline for each page of the content blocks present in the HTML

An overview of the messaging, tone, and voice

A list of outgoing internal links in the content

Content gaps between the client's page and the competitors 
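
Alongside the prompt, a quick deterministic starting point (not the Gemini approach itself) is to pull the heading structure out of each saved competitor HTML file. This minimal sketch assumes hypothetical file names and the beautifulsoup4 library.

# Print a rough content-block outline (H1-H3 headings) for each saved
# competitor page. File names are assumptions.
from bs4 import BeautifulSoup

COMPETITOR_FILES = ["competitor_1.html", "competitor_2.html", "competitor_3.html"]

for path in COMPETITOR_FILES:
    with open(path, encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    print(f"\n=== {path} ===")
    for heading in soup.find_all(["h1", "h2", "h3"]):
        indent = "  " * (int(heading.name[1]) - 1)  # indent by heading level
        print(f"{indent}{heading.name.upper()}: {heading.get_text(strip=True)}")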

6. Conducting SERP Analysis

We can’t afford to waste impressions, or any visibility we earn, by showing up for the wrong topics. SEO now is about quality, and we can’t miss the mark on search intent.

One big time-saver is to build your seed keyword list in Ahrefs and then export it with SERP data. Then, feed that spreadsheet into Gemini and have it provide a breakdown of organic competitors per keyword, the intent of the ranking organic pages per keyword, and so on. This saves you from reviewing hundreds and hundreds of rows by hand. My team usually filters out AI Overviews and ad placement data first to condense the file a bit.
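
Here is a minimal pandas sketch of that filtering step, assuming hypothetical file names and column headers such as “SERP feature,” since Ahrefs export headers vary; adjust them to match your actual export.

# Condense an Ahrefs keyword export before handing it to Gemini: drop AI
# Overview and ad placement rows and keep only the columns the analysis needs.
# Column names and file names are assumptions.
import pandas as pd

serp_data = pd.read_csv("ahrefs_keywords_with_serp.csv")

# Remove SERP features we don't want in the analysis.
EXCLUDED_FEATURES = ["AI Overview", "Paid top", "Paid bottom", "Shopping"]
filtered = serp_data[~serp_data["SERP feature"].isin(EXCLUDED_FEATURES)]

# Keep only the columns that are actually useful downstream.
columns_to_keep = ["Keyword", "Volume", "Position", "URL", "SERP feature"]
filtered = filtered[[c for c in columns_to_keep if c in filtered.columns]]

filtered.to_csv("serp_data_for_gemini.csv", index=False)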

This type of work has been helpful in figuring out informational versus commercial intent SERPs at scale so that we’re targeting the right keywords with the right content. It has also been helpful in understanding the level of competition within a topic, so we know what to avoid and what long-tail keywords may represent realistic opportunities.

I will emphasize, though, that exported SERP data isn’t 100% accurate, and localization and personalization will change the SERPs users actually see. But it’s helpful for comparing keywords against each other. We also do manual SERP reviews to confirm findings. Again, validate as a human what you’re getting from tools.

In Closing

There’s a lot of time and money you can reclaim by leveraging automation, deeper use of your tools, and the power of AI for SEO. You probably also detected a theme: in pretty much everything you do, there have to be solid inputs to get useful outputs, and those outputs still require human validation and experience before you can trust them.

Regardless of where you are with automation, the goal of doing more with less, scaling tasks, and cutting manual work with low return on investment is a great lens for deciding where to lean more on technology and less on manual effort.

Featured Image: ArtEternal/Shutterstock

How To Build An SEO Commissioning Workflow: From Tickets To Requirements via @sejournal, @billhunt

Enterprise SEO doesn’t fail because teams lack knowledge. It fails because they’re invited too late.

In most large organizations, SEO still operates in a reactive posture. Teams review pages after launch, run audits, document issues, file tickets, and then wait, often for months, for other teams to implement changes. Modern search visibility is no longer shaped by tweaks. It is shaped by what gets built upstream.

High-performing organizations have responded by changing SEO’s role entirely. Instead of treating SEO as a cleanup function, they’ve repositioned it as a commissioning function, one that defines the exact requirements digital assets must meet before they are ever created. This article explains how enterprises can formalize that shift by building an SEO commissioning workflow: a structured, repeatable process that embeds search requirements into digital creation at the moment decisions are made.

The Problem With Ticket-Based SEO

In the traditional enterprise model, SEO enters the workflow after launch. Content is created or revised without input from SEO, and the resulting changes often harm search performance. The SEO team then investigates the decline, identifies the new or updated content or templates responsible, and files tickets to adapt them, aiming to recover what was lost or, in the case of new content, to capture what was never gained. Those tickets are placed into development queues alongside revenue initiatives, product launches, and executive priorities.

What follows is predictable. Fixes are delayed. Implementation is partial. Some issues are addressed, others are deferred, and many recur in the next release because the root cause was never fixed. This model creates three chronic failures.

  • First, SEO is perpetually behind. It is reacting to outcomes rather than shaping them.
  • Second, SEO relies on persuasion rather than process.
  • Third, structural mistakes multiply faster than they can be fixed. Every new page, template, or market rollout becomes another opportunity to replicate the same issues at scale.

When SEO lives downstream, every asset is a potential liability. The organization becomes very good at discovering problems and very bad at preventing them. Progress depends on relationships and goodwill rather than enforceable requirements. Commissioning exists to flip that dynamic.

What SEO Commissioning Actually Means

Instead of reviewing pages after they are launched, leading organizations have begun moving SEO to the moment digital assets are conceived.

At that stage, the question is no longer whether a page can be optimized later. The question becomes whether the asset is designed so that search systems can understand it from the start. Content structure, template behavior, entity representation, internal linking roles, and market alignment are all determined before production begins. When those decisions are made upstream, discoverability becomes a property of the system rather than a series of corrections applied after launch.

A useful analogy comes from high-rise construction. On complex projects, builders often assign a dedicated commissioning agent whose job is not to install anything directly but to ensure that all the independent systems going into the building, including HVAC, elevators, electrical systems, glass, fire controls, and dozens of other components, work together as a coherent whole. Without that coordination, the building may be technically complete yet fail to function as a system.

SEO plays a similar role in digital environments. Instead of diagnosing problems after launch, SEO helps define the requirements that must be satisfied before assets move forward. Those requirements shape how content is commissioned, how templates behave, how entities are represented, and how information is structured so that search engines and AI systems can interpret it correctly.

When SEO participates at the design stage, teams stop asking, “How do we fix this later?” and start asking a more useful question: What must be true before this asset should exist at all? In that environment, SEO stops behaving like a repair function and becomes part of the design discipline that ensures digital systems work as intended from the beginning.

The SEO Commissioning Lifecycle

Organizations that operationalize SEO commissioning tend to follow the same lifecycle, even if they don’t label it explicitly. The difference is that high-performing teams make these stages intentional, documented, and enforceable.

1. Define Intent Before Creation

Every asset should begin with clarity about why it should exist from a search perspective.

At this stage, SEO identifies how users actually search for the topic or product, how intent is distributed across informational, commercial, and navigational needs, and what search systems typically surface, which determines eligibility. This prevents a common enterprise failure mode: well-written content that is structurally misaligned with how demand expresses itself.

Commissioning forces an uncomfortable but necessary question early in the process: Why would a search engine or AI system ever select this asset?

If that question cannot be answered clearly, the asset should not move forward.

2. Define Eligibility Signals

Before development or content production begins, SEO specifies the signals that must exist for eligibility.

This includes decisions about schema usage, page classification, metadata structures, heading hierarchies, internal linking roles, entity associations, media requirements, and – when relevant – market and language signals. The key distinction is timing. These decisions are not retrofitted later. They are defined before work begins, ensuring assets are born eligible rather than hoping eligibility can be added after the fact.

Eligibility becomes a prerequisite, not a gamble.

3. Define Structural Requirements

Commissioning also applies to platforms and templates, not just content.

This is where SEO moves closest to product and engineering teams, shaping the structures that determine discoverability at scale. URL rules, template architecture, rendering accessibility, navigation placement, internal linking frameworks, and content modules for depth are all defined here. These are not tactical SEO opinions. They are structural requirements that influence how thousands of pages will be interpreted by machines over time.

When SEO is incorporated at this stage, discoverability becomes a property of the system rather than the result of manual intervention.

4. Pre-Launch Validation (Search QA)

Before release, SEO validates that commissioning requirements were actually implemented.

This includes confirming crawlability, indexability, structured data integrity, entity consistency, internal linking alignment, market targeting, and content completeness relative to intent. This step is often misunderstood as “SEO QA,” but it is fundamentally different from traditional bug fixing. The purpose is not to discover surprises. It is to confirm compliance with requirements already agreed upon.

When commissioning is done correctly, this stage is fast and predictable.
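
As a simple illustration rather than a full QA suite, here is a minimal Python sketch of what an automated slice of that validation pass could look like, assuming the requests and beautifulsoup4 libraries, a hypothetical staging URL, and a handful of example checks.

# Confirm a batch of pre-launch URLs against a few agreed commissioning
# requirements: indexable status, no stray noindex, a canonical tag, and
# structured data present. URL and checks are illustrative only.
import requests
from bs4 import BeautifulSoup

URLS_TO_VALIDATE = ["https://staging.example.com/new-template-page"]  # hypothetical

for url in URLS_TO_VALIDATE:
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")

    robots_tag = soup.find("meta", attrs={"name": "robots"})
    robots_content = robots_tag["content"].lower() if robots_tag and robots_tag.get("content") else ""

    checks = {
        "returns 200": response.status_code == 200,
        "no noindex": "noindex" not in robots_content,
        "canonical present": soup.find("link", rel="canonical") is not None,
        "structured data present": bool(soup.find_all("script", type="application/ld+json")),
    }

    print(url)
    for name, passed in checks.items():
        print(f"  {'PASS' if passed else 'FAIL'}: {name}")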

5. Post-Launch Monitoring & Feedback

Commissioning does not end at launch.

SEO monitors performance relative to expectations, including visibility patterns, SERP feature capture, AI citation presence, market alignment, and template behavior at scale. Real-world query data then feeds back into future commissioning rules. This creates a virtuous cycle. SEO evolves from a reactive repair function into a continuous upstream optimization system that improves with each release.

Where Commissioning Lives In The Enterprise Workflow

For commissioning to work, it must live where decisions are made.

That means being embedded into product requirement documents, content briefs, CMS template design, sprint planning, market rollout processes, and governance checkpoints. SEO becomes a required approval step before assets move forward, not an optional reviewer afterward.

This is the difference between SEO as a service and SEO as infrastructure.

Why This Model Changes Everything

Ticket-based SEO creates backlogs and dependencies; commissioning-based SEO creates leverage and prevention. The benefits compound quickly.

Assets launch search-ready the first time, increasing speed rather than slowing it. Structural failures decline because mistakes are prevented upstream. Compliance scales automatically across thousands of pages. Content and entities are structured for machine retrieval from day one. And SEO stops fighting for attention because it is embedded directly into how work gets done.

Most importantly, commissioning aligns incentives. SEO success is no longer dependent on favors, persuasion, or heroics. It becomes a predictable outcome of a well-designed system.

The Hard Truth

Most enterprise SEO pain is self-inflicted. Organizations built workflows where SEO arrives late, lacks authority, fixes rather than defines, and is measured by outcomes shaped by others. Commissioning removes those structural handicaps.

It moves SEO to the point where search success is actually created: the moment decisions are made.

Coming Next

Commissioning solves timing; it does not solve ownership. In the next article, we’ll examine why SEO still fails without clear cross-functional accountability and how enterprises must redefine ownership if commissioning is going to scale.

Featured Image: Summit Art Creations/Shutterstock