The Agency Playbook for Surviving the Agentic AI Era

Search is moving from queries typed into a box to conversations held with systems that understand intent, context, and outcomes. People no longer look for pages. They look for solutions, guidance, and confidence that they are making the right choice.

Agentic AI pushes this shift further. Instead of waiting for instructions, agents act on goals. They discover information, compare options, trigger workflows, and adjust based on feedback. For digital leaders, this means visibility is no longer only a ranking problem. It becomes a problem of influence inside AI systems.

SEO now touches product, data, knowledge management, and experience design. This playbook explains how to prepare for that shift, build capability, and lead change.

Search Is Becoming AI-Mediated

AI systems have become the layer between users and the web. They read content on behalf of users, make selections instead of requiring users to browse, and influence decisions in ways that search pages once did.

This shift changes how people interact with information. Users now ask broader, more complex questions, expecting systems to understand nuance and intent. The traditional act of navigating through links is giving way to direct answers and immediate actions.

Content can no longer be designed solely for human readers. It must also be structured in ways that AI systems can interpret accurately and confidently. In this environment, trust and evidence carry more weight than keywords or search optimization tactics.

Winning in search today means becoming part of the models that shape decisions, not just appearing in the results.

What Agentic AI Means For SEO And Digital

Agentic AI is changing how people discover and choose brands. Discovery now depends on how well models learn from your content, the paths users take on your site, and the external signals that establish credibility. These systems decide when your brand is relevant, based on what they understand and trust.

During evaluation, AI compares your product, price, quality, reviews, and suitability for a given user against other options. It looks for proof, tests claims, and weighs real signals over marketing language.

When supporting decisions, AI doesn’t just provide information. It actively guides users toward what it considers the best fit. Your brand might be brought forward or quietly passed over, depending on how well it matches user needs.

In this landscape, SEO is no longer just about publishing content. It’s about shaping how AI systems perceive your brand and when they choose to recommend it.

New Operating Model For SEO

The future of search brings marketing, product, and data teams into a shared effort. Success depends on how well these areas work together to shape how AI systems perceive and present your brand.

The key is building structured knowledge that AI can easily process and apply. Instead of designing for clicks and views, focus on creating journeys that help users complete tasks through the systems guiding them. It’s also critical to train these systems with the right brand messages, supported by clear evidence and consistent proof points.

Ongoing visibility requires monitoring how models reference your brand, how they rank it, and how they reason about its relevance. This means continuously refining the signals you send, improving your content, updating product data, and reinforcing trust in every interaction.

The goal is clear, and it has not really changed from the technical goals of SEO: make it easy for AI agents to understand, trust, and ultimately recommend your brand.

Maturity Model

  • Level 0 (Manual SEO): Basic optimization and manual workflows. Key indicators: keyword focus, isolated content execution, minimal data alignment.
  • Level 1 (Assisted SEO): AI supports research and content creation. Key indicators: AI-assisted briefs, content suggestions, faster execution, manual oversight.
  • Level 2 (Integrated AI workflows): Core SEO tasks automated and structured. Key indicators: content pipelines, structured data adoption, automated QA, analytics integration.
  • Level 3 (Agent-driven operations): Agents monitor, trigger, and refine SEO. Key indicators: automated reporting, performance triggers, self-adjusting content modules.
  • Level 4 (Autonomous acquisition systems): Self-improving systems tied to revenue. Key indicators: continuous testing, adaptive journeys, revenue-linked triggers, real-time optimization.

The goal is not automation alone. It is intelligence and improvement at scale.

Technical And Data Foundations

To prepare for agentic SEO, organizations need more than traditional content systems built for publishing. They need strong foundations that help AI systems understand, evaluate, and act with confidence.

This starts with clarity, which means crafting messaging that is consistent, accurate, and easy for machines to interpret. Structure is also essential, requiring content, data, and signals to be organized in ways that align with how AI systems process and reason through information.

Key components of this are:

  • Structured data that turns content into machine‑readable knowledge.
  • Knowledge graphs that explain relationships between products, categories, and needs.
  • Taxonomy and naming standards to ensure consistency across pages, feeds, and assets.
  • APIs and automation for publishing and optimization, so agents can trigger updates.
  • Clean product and service data, including specifications, pricing, and availability.
  • Evaluation systems to audit AI outputs and detect hallucinations or misalignment.
  • Identity and trust signals, including reviews, authority, certifications, and product proof.
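To make the first and last of these concrete, a product page's structured data can be expressed as JSON-LD that combines machine-readable product facts with trust signals such as reviews. The product, identifiers, and values below are invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Shoe",
  "sku": "ETS-100",
  "brand": { "@type": "Brand", "name": "ExampleCo" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  },
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

Markup like this only helps if it reflects the actual state of the business: price, availability, and ratings should be generated from live data rather than hand-maintained.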

This calls for a shift from simply building web pages to creating a well-organized information architecture. The goal is to structure information in a way that AI systems can easily navigate, understand, and apply.

In practice, this means bringing together product data, content metadata, and customer intent into a single, connected system. It involves defining the key entities your business represents, such as products or services, and mapping how they relate to what users are trying to accomplish. Content feeds and structured data should reflect the actual state of the business rather than just marketing language.

Equally important is creating feedback loops that show how AI systems interpret and reference your brand. These insights help you see where your content is being used, how it is being understood, and whether it is guiding users toward your brand. With this information, you can keep refining what you share to improve how systems recognize and recommend you.

Instead of asking, “How do we rank for this query?” leaders will ask, “How do systems understand us, trust us, and act on our information?”

KPI And Measurement Model

Traditional key performance indicators still hold value, but they no longer capture the full picture. Rankings and session metrics continue to provide insight, yet they now exist within a broader framework shaped by how AI systems retrieve, interpret, and act on information. Ranking reports will sit alongside AI retrieval dashboards, and session counts will be evaluated alongside metrics focused on task completion and user outcomes.

In my opinion, you should also be looking to monitor:

  • Share of voice in AI assistants.
  • Retrieval and inclusion rate in AI answers.
  • Brand alignment and brand safety in model outputs.
  • Presence in multi‑step reasoning chains.
  • Task completion and conversion paths from AI systems.
  • Cost per automated workflow and cost per agent‑driven action.
  • Model education, data freshness, and trust scores.

As measurement evolves, the focus moves from tracking visitor numbers to understanding how AI systems shape decisions. To navigate this shift, leaders should design metrics that reflect influence within these systems. Visibility will measure whether the brand is appearing in AI-generated responses and assistant-led interactions.

Accuracy will assess whether the brand is being represented correctly and safely across touchpoints. Trust will reflect whether AI systems choose your content and signals over others when making recommendations. Action will capture whether AI-driven experiences result in tangible outcomes like leads, bookings, or purchases. Efficiency will show whether AI agents are reducing manual effort, improving speed, and delivering better user experiences.

Success will no longer be defined by visibility alone but by a brand’s ability to perform across discovery, decision support, and operational impact.

Talent And Capability Model

Agentic SEO is not a standalone skill set; it draws from a mix of disciplines that span marketing, data, and product. Success in this space requires a collaborative approach, where expertise is integrated rather than siloed.

Future-facing teams bring together SEO and content strategy, data and automation engineering, product and user experience thinking, as well as governance and prompt development. Legal and compliance awareness also play a critical role, ensuring that outputs remain responsible and aligned with brand and regulatory standards.

These teams operate in cross-functional pods, organized around delivering customer outcomes rather than managing individual channels. This structure allows them to move faster, adapt to change, and create more cohesive experiences across AI-driven platforms.

Modern SEO teams include several key roles. The SEO strategist focuses on how AI systems search, retrieve, and rank content. The data engineer manages the integrity of structured content, metadata, and live data feeds. The automation specialist builds the workflows and agents that connect information to user actions. The AI evaluator audits model outputs to ensure accuracy, brand alignment, and safety. The product partner bridges SEO efforts with real user journeys, making sure that discovery leads to meaningful interaction and conversion.

As this approach matures, teams will spend less time producing content manually and more time designing the systems, signals, and experiences that guide AI behavior and improve how users discover and engage with the brand.

The First 90 Days

Days 1 To 30: Foundation And Alignment

  • Audit content, data, and search performance.
  • Map where AI already touches customer journeys.
  • Identify gaps in structure, trust signals, and data quality.
  • Set goals for AI visibility and agent‑driven workflows.

Days 31 To 60: Build And Test Pilots

  • Launch structured data and knowledge base improvements.
  • Test AI‑assisted content and QA pipelines.
  • Introduce early agent monitoring for SEO signals.
  • Create evaluation benchmarks for AI accuracy and brand safety.

Days 61 To 90: Scale And Govern

  • Deploy automation in high‑impact workflows.
  • Formalize model governance and feedback loops.
  • Train cross‑functional teams on AI‑ready processes.
  • Build dashboards for AI visibility, trust, and conversion.

Future Outlook

Search will not disappear. It will merge into tasks, journeys, and decisions across devices and interfaces. Brands that train AI systems, structure knowledge, and build agent‑ready operations will lead.

The winners will not be those who automate content. They will be those who help users and systems make better decisions at speed and scale.


Featured Image: Collagery/Shutterstock

Research: “You Are An Expert” Prompts Can Damage Factual Accuracy via @sejournal, @martinibuster

“You are an expert” persona prompting can harm performance as much as it helps. A new study shows that persona prompting improves alignment with human expectations but can reduce factual accuracy on knowledge-heavy tasks, with effects varying by task type and model. The takeaway is that persona prompting works better on some kinds of tasks than it does in others.

Persona Prompting

Persona prompting is a common way to shape how large language models respond, especially in applications where tone and alignment with human expectations matter. It is widely used because it improves how outputs read and feel. Given how widespread persona prompting is, it may come as a surprise that its actual effect on performance remains unclear: prior research shows inconsistent results, leaving it uncertain whether the technique helps or harms.

The researchers concluded that persona prompting is neither broadly beneficial nor harmful, and that its efficacy depends on the type of task.

They found:

  • It improves alignment-related outputs such as tone, formatting, and safety behavior
  • Persona prompting degrades performance on tasks that rely on factual accuracy and reasoning

Based on this, the authors introduce a method called PRISM (Persona Routing via Intent-based Self-Modeling), which applies personas selectively, using intent-based routing instead of treating personas as a default setting. Their findings show that persona prompting works best as a conditional tool and provide a better understanding of when it helps and when it should be avoided.

Managing Behavioral Signals

In section three of the paper, the researchers say that expert personas have “useful behavioral signals” but that naïve use of persona prompting damages as much as it helps. They say this raises the question of whether those benefits can be separated from the harms and applied only where they improve results.

Behavioral signals influence LLM output. These signals are the reason persona prompting works. They drive improvements in tone, structure, safety behavior, and how well responses match expectations. Without them, there would be no benefit to persona prompting.

Yet, in a seeming paradox, the paper shows that those same signals interfere with tasks that depend on factual accuracy and reasoning. That is why the paper treats them as something to manage, not maximize.

These signals include:

  • Stylistic adaptation and tone matching: Adopting a professional or creative voice.
  • Structured formatting: Providing step-by-step or technical layouts.
  • Format adherence: Helping the model follow complex structures, like professional emails or step-by-step STEM explanations.
  • Intent following: Focusing the model on the user’s underlying goal, especially in tasks like data extraction.
  • Safety refusal: Identifying and declining harmful requests more effectively by adopting a “Safety Monitor” role.

Persona Prompt Wins

The paper found that persona prompts were a win in five out of eight categories of tasks:

  1. Extraction: +0.65 score increase.
  2. STEM: +0.60 score increase.
  3. Reasoning: +0.40 score increase.
  4. Writing: Improved through better stylistic adaptation.
  5. Roleplaying a domain expert: Improved through better tone matching.

Persona prompting won in these categories because they depend more on style and clarity than on whether the answer is factually correct. The researchers also found that the longer and more detailed the persona prompt, the stronger the alignment and safety behaviors became.

Persona Prompt Failures

Conversely, the expert persona consistently degraded performance in the remaining three (out of eight) categories because they rely on precise fact retrieval or strict logic rather than style and clarity. The reason for the performance drop is that adding a detailed expert persona essentially “distracts” the model by activating an “instruction-following mode” that prioritizes tone and style.

Activating an expert persona comes at the expense of “factual recall.” The model is so focused on trying to act like an expert that it fails to retrieve information it learned during its initial training. That explains the drops in accuracy for facts and math.

Expert persona prompts performed worse in the following three categories:

  1. Math
  2. Coding
  3. Humanities (memorized factual knowledge)

The paper notes that on one of the knowledge benchmarks (MMLU), accuracy dropped from a 71.6% baseline to 68.0% even with the “minimum” persona, and fell further to 66.3% with the “long” persona.

They explained the safety improvements:

“More detailed persona descriptions provide richer alignment information, amplifying instruction-tuning behaviors proportionally.”

And showed why factual accuracy takes a hit:

“Persona Damages Pretraining Tasks
During pretraining, language models acquire capabilities such as factual knowledge memorization, classification, entity relationship recognition, and zero-shot reasoning. These abilities can be accessed without relying on instruction-tuning, and can be damaged by extra instruction-following context, such as expert persona prompts.”

Conclusions Reached

The researchers conclude that persona prompting consistently improves alignment-dependent tasks such as writing, roleplay, and safety behavior, while degrading performance on tasks that rely on pretraining-based knowledge, including math, coding, and general knowledge benchmarks.

They also found that a model’s sensitivity to personas scales with its training. Models that are more optimized to follow instructions are more “steerable,” which means they get the biggest boost in safety and tone, but they also suffer the largest drops in factual accuracy.

Takeaways

1. Be selective about using persona prompts:

  • Do not default to “You are an expert” prompts
  • Treat persona prompting as situational. Using it everywhere introduces hidden accuracy risks.

2. Persona prompting is effective for:

  • Writing quality
  • Tone
  • Formatting and organization
  • Readability

3. Tasks that don’t benefit from persona prompting and should instead use neutral prompting to preserve accuracy:

  • Fact-checking
  • Statistics
  • Technical explanations
  • Logic-heavy outputs
  • Research
  • SEO analysis

4. Remember these three findings:

  • Use persona prompting to generate content, then switch to a non-persona prompt (or a stricter mode) to verify facts.
  • Highly detailed “expert” prompts strengthen tone and clarity but reduce factual and knowledge accuracy.
  • “You are an expert” prompts may cause a model to prioritize sounding correct over actually being correct.

5. Match your prompts to the task:

  • Content creation: Persona helps
  • Analysis and validation: Persona hurts

The most effective approach is not one prompt, but a workflow that switches prompts depending on the task, similar to the researchers’ PRISM approach.

Read the research paper:
Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM

Featured Image by Shutterstock/ImageFlow

5 GEO Strategies To Make AI Search Engines Recommend Your Brand In 2026

This post was sponsored by Geoptie. The opinions expressed in this article are the sponsor’s own. 

The way people search is changing faster than most marketers realize. ChatGPT alone now has over 900 million weekly active users. Google AI Overviews appear in one out of every four search results.

Each of these contains the potential for AI to cite your brand.

This isn’t a future trend. It’s happening right now. And if your brand isn’t showing up in those AI-generated answers, you’re invisible to a rapidly growing audience, even if you rank #1 on Google.

That’s where Generative Engine Optimization (GEO) comes in: the practice of optimizing your online presence so that AI engines cite, reference, and recommend your brand when users ask questions in your space.

1. Start By Measuring Your AI Visibility

Before changing a single word on your website, you need to know where you stand. Which AI platforms mention your brand? For which queries? How often are your competitors getting cited instead of you?

You can’t optimize what you don’t measure.

How To Measure AI Visibility

Most marketers skip this step because it feels unfamiliar. But the process is straightforward.

  1. List 10–15 questions your ideal customer would ask an AI engine, things like “best [your category] for [use case]” or “how to solve [problem you address].”
  2. Run each query in ChatGPT, Perplexity, and Gemini.
  3. Note whether your brand is mentioned, which competitors show up instead, and whether sources are cited.

Repeat monthly, because AI-generated answers shift as models update and new content gets indexed. Doing this manually across multiple platforms gets tedious fast, which is why dedicated GEO platforms exist to automate the tracking and monitor changes over time.
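The monthly tally can be logged in a few lines of Python. This is a rough sketch of the manual process above; the brand, competitor names, and answer text are invented for illustration, and you would paste in each AI engine's actual response:

```python
from dataclasses import dataclass, field

@dataclass
class VisibilityReport:
    """Tracks how often a brand (vs. competitors) appears in AI answers."""
    brand: str
    competitors: list
    mentions: int = 0
    total: int = 0
    competitor_hits: dict = field(default_factory=dict)

    def record(self, answer_text: str) -> None:
        """Tally one AI answer: was the brand named? Which rivals were?"""
        self.total += 1
        text = answer_text.lower()
        if self.brand.lower() in text:
            self.mentions += 1
        for rival in self.competitors:
            if rival.lower() in text:
                self.competitor_hits[rival] = self.competitor_hits.get(rival, 0) + 1

    def mention_rate(self) -> float:
        """Fraction of collected answers that name the brand."""
        return self.mentions / self.total if self.total else 0.0

# Hypothetical answers pasted from two AI engines for one query:
report = VisibilityReport(brand="Acme CRM", competitors=["RivalSoft", "PipeTool"])
report.record("For small teams, RivalSoft and Acme CRM are common picks.")
report.record("PipeTool is frequently recommended for this use case.")
print(round(report.mention_rate(), 2))  # → 0.5
```

Storing one report per month makes the drift visible: a falling mention rate or a rising count for one competitor tells you which queries and platforms to investigate first.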

The best place to start? Run a free GEO rank check on your brand. In under a minute, you’ll see which AI engines mention you, which ones don’t, and where your competitors show up instead.

This baseline is essential. Without it, you’re optimizing blind.

2. Don’t Abandon SEO. It Still Feeds AI

Here’s an important nuance: traditional search rankings still matter for GEO.

AI engines frequently pull from top-ranking Google results when generating their responses. If your page ranks well for a relevant query, there’s a higher chance an AI engine will reference it as a source. Google’s own AI Overviews heavily favor content that already performs well in organic search.

So keep doing what continues to drive SERP rankings:

  • Producing high-quality content
  • Building backlinks
  • Maintaining technical SEO.

But think of SEO as the foundation, not the full strategy. The brands that win in AI search are those that layer GEO tactics on top of a solid SEO foundation.

3. Make Sure Your Content Follows GEO Best Practices

This is where most of the work happens. AI engines are selective about what they cite, and the structure and quality of your content play a massive role. Here’s what to focus on:

  • Write for citability, not just readability. AI engines look for content that makes clear, specific claims backed by data or expertise. Vague, fluffy paragraphs get skipped. Concrete statements like definitions, statistics, step-by-step processes, and expert opinions are far more likely to be pulled into a generated response.
  • Structure content around questions. Conversational AI is driven by user questions. Structure your content to directly answer the questions your audience asks. Use clear headers, concise paragraphs, and FAQ sections. When an AI engine scans your page and finds a clean, authoritative answer to a specific question, you become a prime candidate for citation.
  • Leverage schema markup and structured data. Help AI engines understand what your content is about by implementing proper schema markup. FAQ schema, How-To schema, and Organization schema all give AI systems stronger signals about your content’s topic and structure.
  • Build topical authority, not just keyword-specific content. AI engines favor sources that demonstrate deep expertise on a topic. Rather than publishing scattered blog posts across dozens of topics, build comprehensive content clusters that cover a subject thoroughly. This signals to AI engines that your brand is a reliable authority worth citing.
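As a minimal example of the schema markup mentioned above, a question-and-answer pair can be marked up as FAQPage JSON-LD. The question and answer text here are illustrative placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Generative Engine Optimization?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "GEO is the practice of optimizing content so that AI engines cite, reference, and recommend it when answering user questions."
    }
  }]
}
```

The answer text in the markup should match the visible answer on the page, so that the signal AI systems read agrees with what human readers see.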

Pro Tip: Leverage a comprehensive GEO platform. Optimizing your content for AI search involves many moving parts: content structure, schema markup, topical authority, and technical SEO. Keeping track of all these signals manually across every page on your site isn’t realistic, especially as AI engines update how they evaluate sources. A dedicated GEO platform lets you regularly scan your entire website, monitor your optimization scores, and catch issues before they cost you citations.

Want to see where you stand right now? Run a free GEO audit and get actionable insights on your site’s AI readiness in under a minute.

4. Show Up In Reddit & UGC Discussions

Here’s a strategy most brands overlook: AI engines love Reddit.

If you’ve noticed Reddit threads showing up in Google results more frequently, that’s not a coincidence. Google and AI platforms increasingly treat user-generated content, especially Reddit, as a trusted and authentic source of information. When someone asks an AI engine for a product recommendation or solution comparison, the response often draws from Reddit discussions.

This means your brand’s presence in relevant threads matters more than ever. But you can’t just show up and start promoting yourself. Here’s how to approach it the right way:

  • Find where your audience is already talking. Search Reddit for your product category, your competitors’ names, and the problems you solve. Identify 5–10 active subreddits where these conversations happen. Look for threads like “what tool do you use for [your category].” These are the discussions AI engines pull from.
  • Contribute before you promote. Spend at least 2–3 weeks genuinely participating before your brand ever comes up. Reddit users check post history, and if your account is nothing but product mentions, you’ll get flagged as spam.
  • Be honest, not salesy. When a relevant recommendation thread comes up, share your product as one option among others. Mention what it’s good at and where it might not be the best fit. AI engines weigh authentic, nuanced mentions far more heavily than obvious self-promotion.
  • Check what AI engines are citing. Run your core queries in ChatGPT and Perplexity and see which Reddit threads appear. If your brand isn’t in those threads, that’s where to focus.

5. Get Featured In Listicles On Trusted Sites

When users ask AI engines for recommendations like “best project management tools,” the AI doesn’t generate that list from scratch. It synthesizes from existing listicle articles on authoritative websites. A single placement in a well-ranking listicle can get your brand recommended across ChatGPT, Perplexity, and Google AI Overviews simultaneously.

  • Find the listicles AI engines are already citing. Run your target recommendation queries in ChatGPT and Perplexity and note which articles they reference. These are the exact listicles you need to be in.
  • Build a hit list of publishers. Identify publications that come up repeatedly across both AI and traditional search results for “best [your category]” queries. Prioritize sites with strong domain authority.
  • Make inclusion easy. Make sure your product pages have a clear one-liner, obvious differentiators, social proof, and transparent pricing. Then pitch authors with something valuable, such as a free account, a demo, or data they can use.

Listicles get updated regularly and AI engines re-scan them, so a placement you earn today could start driving AI citations within weeks.

The Window Is Open, For Now

Generative Engine Optimization is still in its early stages. Most brands haven’t even started thinking about it, which means the opportunity to establish an early advantage is enormous.

The brands that start measuring their AI visibility, optimizing their content for citability, building community presence, and earning placements in authoritative listicles today will be the ones AI engines default to recommending tomorrow.

The question isn’t whether AI search will matter for your business. It’s whether you’ll be visible when it does.

Start Optimizing For AI Search Today

Every strategy in this article comes down to one thing: making your brand the obvious choice when AI engines look for sources to cite and recommend. You don’t need to tackle everything at once, but you do need to start.

Geoptie brings all five strategies together in one platform, from tracking your AI visibility across ChatGPT, Perplexity, and Google AI to auditing your content and monitoring your optimization scores over time. It’s built specifically for GEO, so you can stop guessing and start seeing exactly where your brand stands in AI search.

The early movers will own this space. Make sure you’re one of them.


Image Credits

Featured Image: Image by Tor App. Used with permission.

From SEO And CRO To Agentic AI Optimization (AAIO): Why Your Website Needs To Speak To Machines via @sejournal, @slobodanmanic

For 25 years, we’ve built websites for humans who click, scroll, and browse. That era is ending. I’ve been in website optimization for 15+ years, and this is the biggest shift I’ve seen since mobile. And honestly, I think it’s way bigger than that.

The internet is undergoing its most significant transformation since it began. Your website now has two audiences: humans and AI agents. The agents are already here, shopping, researching, booking, and making decisions. The question is whether your website can serve them.

This is the first article in a five-part series on optimizing websites for the agentic web. We’ll cover discovery, citation, technical implementation, and the new commerce protocols that let AI complete purchases on your behalf. Throughout this series, we’ll draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

But first, we need to understand how we got here and why December 2025 changed everything.

The Evolution: SEO To AEO To GEO To AAIO

The alphabet soup of optimization acronyms tells a story about how the web has changed.

SEO (Search Engine Optimization) dominated from the mid-1990s through the 2010s. The goal was simple: rank higher on Google. You optimized keywords, built backlinks, and structured your site so crawlers could index it. Success meant appearing on page one when someone searched for your topic.

AEO (Answer Engine Optimization) emerged as AI systems started answering questions directly. When Google introduced featured snippets, then AI Overviews, the game changed. Ranking wasn’t enough anymore. You needed to be the source that AI systems cited when generating answers. AEO focuses on structuring content so it gets selected and quoted, becoming the definitive answer rather than just a search result.

GEO (Generative Engine Optimization) expanded this further. Systems like ChatGPT, Claude, and Perplexity don’t just cite sources. They synthesize information from multiple places into comprehensive responses. GEO ensures your content appears in these synthesized answers, ensuring your expertise gets woven into the AI’s response even when you’re not the primary citation.

AAIO (Agentic AI Optimization) is the latest evolution, and it represents a fundamental shift. AAIO isn’t about being found or cited. It’s about being usable by AI agents that act autonomously on behalf of humans.

A research paper published in April 2025 by Luciano Floridi and colleagues formalized this distinction. As they put it, AAIO “explicitly optimises content for autonomous artificial agents, simultaneously addressing both human and machine interpretability.” Unlike SEO, which enhanced discoverability for humans through search engines, AAIO prepares websites for AI systems that initiate digital interactions independently.

Agent Experience Optimization (AXO) is the umbrella term that encompasses all of these practices. Just as UX focuses on human users and SEO focuses on search crawlers, AXO focuses on AI systems that interact with websites. It includes discovery (being found), citation (being referenced), and action (being usable). I cover AXO in depth in What is Agent Experience Optimization.

The progression is straightforward: SEO asks “How do I rank?” AEO asks “How do I get cited?” GEO asks “How do I get included?” AAIO asks “How do I enable agents to complete tasks on my site?”

The relationship between website optimization and AI effectiveness creates a virtuous cycle, similar to what happened with SEO and search engines in the early 2000s. When websites implement AAIO practices, AI agents perform better, which encourages more websites to adopt these practices, which makes agents more useful, which drives adoption further.

December 2025: The HTML Moment For AI

On Dec. 9, 2025, something significant happened. The Linux Foundation announced the Agentic AI Foundation (AAIF), a vendor-neutral governance body for agentic AI standards.

Eight platinum members anchored the foundation: Amazon Web Services, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. What’s remarkable here isn’t the technology. It’s that OpenAI, Anthropic, Google, and Microsoft are building shared infrastructure instead of competing standards. This is a strong signal that the industry sees agentic AI as foundational, not a feature war.

Three key projects were contributed:

  • Model Context Protocol (MCP) from Anthropic: a universal standard for connecting AI systems to tools and data sources, now with over 10,000 published servers and adoption by Claude, ChatGPT, Gemini, VS Code, and Microsoft Copilot
  • AGENTS.md from OpenAI: a standardized specification for providing AI coding agents consistent project guidance across repositories
  • goose from Block: an open-source, local-first agent framework combining language models with extensible tools
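To make AGENTS.md concrete: it is just a plain Markdown file of instructions that coding agents read. A minimal illustrative sketch (the section names and commands below are placeholders, not requirements of the spec):

```markdown
# AGENTS.md

## Setup
- Run `npm install` before making changes.

## Testing
- Run `npm test` and make sure all tests pass before committing.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Keep functions small and focused.
```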

This matters because it mirrors what happened with the early web. In the 1990s, competing browser vendors and incompatible standards fragmented the internet. The W3C brought order by establishing shared protocols like HTML and CSS. The Agentic AI Foundation aims to do the same for AI agents, creating the shared infrastructure that lets agents from different companies work together and interact with websites consistently.

As Linux Foundation Executive Director Jim Zemlin put it, the foundation enables development “with the transparency and stability that only open governance provides.”

We’re watching the TCP/IP moment for agents. The protocols being established now will define how AI interacts with the web for the next decade: MCP for tool integration, A2A for agent-to-agent communication, NLWeb for making websites queryable.

I realize that sounds hyperbolic. It isn’t. We’re in the early months of a decade-long transformation.

Discovery, Citation, And Action

These three concepts form the framework for this entire series:

  • Discovery is about being found by AI systems. AI crawlers like GPTBot, ClaudeBot, and PerplexityBot index the web for their respective platforms. If you’re blocking these crawlers, or if your content isn’t accessible to them, you’re invisible to AI systems. Discovery is the foundation. Nothing else matters if agents can’t find you.
  • Citation is about being selected as a source. When an AI system generates a response, it chooses which sources to reference. Getting cited requires content that AI systems recognize as authoritative, accurate, and relevant. This involves structured data, clear information hierarchy, and demonstrable expertise. Microsoft has published detailed guidance on what makes content citable.
  • Action is about enabling agents to use your site. This is where AAIO diverges from earlier optimization approaches. An agent visiting your site might need to click buttons, fill forms, navigate menus, compare options, and complete transactions. If your site breaks when an agent tries to interact with it, you lose the business to competitors whose websites work.

The stakes escalate at each level. Failing at discovery means invisibility. Failing at citation means your competitors get referenced instead. Failing at action means losing transactions that would have happened on your site.
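If you want to check whether your own robots.txt is shutting these crawlers out, Python's standard library can evaluate the rules directly. A minimal sketch (the bot names are the documented user agents; the URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def ai_crawler_access(robots_txt: str, url: str = "https://example.com/"):
    """Parse a robots.txt body and report which AI crawlers
    are allowed to fetch the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_CRAWLERS}

# Example: a robots.txt that blocks only GPTBot.
rules = "User-agent: GPTBot\nDisallow: /\n"
print(ai_crawler_access(rules))
```

Running this against your live robots.txt is a quick way to confirm you aren't failing at the discovery level by accident.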

Why This Matters Now

Two converging trends make 2026 the year to act.

Agentic browsers are reaching consumers.

The first wave of AI browsers launched in 2025, and 2026 is bringing them to mainstream users. For a complete breakdown, see The Agentic Browser Landscape in 2026.

Perplexity’s Comet combines search-focused AI with full browser capabilities. ChatGPT Atlas from OpenAI includes Agent Mode for autonomous multi-step tasks. Chrome’s auto browse feature, powered by Gemini, is shipping to Google AI subscribers.

Chrome alone represents 3 billion potential users. If you’re wondering whether to take this seriously: Google doesn’t ship features to 3 billion users on a whim.

When the world’s most popular browser can autonomously scroll, click, type, and navigate on your behalf, the implications for website owners are profound. Websites that work well with these agents get included in agentic workflows. Websites that don’t get skipped.

As DigitalOcean’s analysis notes, “This shift forces websites to redesign for both human and AI users,” requiring cleaner navigation, API-first strategies, and optimization for agent functionality beyond visual presentation.

Commerce is shifting.

Stripe, Shopify, and OpenAI are building infrastructure for AI agents to complete purchases. The Agentic Commerce Protocol enables secure, agent-initiated transactions. Brands like URBN, Etsy, Glossier, and SKIMS are already implementing these systems.

Checkout is no longer a page. It’s an API endpoint. The agent researches, selects, and purchases on behalf of the user, who never visits your website at all.

What’s Coming In This Series

This article established the “why.” The rest of the series covers the “how”:

Part 2: Answer Engine Optimization dives into getting your content cited in AI responses. How AI systems parse content differently than search engines, the structure that gets cited, which schema markup matters, and how to measure your AI visibility.
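As a preview of the kind of schema markup Part 2 covers, a minimal JSON-LD `Article` block looks like this (all values are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-01-15"
}
```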

Part 3: The Agentic Web Protocols explores MCP, A2A, NLWeb, and AGENTS.md, the standards powering the agentic web. These protocols are complementary, not competing, and together they form the infrastructure layer that enables everything else.

Part 4: How AI Agents See Your Website provides the implementation guide. How agents “see” websites, why semantic HTML matters more than ever, the role of accessibility standards, and what to tell your developers.

Part 5: Selling to AI covers agentic commerce. Stripe’s Agentic Commerce Suite, Shopify’s Universal Commerce Protocol, secure payment tokens, fraud detection for agent traffic, and how to get started.

Key Takeaways

  • The web is shifting from pages for humans to content for AI agents. Your website now serves two audiences, and optimizing for both is becoming necessary.
  • The evolution runs from SEO to AEO to GEO to AAIO. Each builds on the last: ranking, then citation, then inclusion, then enabling autonomous action.
  • December 2025 was the turning point. The Agentic AI Foundation launch established shared standards, moving agentic AI from experimentation to infrastructure.
  • Three levels matter: discovery, citation, and action. Being found, being referenced, and being usable by AI agents.
  • The business case is concrete. Agentic browsers are reaching billions of users. Commerce protocols are enabling agent-initiated purchases. Websites that work with agents capture this opportunity; those that don’t lose business to competitors.

Traditional SEO asked: “How do I rank on Google?” The new question is: “How do I become the answer, and how do I let AI complete transactions on my site without a human ever visiting?”

I’m writing this series because I believe most websites will get this wrong. They’ll treat it as an SEO tweak or a CRO experiment when it’s an architectural shift.

The infrastructure is being built now. The standards are being established. The agents are already browsing.

The question is whether your website is ready for them.



This post was originally published on No Hacks.


Featured Image: Collagery/Shutterstock

AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

A BuzzStream report analyzing 4 million AI citations found that press releases distributed through syndication channels barely appear in AI-generated answers.

Background

Press release distribution services have been marketing AI visibility as a selling point.

For example, ACCESS Newswire offers an “AI Visibility Checklist” for press releases. eReleases published a guide positioning press releases as tools for AI search visibility. Business Wire has written about optimizing releases for answer engine discovery.

BuzzStream’s data offers a different perspective.

What They Found

The report’s authors used XOFU, a citation monitoring tool from Citation Labs, to track where AI platforms pull their sources across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini. BuzzStream ran 3,600 prompts across 10 industries and collected data for one week.

Overall, news publications accounted for 14% of all citations in the dataset. But within that news category, the numbers drop off quickly for syndicated and distributed content.

Press releases published through syndication channels like Yahoo and MSN accounted for 0.32% of news citations and 0.04% of the entire dataset.

Direct citations from newswire services like PRNewswire made up 0.21% of the full dataset. They appeared most often in exploratory and informational prompts, but even there they only reached 0.37%.

Syndicated news content overall, including articles republished through MSN and Yahoo networks, accounted for 6.2% of news citations and 0.9% of the total dataset.

To identify syndicated content, BuzzStream cross-referenced author names against publications using its ListIQ tool and manually confirmed cases where the author name didn’t match the publication. The company acknowledged this method has limits, since some sites repost press releases without labeling them as such.

What The Data Shows About What Works

The report’s more interesting finding is what does get cited.

Original editorial content made up 81% of news citations in the dataset. Affiliate and review content accounted for the rest. The split held across prompt types, though affiliate content had its strongest showing in evaluative prompts at 39%.

The report broke prompts into three categories. Evaluative prompts like “Is Sony better than Bose?” generated the most news citations at 18% of all citations. Brand awareness prompts like “What is Chase known for?” generated the fewest at 7%. Informational prompts fell in between.

Editorial content that appeared most often in evaluative citations included head-to-head comparisons and cost analysis from outlets like Reuters, CNBC, and CNET.

The ChatGPT Newsroom Exception

One platform-level finding stood out. Internal press releases and newsroom content on company-owned domains accounted for 18% of ChatGPT’s citations in the dataset.

On Google’s AI platforms, that number dropped to around 3%.

BuzzStream cited examples including Iberdrola’s corporate press room and Target’s corporate subdomain. When prompted about Iberdrola’s role in renewables, ChatGPT cited a press release from Iberdrola’s own website. When asked about Target’s products, ChatGPT cited a 2015 press release from Target’s corporate domain.

BuzzStream said most earlier trends looked fairly uniform across platforms, with newsroom content on ChatGPT standing out as a clearer exception.

Why This Matters

The data challenges a premise that press release distribution services have been promoting. Multiple distribution platforms now market press releases as a path to AI visibility.

BuzzStream’s data suggests the distributed version of a press release, the one that lands on Yahoo Finance or MSN through a wire service, rarely becomes the version AI platforms cite. Original editorial coverage and owned newsroom content performed better by wide margins.

This connects to patterns we’ve been tracking. A BuzzStream report we covered in January found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots. Hostinger’s analysis of 66 billion bot requests showed AI training crawlers losing access while search bots expanded their reach.

The citation data suggests that even when syndicated content is accessible to AI crawlers, it rarely gets cited.

Google’s VP of Product for Search, Robby Stein, said in an interview we covered that being mentioned by other sites could help with AI recommendations, comparing AI’s behavior to how a human might research a question. That comparison favors earned editorial coverage over distributed press releases.

Adam Riemer made a related point in his Ask an SEO column, drawing a line between digital PR that builds brand coverage in publications and link building that focuses on placement metrics. BuzzStream’s data suggests that line extends to AI citations too.

For transparency, BuzzStream sells outreach and digital PR tools, so the finding that earned media outperforms distribution aligns with its business model. The company partnered with Citation Labs and used Citation Labs’ XOFU monitoring tool for the data collection.

Looking Ahead

This is part one of a multi-part analysis from BuzzStream. The single-week data window and large-brand focus are limits worth noting. Smaller brands with less existing editorial coverage may see different results.

Businesses investing in digital PR may want to look more closely at how different distribution channels perform in their category. Data suggests the channel you use can affect where your brand gets cited.


Featured Image: Cagkan Sayin/Shutterstock

How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

AI search has changed the rules, but has your tracking? 

How do you measure visibility without rankings?

Which prompts actually reflect real buyer intent?

And how do you avoid AI tracking data that looks useful, but isn’t?

Learn how to set up AI prompt tracking you can trust for smarter decisions.

ChatGPT, Google AI Overviews & Perplexity Are Reshaping Discoverability

In this on-demand webinar, Nick Gallagher, Sr. SEO Strategy Director at Conductor, breaks down how AI prompt tracking really works, why topics matter more than individual prompts, and how to avoid common mistakes that skew insights.

You’ll leave with a clear framework for measuring AI visibility in a way that reflects real user behavior and supports smarter search and content strategies.

You’ll Learn:

  • How AI prompt tracking works, and why setup matters more than volume
  • Best practices for choosing topics, prompts, and answer engines
  • Common mistakes that lead to inaccurate or misleading AI visibility data

Watch on-demand and learn how to build AI prompt tracking you can trust for smarter search and content decisions in 2026.

View the slides below or check out the full webinar for all the details.

Google AI Mode’s Personal Intelligence Now Free In U.S. via @sejournal, @MattGSouthern

Google is opening Personal Intelligence to free-tier users in the U.S. Previously limited to paid AI Pro and AI Ultra subscribers, the feature is now expanding to users with personal Google accounts.

What’s New

Announced in a blog post, the expansion covers AI Mode in Search, the Gemini app, and Gemini in Chrome. AI Mode access is available today, while the Gemini app and Chrome rollouts are starting now.

Personal Intelligence connects a user’s Gmail and Google Photos to AI-powered search and chat responses. When enabled, AI Mode and Gemini can reference email confirmations, travel bookings, and photo memories to answer questions without the user providing that context manually.

What Changed

When Google first launched Personal Intelligence in January, you needed a subscription to try it. Today’s expansion removes that paywall for U.S. users on personal Google accounts.

The feature still isn’t available for Google Workspace business, enterprise, or education accounts.

You can opt in by connecting apps through their Search or Gemini settings, and you can turn connections on or off at any time.

What Google Says About Training Data

The blog post includes a disclosure about how data from connected accounts is handled.

According to the post, Gemini and AI Mode don’t train directly on your Gmail inbox or Google Photos library. Google describes the training as limited to “specific prompts in Gemini or AI Mode and the model’s responses.”

That means prompts generated while using Personal Intelligence could include details drawn from connected apps, even though Google says it doesn’t train directly on raw Gmail or Photos data.

Why This Matters

The move from paid to free changes the scale of this feature. When Personal Intelligence required a Pro or Ultra subscription, it reached a smaller audience of paying users. Opening it to anyone with a personal Google account in the U.S. puts it in front of a much larger base.

Increased personalization means AI Mode responses could vary more from user to user. Two people searching the same query may get different results if one has connected their Gmail and the other hasn’t. That makes it harder to benchmark what AI Mode shows for a given topic.

This feature could also change how people type queries into AI Mode. If Google already has the necessary context about a person, we might see searches become shorter. That’s an idea I explored in a video when Google originally launched the feature.

Looking Ahead

No expansion beyond the U.S. or to Workspace accounts has been announced. Moving from paid to free in less than two months suggests Google is confident in this feature. How people respond to the linking of personal data to search will likely shape future rollout plans.

Google Removes ‘What People Suggest,’ Expands Health AI Tools via @sejournal, @MattGSouthern

Google has removed “What People Suggest,” a search feature that used AI to organize health perspectives from online discussions. The confirmation came as Google held its annual Check Up event, where it announced new AI health features for YouTube.

A Google spokesperson confirmed the removal to The Guardian, calling it part of a “broader simplification” of the search results page. The spokesperson said the decision was unrelated to the quality or safety of the feature. The Guardian also reported, citing three people familiar with the matter, that the feature was pulled after a trial run.

“What People Suggest” launched on mobile devices in the U.S. last year at Google’s annual health event, The Check Up. At the time, Karen DeSalvo, then Google’s chief health officer, said people value hearing from others who have experienced similar health conditions. DeSalvo retired in August and was succeeded by Dr. Michael Howell, who led this year’s Check Up announcements.

What Google Announced At The Check Up

At its 2026 Check Up event, Google announced AI health features across YouTube, Fitbit, and clinician education.

Google says health-related videos on YouTube have surpassed 1 trillion views globally. The company is adding an AI-powered “Ask” button on eligible health videos that lets viewers interact with the content.

Separately, Google is experimenting with AI to organize peer-reviewed scientific information and help present complex topics to broader audiences.

In the blog post, Howell said a central challenge has been connecting people to the right health information at the right time.

Google.org is committing $10 million to fund organizations that will reimagine clinician education for AI. The Council of Medical Specialty Societies and the American Academy of Nursing are the first partners.

Why This Matters

AI features in search results for health-related topics keep changing. Google pulled back one feature that showed forum-style perspectives and put new investment into medical education and structured video tools.

YouTube’s growing role in health-related AI Overviews is already documented. SE Ranking’s study of German health queries found YouTube was the most-cited domain in health AI Overviews, appearing more often than medical or government sites. Adding interactive AI on top of those videos could reinforce that pattern.

How We Got Here

Google’s AI features for health queries have faced pressure over the past year.

In January, the Guardian published an investigation that found health experts considered some AI Overview responses misleading for medical queries. Google disputed elements of the reporting but later removed AI Overviews for some specific health searches, including queries about liver function tests.

“What People Suggest” launched during the same period Google was expanding AI Overviews to thousands more health topics. Ahrefs data from November showed medical YMYL queries triggered AI Overviews 44.1% of the time, the highest rate among YMYL categories.

Looking Ahead

The pattern over the past year points to tighter guardrails around some health AI experiences. Whether that direction holds is less certain.

The removal of “What People Suggest,” and YouTube’s continued citation visibility in AI Overviews, could point that way. But Google’s track record with health-related AI features also shows these decisions can change quickly.


Featured Image: Mamun_Sheikh/Shutterstock

How To Use AI To Streamline Time-Consuming SEO Tasks via @sejournal, @coreydmorris

SEO, like most organic and non-advertising or paid channels in digital marketing, is labor-intensive. Yes, there are software suites, analytics platforms, research tools, and a number of other things that help in the tech stack.

We all have our favorites, and no one is (or should be) doing SEO like I was in 2008 (despite my desire sometimes to just do something manually where I can see the inputs and outputs and have more control, but I digress).

In the midst of constant noise about new platforms, new ranking factors, ways to become visible in AI, and everything else, it can be hard at times to keep going with the tasks that still require a human at some level. Whether your aim is gaining efficiency, scaling efforts, doing more with less, or a combination of these, I’m sharing human-involved ways to streamline time-consuming tasks so you can gain time (and maybe money).

1. Generating Meta Descriptions, Page Titles, Alt Text

I could have started with something more high-level or strategic, but I’m getting this one out of the way right now.

The basic blocking and tackling of ensuring you have unique, helpful, and topically relevant meta descriptions, page titles, and image alt text can be a huge investment of time on a large website or across sites if you own tactical SEO for multiple sites or clients.

While there are ways to semantically have these tags auto-generated by a database or CMS, we know that, in a lot of cases, there’s still a manual process or intervention to audit and ensure that the tags are written to best practices and strategic positioning.

Also, I know that there’s plenty of discussion or debate on whether there’s even value in creating titles and meta descriptions. I’m not going there. But I will say that, if you need to create them and they are on your task list, you can sink a lot of hours (or outsourced resources) into work with a minimal return.

Leverage tools based on what you’re already paying for or what tech ecosystem you’re in, like Screaming Frog + OpenAI API + a WordPress plugin, which can save thousands of dollars and many dozens of hours.

Putting It Into Action

Steps for generating alt text at scale:

  1. Get your OpenAI API key:
    • In your OpenAI dashboard at platform.openai.com, go to API keys.
    • Create a new secret key and name it something you’ll remember, like Screaming Frog.
    • Make sure you have credits in your account (a few dollars can go a long way).
  2. Set up your Screaming Frog crawl:
    • Set up your OpenAI configuration by going to Configuration > API Access > AI. Enter your API Key into the field. Press Connect.
    • Set up a prompt to generate alt text by going to the Prompt Configuration tab. Click Add from Library > System > Generate alt text for images.
    • Set up your crawl configuration and don’t forget to go to Spider > Rendering and change the rendering mode from Text Only to JavaScript. Then, go to Extraction and, under HTML, check Store HTML and Store Rendered HTML.
    • Run a test crawl on one URL to ensure the output works for you. Tweak the prompt if you’d like.
  3. Run the crawl.
  4. Export to a CSV.
  5. Format the file with two columns: image URL, alt text.
  6. Add this plugin to the site: https://wordpress.org/plugins/alt-text-updater/.
  7. Upload the file.
  8. Crawl your site and do manual checks to test that images have alt text.
  9. Deactivate and uninstall the plugin.
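Step 5 above (reshaping the export into a two-column file) is easy to script. A minimal sketch, assuming the export uses “Address” and “Alt Text” as column headers (adjust both to match your actual export):

```python
import csv

def build_alt_text_csv(export_path, output_path,
                       url_col="Address", alt_col="Alt Text"):
    """Reduce a crawl export CSV to the two-column format
    (image URL, alt text) the upload step expects."""
    with open(export_path, newline="", encoding="utf-8") as src, \
         open(output_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        for row in reader:
            alt = (row.get(alt_col) or "").strip()
            if alt:  # skip rows where no alt text was generated
                writer.writerow([row[url_col], alt])
```

Dropping empty rows before upload means the plugin never overwrites existing alt text with blanks.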

2. Structuring Content Outlines

This might be one of the most common things we do when starting SEO or in periodic content organization, expansion projects, or ongoing content creation. With content being what I call the “fuel” of SEO (and also visibility in AI search), it is still as important as ever to organize it well and present it in a way that makes sense to site visitors and the machines that are also learning it.

While you might not be able to automate this out of the box or in a single prompt in your favorite LLM, you can definitely speed up the process and gain insights into connections between content themes that you might not make on your own (my favorite bonus).

Whether you’re working on a single article, a longer-term content calendar, reorganizing evergreen content, or other content-specific tasks, mastering the art of prompt creation, coaching the AI agent, ensuring the output is good, and using project folders (with brand style guides) in ChatGPT can ensure the quality and speed the more you produce.

Putting It Into Action

Example Prompt

You are an expert SEO who specializes in content writing for [industry]. Your task is to create an outline for an article for [topic]. The article outline should cover the following subtopics: 

[subtopic 1], 

[subtopic 2], 

[subtopic 3]. 

The article should target the following keywords: 

[keyword]

[keyword]

[keyword]

Attached are the HTML files of pages currently ranking well in Google search results to use as guidance. Review the HTML files and generate a content outline. 

3. Creating Project Briefs

Going a little higher level into organizing the work we do, connecting desired outcomes to strategies and ultimately to tactics, project briefs are something you might not do every day.

I like to think about SEO in projects or sprints as a way to break up the big nature of ongoing and long-term work that requires short-term progress and tactics. Regardless of how you organize the work, you likely have a lot of varying documentation and information. Whether in sheets, documents, decks, or other sources, you have information that you can feed together into your LLM of choice to have AI organize and sort out.

Whether you’re doing this formally to produce a report deliverable or informally to help your team or yourself organize the minutiae of SEO information, I can point to examples of my team using Gemini to read through a bunch of documents, including meeting notes, personal notes, transcripts, AI transcripts, agendas, competitor lists, research, emails, and more.

This can be helpful for a number of uses, including putting together a document that can be helpful for personal reference, team reference, onboarding, and articulation of the overall knowledge base for stakeholders.

Putting It Into Action

Example Prompt

You are an experienced Senior Marketing Strategist and you’re onboarding your team for [describe project]. Your task is to create a comprehensive project brief for [name of campaign or project].

Ensure the project brief takes into account the following project details:

Objective: [what is the overarching goal of the project]

Target audience: [overview of the demographics]

Key messaging: [provide details about campaign messaging]

Channels: [what channels will be incorporated into the campaign/project]

For the deliverable, the output should include the following:

Project Overview: Include a 1-2 sentence summary of the project

Success Metrics: [provide KPIs]

Budget: [provide financials]

Timeline: [provide deadlines and milestones]

Generate the project brief as a professional, internal-facing document.

Classifying Keywords

Prompt for using the AI function in Google Sheets to classify keywords by search intent, segment, branded/non-branded, etc.

=ai("Act as an SEO Specialist. Classify the following Keyword into exactly one of these Categories: [Informational, Navigational, Commercial, Transactional].

Rules:

Informational: User is looking for an answer or guide.

Commercial: User is researching products/services before buying.

Transactional: User has high intent to buy/convert now.

Navigational: User is looking for a specific website/brand.

Keyword: [Cell Reference, e.g., A2]

Result: Return only the category name with no extra text or punctuation")

4. Segmenting Keywords

In SEO today, we’re not focused necessarily on granular keywords. However, they are still important in our research and strategy planning, along with more tactical work in guiding content topic building and creation.

When you do your research and have your list of keywords from any source, you can utilize the Google Sheets AI function to categorize them by topic, pillar, branded/non-branded, localized or not, search intent, etc.

You can also run keywords through an LLM and have it categorize them, export the output, import that back into your spreadsheet, and align it to the data using a VLOOKUP function (a recommendation, as my team thinks the Google Sheet AI function isn’t where we want it to be yet).

While the method I noted also might feel manual and not where we want it to be eventually, with better AI and tooling, it is still much better than doing things manually. I encourage you to use your own spreadsheet logic or “regular expression” (regex) to categorize as much as you can efficiently before going to AI, especially if your dataset is extensive.
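That pre-AI categorization pass can be a few lines of code. A sketch, with hypothetical brand terms and intent patterns you would tune per project; keywords that match no rule are the only ones worth sending to AI:

```python
import re

# Hypothetical brand terms and simple intent patterns; tune per project.
BRAND_TERMS = re.compile(r"\b(acme|acmeco)\b", re.I)
RULES = [
    ("Transactional", re.compile(r"\b(buy|price|pricing|coupon|discount)\b", re.I)),
    ("Commercial",    re.compile(r"\b(best|top|vs|review|compare)\b", re.I)),
    ("Informational", re.compile(r"\b(how|what|why|guide|tutorial)\b", re.I)),
]

def pre_classify(keyword):
    """Return (branded, intent); intent is None when no rule
    matched, flagging the keyword for AI classification."""
    branded = bool(BRAND_TERMS.search(keyword))
    for label, pattern in RULES:
        if pattern.search(keyword):
            return branded, label
    return branded, None
```

On a large dataset, even a handful of rules like these can cut the AI workload substantially.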

5. Documenting Competitor Outlines

While I have to admit that I like to visually check out competitor websites for my first impression and a quick, informal sophistication check, automating this is a huge time-saver.

For example, Gemini is really good at outlining the content structure of a webpage, so my team likes to feed it three or four competitor URLs that are ranking well or have high visibility for a topic we’re building a strategy for, and it gives us an outline of each page: the messaging, the targeting, and the baseline content blocks each page has, which we can use when we do content development on our side.

Disclaimer: Just like in the olden days, don’t copy directly and don’t steal. Verify that what you’re getting back out of the tool you’re using isn’t ripping someone off. That’s on us to validate.

Putting It Into Action

Example Prompt

You’re an expert SEO strategist and you’re conducting a competitive content analysis of your client’s page against pages currently outranking it in Google for the search term [keyword]. The client is a [describe client and industry]. The page is [describe purpose of the page and topic].

I’ve attached the HTML files of the client’s page, as well as the HTML files for the competitor pages. Your tasks are to provide me:

An outline for each page of the content blocks present in the HTML

An overview of the messaging, tone, voice

A list of outgoing internal links in the content

Content gaps between the client's page and the competitors 

6. Conducting SERP Analysis

We can’t waste impressions and any visibility we get by showing up on the wrong topics. SEO now is about quality, and we can’t miss the mark on search intent.

An example that is a big time-saver: build your seed keyword list using Ahrefs, export the keyword list with SERP data, then feed that spreadsheet into Gemini and have it provide a breakdown of organic competitors per keyword, the intent of the ranking organic pages per keyword, etc. This saves reviewing hundreds and hundreds of rows by hand. My team usually filters out AI Overviews and ad placement data to condense it a bit.

This type of work has been helpful in figuring out informational versus commercial intent SERPs at scale so that we’re targeting the right keywords with the right content. It has also been helpful in understanding the level of competition within a topic, so we know what to avoid and what long-tail keywords may represent realistic opportunities.

I will emphasize, though, that the SERPs aren’t 100% accurate, and localization and personalization will change the SERPs that users see. But the approach is helpful for comparing keywords against each other. We also do SERP reviews manually to confirm findings. Again, validate as a human what you’re getting from tools.

In Closing

There’s a lot of time and budget to reclaim by leveraging automation, deeper tool use, and the power of AI for SEO. You probably also detected a theme: in pretty much everything you do, solid inputs are required to get useful outputs, and those outputs still need human validation and experience before you can trust them.

Regardless of where you are with automation, the goal is to do more with less: scale repeatable tasks and cut manual work that offers a low return on investment. That lens will tell you where to lean on tech and where manual effort still earns its keep.


Anthropic’s Claude Bots Make Robots.txt Decisions More Granular via @sejournal, @MattGSouthern

Anthropic updated its crawler documentation this week with a formal breakdown of its three web crawlers and their individual purposes.

The page now lists ClaudeBot (training data collection), Claude-User (fetching pages when Claude users ask questions), and Claude-SearchBot (indexing content for search results) as separate bots, each with its own robots.txt user-agent string.

Each bot gets a “What happens when you disable it” explanation. For Claude-SearchBot, Anthropic wrote that blocking it “prevents our system from indexing your content for search optimization, which may reduce your site’s visibility and accuracy in user search results.”

For Claude-User, the language is similar. Blocking it “prevents our system from retrieving your content in response to a user query, which may reduce your site’s visibility for user-directed web search.”

The update formalizes a pattern that’s becoming more common among AI search products. OpenAI runs the same three-tier structure with GPTBot, OAI-SearchBot, and ChatGPT-User. Perplexity operates a two-tier version with PerplexityBot for indexing and Perplexity-User for retrieval.

Anthropic says all three of its bots honor robots.txt, including Claude-User. OpenAI and Perplexity draw a sharper line for user-initiated fetchers, warning that robots.txt rules may not apply to ChatGPT-User and generally don’t apply to Perplexity-User. For Anthropic and OpenAI, blocking the training bot does not block the search bot or the user-requested fetcher.

What Changed From The Old Page

The previous version of Anthropic’s crawler page referenced only ClaudeBot and used broader language about data collection for model development. Before ClaudeBot, Anthropic operated under the Claude-Web and Anthropic-AI user agents, both now deprecated.

The move from one listed crawler to three mirrors what OpenAI did in late 2024 when it separated GPTBot from OAI-SearchBot and ChatGPT-User. OpenAI updated that documentation again in December, adding a note that GPTBot and OAI-SearchBot share information to avoid duplicate crawling when both are allowed.

OpenAI also noted in that December update that ChatGPT-User, which handles user-initiated browsing, may not be governed by robots.txt in the same way as its automated crawlers. Anthropic’s documentation does not make a similar distinction for Claude-User.

Why This Matters

The blanket “block AI crawlers” strategy that many sites adopted in 2024 no longer works the way it did. Blocking ClaudeBot stops training data collection but does nothing about Claude-SearchBot or Claude-User. The same is true on OpenAI’s side.

A BuzzStream study we covered in January found that 79% of top news sites block at least one AI training bot. But 71% also block at least one retrieval or search bot, potentially removing themselves from AI-powered search citations in the process.

That matters more now than it did a year ago. Hostinger’s analysis of 66.7 billion bot requests showed OpenAI’s search crawler coverage growing from 4.7% to over 55% of sites in their sample, even as its training crawler coverage dropped from 84% to 12%. Websites are allowing search bots while blocking training bots, and the gap is widening.

The visibility warnings differ by company. Anthropic says blocking Claude-SearchBot “may reduce” visibility. OpenAI is more direct, telling publishers that sites opted out of OAI-SearchBot won’t appear in ChatGPT search answers, though navigational links may still show up. Both are positioning their search crawlers alongside Googlebot and Bingbot, not alongside their own training crawlers.

What This Means

If you manage robots.txt files, the old copy-paste block list needs an audit. SEJ’s complete AI crawler list includes verified user-agent strings for every company.

A strategic robots.txt now requires separate entries for training and search bots at minimum, with the understanding that user-initiated fetchers may not follow the same rules.
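Using the user-agent strings named above, that separation might look like the following. The Disallow/Allow choices here are illustrative, not a recommendation, and per OpenAI’s own documentation the user-initiated fetchers (ChatGPT-User in particular) may not honor these rules.

```
# Opt out of model training.
User-agent: ClaudeBot
Disallow: /

User-agent: GPTBot
Disallow: /

# Keep search indexers and user-initiated fetchers allowed
# so the site can still appear in AI search answers.
User-agent: Claude-SearchBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-User
Allow: /

User-agent: ChatGPT-User
Allow: /
```

Because robots.txt rules are matched per user-agent, each bot must get its own entry; a single blanket `Disallow` aimed at the training crawler will not carry over to the search crawlers.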

Looking Ahead

The three-tier split creates a new category of publisher decision that parallels what Google did years ago with Google-Extended. That user-agent lets sites opt out of Gemini training while staying in Google Search results. Now Anthropic and OpenAI offer the same separation for their platforms.

As AI-powered search grows its share of referral traffic, the cost of blocking search crawlers increases. The Cloudflare Year in Review data we reported in December showed AI crawlers already account for a measurable share of web traffic, and the gap between crawling volume and referral traffic remains wide. How publishers navigate these three-way decisions will shape how much of the web AI search tools can actually surface.