How To Identify Which LLM Is Actually Working For You [Webinar] via @sejournal, @hethr_campbell

AI search is dominating the strategy conversation right now, and everyone is hearing the same thing from clients and directors: “What’s our AI search plan?”

The instinct is to optimize everywhere (ChatGPT, Perplexity, Gemini) and move fast. But before you reallocate budget or rewrite your GEO roadmap, there’s a more useful question to ask first:

Which LLM is actually driving conversions in your clients’ specific industry?

Join us for an upcoming expert panel webinar where we’ll dive into exactly that.

What You’ll Learn

In this webinar, Danielle Wood, Content & Creative Manager at CallRail, and Natalie Johnson, SEO & AI Visibility Expert & Founder of SweetGlow Marketing, will break down real conversion data by LLM and show how platform-level performance should shape your GEO strategy.

Specifically, you’ll walk away with:

  • Conversion data by LLM platform, so you know where high-intent traffic is actually coming from in each industry
  • A clear AI prioritization framework to stop spreading GEO effort equally and concentrate it where it converts
  • A reporting model that ties AI search activity to real business outcomes clients can see and trust

Why Attend?

You’ll finally be able to justify AI search investment: this session will give you the data and the framework to make that case and to implement the strongest possible AI search strategy.

Join us live to get your questions answered directly by the expert panel.

New AI Jobs Index Ranks 784 Occupations By Loss Risk via @sejournal, @MattGSouthern

Jobs with the highest potential for AI-assisted productivity gains also face the highest projected job losses, according to a new index from Digital Planet at Tufts University’s Fletcher School.

The American AI Jobs Risk Index ranks 784 U.S. occupations, 530 metro areas, 50 states, and 20 industry sectors by vulnerability to AI-driven job loss.

All figures are model projections based on AI adoption scenarios, not actual layoffs or employment changes. The median scenario estimates 9.3 million jobs at risk, ranging from 2.7 million to 19.5 million depending on AI adoption speed.

Which Jobs Face The Highest Projected Risk

Writers and authors top the list of occupations at risk at 57%. Computer programmers and web and digital interface designers follow at 55% each. Editors are at 54%, and web developers at 46%.

Market research analysts and marketing specialists face a projected 35% job loss rate. Public relations specialists are at 37%. News analysts, reporters, and journalists face 35% risk.

Earlier analyses, such as the Anthropic Economic Index and Stanford’s “Canaries in the Coal Mine,” measured how accessible jobs are to AI. This analysis goes further by estimating how likely that exposure is to translate into projected job loss.

Augmentation & Loss Risk Go Together

The authors refer to the connection between jobs that benefit from AI-driven productivity gains and jobs expected to shrink as the “augmentation-displacement link.”

When AI increases individual workers’ efficiency, companies can produce the same output with fewer employees. This mainly affects entry-level and lower-seniority roles first, because companies can cut back on hiring rather than firing.

Writing, programming, web design, technical writing, and data analysis are where this pattern is most evident. Tasks in these fields are cognitive, language-intensive, and structured enough for large language models to manage.

By Industry

Average vulnerability across all industries is about 6%. Sectors with the highest projected job loss are Information (18%), Finance and Insurance (16%), and Professional, Scientific, and Technical Services (16%).

Software Developers, Management Analysts, and Market Research Analysts face the biggest total income losses. These three roles combine high pay with large workforces, accounting for a significant share of the projected $757 billion in total at-risk annual income.

What The Analysis Doesn’t Include

Note that job creation effects aren’t included in this version. The authors intend to add that data in future updates as they gather more evidence.

Additionally, regulatory constraints, union bargaining power, and occupational licensing requirements that could slow job losses in some sectors are not part of this analysis. The authors emphasize that their forecasts are scenario-based rather than definitive.

Why This Matters

There’s a common assumption among digital professionals that using AI to boost productivity protects their jobs. However, this data challenges that idea.

SEJ previously covered this tension in 2023 when Dr. Craig Froehle of the University of Cincinnati warned that companies not investing in employee retraining would see turnover costs double. The Tufts data puts numbers on the specific occupations where that pressure is building.

Looking Ahead

The American AI Jobs Risk Index will be updated as AI capabilities and labor market conditions evolve. The authors say future versions will attempt to include job creation data alongside loss estimates, providing a more complete view of AI’s overall impact on employment.

The methodology is available on the Digital Planet site, which also links to a data download page.


Featured Image: rudall30/Shutterstock

Why New Google-Agent May Be A Pivot Related To OpenClaw Trend via @sejournal, @martinibuster

Google quietly updated its list of user-triggered fetchers to include a new one called Google-Agent. The new agent will be used by its Project Mariner tool that began as an AI browser agent and may now be part of a pivot to compete with OpenClaw-style personal agents.

OpenClaw

OpenClaw is a new kind of personal AI agent that can perform a wide range of tasks online. It can even form teams, with one agent acting as the manager (the orchestrator) and handing out tasks to specialized agents. These agents run from a laptop or desktop as well as in hosted environments. They are model-agnostic and can connect to cloud-based AI providers such as Anthropic (Claude), Google (Gemini), and OpenAI.

Chinese AI providers like MiniMax, Moonshot AI (Kimi), Alibaba Cloud (Qwen), and DeepSeek are increasingly popular because they are significantly less expensive than mainstream American AI providers, further driving the personal AI agent boom.

The personal AI agent space is so popular and important that OpenAI hired the developer of the OpenClaw AI agent, Peter Steinberger.

Google’s Project Mariner

Project Mariner was announced in 2025 and was available only to Google AI Ultra subscribers and others admitted as Google Labs testers. The initial version was a browser-style assistant: you told it what to do, and it went out onto the web to accomplish various tasks.

A video of Project Mariner in action showed it to be a fairly clunky way to navigate the web, with one tester calling it “far from perfect.”

Project Mariner Test Drive Video

Pivot To AI Agents Called LAMs

AI agents are exploding right now, largely in the developer community, especially where it intersects with the vibe-coding trend. AI is currently used for building software and WordPress plugins, creating blog posts, and monitoring and posting to social media. AI agents are essentially robot workers that can do all of that autonomously.

These kinds of user agents are becoming known as Large Action Models (LAMs). A LAM understands what a user wants accomplished, breaks the goal up into steps, clicks buttons, calls APIs, and carries out tasks autonomously or with human oversight. Unlike LLMs that basically say things, LAMs actually do things.

The imminent release of the AI-friendly WordPress 7.0 may usher in a period of rapid evolutionary change in how businesses create and manage websites, and AI agents will quite likely expand and play a big role in that.

Wired reported, one day before Google announced the new Google-Agent crawler, that it had received confirmation Google was moving Project Mariner staff to its Gemini Agent product, with some of Mariner’s capabilities and insights folded into other projects:

“A Google spokesperson confirmed the changes, but said the computer use capabilities developed under Project Mariner will be incorporated into the company’s agent strategy moving forward. Google has already folded some of these capabilities into other agent products, including the recently launched Gemini Agent, the spokesperson added.

The change comes as Google and other AI labs rush to respond to the rise of highly capable agents like OpenClaw.”

Anthropic is already ahead of Google with its announcement of Claude Cowork, a desktop interface that makes it possible for non-coders to take advantage of AI agents.

Anthropic describes Cowork’s capabilities and purpose:

“Unlike Chat, Cowork lets Claude complete work on its own. Describe the outcome and cadence, and it takes action and keeps you informed. Come back to the result.

Claude delivers finished work instead of step-by-step updates: a formatted spreadsheet, a memo, a briefing doc. You review, refine, and decide what’s next.

Tell Claude what you want from your desktop or phone. Claude picks the fastest path: a connector for Slack, Chrome for web research, or your screen to open apps when there’s no direct integration.”

Cowork is currently available for download for macOS and Windows.

The boom in agentic AI coding has sent shockwaves through the software publishing industry, driven by fears that AI coding will make it easier for users to roll their own software solutions. Adobe Inc.’s stock has lost 33% of its value over the past six months, as have the stocks of many other software companies.

Screenshot Of Google Search For Adobe Inc Stock Price

For example, Mistral recently released Voxtral TTS, an inexpensive text-to-speech AI that can run on a laptop with at least 3GB of RAM, undercutting other companies offering the same service for a monthly subscription.

Google-Agent Connection

The new Google-Agent crawler is listed as a user-triggered fetcher, meaning it fetches pages in response to a user’s request rather than crawling the web autonomously. The documentation for the new crawler explains:

“Google-Agent is used by agents hosted on Google infrastructure to navigate the web and perform actions upon user request (for example, Project Mariner). It uses IP ranges from user-triggered-agents.json.”

Google currently offers Gemini CLI, but it’s not a one-to-one competitor with the agent-first Claude Code, which is designed to take actions. The new Google-Agent crawler could be a small piece of a new product able to compete more directly with Claude Code.

That said, Google once again finds itself racing to catch up to rapidly developing situations, and this change to the list of Google’s user-triggered fetchers is likely part of Google’s pivot to compete more robustly in the LAM space.

Featured Image by Shutterstock/PPstock

Google Gemini Sends More Traffic To Sites Than Perplexity: Report via @sejournal, @MattGSouthern

Google Gemini more than doubled its referral traffic to websites between November and January, according to SE Ranking data from more than 101,000 sites with Google Analytics installed.

The increase started in December, shortly after Google began rolling out Gemini 3 across its products. SE Ranking measured a 51% increase in December and a 42% increase in January, for a combined gain of about 115%.
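The combined figure follows from compounding the two monthly gains, which a quick sketch confirms:

```python
# Compound the two monthly growth rates reported by SE Ranking.
december_gain = 0.51
january_gain = 0.42

# Growth factors multiply; subtract 1 to get the combined percentage gain.
combined = (1 + december_gain) * (1 + january_gain) - 1
print(f"{combined:.0%}")  # → 114%
```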

For transparency, SE Ranking sells AI visibility tracking tools, and the data below comes from their own Google Analytics dataset.

Gemini Passes Perplexity

In January, SE Ranking’s data shows Gemini sent 29% more visitors to websites than Perplexity globally. In the U.S., the gap was wider at 41%.

Five months earlier, the positions were reversed. In August, Perplexity was sending roughly three times more referral traffic than Gemini, according to the same dataset.

ChatGPT’s Decline From Peak

ChatGPT’s referral traffic peaked in October and has fallen since then. SE Ranking measured an 8% drop in November and an 18% drop in December, with a partial recovery in January.

Even after the decline, ChatGPT still generates about 80% of all AI referral traffic to websites. ChatGPT’s lead over Gemini narrowed from roughly 22x in October to about 8x in January. That’s still a large gap.

Similarweb’s January data showed a similar pattern when measuring direct visits to chatbot sites. ChatGPT’s traffic share fell from 86% to 64% over the past year, while Gemini rose from 5% to 21%. The two datasets measure different things, but both show the same direction.

The Gemini 3 Connection

The timing of Gemini’s traffic increase lines up with Google’s rollout of Gemini 3 models.

Google released Gemini 3 Pro on November 18, Gemini 3 Deep Think on December 4, and Gemini 3 Flash on December 17. Flash became the default model in the Gemini app and in AI Mode for Search.

Before those releases, Gemini’s referral traffic had been mostly flat, growing at roughly 4% per month from January through October. The jump to an average of about 47% monthly growth in December and January represents roughly a 12x acceleration from the prior pace.

AI Traffic In Context

All AI platforms combined still account for a small share of overall web traffic. SE Ranking puts the figure at about 0.24% of global internet traffic as of January, up from 0.15% in 2025.

An earlier SE Ranking report of 13,700 websites found Google generating 94% of organic traffic. ChatGPT and Perplexity were starting to show up in referral reports. The new dataset is larger at 101,574 sites across 250 markets but uses the same GA-based methodology.

Why This Matters

Two months of growth from Gemini doesn’t predict where AI referral traffic will be by year’s end. The increase from November to January is measurable and correlates with a known product launch, but it’s too early to call it a sustained pattern.

The Perplexity milestone is more concrete. Gemini may now show up as a larger referral source than Perplexity in your own analytics. That’s worth checking.

Looking Ahead

SE Ranking says it will continue monitoring AI referral traffic through 2026. Google hasn’t disclosed referral traffic figures for Gemini or AI Mode directly. The next Similarweb AI Tracker update could provide a second data point on whether Gemini’s growth continued past January.


Featured Image: DANIEL CONSTANTE/Shutterstock

Answer Engine Optimization: How To Get Your Content Into AI Responses via @sejournal, @slobodanmanic

This is Part 2 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO and why the shift matters. This article gets practical: how AI systems actually select content, and what you can do about it.

AI Doesn’t Rank Pages. It Selects Fragments.

Traditional search ranks whole pages. AI search does something fundamentally different.

Microsoft’s Krishna Madhavan, principal product manager on the Bing team, described the shift in October 2025: AI assistants “break content down, a process called parsing, into smaller, structured pieces that can be evaluated for authority and relevance. Those pieces are then assembled into answers, often drawing from multiple sources to create a single, coherent response.”

This is the core insight. AI doesn’t pick the best page and show it. It picks the best fragments from many pages and weaves them together. Your page might rank No. 1 on Google and still not get cited in an AI response if its content isn’t structured in fragments that AI can extract.

The numbers show the shift is real. According to the Conductor AEO/GEO Benchmarks Report (January 2026; 13,770 domains, 17 million AI responses), AI traffic now accounts for 1.08% of all website sessions, growing roughly 1% month over month. Microsoft reported that AI referrals to top websites spiked 357% year-over-year in June 2025, reaching 1.13 billion visits. Small numbers today, compounding fast.

One in four Google searches now triggers an AI Overview. In healthcare, it’s nearly one in two. The surface area is growing, and the content that fills these answers has to come from somewhere. The question is whether it comes from you.

The Research: What Actually Gets Cited

The academic research on what makes content citable in AI responses has matured rapidly. The foundational paper, “GEO: Generative Engine Optimization” (Princeton, IIT Delhi, Georgia Tech, published at KDD 2024), tested nine optimization strategies and found that GEO techniques could boost visibility by up to 40% in AI responses. The most effective single technique was citing credible sources, which produced a 115.1% visibility increase for websites that weren’t already ranking in the top positions.

A counterintuitive finding: Writing in an authoritative or persuasive tone did not improve AI visibility. AI systems don’t respond to rhetorical style. They respond to verifiable information.

Since then, 2025 brought a wave of follow-up research that tested these ideas on real production AI engines rather than simulated ones.

The University of Toronto study (September 2025) was the first large-scale analysis across ChatGPT, Perplexity, Gemini, and Claude. Their most striking finding: AI search overwhelmingly favors earned media. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time, compared to Google’s 54.1%. Automotive showed a similar pattern at 81.9% versus 45.1%. In other words, it’s not just how you write content, but whose domain it appears on. Press coverage, product reviews on independent websites, and mentions on industry publications carry far more weight in AI responses than your own website.

Carnegie Mellon’s AutoGEO study (October 2025) used automated methods to discover what generative engines actually prefer. The results showed up to 50.99% improvement over the best baseline, with universal preferences emerging across engines: comprehensive topic coverage, factual accuracy with citations, clear logical structure with headings and lists, and direct answers to queries.

The GEO-16 framework (September 2025) analyzed 1,702 real citations from Brave, Google AI Overviews, and Perplexity. It identified 16 on-page quality factors that predict citation likelihood. The top three: metadata and freshness, semantic HTML, and structured data. Technical on-page factors matter as much as the quality of the writing itself.

And a reality check from Columbia and MIT’s ecommerce study (November 2025): of 15 common content rewriting heuristics, 10 produced negligible or negative results. The optimization strategies that did work converged toward truthfulness, user intent alignment, and competitive differentiation. Not tricks. Substance.

The overall pattern across all of this research: AI systems reward clarity, factual accuracy, and structure. They don’t reward marketing language, persuasion tactics, or keyword density.

Content Structure That Earns Citations

Based on the research and official guidance from Microsoft and Google, here’s what structurally makes content citable.

Heading hierarchy matters more than ever. Use descriptive H2 and H3 headings that each cover one specific idea. Microsoft lists strong headings as “signals that help AI know where a complete idea starts and ends.” Vague headings like “Learn More” or “Overview” give AI nothing to work with. A heading like “How AI parses content differently than search engines” tells the system exactly what the section contains.

Q&A format is native to AI. Write questions as headings with direct answers below them. Microsoft notes that “assistants can often lift these pairs word for word into AI-generated responses.” If your content answers the question someone asks an AI, and it’s structured as a clear question-and-answer pair, you’ve made the AI’s job easy.

Make content snippable. Bulleted and numbered lists, comparison tables, step-by-step instructions. These formats give AI clean, extractable fragments. A paragraph buried in a wall of text is harder for AI to isolate than the same information presented as a three-item list.

Front-load the answer. Start sections with the key information, then provide context. If someone asks, “What temperature should I bake bread at?” and your content opens with a two-paragraph history of bread making before mentioning 375°F, you’ll lose the citation to a competitor who leads with the answer.

Keep sections self-contained. Each section should make sense on its own, without requiring the reader to have read the previous section. AI extracts fragments. If your fragment only makes sense in the context of the whole page, it won’t be selected.

An important technical note from Microsoft: “Don’t hide important answers in tabs or expandable menus: AI systems may not render hidden content, so key details can be skipped.” FAQ answers collapsed inside an expandable menu, product specs hidden behind tabs, content that requires interaction to reveal: it may all be invisible to AI. If information is important, it needs to be in the visible HTML.

Authority Signals For AI

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t just a Google concept anymore. It’s what AI systems look for across the board, even if they don’t use the term.

Microsoft’s October 2025 guidance describes the baseline: success starts with content that is “fresh, authoritative, structured, and semantically clear.” On the clarity side, they’re specific: “avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.” Saying something is “next-gen” or “cutting-edge” without context leaves AI unsure how to classify it.

The research backs this up. The original GEO paper found that writing in a persuasive or authoritative tone did not improve AI visibility. Facts and cited sources did. Marketing language doesn’t impress algorithms.

This connects to the University of Toronto’s finding about earned media dominance. AI systems trust third-party validation more than self-promotion. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time compared to Google’s 54.1%. The implication: getting your expertise published on industry websites, earning press coverage, and building a presence on authoritative platforms matters more for AI visibility than perfecting the copy on your own site.

Freshness is a signal, not a bonus. Stale content rarely gets cited. Krishna Madhavan said at Pubcon Cyber Week: “Stale or missing content will constrain the amount of retrieval we can do and push agents toward alternative sources.”

Schema Markup: From Text To Knowledge

Microsoft’s October 2025 post devotes an entire section to schema. They describe it as code that “turns plain text into structured data that machines can interpret with confidence.” Schema can label your content as a product, review, FAQ, or event, giving AI systems explicit context instead of forcing them to guess. Krishna Madhavan reinforced this at Pubcon: “Schemas are super useful. They help the system discern exactly what your information is without us having to guess.”

The GEO-16 framework confirms this from the academic side. Structured data was one of the top three factors predicting AI citation likelihood, alongside metadata/freshness and semantic HTML.

The schema types that matter most for AI visibility:

  • FAQPage for question-and-answer content (directly maps to how AI formats responses).
  • HowTo for step-by-step instructions.
  • Product with Offer, AggregateRating, and Review for ecommerce.
  • Article/BlogPosting for content with clear authorship and dates.
  • Organization for business identity.
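To make the first of these concrete, here is a minimal sketch of FAQPage structured data, built and serialized in Python. The question and answer text are placeholders; the resulting JSON is what would sit inside a `<script type="application/ld+json">` tag on the page.

```python
import json

# Minimal FAQPage structured data; question and answer text are placeholders.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How does AI parse content differently than search engines?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AI assistants break pages into smaller structured "
                        "fragments and assemble answers from multiple sources.",
            },
        }
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

Each question-and-answer pair on the page gets its own entry in `mainEntity`, mirroring the Q&A heading structure described above.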

Pair structured data with IndexNow for freshness. As the Bing Webmaster Blog put it: “IndexNow tells search engines that something has changed, while structured data tells them what has changed. Together, they improve both speed and accuracy in indexing.”

Crawler Permissions: Who Gets In

AI search engines use distinct crawlers, and most let you control training and search access separately. Here’s who to allow.

Bot | Platform | Purpose | Robots.txt Token
OAI-SearchBot | ChatGPT | Search index | OAI-SearchBot
GPTBot | OpenAI | Model training | GPTBot
ChatGPT-User | ChatGPT | On-demand browsing | ChatGPT-User
Bingbot | Microsoft Copilot | Search + AI | Bingbot
Googlebot | Google AI Overviews | Search + AI | Googlebot
Google-Extended | Google | Gemini training | Google-Extended
PerplexityBot | Perplexity | Search + index | PerplexityBot
Perplexity-User | Perplexity | On-demand browsing | Perplexity-User
ClaudeBot | Anthropic | Training + retrieval | ClaudeBot

A sensible robots.txt configuration might allow search crawlers while blocking training:

# Allow ChatGPT's search crawler to index the site
User-agent: OAI-SearchBot
Allow: /

# Allow on-demand fetches made on behalf of ChatGPT users
User-agent: ChatGPT-User
Allow: /

# Block OpenAI's model-training crawler
User-agent: GPTBot
Disallow: /

# Block Gemini training (does not affect AI Overviews)
User-agent: Google-Extended
Disallow: /

OpenAI provides the cleanest bot separation. You can allow OAI-SearchBot (so your content appears in ChatGPT search) while blocking GPTBot (so it’s not used for model training). Google’s controls are less granular: blocking Google-Extended prevents Gemini training but has no effect on AI Overviews, which use Googlebot.

OpenAI also offers the most specific technical recommendation of any AI search provider. For their Atlas browser (which uses a standard Chrome user agent, not a bot identifier), they recommend following WAI-ARIA best practices: “Add descriptive roles, labels, and states to interactive elements like buttons, menus, and forms. This helps ChatGPT recognize what each element does and interact with your site more accurately.” Accessibility and AI agent compatibility are the same work.

A caveat on Perplexity: while their documentation states they respect robots.txt, Cloudflare documented in August 2025 that Perplexity uses undeclared crawlers with rotating IPs and spoofed browser user agents to bypass no-crawl directives. This is a contested claim, but it’s worth knowing.

For revenue, Perplexity is the only platform currently offering publisher compensation. Their Comet Plus program provides an 80/20 revenue split (publishers keep 80%) across direct visits, search citations, and agent actions.

Google Vs. Microsoft: Two Philosophies

The contrast between Google and Microsoft on AEO is striking enough to be its own story.

Google says: just do good SEO. Their official documentation is deliberately minimalist: “There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.” They add that you “don’t need to create new machine readable files, AI text files, or markup to appear in these features.”

Google recommends helpful, reliable, people-first content demonstrating E-E-A-T. Standard structured data. Good page experience. Technical basics. Nothing AI-specific.

Microsoft says: here’s the playbook. Their October 2025 blog post and January 2026 guide provide detailed, actionable guidance. Specific heading structures. Schema recommendations. Content formatting rules. Concrete examples (an AEO product description vs. a GEO product description). Warnings about content hidden in tabs and expandable menus. A framework for thinking about crawled data, product feeds, and live website data as three distinct layers.

What explains the difference? Partly market position. Google dominates search and has less incentive to help publishers optimize for AI features that might reduce clicks to their websites. Microsoft, with Bing’s roughly 8% market share, benefits from providing publishers with reasons to optimize specifically for their ecosystem.

But there’s a practical takeaway: Microsoft’s guidance isn’t Bing-specific. The principles of structured content, clear headings, snippable formats, schema markup, and expert authority are universal. Following Microsoft’s playbook improves your content for every AI system, including Google’s. Google just won’t tell you that.

Measuring AI Visibility

This is the hard part. Traditional SEO has Google Search Console. AI visibility is still fragmented.

Ahrefs analyzed 1.9 million citations from 1 million AI Overviews and found that 76% of citations come from pages already ranking in Google’s top 10. The median ranking for the most-cited URLs was position 2. Traditional ranking still matters for AI citation, but being No. 1 is “a coin flip at best” for getting cited.

The traffic impact is significant. Ahrefs found that AI Overviews correlate with 58% lower click-through rates for the No. 1 position. Seer Interactive reported a 61% organic CTR drop for queries with AI Overviews. But being cited within the AI Overview gives 35% more organic clicks compared to not being cited. Citation is the new ranking.

For tracking, the tool landscape is emerging:

Tool | What It Tracks | Starting Price
Profound | Citations across ChatGPT, Perplexity, Copilot, Google AIOs | From $99/mo
Peec.ai | Brand mentions across ChatGPT, Gemini, Claude, Perplexity | From ~$95/mo
Advanced Web Ranking | AIO presence tracking in Google | Included in plans
Bing Webmaster Tools | AI Performance Report for Copilot | Free

Bing Webmaster Tools is the easiest starting point. It’s free, and the new AI Performance Report shows how your content performs in Copilot citations. For ChatGPT specifically, track utm_source=chatgpt.com in your analytics. OpenAI automatically appends this to referral URLs.
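As a quick sanity check on that tag in your own reporting, you could count tagged landing URLs from an analytics export. The function name and sample URLs below are hypothetical; real export formats vary by platform.

```python
from urllib.parse import parse_qs, urlparse

def count_chatgpt_referrals(landing_urls):
    """Count landing-page URLs tagged with utm_source=chatgpt.com.

    `landing_urls` is a hypothetical list of full landing-page URLs
    exported from an analytics tool.
    """
    count = 0
    for url in landing_urls:
        params = parse_qs(urlparse(url).query)
        if "chatgpt.com" in params.get("utm_source", []):
            count += 1
    return count

urls = [
    "https://example.com/post?utm_source=chatgpt.com",
    "https://example.com/post?utm_source=newsletter",
    "https://example.com/other",
]
print(count_chatgpt_referrals(urls))  # → 1
```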

Conductor’s January 2026 report found that 87.4% of AI referral traffic comes from ChatGPT. That’s one platform dominating the space, which makes tracking it particularly important.

Key Takeaways

  • AI selects fragments, not pages. Structure your content in self-contained, extractable sections with descriptive headings that signal where each idea starts and ends.
  • Clarity beats persuasion. Factual accuracy, cited sources, and direct answers outperform authoritative tone and marketing language. The research consistently shows this.
  • Earned media dominates brand content in AI citations. Press coverage, third-party reviews, and authoritative mentions on other websites carry more weight than your own pages. Build presence beyond your domain.
  • Schema markup is a force multiplier. FAQPage, HowTo, Product, and Article schemas make your content machine-readable. Pair with IndexNow for freshness.
  • Follow Microsoft’s playbook, even for Google. Google says “just do good SEO.” Microsoft provides specific, actionable guidance that improves content for every AI system, Google’s included.
  • Separate training from search in your robots.txt. Allow search crawlers (OAI-SearchBot, Bingbot, PerplexityBot) while blocking training crawlers (GPTBot, Google-Extended) if that’s your preference. You have more control than you might think.
  • Track AI visibility now. Use Bing Webmaster Tools (free), monitor utm_source=chatgpt.com in analytics, and consider dedicated tools as the measurement space matures.

Traditional SEO asked: “How do I rank?” AEO asks: “How do I become the fragment that gets selected?” The answer isn’t a single trick. It’s clear structure, verifiable expertise, and content that AI can confidently extract and cite.

Up next in Part 3: the protocols powering the agentic web, including MCP, A2A, NLWeb, and AGENTS.md, and how they fit together.

This was originally published on No Hacks.


Featured Image: Meepian Graphic/Shutterstock

Wikipedia Bans Use Of AI-Generated Content via @sejournal, @martinibuster

Wikipedia recently published guidelines prohibiting the use of AI to generate or rewrite articles, with two exceptions related to copyediting and translation. The guidelines acknowledge that identifying AI-generated content can’t be based on style signals alone, and offer no further guidance on how LLM-based content will be identified.

Violation Of Wikipedia’s Core Content Policies

The new guidelines state that the use of LLMs violates several of Wikipedia’s core content policies without actually naming them. But a look at those policies makes it reasonably clear which ones are being alluded to: verifiability, no original research, and quite possibly the requirement for a neutral point of view.

The policy on verifiability requires that any content likely to be challenged must be attributable to a reliable published source that other editors can check. LLMs generate text without explicitly citing sources, and they also tend to hallucinate facts.

The policy on original research states:

“Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”

Obviously, LLMs generate a synthesis of published sources. As for neutral point of view, an LLM can place more weight on dominant viewpoints at the expense of minority ones. Most SEOs are aware that asking an LLM about SEO consistently produces answers that reflect the dominant, but not necessarily the most correct, point of view.

The new guidance makes two exceptions:

  1. “Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own. Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.
  2. Editors are permitted to use LLMs to translate articles from another language’s Wikipedia into the English Wikipedia, but must follow the guidance laid out at Wikipedia:LLM-assisted translation.”

As to identifying AI-generated content, the new Wikipedia AI guidelines suggest considering how well the content complies with the core content policies and auditing the recent edits of any editor whose contributions are under suspicion.

Featured Image by Shutterstock/JarTee

The Agency Playbook for Surviving the Agentic AI Era

Search is moving from queries typed into a box to conversations held with systems that understand intent, context, and outcomes. People no longer look for pages. They look for solutions, guidance, and confidence that they are making the right choice.

Agentic AI pushes this shift further. Instead of waiting for instructions, agents act on goals. They discover information, compare options, trigger workflows, and adjust based on feedback. For digital leaders, this means visibility is no longer only a ranking problem. It becomes a problem of influence inside AI systems.

SEO now touches product, data, knowledge management, and experience design. This playbook explains how to prepare for that shift, build capability, and lead change.

Search Is Becoming AI-Mediated

AI systems have become the layer between users and the web. They read content on behalf of users, make selections instead of requiring users to browse, and influence decisions in ways that search pages once did.

This shift changes how people interact with information. Users now ask broader, more complex questions, expecting systems to understand nuance and intent. The traditional act of navigating through links is giving way to direct answers and immediate actions.

Content can no longer be designed solely for human readers. It must also be structured in ways that AI systems can interpret accurately and confidently. In this environment, trust and evidence carry more weight than keywords or search optimization tactics.

Winning in search today means becoming part of the models that shape decisions, not just appearing in the results.

What Agentic AI Means For SEO And Digital

Agentic AI is changing how people discover and choose brands. Discovery now depends on how well models learn from your content, the paths users take on your site, and the external signals that establish credibility. These systems decide when your brand is relevant, based on what they understand and trust.

During evaluation, AI compares your product, price, quality, reviews, and suitability for a given user against other options. It looks for proof, tests claims, and weighs real signals over marketing language.

When supporting decisions, AI doesn’t just provide information. It actively guides users toward what it considers the best fit. Your brand might be brought forward or quietly passed over, depending on how well it matches user needs.

In this landscape, SEO is no longer just about publishing content. It’s about shaping how AI systems perceive your brand and when they choose to recommend it.

New Operating Model For SEO

The future of search brings marketing, product, and data teams into a shared effort. Success depends on how well these areas work together to shape how AI systems perceive and present your brand.

The key is building structured knowledge that AI can easily process and apply. Instead of designing for clicks and views, focus on creating journeys that help users complete tasks through the systems guiding them. It’s also critical to train these systems with the right brand messages, supported by clear evidence and consistent proof points.

Ongoing visibility requires monitoring how models reference your brand, how they rank it, and how they reason about its relevance. This means continuously refining the signals you send, improving your content, updating product data, and reinforcing trust in every interaction.

The goal is clear, and it hasn’t really changed from the technical goals of SEO: make it easy for AI agents to understand, trust, and ultimately recommend your brand.

Maturity Model

  • Level 0 (Manual SEO): Basic optimization and manual workflows. Key indicators: keyword focus, isolated content execution, minimal data alignment.
  • Level 1 (Assisted SEO): AI supports research and content creation. Key indicators: AI-assisted briefs, content suggestions, faster execution, manual oversight.
  • Level 2 (Integrated AI workflows): Core SEO tasks automated and structured. Key indicators: content pipelines, structured data adoption, automated QA, analytics integration.
  • Level 3 (Agent-driven operations): Agents monitor, trigger, and refine SEO. Key indicators: automated reporting, performance triggers, self-adjusting content modules.
  • Level 4 (Autonomous acquisition systems): Self-improving systems tied to revenue. Key indicators: continuous testing, adaptive journeys, revenue-linked triggers, real-time optimization.

The goal is not automation alone. It is intelligence and improvement at scale.

Technical And Data Foundations

To prepare for agentic SEO, organizations need more than traditional content systems built for publishing. They need strong foundations that help AI systems understand, evaluate, and act with confidence.

This starts with clarity, which means crafting messaging that is consistent, accurate, and easy for machines to interpret. Structure is also essential, requiring content, data, and signals to be organized in ways that align with how AI systems process and reason through information.

Key components of this are:

  • Structured data that turns content into machine‑readable knowledge.
  • Knowledge graphs that explain relationships between products, categories, and needs.
  • Taxonomy and naming standards to ensure consistency across pages, feeds, and assets.
  • APIs and automation for publishing and optimization, so agents can trigger updates.
  • Clean product and service data, including specifications, pricing, and availability.
  • Evaluation systems to audit AI outputs and detect hallucinations or misalignment.
  • Identity and trust signals, including reviews, authority, certifications, and product proof.
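
To make the structured-data point concrete, here is a minimal JSON-LD sketch for a Product entity using schema.org vocabulary. Every value (name, brand, price, ratings) is a placeholder, not a prescription for your data:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "Sample product used to illustrate machine-readable markup.",
  "brand": { "@type": "Brand", "name": "ExampleCo" },
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
```

Markup like this only helps if it mirrors the live state of the page; stale prices or availability erode exactly the trust signals it is meant to provide.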

This calls for a shift from simply building web pages to creating a well-organized information architecture. The goal is to structure information in a way that AI systems can easily navigate, understand, and apply.

In practice, this means bringing together product data, content metadata, and customer intent into a single, connected system. It involves defining the key entities your business represents, such as products or services, and mapping how they relate to what users are trying to accomplish. Content feeds and structured data should reflect the actual state of the business rather than just marketing language.

Equally important is creating feedback loops that show how AI systems interpret and reference your brand. These insights help you see where your content is being used, how it is being understood, and whether it is guiding users toward your brand. With this information, you can keep refining what you share to improve how systems recognize and recommend you.

Instead of asking, “How do we rank for this query?” leaders will ask, “How do systems understand us, trust us, and act on our information?”

KPI And Measurement Model

Traditional key performance indicators still hold value, but they no longer capture the full picture. Rankings and session metrics continue to provide insight, yet they now exist within a broader framework shaped by how AI systems retrieve, interpret, and act on information. Ranking reports will sit alongside AI retrieval dashboards, and session counts will be evaluated alongside metrics focused on task completion and user outcomes.

In my opinion, you should also be looking to monitor:

  • Share of voice in AI assistants.
  • Retrieval and inclusion rate in AI answers.
  • Brand alignment and brand safety in model outputs.
  • Presence in multi‑step reasoning chains.
  • Task completion and conversion paths from AI systems.
  • Cost per automated workflow and cost per agent‑driven action.
  • Model education, data freshness, and trust scores.

As measurement evolves, the focus moves from tracking visitor numbers to understanding how AI systems shape decisions. To navigate this shift, leaders should design metrics that reflect influence within these systems. Visibility will measure whether the brand is appearing in AI-generated responses and assistant-led interactions.

Accuracy will assess whether the brand is being represented correctly and safely across touchpoints. Trust will reflect whether AI systems choose your content and signals over others when making recommendations. Action will capture whether AI-driven experiences result in tangible outcomes like leads, bookings, or purchases. Efficiency will show whether AI agents are reducing manual effort, improving speed, and delivering better user experiences.

Success will no longer be defined by visibility alone but by a brand’s ability to perform across discovery, decision support, and operational impact.

Talent And Capability Model

Agentic SEO is not a standalone skill set; it draws from a mix of disciplines spanning marketing, data, and product. Success in this space requires a collaborative approach, where expertise is integrated rather than siloed.

Future-facing teams bring together SEO and content strategy, data and automation engineering, product and user experience thinking, as well as governance and prompt development. Legal and compliance awareness also play a critical role, ensuring that outputs remain responsible and aligned with brand and regulatory standards.

These teams operate in cross-functional pods, organized around delivering customer outcomes rather than managing individual channels. This structure allows them to move faster, adapt to change, and create more cohesive experiences across AI-driven platforms.

Modern SEO teams include several key roles. The SEO strategist focuses on how AI systems search, retrieve, and rank content. The data engineer manages the integrity of structured content, metadata, and live data feeds. The automation specialist builds the workflows and agents that connect information to user actions. The AI evaluator audits model outputs to ensure accuracy, brand alignment, and safety. The product partner bridges SEO efforts with real user journeys, making sure that discovery leads to meaningful interaction and conversion.

As this approach matures, teams will spend less time producing content manually and more time designing the systems, signals, and experiences that guide AI behavior and improve how users discover and engage with the brand.

The First 90 Days

Days 1 To 30: Foundation And Alignment

  • Audit content, data, and search performance.
  • Map where AI already touches customer journeys.
  • Identify gaps in structure, trust signals, and data quality.
  • Set goals for AI visibility and agent‑driven workflows.

Days 31 To 60: Build And Test Pilots

  • Launch structured data and knowledge base improvements.
  • Test AI‑assisted content and QA pipelines.
  • Introduce early agent monitoring for SEO signals.
  • Create evaluation benchmarks for AI accuracy and brand safety.

Days 61 To 90: Scale And Govern

  • Deploy automation in high‑impact workflows.
  • Formalize model governance and feedback loops.
  • Train cross‑functional teams on AI‑ready processes.
  • Build dashboards for AI visibility, trust, and conversion.

Future Outlook

Search will not disappear. It will merge into tasks, journeys, and decisions across devices and interfaces. Brands that train AI systems, structure knowledge, and build agent‑ready operations will lead.

The winners will not be those who automate content. They will be those who help users and systems make better decisions at speed and scale.


Featured Image: Collagery/Shutterstock

Research: “You Are An Expert” Prompts Can Damage Factual Accuracy via @sejournal, @martinibuster

“You are an expert” persona prompting can harm performance as much as it helps. A new study shows that persona prompting improves alignment with human expectations but can reduce factual accuracy on knowledge-heavy tasks, with effects varying by task type and model. The takeaway is that persona prompting works better on some kinds of tasks than it does in others.

Persona Prompting

Persona prompting is a common way to shape how large language models respond, especially in applications where tone and alignment with human expectations matter. It is widely used because it improves how outputs read and feel. Given how widespread it is, it may come as a surprise that its actual effect on performance remains unclear: prior research shows inconsistent results, leaving it in doubt whether the technique helps or harms.

The researchers concluded that persona prompting is neither broadly beneficial nor harmful, and that its efficacy depends on the type of task.

They found:

  • It improves alignment-related outputs such as tone, formatting, and safety behavior
  • Persona prompting degrades performance on tasks that rely on factual accuracy and reasoning

Based on this, the authors introduce a method called PRISM (Persona Routing via Intent-based Self-Modeling) that applies personas selectively, using intent-based routing instead of treating personas as a default setting. Their findings show that persona prompting works best as a conditional tool, providing a better understanding of when it helps and when it should be avoided.

Managing Behavioral Signals

In section three of the paper, the researchers say that expert personas have “useful behavioral signals” but that naïve use of persona prompting damages as much as it helps. They say this raises the question of whether those benefits can be separated from the harms and applied only where they improve results.

Behavioral signals influence LLM output. These signals are the reason persona prompting works. They drive improvements in tone, structure, safety behavior, and how well responses match expectations. Without them, there would be no benefit to persona prompting.

Yet, in a seeming paradox, the paper shows that those same signals interfere with tasks that depend on factual accuracy and reasoning. That is why the paper treats them as something to manage, not maximize.

These signals include:

  • Stylistic adaptation and tone matching: Adopting a professional or creative voice.
  • Structured formatting: Providing step-by-step or technical layouts.
  • Format adherence: Helping the model follow complex structures, like professional emails or step-by-step STEM explanations.
  • Intent following: Focusing the model on the user’s underlying goal, especially in tasks like data extraction.
  • Safety refusal: Identifying and declining harmful requests more effectively by adopting a “Safety Monitor” role.

Persona Prompt Wins

The paper found that persona prompts were a win in five out of eight categories of tasks:

  1. Extraction: +0.65 score increase.
  2. STEM: +0.60 score increase.
  3. Reasoning: +0.40 score increase.
  4. Writing: Improved through better stylistic adaptation.
  5. Roleplaying a domain expert: Improved through better tone matching.

Persona prompting won in the above categories because they depend more on style and clarity than on whether an answer is factually correct. The researchers also found that the longer and more detailed the persona prompt, the stronger the alignment and safety behaviors become.

Persona Prompt Failures

Conversely, the expert persona consistently degraded performance in the remaining three (out of eight) categories, which rely on precise fact retrieval or strict logic rather than style and clarity. The performance drops because adding a detailed expert persona essentially “distracts” the model by activating an “instruction-following mode” that prioritizes tone and style.

Activating expert personas comes at the expense of “factual recall.” The model is so focused on trying to act like an expert that it neglects the information it learned during its initial training. That explains the drops in accuracy for facts and math.

Persona expert prompts performed worse in the following three categories:

  1. Math
  2. Coding
  3. Humanities (memorized factual knowledge)

The paper notes that on one of the knowledge benchmarks (MMLU), accuracy dropped from a 71.6% baseline to 68.0% even with the “minimum” persona, and fell further to 66.3% with the “long” persona.

They explained the safety improvements:

“More detailed persona descriptions provide richer alignment information, amplifying instruction-tuning behaviors proportionally.”

And showed why factual accuracy takes a hit:

“Persona Damages Pretraining Tasks
During pretraining, language models acquire capabilities such as factual knowledge memorization, classification, entity relationship recognition, and zero-shot reasoning. These abilities can be accessed without relying on instruction-tuning, and can be damaged by extra instruction-following context, such as expert persona prompts.”

Conclusions Reached

The researchers conclude that persona prompting consistently improves alignment-dependent tasks such as writing, roleplay, and safety behavior, while degrading performance on tasks that rely on pretraining-based knowledge, including math, coding, and general knowledge benchmarks.

They also found that a model’s sensitivity to personas scales with its training. Models that are more optimized to follow instructions are more “steerable,” which means they get the biggest boost in safety and tone, but they also suffer the largest drops in factual accuracy.

Takeaways

1. Be selective about using persona prompts:

  • Do not default to “You are an expert” prompts
  • Treat persona prompting as situational. Using it everywhere introduces hidden accuracy risks.

2. Persona prompting is effective for:

  • Writing quality
  • Tone
  • Formatting and organization
  • Readability

3. Tasks that don’t benefit from persona prompting and should instead use neutral prompting to preserve accuracy:

  • Fact-checking
  • Statistics
  • Technical explanations
  • Logic-heavy outputs
  • Research
  • SEO analysis

4. Remember these three findings:

  • Use persona prompting to generate content, then switch to a non-persona prompt (or a stricter mode) to verify facts.
  • Highly detailed “expert” prompts strengthen tone and clarity but reduce factual and knowledge accuracy.
  • “You are an expert” prompts may cause a model to prioritize sounding correct over actually being correct.

5. Match your prompts to the task:

  • Content creation: Persona helps
  • Analysis and validation: Persona hurts

The most effective approach is not one prompt, but a workflow that switches prompts depending on the task, similar to the researcher’s PRISM approach.
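
That routing idea can be sketched in a few lines. The task names and prompt text below are illustrative assumptions, not from the paper: the point is simply that alignment-heavy tasks get a persona prefix while knowledge-heavy tasks get a neutral prompt.

```python
# Task categories where the study found persona prompting helps vs. hurts
# (category names here are my own shorthand, not the paper's labels)
PERSONA_TASKS = {"writing", "roleplay", "extraction", "stem_explanation"}
NEUTRAL_TASKS = {"math", "coding", "factual_recall"}

def build_prompt(task_type: str, user_request: str) -> str:
    """Prepend an expert persona only for tasks where it tends to help."""
    if task_type in PERSONA_TASKS:
        # Persona boosts tone, structure, and alignment on these tasks
        return "You are an experienced professional writer. " + user_request
    # Neutral prompt preserves factual accuracy on knowledge-heavy tasks,
    # and is the safe default for unknown task types
    return user_request

styled = build_prompt("writing", "Draft a product announcement.")
neutral = build_prompt("math", "What is 17 * 23?")
```

A production workflow would classify the task first (the paper's PRISM does this with intent-based routing), then pick the prompt; this sketch hard-codes the classification to show the switch itself.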

Read the research paper:
Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM

Featured Image by Shutterstock/ImageFlow

5 GEO Strategies To Make AI Search Engines Recommend Your Brand In 2026

This post was sponsored by Geoptie. The opinions expressed in this article are the sponsor’s own. 

The way people search is changing faster than most marketers realize. ChatGPT alone now has over 900 million weekly active users. Google AI Overviews appear in one out of every four search results.

Each of these contains the potential for AI to cite your brand.

This isn’t a future trend. It’s happening right now. And if your brand isn’t showing up in those AI-generated answers, you’re invisible to a rapidly growing audience, even if you rank #1 on Google.

That’s where Generative Engine Optimization (GEO) comes in: the practice of optimizing your online presence so that AI engines cite, reference, and recommend your brand when users ask questions in your space.

1. Start By Measuring Your AI Visibility

Before changing a single word on your website, you need to know where you stand. Which AI platforms mention your brand? For which queries? How often are your competitors getting cited instead of you?

You can’t optimize what you don’t measure.

How To Measure AI Visibility

Most marketers skip this step because it feels unfamiliar. But the process is straightforward.

  1. List 10–15 questions your ideal customer would ask an AI engine, things like “best [your category] for [use case]” or “how to solve [problem you address].”
  2. Run each query in ChatGPT, Perplexity, and Gemini.
  3. Note whether your brand is mentioned, which competitors show up instead, and whether sources are cited.

Repeat monthly, because AI-generated answers shift as models update and new content gets indexed. Doing this manually across multiple platforms gets tedious fast, which is why dedicated GEO platforms exist to automate the tracking and monitor changes over time.
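
The three steps above can be sketched as a simple tally. This is a hypothetical example, not a real API integration: it assumes you have already pasted each AI answer into a script, and it just counts which brands appear per query.

```python
from collections import Counter

def mention_report(answers: dict, brands: list) -> dict:
    """Share of saved AI answers that mention each brand.

    answers: query -> answer text (pasted from ChatGPT, Perplexity, etc.)
    brands: brand names to look for (yours plus competitors).
    """
    counts = Counter()
    for text in answers.values():
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = len(answers)
    return {brand: counts[brand] / total for brand in brands}

# Made-up answers for two sample queries
answers = {
    "best CRM for small teams": "Popular picks include AcmeCRM and Zeta.",
    "how to track sales calls": "Tools such as Zeta are commonly cited.",
}
report = mention_report(answers, ["AcmeCRM", "Zeta"])
# AcmeCRM appears in 1 of 2 answers, Zeta in 2 of 2
```

Re-running the same script against next month's answers gives you a crude but honest trend line before you invest in a dedicated tool.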

The best place to start? Run a free geo rank check on your brand. In under a minute, you’ll see which AI engines mention you, which ones don’t, and where your competitors show up instead.

This baseline is essential. Without it, you’re optimizing blind.

2. Don’t Abandon SEO. It Still Feeds AI

Here’s an important nuance: traditional search rankings still matter for GEO.

AI engines frequently pull from top-ranking Google results when generating their responses. If your page ranks well for a relevant query, there’s a higher chance an AI engine will reference it as a source. Google’s own AI Overviews heavily favor content that already performs well in organic search.

So keep doing what continues to drive SERP rankings:

  • Producing high-quality content
  • Building backlinks
  • Maintaining technical SEO

But think of SEO as the foundation, not the full strategy. The brands that win in AI search are those that layer GEO tactics on top of a solid SEO foundation.

3. Make Sure Your Content Follows GEO Best Practices

This is where most of the work happens. AI engines are selective about what they cite, and the structure and quality of your content play a massive role. Here’s what to focus on:

  • Write for citability, not just readability. AI engines look for content that makes clear, specific claims backed by data or expertise. Vague, fluffy paragraphs get skipped. Concrete statements like definitions, statistics, step-by-step processes, and expert opinions are far more likely to be pulled into a generated response.
  • Structure content around questions. Conversational AI is driven by user questions. Structure your content to directly answer the questions your audience asks, using clear headers, concise paragraphs, and FAQ sections. When an AI engine scans your page and finds a clean, authoritative answer to a specific question, you become a prime candidate for citation.
  • Leverage schema markup and structured data. Help AI engines understand what your content is about by implementing proper schema markup. FAQ schema, How-To schema, and Organization schema all give AI systems stronger signals about your content’s topic and structure.
  • Build topical authority, not just keyword-specific content. AI engines favor sources that demonstrate deep expertise on a topic. Rather than publishing scattered blog posts across dozens of topics, build comprehensive content clusters that cover a subject thoroughly. This signals to AI engines that your brand is a reliable authority worth citing.
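
For instance, a single question-and-answer pair can be marked up as FAQPage in JSON-LD like this (the question and answer text are placeholders; use the actual Q&A visible on your page):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Generative Engine Optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "GEO is the practice of optimizing content so AI engines cite, reference, and recommend your brand."
      }
    }
  ]
}
```

The markup must match the content users actually see on the page; marking up hidden or divergent Q&A risks being ignored or penalized.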

Pro Tip: Leverage a comprehensive GEO platform. Optimizing your content for AI search involves many moving parts: content structure, schema markup, topical authority, and technical SEO. Keeping track of all these signals manually across every page on your site isn’t realistic, especially as AI engines update how they evaluate sources. A dedicated GEO platform lets you regularly scan your entire website, monitor your optimization scores, and catch issues before they cost you citations.

Want to see where you stand right now? Run a free GEO audit and get actionable insights on your site’s AI readiness in under a minute.

4. Show Up In Reddit & UGC Discussions

Here’s a strategy most brands overlook: AI engines love Reddit.

If you’ve noticed Reddit threads showing up in Google results more frequently, that’s not a coincidence. Google and AI platforms increasingly treat user-generated content, especially Reddit, as a trusted and authentic source of information. When someone asks an AI engine for a product recommendation or solution comparison, the response often draws from Reddit discussions.

This means your brand’s presence in relevant threads matters more than ever. But you can’t just show up and start promoting yourself. Here’s how to approach it the right way:

  • Find where your audience is already talking. Search Reddit for your product category, your competitors’ names, and the problems you solve. Identify 5–10 active subreddits where these conversations happen. Look for threads like “what tool do you use for [your category].”  These are the discussions AI engines pull from.
  • Contribute before you promote. Spend at least 2–3 weeks genuinely participating before your brand ever comes up. Reddit users check post history, and if your account is nothing but product mentions, you’ll get flagged as spam.
  • Be honest, not salesy. When a relevant recommendation thread comes up, share your product as one option among others. Mention what it’s good at and where it might not be the best fit. AI engines weigh authentic, nuanced mentions far more heavily than obvious self-promotion.
  • Check what AI engines are citing. Run your core queries in ChatGPT and Perplexity and see which Reddit threads appear. If your brand isn’t in those threads, that’s where to focus.

5. Get Featured In Listicles On Trusted Sites

When users ask AI engines for recommendations like “best project management tools,” the AI doesn’t generate that list from scratch. It synthesizes from existing listicle articles on authoritative websites. A single placement in a well-ranking listicle can get your brand recommended across ChatGPT, Perplexity, and Google AI Overviews simultaneously.

  • Find the listicles AI engines are already citing. Run your target recommendation queries in ChatGPT and Perplexity and note which articles they reference. These are the exact listicles you need to be in.
  • Build a hit list of publishers. Identify publications that come up repeatedly across both AI and traditional search results for “best [your category]” queries. Prioritize sites with strong domain authority.
  • Make inclusion easy. Make sure your product pages have a clear one-liner, obvious differentiators, social proof, and transparent pricing. Then pitch authors with something valuable, such as a free account, a demo, or data they can use.

Listicles get updated regularly and AI engines re-scan them, so a placement you earn today could start driving AI citations within weeks.

The Window Is Open, For Now

Generative Engine Optimization is still in its early stages. Most brands haven’t even started thinking about it, which means the opportunity to establish an early advantage is enormous.

The brands that start measuring their AI visibility, optimizing their content for citability, building community presence, and earning placements in authoritative listicles today will be the ones AI engines default to recommending tomorrow.

The question isn’t whether AI search will matter for your business. It’s whether you’ll be visible when it does.

Start Optimizing For AI Search Today

Every strategy in this article comes down to one thing: making your brand the obvious choice when AI engines look for sources to cite and recommend. You don’t need to tackle everything at once, but you do need to start.

Geoptie brings all five strategies together in one platform, from tracking your AI visibility across ChatGPT, Perplexity, and Google AI to auditing your content and monitoring your optimization scores over time. It’s built specifically for GEO, so you can stop guessing and start seeing exactly where your brand stands in AI search.

The early movers will own this space. Make sure you’re one of them.


Image Credits

Featured Image: Image by Tor App. Used with permission.

From SEO And CRO To Agentic AI Optimization (AAIO): Why Your Website Needs To Speak To Machines via @sejournal, @slobodanmanic

For 25 years, we’ve built websites for humans who click, scroll, and browse. That era is ending. I’ve been in website optimization for 15+ years, and this is the biggest shift I’ve seen since mobile. And honestly, I think it’s way bigger than that.

The internet is undergoing its most significant transformation since it began. Your website now has two audiences: humans and AI agents. The agents are already here, shopping, researching, booking, and making decisions. The question is whether your website can serve them.

This is the first article in a five-part series on optimizing websites for the agentic web. We’ll cover discovery, citation, technical implementation, and the new commerce protocols that let AI complete purchases on your behalf. Throughout this series, we’ll draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

But first, we need to understand how we got here and why December 2025 changed everything.

The Evolution: SEO To AEO To GEO To AAIO

The alphabet soup of optimization acronyms tells a story about how the web has changed.

SEO (Search Engine Optimization) dominated from the mid-1990s through the 2010s. The goal was simple: rank higher on Google. You optimized keywords, built backlinks, and structured your site so crawlers could index it. Success meant appearing on page one when someone searched for your topic.

AEO (Answer Engine Optimization) emerged as AI systems started answering questions directly. When Google introduced featured snippets, then AI Overviews, the game changed. Ranking wasn’t enough anymore. You needed to be the source that AI systems cited when generating answers. AEO focuses on structuring content so it gets selected and quoted, becoming the definitive answer rather than just a search result.

GEO (Generative Engine Optimization) expanded this further. Systems like ChatGPT, Claude, and Perplexity don’t just cite sources. They synthesize information from multiple places into comprehensive responses. GEO aims to get your content into these synthesized answers, so your expertise is woven into the AI’s response even when you’re not the primary citation.

AAIO (Agentic AI Optimization) is the latest evolution, and it represents a fundamental shift. AAIO isn’t about being found or cited. It’s about being usable by AI agents that act autonomously on behalf of humans.

A research paper published in April 2025 by Luciano Floridi and colleagues formalized this distinction. As they put it, AAIO “explicitly optimises content for autonomous artificial agents, simultaneously addressing both human and machine interpretability.” Unlike SEO, which enhanced discoverability for humans through search engines, AAIO prepares websites for AI systems that initiate digital interactions independently.

Agent Experience Optimization (AXO) is the umbrella term that encompasses all of these practices. Just as UX focuses on human users and SEO focuses on search crawlers, AXO focuses on AI systems that interact with websites. It includes discovery (being found), citation (being referenced), and action (being usable). I cover AXO in depth in “What Is Agent Experience Optimization.”

The progression is straightforward: SEO asks “How do I rank?” AEO asks “How do I get cited?” GEO asks “How do I get included?” AAIO asks “How do I enable agents to complete tasks on my site?”

The relationship between website optimization and AI effectiveness creates a virtuous cycle, similar to what happened with SEO and search engines in the early 2000s. When websites implement AAIO practices, AI agents perform better, which encourages more websites to adopt these practices, which makes agents more useful, which drives adoption further.

December 2025: The HTML Moment For AI

On Dec. 9, 2025, something significant happened. The Linux Foundation announced the Agentic AI Foundation (AAIF), a vendor-neutral governance body for agentic AI standards.

Eight platinum members anchored the foundation: Amazon Web Services, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. What’s remarkable here isn’t the technology. It’s that OpenAI, Anthropic, Google, and Microsoft are building shared infrastructure instead of competing standards. This is a strong signal that the industry sees agentic AI as foundational, not a feature war.

Three key projects were contributed:

  • Model Context Protocol (MCP) from Anthropic: a universal standard for connecting AI systems to tools and data sources, now with over 10,000 published servers and adoption by Claude, ChatGPT, Gemini, VS Code, and Microsoft Copilot
  • AGENTS.md from OpenAI: a standardized specification for providing AI coding agents consistent project guidance across repositories
  • goose from Block: an open-source, local-first agent framework combining language models with extensible tools
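To make the AGENTS.md contribution concrete: it is a plain Markdown file placed at a repository’s root, giving coding agents the same onboarding a human contributor would get. The sections below are illustrative only; the spec deliberately leaves structure and content up to each project:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install`
- Copy `.env.example` to `.env` before running anything

## Code style
- TypeScript strict mode; avoid `any`
- Run `npm run lint` before committing

## Testing
- `npm test` must pass; add tests for every new module
```

Because it is just Markdown, any agent (or human) can read it without special tooling, which is the point of standardizing the filename rather than the format.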

This matters because it mirrors what happened with the early web. In the 1990s, competing browser vendors and incompatible standards fragmented the internet. The W3C brought order by establishing shared protocols like HTML and CSS. The Agentic AI Foundation aims to do the same for AI agents, creating the shared infrastructure that lets agents from different companies work together and interact with websites consistently.

As Linux Foundation Executive Director Jim Zemlin put it, the foundation enables development “with the transparency and stability that only open governance provides.”

We’re watching the TCP/IP moment for agents. The protocols being established now will define how AI interacts with the web for the next decade: MCP for tool integration, A2A for agent-to-agent communication, NLWeb for making websites queryable.

I realize that sounds hyperbolic. It isn’t. We’re in the early months of a decade-long transformation.

Discovery, Citation, And Action

These three concepts form the framework for this entire series:

  • Discovery is about being found by AI systems. AI crawlers like GPTBot, ClaudeBot, and PerplexityBot index the web for their respective platforms. If you’re blocking these crawlers, or if your content isn’t accessible to them, you’re invisible to AI systems. Discovery is the foundation. Nothing else matters if agents can’t find you.
  • Citation is about being selected as a source. When an AI system generates a response, it chooses which sources to reference. Getting cited requires content that AI systems recognize as authoritative, accurate, and relevant. This involves structured data, clear information hierarchy, and demonstrable expertise. Microsoft has published detailed guidance on what makes content citable.
  • Action is about enabling agents to use your site. This is where AAIO diverges from earlier optimization approaches. An agent visiting your site might need to click buttons, fill forms, navigate menus, compare options, and complete transactions. If your site breaks when an agent tries to interact with it, you lose the business to competitors whose websites work.
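As a minimal illustration of the discovery level, a robots.txt that explicitly allows the AI crawlers named above could look like the sketch below. These user-agent tokens are the ones the respective companies document, but verify against each vendor’s current crawler documentation before relying on them:

```
# Allow major AI crawlers to index the site
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

The inverse (`Disallow: /`) is how many sites are currently, and often unintentionally, invisible to AI systems.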

The stakes escalate at each level. Failing at discovery means invisibility. Failing at citation means your competitors get referenced instead. Failing at action means losing transactions that would have happened on your site.
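The citation level, in particular, leans on structured data. A minimal JSON-LD sketch using schema.org’s Article type follows; all values are placeholders, and which type and properties matter will depend on your content:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example: How Widgets Work",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2026-01-15",
  "publisher": {
    "@type": "Organization",
    "name": "Example Co"
  }
}
```

Markup like this gives an AI system an unambiguous, machine-readable statement of who wrote what and when, rather than forcing it to infer authorship from page layout.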

Why This Matters Now

Two converging trends make 2026 the year to act.

Agentic browsers are reaching consumers.

The first wave of AI browsers launched in 2025, and 2026 is bringing them to mainstream users. For a complete breakdown, see The Agentic Browser Landscape in 2026.

Perplexity’s Comet combines search-focused AI with full browser capabilities. ChatGPT Atlas from OpenAI includes Agent Mode for autonomous multi-step tasks. Chrome’s auto browse feature, powered by Gemini, is shipping to Google AI subscribers.

Chrome alone represents 3 billion potential users. If you’re wondering whether to take this seriously: Google doesn’t ship features to 3 billion users on a whim.

When the world’s most popular browser can autonomously scroll, click, type, and navigate on your behalf, the implications for website owners are profound. Websites that work well with these agents get included in agentic workflows. Websites that don’t get skipped.

As DigitalOcean’s analysis notes, “This shift forces websites to redesign for both human and AI users,” requiring cleaner navigation, API-first strategies, and optimization for agent functionality beyond visual presentation.

Commerce is shifting.

Stripe, Shopify, and OpenAI are building infrastructure for AI agents to complete purchases. The Agentic Commerce Protocol enables secure, agent-initiated transactions. Brands like URBN, Etsy, Glossier, and SKIMS are already implementing these systems.

Checkout is no longer a page. It’s an API endpoint. The agent researches, selects, and purchases on behalf of the user, who never visits your website at all.
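To make “checkout as an API endpoint” concrete, here is a hypothetical sketch of an agent-facing order handler. The payload shape, field names, and token format are illustrative assumptions, not the actual Agentic Commerce Protocol schema:

```python
# Hypothetical agent-facing checkout handler. Field names and the
# payment_token format are assumptions for illustration only.

REQUIRED_FIELDS = {"sku", "quantity", "payment_token", "shipping_address"}

def handle_agent_checkout(payload: dict) -> dict:
    """Validate an agent-submitted order; return a confirmation or a
    machine-readable error (agents need structured errors, not error pages)."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return {"status": "error", "missing_fields": sorted(missing)}
    if not isinstance(payload["quantity"], int) or payload["quantity"] < 1:
        return {"status": "error", "reason": "quantity must be a positive integer"}
    # A real integration would verify the payment token with the payment
    # provider before creating the order.
    return {
        "status": "confirmed",
        "order_id": "demo-001",  # placeholder; a real system generates this
        "sku": payload["sku"],
        "quantity": payload["quantity"],
    }
```

The design point is that every response is structured data an agent can branch on, which is exactly what a rendered checkout page fails to provide.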

What’s Coming In This Series

This article established the “why.” The rest of the series covers the “how”:

Part 2: Answer Engine Optimization dives into getting your content cited in AI responses. How AI systems parse content differently than search engines, the structure that gets cited, which schema markup matters, and how to measure your AI visibility.

Part 3: The Agentic Web Protocols explores MCP, A2A, NLWeb, and AGENTS.md, the standards powering the agentic web. These protocols are complementary, not competing, and together they form the infrastructure layer that enables everything else.

Part 4: How AI Agents See Your Website provides the implementation guide. How agents “see” websites, why semantic HTML matters more than ever, the role of accessibility standards, and what to tell your developers.

Part 5: Selling to AI covers agentic commerce. Stripe’s Agentic Commerce Suite, Shopify’s Universal Commerce Protocol, secure payment tokens, fraud detection for agent traffic, and how to get started.

Key Takeaways

  • The web is shifting from pages for humans to content for AI agents. Your website now serves two audiences, and optimizing for both is becoming necessary.
  • The evolution runs from SEO to AEO to GEO to AAIO. Each builds on the last: ranking, then citation, then inclusion, then enabling autonomous action.
  • December 2025 was the turning point. The Agentic AI Foundation launch established shared standards, moving agentic AI from experimentation to infrastructure.
  • Three levels matter: discovery, citation, and action. Being found, being referenced, and being usable by AI agents.
  • The business case is concrete. Agentic browsers are reaching billions of users. Commerce protocols are enabling agent-initiated purchases. Websites that work with agents capture this opportunity; those that don’t lose business to competitors.

Traditional SEO asked: “How do I rank on Google?” The new question is: “How do I become the answer, and how do I let AI complete transactions on my site without a human ever visiting?”

I’m writing this series because I believe most websites will get this wrong, and many already are. They’ll treat it as an SEO tweak or a CRO experiment when it’s an architectural shift.

The infrastructure is being built now. The standards are being established. The agents are already browsing.

The question is whether your website is ready for them.

This post was originally published on No Hacks.


Featured Image: Collagery/Shutterstock