AI Search in 2026: The 5 Article GEO & SEO Playbook For Modern Visibility via @sejournal, @contentful

In the SEO world, when we talk about how to structure content for AI search, we often default to structured data – Schema.org, JSON-LD, rich results, knowledge graph eligibility – the whole shooting match.

While that layer of markup is still useful in many scenarios, this isn’t another article about how to wrap your content in tags.

Structuring content isn’t the same as structured data

Instead, we’re going deeper into something more fundamental and arguably more important in the age of generative AI: How your content is actually structured on the page and how that influences what large language models (LLMs) extract, understand, and surface in AI-powered search results.

Structured data is optional. Structured writing and formatting are not.

If you want your content to show up in AI Overviews, Perplexity summaries, ChatGPT citations, or any of the increasingly common “direct answer” features driven by LLMs, the architecture of your content matters: Headings. Paragraphs. Lists. Order. Clarity. Consistency.

In this article, I’m unpacking how LLMs interpret content — and what you can do to make sure your message is not just crawled, but understood.

How LLMs Actually Interpret Web Content

Let’s start with the basics.

Unlike traditional search engine crawlers that rely heavily on markup, metadata, and link structures, LLMs interpret content differently.

They don’t scan a page the way a bot does. They ingest it, break it into tokens, and analyze the relationships between words, sentences, and concepts using attention mechanisms.

They’re not looking for a tag or a JSON-LD snippet to tell them what a page is about. They’re looking for semantic clarity: Does this content express a clear idea? Is it coherent? Does it answer a question directly?

LLMs like GPT-4 or Gemini analyze:

  • The order in which information is presented.
  • The hierarchy of concepts (which is why headings still matter).
  • Formatting cues like bullet points, tables, bolded summaries.
  • Redundancy and reinforcement, which help models determine what’s most important.

This is why poorly structured content – even if it’s keyword-rich and marked up with schema – can fail to show up in AI summaries, while a clear, well-formatted blog post without a single line of JSON-LD might get cited or paraphrased directly.

Why Structure Matters More Than Ever In AI Search

Traditional search was about ranking; AI search is about representation.

When a language model generates a response to a query, it’s pulling from many sources – often sentence by sentence, paragraph by paragraph.

It’s not retrieving a whole page and showing it. It’s building a new answer based on what it can understand.

What gets understood most reliably?

Content that is:

  • Segmented logically, so each part expresses one idea.
  • Consistent in tone and terminology.
  • Presented in a format that lends itself to quick parsing (think FAQs, how-to steps, definition-style intros).
  • Written with clarity, not cleverness.

AI search engines don’t need schema to pull a step-by-step answer from a blog post.

But they do need you to label your steps clearly, keep them together, and not bury them in long-winded prose or interrupt them with calls to action, pop-ups, or unrelated tangents.

Clean structure is now a ranking factor – not in the traditional SEO sense, but in the AI citation economy we’re entering.

What LLMs Look For When Parsing Content

Here’s what I’ve observed (both anecdotally and through testing across tools like Perplexity, ChatGPT Browse, Bing Copilot, and Google’s AI Overviews):

  • Clear Headings And Subheadings: LLMs use heading structure to understand hierarchy. Pages with proper H1–H2–H3 nesting are easier to parse than walls of text or div-heavy templates.
  • Short, Focused Paragraphs: Long paragraphs bury the lede. LLMs favor self-contained thoughts. Think one idea per paragraph.
  • Structured Formats (Lists, Tables, FAQs): If you want to get quoted, make it easy to lift your content. Bullets, tables, and Q&A formats are goldmines for answer engines.
  • Defined Topic Scope At The Top: Put your TL;DR early. Don’t make the model (or the user) scroll through 600 words of brand story before getting to the meat.
  • Semantic Cues In The Body: Words like “in summary,” “the most important,” “step 1,” and “common mistake” help LLMs identify relevance and structure. There’s a reason so much AI-generated content uses those “giveaway” phrases. It’s not because the model is lazy or formulaic. It’s because it actually knows how to structure information in a way that’s clear, digestible, and effective, which, frankly, is more than can be said for a lot of human writers.

A Real-World Example: Why My Own Article Didn’t Show Up

In December 2024, I wrote a piece about the relevance of schema in AI-first search.

It was structured for clarity, timely, and highly relevant to this conversation, but it didn’t show up in my research queries for this article (the one you are presently reading). The reason? I didn’t use the term “LLM” in the title or slug.

All of the articles returned in my search had “LLM” in the title. Mine said “AI Search” but didn’t mention LLMs explicitly.

You might assume that a large language model would understand “AI search” and “LLMs” are conceptually related – and it probably does – but understanding that two things are related and choosing what to return based on the prompt are two different things.

Where does the model get its retrieval logic? From the prompt. It interprets your question literally.

If you say, “Show me articles about LLMs using schema,” it will surface content that directly includes “LLMs” and “schema” – not necessarily content that’s adjacent, related, or semantically similar, especially when it has plenty to choose from that contains the words in the query (a.k.a. the prompt).

So, even though LLMs are smarter than traditional crawlers, retrieval is still rooted in surface-level cues.

This might sound suspiciously like keyword research still matters – and yes, it absolutely does. Not because LLMs are dumb, but because search behavior (even AI search) still depends on how humans phrase things.

The retrieval layer – the layer that decides what’s eligible to be summarized or cited – is still driven by surface-level language cues.

What Research Tells Us About Retrieval

Even recent academic work supports this layered view of retrieval.

A 2023 research paper by Doostmohammadi et al. found that simpler, keyword-matching techniques, like a method called BM25, often led to better results than approaches focused solely on semantic understanding.

The improvement was measured through a drop in perplexity, which tells us how confident or uncertain a language model is when predicting the next word.

In plain terms: Even in systems designed to be smart, clear and literal phrasing still made the answers better.
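To make the retrieval point concrete, here is a minimal sketch of BM25 scoring, the keyword-matching method the research above compares against semantic approaches. The example documents and query are illustrative, but they mirror the “LLM in the title” anecdote: only the document containing the literal query terms scores at all.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with classic BM25."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    q_terms = query.lower().split()
    # Document frequency: how many docs contain each query term.
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue  # term appears nowhere; contributes nothing
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

docs = [
    "How LLMs use schema markup in AI search",
    "A guide to AI search and structured data",
    "Baking sourdough bread at home",
]
print(bm25_scores("LLMs schema", docs))
```

The second title is conceptually adjacent (“AI search and structured data”), but because the literal tokens “LLMs” and “schema” are absent, BM25 gives it a zero, which is exactly the surface-level retrieval behavior described above.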

So, the lesson isn’t just to use the language they’ve been trained to recognize. The real lesson is: If you want your content to be found, understand how AI search works as a system – a chain of prompts, retrieval, and synthesis. Plus, make sure you’re aligned at the retrieval layer.

This isn’t about the limits of AI comprehension. It’s about the precision of retrieval.

Language models are incredibly capable of interpreting nuanced content, but when they’re acting as search agents, they still rely on the specificity of the queries they’re given.

That makes terminology, not just structure, a key part of being found.

How To Structure Content For AI Search

If you want to increase your odds of being cited, summarized, or quoted by AI-driven search engines, it’s time to think less like a writer and more like an information architect – and structure content for AI search accordingly.

That doesn’t mean sacrificing voice or insight, but it does mean presenting ideas in a format that makes them easy to extract, interpret, and reassemble.

Core Techniques For Structuring AI-Friendly Content

Here are some of the most effective structural tactics I recommend:

Use A Logical Heading Hierarchy

Structure your pages with a single clear H1 that sets the context, followed by H2s and H3s that nest logically beneath it.

LLMs, like human readers, rely on this hierarchy to understand the flow and relationship between concepts.

If every heading on your page is an H1, you’re signaling that everything is equally important, which means nothing stands out.

Good heading structure is not just semantic hygiene; it’s a blueprint for comprehension.
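To see why nesting matters mechanically, here is a small sketch using Python’s standard-library HTML parser to reconstruct a page outline from its headings — roughly the hierarchy signal a machine reader recovers. The sample markup is illustrative.

```python
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collect (level, text) pairs for h1-h6 tags to rebuild the outline."""

    def __init__(self):
        super().__init__()
        self._level = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self._level = int(tag[1])

    def handle_data(self, data):
        if self._level is not None and data.strip():
            self.outline.append((self._level, data.strip()))

    def handle_endtag(self, tag):
        if self._level is not None and tag == f"h{self._level}":
            self._level = None

html = """
<h1>Structuring Content for AI Search</h1>
<h2>Why Structure Matters</h2>
<h3>Headings as Hierarchy</h3>
<h2>Core Techniques</h2>
"""
parser = OutlineParser()
parser.feed(html)
for level, text in parser.outline:
    print("  " * (level - 1) + text)
```

Run against a page where every heading is an H1, the same parser produces a flat list with no indentation — a literal demonstration of “everything is equally important, so nothing stands out.”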

Keep Paragraphs Short And Self-Contained

Every paragraph should communicate one idea clearly.

Walls of text don’t just intimidate human readers; they also increase the likelihood that an AI model will extract the wrong part of the answer or skip your content altogether.

This is closely tied to readability metrics like the Flesch Reading Ease score, which rewards shorter sentences and simpler phrasing.

While it may pain those of us who enjoy a good, long, meandering sentence (myself included), clarity and segmentation help both humans and LLMs follow your train of thought without derailing.
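The Flesch Reading Ease score mentioned above is simple enough to sketch directly. This version uses a rough vowel-group heuristic for syllable counting (real implementations use dictionaries or more careful rules), so treat the absolute numbers as approximate; the comparison between short and meandering prose is the point.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words), with a crude syllable heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Approximate syllables as runs of vowels; at least one per word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

short = "Use short sentences. Keep one idea per line. Readers follow along."
long_ = ("Notwithstanding the aforementioned considerations, practitioners "
         "habitually construct interminable, meandering sentences that "
         "obfuscate meaning and exhaust readers unnecessarily.")
print(flesch_reading_ease(short) > flesch_reading_ease(long_))
```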

Use Lists, Tables, And Predictable Formats

If your content can be turned into a step-by-step guide, numbered list, comparison table, or bulleted breakdown, do it. AI summarizers love structure, and so do users.

Frontload Key Insights

Don’t save your best advice or most important definitions for the end.

LLMs tend to prioritize what appears early in the content. Give your thesis, definition, or takeaway up top, then expand on it.

Use Semantic Cues

Signal structure with phrasing like “Step 1,” “In summary,” “Key takeaway,” “Most common mistake,” and “To compare.”

These phrases help LLMs (and readers) identify the role each passage plays.

Avoid Noise

Interruptive pop-ups, modal windows, endless calls-to-action (CTAs), and disjointed carousels can pollute your content.

Even if the user closes them, they’re often still present in the Document Object Model (DOM), and they dilute what the LLM sees.

Think of your content like a transcript: What would it sound like if read aloud? If it’s hard to follow in that format, it might be hard for an LLM to follow, too.

The Role Of Schema: Still Useful, But Not A Magic Bullet

Let’s be clear: Structured data still has value. It helps search engines understand content, populate rich results, and disambiguate similar topics.

However, LLMs don’t require it to understand your content.

If your site is a semantic dumpster fire, schema might save you, but wouldn’t it be better to avoid building a dumpster fire in the first place?

Schema is a helpful boost, not a magic bullet. Prioritize clear structure and communication first, and use markup to reinforce – not rescue – your content.

How Schema Still Supports AI Understanding

That said, Google has recently confirmed at Search Central Live in Madrid that its LLM (Gemini), which powers AI Overviews, does leverage structured data to help understand content more effectively.

In fact, at the event, John Mueller recommended using structured data because it gives models clearer signals about intent and structure.

That doesn’t contradict the point; it reinforces it. If your content isn’t already structured and understandable, schema can help fill the gaps. It’s a crutch, not a cure.

Schema is a helpful boost, but not a substitute, for structure and clarity.

In AI-driven search environments, we’re seeing content without any structured data show up in citations and summaries because the core content was well-organized, well-written, and easily parsed.

In short:

  • Use schema when it helps clarify the intent or context.
  • Don’t rely on it to fix bad content or a disorganized layout.
  • Prioritize content quality and layout before markup.
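When markup does earn its place, it is small and can be generated alongside the content. A minimal sketch of a JSON-LD Article block follows; all values are placeholders, and Article is just one of many Schema.org types.

```python
import json

# A minimal JSON-LD Article snippet; every value here is a placeholder.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How To Structure Content For AI Search",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2025-01-15",
}

# Wrap it in the script tag that belongs in the page <head> or <body>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

Note that everything in the markup — headline, author, date — should already be visible in the page content itself; the JSON-LD only restates it in machine-friendly form, which is the “reinforce, not rescue” principle in practice.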

The future of content visibility is built on how well you communicate, not just how well you tag.

Conclusion: Structure For Meaning, Not Just For Machines

Optimizing for LLMs doesn’t mean chasing new tools or hacks. It means doubling down on what good communication has always required: clarity, coherence, and structure.

If you want to stay competitive, you’ll need to structure content for AI search just as carefully as you structure it for human readers.

The best-performing content in AI search isn’t necessarily the most optimized. It’s the most understandable. That means:

  • Anticipating how content will be interpreted, not just indexed.
  • Giving AI the framework it needs to extract your ideas.
  • Structuring pages for comprehension, not just compliance.
  • Anticipating and using the language your audience uses, because LLMs respond literally to prompts and retrieval depends on those exact terms being present.

As search shifts from links to language, we’re entering a new era of content design. One where meaning rises to the top, and the brands that structure for comprehension will rise right along with it.

Featured Image: Igor Link/Shutterstock

Survey: Publishers Expect Search Traffic To Fall Over 40% via @sejournal, @MattGSouthern

The Reuters Institute for the Study of Journalism has published its annual predictions report based on a survey of 280 senior media leaders across 51 countries and territories.

The report suggests publishers are preparing for two potential threats: generative AI tools, and creators who attract audiences with personality-led formats.

Note that the Reuters Institute survey reflects a strategic group of senior leaders. It’s not a representative sample of the entire industry.

What The Report Found

Search Traffic Is The Biggest Near-Term Concern

Survey respondents expect search engine traffic to decline by more than 40% over the next three years as AI-driven answers expand.

The report cites Chartbeat data showing aggregate Google Search traffic to hundreds of news sites has already started to dip. Lifestyle-focused publishers say they’ve been hit especially hard by Google’s AI Overviews rollout.

That comes on top of longer-running platform declines. The report notes referral traffic to news sites from Facebook fell 43% over the last three years, while referrals from X fell 46% over the same period.

Publishers Plan To Invest In Differentiation

In response to traffic pressure and AI summarization, publishers say they’ll invest more in original investigations, on-the-ground reporting, contextual analysis, and human stories.

Leaders surveyed say they plan to scale back service journalism and evergreen content, which many expect AI chatbots to commoditize.

Video & Off-Platform Distribution Rising

Publishers expect to invest more in video, including “watch tabs,” and more in audio formats such as podcasts. Text output is less of a priority.

On distribution, YouTube is the main off-platform channel cited in the report, alongside TikTok and Instagram.

Publishers are also trying to work out how to navigate distribution through AI platforms such as OpenAI’s ChatGPT, Google’s Gemini, and Perplexity.

Subscriptions Lead, Licensing Is Growing

For commercial publishers, paid content such as subscriptions and memberships is the top focus. There’s also renewed interest in native advertising and face-to-face events as publishers look for revenue beyond traditional display ads.

Publishers are also looking at licensing and other platform payments. The report notes interest in platform funding has nearly doubled over the last two years as AI companies began offering large deals.

Why This Matters

I’ve watched publishers cycle through traffic crises before. When Facebook’s algorithm changes hit in 2018, the industry scrambled, and eventually most publishers adjusted by leaning harder into search. Search was supposed to be the stable channel.

That assumption is what this report challenges. A projected decline of 40%+ over three years has become a planning number, affecting budgets, headcount, and content strategy.

The content mix change warrants attention. When 280 senior media leaders say they’re scaling back service journalism and evergreen content, it signals which pages they think will still drive traffic in an AI-summarized environment. Original reporting and analysis survive because chatbots can’t replicate them. Commodity information doesn’t, because it can be synthesized without a click.

The doubling of interest in licensing deals over two years is the other number that jumped out to me. When AI companies started writing checks, the conversation changed from “should we license” to “what’s our leverage.”

This report is useful as a benchmark for where the industry’s head is at, even if individual outcomes vary.

Looking Ahead

Traffic from search and AI aggregators is unlikely to disappear, but the terms of trade are still being negotiated.

That includes how citations work, what licensing looks like at scale, and whether revenue-sharing becomes a standard arrangement.


Featured Image: Roman Samborskyi/Shutterstock

SEO Is No Longer A Single Discipline via @sejournal, @DuaneForrester

Most people have a favorite coffee mug. You reach for it without thinking. It fits your hand. It does its job. For a long time, SEO felt like that mug. A defined craft, a repeatable routine, a discipline you could explain in a sentence. Crawl the site. Optimize the pages. Earn visibility. Somewhere along the way, that single mug turned into a cabinet full of cups. Each one different. Each one required – none of them optional anymore.

That shift did not happen because SEO got bloated or unfocused. It happened because discovery changed shape.

SEO did not become complex on its own. The environment around it fractured, multiplied, and layered itself. SEO stretched to meet it.

Image Credit: Duane Forrester

The SEO Core Still Exists

Despite everything that has changed, SEO still has a core. It is smaller than many people remember, but it is still essential.

This core is about access, clarity, and measurement. Search engines must be able to crawl content, understand it, and present it in a usable way. Google’s own SEO Starter Guide still frames these fundamentals clearly.

Crawl and indexing remain foundational. If content cannot be accessed or stored, nothing else matters. Robots.txt governance follows a formal standard, RFC 9309, which defines how crawlers interpret exclusion rules. This matters because robots.txt is guidance, not enforcement. Misuse can create accidental invisibility.
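The “guidance, not enforcement” distinction is easy to see in code: robots.txt only works because compliant crawlers choose to check it. Python’s standard library includes such a check. The rules and URLs below are illustrative.

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt rules directly from a string, as a compliant
# crawler would after fetching /robots.txt.
rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /drafts/
""".splitlines())

# A well-behaved bot asks before fetching; a rude one simply doesn't.
print(rp.can_fetch("MyBot", "https://example.com/drafts/post.html"))
print(rp.can_fetch("MyBot", "https://example.com/blog/post.html"))
```

This is also how accidental invisibility happens: a stray `Disallow: /` committed to production blocks every compliant crawler while doing nothing to stop non-compliant ones.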

Page experience is no longer optional. Core Web Vitals represent measurable user experience signals that Google incorporates into Search. The broader framework and measurement approach are documented on Web.dev.

Content architecture still matters. Pages must map cleanly to intent. Headings must signal structure. Internal links must express relationships. Structured data still plays a role in helping machines interpret content and enable eligible rich results today.

Measurement and diagnostics remain part of the job. Search Console, analytics, and validation tools still anchor decision-making for traditional search.

That is the SEO core. It is real work, and it is not shrinking. It is, however, no longer sufficient on its own.

This first ring out from the core is where SEO stops being a single lane.

Once the core is in place, modern SEO immediately runs into systems it does not fully control. This is where the real complexity starts to expand.

AI Search And Answer Engines

AI systems now sit between content and audience. They do not behave like traditional search engines. They summarize, recommend, and sometimes cite. Critically, they do not agree with each other.

In mid-2025, BrightEdge analyzed brand recommendations across ChatGPT, Google AI experiences, and other AI-driven interfaces and found that they disagreed for 62% of queries. Search Engine Land covered the same analysis and framed it as a warning for marketers assuming consistency across AI search experiences.

This introduces a new kind of SEO work. Rankings alone no longer describe visibility. Practitioners now track whether their brand appears in answers, which pages are cited when citations exist, and how often competitors are recommended instead.

This is not arbitrary. Retrieval-augmented generation exists precisely to ground AI responses in external sources and improve factual reliability. The original RAG paper outlines this architecture clearly.
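The RAG loop — retrieve relevant passages, then ground the generation step in them — can be sketched in a few lines. This is a deliberately tiny stand-in (term-overlap ranking instead of a real retriever, a prompt string instead of a model call); the passages and query are illustrative.

```python
def retrieve(query, passages, k=1):
    """Rank passages by query-term overlap: the 'R' in RAG,
    reduced to its simplest possible form."""
    q = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, passages):
    """Ground the generation step in the retrieved text."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

passages = [
    "Schema markup gives models clearer signals about intent.",
    "Core Web Vitals measure page experience.",
]
query = "how does schema markup help models"
top = retrieve(query, passages)
print(build_prompt(query, top))
```

Notice what the retrieval step rewards: passages whose wording overlaps the query. Content written in the terms people actually use is what makes it into the context window at all.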

That architectural choice pulls SEO into new territory. Content must be written so it can be extracted without losing meaning. Ambiguity becomes a liability. Sections must stand alone.

Chunk-Level Content Architecture

Pages are no longer the smallest competitive unit. Passages are. We’re often told not to focus on chunks for traditional search, but traditional search is no longer the only game in town, and once you look beyond it, you need to understand the role chunks play.

Modern retrieval systems often pull fragments of content, not entire documents. That forces SEOs to think in chunks. Each section needs a single job. Each answer needs to survive without surrounding context.

This changes how long-form content is written. It does not eliminate depth. It demands structure. We now live in a hybrid world where both layers of the system must be served. It means more work, but selecting one over the other? That’s a mistake at this point.
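A heading-scoped chunker shows what “each section needs a single job” means to a retrieval system. This sketch splits a markdown document at its headings, producing the standalone passages such systems typically index; the sample document is illustrative.

```python
import re

def chunk_by_headings(markdown_text):
    """Split a markdown document into heading-scoped chunks,
    the unit retrieval systems often pull from."""
    chunks = []
    current = {"heading": None, "body": []}
    for line in markdown_text.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            # A new heading closes the previous chunk.
            if current["heading"] or current["body"]:
                chunks.append(current)
            current = {"heading": m.group(2), "body": []}
        elif line.strip():
            current["body"].append(line.strip())
    chunks.append(current)
    return [{"heading": c["heading"], "text": " ".join(c["body"])}
            for c in chunks]

doc = """# Guide
Intro paragraph.
## Step 1: Audit headings
Check that H2s nest under the H1.
## Step 2: Tighten paragraphs
One idea per paragraph.
"""
for chunk in chunk_by_headings(doc):
    print(chunk["heading"], "->", chunk["text"])
```

If a step only makes sense with the three paragraphs above it, it fails this test: the chunk that gets retrieved won’t carry that context with it.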

Visual Search

Discovery increasingly starts with cameras. Google Lens allows users to search what they see, using images as queries. Pinterest Lens and other visual tools follow the same model.

This forces new responsibilities. Image libraries become strategic assets. Alt text stops being a compliance task and becomes a retrieval signal. Product imagery must support recognition, not just aesthetics.

Google’s product structured data documentation explicitly notes that product information can surface across Search, Images, and Lens experiences.

Audio And Conversational Search

Voice changes how people ask questions and what kind of answers they accept. Queries become more conversational, more situational, and more task-focused.

Industry research compiled by Marketing LTB shows that a meaningful portion of users now rely on voice input, with multiple surveys indicating that roughly one in four to one in three people use voice search, particularly on mobile devices and smart assistants.

That matters less as a headline number and more for what it does to query shape. Spoken queries tend to be longer, more natural, and framed as requests rather than keywords. Users expect direct, complete answers, not a list of links.

And the biggest search platform is reinforcing this behavior. Google has begun rolling out conversational voice experiences directly inside Search, allowing users to ask follow-up questions in real time using speech. The Verge covered Google’s launch of Search Live, which turns search into an ongoing dialogue rather than a single query-response interaction.

For SEO practitioners, this expands the work. It pulls them into spoken-language modeling, answer-first content construction, and situational phrasing that works when read aloud. Pages that perform well in voice and conversational contexts tend to be clear, concise, and structurally explicit, because ambiguity collapses quickly when an answer is spoken rather than scanned. Still think traditional SEO approaches are all you need?

Personalization And Context

There is no single SERP. Google explains that search results vary based on factors including personalization, language, and location.

For practitioners, this means rankings become samples, not truths. Monitoring shifts toward trends, segments, and outcome-based signals rather than position reports.

Image Credit: Duane Forrester

The third ring is where complexity becomes really visible.

These are not just SEO tasks. The things in this layer are entire disciplines that SEO now interfaces with.

Brand Protection And Retrieval In An LLM World

Brand protection used to be a communications problem. Today, it is also a retrieval problem.

Large language models do not simply repeat press releases or corporate messaging. They retrieve information from a mixture of training data, indexed content, and real-time sources, then synthesize an answer that feels authoritative, whether it is accurate or not.

This creates a new class of risk. A brand can be well-known, well-funded, and well-covered by media, yet still be misrepresented, outdated, or absent in AI-generated answers.

Unlike traditional search, there is no single ranking to defend. Different AI systems can surface different descriptions, different competitors, or different recommendations for the same intent. That BrightEdge analysis showing 62% disagreement in brand recommendations across AI platforms illustrates how unstable this layer can be.

This is where SEO is pulled into brand protection work.

SEO practitioners already operate at the intersection of machine interpretation and human intent. In an LLM environment, that skill set extends naturally into brand retrieval monitoring. This includes tracking whether a brand appears in AI answers, how it is described, which sources are cited when citations exist, and whether outdated or incorrect narratives persist.

PR and brand teams are not historically equipped to do this work. Media monitoring tools track mentions, sentiment, and coverage. They do not track how an AI model synthesizes a brand narrative, nor how retrieval changes over time.

As a result, SEO increasingly becomes the connective tissue between brand, PR, and the machine layer.

This does not mean SEO owns brand. It means SEO helps ensure that the content machines retrieve about a brand is accurate, current, and structured in ways retrieval systems can use. It means working with brand teams to align authoritative sources, consistent terminology, and verifiable claims. It means working with PR teams to understand which coverage reinforces trust signals that machines recognize, not just headlines humans read.

In practice, brand protection in AI search becomes a shared responsibility, with SEO providing the technical and retrieval lens that brand and PR teams lack, and brand and PR providing the narrative discipline SEO cannot manufacture alone.

This is not optional work. As AI systems increasingly act as intermediaries between brands and audiences, the question is no longer “how do we rank?” It is “how are we being represented when no one clicks at all?”

Branding And Narrative Systems

Branding is not a subset of SEO. It is a discipline that includes voice, identity, reputation, executive presence, and crisis response.

SEO intersects with branding because AI systems increasingly behave like advisors, recommending, summarizing, and implicitly judging.

Trust matters more in that environment. The Edelman Trust Barometer documents declining trust across institutions and brands, reinforcing why authority can no longer be assumed. Trust diminishes, and consumer behavior changes. The equation is no longer brand = X, therefore X = brand.

SEO practitioners now care about sourcing, claims, and consistency because brand perception can now influence whether content is surfaced or ignored.

UX And Task Completion

Clicks are no longer the win. Completion is.

Though the research is older, it remains applicable. Nielsen Norman Group defines success rate as a core usability metric, measuring whether users can complete tasks. They also outline usability metrics tied directly to task efficiency and error reduction.

When AI and zero-click experiences compress opportunities, the pages that do earn attention must deliver. SEO now has a stake in friction reduction, clarity, and task flow. CRO (conversion rate optimization) has never been more important, but how you define “conversion” has also never been broader.

Paid Media, Lifecycle, And Attribution

Discovery spans organic, AI answers, video feeds, and paid placements. Measurement follows the same fragmentation.

Google Analytics defines attribution as assigning credit across touchpoints in the path to conversion.

SEO practitioners are pulled into cross-channel conversations not because they want to own them, but because outcomes are shared. Organic assists paid. Email creates branded demand. Paid fills gaps while organic matures.

Generational And Situational Behavior

Audience behavior is not uniform. Pew Research Center’s 2025 research on teens, social media, and AI chatbots shows how discovery and engagement increasingly differ across age groups, platforms, and interaction modes, including traditional search, social feeds, and AI interfaces.

This shapes format expectations. Discovery may happen in video-first environments. Conversion may happen on the web. Sometimes the web is skipped entirely.

What This Means For SEO Practitioners

SEO did not become more complex because practitioners lost discipline or focus; it became more complex because discovery fractured. The work expanded because the interfaces expanded. The inputs multiplied. The outputs stopped behaving consistently.

In that environment, SEO stopped being a function you execute and became a role you play inside a system you do not fully control, and that distinction matters.

Much of the anxiety practitioners feel right now comes from being evaluated as if SEO were still a closed loop. Rankings up or down. Traffic in or out. Conversions attributed cleanly. Those models assume a world where discovery happens in one place and outcomes follow a predictable path.

That is no longer the world we’re operating in.

Today, a user might encounter a brand inside an AI answer, validate it through a video platform, compare it through reviews surfaced in search, and convert days later through a branded query or a direct visit. In many cases, no single click tells the story. In others, there is no click at all.

This is why SEO keeps getting pulled into UX conversations, brand discussions, PR alignment, attribution debates, and content format decisions. Not because SEO owns those disciplines, but because SEO sits closest to the fault lines where discovery breaks or holds.

This is also why trying to “draw a box” around SEO keeps failing.

You can still define an SEO core, and you should. Crawlability, performance, content architecture, structured data, and measurement remain non-negotiable. But pretending the job ends there creates a gap between responsibility and reality. When visibility drops, or when AI answers misrepresent a brand, or when traffic declines despite strong fundamentals, that gap becomes painfully visible.

What’s changed is not the importance of SEO, but the nature of its influence.

Modern SEO operates as an integration discipline. It connects systems that were never designed to work together. It translates between machines and humans, between intent and interface, between brand narrative and retrieval logic. It absorbs volatility from platforms so organizations don’t have to feel it all at once.

That does not mean every SEO must take on every cup in the cabinet. It does mean understanding what those cups contain, which ones you own, which ones you influence, and which ones you simply need to account for when explaining outcomes.

The cabinet is already there, and you can choose to keep reaching for a single familiar mug and accept increasing unpredictability. Or you can open the cabinet deliberately, understand what’s inside, and decide how much of the expanded role you’re willing to take on.

Either choice is valid, but pretending everything still fits in one cup is no longer an option.

This post was originally published on Duane Forrester Decodes.


Featured Image: Master1305/Shutterstock

Building A Brand Is Not A Strategy, It Is A Starting Point via @sejournal, @TaylorDanRW

“Build a brand” has become one of the most repeated phrases in SEO over the past year. It is offered as both diagnosis and cure. If traffic is declining, build a brand. If large language models are not citing you, build a brand. If organic performance is unstable, build a brand.

The problem is not that this advice is wrong. The problem is that it is incomplete, and for many SEOs, it is not actionable.

A large proportion of people working in SEO today have developed in an environment that rewarded channel depth rather than marketing breadth. They understand crawling, indexing, content templates, internal linking, and ranking systems extremely well. What they have often not been trained in is how demand is created, how brands are formed in the mind, or how different marketing channels reinforce one another over time.

So, when the instruction becomes “build a brand,” the obvious question follows. What does that actually mean in practice, and what happens after you say the words?

SEO Is Not A Direct Demand Generator

Search has always been a demand capture channel rather than a demand creation channel. SEO does not usually make someone want something they did not already want. It places a brand in front of existing intent and attempts to win preference at the moment of consideration.

What SEO can do very effectively is increase mental availability. By being visible across a wide range of non-branded queries, a website creates repeated brand touchpoints. Over time, those touchpoints can contribute to familiarity, preference, and eventually loyalty.

The important part of that sentence is “over time.”

Affinity and loyalty are not short-term outcomes. They are built through repeated exposure, consistency of messaging, and relevance across different contexts. SEO can support this process, but it cannot compress it. No amount of optimization can turn visibility into trust overnight.

AI Has Changed The Pressure, Not The Fundamentals

AI has introduced new technical and behavioral challenges, but it has also created urgency at the executive level. Boards and leadership teams see both risk and opportunity, and the result is pressure. Pressure to act quickly, to be visible in new surfaces, and to avoid being left behind.

In reality, this is one of the most significant visibility opportunities since the mass adoption of social media. But like social media, it rewards those who understand distribution, reinforcement, and timing, not just production.

Where Content And Digital PR Actually Fit

Content and digital PR are often positioned as the vehicles for brand building in search. That framing is not wrong, but it is frequently too vague to be useful.

Google has been clear, including in recent Search Central discussions, that strong technical foundations still matter. Good SEO is a prerequisite to performance, not a nice-to-have. Content and digital PR sit within that system because they create the signals that justify deeper crawling, more frequent discovery, and sustained visibility. Both content and digital PR can be dissected further based on tactical objectives, but at the core, the objective is the same.

Search demand does not appear out of nowhere. It grows when topics are discussed, linked, cited, and repeated across the web. Digital PR contributes to this by placing ideas and assets into wider ecosystems. Content supports it by giving those ideas a constant home that search engines can understand and return to users.

This is not brand building in the abstract sense; it is visibility building.

Strong Visibility Content Accelerates Brand Building

Well-executed SEO content plays a critical role in brand building precisely because it operates at the point of repeated exposure. When a brand consistently appears for high-intent, non-branded queries, it earns familiarity before it ever earns loyalty.

Visibility-led content does not need to be overtly promotional to do this work. In many cases, its impact is stronger when it is practical, authoritative, and clearly written for the user rather than for the brand. Over time, this consistency creates an association between the problem space and the brand itself.

This is where many brand discussions lose precision. Brand is not only shaped by creative campaigns or opinion pieces. It is shaped by whether a brand reliably shows up with useful answers when someone is trying to understand a topic, solve a problem, or make a decision.

Strong SEO content compounds over time, and each ranking page reinforces the others. One example is work I did with Cloudflare back in mid-2017: a content hub, positioned as a “learning center,” that we developed and rolled out one section at a time. It has compounded over the years to millions of organic visits and has collected over 30,000 backlinks.

Image from author, January 2026

Each impression adds to mental availability, and each return visit subtly shifts perception from unfamiliar to known. This is slow work, but it is measurable and durable; it builds signals over time (including through Chrome usage) and, in turn, begins to feed its own growth.

In this sense, SEO content is not separate from brand building. It is one of the few channels where brand perception can be shaped at scale, repeatedly, and in moments of genuine user need.

Thought Leadership Without Readership Is A Vanity Project

Thought leadership content has real value, but only under specific conditions. It needs an audience, a distribution strategy, and a feedback loop.

One of the most common patterns seen over the years is organizations investing heavily in senior-led opinion pieces, vision statements, or industry commentary, and then assuming impact by default.

When performance is examined properly, using analytics platforms or marketing automation data, it often becomes clear that very few people are actually reading the content.

If nobody is consuming it, it is not thought leadership. It is publishing for internal reassurance.

This is not an argument against opinion-led content. It is an argument for accountability. Content should earn its place by contributing to visibility, engagement, or downstream commercial outcomes, even if those outcomes sit higher in the funnel.

That requires measurement beyond pageviews. It requires understanding how content is discovered, how it is referenced elsewhere, how it supports other assets, and whether it creates repeat exposure over time.

Balancing Brand And Search Visibility

The current challenge for SEOs is not choosing between brand building and visibility building. It is learning how to balance the two without confusing them.

Brand is the outcome of repeated, coherent experiences. Visibility is the mechanism that makes those experiences possible at scale. You cannot shortcut one with the other, and you cannot treat them as interchangeable.

For practitioners who have grown up inside SEO, this means expanding beyond the channel without abandoning its discipline. It means understanding distribution as well as creation, signals as well as stories, and measurement as well as messaging.

The future does not belong to those who simply declare themselves a brand. It belongs to those who understand how visibility compounds, how trust is earned gradually, and how SEO fits into a much wider system of influence.

Building a brand is not the answer. It is the work that begins once the question has finally been asked properly.


What Google SERPs Will Reward in 2026 [Webinar] via @sejournal, @lorenbaker

The Changes, Features & Signals Driving Organic Traffic Next Year

Google’s search results are evolving faster than most SEO strategies can adapt.

AI Overviews are expanding into new keyword and intent types, AI Mode is reshaping how results are displayed, and ongoing experimentation with SERP layouts is changing how users interact with search altogether. For SEO leaders, the challenge is no longer keeping up with updates but understanding which changes actually impact organic traffic.

Join Tom Capper, Senior Search Scientist at STAT Search Analytics, for a data-backed look at how Google SERPs are shifting in 2026 and where real organic opportunities still exist. Drawing from STAT’s extensive repository of daily SERP data, this session cuts through speculation to show which features and keywords are worth prioritizing now.

What You’ll Learn

  • Which SERP features deliver the highest click potential in 2026
  • How AI Mode features are showing up, and which initiatives to prioritize
  • The keyword and topic opportunities that still drive organic traffic next year

Why Attend?

This webinar offers a clear, evidence-based view of how Google SERPs are changing and what those changes mean for SEO strategy. You will gain practical insights to refine keyword targeting, focus on the right SERP features, and build an organic search approach grounded in real performance data for 2026.

Register now to understand the SERP shifts shaping organic traffic in 2026.

🛑 Can’t make it live? Register anyway and we’ll send you the on-demand recording after the event.

SEO in 2026: Key predictions from Yoast experts

If there’s one takeaway as we look toward SEO in 2026, it’s that visibility is no longer just about ranking pages, but about being understood by increasingly selective AI-driven systems. In 2025, SEO proved it was not disappearing, but evolving, as search engines leaned more heavily on structure, authority, and trust to interpret content beyond the click. In this article, we share SEO predictions for 2026 from Yoast SEO experts, Alex Moss and Carolyn Shelby, highlighting the shifts that will shape how brands earn visibility across search and AI-powered discovery experiences.

Key takeaways

  • In 2026, SEO focuses on visibility defined by clarity, authority, and trust rather than just page rankings
  • Structured data becomes essential for eligibility in AI-driven search and shopping experiences
  • Editorial quality must meet machine readability standards, as AI evaluates content based on structure and clarity
  • Rankings remain important as indicators of authority, but visibility now also includes citations and brand sentiment
  • Brands should align their SEO strategies with social presence and aim for consistency across all platforms to enhance visibility


A brief recap of SEO in 2025: what actually changed?

2025 marked a clear shift in how SEO works. Visibility stopped being defined purely by pages and rankings and began to be shaped by how well search engines and AI systems could interpret content, brands, and intent across multiple surfaces. AI-generated summaries, richer SERP features, and alternative discovery experiences made it harder to rely solely on traditional metrics, while signals such as authority, trust, and structure played a larger role in determining what was surfaced and reused.

As we outlined in our SEO in 2025 wrap-up, the brands that performed best were those with strong foundations: clear content, credible signals, and structured information that search systems could confidently understand. That shift set the direction for what was to come next.

By the end of 2025, it was clear that SEO had entered a new phase, one shaped by interpretation rather than isolated optimizations. The SEO predictions for 2026 from Yoast experts build directly on this evolution.

2026 SEO predictions by Yoast experts

The SEO predictions for 2026 shared here come from our very own Principal SEOs at Yoast, Alex Moss and Carolyn Shelby. Built on the lessons SEO revealed in 2025, these predictions focus less on reacting to individual updates and more on how search and AI systems are evolving at a foundational level, and what that means for sustainable visibility going forward.

TL;DR

SEO in 2026 is about understanding how signals such as structure, authority, clarity, and trust are now interpreted across search engines, AI-powered experiences, and discovery platforms. Each prediction below explains what is changing, why it matters, and how brands can practically adapt in the coming year.

Prediction 1: Structured data shifts from ranking enhancer to retrieval qualifier

In 2026, structured data will no longer be a competitive advantage; it will become a baseline requirement. Search engines and AI systems increasingly rely on structured data as a layer of eligibility to determine whether content, products, and entities can be confidently retrieved, compared, or surfaced in AI-powered experiences.

For ecommerce brands, this shift is especially significant. Product information such as pricing, availability, shipping details, and merchant data is now critical for visibility in AI-driven shopping agents and comparison interfaces. At the enterprise level, the move toward canonical identifiers reflects a growing need to avoid misattribution and data decay across systems that reuse information at scale.

What this means in practice:

Brands without clean, comprehensive entity and product data will not rank lower. They will simply not appear in AI-driven shopping and comparison flows at all.

Also read: Optimizing ecommerce product variations for SEO and conversions

How to act on this:

Treat structured data as part of your SEO foundation, not an enhancement. Tools like Yoast SEO help standardize structured data implementation. The plugin’s structured data features make it easier to generate rich, meaningful schema markup, helping search engines better understand your site and giving you control over how your content is described.
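To make “structured data as foundation” concrete, here is a minimal sketch of Schema.org Product markup assembled in Python. The product name, SKU, and price are invented for illustration; on a live page, the serialized JSON would sit inside a script tag of type application/ld+json.

```python
import json

# A minimal sketch of Schema.org Product markup in JSON-LD form.
# The product name, SKU, price, and availability are hypothetical examples.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",  # hypothetical product
    "sku": "TRS-001",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized, this is what would be embedded on the product page.
print(json.dumps(product_jsonld, indent=2))
```

The point of the sketch is the completeness of the offer data: price, currency, and availability are exactly the fields AI-driven shopping and comparison flows need to retrieve with confidence.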


Prediction 2: Agentic commerce becomes a visibility battleground, not a checkout feature

Agentic commerce marks a shift in how users discover and choose brands. Instead of browsing, comparing, and transacting manually, users increasingly rely on AI-driven agents to recommend, reorder, or select products and services on their behalf. In this environment, visibility is established before a checkout ever happens, often without a traditional search query.

This shift is becoming more concrete as search and commerce platforms move toward standardized ways for agents to understand and transact with merchants. Recent developments around agentic commerce protocols and Universal Commerce Protocol (UCP) highlight how AI systems are being designed to access product, pricing, availability, and merchant information more directly. As a result, platforms such as Shopify, Stripe, and WooCommerce are no longer just infrastructure. They increasingly act as distribution layers, where agent compatibility influences which brands are surfaced, recommended, or selected.

What this means in practice:

In 2026, SEO teams will be accountable for agent readiness in much the same way they were once accountable for mobile-first readiness. If agents cannot consistently interpret your brand, product data, or availability, they are more likely to default to competitors that they can understand with greater confidence.

How to act on this:

Focus on making your brand legible to automated decision systems. Ensure product information, pricing, availability, and supporting metadata are clear, structured, and consistent across your site and feeds. This is not about optimizing for a single platform or protocol, but about reducing ambiguity so AI agents can accurately interpret and act on your information across emerging agent-driven discovery and commerce experiences.
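One way to act on agent readiness is to treat consistency itself as something you can test. The sketch below is illustrative (the field names and product records are invented): it flags places where on-page data and a product feed disagree, so the mismatch is fixed before an agent has to guess.

```python
# Sketch: flag mismatches between on-page structured data and a product
# feed before an AI agent sees them. Field names and records are hypothetical.
REQUIRED_FIELDS = ("name", "price", "currency", "availability")

def find_mismatches(page_data: dict, feed_data: dict) -> list[str]:
    """Return the required fields that are missing or disagree."""
    problems = []
    for field in REQUIRED_FIELDS:
        page_val, feed_val = page_data.get(field), feed_data.get(field)
        if page_val is None or feed_val is None:
            problems.append(f"{field}: missing")
        elif page_val != feed_val:
            problems.append(f"{field}: page={page_val!r} feed={feed_val!r}")
    return problems

page = {"name": "Example Shoe", "price": "89.99", "currency": "USD", "availability": "InStock"}
feed = {"name": "Example Shoe", "price": "94.99", "currency": "USD"}

# Flags the price disagreement and the missing availability field.
print(find_mismatches(page, feed))
```

A check like this is deliberately simple; the value is running it routinely across every surface an agent might read, not the sophistication of the comparison.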

Prediction 3: Editorial quality becomes a machine readability requirement

In 2026, editorial quality is no longer judged only by human readers. AI systems increasingly evaluate content based on how efficiently it can be parsed, summarized, cited, and reused. Verbosity, fluff, and circular explanations do not fail editorially. They fail functionally.

Content that is concise, clearly structured, and well-attributed has higher chances of performing well. Headings, lists, definitions, and tables directly influence how information is chunked and reused across AI-generated summaries and search experiences.
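The chunking behavior described above can be sketched in code. This is an illustrative simplification, not any retrieval system’s actual pipeline: it splits a Markdown-style page into heading-led sections, the kind of self-contained unit that AI summaries and citations tend to reuse. Content whose headings cleanly delimit one idea each survives this split intact; content that sprawls across headings does not.

```python
import re

# Simplified sketch of heading-based chunking. The rule here (one chunk per
# heading) is an assumption for illustration, not a vendor's real pipeline.
def chunk_by_headings(markdown: str) -> list[dict]:
    chunks, current = [], {"heading": "", "body": []}
    for line in markdown.splitlines():
        if re.match(r"^#{1,6}\s", line):  # a new heading starts a new chunk
            if current["heading"] or current["body"]:
                chunks.append(current)
            current = {"heading": line.lstrip("# ").strip(), "body": []}
        elif line.strip():
            current["body"].append(line.strip())
    if current["heading"] or current["body"]:
        chunks.append(current)
    return chunks

sample_page = "## What is CDN caching?\nA CDN stores copies close to users.\n## Why it matters\nIt cuts latency."
for chunk in chunk_by_headings(sample_page):
    print(chunk["heading"], "->", " ".join(chunk["body"]))
```

Run against the sample, each heading yields one self-contained chunk that can be summarized without the rest of the page, which is exactly the property the paragraph above describes.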

Must read: Why is summarizing essential for modern content?

What this means in practice:

“Helpful content” is being held to higher editorial standards. Content that cannot be summarized cleanly without losing meaning becomes less useful to AI systems, even if it remains readable to human audiences.

How to act on this:

Make editorial quality measurable and machine-actionable. Use tools that help you align content with modern discoverability requirements. Yoast SEO Premium’s AI features (AI Generate, AI Optimize, and AI Summarize) help you assess and improve how content is structured and optimized, supporting both search engines and AI systems in understanding your intent.

Prediction 4: Rankings still matter, but as training signals, not endpoints

Despite ongoing speculation, rankings do not disappear in 2026. Instead, their role changes. AI agents and search systems continue to rely on top-ranked, trusted pages to understand authority, relevance, and consensus within a topic.

While rankings are no longer the final KPI, abandoning them entirely creates blind spots in understanding why certain brands are included or ignored in AI-driven experiences.

What this means in practice:

Teams that stop tracking rankings altogether risk losing insight into how authority is established and reinforced across search and AI systems.

How to act on this:

Continue to use rankings as diagnostic signals, but don’t treat them as the sole indicator of success in 2026. Alongside traditional performance metrics for SEO in 2026, look at how often your brand is mentioned, cited, or summarized in AI-generated answers and recommendations.

Tools like Yoast AI Brand Insights, available as part of Yoast SEO AI+, help surface these broader visibility signals by showing how your brand appears across AI platforms, including sentiment, citation patterns, and competitive context.


Prediction 5: Brand sentiment becomes a core visibility signal

Brand sentiment increasingly influences how search engines and AI systems assess credibility and trust. Mentions, whether linked or unlinked, contribute to a broader understanding of how a brand is perceived across the web. AI systems synthesize signals from reviews, forums, social platforms, media coverage, and knowledge bases to form a composite view of legitimacy and expertise.

What makes this shift more impactful is amplification. Inconsistent messaging or negative sentiment is not smoothed out over time. Instead, it becomes more apparent when systems attempt to summarize, compare, or recommend brands across search and AI-driven experiences.

What this means in practice:

SEO, brand, PR, and social teams increasingly influence the same visibility signals. When these efforts are misaligned, credibility weakens. When they reinforce one another, trust becomes easier for systems to establish and maintain.

How to act on this:

Focus on consistency across owned, earned, and shared channels. Pay attention not only to where your brand ranks, but also to how it is discussed, described, and contextualized across various platforms. As discovery expands beyond traditional search results, reputation and narrative coherence become essential inputs into how brands are surfaced and understood.

Prediction 6: Multimodal optimization becomes baseline, not optional

Search behavior is no longer text-first. Images, video, audio, and transcripts now function as retrievable knowledge objects that feed both traditional search and AI-powered experiences. In particular, video platforms continue to influence how expertise and authority are understood at scale.

Platforms like YouTube function not only as discovery engines, but also as training corpora for AI systems learning how to interpret topics, brands, and creators.

What this means in practice:

Brands with strong written content but weak visual or video assets may appear incomplete or “thin” to AI systems, even if their articles are well-optimized.

How to act on this:

Treat multimodal content as part of your SEO foundation. Support written content with relevant visuals, video, and transcripts. Clear structure and readability remain essential, and tools like Yoast SEO help ensure your core content remains accessible and well-organized as it is reused across formats.

Prediction 7: Social platforms become secondary search indexes

Discovery will increasingly happen outside traditional search engines. Platforms such as TikTok, LinkedIn, Reddit, and niche communities now act as secondary search indexes where users validate expertise and intent.

AI systems reference these platforms to verify whether a brand’s claims, expertise, and messaging are substantiated in public discourse.

What this means in practice:

Presence alone is not enough. Inconsistent or unclear messaging across platforms weakens trust signals, while focused, repeatable narratives reinforce authority.

How to act on this:

Align your SEO strategy with social and community visibility to enhance your online presence. Ensure that your expertise, terminology, and positioning remain consistent across all discussions about your brand.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Prediction 8: Email reasserts itself as the most controllable growth channel

As discovery fragments and platforms increasingly gate access to audiences, email regains importance as a high-signal, low-distortion channel. Unlike search or social platforms, email offers direct access to users without algorithmic mediation.

In 2026, email plays a supporting role in reinforcing authority, engagement, and intent signals, especially as AI systems evaluate how audiences interact with trusted sources over time.

What this means in practice:

Brands that underinvest in email become overly dependent on platforms they do not control, which increases volatility and reduces long-term resilience.

How to act on this:

Focus on relevance over volume. Segment audiences, align content with intent, and use email to reinforce expertise and trust, not just drive clicks.

Prediction 9: Authority outweighs freshness for most non-news queries

For non-news content, AI systems increasingly prioritize credible, historically consistent sources over frequent updates or constant publishing. Freshness still matters, but only when it meaningfully improves accuracy or relevance.

Long-standing domains with coherent narratives and well-maintained content benefit, provided their foundations remain clean and trustworthy.

What this means in practice:

Scaled/programmatic content strategies lose effectiveness. Publishing frequently without maintaining quality or consistency introduces noise rather than value.

How to act on this:

Invest in maintaining and improving existing content. Update thoughtfully, reinforce expertise, and ensure that your most important pages remain accurate, structured, and authoritative.

Prediction 10: SEO teams evolve into visibility and narrative stewards

In 2026, SEO will extend far beyond search engines. SEO teams are increasingly influencing how brands are perceived by both humans and machines across search, AI-generated answers, and discovery platforms.

Success is measured not by traffic alone, but also by inclusion, citation, and trust. SEO becomes a strategic function that shapes how a brand is represented and understood.

What this means in practice:

SEO teams that focus solely on production or technical fixes risk losing influence as visibility becomes a cross-channel concern.

How to act on this:

Shift focus toward clarity, consistency, and long-term trust. The most effective teams help define how a brand is understood, not just how it ranks.

What SEO is no longer about in 2026 (misconceptions to discard)

As SEO evolves in 2026, many long-standing assumptions no longer reflect how search engines and AI-driven systems actually determine visibility. The table below contrasts common SEO myths with the realities shaped by recent changes and expert insights from Yoast.

Diminishing relevance vs. what actually matters in 2026:

  • SEO is mainly about ranking pages → Rankings still matter, but they serve as signals for authority and relevance rather than as the final measure of visibility
  • Structured data is optional or a ranking boost → Structured data is now a baseline requirement for eligibility in AI-driven search, shopping, and comparison experiences
  • Publishing more content leads to better performance → Authority, clarity, and maintenance of fewer strong assets outperform high-volume publishing
  • Editorial quality is subjective → Content quality is increasingly evaluated by machines based on structure, clarity, and reusability
  • Brand reputation is a PR concern, not an SEO one → Brand sentiment directly influences how AI systems interpret, trust, and recommend brands
  • Search is still primarily text-based → Images, video, audio, and transcripts are now core retrievable knowledge objects
  • SEO can be measured only through traffic → Visibility spans AI answers, social platforms, agents, and citations, requiring broader performance signals

Looking ahead: what will shape SEO in 2026

The focus is no longer on isolated tactics or short-term wins, but on building visibility systems that search engines and AI platforms can reliably understand, trust, and reuse.

Clarity and interpretability matter more than clever optimization. Content, products, and brand narratives need to be easy for machines to interpret without ambiguity. Structured data has become foundational, not optional, determining whether brands are eligible to appear in AI-powered shopping, comparison, and answer-driven experiences.

Authority is built over time, not manufactured at scale. Search and AI systems increasingly favor sources with consistent, well-maintained narratives over those chasing volume. Visibility also extends beyond the SERP, spanning AI-generated answers, citations, recommendations, and cross-platform mentions, making it essential to look beyond traffic as the sole measure of success.

Finally, SEO in 2026 demands alignment. Brand, content, product, and platform signals all contribute to how systems interpret trust and relevance.

Search Marketing’s Insight Gap: When Automation Replaces Understanding via @sejournal, @coreydmorris

Automation is a part of our daily lives in marketing. If you’re in a leadership role or oversee automation in some capacity, you’re hearing about it from your team doing the day-to-day work, from peers in your industry, or through your own exploration.

Within search marketing, it has helped to greatly scale efforts as well as to bring new efficiencies, whether those are in our own processes or built into the platforms we use.

In just a few short years, automated bidding strategies, AI-generated content, AI-driven research, and platform-generated “insights” have changed the way we work, including the tools we use, and many of our expectations for how we do search marketing and digital marketing in a broader sense.

With all of this automation and these new ways of getting things done, a gap has emerged. I’ll call it an “insights gap”: teams can see performance changes but struggle to explain why. For marketing leaders, this can be serious, eroding confidence in decision-making when outcomes fall short of what was planned, projected, or desired.

No one at a leadership or implementation level likes to have a non-answer or mystery that can’t be solved when real leads or sales dollars are at stake.

Here’s the problem: at this point, it is a leadership challenge, not a technology issue. Automation itself isn’t the problem; the lack of strategic interpretation is.

Now, yes, search volatility is involved. It amplifies the problem through algorithm updates, SERP changes, AI Overviews, and shifting user behavior. The automated systems we have react, but they don’t necessarily contextualize.

With stakeholder expectations rising as well, we can’t get by with just charts, graphs, and data tables. We have to find the insights, contextualize them, and demonstrate value. This is the impact-versus-activity contrast that has been around forever, but it is amplified by automation.

If we lean too far into automation and AI and don’t get the expected marketing and business outcomes, we’re likely left with weaker strategic muscles and an over-dependence on AI and automation tools and platforms. A key to fixing the problem is keeping knowledge institutional rather than platform-specific (or locked inside the AI “brains”).

How Marketing Leaders Can Close The Insight Gap

1. Reinforce Strategy In Search Marketing Campaigns & Efforts

Efficiencies gained in execution should be celebrated. Tasks that were manual, done with expensive software, or not done at all just a few years ago can be done in an instant now. The hard and soft cost savings shouldn’t be overlooked.

However, we need to be clear in separating the executional efficiencies from strategic aspects and intent.

Every automated system and process needs to support a documented objective, so we’re not just “doing” things; we’re quantifying them and connecting them to our overall strategy.

2. Build Human Review Into Automated Systems & Processes

A longstanding challenge with search marketing is that it often doesn’t have a clearly defined ending point. It is ongoing and includes iterative optimization processes. We look to the past to inform decisions for now and going forward, but we often don’t turn it all off, blow it up, and start over (and I’m not advocating for that).

Scheduling structured reviews of AI-driven decisions is important to ensure that we don’t have an insights gap.

In those reviews, even simply asking “why did this change?” before moving on to “what do we do next?” adds an intentional moment to ensure we’re not on autopilot with systems that are not connected deeply enough to our strategy.

3. Train Teams To Interpret, Not Just Monitor, Search Data

We all have dashboards and data coming to us. Or, we have go-to reports in Google Analytics 4 or our web analytics suite that we’re comfortable with. Those are important to have, and any alerts coming our way are great for tracking real-time progress.

Maintaining (or developing) analysts and strategists who can translate data, patterns, and observations into insights is important. Yes, you can create AI agents to do this, but ensure that you have oversight of the agents and that there’s enough cross-checking to ensure that business outcomes aren’t negatively impacted by assumptions that go on for too long in an automated way.

4. Treat AI Outputs As Inputs (For Humans), Not Answers

I’m being careful with my wording of “inputs” and “outputs” here. What AI gives us should be treated as output, but it shouldn’t stop there: the AI output should become input for humans.

Even the seemingly smartest ideas from AI should be taken as output for human review, not as a definitive (a favorite AI word, by the way) answer.

Just as when humans own the full process, whatever level of AI and automation is involved, we should maintain healthy skepticism and validate the results.

5. Protect Institutional Knowledge In Search Marketing

The more automation we have, the more scattered our documentation is likely to be. It probably lives in many places, within platforms, or may be lacking overall. As we get smarter and more efficient with our tech stacks, we can’t lose critical institutional knowledge in search marketing.

That means we need to document learnings from tests, optimization, campaigns, and changes. We don’t want to repeat mistakes when platforms, vendors, or other variables change.

6. Align Automation With Business Outcomes, Not Platform Metrics

This is not a new recommendation or news to anyone who has been in marketing leadership. However, I point it out as a word of caution: the deeper we go in turning things over to automation, the more we risk getting lost in the weeds, unable to connect actions, activities, tactics, and work back to an ultimate marketing-driven business outcome.

We need the platform metrics. But we still need to be able to translate metrics at every depth back to something higher in the marketing and business ROI equation. Automating and scaling without context can lead us to simply do more of something, faster or cheaper, without necessarily moving the needle on ROI.

7. Reintroduce Strategic Review Into Search Marketing Cadence

I mentioned asking questions with human review earlier. More broadly, ensuring that strategic review is integrated into your search marketing cadence is important. My team has been challenging our own client reporting meetings, metrics, and flow recently.

Whether you already have a monthly or quarterly strategic review process or not, this is an opportunity to challenge what automation and AI are doing in the mix. What is it helping, hiding, or potentially distorting? How can we include this in strategic review and go beyond just the data, reports, and activity?

8. Elevate Search Reporting For Executive Audiences

At the heart of any talk about insights, we know we have to translate performance into narrative. With more automation, we need to have more translation. What we are doing matters. However, our executive peers and audiences are a degree (or more) further removed from what we do, and with new tech, are probably even less connected (no offense to the super high-tech execs I know and love).

We still must connect search behavior to customer intent and business priorities. That hasn’t changed, even if we need to layer in more or mine it out of the automation we have in place.

Wrap Up

Automation is essential, and for most, it is a big part of how our teams are scaling digital marketing and search marketing work. Plus, we're leveraging automated functions (whether by choice or not) in the platforms and channels where we do our work.

Automation is incomplete, though, without insight. Strategic understanding is not just necessary; it can be a competitive advantage in search. When everyone is automating, going beyond the baseline with strategic insights and acting on them can be a difference-maker.

The goal here isn’t to slow automation. It is to advance your team’s ability to think critically while scaling implementation and execution.

Featured Image: Anton Vierietin/Shutterstock

Google Downplays GEO – But Let’s Talk About Garbage AI SERPs via @sejournal, @martinibuster

Google’s Danny Sullivan and John Mueller’s Search Off The Record podcast offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.

SEO And The Next Generation Of Search

Google used to rank content based on keyword matching and PageRank was a way to extend that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.

What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”

So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are in a long-form format that answers three or more additional questions beyond the user’s initial query.

The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.

Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?

How To Write For Longform Answers

Given that Google is surfacing multi-paragraph answers, does it make sense to create content that's organized into bite-sized chunks? How does that affect how humans read content? Will they like it or leave it?

Many SEOs are recommending that publishers break the page up into "chunks" based on the intuition that AI understands content in chunks, dividing the page into sections. But that's an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through headings and HTML elements like ordered and unordered lists. A properly marked up and formatted web page should already be organized into a logical structure that a human and a machine can easily understand. Duh… right?

It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers to not break their content up into chunks.

Danny said:

“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?

So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”

Danny talked about chunking with some Google engineers and his takeaway from that conversation is to recommend against chunking. The second takeaway is that their systems are set up to access content the way human readers access it and for that reason he says to craft the content for humans.

Avoids Talking About Search Referrals

But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out impacts referrals because Google is ranking a handful of pages across multiple queries for every one query a user makes. But what compounds this situation, as you will see further on, is that the sites Google is ranking do not measure up.

Focus On The Big Picture

Danny Sullivan next discusses the downside of optimizing for a machine, explaining that systems eventually improve, which usually means that optimizations aimed at machines stop working.

He explained:

“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.

…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”

Let’s Talk About Garbage AI Search Results

I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.

Subject Matter Expertise Is Missing

Google is intentionally hiding expert publications in the search results, tucking them away in the More tab. To find expert content, a user has to click the More tab and then click the News tab.

How Google Hides Expert Web Pages


Google’s AI Mode Promotes Garbage And Sites Lacking Expertise

This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.

Google’s AI Mode cites the following pages:

1. An abandoned Medium blog from 2018 that only ever had two blog posts, both with broken images. That's not authoritative.

2. An article published on LinkedIn, a business social networking website. Again, that's neither authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?

3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?

Screenshot Of Google’s Garbage AI Results

Google Hides The Good Stuff In More > News Tab

Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable websites. Instead, Google hides the high-quality web pages under the More tab.

Screenshot Of Hidden High Quality Search Results

GEO Or SEO – It Doesn’t Matter

This whole thing about GEO or AEO and whether it’s all SEO doesn’t really matter. It’s all a bunch of hand waving and bluster. What matters is that Google is no longer ranking high quality sites and high quality sites are withering from a lack of traffic.

I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?

Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.

How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?


Featured Image by Shutterstock/Kues

Agentic Commerce: What SEOs Need To Consider (ACP & UCP) via @sejournal, @alexmoss

In my last post, I referenced how there is now a growing split between the "human" web and the "agentic" web, where AI agents are becoming an additional audience/profile alongside the "traditional" human visitors we have been optimizing for over the years.

This shift is now becoming more aggressive, especially when it comes to the transactional web in the form of agentic commerce. 2026 will see the accelerated adoption of this method, where store owners will now have to cater to and optimize for both the human and agentic visitor concurrently.

The recent launch of Universal Commerce Protocol (UCP) from Google underlines the push towards this integration of AI and ecommerce experiences.

What Is Agentic Commerce?

Agentic commerce is when agents complete purchases autonomously on behalf of users. Now, a human can engage with a large language model platform, where the agent will browse and purchase from a site on behalf (and with approval) of the human. Not only is the agent acting as the gatekeeper for information gain and influencing decisions, but they are also acting as the gatekeeper for the transaction itself.

This goes a step beyond delegating an LLM to act as a recommendation agent or a method of validation; it transfers the authority to actually transact.

Enter ACP (Agentic Commerce Protocol)

On Sept. 29, 2025, OpenAI and Stripe announced their partnership and, within this, launched ACP, an open standard that defines how AI agents, merchants, and payment providers interact to complete agentic and programmatic purchases.

On the same day, OpenAI detailed platforms that were immediately able to benefit from agentic commerce, including Shopify and Etsy, with others following suit using the protocol, including Walmart and Instacart.

From a CMS point of view, Shopify hit the ground running by enabling ACP for over 1 million merchants from the day of the announcement. WooCommerce has followed suit more recently by announcing it will be part of Stripe’s launch of Agentic Commerce Suite, which will allow even more merchants the ability to sell products through various AI-based platforms.

But ACP was launched three months ago, and as we now know, things move fast…

UCP: Google’s Answer To The Immersive Agentic Commerce Experience

Google just announced the launch of the Universal Commerce Protocol, which widens some boundaries set by ACP by tackling a broader problem: providing any AI surface (like Search AI Mode or Gemini) a common language to discover merchants, understand their capabilities, and orchestrate full journeys from discovery through order management, as well as engagement beyond a purchase (also made seamless using Google Pay). It does this by integrating with other existing standards, including APIs, Agent2Agent (A2A), and the Model Context Protocol (MCP).

Aspect           | ACP (OpenAI)                                            | UCP (Google)
Primary focus    | Agent-led commerce in ChatGPT and ACP-aware agents.     | Unified rail for many agents/surfaces talking to merchants.
Journey coverage | Product feed, checkout, fulfillment, delegated payment. | Discovery, checkout, discounts, fulfillment, order management, payments.
Driver           | OpenAI + Stripe & ecosystem partners.                   | Google + retailers/platforms (Shopify, Etsy, Walmart, etc.).

Here, Google adds to the possibilities of the commerce experience, where SEOs can adopt both ACP and UCP in order to accommodate both platforms and ecosystems.

This will only become more immersive as 2026 progresses. Google has the great advantage of knowing a lot about individual users, and features such as AI inside Gmail illustrate how Google can utilize and understand much more context about individuals to provide an even more frictionless experience.

Why This Matters For SEOs

As SEOs, we’ve spent over a generation optimizing for humans, albeit for various personas or ICPs. While we are still required to do this, we must now include the agent as an additional consideration. This does pose another challenge: that AI agents don’t browse pages but instead query APIs, parse product feeds, and evaluate structured data.
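To make that concrete, here is a minimal sketch of how an agent-style consumer might pull schema.org Product data out of a page's JSON-LD rather than reading the rendered copy. The page snippet, product name, and values are all invented for illustration, and a real agent would use a proper HTML parser rather than a regex.

```python
import json
import re

# Hypothetical product page HTML with an embedded schema.org JSON-LD block.
html = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Classic Crewneck Sweatshirt",
 "offers": {"@type": "Offer", "price": "39.99",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock"}}
</script>
</head><body>Long-form copy a human would read goes here.</body></html>
"""

# Extract every JSON-LD block, then keep only Product entities --
# roughly what an agent does before it ever "reads" your prose.
blocks = re.findall(
    r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
)
products = [d for d in map(json.loads, blocks) if d.get("@type") == "Product"]

for p in products:
    print(p["name"], p["offers"]["price"], p["offers"]["priceCurrency"])
```

The point of the sketch: if the JSON-LD is missing or malformed, this kind of consumer sees nothing, no matter how good the visible page is.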

As such, we need to optimize for this. Maybe I can give it a name…

ACO: Agentic Commerce Optimization

I don't want to trigger you by introducing yet another acronym after a year that already seemed full of new acronyms, but for the sake of this post, let's pretend that ACO is something you've been told to do now, alongside SEO, even though this is still SEO.

What would I need to consider and optimize for successful ACO?

  • Crawlability: Agents still follow links, take journeys, and understand IA.
  • Format: Content needs to be concise with less fluff, but enough to ensure unique value has been added, and that it provides consistency throughout the site as a whole.
  • Structured Data: Agents will become more reliant on existing standards, especially if they’re open source.
  • Brand Authority And Sentiment: Populating your products well is, of course, paramount, but without positive brand sentiment, you face the challenge of convincing the agent to cite you during discovery, and then convincing the human who sees that feedback. Third-party perspectives will contribute more heavily to agents' grounding procedures before any agentic commerce begins.

Sounds familiar, right? While ACP is a connector between your site and the platforms that allow agents to use it, and CMSs are working to make that connection as seamless as possible, this isn't just a switch that, when flipped, automatically optimizes everything.

ACO = SEO.  

Schema.org Is The Glue

Pascal Fleury presenting structured data options at Search Central Live Zurich December 2025
Image Credit: Alex Moss, January 2026

Last month at Google Search Central Live in Zurich, Pascal Fleury went into detail about structured data for Shopping. While "schema.org is the glue that holds [structured data] together," there are still other industry standards, such as GS1, that add even more granular detail to products. These not only inform agents on very specific details but also signal that you're a great source of information to keep ingesting from.

Product schema, pricing, availability, reviews, FAQs, shipping options and other logistics, loyalty schemes: all of this structured data will need close optimization. If it's missing or incorrect, you're invisible to agent-mediated discovery.
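As a rough illustration of the markup involved, here is a sketch that assembles a schema.org Product object and serializes it to the JSON-LD you would embed in a page. All values (name, SKU, brand, prices, ratings) are hypothetical; the authoritative property vocabulary is schema.org/Product and Google's structured data documentation for merchants.

```python
import json

# Illustrative schema.org Product markup. Every value here is invented;
# consult schema.org/Product for required and recommended properties.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Classic Crewneck Sweatshirt",
    "sku": "SWT-001",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "39.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
}

# Serialize for embedding inside a <script type="application/ld+json"> tag.
json_ld = json.dumps(product, indent=2)
print(json_ld)
```

Keeping fields like price and availability in sync with your actual catalog is the optimization work: stale or contradictory values are exactly what an agent will flag or skip.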

Test The Agents

Even before your store is ACP-enabled, test how agents perceive your products. Ask platforms about products in your category. Do they surface your brand? How do they describe your products and complementary offerings? What information are they presenting, from both first-party and third-party perspectives? And more importantly, what is missing that you expected to be present?

Then, enable. What are the differences? Compare the results.

What Can I Do About It Now?

ACP

For WooCommerce and Wix, you will unfortunately need to join Stripe's waitlist for ACS; Shopify users have their own waitlist to join. Until full rollout, we will have to wait, but expect adoption to accelerate in Q1 2026.

If you work with a site where you have to integrate ACP directly into your CMS, early adopters may benefit from early discovery while other CMSs catch up and competition is lower. While this will require more resources, you will be able to take advantage of what ACP has to offer while most wait for their CMS platform to build the solution for them.

UCP

This is extremely fresh information, but I suggest taking some time to understand it in detail, as well as experimenting where possible using Google's documentation and GitHub repo. I know that's how a lot of my time will be spent over the next few weeks.

Featured Image: Koupei Studio/Shutterstock

SEO Pulse: Core Update Favors Niche Expertise, AIO Health Inaccuracies & AI Slop via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: updates on rankings from December’s core update, platform responses to AI quality issues, and disputes that reveal tensions in AI-generated health information.

Early analysis of Google’s December core update suggests specialized sites gained visibility in several shared examples. Microsoft and Google executives reframed criticism of AI quality. The Guardian reported concerns about health-related AI Overviews, and Google pushed back on aspects of the testing.

Here’s what matters for you and your work.

December Core Update Favors Specialists Over Generalists

Early analysis of Google’s December core update suggests specialized sites gained visibility in examples shared across publishing, ecommerce, and SaaS.

Key facts: Aleyda Solís’s analysis found sites with narrower, category-specific strength appear to be gaining ground on “best of” and mid-funnel product terms.

Some publisher sites appeared to lose visibility on broader, top-of-funnel queries. In examples shared after the December 11-29 rollout, ecommerce and SaaS brands with direct category expertise appeared to outperform broader review sites and affiliate aggregators.

Why SEOs Should Pay Attention

This update highlights a trend where generalist sites face ranking pressure, especially on queries with commercial intent or specific domain knowledge. Sites covering multiple categories are affected by competition from dedicated category sites.

Google says improvements can take time to show up. Some changes can take effect in a few days, but it can take several months for its systems to confirm longer-term improvement. Google also says it makes smaller core updates that it doesn't typically announce.

In the examples shared so far, specialization appears to outperform breadth when queries have specific intent.

What SEO Professionals Are Saying

Luke R., founder at Adexa.io, commented on LinkedIn:

“Specialists rise when search stops guessing and starts serving intent. These shifts reward brands that live one problem, one buyer.”

Ayesha Asif, social media manager and content strategist, wrote:

“Generalist pages used to win on authority, but now depth matters more than domain size.”

Thanos Lappas, founder at Datafunc, added:

“This feels like the beginning of a long-anticipated transition in how search evaluates relevance and expertise.”

In that thread, several commenters argued the update favors deep, category-specific content over broad coverage, and that domain authority mattered less than focused expertise in the examples being discussed.

Read our full coverage: December Core Update: More Brands Win “Best Of” Queries

Guardian Investigation Claims AI Overview Health Inaccuracies

The Guardian reported that health organizations and experts reviewed examples of AI Overviews for medical queries and raised concerns about inaccuracies. A Google spokesperson said many examples were “incomplete screenshots.” The spokesperson also said the vast majority of AI Overviews are factual and helpful, and that Google continuously makes quality improvements.

Key facts: The Guardian said it tested health queries and shared AI Overview responses with health groups and experts for review. Google's spokesperson added that the results linked "to well-known, reputable sources" and recommended seeking out expert advice.

Why SEOs Should Pay Attention

AI Overviews can appear at the top of results. When the topic is health, errors carry more weight. The Guardian’s reporting also highlights a practical problem. One charity leader told The Guardian the AI summary changed when repeating the same search, pulling from different sources. That can make verification harder.

Publishers have spent years investing in documented medical expertise to meet Google’s expectations around health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

What Health Organizations Are Saying

Sophie Randall, director of the Patient Information Forum, told The Guardian:

"Google's AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people's health."

Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, stated:

“If someone followed what the search result told them, they might not take in enough calories … and be unable to tolerate either chemotherapy or potentially life-saving surgery.”

The reactions reveal two concerns. First, that even when AI Overviews link to trusted sources, the summary itself can override that trust by presenting confident but incorrect guidance. Second, some reactions framed Google’s response as addressing individual examples without explaining how these errors happen or how often they occur.

Read our full coverage: Guardian Investigation: AI Overviews Health Accuracy

Microsoft CEO And Google Engineer Reframe AI Quality Criticism

Within one week, Microsoft CEO Satya Nadella published a blog post asking the industry to “get beyond the arguments of slop vs. sophistication,” while Google Principal Engineer Jaana Dogan posted that people are “only anti new tech when they are burned out from trying new tech.”

Key facts: Nadella’s blog post characterized AI as “cognitive amplifier tools” and called for “a new equilibrium” that accounts for humans having these tools. Dogan’s X post framed anti-AI sentiment as burnout from trying new technology. In replies, some people pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable in day-to-day workflows. The timing follows Merriam-Webster naming “slop” its 2025 Word of the Year.

Why SEOs Should Pay Attention

Some readers may interpret these statements as an attempt to move the conversation away from output quality and toward user expectations. When people are urged to move past “slop vs. sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.

The practical concern is how these companies respond to user feedback versus how they frame criticism. Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related one.

What Industry Observers Are Saying

Jez Corden, managing editor at Windows Central, wrote that Nadella’s framing of AI as a “scaffolding for human potential” felt “either naively utopic, or at worse, wilfully dishonest.”

Tom Warren, senior editor at The Verge, wrote on Bluesky that Nadella wants everyone to move beyond the arguments about AI slop, calling 2026 a “pivotal year for AI.”

The commentary reveals a gap between executive messaging about AI as a transformative technology and the user experience of AI products, which feels inconsistent or forced. Some reactions suggested the request drew more attention to the term.

Read our full coverage: Microsoft CEO, Google Engineer Deflect AI Quality Complaints

Theme Of The Week: Competing Standards

Each story this week reveals a tension between the quality standards applied to publishers and those applied to platforms’ own AI systems.

The December core update appears to put more weight on category expertise than broad coverage in the examples highlighted. The Guardian investigation questions whether AI Overviews meet the accuracy bar Google sets for health content. The Nadella messaging attempts to reframe quality concerns as user adjustment problems rather than product issues.

Together, these stories point to a double standard: websites are held to strict quality bars, while platforms defend their own AI summaries when accuracy is questioned.

Featured Image: Accogliente Design/Shutterstock