SEO Is No Longer A Single Discipline via @sejournal, @DuaneForrester

Most people have a favorite coffee mug. You reach for it without thinking. It fits your hand. It does its job. For a long time, SEO felt like that mug. A defined craft, a repeatable routine, a discipline you could explain in a sentence. Crawl the site. Optimize the pages. Earn visibility. Somewhere along the way, that single mug turned into a cabinet full of cups. Each one different. Each one required – none of them optional anymore.

That shift did not happen because SEO got bloated or unfocused. It happened because discovery changed shape.

SEO did not become complex on its own. The environment around it fractured, multiplied, and layered itself. SEO stretched to meet it.

Image Credit: Duane Forrester

The SEO Core Still Exists

Despite everything that has changed, SEO still has a core. It is smaller than many people remember, but it is still essential.

This core is about access, clarity, and measurement. Search engines must be able to crawl content, understand it, and present it in a usable way. Google’s own SEO Starter Guide still frames these fundamentals clearly.

Crawl and indexing remain foundational. If content cannot be accessed or stored, nothing else matters. Robots.txt governance follows a formal standard, RFC 9309, which defines how crawlers interpret exclusion rules. This matters because robots.txt is guidance, not enforcement. Misuse can create accidental invisibility.
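For a quick sanity check, Python's standard-library robots.txt parser applies RFC 9309-style exclusion rules the way a compliant crawler would. A minimal sketch, with a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

# example.com is a placeholder; point this at a real site's robots.txt.
robots = RobotFileParser("https://www.example.com/robots.txt")
robots.read()

# This reports what a compliant crawler would do. Remember: robots.txt
# is guidance, not enforcement, so non-compliant bots can ignore it.
print(robots.can_fetch("Googlebot", "https://www.example.com/private/page.html"))
```

A `False` here on a page you expect to rank is exactly the kind of accidental invisibility the standard warns about.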

Page experience is no longer optional. Core Web Vitals represent measurable user experience signals that Google incorporates into Search. The broader framework and measurement approach are documented on Web.dev.

Content architecture still matters. Pages must map cleanly to intent. Headings must signal structure. Internal links must express relationships. Structured data still plays a role in helping machines interpret content and enable eligible rich results today.
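As a small illustration of that machine-readable layer, the sketch below builds schema.org Article markup in Python. The headline, author, and date are hypothetical placeholders, and markup alone never guarantees a rich result:

```python
import json

# Illustrative schema.org Article markup; all values are made up.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Headings Signal Structure",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-03-10",
}

# This JSON would ship inside a <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```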

Measurement and diagnostics remain part of the job. Search Console, analytics, and validation tools still anchor decision-making for traditional search.

That is the SEO core. It is real work, and it is not shrinking. It is, however, no longer sufficient on its own.

This first ring out from the core is where SEO stops being a single lane.

Once the core is in place, modern SEO immediately runs into systems it does not fully control. This is where the real complexity starts to expand.

AI Search And Answer Engines

AI systems now sit between content and audience. They do not behave like traditional search engines. They summarize, recommend, and sometimes cite. Critically, they do not agree with each other.

In mid-2025, BrightEdge analyzed brand recommendations across ChatGPT, Google AI experiences, and other AI-driven interfaces and found that the platforms disagreed for 62% of queries. Search Engine Land covered the same analysis and framed it as a warning for marketers assuming consistency across AI search experiences.

This introduces a new kind of SEO work. Rankings alone no longer describe visibility. Practitioners now track whether their brand appears in answers, which pages are cited when citations exist, and how often competitors are recommended instead.

This is not arbitrary. Retrieval-augmented generation exists precisely to ground AI responses in external sources and improve factual reliability. The original RAG paper outlines this architecture clearly.
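To make the retrieval step concrete, here is a deliberately tiny retrieve-then-generate skeleton. Real RAG systems use dense embeddings, a vector index, and an LLM; this sketch substitutes naive word overlap and templating just to show the data flow:

```python
def score(query: str, passage: str) -> int:
    # Naive relevance: count of query words appearing in the passage.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

passages = [
    "Robots.txt tells compliant crawlers which paths to avoid.",
    "Core Web Vitals measure loading, interactivity, and stability.",
    "Structured data helps machines interpret page content.",
]

grounding = retrieve("what does robots.txt do", passages)
# The answer is conditioned on whatever was retrieved, which is why
# extractable, self-contained passages matter for visibility.
print(f"Answer grounded in: {grounding}")
```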

That architectural choice pulls SEO into new territory. Content must be written so it can be extracted without losing meaning. Ambiguity becomes a liability. Sections must stand alone.

Chunk-Level Content Architecture

Pages are no longer the smallest competitive unit. Passages are. We keep being told not to focus on chunks for traditional search, but traditional search is no longer the only game in town, and once you look outside it, you need to understand the role chunks play.

Modern retrieval systems often pull fragments of content, not entire documents. That forces SEOs to think in chunks. Each section needs a single job. Each answer needs to survive without surrounding context.

This changes how long-form content is written. It does not eliminate depth. It demands structure. We now live in a hybrid world where both layers of the system must be served. It means more work, but selecting one over the other? That’s a mistake at this point.
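A minimal sketch of what heading-based chunking can look like, assuming markdown-style headings; production pipelines also cap chunk length and add overlap:

```python
import re

def chunk_by_headings(doc: str) -> list[dict]:
    """Split a document into sections, one chunk per '## ' heading."""
    chunks = []
    for section in re.split(r"\n(?=## )", doc.strip()):
        lines = section.splitlines()
        heading = lines[0].lstrip("# ").strip()
        body = " ".join(lines[1:]).strip()
        # Each chunk carries its own heading so it can stand alone.
        chunks.append({"heading": heading, "text": body})
    return chunks

doc = """## What is alt text?
Alt text describes an image for screen readers and crawlers.

## Why does it matter?
It doubles as a retrieval signal in visual search."""

for chunk in chunk_by_headings(doc):
    print(chunk["heading"], "->", chunk["text"])
```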

Visual Search

Discovery increasingly starts with cameras. Google Lens allows users to search what they see, using images as queries. Pinterest Lens and other visual tools follow the same model.

This forces new responsibilities. Image libraries become strategic assets. Alt text stops being a compliance task and becomes a retrieval signal. Product imagery must support recognition, not just aesthetics.

Google’s product structured data documentation explicitly notes that product information can surface across Search, Images, and Lens experiences.

Audio And Conversational Search

Voice changes how people ask questions and what kind of answers they accept. Queries become more conversational, more situational, and more task-focused.

Industry research compiled by Marketing LTB shows that a meaningful portion of users now rely on voice input, with multiple surveys indicating that roughly one in four to one in three people use voice search, particularly on mobile devices and smart assistants.

That matters less as a headline number and more for what it does to query shape. Spoken queries tend to be longer, more natural, and framed as requests rather than keywords. Users expect direct, complete answers, not a list of links.

And the biggest search platform is reinforcing this behavior. Google has begun rolling out conversational voice experiences directly inside Search, allowing users to ask follow-up questions in real time using speech. The Verge covered Google’s launch of Search Live, which turns search into an ongoing dialogue rather than a single query-response interaction.

For SEO practitioners, this expands the work. It pulls them into spoken-language modeling, answer-first content construction, and situational phrasing that works when read aloud. Pages that perform well in voice and conversational contexts tend to be clear, concise, and structurally explicit, because ambiguity collapses quickly when an answer is spoken rather than scanned. Still think traditional SEO approaches are all you need?

Personalization And Context

There is no single SERP. Google explains that search results vary based on factors including personalization, language, and location.

For practitioners, this means rankings become samples, not truths. Monitoring shifts toward trends, segments, and outcome-based signals rather than position reports.

Image Credit: Duane Forrester

The third ring is where complexity becomes really visible.

These are not just SEO tasks. This layer contains entire disciplines that SEO now interfaces with.

Brand Protection And Retrieval In An LLM World

Brand protection used to be a communications problem. Today, it is also a retrieval problem.

Large language models do not simply repeat press releases or corporate messaging. They retrieve information from a mixture of training data, indexed content, and real-time sources, then synthesize an answer that feels authoritative, whether it is accurate or not.

This creates a new class of risk. A brand can be well-known, well-funded, and well-covered by media, yet still be misrepresented, outdated, or absent in AI-generated answers.

Unlike traditional search, there is no single ranking to defend. Different AI systems can surface different descriptions, different competitors, or different recommendations for the same intent. That BrightEdge analysis showing 62% disagreement in brand recommendations across AI platforms illustrates how unstable this layer can be.

This is where SEO is pulled into brand protection work.

SEO practitioners already operate at the intersection of machine interpretation and human intent. In an LLM environment, that skill set extends naturally into brand retrieval monitoring. This includes tracking whether a brand appears in AI answers, how it is described, which sources are cited when citations exist, and whether outdated or incorrect narratives persist.
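Monitoring at this level can start very simply. The sketch below tallies brand mentions across saved AI answers; the assistants, answers, and brand names are all hypothetical, and real monitoring would pull responses from each AI surface programmatically:

```python
# Toy visibility tally over saved AI answers (all text is invented).
answers = {
    "assistant_a": "For CRM software, consider Acme CRM or Initech.",
    "assistant_b": "Popular options include Initech and Globex.",
}
brands = ["Acme CRM", "Initech", "Globex"]

for brand in brands:
    hits = [name for name, text in answers.items() if brand.lower() in text.lower()]
    print(f"{brand}: mentioned in {len(hits)}/{len(answers)} answers {hits}")
```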

PR and brand teams are not historically equipped to do this work. Media monitoring tools track mentions, sentiment, and coverage. They do not track how an AI model synthesizes a brand narrative, nor how retrieval changes over time.

As a result, SEO increasingly becomes the connective tissue between brand, PR, and the machine layer.

This does not mean SEO owns brand. It means SEO helps ensure that the content machines retrieve about a brand is accurate, current, and structured in ways retrieval systems can use. It means working with brand teams to align authoritative sources, consistent terminology, and verifiable claims. It means working with PR teams to understand which coverage reinforces trust signals that machines recognize, not just headlines humans read.

In practice, brand protection in AI search becomes a shared responsibility, with SEO providing the technical and retrieval lens that brand and PR teams lack, and brand and PR providing the narrative discipline SEO cannot manufacture alone.

This is not optional work. As AI systems increasingly act as intermediaries between brands and audiences, the question is no longer “how do we rank?” It is “how are we being represented when no one clicks at all?”

Branding And Narrative Systems

Branding is not a subset of SEO. It is a discipline that includes voice, identity, reputation, executive presence, and crisis response.

SEO intersects with branding because AI systems increasingly behave like advisors, recommending, summarizing, and implicitly judging.

Trust matters more in that environment. The Edelman Trust Barometer documents declining trust across institutions and brands, reinforcing why authority can no longer be assumed. As trust diminishes, consumer behavior changes; the old equation of brand = X, therefore X = brand, no longer holds.

SEO practitioners now care about sourcing, claims, and consistency because brand perception can now influence whether content is surfaced or ignored.

UX And Task Completion

Clicks are no longer the win. Completion is.

Though the research is older, it remains applicable. Nielsen Norman Group defines success rate as a core usability metric, measuring whether users can complete tasks. They also outline usability metrics tied directly to task efficiency and error reduction.
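Part of the appeal of success rate is how simple it is to compute. A minimal example with made-up numbers:

```python
# Success rate in the NN/g sense: share of attempted tasks completed.
# The counts below are invented for illustration.
completed_tasks = 37
attempted_tasks = 50
success_rate = completed_tasks / attempted_tasks
print(f"Task success rate: {success_rate:.0%}")  # Task success rate: 74%
```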

When AI and zero-click experiences compress opportunities, the pages that do earn attention must deliver. SEO now has a stake in friction reduction, clarity, and task flow. CRO (conversion rate optimization) has never been more important, but how you define “conversion” has also never been broader.

Paid Media, Lifecycle, And Attribution

Discovery spans organic, AI answers, video feeds, and paid placements. Measurement follows the same fragmentation.

Google Analytics defines attribution as assigning credit across touchpoints in the path to conversion.
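A toy contrast of two attribution rules over a single conversion path shows why channel credit is contested; real models, such as GA4's data-driven attribution, are far more involved:

```python
# Hypothetical touchpoints in order, ending in one conversion.
path = ["organic", "email", "paid"]
conversion_value = 90.0

# Last-click gives all credit to the final touchpoint.
last_click = {path[-1]: conversion_value}
# Linear splits credit evenly across every touchpoint.
linear = {channel: conversion_value / len(path) for channel in path}

print(last_click)  # {'paid': 90.0}
print(linear)      # {'organic': 30.0, 'email': 30.0, 'paid': 30.0}
```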

SEO practitioners are pulled into cross-channel conversations not because they want to own them, but because outcomes are shared. Organic assists paid. Email creates branded demand. Paid fills gaps while organic matures.

Generational And Situational Behavior

Audience behavior is not uniform. Pew Research Center’s 2025 research on teens, social media, and AI chatbots shows how discovery and engagement increasingly differ across age groups, platforms, and interaction modes, including traditional search, social feeds, and AI interfaces.

This shapes format expectations. Discovery may happen in video-first environments. Conversion may happen on the web. Sometimes the web is skipped entirely.

What This Means For SEO Practitioners

SEO did not become more complex because practitioners lost discipline or focus; it became more complex because discovery fractured. The work expanded because the interfaces expanded. The inputs multiplied. The outputs stopped behaving consistently.

In that environment, SEO stopped being a function you execute and became a role you play inside a system you do not fully control, and that distinction matters.

Much of the anxiety practitioners feel right now comes from being evaluated as if SEO were still a closed loop. Rankings up or down. Traffic in or out. Conversions attributed cleanly. Those models assume a world where discovery happens in one place and outcomes follow a predictable path.

That is no longer the world we’re operating in.

Today, a user might encounter a brand inside an AI answer, validate it through a video platform, compare it through reviews surfaced in search, and convert days later through a branded query or a direct visit. In many cases, no single click tells the story. In others, there is no click at all.

This is why SEO keeps getting pulled into UX conversations, brand discussions, PR alignment, attribution debates, and content format decisions. Not because SEO owns those disciplines, but because SEO sits closest to the fault lines where discovery breaks or holds.

This is also why trying to “draw a box” around SEO keeps failing.

You can still define an SEO core, and you should. Crawlability, performance, content architecture, structured data, and measurement remain non-negotiable. But pretending the job ends there creates a gap between responsibility and reality. When visibility drops, or when AI answers misrepresent a brand, or when traffic declines despite strong fundamentals, that gap becomes painfully visible.

What’s changed is not the importance of SEO, but the nature of its influence.

Modern SEO operates as an integration discipline. It connects systems that were never designed to work together. It translates between machines and humans, between intent and interface, between brand narrative and retrieval logic. It absorbs volatility from platforms so organizations don’t have to feel it all at once.

That does not mean every SEO must take on every cup in the cabinet. It does mean understanding what those cups contain, which ones you own, which ones you influence, and which ones you simply need to account for when explaining outcomes.

The cabinet is already there, and you can choose to keep reaching for a single familiar mug and accept increasing unpredictability. Or you can open the cabinet deliberately, understand what’s inside, and decide how much of the expanded role you’re willing to take on.

Either choice is valid, but pretending everything still fits in one cup is no longer an option.



This post was originally published on Duane Forrester Decodes.


Featured Image: Master1305/Shutterstock

Building A Brand Is Not A Strategy, It Is A Starting Point via @sejournal, @TaylorDanRW

“Build a brand” has become one of the most repeated phrases in SEO over the past year. It is offered as both diagnosis and cure. If traffic is declining, build a brand. If large language models are not citing you, build a brand. If organic performance is unstable, build a brand.

The problem is not that this advice is wrong. The problem is that it is incomplete, and for many SEOs, it is not actionable.

A large proportion of people working in SEO today have developed in an environment that rewarded channel depth rather than marketing breadth. They understand crawling, indexing, content templates, internal linking, and ranking systems extremely well. What they have often not been trained in is how demand is created, how brands are formed in the mind, or how different marketing channels reinforce one another over time.

So, when the instruction becomes “build a brand,” the obvious question follows. What does that actually mean in practice, and what happens after you say the words?

SEO Is Not A Direct Demand Generator

Search has always been a demand capture channel rather than a demand creation channel. SEO does not usually make someone want something they did not already want. It places a brand in front of existing intent and attempts to win preference at the moment of consideration.

What SEO can do very effectively is increase mental availability. By being visible across a wide range of non-branded queries, a website creates repeated brand touchpoints. Over time, those touchpoints can contribute to familiarity, preference, and eventually loyalty.

The important part of that sentence is “over time.”

Affinity and loyalty are not short-term outcomes. They are built through repeated exposure, consistency of messaging, and relevance across different contexts. SEO can support this process, but it cannot compress it. No amount of optimization can turn visibility into trust overnight.

AI Has Changed The Pressure, Not The Fundamentals

AI has introduced new technical and behavioral challenges, but it has also created urgency at the executive level. Boards and leadership teams see both risk and opportunity, and the result is pressure. Pressure to act quickly, to be visible in new surfaces, and to avoid being left behind.

In reality, this is one of the most significant visibility opportunities since the mass adoption of social media. But like social media, it rewards those who understand distribution, reinforcement, and timing, not just production.

Where Content And Digital PR Actually Fit

Content and digital PR are often positioned as the vehicles for brand building in search. That framing is not wrong, but it is frequently too vague to be useful.

Google has been clear, including in recent Search Central discussions, that strong technical foundations still matter. Good SEO is a prerequisite to performance, not a nice-to-have. Content and digital PR sit within that system because they create the signals that justify deeper crawling, more frequent discovery, and sustained visibility. Both content and digital PR can be dissected further based on tactical objectives, but at the core, the objective is the same.

Search demand does not appear out of nowhere. It grows when topics are discussed, linked, cited, and repeated across the web. Digital PR contributes to this by placing ideas and assets into wider ecosystems. Content supports it by giving those ideas a constant home that search engines can understand and return to users.

This is not brand building in the abstract sense; it is visibility building.

Strong Visibility Content Accelerates Brand Building

Well-executed SEO content plays a critical role in brand building precisely because it operates at the point of repeated exposure. When a brand consistently appears for high-intent, non-branded queries, it earns familiarity before it ever earns loyalty.

Visibility-led content does not need to be overtly promotional to do this work. In many cases, its impact is stronger when it is practical, authoritative, and clearly written for the user rather than for the brand. Over time, this consistency creates an association between the problem space and the brand itself.

This is where many brand discussions lose precision. Brand is not only shaped by creative campaigns or opinion pieces. It is shaped by whether a brand reliably shows up with useful answers when someone is trying to understand a topic, solve a problem, or make a decision.

Strong SEO content compounds over time, and each ranking page reinforces the others. An example is some work I did with Cloudflare back in mid-2017: a content hub, positioned as a “learning center,” that we developed and rolled out a section at a time. Over the years, it has compounded to millions of organic visits and collected over 30,000 backlinks.

Image from author, January 2026

Each impression adds to mental availability, and each return visit subtly shifts perception from unfamiliar to known. This is slow work, but it is measurable and durable; it builds signals over time (including through Chrome usage) and, in turn, begins to feed its own growth.

In this sense, SEO content is not separate from brand building. It is one of the few channels where brand perception can be shaped at scale, repeatedly, and in moments of genuine user need.

Thought Leadership Without Readership Is A Vanity Project

Thought leadership content has real value, but only under specific conditions. It needs an audience, a distribution strategy, and a feedback loop.

One of the most common patterns seen over the years is organizations investing heavily in senior-led opinion pieces, vision statements, or industry commentary, and then assuming impact by default.

When performance is examined properly, using analytics platforms or marketing automation data, it often becomes clear that very few people are actually reading the content.

If nobody is consuming it, it is not thought leadership. It is publishing for internal reassurance.

This is not an argument against opinion-led content. It is an argument for accountability. Content should earn its place by contributing to visibility, engagement, or downstream commercial outcomes, even if those outcomes sit higher in the funnel.

That requires measurement beyond pageviews. It requires understanding how content is discovered, how it is referenced elsewhere, how it supports other assets, and whether it creates repeat exposure over time.

Balancing Brand And Search Visibility

The current challenge for SEOs is not choosing between brand building and visibility building. It is learning how to balance the two without confusing them.

Brand is the outcome of repeated, coherent experiences. Visibility is the mechanism that makes those experiences possible at scale. You cannot shortcut one with the other, and you cannot treat them as interchangeable.

For practitioners who have grown up inside SEO, this means expanding beyond the channel without abandoning its discipline. It means understanding distribution as well as creation, signals as well as stories, and measurement as well as messaging.

The future does not belong to those who simply declare themselves a brand. It belongs to those who understand how visibility compounds, how trust is earned gradually, and how SEO fits into a much wider system of influence.

Building a brand is not the answer. It is the work that begins once the question has finally been asked properly.



Featured Image: Master1305/Shutterstock

What Google SERPs Will Reward in 2026 [Webinar] via @sejournal, @lorenbaker

The Changes, Features & Signals Driving Organic Traffic Next Year

Google’s search results are evolving faster than most SEO strategies can adapt.

AI Overviews are expanding into new keyword and intent types, AI Mode is reshaping how results are displayed, and ongoing experimentation with SERP layouts is changing how users interact with search altogether. For SEO leaders, the challenge is no longer keeping up with updates but understanding which changes actually impact organic traffic.

Join Tom Capper, Senior Search Scientist at STAT Search Analytics, for a data-backed look at how Google SERPs are shifting in 2026 and where real organic opportunities still exist. Drawing from STAT’s extensive repository of daily SERP data, this session cuts through speculation to show which features and keywords are worth prioritizing now.

What You’ll Learn

  • Which SERP features deliver the highest click potential in 2026
  • How AI Mode features are showing up, and which initiatives to prioritize
  • The keyword and topic opportunities that still drive organic traffic next year

Why Attend?

This webinar offers a clear, evidence-based view of how Google SERPs are changing and what those changes mean for SEO strategy. You will gain practical insights to refine keyword targeting, focus on the right SERP features, and build an organic search approach grounded in real performance data for 2026.

Register now to understand the SERP shifts shaping organic traffic in 2026.

🛑 Can’t make it live? Register anyway and we’ll send you the on-demand recording after the event.

SEO in 2026: Key predictions from Yoast experts

If there’s one takeaway as we look toward SEO in 2026, it’s that visibility is no longer just about ranking pages, but about being understood by increasingly selective AI-driven systems. In 2025, SEO proved it was not disappearing, but evolving, as search engines leaned more heavily on structure, authority, and trust to interpret content beyond the click. In this article, we share SEO predictions for 2026 from Yoast SEO experts, Alex Moss and Carolyn Shelby, highlighting the shifts that will shape how brands earn visibility across search and AI-powered discovery experiences.

Key takeaways

  • In 2026, SEO focuses on visibility defined by clarity, authority, and trust rather than just page rankings
  • Structured data becomes essential for eligibility in AI-driven search and shopping experiences
  • Editorial quality must meet machine readability standards, as AI evaluates content based on structure and clarity
  • Rankings remain important as indicators of authority, but visibility now also includes citations and brand sentiment
  • Brands should align their SEO strategies with social presence and aim for consistency across all platforms to enhance visibility


A brief recap of SEO in 2025: what actually changed?

2025 marked a clear shift in how SEO works. Visibility stopped being defined purely by pages and rankings and began to be shaped by how well search engines and AI systems could interpret content, brands, and intent across multiple surfaces. AI-generated summaries, richer SERP features, and alternative discovery experiences made it harder to rely solely on traditional metrics, while signals such as authority, trust, and structure played a larger role in determining what was surfaced and reused.

As we outlined in our SEO in 2025 wrap-up, the brands that performed best were those with strong foundations: clear content, credible signals, and structured information that search systems could confidently understand. That shift set the direction for what was to come next.

By the end of 2025, it was clear that SEO had entered a new phase, one shaped by interpretation rather than isolated optimizations. The SEO predictions for 2026 from Yoast experts build directly on this evolution.

2026 SEO predictions by Yoast experts

The SEO predictions for 2026 shared here come from our very own Principal SEOs at Yoast, Alex Moss and Carolyn Shelby. Built on the lessons SEO revealed in 2025, these predictions focus less on reacting to individual updates and more on how search and AI systems are evolving at a foundational level, and what that means for sustainable visibility going forward.

TL;DR

SEO in 2026 is about understanding how signals such as structure, authority, clarity, and trust are now interpreted across search engines, AI-powered experiences, and discovery platforms. Each prediction below explains what is changing, why it matters, and how brands can practically adapt in the coming year.

Prediction 1: Structured data shifts from ranking enhancer to retrieval qualifier

In 2026, structured data will no longer be a competitive advantage; it will become a baseline requirement. Search engines and AI systems increasingly rely on structured data as a layer of eligibility to determine whether content, products, and entities can be confidently retrieved, compared, or surfaced in AI-powered experiences.

For ecommerce brands, this shift is especially significant. Product information such as pricing, availability, shipping details, and merchant data is now critical for visibility in AI-driven shopping agents and comparison interfaces. At the enterprise level, the move toward canonical identifiers reflects a growing need to avoid misattribution and data decay across systems that reuse information at scale.

What this means in practice:

Brands without clean, comprehensive entity and product data will not rank lower. They will simply not appear in AI-driven shopping and comparison flows at all.

Also read: Optimizing ecommerce product variations for SEO and conversions

How to act on this:

Treat structured data as part of your SEO foundation, not an enhancement. Tools like Yoast SEO help standardize the implementation of structured data. The plugin’s structured data features make it easier to generate rich, meaningful schema markup, helping search engines better understand your site and take control of how your content is described.
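As a concrete illustration of that foundation, here is schema.org Product markup built in Python, covering the price and availability fields the prediction calls out. All values are hypothetical, and shipping details (schema.org's OfferShippingDetails) are omitted for brevity:

```python
import json

# Illustrative schema.org Product markup; SKU and pricing are made up.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe",
    "sku": "TRS-042",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```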


Prediction 2: Agentic commerce becomes a visibility battleground, not a checkout feature

Agentic commerce marks a shift in how users discover and choose brands. Instead of browsing, comparing, and transacting manually, users increasingly rely on AI-driven agents to recommend, reorder, or select products and services on their behalf. In this environment, visibility is established before a checkout ever happens, often without a traditional search query.

This shift is becoming more concrete as search and commerce platforms move toward standardized ways for agents to understand and transact with merchants. Recent developments around agentic commerce protocols and the Universal Commerce Protocol (UCP) highlight how AI systems are being designed to access product, pricing, availability, and merchant information more directly. As a result, platforms such as Shopify, Stripe, and WooCommerce are no longer just infrastructure. They increasingly act as distribution layers, where agent compatibility influences which brands are surfaced, recommended, or selected.

What this means in practice:

In 2026, SEO teams will be accountable for agent readiness in much the same way they were once accountable for mobile-first readiness. If agents cannot consistently interpret your brand, product data, or availability, they are more likely to default to competitors that they can understand with greater confidence.

How to act on this:

Focus on making your brand legible to automated decision systems. Ensure product information, pricing, availability, and supporting metadata are clear, structured, and consistent across your site and feeds. This is not about optimizing for a single platform or protocol, but about reducing ambiguity so AI agents can accurately interpret and act on your information across emerging agent-driven discovery and commerce experiences.

Prediction 3: Editorial quality becomes a machine readability requirement

In 2026, editorial quality is no longer judged only by human readers. AI systems increasingly evaluate content based on how efficiently it can be parsed, summarized, cited, and reused. Verbosity, fluff, and circular explanations do not fail editorially. They fail functionally.

Content that is concise, clearly structured, and well-attributed has higher chances of performing well. Headings, lists, definitions, and tables directly influence how information is chunked and reused across AI-generated summaries and search experiences.

Must read: Why is summarizing essential for modern content?

What this means in practice:

“Helpful content” is being held to higher editorial standards. Content that cannot be summarized cleanly without losing meaning becomes less useful to AI systems, even if it remains readable to human audiences.

How to act on this:

Make editorial quality measurable and machine-actionable. Use tools that help align content with modern discoverability requirements. Yoast SEO Premium’s AI features (AI Generate, AI Optimize, and AI Summarize) help you assess and improve how content is structured and optimized, supporting both search engines and AI systems in understanding your intent.

Prediction 4: Rankings still matter, but as training signals, not endpoints

Despite ongoing speculation, rankings do not disappear in 2026. Instead, their role changes. AI agents and search systems continue to rely on top-ranked, trusted pages to understand authority, relevance, and consensus within a topic.

While rankings are no longer the final KPI, abandoning them entirely creates blind spots in understanding why certain brands are included or ignored in AI-driven experiences.

What this means in practice:

Teams that stop tracking rankings altogether risk losing insight into how authority is established and reinforced across search and AI systems.

How to act on this:

Continue to use rankings as diagnostic signals, but don’t treat them as the sole indicator of success in 2026. Alongside traditional performance metrics for SEO in 2026, look at how often your brand is mentioned, cited, or summarized in AI-generated answers and recommendations.

Tools like Yoast AI Brand Insights, available as part of Yoast SEO AI+, help surface these broader visibility signals by showing how your brand appears across AI platforms, including sentiment, citation patterns, and competitive context.


Prediction 5: Brand sentiment becomes a core visibility signal

Brand sentiment increasingly influences how search engines and AI systems assess credibility and trust. Mentions, whether linked or unlinked, contribute to a broader understanding of how a brand is perceived across the web. AI systems synthesize signals from reviews, forums, social platforms, media coverage, and knowledge bases to form a composite view of legitimacy and expertise.

What makes this shift more impactful is amplification. Inconsistent messaging or negative sentiment is not smoothed out over time. Instead, it becomes more apparent when systems attempt to summarize, compare, or recommend brands across search and AI-driven experiences.

What this means in practice:

SEO, brand, PR, and social teams increasingly influence the same visibility signals. When these efforts are misaligned, credibility weakens. When they reinforce one another, trust becomes easier for systems to establish and maintain.

How to act on this:

Focus on consistency across owned, earned, and shared channels. Pay attention not only to where your brand ranks, but also to how it is discussed, described, and contextualized across various platforms. As discovery expands beyond traditional search results, reputation and narrative coherence become essential inputs into how brands are surfaced and understood.

Prediction 6: Multimodal optimization becomes baseline, not optional

Search behavior is no longer text-first. Images, video, audio, and transcripts now function as retrievable knowledge objects that feed both traditional search and AI-powered experiences. In particular, video platforms continue to influence how expertise and authority are understood at scale.

Platforms like YouTube function not only as discovery engines, but also as training corpora for AI systems learning how to interpret topics, brands, and creators.

What this means in practice:

Brands with strong written content but weak visual or video assets may appear incomplete or “thin” to AI systems, even if their articles are well-optimized.

How to act on this:

Treat multimodal content as part of your SEO foundation. Support written content with relevant visuals, video, and transcripts. Clear structure and readability remain essential, and tools like Yoast SEO help ensure your core content remains accessible and well-organized as it is reused across formats.

Prediction 7: Social platforms become secondary search indexes

Discovery will increasingly happen outside traditional search engines. Platforms such as TikTok, LinkedIn, Reddit, and niche communities now act as secondary search indexes where users validate expertise and intent.

AI systems reference these platforms to verify whether a brand’s claims, expertise, and messaging are substantiated in public discourse.

What this means in practice:

Presence alone is not enough. Inconsistent or unclear messaging across platforms weakens trust signals, while focused, repeatable narratives reinforce authority.

How to act on this:

Align your SEO strategy with social and community visibility to enhance your online presence. Ensure that your expertise, terminology, and positioning remain consistent across all discussions about your brand.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Prediction 8: Email reasserts itself as the most controllable growth channel

As discovery fragments and platforms increasingly gate access to audiences, email regains importance as a high-signal, low-distortion channel. Unlike search or social platforms, email offers direct access to users without algorithmic mediation.

In 2026, email plays a supporting role in reinforcing authority, engagement, and intent signals, especially as AI systems evaluate how audiences interact with trusted sources over time.

What this means in practice:

Brands that underinvest in email become overly dependent on platforms they do not control, which increases volatility and reduces long-term resilience.

How to act on this:

Focus on relevance over volume. Segment audiences, align content with intent, and use email to reinforce expertise and trust, not just drive clicks.

Prediction 9: Authority outweighs freshness for most non-news queries

For non-news content, AI systems increasingly prioritize credible, historically consistent sources over frequent updates or constant publishing. Freshness still matters, but only when it meaningfully improves accuracy or relevance.

Long-standing domains with coherent narratives and well-maintained content benefit, provided their foundations remain clean and trustworthy.

What this means in practice:

Scaled/programmatic content strategies lose effectiveness. Publishing frequently without maintaining quality or consistency introduces noise rather than value.

How to act on this:

Invest in maintaining and improving existing content. Update thoughtfully, reinforce expertise, and ensure that your most important pages remain accurate, structured, and authoritative.

Prediction 10: SEO teams evolve into visibility and narrative stewards

In 2026, SEO will extend far beyond search engines. SEO teams are increasingly influencing how brands are perceived by both humans and machines across search, AI-generated answers, and discovery platforms.

Success is measured not by traffic alone, but also by inclusion, citation, and trust. SEO becomes a strategic function that shapes how a brand is represented and understood.

What this means in practice:

SEO teams that focus solely on production or technical fixes risk losing influence as visibility becomes a cross-channel concern.

How to act on this:

Shift focus toward clarity, consistency, and long-term trust. The most effective teams help define how a brand is understood, not just how it ranks.

What SEO is no longer about in 2026 (misconceptions to discard)

As SEO evolves in 2026, many long-standing assumptions no longer reflect how search engines and AI-driven systems actually determine visibility. The table below contrasts common SEO myths with the realities shaped by recent changes and expert insights from Yoast.

Diminishing relevance → What actually matters in 2026

SEO is mainly about ranking pages → Rankings still matter, but they serve as signals for authority and relevance, rather than the final measure of visibility.
Structured data is optional or a ranking boost → Structured data is now a baseline requirement for eligibility in AI-driven search, shopping, and comparison experiences.
Publishing more content leads to better performance → Authority, clarity, and maintenance of fewer strong assets outperform high-volume publishing.
Editorial quality is subjective → Content quality is increasingly evaluated by machines based on structure, clarity, and reusability.
Brand reputation is a PR concern, not an SEO one → Brand sentiment directly influences how AI systems interpret, trust, and recommend brands.
Search is still primarily text-based → Images, video, audio, and transcripts are now core retrievable knowledge objects.
SEO can be measured only through traffic → Visibility spans AI answers, social platforms, agents, and citations, requiring broader performance signals.

Looking ahead: what will shape SEO in 2026

The focus is no longer on isolated tactics or short-term wins, but on building visibility systems that search engines and AI platforms can reliably understand, trust, and reuse.

Clarity and interpretability matter more than clever optimization. Content, products, and brand narratives need to be easy for machines to interpret without ambiguity. Structured data has become foundational, not optional, determining whether brands are eligible to appear in AI-powered shopping, comparison, and answer-driven experiences.

Authority is built over time, not manufactured at scale. Search and AI systems increasingly favor sources with consistent, well-maintained narratives over those chasing volume. Visibility also extends beyond the SERP, spanning AI-generated answers, citations, recommendations, and cross-platform mentions, making it essential to look beyond traffic as the sole measure of success.

Finally, SEO in 2026 demands alignment. Brand, content, product, and platform signals all contribute to how systems interpret trust and relevance.

Search Marketing’s Insight Gap: When Automation Replaces Understanding via @sejournal, @coreydmorris

Automation is a part of our daily lives in marketing. If you’re in a leadership role or oversee it in some capacity, you’re hearing about it from your team doing the day-to-day work, from those within your industry, or you’re doing your own exploration.

Within search marketing, it has helped to greatly scale efforts as well as to bring new efficiencies, whether those are in our own processes or built into the platforms we use.

In just a few short years, automated bidding strategies, AI-generated content, AI-driven research, and platform-generated “insights” have changed the way we work, including the tools we use, and many of our expectations for how we do search marketing and digital marketing in a broader sense.

With all of this automation and new ways of getting things done, a gap has emerged. I’ll call it an “insights gap.” I contend that teams can see performance changes but struggle to explain why. This can be serious: for marketing leaders, it can erode confidence in decision-making when outcomes are not what was planned, projected, or desired.

No one at a leadership or implementation level likes to have a non-answer or mystery that can’t be solved when real leads or sales dollars are at stake.

Here’s the problem. It is a leadership challenge at this point. It isn’t a technology issue. Automation itself isn’t the problem; the lack of strategic interpretation is.

Now, yes, search volatility is involved. Algorithm updates, SERP changes, AI Overviews, and shifting user behavior all amplify the problem. The automated systems we have react, but they don’t necessarily contextualize.

Combined with rising stakeholder expectations, we can’t get by with just charts, graphs, and data tables. We have to find the insights, contextualize them, and demonstrate value. This is the impact-versus-activity contrast that has been around forever, but it is amplified by automation.

If we lean too far into automation and AI and don’t get the expected marketing and business outcomes, we end up with weaker strategic muscles and an over-dependence on AI and automation tools and platforms. Keeping knowledge institutional, rather than platform-specific (and locked in the AI “brains”), is a key to fixing the problem.

How Marketing Leaders Can Close The Insight Gap

1. Reinforce Strategy In Search Marketing Campaigns & Efforts

Efficiencies gained in execution should be celebrated. Tasks that were manual, done with expensive software, or not done at all just a few years ago can be done in an instant now. The hard and soft cost savings shouldn’t be overlooked.

However, we need to be clear in separating the executional efficiencies from strategic aspects and intent.

Every automated system and process needs to support a documented objective so we’re not just “doing” things, but we’re quantifying them, and they are connected to our overall strategy.

2. Build Human Review Into Automated Systems & Processes

A longstanding challenge with search marketing is that it often doesn’t have a clearly defined ending point. It is ongoing and includes iterative optimization processes. We look to the past to inform decisions for now and going forward, but we often don’t turn it all off, blow it up, and start over (and I’m not advocating for that).

Scheduling structured reviews of AI-driven decisions is important to ensure that we don’t have an insights gap.

In those reviews, even simply asking “why did this change?” before moving on to “what do we do next?” adds an intentional moment to ensure we’re not on autopilot with systems that are not connected deeply enough to our strategy.

3. Train Teams To Interpret, Not Just Monitor, Search Data

We all have dashboards and data coming to us. Or, we have go-to reports in Google Analytics 4 or our web analytics suite that we’re comfortable with. Those are important to have, and any alerts coming our way are great for tracking real-time progress.

Maintaining (or developing) analysts and strategists who can translate data, patterns, and observations into insights is important. Yes, you can create AI agents to do this, but make sure you have oversight of the agents and enough cross-checking to ensure that business outcomes aren’t negatively impacted by assumptions left running too long in an automated way.

4. Treat AI Outputs As Inputs (For Humans), Not Answers

I’m being careful with my wording of “inputs” and “outputs” here. What AI gives us, we should treat as output. But it shouldn’t stop there. The AI output should become “input” for humans.

Even the seemingly smartest ideas from AI should be taken as an output for human review, not a definitive (a favorite AI word, by the way) answer.

Just as when humans own the full process, whatever level of AI and automation is involved, we should maintain healthy skepticism and validation.

5. Protect Institutional Knowledge In Search Marketing

The more automation we have, the more scattered our documentation likely becomes. It probably lives in many places, within platforms, or may be lacking overall. As we get smarter and more efficient with our tech stacks and how we use them, we can’t lose critical institutional knowledge in search marketing.

That means we need to document learnings from tests, optimization, campaigns, and changes. We don’t want to repeat mistakes when platforms, vendors, or other variables change.

6. Align Automation With Business Outcomes, Not Platform Metrics

This is not a new recommendation or news to anyone who has been in marketing leadership. However, I point it out as a word of caution, as the deeper we get in turning things over to automation, the more we’re at risk of getting into the weeds and not being able to connect actions, activities, tactics, and work being done back to an ultimate marketing-driven business outcome.

We need the platform metrics. But, we still need to be able to translate metrics at every depth level back to something higher in the marketing and business ROI equation. Being able to automate and scale something without context can lead us to just doing more of something, doing it faster, or cheaper, but not necessarily moving the needle for ROI.

7. Reintroduce Strategic Review Into Search Marketing Cadence

I mentioned asking questions with human review earlier. More broadly, ensuring that strategic review is integrated into your search marketing cadence is important. My team has been challenging our own client reporting meetings, metrics, and flow recently.

Whether you already have a monthly or quarterly strategic review process or not, this is an opportunity to challenge what automation and AI are doing in the mix. What is it helping, hiding, or potentially distorting? How can we include this in strategic review and go beyond just the data, reports, and activity?

8. Elevate Search Reporting For Executive Audiences

At the heart of any talk about insights, we know we have to translate performance into narrative. With more automation, we need to have more translation. What we are doing matters. However, our executive peers and audiences are a degree (or more) further removed from what we do, and with new tech, are probably even less connected (no offense to the super high-tech execs I know and love).

We still must connect search behavior to customer intent and business priorities. That hasn’t changed, even if we need to layer in more or mine it out of the automation we have in place.

Wrap Up

Automation is essential, and for most, it is a big part of how our teams are scaling digital marketing and search marketing work. Plus, we’re leveraging the functions (whether by choice or not) in platforms and channels that we’re doing our work in.

Automation is incomplete, though, without insight. Strategic understanding is not just necessary, but can be a competitive advantage in search. When everyone is automating, getting above and beyond with strategic insights and leveraging them can be a difference-maker.

The goal here isn’t to slow automation. It is to advance your team’s ability to think critically while scaling implementation and execution.



Featured Image: Anton Vierietin/Shutterstock

Google Downplays GEO – But Let’s Talk About Garbage AI SERPs via @sejournal, @martinibuster

Google’s Danny Sullivan and John Mueller’s Search Off The Record podcast offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.

SEO And The Next Generation Of Search

Google used to rank content based on keyword matching, and PageRank was a way to extend that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.

What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”

So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are in a long-form format that answers three or more additional questions beyond the user’s initial query.

The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.

Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?

How To Write For Longform Answers

Given that Google is surfacing multi-paragraph answers, does it make sense to create content that’s organized into bite-sized chunks? How does that affect how humans read content? Will they like it or leave it?

Many SEOs are recommending that publishers break the page up into “chunks,” dividing it into sections based on the intuition that AI understands content in chunks. But that’s an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through the use of headings and HTML elements like ordered and unordered lists. A properly marked-up and formatted web page already presents a logical structure that a human and a machine can easily understand. Duh… right?
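A short sketch makes the point: the sections are already recoverable from standard markup, using nothing more than Python's built-in HTML parser (the page snippet is invented):

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect h1-h3 headings, the 'chunks' a structured page already has."""
    def __init__(self):
        super().__init__()
        self.in_heading = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = tag

    def handle_endtag(self, tag):
        if tag == self.in_heading:
            self.in_heading = None

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.outline.append((self.in_heading, data.strip()))

page = "<h1>Sweatshirt Styling</h1><p>Intro.</p><h2>Layering</h2><p>Tips.</p>"
parser = HeadingOutline()
parser.feed(page)
print(parser.outline)  # [('h1', 'Sweatshirt Styling'), ('h2', 'Layering')]
```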

It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers to not break their content up into chunks.

Danny said:

“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?

So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”

Danny talked about chunking with some Google engineers, and his takeaway from that conversation is to recommend against chunking. The second takeaway is that their systems are set up to access content the way human readers access it, and for that reason, he says to craft the content for humans.

Avoids Talking About Search Referrals

But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out impacts referrals because Google is ranking a handful of pages for multiple queries for every one query that a user makes. What compounds this situation, as you will see further on, is that the sites Google is ranking do not measure up.

Focus On The Big Picture

Danny Sullivan next discusses the downside of optimizing for a machine, explaining that systems eventually improve, and that usually means optimizations aimed at machines stop working.

He explained:

“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.

…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”

Let’s Talk About Garbage AI Search Results

I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and that he truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.

Subject Matter Expertise Is Missing

Google is intentionally hiding expert publications in the search results, tucking them away in the More tab. In order to find expert content, a user has to click the More tab and then click the News tab.

How Google Hides Expert Web Pages


Google’s AI Mode Promotes Garbage And Sites Lacking Expertise

This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.

Google’s AI Mode cites the following pages:

1. An abandoned Medium blog from 2018 that only ever had two blog posts, both of which have broken images. That’s not authoritative.

2. An article published on LinkedIn, a business social networking website. Again, that’s neither authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?

3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?

Screenshot Of Google’s Garbage AI Results

Google Hides The Good Stuff In More > News Tab

Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable websites. Instead, Google hides the high-quality web pages under the More tab.

Screenshot Of Hidden High Quality Search Results

GEO Or SEO – It Doesn’t Matter

This whole debate about GEO or AEO, and whether it’s all SEO, doesn’t really matter. It’s all a bunch of hand-waving and bluster. What matters is that Google is no longer ranking high-quality sites, and high-quality sites are withering from a lack of traffic.

I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?

Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.

How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?


Featured Image by Shutterstock/Kues

Agentic Commerce: What SEOs Need To Consider (ACP & UCP) via @sejournal, @alexmoss

In my last post, I referenced how there is now a growing split between the “human” web and the “agentic” web, where AI agents are becoming an additional audience alongside the “traditional” human visitors we have been optimizing for over the years.

This shift is now becoming more aggressive, especially when it comes to the transactional web in the form of agentic commerce. 2026 will see the accelerated adoption of this method, where store owners will now have to cater to and optimize for both the human and agentic visitor concurrently.

The recent launch of Universal Commerce Protocol (UCP) from Google underlines the push towards this integration of AI and ecommerce experiences.

What Is Agentic Commerce?

Agentic commerce is when agents complete purchases autonomously on behalf of users. Now, a human can engage with a large language model platform, where the agent will browse and purchase from a site on behalf (and with approval) of the human. Not only is the agent acting as the gatekeeper for information gain and influencing decisions, but they are also acting as the gatekeeper for the transaction itself.

This is a step beyond delegating to an LLM as a recommendation agent or a method of validation; it transfers the authority to actually transact.

Enter ACP (Agentic Commerce Protocol)

On Sept. 29, 2025, OpenAI and Stripe announced their partnership and, within this, launched ACP, an open standard that defines how AI agents, merchants, and payment providers interact to complete agentic and programmatic purchases.

On the same day, OpenAI detailed platforms that were immediately able to benefit from agentic commerce, including Shopify and Etsy, with others following suit using the protocol, including Walmart and Instacart.

From a CMS point of view, Shopify hit the ground running by enabling ACP for over 1 million merchants from the day of the announcement. WooCommerce has followed suit more recently, announcing it will be part of Stripe’s launch of the Agentic Commerce Suite, which will give even more merchants the ability to sell products through various AI-based platforms.

But ACP was launched three months ago, and as we now know, things move fast…

UCP: Google’s Answer To The Immersive Agentic Commerce Experience

Google just announced the launch of the Universal Commerce Protocol, which widens some of the boundaries set by ACP by tackling a broader problem: giving any AI surface (like AI Mode in Search or Gemini) a common language to discover merchants, understand their capabilities, and orchestrate full journeys from discovery through order management, as well as engagement beyond the purchase (also made seamless using Google Pay). It does this by integrating with other existing standards, including APIs, Agent2Agent (A2A), and the Model Context Protocol (MCP).

ACP (OpenAI)

  • Primary focus: Agent-led commerce in ChatGPT and ACP-aware agents.
  • Journey coverage: Product feed, checkout, fulfillment, delegated payment.
  • Driver: OpenAI + Stripe and ecosystem partners.

UCP (Google)

  • Primary focus: A unified rail for many agents/surfaces talking to merchants.
  • Journey coverage: Discovery, checkout, discounts, fulfillment, order management, payments.
  • Driver: Google + retailers/platforms (Shopify, Etsy, Walmart, etc.).

Here, Google expands the possibilities of the commerce experience, and SEOs can adopt both ACP and UCP to accommodate both platforms and ecosystems.

This will only become more immersive as 2026 progresses. Google has the great advantage of knowing a lot about individual users, and features such as the AI capabilities inside Gmail illustrate how Google can use much more context about individuals to provide an even more frictionless experience.

Why This Matters For SEOs

As SEOs, we’ve spent over a generation optimizing for humans, albeit for various personas or ICPs. While we are still required to do this, we must now include the agent as an additional consideration. This poses another challenge: AI agents don’t browse pages but instead query APIs, parse product feeds, and evaluate structured data.

As such, we need to optimize for this. Maybe I can give it a name…

ACO: Agentic Commerce Optimization

I don’t want to trigger you by introducing yet another acronym to what seems to be a previous year of new acronyms, but for the sake of this post, let’s pretend that ACO is something you’ve been told to do now, as well as SEO, even though this is still SEO.

What would I need to consider and optimize for to make ACO successful?

  • Crawlability: Agents still follow links, take journeys, and understand IA.
  • Format: Content needs to be concise with less fluff, but with enough substance to add unique value and stay consistent across the site as a whole.
  • Structured Data: Agents will become more reliant on existing standards, especially if they’re open source.
  • Brand Authority And Sentiment: Populating your products well is, of course, paramount, but without positive brand sentiment, you first have the challenge of convincing the agent to cite you as part of discovery, and then of convincing the human to whom that feedback is presented. Third-party perspectives will become a larger contribution to some agents’ grounding procedures before any agentic commerce begins.

Sounds familiar, right? ACP is a connector between your site and the platforms that allow agents to use it, and CMSs are out there to make that connection as seamless as possible, but this isn’t a switch that, once flipped, leaves everything automatically optimized.

ACO = SEO.  

Schema.org Is The Glue

Pascal Fleury presenting structured data options at Search Central Live Zurich December 2025
Image Credit: Alex Moss, January 2026

Last month at Google Search Central Live in Zurich, Pascal Fleury went into detail about structured data for Shopping. While “schema.org is the glue that holds [structured data] together,” other industry standards, such as GS1, add even more granular detail to products. That detail not only helps inform agents on really specific attributes, but also signals that you’re a great source of information to keep ingesting from.

Product schema, pricing, availability, reviews, FAQs, shipping options and other logistics, loyalty schemes – all of this structured data will need close optimization. If it’s missing or incorrect, you’re invisible to agent-mediated discovery.
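As a rough sketch of what that markup involves, here is a hypothetical Product object in TypeScript, serialized into the JSON-LD script tag a page template would emit. Every product detail here is an invented placeholder; check the current schema.org and Google documentation for the properties your category needs.

```typescript
// A minimal, hypothetical sketch of schema.org Product markup.
// All values are placeholders, not real product data.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Heavyweight Fleece Sweatshirt",
  description: "Mid-weight cotton-blend fleece with a relaxed fit.",
  gtin13: "0000000000000", // GS1-style identifier (placeholder)
  offers: {
    "@type": "Offer",
    price: "49.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
  aggregateRating: {
    "@type": "AggregateRating",
    ratingValue: "4.6",
    reviewCount: "128",
  },
};

// The script tag your templates would render into the page.
console.log(
  `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`
);
```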

Test The Agents

Even before your store is ACP-enabled, test how agents perceive your products. Ask platforms about products in your category. Do they surface your brand? How do they describe your products and complementary offerings? What information are they presenting, from both first-party and third-party perspectives? And more importantly, what is missing that you expected to be present?

Then, enable. What are the differences? Compare the results.

What Can I Do About It Now?

ACP

For WooCommerce and Wix, you will unfortunately need to join Stripe’s waitlist for the Agentic Commerce Suite, and Shopify users have to join their own waitlist. Until full rollout, we will have to wait, but expect this to accelerate in Q1 2026.

If you work with a site where you have to integrate ACP directly into your CMS, early adopters may benefit from early discovery while the other CMSs catch up and competition is lower. So here, while this will require more resources, you will be able to take advantage of what ACP has to offer while most merchants wait for their CMS platform to build the solution for them.

UCP

This is extremely fresh information, but I suggest taking some time to understand it in detail, as well as experimenting where possible using the documentation and GitHub repo. I know that’s how a lot of my time will be spent over the next few weeks.


Featured Image: Koupei Studio/Shutterstock

SEO Pulse: Core Update Favors Niche Expertise, AIO Health Inaccuracies & AI Slop via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: updates on rankings from December’s core update, platform responses to AI quality issues, and disputes that reveal tensions in AI-generated health information.

Early analysis of Google’s December core update suggests specialized sites gained visibility in several shared examples. Microsoft and Google executives reframed criticism of AI quality. The Guardian reported concerns about health-related AI Overviews, and Google pushed back on aspects of the testing.

Here’s what matters for you and your work.

December Core Update Favors Specialists Over Generalists

Early analysis of Google’s December core update suggests specialized sites gained visibility in examples shared across publishing, ecommerce, and SaaS.

Key facts: Aleyda Solís’s analysis found sites with narrower, category-specific strength appear to be gaining ground on “best of” and mid-funnel product terms.

In examples shared after the December 11-29 rollout, some publisher sites appeared to lose visibility on broader, top-of-funnel queries, while ecommerce and SaaS brands with direct category expertise appeared to outperform broader review sites and affiliate aggregators.

Why SEOs Should Pay Attention

This update highlights a trend where generalist sites face ranking pressure, especially on queries with commercial intent or specific domain knowledge. Sites covering multiple categories are affected by competition from dedicated category sites.

Google says improvements can take time to show up. Some changes can take effect in a few days, but it can take several months for its systems to confirm longer-term improvement. Google also says it makes smaller core updates that it doesn’t typically announce.

In the examples shared so far, specialization appears to outperform breadth when queries have specific intent.

What SEO Professionals Are Saying

Luke R., founder at Adexa.io, commented on LinkedIn:

“Specialists rise when search stops guessing and starts serving intent. These shifts reward brands that live one problem, one buyer.”

Ayesha Asif, social media manager and content strategist, wrote:

“Generalist pages used to win on authority, but now depth matters more than domain size.”

Thanos Lappas, founder at Datafunc, added:

“This feels like the beginning of a long-anticipated transition in how search evaluates relevance and expertise.”

In that thread, several commenters argued the update favors deep, category-specific content over broad coverage, and suggested domain authority mattered less than focused expertise in the examples being discussed.

Read our full coverage: December Core Update: More Brands Win “Best Of” Queries

Guardian Investigation Claims AI Overview Health Inaccuracies

The Guardian reported that health organizations and experts reviewed examples of AI Overviews for medical queries and raised concerns about inaccuracies. A Google spokesperson said many examples were “incomplete screenshots.” The spokesperson also said the vast majority of AI Overviews are factual and helpful, and that Google continuously makes quality improvements.

Key facts: The Guardian said it tested health queries and shared AI Overview responses with health groups and experts for review. Beyond the “incomplete screenshots” comment, the Google spokesperson said the results linked “to well-known, reputable sources” and recommended seeking out expert advice.

Why SEOs Should Pay Attention

AI Overviews can appear at the top of results. When the topic is health, errors carry more weight. The Guardian’s reporting also highlights a practical problem. One charity leader told The Guardian the AI summary changed when repeating the same search, pulling from different sources. That can make verification harder.

Publishers have spent years investing in documented medical expertise to meet Google’s expectations around health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

What Health Organizations Are Saying

Sophie Randall, director of the Patient Information Forum, told The Guardian:

“Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”

Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, stated:

“If someone followed what the search result told them, they might not take in enough calories … and be unable to tolerate either chemotherapy or potentially life-saving surgery.”

The reactions reveal two concerns. First, that even when AI Overviews link to trusted sources, the summary itself can override that trust by presenting confident but incorrect guidance. Second, some reactions framed Google’s response as addressing individual examples without explaining how these errors happen or how often they occur.

Read our full coverage: Guardian Investigation: AI Overviews Health Accuracy

Microsoft CEO And Google Engineer Reframe AI Quality Criticism

Within one week, Microsoft CEO Satya Nadella published a blog post asking the industry to “get beyond the arguments of slop vs. sophistication,” while Google Principal Engineer Jaana Dogan posted that people are “only anti new tech when they are burned out from trying new tech.”

Key facts: Nadella’s blog post characterized AI as “cognitive amplifier tools” and called for “a new equilibrium” that accounts for humans having these tools. Dogan’s X post framed anti-AI sentiment as burnout from trying new technology. In replies, some people pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable in day-to-day workflows. The timing follows Merriam-Webster naming “slop” its 2025 Word of the Year.

Why SEOs Should Pay Attention

Some readers may interpret these statements as an attempt to move the conversation away from output quality and toward user expectations. When people are urged to move past “slop vs. sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.

The practical concern is how these companies respond to user feedback versus how they frame criticism. Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related one.

What Industry Observers Are Saying

Jez Corden, managing editor at Windows Central, wrote that Nadella’s framing of AI as a “scaffolding for human potential” felt “either naively utopic, or at worse, wilfully dishonest.”

Tom Warren, senior editor at The Verge, wrote on Bluesky that Nadella wants everyone to move beyond the arguments about AI slop, calling 2026 a “pivotal year for AI.”

The commentary reveals a gap between executive messaging about AI as a transformative technology and the user experience of AI products, which feels inconsistent or forced. Some reactions suggested the request drew more attention to the term.

Read our full coverage: Microsoft CEO, Google Engineer Deflect AI Quality Complaints

Theme Of The Week: Competing Standards

Each story this week reveals a tension between the quality standards applied to publishers and those applied to platforms’ own AI systems.

The December core update appears to put more weight on category expertise than broad coverage in the examples highlighted. The Guardian investigation questions whether AI Overviews meet the accuracy bar Google sets for health content. The Nadella messaging attempts to reframe quality concerns as user adjustment problems rather than product issues.

The week highlights a tension between the standards applied to websites and the way platforms defend their own AI summaries when accuracy is questioned.


Featured Image: Accogliente Design/Shutterstock

Being Right Isn’t Enough For AI Visibility Today via @sejournal, @DuaneForrester

Bias is not what you think it is.

When most people hear the phrase “AI bias,” their mind jumps to ethics, politics, or fairness. They think about whether systems lean left or right, whether certain groups are represented properly, or whether models reflect human prejudice. That conversation matters. But it is not the conversation reshaping search, visibility, and digital work right now.

The bias that is quietly changing outcomes is not ideological. It is structural and operational. It emerges from how AI systems are built and trained, how they retrieve and weight information, and how they are rewarded. It exists even when everyone involved is acting in good faith. And it affects who gets seen, cited, and summarized long before anyone argues about intent.

This article is about that bias. Not as a flaw or as a scandal. But as a predictable consequence of machine systems designed to operate at scale under uncertainty.

To talk about it clearly, we need a name. We need language that practitioners can use without drifting into moral debate or academic abstraction. This behavior has been studied, but what hasn’t existed is a single term that explains how it manifests as visibility bias in AI-mediated discovery. I’m calling it Machine Comfort Bias.

Image Credit: Duane Forrester

Why AI Answers Cannot Be Neutral

To understand why this bias exists, we need to be precise about how modern AI answers are produced.

AI systems do not search the web the way people do. They do not evaluate pages one by one, weigh arguments, or reason toward a conclusion. What they do instead is retrieve information, weight it, compress it, and generate a response that is statistically likely to be acceptable given what they have seen before, a process openly described in modern retrieval-augmented generation architectures such as those outlined by Microsoft Research.

That process introduces bias before a single word is generated.

First comes retrieval. Content is selected based on relevance signals, semantic similarity, and trust indicators. If something is not retrieved, it cannot influence the answer at all.

Then comes weighting. Retrieved material is not treated equally. Some sources carry more authority. Some phrasing patterns are considered safer. Some structures are easier to compress without distortion.

Finally comes generation. The model produces an answer that optimizes for probability, coherence, and risk minimization. It does not aim for novelty. It does not aim for sharp differentiation. It aims to sound right, a behavior explicitly acknowledged in system-level discussions of large models such as OpenAI’s GPT-4 overview.

At no point in this pipeline does neutrality exist in the way humans usually mean it. What exists instead is preference. Preference for what is familiar. Preference for what has been validated before. Preference for what fits established patterns.
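To make the pipeline tangible, here is a toy sketch of retrieve, weight, and generate in TypeScript. The similarity floor, the authority multiplier, and the scoring logic are all invented for illustration; no production system is this simple.

```typescript
// Toy retrieve -> weight -> "generate" pipeline. All constants are invented.
type Doc = { id: string; embedding: number[]; authority: number; text: string };

const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

function answer(queryEmbedding: number[], corpus: Doc[]): string {
  // Retrieval: anything below the similarity floor never influences the answer.
  const retrieved = corpus.filter(
    (d) => cosine(queryEmbedding, d.embedding) > 0.6
  );

  // Weighting: retrieved material is not treated equally; authority boosts it.
  const weighted = retrieved
    .map((d) => ({ d, score: cosine(queryEmbedding, d.embedding) * d.authority }))
    .sort((a, b) => b.score - a.score);

  // "Generation" stand-in: compress the highest-weighted material into a reply.
  return weighted.slice(0, 3).map(({ d }) => d.text).join(" ");
}
```

Notice what the sketch makes visible: a document excluded at the retrieval step contributes nothing to the answer, no matter how accurate it is.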

Introducing Machine Comfort Bias

Machine Comfort Bias describes the tendency of AI retrieval and answer systems to favor information that is structurally familiar, historically validated, semantically aligned with prior training, and low-risk to reproduce, regardless of whether it represents the most accurate, current, or original insight.

This is not a new behavior. The underlying components have been studied for years under different labels. Training data bias. Exposure bias. Authority bias. Consensus bias. Risk minimization. Mode collapse.

What is new is the surface on which these behaviors now operate. Instead of influencing rankings, they influence answers. Instead of pushing a page down the results, they erase it entirely.

Machine Comfort Bias is not a scientific replacement term. It is a unifying lens. It brings together behaviors that are already documented but rarely discussed as a single system shaping visibility.

Where Bias Enters The System, Layer By Layer

To understand why Machine Comfort Bias is so persistent, it helps to see where it enters the system.

Training Data And Exposure Bias

Language models learn from large collections of text. Those collections reflect what has been written, linked, cited, and repeated over time. High-frequency patterns become foundational. Widely cited sources become anchors.

This means that models are deeply shaped by past visibility. They learn what has already been successful, not what is emerging now. New ideas are underrepresented by definition. Niche expertise appears less often. Minority viewpoints show up with lower frequency, a limitation openly discussed in platform documentation about model training and data distribution.

This is not an oversight. It is a mathematical reality.

Authority And Popularity Bias

When systems are trained or tuned using signals of quality, they tend to overweight sources that already have strong reputations. Large publishers, government sites, encyclopedic resources, and widely referenced brands appear more often in training data and are more frequently retrieved later.

The result is a reinforcement loop. Authority increases retrieval. Retrieval increases citation. Citation increases perceived trust. Trust increases future retrieval. And this loop does not require intent. It emerges naturally from how large-scale AI systems reinforce signals that have already proven reliable.

Structural And Formatting Bias

Machines are sensitive to structure in ways humans often underestimate. Clear headings, definitional language, explanatory tone, and predictable formatting are easier to parse, chunk, and retrieve, a reality long acknowledged in how search and retrieval systems process content, including Google’s own explanations of machine interpretation.

Content that is conversational, opinionated, or stylistically unusual may be valuable to humans but harder for systems to integrate confidently. When in doubt, the system leans toward content that looks like what it has successfully used before. That is comfort expressed through structure.

Semantic Similarity And Embedding Gravity

Modern retrieval relies heavily on embeddings. These are mathematical representations of meaning that allow systems to compare content based on similarity rather than keywords.

Embedding systems naturally cluster around centroids. Content that sits close to established semantic centers is easier to retrieve. Content that introduces new language, new metaphors, or new framing sits farther away, a dynamic visible in production systems such as Azure’s vector search implementation.

This creates a form of gravity. Established ways of talking about a topic pull answers toward themselves. New ways struggle to break in.
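A small sketch makes that gravity visible. The tiny vectors below are invented stand-ins for real high-dimensional embeddings: content phrased like the established center of a topic sits near the centroid, while novel framing sits far away.

```typescript
// Toy illustration of "embedding gravity." Vectors are invented stand-ins
// for real high-dimensional embeddings.
const centroid = (vectors: number[][]): number[] =>
  vectors[0].map((_, i) => vectors.reduce((s, v) => s + v[i], 0) / vectors.length);

const distance = (a: number[], b: number[]): number =>
  Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));

// Established explanations of a topic cluster together (hypothetical values).
const establishedDocs = [
  [0.9, 0.1, 0.2],
  [0.8, 0.2, 0.1],
  [0.85, 0.15, 0.15],
];
const topicCenter = centroid(establishedDocs);

// Conventionally phrased content sits close to the center; novel framing is far out.
console.log(distance([0.82, 0.18, 0.12], topicCenter)); // small: easy to retrieve
console.log(distance([0.2, 0.9, 0.7], topicCenter));    // large: struggles to break in
```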

Safety And Risk Minimization Bias

AI systems are designed to avoid harmful, misleading, or controversial outputs. This is necessary. But it also shapes answers in subtle ways.

Sharp claims are riskier than neutral ones. Nuance is riskier than consensus. Strong opinions are riskier than balanced summaries.

When faced with uncertainty, systems tend to choose language that feels safest to reproduce. Over time, this favors blandness, caution, and repetition, a trade-off described directly in Anthropic’s work on Constitutional AI as far back as 2023.

Why Familiarity Wins Over Accuracy

One of the most uncomfortable truths for practitioners is that accuracy alone is not enough.

Two pages can be equally correct. One may even be more current or better researched. But if one aligns more closely with what the system already understands and trusts, that one is more likely to be retrieved and cited.

This is why AI answers often feel similar. It is not laziness. It is system optimization. Familiar language reduces the chance of error. Familiar sources reduce the chance of controversy. Familiar structure reduces the chance of misinterpretation, a phenomenon widely observed in mainstream analysis showing that LLM-generated outputs are significantly more homogeneous than human-generated ones.

From the system’s perspective, familiarity is a proxy for safety.

The Shift From Ranking Bias To Existence Bias

Traditional search has long grappled with bias. That work has been explicit and deliberate. Engineers measure it, debate it, and attempt to mitigate it through ranking adjustments, audits, and policy changes.

Most importantly, traditional search bias has historically been visible. You could see where you ranked. You could see who outranked you. You could test changes and observe movement.

AI answers change the nature of the problem.

When an AI system produces a single synthesized response, there is no ranking list to inspect. There is no second page of results. There is only inclusion or omission. This is a shift from ranking bias to existence bias.

If you are not retrieved, you do not exist in the answer. If you are not cited, you do not contribute to the narrative. If you are not summarized, you are invisible to the user.

That is a fundamentally different visibility challenge.

Machine Comfort Bias In The Wild

You do not need to run thousands of prompts to see this behavior. It has already been observed, measured, and documented.

Studies and audits consistently show that AI answers disproportionately mirror encyclopedic tone and structure, even when multiple valid explanations exist, a pattern widely discussed.

Independent analyses also reveal high overlap in phrasing across answers to similar questions. Change the prompt slightly, and the structure remains. The language remains. The sources remain.

These are not isolated quirks. They are consistent patterns.

What This Changes About SEO, For Real

This is where the conversation gets uncomfortable for the industry.

SEO has always involved bias management. Understanding how systems evaluate relevance, authority, and quality has been the job. But the feedback loops were visible. You could measure impact, and you could test hypotheses. Machine Comfort Bias now complicates that work.

When outcomes depend on retrieval confidence and generation comfort, feedback becomes opaque. You may not know why you were excluded. You may not know which signal mattered. You may not even know that an opportunity existed.

This shifts the role of the SEO. From optimizer to interpreter. From ranking tactician to system translator, which reshapes career value. The people who understand how machine comfort forms, how trust accumulates, and how retrieval systems behave under uncertainty become critical. Not because they can game the system, but because they can explain it.

What Can Be Influenced, And What Cannot

It is important to be honest here. You cannot remove Machine Comfort Bias, nor can you force a system to prefer novelty. You cannot demand inclusion.

What you can do is work within the boundaries. You can make structure explicit without flattening voice, and you can align language with established concepts without parroting them. You can demonstrate expertise across multiple trusted surfaces so that familiarity accumulates over time. You can also reduce friction for retrieval and increase confidence for citation. The bottom line here is that you can design content that machines can safely use without misinterpretation. This shift is not about conformity; it’s about translation.

How To Explain This To Leadership Without Losing The Room

One of the hardest parts of this shift is communication. Telling an executive that “the AI is biased against us” rarely lands well. It sounds defensive and speculative.

I will suggest a better framing. AI systems favor what they already understand and trust. Our risk is not being wrong; our risk is being unfamiliar. That is the new, biggest business risk. It affects visibility and brand inclusion, as well as how markets learn about new ideas.

Once framed that way, the conversation changes. This is no longer about influencing algorithms. It is about ensuring the system can recognize and confidently represent the business.

Bias Literacy As A Core Skill For 2026

As AI intermediaries become more common, bias literacy becomes a professional requirement. This does not mean memorizing research papers; it means understanding where preference forms, how comfort manifests, and why omission happens. It means being able to look at an AI answer and ask not just “is this right,” but “why did this version of ‘right’ win.” That is a distinct skill, and it will define who thrives in the next phase of digital work.

Naming The Invisible Changes

Machine Comfort Bias is not an accusation. It is a description, and by naming it, we make it discussable. By understanding it, we make it predictable. And anything predictable can be planned for.

This is not a story about loss of control. It is a story about adaptation, about learning how systems see the world and designing visibility accordingly.

Bias has not disappeared. It has changed shape, and now that we can see it, we can work with it.


This post was originally published on Duane Forrester Decodes.


Featured Image: SvetaZi/Shutterstock

Ask An SEO: Can AI Systems & LLMs Render JavaScript To Read ‘Hidden’ Content? via @sejournal, @HelenPollitt1

For this week’s Ask An SEO, a reader asked:

“Is there any difference between how AI systems handle JavaScript-rendered or interactively hidden content compared to traditional Google indexing? What technical checks can SEOs do to confirm that all page critical information is available to machines?”

This is a great question because beyond the hype of LLM-optimization sits a very real technical challenge: ensuring your content can actually be found and read by the LLMs.

For several years now, SEOs have been fairly encouraged by Googlebot’s improvements in being able to crawl and render JavaScript-heavy pages. However, with the new AI crawlers, this might not be the case.

In this article, we’ll look at the differences between the two crawler types, and how to ensure your critical webpage content is accessible to both.

How Does Googlebot Render JavaScript Content?

Googlebot processes JavaScript in three main stages: crawling, rendering, and indexing. In a basic and simple explanation, this is how each stage works:

Crawling

Googlebot will queue pages to be crawled when it discovers them on the web. Not every page that gets queued will be crawled, however, as Googlebot will check to see if crawling is allowed. For example, it will see if the page is blocked from crawling via a disallow command in the robots.txt.

If the page is not eligible to be crawled, then Googlebot will skip it, forgoing an HTTP request. If a page is eligible to be crawled, it will move to render the content.
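As a deliberately simplified sketch of that eligibility check (a real RFC 9309 parser also handles user-agent groups, wildcards, and Allow-rule precedence, which this ignores):

```typescript
// Deliberately simplified robots.txt check: plain Disallow prefixes only.
// Real crawlers implement the full RFC 9309 grammar.
function isCrawlAllowed(robotsTxt: string, path: string): boolean {
  const disallowedPrefixes = robotsTxt
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("disallow:"))
    .map((line) => line.slice("disallow:".length).trim())
    .filter((rule) => rule.length > 0);

  return !disallowedPrefixes.some((prefix) => path.startsWith(prefix));
}

const robots = "User-agent: *\nDisallow: /private/";
console.log(isCrawlAllowed(robots, "/private/report.html")); // false: skipped, no HTTP request
console.log(isCrawlAllowed(robots, "/blog/post"));           // true: queued for crawling
```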

Rendering

Googlebot will check if the page is eligible to be indexed by ensuring there are no requests to keep it from the index, for example, via a noindex meta tag. Googlebot will queue the page to be rendered. The rendering may happen within seconds, or it may remain in the queue for a longer period of time. Rendering is a resource-intensive process, and as such, it may not be instantaneous.

In the meantime, the bot will receive the initial HTML response; this is the content available before JavaScript is executed. It is typically the page HTML, available as soon as the page is crawled.

Once the JavaScript is executed, Googlebot will receive the fully constructed page, the “browser render.”

Indexing

Eligible pages and information will be stored in the Google index and made available to serve as search results at the point of user query.

How Does Googlebot Handle Interactively Hidden Content?

Not all content is available to users when they first land on a page. For example, you may need to click through tabs to find supplementary content, or expand an accordion to see all of the information.

Googlebot doesn’t have the ability to switch between tabs, or to click open an accordion. So, making sure it can parse all the page’s information is important.

The way to do this is to make sure that the information is contained within the DOM on the first load of the page. Meaning, content may be “hidden from view” on the front end before clicking a button, but it’s not hidden in the code.

Think of it like this: The HTML content is “hidden in a box”; the JavaScript is the key to open the box. If Googlebot has to open the box, it may not see that content straightaway. However, if the server has opened the box before Googlebot requests it, then it should be able to get to that content via the DOM.
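To make the distinction concrete, here is a sketch of both patterns in browser-side TypeScript. The selectors and the endpoint are hypothetical:

```typescript
// Anti-pattern: the content only enters the DOM after a click triggers a fetch,
// so bots that don't execute (or interact with) JavaScript never see it.
document.querySelector("#specs-tab-lazy")?.addEventListener("click", async () => {
  const res = await fetch("/api/specs"); // hypothetical endpoint
  document.querySelector("#specs-panel-lazy")!.innerHTML = await res.text();
});

// Bot-friendly pattern: the content ships in the initial HTML, and the click
// only toggles visibility, so it is in the DOM from the first load.
document.querySelector("#specs-tab")?.addEventListener("click", () => {
  document.querySelector("#specs-panel")?.toggleAttribute("hidden");
});
```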

How To Improve The Likelihood That Googlebot Will Be Able To Read Your Content

The key to ensuring that content can be parsed by Googlebot is making it accessible without the need for the bot to render the JavaScript. One way of doing this is by forcing the rendering to happen on the server itself.

Server-side rendering is the process by which a webpage is rendered on the server rather than by the browser. This means an HTML file is prepared and sent to the user’s browser (or the search engine bot), and the content of the page is accessible to them without waiting for the JavaScript to load. This is because the server has essentially created a file that has rendered content in it already; the HTML and CSS are accessible immediately. Meanwhile, JavaScript files that are stored on the server can be downloaded by the browser.

This is opposed to client-side rendering, which requires the browser to fetch and compile the JavaScript before content is accessible on the webpage. This is a much lower lift for the server, which is why it is often favored by website developers, but it does mean that bots struggle to see the content on the page without rendering the JavaScript first.
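As a minimal illustration of the server-side approach, here is a sketch using Node with Express (an assumed dependency; the route and product data are invented). The bot receives complete HTML on the first request, with no JavaScript execution required:

```typescript
// Minimal SSR sketch: the server returns fully formed HTML, so the content
// is readable even by bots that never execute JavaScript.
import express from "express";

const app = express();

app.get("/product/:id", (_req, res) => {
  // In a real app this comes from a database; hardcoded here for illustration.
  const product = {
    name: "Heavyweight Fleece Sweatshirt",
    description: "Mid-weight cotton-blend fleece with a relaxed fit.",
  };

  res.send(`<!doctype html>
<html>
  <body>
    <h1>${product.name}</h1>
    <p>${product.description}</p>
  </body>
</html>`);
});

app.listen(3000);
```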

How Do LLM Bots Render JavaScript?

Given what we now know about how Googlebot renders JavaScript, how does that differ from AI bots?

The most important thing to understand here is that, unlike with Googlebot, there is no single governing body representing all of the bots encompassed under “LLM bots.” What one bot is capable of doing won’t necessarily be the standard for all.

The bots that scrape the web to power the knowledge bases of the LLMs are not the same as the bots that visit a page to bring back timely information to a user via a search engine.

And Claude’s bots do not have the same capability as OpenAI’s.

When we are considering how to ensure that AI bots can access our content, we have to cater to the lowest-capability bots.

Less is known about how LLM bots render JavaScript, mainly because, unlike Google, the AI bots are not sharing that information. However, some very smart people have been running tests to identify how each of the main LLM bots handles it.

Back in 2024, Vercel published an investigation into the JavaScript rendering capabilities of the main LLM bots, including OpenAI’s, Anthropic’s, Meta’s, ByteDance’s, and Perplexity’s. According to that study, none of those bots was able to render JavaScript. The only crawlers that could were Gemini (leveraging Googlebot’s infrastructure), Applebot, and Common Crawl’s CCBot.

More recently, Glenn Gabe reconfirmed Vercel’s findings through his own in-depth analysis of how ChatGPT, Perplexity, and Claude handle JavaScript. He also runs through how to test your own website in the LLMs to see how they handle your content.

These are the most well-known bots, from some of the most heavily funded AI companies in this space. It stands to reason that if they are struggling with JavaScript, lesser-funded or more niche ones will be also.

How Do AI Bots Handle Interactively Hidden Content?

Not well. That is, if the interactive content requires some execution of JavaScript, they may struggle to parse it.

To ensure the bots are able to see content hidden behind tabs, or in accordions, it is prudent to ensure the content loads fully in the DOM without the need to execute JavaScript. Human visitors can still interact with the content to reveal it, but the bots won’t need to.

How To Check For JavaScript Rendering Issues

There are two very easy ways to check if Googlebot is able to render all the content on your page:

Check The DOM Through Developer Tools

The DOM (Document Object Model) is an interface for a webpage that represents the HTML page as a series of “nodes” and “objects.” It essentially links a webpage’s HTML source code to JavaScript, which enables the functionality of the webpage to work. In simple terms, think of a webpage as a family tree. Each element on a webpage is a “node” on the tree. So, a header tag (such as an <h1>), a paragraph (<p>), and the body of the page itself are all nodes on the family tree.

When a browser loads a webpage, it reads the HTML and turns it into the family tree (the DOM).

How To Check It

I’ll take you through this using Chrome’s Developer Tools as an example.

You can check the DOM of a page by going to your browser. Using Chrome, right-click and select “Inspect.” From there, make sure you’re in the “Elements” tab.

To see if content is visible on your webpage without having to interact with it, you can search for it here. If you find the content fully within the DOM when you first load the page (and don’t interact with it further), then it should be visible to Googlebot. For the non-rendering LLM bots, you’ll also want to confirm it appears in the source HTML, covered below.
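If digging through the Elements tree feels slow, a quick console check does the same job. This snippet is a convenience, not an official tool; swap in text you expect bots to see:

```typescript
// Paste into the DevTools console after first load (before clicking anything):
// checks whether a phrase is already present in the rendered DOM.
const phrase = "lifetime warranty"; // hypothetical text you expect bots to find
console.log(
  document.body.innerHTML.includes(phrase) ? "In the DOM" : "Not in the DOM"
);
```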

Use Google Search Console

To check if the content is visible specifically to Googlebot, you can use Google Search Console.

Choose the page you want to test and paste it into the “Inspect any URL” field. Search Console will then take you to another page where you can “Test live URL.” When you test a live page, you will be presented with another screen where you can opt to “View tested page.”

How To Check If An LLM Bot Can See Your Content

As per Glenn Gabe’s experiments, you can ask the LLMs themselves what they can read from a specific webpage. For example, you can prompt them to read the text of an article. They will respond with an explanation if they cannot due to JavaScript.

Viewing The Source HTML

If we are working to the lowest common denominator, it is prudent to assume, at this point, that LLMs can’t read content that requires JavaScript. To be sure these bots can access your content, make sure it is present in the source HTML of the page. To check this, go to Chrome and right-click on the page. From the menu, select “View page source.” If you can find the text in this code, you know it’s in the source HTML of the page.
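You can also script the same check. This sketch fetches the raw HTML the server returns, without executing any JavaScript, which approximates what a non-rendering LLM bot receives; the URL and phrase are placeholders:

```typescript
// Simulates a non-rendering bot: fetch the raw HTML and search it directly.
// Run in a runtime with top-level await (Node 18+ ESM, Deno, or the browser console).
const url = "https://example.com/article"; // placeholder URL
const phrase = "key product detail";       // placeholder text to look for

const response = await fetch(url);
const html = await response.text();
console.log(
  html.includes(phrase) ? "Present in source HTML" : "Missing from source HTML"
);
```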

What Does This Mean For Your Website?

Essentially, Googlebot has been developed over the years to be much better at handling JavaScript than the newer LLM bots. However, it’s really important to understand that the LLM bots are not trying to crawl and render the web in the same way as Googlebot. Don’t assume that they will ever try to mimic Googlebot’s behavior. Don’t consider them “behind” Googlebot. They are a different beast altogether.

For your website, this means you need to check if your page loads all the pertinent information in the DOM on the first load of the page to satisfy Googlebot’s needs. For the LLM bots, to be very sure the content is available to them, check your static HTML.


Featured Image: Paulo Bobita/Search Engine Journal