Why SEO Roadmaps Break In January (And How To Build Ones That Survive The Year) via @sejournal, @cshel

SEO roadmaps have a lot in common with New Year’s resolutions: They’re created with optimism, backed by sincere intent, and abandoned far sooner than anyone wants to admit.

The difference is that most people at least make it to Valentine’s Day before quietly deciding that daily workouts or dry January were an ambitious, yet misguided, experiment. SEO roadmaps often start unraveling while Punxsutawney Phil is still deep in REM sleep.

By the third or fourth week of the year, teams are already making “temporary” adjustments. A content cadence slips here. A technical initiative gets deprioritized there. A dependency turns out to be more complicated than anticipated, etc. None of this is framed as failure, naturally, but the original plan is already being renegotiated.

This doesn’t happen because SEO teams are bad at planning. It happens because annual SEO roadmaps are still built as if search were a stable environment with predictable inputs and outcomes.

(Narrator: Search is not, and has never been, a stable environment with predictable inputs or outcomes.)

In January, just like that diet plan, the SEO roadmap looks entirely doable. By February, you’re hiding in a dark pantry with a sleeve of Thin Mints, and the roadmap is already in tatters.

Here’s why those plans break so quickly and how to replace them with a planning model that holds up once the year actually starts moving.

The January Planning Trap

Annual SEO roadmaps are appealing because they feel responsible.

  • They give leadership something concrete to approve.
  • They make resourcing look predictable.
  • They suggest that search performance can be engineered in advance.

Except SEO doesn’t operate in a static system, and most roadmaps quietly assume that it does.

By the time Q1 is halfway over, teams are already reacting instead of executing. The plan didn’t fail because it was poorly constructed. It failed because it was built on outdated assumptions about how search works now.

Three Assumptions That Break By February

1. Algorithms Behave Predictably Over A 12-Month Period

Most annual roadmaps assume that major algorithm shifts are rare, isolated events.

That’s no longer true.

Search systems are now updated continuously. Ranking behavior, SERP layouts, AI integrations, and retrieval logic evolve incrementally – often without a single, named “update” to react to.

A roadmap that assumes stability for even one full quarter is already fragile.

If your plan depends on a fixed set of ranking conditions remaining intact until December, it’s already obsolete.

2. Technical Debt Stays Static Unless Something “Breaks”

January plans usually account for new technical work like migrations, performance improvements, structured data, and internal linking projects.

What they don’t account for is technical debt accumulation.

Every CMS update, plugin change, template tweak, tracking script, and marketing experiment adds friction. Even well-maintained sites slowly degrade over time.

Most SEO roadmaps treat technical SEO as a project with an end date. In reality, it’s a system that requires continuous maintenance.

By February, that invisible debt starts to surface – crawl inefficiencies, index bloat, rendering issues, or performance regressions – none of which were in the original plan.

3. Content Velocity Produces Linear Returns

Many annual SEO plans assume that content output scales predictably:

More content = more rankings = more traffic

That relationship hasn’t been linear for a long time.

Content saturation, intent overlap, internal competition, and AI-driven summaries all flatten returns. Publishing at the same pace doesn’t guarantee the same impact quarter over quarter.

By February, teams are already seeing diminishing returns from “planned” content and scrambling to justify why performance isn’t tracking to projections.

What Modern SEO Roadmap Planning Actually Looks Like

Roadmaps don’t need to disappear, but they do need to change shape.

Instead of a rigid annual plan, resilient SEO teams operate on a quarterly diagnostic model, one that assumes volatility and builds flexibility into execution.

The goal isn’t to abandon strategy. It’s to stop pretending that January can predict December.

A resilient model includes:

  • Quarterly diagnostic checkpoints, not just quarterly goals.
  • Rolling prioritization, based on what’s actually happening in search.
  • Protected capacity for unplanned technical or algorithmic responses.
  • Outcome-based planning, not task-based planning.

This shifts SEO from “deliverables by date” to “decisions based on signals.”

The Quarterly Diagnostic Framework

Instead of locking a yearlong roadmap, break planning into repeatable quarterly cycles:

Step 1: Assess (What Changed?)

At the start of each quarter, and ideally again mid-quarter, evaluate:

  • Crawl and indexation patterns.
  • Ranking volatility across key templates.
  • Performance deltas by intent, not just keywords.
  • Content cannibalization and decay.
  • Technical regressions or new constraints.

This is not a full audit. It’s a focused diagnostic designed to surface friction early.

Step 2: Diagnose (Why Did It Change?)

This is where most roadmaps fall apart: They track metrics but skip interpretation.

Diagnosis means asking:

  • Is this decline structural, algorithmic, or competitive?
  • Did we introduce friction, or did the ecosystem change around us?
  • Are we seeing demand shifts or retrieval shifts?

Without this layer, teams chase symptoms instead of causes.

Step 3: Fix (What Actually Matters Now?)

Only after diagnosis should priorities shift. That shift may involve pausing content production, redirecting engineering resources, or deliberately doing nothing while volatility settles. Resilient planning accepts that the “right” work in February may bear little resemblance to what was approved in January.

How To Audit Mid-Quarter Without Panicking

Mid-quarter reviews don’t mean throwing out the plan. They mean stress-testing it.

A healthy mid-quarter SEO check should answer three questions:

  1. What assumptions no longer hold?
  2. What work is no longer high-leverage?
  3. What risk is emerging that wasn’t visible before?

If the answer to any of those questions changes how you execute, that’s not failure. It’s adaptive planning.

The teams that struggle are the ones afraid to admit the plan needs to change.

The Bottom Line

The acceleration introduced by AI-driven retrieval has shortened the gap between planning and obsolescence.

January SEO roadmaps don’t fail because teams lack strategy. They fail because they assume a level of stability that search has not offered in years. If your SEO plan can’t absorb algorithmic shifts, technical debt, and nonlinear content returns, it won’t survive the year. The difference between teams that struggle and teams that adapt is simple: One plans for certainty, the other plans for reality.

The teams that win in search aren’t the ones with the most detailed January roadmap. They’re the ones that can still make good decisions in February.

Featured Image: Anton Vierietin/Shutterstock

Better Metrics for AI Search Visibility

The rise of AI-generated search and discovery is pushing merchants to measure their products’ visibility on those platforms. Many search optimizers are attempting to apply traditional metrics such as traffic from genAI and rankings in the answers. Both fall short.

Traffic. Focusing on traffic obscures the purpose of AI answers: to satisfy a need within the answer itself, not to generate clicks.

AI-generated answers do not typically include links to branded websites. Google’s AI Overviews, for example, sometimes link product names to organic search listings.

Thus visibility does not equate to traffic. A merchant’s products could appear in an AI answer and receive no clicks.


Brand names cited in Google’s AI Overviews often link to organic search listings, such as this example for North Face hiking boots.

Rankings. AI answers often include lists. Many sellers are trying to track those lists to rank at or near the top. Yet tracking such rankings is impossible.

AI answers are unpredictable. A recent study by SparkToro found that AI platforms recommend different brands, in a different order, every time the same person asks the same question.

Better AI Metrics

Here are better metrics to measure AI visibility.

Product or brand positioning in LLM training data

Training data is fundamental to AI visibility because large language models default to what they know. Even when they query Google and elsewhere, LLMs often use their training data to guide the search terms.

It’s therefore essential to track what LLMs retain about your brand and competitors and, importantly, what is incorrect or outdated. Then focus on providing missing or corrected data on your site and across all owned channels.

Manual prompting in ChatGPT, Claude, and Gemini (at least) will help identify the gaps. The prompts can be:

  • “What do you know about [MY PRODUCT]?”
  • “Compare [MY PRODUCT] vs [MY COMPETITOR’S PRODUCT].”

Profound, Peec AI, and other AI visibility trackers can set up these prompts to monitor product positioning over time.
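If you want to approximate what those trackers do without a third-party tool, the same prompts can be run on a schedule against an LLM API and logged for later comparison. Below is a minimal sketch using OpenAI’s Python client; the model name, the log file, and the bracketed placeholders are illustrative assumptions, and API answers can differ from what logged-in users see.

```python
# Minimal sketch: run the same brand-positioning prompts on a schedule and
# store the raw answers so runs can be diffed over time.
# Assumes an OPENAI_API_KEY environment variable; the model name and the
# bracketed placeholders are illustrative, not recommendations.
import datetime
import json

from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What do you know about [MY PRODUCT]?",
    "Compare [MY PRODUCT] vs [MY COMPETITOR'S PRODUCT].",
]

def snapshot(prompts: list[str]) -> list[dict]:
    """Ask each prompt once and return timestamped answers."""
    results = []
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        results.append({
            "date": datetime.date.today().isoformat(),
            "prompt": prompt,
            "answer": response.choices[0].message.content,
        })
    return results

if __name__ == "__main__":
    # Append each run to a log file, then diff runs to spot gaps or outdated claims.
    with open("brand_prompt_log.jsonl", "a") as f:
        for row in snapshot(PROMPTS):
            f.write(json.dumps(row) + "\n")
```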

When using such visibility tools, keep in mind:

  • AI tracking tools enter prompts via LLMs’ APIs. Humans often see different results due to personalization and differences among AI models. API results are better for checking training data because LLMs likely return results from that data (versus live searches) to save resources.
  • The tools’ visibility scores depend entirely on the prompt. In the tools, separate branded prompts in a folder, as they will likely score 100%. Also, focus on non-branded prompts that reflect a product’s value proposition. Prompts irrelevant to an item’s key features will likely score 0%.

Most cited sources

LLM platforms increasingly conduct live searches when responding to prompts. They may query Google or Bing — yes, organic search drives AI visibility — or crawl other sources such as Reddit.

Citations, such as articles or videos, from those live searches influence the AI responses. But the citations vary widely because LLMs fan out across different (often unrelated) queries. So, trying to get included in every cited source is not realistic.

However, prompts often produce the same, influential sources repeatedly. These are worth exploring to include your brand or product. AI visibility trackers can collect the most cited URLs for your brand, product, or industry.

Brand mentions and branded search volume

Use Search Console or other traditional analytics tools to track:

  • Queries that contain your brand name or a version of it.
  • Number of clicks from those queries.
  • Impressions from those queries. The more AI answers include a brand name, the more humans will search for it.

In Search Console, create a filter in the “Performance” section to view data for branded queries.

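If you prefer pulling the same data programmatically, the branded filter can be expressed through the Search Console API. The sketch below assumes you already have OAuth credentials authorized for the property; the site URL, date range, and brand term are placeholders.

```python
# Minimal sketch: pull clicks and impressions for branded queries via the
# Search Console API instead of the UI filter.
from googleapiclient.discovery import build

def branded_query_report(credentials, site_url: str, brand_term: str):
    service = build("searchconsole", "v1", credentials=credentials)
    body = {
        "startDate": "2025-01-01",
        "endDate": "2025-03-31",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "query",
                "operator": "contains",   # matches any query containing the brand term
                "expression": brand_term,
            }]
        }],
        "rowLimit": 250,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    # Each row carries clicks, impressions, CTR, and position for one branded query.
    return response.get("rows", [])
```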

Controversial Proposal To Label Sections Of AI Generated Content via @sejournal, @martinibuster

A new proposal has been published for an HTML attribute that would notify crawlers which parts of a web page are generated by AI. The proposal is quickly becoming relevant because of new rules coming into effect in Europe this summer, but some are questioning whether this is the right solution to that problem.

AI Disclosure

The proposal was created by David E. Weekly (LinkedIn profile), who noted that there are currently proposals that provide a more general signal that an entire web page is AI generated, but nothing that labels only a section of an otherwise human-authored page.

Weekly’s proposal acknowledges the reality that many web pages are partially AI generated. One example it specifically mentions is news sites that contain a sidebar with AI-generated summaries of the news content.

The proposal suggests creating an HTML attribute that can be applied at the section level using the

Google Shows How To Get More Traffic From Top Stories Feature via @sejournal, @martinibuster

Google added new documentation to Search Central covering their Preferred Sources program that helps news websites get into the Top Stories feature. The documentation explains what publishers can do to make it more likely to be ranked in Top Stories and get more traffic.

Top Stories

Given that Top Stories is about breaking news, freshness may be a factor for ranking. Top Stories surfaces local news as well as breaking news. Schema structured data is not necessary to rank in Top Stories, but adding Schema.org Article structured data helps Google better understand what the page is about. While the Top Stories display resembles Google’s carousel feature, the ItemList structured data for Carousel displays has no effect.
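For reference, here is a minimal sketch of that Article markup as JSON-LD (assembled with Python purely for convenience). Every value is a placeholder; NewsArticle is one of the Article subtypes Google’s documentation covers for news content.

```python
# Minimal sketch: emit Schema.org Article markup for a news page as a JSON-LD
# script tag. All field values are placeholders. Article structured data isn't
# required for Top Stories, but it helps Google understand the page.
import json

article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example Headline For A Breaking Story",
    "datePublished": "2025-06-01T08:00:00+00:00",
    "dateModified": "2025-06-01T09:30:00+00:00",
    "author": [{"@type": "Person", "name": "Jane Reporter", "url": "https://example.com/authors/jane"}],
    "image": ["https://example.com/images/lead.jpg"],
    "publisher": {"@type": "Organization", "name": "Example News"},
}

print(f'<script type="application/ld+json">{json.dumps(article, indent=2)}</script>')
```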

Source Preferences Tool

The Preferred Sources program is available only for English-language web pages globally. Google also states that sites already included in the Preferred Sources tool can use a deep link to encourage users to add them as a preferred source: https://www.google.com/preferences/source

According to Google:

If your site appears in the source preferences tool, you can use the following methods to guide your readers to select your site as a preferred source:

Add the deeplink to your social posts or promotions. Use the following URL format, which takes users directly to your site in the source preferences tool:

https://google.com/preferences/source?q=Your_Website's_URL

For example, if your site is https://example.com, use the following URL:

https://google.com/preferences/source?q=example.com
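If you manage several properties, a tiny helper like the one below can build that deep link, URL-encoding the domain into the q parameter. The domain shown is a placeholder, and the URL format is the one quoted from Google above.

```python
# Minimal sketch: build the Preferred Sources deep link for a site,
# URL-encoding the domain as the q parameter. "example.com" is a placeholder.
from urllib.parse import quote

def preferred_source_link(domain: str) -> str:
    return f"https://google.com/preferences/source?q={quote(domain, safe='')}"

print(preferred_source_link("example.com"))
# -> https://google.com/preferences/source?q=example.com
```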

Do What You Can For More Traffic From Top Stories

Getting traffic out of Google appears to be getting increasingly difficult. So it’s useful to take advantage of every available opportunity.

Featured Image by Shutterstock/RealPeopleStudio

Google’s SAGE Agentic AI Research: What It Means For SEO via @sejournal, @martinibuster

Google published a research paper about creating a challenging dataset for training AI agents for deep research. The paper offers insights into how agentic AI deep research works, which in turn has implications for optimizing content.

The acronym SAGE stands for Steerable Agentic Data Generation for Deep Search with Execution Feedback.

Synthetic Question And Answer Pairs

The researchers noted that the previous state-of-the-art AI training datasets (like MuSiQue and HotpotQA) required no more than four reasoning steps to answer their questions. On the number of searches needed to answer a question, MuSiQue averaged 2.7 searches per question and HotpotQA averaged 2.1. Another commonly used dataset, Natural Questions (NQ), required an average of only 1.3 searches per question.

These datasets, which are used to train AI agents, created a training gap for deep search tasks that require more reasoning steps and a greater number of searches. How can you train an AI agent for complex, real-world deep search tasks if it has never been trained to tackle genuinely difficult questions?

The researchers created a system called SAGE that automatically generates high-quality, complex question-answer pairs for training AI search agents. SAGE is a “dual-agent” system where one AI writes a question and a second “search agent” AI tries to solve it, providing feedback on the complexity of the question.

  • The goal of the first AI is to write a question that’s challenging to answer and requires many reasoning steps and multiple searches to solve.
  • The goal of the second AI is to try to measure whether the question is answerable and to calculate how difficult it is (the minimum number of search steps required).

The key to SAGE is that if the second AI solves the question too easily or gets it wrong, the specific steps and documents it found (the execution trace) are fed back to the first AI. This feedback enables the first AI to identify which of four shortcuts allowed the second AI to solve the question in fewer steps.
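To make the loop concrete, here is a toy sketch of the dual-agent feedback cycle as described above. It is not the paper’s code: the generator and the search agent are stubbed with placeholder logic, and every function name is hypothetical.

```python
# Toy sketch of the dual-agent loop described above, NOT the paper's implementation.
# A "generator" proposes a question with a target number of reasoning hops; a
# stubbed "search agent" reports how many hops it actually needed. If it solved
# the question in fewer hops, the execution trace is fed back so the generator
# can label the shortcut and try again.
import random

SHORTCUTS = [
    "information co-location",
    "multi-query collapse",
    "superficial complexity",
    "overly specific question",
]

def generate_question(target_hops: int, feedback: list[str]) -> str:
    # A real generator would be an LLM conditioned on the prior feedback.
    return f"question requiring {target_hops} hops (avoiding: {feedback or 'nothing yet'})"

def search_agent(question: str, target_hops: int) -> dict:
    # Stub: pretend the agent sometimes finds everything in fewer hops.
    hops_used = random.randint(1, target_hops)
    return {"answered": True, "hops_used": hops_used, "trace": f"docs visited for '{question}'"}

def build_hard_question(target_hops: int = 4, max_rounds: int = 10) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        question = generate_question(target_hops, feedback)
        result = search_agent(question, target_hops)
        if result["answered"] and result["hops_used"] >= target_hops:
            return question  # hard enough: no shortcut found
        # The trace reveals which shortcut made the question too easy.
        feedback.append(random.choice(SHORTCUTS))
    return question

print(build_hard_question())
```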

It’s these shortcuts that provide insights into how to rank better for deep research tasks.

Four Ways That Deep Research Was Avoided

The goal of the paper was to create a set of question-and-answer pairs so difficult that it took the AI agent multiple steps to solve them. The feedback surfaced four patterns that made additional searches unnecessary for the AI agent to find an answer.

Four Reasons Deep Research Was Unnecessary

  1. Information Co-Location
    This is the most common shortcut, accounting for 35% of the times when deep research was not necessary. This happens when two or more pieces of information needed to answer a question are located in the same document. Instead of searching twice, the AI finds both answers in one “hop”.
  2. Multi-query Collapse
    This happened in 21% of cases. The cause is when a single, clever search query retrieves enough information from different documents to solve multiple parts of the problem at once. This “collapses” what should have been a multi-step process into a single step.
  3. Superficial Complexity
    This accounts for 13% of times when deep research was not necessary. The question looks long and complicated to a human, but a search engine (that an AI agent is using) can jump straight to the answer without needing to reason through the intermediate steps.
  4. Overly Specific Questions
    31% of the failures are questions that contain so much detail that the answer becomes obvious in the very first search, removing the need for any “deep” investigation.

The researchers found that some questions look hard but are actually relatively easy because the information is “co-located” in one document. If an agent can answer a 4-hop question in 1 hop because one website was comprehensive enough to have all the answers, that data point is considered a failure for training the agent to reason. But it’s still something that can happen in real life, and the agent will take advantage of finding all the information on one page.

SEO Takeaways

It’s possible to gain some insight into what kinds of content satisfy deep research tasks. While these aren’t necessarily tactics for ranking better in agentic AI deep search, these insights do show what kinds of scenarios caused the AI agents to find all or most of the answers on one web page.

“Information Co-location” Could Be An SEO Win
The researchers found that when multiple pieces of information required to answer a question occur in the same document, it reduces the number of search steps needed. For a publisher, this means consolidating “scattered” facts into one page prevents an AI agent from having to “hop” to a competitor’s site to find the rest of the answer.

Triggering “Multi-query Collapse”
The authors identified a phenomenon where information from different documents can be retrieved using a single query. By structuring content to answer several sub-questions at once, you enable the agent to find the full solution on your page faster, effectively “short-circuiting” the long reasoning chain the agent was prepared to undertake.

Eliminating “Shortcuts” (The Reasoning Gap)
The research paper notes that the data generator fails when it accidentally creates a “shortcut” to the answer. As an SEO, your goal is to be that shortcut – providing the specific data points like calculations, dates, or names that allow the agent to reach the final answer without further exploration.

The Goal Is Still To Rank In Classic Search

For an SEO and a publisher, these shortcuts underline the value of creating a comprehensive document, because it removes the trigger for an AI agent to hop somewhere else. This doesn’t mean it’s helpful to cram all the information into one page. If it makes sense for the user, it may be better to link out from one page to another for related information.

The reason I say that is because the AI agent is conducting classic search looking for answers, so the goal remains to optimize a web page for classic search. Furthermore, in this research, the AI agent is pulling from the top three ranked web pages for each query that it’s executing. I don’t know if this is how agentic AI search works in a live environment, but this is something to consider.

In fact, one of the tests that the researchers did was conducted using the Serper API to extract search results from Google.

So when it comes to ranking in agentic AI search, consider these takeaways:

  • It may be useful to consider the importance of ranking in the top three.
  • Do optimize web pages for classic search.
  • Do not optimize web pages for AI search.
  • If it’s possible to be comprehensive, remain on-topic, and rank in the top three, then do that.
  • Interlink to relevant pages to help those rank in classic search, preferably in the top three (to be safe).

It could be that agentic AI search will consider pulling from more than the top three in classic search. But it may be helpful to set the goal of ranking for the top 3 in classic search and to focus on ranking other pages that may be a part of the multi-hop deep research.

The research paper was published by Google on January 26, 2026. It’s available in PDF form:  SAGE: Steerable Agentic Data Generation for Deep Search with Execution Feedback.

Featured Image by Shutterstock/Shutterstock AI Generator

What The Latest Web Almanac Report Reveals About Bots, CMS Influence & llms.txt via @sejournal, @theshelleywalsh

The Web Almanac is an annual report that translates the HTTP Archive dataset into practical insight, combining large-scale measurement with interpretation from industry experts.

To get insights into what the 2025 report can tell us about what is actually happening in SEO, I spoke with one of the authors of the SEO chapter update, Chris Green, a well-known industry expert with over 15 years of experience.

Chris shared with me some surprises about the adoption of llms.txt files and how CMS systems are shaping SEO far more than we realize – little-known facts the data surfaced in the research, and insights that would usually go unnoticed.

You can watch the full interview with Chris on the IMHO recording at the end, or continue reading the article summary.

“I think the data [in the Web Almanac] helped to show me that there’s still a lot broken. The web is really messy. Really messy.”

Bot Management Is No Longer ‘Google, Or Not Google?’

Although bot management has been binary for some time – allow or disallow Google – it’s becoming a new challenge, something Eoghan Henn had picked up on previously and that Chris found in his research.

We began our conversation by talking about how robots files are now being used to express intent about AI crawler access.

Chris responded to say that, firstly, there is a need to be conscious of the different crawlers, what their intention is, and fundamentally what blocking them might do, i.e., blocking some bots has bigger implications than others.

Second to that, it requires the platform providers to actually listen to those rules and treat those files appropriately. That isn’t always happening, and the ethics around robots files and AI crawlers is an area that SEOs need to know about and understand better.

Chris explained that although the Almanac report showed the symptom of robots.txt usage, SEOs need to get ahead and understand how to control the bots.

“It’s not only understanding what the impact of each [bot/crawler] is, but also how to communicate that with the business. If you’ve got a team who want to cut as much bot crawling as possible because they want to save money, that might desperately impact your AI visibility.”

“Equally, you might have an editorial team that doesn’t want to get all of their work scraped and regurgitated. So, we, as SEOs, need to understand that dynamic, how to control it technically, but how to put that argument forward in the business as well,” Chris explained.

As more platforms and crawlers are introduced, SEO teams will have to consider all implications, and collaborate with other teams to ensure the right balance of access is applied to the site.
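For teams trying to establish a factual baseline before that conversation, Python’s standard library can check which AI crawlers a site’s robots.txt currently allows. This is a minimal sketch; the user-agent tokens listed are commonly documented crawler names, but each vendor’s current token should be confirmed before acting on the output.

```python
# Minimal sketch: check which AI crawlers your robots.txt currently allows, so
# the trade-off (crawl cost / scraping vs. AI visibility) can be discussed with data.
# The user-agent tokens below are commonly documented names; confirm each
# vendor's current token before relying on this.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "Google-Extended", "ClaudeBot", "PerplexityBot", "CCBot"]

def ai_crawler_access(site: str, test_path: str = "/") -> dict[str, bool]:
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt
    return {bot: parser.can_fetch(bot, f"{site.rstrip('/')}{test_path}") for bot in AI_CRAWLERS}

if __name__ == "__main__":
    print(ai_crawler_access("https://example.com"))
```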

Llms.txt Is Being Applied Despite No Official Platform Adoption 

The first surprising finding of the report was that adoption for the proposed llms.txt standard is around 2% of sites in the dataset.

Llms.txt has been a heated topic in the industry, with many SEOs dismissing the value of the file. Some tools, such as Yoast, have included the standard, but as yet, there has been no demonstration of actual uptake by AI providers.

Chris admitted that 2% was a higher adoption than he expected. But much of that growth appears to be driven by SEO tools that have added llms.txt as a default or optional feature.

Chris is skeptical of its long-term impact. As he explained, Google has repeatedly stated it does not plan to use llms.txt, and without clear commitment from the major AI providers, especially OpenAI, it risks remaining a niche, symbolic gesture rather than a functional standard.

That said, Chris has seen log-file data suggesting some AI crawlers are already fetching these files, and in limited cases, they may even be referenced as sources. Green views this less as a competitive advantage and more as a potential parity mechanism, something that may help certain sites be understood, but not dramatically elevate them.

“Google has time and again said they don’t plan to use llms.txt which they reiterated in Zurich at Search Central last year. I think, fundamentally, Google doesn’t need it as they do have crawling and rendering nailed. So, I think it hinges on whether OpenAI say they will or won’t use it and I think they have other problems than trying to set up a new standard.”

Different, But Reassuringly The Same Where It Matters

I went on to ask Chris about how SEOs can balance the difference between search engine visibility and machine visibility.

He thinks there is “a significant overlap between what SEO was before we started worrying about this and where we are at the start of 2026.”

Despite this overlap, Chris was clear that if anyone thinks optimizing for search and machines is the same, then they are not aware of the two different systems, the different weightings, the fact that interpretation, retrieval, and generation are completely different.

Although there are different systems and different capabilities in play, he doesn’t think SEO has fundamentally changed. His belief is that SEO and AI optimization are “kind of the same, reassuringly the same in the places that matter, but you will need to approach it differently” because it diverges in how outputs are delivered and consumed.

Chris did say that SEOs will move more towards feeds, feed management, feed optimization.

“Google’s universal commerce protocol where you could potentially transact directly from search results or from a Gemini window obviously changes a lot. It’s just another move to push the website out of the loop. But the information, what we’re actually optimizing still needs to be optimized. It’s just in a different place.”

CMS Platforms Shape The Web More Than SEOs Realize

Perhaps the biggest surprise from Web Almanac 2025 was the scale of influence exerted by CMS platforms and tooling providers.

Chris said that he hadn’t realized just how big that impact is. “Platforms like Shopify, Wix, etc. are shaping the actual state of tech SEO probably more profoundly than I think a lot of people truly give it credit for.”

Chris went on to explain that “as well-intentioned as individual SEOs are, I think our overall impact on the web is minimal outside of CMS platforms providers. I would say if you are really determined to have an impact outside of your specific clients, you need to be nudging WordPress or Wix or Shopify or some of the big software providers within those ecosystems.”

This creates opportunity: Websites that do implement technical standards correctly could achieve significant differentiation when most sites lag behind best practices.

One of the more interesting insights from this conversation was how much of the web is broken and how little impact we [SEOs] really have.

Chris explained that “a lot of SEOs believe that Google owes us because we maintain the internet for them. We do the dirty work, but I also don’t think we have as much impact perhaps at an industry level as maybe some like to believe. I think the data in the Web Almanac kind of helped show me that there’s still a lot broken. The web is really messy. Really messy.”

AI Agents Won’t Replace SEOs, But They Will Replace Bad Processes

Our conversation concluded with AI agents and automation. Chris started by saying, “Agents are easily misunderstood because we use the term differently.”

He emphasized that agents are not replacements for expertise, but accelerators of process. Most SEO workflows involve repetitive data gathering and pattern recognition, areas well-suited to automation. The value of human expertise lies in designing processes, applying judgment, and contextualizing outputs.

Early-stage agents could automate 60-80% of the work, similar to a highly capable intern. “It’s going to take your knowledge and your expertise to make that applicable to your given context. And I don’t just mean the context of web marketing or the context of ecommerce. I mean the context of the business that you’re specifically working for,” he said.

Chris would argue that a lot of SEOs don’t spend enough time customizing what they do to the specific client. He thinks there’s an opportunity to build an 80% automated process and then add your real value through human intervention on the last 20% – the business logic.

SEOs who engage with agents, refine workflows, and evolve alongside automation are far more likely to remain indispensable than those who resist change altogether.

However, when experimenting with automation, Chris warned we should avoid automating broken processes.

“You need to understand the process that you’re trying to optimize. If the process isn’t very good, you’ve just created a machine to produce mediocrity at scale, which frankly doesn’t help anyone.”

Chris thinks that this will give SEOs an edge as AI is more widely adopted. “I suggest the people that engage with it and make those processes better and show how they can be continually evolved, they’ll be the ones that have greater longevity.”

SEOs Can Succeed By Engaging With The Complexity

The Web Almanac 2025 doesn’t suggest that SEO is being replaced, but it does show that its role is expanding in ways many teams haven’t fully adapted to yet. Core principles like crawlability and technical hygiene still matter, but they now exist within a more complex ecosystem shaped by AI crawlers, feeds, closed systems, and platform-level decisions.

Where technical standards are poorly implemented at scale, those who understand the systems that shape them can still gain a meaningful advantage.

Automation works best when it accelerates well-designed processes and fails when it simply scales inefficiency. SEOs who focus on process design, judgment, and business context will remain essential as automation becomes more common.

In an increasingly messy and machine-driven web, the SEOs who succeed will be those willing to engage with that complexity rather than ignore it.

SEO in 2026 isn’t about choosing between search and AI; it’s about understanding how multiple systems consume content and where optimization now happens.

Watch the full video interview with Chris Green here:

Thank you to Chris Green for offering his insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

Information Retrieval Part 1: Disambiguation

TL;DR

  1. Disambiguation is the process of resolving ambiguity and uncertainty in data. It’s crucial in modern-day SEO and information retrieval.
  2. Search engines and LLMs reward content that is easy to “understand,” not content that is necessarily best.
  3. The clearer and better structured your content, the harder it is to replace.
  4. You have to reinforce how your brand and products are understood. When grounding is required, models favor sources they recognize from training data.

The internet has changed. Channels have begun to homogenize. Google is trying to become something of a destination, and the individual content creator is more powerful than ever.

Oh, and we don’t need to click on anything.

But what makes for great content hasn’t changed. AI and LLMs haven’t changed what people want to consume. They’ve changed what we need to click on. Which I don’t necessarily hate.

As long as you’ve been creating well-structured, engaging, educational or entertaining content for years, all this chat of chunking is a bit smoke and mirrors for me.

“If it walks like a duck and talks like a duck, it’s probably a grifter selling you link building services or GEO.”

However, it is absolutely not all rubbish. Concepts like ambiguity are a more destructive force than ever. If you permit a quick double negative, you cannot not be clear.

The clearer you are. The more concise. The more structured on and off-page. The better chance you stand. There’s no place for ambiguous phrases, paragraphs, and definitions.

This is known as disambiguation.

What Is Disambiguation?

Disambiguation is the process of resolving ambiguity and uncertainty in data. Ambiguity is a problem in the modern-day internet. The deeper down the rabbit hole we go, the less diligence is paid towards accuracy and truth. The more clarity your surrounding context provides, the better.

It is a critical component of modern-day SEO, AI, natural language processing (NLP), and information retrieval.

This is an obvious and overused example, but consider a term like apple. The intent and understanding behind it are vague. We don’t know whether people mean the company, the fruit, or the daughter of a batshit, brain-dead celebrity.

Image Credit: Harry Clarkson-Bennett

Years ago, this type of ambiguous search would’ve yielded a more diverse set of results. But thanks to personalization and trillions of stored interactions, Google knows what we all want. Scaled user engagement signals and an improved understanding of intent and keywords, phrases, and context are fundamental here.

Yes, I could’ve thought of a better example, but I couldn’t be bothered. You see my point.

Why Should I Care?

Modern-day information retrieval requires clarity. The context you provide really matters for the confidence score retrieval systems need before pulling the “correct” answer.

And this context is not just present in the content.

There is a significant debate about the value of structured data in modern-day search and information retrieval. Using structured data like sameAs to signify exactly who this author is and tying all of your company’s social accounts and sub-brands together can only be a good thing.

The argument isn’t that this has no value. It makes sense.

  • It’s whether Google needs it for accurate information parsing anymore.
  • And whether it has value to LLMs outside of well-structured HTML.

Ambiguity and information retrieval have become incredibly hot topics in data science. Vectorization – representing documents and queries as vectors – helps machines understand the relationships between terms.

It allows models to effectively predict what words should be present in the surrounding context. It’s why answering the most relevant questions and predicting user intent and ‘what’s next’ has been so valuable for a long time in search.

See Google’s Word2Vec for more information.
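To make that concrete, here is a toy sketch of context-based disambiguation in the spirit of word vectors: the ambiguous term “apple” is blended with its surrounding words, and the nearest “sense” wins. The tiny hand-made vectors and the three made-up dimensions are purely illustrative; real models learn embeddings from enormous corpora.

```python
# Toy sketch of disambiguation via surrounding context, in the spirit of
# Word2Vec-style vectors. The hand-made 3-dimensional vectors are illustrative only.
import numpy as np

# Pretend dimensions: [tech-ness, food-ness, celebrity-ness]
VECTORS = {
    "apple":   np.array([0.5, 0.5, 0.2]),   # ambiguous on its own
    "iphone":  np.array([0.9, 0.0, 0.1]),
    "launch":  np.array([0.8, 0.1, 0.2]),
    "recipe":  np.array([0.0, 0.9, 0.0]),
    "crumble": np.array([0.1, 0.9, 0.0]),
    "company": np.array([0.9, 0.1, 0.1]),
    "fruit":   np.array([0.0, 1.0, 0.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(term: str, context: list[str], senses: list[str]) -> str:
    # Average the context vectors with the term, then pick the nearest sense.
    blended = np.mean([VECTORS[term]] + [VECTORS[w] for w in context], axis=0)
    return max(senses, key=lambda s: cosine(blended, VECTORS[s]))

print(disambiguate("apple", ["iphone", "launch"], ["company", "fruit"]))   # -> company
print(disambiguate("apple", ["recipe", "crumble"], ["company", "fruit"]))  # -> fruit
```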

Google Has Been Doing This For A Long Time

Do you remember what Google’s early, and official, mission statement regarding information was?

“Organize the world’s information and make it universally accessible and useful.”

Their former motto was “don’t be evil.” Which I think in more recent times they may have let slide somewhat. Or conveniently hidden it.

Organizing the world’s information has become so much more effective thanks to advances in information retrieval. Originally, Google thrived on straightforward keyword matching. Then they moved to tokenization.

Their ability to break sentences into words and match short-tail queries was revolutionary. But as queries advanced and intent became less obvious, they had to evolve.

The advent of Google’s Knowledge Graph was transformational. A database of entities that helped create consistency. It created stability and improved accuracy in an ever-changing web.

Image Credit: Harry Clarkson-Bennett

Now queries are rewritten at scale. Ranking is probabilistic instead of deterministic, and in some cases, fan-out processes are applied to create an all-encompassing answer. It’s about matching the user’s intent at the time. It’s personalized. Contextual signals are applied to give the individual the best result for them.

Which means we lose predictability depending on temperature settings, context, and inference path. There’s a lot more passage-level retrieval going on.

Thanks to Dan Petrovic, we know that Google doesn’t use your full page content when grounding its Gemini-powered AI systems. Each query has a fixed grounding budget of approximately 2,000 words total, distributed across sources by relevance rank.

The higher you rank in search, the more budget you are allotted. Think of this context window limit like crawl budget. Larger windows enable longer interactions, but cause performance degradation. So they have to strike a balance.

Position 1 gives you over twice as much “budget” as position 5 (Image Credit: Harry Clarkson-Bennett)

Hummingbird, BERT, RankBrain – Foundational Semantic Understanding

These older algorithm shifts were pivotal in making Google’s systems treat language and meaning differently.

  • Hummingbird (2013): Helped Google identify entities and things quickly, with greater precision. This was a step toward semantic interpretation and entity recognition. Think of keywords at a page level, not query level.
  • RankBrain (2015): To handle the ever-increasing volume of never-before-seen queries, Google introduced machine learning to interpret unknown queries and relate them to known concepts and entities.
  • BERT (2019): Brought transformer-based language understanding to search, helping Google interpret the context of words in a query – prepositions and word order included – rather than matching terms in isolation.

RankBrain was built on the success of Hummingbird’s semantic search. By mastering NLP systems, Google began mapping words to mathematical patterns (vectorization) to better serve new and ever-evolving queries.

These vectors help Google ‘guess’ the intent of queries it has never seen before by finding their nearest mathematical neighbors.

The Knowledge Graph Updates

In July 2023, Google rolled out a major Knowledge Graph update. I think people in SEO called it the Killer Whale Update, but I can’t remember who coined the phrase. Or why. Apologies. It was designed to accelerate the growth of the graph and reduce its dependence on third-party sources like Wikipedia.

As somebody who has spent a long time messing around with entities, I can really understand why. It’s a giant, expensive time-suck.

It explicitly expanded and restructured how entities are recognized and classified in the Knowledge Graph. Particularly, person entities with clear roles such as author or writer.

  • The number of entities in the Knowledge Vault increased by 7.23% in one day to over 54 billion.
  • In July 2023, the number of Person entities tripled in just four days.

All of this is an effort to combat AI slop, provide clarity, and minimize misinformation. To reduce ambiguity and to serve content where a living, breathing expert is at the heart of it.

Worth checking whether you have a presence in the Knowledge Graph here. If you do and can claim a Knowledge Panel, do it. Cement your presence. If not, build your brand and connectedness on the internet.

What About LLMs & AI Search?

There are two main ways LLMs retrieve information:

  • By accessing their vast, static training data.
  • Using RAG (a type of grounding) to access external, up-to-date sources of information.

RAG is why traditional Google Search is still so important. The latest models no longer train on real-time data and lag a little behind. Before the primary model dives in to respond to your desperate need for companionship, a classifier determines whether real-time information retrieval is necessary.

Hence the need for RAG (Image Credit: Harry Clarkson-Bennett)

They cannot know everything and have to employ RAG to make up for their lack of up-to-date information (or verifiable facts through their training data) when retrieving certain answers. Essentially trying to make sure they aren’t chatting rubbish.

Hallucinating if you’re feeling fancy.
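As a rough illustration of that routing step, here is a toy sketch of the retrieve-or-not decision and the grounding that follows. The keyword-based classifier and the single-entry document store are stand-ins; production systems use trained classifiers and live search, not a lookup table.

```python
# Toy sketch of the retrieve-or-not decision and grounding step described above.
# The "classifier" and document store are stand-ins for a trained model and live search.
FRESH_SIGNALS = {"today", "latest", "price", "2026", "score"}

DOCUMENT_STORE = {
    "latest iphone price": "Example retrieved passage: current pricing pulled from a live source.",
}

def needs_grounding(query: str) -> bool:
    # Stand-in classifier: route to retrieval when the query smells time-sensitive.
    return any(word in query.lower() for word in FRESH_SIGNALS)

def answer(query: str) -> str:
    if not needs_grounding(query):
        return f"[from training data] answer to: {query}"
    passage = DOCUMENT_STORE.get(query.lower(), "no passage found")
    # Grounded generation: the model is asked to answer using the retrieved passage.
    return f"[grounded] answer to: {query}\n  source passage: {passage}"

print(answer("Who founded Apple?"))
print(answer("Latest iPhone price"))
```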

So, each model needs its own form of disambiguation. Primarily, this is achieved via:

  • Context-aware query matching. Seeing words as tokens and even reformatting queries into more structured formats to try and achieve the most accurate result. This type of query transformation leads to fan-out and embeddings for more complex queries.
  • RAG architectures. Accessing external knowledge when an accuracy threshold isn’t reached.
  • Conversational agents. LLMs can be prompted to decide whether to directly answer a query or to ask the user for clarification if they don’t meet the same confidence threshold.

Remember, if your content isn’t accessible to search retrieval systems it can’t be used as part of a grounding response. There’s no separation here.

What Should You Do About It?

If you have wanted to do well in search over the last decade, this should’ve been a core part of your thinking. Helpful content rewards clarity.

Allegedly. It also rewards nerfing smaller sites out of existence.

Remember that being clever isn’t better than being clear.

Doesn’t mean you can’t be both. Great content entertains, educates, inspires, and enhances.

Use Your Words

You need to learn how to write. Short, snappy sentences. Help people and machines connect the dots. If you understand the topic, you should know what people want or need to read next almost better than they do.

  • Use verifiable claims.
  • Cite your sources.
  • Showcase your expertise through your understanding.
  • Stand out. Be different. Add information to the corpus to force a mention and/or citation.

Structure The Page Effectively

Write in clear, straightforward paragraphs with a logical heading structure. You really don’t have to call it chunking if you don’t want to. Just make it easy for people and machines to consume your content.

  • Answer the question. Answer it early.
  • Use summaries or hooks.
  • Tables of contents.
  • Tables, lists, and actual structured data. Not schema. But also schema.

Make it easy for users to see what they’re getting and whether this page is right for them.

Intent

Lots of intent is static. Commercial queries always demand some level of comparison. Transactional queries demand some kind of buying or sales process.

But intent changes and millions of new queries crop up every day.

So, you need to monitor the intent of a term or phrase. News is probably a perfect example. Stories break. Develop. What was true yesterday may not be true today. The courts of public opinion damn and praise in equal measure.

Google monitors the consensus. Tracks changes to documents. Monitors authority and – crucially here – relevance.

You can use something like Also Asked to monitor intent changes over time.

The Technical Layer

For years, structured data has helped resolve ambiguity. But we don’t have real clarity over its impact on AI search. Cleaner, well-structured pages are always easier to parse, and entity recognition really matters.

  • sameAs properties connect the dots with your brand and social accounts (see the sketch after this list).
  • It helps you explicitly state who your author is and, crucially, isn’t.
  • Internal linking helps bots navigate across connected sections of your website and build some form of topical authority.
  • Keep content up to date, with consistent date framing – on page, in structured data, and in sitemaps.
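As promised above, here is a minimal sketch of Organization markup using sameAs to tie a brand’s site to its other profiles, rendered as JSON-LD (built with Python purely for convenience). Every name and URL is a placeholder.

```python
# Minimal sketch: Organization markup using sameAs to connect a brand's site to
# its other profiles, emitted as a JSON-LD script tag. All values are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://www.youtube.com/@examplebrand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

print(f'<script type="application/ld+json">{json.dumps(organization, indent=2)}</script>')
```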

If you like messing around with the Knowledge Graph (who the hell doesn’t?), you can find confidence scores for your brand.
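One way to peek at those scores without a third-party tool is Google’s Knowledge Graph Search API, which returns a resultScore for each matched entity. A minimal sketch is below; it assumes you have an API key with the Knowledge Graph Search API enabled, “Example Brand” is a placeholder, and resultScore is a relative match signal rather than an official confidence metric.

```python
# Minimal sketch: query the Knowledge Graph Search API for an entity and read
# the resultScore Google returns. Requires an API key with the Knowledge Graph
# Search API enabled; "Example Brand" is a placeholder.
import requests

def kg_lookup(query: str, api_key: str, limit: int = 3) -> list[tuple[str, float]]:
    response = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": query, "key": api_key, "limit": limit},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json().get("itemListElement", [])
    return [(item["result"].get("name", "?"), item.get("resultScore", 0.0)) for item in items]

# print(kg_lookup("Example Brand", api_key="YOUR_API_KEY"))
```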

According to Google’s very own guidelines, structured data provides explicit clues about a page’s content, helping search engines understand it better.

Yes, yes, it displays rich results etc. But it removes ambiguity.

Entity Matching

I think this ties everything together. Your brand, your products, your authors, your social accounts.

What you say about your brand matters now more than ever.

  • The company you keep (the phrases on a page).
  • The linked accounts.
  • The events you speak at.
  • Your about us page(s).

All of it helps machines build up a clear picture of who you are. If you have strong social profiles, you want to make sure you’re leveraging that trust.

At a page level, title consistency, using relevant entities in your opening paragraph, linking to relevant tag and article pages, and using a rich, relevant author bio is a great start.

Really, just good, solid SEO. Don’t @ me.

PSA: Don’t be boring. You won’t survive.

This post was originally published on Leadership in SEO.


Featured Image: Roman Samborskyi/Shutterstock

What If User Satisfaction Is The Most Important Factor In SEO? via @sejournal, @marie_haynes

Let me see if I can convince you!

I’ve shared a bunch in this video and summarized my thoughts in the article below. Also, this is the second blog post I’ve written on this topic in the last week. There is much more information on user data and how Google uses it in my previous blog post.

Ranking Has 3 Components

We learned in the DOJ vs Google trial that Google’s ranking process involves three main components:

  1. Traditional systems are used for initial ranking.
  2. AI Systems (such as RankBrain, DeepRank, and RankEmbed BERT) re-rank the top 20-30 documents.
  3. Those systems are fine-tuned by Quality Rater scores, and more importantly IMO, results from live user tests.

The DOJ vs. Google lawsuit talked extensively about how Google’s massive advantage stems from the large amounts of user data it uses. In its appeal, Google said that it does not want to comply with the judge’s mandate to hand over user data to competitors. It listed two ways it uses user data: in a system called Glue, which incorporates Navboost and looks at what users click on and engage with, and in the RankEmbed model.

RankEmbed is fascinating. It embeds the user’s query into a vector space. Content that is likely to be relevant to that query will be found nearby. RankEmbed is fine-tuned by two things:

1. Ratings from the Quality Raters. They are given two sets of results – “Frozen” Google results and “Retrained” results – or, in other words, the results of the newly trained and refined AI-driven search algorithms. Their scores help Google’s systems understand whether the retrained algorithms are producing higher-quality search results.

From Douglas Oard’s testimony re: Frozen and Retrained Google

2. Real-world live experiments where a small percentage of real searchers are shown results from the old vs. retrained algorithms. Their clicks and actions help fine-tune the system.

The ultimate goal of these systems is to continually improve on producing rankings that satisfy the searcher.

More Thinking On Live Tests – Users Tell Google The Types Of Pages That Are Helpful, Not The Actual Pages

I’ve realized that Google’s live user tests aren’t just about gathering data on specific pages. They are about training the system to recognize patterns. Google isn’t necessarily tracking every single user interaction to rank that one specific URL. Instead, it is using that data to teach its AI what “helpful” looks like. The system learns to identify the types of content that satisfy user intent, then predicts whether your site fits that successful mold.

It will continue to evolve its process in predicting which content is likely to be helpful. It definitely extends far beyond simple vector search. Google is continually finding new ways to understand user intent and how to meet it.

What This Means For SEO

If you’re ranking in the top few pages of search, you have convinced the traditional ranking systems to put you in the ranking auction.

Once there, a multitude of AI systems work to predict which of the top results truly is the best for the searcher. This is even more important now that Google is starting to use “Personal Intelligence” in Gemini and AI Mode. My top search results will be tailored specifically for what Google’s systems think I will find helpful.

Once you start understanding how AI systems do search, which is primarily vector search, it can be tempting to work to reverse engineer these. If you’re optimizing by using a deep understanding of what vector search rewards (including using cosine similarity), you’re working to look good to the AI systems. I’d caution against diving in too deeply here.

Image Credit: Marie Haynes
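For readers who want to see concretely what “looking good to vector search” means, here is a toy sketch of the mechanics described above: embed the query, embed candidate pages, and rank by cosine similarity. The four-dimensional vectors are invented for illustration; real systems use learned embeddings with far more dimensions, and, as argued here, scoring well on this measure is not the same as being genuinely helpful.

```python
# Toy sketch of the mechanics being described: embed a query and candidate pages,
# then rank pages by cosine similarity. The tiny vectors are invented for illustration.
import numpy as np

query_vec = np.array([0.8, 0.1, 0.4, 0.2])
pages = {
    "page_a": np.array([0.7, 0.2, 0.5, 0.1]),
    "page_b": np.array([0.1, 0.9, 0.1, 0.6]),
    "page_c": np.array([0.6, 0.1, 0.3, 0.3]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank pages by similarity to the query; this is the signal, not the goal.
ranked = sorted(pages.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
for name, vec in ranked:
    print(name, round(cosine(query_vec, vec), 3))
```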

Given that the systems are fine-tuned to continually improve upon producing results that are the most satisfying for the searcher, looking good to AI is nowhere near as important as truly being the result that is the most helpful. I would argue that optimizing for vector search can do more harm than good unless you truly do have the type of content that users go on to find more helpful than the other options they have. Otherwise, there’s a good chance you’re training the AI systems to not favor you.

Image Credit: Marie Haynes

My Advice

My advice is to optimize loosely for vector search. What I mean by this is to not obsess over keywords and cosine similarity, but instead to understand what it is your audience wants and be sure that your pages meet the specific needs they have. Is using a knowledge of Google’s Query Fan-Out helpful here? To some degree, yes, as it is helpful to know what questions users generally tend to have surrounding a query. But, I think that my same fears apply here as well. If you look really good to the AI systems trying to find content to satisfy the query fan-out, but users don’t tend to agree, or if you’re lacking other characteristics associated with helpfulness compared to competitors, you might train Google’s systems to favor you less.

Make use of headings – not for the AI systems to see, but to help your readers understand that the things they are looking for are on your page.

Look at the pages that Google is ranking for queries that should lead to your page, and truly ask yourself what it is about these pages that searchers are finding helpful. Look at how well they answer specific questions, whether they use good imagery, tables, or other graphics, and how easy it is for the page to be skimmed and navigated. Work to figure out why this page was chosen as among the most likely to be helpful in satisfying the needs of searchers.

Instead of obsessing over keywords, work to improve the actual user experience. If you make your page more engaging, focusing more on metrics like scrolls and session duration, rankings should naturally improve.

And mostly, obsess over helpfulness. It can be helpful to have an external party look at your content and share why it may or may not be helpful.

I have found that even though I have this understanding that search is built to continually learn and improve upon showing searchers pages they are likely to find helpful, I still find myself fighting the urge to optimize for machines rather than users. It is a hard habit to break! Given that Google’s deep learning systems are working tirelessly on one goal – predicting which pages are likely to be helpful to the searcher – that should be our goal as well. As Google’s documentation on helpful content suggests, the type of content that people tend to find helpful is content that is original, insightful, and provides substantial value when compared to other pages in the search results.

This post was originally published on Marie Haynes Consulting.


Featured Image: Chayanit/Shutterstock

Social Channel Insights In Search Console: What It Means For Social & Search via @sejournal, @rio_seo

Google has been testing Social Channel Insights inside Google Search Console (GSC). This update may appear small, but there is more to it than meets the eye. In the search landscape, these new social insights point to a bigger shift happening behind the scenes, where search and social data converge to improve visibility.

The official announcement from Google highlighted the growth of businesses managing their digital presence on popular social media sites. The integration makes sense as social media continues to become a popular method for search discovery and information, with 15% of consumers believing social media to be the most accurate/current source to find up-to-date business details.

The expansion of the social report feature showcases performance for accounts Google associates with a website, allowing businesses a centralized location for reviewing key search and discoverability metrics. This update signifies just how intertwined search and social are becoming. Search and social should no longer be treated as disparate functions, but rather integral counterparts that must communicate and coordinate to improve online visibility and discovery.

A Closer Look At Google’s Social Channel Insights Test

When digging into Google Search Console Insights to see exactly what these new social metrics entail, we find that a plethora of new information has been added. The feature isn’t readily available to everyone; it is only showing up for some websites where Google was able to locate their social media channels. Those who do have the new social media report features are seeing:

  • The total reach from Google to your social channels.
  • Social media content performance.
  • Queries that drive traffic to your social channels.
  • Trends such as high average duration or post growth.

Right now, it appears as though the social media metrics measured focus mostly on referral insights. This isn’t merely a slight tweak to the user experience. It could be seen as a strategic convergence of data, meant to shine a spotlight on how social goes hand in hand with search performance.

Does This Mean Social Is Having More Influence?

Google doesn’t typically make updates for fun or convenience. Each update is a signal for what they plan to evaluate next as part of their never-ending quest to maintain dominance in the search engine landscape.

Even though Google hasn’t explicitly stated that social engagement metrics have direct influence, this could be an acknowledgement that discovery is increasingly happening on other channels, such as AI platforms and social media.

Search has fractured with other players joining the race, and Google is clearly noticing and adapting. In fact, a study found nearly a quarter (24%) of U.S. adults use social media as their primary search method, while another 24% use search primarily but also social media occasionally. 78% of global internet users leverage social media to research brands and products, and over 60% of Gen Z consumers have purchased a product they’ve found on social media.

Search engines are no longer the sole place consumers start their sales journey. People use AI to research and ask questions, or seek out online reviews and testimonials on social media channels. Search engines are becoming more of a validation layer, where users go after they have researched the options to confirm or seek additional information, and then move to the transaction stage.

How Social Channel Insights Could Impact Social Campaigns

When it comes to social, evaluating performance in the past may have looked like chasing more likes and comments. Engagement, of course, still matters, but Google is telling us what other insights we should consider, right inside your GSC dashboard.

Social referral insights give social media marketers visibility into how their content performs in the search discovery journey. Writing social posts to meet an arbitrary number or goal isn’t the end game. It’s about finding the posts that have influence.

For social campaigns, social insights can help you:

  • Identify which social content themes generate downstream search demand.
  • Use query-level insights to inform what you write and the message you want to convey (see the sketch after this list).
  • Highlight social’s distinct role in discovery, not just in engaging passive viewers.
  • Coordinate campaign launches and promotions more closely with SEO teams to capitalize on growing demand.
  • Create content that resonates and aligns with what users are likely to search for next, keeping you one step ahead.
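
The query-level data in the new social report doesn’t appear to be exposed through an API yet, so treat the following as a hedged starting point: a minimal sketch that pulls standard query-and-page data for your own site via the Search Console API (google-api-python-client), assuming a hypothetical service-account.json credentials file and a placeholder property URL.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Hypothetical credentials file and property URL; replace with your own.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

# Queries and pages driving clicks and impressions over a recent window.
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2025-01-01",
        "endDate": "2025-01-28",
        "dimensions": ["query", "page"],
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    query, page = row["keys"]
    print(f"{query} -> {page}: {row['clicks']} clicks, {row['impressions']} impressions")
```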

Rather than relying only on traditional social media metrics (such as comments, shares, or likes), social teams can use these new Social Channel Insights in GSC to increase online visibility.

What Social Signals We’d Like To See Google Include Next

Google, if you’re reading this, here’s what we’d love to see beyond referral behavior to help marketers provide even more strategic value.

Social insights that could meaningfully support discovery-focused strategies include:

  • Content velocity indicators: Show us how quickly topics gain traction on social before search demand spikes.
  • Content format indicators: Tell us what content formats perform best for winning search discovery, whether that be short-form videos or static posts.
  • Topic momentum indicators: Help us understand emerging themes gaining attention across platforms.
  • Creator and brand association indicators: Give us more transparency around which entities are consistently driving early discovery for certain topics.
  • Cross-platform trend alignment indicators: Reveal when multiple social ecosystems signal rising interest at the same time. This helps us strike while the iron is hot.

With these signals, SEOs could anticipate intent shifts earlier and brief content and social teams to draft meaningful, relevant content right away, not after the hype dies down. That’s a win for every team, because the time invested leads to actual results.

What Marketers Should Do Now

Even though this is a limited test that hasn’t reached every business (yet), marketers would do well to review their social media channels and strategy so they provide an exceptional experience across every channel where customers find them.

To prepare, marketers should:

  • Audit which pages receive the most social-driven search traffic (see the sketch after this list). These insights reveal which types of content and topics attract social search visitors most.
  • Align content calendars across social and SEO teams. Start breaking down silos by making cross-department initiatives, such as the content calendar, visible to both teams. Doing so builds a culture of collaboration and gives teams shared KPIs to work toward.
  • Repurpose high-performing social content into search-optimized formats (and vice versa). For example, social videos that perform well in search can be embedded into relevant blog posts, extending the value and longevity of the content you work hard to create. User-generated content can likewise be repurposed into frequently asked questions.
  • Track emerging social trends. Platforms like TikTok and Instagram can serve as early search indicators, helping marketers anticipate what consumers are most interested in and what’s capturing their attention.
  • Integrate hybrid analytics into your measurement. AI is reshaping marketing, but humans still play a key role in every marketing endeavor. Machine-driven insights put data at our fingertips, yet human interpretation and validation remain essential: only people can assess nuance, emotion, and insider knowledge.
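
For the first item above, here is a minimal sketch of that audit using the GA4 Data API (the google-analytics-data Python client), since the new GSC social report itself isn’t confirmed to be queryable via API. It approximates “social-driven” traffic by listing the landing pages that receive the most organic-social sessions; the property ID is a placeholder.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()

request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="landingPage")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    # Keep only sessions attributed to the Organic Social channel group.
    dimension_filter=FilterExpression(
        filter=Filter(
            field_name="sessionDefaultChannelGroup",
            string_filter=Filter.StringFilter(value="Organic Social"),
        )
    ),
    limit=25,
)

response = client.run_report(request)
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)
```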

Next Steps To Take With Social Channel Insights

Google’s rollout of Social Channel Insights in GSC may seem like a minor advancement, but it’s more than just another set of metrics for marketers to track. It signals that Google is thinking about how the two disciplines share insights.

Search engines are factoring in the rise of discovery and influence happening on social media channels. By bridging the gap and working more closely together, social media marketers and SEOs can start seeing each other as partners rather than occasional collaborators. The result? Better workflows, collaboration, visibility, and business impact.

Marketers who embrace a collaborative mentality with SEOs will be better positioned to appear in the moments that matter, getting discovered and chosen.

Featured Image: MR.DEEN/Shutterstock

New Microsoft Retail AI Guide Echoes SEO

Microsoft published a playbook early this month to help retailers increase visibility in AI search, browsers, and assistants.

“A guide to AEO and GEO” (PDF), from the heads of Microsoft Shopping and Copilot, and Microsoft Advertising, contains actionable tips worth the read.

Microsoft’s new guide aims to help retailers increase AI visibility.

GEO vs. AEO

The rise of AI platforms has created a proliferation of ill-defined acronyms. The guide attempts to clarify two of them:

  • GEO. Generative engine optimization. “Optimizes content for generative AI search environments (like LLM-powered engines) to make it discoverable, trustworthy, and authoritative.”
  • AEO. Answer/Agentic Engine Optimization. “Optimizes content for AI agents and assistants (like Copilot or ChatGPT) so they can find, understand, and present answers effectively.”

I question the need for new acronyms, as the concepts have existed for years in traditional search engine optimization. “GEO” is synonymous with “EEAT” — Experience, Expertise, Authoritativeness, Trustworthiness — Google’s term for instructing human quality raters.

“AEO” is akin to optimizing for featured snippets in traditional search results.

The key difference is that GEO and AEO focus on the pre-training data of AI products to impact exposure in AI answers.

And GEO extends beyond a site’s content to include external resources such as reviews, Reddit mentions, product-comparison articles, and similar.

Intent-driven product data

To me, the most useful part of the guide reinforces my article on optimizing product feeds for AI. Product feeds and on-page descriptions should clearly address use cases, such as shoes “best for day hikes above 40 degrees.”
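
To make that concrete, here is a hypothetical, intent-first feed entry. The fields are modeled loosely on common shopping-feed attributes, and the product, values, and URLs are made up; the point is how the title and description front-load the use case and the buyer it serves.

```python
# A hypothetical, intent-first product feed entry (illustrative values only).
feed_item = {
    "title": "Ridgeline Mid Hiking Shoe - Waterproof, Wide Fit, Men's",
    # Front-load the use case and benefits: who it's for, the problem it
    # solves, and how it's better.
    "description": (
        "Best for day hikes above 40 degrees Fahrenheit. Built for hikers with "
        "wide feet who want ankle support without the weight of a full boot. "
        "Waterproof membrane and a grippy outsole keep footing secure on wet trails."
    ),
    "link": "https://www.example.com/products/ridgeline-mid",
    "image_link": "https://www.example.com/images/ridgeline-mid.jpg",
    "price": "129.99 USD",
    "availability": "in_stock",
    "brand": "ExampleOutdoors",
    "gtin": "00012345678905",
}
```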

The guide also recommends:

  • Product page titles that are detailed and descriptive.
  • Front-loading product descriptions with benefits: who it’s for, the problem it solves, and how it’s better.
  • Q&As.
  • Comparison tables.
  • Detailed alt text for product images.
  • Complementary products that match the intent.
  • Transcripts for videos.

Social proof

The guide emphasizes the importance of factual entities such as verified customer reviews, certifications, sustainability badges, and partnerships. It warns against using exaggerated or unverifiable claims, stating, “AI systems penalize low-trust language.”

It advises applying social proof consistently across your site and all channels, and verifying any subjective claims about your business or product. For example, if you assert a product is the best in a category, include why, such as “according to [XYZ’s] tests.”

Structured data

Per the guide, structured data markup, such as Schema.org, is key for AI visibility.

However, I’ve seen no evidence to support that recommendation. The guide does not explain how LLMs use Schema. To my knowledge, AI training data does not store Schema markup, and AI bots crawl text-only content.

Yet for live searches, Schema may be helpful because traditional search engines support it, and LLMs rely on those platforms.

Nonetheless, the guide recommends the following (a sample markup sketch follows the list):

  • Schema Types: Product, Offer, AggregateRating, Review, Brand, ItemList, and FAQ.
  • Dynamic fields: price, availability, color, size, SKU, GTIN, and dateModified.
  • ItemList markup for collections and category pages to clarify product groupings.
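
For illustration, here is a minimal sketch of what such markup could look like, built from the types and dynamic fields listed above (Product, Offer, AggregateRating, Brand; price, availability, SKU, GTIN). The product and values are hypothetical; the output is JSON-LD you would place in a script tag of type application/ld+json.

```python
import json

# Minimal Product markup sketch using Schema.org types the guide lists.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Ridgeline Mid Hiking Shoe",
    "sku": "RM-1042",
    "gtin13": "0012345678905",
    "brand": {"@type": "Brand", "name": "ExampleOutdoors"},
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://www.example.com/products/ridgeline-mid",
    },
}

# Print JSON-LD ready to embed in the page.
print(json.dumps(product_schema, indent=2))
```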

While helpful, Microsoft’s “A guide to AEO and GEO” doesn’t introduce anything new. The recommendations align with longstanding SEO tactics and reinforce the views of industry pros.