Who Owns Web Performance? Building A Framework For Digital Accountability via @sejournal, @billhunt

In my previous article, “Closing the Digital Performance Gap,” I made the case that web effectiveness is a business issue, not a marketing metric. The website is no longer just a reflection of your brand – it is your brand. If it’s not delivering measurable business results, that’s a leadership problem, not a team problem.

But there’s a deeper issue underneath that: Who actually owns web performance?

The truth is, many companies don’t have a good answer. Or they think they do until something breaks. The SEO team doesn’t own the infrastructure. The dev team isn’t briefed on platform changes. The content team isn’t looped in until after a redesign. Visibility drops, conversions dip, and someone asks, “Why isn’t our SEO team performing?”

The answer: because the SEO team doesn’t own the full system. No one does.

If we want to close the digital performance gap, we must address this root problem: lack of accountability.

The Fallacy Of Distributed Ownership

The idea that “everyone owns the website” likely stems from early digital transformation initiatives, where cross-functional collaboration was encouraged to break down departmental silos. The intent was to foster shared responsibility across departments – but the unintended consequence was diffused accountability.

It sounds collaborative, but in practice, it often means no one is fully accountable for performance.

Here’s how it typically breaks down:

  • IT owns infrastructure and hosting.
  • Marketing owns content and campaigns.
  • SEO owns visibility – but not implementation.
  • UX owns experience – but not findability.
  • Legal owns compliance – but limits usability.
  • Product owns the content management system (CMS) – but doesn’t track SEO.

Each group is doing its job, often with excellence. But the result? Disconnected execution. Strategy gets lost in translation, and performance stalls.

Case in point: For a global alcohol brand, a site refresh carried a legal requirement mandating an age verification gate before users could access the site. That was the extent of the specification. IT built the gate exactly to spec: a page instructing visitors to enter their birthdate via three pull-down menus for Month, Day, and Year, with the date checked against the U.S. legal drinking age. UX and creative delayed launch for weeks while debating the optimal wording, positioning, and color scheme.

Once launched, website traffic – both direct and organic search – dropped to zero. This was due to several key reasons:

  1. Analytics were not set up to track visits before and after the age gate.
  2. Search engines can’t input a birthdate, so they were blocked.
  3. The age requirement was set to the U.S. standard, rejecting younger, yet legal, visitors from other countries.

Because everything was done in silos, no one had considered these critical details.

When we finally got all stakeholders in a room, agreed on the issues, and sorted through them, we redesigned the system:

  • Search engine crawlers were recognized and allowed to bypass the age gate (a simplified sketch of the first two changes follows this list).
  • The age requirement and date format were adapted to the user’s location.
  • UX developed multiple variations and tested abandonment.
  • Analytics captured pre- and post-gate performance.
  • UX used the data to validate new landing page formats.
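
To make the first two changes concrete, here is a minimal sketch of the gate logic in Python. It is an illustration only – the crawler list, age table, and function name are assumptions rather than the brand’s actual implementation, and production systems verify crawlers via reverse DNS rather than trusting the user-agent string alone.

KNOWN_CRAWLERS = ("Googlebot", "Bingbot", "DuckDuckBot")  # illustrative list
LEGAL_DRINKING_AGE = {"US": 21, "DE": 16, "GB": 18}       # illustrative subset

def requires_age_gate(user_agent: str, country_code: str, visitor_age: int) -> bool:
    # Let recognized search engine crawlers through so the site stays indexable.
    if any(bot in user_agent for bot in KNOWN_CRAWLERS):
        return False
    # Adapt the threshold to the visitor's location; default to 21 if unknown.
    minimum = LEGAL_DRINKING_AGE.get(country_code, 21)
    return visitor_age < minimum

print(requires_age_gate("Googlebot/2.1", "DE", 0))   # False - crawler bypasses the gate
print(requires_age_gate("Mozilla/5.0", "DE", 15))    # True - under the local threshold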

The result? A compliant, user-friendly, and search-accessible module that could be reused globally. Visibility, conversions, and compliance all improved dramatically. But we lost months and millions in potential traffic simply because no one owned the whole picture.

Without centralized accountability, the site was optimized in parts but underperforming as a whole.

The AI Era Raises The Stakes

This kind of siloed ownership might have been manageable in the old “10 blue links” era. But in an AI-first world – where Google and other platforms synthesize content into answers, summarize brands, and bypass traditional click paths – every decision across your digital operation impacts your visibility, trust, and conversion.

Search visibility today depends on structured data, crawlable infrastructure, content relevance, and citation-worthiness. If even one of these is out of alignment, you lose shelf space in the AI-driven SERP. And chances are, the team responsible for the weak link doesn’t even know they’re part of the problem.

Why Most SEO Advice Falls Short

I’ve seen well-meaning advice to “improve your SEO strategy” fall flat – because it assumes the SEO team has control over all the necessary elements. They don’t.

  • You can’t fix crawl issues if you can’t talk to the dev team.
  • You can’t win AI citations if your content team doesn’t structure or enrich their pages.
  • You can’t build authority if your legal or PR teams strip bios and outbound references.

What’s needed isn’t better tactics. It’s organizational clarity.

The Case For Centralized Digital Ownership

To create sustained performance, companies need to designate real ownership over web effectiveness. That doesn’t mean centralizing every task – but it does mean centralizing accountability.

Here are three practical approaches:

1. Establish A Digital Center Of Excellence (CoE)

A CoE provides governance, guidance, and support across business units and regions. It ensures that:

  • Standards are defined and enforced.
  • Platforms are chosen and maintained with shared goals.
  • Learnings are captured and distributed.
  • Key performance indicators (KPIs) are consistent and comparable.

2. Appoint A Digital Effectiveness Officer (DEO)

Think of this like a Commissioning Authority in construction – a role that ensures every component works together to meet the original performance spec. A DEO:

  • Connects the dots between dev, SEO, UX, and content.
  • Tracks impact beyond traffic (revenue, leads, brand trust).
  • Advocates for platform investment and cross-team prioritization.

3. Build Shared KPIs Across Departments

Most teams optimize for what they’re measured on. If the SEO team is judged on rankings but not revenue, and the content team is judged on output but not visibility, you get misaligned efforts. Create chained KPIs that reflect end-to-end performance.

Characteristics Of A Performance-Driven Model

Companies that close the accountability gap tend to share these traits:

  • Unified Taxonomy and Tagging – so content is findable and trackable.
  • Structured Governance – clear roles and escalation paths across teams.
  • Shared Dashboards – everyone sees the same numbers, not vanity metrics.
  • Tech Stack Discipline – fewer, better tools with cross-functional usage.
  • Scenario Planning – AI, zero-click SERPs, and platform volatility are modeled, not ignored.

Final Thought: Performance Requires Ownership

If you’re serious about web effectiveness, you need more than skilled people and good tools. You need a system where someone is truly accountable for how the site performs – across traffic, visibility, UX, conversion, and AI resilience.

This doesn’t mean a top-down mandate. It means orchestrated ownership with clear roles, measurable outcomes, and a strategic anchor.

It’s time to stop asking the SEO team to fix what they don’t control.

It’s time to build a framework where the web is everyone’s responsibility – and someone’s job.

Let’s make web performance a leadership priority, not a guessing game.

Featured Image: SFIO CRACHO/Shutterstock

Google Uses Infinite 301 Redirect Loops For Missing Documentation via @sejournal, @martinibuster

Google removed outdated structured data documentation, but instead of returning a 404 response, they have chosen to redirect the old URLs to a changelog that links back to those same URLs, thereby causing an infinite loop between the pages. Although that is technically not a soft 404, it is an interesting use of a 301 redirect for a missing web page and not how SEOs typically handle missing web pages and 404 server responses. Did Google make a mistake?

Google Removed Structured Data Documentation

Google quietly published a changelog note announcing that it had removed obsolete structured data documentation. The removal was announced three months ago, in June, and today the obsolete documentation was finally taken down.

The missing pages are for the following structured data that is no longer supported:

  • Course info
  • Estimated salary
  • Learning video
  • Special announcement
  • Vehicle listing

Those pages are completely missing. Gone, and likely never coming back. The usual procedure in that kind of situation is to return a 404 Page Not Found server response. But that’s not what is happening.

Instead of a 404 response, Google is returning a 301 redirect back to the changelog. What makes this setup somewhat weird is that Google links back to the missing web page from the changelog, which then redirects back to the changelog, creating an infinite loop between the two pages.

Screenshot Of Changelog

In the above screenshot, I’ve underlined in red the link to the Course Info structured data.

The words “course info” are a link to this URL:
https://developers.google.com/search/docs/appearance/structured-data/course-info

Which redirects right back to the changelog here:
https://developers.google.com/search/updates#september-2025

Which, of course, contains the links to the five URLs that no longer exist, essentially causing an infinite loop.
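
You can confirm the behavior yourself by requesting one of the removed URLs without following redirects. This is a generic diagnostic sketch using Python’s requests library, not an official tool, and the responses may change over time:

import requests

url = "https://developers.google.com/search/docs/appearance/structured-data/course-info"
resp = requests.get(url, allow_redirects=False, timeout=10)
print(resp.status_code)              # expected at the time of writing: 301
print(resp.headers.get("Location"))  # expected: the changelog URL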

It’s not a good user experience and it’s not good for crawlers. So the question is, why did Google do that? 

301 redirects are an option for pages that are missing, so Google is technically correct to use a 301 redirect. However, 301 redirects are generally used to point “to a more accurate URL” which generally means a redirect to a replacement page, one that serves the same or similar purpose.

Technically, they didn’t create a soft 404. But the way they handled the missing pages creates a loop that sends crawlers back and forth between a missing web page and the changelog. It would have been a better user and crawler experience to instead link to the June 2025 blog post that explains why these structured data types are no longer supported, rather than create an infinite loop.

I don’t think it’s anything most SEOs or publishers would do, so why does Google think it’s a good idea?

Featured Image by Shutterstock/Kues

AI Is Changing Local Search Faster Than You Think [Webinar] via @sejournal, @hethr_campbell

For multi-location brands, local search has always been competitive. But 2025 has introduced a new player: AI.

From AI Overviews to Maps Packs, how consumers discover your stores is evolving, and some brands are already pulling ahead.

Robert Cooney, VP of Client Strategy at DAC, and Kyle Harris, Director of Local Optimization, have spent months analyzing enterprise local search trends. Their findings reveal clear gaps between brands that merely appear and those that consistently win visibility across hundreds of locations.

The insights are striking:

  • Some queries favor Maps Packs, others AI Overviews. Winning in both requires strategy, not luck.
  • Multi-generational search habits are shifting. Brands that align content to real consumer behavior capture more attention.
  • The next wave of “agentic search” is coming, and early preparation is the key to staying relevant.

This webinar is your chance to see these insights in action. Walk away with actionable steps to protect your visibility, optimize local presence, and turn AI-driven search into a growth engine for your stores.

📌 Register now to see how enterprise brands are staying ahead of AI in local search. Can’t make it live? Sign up and we’ll send the recording straight to your inbox.

Structured Data’s Role In AI And AI Search Visibility via @sejournal, @marthavanberkel

The way people find and consume information has shifted. We, as marketers, must think about visibility across AI platforms and Google.

The challenge is that we don’t have the same ability to control and measure success as we do with Google and Microsoft, so it feels like we’re flying blind.

Earlier this year, Google, Microsoft, and ChatGPT each commented about how structured data can help LLMs to better understand your digital content.

Structured data can give AI tools the context they need to understand content through entities and relationships. In this new era of search, you could say that context, not content, is king.

Schema Markup Helps To Build A Data Layer

By translating your content into Schema.org and defining the relationships between pages and entities, you are building a data layer for AI. This schema markup data layer, or what I like to call your “content knowledge graph,” tells machines what your brand is, what it offers, and how it should be understood.

This data layer is how your content becomes accessible and understood across a growing range of AI capabilities, including:

  • AI Overviews
  • Chatbots and voice assistants
  • Internal AI systems

Through grounding, structured data can contribute to visibility and discovery across Google, ChatGPT, Bing, and other AI platforms. It also prepares your web data to accelerate your internal AI initiatives.

The same week that Google and Microsoft announced they were using structured data for their generative AI experiences, Google and OpenAI announced their support of the Model Context Protocol.

What Is Model Context Protocol?

In November 2024, Anthropic introduced Model Context Protocol (MCP), “an open protocol that standardizes how applications provide context to LLMs” and was subsequently adopted by OpenAI and Google DeepMind.

You can think of MCP as the USB-C connector for AI applications and agents or an API for AI. “MCP provides a standardized way to connect AI models to different data sources and tools.”

Since we are now thinking of structured data as a strategic data layer, the problem Google and OpenAI need to solve is how to scale their AI capabilities efficiently and cost-effectively. The combination of the structured data on your website and MCP could provide both accuracy in inferencing and the ability to scale.
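
To make the connector analogy concrete: MCP is built on JSON-RPC 2.0, so every exchange between an AI application and a data source is a small, standardized JSON message. Below is a rough sketch of the tool-discovery request an MCP client sends; the method name comes from the protocol, but the id value is arbitrary and all of the surrounding connection plumbing is omitted:

import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # standard MCP method for asking a server what tools it exposes
}
print(json.dumps(request))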

Structured Data Defines Entities And Relationships

LLMs generate answers based on the content they are trained on or connected to. While they primarily learn from unstructured text, their outputs can be strengthened when grounded in clearly defined entities and relationships, for example, via structured data or knowledge graphs.

Structured data can be used as an enhancer that allows enterprises to define key entities and their relationships.

When implemented using Schema.org vocabulary, structured data does the following (see the sketch after this list):

  • Defines the entities on a page: people, products, services, locations, and more.
  • Establishes relationships between those entities.
  • Can reduce hallucinations when LLMs are grounded in structured data through retrieval systems or knowledge graphs.
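
As a concrete illustration, here is a minimal sketch of markup that defines two entities and the relationship between them through an @id reference, emitted with Python for convenience. The names and URLs are placeholders, not a recommended markup set:

import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#organization",  # stable identifier for the entity
    "name": "Example Corp",
    "url": "https://example.com/",
}
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    # The relationship: this product is made by the organization defined above.
    "manufacturer": {"@id": "https://example.com/#organization"},
}
print(json.dumps([org, product], indent=2))

Embedded in pages as JSON-LD, references like this are what allow machines to connect the product back to the organization behind it.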

When schema markup is deployed at scale, it builds a content knowledge graph, a structured data layer that connects your brand’s entities across your site and beyond. 

A recent study by BrightEdge demonstrated that schema markup improved brand presence and perception in Google’s AI Overviews, noting higher citation rates on pages with robust schema markup.

Structured Data As An Enterprise AI Strategy

Enterprises can shift their view of structured data beyond the basic requirements for rich result eligibility to managing a content knowledge graph.

According to Gartner’s 2024 AI Mandates for the Enterprise Survey, participants cite data availability and quality as the top barrier to successful AI implementation.

By implementing structured data and developing a robust content knowledge graph, you can contribute to both external search performance and internal AI enablement.

A scalable schema markup strategy requires:

  • Defined relationships between content and entities: Schema markup properties connect all content and entities across the brand. All page content is connected in context.
  • Entity Governance: Shared definitions and taxonomies across marketing, SEO, content, and product teams.
  • Content Readiness: Ensuring your content is comprehensive, relevant, representative of the topics you want to be known for, and connected to your content knowledge graph.
  • Technical Capability: Cross-functional tools and processes to manage schema markup at scale and ensure accuracy across thousands of pages.

For enterprise teams, structured data is a cross-functional capability that prepares web data to be consumed by internal AI applications.

What To Do Next To Prepare Your Content For AI

Enterprise teams can align their content strategies with AI requirements. Here’s how to get started:

1. Audit your current structured data to identify gaps in coverage and whether schema markup is defining relationships within your website. This context is critical for AI inferencing.

2. Map your brand’s key entities, such as products, services, people, and core topics, and ensure they are clearly defined and consistently marked up with schema markup across your content. This includes identifying the main page that defines an entity, known as the entity home (a brief sketch follows these steps).

3. Build or expand your content knowledge graph by connecting related entities and establishing relationships that AI systems can understand.

4. Integrate structured data into AI budgeting and planning, alongside other AI investments, and clarify whether that content is intended for AI Overviews, chatbots, or internal AI initiatives.

5. Operationalize schema markup management by developing repeatable workflows for creating, reviewing, and updating schema markup at scale.
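
To illustrate the entity home idea from step 2: on the page that defines an entity, the markup’s @id can point at that page’s canonical URL, while sameAs points to external profiles that corroborate the entity. A minimal sketch, with all URLs as placeholders:

import json

entity_home = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://example.com/about/#jane-doe",  # the entity home identifier
    "name": "Jane Doe",
    # External pages that corroborate that this is the same entity.
    "sameAs": [
        "https://www.linkedin.com/in/janedoe",
        "https://en.wikipedia.org/wiki/Jane_Doe",
    ],
}
print(json.dumps(entity_home, indent=2))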

By taking these steps, enterprises can ensure that their data is AI-ready, inside and outside the enterprise.

Structured Data Provides A Machine-Readable Layer

Structured data doesn’t assure placement in AI Overviews or directly control what large language models say about your brand. LLMs are still primarily trained on unstructured text, and AI systems weigh many signals when generating answers.

What structured data does provide is a strategic, machine-readable layer. When used to build a knowledge graph, schema markup defines entities and the relationships between them, creating a reliable framework that AI systems can draw from. This reduces ambiguity, strengthens attribution, and makes it easier to ground outputs in fact-based content when structured data is part of a connected retrieval or grounding system.

By investing in semantic, large-scale schema markup and aligning it across teams, organizations position themselves to be as discoverable in AI experiences as possible.

Featured Image: Koto Amatsukami/Shutterstock

Google’s Antitrust Ruling: What The Remedies Really Mean For Search, SEO, And AI Assistants via @sejournal, @gregjarboe

When Judge Amit P. Mehta issued his long-awaited remedies decision in the Google search antitrust case, the industry exhaled a collective sigh of relief. There would be no breakup of Google, no forced divestiture of Chrome or Android, and no user-facing “choice screen” like the one that reshaped Microsoft’s browser market two decades ago. But make no mistake – this ruling rewrites the playbook for search distribution, data access, and competitive strategy over the next six years.

This article dives into what led to the decision, what it actually requires, and – most importantly – what it means for SEO, PPC, publishers, and the emerging generation of AI-driven search assistants.

What Led To The Decision

The Department of Justice and a coalition of states sued Google in 2020, alleging that the company used exclusionary contracts and massive payments to cement its dominance in search. In August 2024, Judge Mehta ruled that Google had indeed violated antitrust law, writing, “Google is a monopolist, and it has acted as one to maintain its monopoly.” The question then became: what remedies would actually restore competition?

The DOJ and states pushed for sweeping measures – including a breakup of Google’s Chrome browser or Android operating system, and mandatory choice screens on devices. Google countered that such steps would harm consumers and innovation. By the time remedies hearings wrapped, generative AI had exploded into the mainstream, shifting the court’s sense of what competition in search could look like.

What The Court Decided

Judge Mehta’s ruling, issued September 2, 2025, imposed a mix of behavioral remedies:

  • Exclusive contracts banned. Google can no longer strike deals that make it the sole default search engine on browsers, phones, or carriers. That means Apple, Samsung, Mozilla, and mobile carriers can now entertain offers from rivals like Microsoft Bing or newer AI entrants.
  • Payments still allowed. Crucially, the court did not ban Google from paying for placement. Judge Mehta explained that removing payments altogether would “impose substantial harms on distribution partners.” In other words, the checks will keep flowing – but without exclusivity.
  • Index and data sharing. Google must share portions of its search index and some user interaction data with “qualified competitors” on commercial terms. Ads data, however, is excluded. This creates a potential on-ramp for challengers, but it doesn’t hand them the secret sauce of Google’s ranking systems.
  • No breakup, no choice screen. Calls to divest Chrome or Android were rejected as overreach. Similarly, the court declined to mandate a consumer-facing choice screen. Change will come instead through contracts and UX decisions by distribution partners.
  • Six-year oversight. Remedies will be overseen by a technical committee for six years. A revised judgment is due September 10, with remedies taking effect roughly 60 days after final entry.

As Judge Mehta put it, “Courts must… craft remedies with a healthy dose of humility,” noting that generative AI has already “changed the course of this case.”

How The Market Reacted

Investors immediately signaled relief. Alphabet shares jumped ~8% after hours, while Apple gained ~4%. The lack of a breakup, and the preservation of lucrative search placement payments, reassured Wall Street that Google’s search empire was not being dismantled overnight.

But beneath the relief lies a new strategic reality: Google’s moat of exclusivity has been replaced with a marketplace for defaults.

Strategic Insights: Beyond The Headlines

Most coverage of the decision has focused on what didn’t happen – the absence of a breakup or a choice screen. But the deeper story is how distribution, data, and AI will interact under the new rules.

1. Defaults Move From Moat To Marketplace

Under the old model, Google’s exclusive deals ensured it was the default on Safari, Android, and beyond. Now, partners can take money from multiple providers. That turns the default position into a marketplace, not a moat.

Apple, in particular, gains leverage. Court records revealed that Google paid Apple $20 billion in 2022 to remain Safari’s default search engine, and that its default-placement payments totaled $26.3 billion in 2021 – a figure spread across multiple partners, though Apple was likely the largest recipient. Without exclusivity, Apple can entertain bids from Microsoft, OpenAI, or others – potentially extracting even more money by selling multiple placements or rotating defaults.

We may see new UX experiments: rotating search tiles, auction-based setup flows, or AI assistant shortcuts integrated into operating systems. Distribution partners like Samsung or Mozilla could pilot “multi-home defaults,” where Google, Bing, and an AI engine all coexist in visible slots.

2. Data Access Opens An On-Ramp For Challengers

Index-sharing and limited interaction data access lower barriers for rivals. Crawling the web is expensive; licensing Google’s index could accelerate challengers like Bing, Perplexity, or OpenAI’s rumored search product.

But it’s not full parity. Without ads data and ranking signals, competitors must still differentiate on product experience. Think faster answers, vertical specialization, or superior AI integration. As I like to put it: Index access gives challengers legs, not lungs.

Much depends on how “qualified competitor” is defined. A narrow definition could limit access to a token few; a broad one could empower a new wave of vertical and AI-driven search entrants.

3. AI Is Already Shifting The Game

The court acknowledged that generative AI reshaped its view of competition. Assistants like Copilot, Gemini, or Perplexity are increasingly acting as intent routers – answering directly, citing sources, or routing users to transactions without a traditional SERP.

That means the battle for distribution may shift from browsers and search bars to AI copilots embedded in operating systems, apps, and devices. If users increasingly ask their assistant instead of typing a query, exclusivity deals matter less than who owns the assistant.

For SEO and SEM professionals, this accelerates the shift toward zero-click answers, assistant-ready content, and schema that supports citations.

4. Financial Dynamics: Relief Today, Pressure Tomorrow

Yes, investors cheered. But over time, Google could face rising traffic acquisition costs (TAC) as Apple, Samsung, and carriers auction off default positions. Defending its distribution may get more expensive, eating into margins.

At the same time, without a choice screen, search market share is likely to shift gradually, not collapse. Expect Google’s U.S. query share to remain in the high 80s in the near term, with only single-digit erosion as rivals experiment with new models.

5. Knock-On Effects: The Ad-Tech Case Looms

Don’t overlook the second front: the DOJ’s separate antitrust case against Google’s ad-tech stack, now moving toward remedies hearings in Virginia. If that case results in structural changes – say, forcing Google to separate its publisher ad server from its exchange – it could reshape how search ads are bought, measured, and monetized.

For publishers, both cases matter. If rivals gain traction with AI-driven assistants, referral traffic could diversify – but also become more volatile, depending on how assistants handle citations and click-throughs.

What Happens Next

  • September 10, 2025: DOJ and Google file a revised judgment.
  • ~60 days later: Remedies begin taking effect.
  • Six years: Oversight period, with ongoing compliance monitoring.

Key Questions To Watch:

  • How will Apple implement non-exclusive search defaults in Safari?
  • Who qualifies as a “competitor” for index/data access, and on what terms?
  • Will rivals like Microsoft, Perplexity, or OpenAI buy into distribution slots aggressively?
  • How will AI assistants evolve as distribution front doors?

What This Means For SEO And PPC

This ruling isn’t just about contracts in Silicon Valley – it has practical consequences for marketers everywhere.

  • Distribution volatility planning. SEM teams should budget for a world where Safari queries become more contestable. Test Bing Ads, Copilot Ads, and assistant placements.
  • Assistant-ready content. Optimize for concise, cite-worthy answers with schema markup. Publish FAQs, data tables, and source-friendly content that large language models (LLMs) like to quote.
  • Syndication hedge. If new index-sharing programs emerge, explore partnerships with vertical search startups. Early pilots could deliver traffic streams outside the Google ecosystem.
  • Attribution resilience. As assistants mediate more traffic, referral strings will get messy. Double down on UTM governance, server-side tracking, and marketing mix models to parse signal from noise.
  • Creative testing. Build two-tier content: a punchy, fact-dense abstract that assistants can lift, and a deeper explainer for human readers.

Market Scenarios

  • Base Case (Most Likely): Google retains high-80s market share. TAC costs rise gradually. AI assistants siphon a modest share of informational queries by 2027. Impact: margin pressure more than market share loss.
  • Upside for Rivals: If index access is broad and AI assistants nail UX, Bing, Perplexity, and others could win five to 10 points combined in specific verticals. Impact: SEM arbitrage opportunities emerge, and SEO adapts to answer-first surfaces.
  • Regulatory Cascade: If the ad-tech remedies impose structural changes, Google’s measurement edge narrows, and OEMs test choice-like UX voluntarily. Impact: more fragmentation, more testing for marketers.

Final Takeaway

Judge Mehta summed up the challenge well: “Courts must craft remedies with a healthy dose of humility.” The ruling doesn’t topple Google, but it does force the search giant to compete on more open terms. Exclusivity is gone; auctions and assistants are in.

For marketers, the message is clear: Don’t wait for regulators to rebalance the playing field. Diversify now – across engines, assistants, and ad formats. Optimize for answerability as much as for rankings. And be ready: The real competition for search traffic is just beginning.

Featured Image: beast01/Shutterstock

The Problem With Always-On SEO: Why You Need Sprints, Not Checklists via @sejournal, @coreydmorris

There’s a lot that goes into SEO. And, now, more broadly into being found online and online visibility overall, whether we’re talking about an organic result in a search engine, an AI Overview, or through a large language model (LLM).

SEO is a discipline that often takes a long time (compared to ads and some other channels and platforms) and carries a large amount of complexity, technical detail, contradiction about how it works, and even outright disagreement. Because of that, it has to be organized in a way that can be implemented.

Over the years and decades, this has resulted in the acceptance of specific “best practices,” along with the fact that it is a longer-term commitment. That, ultimately, has led to the use of checklists and specific cadences to accomplish what is typically seen as an “ongoing” and never-ending discipline.

In full disclosure, you’ll find articles written by me that talk about checklists and ways to structure the work that is important to be visible and found online. I’m not saying we have to throw them out, but we can’t simply do the list or activities.

“Always-on SEO” sounds great in theory: ongoing optimization, constant monitoring, and steady progress. But in reality, it often becomes a nebulous set of tasks without priority, strategy, or momentum.

This article challenges the default mindset of treating SEO as a perpetual checklist and proposes a sprint-based approach, where work is grouped into focused time blocks with measurable goals.

By approaching SEO in strategic sprints, teams can prioritize, measure, adapt, and improve – all while staying aligned with larger business goals.

The Problem With Perpetual SEO Checklists

What I often see with SEO checklists is a lack of prioritization. Everything becomes a task, but nothing is deemed critical.

The checklist might have “right” and “good” things in it, but it isn’t weighted or prioritized based on any level of strategic approach or potential level of impact.

And, when there’s a lack of direction, we often can end up with a set of actions, activities, or tactics that have no clear end or evaluation defined. This ends up getting us into a place of just “doing SEO” without being able to objectively say what the result was or how things were improved.

Like any digital marketing channel, activity without the right anchor or foundation, in SEO, can result in wasted effort.

Technical fixes and content updates may not support meaningful business goals and can be a huge investment of time and money that ultimately don’t impact the business. And, activity without results or clear direction can drive SEO teams and professionals to boredom or burnout.

I’ve taken over a number of situations where, amid stakeholder confusion, a business thought SEO didn’t work for them or that the team was not competent enough.

When activity doesn’t generate results and you find it out a year into an investment, it is hard to recover, especially when no one really knows what “done” or what success looks like in the effort.

I say all of this not to bring up pain, say that checklists aren’t good, or even that the ongoing tactics aren’t right. I’m simply saying we have to have a deeper understanding and meaning behind what we’re doing in SEO.

What Sprint-Based SEO Looks Like

SEO sprints are focused and time-bound (e.g., four weeks) efforts with specific goals tied to strategy. Rather than working on everything at once, you work on the highest-impact priorities in chunks.

Common sprint types:

  • Content optimization sprints.
  • Technical SEO fix sprints.
  • Internal linking improvement sprints.
  • New content creation sprints.
  • Authority/link building sprints.

You can also combine types into a custom sprint. Regardless of whether you stay in a category or make one that contains blended themes or tactics, it needs to be anchored to an initial strategy, plan, or audit for your first one.

Each sprint ends with measurable outputs, documented outcomes, and clear learnings. The first one might be rooted in an initial plan, but each subsequent sprint will include a retrospective review from the previous one to help fuel continuous learning, efficiencies, improvements, and ultimate impact.

Benefits Of SEO Sprints

A quick win benefit is gaining focus. Pivoting away from a generic checklist to sprint structure results in solving a defined problem, not tackling a vague backlog.

As noted earlier, sprints are time-based as well. By choosing the right length (not so short that the sample size is too small, nor so long that you keep repeating tactics that aren’t effective), you gain the benefits of agility and an adaptable longer-term approach overall.

Agility in sprints allows you to adjust based on performance and new insights. Checklists are not only generic or often disconnected from strategy, but are getting out of date constantly with shifts in online visibility optimization sources and methods.

Accountability and team clarity come more naturally as well. It’s easier to report on and justify value with clear before/after comparisons and to keep people engaged and in the know on what’s happening now and what’s next.

This matters for aligning key performance indicators (KPIs) with the overall business, rather than getting too deep and lost in jargon and technical aspects while “hoping” for return on investment (ROI) instead of pursuing shorter-term, higher-impact efforts.

Sprints can be tied directly to goals (revenue, lead generation, funnel support) and not just rankings or other KPIs that are upstream and further removed from business outcomes, and shorter-term expectations can take pressure off of long-term waiting for something to happen.

How To Implement Sprint-Based SEO

Start with strategy. Identify what matters to the business and where SEO fits. Define sprint themes and objectives, and make them specific enough to be meaningful and measurable.

Example: “Improve organic conversions for top 5 services pages” vs. “Improve rankings.”

Build a backlog or tactics plan, but don’t treat it like a checklist. Use it to feed sprint plans, but not overwhelm day-to-day work.

In short:

  • Plan your first sprint: Choose one clear objective, timeline, and outcome.
  • Track and review: Report on progress, document what was done, and define what’s next.
  • Iterate: Use learnings from each sprint to improve the next.

When (And Where) “Always-On” SEO Still Applies

Certain things do need continuous attention. I’m not saying that it is right for 100% of your sprints to be 100% custom.

There are recurring things that could, or likely should, go into sprints or be monitored and maintained by regular or routine audits or checklists, e.g., crawl errors, broken links, technical issues, etc.

But, this maintenance work shouldn’t be the SEO strategy; it should support it. Use “always-on” work as infrastructure or basics, not direction. And remember: the checklist isn’t the strategy. If you have one, it is a planning tool, not necessarily your tactical plan and roadmap to ultimate SEO ROI.

Why It’s Time To Rethink “Always-On” SEO

I’ve hit on it enough, but I will wrap up by reminding you that endless to-do lists don’t move the needle.

Checklists can be good things and full of the “right” tactics. However, they often lack strategy and don’t serve shorter attention spans or allow for enough agility.

Sprint-based SEO helps teams be more strategic, productive, and aligned with the business overall, with room to implement prioritized tactics, tied to overall goals, and adjust to market and business needs and conditions.

Shifting your team from “always-on” to “intentionally paced” is a move to start seeing results and not just activity.

Featured Image: wenich_mit/Shutterstock

Google Antitrust Case: AI Overviews Use FastSearch, Not Links via @sejournal, @martinibuster

A sharp-eyed search marketer discovered the reason why Google’s AI Overviews showed spammy web pages. The recent Memorandum Opinion in the Google antitrust case features a passage that offers a clue as to why that happened – and, the marketer speculates, it reflects Google’s move away from links as a prominent ranking factor.

Ryan Jones, founder of SERPrecon (LinkedIn profile), called attention to a passage in the recent Memorandum Opinion that shows how Google grounds its Gemini models.

Grounding Generative AI Answers

The passage occurs in a section about grounding answers with search data. Ordinarily, it’s fair to assume that links play a role in ranking the web pages that an AI system retrieves when it sends a query to an internal search engine. So when someone asks Google’s AI Overviews a question, the system queries Google Search and then creates a summary from those search results.

But apparently, that’s not how it works at Google. Google has a separate algorithm that retrieves fewer web documents and does so at a faster rate.

The passage reads:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. Rem. Tr. at 3509:23–3511:4 (Reid). FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. Id. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Ryan Jones shared these insights:

“This is interesting and confirms both what many of us thought and what we were seeing in early tests. What does it mean? It means for grounding Google doesn’t use the same search algorithm. They need it to be faster but they also don’t care about as many signals. They just need text that backs up what they’re saying.

…There’s probably a bunch of spam and quality signals that don’t get computed for fastsearch either. That would explain how/why in early versions we saw some spammy sites and even penalized sites showing up in AI overviews.”

He goes on to share his opinion that links aren’t playing a role here because the grounding uses semantic relevance.

What Is FastSearch?

Elsewhere the Memorandum shares that FastSearch generates limited search results:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

Now the question is, what’s the RankEmbed model?

The Memorandum explains that RankEmbed is a deep-learning model. In simple terms, a deep-learning model identifies patterns in massive datasets and can, for example, identify semantic meanings and relationships. It does not understand anything in the same way that a human does; it is essentially identifying patterns and correlations.

The Memorandum has a passage that explains:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets. …(Allan)

…Google has developed various “top-level” signals that are inputs to producing the final score for a web page. Id. at 2793:5–2794:9 (Allan) (discussing RDXD-20.018). Among Google’s top-level signals are those measuring a web page’s quality and popularity. Id.; RDX0041 at -001.

Signals developed through deep-learning models, like RankEmbed, also are among Google’s top-level signals.”

User-Side Data

RankEmbed uses “user-side” data. The Memorandum, in a section about the kind of data Google should provide to competitors, describes RankEmbed (which FastSearch is based on) in this manner:

“User-side Data used to train, build, or operate the RankEmbed model(s); “

Elsewhere it shares:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: _____% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

Then:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms. PXR0171 at -086 (“Embedding based retrieval is effective at semantic matching of docs and queries”);

…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…RankEmbed particularly helped Google improve its answers to long-tail queries.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.

…The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.

…RankEmbedBERT needs to be retrained to reflect fresh data…”

A New Perspective On AI Search

Is it true that links do not play a role in selecting web pages for AI Overviews? Google’s FastSearch prioritizes speed. Ryan Jones theorizes that it could mean Google uses multiple indexes, with one specific to FastSearch made up of sites that tend to get visits. That may be a reflection of the RankEmbed part of FastSearch, which is said to be a combination of “click-and-query data” and human rater data.

Regarding human rater data, with billions or trillions of pages in an index, it would be impossible for raters to manually rate more than a tiny fraction. So it follows that the human rater data is used to provide quality-labeled examples for training. Labeled data are examples that a model is trained on so that the patterns inherent to identifying a high-quality page or low-quality page can become more apparent.
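
For readers unfamiliar with the “semantic matching of docs and queries” quoted above, embedding-based retrieval works roughly like the toy sketch below. This is a generic illustration of the technique, not Google’s actual FastSearch or RankEmbed, and the vectors are hand-made stand-ins for what a trained model would produce:

import numpy as np

# Toy 3-dimensional embeddings; a real system derives these from a deep-learning model.
docs = {
    "page_a": np.array([0.9, 0.1, 0.0]),
    "page_b": np.array([0.2, 0.8, 0.1]),
}
query = np.array([0.85, 0.15, 0.05])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank pages purely by semantic closeness to the query - no link signals involved.
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked)  # ['page_a', 'page_b'] - page_a is closest in meaning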

Featured Image by Shutterstock/Cookie Studio

8 Generative Engine Optimization (GEO) Strategies For Boosting AI Visibility in 2025 via @sejournal, @samanyougarg

This post was sponsored by Writesonic. The opinions expressed in this article are the sponsor’s own.

AI search now makes the first decision.

When? Before a buyer hits your website.

If you’re not part of the AI answer, you’re not part of the deal. In fact, 89% of B2B buyers use AI platforms like ChatGPT for research.

Picture this:

  • A founder at a 12-person SaaS asks, “best CRM for a 10-person B2B startup.”
  • The AI answer cites a TechRadar roundup, an r/SaaS thread, and a fresh comparison – not you.
  • Your brand is missing.
  • They book demos with two rivals.
  • You never hear about it.

Here’s why: AI search works on intent, not keywords.

It reads content, then grounds answers with sources. It leans on third-party citations, community threads, and trusted publications. It trusts what others say about you more than what you say about yourself.

Most Generative Engine Optimization (GEO) tools stop at the surface. They track mentions, list prompts you missed, and ship dashboards. They do not explain why you are invisible or what to fix. Brands get reports, not steps.

We went hands-on. We analyzed millions of conversations and ran controlled tests. The result is a practical playbook: eight strategies that explain the why, give a quick diagnostic, and end with actions you can ship this week.

Off-Page Authority Builders For AI Search Visibility

1. Find & Fix Your Citation Gaps

Citation gaps are the highest-leverage strategy most brands miss.

Translation: This is an easy win for you.

What Is A Citation Gap?

A citation gap is when AI platforms cite web pages that mention your competitors but not you. These cited pages become the sources AI uses to generate its answers.

Think of it like this:

  • When someone asks ChatGPT about CRMs, it pulls information from specific web pages to craft its response.
  • If those source pages mention your competitors but not you, AI recommends them instead of your brand.

Finding and fixing these gaps means getting your brand mentioned on the exact pages AI already trusts and cites as sources.

Why You Need Citations In Answer Engines

If you’re not cited in an answer engine, you are essentially invisible.

Let’s break this down.

TechRadar publishes “21 Best Collaboration Tools for Remote Teams” mentioning:

  • Asana.
  • Monday.
  • Notion.

When users ask ChatGPT about remote project management, AI cites this TechRadar article.

Your competitors appear in every response. You don’t.

How To Fix Citation Gaps

That TechRadar article gets cited for dozens of queries, including “best remote work tools,” “Monday alternatives,” “startup project management.”

Get mentioned in that article, and you appear in all those AI responses. One placement creates visibility across multiple search variations.

Contact the TechRadar author with genuine value, such as:

  • Exclusive data about remote productivity.
  • Unique use cases they missed.
  • Updated features that change the comparison.

The beauty? It’s completely scalable.

Quick Win:

  1. Identify 50 high-authority articles where competitors are mentioned but you’re not (see the sketch after this list).
  2. Get into even 10 of them, and your AI visibility multiplies exponentially.
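
A minimal sketch for step 1, assuming you already have a list of candidate URLs – the brand names, the URL, and the naive substring matching are all illustrative:

import requests

BRAND = "YourBrand"                                    # placeholder
COMPETITORS = ["Asana", "Monday.com", "Notion"]        # placeholders
candidate_urls = ["https://example.com/best-collaboration-tools"]

for url in candidate_urls:
    html = requests.get(url, timeout=10).text.lower()
    mentions_rivals = any(name.lower() in html for name in COMPETITORS)
    mentions_you = BRAND.lower() in html
    if mentions_rivals and not mentions_you:
        print(f"Citation gap: {url}")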

2. Engage In The Reddit & UGC Discussions That AI References

Social platforms. Image created by Writesonic, August 2025.

AI trusts real user conversations over marketing content.

Reddit citations in AI overviews surged from 1.3% to 7.15% in just three months, a 450% increase. User-generated content now makes up 21.74% of all AI citations.

Why You Should Add Your Brand To Reddit & UGC Conversations

Put Reddit, Quora, LinkedIn Pulse, and industry forums together, and you’ve found where AI gets most of its trusted information.

If you show up as “trusted” information, your visibility increases.

How To Inject Your Brand Into AI-Sourced Conversations

Let’s say a Reddit thread titled “Best project management tool for a startup with 10 people?” gets cited whenever users ask about startup tools.

Since AI already cites these threads, if you enter the conversation with a thoughtful contribution, it can be included in future AI answers.

Pro Tip #1: Don’t just promote your brand. Share genuine insights, such as:

  • Hidden costs.
  • Scaling challenges.
  • Migration tips.

Quick Win:

Find and join the discussions AI seems to trust:

  • Reddit threads with 50+ responses.
  • High-upvote Quora answers in your industry.
  • LinkedIn Pulse articles from recognized experts.
  • Active forum discussions with detailed experiences.

Pro Tip #2: Finding which articles get cited and which Reddit threads AI trusts takes forever manually. GEO platforms automate this discovery, showing you exactly which publications to pitch and which discussions to join.

On-Page Optimization For GEO

3. Study Which Topics Get Cited Most, Then Write Them

Something we’re discovering: when AI gives hundreds of citations for a topic, it’s not just citing one amazing article.

Instead, AI pulls from multiple sites covering that same topic.

If you haven’t written about that topic at all, you’re invisible while competitors win.

Consider Topic Clusters To Get Cited

Let’s say you’re performing a content gap analysis for GEO.

You notice these articles all getting 100+ AI citations:

  • “Best Project Management Software for Small Teams”
  • “Top 10 Project Management Tools for Startups”
  • “Project Management Software for Teams Under 20”

Different titles, same intent: small teams need project management software.

When users ask, “PM tool for my startup,” AI might cite 2-3 of these articles together for a comprehensive answer.

Ask “affordable project management,” and AI pulls different ones. The point is that these topics cluster around the same user need.

How To Outperform Competitors In AI Generated Search Answers

Identify intent clusters for your topic and create one comprehensive piece on your own website so your own content gets cited.

In this example, we’d suggest writing “Best Project Management Software for Small Teams (Under 50 People).”

It should cover startups, SMBs, and budget considerations all in one authoritative guide.

Quick Win:

  • Find 20 high-citation topic clusters you’re missing.
  • Create comprehensive content for each cluster.
  • Study what makes the top versions work, such as structure, depth, and comparison tables.
  • Then make yours better with fresher data and broader coverage.

4. Update Content Regularly To Maintain AI Visibility

AI platforms heavily favor recent content.

Content from the past two to three months dominates AI citations, with freshness being a key ranking factor. If your content appears outdated, AI tends to overlook it in favor of newer alternatives.

Why You Should Keep Your Content Up To Date For GEO Visibility

Let’s say your “Email Marketing Best Practices” from 2023 used to get AI citations.

Now it’s losing to articles with 2025 data. AI sees the date and chooses fresher content every time.

How To Keep Your Content Fresh Enough To Be Cited In AIOs

Weekly refresh for top 10 pages:

  • Add two to three new statistics.
  • Include a recent case study.
  • Update “Last Modified” date prominently.
  • Add one new FAQ.
  • Append “(Updated August 2025)” to the title.

Bi-weekly, on less important pages:

  • Replace outdated examples.
  • Update internal links.
  • Rewrite the weakest section.
  • Add seasonal relevance.

Pro Tip: Track your content’s AI visibility systematically. Certain advanced GEO tools alert you when pages lose citations, so you know exactly what to refresh and when.

5. Create “X vs Y” And “X vs Y vs Z” Comparison Pages

Users constantly ask AI to help them choose between options. AI platforms love comparison content. They even prompt users to compare features and create comparison tables.

Pages that deliver these structured comparisons dominate AI search results.

Common questions flooding AI platforms:

  • “Slack vs Microsoft Teams for remote work”
  • “HubSpot vs Salesforce for small business”
  • “Asana or Monday for creative agencies”

AI can’t answer these without citing detailed comparisons. Generic blog posts don’t work. Promotional content gets ignored.

Create comprehensive comparisons like: “Asana vs Monday vs ClickUp: Project Management for Creative Teams.”

How To Create Comparisons That Have High Visibility On SERPs

Use a content structure that wins:

  • Quick decision matrix upfront.
  • Pricing breakdown by team size.
  • Feature-by-feature comparison table.
  • Integrations.
  • Learning curve and onboarding time.
  • Best for: specific use cases.

Make it genuinely balanced:

  • Asana: “Overwhelming for teams under 5”
  • Monday: “Gets expensive with add-ons”
  • ClickUp: “Steep learning curve initially”

Include your product naturally in the comparison. Be honest about limitations while highlighting genuine advantages.

AI prefers citing fair comparisons over biased reviews. Include real limitations, actual pricing (not just “starting at”), and honest trade-offs. This builds trust that gets you cited repeatedly.

Technical GEO To Do Right Now

6. Fix Robots.txt Blocking AI Crawlers

Most websites accidentally block the very bots they want to attract. Like putting a “Do Not Enter” sign on your store while wondering why customers aren’t coming in.

ChatGPT uses three bots:

  • ChatGPT-User: Main bot serving actual queries (your money maker).
  • OAI-SearchBot: Activates when users click search toggle.
  • GPTBot: Collects training data for future models.

Strategic decision: Publications worried about content theft might block GPTBot. Product companies should allow it, however, because you want future AI models trained on your content for long-term visibility.

Essential bots to allow:

  • Claude-Web (Anthropic).
  • PerplexityBot.
  • GoogleOther (Gemini).

Add to robots.txt:

User-agent: ChatGPT-User
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: PerplexityBot
Allow: /

# GoogleOther covers Gemini, per the "essential bots" list above
User-agent: GoogleOther
Allow: /

Verify it’s working: Check server logs for these user agents actively crawling your content. No crawl activity means no AI visibility.
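
One way to run that check, assuming a standard access log in which the user-agent string appears on each line – the log path and bot names are illustrative:

from collections import Counter

AI_BOTS = ["ChatGPT-User", "Claude-Web", "PerplexityBot", "GoogleOther"]
hits = Counter()

with open("/var/log/nginx/access.log") as log:  # path is an assumption
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1

print(hits)  # a zero count suggests that bot never reaches your content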

7. Fix Broken Pages For AI Crawlers

Just like Google Search Console shows Googlebot errors, you need visibility for AI crawlers. But AI bots behave differently and can be aggressive.

Monitor AI bot-specific issues:

  • 404 errors on important pages.
  • 500 server errors during crawls.
  • Timeout issues when bots access content.

If your key product pages error when ChatGPT crawls them, you’ll never appear in AI responses.

Common problems:

  • AI crawlers triggering DDoS protection.
  • CDN security blocking legitimate bots.
  • Rate limiting preventing full crawls.

Fix: Whitelist AI bots in your CDN (Cloudflare, Fastly). Set up server-side tracking to differentiate AI crawlers from regular traffic. No errors = AI can cite you.

8. Avoid JavaScript For Main Content

Most AI crawlers can’t execute JavaScript. If your content loads dynamically, you’re invisible to AI.

Quick test: Disable JavaScript in your browser. Visit key pages. Can you see the main content, product descriptions, and key information?

Blank page = AI sees nothing.
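
You can approximate the same test from a script: fetch the raw HTML without executing any JavaScript and check whether key copy is present, much as a non-rendering crawler would see it. The URL and phrase are placeholders:

import requests

html = requests.get("https://example.com/product", timeout=10).text
# If this copy only appears after client-side rendering, it won't be in the raw HTML.
print("Acme Widget 3000" in html)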

Solutions:

  • Server-side rendering (Next.js, Nuxt.js).
  • Static site generators (Gatsby, Hugo).
  • Progressive enhancement (core content works without JS).

Bottom line: If it needs JavaScript to display, AI can’t read it. Fix this or stay invisible.

Take Action Now

People ask ChatGPT, Claude, and Perplexity for recommendations every day. If you’re missing from those answers, you’re missing deals.

These eight strategies boil down to three moves: get mentioned where AI already looks (high-authority sites and Reddit threads), create content AI wants to cite (comparisons and fresh updates), and fix the technical blocks keeping AI out (robots.txt and JavaScript issues).

You can do all this manually: track mentions in spreadsheets, find citation gaps by hand, and update content weekly. It works at a smaller scale, but it consumes time and requires a larger team.

Writesonic provides a GEO platform that goes beyond tracking to give you precise actions to boost visibility – create new content, refresh existing pages, or reach out to sites that mention competitors but not you.

Plus, get real AI search volumes to prioritize high-impact prompts.


Image Credits

Featured Image: Image by Writesonic. Used with permission.

In-Post Image: Image by Writesonic. Used with permission.

What To Expect AT NESS 2025: Surviving The AI-First Era via @sejournal, @NewsSEO_

This post was sponsored by NESS. The opinions expressed in this article are the sponsor’s own.

For anyone who isn’t paying attention to news SEO because they feel it isn’t relevant to their niche – think again.

The foundations of SEO are underpinned by publishing content. Therefore, news SEO is relevant to all SEO. We are all publishers online.

John Shehata and Barry Adams are the experts within this vertical and, between them, have experience working with most of the top news publications worldwide.

Together, they founded the News and Editorial SEO Summit (NESS) in 2021, and in the last four years, the SEO industry has seen the most significant and rapid changes since it began 30 years ago.

I spoke to both John and Barry to get their insights into some of the current issues SEOs face, how SEO can survive this AI-first era, and to get a preview of the topics to be discussed at their upcoming fifth NESS event to be held on October 21-22, 2025.

You can watch the full interview at the end of this article.

SEO Repackaged For The AI Era

I started out by commenting that recently, at Google Search Central Live in Thailand, Gary Illyes came out to say that there is no difference between GEO, AEO, and SEO. I asked Barry what he thought about this and if the introduction of AI Mode is going to continue taking away publisher traffic.

Surprisingly, Barry agreed with Google, saying, “It’s SEO. It’s just SEO. I fully agree with what the Googlers are saying on this front, and it’s not often that I fully agree with Googlers.”

He went on to say, “I have yet to find any LLM optimization strategy that is not also an SEO strategy. It’s just SEO repackaged for the AI era so that agencies can charge more money without actually creating any more added value.”

AI Mode Is A Threat To Publisher Traffic

While AI Overviews have drawn significant attention, Barry identifies AI Mode as a more serious threat to publisher traffic.

Unlike AI Overviews, which still display traditional search results alongside AI-generated summaries, AI Mode creates an immersive conversational experience that encourages users to continue their search journey within Google’s ecosystem.

Barry warns that if AI Mode becomes the default search experience, it could be “insanely damaging for the web because it’s just going to make a lot of traffic evaporate without any chance of recovery.”

He added, “If you can maintain your traffic from search at the moment, you’re already doing better than most.”

Moving Up The Value Chain

At NESS, John will be speaking about how to survive this AI-first era, and I asked him for a preview of how SEOs can survive what is happening right now.

John highlighted a major issue: “Number one, I think SEOs need to move up the value chain. And I have been saying this for a long time, SEOs cannot be only about keywords and rankings. It has to be much bigger than that.”

He then went on to talk about three key areas as solutions: building topical authority, traffic diversification, and direct audience relationships.

“They [news publishers] need to think about revenue diversification as well as going back to some traditional revenue streams, such as events or syndication. They also need to build their own direct relationships with users, either through apps or newsletters. And newsletters never got the attention they deserve in any of the different brands I’m familiar with, but now it’s gaining more traction. It’s extremely important.”

Quality Journalism Is Crucial For Publishers

Despite the AI disruption, both John and Barry stress that technical SEO fundamentals remain important – but only up to a point.

“You have to make sure the foundations are in place,” Barry notes, but he believes the technical can only take you so far. After that, investment in content is critical.

“When those foundations are at the level where there’s not much value in getting further optimization, then the publisher has to do the hard work of producing the content that builds the brand. The foundation can only get you so far. But if you don’t have the foundation, you are building a house on quicksand and you’re not going to be able to get much traction anyway.”

John also noted that “it’s important to double down on technical elements of the site.” He went on to say, “While I think you need to look at your schema, your speed, all of the elements, the plumbing, just to make sure that whatever channel you work with has good access and good understanding of your data.”

Barry concluded by reaffirming the importance of content quality. “The content is really what needs to shine. And if you don’t have that in place, if you don’t have that unique brand voice, that quality journalism, then why are you in business in the first place?”

The AI Agents Question

James Carson and Marie Haynes are both speaking about AI agents at NESS 2025, and when I asked Barry and John about the introduction of AI agents into newsrooms, the conversation was both optimistic and cautious.

John sees significant potential for AI to handle research tasks, document summarization, and basic content creation for standardized reporting like market updates or sports scores.

“A lot of SEO teams are using AI to recommend Google Discover headlines that intrigue curiosity, checking certain SEO elements on the site and so on. So I think more and more we have seen AI integrated not to write the content itself, but to guide the content and optimize the efficiency of the whole process,” John commented.

However, Barry remains skeptical about current AI agent reliability for enterprise environments.

“You cannot give an AI agent your credit card details to start shopping on your behalf, and then it just starts making things up and ends up spending thousands of your dollars on the wrong things … The AI agents are nowhere near that maturity level yet and I’m not entirely sure they will ever be at that maturity level because I do think the current large language model technology has fundamental limitations.”

John countered that “AI agents can save us hundreds of hours, hundreds.” He went on to say, “These three elements together, automation, AI agents, and human supervision together can be a really powerful combination, but not AI agent completely solo. And I agree with Barry, it can lead to disastrous consequences.”

Looking Forward

The AI-first era demands honest acknowledgment of changed realities. Easy search traffic growth is over, but opportunities exist for publishers willing to adapt strategically.

Success requires focusing on unique value propositions, building direct audience relationships, and maintaining technical excellence while accepting that traditional growth metrics may no longer apply.

The future belongs to publishers who understand that survival means focusing on their audience and building authentic connections with the people who value their specific perspective and expertise.

Watch the full interview below.


If you’re a news publisher or an SEO, you cannot afford to miss the fifth NESS on October 21-22, 2025.

SEJ readers have a special 20% discount on tickets. Just use the code “SEJ2025” at the checkout here.

Headline speakers include Marie Haynes, Mike King, Lily Ray, Kevin Indig, and of course John Shehata and Barry Adams.

Over two days, 20 speakers representing top news publishers will take the stage, including Carly Steven (Daily Mail), Maddie Shepherd (CBS), Christine Liang (The New York Times), and Jessie Willms (The Guardian), among others.

Check out the full schedule here.


Featured Image: Shelley Walsh/Search Engine Journal/ NESS

The CMO & SEO: Staying Ahead Of The Multi-AI Search Platform Shift (Part 1)

Some of the critical questions that are top of mind for both SEOs and CMOs as we head into a multi-search world are: Where is search going to develop? Is ChatGPT a threat or an opportunity? Is optimizing for large language models (LLMs) the same as optimizing for search engines?

In this two-part interview series, I try to answer these questions to provide some clear direction and focus to help navigate considerable change.

What you will learn:

  • Ecosystem Evolution: While it is still a Google-first world, learn where native AI search platforms are growing and what this means.
  • Opportunity vs. Threat: Why AI platforms create unprecedented brand visibility opportunities while demanding new return on investment (ROI) thinking.
  • LLM Optimization Strategy: Why SEO has become more vital than ever, regardless of the AI or search platform, and where the specific nuances to optimize for lie.
  • CMO Priorities: Why authority and trust signals matter more than ever in AI-driven search.
  • Organizational Alignment: Why CMOs need to integrate marketing, PR, and technical teams for cohesive AI-first search strategies.

Where Do You Think The Current Search Ecosystem Might Develop In The Next 6 Months?

To answer the first question, I think we are witnessing something really fascinating right now. The search landscape is undergoing a fundamental transformation that will accelerate significantly over the next six months.

While Google still dominates with about 90% market share, AI-powered search platforms are experiencing explosive growth that is impossible to ignore.

Let me put this in perspective. ChatGPT is showing 21% month-over-month growth and is on track to hit 700 million weekly active users.

Claude and Perplexity are posting similar numbers at 21% and 19% growth, respectively. But here is what has caught my attention: Grok has seen over 1,000% month-over-month growth (source: BrightEdge Generative Parser and DataCube analysis, July 2025).

Sure, it is starting from a tiny base, but that trajectory makes it the dark horse to watch. Meanwhile, DeepSeek continues its gradual decline following its January surge, which highlights the volatility in this emerging market. I will share more on that later.

In A Google-First World, User Behavior Is Also Evolving Across Multiple AI Platforms

What is particularly interesting is how user behavior is evolving. People are not just switching from Google to AI search — they are starting to mix and match platforms based on their specific needs. I am seeing users turn to:

  • ChatGPT for deep research.
  • Perplexity for quick facts.
  • Claude when they need reliable information.
  • Google when they want comprehensive breadth.

Image from BrightEdge, August 2025

The CMO AI And SEO Mindset Shift

From a marketing perspective, this creates a massive change in thinking. SEO is not just about Google anymore – though that is still where most of the focus needs to be.

Marketers will need to consider optimizing for multiple AI engines, each with its own distinct data ingestion pipelines. For ChatGPT and Claude, you need clear, structured, cited content that AI models can safely reuse. For Perplexity, timeliness, credibility, and brevity matter more than traditional keyword density.

It is no longer about optimizing just for clicks; it is about optimizing for influence and citations and making sure you appear in the proper context at the right moment within all these distinct types of AI experiences.

The Search Bot To AI User Agent Revolution

ChatGPT and its ChatGPT-User agent are leading the charge.

In July, BrightEdge’s analysis revealed that real-time page requests from ChatGPT’s user agent nearly doubled. In other words, the number of users relying on real-time web searches to answer questions almost doubled within just one month.

For example, suppose you want to compare “Apple Watch vs. Fitbit” using current reviews. In that case, the ChatGPT user agent acts as your browsing assistant, operating on your behalf – which is fundamentally different from traditional search engines and crawlers.

Image from BrightEdge, August 2025

In summary, I believe the next six months will establish what I term a “multi-AI search world.” Users will become increasingly comfortable switching between platforms fluidly based on what they need in that moment. The opportunity here is massive for early adopters who figure out cross-platform optimization.

Is The Rise Of AI Platforms Like ChatGPT An Opportunity Or A Threat That CMOs Need To Be Aware Of?

It is all opportunity.

Each AI platform is carving out its own distinct identity. Google is doubling down on AI Overviews and AI Mode. ChatGPT is making this fascinating transition from conversational Q&A into full web search integration.

Perplexity is cementing itself as the premier “answer engine” with its citation-first, mobile-focused approach, and they are planning deeper integrations with news providers and real-time data.

Claude is expanding beyond conversation into contextual search with superior fact-checking capabilities, while Microsoft’s Bing Copilot is positioning itself as this search-plus-productivity hybrid that seamlessly blends document generation with web search.

The rise of AI platforms represents both a transformative opportunity and a strategic challenge that CMOs must navigate with sophistication and strategic foresight.

Learn More: How Enterprise Search And AI Intelligence Reveal Market Pulse

CMOs And The Shift From Ranking To Referencing And Citations

And that brings me to a huge mindset shift: We are moving from “ranking” to “referencing.” AI summaries do not just display the top 10 links; they reference and attribute sites within the answer itself.

Being cited within an AI summary can be more impactful than just ranking high in traditional blue links. So, CMOs need to start tracking not just where they rank, but where and how their content gets referenced and cited by AI everywhere.

Technical Infrastructure Requirements And CMOs Leaning Into SEO Teams

On the technical side, structured data and clear information architecture are no longer nice-to-haves – they are foundational. AI relies on this structure to surface accurate information, so schema.org markup, clean technical SEO, and machine-readable content formats are essential.
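To make that concrete, schema.org markup typically ships as a JSON-LD block in the page head. A minimal sketch in Python (all field values are placeholders; the vocabulary itself is defined by schema.org):

```python
import json

# Placeholder values; the @context/@type vocabulary is defined by schema.org.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Staying Ahead Of The Multi-AI Search Platform Shift",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2025-08-01",
    "publisher": {"@type": "Organization", "name": "Example Media"},
}

# Wrap the markup in the script tag that belongs in the page <head>.
print(f'<script type="application/ld+json">\n{json.dumps(article, indent=2)}\n</script>')
```

In practice, your CMS or tag manager would emit this automatically, but the structured block is what AI systems parse.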

Image from BrightEdge, August 2025

Brands, The CMO, And The Authority And Trust Premium

Here is something that is becoming critical: Authority and brand trust matter more than ever. AI tends to pull from sites it considers authoritative, trustworthy, and frequently cited. This puts a premium on long-term brand-building, thought leadership, and reputation management across all digital channels.

You need to focus on those E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness) for both humans and AI algorithms.

The CMOs’ SEO And AI Competitive Advantage

The CMOs who are proactively adapting to these shifts – rethinking measurement, technical SEO, brand trust, and cross-team integration – are the ones positioning their enterprises for continued visibility and influence.

The move to AI-driven search is rapid, but savvy enterprise marketers are seeing this as an opportunity to deepen brand engagement and become a trusted source for both human users and AI engines.

It is challenging, but the potential upside for brands that get this right is enormous.

It is a whole new way of thinking about ROI.

Learn More: How AI Search Should Be Shaping Your CEO’s & CMO’s Strategy

Do You Think Optimizing For LLMs Is The Same As Search Engines, As Google Suggests?

Following Google Search Central Live in Thailand, and Gary’s advice that SEOs don’t need to optimize separately for GEO, I think Gary is absolutely right. Putting acronym debates behind us, foundational SEO remains the same, particularly with Google Search.

SEO has never been more vital, and AI is accelerating the need for specialists in this area. Your website still needs to be fast, mobile-friendly, and technically sound. Search engines and AI systems alike need to crawl and index your content efficiently. Technical optimizations like proper URL structures, XML sitemaps, clean code, and fast loading times still pay dividends.

The CMO, SEO, And LLM Optimization Fundamentals

Now, when we talk about optimizing for all LLMs, the reality is similar: success still lies in core SEO – primarily technical SEO – and content fundamentals.

Strong internal linking helps AI crawlers understand how your pages connect. Make sure all pages are easily crawlable. Answer related questions throughout your content using clear headings, schema markup, and FAQ sections. Figure out what people are trying to accomplish, give them the answer, and become the cited source in AI results.
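As one way to operationalize the crawlability check, here is a minimal sketch (Python standard library only; the start URL and the 50-page cap are placeholders) that walks same-site links and flags pages that fail to load:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

START = "https://example.com/"  # placeholder start page
HOST = urlparse(START).netloc
MAX_PAGES = 50  # small cap to keep the sketch polite

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

seen, queue = {START}, [START]
while queue and len(seen) <= MAX_PAGES:
    url = queue.pop()
    try:
        with urlopen(url, timeout=10) as resp:
            status, html = resp.status, resp.read().decode("utf-8", "ignore")
    except Exception as err:  # HTTP errors, timeouts, bad URLs
        print(f"BROKEN  {url}: {err}")
        continue
    print(f"OK {status}  {url}")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        full = urljoin(url, href).split("#")[0]
        # Stay on the same host and skip pages we have already queued.
        if urlparse(full).netloc == HOST and full not in seen:
            seen.add(full)
            queue.append(full)
```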

LLM Platform-Specific Differentiation

However, as more brands are being discovered and interpreted across multiple AI platforms, it is also vital to understand that each has its own interface, logic, and way of shaping brand perceptions.

Each platform has developed distinct strengths: ChatGPT Search provides a comprehensive narrative context. Perplexity shines with visual integration and related content. Google AI Overview excels at structured, hierarchical information.

Here is a nuanced example. When users ask comparison questions like “what’s the best?”, ChatGPT’s and Google’s approaches are similar. But when users ask action-oriented questions like “how do I?”, they part ways dramatically. ChatGPT acts like a trusted coach for decision-making, while Google AI remains the research assistant.

Image from BrightEdge, August 2025

Trust Signal Variations

Different platforms also show distinct trust signal patterns. Google AI Overviews tends to cite review sites and community sources like Reddit, asking, “What does the community think?”

ChatGPT appears to favor retail sources more frequently, asking, “Where can you buy it?” This suggests these platforms are developing different approaches to trust and authority validation.

Three-Phase AI Optimization Framework For The CMO And Marketing Teams

Here is a framework for organizations to follow.

  • Start by tracking your AI and brand presence across multiple AI engines. Monitor how your visibility evolves over time through citations and mentions across AI Overviews, ChatGPT, and beyond (a minimal sketch of this step follows the list).
  • Next, focus on understanding variations in brand mentions across key prompts. Quickly identify which prompts from ChatGPT, AI Overviews, and other AI search engines generate brand mentions so you can optimize your content efficiently.
  • Finally, dive deeper into specific prompts to understand why AI systems recommend brands. Utilizing sentiment analysis provides precise insights into which brand attributes each AI engine favors.
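To illustrate the first phase, here is a minimal sketch of prompt-level mention tracking against one engine (it assumes the OpenAI Python SDK with an API key configured; the brand, prompts, and model name are placeholders, and other platforms would need their own clients):

```python
from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY

BRAND = "Acme Analytics"  # placeholder brand name
PROMPTS = [               # placeholder prompts worth monitoring
    "What are the best enterprise SEO platforms?",
    "Which tools track brand visibility in AI search?",
]

client = OpenAI()
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    status = "MENTIONED" if BRAND.lower() in answer.lower() else "absent"
    print(f"{status:<10}| {prompt}")
```

Logged on a schedule, results like these give you the over-time visibility baseline the first step calls for.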

Learn More: The Triple-P Framework: AI & Search Brand Presence, Perception & Performance

The CMO: AI, Search, And Cross-Team Integration Thinking

One thing I am seeing work well is tighter integration across marketing and communications teams. Paid and organic strategies must align more than ever because ads and organic AI overviews often get presented together – your messaging, branding, and targeted intent need to be entirely consistent.

Plus, your PR and content teams need better coordination because off-site mentions in media, reviews, and authoritative sites directly influence who gets cited in AI summaries.

Conclusion: Embracing The Multi-AI Search Transformation

The CMOs who are proactively adapting to the shifts are positioning their organizations for sustained competitive advantage in this evolving landscape.

To put this all in perspective, here is the big picture.

The 3 Big Questions From CMOs On AI And Search

  1. “AI will kill Google”: No, it has turbocharged it.
  2. “SEO is dead”: No, it’s actually more important than ever. AI is reshaping search, which means we need to understand what this transformation entails. Generative Engine Optimization (GEO) builds upon core SEO foundations and requires more integrated, higher-quality technical approaches.
  3. “Everything changes”: The more things change, the more they stay the same.

In Part 2 of this series, topics covered will include the future of traditional SERP search and how agentic SEO might change the search funnel. Learn how these changes impact the role of SEO and all teams that fall under the CMO remit.


Featured Image: jd8/Shutterstock