The Death Of The Static GBP: Why Dynamic Profiles Are The New Local Ranking Factor via @sejournal, @AdamHeitzman

You probably set up your Google Business Profile a while back, filled in your address, picked your categories, maybe chased down a few reviews, and then called it done. Totally understandable. That was enough, once.

But here’s what’s changed: If you haven’t meaningfully touched that profile in months, you’re losing visibility to competitors who figured out something you haven’t yet. Google transformed GBP from a directory listing into a live engagement surface, and businesses that treat it like the former are quietly bleeding map pack rankings they don’t even know they’ve lost.

This applies to every local business. Retailers, yes, but also law firms, dental practices, restaurants, gyms, plumbers, and salons. If your GBP isn’t actively signaling to Google that you’re open for business and earning it every day, you’re leaving real visibility on the table.

Let’s talk about what killed the static profile, what Google built in its place, and exactly what you need to do about it.

When “Set It And Forget It” Actually Worked

Cast your mind back to the directory era. You filled out your name, address, and phone number (NAP), chose a category, uploaded a logo, and crossed your fingers. Google treated these profiles as reference points, fixed coordinates in the physical world. The algorithm cared about NAP consistency across directories more than anything else. Match your citations across 50 listing sites? You were golden.

It worked because that’s genuinely all Google needed. The platform was confirming you existed at a given address. Nothing more.

The New Table Stakes (And Why They’re Not Enough)

Those fundamentals haven’t disappeared; they’ve just become the entry fee. According to the 2026 Local Search Ranking Factors report, the primary GBP category is still the No. 1 factor for local pack visibility, followed by proximity to the searcher and keywords in the business title. These matter enormously. But when every serious competitor has them dialed in, they stop being differentiators.

Screenshot from Whitespark, March 2026

The report also makes clear that behavioral and engagement signals (posts, photos, clicks, calls, direction requests, and review cadence) are climbing fast in importance. Google is actively rewarding businesses that “look alive.”

There’s also a finding worth pausing on: Being open when users search is now the No. 5 local pack ranking factor. Your hours aren’t just informational; they’re a ranking signal. This was first noted by Joy Hawkins of Sterling Sky and subsequently confirmed by a BrightLocal study of 50 businesses across 10 categories, which found that rankings tended to drop when a business was listed as closed. Don’t treat your hours as a set-and-forget field. Audit them quarterly, set special hours for holidays before the holiday arrives (not after), and consider whether your current hours are costing you visibility during high-intent search windows.

A static profile with perfect NAP and a 4.8-star rating is like showing up to a job interview in a great suit but refusing to speak. You look the part, but you’re not convincing anyone you’re the right choice.

Google’s Shift: From Listings To Live Engagement

Google didn’t randomly decide to make GBP harder to manage. They followed user behavior. People aren’t browsing businesses anymore; they’re searching with immediate intent. “Who can help me with this right now?” isn’t a research question; it’s a decision waiting to happen.

So Google built GBP into an active engagement surface. For retailers, that meant integrating Merchant Center so real-time product inventory could surface directly in search results and Maps. For service businesses, it means appointment booking, Q&A, and posting activity are all live signals. For restaurants, it’s menus, wait times, and reservation links. The platform expects ongoing input, and it rewards the businesses that provide it.

The core principle is the same whether you sell hiking boots or handle divorces: Google favors profiles that continuously demonstrate relevance and activity. The mechanism differs by business type. The outcome doesn’t.

The Signals That Actually Move The Needle

Review Velocity, Not Just Review Volume

Reviews have always mattered, but the 2026 Local Search Ranking Factors report adds important nuance. Fresh reviews don’t just help you rank; they help people pick you over a competitor with the same star rating. Research further confirms that review signals are gaining influence across local rankings: proximity earns you the look, but review content helps secure the top spot.

Do this: Make review requests part of your operational workflow. Send the ask within 24 hours of a completed service or transaction while the experience is fresh. Respond to every review, positive and negative, within 48 hours. Owner responses are an engagement signal, not just a reputation management courtesy.

Not that: Don’t batch review requests monthly or rely on a generic follow-up email. Don’t respond to positive reviews with a copy-paste “Thanks for your feedback!” Google and potential customers can both tell.

A law firm that earns 12 reviews over three years and one that earns 12 reviews over three months are sending very different signals to the algorithm, even with identical star ratings.

GBP Posts: The Most Underused Freshness Signal

Most businesses either never post to GBP or publish one post in January and forget it exists. That’s a significant missed opportunity. Posts, whether offers, updates, events, or business news, are a direct freshness signal that tells Google your profile is actively managed.

Do this: Post at least once a week. Tie posts to things that are actually happening: a seasonal promotion, a recently completed project, a staff milestone, or a local event you’re involved in. Use the “Offer” post type when you have something time-sensitive; the expiry date creates urgency and signals recency.

Not that: Don’t recycle the same “Welcome to our business!” post every few months. Don’t post only when you remember to; build it into a recurring task, same as you would any other content channel. And don’t ignore the post types Google gives you; Events and Offers get more real estate in the profile than standard Updates.

Photos: Recency Matters As Much As Quality

According to Birdeye’s State of Google Business Profile 2025 report, verified profiles with photos consistently receive more website visits, direction requests, and calls, and listings with recent photos and video see measurably higher engagement than those with stale or infrequently updated imagery. That “recently updated” part is key. A profile with 80 photos, all uploaded three years ago, isn’t sending the same freshness signal as one with steady uploads over recent months.

Do this: Set a recurring reminder to upload new photos at least twice a month. Show real things: recent work, your current team, your updated space, seasonal inventory. For service businesses, job-site photos and before/after shots are gold; they’re authentic, specific, and far more compelling than stock imagery.

Not that: Don’t upload a batch of 50 photos once a year and call it done. Don’t use obviously staged or stock photos as your primary images; research on competitor GBP analysis shows that photo quality and authenticity are increasingly factored into how profiles are perceived. And don’t ignore customer-uploaded photos; respond to them or flag inappropriate ones rather than leaving them unattended.

Booking And Messaging: Closing The Loop Inside Google

Google increasingly wants to keep searchers inside its own ecosystem. For local businesses, that means enabling every feature your business type supports: “Book Online” links, appointment URLs, and the Q&A section. These aren’t just convenience features; they’re engagement signals. When a user books directly through your GBP, that interaction tells Google your profile is functional and driving real-world action.

Do this: If your business supports appointments, connect a booking link (Google supports integrations with platforms like Booksy, Vagaro, OpenTable, and others). Seed your Q&A section with the three to five questions customers actually ask most, and answer them yourself before strangers do it for you.

Not that: Don’t leave your Q&A section empty or unmonitored; unanswered questions (or worse, inaccurate answers from random users) erode trust and represent a missed engagement opportunity.

For Retailers: Real-Time Inventory Is Its Own Category

If you sell physical products, everything above applies, but you have an additional lever that service businesses don’t: real-time inventory.

Google integrated Merchant Center with GBP specifically to surface what’s on your shelves in search results and Maps.

Do this: Prioritize your top 50 highest-intent, most-searched products first. Get those live and accurate before trying to sync your entire catalog. Add product schema markup to your website’s product pages so your feed and your site are telling Google the same thing.

Not that: Don’t upload a feed manually once a week and assume that’s close enough to real-time. Don’t skip the Merchant Center diagnostics step; a feed with errors will silently underperform, and you won’t know why until you check. And don’t assume inventory feeds only matter for paid ads; enabling free local listings through Merchant Center unlocks organic product visibility in search, Maps, and your GBP profile at no additional cost.
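
On the schema point, here’s a minimal sketch of what Product markup on a product page can look like. Every value shown (name, SKU, price, image, URL) is a placeholder, and many ecommerce platforms generate this markup automatically, so treat it as an illustration rather than a drop-in snippet.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Running Shoe",
  "sku": "TRS-1042",
  "image": "https://www.example.com/images/trail-shoe.jpg",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/products/trail-shoe",
    "price": "129.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>

The goal is that your Merchant Center feed and your product pages describe the same item consistently; price or availability mismatches between the two can surface as errors in Merchant Center diagnostics.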

The AI Layer: Why This All Matters More Than Ever

Here’s the dimension that makes everything above more urgent: GBP signals are now feeding directly into AI-driven local results, not just the traditional map pack.

Google’s AI Mode pulls from the same signals discussed in this article: review recency and sentiment, photo freshness, post activity, accurate hours, and service completeness. The Whitespark 2026 report also introduced an AI Search Visibility category for the first time, with three of the top five AI visibility factors being citation- and entity-based signals. Businesses that keep their GBP current and consistent are the ones being surfaced in AI-generated answers. Businesses with stale profiles aren’t just losing map pack spots; they’re becoming invisible to AI-driven discovery entirely.

Treat every update you make to your GBP not just as a ranking tactic for the traditional local pack, but as a data signal for AI systems that are increasingly acting as the front door to local search. Accurate hours, fresh photos, recent reviews, and complete service descriptions aren’t just best practices; they’re the inputs AI needs to confidently recommend your business.

What To Measure

Once you’re actively managing your profile, track what’s actually moving:

  • Profile interactions: Calls, direction requests, website clicks, and (where applicable) booking clicks tell you which features are actually driving action.
  • Review velocity: Not just your total count, but how many reviews you’re earning per month and how quickly you’re responding.
  • Post engagement: Views and clicks on GBP posts help you understand which content types your local audience actually responds to. For retailers, add product impressions and store visit conversions to this list.

The Compounding Effect

Here’s what makes dynamic GBP management so powerful over time: the signals compound. Consistent posting builds freshness and authority. Steady review velocity builds trust signals. Updated photos drive higher engagement. Higher engagement improves rankings. Better rankings bring more profile views, more reviews, and more interactions, which further improve rankings. And now, all of those same signals are feeding AI systems that are reshaping how local businesses get discovered in the first place.

Local visibility is increasingly built on engagement, credibility, and connection, not just keyword optimization. Static profiles erode authority over time. Dynamic profiles compound it.

The businesses treating GBP like a compliance checkbox are the ones watching competitors steal map pack spots they used to own. The ones showing up consistently (posting, earning reviews, updating photos, keeping information current, and, for retailers, feeding Google live inventory) are building durable local visibility that’s genuinely hard to disrupt, whether the search happens in the traditional map pack or in an AI-generated answer.

That’s the gap. The only question is which side of it you want to be on.


Answer Engine Optimization: How To Get Your Content Into AI Responses via @sejournal, @slobodanmanic

This is Part 2 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO and why the shift matters. This article gets practical: how AI systems actually select content, and what you can do about it.

AI Doesn’t Rank Pages. It Selects Fragments.

Traditional search ranks whole pages. AI search does something fundamentally different.

Microsoft’s Krishna Madhavan, principal product manager on the Bing team, described the shift in October 2025: AI assistants “break content down, a process called parsing, into smaller, structured pieces that can be evaluated for authority and relevance. Those pieces are then assembled into answers, often drawing from multiple sources to create a single, coherent response.”

This is the core insight. AI doesn’t pick the best page and show it. It picks the best fragments from many pages and weaves them together. Your page might rank No. 1 on Google and still not get cited in an AI response if its content isn’t structured in fragments that AI can extract.

The numbers show the shift is real. According to the Conductor AEO/GEO Benchmarks Report (January 2026; 13,770 domains, 17 million AI responses), AI traffic now accounts for 1.08% of all website sessions, growing roughly 1% month over month. Microsoft reported that AI referrals to top websites spiked 357% year-over-year in June 2025, reaching 1.13 billion visits. Small numbers today, compounding fast.

One in four Google searches now triggers an AI Overview. In healthcare, it’s nearly one in two. The surface area is growing, and the content that fills these answers has to come from somewhere. The question is whether it comes from you.

The Research: What Actually Gets Cited

The academic research on what makes content citable in AI responses has matured rapidly. The foundational paper, “GEO: Generative Engine Optimization” (Princeton, IIT Delhi, Georgia Tech, published at KDD 2024), tested nine optimization strategies and found that GEO techniques could boost visibility by up to 40% in AI responses. The most effective single technique was citing credible sources, which produced a 115.1% visibility increase for websites that weren’t already ranking in the top positions.

A counterintuitive finding: Writing in an authoritative or persuasive tone did not improve AI visibility. AI systems don’t respond to rhetorical style. They respond to verifiable information.

Since then, 2025 brought a wave of follow-up research that tested these ideas on real production AI engines rather than simulated ones.

The University of Toronto study (September 2025) was the first large-scale analysis across ChatGPT, Perplexity, Gemini, and Claude. Their most striking finding: AI search overwhelmingly favors earned media. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time, compared to Google’s 54.1%. Automotive showed a similar pattern at 81.9% versus 45.1%. In other words, it’s not just how you write content, but whose domain it appears on. Press coverage, product reviews on independent websites, and mentions on industry publications carry far more weight in AI responses than your own website.

Carnegie Mellon’s AutoGEO study (October 2025) used automated methods to discover what generative engines actually prefer. The results showed up to 50.99% improvement over the best baseline, with universal preferences emerging across engines: comprehensive topic coverage, factual accuracy with citations, clear logical structure with headings and lists, and direct answers to queries.

The GEO-16 framework (September 2025) analyzed 1,702 real citations from Brave, Google AI Overviews, and Perplexity. It identified 16 on-page quality factors that predict citation likelihood. The top three: metadata and freshness, semantic HTML, and structured data. Technical on-page factors matter as much as the quality of the writing itself.

And a reality check from Columbia and MIT’s ecommerce study (November 2025): of 15 common content rewriting heuristics, 10 produced negligible or negative results. The optimization strategies that did work converged toward truthfulness, user intent alignment, and competitive differentiation. Not tricks. Substance.

The overall pattern across all of this research: AI systems reward clarity, factual accuracy, and structure. They don’t reward marketing language, persuasion tactics, or keyword density.

Content Structure That Earns Citations

Based on the research and official guidance from Microsoft and Google, here’s what structurally makes content citable.

Heading hierarchy matters more than ever. Use descriptive H2 and H3 headings that each cover one specific idea. Microsoft lists strong headings as “signals that help AI know where a complete idea starts and ends.” Vague headings like “Learn More” or “Overview” give AI nothing to work with. A heading like “How AI parses content differently than search engines” tells the system exactly what the section contains.

Q&A format is native to AI. Write questions as headings with direct answers below them. Microsoft notes that “assistants can often lift these pairs word for word into AI-generated responses.” If your content answers the question someone asks an AI, and it’s structured as a clear question-and-answer pair, you’ve made the AI’s job easy.

Make content snippable. Bulleted and numbered lists, comparison tables, step-by-step instructions. These formats give AI clean, extractable fragments. A paragraph buried in a wall of text is harder for AI to isolate than the same information presented as a three-item list.

Front-load the answer. Start sections with the key information, then provide context. If someone asks, “What temperature should I bake bread at?” and your content opens with a two-paragraph history of bread making before mentioning 375°F, you’ll lose the citation to a competitor who leads with the answer.

Keep sections self-contained. Each section should make sense on its own, without requiring the reader to have read the previous section. AI extracts fragments. If your fragment only makes sense in the context of the whole page, it won’t be selected.

An important technical note from Microsoft: “Don’t hide important answers in tabs or expandable menus: AI systems may not render hidden content, so key details can be skipped.” FAQ answers collapsed inside an expandable menu, product specs hidden behind tabs, content that requires interaction to reveal: it may all be invisible to AI. If information is important, it needs to be in the visible HTML.
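
To make the structural advice concrete, here’s a small sketch in HTML. The heading text, answer copy, and element names are invented for illustration; the point is the shape: a descriptive question heading, a front-loaded answer in the first sentence, and supporting detail kept in the visible markup rather than behind tabs or accordions.

<!-- A descriptive question heading marks where a complete idea starts -->
<h2>What temperature should I bake bread at?</h2>
<!-- Front-loaded, self-contained answer that an assistant can lift as a unit -->
<p>Most standard loaves bake at 375°F. Context and exceptions follow below.</p>

<!-- Supporting detail stays in the visible HTML, not inside a collapsed tab -->
<h3>When a higher or lower temperature makes sense</h3>
<p>Details for specific dough types go here, each in its own short, extractable paragraph.</p>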

Authority Signals For AI

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t just a Google concept anymore. It’s what AI systems look for across the board, even if they don’t use the term.

Microsoft’s October 2025 guidance describes the baseline: success starts with content that is “fresh, authoritative, structured, and semantically clear.” On the clarity side, they’re specific: “avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.” Saying something is “next-gen” or “cutting-edge” without context leaves AI unsure how to classify it.

The research backs this up. The original GEO paper found that writing in a persuasive or authoritative tone did not improve AI visibility. Facts and cited sources did. Marketing language doesn’t impress algorithms.

This connects to the University of Toronto’s finding about earned media dominance. AI systems trust third-party validation more than self-promotion. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time compared to Google’s 54.1%. The implication: getting your expertise published on industry websites, earning press coverage, and building a presence on authoritative platforms matters more for AI visibility than perfecting the copy on your own site.

Freshness is a signal, not a bonus. Stale content rarely gets cited. Krishna Madhavan said at Pubcon Cyber Week: “Stale or missing content will constrain the amount of retrieval we can do and push agents toward alternative sources.”

Schema Markup: From Text To Knowledge

Microsoft’s October 2025 post devotes an entire section to schema. They describe it as code that “turns plain text into structured data that machines can interpret with confidence.” Schema can label your content as a product, review, FAQ, or event, giving AI systems explicit context instead of forcing them to guess. Krishna Madhavan reinforced this at Pubcon: “Schemas are super useful. They help the system discern exactly what your information is without us having to guess.”

The GEO-16 framework confirms this from the academic side. Structured data was one of the top three factors predicting AI citation likelihood, alongside metadata/freshness and semantic HTML.

The schema types that matter most for AI visibility:

  • FAQPage for question-and-answer content (directly maps to how AI formats responses; a minimal example follows this list).
  • HowTo for step-by-step instructions.
  • Product with Offer, AggregateRating, and Review for ecommerce.
  • Article/BlogPosting for content with clear authorship and dates.
  • Organization for business identity.
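
For illustration, a minimal FAQPage block might look like the following. The question and answer text are placeholders written to echo this article’s own framing; real markup should mirror the visible Q&A content on the page.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How do AI assistants select content to cite?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "They parse pages into smaller fragments, evaluate those fragments for relevance and authority, and assemble answers from multiple sources."
      }
    }
  ]
}
</script>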

Pair structured data with IndexNow for freshness. As the Bing Webmaster Blog put it: “IndexNow tells search engines that something has changed, while structured data tells them what has changed. Together, they improve both speed and accuracy in indexing.”
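
A minimal IndexNow submission is a single JSON POST. The host, key, and URL below are placeholders; the key file must be hosted at the location you declare so the endpoint can verify you control the site.

POST https://api.indexnow.org/indexnow
Content-Type: application/json

{
  "host": "www.example.com",
  "key": "a1b2c3d4e5f6",
  "keyLocation": "https://www.example.com/a1b2c3d4e5f6.txt",
  "urlList": [
    "https://www.example.com/guides/updated-article"
  ]
}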

Crawler Permissions: Who Gets In

AI search engines use distinct crawlers, and most let you control training and search access separately. Here’s who to allow.

Bot | Platform | Purpose | Robots.txt Token
OAI-SearchBot | ChatGPT | Search index | OAI-SearchBot
GPTBot | OpenAI | Model training | GPTBot
ChatGPT-User | ChatGPT | On-demand browsing | ChatGPT-User
Bingbot | Microsoft Copilot | Search + AI | Bingbot
Googlebot | Google AI Overviews | Search + AI | Googlebot
Google-Extended | Google | Gemini training | Google-Extended
PerplexityBot | Perplexity | Search + index | PerplexityBot
Perplexity-User | Perplexity | On-demand browsing | Perplexity-User
ClaudeBot | Anthropic | Training + retrieval | ClaudeBot

A sensible robots.txt configuration might allow search crawlers while blocking training:

# Allow ChatGPT's search crawler so pages can appear in ChatGPT search results
User-agent: OAI-SearchBot
Allow: /

# Allow on-demand fetches when a ChatGPT user asks for a specific page
User-agent: ChatGPT-User
Allow: /

# Block OpenAI's model-training crawler
User-agent: GPTBot
Disallow: /

# Block Gemini training (this does not affect AI Overviews, which use Googlebot)
User-agent: Google-Extended
Disallow: /

OpenAI provides the cleanest bot separation. You can allow OAI-SearchBot (so your content appears in ChatGPT search) while blocking GPTBot (so it’s not used for model training). Google’s controls are less granular: blocking Google-Extended prevents Gemini training but has no effect on AI Overviews, which use Googlebot.

OpenAI also offers the most specific technical recommendation of any AI search provider. For their Atlas browser (which uses a standard Chrome user agent, not a bot identifier), they recommend following WAI-ARIA best practices: “Add descriptive roles, labels, and states to interactive elements like buttons, menus, and forms. This helps ChatGPT recognize what each element does and interact with your site more accurately.” Accessibility and AI agent compatibility are the same work.
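
A small sketch of that advice in practice (the labels, IDs, and copy are invented): descriptive accessible names tell an agent what a control does, and state attributes tell it what will happen when the control is used.

<!-- An accessible name that describes the action, not just the visible label -->
<button type="button" aria-label="Add the selected plan to your cart">Add to cart</button>

<!-- Expose state on expandable controls, and keep the content itself in the HTML -->
<button type="button" aria-expanded="false" aria-controls="shipping-details">Shipping details</button>
<div id="shipping-details">Orders ship within two business days.</div>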

A caveat on Perplexity: while their documentation states they respect robots.txt, Cloudflare documented in August 2025 that Perplexity uses undeclared crawlers with rotating IPs and spoofed browser user agents to bypass no-crawl directives. This is a contested claim, but it’s worth knowing.

For revenue, Perplexity is the only platform currently offering publisher compensation. Their Comet Plus program provides an 80/20 revenue split (publishers keep 80%) across direct visits, search citations, and agent actions.

Google Vs. Microsoft: Two Philosophies

The contrast between Google and Microsoft on AEO is striking enough to be its own story.

Google says: just do good SEO. Their official documentation is deliberately minimalist: “There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.” They add that you “don’t need to create new machine readable files, AI text files, or markup to appear in these features.”

Google recommends helpful, reliable, people-first content demonstrating E-E-A-T. Standard structured data. Good page experience. Technical basics. Nothing AI-specific.

Microsoft says: here’s the playbook. Their October 2025 blog post and January 2026 guide provide detailed, actionable guidance. Specific heading structures. Schema recommendations. Content formatting rules. Concrete examples (an AEO product description vs. a GEO product description). Warnings about content hidden in tabs and expandable menus. A framework for thinking about crawled data, product feeds, and live website data as three distinct layers.

What explains the difference? Partly market position. Google dominates search and has less incentive to help publishers optimize for AI features that might reduce clicks to their websites. Microsoft, with Bing’s roughly 8% market share, benefits from providing publishers with reasons to optimize specifically for their ecosystem.

But there’s a practical takeaway: Microsoft’s guidance isn’t Bing-specific. The principles of structured content, clear headings, snippable formats, schema markup, and expert authority are universal. Following Microsoft’s playbook improves your content for every AI system, including Google’s. Google just won’t tell you that.

Measuring AI Visibility

This is the hard part. Traditional SEO has Google Search Console. AI visibility is still fragmented.

Ahrefs analyzed 1.9 million citations from 1 million AI Overviews and found that 76% of citations come from pages already ranking in Google’s top 10. The median ranking for the most-cited URLs was position 2. Traditional ranking still matters for AI citation, but being No. 1 is “a coin flip at best” for getting cited.

The traffic impact is significant. Ahrefs found that AI Overviews correlate with 58% lower click-through rates for the No. 1 position. Seer Interactive reported a 61% organic CTR drop for queries with AI Overviews. But being cited within the AI Overview gives 35% more organic clicks compared to not being cited. Citation is the new ranking.

For tracking, the tool landscape is emerging:

Tool | What It Tracks | Starting Price
Profound | Citations across ChatGPT, Perplexity, Copilot, Google AIOs | From $99/mo
Peec.ai | Brand mentions across ChatGPT, Gemini, Claude, Perplexity | From ~$95/mo
Advanced Web Ranking | AIO presence tracking in Google | Included in plans
Bing Webmaster Tools | AI Performance Report for Copilot | Free

Bing Webmaster Tools is the easiest starting point. It’s free, and the new AI Performance Report shows how your content performs in Copilot citations. For ChatGPT specifically, track utm_source=chatgpt.com in your analytics. OpenAI automatically appends this to referral URLs.
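
For example, a visitor who clicks through from a ChatGPT answer lands on a URL like the hypothetical one below, so a simple segment or filter on that query parameter isolates ChatGPT-referred sessions in whatever analytics platform you use:

https://www.example.com/pricing?utm_source=chatgpt.com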

Conductor’s January 2026 report found that 87.4% of AI referral traffic comes from ChatGPT. That’s one platform dominating the space, which makes tracking it particularly important.

Key Takeaways

  • AI selects fragments, not pages. Structure your content in self-contained, extractable sections with descriptive headings that signal where each idea starts and ends.
  • Clarity beats persuasion. Factual accuracy, cited sources, and direct answers outperform authoritative tone and marketing language. The research consistently shows this.
  • Earned media dominates brand content in AI citations. Press coverage, third-party reviews, and authoritative mentions on other websites carry more weight than your own pages. Build presence beyond your domain.
  • Schema markup is a force multiplier. FAQPage, HowTo, Product, and Article schemas make your content machine-readable. Pair with IndexNow for freshness.
  • Follow Microsoft’s playbook, even for Google. Google says “just do good SEO.” Microsoft provides specific, actionable guidance that improves content for every AI system, Google’s included.
  • Separate training from search in your robots.txt. Allow search crawlers (OAI-SearchBot, Bingbot, PerplexityBot) while blocking training crawlers (GPTBot, Google-Extended) if that’s your preference. You have more control than you might think.
  • Track AI visibility now. Use Bing Webmaster Tools (free), monitor utm_source=chatgpt.com in analytics, and consider dedicated tools as the measurement space matures.

Traditional SEO asked: “How do I rank?” AEO asks: “How do I become the fragment that gets selected?” The answer isn’t a single trick. It’s clear structure, verifiable expertise, and content that AI can confidently extract and cite.

Up next in Part 3: the protocols powering the agentic web, including MCP, A2A, NLWeb, and AGENTS.md, and how they fit together.

This was originally published on No Hacks.



Why Google’s New “Google-Agent” Is The Biggest Mindset Shift In SEO History via @sejournal, @marie_haynes

Imagine a web where a machine fills out your lead forms, buys your products, and negotiates with your backend. While many SEOs are currently arguing over whether to block AI search bots, whether to call ourselves SEOs or GEOs, or how to get mentioned in ChatGPT, the reality of the web is moving far past simple crawlers. With Google’s new WebMCP and Google-Agent bot, the agentic web is already here.

The Web Is Becoming Agentic

The web as we know it (where humans click links and scroll pages) is radically ending. What replaces it is the agentic web.

This week, Google announced a new user agent just for agents. When an agent using Google infrastructure (like Project Mariner) browses your site, it will use this new tag.

Table showing mobile and desktop HTTP User-Agent strings for the Google-Agent web crawler.
Image Credit: Marie Haynes

This is just the beginning. Agents will do much more than browse the web like a human.

Liz Reid Describes A Web Where Agents Do Much Of The Browsing

In a recent interview, Head of Search Liz Reid noted she believes people will still want to hear from other people, but she said, “I do think that probably means there’s a world in which a lot of agents are talking with each other.”

Google’s latest blog post outlines several AI protocols we all must understand immediately:

Protocol | What It Stands For | The Business Impact
MCP | Model Context Protocol | Lets agents securely access your backend data.
A2A | Agent2Agent | Enables bot-to-bot communication and transactions.
UCP | Universal Commerce Protocol | Lets a machine buy your product directly from the SERPs.
A2UI | Agent to User Interface | Automatically composes new visual layouts for users.
AG-UI | Agent User Interaction | A middleware for streaming real-time AI data.

I had a great chat with Search Bar member Liz Micik discussing this.

WebMCP Will Let Agents Use Your Website Natively

Standard browser agents are slow because they look at pixels like humans do. In the agentic web, machines will talk directly with the tools and functionality available on our website.

Infographic comparing traditional human web browsing to AI agent interaction using the WebMCP protocol.
(Yup – it’s a nano banana image. 😛) Image Credit: Marie Haynes

The improvement here is WebMCP. I think every SEO should be paying close attention to this. WebMCP lets agents use the functionality of your website in real time, natively.

What does this look like?

The obvious use case is an agent automatically filling out lead forms perfectly. But I think we will see much more interesting use cases as we publish our own agents. Let’s say I’ve made agents for SEO. (I have! I just haven’t shared them with others yet.) I could share those with you via a SaaS model, perhaps – where you pay a monthly fee for access. Or, what I think will be most likely is that you will be able to have your agent access my agents via WebMCP. My agents will negotiate with your agents on pricing and possibly even help each other improve as well.

Search Is Turning Into AI Search

Google’s Nick Fox recently stated, “Search is becoming AI Search, and the Gemini app is your personal assistant.” He also said that Google is increasingly thinking of AI Mode and AI Overviews as one and the same.

I keep looking back at this NYT article from three years ago.

Article headline about Google devising radical search changes to beat A.I. rivals, with highlighted text.
Image Credit: Marie Haynes

They framed it as Google panicking over ChatGPT. There is no doubt in my mind, however, that Google had much bigger things in mind. The agentic web is not an afterthought! It has been in planning for many years now.

I personally believe that from 1998 until now, those of us who create content have been giving it to Google to train AI. In return, we got human traffic and ad revenue. I think that partnership no longer exists in its traditional form.

Why This Is The Most Exciting Time To Be In SEO

This transition from a human-first web to an agent-first web might sound terrifying if you rely solely on traditional keyword rankings. In reality, it is the biggest opportunity we have seen since the invention of the search engine itself. WebMCP and UCP mean we are no longer just optimizing for clicks; we are optimizing for direct action, frictionless commerce, and automated lead generation. The creators who understand how these agents interact with backend systems are going to see a level of efficiency and reach that human browsing could never achieve. The partnership between creators and Google has definitively changed, but the future of what we can build on top of this agentic web is incredibly bright.

My Advice

It’s hard to know how to act right now because no one, not even Google, knows exactly how the agentic web will unfold. Here’s what I’d recommend:

  • Familiarize yourself with WebMCP and understand how it works.
  • If you are ecommerce, learn all you can on UCP.
  • Start learning to vibe code using tools like Claude Code, Google’s AI Studio, or my favourite, Antigravity. (If you are a member of my paid community, I just published a full guide on using Antigravity in the Learning Hub.)
  • Focus on the exciting things you can do with AI rather than letting media and social media steer you away! No one knows what’s in store for the future, but I’m positive that those who press in and learn how to use AI for good will have great success.

Read Marie’s newsletter, AI News You Can Use. Subscribe now.



Google Tests AI Headlines, Rolls Out Spam Update – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: updates that affect how your headlines appear in Search, how spam enforcement played out, and how AI content gets labeled.

Here’s what matters for you and your work.

Google Tests AI-Generated Headline Rewrites In Search

Google confirmed that it’s testing AI-generated headline rewrites in traditional search results. The test uses language similar to what Google used before reclassifying AI headlines in Discover as a feature.

Key facts: Google called the test “small and narrow.” The rewrites include no disclosure that Google changed the original headline. Google said any broader launch may not use generative AI but didn’t explain what the alternative would look like.

Why This Matters

Google called AI headlines in Discover “small” in December, reclassified them as a feature by January, and is now using the same language for Search. Google has not outlined an opt-out for this test, and the documented examples show Google changing meaning, not just formatting.

What Publishers And SEO Professionals Are Saying

Bastian Grimm, founder of Peak Ace AG, wrote on LinkedIn:

“Previous rewrites were primarily about matching query intent, fixing truncation, or improving readability. This test uses AI to rewrite for engagement – and documented examples show it changing tone and intent in ways that go well beyond formatting. That is a meaningful shift. A title rewritten to match a query is one thing. A title rewritten because Google’s model thinks a different framing will perform better is another.”

Brodie Clark, independent SEO consultant, wrote on LinkedIn:

“The big issue with this approach is that there were instances where the titles for the articles were rewritten, but the meaning of the article was lost in the rewrite or through formatting changes (such as using capitals for every word).”

Nilay Patel, editor-in-chief of The Verge, wrote on Bluesky:

“Google is now screwing with the 10 blue links in traditional search and rewriting headlines – including ours – to be the worst kind of slop. This sucks so bad”

James Ball, political editor at The New World Opinion and fellow at Tech Policy Press and Demos, wrote on Bluesky:

“Google is re-headlining articles in search results, including in ways that introduce errors. I think even 2-3 years ago it would’ve backed off this for fear of publisher backlash. Does the media have enough clout left wirh tech to get this one reversed?”

Read our full coverage: Google Tested AI Headlines In Discover. Now It’s Testing Them In Search

March 2026 Spam Update Completes In Under 20 Hours

Google’s March 2026 spam update started on March 24 and finished on March 25. The rollout was significantly faster than recent spam updates. The update applies globally and to all languages.

Key facts: The rollout began at 12:00 PM PT on March 24 and ended at 7:30 AM PT on March 25. Google didn’t announce new spam policies with this update. The community response has been notably quiet, with few reports of visible impact.

Why This Matters

The rollout window was short and is already complete, so March 24-25 is the clearest period to review in Search Console. Google’s current spam policies are still the main guidelines to follow, as no new categories have been introduced.

What SEO Professionals Are Saying

Nilesh Pansuriya, who leads Guru99’s global content and SEO team, wrote on LinkedIn:

“I’ve been tracking Google updates for 15 years. I’ve never seen one move this fast. The March 2026 Spam Update rolled out on March 24th. Completely finished by March 25th. ⏱️ Total time: 19 hours and 30 minutes. → August 2025 spam update → 27 days → December 2024 spam update → 7 days → October 2022 spam update → 48 hours → March 2026 spam update → under 20 hours Done before most SEOs even noticed it started.”

Read our full coverage: Google Begins Rolling Out The March 2026 Spam Update

Google Adds AI And Bot Content Labels To Structured Data

Google updated its Discussion Forum and Q&A Page structured data documentation to include new properties, including a way for sites to label AI- and bot-generated content.

Key facts: The new digitalSourceType property uses IPTC enumeration values to distinguish content created by a trained model from content created by a simpler automated process. Google lists the property as recommended, not required. When it’s absent, Google assumes the content is human-generated.
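
As a rough sketch of how that labeling might look on a forum thread, the block below marks one reply as coming from a trained model using the IPTC enumeration value the documentation references. Property placement and the exact value set are assumptions here; check Google’s updated Discussion Forum and Q&A Page documentation before implementing.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "Example thread title",
  "author": { "@type": "Person", "name": "Example user" },
  "comment": [
    {
      "@type": "Comment",
      "text": "Automated reply drafted by the site's assistant.",
      "digitalSourceType": "https://cld.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    }
  ]
}
</script>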

Why This Matters

Forums and Q&A platforms now have a documented way to tell Google which content was created by AI or bots. The “recommended” status means adoption will be voluntary.

What SEO Professionals Are Saying

Jan-Willem Bobbink, founder of WebGeist, wrote on LinkedIn:

“Lets talk about a gap in Google’s new AI content labeling. They require it for product feeds but only ‘recommend’ it for forums. Google just updated its Discussion Forum and Q&A Page structured data docs with a new property called digitalSourceType. It lets sites flag when a post or comment was written by an AI model or an automated bot. The idea sounds great on paper. In practice, the implementation tells a different story. The property is listed as ‘recommended,’ not required. If a site leaves it out, Google assumes the content is human-generated. That is a massive loophole.”

Read our full coverage: Google Adds AI & Bot Labels To Forum, Q&A Structured Data

Bing Connects Grounding Queries To Cited Pages

Bing Webmaster Tools added a mapping feature to its AI Performance dashboard that connects grounding queries to the specific pages cited for them. The update works in both directions.

Key facts: You can click a grounding query to see which pages are cited for it. You can also click a page to see which grounding queries drive its citations. The dashboard covers AI experiences across Copilot, AI summaries in Bing, and select partner integrations. The data is still a sample, not a complete log.

Why This Matters

This gives you a way to connect AI citation data to specific content on your site. Knowing which pages earn citations for which phrases makes it easier to decide where to focus content updates for AI visibility.

Google’s Search Console includes AI Overviews and AI Mode in standard Performance reporting but hasn’t introduced a comparable page-level citation mapping.

What SEO Professionals Are Saying

Aleyda Solís, international SEO consultant and founder of Orainti, wrote on LinkedIn:

“New Bing Webmaster Tools AI Performance Dashboard Insights: We can now see which pages are being cited for a specific grounding query, and which grounding queries are driving citations to a specific pages Thanks so much for hearing the community feedback Krishna Madhavan, Fabrice Canel and team See the announcement in comments.”

Navah Hopkins, ads liaison at Microsoft Advertising, wrote on LinkedIn:

“Grounding queries reveal the key phrases AI used to retrieve content that was cited, offering insight into how AI interprets user intent. If you see your content is getting cited, that means you’re registering as visible to the AI. The page-level citation report sheds light on which pages are helping you win that visibility.”

Read our full coverage: Bing AI Dashboard Maps Grounding Queries To Cited Pages

Theme Of The Week: Google Tightens Control Over How Content Appears

Three of this week’s four stories show Google asserting more influence over how content is presented and categorized in its ecosystem.

AI headline rewrites let Google change how your pages appear in search results. The spam update completed in under 20 hours, the fastest rollout in recent memory. And the new structured data properties ask platforms to self-report whether content was created by humans or machines.

In contrast, while Google tightens control over how content appears, Bing is giving publishers greater visibility into how their content performs in AI-generated answers. The query-to-page mapping closes a measurement gap that Google hasn’t addressed on its side.

Google Begins Rolling Out March 2026 Core Update via @sejournal, @MattGSouthern

Google has started rolling out the March 2026 core update, according to a notice on the Google Search Status Dashboard. The company says the update began at 2:00 a.m. PT on March 27 and may take up to two weeks to finish.

What Google Confirmed

Google hasn’t provided additional details about what changed in this core update. As with previous core updates, the company’s public notice focuses on timing rather than specific ranking systems or categories of websites that may be affected.

The March 2026 core update follows the March 2026 spam update, which Google completed earlier this week.

Why This Matters

If you’re seeing ranking changes over the next several days, Google has now confirmed that a core update rollout is underway.

Core updates can lead to ranking changes across many types of websites while the rollout is in progress. Google hasn’t shared any more specific guidance for this update yet.

Looking Ahead

Google says the rollout may take up to two weeks, so rankings may continue to shift into early April. Until the update is complete, it may be difficult to tell whether visibility changes reflect a lasting reordering or temporary movement during the rollout.

Google Takes Search Live Global With Gemini 3.1 Flash Live via @sejournal, @MattGSouthern

Google is expanding Search Live to more than 200 countries and territories, bringing voice and camera conversations to AI Mode globally.

The expansion is powered by Gemini 3.1 Flash Live, a new audio model that Google calls its highest-quality yet. It’s inherently multilingual, so you can speak with Search in your preferred language without switching settings.

Search Live was previously limited to the U.S.

What’s Changing

Search Live lets you talk to Google Search inside AI Mode instead of typing a query. You ask a question out loud and get an audio response, then continue with follow-ups. Web links appear on screen alongside the voice responses.

The feature also supports camera input. Point your phone at a product label or a piece of equipment and ask Search about what it sees. Google Lens users can tap a “Live” option to start a conversation about what’s in the camera view.

With today’s expansion, both voice and camera capabilities are available in every market where AI Mode is active.

The New Model

Gemini 3.1 Flash Live replaces the previous audio model powering Search Live. Google published benchmark results alongside the announcement.

Gemini Live can now follow a conversation thread for twice as long as the previous model, according to Google, though the company didn’t specify what the previous limit was.

Beyond Search, 3.1 Flash Live is available to developers in preview through the Gemini Live API in Google AI Studio.

Why This Matters

Search Live turns search into a spoken conversation with camera input. Until now, the feature was limited to U.S. users. Today’s expansion makes it available in the markets where AI Mode is live, across more than 200 countries and territories.

There’s no public data yet on how many people use Search Live or how it affects query volume. But Google has been building toward this for the past year. The company launched Search Live in June, added video input in July, and upgraded to Gemini 2.5 Flash Native Audio in December. Each update expanded what the feature can do and who can use it.

Looking Ahead

Google didn’t announce additional Search Live features alongside this expansion. The focus is on geographic reach and the underlying model upgrade.

How the model performs in production across different languages and markets will be worth watching as adoption data becomes available.

When The Training Data Cutoff Becomes A Ranking Factor via @sejournal, @DuaneForrester

Every AI system serving answers today operates with two fundamentally different memory architectures, and the boundary between them runs along a single invisible line: the training data cutoff. Content published before that line is baked into the model’s weights, always accessible, confident, and unreferenced. Content published after that line only surfaces when the model retrieves it in real time, which introduces a different retrieval path, a different confidence profile, and, critically, different presentation behavior in synthesized answers. If you’re optimizing for brand visibility in AI-generated search, this distinction is not a footnote. It is the organizing principle.

The mechanism most practitioners are still treating as one thing is actually two.

The shorthand “AI doesn’t know things after its cutoff date” is technically accurate but strategically incomplete. What it obscures is that post-cutoff and pre-cutoff content don’t just occupy different time periods. They occupy different systems inside the same model.

Parametric memory is what the model learned during training: facts, relationships, concepts, and entities whose representations are encoded directly into the model’s weights. When you ask a model something within its parametric knowledge, it doesn’t look anything up. It synthesizes from internalized representations, which is why responses from parametric knowledge tend to be fluent, fast, and stated without qualification. The model isn’t consulting a source. It’s recalling.

Retrieval-augmented memory, by contrast, is what the model fetches at inference time. When a query either touches post-cutoff territory or triggers the model’s search function, a retriever collects documents from a live index, compresses the most relevant passages, and injects them into the context window alongside the original prompt. The model then synthesizes from those passages. Think of it this way: Parametric memory is everything you learned in school, internalized and available instantly. Retrieval is picking up your phone to look something up. Both produce answers, but the confidence signature and attribution behavior are structurally different, and that difference matters to how your brand content gets presented.

The Platforms Are Not Behaving The Same Way

One reason this dynamic gets underappreciated is that the five platforms your audience actually uses have meaningfully different cutoff dates and retrieval architectures, which means the practical implications vary by platform.

ChatGPT’s flagship GPT-5 series carries a knowledge cutoff of August 2025, but the older GPT-4o model, which remains widely deployed via API integrations and older interfaces, cuts off at October 2023. Web search is available in the ChatGPT interface but is selectively triggered rather than on by default for every query, meaning a substantial portion of ChatGPT responses still draw from parametric memory. Gemini 3 and 3.1 carry a January 2025 parametric cutoff, but Google’s Search Grounding tool is available as a supplementary mechanism that can be activated contextually. Gemini’s deep integration with Google infrastructure gives it a more natural path to real-time retrieval than models from other providers, but it does not automatically retrieve for every query. Claude (this current Sonnet 4.6 generation) holds a reliable knowledge cutoff of August 2025 and a broader training data cutoff of January 2026, with web search available as a tool but not automatically deployed on every response. Microsoft Copilot is unique in that its web grounding capability runs through Bing and is configurable at the enterprise level, meaning it is off by default in US government cloud deployments, leaving those instances fully dependent on parametric memory. Regulated industry users need to make their choice, but the feature exists.

Then there is Perplexity, which operates differently from all of the above. Perplexity is RAG-native by design, running a live retrieval pipeline on essentially every query through a distributed index built on Vespa AI, with real-time web crawling supplemented by external search APIs. For Perplexity, the training cutoff is largely irrelevant to the end user because the system routes around it by default. The practical consequence is that Perplexity citations tend to be current and attributed, while ChatGPT, Gemini, Claude, and Copilot responses vary between confident parametric synthesis and hedged retrieval depending on query type and configuration.

What this means in practice is that your brand visibility strategy cannot treat “AI search” as a monolith. The platform your prospective buyer uses when comparing enterprise software vendors may have a completely different memory architecture than the one your marketing team tested last week.

Why The Cutoff Creates A Structural Confidence Advantage For Older Content

This is the part of the cutoff discussion that gets the least attention, and it has direct implications for how your brand claims land inside synthesized answers.

When a model operates within its parametric knowledge, it does not need to retrieve, attribute, or hedge. It simply answers. The academic literature on dynamic retrieval confirms that models trigger retrieval based on initial confidence in the original question: when parametric confidence is high, retrieval often isn’t triggered at all. When retrieval is triggered, the response mechanics shift. The model must now weave in attributed information from fetched documents, which introduces phrases like “according to a recent report,” “sources indicate,” or “based on search results.” These attribution constructs are not cosmetic. They signal to the reader (and to the response synthesis logic) that the cited claim exists in a different epistemic register than a confident parametric assertion.

The practical example is straightforward. Ask most current AI models what Salesforce’s CRM market position is, and if that information is well-represented in training data, you’ll get a confident, unqualified synthesis. Ask about a product positioning shift from six months ago, after the cutoff, and you get either a retrieval-dependent answer with caveats and citations or a gap in coverage. Your brand’s foundational narrative, if it exists clearly in parametric memory, presents with the confidence of internalized knowledge. Your recent product news, if it only exists in the retrieval layer, arrives with the hedging language of external evidence. Both appear, but they sound different.

The Strategic Layer: Timing Content For The Cutoff-To-RAG Pipeline

What can practitioners actually do with this? The answer requires rethinking how we talk about content calendaring.

Traditional content calendaring is organized around audience timing, seasonal relevance, and channel cadence. Cutoff-aware content calendaring adds a fourth axis: anticipated model training windows. If you know that major model training runs tend to lag publication by several months to a year, and you know that training data sampling favors well-cited, well-distributed content, then there is a strategic argument for prioritizing the publication and amplification of your most foundational brand claims well in advance of those windows. A capabilities brief, a positioning paper, a definitional piece that establishes your category leadership, these are the kinds of assets that benefit from being embedded in parametric memory rather than living only in the retrieval layer.

The inverse implication is equally important. Time-sensitive content such as product updates, event coverage, pricing announcements, and campaign materials is inherently post-cutoff territory for any model trained before publication. That content must succeed in the retrieval layer, which means it needs to be indexed, cited, and structured for chunk-level retrieval rather than optimized for the parametric embedding that foundational content targets. These are different content jobs requiring different distribution strategies, and treating them the same is one of the more common structural errors in current AI visibility practice.

The practical execution of cutoff-aware content calendaring does not require inside knowledge of any model’s training schedule, which is rarely disclosed. What it requires is treating content type as a determinant of content timing: foundational brand positioning gets published and amplified early and consistently, long before you need it in AI answers; time-sensitive content gets optimized for retrieval quality through proper indexing, machine-readable structure, and citation-friendly formatting. Next week’s article addresses that second half in detail.

What ‘Freshness’ Actually Means When Two Memory Systems Are In Play

It is worth addressing directly how this framework differs from Google’s freshness model, because the intuitions built up from fifteen years of SEO practice don’t map cleanly onto AI search behavior.

In Google’s architecture, freshness signals follow a model roughly described as Query Deserves Freshness: for certain query types, recently published or recently updated content receives a ranking boost that causes it to displace older content in results. Fresh content wins, stale content loses, and the implication for practitioners is that regular updates maintain ranking position.

The AI dual-memory model works differently. Pre-cutoff content and post-cutoff content don’t compete directly on a freshness dimension. They coexist in different retrieval layers and can both appear in a single synthesized response. A model answering a question about your product category might draw its foundational description from parametric memory trained on content from two years ago, then supplement it with a retrieved mention of your latest release, all within the same paragraph. The optimization challenge is not to keep one piece of content fresh enough to outrank another. It is to ensure that what lives in parametric memory says what you want it to say, and that what lives in the retrieval layer is structured to be found, parsed, and attributed accurately.

The implications for content update strategy also diverge. In traditional SEO, updating a page often signals freshness and can improve rankings. In AI retrieval, updating a page changes what gets indexed in the retrieval layer but does nothing to update what’s already embedded in parametric memory. The only mechanism that changes parametric memory is a new model training run. This means the stakes around getting foundational content right before training windows are considerably higher than the stakes around quarterly page refreshes, and the measurement challenge is different in kind.

The Thread Connecting This To Everything That Follows

This article is a layer added onto the consistency problem described in “The AI Consistency Paradox.” Inconsistency across queries isn’t random noise. A significant portion of it is structurally explained by the dual-memory architecture: the same model asked the same question on different days may draw from parametric memory or trigger retrieval depending on phrasing, context, and platform configuration, producing different confidence signatures and different content. The measurement problem introduced here, knowing which memory layer your brand content is living in, is precisely what cutoff-aware content calendaring is designed to address at the strategic level and what the next article will address at the technical level.

The next article looks at machine-readable content structure as a mechanism for increasing retrieval quality, which is where parametric timing and retrieval optimization meet.

This post was originally published on Duane Forrester Decodes.


Featured Image: SkillUp/Shutterstock; Paulo Bobita/Search Engine Journal

How To Avoid Top Down SEO Systems Failures With The Visibility Governance Maturity Model via @sejournal, @theshelleywalsh

Most SEO failures aren’t caused by bad SEOs. They’re caused by organizations that don’t have the systems to support them.

That’s the argument Ash Nallawalla has been building across five books and over 24 years of enterprise SEO experience in Australia. As a visibility governance consultant based in Melbourne, Ash has worked in-house for some of Australia’s biggest brands and has seen firsthand what happens when no one above the SEO team understands what they do or why it matters.

On IMHO, I spoke with Ash about why he believes visibility needs to be governed at board level, how his maturity model works, and why the rise of AI-mediated discovery makes this more urgent than ever.

“Governance is not a constraint on speed. However, the absence of governance is.”

When No One Owns It, Everything Breaks

Most SEO failures are structural, which means the team didn’t fail; the system did. The damage can be disproportionate to the cause: a governance gap of weeks can create months of recovery. And governance is not a constraint on speed; the absence of governance is.

Ash shared an example that illustrates just how catastrophic a governance gap can be.

At one organization, he discovered that Google Search Console was reporting 22 million pages as “currently not indexed.” With Australia’s population at only around 25 million, he knew something had gone seriously wrong.

The cause was a decision made internally, some time in the past, that creating a page for every combination of facets would be a good idea.

“There were 10 quintillion pages. And if you’ve not heard that number before, it is one followed by 18 zeros,” Ash explained. “We calculated that if Googlebot could read a thousand URLs a second, it would take 310 billion years to crawl all of them.”

Despite this, the site was still ranking well and receiving 5 million Googlebot visits per day. The problem was invisible to anyone above the SEO or product manager level.

“That place didn’t have governance because no one above the SEO level or the product manager level realized the problem. They just knew someone was doing SEO and yes, we’re getting lots of traffic.”

This kind of structural failure is what drove Ash to write his first book, “Accidental SEO Manager,” in 2022. As he put it, “In reality most people come into SEO with no background and that applies to the managers who are looking after enterprise SEO.”

A Maturity Model For Visibility Governance

Ash has since developed what he calls the Visibility Governance Maturity Model (VGMM), borrowing from the Carnegie Mellon capability maturity framework used in software development. It maps governance across seven domains, spanning SEO (including local and international), content, website performance, accessibility, and AI governance, and scores them against five maturity levels expressed as a percentage.

“The C-suite gets to know that our visibility governance is at 80% or it’s at 20% or 30% whatever it is, and that corresponds to five levels.”

“Some of these questions are single points of failure. And if you said ‘not in place’ for any of them, it doesn’t matter what your real score is, you are capped at level two,” Ash explained.

A single point of failure (SPOF) might be something as fundamental as whether anyone is responsible for robots.txt. In some companies, Ash noted, they don’t even know what robots.txt is.
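
The exact question set and weighting behind the VGMM aren’t published, so the sketch below is purely hypothetical: invented domain scores and SPOF flags used only to illustrate the capping behavior Ash describes, where a high percentage still can’t lift you above level two if a single point of failure is missing.

```python
# Hypothetical sketch of VGMM-style scoring. This is NOT Ash Nallawalla's
# actual model, just an illustration of the "SPOF caps you at level two" rule.

def maturity_level(domain_scores: dict[str, float],
                   spofs_in_place: dict[str, bool]) -> tuple[float, int]:
    """Return (percentage score, maturity level 1-5).

    Any single point of failure marked 'not in place' caps the level at 2,
    regardless of how strong the percentage score is.
    """
    pct = 100 * sum(domain_scores.values()) / len(domain_scores)
    level = min(5, int(pct // 20) + 1)       # 0-19% -> level 1 ... 80%+ -> level 5
    if not all(spofs_in_place.values()):     # e.g., nobody owns robots.txt
        level = min(level, 2)
    return pct, level

scores = {"seo": 0.9, "content": 0.8, "performance": 0.85,
          "accessibility": 0.7, "ai_governance": 0.75}
spofs = {"robots_txt_owner": False, "analytics_access": True}

print(maturity_level(scores, spofs))  # (80.0, 2): strong score, capped at level 2
```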

Selling Governance To Skeptics

When boards push back against the need for governance, Ash uses three arguments.

First, the system test: “If things work wonderfully this month, are we guaranteed that next month and the month after that things will work wonderfully? And if not, then there is a problem that we need to investigate.”

Second, the rework cost. Fixing a visibility failure after the fact is far more expensive than preventing it, especially when the failure involves AI systems.

“If suddenly ChatGPT stops recommending your brand, you may not realize it. Your traffic is up. Your rankings are where they were. That’s not effective, but your competitors are doing better than you.”

And third, for the skeptics who worry governance will slow things down: “You will move faster with governance than without it because you might have these big problems and it may take you an unknown amount of time to fix them.”

What To Tell A Board That’s Never Heard Of Visibility Governance

When pitching to a board for the first time, Ash recommends leading with money, then reframing SEO as infrastructure.

“Organic search visibility, which is the traditional SEO, is infrastructure. It’s not just a marketing exercise. It’s a capital asset with a yield.”

He frames AI-mediated discovery as a new category of risk, something boards are already familiar with in other contexts. Brand visibility can erode silently without any alerts firing, and traditional controls aren’t detecting it.

“If their paid costs are slowly creeping up, that’s not always because the search engine is charging more. It’s also because they’re having to advertise more. And that’s one of the early hints that there could be an external system that is brewing, and it’s taking customers away, and that’s the AI-mediated search that their potential customers are beginning to use, and they’re being led in other directions.

“So the second thing that I say to them is that the risk profile of visibility has changed in the last two years, and your traditional controls are not detecting it.”

Ash shared a real example where his CIO once asked why Bing Chat was recommending competitors but not their own brand. The cause turned out to be a blocked Common Crawl bot (CCBot), which Bing Chat had relied on during its learning phase. “We unblocked CCBot, and within a few months, it started recommending our brand.”
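
One cheap way to catch that class of problem is to check your own robots.txt for the crawlers that feed AI systems. Here is a minimal sketch using Python’s standard-library robots.txt parser; the example.com URL and the bot list are placeholders to swap for your own domain and the crawlers you care about.

```python
# Check whether common AI/training crawlers are blocked by robots.txt.
# example.com is a placeholder; swap in your own domain.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"
BOTS = ["CCBot", "GPTBot", "Googlebot"]  # Common Crawl, OpenAI, Google

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for bot in BOTS:
    allowed = rp.can_fetch(bot, f"{SITE}/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'} on {SITE}/")
```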

There’s also a reputational dimension. If customers are leaving bad reviews on platforms the company doesn’t monitor, large language models are learning from that sentiment, and quietly dropping the brand from their recommendations.

“When you share responsibility without ownership, then governance will fail.”

Ash recommends boards ask four questions:

  • Who owns accountability for visibility performance at a strategic level?
  • Is that person senior enough to influence things?
  • Is visibility reporting reaching the board in a way that distinguishes between performing well today and being structurally sound tomorrow?
  • Are we treating AI-mediated visibility as a governance matter, or as a technology novelty someone in marketing is keeping an eye on?

The Leadership Test

Ash closed with what he calls the leadership test, a challenge to any organization that relies on individual heroics rather than systems.

“If your SEO depends on individuals pushing uphill against the system, then gradually their capability will vanish when they leave.”

He advocates for internal wikis, documented learnings, and hiring for capability rather than cultural fit. The goal is to reduce dependence on individuals and build structures that survive personnel changes.

“I’m saying to boards, put visibility on the agenda at every meeting, even if it’s a one sentence from the responsible person, ‘visibility is fine’ or whatever they want to report, but it reminds the board at every meeting that SEO and now external visibility are both very important infrastructure matters.”

Visibility Governance Isn’t Just For Enterprise

While governance is most obviously an enterprise concern, the principles apply broadly. Smaller companies are just as vulnerable to silent visibility erosion, perhaps more so, because they have fewer resources to detect or recover from it.

As AI systems reshape how brands get discovered, the organizations that treat visibility as a governance matter rather than a marketing task are the ones most likely to survive the shift.

Thank you to Ash Nallawalla for offering his insights and being my guest on IMHO, and read more about the Visibility Governance Maturity Model in the Managing SEO series of books.

This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

Are We Due Another Florida-Style Update? via @sejournal, @TaylorDanRW

Editor’s note: this article was written a few days before the core update that started to roll out on March 24.

Updates like Florida, Allegra, and Brandy were major turning points in search because they fundamentally reshaped how websites were ranked and how SEO was practiced.

These updates caused sudden and dramatic shifts where rankings dropped overnight, entire categories of websites lost visibility, and tactics that once delivered consistent performance stopped working almost immediately.

A similar question is now starting to emerge as AI-generated content increases and large volumes of low-value pages begin to fill the web. The scale and speed of content production feel familiar and echo the build-up that came before earlier algorithmic resets.

The systems that power search have evolved, yet the pressures acting on them are beginning to look very similar. A repeat in the same form is unlikely, but the conditions that created those updates are returning, and a comparable reset remains a realistic possibility if those conditions continue to worsen.

Scaled Low-Value Content Is Worse Than Ever

The underlying problem of low-value content at scale is returning, driven largely by the capabilities of AI. The cost and effort required to produce content have dropped significantly, which allows pages to be created faster and in greater volume than ever before. This has led to rapid expansion across many areas of search, particularly in informational queries where barriers to entry are relatively low.

The more prominent issue is the level of similarity across that content.

Much of what is produced follows the same structure, covers the same points, and reaches similar conclusions. The result is content that is readable and technically correct but lacks depth, originality, and meaningful differentiation: the core elements that make content useful and valuable, and that give it longevity in Google’s serving index.

This mirrors the content farm era that Panda addressed, where the problem was not just the number of pages but the fact that those pages were largely interchangeable. The current wave of AI content reflects the same issue at a much larger scale and with a higher baseline level of quality, which makes it both more effective and harder to filter.

The Rolling Correction With Real-Time Updates

Google is already responding to these challenges through its existing systems, which work together to continuously evaluate and adjust content visibility. The Helpful Content System assesses quality across entire sites, SpamBrain identifies patterns that indicate low-value or manipulative behavior, and core updates refine rankings across the index.

These systems create a rolling correction where change is constant rather than concentrated in a single event. The March 2024 core update demonstrates this approach because it targeted low-quality and scaled content without creating a clear break. Some sites lost visibility, some improved, and many experienced mixed results over time.

This reflects a deliberate shift in how quality is managed because the goal is to maintain balance continuously rather than reset the system in one moment. That approach depends on the system keeping pace with the scale of the problem it is trying to manage.

Continuous Systems Aren’t Always Enough

The issue is not only that more content is being produced, but that it is being produced at a speed that may outpace the system’s ability to fully evaluate it. A gap can form between content production and content assessment, which allows low-value pages to gain visibility before being properly filtered.

As that gap widens, the quality of search results can decline in subtle but noticeable ways. Users may encounter repetitive or shallow content across similar queries, which reduces trust in the results over time. This does not represent a full breakdown of the system, but it does show increasing pressure, and if users lose trust in the results, they stop coming to Google, which impacts Google’s ability to generate revenue.

The assumption that continuous evaluation can handle unlimited scale is being tested, and the limits of that system are not yet clear.

The Case For Another Florida

The possibility of another large-scale update depends on whether the current system can continue to manage this pressure effectively.

A scenario exists where Google introduces a more aggressive update that recalibrates quality thresholds across the board and reduces the visibility of low-value content more quickly and more broadly. We know that Google trains its systems on a subset of content it knows is created to the highest standards (as disclosed at Search Central Live in Bangkok in 2025). The form this would take would differ from Florida, but the impact could feel similar because large numbers of sites could lose visibility in a short period of time.

Such an update would likely follow a period where search results feel consistently weak or repetitive and where users begin to question their reliability. Evidence that existing systems cannot correct the issue quickly enough would increase the likelihood of a more aggressive intervention from Google.

Recalibrating Content As A Tactic

Content strategy has shifted from efficiency to defensibility because the ability to produce content at scale is no longer a meaningful advantage. AI has made content production widely accessible, which has put pressure on agencies and in-house teams to produce more with the same resources. But measuring success by total content output rather than overall content quality is a trade-off I feel many are sleepwalking into.

Content that performs well now tends to offer something that cannot be easily replicated.

This often includes real experience, a clear and informed perspective, or genuinely useful insight that goes beyond standardized output. Strong alignment with user intent also plays a critical role in maintaining visibility over time.

These principles are not new, but they are enforced more consistently and may be applied more aggressively if the system requires it.

This Is A System Under Pressure

The likelihood of another Florida-style update depends on how well the current system continues to perform under increasing pressure. Google’s approach has shifted toward continuous evaluation, which reduces the need for large and sudden changes under normal conditions.

The conditions that led to past updates are beginning to re-emerge in a different form, driven by the scale of AI-generated content. A more decisive intervention becomes more likely if those conditions continue to build and begin to affect user trust in search results.

The system currently operates through steady and ongoing adjustment, without a clear reset point or a single moment of change. Content is evaluated continuously based on whether it deserves to be indexed and served to users.

History shows that gradual systems can give way to more direct action when pressure builds too much, and if that point is reached again, the response is likely to be a statement move.

Featured Image: hmorena/Shutterstock

Google’s March Spam Update Felt Muted But May Signal Bigger Changes via @sejournal, @martinibuster

Google’s March 2026 Spam Update was welcomed by many in the SEO community who were hoping for relief from listicles, AI content rewriters, and Google’s own AI Overviews that “rehash other people’s content.” The update unexpectedly finished in less than 24 hours and was met with a collective shrug and a yawn. Yet despite the underwhelming nature of the update, it still yielded a few interesting insights and takeaways.

Hopeful SEOs

Google’s spam announcement was largely welcomed by SEOs hoping that spammy sites positioned above them would lose their rankings, but the muted response spoke to an update that didn’t seem to land where people expected it to.

EmarketerZ expressed the hope that sites struggling under the weight of spammy sites ranking above them might have their comeback moment.

They tweeted:

“Google’s latest spam update might just be the comeback moment publishers have been waiting for—finally a shot at reclaiming the traffic they lost in the last one 🤣”

Over on LinkedIn, Adrian M. responded to Google’s announcement by saying it was about time, calling out fake engagement tactics as an area they’d like to see cleaned up.

They wrote:

“It was only a matter of time, and it’s exactly what the industry needed. Many SEO agencies have been relying on bot networks and residential proxies to simulate organic engagement and inflate their monthly reports. I’ve recently audited e-commerce servers pushed to the brink of crashing (503 errors) just by these automated, fake “add-to-cart” scripts masquerading as real users. This update will finally clean up the vanity metrics and force the market to return to genuine content marketing and real user acquisition. Excellent move by the Search team!”

Muted Response From Digital Marketers

Many SEOs who have been vocal about spammy GEO tactics and regular old spam jamming up the search results were oddly quiet through the duration of the spam update.

Glenn Gabe had this to say:

“Wait, what? The March 2026 Spam Update has completed rolling out. Damn, that was fast. :)”

The thread on the Google subreddit announcing Google’s spam update had only six responses, four of which were people asking for a link to the official announcement. It’s fair to say the response there was a shrug and a yawn.

The response over on the SEO subreddit was similar, with some of the comments doubting much of anything will change.

One person expressed the hope that this time AI-generated content farms will get wiped out.

They wrote:

“I’m betting on a big hit to AI-generated content farms and those super thin affiliate sites. google’s been hinting at this for a while, feels like it’s finally coming.”

But another Redditor nicknamed mrtornado79 responded with a big nah… and a useful insight.

“It’s been “finally coming” for three years. At this point it’s basically an SEO drinking game — spam update drops, someone says “this is the one that kills AI content farms,” nothing particularly dramatic happens, repeat.

Google called this a “normal spam update.” Not a paradigm shift. Not the AI content apocalypse. Normal.”

That point about the March Spam Update not being a paradigm shift was a good observation about Google’s understated announcement, and it probably explains why Google didn’t even bother to update its spam update information.

A couple of the SEO Facebook Groups didn’t even have a discussion about the update, which in itself is a comment about how SEOs feel about Google’s spam updates: It could be a sign of how much wind has been taken out of the sails of low-level affiliate spammers and PBN sellers.

Wait, What… That Was It?

The end of the update was generally met with silence in many of the ongoing discussions across the internet.

WebmasterWorld member Micha summed up the general sense of underwhelm best:

“Huh? The update is over?”

It’s quite possible that Redditor mrtornado79’s opinion that it was not going to be a paradigm shift was the best view of what just happened.

What May Happen Next

The big question now may not be what just happened but rather what is going to happen next.

I’ve always seen Google’s spam updates as a clearing of the table in preparation for the next course. If a core update follows soon, then that may be what this muted spam update was about. The next course could be anything from the introduction of new AI-driven features (like those title rewrites they were recently experimenting with) to something quiet that will barely be noticed, like an infrastructure change to accommodate something big and new.

What could Google implement over the coming months?

There have been two patents filed recently, which I’ll be publishing information about soon.

1. User Journey Patent
The first one describes a machine learning system that determines how different types of content exposure influence a user’s likelihood of performing a specific action, such as making a purchase or signing up for a service. It’s a system to attribute portions of the final action to specific exposures to content or ads, even when multiple exposures occurred at different times (a toy illustration of this kind of fractional attribution follows the second patent below).

2. Automatic Search Results Updates
This patent describes a system that improves the search experience by automatically delivering better results to a user after their original search, without requiring them to search again. It applies to both organic and AI-assisted search, transforming search from a one-time activity into an information request that resolves over time. That’s really interesting because it makes it possible to ask a question about something that’s going to happen or hasn’t been announced yet, expanding the range of queries that Google can answer.
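
As promised above, here is a rough illustration of what “attributing portions of the final action to specific exposures” could look like in practice. This is not the mechanism in the patent, which describes a machine-learned system; it is a toy time-decay sketch with invented channel names, shown only to make the shape of the problem concrete: several exposures at different times, one conversion, and fractional credit assigned to each exposure.

```python
# Toy time-decay attribution: NOT the patent's machine-learned method,
# just an illustration of splitting one conversion across prior exposures.
import math

def attribute(exposures: list[dict], conversion_time: float,
              half_life_days: float = 7.0) -> dict[str, float]:
    """Split credit for one conversion across earlier exposures.

    exposures: [{"channel": str, "time": float_days}], all before conversion_time.
    More recent exposures get exponentially more credit.
    """
    weights = {
        e["channel"]: math.exp(-math.log(2) * (conversion_time - e["time"]) / half_life_days)
        for e in exposures
    }
    total = sum(weights.values())
    return {channel: round(w / total, 3) for channel, w in weights.items()}

exposures = [
    {"channel": "organic_article", "time": 0.0},        # seen 14 days before converting
    {"channel": "ai_overview_citation", "time": 10.0},  # seen 4 days before
    {"channel": "retargeting_ad", "time": 13.0},        # seen 1 day before
]
print(attribute(exposures, conversion_time=14.0))
# e.g. {'organic_article': 0.137, 'ai_overview_citation': 0.368, 'retargeting_ad': 0.495}
```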

My general impression of Spam Updates is that they are sometimes a prelude to changes elsewhere in Google’s core algorithm or related infrastructure. It may be an interesting month ahead.

Featured Image by Shutterstock/vchal