Google’s June 2025 Core Update just finished. What’s notable is that while some say it was a big update, it didn’t feel disruptive, indicating that the changes may have been more subtle than game-changing. Here are some clues that may explain what happened with this update.
Two Search Ranking Related Breakthroughs
Although a lot of people are saying that the June 2025 Update was related to MUVERA, that’s not really the whole story. There were two notable backend announcements over the past few weeks, MUVERA and Google’s Graph Foundation Model.
Google MUVERA
MUVERA (Multi-Vector Retrieval via Fixed Dimensional Encodings) is a retrieval algorithm that uses Fixed Dimensional Encodings (FDEs) to make retrieving web pages more accurate and more efficient. The notable part for SEO is that it retrieves fewer candidate pages for ranking, leaving the less relevant pages behind and promoting only the more precisely relevant ones.
This gives Google the precision of multi-vector retrieval without the speed and memory drawbacks of traditional multi-vector systems.
Google’s MUVERA announcement explains the key improvements:
“Improved recall: MUVERA outperforms the single-vector heuristic, a common approach used in multi-vector retrieval (which PLAID also employs), achieving better recall while retrieving significantly fewer candidate documents… For instance, FDE’s retrieve 5–20x fewer candidates to achieve a fixed recall.
Moreover, we found that MUVERA’s FDEs can be effectively compressed using product quantization, reducing memory footprint by 32x with minimal impact on retrieval quality.
These results highlight MUVERA’s potential to significantly accelerate multi-vector retrieval, making it more practical for real-world applications.
…By reducing multi-vector search to single-vector MIPS, MUVERA leverages existing optimized search techniques and achieves state-of-the-art performance with significantly improved efficiency.”
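To make the core idea concrete, here is a heavily simplified sketch of how a fixed dimensional encoding can collapse a variable-size set of token vectors into a single fixed-size vector whose inner product stands in for multi-vector similarity. This illustrates the general technique the announcement describes, not Google’s implementation; the dimensions, bucket counts, and SimHash-style partitioning below are toy assumptions.

```python
# Illustrative sketch of a fixed dimensional encoding (FDE), NOT Google's code.
# Idea: bucket token vectors with random hyperplanes, aggregate per bucket,
# and concatenate, so single-vector inner product approximates multi-vector scoring.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64          # per-token embedding dimension (assumed)
N_PLANES = 4      # random hyperplanes -> 2**N_PLANES buckets (toy choice)
PLANES = rng.normal(size=(N_PLANES, DIM))

def bucket_ids(token_vecs: np.ndarray) -> np.ndarray:
    """Assign each token vector to a bucket via SimHash-style sign patterns."""
    signs = (token_vecs @ PLANES.T) > 0            # (n_tokens, N_PLANES)
    return signs @ (2 ** np.arange(N_PLANES))      # integer bucket id per token

def fde(token_vecs: np.ndarray, is_query: bool) -> np.ndarray:
    """Collapse a set of token vectors into one fixed-size vector."""
    out = np.zeros((2 ** N_PLANES, DIM))
    ids = bucket_ids(token_vecs)
    for b in range(2 ** N_PLANES):
        members = token_vecs[ids == b]
        if len(members):
            # Simplification of the FDE construction: queries sum their tokens
            # per bucket, documents average theirs.
            out[b] = members.sum(0) if is_query else members.mean(0)
    return out.ravel()

# Single-vector inner product over FDEs now approximates multi-vector scoring.
query_tokens = rng.normal(size=(5, DIM))     # e.g., 5 query token embeddings
doc_tokens = rng.normal(size=(120, DIM))     # e.g., 120 document token embeddings
score = float(fde(query_tokens, True) @ fde(doc_tokens, False))
print(score)
```

The payoff is the last line: once both queries and documents are single fixed-size vectors, retrieval can reuse fast, well-optimized single-vector MIPS infrastructure, which is where the efficiency gain in the announcement comes from.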
Google’s Graph Foundation Model
A graph foundation model (GFM) is a type of AI model that is designed to generalize across different graph structures and datasets. It’s designed to be adaptable in a similar way to how large language models can generalize across domains they weren’t initially trained on.
Google’s GFM classifies nodes and edges, which could plausibly include documents, links, and users, for tasks such as spam detection, product recommendations, and other kinds of classification.
This is something very new, published on July 10 and already tested on spam detection in ads. It is a breakthrough in graph machine learning and in the development of AI models that can generalize across different graph structures and tasks.
It overcomes the limitations of Graph Neural Networks (GNNs), which are tethered to the graph they were trained on. Graph Foundation Models, like LLMs, aren’t limited to what they were trained on, which makes them versatile for handling new or unseen graph structures and domains.
Google’s announcement of GFM says that it improves zero-shot and few-shot learning, meaning it can make accurate predictions on different types of graphs without additional task-specific training (zero-shot), even when only a small number of labeled examples are available (few-shot).
Google’s GFM announcement reported these results:
“Operating at Google scale means processing graphs of billions of nodes and edges where our JAX environment and scalable TPU infrastructure particularly shines. Such data volumes are amenable for training generalist models, so we probed our GFM on several internal classification tasks like spam detection in ads, which involves dozens of large and connected relational tables. Typical tabular baselines, albeit scalable, do not consider connections between rows of different tables, and therefore miss context that might be useful for accurate predictions. Our experiments vividly demonstrate that gap.
We observe a significant performance boost compared to the best tuned single-table baselines. Depending on the downstream task, GFM brings 3x – 40x gains in average precision, which indicates that the graph structure in relational tables provides a crucial signal to be leveraged by ML models.”
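For readers who want a picture of the underlying mechanics, here is a minimal, hypothetical sketch of the kind of operation graph models build on: each node aggregates information from its neighbors before a classifier assigns it a label (for example, spam vs. not spam). This is not Google’s GFM; the toy graph, feature sizes, and untrained random weights below are stand-ins for illustration only.

```python
# Toy illustration of graph-based node classification (NOT Google's GFM).
import numpy as np

rng = np.random.default_rng(1)

# Toy graph: 5 nodes, undirected edges, 8-dimensional node features.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
features = rng.normal(size=(5, 8))

# Adjacency with self-loops, row-normalized so aggregation is a neighbor mean.
adj = np.eye(5)
for a, b in edges:
    adj[a, b] = adj[b, a] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

# One message-passing layer: mix neighbor features, project, apply ReLU.
W1 = rng.normal(size=(8, 8))        # would be learned in a real model
hidden = np.maximum(adj @ features @ W1, 0.0)

# Linear classifier head over two classes (e.g., "spam" vs. "not spam").
W2 = rng.normal(size=(8, 2))        # also learned in practice
logits = hidden @ W2
predicted_class = logits.argmax(axis=1)   # one label per node
print(predicted_class)
```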
What Changed?
It’s not unreasonable to speculate that integrating both MUVERA and GFM could enable Google’s ranking systems to more precisely rank relevant content by improving retrieval (MUVERA) and mapping relationships between links or content to better identify patterns associated with trustworthiness and authority (GFM).
Integrating both MUVERA and GFM would enable Google’s ranking systems to more precisely surface relevant content that searchers find satisfying.
Google’s official announcement said this:
“This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”
This particular update did not seem to be accompanied by widespread reports of massive changes. This update may fit into what Google’s Danny Sullivan was talking about at Search Central Live New York, where he said they would be making changes to Google’s algorithm to surface a greater variety of high-quality content.
Search marketer Glenn Gabe tweeted that he saw some sites that had been affected by the “Helpful Content Update,” also known as HCU, had surged back in the rankings, while other sites worsened.
Although he said that this was a very big update, the response to his tweets was muted, not the kind of response that happens when there’s a widespread disruption. I think it’s fair to say that, although Glenn Gabe’s data shows it was a big update, it may not have been a disruptive one.
So what changed? My speculation is that it was a widespread change that improved Google’s ability to surface relevant content, helped by better retrieval and an improved ability to interpret patterns of trustworthiness and authoritativeness, as well as to identify low-quality sites.
OpenAI announced a new way for users to interact with the web to get things done in their personal and professional lives. ChatGPT agent is said to be able to automate planning a wedding, booking an entire vacation, updating a calendar, and converting screenshots into editable presentations. The impact on publishers, ecommerce stores, and SEOs cannot be overstated. This is what you should know and how to prepare for what could be one of the most consequential changes to online interactions since the invention of the browser.
OpenAI ChatGPT Agent Overview
OpenAI ChatGPT agent is built from three core parts: Operator and Deep Research, OpenAI’s two autonomous AI agents, plus ChatGPT’s natural language capabilities.
Operator can browse the web and interact with websites to complete tasks.
Deep Research is designed for multi-step research that is able to combine information from different resources and generate a report.
ChatGPT agent requests permission before taking significant actions and can be interrupted and halted at any point.
ChatGPT Agent Capabilities
ChatGPT agent has access to multiple tools to help it complete tasks:
A visual browser for interacting with web pages through their on-page interface.
A text-based browser for answering reasoning-based queries.
A terminal for executing actions through a command-line interface.
Connectors, which are user-authorized integrations (using APIs) that enable ChatGPT agent to interact with third-party apps.
Connectors are like bridges between ChatGPT agent and your authorized apps. When users ask ChatGPT agent to complete a task, the connectors enable it to retrieve the needed information and complete tasks. Direct API access via connectors enables it to interact with and extract information from connected apps.
ChatGPT agent can open a page with a browser (either text or visual), download a file, perform an action on it, and then view the results in the visual browser. ChatGPT connectors enable it to connect with external apps like Gmail or a calendar for answering questions and completing tasks.
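The pattern is easier to see with a sketch. The following is a purely hypothetical example of how a connector-style integration generally works: the agent is handed a machine-readable description of an allowed action, and the host application executes the real API call on the user’s behalf. None of the names or fields below come from OpenAI’s actual connector schema; they are invented for illustration.

```python
# Hypothetical connector-style sketch. Names and fields are invented, not OpenAI's schema.
import json

calendar_connector = {
    "name": "find_free_slots",                      # invented action name
    "description": "Return open time slots from the user's connected calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "date_range_start": {"type": "string", "format": "date"},
            "date_range_end": {"type": "string", "format": "date"},
            "duration_minutes": {"type": "integer"},
        },
        "required": ["date_range_start", "date_range_end", "duration_minutes"],
    },
}

def handle_call(name: str, arguments: dict) -> dict:
    """Host-side stub: translate the agent's structured call into an API request."""
    if name == "find_free_slots":
        # A real integration would call the calendar provider's API here.
        return {"slots": ["2025-07-21T10:00", "2025-07-21T15:30"]}
    return {"error": f"unknown action {name}"}

print(json.dumps(handle_call("find_free_slots",
                             {"date_range_start": "2025-07-21",
                              "date_range_end": "2025-07-22",
                              "duration_minutes": 30}), indent=2))
```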
ChatGPT Agent Automation of Web-Based Tasks
ChatGPT agent is able to complete entire complex tasks and summarize the results.
Here’s how OpenAI describes it:
“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.
You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.”
ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.
….ChatGPT agent can access your connectors, allowing it to integrate with your workflows and access relevant, actionable information. Once authenticated, these connectors allow ChatGPT to see information and do things like summarize your inbox for the day or find time slots you’re available for a meeting—to take action on these sites, however, you’ll still be prompted to log in by taking over the browser.
Additionally, you can schedule completed tasks to recur automatically, such as generating a weekly metrics report every Monday morning.”
What Does ChatGPT Agent Mean For SEO?
ChatGPT agent raises the stakes for publishers, online businesses, and SEO, in that making websites Agentic AI–friendly becomes increasingly important as more users become acquainted with it and begin sharing how it helps them in their daily lives and at work.
A recent study about AI agents found that OpenAI’s Operator responded well to structured on-page content. Structured on-page content enables AI agents to accurately retrieve the specific information relevant to their tasks and perform actions (like filling in a form), and it helps disambiguate the web page (i.e., make it easily understood). I usually refrain from using jargon, but disambiguation is a word all SEOs need to understand because Agentic AI makes it more important than it has ever been.
Examples Of On-Page Structured Data
Headings
Tables
Forms with labeled input fields
Product listings with consistent fields like price, availability, and the product’s name or label in its title (see the markup sketch after this list)
Authors, dates, and headlines
Menus and filters in ecommerce web pages
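As referenced in the list above, here is a minimal sketch of the kind of consistent, machine-readable product data that helps an agent act on a page, expressed as schema.org Product JSON-LD generated in Python. The product and its values are hypothetical.

```python
# Minimal sketch of schema.org Product markup; the product and values are hypothetical.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",        # hypothetical product name
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# The output would be embedded in the page inside a JSON-LD script tag.
print(json.dumps(product_jsonld, indent=2))
```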
Takeaways
ChatGPT agent is a milestone in how users interact with the web, capable of completing multi-step tasks like planning trips, analyzing competitors, and generating reports or presentations.
OpenAI’s ChatGPT agent combines autonomous agents (Operator and Deep Research) with ChatGPT’s natural language interface to automate personal and professional workflows.
Connectors extend the agent’s capabilities by providing secure API-based access to third-party apps like calendars and email, enabling task execution across platforms.
The agent can interact directly with web pages, forms, and files, using tools like a visual browser, a code execution terminal, and a file handling system.
Agentic AI responds well to structured, disambiguated web content, making SEO and publisher alignment with structured on-page elements more important than ever.
Structured data improves an AI agent’s ability to retrieve and act on website information. Sites that are optimized for AI agents will gain the most, as more users depend on agent-driven automation to complete online tasks.
OpenAI’s ChatGPT agent is an automation system that can independently complete complex online tasks, such as booking trips, analyzing competitors, or summarizing emails, by using tools like browsers, terminals, and app connectors. It interacts directly with web pages and connected apps, performing actions that previously required human input.
For publishers, ecommerce sites, and SEOs, ChatGPT agent makes structured, easily interpreted on-page content critical because websites must now accommodate AI agents that interact with and act on their data in real time.
In a recent installment of SEO Office Hours, Google’s John Mueller offered guidance on how to keep unwanted pages out of search results and addressed a common source of confusion around sitelinks.
The discussion began with a user question: how can you remove a specific subpage from appearing in Google Search, even if other websites still link to it?
Sitelinks vs. Regular Listings
Mueller noted he wasn’t “100% sure” he understood the question, but assumed it referred either to sitelinks or standard listings. He explained that sitelinks, those extra links to subpages beneath a main result, are automatically generated based on what’s indexed for your site.
Mueller said:
“There’s no way for you to manually say I want this page indexed. I just don’t want it shown as a sitelink.”
In other words, you can’t selectively prevent a page from being a sitelink while keeping it in the index. If you want to make sure a page never appears in any form in search, a more direct approach is required.
How To Deindex A Page
Mueller outlined a two-step process for removing pages from Google Search results using a noindex directive (a quick way to verify both steps is sketched after them):
Allow crawling: First, make sure Google can access the page. If it’s blocked by robots.txt, the noindex tag won’t be seen and won’t work.
Apply a noindex tag: Once crawlable, add a noindex meta tag to the page to instruct Google not to include it in search results.
This method works even if other websites continue linking to the page.
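Here is that verification sketch, using only Python’s standard library: it confirms the URL isn’t blocked by robots.txt (so Google can actually see the directive) and that the page serves a noindex signal, either as a meta robots tag or an X-Robots-Tag response header. The URL is hypothetical, and the HTML check is deliberately simple.

```python
# Hedged sketch: verify crawlability and a noindex signal for a hypothetical URL.
import urllib.robotparser
import urllib.request

url = "https://example.com/private-page"          # hypothetical page to deindex

# Step 1: Google must be ALLOWED to crawl the page, or it never sees the noindex.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", url))

# Step 2: the page should return a noindex directive, either as a meta robots
# tag in the HTML or as an X-Robots-Tag HTTP response header.
with urllib.request.urlopen(url) as resp:
    header_noindex = "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower()
    body = resp.read().decode("utf-8", errors="ignore").lower()
    # Deliberately naive string check; a real audit would parse the HTML.
    meta_noindex = 'name="robots"' in body and "noindex" in body
print("noindex present:", header_noindex or meta_noindex)
```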
Removing Pages Quickly
If you need faster action, Mueller suggested using Google Search Console’s URL Removal Tool, which allows site owners to request temporary removal.
“It works very quickly” for verified site owners, Mueller confirmed.
For pages on sites you don’t control, there’s also a public version of the removal tool, though Mueller noted it “takes a little bit longer” since Google must verify that the content has actually been taken down.
Hear Mueller’s full response in the video below:
What This Means For You
If you’re trying to prevent a specific page from appearing in Google results:
You can’t control sitelinks manually. Google’s algorithm handles them automatically.
Use noindex to remove content. Just make sure the page isn’t blocked from crawling.
Act quickly when needed. The URL Removal Tool is your fastest option, especially if you’re a verified site owner.
Choosing the right method, whether it’s noindex or a removal request, can help you manage visibility more effectively.
I recently saw Stephen Kenwright speak at a small Sistrix event in Leeds about strategies for exploiting Google’s brand bias, and a lot of what he said still feels as fresh today as it did over a decade ago when he first started promoting this theory.
Some might say (Stephen included) that this is what SEO should always have been about.
I spoke to Stephen, the founder of Rise at Seven, about his talk and about how his theories and strategies could translate to a world of large language model (LLM) optimization alongside a fractured search journey.
You can watch the full interview with Stephen on IMHO below, or continue reading the article summary.
Google’s Brand Bias Is Foundational
Brand bias isn’t a recent development. Stephen was already writing about it in 2016 during his time at Branded3. What underpins this bias is the trust users have in brands.
“Google wants to give a good experience to its users. That means surfacing the results they expect to see. Often, that’s a brand they already know,” Stephen explained.
When users search, they’re often subconsciously looking to reconnect with a mental shortcut that brands provide. It’s not about discovery; it’s about recognition.
When brands invest in traditional marketing channels, they influence user behavior in ways that create cascading effects across digital platforms.
Television advertising, for example, makes viewers significantly more likely to click on branded results even when searching for generic terms.
Traditional Marketing Directly Influences Search Behavior
At his talk in Leeds, Stephen referenced research that demonstrates television advertising creates measurable impacts on search behavior, with viewers 33% more likely to click on advertised brands in search results.
“People are about a third more likely to click your result after seeing a TV ad, and they convert better, too,” Stephen said.
When users encounter brands through traditional marketing channels, they develop mental associations that influence their subsequent search behavior. These behavioral patterns then signal to Google that certain brands provide better user experiences.
“Having the trust from the user comes from brand building activity. It doesn’t come from having an exact match domain that happens to rank first for a keyword,” Stephen emphasized. “That’s just not how the real world works.”
Investment In Brand Building Gains More Buy-In From C-Suite
Even though this bias has been evident for so long, Stephen highlighted a disconnect between SEO and brand-building activities within the industry.
“Every other discipline from PR to the marketing manager through to the social media team, literally everyone else, including the C-suite is interested in brand in some capacity and historically SEOs have been the exception,” Stephen explained.
This separation has created missed opportunities for SEOs to access larger marketing budgets and gain executive support for their initiatives.
By shifting focus toward brand-building activities that impact search visibility, SEOs can better align with broader marketing objectives.
“Just by switching that mindset and asking, ‘What’s the impact on brand of our SEO activity?’ we get more buy-in, bigger budgets, and better results,” he said.
Make A Conscious Decision About Which Search Engine To Optimize For
While Google’s dominance remains statistically intact, user behavior tells us that there has always existed a fractured search journey.
Stephen cited that half of UK adults use Bing monthly. A quarter are on Quora. Pinterest and Reddit are seeing massive engagement, especially with younger users. Nearly everyone uses YouTube, and they spend significantly more time on it than on Google.
Also, specialized search engines like Autotrader for used cars and Amazon for ecommerce have captured significant market share in their respective categories.
This fragmentation means that conscious decisions about platform optimization become increasingly important. Different platforms serve different demographics and purposes, requiring strategic choices about where to invest optimization efforts.
I asked Stephen if he thought Google’s dominance was under threat, or if it would remain part of a fractured search journey. He thought Google would stay relevant for at least half a decade to come.
“I don’t see Google going anywhere. And I also don’t see the massive difference in LLM optimization. So most of the things that you would be doing for Google now … are broadly marketing things anyway and broadly impact LLM optimization.”
LLM Optimization Could Be A Return To Traditional Marketing
Looking toward AI-driven search platforms, Stephen believes the same brand-building tactics that work for Google will prove effective across LLM platforms. These new platforms don’t necessarily demand new rules; they reinforce old ones.
“What works in Google now, broadly speaking, is good marketing. That also applies to LLMs,” he said.
While we’re still learning how LLMs surface content and determine authority, early indicators suggest trust signals, brand presence, and real-world engagement all play pivotal roles.
The key insight is that LLM optimization doesn’t require entirely new approaches but rather a return to fundamental marketing principles focused on audience needs and brand trust.
Television Advertising Creates Significant Impact
I asked Stephen what he would do if he were to launch a new brand and how he would quickly gain traction.
In an interesting twist for someone who has worked in the SEO industry for so long, he cited TV as his primary focus.
“I’d build a transactional website and spend millions on TV [advertising]. If I did more [marketing], I’d add PR,” Stephen told me.
This recommendation reflects his belief that traditional marketing channels create a significant impact.
He believes the combination of a functional ecommerce website with substantial television advertising investment, supplemented by PR activities, provides the foundation for rapid brand recognition and search visibility.
Before We Ruined The Internet
To me, it feels like we are going full circle and back to the days prior to the introduction of “new media” in the early 90s, when TV advertising was dominant and offline advertising was heavily influential.
“It’s like we’re going back to before we ruined the internet,” Stephen joked.
In reality, we’re circling back to what always worked: building real brands that people trust, remember, and seek out. The future requires classical marketing principles that prioritize audience understanding and brand building over technical optimization tactics.
This shift benefits the entire marketing industry by encouraging more integrated approaches that consider the complete customer journey rather than isolated technical optimizations.
Success in both search and LLM platforms increasingly depends on building genuine brand recognition and trust through consistent, audience-focused marketing activities across multiple channels.
Whether it’s Google, Bing, an LLM, or something we haven’t seen yet, brand is the one constant that wins.
Thank you to Stephen Kenwright for offering his insights and being my guest on IMHO.
Someone asked whether it’s okay for structured data to show Google content that only logged-in users can see, while logged-out users see something different. John Mueller’s answer was unequivocal.
“Will this markup work for products in a unauthenticated view in where the price is not available to users and they will need to login (authenticate) to view the pricing information on their end? Let me know your thoughts.”
John Mueller answered:
“If I understand your use-case, then no. If a price is only available to users after authentication, then showing a price to search engines (logged out) would not be appropriate. The markup should match what’s visible on the page. If there’s no price shown, there should be no price markup.”
What’s The Problem With That Structured Data?
The price is visible to logged-in users, so technically the content (in this case the product price) is available to those users who are logged in. It’s a good question because a case can be made that the content shown to Google is available, somewhat like content behind a paywall, except here it’s gated behind a login.
But that’s not good enough for Google, and it isn’t really comparable to paywalled content because these are two different things. Google judges what is “on the page” based on what logged-out users will see.
Google’s guideline about the structured data matching what’s on the page is unambiguous:
“Don’t mark up content that is not visible to readers of the page.
…Your structured data must be a true representation of the page content.”
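As a hedged illustration of that guideline, the sketch below shows Product markup for the logged-out view of a hypothetical wholesale product: because no price is visible to logged-out visitors, the markup carries no price.

```python
# Hedged illustration: markup for a logged-out view where no price is visible.
import json

logged_out_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Wholesale Widget",            # hypothetical product
    "description": "Log in to see pricing.",
    # No "offers"/"price" fields, because no price is shown to logged-out visitors.
}
print(json.dumps(logged_out_markup, indent=2))
```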
This is a question that gets asked fairly frequently on social media and in forums so it’s good to go over it for those who might not know yet.
And reviews? They’re no longer just trust signals. They’re ranking signals.
This article breaks down what’s changing, what’s working, and how agencies can keep their clients visible across both traditional local search and Google’s evolving AI layer.
Reviews Are Now A Gateway To Search Inclusion
Reviews have long been seen as conversion tools, helping users decide between businesses they’ve already discovered. But that role is evolving.
In the era of Google’s AI Overviews (AIOs), reviews are increasingly acting as discovery signals, helping determine which businesses get included in the first place.
GatherUp’s 2024 Online Reputation Benchmark Report shows that businesses with consistent, multi-channel review strategies, especially those generating both first- and third-party reviews, saw stronger reputation signals across volume, recency, and engagement. These are the exact kinds of signals that Google’s systems now appear to prioritize in AI-generated results.
That observation is reinforced by recent industry research and leaked Google documentation, which suggest that review characteristics like click-throughs, content depth, and freshness contribute to both local pack visibility and AIO inclusion.
In other words, the businesses getting summarized at the top of the SERP aren’t just highly rated. They’re actively reviewed, broadly cited, and seen as credible across sources Google trusts.
Recency Is A Signal. “Relevance” Is Google’s Shortcut.
More than two-thirds of consumers say they prioritize recent reviews when evaluating a business. But Google doesn’t necessarily show them first.
Instead, Google’s “Most Relevant” filter may prioritize older reviews that match query terms, even if they no longer reflect the current customer experience.
That’s why it’s critical for businesses to maintain steady review velocity. A flood of reviews in January followed by silence for six months won’t cut it. The AI layer, and the human reader, needs signals that say “this business is active and trustworthy right now.”
For agencies, this presents an opportunity to shift client mindset from static review goals to ongoing review strategies.
Star Ratings Still Matter, But Mostly As A Decision Shortcut
During our recent webinar with Search Engine Journal, we explored how consumers are using star ratings to disqualify options, not differentiate them.
Research shows:
73% of consumers won’t consider businesses with fewer than 4 stars
But 69% are still open to doing business with brands that fall short of a perfect 5.0, so long as the reviews are recent and authentic
In other words, people are looking for a “safe” choice, not a flawless one.
A few solid 4-star reviews with real detail from the past week often carry more weight than a dozen perfect ratings from 2021.
Agencies should help clients understand this nuance, especially those who are hesitant to request reviews out of fear of imperfection.
First-Party & Third-Party Reviews: Both Are Necessary
AI Overviews aggregate information from across the web, including structured data from your own website and unstructured commentary from others.
First-party reviews: These are collected and hosted directly on the business’s website. They can be marked up with schema, giving Google structured, machine-readable content to use in summaries and answer boxes.
Third-party reviews: These appear on platforms like Google, Yelp, Facebook, TripAdvisor, and Reddit. They’re often seen as more objective and are more frequently cited in AI Overviews.
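For the first-party side, the schema markup mentioned above typically looks something like the following sketch: a business with an aggregate rating and one individual review, expressed as schema.org JSON-LD. The business, reviewer, and ratings are hypothetical.

```python
# Hedged sketch of first-party review markup; business, reviewer, and ratings are hypothetical.
import json

review_markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
    "review": [{
        "@type": "Review",
        "author": {"@type": "Person", "name": "Hypothetical Customer"},
        "datePublished": "2025-07-01",
        "reviewRating": {"@type": "Rating", "ratingValue": "5"},
        "reviewBody": "Fast response and clear pricing.",
    }],
}
print(json.dumps(review_markup, indent=2))
```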
Businesses that show up consistently across both types are more likely to be included in AIOs, and appear trustworthy to users.
GatherUp supports multi-source review generation, schema markup for first-party feedback, and rotating requests across platforms. This makes it easier for agencies to build a review presence that supports both local SEO and AIO visibility.
AIOs Pull From More Than Just Google Reviews
According to recent data from Whitespark, over 60% of citations in AI Overviews come from non-Google sources. This includes platforms like:
Reddit.
TripAdvisor.
Yelp.
Local blogs and industry-specific directories.
If your client’s reviews live only on Google, they risk being overlooked entirely.
Google’s AI is scanning for what it deems “experience-based” content, unfiltered, authentic commentary from real people. And it prefers to cross-reference multiple sources to confirm credibility.
Agencies should encourage clients to broaden their review footprint and seek mentions in trusted third-party spaces. Dynamic review flows, QR codes, and conditional links can help diversify requests without overburdening the customer.
Responses Influence Visibility & Build Trust
Review responses are no longer just a nice gesture. They’re part of the algorithmic picture.
GatherUp’s benchmark research shows:
92% of consumers say responding to reviews is now part of basic customer service.
73% will give a business a second chance if their complaint receives a thoughtful reply.
But there’s also a technical upside. When reviews are clicked, read, and expanded, they generate engagement signals that may impact local rankings. And if a business’s reply includes resolution details or helpful context, it increases the content depth of that listing.
For agencies juggling multiple clients, automation helps. GatherUp offers AI-powered suggested responses that retain brand tone and ensure timely replies, without sounding robotic.
How Agencies Can Make AIO Part Of Their Core Strategy
Google’s AI systems are designed to answer user questions directly, often without requiring a click. That means review content is increasingly shaping brand narratives within the SERP.
To adapt, agencies should align client visibility efforts across both search formats:
For Local Pack Optimization
Keep Google Business Profile listings fully updated (photos, categories, Q&A).
AI Overviews now appear in nearly two-thirds of local business search queries. That means your clients’ next customers may form an impression—or make a decision—before ever clicking through to a website or map pack listing.
Visibility is no longer guaranteed. It’s earned through content, coverage, and credibility.
And reviews sit at the center of all three.
For agencies, this is a moment of opportunity. You already have the tools to guide clients through the shift. You know how to structure content, build citations, and amplify voices that resonate with customers.
GatherUp is the only proactive reputation management platform purpose-built for digital agencies. We help you build, manage, and defend your clients’ online reputations.
GatherUp supports:
First- and third-party review generation across multiple platforms,
Schema-marked up feedback collection for AIO relevance,
Intelligent, AI-assisted response workflows,
Seamless white-labeling for full agency control,
Scalable review operations tools that can help you manage 10 or 10,000 locations and clients.
Agencies who use GatherUp don’t just react to algorithm changes. They shape client visibility, and defend it.
To learn more, watch the full webinar for actionable strategies, data-backed insights, and examples of AIO-influenced local search in the wild.
Google Search Console Core Web Vitals (CWV) reporting for mobile is experiencing a dip that is confirmed to be related to the Chrome User Experience Report (CrUX). Search Console CWV reports for mobile performance show a marked dip beginning around July 10, after which the reporting appears to stop completely.
The issue was raised in a post directed at John Mueller:
“Hey @johnmu.com is there a known issue or bug with Core Web Vitals reporting in Search Console? Seeing a sudden massive drop in reported URLs (both “good” and “needs improvement”) on mobile as of July 14.”
The person referred to July 14, but that’s the date the reporting hit zero. The drop actually starts closer to July 10, which you can see when you hover a cursor over the point where the decline begins.
Google’s John Mueller responded:
“These reports are based on samples of what we know for your site, and sometimes the overall sample size for a site changes. That’s not indicative of a problem. I’d focus on the samples with issues (in your case it looks fine), rather than the absolute counts.”
The person who started the discussion responded to inform Mueller that this isn’t happening just on his site; the peculiar drop in reporting is showing up on other sites as well.
Mueller was unaware of any problem with CWV reporting, so he naturally assumed that this was an artifact of natural changes in internet traffic and user behavior. His next response continued under the assumption that this wasn’t a widespread issue:
“That can happen. The web is dynamic and alive – our systems have to readjust these samples over time.”
Then Jamie Indigo responded to confirm she’s seeing it, too.
“Hey John! Thanks for responding 🙂 It seems like … everyone beyond the usual ebb and flow. Confirming nothing in the mechanics have changed?”
At this point it was becoming clear that this weird behavior wasn’t isolated to just one site, and Mueller’s response to Jamie reflected this growing awareness. Mueller confirmed that nothing had changed on the Search Console side, leaving open the possibility that the issue was on the CrUX side of Core Web Vitals reporting.
His response:
“Correct, nothing in the mechanics changed (at least with regards to Search Console — I’m also not aware of anything on the Chrome / CrUX side, but I’m not as involved there).”
CrUX CWV Field Data
CrUX is the acronym for the Chrome User Experience Report. It’s CWV reporting based on real website visits, with data collected from Chrome users who have opted in to sharing their data for the report. Google’s documentation describes it like this:
“The Chrome User Experience Report (also known as the Chrome UX Report, or CrUX for short) is a dataset that reflects how real-world Chrome users experience popular destinations on the web.
CrUX is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented.
CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites.”
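If you want to cross-check a Search Console dip against the underlying dataset, you can query the CrUX API directly. The sketch below is a minimal example assuming you have a Google Cloud API key with the Chrome UX Report API enabled; the origin is hypothetical.

```python
# Hedged sketch of a CrUX API query for mobile field data; origin and key are placeholders.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"   # assumption: a key with the Chrome UX Report API enabled
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

payload = {
    "origin": "https://example.com",     # hypothetical site
    "formFactor": "PHONE",               # mobile, where the reporting dip appeared
    "metrics": ["largest_contentful_paint", "interaction_to_next_paint",
                "cumulative_layout_shift"],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)["record"]

# The p75 values are what the Core Web Vitals assessment is based on.
for metric, data in record["metrics"].items():
    print(metric, data.get("percentiles", {}).get("p75"))
```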
Core Web Vitals Reporting Outage Is Widespread
At this point more people joined the conversation, with Alan Bleiweiss offering both a comment and a screenshot showing the same complete drop-off in reporting happening on the Search Console CWV reports for other websites.
He posted:
“oooh Google had to slow down server requests to set aside more power to keep the swimming pools cool as the summer heats up.”
Here’s a closeup detail of Alan’s screenshot of a Search Console CWV report:
Screenshot Of CWV Report Showing July 10 Drop
I searched the Chrome Lighthouse changelog to see if there’s anything there that corresponds to the drop but nothing stood out.
So what is going on?
CWV Reporting Outage Is Confirmed
I next checked the X and Bluesky accounts of Googlers who work on the Chrome team and found a post by Barry Pollard, Web Performance Developer Advocate on Google Chrome, who had posted about this issue last week.
Barry posted a note about a reporting outage on Bluesky:
“We’ve noticed another dip on the metrics this month, particularly on mobile. We are actively investigating this and have a potential reason and fix rolling out to reverse this temporary dip. We’ll update further next month. Other than that, there are no further announcements this month.”
Takeaways
Google Search Console Core Web Vitals (CWV) data drop: A sudden stop in CWV reporting was observed in Google Search Console around July 10, especially on mobile.
Issue is widespread, not site-specific: Multiple users confirmed the drop across different websites, ruling out individual site problems.
Origin of issue is not at Search Console: John Mueller confirmed there were no changes on the Search Console side.
Possible link to CrUX data pipeline: Barry Pollard from the Chrome team confirmed the dip and said a fix is rolling out to reverse it, with a further update promised next month.
We now know that this is a confirmed issue. Google Search Console’s Core Web Vitals reports began showing a reporting outage around July 10, leading users to suspect a bug. The issue was later acknowledged by Barry Pollard as a reporting outage affecting CrUX data, particularly on mobile.
Over the past few months, I’ve deeply analyzed how Google’s AI Overviews handle long-tail queries, dug into what makes brands visible in large language models (LLMs), and worked with brands trying to future-proof their SEO strategies.
Today’s Memo is the first in a two-part series where I’m covering a tactical deep dive into one of the most overlooked mindset shifts in SEO: optimizing for topics (not just keywords).
In this issue, I’m breaking down:
Why keyword-first SEO creates surface-level content and cannibalization.
What the actual differences are. (Isn’t this just a pillar-cluster approach? Nope.)
Thoughts from other pros across the web.
How to talk through these issues with your stakeholders, i.e., clients, the C-suite, and your teams (for premium subscribers).
And next week, I’ll cover how to build a topic map, and operationalize a topic-first approach to SEO across your team.
If you’ve ever struggled to convince stakeholders to think beyond search volume or wondered how to grow authority, this memo’s for you.
At some point over the last year, it’s likely you’ve heard the guidance that keywords are out and topics are in.
The SEO pendulum has swung. If you haven’t already been optimizing for topics instead of keywords (and you really should have), now’s the time to finally start.
But what does that actually mean? How do we do it?
And how are we supposed to monitor topical performance?
With all this talk about LLM visibility, AI Overviews, AI Mode, query fan-out, entities, and semantics, when we optimize for topics, are we optimizing for humans, algorithms, or language models?
Personally, I think we’re making this more difficult than it has to be. I’ll walk you through how to optimize for (and measure/monitor) topics vs. keywords.
Why Optimizing For Topics > Keywords In 2025 (And Beyond)
If your team is still focused on keywords over topics, it’s time to explain the importance of this concept to the group.
Let’s start here: The traditional keyword-first approach worked when Google primarily ranked pages based on string matching. But in today’s search landscape, keywords are no longer the atomic unit of SEO.
But topics are.
In fact, we’re living through what you (and Kevin) might call the death of the keyword.
Think of it like this:
Topics are the foundation and framing of your site’s organic authority and visibility, like the blueprint and structure of a house.
Individual keywords are the bricks and nails that help build it, but optimizing for individual queries on their own, without the topics to anchor them, doesn’t pull much weight.
If you focus only on keywords, it’s like obsessing over picking the right brick color without realizing the blueprint is incomplete.
But when you plan around (and optimize for) topics, you’re designing a structure that’s built to last – one that search engines and LLMs can understand as authoritative and comprehensive.
Google no longer sees a good search result as a direct match between a user’s query and a keyword on your page. That is some old SEO thinking that we all need to let go of completely.
Instead, search engines interpret intent and context, and then use language models to expand that single query into dozens of variations, a.k.a. query fan-out.
That’s why a piecemeal approach to targeting SEO keywords based on search volume, stage of the search journey, or even bottom-of-funnel (BOF) or pain-point intent can be a waste of time.
And don’t get me wrong: Targeting queries that are BOF and solve core pain points of your audience is a wise approach – and you should be doing it.
But own the topics, and you can see your brand’s organic visibility outlast big algorithm changes.
Keyword-Only Thinking Limits Growth
And after all that, if it’s still a challenge convincing your stakeholders, clients, or team to pivot to topic-forward thinking, explain how keyword-only thinking limits growth.
Teams stuck in keyword-first mode often run into three problems:
Surface-level content: Articles become thin, narrowly scoped, and easy to outcompete.
Cannibalization: Content overlap happens often; articles compete with each other (and lose).
Blind spots: You miss related subtopics, chances to tailor content to personas, and problems within the topic that your audience actually cares about.
On the other hand, a topic-first approach allows you to build deeper, more useful content ecosystems.
Your goal is not to just answer one query well; it’s to become a go-to resource for the entire subject area.
Understanding The Topic Maturity Path: Old Way Vs. New Way
Let’s take a closer look at how these two approaches are different from one another.
Old Way: Keyword-First SEO
The classic approach to SEO centered around picking individual keywords, assigning each one a page, and publishing content that aimed to rank for that phrase.
This model worked well when Google’s ranking signals were more literal (think string matching, backlink anchor text, and on-page optimization carrying most of the weight).
But in 2025 and beyond, this approach is showing its age.
Keyword-first SEO often looks like this:
Minimal internal cohesion across pages; articles aren’t working together to build topic depth or reinforce semantic signals.
Content decisions are often driven by average monthly search and tool-based keyword difficulty scores, rather than intent or persona-specific needs.
A high-effort, low-durability content-first SEO strategy; posts may rank initially and hold for a while, but they rarely stick or scale.
Monitoring performance is often focused on traffic projections and done by page type, SEO-tool-informed intent type, query rankings, and (yes) even sometimes topic groups.
But even when teams adopt a topic cluster-first model (like grouping related keywords into topic clusters or deploying a topic-focused pillar + cluster strategy), they often stay tethered to outdated keyword logic.
The result? Surface-level coverage for single keywords, frequent content cannibalization, and a site structure that might seem organized but still lacks strategic topic optimization.
Without persona insights, or a clear content hierarchy built around core topics, you’re building with bricks, but no real authority blueprint.
Wait a second. Is optimizing for topics any different from the classic pillar + topic cluster approach?
Yes and no.
A pillar + cluster model (a.k.a. hub and spoke) is a framework that can organize a topical approach.
But strategists should shift from matching pages to exact keywords → covering concepts deeply instead.
This classic framework can support topic optimization, but only if it’s implemented with a topic-first mindset.
Topic-driven pillar + cluster model: Pillar = offers a comprehensive guide to the topic; clusters = provide in-depth support for key concepts, different personas, related problems, and unique angles.
Simply selecting high-volume keywords to optimize for in your pillar + cluster strategy plan doesn’t work like it used to.
So, a pillar + cluster plan can help you organize your approach, but you’ll need to cover your core topics with depth and from a variety of perspectives, and for each persona in your target audience.
New Way: Topic-First SEO
Your future-proof SEO strategy doesn’t start with a focus on keywords; it starts with focusing on your target people, their problems, and the topics they care about.
Topic-first SEO approaches content through the lens of the real-world solutions your brand provides through your products and services.
You build authority by exploring a topic (one that you can directly speak to with authority) from all relevant angles: different personas, intent types, pain points, industry sectors, and contexts of use.
But keep in mind: Topic-first SEO is not exactly a page volume game, although the breadth and depth of your topic coverage are crucial.
Topic-first SEO involves:
Covering your core, targeted topics across personas.
Investing in “zero-volume” content based on actual questions and needs your target audience has.
Producing content within your topic that offers different perspectives and hot takes.
Building authority with information gain: i.e., new, fresh data that offers unique insights within your core targeted topics.
And guess what? This approach aligns with how Google now understands and ranks content:
Entities > keywords: Google doesn’t just match “search strings” anymore. It understands concepts and audiences (and how they’re related) through the knowledge graph.
Content built around people, problems, and questions: You’re not answering one query when you optimize for a topic as a whole; you’re solving layered, real-world challenges for your audience.
Content journeys, not isolated posts: Topic-first strategies map content to different user types and their stage in the journey (from learning to buying to advocating).
More durable visibility + stronger links: When your site deeply reflects a topic and tackles it from all angles, it attracts both organic queries and natural backlinks from people referencing real insight and utility.
That E-E-A-T we’re all supposed to focus on: Kevin discusses this a bit more when he digs into Google Quality Rater Guidelines in building and measuring brand authority. But this is an absolute no-brainer: Taking a topic-first approach actively works toward establishing Experience, Expertise, Authoritativeness, and Trustworthiness.
I wanted to know how others are doing this, so over on LinkedIn, I asked for your thoughts and questions.
Here are some that stuck out to me that I think we can all benefit from considering:
Lily Grozeva asks: “Is covering a topic and establishing a brand as an authority on it still a volume game?”
My answer: No. I think Backlinko is a good example. The site built incredible visibility with just a few, but very deep guides.
Diego Gallo asks: “Any tips on how to decide if a question should belong to a page or be its own page?”
My answer: “In my experience, one way to determine that is cosine similarity between the (tokenized, embedded) question and the main topics / intents of the pages that you can pick from.”
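Here is a minimal sketch of that cosine-similarity check: embed the candidate question and each page’s main topic, then fold the question into the most similar page, or give it its own page if nothing is close. The embedding model and the threshold below are assumptions for illustration, not recommendations.

```python
# Minimal sketch of a question-to-page assignment via cosine similarity.
# Model name and threshold are assumptions; tune both on your own content.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")    # assumed, commonly available model

question = "How long does a topic-first content strategy take to show results?"
page_topics = [
    "Topic-first SEO strategy: planning and roadmaps",
    "Measuring SEO performance and reporting",
    "Technical SEO audits",
]

q_vec = model.encode([question])[0]
p_vecs = model.encode(page_topics)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(q_vec, p) for p in p_vecs]
best = int(np.argmax(scores))
THRESHOLD = 0.5   # assumed cutoff
if scores[best] >= THRESHOLD:
    print(f"Fold the question into: {page_topics[best]} (score {scores[best]:.2f})")
else:
    print("Low similarity to every existing page: consider a dedicated page.")
```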
Diego also left a good tip for covering all relevant intents: Build an “intent template” for each page (e.g., product landing page, blog article, etc.). Base the template on what works well on Google.
Matthew Mellinger called out that you can use Google’s People Also Asked questions to get clarity on which questions to answer on a page.
By the way, you can also use the intent classifier I built for premium subscribers for this task!
Not Strategizing With A Topic-First Mindset? You’re Outdated
Next week, we’re going to take a deep look at operationalizing a topic-first SEO strategy, but here are some final thoughts.
While there are so many unknowns in the current search landscape, there are a few truths we can ground ourselves in, whether optimizing for search engines or LLMs:
1. Your brand can still own a topic in the AI era.
As shown in the data from the UX study of AIOs, brand/authority is now the first gate users walk through when considering a click off the SERP, and search intent relevance is the second; snippet wording only matters once trust is secured.
If people are going to click, they’re going to click on the familiar and authoritative. Be the topical authority in your areas of expertise and offerings.
Have your brand show up again, and again, and again in search results across the topic. It’s simple, but it’s hard work.
2. I don’t think focusing on a topic-first mindset could backfire in any way (in 2025 or beyond).
Demonstrating to your core ICPs – whether they find you via paid ads, organic search, LLM chats, socials, or word of mouth – that you understand the topics they care about and the questions they have, and that you provide solutions for their specific needs through authoritative, branded website content, only builds trust … no matter how your brand is found.
3. Build topic systems, not just articles or pages.
Integrators need to take a page out of the aggregator’s product-led SEO playbook: Create a comprehensive system (similar to TripAdvisor’s millions of programmatic pages supported by user-generated content (UGC) reviews, but you don’t need millions 😅) built around your topics of expertise that tackles perspectives, solutions, and questions around each persona type for each sector you serve.
Build the organizational structure within your site that makes these topics and personas easy to navigate for users (and easy to crawl and understand for bots/agents).
4. Persona or ICP-based content is more useful, less generic, and built for the next era of personalized search results.
‘Nuff said. If you strategize topic optimization through the lens of personas (even to the point of including real interviews, surveys, comments, and tips from these persona types), you’re adding to the conversation with depth and unique data.
If you’re not building audience-first content, does optimizing for LLMs and search bots even matter? You’ll gain visibility, but will you gain trust once you finally earn that click?
Google’s John Mueller and Martin Splitt discussed the question of whether AI will replace the need for SEO. Mueller expressed a common-sense opinion about the reality of the web ecosystem and AI chatbots as they exist today.
Context Of Discussion
The context of the discussion was about SEO basics that a business needs to know. Mueller then mentioned that businesses might want to consider hiring an SEO who can help navigate the site through its SEO journey.
Mueller observed:
“…you also need someone like an SEO as a partner to give you updates along the way and say, ‘Okay, we did all of these things,’ and they can list them out and tell you exactly what they did, ‘These things are going to take a while, and I can show you when Google crawls, we can follow along to see like what is happening there.’”
Is There Value In Learning SEO?
It was at this point that Martin Splitt asked if generative AI will make having to learn SEO obsolete or whether entering a prompt will give all the answers a business person needs to know. Mueller’s answer was tethered to how things are right now and avoided speculating about how things will change in a year or more.
Splitt asked:
“Okay, I think that’s pretty good. Last but not least, with generative AI and chatbot AI things happening. Do you think there’s still a value in learning these kind of things? Or can I just enter a prompt and it’ll figure things out for me?”
Mueller affirmed that knowing SEO will still be needed as long as there are websites because search engines and chatbots need the information that exists on websites. He offered examples of local businesses and ecommerce sites that still need to be found, regardless of whether that’s through an AI chatbot or search.
He answered:
“Absolutely value in learning these things and in making a good website. I think there are lots of things that all of these chatbots and other ways to get information, they don’t replace a website, especially for local search and ecommerce.
So, especially if you’re a local business, maybe it’s fine if a chatbot mentions your business name and tells people how to get there. Maybe that’s perfectly fine, but oftentimes, they do that based on web content that they found.
Having a website is the basis for being visible in all of these systems, and for a lot of other things where you offer a service or something, some other kind of functionality on a website where you have products to sell, where you have subscriptions or anything, a chat response can’t replace that.
If you want a t shirt, you don’t want a description of how to make your own t-shirt. You want a link to a store where it’s like, ‘Oh, here’s t-shirt designs,’ maybe t-shirt designs in that specific style that you like, but you go to this website and buy those t-shirts there.”
Martin acknowledged the common sense of that answer and they joked around a bit about Mueller hoping that an AI will be able to do his job once he retires.
That’s the context for this part of their conversation:
“Okay. That’s very fair. Yeah, that makes sense. Okay, so you think AI is not going to take it all away from us?”
And Mueller answers with the comment about AI replacing him after he retires:
“Well, we’ll see. I can’t make any promises. I think, at some point, I would like to retire, and then maybe AI takes over my work then. But, like, there’s lots of stuff to be done until then. There are lots of things that I imagine AI is not going to just replace.”
What About CMS Platforms With AI?
Something that wasn’t discussed is the trend of AI within content management systems. Many web hosts and WordPress plugins are already integrating AI into the workflow of creating and optimizing websites. Wix has done so, and it won’t be much longer until AI has a stronger presence within WordPress, which is what the new WordPress AI team is working on.
Screenshot Of ChatGPT Choosing Number 27
Will AI ever replace the need for SEO? Many easy things that can be scaled are already automated. However, many of the best ideas for marketing and communicating with humans are still best handled by humans, not AI. The nature of generative AI, which is to generate the most likely answer or series of words in a sentence, precludes it from ever having an original idea. AI is so locked into being average that if you ask it to pick a number between one and fifty, it will choose the number 27 because the AI training binds it to picking the likeliest number, even when instructed to randomize the choice.
Listen to Search Off The Record at about the 24 minute mark:
The search engine industry is changing quickly, upended by AI platforms that have altered queries and informational journeys.
Consumers increasingly type extended prompts into AI platforms versus single keywords or phrases in traditional search engines. Prompts are much longer and are often voice-activated with follow-up questions.
Moreover, prompts are typically much more descriptive and thus harder to predict.
Nonetheless, knowing shoppers’ prompts is essential for understanding their needs. Unfortunately, third-party “question” tools are typically keyword-driven, unsuited for prompts.
Prompt analytics tools
New prompt-analysis tools are coming.
For example, Similarweb’s new “AI Traffic Intelligence” reveals the prompts driving traffic to any web page. The tool is an add-on to the company’s “Web Intelligence” platform, which starts at $1,500 per year. Similarweb has not published pricing for the AI addition. There is no free trial.
Aiso is another tool to see actual prompts based on topic, brand, or keywords. Users describe their business, products, and services. Aiso then shows actual, related prompts from ChatGPT and other AI platforms. Pricing starts at $75 per month, with a free trial.
New tools such as Similarweb’s “AI Traffic Intelligence” help understand shopper prompts.
Prompt optimization
Knowing the prompts of shoppers is useful only for understanding their needs and journeys. I know of no way to optimize a site for prompts because AI platforms don’t match content to queries the way search engines do. Both rely on intent, but AI platforms must interpret entire sentences, paragraphs, and follow-ups.
For example, most brick-and-mortar businesses don’t use “local” in their description.
A prompt from a hair-salon owner might resemble this:
I own a hair salon and need more people in my area to be aware of my business. Which kind of marketing agency or expert do I need (with my small budget)?
An AI platform would understand that local search is a good option. Here’s the response from Google’s AI Mode to the above prompt:
Here’s a breakdown of the types of marketing expertise to consider for your hair salon, particularly with a limited budget…:
SEO experts or consultants
What they do: SEO specialists help your website rank higher in search engine results for relevant keywords, making it easier for potential clients in your area to find your salon when they search online for services like “hair salon in [Your City]”.
Why it’s good for you: A strong online presence is crucial for attracting new clients. A well-optimized website, along with a claimed and optimized Google Business Profile, will significantly boost your salon’s visibility in local searches.
Things to consider: Look for those specializing in local SEO or SEO for the beauty industry. [Here, AI Mode recommended actual local SEO agencies specializing in the beauty industry.] Ask for case studies or testimonials showcasing their success in ranking local businesses.
The biggest ecommerce change from AI platforms is the impact on shopping journeys. Consumers’ prompts go beyond keywords, such as “hair salons in my town,” to include, say, prices, driving distance, specializations, and more.
Hence, optimizing for AI focuses on problem-solving and positioning a business as a solution provider.
Aligning landing pages to keywords remains viable for traditional search engines, where keywords are still predictable, but AI optimization means answering all kinds of relevant (unpredictable) questions.
Thus merchants looking for AI visibility should create problem-solving content. Researching keywords and prompts can help in understanding those problems, but attempting to match content to individual prompts is fruitless.