Google Discusses If It’s Okay To Make Changes For SEO Purposes via @sejournal, @martinibuster

Google’s John Mueller and Martin Splitt discussed making changes to a web page, observing the SEO effect, and the importance of tracking those changes. There has been long-standing hesitation around making too many SEO changes because of a patent filed years ago about monitoring frequent SEO updates to catch attempts to manipulate search results, so Mueller’s answer to this question is meaningful in the context of what’s considered safe.

Does this mean it’s okay now to keep making changes until the site ranks well? Yes, no, and probably. The issue was discussed on a recent Search Off the Record podcast.

Is It Okay To Make Content Changes For SEO Testing?

The context of the discussion was a hypothetical small business owner who has a website and doesn’t really know much about SEO. The situation is that they want to try something out to see if it will bring more customers.

Martin Splitt set up the discussion as the business owner asking different people for their opinions on how to update a web page but receiving different answers. Splitt then asks whether going ahead and changing the page is safe to do.

Martin asked:

“And I want to try something out. Can I just do that or do I hurt my website when I just try things out?”

Mueller affirmed that it’s okay to go ahead and try things out, commenting that most content management systems (CMS) enable a user to easily make changes to the content.

He responded:

“…for the most part you can just try things out. One of the nice parts about websites is, often, if you’re using a CMS, you can just edit the page and it’s live, and it’s done. It’s not that you have to do some big, elaborate …work to put it live.”

In the old days, Google used to update its index once a month. So SEOs would make their web page changes and then wait for the monthly update to see if those changes had an impact. Nowadays, Google’s index is essentially on a rolling update, responding to new content as it gets indexed and processed, with SERPs being re-ranked in reaction to changes, including user trends where something becomes newsworthy or seasonal (that’s where the freshness algorithm kicks in).

Making changes to a small site that doesn’t have much traffic is an easy thing. Making changes to a website responsible for the livelihood of dozens, scores, or even hundreds of people is a scary thing. So when it comes to testing, you really need to balance the benefits against the possibility that a change might set off a catastrophic chain of events.

Monitoring The SEO Effect

Mueller and Splitt next talked about being prepared to monitor the changes.

Mueller continued his answer:

“It’s very easy to try things out, let it sit for a couple of weeks, see what happens and kind of monitor to see is it doing what you want it to be doing. I guess, at that point, when we talk about monitoring, you probably need to make sure that you have the various things installed so that you actually see what is happening.

Perhaps set up Search Console for your website so that you see the searches that people are doing. And, of course, some way to measure the goal that you want, which could be something perhaps in Analytics or perhaps there’s, I don’t know, some other way that you track in person if you have a physical store, like are people actually coming to my business after seeing my website, because it’s all well and good to do SEO, but if you have no way of understanding has it even changed anything, you don’t even know if you’re on the right track or recognize if something is going wrong.”

Something that Mueller didn’t mention is the impact on user behavior on a web page. Does the updated content make people scroll less? Does it make them click on the wrong thing? Do people bounce out at a specific part of the web page?

That’s the kind of data Google Analytics does not provide because that’s not what it’s for. But you can get that data with a free Microsoft Clarity account. Clarity is a user behavior analytics SaaS app. It shows you where (anonymized) users are on a page and what they do. It’s an incredible window into web page effectiveness.

Martin Splitt responded:

“Yeah, that’s true. Okay, so I need a way of measuring the impact of my changes. I don’t know, if I make a new website version and I have different texts and different images and everything is different, will I immediately see things change in Search Console or will that take some time?”

Mueller responded that the amount of time it takes for changes to show up in Search Console depends on how big the site is and the scale of the changes.

Mueller shared:

“…if you’re talking about something like a homepage, maybe one or two other pages, then probably within a week or two, you should see that reflected in Search. You can search for yourself initially.

That’s not forbidden to search for yourself. It’s not that something will go wrong or anything. Searching for your site and seeing, whatever change that you made, has that been reflected. Things like, if you change the title to include some more information, you can see fairly quickly if that got picked up or not.”

When Website Changes Go Wrong

Martin next talks about what I mentioned earlier: when a change goes wrong. He makes the distinction between a technical change and changes for users. A technical change can be tested on a staging site, which is a sandboxed version of the website that neither search engines nor users can see. This is a prudent step before updating WordPress plugins or doing something big like swapping out the template. A staging site enables you to test technical changes to make sure nothing breaks. Giving the staging site a crawl with Screaming Frog to check for broken links or other misconfigurations is a good idea.

Mueller said that changes made for SEO can’t be tested on a staging site, which means that whatever changes you make, you have to be prepared for the consequences.

Listen to the Search Off the Record episode from about the 24-minute mark.

Featured Image by Shutterstock/Luis Molinero

Can AEO/GEO Startups Beat Established SEO Tool Companies? via @sejournal, @martinibuster

Conductor CEO Seth Besmertnik started a LinkedIn discussion about the future of AI SEO platforms, suggesting that the established companies will dominate and that 95 percent of the startups will disappear. Others argued that smaller companies will find their niche and that startups may be better positioned to serve user needs.

Besmertnik published his thoughts on why top platforms like Conductor, Semrush, and Ahrefs are better positioned to provide the tools users will need for AI chatbot and search visibility. He argued that the established companies have over a decade of experience crawling the web and scaling data pipelines, with which smaller organizations cannot compete.

Conductor’s CEO wrote:

“Over 30 new companies offering AI tracking solutions have popped up in the last few months. A few have raised some capital to get going. Here’s my take: The incumbents will win. 95% of these startups will flatline into the SaaS abyss.

…We work with 700+ enterprise brands and have 100+ engineers, PMs, and designers. They are all 100% focused on an AI search only future. …Collectively, our companies have hundreds of millions of ARR and maybe 1000x more engineering horsepower than all these companies combined.

Sure we have some tech debt and legacy. But our strengths crush these disadvantages…

…Most of the AEO/GEO startups will be either out of business or 1-3mm ARR lifestyle businesses in ~18 months. One or two will break through and become contenders. One or two of the largest SEO ‘incumbents’ will likely fall off the map…”

Is There Room For The “Lifestyle” Businesses?

Besmertnik’s remarks suggested that smaller tool companies earning one to three million dollars in annual recurring revenue, what he termed “lifestyle” businesses, would continue as viable companies but stood no chance of moving upward to become larger and more established enterprise-level platforms.

Rand Fishkin, cofounder of SparkToro, defended the smaller “lifestyle” businesses, saying that running one feels like cheating at business, happiness, and life.

He wrote:

“Nothing better than a $1-3M ARR “lifestyle” business.

…Let me tell you what I’m never going to do: serve Fortune 500s (nevermind 100s). The bureaucracy, hoops, and friction of those orgs is the least enjoyable, least rewarding, most avoid-at-all-costs thing in my life.”

Not to put words into Rand’s mouth but it seems that what he’s saying is that it’s absolutely worthwhile to scale a business to a point where there’s a work-life balance that makes sense for a business owner and their “lifestyle.”

Case For Startups

Not everyone agreed that established brands would successfully transition from SEO tools to AI search. Others argued that startups are not burdened by legacy SEO ideas and infrastructure, and are better positioned to create AI-native solutions that more closely follow how users interact with AI chatbots and search.

Daniel Rodriguez, cofounder of Beewhisper, suggested that the next generation of winners may not be “better Conductors,” but rather companies that start from a completely different paradigm based on how AI users interact with information. His point of view suggests that legacy advantages may not be foundations for building strong AI search tools, but rather are more like anchors, creating a drag on forward advancement.

He commented:

“You’re 100% right that the incumbents’ advantages in crawling, data processing, and enterprise relationships are immense.

The one question this raises for me is: Are those advantages optimized for the right problem? All those strengths are about analyzing the static web – pages, links, and keywords.

But the new user journey is happening in a dynamic, conversational layer on top of the web. It’s a fundamentally different type of data that requires a new kind of engine.

My bet is that the 1-2 startups that break through won’t be the ones trying to build a better Conductor. They’ll be the ones who were unburdened by legacy and built a native solution for understanding these new conversational journeys from day one.”

Venture Capital’s Role In The AI SEO Boom

Mike Mallazzo, who works on ads and agentic commerce at PayPal, questioned whether there’s a market to support multiple breakout startups and suggested that venture capital interest in AEO and GEO startups may not be rational. He believes that the market is there for modest, capital-efficient companies rather than fund-returning unicorns.

Mallazzo commented:

“I admire the hell out of you and SEMRush, Ahrefs, Moz, etc– but y’all are all a different breed imo– this is a space that is built for reasonably capital efficient, profitable, renegade pirate SaaS startups that don’t fit the Sand Hill hyper venture scale mold. Feels like some serious Silicon Valley naivete fueling this funding run….

Even if AI fully eats search, is the analytics layer going to be bigger than the one that formed in conventional SEO? Can more than 1-2 of these companies win big?”

New Kinds Of Search Behavior And Data?

Right now it feels like the industry is still figuring out what needs to be tracked and what matters for AI visibility. For example, brand mentions are emerging as an important metric, but are they really? Will brand mentions actually put customers into the ecommerce shopping cart?

And then there’s the reality of zero-click searches: AI search significantly wipes out the consideration stage of the customer’s purchasing journey, and the data from that stage isn’t there because it’s swallowed up in zero-click searches. So if you’re going to talk about tracking the user’s journey and optimizing for it, this is a piece of the data puzzle that needs to be solved.

Michael Bonfils, a 30-year search marketing veteran, raised these questions in a discussion about zero-click searches and how to survive them, saying:

“This is, you know, we have a funnel, we all know which is the awareness consideration phase and the whole center and then finally the purchase stage. The consideration stage is the critical side of our funnel. We’re not getting the data. How are we going to get the data?

So who who is going to provide that? Is Google going to eventually provide that? Do they? Would they provide that? How would they provide that?

But that’s very important information that I need because I need to know what that conversation is about. I need to know what two people are talking about that I’m talking about …because my entire content strategy in the center of my funnel depends on that greatly.”

There’s a real question about what type of data these companies are providing to fill the gaps. The established platforms were built for the static web, keyword data, and backlink graphs. But the emerging reality of AI search is personalized and queryless. So, as Michael Bonfils suggested, the buyer journeys may occur entirely within AI interfaces, bypassing traditional SERPs altogether, which is the bread and butter of the established SEO tool companies.

AI SEO Tool Companies: Where Your Data Will Come From Next

If the future of search is not about search results and the attendant search query volumes but a dynamic dialogue, the kinds of data that matter and the systems that can interpret them will change. Will startups that specialize in tracking and interpreting conversational interactions become the dominant SEO tools? Companies like Conductor have a track record of expertly pivoting in response to industry needs, so how it will all shake out remains to be seen.

Read the original post on LinkedIn by Conductor CEO, Seth Besmertnik.

Featured Image by Shutterstock/Gorodenkoff

Google’s Top 5 SEO Tools via @sejournal, @MattGSouthern

I’ve spent years working with Google’s SEO tools, and while there are countless paid options out there, Google’s free toolkit remains the foundation of my optimization workflow.

These tools show you exactly what Google considers important, and that offers invaluable insights you can’t get anywhere else.

Let me walk you through the five Google tools I use daily and why they’ve become indispensable for serious SEO work.

1. Lighthouse

Screenshot from Chrome DevTools, July 2025

When I first discovered Lighthouse tucked away in Chrome’s developer tools, it felt like finding a secret playbook from Google.

This tool has become my go-to for quick site audits, especially when clients come to me wondering why their perfectly designed website isn’t ranking.

Getting Started With Lighthouse

Accessing Lighthouse is surprisingly simple.

On any webpage, press F12 (Windows) or Command+Option+C (Mac) to open developer tools. You’ll find Lighthouse as one of the tabs. Alternatively, right-click any page, select “Inspect,” and navigate to the Lighthouse tab.

What makes Lighthouse special is its comprehensive approach. It evaluates five key areas: performance, progressive web app standards, best practices, accessibility, and SEO.

While accessibility might not seem directly SEO-related, I’ve learned that Google increasingly values sites that work well for all users.

Real-World Insights From The Community

The developer community has mixed feelings about Lighthouse, and I understand why.

As _listless noted, “Lighthouse is great because it helps you identify easy wins for performance and accessibility.”

However, CreativeTechGuyGames warned about the trap of chasing perfect scores: “There’s an important trade-off between performance and perceived performance.”

I’ve experienced this firsthand. One client insisted on achieving a perfect 100 score across all categories.

We spent weeks optimizing, only to find that some changes actually hurt user experience. The lesson? Use Lighthouse as a guide, not gospel.

Why Lighthouse Matters For SEO

The SEO section might seem basic as it checks things like meta tags, mobile usability, and crawling issues, but these fundamentals matter.

I’ve seen sites jump in rankings just by fixing the simple issues Lighthouse identifies. It validates crucial elements like:

  • Proper viewport configuration for mobile devices.
  • Title and meta description presence.
  • HTTP status codes.
  • Descriptive anchor text.
  • Hreflang implementation.
  • Canonical tags.
  • Mobile tap target sizing.

One frustrating aspect many developers mention is score inconsistency.

As one Redditor shared, “I ended up just re-running the analytics WITHOUT changing a thing and I got a performance score ranging from 33% to 90%.”

I’ve seen this too, which is why I always run multiple tests and focus on trends rather than individual scores.

Making The Most Of Lighthouse

My best advice? Use the “Opportunities” section for quick wins. Export your results as JSON to track improvements over time.

And remember what one developer wisely stated: “You can score 100 on accessibility and still ship an unusable [website].” The scores are indicators, not guarantees of quality.
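
That JSON export is easy to put to work. Below is a minimal sketch, assuming a report produced by the Lighthouse CLI (for example, `npx lighthouse https://example.com --output=json`) and the standard `categories.<id>.score` layout, which can vary between Lighthouse versions:

```python
def category_scores(report: dict) -> dict:
    """Pull the 0-100 category scores out of a Lighthouse JSON report.

    Assumes the standard report layout (categories.<id>.score on a 0-1
    scale); verify the field names against your Lighthouse version.
    Load each exported report first, e.g. with json.load().
    """
    return {
        cat_id: round(cat["score"] * 100)
        for cat_id, cat in report.get("categories", {}).items()
        if cat.get("score") is not None
    }

# Minimal illustration with a trimmed-down report:
sample = {"categories": {"seo": {"score": 0.92}, "performance": {"score": 0.61}}}
print(category_scores(sample))  # {'seo': 92, 'performance': 61}
```

Run this against a dated folder of exports and you get the trend line that a single run, with its score variance, can’t give you.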

2. PageSpeed Insights

Screenshot from pagespeed.web.dev, July 2025

PageSpeed Insights transformed from a nice-to-have tool to an essential one when Core Web Vitals became ranking considerations.

I check it regularly, especially after Google’s page experience update made site speed a confirmed ranking signal.

Understanding The Dual Nature Of PSI

What sets PageSpeed Insights apart is its combination of lab data (controlled test results) and field data (real user experiences from the Chrome User Experience Report).

This dual approach has saved me from optimization rabbit holes more times than I can count.

The field data is gold as it shows how real users experience your site over the past 28 days. I’ve had situations where lab scores looked terrible, but field data showed users were having a great experience.

This usually means the lab test conditions don’t match your actual user base.
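
You can inspect that lab-versus-field split programmatically. The sketch below assumes the documented shape of a PageSpeed Insights API v5 response (`lighthouseResult` for lab data, `loadingExperience` for 28-day CrUX field data); treat the field names as assumptions to verify against your own responses:

```python
def summarize_psi(response: dict) -> dict:
    """Separate lab and field data from a PageSpeed Insights API v5 response.

    Field names follow the documented v5 response shape; check them
    against an actual response before relying on this.
    """
    lab = response.get("lighthouseResult", {})
    field = response.get("loadingExperience", {})
    return {
        "lab_performance": lab.get("categories", {}).get("performance", {}).get("score"),
        "field_overall": field.get("overall_category"),  # FAST / AVERAGE / SLOW
    }

# The scenario described above: lab score looks poor, field data says
# real users are having a fine experience.
sample = {
    "lighthouseResult": {"categories": {"performance": {"score": 0.45}}},
    "loadingExperience": {"overall_category": "FAST"},
}
print(summarize_psi(sample))
```

When the two disagree like this, trust the field data and ask why the lab test conditions differ from your audience.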

Community Perspectives On PSI

The Reddit community has strong opinions about PageSpeed Insights.

NHRADeuce perfectly captured a common frustration: “The score you get from PageSpeed Insights has nothing to do with how fast your site loads.”

While it might sound harsh, there’s truth to it since the score is a simplified representation of complex metrics.

Practical Optimization Strategies

Through trial and error, I’ve developed a systematic approach to PSI optimization.

Arzishere’s strategy mirrors mine: “Added a caching plugin along with minifying HTML, CSS & JS (WP Rocket).” These foundational improvements often yield the biggest gains.

DOM size is another critical factor. As Fildernoot discovered, “I added some code that increased the DOM size by about 2000 elements and PageSpeed Insights wasn’t happy about that.” I now audit DOM complexity as part of my standard process.

Mobile optimization deserves special attention. A Redditor asked the right question: “How is your mobile score? Desktop is pretty easy with a decent theme and Litespeed hosting and LScaching plugin.”

In my experience, mobile scores are typically 20-30 points lower than desktop, and that’s where most of your users are.

The Diminishing Returns Reality

Here’s the hard truth about chasing perfect PSI scores: “You’re going to see diminishing returns as you invest more and more resources into this,” as E0nblue noted.

I tell clients to aim for “good” Core Web Vitals status rather than perfect scores. The jump from 50 to 80 is much easier and more impactful than 90 to 100.

3. Safe Browsing Test

Screenshot from transparencyreport.google.com/safe-browsing/search, July 2025

The Safe Browsing Test might seem like an odd inclusion in an SEO toolkit, but I learned its importance the hard way.

A client’s site got hacked, flagged by Safe Browsing, and disappeared from search results overnight. Their organic traffic dropped to zero in hours.

Understanding Safe Browsing’s Role

Google’s Safe Browsing protects users from dangerous websites by checking for malware, phishing attempts, and deceptive content.

As Lollygaggindovakiin explained, “It automatically scans files using both signatures of diverse types and uses machine learning.”

The tool lives in Google’s Transparency Report, and I check it monthly for all client sites. It shows when Google last scanned your site and any current security issues.

The integration with Search Console means you’ll get alerts if problems arise, but I prefer being proactive.

Community Concerns And Experiences

The Reddit community has highlighted some important considerations.

One concerning trend expressed by Nextdns is false positives: “Google is falsely flagging apple.com.akadns.net as malicious.” While rare, false flags can happen, which is why regular monitoring matters.

Privacy-conscious users raise valid concerns about data collection.

As Mera-beta noted, “Enhanced Safe Browsing will send content of pages directly to Google.” For SEO purposes, standard Safe Browsing protection is sufficient.

Why SEO Pros Should Care

When Safe Browsing flags your site, Google may:

  • Remove your pages from search results.
  • Display warning messages to users trying to visit.
  • Drastically reduce your click-through rates.
  • Impact your site’s trust signals.

I’ve helped several sites recover from security flags. The process typically takes one to two weeks after cleaning the infection and requesting a review.

That’s potentially two weeks of lost traffic and revenue, so prevention is infinitely better than cure.

Best Practices For Safe Browsing

My security checklist includes:

  • Weekly automated scans using the Safe Browsing API for multiple sites.
  • Immediate investigation of any Search Console security warnings.
  • Regular audits of third-party scripts and widgets.
  • Monitoring of user-generated content areas.
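
The first item on that checklist can be scripted against the Safe Browsing Lookup API. The sketch below builds a v4 `threatMatches:find` request body following Google’s documented schema; the client ID is a placeholder you choose yourself, and you’d POST the body as JSON with your own API key to the `threatMatches:find` endpoint:

```python
import json

def lookup_request(urls, client_id="my-seo-monitor"):
    """Build a request body for the Safe Browsing Lookup API (v4).

    Field names follow Google's documented v4 schema; client_id is a
    placeholder. POST this, with your API key, to
    https://safebrowsing.googleapis.com/v4/threatMatches:find
    """
    return {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = lookup_request(["https://example.com/"])
print(json.dumps(body, indent=2))
# An empty JSON object in the API response means no matches,
# which is exactly what you want to see every week.
```
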

4. Google Trends

Screenshot from Google Trends, July 2025

Google Trends has evolved from a curiosity tool to a strategic weapon in my SEO arsenal.

With updates now happening every 10 minutes and AI-powered trend detection, it’s become indispensable for content strategy.

Beyond Basic Trend Watching

What many SEO pros miss is that Trends isn’t just about seeing what’s popular. I use it to:

  • Validate content ideas before investing resources.
  • Identify seasonal patterns for planning.
  • Spot declining topics to avoid.
  • Find regional variations for local SEO.
  • Compare brand performance against competitors.

Community Insights On Trends

The Reddit community offers balanced perspectives on Google Trends.

Maltelandwehr highlighted its unique value: “Some of the data in Google Trends is really unique. Even SEOs with monthly 7-figure budgets will use Google Trends for certain questions.”

However, limitations exist. As Dangerroo_2 clarified, “Trends does not track popularity, but search demand.”

This distinction matters since a declining trend doesn’t always mean fewer total searches, just decreasing relative interest.

For niche topics, frustrations mount. iBullyDummies complained, “Google has absolutely ruined Google Trends and no longer evaluates niche topics.” I’ve found this particularly true for B2B or technical terms with lower search volumes.

Advanced Trends Strategies

My favorite Trends hacks include:

  • The Comparison Method: I always compare terms against each other rather than viewing them in isolation. This reveals relative opportunity better than absolute numbers.
  • Category Filtering: This prevents confusion between similar terms. The classic example is “jaguar”: without filtering, you’re mixing car searches with animal searches.
  • Rising Trends Mining: The “Rising” section often reveals opportunities before they become competitive. I’ve launched successful content campaigns by spotting trends here early.
  • Geographic Arbitrage: Finding topics trending in one region before they spread helps you prepare content in advance.

Addressing The Accuracy Debate

Some prefer paid tools, as Contentwritenow stated: “I prefer using a paid tool like BuzzSumo or Semrush for trends and content ideas simply because I don’t trust Google Trends.”

While I use these tools too, they pull from different data sources. Google Trends shows actual Google search behavior, which is invaluable for SEO.

The relative nature of Trends data confuses many, like Sneakysneakums. As explained by Google News Initiative:

“A line trending downward means that a search term’s relative popularity is decreasing. But that doesn’t necessarily mean the total number of searches for that term is decreasing.”

I always combine Trends data with absolute volume estimates from other tools.
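
The relative-versus-absolute distinction is easy to demonstrate. This toy sketch (my own illustration, not Google’s actual pipeline) mimics how Trends scales a term’s share of total searches so the peak equals 100, showing how a “declining” line can hide growing absolute volume:

```python
def to_trends_scale(counts):
    """Rescale a series the way Trends presents it: peak interest = 100."""
    peak = max(counts)
    return [round(100 * c / peak) for c in counts]

# Absolute searches for a term grow every month...
term = [1000, 1200, 1400]
# ...but total search volume grows faster, so the term's *share* shrinks:
total = [10_000, 15_000, 28_000]
share = [t / tot for t, tot in zip(term, total)]
print(to_trends_scale(share))  # a falling line despite rising absolute counts
```

This is why a downward Trends line should trigger a check against absolute volume estimates before you write a topic off.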

→ Read more: Google Shows 7 Hidden Features In Google Trends

5. Google Search Console

Screenshot from Google Search Console, July 2025

No list of Google SEO tools would be complete without Search Console.

If the other tools are your scouts, Search Console is your command center, showing exactly how Google sees and ranks your site.

Why Search Console Is Irreplaceable

Search Console provides data you literally cannot get anywhere else. As Peepeepoopoobutler emphasized, “GSC is the accurate real thing. But it doesn’t really give suggestions like ads does.”

That’s exactly right. While it won’t hold your hand with optimization suggestions, the raw data it provides is gold.

The tool offers:

  • Actual search queries driving traffic (not just keywords you think matter).
  • True click-through rates by position.
  • Index coverage issues before they tank your traffic.
  • Core Web Vitals data for all pages.
  • Manual actions and security issues that could devastate rankings.

I check Search Console daily, and I’m not alone.

Successful site owner ImportantDoubt6434 shared, “Yes monitoring GSC is part of how I got my website to the front page.”

The Performance report alone has helped me identify countless optimization opportunities.

Setting Up For Success

Getting started with Search Console is refreshingly straightforward.

As Anotherbozo noted, “You don’t need to verify each individual page but maintain the original verification method.”

I recommend domain-level verification for comprehensive access: you can “verify ownership by site or by domain (second level domain),” but domain verification gives you data across all subdomains and protocols.

The verification process takes minutes, but the insights last forever. I’ve seen clients discover they were ranking for valuable keywords they never knew about, simply because they finally looked at their Search Console data.

Hidden Powers Of Search Console

What many SEO pros miss are the advanced capabilities lurking in Search Console.

Seosavvy revealed a powerful strategy: “Google search console for keyword research is super powerful.” I couldn’t agree more.

By filtering for queries with high impressions but low click-through rates, you can find content gaps and optimization opportunities your competitors miss.

The structured data reports have saved me countless hours. CasperWink mentioned working with schemas, “I have already created the schema with a review and aggregateRating along with confirming in Google’s Rich Results Test.”

Search Console will tell you if Google can actually read and understand your structured data in the wild, something testing tools can’t guarantee.

Sitemap management is another underutilized feature. Yetisteve correctly stated, “Sitemaps are essential, they are used to give Google good signals about the structure of the site.”

I’ve diagnosed indexing issues just by comparing submitted versus indexed pages in the sitemap report.

The Reality Check: Limitations To Understand

Here’s where the community feedback gets really valuable.

Experienced user SimonaRed warned, “GSC only shows around 50% of the reality.” This is crucial to understand since Google samples and anonymizes data for privacy. You’re seeing a representative sample, not every single query.

Some find the interface challenging. As UncleFeather6000 admitted, “I feel like I don’t really understand how to use Google’s Search Console.”

I get it because the tool has evolved significantly, and the learning curve can be steep. My advice? Start with the Performance report and gradually explore other sections.

Recent changes have frustrated users, too. “Google has officially removed Google Analytics data from the Search Console Insights tool,” Shakti-basan noted.

This integration loss means more manual work correlating data between tools, but the core Search Console data remains invaluable.

Making Search Console Work Harder

Through years of daily use, I’ve developed strategies to maximize Search Console’s value:

  • The Position 11-20 Gold Mine: Filter for keywords ranking on page two. These are your easiest wins since Google already thinks you’re relevant. You just need a push to page one.
  • Click-Through Rate Optimization: Sort by impressions, then look for low CTR. These queries show demand but suggest your titles and descriptions need work.
  • Query Matching: Compare what you think you rank for versus what Search Console shows. The gaps often reveal content opportunities or user intent mismatches.
  • Page-Level Analysis: Don’t just look at site-wide metrics. Individual page performance often reveals technical issues or content problems.
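
The first two strategies are easy to automate against an exported Performance report. The sketch below assumes rows with `query`, `clicks`, `impressions`, and `position` fields (names chosen to match the export, but verify them against your own file); the thresholds are arbitrary starting points to tune:

```python
def page_two_queries(rows):
    """Queries ranking in positions 11-20: the 'page two gold mine'."""
    return [r for r in rows if 11 <= r["position"] <= 20]

def low_ctr_opportunities(rows, min_impressions=1000, max_ctr=0.02):
    """High-impression queries with weak CTR: titles/descriptions need work."""
    return [
        r for r in rows
        if r["impressions"] >= min_impressions
        and r["clicks"] / r["impressions"] <= max_ctr
    ]

rows = [
    {"query": "blue widgets", "clicks": 5, "impressions": 4000, "position": 14.2},
    {"query": "widget store", "clicks": 300, "impressions": 5000, "position": 2.1},
]
print([r["query"] for r in page_two_queries(rows)])       # ['blue widgets']
print([r["query"] for r in low_ctr_opportunities(rows)])  # ['blue widgets']
```

The same query often shows up in both lists, which is the strongest possible signal to work on that page first.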

Integrating Search Console With Other Tools

The magic happens when you combine Search Console data with the other tools:

  • Use Trends to validate whether declining traffic is due to ranking drops or decreased search interest.
  • Cross-reference PageSpeed Insights recommendations with pages showing Core Web Vitals issues in Search Console.
  • Verify Lighthouse mobile-friendliness findings against Mobile Usability reports.
  • Monitor Safe Browsing status directly in the Security Issues section.

Mr_boogieman asked rhetorically, “How are you tracking results without looking at GSC?” It’s a fair question.

Without Search Console, you’re flying blind, relying on third-party estimations instead of data straight from Google.

Bringing It All Together

These five tools form the foundation of effective SEO work. They’re free, they’re official, and they show you exactly what Google values.

While specialized SEO platforms offer additional features, mastering these Google tools ensures your optimization efforts align with what actually matters for rankings.

My workflow typically starts with Search Console to identify opportunities, uses Trends to validate content ideas, employs Lighthouse and PageSpeed Insights to optimize technical performance, and includes Safe Browsing checks to protect hard-won rankings.

Remember, these tools reflect Google’s current priorities. As search algorithms evolve, so do these tools. Staying current with their features and understanding their insights keeps your SEO strategy aligned with Google’s direction.

The key is using them together, understanding their limitations, and remembering that tools are only as good as the strategist wielding them. Start with these five, master their insights, and you’ll have a solid foundation for SEO success.

Featured Image: Roman Samborskyi/Shutterstock

Google’s June 2025 Update Analysis: What Just Happened? via @sejournal, @martinibuster

Google’s June 2025 Core Update just finished. What’s notable is that while some say it was a big update, it didn’t feel disruptive, indicating that the changes may have been more subtle than game-changing. Here are some clues that may explain what happened with this update.

Two Search Ranking Related Breakthroughs

Although a lot of people are saying that the June 2025 Update was related to MUVERA, that’s not really the whole story. There were two notable backend announcements over the past few weeks, MUVERA and Google’s Graph Foundation Model.

Google MUVERA

MUVERA (Multi-Vector Retrieval via Fixed Dimensional Encodings, or FDEs) is a retrieval algorithm that makes retrieving web pages more accurate and more efficient. The notable part for SEO is that it is able to retrieve fewer candidate pages for ranking, leaving the less relevant pages behind and promoting only the more precisely relevant pages.

This gives Google the precision of multi-vector retrieval without the drawbacks of traditional multi-vector systems.

Google’s MUVERA announcement explains the key improvements:

“Improved recall: MUVERA outperforms the single-vector heuristic, a common approach used in multi-vector retrieval (which PLAID also employs), achieving better recall while retrieving significantly fewer candidate documents… For instance, FDE’s retrieve 5–20x fewer candidates to achieve a fixed recall.

Moreover, we found that MUVERA’s FDEs can be effectively compressed using product quantization, reducing memory footprint by 32x with minimal impact on retrieval quality.

These results highlight MUVERA’s potential to significantly accelerate multi-vector retrieval, making it more practical for real-world applications.

…By reducing multi-vector search to single-vector MIPS, MUVERA leverages existing optimized search techniques and achieves state-of-the-art performance with significantly improved efficiency.”

Google’s Graph Foundation Model

A graph foundation model (GFM) is a type of AI model designed to generalize across different graph structures and datasets, adaptable in a similar way to how large language models can generalize across domains they weren’t initially trained on.

Google’s GFM classifies nodes and edges, which could plausibly represent documents, links, and users, supporting tasks such as spam detection, product recommendations, and any other kind of classification.

This is something very new, published on July 10th, but it has already been tested for spam detection in ads. It is a breakthrough in graph machine learning and in the development of AI models that can generalize across different graph structures and tasks.

It supersedes the limitations of Graph Neural Networks (GNNs), which are tethered to the graph they were trained on. Graph foundation models, like LLMs, aren’t limited to what they were trained on, which makes them versatile for handling new or unseen graph structures and domains.

Google’s announcement of GFM says that it improves zero-shot and few-shot learning, meaning it can make accurate predictions on different types of graphs without additional task-specific training (zero-shot), even when only a small number of labeled examples are available (few-shot).

Google’s GFM announcement reported these results:

“Operating at Google scale means processing graphs of billions of nodes and edges where our JAX environment and scalable TPU infrastructure particularly shines. Such data volumes are amenable for training generalist models, so we probed our GFM on several internal classification tasks like spam detection in ads, which involves dozens of large and connected relational tables. Typical tabular baselines, albeit scalable, do not consider connections between rows of different tables, and therefore miss context that might be useful for accurate predictions. Our experiments vividly demonstrate that gap.

We observe a significant performance boost compared to the best tuned single-table baselines. Depending on the downstream task, GFM brings 3x – 40x gains in average precision, which indicates that the graph structure in relational tables provides a crucial signal to be leveraged by ML models.”

What Changed?

It’s not unreasonable to speculate that integrating both MUVERA and GFM could enable Google’s ranking systems to more precisely rank relevant content by improving retrieval (MUVERA) and mapping relationships between links or content to better identify patterns associated with trustworthiness and authority (GFM).

Together, MUVERA and GFM could enable Google’s ranking systems to more precisely surface relevant content that searchers find satisfying.

Google’s official announcement said this:

“This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”

This particular update did not seem to be accompanied by widespread reports of massive changes. This update may fit into what Google’s Danny Sullivan was talking about at Search Central Live New York, where he said they would be making changes to Google’s algorithm to surface a greater variety of high-quality content.

Search marketer Glenn Gabe tweeted that some sites affected by the “Helpful Content Update,” also known as HCU, had surged back in the rankings, while other sites worsened.

Although he said that this was a very big update, the response to his tweets was muted, not the kind of response that happens when there’s a widespread disruption. I think it’s fair to say that, although Glenn Gabe’s data shows it was a big update, it may not have been a disruptive one.

So what changed? I think, I speculate, that it was a widespread change that improved Google’s ability to better surface relevant content, helped by better retrieval and an improved ability to interpret patterns of trustworthiness and authoritativeness, as well as to better identify low-quality sites.

Read More:

Google MUVERA

Google’s Graph Foundation Model

Google’s June 2025 Update Is Over

Featured Image by Shutterstock/Kues

OpenAI ChatGPT Agent Marks A Turning Point For Businesses And SEO via @sejournal, @martinibuster

OpenAI announced a new way for users to interact with the web to get things done in their personal and professional lives. ChatGPT agent is said to be able to automate planning a wedding, booking an entire vacation, updating a calendar, and converting screenshots into editable presentations. The impact on publishers, ecommerce stores, and SEOs cannot be overstated. This is what you should know and how to prepare for what could be one of the most consequential changes to online interactions since the invention of the browser.

OpenAI ChatGPT Agent Overview

OpenAI ChatGPT agent is built on three core parts: OpenAI’s Operator and Deep Research (two autonomous AI agents) plus ChatGPT’s natural language capabilities.

  1. Operator can browse the web and interact with websites to complete tasks.
  2. Deep Research is designed for multi-step research that is able to combine information from different resources and generate a report.
  3. ChatGPT agent requests permission before taking significant actions and can be interrupted and halted at any point.

ChatGPT Agent Capabilities

ChatGPT agent has access to multiple tools to help it complete tasks:

  • A visual browser for interacting with web pages through the on-page interface.
  • A text-based browser for answering reasoning-based queries.
  • A terminal for executing actions through a command-line interface.
  • Connectors, which are authorized user-friendly integrations (using APIs) that enable ChatGPT agent to interact with third-party apps.

Connectors are like bridges between ChatGPT agent and your authorized apps. When users ask ChatGPT agent to complete a task, the connectors enable it to retrieve the needed information and complete tasks. Direct API access via connectors enables it to interact with and extract information from connected apps.

ChatGPT agent can open a page with a browser (either text or visual), download a file, perform an action on it, and then view the results in the visual browser. ChatGPT connectors enable it to connect with external apps like Gmail or a calendar for answering questions and completing tasks.

ChatGPT Agent Automation of Web-Based Tasks

ChatGPT agent is able to complete entire complex tasks and summarize the results.

Here’s how OpenAI describes it:

“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.

You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.”

ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.

….ChatGPT agent can access your connectors, allowing it to integrate with your workflows and access relevant, actionable information. Once authenticated, these connectors allow ChatGPT to see information and do things like summarize your inbox for the day or find time slots you’re available for a meeting—to take action on these sites, however, you’ll still be prompted to log in by taking over the browser.

Additionally, you can schedule completed tasks to recur automatically, such as generating a weekly metrics report every Monday morning.”

What Does ChatGPT Agent Mean For SEO?

ChatGPT agent raises the stakes for publishers, online businesses, and SEOs: making websites friendly to agentic AI becomes increasingly important as more users become acquainted with the tool and begin sharing how it helps them in their daily lives and at work.

A recent study about AI agents found that OpenAI’s Operator responded well to structured on-page content. Structured on-page content enables AI agents to accurately retrieve specific information relevant to their tasks, perform actions (like filling in a form), and disambiguate the web page (i.e., make it easily understood). I usually refrain from using jargon, but disambiguation is a word all SEOs need to understand because agentic AI makes it more important than it has ever been.

Examples Of On-Page Structured Data

  • Headings
  • Tables
  • Forms with labeled input fields
  • Product listings with consistent fields such as price, availability, and the product’s name or label in a title.
  • Authors, dates, and headlines
  • Menus and filters in ecommerce web pages
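To make the list above concrete, here is a fragment of a hypothetical product page using those structured elements (all names, paths, and values are illustrative only):

```html
<h1>Example Widget</h1>
<p>Price: $19.99 (In stock)</p>
<form action="/cart" method="post">
  <!-- the label/for pairing lets an agent identify the quantity field unambiguously -->
  <label for="qty">Quantity</label>
  <input id="qty" name="qty" type="number" value="1" min="1">
  <button type="submit">Add to cart</button>
</form>
```

The point is not any specific tag but consistency: an agent parsing this page can locate the product name, price, availability, and the form field to act on without guessing.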

Takeaways

  • ChatGPT agent is a milestone in how users interact with the web, capable of completing multi-step tasks like planning trips, analyzing competitors, and generating reports or presentations.
  • OpenAI’s ChatGPT agent combines autonomous agents (Operator and Deep Research) with ChatGPT’s natural language interface to automate personal and professional workflows.
  • Connectors extend Agent’s capabilities by providing secure API-based access to third-party apps like calendars and email, enabling task execution across platforms.
  • Agent can interact directly with web pages, forms, and files, using tools like a visual browser, code execution terminal, and file handling system.
  • Agentic AI responds well to structured, disambiguated web content, making SEO and publisher alignment with structured on-page elements more important than ever.
  • Structured data improves an AI agent’s ability to retrieve and act on website information. Sites that are optimized for AI agents will gain the most, as more users depend on agent-driven automation to complete online tasks.

OpenAI’s ChatGPT agent is an automation system that can independently complete complex online tasks, such as booking trips, analyzing competitors, or summarizing emails, by using tools like browsers, terminals, and app connectors. It interacts directly with web pages and connected apps, performing actions that previously required human input.

For publishers, ecommerce sites, and SEOs, ChatGPT agent makes structured, easily interpreted on-page content critical because websites must now accommodate AI agents that interact with and act on their data in real time.

Read More About Optimizing For Agentic AI

Marketing To AI Agents Is The Future – Research Shows Why

Featured Image by Shutterstock/All kind of people

Google’s John Mueller Clarifies How To Remove Pages From Search via @sejournal, @MattGSouthern

In a recent installment of SEO Office Hours, Google’s John Mueller offered guidance on how to keep unwanted pages out of search results and addressed a common source of confusion around sitelinks.

The discussion began with a user question: how can you remove a specific subpage from appearing in Google Search, even if other websites still link to it?

Sitelinks vs. Regular Listings

Mueller noted he wasn’t “100% sure” he understood the question, but assumed it referred either to sitelinks or standard listings. He explained that sitelinks, those extra links to subpages beneath a main result, are automatically generated based on what’s indexed for your site.

Mueller said:

“There’s no way for you to manually say I want this page indexed. I just don’t want it shown as a sitelink.”

In other words, you can’t selectively prevent a page from being a sitelink while keeping it in the index. If you want to make sure a page never appears in any form in search, a more direct approach is required.

How To Deindex A Page

Mueller outlined a two-step process for removing pages from Google Search results using a noindex directive:

  1. Allow crawling: First, make sure Google can access the page. If it’s blocked by robots.txt, the noindex tag won’t be seen and won’t work.
  2. Apply a noindex tag: Once crawlable, add a noindex meta tag to the page to instruct Google not to include it in search results.

This method works even if other websites continue linking to the page.
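As a sketch of those two steps: the noindex rule is a meta tag in the page’s &lt;head&gt;, and it only takes effect if robots.txt doesn’t block crawling of that URL (the path below is hypothetical):

```html
<!-- Step 1: make sure robots.txt does NOT block the URL with a rule like:
       User-agent: *
       Disallow: /old-page/
     If the URL is blocked, Google never fetches the page and never sees the tag. -->

<!-- Step 2: add the noindex meta tag inside the page's <head> -->
<meta name="robots" content="noindex">
```

Once Google recrawls the page and processes the tag, the URL drops out of search results even if other sites still link to it.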

Removing Pages Quickly

If you need faster action, Mueller suggested using Google Search Console’s URL Removal Tool, which allows site owners to request temporary removal.

“It works very quickly” for verified site owners, Mueller confirmed.

For pages on sites you don’t control, there’s also a public version of the removal tool, though Mueller noted it “takes a little bit longer” since Google must verify that the content has actually been taken down.

Hear Mueller’s full response in the video below:

What This Means For You

If you’re trying to prevent a specific page from appearing in Google results:

  • You can’t control sitelinks manually. Google’s algorithm handles them automatically.
  • Use noindex to remove content. Just make sure the page isn’t blocked from crawling.
  • Act quickly when needed. The URL Removal Tool is your fastest option, especially if you’re a verified site owner.

Choosing the right method, whether it’s noindex or a removal request, can help you manage visibility more effectively.

Brand Bias For Visibility In Search & LLMs: A Conversation With Stephen Kenwright via @sejournal, @theshelleywalsh

I recently saw Stephen Kenwright speak at a small Sistrix event in Leeds about strategies for exploiting Google’s brand bias, and a lot of what he said still feels as fresh today as it did over a decade ago when he first started promoting this theory.

Right now, the search experience is changing more than in the last 25 years, and many SEOs are citing that brand is the critical focus for survival.

Some might say (Stephen included) that this is what SEO should always have been about.

I spoke to Stephen, the founder of Rise at Seven, about his talk and about how his theories and strategies could translate to a world of large language model (LLM) optimization alongside a fractured search journey.

You can watch the full interview with Stephen on IMHO below, or continue reading the article summary.

Google’s Brand Bias Is Foundational

Brand bias isn’t a recent development. Stephen was already writing about it in 2016 during his time at Branded3. What underlines this bias is the trust users have in brands.

“Google wants to give a good experience to its users. That means surfacing the results they expect to see. Often, that’s a brand they already know,” Stephen explained.

When users search, they’re often subconsciously looking to reconnect with a mental shortcut that brands provide. It’s not about discovery; it’s about recognition.

When brands invest in traditional marketing channels, they influence user behavior in ways that create cascading effects across digital platforms.

Television advertising, for example, makes viewers significantly more likely to click on branded results even when searching for generic terms.

Traditional Marketing Directly Influences Search Behavior

At his talk in Leeds, Stephen referenced research that demonstrates television advertising creates measurable impacts on search behavior, with viewers 33% more likely to click on advertised brands in search results.

“People are about a third more likely to click your result after seeing a TV ad, and they convert better, too,” Stephen said.

When users encounter brands through traditional marketing channels, they develop mental associations that influence their subsequent search behavior. These behavioral patterns then signal to Google that certain brands provide better user experiences.

“Having the trust from the user comes from brand building activity. It doesn’t come from having an exact match domain that happens to rank first for a keyword,” Stephen emphasized. “That’s just not how the real world works.”

Investment In Brand Building Gains More Buy-In From C-Suite

Even though this bias has been evident for so long, Stephen highlighted a disconnect from brand-building activities within the industry.

“Every other discipline from PR to the marketing manager through to the social media team, literally everyone else, including the C-suite is interested in brand in some capacity and historically SEOs have been the exception,” Stephen explained.

This separation has created missed opportunities for SEOs to access larger marketing budgets and gain executive support for their initiatives.

By shifting focus toward brand-building activities that impact search visibility, they can better align with broader marketing objectives.

“Just by switching that mindset and asking, ‘What’s the impact on brand of our SEO activity?’ we get more buy-in, bigger budgets, and better results,” he said.

Make A Conscious Decision About Which Search Engine To Optimize For

While Google’s dominance remains statistically intact, user behavior shows that the search journey has always been fractured.

Stephen cited that half of UK adults use Bing monthly. A quarter is on Quora. Pinterest and Reddit are seeing massive engagement, especially with younger users. Nearly everyone uses YouTube, and they spend significantly more time on it than on Google.

Also, specialized search engines like Autotrader for used cars and Amazon for ecommerce have captured significant market share in their respective categories.

This fragmentation means that conscious decisions about platform optimization become increasingly important. Different platforms serve different demographics and purposes, requiring strategic choices about where to invest optimization efforts.

I asked Stephen if he thought Google’s dominance was under threat, or if it would remain part of a fractured search journey. He thought Google would remain relevant for at least half a decade to come.

“I don’t see Google going anywhere. And I also don’t see the massive difference in LLM optimization. So most of the things that you would be doing for Google now … are broadly marketing things anyway and broadly impact LLM optimization.”

LLM Optimization Could Be A Return To Traditional Marketing

Looking toward AI-driven search platforms, Stephen believes the same brand-building tactics that work for Google will prove effective across LLM platforms. These new platforms don’t necessarily demand new rules; they reinforce old ones.

“What works in Google now, broadly speaking, is good marketing. That also applies to LLMs,” he said.

While we’re still learning how LLMs surface content and determine authority, early indicators suggest trust signals, brand presence, and real-world engagement all play pivotal roles.

The key insight is that LLM optimization doesn’t require entirely new approaches but rather a return to fundamental marketing principles focused on audience needs and brand trust.

Television Advertising Creates Significant Impact

I asked Stephen what he would do if he were to launch a new brand and how he would quickly gain traction.

In an interesting twist for someone who has worked in the SEO industry for so long, he cited TV as his primary focus.

“I’d build a transactional website and spend millions on TV [advertising]. If I did more [marketing], I’d add PR,” Stephen told me.

This recommendation reflects his belief that traditional marketing channels create a significant impact.

He believes the combination of a functional ecommerce website with substantial television advertising investment, supplemented by PR activities, provides the foundation for rapid brand recognition and search visibility.

Before We Ruined The Internet

To me, it feels like we are going full circle and back to the days prior to the introduction of “new media” in the early 90s, when TV advertising was dominant and offline advertising was heavily influential.

“It’s like we’re going back to before we ruined the internet,” Stephen joked.

In reality, we’re circling back to what always worked: building real brands that people trust, remember, and seek out. The future requires classical marketing principles that prioritize audience understanding and brand building over technical optimization tactics.

This shift benefits the entire marketing industry by encouraging more integrated approaches that consider the complete customer journey rather than isolated technical optimizations.

Success in both search and LLM platforms increasingly depends on building genuine brand recognition and trust through consistent, audience-focused marketing activities across multiple channels.

Whether it’s Google, Bing, an LLM, or something we haven’t seen yet, brand is the one constant that wins.

Thank you to Stephen Kenwright for offering his insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

Google Answers Question About Structured Data And Logged Out Users via @sejournal, @martinibuster

Someone asked if it’s okay to show different content to logged-out users than to logged-in users and to Google via structured data. John Mueller’s answer was unequivocal.

This is the question that was asked:

“Will this markup work for products in a unauthenticated view in where the price is not available to users and they will need to login (authenticate) to view the pricing information on their end? Let me know your thoughts.”

John Mueller answered:

“If I understand your use-case, then no. If a price is only available to users after authentication, then showing a price to search engines (logged out) would not be appropriate. The markup should match what’s visible on the page. If there’s no price shown, there should be no price markup.”

What’s The Problem With That Structured Data?

The price is visible to logged-in users, so technically the content (in this case, the product price) is available to those users. It’s a good question because a case can be made that the content shown to Google is available behind a login, somewhat like content behind a paywall.

But that’s not good enough for Google, and it isn’t really comparable to a paywall; those are two different things. Google judges what “on the page” means based on what logged-out users will see on the page.

Google’s guideline about the structured data matching what’s on the page is unambiguous:

“Don’t mark up content that is not visible to readers of the page.

…Your structured data must be a true representation of the page content.”
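Applied to the product question above, a minimal sketch of markup that follows this guideline: because logged-out visitors see no price, the JSON-LD carries no offers or price property at all (the product details are hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "The description shown on the page to all visitors."
}
</script>
```

If the price were visible to everyone on the page, an "offers" object with the price could be added back so the markup again matches what readers see.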

This is a question that gets asked fairly frequently on social media and in forums, so it’s good to go over it for those who might not know yet.

Read More

Confirmed CWV Reporting Glitch In Google Search Console

Google’s New Graph Foundation Model Improves Precision By Up To 40X

Featured Image by Shutterstock/ViDI Studio

Your Reviews Are Ranking You (Or Not): How to Stay Visible in Google’s AI Era

This post was sponsored by GatherUp. The opinions expressed in this article are the sponsor’s own.

If your business has a great local, word-of-mouth reputation but very few online reviews, does it even exist?

That’s the existential riddle facing local businesses and agencies in 2025.

With Google’s AI Overviews (AIOs) now reshaping the search experience, visibility isn’t just about being “the best.”

It’s about being part of the summary.

And reviews? They’re no longer just trust signals. They’re ranking signals.

This article breaks down what’s changing, what’s working, and how agencies can keep their clients visible across both traditional local search and Google’s evolving AI layer.

Reviews Are Now A Gateway To Search Inclusion

Reviews have long been seen as conversion tools, helping users decide between businesses they’ve already discovered. But that role is evolving.

In the era of Google’s AI Overviews (AIOs), reviews are increasingly acting as discovery signals, helping determine which businesses get included in the first place.

GatherUp’s 2024 Online Reputation Benchmark Report shows that businesses with consistent, multi-channel review strategies, especially those generating both first- and third-party reviews, saw stronger reputation signals across volume, recency, and engagement. These are the exact kinds of signals that Google’s systems now appear to prioritize in AI-generated results.

That observation is reinforced by recent industry research and leaked Google documentation, which suggest that review characteristics like click-throughs, content depth, and freshness contribute to both local pack visibility and AIO inclusion.

In other words, the businesses getting summarized at the top of the SERP aren’t just highly rated. They’re actively reviewed, broadly cited, and seen as credible across sources Google trusts.

Recency Is A Signal. “Relevance” Is Google’s Shortcut.

More than two-thirds of consumers say they prioritize recent reviews when evaluating a business. But Google doesn’t necessarily show them first.

Instead, Google’s “Most Relevant” filter may prioritize older reviews that match query terms, even if they no longer reflect the current customer experience.

That’s why it’s critical for businesses to maintain steady review velocity. A flood of reviews in January followed by silence for six months won’t cut it. The AI layer, and the human reader, needs signals that say “this business is active and trustworthy right now.”

For agencies, this presents an opportunity to shift client mindset from static review goals to ongoing review strategies.

Star Ratings Still Matter, But Mostly As A Decision Shortcut

During our recent webinar with Search Engine Journal, we explored how consumers are using star ratings to disqualify options, not differentiate them.

Research shows:

  • 73% of consumers won’t consider businesses with fewer than 4 stars
  • But 69% are still open to doing business with brands that fall short of a perfect 5.0, so long as the reviews are recent and authentic

In other words, people are looking for a “safe” choice, not a flawless one.

A few solid 4-star reviews with real detail from the past week often carry more weight than a dozen perfect ratings from 2021.

Agencies should help clients understand this nuance, especially those who are hesitant to request reviews out of fear of imperfection.

First-Party & Third-Party Reviews: Both Are Necessary

AI Overviews aggregate information from across the web, including structured data from your own website and unstructured commentary from others.

  • First-party reviews: These are collected and hosted directly on the business’s website. They can be marked up with schema, giving Google structured, machine-readable content to use in summaries and answer boxes.
  • Third-party reviews: These appear on platforms like Google, Yelp, Facebook, TripAdvisor, and Reddit. They’re often seen as more objective and are more frequently cited in AI Overviews.

Businesses that show up consistently across both types are more likely to be included in AIOs, and appear trustworthy to users.

GatherUp supports multi-source review generation, schema markup for first-party feedback, and rotating requests across platforms. This makes it easier for agencies to build a review presence that supports both local SEO and AIO visibility.
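For illustration, first-party reviews hosted on a business’s own site can be exposed as machine-readable content with schema.org markup along these lines (the business name, rating, and review values are hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bakery",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.6,
    "reviewCount": 128
  },
  "review": [
    {
      "@type": "Review",
      "author": { "@type": "Person", "name": "A. Customer" },
      "datePublished": "2025-07-01",
      "reviewRating": { "@type": "Rating", "ratingValue": 5 },
      "reviewBody": "Great service and a quick turnaround."
    }
  ]
}
</script>
```

As with all structured data, the marked-up reviews must match what visitors actually see on the page.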

AIOs Pull From More Than Just Google Reviews

According to recent data from Whitespark, over 60% of citations in AI Overviews come from non-Google sources. This includes platforms like:

  • Reddit.
  • TripAdvisor.
  • Yelp.
  • Local blogs and industry-specific directories.

If your client’s reviews live only on Google, they risk being overlooked entirely.

Google’s AI is scanning for what it deems “experience-based” content: unfiltered, authentic commentary from real people. And it prefers to cross-reference multiple sources to confirm credibility.

Agencies should encourage clients to broaden their review footprint and seek mentions in trusted third-party spaces. Dynamic review flows, QR codes, and conditional links can help diversify requests without overburdening the customer.

Responses Influence Visibility & Build Trust

Review responses are no longer just a nice gesture. They’re part of the algorithmic picture.

GatherUp’s benchmark research shows:

  • 92% of consumers say responding to reviews is now part of basic customer service.
  • 73% will give a business a second chance if their complaint receives a thoughtful reply.

But there’s also a technical upside. When reviews are clicked, read, and expanded, they generate engagement signals that may impact local rankings. And if a business’s reply includes resolution details or helpful context, it increases the content depth of that listing.

For agencies juggling multiple clients, automation helps. GatherUp offers AI-powered suggested responses that retain brand tone and ensure timely replies, without sounding robotic.

How Agencies Can Make AIO Part Of Their Core Strategy

Google’s AI systems are designed to answer user questions directly, often without requiring a click. That means review content is increasingly shaping brand narratives within the SERP.

To adapt, agencies should align client visibility efforts across both search formats:

For Local Pack Optimization

  • Keep Google Business Profile listings fully updated (photos, categories, Q&A).
  • Build and maintain steady review velocity using email, SMS, and in-person requests.
  • Respond to reviews regularly, especially nuanced or negative ones.

For AIO Inclusion

  • Collect first-party reviews and mark them up with schema.
  • Rotate requests to third-party platforms based on vertical relevance.
  • Capture reviews with photo uploads and detailed descriptions.
  • Build unstructured citations through community involvement, media mentions, and event participation.

Download Our Complete Proactive Reputation Management Playbook for Digital Agencies for templates and workflows to operationalize this as a branded, revenue-generating service.

Reputation Is No Longer Separate From Rankings

AI Overviews now appear in nearly two-thirds of local business search queries. That means your clients’ next customers may form an impression—or make a decision—before ever clicking through to a website or map pack listing.

Visibility is no longer guaranteed. It’s earned through content, coverage, and credibility.

And reviews sit at the center of all three.

For agencies, this is a moment of opportunity. You already have the tools to guide clients through the shift. You know how to structure content, build citations, and amplify voices that resonate with customers.

Reputation management isn’t optional anymore. It’s infrastructure.

About GatherUp

GatherUp is the only proactive reputation management platform purpose-built for digital agencies. We help you build, manage, and defend your clients’ online reputations.

GatherUp supports:

  • First- and third-party review generation across multiple platforms,
  • Schema-marked up feedback collection for AIO relevance,
  • Intelligent, AI-assisted response workflows,
  • Seamless white-labeling for full agency control,
  • Scalable review operations tools that can help you manage 10 or 10,000 locations and clients.

Agencies that use GatherUp don’t just react to algorithm changes. They shape client visibility and defend it.

To learn more, watch the full webinar for actionable strategies, data-backed insights, and examples of AIO-influenced local search in the wild.

Image Credits

Featured Image: Image by GatherUp. Used with permission.

Confirmed CWV Reporting Glitch In Google Search Console via @sejournal, @martinibuster

Google Search Console Core Web Vitals (CWV) reporting for mobile is experiencing a dip that is confirmed to be related to the Chrome User Experience Report (CrUX). Search Console CWV reports for mobile performance show a marked dip beginning around July 10, after which the reporting appears to stop completely.

Not A Search Console Issue

Someone posted about it on Bluesky:

“Hey @johnmu.com is there a known issue or bug with Core Web Vitals reporting in Search Console? Seeing a sudden massive drop in reported URLs (both “good” and “needs improvement”) on mobile as of July 14.”

The person referred to July 14th, but that’s the date the reporting hit zero. The drop actually starts closer to July 10th, which you can see when you hover a cursor over the point where the drop begins.

Google’s John Mueller responded:

“These reports are based on samples of what we know for your site, and sometimes the overall sample size for a site changes. That’s not indicative of a problem. I’d focus on the samples with issues (in your case it looks fine), rather than the absolute counts.”

The person who initially started the discussion responded to inform Mueller that this isn’t happening just on his site; the peculiar drop in reporting is occurring on other sites as well.

Mueller was unaware of any problem with CWV reporting, so he naturally assumed that this was an artifact of natural changes in Internet traffic and user behavior. His next response continued under the assumption that this wasn’t a widespread issue:

“That can happen. The web is dynamic and alive – our systems have to readjust these samples over time.”

Then Jamie Indigo responded to confirm she’s seeing it, too:

“Hey John! Thanks for responding 🙂 It seems like … everyone beyond the usual ebb and flow. Confirming nothing in the mechanics have changed?”

At this point it was becoming clear that this weird behavior wasn’t isolated to just one site, and Mueller’s response to Jamie reflected this growing awareness. He confirmed that nothing had changed on the Search Console side, leaving open the possibility of an issue on the CrUX side of Core Web Vitals reporting.

His response:

“Correct, nothing in the mechanics changed (at least with regards to Search Console — I’m also not aware of anything on the Chrome / CrUX side, but I’m not as involved there).”

CrUX CWV Field Data

CrUX is the acronym for the Chrome User Experience Report. It provides CWV data based on real-world website visits, collected from Chrome users who have opted in to sharing their usage data for the report.

Google’s Chrome For Developers page explains:

“The Chrome User Experience Report (also known as the Chrome UX Report, or CrUX for short) is a dataset that reflects how real-world Chrome users experience popular destinations on the web.

CrUX is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented.

CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites.”
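For readers who want to inspect CrUX field data directly, Google exposes the dataset through the Chrome UX Report API. The sketch below builds a query payload for that API and parses the 75th-percentile LCP value out of a sample response. The sample numbers are made up for illustration, the API key is omitted, and no network call is made here; this is an assumption-laden sketch of the request/response shape, not a definitive client.

```python
# Endpoint for the Chrome UX Report API's queryRecord method.
# A real request is an HTTP POST to this URL with ?key=YOUR_API_KEY appended.
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(origin, form_factor="PHONE"):
    """Build the JSON body for a CrUX queryRecord request.

    `origin` is the site origin (e.g. "https://example.com"); `form_factor`
    restricts results to a device class such as PHONE or DESKTOP.
    """
    return {
        "origin": origin,
        "formFactor": form_factor,
        "metrics": ["largest_contentful_paint"],
    }

def extract_p75_lcp(response):
    """Pull the 75th-percentile LCP (in milliseconds) from a CrUX response."""
    metric = response["record"]["metrics"]["largest_contentful_paint"]
    return int(metric["percentiles"]["p75"])

# A trimmed-down sample response; the p75 value here is a placeholder:
sample = {
    "record": {
        "key": {"origin": "https://example.com", "formFactor": "PHONE"},
        "metrics": {
            "largest_contentful_paint": {"percentiles": {"p75": 2400}}
        },
    }
}

print(build_crux_query("https://example.com"))
print(extract_p75_lcp(sample))  # 2400
```

Comparing the p75 values the API returns over time is one way to check whether a dip like the one described here shows up in the raw CrUX data as well as in Search Console.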

Core Web Vitals Reporting Outage Is Widespread

At this point more people joined the conversation, with Alan Bleiweiss offering both a comment and a screenshot showing the same complete drop-off in Search Console CWV reporting on other websites.

He posted:

“oooh Google had to slow down server requests to set aside more power to keep the swimming pools cool as the summer heats up.”

Here’s a closeup detail of Alan’s screenshot of a Search Console CWV report:

Screenshot Of CWV Report Showing July 10 Drop

I searched the Chrome Lighthouse changelog to see if there’s anything there that corresponds to the drop but nothing stood out.

So what is going on?

CWV Reporting Outage Is Confirmed

I next checked the X and Bluesky accounts of Googlers who work on the Chrome team and found a post by Barry Pollard, Web Performance Developer Advocate on Google Chrome, who had posted about this issue last week.

Barry posted a note about a reporting outage on Bluesky:

“We’ve noticed another dip on the metrics this month, particularly on mobile. We are actively investigating this and have a potential reason and fix rolling out to reverse this temporary dip. We’ll update further next month. Other than that, there are no further announcements this month.”

Takeaways

Google Search Console Core Web Vitals (CWV) data drop:
A sudden stop in CWV reporting was observed in Google Search Console around July 10, especially on mobile.

Issue is widespread, not site-specific:
Multiple users confirmed the drop across different websites, ruling out individual site problems.

Origin of issue is not at Search Console:
John Mueller confirmed there were no changes on the Search Console side.

Possible link to CrUX data pipeline:
Barry Pollard from the Chrome team confirmed a reporting outage and said a potential fix is rolling out to reverse it.

We now know that this is a confirmed issue. Google Search Console’s Core Web Vitals reports began showing a reporting outage around July 10, leading users to suspect a bug. The issue was later acknowledged by Barry Pollard as a reporting outage affecting CrUX data, particularly on mobile.

Featured Image by Shutterstock/Mix and Match Studio