Google’s Top 5 SEO Tools via @sejournal, @MattGSouthern

I’ve spent years working with Google’s SEO tools, and while there are countless paid options out there, Google’s free toolkit remains the foundation of my optimization workflow.

These tools show you exactly what Google considers important, and that offers invaluable insights you can’t get anywhere else.

Let me walk you through the five Google tools I use daily and why they’ve become indispensable for serious SEO work.

1. Lighthouse

Screenshot from Chrome DevTools, July 2025

When I first discovered Lighthouse tucked away in Chrome’s developer tools, it felt like finding a secret playbook from Google.

This tool has become my go-to for quick site audits, especially when clients come to me wondering why their perfectly designed website isn’t ranking.

Getting Started With Lighthouse

Accessing Lighthouse is surprisingly simple.

On any webpage, press F12 (Windows) or Command+Option+C (Mac) to open developer tools. You’ll find Lighthouse as one of the tabs. Alternatively, right-click any page, select “Inspect,” and navigate to the Lighthouse tab.

What makes Lighthouse special is its comprehensive approach. It evaluates five key areas: performance, progressive web app standards, best practices, accessibility, and SEO.

While accessibility might not seem directly SEO-related, I’ve learned that Google increasingly values sites that work well for all users.

Real-World Insights From The Community

The developer community has mixed feelings about Lighthouse, and I understand why.

As _listless noted, “Lighthouse is great because it helps you identify easy wins for performance and accessibility.”

However, CreativeTechGuyGames warned about the trap of chasing perfect scores: “There’s an important trade-off between performance and perceived performance.”

I’ve experienced this firsthand. One client insisted on achieving a perfect 100 score across all categories.

We spent weeks optimizing, only to find that some changes actually hurt user experience. The lesson? Use Lighthouse as a guide, not gospel.

Why Lighthouse Matters For SEO

The SEO section might seem basic as it checks things like meta tags, mobile usability, and crawling issues, but these fundamentals matter.

I’ve seen sites jump in rankings just by fixing the simple issues Lighthouse identifies. It validates crucial elements like:

  • Proper viewport configuration for mobile devices.
  • Title and meta description presence.
  • HTTP status codes.
  • Descriptive anchor text.
  • Hreflang implementation.
  • Canonical tags.
  • Mobile tap target sizing.

One frustrating aspect many developers mention is score inconsistency.

As one Redditor shared, “I ended up just re-running the analytics WITHOUT changing a thing and I got a performance score ranging from 33% to 90%.”

I’ve seen this too, which is why I always run multiple tests and focus on trends rather than individual scores.

Making The Most Of Lighthouse

My best advice? Use the “Opportunities” section for quick wins. Export your results as JSON to track improvements over time.
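
If you do export JSON reports, a small script makes the tracking part painless. Here’s a minimal Python sketch that assumes the standard Lighthouse report layout (each entry under categories exposes a 0-1 score); the file names are placeholders, so adjust them to your setup.

```python
import csv
import json
from datetime import date
from pathlib import Path

REPORT = Path("lighthouse-report.json")   # exported from DevTools or the Lighthouse CLI
HISTORY = Path("lighthouse-history.csv")  # running log of scores per audit date

report = json.loads(REPORT.read_text())

# Each category (performance, accessibility, best-practices, seo) carries a 0-1 score.
row = {"date": date.today().isoformat(), "url": report.get("finalUrl", "")}
for name, category in report.get("categories", {}).items():
    score = category.get("score")
    row[name] = round(score * 100) if score is not None else ""

write_header = not HISTORY.exists()
with HISTORY.open("a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(row.keys()))
    if write_header:
        writer.writeheader()
    writer.writerow(row)

print(row)
```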

And remember what one developer wisely stated: “You can score 100 on accessibility and still ship an unusable [website].” The scores are indicators, not guarantees of quality.

2. PageSpeed Insights

Screenshot from pagespeed.web.dev, July 2025

PageSpeed Insights transformed from a nice-to-have tool to an essential one when Core Web Vitals became ranking considerations.

I check it regularly, especially after Google’s page experience update made site speed a confirmed ranking signal.

Understanding The Dual Nature Of PSI

What sets PageSpeed Insights apart is its combination of lab data (controlled test results) and field data (real user experiences from the Chrome User Experience Report).

This dual approach has saved me from optimization rabbit holes more times than I can count.

The field data is gold as it shows how real users experience your site over the past 28 days. I’ve had situations where lab scores looked terrible, but field data showed users were having a great experience.

This usually means the lab test conditions don’t match your actual user base.
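
If you want to pull both datasets programmatically, the public PageSpeed Insights v5 API returns them in a single response. Here’s a rough Python sketch; the endpoint is the documented v5 API, but treat the exact response field names as my best reading of it and verify them against the documentation.

```python
import requests

# Public PageSpeed Insights v5 endpoint; an API key is optional for light usage.
ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://www.example.com/", "strategy": "mobile", "category": "performance"}

data = requests.get(ENDPOINT, params=params, timeout=60).json()

# Lab data: the Lighthouse run PSI performs under controlled conditions.
lab_score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"Lab performance score: {round(lab_score * 100)}")

# Field data: real-user CrUX metrics over roughly the past 28 days (absent for low-traffic pages).
field = data.get("loadingExperience", {})
print(f"Field data verdict: {field.get('overall_category', 'NO DATA')}")
for metric, values in field.get("metrics", {}).items():
    print(f"  {metric}: p75={values.get('percentile')} ({values.get('category')})")
```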

Community Perspectives On PSI

The Reddit community has strong opinions about PageSpeed Insights.

NHRADeuce perfectly captured a common frustration: “The score you get from PageSpeed Insights has nothing to do with how fast your site loads.”

While it might sound harsh, there’s truth to it since the score is a simplified representation of complex metrics.

Practical Optimization Strategies

Through trial and error, I’ve developed a systematic approach to PSI optimization.

Arzishere’s strategy mirrors mine: “Added a caching plugin along with minifying HTML, CSS & JS (WP Rocket).” These foundational improvements often yield the biggest gains.

DOM size is another critical factor. As Fildernoot discovered, “I added some code that increased the DOM size by about 2000 elements and PageSpeed Insights wasn’t happy about that.” I now audit DOM complexity as part of my standard process.

Mobile optimization deserves special attention. A Redditor asked the right question: “How is your mobile score? Desktop is pretty easy with a decent theme and Litespeed hosting and LScaching plugin.”

In my experience, mobile scores are typically 20-30 points lower than desktop, and that’s where most of your users are.

The Diminishing Returns Reality

Here’s the hard truth about chasing perfect PSI scores: “You’re going to see diminishing returns as you invest more and more resources into this,” as E0nblue noted.

I tell clients to aim for “good” Core Web Vitals status rather than perfect scores. The jump from 50 to 80 is much easier and more impactful than 90 to 100.

3. Safe Browsing Test

Screenshot from transparencyreport.google.com/safe-browsing/search, July 2025

The Safe Browsing Test might seem like an odd inclusion in an SEO toolkit, but I learned its importance the hard way.

A client’s site got hacked, flagged by Safe Browsing, and disappeared from search results overnight. Their organic traffic dropped to zero in hours.

Understanding Safe Browsing’s Role

Google’s Safe Browsing protects users from dangerous websites by checking for malware, phishing attempts, and deceptive content.

As Lollygaggindovakiin explained, “It automatically scans files using both signatures of diverse types and uses machine learning.”

The tool lives in Google’s Transparency Report, and I check it monthly for all client sites. It shows when Google last scanned your site and any current security issues.

The integration with Search Console means you’ll get alerts if problems arise, but I prefer being proactive.

Community Concerns And Experiences

The Reddit community has highlighted some important considerations.

One concerning trend expressed by Nextdns is false positives: “Google is falsely flagging apple.com.akadns.net as malicious.” While rare, false flags can happen, which is why regular monitoring matters.

Privacy-conscious users raise valid concerns about data collection.

As Mera-beta noted, “Enhanced Safe Browsing will send content of pages directly to Google.” For SEO purposes, standard Safe Browsing protection is sufficient.

Why SEO Pros Should Care

When Safe Browsing flags your site, Google may:

  • Remove your pages from search results.
  • Display warning messages to users trying to visit.
  • Drastically reduce your click-through rates.
  • Impact your site’s trust signals.

I’ve helped several sites recover from security flags. The process typically takes one to two weeks after cleaning the infection and requesting a review.

That’s potentially two weeks of lost traffic and revenue, so prevention is infinitely better than cure.

Best Practices For Safe Browsing

My security checklist includes:

  • Weekly automated scans using the Safe Browsing API for multiple sites (see the sketch after this list).
  • Immediate investigation of any Search Console security warnings.
  • Regular audits of third-party scripts and widgets.
  • Monitoring of user-generated content areas.
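
To illustrate the first item above, here’s a hedged Python sketch against the Safe Browsing Lookup API (v4). The request shape follows the documented threatMatches:find format as I understand it; the API key and site list are placeholders, and you should confirm quota and usage rules in the official docs before scheduling this as a weekly job.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: create one in Google Cloud Console
LOOKUP_URL = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

SITES = ["https://www.example.com/", "https://client-site.example/"]  # placeholder client sites

payload = {
    "client": {"clientId": "my-agency-monitor", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [{"url": url} for url in SITES],
    },
}

matches = requests.post(LOOKUP_URL, json=payload, timeout=10).json().get("matches", [])

if matches:
    for match in matches:
        print(f"FLAGGED: {match['threat']['url']} ({match['threatType']})")
else:
    print("No Safe Browsing matches for the monitored sites.")
```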

4. Google Trends

Screenshot from Google Trends, July 2025

Google Trends has evolved from a curiosity tool to a strategic weapon in my SEO arsenal.

With updates now happening every 10 minutes and AI-powered trend detection, it’s become indispensable for content strategy.

Beyond Basic Trend Watching

What many SEO pros miss is that Trends isn’t just about seeing what’s popular. I use it to:

  • Validate content ideas before investing resources.
  • Identify seasonal patterns for planning.
  • Spot declining topics to avoid.
  • Find regional variations for local SEO.
  • Compare brand performance against competitors.

Community Insights On Trends

The Reddit community offers balanced perspectives on Google Trends.

Maltelandwehr highlighted its unique value: “Some of the data in Google Trends is really unique. Even SEOs with monthly 7-figure budgets will use Google Trends for certain questions.”

However, limitations exist. As Dangerroo_2 clarified, “Trends does not track popularity, but search demand.”

This distinction matters since a declining trend doesn’t always mean fewer total searches, just decreasing relative interest.

For niche topics, frustrations mount. iBullyDummies complained, “Google has absolutely ruined Google Trends and no longer evaluates niche topics.” I’ve found this particularly true for B2B or technical terms with lower search volumes.

Advanced Trends Strategies

My favorite Trends hacks include:

  • The Comparison Method: I always compare terms against each other rather than viewing them in isolation. This reveals relative opportunity better than absolute numbers.
  • Category Filtering: This prevents confusion between similar terms. The classic example is “jaguar,” where without filtering you’re mixing car searches with animal searches (see the pytrends sketch after this list).
  • Rising Trends Mining: The “Rising” section often reveals opportunities before they become competitive. I’ve launched successful content campaigns by spotting trends here early.
  • Geographic Arbitrage: Finding topics trending in one region before they spread helps you prepare content in advance.
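
If you want to script the comparison and category-filtering hacks, the unofficial pytrends library mirrors what the Trends UI does. Google offers no official Trends API, so treat this Python sketch as a convenience under that assumption; the terms, geography, and category ID are placeholders.

```python
from pytrends.request import TrendReq  # unofficial Google Trends client: pip install pytrends

pytrends = TrendReq(hl="en-US", tz=360)

# Comparison Method: always compare terms against each other rather than in isolation.
terms = ["jaguar car", "jaguar animal"]

# Category Filtering: pass a Google Trends category ID (0 = all categories) to avoid mixing
# intents; look up the ID you need (e.g., Autos & Vehicles) in the Trends UI or pytrends docs.
pytrends.build_payload(terms, cat=0, timeframe="today 12-m", geo="US")

interest = pytrends.interest_over_time()  # weekly relative interest, 0-100 per term
print(interest[terms].mean().sort_values(ascending=False))

# Rising Trends Mining: spot breakout queries before they become competitive.
rising = pytrends.related_queries()
for term in terms:
    data = rising.get(term, {}).get("rising")
    print(term, data.head() if data is not None else "no rising data")
```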

Addressing The Accuracy Debate

Some prefer paid tools, as Contentwritenow stated: “I prefer using a paid tool like BuzzSumo or Semrush for trends and content ideas simply because I don’t trust Google Trends.”

While I use these tools too, they pull from different data sources. Google Trends shows actual Google search behavior, which is invaluable for SEO.

The relative nature of Trends data confuses many users, Sneakysneakums among them. As the Google News Initiative explains:

“A line trending downward means that a search term’s relative popularity is decreasing. But that doesn’t necessarily mean the total number of searches for that term is decreasing.”

I always combine Trends data with absolute volume estimates from other tools.

→ Read more: Google Shows 7 Hidden Features In Google Trends

5. Google Search Console

Screenshot from Google Search Console, July 2025

No list of Google SEO tools would be complete without Search Console.

If the other tools are your scouts, Search Console is your command center, showing exactly how Google sees and ranks your site.

Why Search Console Is Irreplaceable

Search Console provides data you literally cannot get anywhere else. As Peepeepoopoobutler emphasized, “GSC is the accurate real thing. But it doesn’t really give suggestions like ads does.”

That’s exactly right. While it won’t hold your hand with optimization suggestions, the raw data it provides is gold.

The tool offers:

  • Actual search queries driving traffic (not just keywords you think matter).
  • True click-through rates by position.
  • Index coverage issues before they tank your traffic.
  • Core Web Vitals data for all pages.
  • Manual actions and security issues that could devastate rankings.

I check Search Console daily, and I’m not alone.

Successful site owner ImportantDoubt6434 shared, “Yes monitoring GSC is part of how I got my website to the front page.”

The Performance report alone has helped me identify countless optimization opportunities.

Setting Up For Success

Getting started with Search Console is refreshingly straightforward.

As Anotherbozo noted, “You don’t need to verify each individual page but maintain the original verification method.”

I recommend domain-level verification for comprehensive access. You can “verify ownership by site or by domain (second level domain),” but domain verification gives you data across all subdomains and protocols.

The verification process takes minutes, but the insights last forever. I’ve seen clients discover they were ranking for valuable keywords they never knew about, simply because they finally looked at their Search Console data.

Hidden Powers Of Search Console

What many SEO pros miss are the advanced capabilities lurking in Search Console.

Seosavvy revealed a powerful strategy: “Google search console for keyword research is super powerful.” I couldn’t agree more.

By filtering for queries with high impressions but low click-through rates, you can find content gaps and optimization opportunities your competitors miss.

The structured data reports have saved me countless hours. CasperWink mentioned working with schemas, “I have already created the schema with a review and aggregateRating along with confirming in Google’s Rich Results Test.”

Search Console will tell you if Google can actually read and understand your structured data in the wild, something testing tools can’t guarantee.

Sitemap management is another underutilized feature. Yetisteve correctly stated, “Sitemaps are essential, they are used to give Google good signals about the structure of the site.”

I’ve diagnosed indexing issues just by comparing submitted versus indexed pages in the sitemap report.

The Reality Check: Limitations To Understand

Here’s where the community feedback gets really valuable.

Experienced user SimonaRed warned, “GSC only shows around 50% of the reality.” This is crucial to understand since Google samples and anonymizes data for privacy. You’re seeing a representative sample, not every single query.

Some find the interface challenging. As UncleFeather6000 admitted, “I feel like I don’t really understand how to use Google’s Search Console.”

I get it because the tool has evolved significantly, and the learning curve can be steep. My advice? Start with the Performance report and gradually explore other sections.

Recent changes have frustrated users, too. “Google has officially removed Google Analytics data from the Search Console Insights tool,” Shakti-basan noted.

This integration loss means more manual work correlating data between tools, but the core Search Console data remains invaluable.

Making Search Console Work Harder

Through years of daily use, I’ve developed strategies to maximize Search Console’s value:

  • The Position 11-20 Gold Mine: Filter for keywords ranking on page two. These are your easiest wins since Google already thinks you’re relevant. You just need a push to page one (see the API sketch after this list).
  • Click-Through Rate Optimization: Sort by impressions, then look for low CTR. These queries show demand but suggest your titles and descriptions need work.
  • Query Matching: Compare what you think you rank for versus what Search Console shows. The gaps often reveal content opportunities or user intent mismatches.
  • Page-Level Analysis: Don’t just look at site-wide metrics. Individual page performance often reveals technical issues or content problems.
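
The first two filters above can be pulled programmatically with the Search Console API. The sketch below uses google-api-python-client with a service account that has been granted access to the property; the property, dates, and thresholds are placeholders, and the row fields (clicks, impressions, ctr, position) reflect my reading of the Search Analytics query response.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file("service-account.json", scopes=SCOPES)
gsc = build("searchconsole", "v1", credentials=creds)

SITE = "sc-domain:example.com"  # placeholder property

response = gsc.searchanalytics().query(
    siteUrl=SITE,
    body={
        "startDate": "2025-06-01",
        "endDate": "2025-06-30",
        "dimensions": ["query"],
        "rowLimit": 25000,
    },
).execute()

rows = response.get("rows", [])

# Position 11-20 gold mine: queries stuck on page two.
page_two = [r for r in rows if 11 <= r["position"] <= 20]

# CTR opportunities: high impressions but weak click-through.
low_ctr = [r for r in rows if r["impressions"] > 500 and r["ctr"] < 0.02]

for r in sorted(page_two, key=lambda r: r["impressions"], reverse=True)[:20]:
    print(f"{r['keys'][0]}: pos {r['position']:.1f}, {r['impressions']} impressions")
print(f"{len(low_ctr)} queries with >500 impressions and CTR under 2%")
```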

Integrating Search Console With Other Tools

The magic happens when you combine Search Console data with the other tools:

  • Use Trends to validate whether declining traffic is due to ranking drops or decreased search interest.
  • Cross-reference PageSpeed Insights recommendations with pages showing Core Web Vitals issues in Search Console.
  • Verify Lighthouse mobile-friendliness findings against Mobile Usability reports.
  • Monitor Safe Browsing status directly in the Security Issues section.

Mr_boogieman asked rhetorically, “How are you tracking results without looking at GSC?” It’s a fair question.

Without Search Console, you’re flying blind, relying on third-party estimations instead of data straight from Google.

Bringing It All Together

These five tools form the foundation of effective SEO work. They’re free, they’re official, and they show you exactly what Google values.

While specialized SEO platforms offer additional features, mastering these Google tools ensures your optimization efforts align with what actually matters for rankings.

My workflow typically starts with Search Console to identify opportunities, uses Trends to validate content ideas, employs Lighthouse and PageSpeed Insights to optimize technical performance, and includes Safe Browsing checks to protect hard-won rankings.

Remember, these tools reflect Google’s current priorities. As search algorithms evolve, so do these tools. Staying current with their features and understanding their insights keeps your SEO strategy aligned with Google’s direction.

The key is using them together, understanding their limitations, and remembering that tools are only as good as the strategist wielding them. Start with these five, master their insights, and you’ll have a solid foundation for SEO success.

Featured Image: Roman Samborskyi/Shutterstock

Google’s June 2025 Update Analysis: What Just Happened? via @sejournal, @martinibuster

Google’s June 2025 Core Update just finished. What’s notable is that while some say it was a big update, it didn’t feel disruptive, indicating that the changes may have been more subtle than game-changing. Here are some clues that may explain what happened with this update.

Two Search Ranking Related Breakthroughs

Although a lot of people are saying that the June 2025 Update was related to MUVERA, that’s not really the whole story. There were two notable backend announcements over the past few weeks, MUVERA and Google’s Graph Foundation Model.

Google MUVERA

MUVERA (Multi-Vector Retrieval via Fixed Dimensional Encodings, or FDEs) is a retrieval algorithm that makes retrieving web pages more accurate and more efficient. The notable part for SEO is that it is able to retrieve fewer candidate pages for ranking, leaving the less relevant pages behind and promoting only the more precisely relevant pages.

This enables Google to keep the precision and accuracy of multi-vector retrieval without the drawbacks of traditional multi-vector systems.

Google’s MUVERA announcement explains the key improvements:

“Improved recall: MUVERA outperforms the single-vector heuristic, a common approach used in multi-vector retrieval (which PLAID also employs), achieving better recall while retrieving significantly fewer candidate documents… For instance, FDE’s retrieve 5–20x fewer candidates to achieve a fixed recall.

Moreover, we found that MUVERA’s FDEs can be effectively compressed using product quantization, reducing memory footprint by 32x with minimal impact on retrieval quality.

These results highlight MUVERA’s potential to significantly accelerate multi-vector retrieval, making it more practical for real-world applications.

…By reducing multi-vector search to single-vector MIPS, MUVERA leverages existing optimized search techniques and achieves state-of-the-art performance with significantly improved efficiency.”
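
To make the “reducing multi-vector search to single-vector MIPS” idea concrete, here’s a toy Python sketch. It is emphatically not MUVERA’s FDE algorithm: it simply mean-pools per-token vectors into one fixed-size vector so that a single maximum-inner-product search can shortlist candidates for exact re-scoring, which is the general shape of the optimization described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in multi-vector representations: each document is a variable number of 128-dim token vectors.
docs = [rng.normal(size=(rng.integers(5, 40), 128)) for _ in range(1000)]

# Collapse each document to one fixed-size vector (MUVERA's real FDEs are far more sophisticated).
doc_matrix = np.stack([d.mean(axis=0) for d in docs])

query_tokens = rng.normal(size=(8, 128))
query_vec = query_tokens.mean(axis=0)

# Single-vector MIPS over all candidates, then keep a small shortlist for exact multi-vector re-scoring.
scores = doc_matrix @ query_vec
shortlist = np.argsort(scores)[::-1][:20]
print("Candidate documents to re-rank with full multi-vector scoring:", shortlist)
```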

Google’s Graph Foundation Model

A graph foundation model (GFM) is a type of AI model designed to generalize across different graph structures and datasets. It’s adaptable in a similar way to how large language models can generalize across domains they weren’t initially trained on.

Google’s GFM classifies nodes and edges, which could plausibly include documents, links, and users, supporting tasks such as spam detection, product recommendations, and other kinds of classification.

This is something very new, published on July 10th, but already tested on ads for spam detection. It is in fact a breakthrough in graph machine learning and the development of AI models that can generalize across different graph structures and tasks.

It overcomes the limitations of Graph Neural Networks (GNNs), which are tethered to the graph they were trained on. Graph Foundation Models, like LLMs, aren’t limited to what they were trained on, which makes them versatile for handling new or unseen graph structures and domains.

Google’s announcement of GFM says that it improves zero-shot and few-shot learning, meaning it can make accurate predictions on different types of graphs without additional task-specific training (zero-shot), even when only a small number of labeled examples are available (few-shot).

Google’s GFM announcement reported these results:

“Operating at Google scale means processing graphs of billions of nodes and edges where our JAX environment and scalable TPU infrastructure particularly shines. Such data volumes are amenable for training generalist models, so we probed our GFM on several internal classification tasks like spam detection in ads, which involves dozens of large and connected relational tables. Typical tabular baselines, albeit scalable, do not consider connections between rows of different tables, and therefore miss context that might be useful for accurate predictions. Our experiments vividly demonstrate that gap.

We observe a significant performance boost compared to the best tuned single-table baselines. Depending on the downstream task, GFM brings 3x – 40x gains in average precision, which indicates that the graph structure in relational tables provides a crucial signal to be leveraged by ML models.”

What Changed?

It’s not unreasonable to speculate that integrating both MUVERA and GFM could enable Google’s ranking systems to more precisely rank relevant content by improving retrieval (MUVERA) and mapping relationships between links or content to better identify patterns associated with trustworthiness and authority (GFM).

Integrating both MUVERA and GFM would enable Google’s ranking systems to more precisely surface relevant content that searchers would find satisfying.

Google’s official announcement said this:

“This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”

This particular update did not seem to be accompanied by widespread reports of massive changes. This update may fit into what Google’s Danny Sullivan was talking about at Search Central Live New York, where he said they would be making changes to Google’s algorithm to surface a greater variety of high-quality content.

Search marketer Glenn Gabe tweeted that he saw some sites that had been affected by the “Helpful Content Update,” also known as HCU, had surged back in the rankings, while other sites worsened.

Although he said that this was a very big update, the response to his tweets was muted, not the kind of response that happens when there’s a widespread disruption. I think it’s fair to say that, although Glenn Gabe’s data shows it was a big update, it may not have been a disruptive one.

So what changed? My speculation is that it was a widespread change that improved Google’s ability to surface relevant content, helped by better retrieval and an improved ability to interpret patterns of trustworthiness and authoritativeness, as well as to better identify low-quality sites.

Read More:

Google MUVERA

Google’s Graph Foundation Model

Google’s June 2025 Update Is Over

Featured Image by Shutterstock/Kues

OpenAI ChatGPT Agent Marks A Turning Point For Businesses And SEO via @sejournal, @martinibuster

OpenAI announced a new way for users to interact with the web to get things done in their personal and professional lives. ChatGPT agent is said to be able to automate planning a wedding, booking an entire vacation, updating a calendar, and converting screenshots into editable presentations. The impact on publishers, ecommerce stores, and SEOs cannot be overstated. This is what you should know and how to prepare for what could be one of the most consequential changes to online interactions since the invention of the browser.

OpenAI ChatGPT Agent Overview

OpenAI ChatGPT agent is based on three core parts: OpenAI’s Operator and Deep Research, two autonomous AI agents, plus ChatGPT’s natural language capabilities.

  1. Operator can browse the web and interact with websites to complete tasks.
  2. Deep Research is designed for multi-step research that is able to combine information from different resources and generate a report.
  3. ChatGPT agent requests permission before taking significant actions and can be interrupted and halted at any point.

ChatGPT Agent Capabilities

ChatGPT agent has access to multiple tools to help it complete tasks:

  • A visual browser for interacting with web pages through their on-page interface.
  • A text-based browser for answering reasoning-based queries.
  • A terminal for executing actions through a command-line interface.
  • Connectors, which are authorized user-friendly integrations (using APIs) that enable ChatGPT agent to interact with third-party apps.

Connectors are like bridges between ChatGPT agent and your authorized apps. When users ask the agent to complete a task, connectors give it direct API access to retrieve the needed information, interact with the connected apps, and complete the work.

ChatGPT agent can open a page with a browser (either text or visual), download a file, perform an action on it, and then view the results in the visual browser. ChatGPT connectors enable it to connect with external apps like Gmail or a calendar for answering questions and completing tasks.

ChatGPT Agent Automation of Web-Based Tasks

ChatGPT agent is able to complete entire complex tasks and summarize the results.

Here’s how OpenAI describes it:

“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.

You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.”

ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.

….ChatGPT agent can access your connectors, allowing it to integrate with your workflows and access relevant, actionable information. Once authenticated, these connectors allow ChatGPT to see information and do things like summarize your inbox for the day or find time slots you’re available for a meeting—to take action on these sites, however, you’ll still be prompted to log in by taking over the browser.

Additionally, you can schedule completed tasks to recur automatically, such as generating a weekly metrics report every Monday morning.”

What Does ChatGPT Agent Mean For SEO?

ChatGPT agent raises the stakes for publishers, online businesses, and SEOs: making websites agentic-AI-friendly becomes increasingly important as more users get acquainted with the tool and begin sharing how it helps them in their daily lives and at work.

A recent study about AI agents found that OpenAI’s Operator responded well to structured on-page content. Structured on-page content enables AI agents to accurately retrieve specific information relevant to their tasks, perform actions (like filling in a form), and helps to disambiguate the web page (i.e., make it easily understood). I usually refrain from using jargon, but disambiguation is a word all SEOs need to understand because Agentic AI makes it more important than it has ever been.

Examples Of On-Page Structured Data

  • Headings
  • Tables
  • Forms with labeled input fields
  • Product listings with consistent fields like price, availability, and the product name or label in a title.
  • Authors, dates, and headlines
  • Menus and filters in ecommerce web pages

Takeaways

  • ChatGPT agent is a milestone in how users interact with the web, capable of completing multi-step tasks like planning trips, analyzing competitors, and generating reports or presentations.
  • OpenAI’s ChatGPT agent combines autonomous agents (Operator and Deep Research) with ChatGPT’s natural language interface to automate personal and professional workflows.
  • Connectors extend Agent’s capabilities by providing secure API-based access to third-party apps like calendars and email, enabling task execution across platforms.
  • Agent can interact directly with web pages, forms, and files, using tools like a visual browser, code execution terminal, and file handling system.
  • Agentic AI responds well to structured, disambiguated web content, making SEO and publisher alignment with structured on-page elements more important than ever.
  • Structured data improves an AI agent’s ability to retrieve and act on website information. Sites that are optimized for AI agents will gain the most, as more users depend on agent-driven automation to complete online tasks.

OpenAI’s ChatGPT agent is an automation system that can independently complete complex online tasks, such as booking trips, analyzing competitors, or summarizing emails, by using tools like browsers, terminals, and app connectors. It interacts directly with web pages and connected apps, performing actions that previously required human input.

For publishers, ecommerce sites, and SEOs, ChatGPT agent makes structured, easily interpreted on-page content critical because websites must now accommodate AI agents that interact with and act on their data in real time.

Read More About Optimizing For Agentic AI

Marketing To AI Agents Is The Future – Research Shows Why

Featured Image by Shutterstock/All kind of people

Google’s John Mueller Clarifies How To Remove Pages From Search via @sejournal, @MattGSouthern

In a recent installment of SEO Office Hours, Google’s John Mueller offered guidance on how to keep unwanted pages out of search results and addressed a common source of confusion around sitelinks.

The discussion began with a user question: how can you remove a specific subpage from appearing in Google Search, even if other websites still link to it?

Sitelinks vs. Regular Listings

Mueller noted he wasn’t “100% sure” he understood the question, but assumed it referred either to sitelinks or standard listings. He explained that sitelinks, those extra links to subpages beneath a main result, are automatically generated based on what’s indexed for your site.

Mueller said:

“There’s no way for you to manually say I want this page indexed. I just don’t want it shown as a sitelink.”

In other words, you can’t selectively prevent a page from being a sitelink while keeping it in the index. If you want to make sure a page never appears in any form in search, a more direct approach is required.

How To Deindex A Page

Mueller outlined a two-step process for removing pages from Google Search results using a noindex directive:

  1. Allow crawling: First, make sure Google can access the page. If it’s blocked by robots.txt, the noindex tag won’t be seen and won’t work.
  2. Apply a noindex tag: Once crawlable, add a noindex meta tag to the page to instruct Google not to include it in search results.

This method works even if other websites continue linking to the page.
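
A quick way to sanity-check both steps is a small script. This Python sketch (the URLs are placeholders) first confirms that robots.txt isn’t blocking Googlebot from the page, then does a rough check for a noindex signal in either the meta robots tag or the X-Robots-Tag header.

```python
import urllib.robotparser

import requests

PAGE = "https://www.example.com/private-page/"  # placeholder URL you want deindexed
ROBOTS = "https://www.example.com/robots.txt"   # placeholder robots.txt location

# Step 1: the page must be crawlable, otherwise Google never sees the noindex.
rp = urllib.robotparser.RobotFileParser()
rp.set_url(ROBOTS)
rp.read()
crawlable = rp.can_fetch("Googlebot", PAGE)

# Step 2: rough check for a noindex directive in the HTML or the response headers.
resp = requests.get(PAGE, timeout=10)
html = resp.text.lower()
meta_noindex = '<meta name="robots"' in html and "noindex" in html
header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()

print(f"Crawlable by Googlebot: {crawlable}")
print(f"noindex signal found: {meta_noindex or header_noindex}")
```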

Removing Pages Quickly

If you need faster action, Mueller suggested using Google Search Console’s URL Removal Tool, which allows site owners to request temporary removal.

“It works very quickly” for verified site owners, Mueller confirmed.

For pages on sites you don’t control, there’s also a public version of the removal tool, though Mueller noted it “takes a little bit longer” since Google must verify that the content has actually been taken down.

Hear Mueller’s full response in the video below:

What This Means For You

If you’re trying to prevent a specific page from appearing in Google results:

  • You can’t control sitelinks manually. Google’s algorithm handles them automatically.
  • Use noindex to remove content. Just make sure the page isn’t blocked from crawling.
  • Act quickly when needed. The URL Removal Tool is your fastest option, especially if you’re a verified site owner.

Choosing the right method, whether it’s noindex or a removal request, can help you manage visibility more effectively.

Brand Bias For Visibility In Search & LLMs: A Conversation With Stephen Kenwright via @sejournal, @theshelleywalsh

I recently saw Stephen Kenwright speak at a small Sistrix event in Leeds about strategies for exploiting Google’s brand bias, and a lot of what he said still feels as fresh today as it did over a decade ago when he first started promoting this theory.

Right now, the search experience is changing more than in the last 25 years, and many SEOs are citing that brand is the critical focus for survival.

Some might say (Stephen included) that this is what SEO should always have been about.

I spoke to Stephen, the founder of Rise at Seven, about his talk and about how his theories and strategies could translate to a world of large language model (LLM) optimization alongside a fractured search journey.

You can watch the full interview with Stephen on IMHO below, or continue reading the article summary.

Google’s Brand Bias Is Foundational

Brand bias isn’t a recent development. Stephen was already writing about it in 2016 during his time at Branded3. What underlines this bias is the trust users have in brands.

“Google wants to give a good experience to its users. That means surfacing the results they expect to see. Often, that’s a brand they already know,” Stephen explained.

When users search, they’re often subconsciously looking to reconnect with a mental shortcut that brands provide. It’s not about discovery; it’s about recognition.

When brands invest in traditional marketing channels, they influence user behavior in ways that create cascading effects across digital platforms.

Television advertising, for example, makes viewers significantly more likely to click on branded results even when searching for generic terms.

Traditional Marketing Directly Influences Search Behavior

At his talk in Leeds, Stephen referenced research that demonstrates television advertising creates measurable impacts on search behavior, with viewers 33% more likely to click on advertised brands in search results.

“People are about a third more likely to click your result after seeing a TV ad, and they convert better, too,” Stephen said.

When users encounter brands through traditional marketing channels, they develop mental associations that influence their subsequent search behavior. These behavioral patterns then signal to Google that certain brands provide better user experiences.

“Having the trust from the user comes from brand building activity. It doesn’t come from having an exact match domain that happens to rank first for a keyword,” Stephen emphasized. “That’s just not how the real world works.”

Investment In Brand Building Gains More Buy-In From C-Suite

Even though this bias has been evident for so long, Stephen highlighted a disconnect from brand-building activities within the industry.

“Every other discipline from PR to the marketing manager through to the social media team, literally everyone else, including the C-suite is interested in brand in some capacity and historically SEOs have been the exception,” Stephen explained.

This separation has created missed opportunities for SEOs to access larger marketing budgets and gain executive support for their initiatives.

By shifting focus toward brand-building activities that impact search visibility, they can better align with broader marketing objectives.

“Just by switching that mindset and asking, ‘What’s the impact on brand of our SEO activity?’ we get more buy-in, bigger budgets, and better results,” he said.

Make A Conscious Decision About Which Search Engine To Optimize For

While Google’s dominance remains statistically intact, user behavior tells us that there has always existed a fractured search journey.

Stephen cited that half of UK adults use Bing monthly. A quarter are on Quora. Pinterest and Reddit are seeing massive engagement, especially with younger users. Nearly everyone uses YouTube, and they spend significantly more time on it than on Google.

Also, specialized search engines like Autotrader for used cars and Amazon for ecommerce have captured significant market share in their respective categories.

This fragmentation means that conscious decisions about platform optimization become increasingly important. Different platforms serve different demographics and purposes, requiring strategic choices about where to invest optimization efforts.

I asked Stephen if he thought Google’s dominance was under threat, or if it would remain part of a fractured search journey. But, he thought Google would be relevant for at least half a decade to come.

“I don’t see Google going anywhere. And I also don’t see the massive difference in LLM optimization. So most of the things that you would be doing for Google now … are broadly marketing things anyway and broadly impact LLM optimization.”

LLM Optimization Could Be A Return To Traditional Marketing

Looking toward AI-driven search platforms, Stephen believes the same brand-building tactics that work for Google will prove effective across LLM platforms. These new platforms don’t necessarily demand new rules; they reinforce old ones.

“What works in Google now, broadly speaking, is good marketing. That also applies to LLMs,” he said.

While we’re still learning how LLMs surface content and determine authority, early indicators suggest trust signals, brand presence, and real-world engagement all play pivotal roles.

The key insight is that LLM optimization doesn’t require entirely new approaches but rather a return to fundamental marketing principles focused on audience needs and brand trust.

Television Advertising Creates Significant Impact

I asked Stephen what he would do if he were to launch a new brand and how he would quickly gain traction.

In an interesting twist for someone who has worked in the SEO industry for so long, he cited TV as his primary focus.

“I’d build a transactional website and spend millions on TV [advertising]. If I did more [marketing], I’d add PR.” Stephen told me.

This recommendation reflects his belief that traditional marketing channels create a significant impact.

He believes the combination of a functional ecommerce website with substantial television advertising investment, supplemented by PR activities, provides the foundation for rapid brand recognition and search visibility.

Before We Ruined The Internet

To me, it feels like we are going full circle and back to the days prior to the introduction of “new media” in the early 90s, when TV advertising was dominant and offline advertising was heavily influential.

“It’s like we’re going back to before we ruined the internet,” Stephen joked.

In reality, we’re circling back to what always worked: building real brands that people trust, remember, and seek out. The future requires classical marketing principles that prioritize audience understanding and brand building over technical optimization tactics.

This shift benefits the entire marketing industry by encouraging more integrated approaches that consider the complete customer journey rather than isolated technical optimizations.

Success in both search and LLM platforms increasingly depends on building genuine brand recognition and trust through consistent, audience-focused marketing activities across multiple channels.

Whether it’s Google, Bing, an LLM, or something we haven’t seen yet, brand is the one constant that wins.

Thank you to Stephen Kenwright for offering his insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

Google Answers Question About Structured Data And Logged Out Users via @sejournal, @martinibuster

Someone asked whether it’s okay for structured data to show Google content that logged-out users can’t see, in this case a product price that’s only visible to logged-in users. John Mueller’s answer was unequivocal.

This is the question that was asked:

“Will this markup work for products in a unauthenticated view in where the price is not available to users and they will need to login (authenticate) to view the pricing information on their end? Let me know your thoughts.”

John Mueller answered:

“If I understand your use-case, then no. If a price is only available to users after authentication, then showing a price to search engines (logged out) would not be appropriate. The markup should match what’s visible on the page. If there’s no price shown, there should be no price markup.”

What’s The Problem With That Structured Data?

The price is visible to logged-in users, so technically the content (in this case, the product price) is available to those users. It’s a good question because a case can be made that the content shown to Google is available, much like content behind a paywall, only here it’s gated behind a login.

But that’s not good enough for Google, and the comparison to paywalls doesn’t hold because these are two different situations. Google judges what “on the page” means based on what logged-out users will see.

Google’s guideline about the structured data matching what’s on the page is unambiguous:

“Don’t mark up content that is not visible to readers of the page.

…Your structured data must be a true representation of the page content.”

This is a question that gets asked fairly frequently on social media and in forums so it’s good to go over it for those who might not know yet.
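
In practice, the fix is to generate markup from what the logged-out view actually renders. Here’s a small illustrative Python sketch, not taken from Google’s documentation, that builds Product JSON-LD and simply omits the offer when no price is publicly visible.

```python
import json
from typing import Optional


def product_jsonld(name: str, description: str, public_price: Optional[float]) -> str:
    """Build Product markup that mirrors the logged-out view of the page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
    }
    # Only mark up a price that logged-out visitors can actually see on the page.
    if public_price is not None:
        data["offers"] = {
            "@type": "Offer",
            "price": f"{public_price:.2f}",
            "priceCurrency": "USD",
        }
    return json.dumps(data, indent=2)


# Price sits behind a login, so the markup carries no price either.
print(product_jsonld("Example Widget", "Pricing available after login.", None))
```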

Read More

Confirmed CWV Reporting Glitch In Google Search Console

Google’s New Graph Foundation Model Improves Precision By Up To 40X

Featured Image by Shutterstock/ViDI Studio

Your Reviews Are Ranking You (Or Not): How to Stay Visible in Google’s AI Era

This post was sponsored by GatherUp. The opinions expressed in this article are the sponsor’s own.

If your business has a great local, word-of-mouth reputation but very few online reviews, does it even exist?

That’s the existential riddle facing local businesses and agencies in 2025.

With Google’s AI Overviews (AIOs) now reshaping the search experience, visibility isn’t just about being “the best.”

It’s about being part of the summary.

And reviews? They’re no longer just trust signals. They’re ranking signals.

This article breaks down what’s changing, what’s working, and how agencies can keep their clients visible across both traditional local search and Google’s evolving AI layer.

Reviews Are Now A Gateway To Search Inclusion

Reviews have long been seen as conversion tools, helping users decide between businesses they’ve already discovered. But that role is evolving.

In the era of Google’s AI Overviews (AIOs), reviews are increasingly acting as discovery signals, helping determine which businesses get included in the first place.

GatherUp’s 2024 Online Reputation Benchmark Report shows that businesses with consistent, multi-channel review strategies, especially those generating both first- and third-party reviews, saw stronger reputation signals across volume, recency, and engagement. These are the exact kinds of signals that Google’s systems now appear to prioritize in AI-generated results.

That observation is reinforced by recent industry research and leaked Google documentation, which suggest that review characteristics like click-throughs, content depth, and freshness contribute to both local pack visibility and AIO inclusion.

In other words, the businesses getting summarized at the top of the SERP aren’t just highly rated. They’re actively reviewed, broadly cited, and seen as credible across sources Google trusts.

Recency Is A Signal. “Relevance” Is Google’s Shortcut.

More than two-thirds of consumers say they prioritize recent reviews when evaluating a business. But Google doesn’t necessarily show them first.

Instead, Google’s “Most Relevant” filter may prioritize older reviews that match query terms, even if they no longer reflect the current customer experience.

That’s why it’s critical for businesses to maintain steady review velocity. A flood of reviews in January followed by silence for six months won’t cut it. The AI layer, and the human reader, needs signals that say “this business is active and trustworthy right now.”

For agencies, this presents an opportunity to shift client mindset from static review goals to ongoing review strategies.

Star Ratings Still Matter, But Mostly As A Decision Shortcut

During our recent webinar with Search Engine Journal, we explored how consumers are using star ratings to disqualify options, not differentiate them.

Research shows:

  • 73% of consumers won’t consider businesses with fewer than 4 stars
  • But 69% are still open to doing business with brands that fall short of a perfect 5.0, so long as the reviews are recent and authentic

In other words, people are looking for a “safe” choice, not a flawless one.

A few solid 4-star reviews with real detail from the past week often carry more weight than a dozen perfect ratings from 2021.

Agencies should help clients understand this nuance, especially those who are hesitant to request reviews out of fear of imperfection.

First-Party & Third-Party Reviews: Both Are Necessary

AI Overviews aggregate information from across the web, including structured data from your own website and unstructured commentary from others.

  • First-party reviews: These are collected and hosted directly on the business’s website. They can be marked up with schema, giving Google structured, machine-readable content to use in summaries and answer boxes.
  • Third-party reviews: These appear on platforms like Google, Yelp, Facebook, TripAdvisor, and Reddit. They’re often seen as more objective and are more frequently cited in AI Overviews.

Businesses that show up consistently across both types are more likely to be included in AIOs, and appear trustworthy to users.

GatherUp supports multi-source review generation, schema markup for first-party feedback, and rotating requests across platforms. This makes it easier for agencies to build a review presence that supports both local SEO and AIO visibility.
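
As an illustration of what schema markup for first-party feedback can look like, here’s a hedged Python sketch that turns reviews collected on a product or service page into AggregateRating JSON-LD. The property names follow schema.org as I understand them, and the sample data is a placeholder; validate the output with Google’s Rich Results Test and check Google’s review snippet guidelines for eligible page types.

```python
import json

# Placeholder first-party reviews collected on the business's own product/service page.
reviews = [
    {"author": "A. Customer", "rating": 5, "body": "Fast service and friendly staff."},
    {"author": "B. Customer", "rating": 4, "body": "Good work, slightly slow to respond."},
]

markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Service Package",  # placeholder item the reviews are about
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": round(sum(r["rating"] for r in reviews) / len(reviews), 2),
        "reviewCount": len(reviews),
    },
    "review": [
        {
            "@type": "Review",
            "author": {"@type": "Person", "name": r["author"]},
            "reviewRating": {"@type": "Rating", "ratingValue": r["rating"]},
            "reviewBody": r["body"],
        }
        for r in reviews
    ],
}

# Emit as a JSON-LD script block for the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")
```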

AIOs Pull From More Than Just Google Reviews

According to recent data from Whitespark, over 60% of citations in AI Overviews come from non-Google sources. This includes platforms like:

  • Reddit.
  • TripAdvisor.
  • Yelp.
  • Local blogs and industry-specific directories.

If your client’s reviews live only on Google, they risk being overlooked entirely.

Google’s AI is scanning for what it deems “experience-based” content, unfiltered, authentic commentary from real people. And it prefers to cross-reference multiple sources to confirm credibility.

Agencies should encourage clients to broaden their review footprint and seek mentions in trusted third-party spaces. Dynamic review flows, QR codes, and conditional links can help diversify requests without overburdening the customer.

Responses Influence Visibility & Build Trust

Review responses are no longer just a nice gesture. They’re part of the algorithmic picture.

GatherUp’s benchmark research shows:

  • 92% of consumers say responding to reviews is now part of basic customer service.
  • 73% will give a business a second chance if their complaint receives a thoughtful reply.

But there’s also a technical upside. When reviews are clicked, read, and expanded, they generate engagement signals that may impact local rankings. And if a business’s reply includes resolution details or helpful context, it increases the content depth of that listing.

For agencies juggling multiple clients, automation helps. GatherUp offers AI-powered suggested responses that retain brand tone and ensure timely replies, without sounding robotic.

How Agencies Can Make AIO Part Of Their Core Strategy

Google’s AI systems are designed to answer user questions directly, often without requiring a click. That means review content is increasingly shaping brand narratives within the SERP.

To adapt, agencies should align client visibility efforts across both search formats:

For Local Pack Optimization

  • Keep Google Business Profile listings fully updated (photos, categories, Q&A).
  • Build and maintain steady review velocity using email, SMS, and in-person requests.
  • Respond to reviews regularly, especially nuanced or negative ones.

For AIO Inclusion

  • Collect first-party reviews and mark them up with schema.
  • Rotate requests to third-party platforms based on vertical relevance.
  • Capture reviews with photo uploads and detailed descriptions.
  • Build unstructured citations through community involvement, media mentions, and event participation.

Download Our Complete Proactive Reputation Management Playbook for Digital Agencies for templates and workflows to operationalize this as a branded, revenue-generating service.

Reputation Is No Longer Separate From Rankings

AI Overviews now appear in nearly two-thirds of local business search queries. That means your clients’ next customers may form an impression—or make a decision—before ever clicking through to a website or map pack listing.

Visibility is no longer guaranteed. It’s earned through content, coverage, and credibility.

And reviews sit at the center of all three.

For agencies, this is a moment of opportunity. You already have the tools to guide clients through the shift. You know how to structure content, build citations, and amplify voices that resonate with customers.

Reputation management isn’t optional anymore. It’s infrastructure.

About GatherUp

GatherUp is the only proactive reputation management platform purpose-built for digital agencies. We help you build, manage, and defend your clients’ online reputations.

GatherUp supports:

  • First- and third-party review generation across multiple platforms,
  • Schema-marked up feedback collection for AIO relevance,
  • Intelligent, AI-assisted response workflows,
  • Seamless white-labeling for full agency control,
  • Scalable review operations tools that can help you manage 10 or 10,000 locations and clients.

Agencies who use GatherUp don’t just react to algorithm changes. They shape client visibility, and defend it.

To learn more, watch the full webinar for actionable strategies, data-backed insights, and examples of AIO-influenced local search in the wild.

Image Credits

Featured Image: Image by GatherUp. Used with permission.

Confirmed CWV Reporting Glitch In Google Search Console via @sejournal, @martinibuster

Google Search Console Core Web Vitals (CWV) reporting for mobile is experiencing a dip that is confirmed to be related to the Chrome User Experience Report (CrUX). Search Console CWV reports for mobile performance show a marked dip beginning around July 10, at which point the reporting appears to stop completely.

Not A Search Console Issue

Someone posted about it on Bluesky:

“Hey @johnmu.com is there a known issue or bug with Core Web Vitals reporting in Search Console? Seeing a sudden massive drop in reported URLs (both “good” and “needs improvement”) on mobile as of July 14.”

The person referred to July 14th, but that’s the date the reporting hit zero. The drop actually starts closer to July 10th, which you can see when you hover a cursor at the point that the drops begin.

Google’s John Mueller responded:

“These reports are based on samples of what we know for your site, and sometimes the overall sample size for a site changes. That’s not indicative of a problem. I’d focus on the samples with issues (in your case it looks fine), rather than the absolute counts.”

The person who initially started the discussion responded to inform Mueller that this isn’t just on his site, the peculiar drop in reporting is happening on other sites.

Mueller was unaware of any problem with CWV reporting, so he naturally assumed this was an artifact of normal changes in internet traffic and user behavior. His next response continued under the assumption that this wasn’t a widespread issue:

“That can happen. The web is dynamic and alive – our systems have to readjust these samples over time.”

Then Jamie Indigo responded to confirm she’s seeing it, too. 

“Hey John! Thanks for responding 🙂 It seems like … everyone beyond the usual ebb and flow. Confirming nothing in the mechanics have changed?”

At this point, it was becoming clear that this behavior wasn’t isolated to one site, and Mueller’s response to Jamie reflected this growing awareness. Mueller confirmed that nothing had changed on the Search Console side, leaving open the possibility that the issue lies on the CrUX side of Core Web Vitals reporting.

His response:

“Correct, nothing in the mechanics changed (at least with regards to Search Console — I’m also not aware of anything on the Chrome / CrUX side, but I’m not as involved there).”

CrUX CWV Field Data

CrUX is the acronym for the Chrome User Experience Report. It provides CWV data based on real website visits, collected from Chrome users who have opted in to sharing their browsing data.

Google’s Chrome For Developers page explains:

“The Chrome User Experience Report (also known as the Chrome UX Report, or CrUX for short) is a dataset that reflects how real-world Chrome users experience popular destinations on the web.

CrUX is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented.

CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites.”
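
If you want to check the CrUX side yourself rather than relying on the Search Console charts, the CrUX API lets you query the field data directly. Below is a minimal Python sketch assuming the public records:queryRecord endpoint and an API key (placeholder); the response field names are my reading of the API, so verify them against the docs.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: created in Google Cloud Console
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

payload = {"origin": "https://www.example.com", "formFactor": "PHONE"}  # placeholder origin
record = requests.post(ENDPOINT, json=payload, timeout=10).json().get("record", {})

# Each metric (e.g., largest_contentful_paint) reports a 75th-percentile value from real Chrome users.
for metric, data in record.get("metrics", {}).items():
    p75 = data.get("percentiles", {}).get("p75")
    print(f"{metric}: p75 = {p75}")
```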

Core Web Vitals Reporting Outage Is Widespread

At this point, more people joined the conversation, with Alan Bleiweiss offering both a comment and a screenshot showing the same complete drop-off in Search Console CWV reporting for other websites.

He posted:

“oooh Google had to slow down server requests to set aside more power to keep the swimming pools cool as the summer heats up.”

Here’s a closeup detail of Alan’s screenshot of a Search Console CWV report:

Screenshot Of CWV Report Showing July 10 Drop

I searched the Chrome Lighthouse changelog to see if there’s anything there that corresponds to the drop but nothing stood out.

So what is going on?

CWV Reporting Outage Is Confirmed

I next checked the X and Bluesky accounts of Googlers who work on the Chrome team and found that Barry Pollard, Web Performance Developer Advocate on Google Chrome, had posted about this issue last week.

Barry posted a note about a reporting outage on Bluesky:

“We’ve noticed another dip on the metrics this month, particularly on mobile. We are actively investigating this and have a potential reason and fix rolling out to reverse this temporary dip. We’ll update further next month. Other than that, there are no further announcements this month.”

Takeaways

  • Google Search Console Core Web Vitals (CWV) data drop: A sudden stop in CWV reporting was observed in Google Search Console around July 10, especially on mobile.
  • Issue is widespread, not site-specific: Multiple users confirmed the drop across different websites, ruling out individual site problems.
  • Origin of issue is not at Search Console: John Mueller confirmed there were no changes on the Search Console side.
  • Possible link to CrUX data pipeline: Barry Pollard from the Chrome team confirmed a reporting outage and mentioned a fix may be rolled out at an unspecified time in the future.

We now know that this is a confirmed issue. Google Search Console’s Core Web Vitals reports began showing a reporting outage around July 10, leading users to suspect a bug. The issue was later acknowledged by Barry Pollard as a reporting outage affecting CrUX data, particularly on mobile.

Featured Image by Shutterstock/Mix and Match Studio

Topic-First SEO: The Smarter Way To Scale Authority via @sejournal, @Kevin_Indig

Over the past few months, I’ve deeply analyzed how Google’s AI Overviews handle long-tail queries, dug into what makes brands visible in large language models (LLMs), and worked with brands trying to future-proof their SEO strategies.

Today’s Memo is the first in a two-part series where I’m covering a tactical deep dive into one of the most overlooked mindset shifts in SEO: optimizing for topics (not just keywords).

In this issue, I’m breaking down:

  • Why keyword-first SEO creates surface-level content and cannibalization.
  • What the actual differences are. (Isn’t this just a pillar-cluster approach? Nope.)
  • Thoughts from other pros across the web.
  • How to talk through these issues with your stakeholders, i.e., clients, the C-suite, and your teams (for premium subscribers).

And next week, I’ll cover how to build a topic map and operationalize a topic-first approach to SEO across your team.

If you’ve ever struggled to convince stakeholders to think beyond search volume or wondered how to grow authority, this memo’s for you.

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

At some point over the last year, it’s likely you’ve heard the guidance that keywords are out and topics are in.

The SEO pendulum has swung. If you haven’t already been optimizing for topics instead of keywords (and you really should have), now’s the time to finally start.

But what does that actually mean? How do we do it?

And how are we supposed to monitor topical performance?

With all this talk about LLM visibility, AI Overviews, AI Mode, query fan-out, entities, and semantics, when we optimize for topics, are we optimizing for humans, algorithms, or language models?

Personally, I think we’re making this more difficult than it has to be. I’ll walk you through how to optimize for (and measure/monitor) topics vs. keywords.

Why Optimizing For Topics > Keywords In 2025 (And Beyond)

If your team is still focused on keywords over topics, it’s time to explain the importance of this concept to the group.

Let’s start here: The traditional keyword-first approach worked when Google primarily ranked pages based on string matching. But in today’s search landscape, keywords are no longer the atomic unit of SEO.

Topics are.

In fact, we’re living through what you (and Kevin) might call the death of the keyword.

Think of it like this:

  • Topics are the foundation and framing of your site’s organic authority and visibility, like the blueprint and structure of a house.
  • Individual keywords are the bricks and nails that help build it, but optimizing for individual queries on their own, without topics to anchor them, doesn’t pull much weight.

If you focus only on keywords, it’s like obsessing over picking the right brick color without realizing the blueprint is incomplete.

But when you plan around (and optimize for) topics, you’re designing a structure that’s built to last – one that search engines and LLMs can understand as authoritative and comprehensive.

Google no longer sees a good search result as a direct match between a user’s query and a keyword on your page. That is some old SEO thinking that we all need to let go of completely.

Instead, search engines interpret intent and context, and then use language models to expand that single query into dozens of variations, a.k.a. query fan-out.
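
To make the fan-out idea concrete, here’s a rough sketch (not Google’s actual system) of expanding a single seed query into related variations with an LLM. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative choices.

```python
# Rough illustration of query fan-out: one seed query expanded into variations.
# This is not Google's system; it's a stand-in using a public LLM API.
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

seed_query = "best running shoes for flat feet"
prompt = (
    f"A user searched for: '{seed_query}'.\n"
    "List 10 related queries the same user might also ask, one per line."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# Collect the generated variations, stripping list markers and blank lines.
variations = [
    line.lstrip("-*0123456789. ").strip()
    for line in resp.choices[0].message.content.splitlines()
    if line.strip()
]
print(variations)
```

The point isn’t the tool; it’s that a page optimized for one exact phrase competes against a whole cloud of variations like these, which is what topic coverage is meant to absorb.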

That’s why a piecemeal approach to targeting SEO keywords based on search volume, stage of the search journey, or even bottom-of-funnel (BOF) or pain-point intent can be a waste of time.

And don’t get me wrong: Targeting queries that are BOF and solve core pain points of your audience is a wise approach – and you should be doing it.

But own the topics, and you can see your brand’s organic visibility outlast big algorithm changes.

Keyword-Only Thinking Limits Growth

And after all that, if it’s still a challenge convincing your stakeholders, clients, or team to pivot to topic-forward thinking, explain how keyword-only thinking limits growth.

Teams stuck in keyword-first mode often run into three problems:

  1. Surface-level content: Articles become thin, narrowly scoped, and easy to outcompete.
  2. Cannibalization: Content overlap happens often; articles compete with each other (and lose).
  3. Blind spots: You miss related subtopics, opportunities to tailor content to personas, and problems within the topic that your audience actually cares about.

On the other hand, a topic-first approach allows you to build deeper, more useful content ecosystems.

Your goal is not to just answer one query well; it’s to become a go-to resource for the entire subject area.

Understanding The Topic Maturity Path: Old Way Vs. New Way

Let’s take a closer look at how these two approaches are different from one another.

Image Credit: Kevin Indig

Old Way: Keyword-First SEO

The classic approach to SEO centered around picking individual keywords, assigning each one a page, and publishing content that aimed to rank for that phrase.

This model worked well when Google’s ranking signals were more literal (think string matching, backlink anchor text, and on-page optimization carrying most of the weight).

But in 2025 and beyond, this approach is showing its age.

Keyword-first SEO often looks like this:

  • Minimal internal cohesion across pages; articles aren’t working together to build topic depth or reinforce semantic signals.
  • Content decisions are often driven by average monthly search and tool-based keyword difficulty scores, rather than intent or persona-specific needs.
  • A high-effort, low-durability content-first SEO strategy; posts may rank initially and hold for a while, but they rarely stick or scale.
  • Monitoring performance is often focused on traffic projections and done by page type, SEO-tool-informed intent type, query rankings, and (yes) even sometimes topic groups.

But even when teams adopt a topic cluster-first model (like grouping related keywords into topic clusters or deploying a topic-focused pillar + cluster strategy), they often stay tethered to outdated keyword logic.

The result? Surface-level coverage for single keywords, frequent content cannibalization, and a site structure that might seem organized but still lacks strategic topic optimization.

Without persona insights, or a clear content hierarchy built around core topics, you’re building with bricks, but no real authority blueprint.

Wait a second. Is optimizing for topics any different from the classic pillar + topic cluster approach?

Yes and no.

A pillar + cluster model (a.k.a. hub and spoke) is a framework that can organize a topical approach.

But strategists should shift from matching pages to exact keywords → covering concepts deeply.

This classic framework can support topic optimization, but only if it’s implemented with a topic-first mindset.

Here are the primary differences:

  • Keyword-driven pillar + cluster model: Pillar = covers seed keyword target(s); clusters = cover long-tail variations of seed keywords.
  • Topic-driven pillar + cluster model: Pillar = offers a comprehensive guide to the topic; clusters = provide in-depth support for key concepts, different personas, related problems, and unique angles.

Simply selecting high-volume keywords to optimize for in your pillar + cluster strategy plan doesn’t work like it used to.

So, a pillar + cluster plan can help you organize your approach, but you’ll need to cover your core topics with depth and from a variety of perspectives, and for each persona in your target audience.
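
As a rough illustration of that difference, a topic-driven pillar + cluster plan can be sketched as a small data structure; the topic, personas, and angles below are invented placeholders, not a recommendation for any specific niche.

```python
# Illustrative only: a topic-driven pillar + cluster plan as a data structure.
# The topic, personas, and cluster angles are invented placeholders.
topic_plan = {
    "pillar": "Email deliverability",
    "clusters": [
        {"angle": "Key concept", "title": "How SPF, DKIM, and DMARC work together"},
        {"angle": "Persona: startup founder", "title": "Deliverability basics before your first campaign"},
        {"angle": "Persona: enterprise email admin", "title": "Monitoring deliverability at scale"},
        {"angle": "Related problem", "title": "Diagnosing a sudden drop in inbox placement"},
        {"angle": "Unique data", "title": "What 1,000 bounce logs taught us about spam filters"},
    ],
}

# Each cluster covers a concept, persona, problem, or unique angle - not a keyword variant.
for cluster in topic_plan["clusters"]:
    print(f'{cluster["angle"]}: {cluster["title"]}')
```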

New Way: Topic-First SEO

Your future-proof SEO strategy doesn’t start with a focus on keywords; it starts with focusing on your target people, their problems, and the topics they care about.

Topic-first SEO approaches content through the lens of the real-world solutions your brand provides through your products and services.

You build authority by exploring a topic (one that you can directly speak to with authority) from all relevant angles: different personas, intent types, pain points, industry sectors, and contexts of use.

But keep in mind: Topic-first SEO is not exactly a page volume game, although the breadth and depth of your topic coverage are crucial.

Topic-first SEO involves:

  • Covering your core, targeted topics across personas.
  • Investing in “zero-volume” content based on actual questions and needs your target audience has.
  • Producing content within your topic that offers different perspectives and hot takes.
  • Building authority with information gain: i.e., new, fresh data that offers unique insights within your core targeted topics.

And guess what? This approach aligns with how Google now understands and ranks content:

  1. Entities > keywords: Google doesn’t just match “search strings” anymore. It understands concepts and audiences (and how they’re related) through the knowledge graph.
  2. Content built around people, problems, and questions: You’re not answering one query when you optimize for a topic as a whole; you’re solving layered, real-world challenges for your audience.
  3. Content journeys, not isolated posts: Topic-first strategies map content to different user types and their stage in the journey (from learning to buying to advocating).
  4. More durable visibility + stronger links: When your site deeply reflects a topic and tackles it from all angles, it attracts both organic queries and natural backlinks from people referencing real insight and utility.
  5. That E-E-A-T we’re all supposed to focus on: Kevin discusses this a bit more when he digs into Google Quality Rater Guidelines in building and measuring brand authority. But this is an absolute no-brainer: Taking a topic-first approach actively works toward establishing Experience, Expertise, Authoritativeness, and Trustworthiness.

I wanted to know how others are doing this, so over on LinkedIn, I asked for your thoughts and questions.

Here are some that stuck out to me that I think we can all benefit from considering:

Lily Grozeva asks: “Is covering a topic and establishing a brand as an authority on it still a volume game?”

My answer: No. I think Backlinko is a good example. The site built incredible visibility with just a few, but very deep, guides.

Image Credit: Kevin Indig

Diego Gallo asks: “Any tips on how to decide if a question should belong to a page or be its own page?”

My answer: “In my experience, one way to determine that is cosine similarity between the (tokenized, embedded) question and the main topics / intents of the pages that you can pick from.”
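
To make that concrete, here’s a minimal sketch of the cosine-similarity idea, assuming the sentence-transformers library and the open all-MiniLM-L6-v2 model; the question, candidate page topics, and threshold are illustrative.

```python
# Minimal sketch: embed a question and candidate page topics, then use
# cosine similarity to decide whether the question fits an existing page.
# Assumptions: `pip install sentence-transformers numpy`; the example texts
# and the 0.5 threshold are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

question = "How do I measure topical authority?"
page_topics = [
    "Building topical authority with content clusters",
    "Technical SEO audit checklist",
    "Measuring SEO performance and reporting",
]

# Encode everything into dense vectors (one per text).
vectors = model.encode([question] + page_topics)
q_vec, topic_vecs = vectors[0], vectors[1:]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(q_vec, t) for t in topic_vecs]
best = int(np.argmax(scores))

# Rule of thumb: a weak best match suggests the question deserves its own page.
if scores[best] >= 0.5:
    print(f"Fold the question into: {page_topics[best]} (similarity {scores[best]:.2f})")
else:
    print("Low similarity everywhere - consider a dedicated page.")
```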

Diego also left a good tip for covering all relevant intents: Build an “intent template” for each page (e.g., product landing page, blog article, etc.). Base the template on what works well on Google.

Image Credit: Kevin Indig

Matthew Mellinger called out that you can use Google’s People Also Ask questions to get clarity on which questions to answer on a page.

Image Credit: Kevin Indig

By the way, you can also use the intent classifier I built for premium subscribers for this task!

Gianluca Fiorelli put a cool analogy on the table:

Image Credit: Kevin Indig

Not Strategizing With A Topic-First Mindset? You’re Outdated

Next week, we’re going to take a deep look at operationalizing a topic-first SEO strategy, but here are some final thoughts.

While there are so many unknowns in the current search landscape, there are a few truths we can ground ourselves in, whether optimizing for search engines or LLMs:

1. Your brand can still own a topic in the AI era.

As shown in the data from the UX study of AIOs, brand/authority is now the first gate users walk through when considering a click off the SERP, search intent relevance is the second, and snippet wording only matters once trust is secured.

If people are going to click, they’re going to click on the familiar and authoritative. Be the topical authority in your areas of expertise and offerings.

Have your brand show up again, and again, and again in search results across the topic. It’s simple, but it’s hard work.

2. I don’t think focusing on a topic-first mindset could backfire in any way (in 2025 or beyond).

Use authoritative, branded website content to demonstrate to your core ICPs – whether they find you via paid ads, organic search, LLM chats, socials, or word-of-mouth – that you understand the topics they care about and the questions they have, and that you provide solutions for their specific needs. That only builds trust … no matter how your brand is found.

3. Build topic systems, not just articles or pages.

Integrators need to take a page out of the aggregator’s product-led SEO playbook: Create a comprehensive system (similar to TripAdvisor’s millions of programmatic pages supported by user-generated content (UGC) reviews, but you don’t need millions 😅) built around your topics of expertise that tackles perspectives, solutions, and questions for each persona type in each sector you serve.

Build the organizational structure within your site that makes these topics and personas easy to navigate for users (and easy to crawl and understand for bots/agents).

4. Persona or ICP-based content is more useful, less generic, and built for the next era of personalized search results.

‘Nuff said. If you strategize topic optimization through the lens of personas (even to the point of including real interviews, surveys, comments, and tips from these persona types), you’re adding to the conversation with depth and unique data.

If you’re not building audience-first content, does optimizing for LLMs and search bots even matter? You’ll gain visibility, but will you gain trust once you finally earn that click?


Featured Image: Paulo Bobita/Search Engine Journal

Google Says AI Won’t Replace The Need For SEO via @sejournal, @martinibuster

Google’s John Mueller and Martin Splitt discussed the question of whether AI will replace the need for SEO. Mueller expressed a common-sense opinion about the reality of the web ecosystem and AI chatbots as they exist today.

Context Of Discussion

The context of the discussion was about SEO basics that a business needs to know. Mueller then mentioned that businesses might want to consider hiring an SEO who can help navigate the site through its SEO journey.

Mueller observed:

“…you also need someone like an SEO as a partner to give you updates along the way and say, ‘Okay, we did all of these things,’ and they can list them out and tell you exactly what they did, ‘These things are going to take a while, and I can show you when Google crawls, we can follow along to see like what is happening there.’”

Is There Value In Learning SEO?

It was at this point that Martin Splitt asked if generative AI will make having to learn SEO obsolete or whether entering a prompt will give all the answers a business person needs to know. Mueller’s answer was tethered to how things are right now and avoided speculating about how things will change in a year or more.

Splitt asked:

“Okay, I think that’s pretty good. Last but not least, with generative AI and chatbot AI things happening. Do you think there’s still a value in learning these kind of things? Or can I just enter a prompt and it’ll figure things out for me?”

Mueller affirmed that knowing SEO will still be needed as long as there are websites because search engines and chatbots need the information that exists on websites. He offered examples of local businesses and ecommerce sites that still need to be found, regardless of whether that’s through an AI chatbot or search.

He answered:

“Absolutely value in learning these things and in making a good website. I think there are lots of things that all of these chatbots and other ways to get information, they don’t replace a website, especially for local search and ecommerce.

So, especially if you’re a local business, maybe it’s fine if a chatbot mentions your business name and tells people how to get there. Maybe that’s perfectly fine, but oftentimes, they do that based on web content that they found.

Having a website is the basis for being visible in all of these systems, and for a lot of other things where you offer a service or something, some other kind of functionality on a website where you have products to sell, where you have subscriptions or anything, a chat response can’t replace that.

If you want a t shirt, you don’t want a description of how to make your own t-shirt. You want a link to a store where it’s like, ‘Oh, here’s t-shirt designs,’ maybe t-shirt designs in that specific style that you like, but you go to this website and buy those t-shirts there.”

Martin acknowledged the common sense of that answer and they joked around a bit about Mueller hoping that an AI will be able to do his job once he retires.

That’s the context for this part of their conversation:

“Okay. That’s very fair. Yeah, that makes sense. Okay, so you think AI is not going to take it all away from us?”

And Mueller answers with the comment about AI replacing him after he retires:

“Well, we’ll see. I can’t make any promises. I think, at some point, I would like to retire, and then maybe AI takes over my work then. But, like, there’s lots of stuff to be done until then. There are lots of things that I imagine AI is not going to just replace.”

What About CMS Platforms With AI?

Something that wasn’t discussed is the trend of AI within content management systems. Many web hosts and WordPress plugins are already integrating AI into the workflow of creating and optimizing websites. Wix has already integrated AI into its workflow, and it won’t be much longer until AI has a stronger presence within WordPress, which is what the new WordPress AI team is working on.

Screenshot Of ChatGPT Choosing Number 27

Will AI ever replace the need for SEO? Many easy tasks that can be scaled are already automated. However, many of the best ideas for marketing and communicating with humans are still best handled by humans, not AI. The nature of generative AI, which is to generate the most likely answer or sequence of words in a sentence, precludes it from ever having an original idea. AI is so locked into being average that if you ask it to pick a number between one and fifty, it will often choose 27 because its training binds it to picking the likeliest number, even when instructed to randomize the choice.

Listen to Search Off The Record at about the 24-minute mark:

Featured Image by Shutterstock/Roman Samborskyi