Google’s Robby Stein Names 5 SEO Factors For AI Mode via @sejournal, @martinibuster

Robby Stein, Vice President of Product for Google Search, recently sat down for an interview where he answered questions about how Google’s AI Mode handles quality, how Google evaluates helpfulness, and how it leverages its experience with search to identify which content is helpful, including metrics like clicks. He also outlined five quality SEO-related factors used for AI Mode.

How Google Controls Hallucinations

Stein answered a question about hallucinations, where an AI presents false information as fact in its answers. He said that the quality systems within AI Mode are based on everything Google has learned about quality from 25 years of experience with classic search. The systems that determine which links to show and whether content is good are encoded within the model and draw on that experience.

The interviewer asked:

“These models are non-deterministic and they hallucinate occasionally… how do you protect against that? How do you make sure the core experience of searching on Google remains consistent and high quality?”

Robby Stein answered:

“Yeah, I mean, the good news is this is not new. While AI and generative AI in this way is frontier, thinking about quality systems for information is something that’s been happening for 20, 25 years.

And so all of these AI systems are built on top of those. There’s an incredibly rigorous approach to understanding, for a given question, is this good information? Are these the right links? Are these the right things that a user would value?

What’s all the signals and information that are available to know what the best things are to show someone. That’s all encoded in the model and how the model’s reasoning and using Google search as a tool to find you information.

So it’s building on that history. It’s not starting from scratch because it’s able to say, oh, okay, Robbie wants to go on this trip and is looking up cool restaurants in some neighborhood.

What are the things that people who are doing that have been relying on on Google for all these years? We kind of know what those resources are we can show you right there. And so I think that helps a lot.

And then obviously the models, now that you release the constraint on layout, obviously the models over time have also become just better at instruction following as well. And so you can actually just define, hey, here are my primitives, here are my design guidelines. Don’t do this, do this.

And of course it makes mistakes at times, but I think just the quality of the model has gotten so strong that those are much less likely to happen now.”

Stein’s explanation makes clear that AI Mode is encoded with everything learned from Google’s classic search systems rather than being a rebuild from scratch or a break from them. The risk of hallucinations is managed by grounding AI answers in the same relevance, trust, and usefulness signals that have underpinned classic search for decades. Those signals continue to determine which sources are considered reliable and which information users have historically found valuable. Accuracy in AI search follows from that continuity, with model reasoning guided by longstanding search quality signals rather than operating independently of them.

How Google Evaluates Helpfulness In AI Mode

The next question is about the quality signals that Google uses within AI Mode. Robby Stein’s answer explains that the way AI Mode determines quality is very much the same as with classic search.

The interviewer asked:

“And Robbie, as search is evolving, as the results are changing and really, again, becoming dynamic, what signals are you looking at to know that the user is not only getting what they want, but that is the best experience possible for their search?”

Stein answered:

“Yeah, there’s a whole battery of things. I mean, we look at, like we really study helpfulness and if people find information helpful.

And you do that through evaluating the content kind of offline with real people. You do that online by looking at the actual responses themselves.

And are people giving us thumbs up and thumbs downs?

Are they appreciating the information that’s coming?

And then you kind of like, you know, are they using it more? Are they coming back? Are they voting with their feet because it’s valuable to you.

And so I think you kind of triangulate, any one of those things can lead you astray.

There’s lots of ways that, interestingly, in many products, if the product’s not working, you may also cause you to use it more.

In search, it’s an interesting thing.

We have a very specific metric that manages people trying to use it again and again for the same thing.

We know that’s a bad thing because it means that they can’t find it.

You got to be really careful.

I think that’s how we’re building on what we’ve learned in search, that we really feel good that the things that we’re shipping are being found useful by people.”

Stein’s answer shows that AI Mode evaluates success using the same core signals used for search quality, even as the interface becomes more dynamic. Usefulness is not inferred from a single engagement signal but from a combination of human evaluation, explicit feedback, and behavioral patterns over time.

Importantly, Stein notes that heavy usage, presumably within a single session, is not treated as success on its own, since repeated attempts to answer the same query indicate failure rather than satisfaction. The takeaway is that AI Mode’s success is judged by whether users are satisfied, and that it uses quality signals designed to detect friction and confusion as much as positive engagement. This preserves continuity with classic search rather than redefining what usefulness means.

Five Quality Signals For AI Search

Lastly, Stein answered a question about ranking in AI-generated search results and whether SEO best practices still help. His answer includes five factors used to determine whether a website meets Google’s quality and helpfulness standards.

Stein answered:

“The core mechanic is the model takes your question and reasons about it, tries to understand what you’re trying to get out of this.

It then generates a fan-out of potentially dozens of queries that are being Googled under the hood. That’s approximating what information people have found helpful for those questions.

There’s a very strong association to the quality work we’ve done over 25 years.

Is this piece of content about this topic?

Has someone found it helpful for the given question?

That allows us to surface a broader diversity of content than traditional Search, because it’s doing research for you under the hood.

The short of it is the same things apply.

  1. Is your content directly answering the user’s question?
  2. Is it high quality?
  3. Does it load quickly?
  4. Is it original?
  5. Does it cite sources?

If people click on it, value it, and come back to it, that content will rank for a given question and it will rank in the AI world as well.”
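
As a rough illustration of the fan-out mechanic Stein describes, here is a toy sketch: one question is expanded into several sub-queries, each sub-query is run against a small document set, and the results are merged. This is my simplified approximation for explanatory purposes, not Google’s implementation; the documents, expansion rules, and scoring are all made up.

```python
# Toy approximation of the "query fan-out" mechanic: expand one question into
# several sub-queries, retrieve documents for each, and merge the results.
# The documents, expansion rules, and scoring are all made up for illustration.

documents = {
    "doc1": "best ramen restaurants in brooklyn open late",
    "doc2": "how to make ramen broth at home",
    "doc3": "cheap eats and noodle bars in brooklyn neighborhoods",
}

def fan_out(question: str) -> list:
    """Hypothetical expansion of a single question into narrower sub-queries."""
    return [
        question,
        question + " reviews",
        question + " open now",
        "best " + question,
    ]

def search(query: str) -> list:
    """Naive keyword overlap standing in for a real search backend."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in documents.items() if terms & set(text.split())]

def answer(question: str) -> list:
    """Merge results, ranking documents by how many sub-queries retrieved them."""
    hits = {}
    for sub_query in fan_out(question):
        for doc_id in search(sub_query):
            hits[doc_id] = hits.get(doc_id, 0) + 1
    return sorted(hits, key=hits.get, reverse=True)

print(answer("ramen restaurants brooklyn"))
```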

Watch the interview starting at about the one hour and twenty-three minute mark:

Let’s Be Honest About The Ranking Power Of Links via @sejournal, @martinibuster

What link building should be trying to accomplish, in my opinion, is proving that a site is trustworthy and making sure the machine understands what topic your web pages fit into. The way to communicate trustworthiness is to be careful about what sites you obtain links from and to be super careful about what sites your site links out to.

Context Of Links Matter

Maybe it doesn’t have to be said but I’ll say it: It’s important now more than ever that the page your link is on has relevant content on it and that the context for your link is an exact match for the page that’s being linked to.

Outgoing Links Can Signal A Site Is Poisoned

Also make sure that the outgoing links are to legitimate sites, not to sites that are low quality or in problematic neighborhoods. If those kinds of links are anywhere on the site it’s best to consider the entire site poisoned and ignore it.

The reason I say to consider the site poisoned is the link distance ranking algorithm concept where inbound links tell a story about how trustworthy a site is. Low quality outbound links are a signal that something’s wrong with the site. It’s possible that a site like that will have its ability to pass PageRank removed.

Reduced Link Graph

This is how the Reduced Link Graph works. The link graph can be thought of as a map of the internet, with websites connected to each other by links. When the spammy sites are kicked out and only the legit sites are kept for ranking purposes and link propagation, what remains is called the reduced link graph.
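
To make the idea concrete, here is a minimal sketch of how a reduced link graph could be derived: start with a link graph, drop the nodes flagged as spam, and keep only the links between the remaining sites. This is my illustration of the concept, with made-up domains, not any search engine’s actual implementation.

```python
# Minimal sketch of a reduced link graph (illustrative only, not Google's implementation).
# The graph maps each site to the sites it links out to; the "spam" labels are assumed known.

link_graph = {
    "legit-blog.example": ["trusted-news.example", "spammy-casino.example"],
    "trusted-news.example": ["legit-blog.example", "university.example"],
    "spammy-casino.example": ["spammy-pills.example"],
    "spammy-pills.example": ["spammy-casino.example"],
    "university.example": ["trusted-news.example"],
}

spam_sites = {"spammy-casino.example", "spammy-pills.example"}

def reduce_link_graph(graph, spam):
    """Remove spam sites and any links pointing to them."""
    return {
        site: [target for target in targets if target not in spam]
        for site, targets in graph.items()
        if site not in spam
    }

reduced = reduce_link_graph(link_graph, spam_sites)
for site, targets in reduced.items():
    print(site, "->", targets)
```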

Search engines are at a point where they can rank websites based on the content alone. Links still matter but the content itself is now the highest level ranking factor. I suspect that in general the link signal isn’t very healthy right now. Fewer people are blogging across all topics. Some topics have a healthy blogging ecosystem but in general there aren’t professors blogging about technology in the classroom and there aren’t HR executives sharing workplace insights and so on like there used to be ten or fifteen years ago.

Links for Inclusion

I’m of the opinion that links increasingly are useful for determining whether a site is legit, high quality, and trustworthy, deeming it worthy for consideration in the search results. In order to stay in the SERPs it’s important to think about the outbound links on your site and the sites you obtain links from. Think in terms of reduced link graphs, with spammy sites stuck on the outside within their own spammy cliques and the non-spam sites on the inside within the trusted Reduced Link Graph.

In my opinion, you must be in the trusted Reduced Link Graph in order to stay in play.

Is Link Building Over?

Link building is definitely not over. Links are still important. What needs to change is how links are acquired. The age of blasting out emails at scale is over. There aren’t enough legitimate websites to make that worthwhile. It’s better to be selective and targeted about which sites you get a (free) link from.

Something else that’s becoming increasingly important is citations, other sites talking about your site. An interesting thing right now is that sponsored articles, sometimes known as native advertising, will get cited in AI search engines, including Google AI Overviews and AI Mode. This is a great way to get a citation in a way that will not hurt your rankings as long as the sponsored article is clearly labeled as sponsored and the outbound links are nofollowed.

Takeaways

  • Links As Trust And Context Signals, Not Drivers Of Ranking
    Links increasingly function to confirm that a site is legitimate and topically aligned, rather than to directly push rankings through volume or anchor text manipulation as in the old days.
  • The Reduced Link Graph Matters
    Search engines filter out spammy or low-quality sites, leaving a smaller trusted network where links and associations still count. Being outside this trusted graph puts sites at risk of exclusion.
  • Content Matters, Links Qualify
    Search engines can rank many pages based on content alone, but links can still act as a gatekeeper for credibility and inclusion, especially for competitive topics.
  • Outbound Links Are A Risk Signal
    Linking out to low-quality or problematic sites can damage a site’s perceived trustworthiness and its ability to pass value.
  • Traditional Link Building Is Obsolete
    Scaled outreach, anchor text strategies, and chasing volume are ineffective in an AI-driven search environment.
  • Citations Are Rising In Importance
    Mentions and discussions of a website can help it rank better in AI search engines.
  • Sponsored Articles
    Sponsored articles that are properly labeled as sponsored content and contain nofollowed links are increasingly surfaced in AI search features and contribute to visibility.

Link building is still relevant, but not in the way it used to be. Its function now is likely more about establishing whether a site is legitimate and clearly associated with a real topic area, not to push rankings through volume, anchors, or scale. Focusing on clean outbound links, selective relationships with trusted sites, and credible citations keeps a site inside the trusted reduced link graph, which is the condition that allows strong content to compete and appear in both traditional search results and AI-driven search surfaces.

Featured Image by Shutterstock/AYO Production

Google Says What To Tell Clients Who Want SEO For AI via @sejournal, @martinibuster

Google’s Danny Sullivan offered advice to SEOs who have clients asking for updates on what they’re going to do for AI SEO. He acknowledged it’s easier to give the advice than it is to have to actually tell clients, but he also said that advancements in content management systems drive technical SEO into the background, enabling SEOs and publishers to focus on the content.

What To Tell Clients

Danny Sullivan acknowledged that SEOs are in a tough spot with clients. He didn’t suggest specifics for how to rank better in AI search (although later in the podcast he did offer suggestions for what to do to rank better in AI search).

But he did offer suggestions for what to tell clients.

Danny explained:

“And the other thing is, and I’ve seen a number of people remark on this, is this concern that, well, I’ve been doing SEO, but now I’m getting clients or people saying to me, but I need the new stuff. I need the new stuff. And I can’t just tell them it’s the same old stuff.

So I don’t know if you feel like you need to dress it up a bit more, but I think the way you dress it up is to say, These are continuing to be the things that are going to make you successful in the long-term. I get you want the fancy new type of thing, but the history is that the fancy new type of thing doesn’t always stick around if we go off and do these particular types of things…

I’m keeping an eye on it, but right now, the best advice I can tell you when it comes to how we’re going to be successful with our AEO is that we continue on doing the stuff that we’ve been doing because that is what it’s built on.

Which is easy for me to say ’cause I don’t got someone banging on the door to say, Well, actually we do. And so we are doing that.

So that’s why, as part of the podcast, it’s just to kind of reassure that, look, just because the formats are changing didn’t mean you have to change everything that you had to do and that everything you had to shift around.”

Downside Of Prioritizing AEO/GEO For AI Search Visibility

There are many in the SEO community who are suggesting fairly spammy tactics for ranking better in AI chatbots like ChatGPT, such as creating listicles that recommend themselves as the best whatever. Others are doing things like tweaking keyword phrases, the kind of thing SEOs stopped doing by 2005 or 2006.

The problem with making dramatic changes to content in order to rank better in chatbots is that ChatGPT, Perplexity, and Anthropic Claude each account for only a fraction of a percent of search traffic share, with Claude close to zero and ChatGPT estimated at 0.2%–0.5%.

So it absolutely makes zero sense to prioritize AEO/GEO over Google and Bing search at this point because the return on the investment is close to zero. It’s a different story when it comes to Google AI Overviews and AI Mode, but the underlying ranking systems for both AI interfaces remain Google’s classic search.

Danny shared that focusing on things that are specific to AI risks complicating what should be simple.

Google’s Danny Sullivan shared:

“And in fact, that the more that you dramatically shift things around, and start doing something completely different, or the more that you start thinking I need to do two different things, the more that you may be making things far more complicated, not necessarily successful in the long term as you think they are.”

Technical SEO Is Needed Less?

John Mueller followed up by mentioning that the advanced state of content management systems today means that SEOs and publishers no longer have to spend as much time on technical SEO issues because most CMS’s have the basics of SEO handled virtually out of the box. Danny Sullivan said that this frees up SEOs and creators to focus on their content, which he insisted will be helpful for ranking in AI search surfaces.

John Mueller commented:

“I think that makes a lot of sense. I think one of the things that perhaps throws SEOs off a little bit is that in the early days, there was a lot of almost like a technical transition where people initially had to do a lot of technical specific things to make their site even kind of accessible in search. And at some point nowadays, I think if you’re using a popular CMS like WordPress or Wix or any of them, basically you don’t have to worry about any of those technical details.

So it’s almost like that technical side of things is a lot less in the foreground now, and you can really focus on the content, and that’s really what users are looking for. So it’s like that, almost like a transition from technical to content side with regards to SEO.”

This echoed a previous statement from earlier in the podcast where Danny remarked on how some people have begun worrying less about SEO and focusing on content.

Danny said:

“But we really just want you to focus on your content and not really worry about this. If your content is on the web and generally accessible as most people’s content is, that’s it.

I’ve actually been heartened that I’ve seen a number of people saying things like: I don’t even want to think about this SEO stuff anymore. I’m just getting back into the joy of writing blogs.

I’m like, yes, great. That’s what we want you to do. That’s where we think you’re going to find your most success.”

Listen to Danny Sullivan’s remarks at about the 8 minute mark:

Featured Image by Shutterstock/Just dance

Google Explains How To Rank In AI Search via @sejournal, @martinibuster

Google’s John Mueller and Danny Sullivan discussed their thoughts on AI search and what SEOs and creators should be doing to make sure their content is surfaced. Danny showed some concern for folks who were relying on commodity content that is widely available.

What Creators Should Focus On For AI

John Mueller asked Danny Sullivan what publishers should be focusing on right now that’s specific to AI. Danny answered by explaining what kind of content you should not focus on and what kind of content creators should be focusing on.

He explained that the kind of content that creators should not focus on is commodity content. Commodity content is web content that consists of information that’s widely available and offers no unique value, no perspective, and requires no expertise. It is the kind of content that’s virtually interchangeable with any other site’s content because they are all essentially generic.

While Danny Sullivan did not mention recipe sites, his discussion about commodity content immediately brought recipe sites to mind because those kinds of sites seemingly go out of their way to present themselves as generically as possible, from the way the sites look, the “I’m just a mom of two kids” bio, and the recipes they provide. In my opinion, what Danny Sullivan said should make creators consider what they bring to the web that makes them notable.

To explain what he meant by commodity content, Danny used the example of publishers who used to optimize a web page for the time that the Super Bowl game began. His description of the long preamble they wrote before giving the generic answer of what time the Super Bowl starts reminded me again of recipe sites.

At about the twelve minute mark John Mueller asked Danny:

“So what would you say web creators should focus on nowadays with all of the AI?”

Danny answered:

“A key thing is to really focus on is the original aspect. Not a new thing.

These are not new things beyond search, but if you’re really trying to reframe your mind about what’s important, I think that on one hand, there’s a lot of content that is just kind of commodity content, factual information, and I think that the… LLM, AI systems are doing a good job of presenting that sort of stuff.

And it’s not originating from any type of thing.

So the classic example, as you know, will make people laugh, …but every year we have this little American football thing called the Super Bowl, which is our big event.

…But no one ever can seem to remember what time it’s on.

…Multiple places would then all write their “what time does the Super Bowl start in 2011?” post. And then they would write these giant long things.

…So, you know, and then at some point, we could see enough information and we have data feeds and everything else that we just kind of said, you do a search and …the Super Bowl is going to be at 3:30.

…I think the vast majority of people say, that’s a good thing. Thank you for just telling me the time of the Super Bowl.

It wasn’t super original information.”

Commodity Content Is Not Your Strength

Next Danny considered some of the content people are publishing today, encouraging them to think about the generic nature of their content and to give some thought to how they can share something more original and unique.

Danny continued his answer:

“I think that is a thing people need to understand, is that more of this sort of commodity stuff, it isn’t going to necessarily be your strength.

And I do worry that some people, even with traditional SEO, focus on it too much.

There are a number of sites I know from the research and things that I’ve done that get a huge amount of traffic for the answer to various popular online word-solving games.

It’s just every day I’m going to give you the answer to it. …and that is great. Until the system shifts or whatever, and it’s common enough, or we’re pulling it from a feed or whatever, and now it’s like, here’s the answer.”

Bring Your Expertise To AI

Danny next suggested that people who are concerned about showing up in AI should start exploring how to express their authentic experience or expertise. He said this advice applies not just to text content but to video and podcast content as well.

He continued:

“Your original voice is that thing that only you can provide. It’s your particular take.

And so that’s what we think was our number one thing when we’re telling people is like, this is what we think your strength is going to be.

As we go into this new world, is already what you should be doing, but this is what your strength that you should be doing is focus on that original content.

I think related to that is this idea that people are also seeking original content that’s, …authentic to them, which typically means it’s a video, it’s a podcast…

…And you’ve seen that in the search we’ve already done, where we brought in more social, more experiential content.  Not to take away from the expert takes, it’s just that people want that.

Sometimes you’re just wanting to know someone’s firsthand experience alongside some expert take on it as well.

But if you are providing those expert takes, you’re doing reviews or whatever, and you’ve done that in the written form, you still have the opportunity to be doing those in videos and podcasts and so on.  Those are other opportunities.

So those are things that, again, it’s not unique to the AI formats, but they just may be, as you’re thinking about, how do I reevaluate what I’m doing overall in this era, that these are things you may want to be considering with it from there.”

John Mueller agreed that it makes sense to bring your unique voice to content in order to make it stand out. Danny’s point treats visibility in AI-driven search as a matter of differentiation rather than optimization. The emphasis is not on adapting content to a new format, but on creating a recognizable voice and perspective with which to stand out. Given that AI Search is still classic search under the hood, it makes sense to stand out from competitors with unique content that people will recognize and recommend.

Listen to the passage at around the twelve minute mark:

Featured Image by Shutterstock/Asier Romero

Google’s AI Mode Personal Context Features “Still To Come” via @sejournal, @MattGSouthern

Seven months after Google teased “personal context” for AI Mode at Google I/O, Nick Fox, Google’s SVP of Knowledge and Information, says the feature still is not ready for a public rollout.

In an interview with the AI Inside podcast, Fox framed the delay as a product and permissions issue rather than a model-capability issue. As he put it: “It’s still to come.”

What Google Promised At I/O

At Google I/O, Google said AI Mode would “soon” incorporate a user’s past searches to improve responses. It also said you would be able to opt in to connect other Google apps, starting with Gmail, with controls to manage those connections.

The idea was that you wouldn’t need to restate context in every query if you wanted Google to use relevant details already sitting in your account.

On timing, Fox said some internal testing is underway, but he did not share a public rollout date:

“Some of us are testing this internally and working through it, but you know, still to come in terms of the in terms of the public roll out.”

You can hear the question and Fox’s response in the video below starting around the 37-minute mark:

AI Mode Growth Continues Without Personal Context

Even without that personalization layer, Fox pointed to rapid adoption, describing AI Mode as having “grown to 75 million daily active users worldwide.”

The bigger change may be in how people phrase queries. Fox described questions that are “two to three times as long,” with more explicit first-person context.

Instead of relying on AI Mode to infer intent, people are writing the context into the prompt, Fox says:

“People are trying to put put the right context into the query”

That matters because the “personal context” feature was designed to reduce that manual effort.

Geographic Patterns In Adoption

Adoption also appears uneven by market, with the strongest traction in regions that received AI Mode first. Fox described the U.S. as the most “mature” market because the product has had more time to become part of people’s routines.

He also pointed to strong adoption in markets where the web is less developed in certain languages or regions, naming India, Brazil, and Indonesia. The argument there is that AI Mode can stitch together information across languages and borders in ways traditional search results may not have for those markets.

Younger users, he added, are adopting the experience faster across regions.

Publisher Relationship Updates

The interview also included updates tied to how AI Mode connects people back to publisher content.

Preferred Sources is one of them. The feature lets you choose specific publications you want to see more prominently in Google’s Top Stories unit, and Google describes it as available worldwide in English.

Fox also described ongoing work on links in AI experiences, including increasing the number of links shown and adding more context around them:

“We’re actually improving the links within our within our AI experience, increasing the number of them…”

On the commercial side, he noted Google has partnerships with “over 3,000 organizations” across “50 plus countries.”

Technical Updates

Fox talked through product and infrastructure changes now powering AI Mode and related experiences.

One was shipping Gemini 3 Pro in Search on day one, which he described as the first time Google has shipped a “frontier model” in Search on launch day.

He also described “generative layouts,” where the model can generate UI code on the fly for certain queries.

To keep the experience fast, he emphasized model routing, where simpler queries go to smaller, faster models and heavier work is reserved for more complex prompts.

Why This Matters

A version of AI Mode that personalizes answers using opt-in Gmail context is still not available and doesn’t have a public timeline.

In the meantime, people appear to be compensating by typing more context into their queries. If that becomes the norm, it may push publishers toward satisfying longer, more situation-specific questions.

Looking Ahead

While AI Mode is still in its early stages, the 75 million daily active users figure suggests it’s large enough to monitor for visibility.


Featured Image: Jackpress/Shutterstock

Google Gemini 3 Flash Becomes Default In Gemini App & AI Mode via @sejournal, @MattGSouthern

Google released Gemini 3 Flash, expanding its Gemini 3 model family with a faster model that’s now the default in the Gemini app.

Gemini 3 Flash is also rolling out globally as the default model for AI Mode in Search.

The release builds on Google’s recent Gemini 3 rollout, which introduced Gemini 3 Pro in preview and also announced Gemini 3 Deep Think as an enhanced reasoning mode.

What’s New

Gemini 3 Flash replaces Gemini 2.5 Flash as the default model in the Gemini app globally, which means free users get the Gemini 3 experience by default.

In Search, Gemini 3 Flash is rolling out globally as AI Mode’s default model starting today.

For developers, Gemini 3 Flash is available in preview via the Gemini API, including access through Google AI Studio, Google Antigravity, Vertex AI, Gemini Enterprise, plus tools such as Gemini CLI and Android Studio.

Pricing

Gemini 3 Flash pricing is listed at $0.50 per million input tokens and $3.00 per million output tokens on Google’s Gemini API pricing documentation.

On the same pricing page, Gemini 2.5 Flash is listed at $0.30 per million input tokens and $2.50 per million output tokens.

Google says Gemini 3 Flash uses 30% fewer tokens on average than Gemini 2.5 Pro for typical tasks, and cites third-party benchmarking for a “3x faster” comparison versus 2.5 Pro.
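
For a rough sense of what those listed rates mean in practice, here is a small back-of-the-envelope sketch that applies them to a hypothetical request volume. The token counts are made-up examples; the per-million-token prices are the published figures above.

```python
# Back-of-the-envelope cost estimate using the listed Gemini 3 Flash rates.
# Token counts below are hypothetical examples, not measured usage.

INPUT_PRICE_PER_M = 0.50   # USD per 1M input tokens (Gemini 3 Flash, as listed)
OUTPUT_PRICE_PER_M = 3.00  # USD per 1M output tokens (Gemini 3 Flash, as listed)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return estimated USD cost for a given token volume."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 10,000 requests averaging 2,000 input and 500 output tokens each.
total_in = 10_000 * 2_000   # 20M input tokens
total_out = 10_000 * 500    # 5M output tokens
print(f"Estimated cost: ${estimate_cost(total_in, total_out):.2f}")  # 20*0.50 + 5*3.00 = $25.00
```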

Why This Matters

The default language model in the Gemini app has changed, and users have access at no extra cost.

If you build on Gemini, Gemini 3 Flash offers a new option for high-volume workflows, priced well below Pro-tier rates.

Looking Ahead

Gemini 3 Flash is rolling out now. In Search, Gemini 3 Pro is also available in the U.S. via the AI Mode model menu.

Google Updates JavaScript SEO Docs With Canonical Advice via @sejournal, @MattGSouthern

Google updated its JavaScript SEO documentation with new guidance on handling canonical URLs for JavaScript-rendered sites.

The documentation update also adds corresponding guidance to Google’s best practices for consolidating duplicate URLs.

What’s New

The updated documentation focuses on a timing issue specific to JavaScript sites: canonicalization can happen twice during Google’s processing.

Google evaluates canonical signals once when it first crawls the raw HTML, then again after rendering the JavaScript. If your raw HTML contains one canonical URL and your JavaScript sets a different one, Google may receive conflicting signals.

The documentation notes that injecting canonical tags via JavaScript is supported but not recommended. When JavaScript sets a canonical URL, Google can pick it up during rendering, but incorrect implementations can cause issues.

Multiple canonical tags, or changes to an existing canonical tag during rendering, can lead to unexpected indexing results.

Best Practices

Google recommends two best practices depending on your site’s architecture.

The preferred method is setting the canonical URL in the raw HTML response to match the URL your JavaScript will ultimately render. This gives Google consistent signals before and after rendering.

If JavaScript must set a different canonical URL, Google recommends leaving the canonical tag out of the initial HTML. This can help avoid conflicting signals between the crawl and render phases.

The documentation also reminds developers to ensure only one canonical tag exists on any given page after rendering.

Why This Matters

This guidance addresses a subtle detail that can be easy to miss when managing JavaScript-rendered sites.

The gap between when Google crawls your raw HTML and when it renders your JavaScript creates an opportunity for canonical signals to diverge.

If you use frameworks like React, Vue, or Angular that handle routing and page structure client-side, it’s worth checking how your canonical tags are implemented. Look at whether your server response includes a canonical tag and whether your JavaScript modifies or duplicates it.

In many cases, the fix is to coordinate your server-side and client-side canonical implementations so they send the same signal at both stages of Google’s processing.

Looking Ahead

This documentation update clarifies behavior that may not have been obvious before. It doesn’t change how Google processes canonical tags.

If you’re seeing unexpected canonical selection in Search Console’s Page indexing reporting, check for mismatches between your raw HTML and rendered canonical tags. The URL Inspection tool shows both the raw and rendered HTML, which makes it possible to compare canonical implementations across both phases.
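
As a rough sketch of that comparison, the snippet below extracts canonical URLs from two HTML snapshots (for example, the raw and rendered HTML copied out of the URL Inspection tool) and flags missing, duplicate, or conflicting tags. This is an illustrative helper written for this article, not an official Google tool, and the sample HTML strings are hypothetical.

```python
# Illustrative helper: compare canonical tags in raw vs. rendered HTML.
# Paste the two HTML snapshots (e.g., from the URL Inspection tool) into the strings below.
from html.parser import HTMLParser

class CanonicalCollector(HTMLParser):
    """Collects href values from <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attrs.get("href"))

def extract_canonicals(html: str) -> list:
    parser = CanonicalCollector()
    parser.feed(html)
    return parser.canonicals

# Hypothetical snapshots; replace with your page's raw and rendered HTML.
raw_html = '<html><head><link rel="canonical" href="https://www.example.com/page"></head></html>'
rendered_html = '<html><head><link rel="canonical" href="https://www.example.com/page?ref=app"></head></html>'

raw = extract_canonicals(raw_html)
rendered = extract_canonicals(rendered_html)

if len(rendered) > 1:
    print("Warning: multiple canonical tags after rendering:", rendered)
elif raw and rendered and raw != rendered:
    print("Warning: canonical changed during rendering:", raw, "->", rendered)
else:
    print("Canonical signals are consistent:", rendered or raw)
```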


Featured Image: Alicia97/Shutterstock

The Facts About Trust Change Everything About Link Building via @sejournal, @martinibuster

Trust is commonly understood to be a standalone quality that is passed between sites regardless of link neighborhood or topical vertical. What I’m going to demonstrate is that “trust” is not a thing that trickles down from a trusted site to another site. The implication for link building is that many may have been focusing on the wrong thing.

Six years ago, I was the first person to write about link distance ranking algorithms, which are a way to create a map of the Internet that begins with a group of sites judged to be trustworthy. These sites are called the seed set. The seed set links to other sites, which in turn link to ever increasing groups of other sites. The sites closer to the original seed set tend to be trustworthy websites. The sites that are furthest from the seed set tend not to be trustworthy.

Google still counts links as part of the ranking process, so it’s likely that there continues to be a seed set that is considered trustworthy, and the further away a site is linked from those seeds, the likelier it is to be considered spam.
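
As a rough illustration of the concept, here is a minimal sketch of measuring link distance from a seed set with a breadth-first search over a toy link graph. The domains and links are made up, and this is only an approximation of the idea, not any search engine’s actual algorithm.

```python
# Toy illustration of link distance from a trusted seed set (breadth-first search).
# Domains and links are hypothetical; real systems are far more complex.
from collections import deque

link_graph = {
    "seed-news.example": ["niche-blog.example", "directory.example"],
    "niche-blog.example": ["hobby-forum.example"],
    "directory.example": ["hobby-forum.example", "spam-farm.example"],
    "hobby-forum.example": [],
    "spam-farm.example": ["spam-farm-2.example"],
    "spam-farm-2.example": [],
}

seed_set = ["seed-news.example"]

def link_distance_from_seeds(graph, seeds):
    """Return the minimum number of link hops from any seed site to each reachable site."""
    distances = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        site = queue.popleft()
        for target in graph.get(site, []):
            if target not in distances:
                distances[target] = distances[site] + 1
                queue.append(target)
    return distances

for site, hops in sorted(link_distance_from_seeds(link_graph, seed_set).items()):
    print(f"{site}: {hops} hop(s) from the seed set")
```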

Circling back to the idea of trust as a ranking-related factor: trust is not a thing that is passed from one site to another. Trust, in this context, is not even a part of the conversation. Sites are judged to be trustworthy by the link distance between the site in question and the original seed set. So you see, there is no trust that is conveyed from one site to another.

The word trustworthiness is even a part of the E-E-A-T standard for what constitutes a quality website. Even so, trust should never be thought of as something that is passed from one site to another, because trust as a transferable quantity does not exist.

The takeaway is that link building decisions based on the idea of trust propagated through links are built on an outdated premise. What matters is whether a site sits close to trusted seed sites within the same topical neighborhood, not whether it receives a link from a widely recognized or authoritative domain. This insight transforms link evaluation into a relevance problem rather than a reputation problem. This insight should encourage site owners to focus on earning links that reinforce topical alignment instead of chasing links that appear impressive but have little, if any, ranking value.

Why Third Party Authority Metrics Are Inaccurate

The second thing about the link distance ranking algorithms that I think is quite cool and elegant is that websites naturally coalesce around each other according to their topics. Some topics are highly linked and some, like various business association verticals, are not well linked at all. The consequence is that those poorly linked sites that are nevertheless close to the original seed set do not acquire much “link equity” because their link neighborhoods are so small.

What that means is that a low-linked vertical can be part of the original seed set and still display low third-party authority scores. The implication is that third-party link metrics that measure how many inbound links a site has fail. They fail because third-party authority metrics follow the old and outdated PageRank scoring method that counts the number of inbound links a site has. PageRank was created around 1998 and is so old that the patent on it has expired.

The seed set paradigm does not measure inbound links. It measures the distance from sites that are judged to be trustworthy. That has nothing to do with how many links those seed set sites have and everything to do with them being trustworthy, which is a subjective judgment.

That’s why I say that third-party link authority metrics are outdated. They don’t follow the seed set paradigm; they follow the old and outdated PageRank paradigm.

The insight to take away from this is that many highly trustworthy sites are being overlooked for link building purposes because link builders are judging the quality of a site by outdated metrics that incorrectly devalue sites in verticals that aren’t well linked but are actually very close to the trustworthy seed set.

The Importance Of Link Neighborhoods

Let’s circle back to the observation that websites tend to naturally link to other sites that are on the same topic. What’s interesting about this is that the seed sets can be chosen according to topic verticals. Some verticals have a lot of inbound links, and some verticals are in their own little corner of the Internet and aren’t linked to from outside of their clique.

A link distance ranking algorithm can thus be used to calculate relevance according to whatever neighborhood a site is located in. Majestic does something like that with their Trust Flow and Topical Trust Flow metrics, which actually start with trusted seed sites. Topical Trust Flow breaks that score down into specific topic categories. The Topical Trust Flow metric shows how relevant a website is for a given topic.

My point isn’t that you should use that metric, although I think it’s the best one available today. The point is that there is no context for thinking about trustworthiness as something that spreads from link to link.

Once you can think of links in the paradigm of distance within a topic category, it becomes easier to understand why a link from a university website or some other so-called “high trust” site isn’t necessarily that good or useful. I know this for certain because there was a time, before distance ranking, when the topic of the linking site didn’t matter. It matters very much now, and it has mattered for a long time.

The takeaways here are:

  1. It is counterproductive to go after so-called “high trust” links from verticals that are well outside of the topic of the website you’re trying to get a link to.
  2. It is more important to get links from sites that are in the right topic, or from a context that exactly matches the topic on a website in an adjacent topical category.

For example, a site like The Washington Post is not a part of the Credit Repair niche. Any “trust” that may be calculated from a Washington Post link to a Credit Repair site will likely be dampened to zero. Of course it will. Remember, seed set trust distance is calculated within topical groups within a niche. There is no trust passed from one link to another link. It is only the distance that is counted.

Logically, it makes sense to assume that there will be no validating effect between topically irrelevant sites for the purposes of the seed set trust calculations.

Takeaways

  • Trust is not something that’s passed by links
    Link distance ranking algorithms do not deal with “trust.” They only measure how close a site is to a trusted seed set within a topic.
  • Link distance matters more than link volume
    Ranking systems based on link distance assess proximity to trusted seed sites, not how many inbound links a site has.
  • Topic-based link neighborhoods shape relevance
    Websites naturally cluster by topic, and link value is likely evaluated within those topical clusters rather than across the entire web. A non-relevant link can still carry some small value, but irrelevant links largely stopped working as a ranking tactic almost twenty years ago.
  • Third-party authority metrics are misaligned with modern link ranking systems
    Some third-party metrics rely on outdated PageRank-style link counting and fail to account for seed set distance and topical context.
  • Low-link verticals are undervalued by SEOs
    Entire niches that are lightly linked can still sit close to trusted seed sets, yet appear weak in third-party metrics, causing them to be overlooked by link builders.
  • Relevance outweighs perceived link strength
    Links from well-known but topically irrelevant sites likely contribute little or nothing compared to links from closely related or adjacent topic sites.

Modern link evaluation is about topical proximity, not “trust” or raw link counts. Search systems measure how close a site is to trusted seed sites within its own topic neighborhood, which means relevant links from smaller, niche sites can matter more than links from famous but unrelated domains.

This knowledge should enable smarter link building by focusing efforts on contextually relevant websites that may actually strengthen relevance and rankings, instead of chasing outdated link authority scores that no longer reflect how search works.

Featured Image by Shutterstock/Kues

Improve Any Link Building Strategy With One Small Change via @sejournal, @martinibuster

Link building outreach is not just blasting out emails. There’s also a conversation that happens when someone emails you back with a skeptical question. The following are tactics to use for overcoming skeptical responses.

In my opinion it’s always a positive sign when someone responds to an email, even if they’re skeptical. I consider nearly all email responses to be indicators that a link is waiting to happen. This is why a good strategy that anticipates common questions will help you convert skeptical responses into links.

Many responses tend to be questions. What they are asking, between the lines, is for you to help them overcome their suspicions. Anytime you receive a skeptical response, try to view it as them asking you, “Help me understand that you are legitimate and represent a legitimate website that we should be linking to.”

The question is asked between the lines, and the answer should be addressed the same way. Ninety-nine percent of the time, a between-the-lines question should not be answered directly. The perfect way to answer those questions, and to address the underlying concern, is to answer in the same way you received it: between the lines.

Common (and sometimes weird) questions that I used to get include:

  • Who are you?
  • Who do you work for?
  • How did you get my email address?

Before I discuss how I address those questions, I want to mention something important that I do not do. I do not try to actively convert the respondent in the first response. In my response to their response to my outreach, I never ask them to link to the site.

The question of linking is already hanging in the air and is the object of their email to you; there is no need to bring it up. If you ask them again in your response to link to your site, it will tilt them back to being suspicious of you, raising the odds of losing the link.

In trout fishing, the successful angler crouches so that the trout does not see them. The successful angler may even wear clothing that helps them blend into the background. The best anglers imitate the crane, a fish-eating bird that stands perfectly still, imperceptibly inching closer to its prey. This is done to avoid being noticed. Your response should imitate the crane or the camouflaged angler. You should put yourself into the mindset of anything but a marketer asking for a link.

Your response must not be to immediately ask for a link because that in my opinion will just lose the link. So don’t do it just yet.

Tribal Affinity

One approach that I used successfully is what I call the Tribal Affinity approach. For a construction/home/real estate related campaign, I would approach it with the mindset of a homeowner. I wouldn’t say that I’m a homeowner (even though I was); I would just think in terms of what I would say as a homeowner contacting a company to suggest a real estate or home repair type of link. In the broken-link or suggest-a-link strategy, I would say that the three links I am suggesting for their links page have been useful to me.

Be A Mirror

A tribal affinity response that was useful to me is to mirror the person I’m outreaching to, assuming the mindset of the person I am responding to. So, for example, if they are a toy collector, then your mindset can also be that of a toy collector. If the outreach target is a club member, then your outreach mindset can be that of an enthusiast of whatever the club is about. I never claim membership in any particular organization, club, or association. I limit my affinity to mirroring the same shared mindset as the person I’m outreaching to.

Assume The Mindset

Another approach is to assume the mindset of someone who happened upon a links page with a broken link or missing a good quality link. When you get into that mindset, the text of your email will be more natural.

Thus, when someone responds by challenging me, asking how I found their site or who I am working for, my response is to stick to my mindset of a homeowner and respond accordingly.

And really, what’s going on is that they’re not really asking how you found their site. What they’re really asking, between the lines, is if you’re a marketer of some kind. You can go ahead and say yes, you are. Or you can respond between the lines and say that you’re just a homeowner. Up to you.

There are many variations to this approach. The important points are:

  • Responses that challenge you are not necessarily hostile but are often link conversions waiting to happen.
  • Never respond to a response by asking for a link.
  • Put yourself into the right mindset. Thinking like a marketer will usually lead to a conversion dampening response.
  • Put yourself into the mindset that mirrors the person you outreach to.

Get into the mindset that gives you a plausible reason for finding their site and the best words for asking for a link will write themselves.

Featured Image by Shutterstock/Luis Molinero

Five Ways To Boost Traffic To Informational Sites via @sejournal, @martinibuster

Informational sites can easily decline into a crisis of search visibility. No site is immune. Here are five ways to manage content to maintain steady traffic, increase the ability to adapt to changing audiences, and make confident choices that help the site maintain growth momentum over time.

1. Create A Mix Of Content Types

Publishers are in a constant race to publish whatever is latest because being first to publish can be a source of massive traffic. The main problem with these kinds of sites is that content about current events comes with risks that put the sustainability of the publication into question.

  • Current events quickly become stale and no longer relevant to an audience.
  • Unforeseen events like an industry strike, accidents, world events, and pandemics can disrupt interest in a topic.

The focus then is to identify content topics that are reliably relevant to the website’s current audience. This kind of content is called evergreen content, and it can form a safety net of reliable traffic that can sustain the business during slow cycles.

An example of the mixed approach to content that comes to mind is how the New York Times has a standalone recipes section on a subdomain of the main website. It also has a category-based section dedicated to gadget reviews called The Wirecutter.

Another example is the entertainment niche, which in addition to industry news also publishes interviews with stars and essays about popular movies. Music websites publish the latest news but also content built around snippets from interviews with famous musicians, where the musicians make interesting statements about songs, inspirations, and cultural observations.

Rolling Stone magazine publishes content about music but also about current events like politics that align with their reader interests.

All three of those examples expand their topics to adjacent topics in order to bulk up their ability to attract steady and consistent traffic that is reliable.

2. Evergreen Content Also Needs Current Event Topics

Conversely, evergreen topics can generate new audience reach and growth by expanding to cover current events. Content sites about recipes, gardening, home repairs, DIY, crafts, parenting, personal finance, and fitness are all examples of topics that feature evergreen content and can also expand to cover current events. The flow of traffic derived from trending topics is an excellent source of devoted readers who return to read evergreen content and end up recommending the site to friends for both current events and evergreen topics.

Current events can be related to products and even to statements by famous people. If you enjoy creating content or making discoveries, then you’ll enjoy the challenge of discovering new sources of trending topics.

If you don’t already have a mix of evergreen and ephemeral content, then I would encourage you to seek opportunities to focus on those kinds of articles. They can help sustain traffic levels while feeding growth and life into the website.

3. Beware Of Old Content

Google evaluates the total content of a website in order to generate a quality score. Google is vague about these whole-site evaluations. We only know that they do it and that a good evaluation can have a positive effect on traffic.

However, what happens when the site becomes top-heavy with old, stale content that’s no longer relevant to site visitors? This can become a drag on a website. There are multiple ways of handling this situation.

Content that is absolutely out of date, of no interest to anyone, and therefore no longer useful should be removed. The criterion for judging content is usefulness, not its age. The reason to prune this content is that a whole-site evaluation may conclude that most of the website is composed of unhelpful, outdated web pages. This could be a negative drag on site performance.

There’s nothing inherently wrong with old content as long as it’s useful. For example, the New York Times keeps old movie reviews in archives that are organized by year, month, day, category, and article title.

The URL slug for the movie review of E.T. looks like this:  /1982/06/11/movies/et-fantasy-from-spielberg.html

Screenshot Of Archived Article

Take Decisive Steps

  • Useful historical content can be archived.
  • Older content that is out of date can be rehabilitated.
  • Content that’s out of date and has been superseded by new content can be redirected with a 301 response code to the new content.
  • Content that is out of date and objectively useless should be removed from the website and allowed to show a 404 response code. A quick audit of how these legacy URLs currently respond, sketched below, can help sort pages into the right category.
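
Below is a minimal sketch of such an audit using only the Python standard library. It requests a handful of legacy URLs (hypothetical placeholders here) and reports whether each one redirects, returns content, or returns an error code, which helps decide whether to archive, update, redirect, or remove a page.

```python
# Minimal sketch of auditing how legacy URLs respond, using only the standard library.
# The URLs below are hypothetical placeholders; replace them with your own legacy pages.
import urllib.error
import urllib.request

legacy_urls = [
    "https://www.example.com/2011/what-time-does-the-super-bowl-start/",
    "https://www.example.com/old-product-roundup/",
]

for url in legacy_urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            final_url = resp.geturl()
            if final_url != url:
                # A redirect was followed, e.g., an existing 301 to newer content.
                print(f"{url} -> redirects to {final_url}")
            else:
                print(f"{url} -> {resp.getcode()} OK (review for usefulness)")
    except urllib.error.HTTPError as e:
        # 404/410 indicate the page is already gone; other codes need a closer look.
        print(f"{url} -> HTTP {e.code}")
    except urllib.error.URLError as e:
        print(f"{url} -> request failed: {e.reason}")
```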

4. Topic Interest

Something that can cause traffic to decline on an informational site is waning interest. Technological innovation can cause the popularity of another product to decline, dragging website traffic along with it. For example, I consulted for a website that reported its traffic was declining. The site still ranked for its keywords, but a quick look at Google Trends showed that interest in the website topic was declining. This was several months after the introduction of the iPhone, which negatively impacted a broad category of products that the website was centered on.

Always keep track of how interested your audience is in your topic. Follow influencers in your niche topic on social media to gauge what they are talking about and whether there are any shifts in the conversation that indicate waning interest or growing interest in a related topic.

Always try out new subcategories of your topic that cross over with your readership to see if there is an audience there that can be cultivated.

Another nuance to consider is the difference between temporary dips in interest and long-term structural decline. Some topics experience predictable cycles driven by seasons, economic conditions, or news coverage, while others face permanent erosion as user needs change or alternatives emerge. Misreading a cyclical slowdown as a permanent decline can lead to unnecessary pivots, while ignoring structural shifts can leave a site over-invested in content that no longer aligns with how people search, buy, or learn.

Monitoring topic interest is less about reacting to short-term fluctuations and more about keeping aware of topical interest and trends. By monitoring audience behavior, tracking broader trends, and experimenting at the edges of the core topic, an informational site can adjust gradually rather than being forced into abrupt changes after traffic has already declined. This ongoing attention helps ensure that content decisions remain grounded in how interest evolves over time.

5. Differentiate

Something that happens to a lot of informational websites is that competitors in a topic tend to cover the exact same stories and even have similar styles of photos, about pages, and bios.

B2B software sites have images of people around a laptop, images of a serious professional, and people gesturing at a computer or a whiteboard.

Recipe sites feature the Flat Lay (food photographed from above), the Ingredient Still Life portrait, and action shots of ingredients grated, sprinkled, or in mid air.

Websites tend to converge into homogeneity in the images they use and the kind of content that’s shared, based on the idea that if it’s working for competitors, then it may be a good approach. But sometimes it’s best to step out of the pack and do things differently.

Evolve your images so that they stand out or catch the eye, try a different way of communicating your content, identify the common concept that everyone uses, and see if there’s an alternate approach that makes your site more authentic.

For example, a recipe site can show photographic bloopers or discuss what can go wrong and how to fix or avoid it. Being real is authentic. So why not show what underbaked looks like? Instagram and Pinterest are traffic drivers, but does that mean all images must be impossibly perfect? Maybe people might respond to the opposite of homogeneity and fake perfection.

The thing that’s almost always missing from product reviews is photos of the testers actually using the products. Is it because the reviews are fake? Hm… Show images of the products with proof that they’ve been used.

Takeaways

  • Sustainable traffic can be cultivated with a mix of evergreen and timely content. Find the balance that works for your website.
  • Evergreen content performs best when it is periodically refreshed with up-to-date details.
  • Outdated content that lacks utility or meaning in people’s lives can quietly grow to suppress site-wide performance. Old pages should be reviewed for usefulness and then archived, updated, redirected, or removed.
  • Audience interest in a topic can decline even if rankings remain stable. Monitoring search demand and cultural shifts helps publishers know when it’s time to expand into adjacent topics before traffic erosion becomes severe.
  • Differentiation matters as much as coverage. Sites that mirror competitors in visuals, formats, and voice risk blending into sameness, while original presentation and proof of authentic experience build trust and attention.

Search visibility declines are not caused by a single technical flaw or isolated content mistake but by gradual misalignment between what a site publishes and what audiences continue to value. Sites that rely too heavily on fleeting interest, allow outdated material to accumulate, or follow competitors into visual and editorial homogeneity risk signaling mediocrity rather than relevance and inspiring enthusiasm. Sustained performance depends on actively managing content, balancing evergreen coverage with current events, pruning what’s no longer useful, and making deliberate choices that distinguish the site as useful, authentic, and credible.

Featured Image by Shutterstock/Sergey Nivens