Google Trends Update: Faster, Smarter, More Customizable via @sejournal, @MattGSouthern

Google has updated its Trending Now tool in Google Trends, offering faster and more comprehensive search data.

The tool now detects more trends, covers more countries, and provides more visualization and filtering options.

These changes further enhance Google Trends’ value for marketers, researchers, and anyone seeking real-time insights into search behavior.

Updates To ‘Trending Now’ In Google Trends

Improved Trend Detection & Refresh Rate

Google says its new trend forecasting engine spots ten times more emerging trends than before.

It now updates every 10 minutes, providing real-time insights on rising search interests.

Expanded Global Coverage

The Trending Now tool is now available in 125 countries, 40 of which offer region-specific trend data.

This expansion allows for more localized trend analysis and comparison.

Enhanced Context & Visualization

You can now view a breakdown of each trend, including its emergence time, duration, and related news articles.

A graph displaying Search interest over time is also provided, along with the ability to compare multiple trends and export data for further analysis.
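For marketers who want to pull similar comparisons programmatically, the sketch below uses pytrends, an unofficial third-party Python wrapper (not an official Google Trends API), to compare a few illustrative terms and export the interest-over-time data. The terms, timeframe, and geography are assumptions for the example.

```python
# Minimal sketch using the unofficial, third-party "pytrends" library
# (not an official Google API) to compare several search terms and
# export relative interest over time. Terms and settings are illustrative.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)

# Up to five terms can be compared at once, mirroring the Trends UI.
keywords = ["content marketing", "seo", "ppc"]
pytrends.build_payload(keywords, timeframe="now 7-d", geo="US")

# Returns a pandas DataFrame of relative search interest over time.
interest = pytrends.interest_over_time()

# Export for further analysis, similar to the CSV export in the UI.
interest.to_csv("trend_comparison.csv")
```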

Customizable Filters

The redesigned filters tab lets you fine-tune trend results based on location, time frame, and trend status.

To concentrate on currently popular searches, you can select a time frame (4 hours, 24 hours, 48 hours, or 7 days) and filter out inactive trends.

Google Trends Tutorial Video

To complement the Trending Now update, Google has released a comprehensive video tutorial on its Google Search Central YouTube channel.

The video features Daniel Waisberg, a Google Search Advocate, and Hadas Jacobi, a software engineer on the Google Trends team.

They demonstrate using Google Trends to compare search terms and topics across Google Search and YouTube.

The tutorial covers:

  • Navigating the Google Trends interface
  • Comparing up to five topics or search terms
  • Utilizing filters for location, time period, category, and Google property
  • Interpreting trend data and charts
  • Understanding related topics and queries

See the full video below.

Why This Matters

Google Trends remains a powerful tool for understanding public interest and search behavior.

As Waisberg states in the tutorial:

“Whether you’re a marketer, journalist or researcher, understanding Google Trends can help you uncover emerging trends and make informed decisions.”

As search patterns change, these updates will help you stay ahead with the latest insight into global and local search trends.


Featured Image: DANIEL CONSTANTE/Shutterstock

Anthropic Announces Prompt Caching with Claude via @sejournal, @martinibuster

Anthropic announced a new Prompt Caching with Claude feature that boosts Claude’s capabilities for repetitive tasks involving large amounts of detailed contextual information. The new feature makes Claude faster, cheaper, and more powerful, and is available today in beta through the Anthropic API.

Prompt Caching

This new feature provides a powerful boost for users who consistently work with highly detailed instructions, example responses, and large amounts of background information in their prompts, because Claude can reuse that data from the cache. This improves the consistency of output, speeds up Claude’s responses by up to 50% (lower latency), and makes it up to 90% cheaper to use.

Prompt Caching with Claude is especially useful for complex projects that rely on the same data, and it benefits businesses of all sizes, not just enterprise-level organizations. The feature is available in a public beta via the Anthropic API for use with Claude 3.5 Sonnet and Claude 3 Haiku.

The announcement lists the following ways Prompt Caching improves performance:

  • “Conversational agents: Reduce cost and latency for extended conversations, especially those with long instructions or uploaded documents.
  • Large document processing: Incorporate complete long-form material in your prompt without increasing response latency.
  • Detailed instruction sets: Share extensive lists of instructions, procedures, and examples to fine-tune Claude’s responses without incurring repeated costs.
  • Coding assistants: Improve autocomplete and codebase Q&A by keeping a summarized version of the codebase in the prompt.
  • Agentic tool use: Enhance performance for scenarios involving multiple tool calls and iterative code changes, where each step typically requires a new API call.”
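As a rough illustration of how the beta works, the sketch below marks a large system prompt as cacheable via the Anthropic Messages API. It follows the request shape described in the public beta documentation at launch (the anthropic-beta header and cache_control field); exact header values and model names may change, and the placeholder instructions are assumptions.

```python
# Minimal sketch of Prompt Caching via the Anthropic Messages API (public beta).
# The "anthropic-beta" header and "cache_control" field follow the beta docs at
# launch; treat exact values as assumptions that may change. Replace the API key
# and placeholder instructions with your own.
import requests

LARGE_INSTRUCTIONS = "…thousands of tokens of detailed instructions and examples…"

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": "YOUR_API_KEY",
        "anthropic-version": "2023-06-01",
        "anthropic-beta": "prompt-caching-2024-07-31",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LARGE_INSTRUCTIONS,
                # Marks this block as cacheable so repeat calls can reuse it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": "Summarize the key points."}],
    },
    timeout=60,
)
print(response.json())
```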

More information about the Anthropic API is available here:

Build with Claude

Explore latest models – Pricing

Featured Image by Shutterstock/gguy

Google’s Gemini To Gain ‘Deep Research’ Feature via @sejournal, @MattGSouthern

Google has revealed plans to expand the capabilities of its AI assistant Gemini, introducing a new feature called “Deep Research.” This announcement came via social media following the company’s recent ‘Made By Google’ event.

According to Google’s post, Gemini will soon be able to “do in-depth research for you and synthesize the info to give you a simple, comprehensive plan.”

Google says this feature will be launched “in the next few weeks,” along with other functionalities showcased at the event.

How Deep Research Works

The Deep Research tool is designed to assist users with complex tasks, such as gathering information from multiple sources and compiling comprehensive reports.

Google provided an example showing how the feature might help a restaurant owner research the process of adding a sidewalk café in Seattle.

In the example, Gemini outlines its approach to creating a guide that includes permit requirements, application steps, shelter specifications, timelines, costs, and case studies.

Gemini informs the user it will research web pages, analyze results, and create a full report.

Potential Applications & Limitations

Google describes the tool as being “designed to help you with making big decisions, navigating multiple sources, or getting started on a project you might not know about.”

However, Google hasn’t provided any sample reports or details about its data-sourcing methods.

This development could impact how users interact with search engines and process information.

Instead of manually sifting through multiple web pages, people might rely on Gemini to compile and synthesize information from various sources.

Unanswered Questions

Gemini’s full research capabilities and potential impact on search habits are unclear. Google’s keeping quiet on specifics, functionality, and rollout plans.

We’ll be watching to see how this research tool performs and monitoring potential ripple effects on the search ecosystem.


Featured Image: mundissima/Shutterstock

ChatGPT Study Finds Training Data Doesn’t Match Real-World Use via @sejournal, @MattGSouthern

A study by the Data Provenance Initiative, a collective of independent and academic researchers dedicated to data transparency, reveals a mismatch between ChatGPT’s training data and its typical use cases.

The study, which analyzed 14,000 web domains, found that ChatGPT’s training data primarily consists of news articles, encyclopedias, and social media content.

However, the most common real-world applications of the tool involve creative writing, brainstorming, and seeking explanations.

As the study states,

“Whereas news websites comprise nearly 40% of all tokens… fewer than 1% of ChatGPT queries appear to be related to news or current affairs.”

Diving deeper into usage patterns, the researchers analyzed a dataset called WildChat, containing 1 million user conversations with ChatGPT. They found that over 30% of these conversations involve creative compositions such as fictional story writing or role-playing.

This mismatch suggests that ChatGPT’s performance may vary depending on the specific task and its alignment with the tool’s training data.

Marketers should know that ChatGPT might struggle to generate content based on current events, industry-specific knowledge, or niche topics.

Adapting To ChatGPT’s Strengths & Limitations

Knowing what ChatGPT is trained on can help you align prompts with the tool’s strengths and limitations.

This means you may need to add more context, specify the desired tone and style, and break down complex tasks into smaller steps.

For AI-assisted content creation, leverage ChatGPT for tasks like ideating social posts or email subject lines. Reserve human expertise for complex, industry-specific content.

Use effective prompt engineering to optimize output. Always fact-check and edit AI-generated content to ensure quality.
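For illustration, here is a hypothetical sketch of the kind of context-rich, broken-down prompt this advice implies, sent through the official OpenAI Python SDK. The model name, brand details, and instructions are assumptions for the example, not a prescribed workflow.

```python
# Hypothetical example of a context-rich, task-by-task prompt, using the
# official OpenAI Python SDK. Model name and brand details are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are a copywriter for a B2B SaaS brand with a friendly, plain-spoken tone.\n"
    "Context: we are launching a reporting dashboard for small agencies.\n"
    "Task 1: list five angles for a LinkedIn post.\n"
    "Task 2: for the strongest angle, draft three email subject lines under 50 characters."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```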

AI tools can accelerate ideation and content creation but don’t expect perfection. Human review is essential for accuracy, brand consistency, and channel-specific copy.

Looking Ahead

This research highlights the need for marketers to be careful with AI tools like ChatGPT.

Understand what AI can and can’t do and combine it with human expertise. This combo can boost content strategies and help hit KPIs.

As the field evolves, we might see AI tools better tailored to real-world usage patterns.

Until then, remember that AI assists but doesn’t replace expert judgment.


Featured Image: Emil Kazaryan/Shutterstock

State Of SEO Report: Top Insights For 2025 Success via @sejournal, @Juxtacognition

What opportunities are other SEO professionals taking advantage of? Did other SEO professionals struggle with the same things you did this year?

Our fourth annual State of SEO Report is packed with valuable insights, including the most pressing challenges, emerging trends, and actionable strategies from SEO practitioners like you: what they faced over the last year and what they see on the horizon.

Find out how top search teams are tackling challenges. Download the full report today.

Top Challenges In SEO: From Content To Algorithm Changes

In 2023, 13.8% of SEO pros said content creation was their top challenge. In 2024, however, 22.2% of all SEO practitioners surveyed (up from 8.6% in 2023) said algorithm changes have become the primary concern.

In fact, 30.2% of those we asked pointed to core and general algorithm updates as the main source of traffic instability over the last 12 months. This finding is in stark contrast to 2023, when 55.9% of SEO pros felt algorithm updates helped their efforts at least a little.

Why?

Simply put, creating the most helpful and expert content no longer guarantees a top spot in the SERPs.

To complicate matters, Google’s algorithms are constantly evolving, making it crucial to adapt and stay updated.

Budget Constraints: A Major Barrier To Success

Our survey revealed that budget limitations (cited by 19.4%) are the number one barrier to SEO success and the primary reason clients leave (cited by 41.0% of SEO professionals surveyed).

With everyone feeling the financial squeeze, how can you gain an edge?

  • Forget gaming the SERPs. Focus on creating content that genuinely serves your ideal customer.
  • Collaborate with your marketing team to distribute this content on platforms where your audience is most active. Remember, Google’s rules may change, but the need for high-quality, valuable content that genuinely serves a need remains constant.
  • Prove your return on investment (ROI). Track customer journeys and identify where you are gaining conversions. If you’re not seeing success, make a plan and create a proposal to improve your strategies.

Learn how to overcome budget barriers with even more insights in the full report.

Key Insights From The State Of SEO Survey

SEO Industry Changes:

  • AI is predicted to drive the most significant changes in the SEO industry according to 29.0% of those we surveyed.
  • 16.6% believe Google updates will continue to be a major factor.

Performance Disruptions:

  • 36.3% of State of SEO respondents believe generative AI in search platforms and AI-generated content will be major disruptors going forward.

Essential SEO Metrics: Adapting To Fluctuations

As you explore the data in the report, you’ll find that keyword rankings (cited by 20.0% of State of SEO 2025 respondents) and organic pageviews (11.7%) are the top tracked SEO metrics.

However, when these metrics fluctuate due to uncontrollable factors, it’s essential to build business value into your tracking.

Focus on the quality of your traffic and prioritize efforts that bring in high-quality users.

Skills In Demand: Navigating A Changing SEO Landscape

The most challenging skills to find in SEO professionals are technical SEO (18.9%) and data analysis (14.8%).

Meanwhile, the most desired skills in candidates are soft skills (cited by 18.2% of respondents) and the ability to build and execute SEO strategies (15.7%).

Want to grow as an SEO professional?

Develop rare and desirable skills.

SEO is increasingly integrated with other marketing disciplines, so cultivating exemplary collaborative skills and learning the languages of other fields will make you highly valuable.

Other Important Findings

  • 69.8% of SEO professionals found SERP competition increased over the last 12 months.
  • Only 13.2% of respondents felt zero-click searches will cause significant shifts in the SEO industry.
  • 50.0% of SEO professionals reported client turnover remained steady throughout 2024.

The State of SEO 2025 Report is your go-to resource for understanding and mastering the current SEO landscape.

Download your copy today to gain a deeper understanding of the challenges, opportunities, and insights that will shape SEO in the coming year.

Stay informed, stay ahead, and make 2025 your best year in SEO yet!

Google Halts AdSense Monetization For Russia-Based Publishers via @sejournal, @MattGSouthern

Google’s halting ad monetization for Russian publishers on AdSense, AdMob, and Ad Manager, effective August 12, citing “ongoing developments” in the country.

This impacts Russian digital publishers, content creators, and app developers who use these platforms to generate revenue through ad impressions and clicks.

Background & Context

Google’s decision to halt ad monetization in Russia is not an isolated incident but part of a series of actions the company has taken since 2022 amid geopolitical tensions.

Previous measures include:

  • Halting ad serving to users in Russia in March 2022
  • Demonetizing content that exploited, dismissed, or condoned the conflict in Ukraine
  • Cracking down on state-sponsored YouTube channels and videos, blocking over 1,000 channels and 5.5 million videos

Google will make final payouts to eligible Russian AdSense users in late August, provided there are no payment issues and minimum thresholds are met.

This closes a revenue source for Russian creators who’ve been monetizing non-Russian traffic up to this point.

Google’s latest move has drawn criticism from some Russian officials. Anton Gorelkin, deputy head of Russia’s parliamentary committee on information policy, stated on Telegram that Google is “segregating citizens according to nationality” and supporting the division of the online space.

Potential Impact

The financial impact on Russian content creators could be substantial. Many have used these platforms to monetize traffic from both domestic and international audiences.

With this revenue stream now cut off, creators may need to explore alternative monetization methods or potentially face income reductions.

Beyond individual creators, this move could have broader implications for the Russian digital economy.

As a major player in the global digital advertising market, Google’s withdrawal may create a void that local Russian ad networks might struggle to fill completely.

This could lead to a decrease in overall digital ad spending within the country and potentially affect the quality and quantity of content available to Russian internet users.

Looking Ahead

Google’s exit from Russia’s ad market will force local publishers to pivot. They’ll likely explore alternative platforms or revenue streams. This could boost Russian ad tech development, potentially siloing the RuNet further.

We may see similar actions from other companies as geopolitical tensions persist.


Featured Image: Mojahid Mottakin/Shutterstock

Google’s AI Overviews Ditch Reddit, Embrace YouTube [Study] via @sejournal, @MattGSouthern

A new study by SEO software company SE Ranking has analyzed the sources and links used in Google’s AI-generated search overviews.

The research, which examined over 100,000 keywords across 20 niches, offers insights into how these AI-powered snippets are constructed and what types of sources they prioritize.

Key Findings

Length & Sources

The study found that 7.47% of searches triggered AI overviews, a slight decrease from previous research.

The average length of these overviews has decreased by approximately 40%, now averaging 2,633 characters.

According to the data, the most frequently linked websites in AI overviews were:

  1. YouTube.com (1,346 links)
  2. LinkedIn.com (1,091 links)
  3. Healthline.com (1,091 links)

Government & Education

The research indicates that government and educational institutions are prominently featured in AI-generated answers.

Approximately 19.71% of AI overviews included links to .gov websites, while 26.61% referenced .edu domains.

Media Representation

Major media outlets appeared frequently in the AI overviews.

Forbes led with 804 links from 723 AI-generated answers, followed by Business Insider with 148 links from 139 overviews.

HTTPS Dominance

The study reported that 99.75% of links in AI overviews use the HTTPS protocol, with only 0.25% using HTTP.

Niche-Specific Trends

The research revealed variations in AI overviews across niches:

  • The Relationships niche dominated, with 40.64% of keywords in this category triggering AI overviews.
  • Food and Beverage maintained its second-place position, with 23.58% of keywords triggering overviews.
  • Notably, the Fashion and Beauty, Pets, and Ecommerce and Retail niches saw significant declines in AI overview appearances compared to previous studies.

Link Patterns

The study found that AI overviews often incorporate links from top-ranking organic search results:

  • 93.67% of AI overviews linked to at least one domain from the top 10 organic search results.
  • 56.50% of all detected links in AI overviews matched search results from the top 1-100, with most (73.01%) linking to the top 1-10 search results.

International Content

The research noted trends regarding international content:

  • 9.85% of keywords triggering AI overviews included links to .in (Indian) domains.
  • This was prevalent in certain niches, with Sports and Exercise leading at 36.83% of keywords in that category linking to .in sites.

Reddit & Quora Absent

Despite these platforms’ popularity as information sources, the study found no instances of Reddit or Quora being linked in the analyzed AI overviews. This marks a change from previous studies, where these sites were more frequently referenced.

Methodology

The research was conducted using Google Chrome on an Ubuntu PC, with sessions based in New York and all personalization features disabled.

The data was collected on July 11, 2024, providing a snapshot of AI overview behavior.

SE Ranking has indicated that they plan to continue this research, acknowledging the need for ongoing analysis to understand evolving trends.

What Does This Mean?

These findings have several implications for SEO professionals and publishers:

  1. Google’s AI favors trusted sources. Keep building your site’s credibility.
  2. AI overviews are getting shorter. Focus on clear, concise content.
  3. HTTPS is a must. Secure your site if you haven’t already.
  4. Diversify your sources. Mix in .edu and .gov backlinks where relevant.
  5. AI behavior varies across industries. Adapt your strategy accordingly.
  6. Think globally. You might be competing with international sites more than before.

Remember, this is just a snapshot. Google’s AI overviews are changing fast. Monitor these trends and be ready to pivot your SEO strategy as needed.

The full report on SE Ranking’s website provides a detailed breakdown of the findings, including niche-specific data.


Featured Image: DIA TV / Shutterstock.com

Google’s “Branded Search” Patent For Ranking Search Results via @sejournal, @martinibuster

Back in 2012, Google applied for a patent called “Ranking Search Results” that shows how Google can use branded search queries as a ranking factor. The patent covers using branded and navigational search queries as ranking factors, along with a count of independent links. Although the patent is from 2012, it may still play a role in ranking.

The patent was misunderstood by the search marketing community in 2012 and the knowledge contained in it was lost.

What Is The Ranking Search Results Patent About? TL/DR

The patent is explicitly about an invention for ranking search results; that’s why it’s called “Ranking Search Results.” It describes an algorithm that uses two ranking factors to re-rank web pages:

Sorting Factor 1: By number of independent inbound links
This is a count of links that are independent from the site being ranked.

Sorting Factor 2: By number of branded search queries & navigational search queries.
The branded and navigational search queries are called “reference queries” and also are referred to as implied links.

The counts of both factors are used to modify the rankings of the web pages.

Why The Patent Was Misunderstood TL/DR

First, I want to say that in 2012, I didn’t understand how to read patents. I was more interested in research papers and left the patent reading to others. When I say that everyone in the search marketing community misunderstood the patent, I include myself in that group.

The “Ranking Search Results” patent was published in 2012, one year after the release of a content quality update called the Panda Update. The Panda update was named after one of the engineers who worked on it, Navneet Panda. Navneet Panda came up with questions that third party quality raters used to rate web pages. Those ratings were used as a test to see if changes to the algorithm were successful at removing “content farm” content.

Navneet Panda is also a co-author of the “Ranking search results” patent. SEOs saw his name on the patent and immediately assumed that this was the Panda patent.

The reason why that assumption is wrong is because the Panda update is an algorithm that uses a “classifier” to classify web pages by content quality. The “Ranking Search Results” patent is about ranking search results, period. The Ranking Search Results patent is not about content quality nor does it feature a content quality classifier.

Nothing in the “Ranking Search Results” patent relates in any way with the Panda update.

Why This Patent Is Not The Panda Update

In 2009 Google released the Caffeine Update which enabled Google to quickly index fresh content but inadvertently created a loophole that allowed content farms to rank millions of web pages on rarely searched topics.

In an interview with Wired, former Google search engineer Matt Cutts described the content farms like this:

“It was like, “What’s the bare minimum that I can do that’s not spam?” It sort of fell between our respective groups. And then we decided, okay, we’ve got to come together and figure out how to address this.”

Google subsequently responded with the Panda Update, named after a search engineer who worked on the algorithm, which was specifically designed to filter out content farm content. Google used third-party site quality raters to rate websites, and their feedback was used to create a new definition of content quality that was used against content farm content.

Matt Cutts described the process:

“There was an engineer who came up with a rigorous set of questions, everything from. “Do you consider this site to be authoritative? Would it be okay if this was in a magazine? Does this site have excessive ads?” Questions along those lines.

…we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons…”

In simple terms, a classifier is an algorithm within a system that categorizes data. In the context of the Panda Update, the classifier categorizes web pages by content quality.

What’s apparent when reading the “Ranking search results” patent is that it’s clearly not about content quality, it’s about ranking search results.

Meaning Of Express Links And Implied Links

The “Ranking Search Results” patent uses two kinds of links to modify ranked search results:

  1. Implied links
  2. Express links

Implied links:
The patent uses branded search queries and navigational queries to calculate a ranking score as if the branded/navigational queries are links, calling them implied links. The implied links are used to create a factor for modifying web pages that are relevant (responsive) to search queries.

Express links:
The patent also uses independent inbound links to the web page as a part of another calculation to come up with a factor for modifying web pages that are responsive to a search query.

Both of those kinds of links (implied and independent express links) are used as factors to modify the rankings of a group of web pages.

Understanding what the patent is about is straightforward because the beginning of the patent explains it in relatively easy to understand English.

This section of the patent uses the following jargon:

  • A resource is a web page or website.
  • A target (target resource) is what is being linked to or referred to.
  • A “source resource” is a resource that makes a citation to the “target resource.”
  • The word “group” means the group of web pages that are relevant to a search query and are being ranked.

The patent talks about “express links” which are just regular links. It also describes “implied links” which are references within search queries, references to a web page (which is called a “target resource”).

I’m going to add bullet points to the original sentences so that they are easier to understand.

Okay, so this is the first important part:

“Links for the group can include express links, implied links, or both.

An express link, e.g., a hyperlink, is a link that is included in a source resource that a user can follow to navigate to a target resource.

An implied link is a reference to a target resource, e.g., a citation to the target resource, which is included in a source resource but is not an express link to the target resource. Thus, a resource in the group can be the target of an implied link without a user being able to navigate to the resource by following the implied link.”

The second important part uses the same jargon to define what implied links are:

  • A resource is a web page or website.
  • The site being linked to or referred to is called a “target resource.”
  • A “group of resources” means a group of web pages.

This is how the patent explains implied links:

“A query can be classified as referring to a particular resource if the query includes a term that is recognized by the system as referring to the particular resource.

For example, a term that refers to a resource may be all of or a portion of a resource identifier, e.g., the URL, for the resource.

For example, the term “example.com” may be a term that is recognized as referring to the home page of that domain, e.g., the resource whose URL is “http://www.example.com”.

Thus, search queries including the term “example.com” can be classified as referring to that home page.

As another example, if the system has data indicating that the terms “example sf” and “esf” are commonly used by users to refer to the resource whose URL is “http://www.sf.example.com,” queries that contain the terms “example sf” or “esf”, e.g., the queries “example sf news” and “esf restaurant reviews,” can be counted as reference queries for the group that includes the resource whose URL is “http://www.sf.example.com.” “

The above explanation defines “reference queries” as the terms that people use to refer to a specific website. For example (my example), if people search with “Walmart” plus the keyword “air conditioner,” then that query is counted as a “reference query” for Walmart.com; it’s treated as a citation and an implied link.

The Patent Is Not About “Brand Mentions” On Web Pages

Some SEOs believe that a mention of a brand on a web page is counted by Google as if it’s a link. They have misinterpreted this patent to support the belief that an “implied link” is a brand mention on a web page.

As you can see, the patent does not describe the use of “brand mentions” on web pages. It’s crystal clear that the meaning of “implied links” within the context of this patent is about references to brands within search queries, not on a web page.

It also discusses doing the same thing with navigational queries:

“In addition or in the alternative, a query can be categorized as referring to a particular resource when the query has been determined to be a navigational query to the particular resource. From the user point of view, a navigational query is a query that is submitted in order to get to a single, particular web site or web page of a particular entity. The system can determine whether a query is navigational to a resource by accessing data that identifies queries that are classified as navigational to each of a number of resources.”

The takeaway, then, is that the patent describes the use of “reference queries” (branded/navigational search queries) as a factor similar to links, which is why they’re called implied links.

Modification Factor

The algorithm generates a “modification factor” which re-ranks (modifies) a group of web pages that are relevant to a search query, based on the “reference queries” (branded search queries) and a count of independent inbound links.

This is how the modification (or ranking) is done:

  1. A count is made of inbound links, using only “independent” links (links that are not controlled by the site being linked to).
  2. A count is made of the reference queries (branded search queries), which are given ranking power like a link.

Reminder: “resources” is a reference to web pages and websites.

Here is how the patent explains the part about the ranking:

“The system generates a modification factor for the group of resources from the count of independent links and the count of reference queries… For example, the modification factor can be a ratio of the number of independent links for the group to the number of reference queries for the group.”

The patent filters links so that only links not associated with the website are counted, and it also counts how many branded search queries are made for a web page or website, using that count as a ranking factor (modification factor).
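To make the mechanics concrete, here is a purely illustrative sketch of the calculation the patent describes: count independent links and reference queries for each page in a group, derive a modification factor (the patent gives a ratio as one example), and use it to adjust an initial score. The data shapes, numbers, and the way the factor is applied are my assumptions, not Google’s implementation.

```python
# Purely illustrative sketch of the patent's described mechanics, not Google's
# actual implementation. Data shapes, numbers, and how the factor is applied
# to the base score are assumptions.

def modification_factor(independent_links: int, reference_queries: int) -> float:
    """One example the patent gives: a ratio of independent links to reference queries."""
    if reference_queries == 0:
        return 1.0  # assumption: leave the score unchanged when there is no query data
    return independent_links / reference_queries

# Hypothetical group of pages relevant (responsive) to a search query.
pages = [
    {"url": "https://example.com/a", "base_score": 0.72,
     "independent_links": 120, "reference_queries": 300},
    {"url": "https://example.com/b", "base_score": 0.75,
     "independent_links": 15, "reference_queries": 10},
]

# Re-rank (modify) the group using the factor.
for page in pages:
    factor = modification_factor(page["independent_links"], page["reference_queries"])
    page["modified_score"] = page["base_score"] * factor

pages.sort(key=lambda p: p["modified_score"], reverse=True)
for page in pages:
    print(page["url"], round(page["modified_score"], 3))
```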

In retrospect it was a mistake for some in the SEO industry to use this patent as “proof” for their idea about brand mentions on websites being a ranking factor.

It’s clear that “implied links” are not about brand mentions in web pages as a ranking factor but rather it’s about brand mentions (and URLs & domains) in search queries that can be used as ranking factors.

Why This Patent Is Important

This patent describes a way to use branded search queries as a signal of popularity and relevance for ranking web pages. It’s a good signal because it’s the users themselves saying that a specific website is relevant for specific search queries. It’s a signal that’s hard to manipulate which may make it a clean non-spam signal.

We don’t know if Google uses what’s described in the patent. But it’s easy to understand why it could still be a relevant signal today.

Read The Patent Within The Entire Context

Patents use specific language, and it’s easy to misinterpret the words or overlook their meaning by focusing on specific sentences. The biggest mistake I see SEOs make is removing one or two sentences from their context and then using them to claim that Google is doing something or other. This is how SEO misinformation begins.

Read my article about How To Read Google Patents to understand how to read them and avoid misinterpreting them. Even if you don’t read patents, knowing the information is helpful because it’ll make it easier to spot misinformation about patents, which there is a lot of right now.

I limited this article to communicating what the “Ranking Search Results” patent is and what its most important points are. There are many granular details about different implementations that I don’t cover because they’re not necessary for understanding the overall patent.

If you want the granular details, I strongly encourage first reading my article about how to read patents before reading the patent.

Read the patent here:

Ranking search results

Google Confirms 3 Ways To Make Googlebot Crawl More via @sejournal, @martinibuster

Google’s Gary Illyes and Lizzi Sassman discussed three factors that trigger increased Googlebot crawling. While they downplayed the need for constant crawling, they acknowledged there are ways to encourage Googlebot to revisit a website.

1. Impact of High-Quality Content on Crawling Frequency

One of the things they talked about was the quality of a website. A lot of people suffer from the discovered-not-indexed issue, and that’s sometimes caused by certain SEO practices that people have learned and believe are good practice. I’ve been doing SEO for 25 years, and one thing that’s always stayed the same is that industry-defined best practices are generally years behind what Google is doing. Yet it’s hard to see what’s wrong if a person is convinced they’re doing everything right.

Gary Illyes shared a reason for an elevated crawl frequency, explaining that one of the triggers for a high level of crawling is signals of high quality that Google’s algorithms detect.

Gary said it at the 4:42 minute mark:

“…generally if the content of a site is of high quality and it’s helpful and people like it in general, then Googlebot–well, Google–tends to crawl more from that site…”

There’s a lot of nuance to the above statement that’s missing, like what are the signals of high quality and helpfulness that will trigger Google to decide to crawl more frequently?

Well, Google never says. But we can speculate and the following are some of my educated guesses.

We know that there are patents about branded search that count branded searches made by users as implied links. Some people think that “implied links” are brand mentions, but “brand mentions” are absolutely not what the patent talks about.

Then there’s the Navboost patent that’s been around since 2004. Some people equate the Navboost patent with clicks, but if you read the actual patent from 2004 you’ll see that it never mentions click-through rates (CTR). It talks about user interaction signals. Clicks were a topic of intense research in the early 2000s, but if you read the research papers and the patents, it’s easy to understand what I mean when I say it’s not as simple as “monkey clicks the website in the SERPs, Google ranks it higher, monkey gets banana.”

In general, I think signals that indicate people perceive a site as helpful can help a website rank better. And sometimes that means giving people what they expect to see.

Site owners will tell me that Google is ranking garbage and when I take a look I can see what they mean, the sites are kind of garbagey. But on the other hand the content is giving people what they want because they don’t really know how to tell the difference between what they expect to see and actual good quality content (I call that the Froot Loops algorithm).

What’s the Froot Loops algorithm? It’s an effect from Google’s reliance on user satisfaction signals to judge whether their search results are making users happy. Here’s what I previously published about Google’s Froot Loops algorithm:

“Ever walk down a supermarket cereal aisle and note how many sugar-laden kinds of cereal line the shelves? That’s user satisfaction in action. People expect to see sugar bomb cereals in their cereal aisle and supermarkets satisfy that user intent.

I often look at the Froot Loops on the cereal aisle and think, “Who eats that stuff?” Apparently, a lot of people do, that’s why the box is on the supermarket shelf – because people expect to see it there.

Google is doing the same thing as the supermarket. Google is showing the results that are most likely to satisfy users, just like that cereal aisle.”

An example of a garbagey site that satisfies users is a popular recipe site (that I won’t name) that publishes easy to cook recipes that are inauthentic and uses shortcuts like cream of mushroom soup out of the can as an ingredient. I’m fairly experienced in the kitchen and those recipes make me cringe. But people I know love that site because they really don’t know better, they just want an easy recipe.

What the helpfulness conversation is really about is understanding the online audience and giving them what they want, which is different from giving them what they should want. Understanding what people want and giving it to them is, in my opinion, what searchers will find helpful and ring Google’s helpfulness signal bells.

2. Increased Publishing Activity

Another thing that Illyes and Sassman said could trigger Googlebot to crawl more is an increased frequency of publishing, like if a site suddenly increased the amount of pages it is publishing. But Illyes said that in the context of a hacked site that all of a sudden started publishing more web pages. A hacked site that’s publishing a lot of pages would cause Googlebot to crawl more.

If we zoom out to examine that statement from the perspective of the forest then it’s pretty evident that he’s implying that an increase in publication activity may trigger an increase in crawl activity. It’s not that the site was hacked that is causing Googlebot to crawl more, it’s the increase in publishing that’s causing it.

Here is where Gary cites a burst of publishing activity as a Googlebot trigger:

“…but it can also mean that, I don’t know, the site was hacked. And then there’s a bunch of new URLs that Googlebot gets excited about, and then it goes out and then it’s crawling like crazy.”​

The takeaway there is that a lot of new pages makes Googlebot excited and gets it crawling a site “like crazy.” No further elaboration is needed, let’s move on.

3. Consistency Of Content Quality

Gary Illyes goes on to mention that Google may reconsider the overall site quality and that may cause a drop in crawl frequency.

Here’s what Gary said:

“…if we are not crawling much or we are gradually slowing down with crawling, that might be a sign of low-quality content or that we rethought the quality of the site.”

What does Gary mean when he says that Google “rethought the quality of the site”? My take is that sometimes the overall quality of a site can go down if parts of the site aren’t up to the same standard as the original content. In my opinion, based on things I’ve seen over the years, at some point the low-quality content may begin to outweigh the good content and drag the rest of the site down with it.

When people come to me saying that they have a “content cannibalism” issue, when I take a look at it, what they’re really suffering from is a low quality content issue in another part of the site.

Lizzi Sassman goes on to ask, at around the 6-minute mark, whether there’s an impact if the website content is static, neither improving nor getting worse, but simply not changing. Gary resisted giving an answer, saying that Googlebot returns to check on the site to see if it has changed, and that Googlebot might “probably” slow down the crawling if there are no changes, though he qualified that statement by saying he didn’t know.

Something that went unsaid but is related to the Consistency of Content Quality is that sometimes the topic changes and if the content is static then it may automatically lose relevance and begin to lose rankings. So it’s a good idea to do a regular Content Audit to see if the topic has changed and if so to update the content so that it continues to be relevant to users, readers and consumers when they have conversations about a topic.

Three Ways To Improve Relations With Googlebot

As Gary and Lizzi made clear, it’s not really about poking Googlebot to get it to come around just for the sake of getting it to crawl. The point is to think about your content and its relationship to the users.

1. Is the content high quality?
Does the content address a topic, or does it address a keyword? Sites that use a keyword-based content strategy are the ones I see suffering in the 2024 core algorithm updates. Strategies based on topics tended to produce better content and sailed through the algorithm updates.

2. Increased Publishing Activity
An increase in publishing activity can cause Googlebot to come around more often. Regardless of whether it’s because a site is hacked or a site is putting more vigor into their content publishing strategy, a regular content publishing schedule is a good thing and has always been a good thing. There is no “set it and forget it” when it comes to content publishing.

3. Consistency Of Content Quality
Content quality, topicality, and relevance to users over time is an important consideration and will assure that Googlebot continues to come around to say hello. A drop in any of those factors (quality, topicality, and relevance) could affect Googlebot crawling, which itself is a symptom of the more important factor: how Google’s algorithm regards the content.

Listen to the Google Search Off The Record Podcast beginning at about the 4 minute mark:

Featured Image by Shutterstock/Cast Of Thousands

Google Warns: URL Parameters Create Crawl Issues via @sejournal, @MattGSouthern

Gary Illyes, Analyst at Google, has highlighted a major issue for crawlers: URL parameters.

During a recent episode of Google’s Search Off The Record podcast, Illyes explained how parameters can create endless URLs for a single page, causing crawl inefficiencies.

Illyes covered the technical aspects, SEO impact, and potential solutions. He also discussed Google’s past approaches and hinted at future fixes.

This info is especially relevant for large or e-commerce sites.

The Infinite URL Problem

Illyes explained that URL parameters can create what amounts to an infinite number of URLs for a single page.

He explains:

“Technically, you can add that in one almost infinite–well, de facto infinite–number of parameters to any URL, and the server will just ignore those that don’t alter the response.”

This creates a problem for search engine crawlers.

While these variations might lead to the same content, crawlers can’t know this without visiting each URL. This can lead to inefficient use of crawl resources and indexing issues.

E-commerce Sites Most Affected

The problem is prevalent among e-commerce websites, which often use URL parameters to track, filter, and sort products.

For instance, a single product page might have multiple URL variations for different color options, sizes, or referral sources.
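As a rough illustration of the problem, the sketch below shows how a handful of optional parameters multiply into many crawlable URLs for the same product page, and one way an audit script might normalize them by stripping parameters that don’t change the content. The parameter names and the "ignored" set are hypothetical examples, not a Google-endorsed method.

```python
# Illustrative sketch: a few optional parameters multiply into many URLs for
# one page, and a simple normalizer collapses them back. Parameter names and
# the ignored set are hypothetical.
from itertools import product
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

BASE = "https://shop.example.com/product/widget"
OPTIONAL_PARAMS = {
    "color": ["", "red", "blue"],
    "size": ["", "s", "m"],
    "ref": ["", "email", "social"],
}

# Every combination of optional parameters yields a distinct crawlable URL.
variants = []
for combo in product(*OPTIONAL_PARAMS.values()):
    params = {k: v for k, v in zip(OPTIONAL_PARAMS, combo) if v}
    variants.append(f"{BASE}?{urlencode(params)}" if params else BASE)
print(len(variants), "URL variations for a single page")

# One possible cleanup: strip parameters that don't alter the response.
IGNORED = {"ref"}

def normalize(url: str) -> str:
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED]
    return urlunparse(parts._replace(query=urlencode(sorted(kept))))

print(len({normalize(u) for u in variants}), "URLs after normalization")
```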

Illyes pointed out:

“Because you can just add URL parameters to it… it also means that when you are crawling, and crawling in the proper sense like ‘following links,’ then everything– everything becomes much more complicated.”

Historical Context

Google has grappled with this issue for years. In the past, Google offered a URL Parameters tool in Search Console to help webmasters indicate which parameters were important and which could be ignored.

However, this tool was deprecated in 2022, leaving some SEOs concerned about how to manage this issue.

Potential Solutions

While Illyes didn’t offer a definitive solution, he hinted at potential approaches:

  1. Google is exploring ways to handle URL parameters, potentially by developing algorithms to identify redundant URLs.
  2. Illyes suggested that clearer communication from website owners about their URL structure could help. “We could just tell them that, ‘Okay, use this method to block that URL space,’” he noted.
  3. Illyes mentioned that robots.txt files could potentially be used more to guide crawlers. “With robots.txt, it’s surprisingly flexible what you can do with it,” he said.

Implications For SEO

This discussion has several implications for SEO:

  1. Crawl Budget: For large sites, managing URL parameters can help conserve crawl budget, ensuring that important pages are crawled and indexed.
  2. Site Architecture: Developers may need to reconsider how they structure URLs, particularly for large e-commerce sites with numerous product variations.
  3. Faceted Navigation: E-commerce sites using faceted navigation should be mindful of how this impacts URL structure and crawlability.
  4. Canonical Tags: Using canonical tags can help Google understand which URL version should be considered primary.

In Summary

URL parameter handling remains tricky for search engines.

Google is working on it, but you should still monitor URL structures and use tools to guide crawlers.

Hear the full discussion in the podcast episode below: