Google’s Spam Explainer was updated to reflect the new realities introduced by the March 2024 Core Algorithm Update. The following covers everything that was removed and added, what it means, and the implications for the core algorithm update.
March 2024 Core Algorithm Update
Google’s core algorithm update is without question one of the big ones; the changes to the core ranking algorithm and other systems are profound.
The change to the Reviews system from a semi-monthly update to a continuous update was the first indication that something significant was happening behind the scenes because, in my experience, that kind of change is usually preceded by an update to Google’s underlying infrastructure, at the software level, the hardware level, or both simultaneously (as seen with Google’s Hummingbird update, which enabled more powerful capabilities).
Updates to Google’s Spam Explainer document reveal key insights about what is going on behind the scenes with this month’s ongoing update, much of which focuses on links.
1. Google Deemphasizes Links
At last year’s Pubcon Austin 2023, Google’s Gary Illyes remarked that links aren’t even in the top three most important ranking signals.
A single edit to the spam documentation appears to confirm Illyes’ statement because the documentation literally deemphasizes links.
Previous documentation:
“Google uses links as an important factor in determining the relevancy of web pages.”
New documentation:
“Google uses links as a factor in determining the relevancy of web pages.”
The removal of the word “important” could normally be conservatively viewed as ambiguous. But it’s hard to be conservative when it’s contrasted with Illyes’ statement about links. In my opinion, there’s a purpose to the removal of the word “important,” and that purpose is to deemphasize the role of links.
2. Content Created For Link Manipulation
There is a second link-related addition to the guidelines. This new factor relates to creating content for the purpose of link manipulation. It is communicated in a new sentence added to a list of examples of manipulative links.
This is the description of the new Content and Links signal:
“Creating low-value content primarily for the purposes of manipulating linking and ranking signals.”
That’s kind of ambiguous, but it does sound like it’s aimed at a form of paid links where a network of sites is created for the purpose of pointing links from contextually relevant pages. This is a very old practice that those who are new to SEO call Private Blog Networks (PBNs).
3. New Signal Related To Outgoing Links
Another new signal is focused on manipulative outgoing links. Google has always penalized sites that sell (outgoing) links. But this may be the first time Google expressly mentions outgoing links as something their algorithm is examining.
This is what the new sentences say:
“Any links that are intended to manipulate rankings in Google Search results may be considered link spam. This includes any behavior that manipulates links to your site or outgoing links from your site.”
4. New Expired Domains Signal
Some may conclude from today’s announcement that Google is cracking down on expired domains. That interpretation is not entirely accurate.
What’s really happening is that Google is introducing a new signal related to how domains are used, which specifically scans for changes from how a domain was previously used to a new purpose.
Google uses the word “repurpose” to describe the signal:
“Expired domain abuse is where an expired domain name is purchased and repurposed primarily to manipulate search rankings by hosting content that provides little to no value to users.”
Google’s updated guidelines list the following examples of some of what the new signal is looking for:
“Affiliate content on a site previously used by a government agency
Commercial medical products being sold on a site previously used by a non-profit medical charity
Casino-related content on a former elementary school site”
The guidelines say that the above list contains examples and that what the signal looks for is not limited to them. The confirmed takeaway is that Google’s expired domain signal examines how the purpose of the new site built on an expired domain differs from the old purpose.
This doesn’t necessarily mean that keeping the content similar is enough to fly under the signal because the signal is looking at the purpose, such as going from a non-affiliate purpose to an affiliate purpose.
Why Changes Are Described Ambiguously
Those are the four additions to Google’s spam explainer page that specifically target link signals. Some of the changes may seem ambiguous, likely because they are meant to provide the general contours of the new link signals without saying precisely what those signals are, which is understandable.
I have 25 years of hands-on experience as an SEO, living through and analyzing search engine updates, and in my opinion it would be a mistake to use the ambiguity in how Google describes the signals to deny that a change occurred. I expect that those who have a financial stake in maintaining the status quo will deny the changes because that’s what they tend to do. It’s up to you to decide for yourself.
There are more changes to the spam document but these are the changes that relate specifically to link signals in Google’s updated core ranking algorithm.
Before making any changes in response to the algorithm it may be useful to consider that the March 2024 Core Algorithm Update will take a month to fully roll out.
Editor’s note: This article is republished with permission from Microsoft.
Elevate your website’s search engine performance with our latest innovation in Bing Webmaster Tools, Top Insights.
Delivering top-tier insights and actionable recommendations to fine-tune your site for Bing and other search engines, Top Insights introduces a suite of personalized and prioritized recommendations, enabling webmasters to identify and focus on the most impactful tasks for website optimization.
Whether it’s enhancing content quality, improving indexing coverage, tracking progress and impact, or optimizing structured data and backlinks, Top Insights is your go-to resource for making data-driven decisions that boost your website’s visibility and performance.
Screenshot from author, March 2024
Deep Dive Into Performance Metrics
Understand how your website is performing in real time with detailed analytics on page speed, user engagement, and more.
Top Insights breaks down complex data into understandable metrics, helping you pinpoint areas for immediate improvement.
Crawl Status At Your Fingertips
Stay informed about how Bing’s crawlers interact with your site.
Identify crawl errors, broken links, and other issues that could hinder your site’s search engine performance, ensuring that your content is always accessible and indexable.
Enhanced Index Coverage
Gain clarity on how well your content is indexed in Bing.
Top Insights provides you with a comprehensive view of your website’s index status, highlighting opportunities to increase your visibility through better indexation.
Elevate Your Content Quality
Discover actionable insights on improving your website’s content.
Whether it’s optimizing for keywords, enhancing readability, or ensuring your content is fresh and relevant, Top Insights guides you through elevating your content game.
Master Structured Data
With Top Insights, harness the power of structured data to enhance your search visibility.
Learn how to implement and optimize structured data to communicate more effectively with search engines and improve your chances of achieving rich snippets in search results.
Optimize Backlinks
Understand the role of backlinks in your website’s SEO strategy.
Top Insights helps you analyze your backlink profile, identify high-quality linking opportunities, and avoid potentially harmful links that could affect your search engine rankings.
User Signals Unpacked
Learn how user behavior impacts your site’s performance.
Top Insights dives into metrics like bounce rate, time on site, and click-through rates, providing you with a clear understanding of user engagement and how it influences your SEO.
Track Your Progress
With Top Insights, monitoring your optimization efforts has never been easier.
Track your progress over time, understand the impact of your changes, and continuously refine your strategies to ensure your website stays ahead of the curve.
Screenshot from author, March 2024
In a digital landscape where visibility is key to your success, Top Insights is an essential asset for webmasters aiming to maximize their website’s potential.
This powerful feature arms you with the critical data and insights needed to make informed decisions to maximize your website’s SEO performance and visibility.
Google has announced a significant update to its search algorithms and policies to tackle spammy and low-quality content on its search engine.
The March 2024 Core Update, which the company says is more extensive than its usual core updates, is now rolling out.
This update includes algorithm changes to improve the quality of search results and reduce spam.
Here are the full details.
Improved Quality Ranking
One of the main focuses of the March 2024 Core Update is to enhance Google’s ranking systems.
“We’re making algorithmic enhancements to our core ranking systems to ensure we surface the most helpful information on the web and reduce unoriginal content in search results,” stated Elizabeth Tucker, Director of Product for Search at Google.
The company has been working on reducing unhelpful and unoriginal content since 2022 and the March 2024 update builds on those efforts.
The refined ranking systems will better understand whether webpages are unhelpful, have a poor user experience, or seem to be created primarily for search engines rather than people.
Google expects that combining this update and its previous efforts will collectively reduce low-quality, unoriginal content in search results by 40%.
Google states:
“We believe these updates will reduce the amount of low-quality content in Search and send more traffic to helpful and high-quality sites. Based on our evaluations, we expect that the combination of this update and our previous efforts will collectively reduce low-quality, unoriginal content in search results by 40%.”
New Spam Policies
In addition to the ranking adjustments, Google is updating its spam policies to remove the “lowest-quality” content from search results.
Google states:
“We’ll take action on more types of these manipulative behaviors starting today. While our ranking systems keep many types of low-quality content from ranking highly on Search, these updates allow us to take more targeted action under our spam policies.”
Scaled Content Abuse
Google is strengthening its policy against using automation to generate low-quality or unoriginal content at scale to manipulate search rankings.
The updated policy will focus on the abusive behavior of producing content at scale to boost search ranking, regardless of whether automation, humans, or a combination of both are involved.
Google states:
“This will allow us to take action on more types of content with little to no value created at scale, like pages that pretend to have answers to popular searches but fail to deliver helpful content.”
Site Reputation Abuse
Google is addressing the issue of site reputation abuse, where trusted websites host low-quality, third-party content to capitalize on the hosting site’s strong reputation.
Google provides the following example of site reputation abuse:
“For example, a third party might publish payday loan reviews on a trusted educational website to gain ranking benefits from the site. Such content ranking highly in Search can confuse or mislead visitors who may have vastly different expectations for the content on a given website.”
Google will now consider such content spam if it’s produced primarily for ranking purposes and without close oversight of the website owner.
Expired Domain Abuse
Google’s updated spam policies will target expired domain abuse, where expired domains are purchased and repurposed to boost the search ranking of low-quality content. This practice can mislead users into thinking the new content is part of the older, trusted site.
Timeline
The March 2024 Core Update is starting to roll out now.
Websites have a two-month window to comply with the new site reputation policy. The other changes come into effect this week.
Google’s announcement emphasizes the company’s ongoing commitment to improving the quality of its search results.
“Search helps people with billions of questions every day, but there will always be areas where we can improve,” Tucker stated. “We’ll continue to work hard at keeping low-quality content on Search to low levels and showing more information created to help people.”
FAQ
How does Google’s March 2024 Core Update aim to enhance search result quality?
The March 2024 Core Update targets the enhancement of search result quality through several measures:
Algorithmic enhancements to core ranking systems.
A significant focus on surfacing useful information while diminishing unoriginal content in search results.
An expected reduction in low-quality, unoriginal content by 40%, building on efforts begun in 2022.
Implementing refined ranking algorithms to discern content quality and user experience more accurately.
What new spam policies has Google introduced in this update?
Google’s spam policy updates encompass the following areas:
Strengthened measures against scaled content abuse, including automated and human-generated low-quality content.
Stricter actions against site reputation abuse where low-quality content rides on the reputation of trusted sites.
Measures targeting expired domain abuse to prevent repurposed domains from misleading users with low-quality content.
What is the timeframe for compliance with Google’s new spam policies?
Websites are being provided specific timeframes to ensure compliance with Google’s updated spam policies:
A two-month window has been provided for websites to adhere to the new site reputation policy.
Other updates related to the March 2024 Core Update are being implemented within the week of the announcement.
Google emphasizes the importance of adapting to these changes promptly to maintain or improve search rankings.
Vetted.ai is one of many AI chatbots that are gaining traction in Google search.
It’s not much, maybe 10,000 monthly visits, but Google found the site to be the best result for ~100 keywords.
Trend: growing.
Organic traffic and top 3 keywords for vetted.ai (Image Credit: Kevin Indig)
A click on one of Vetted AI’s results opens a blank chatbot page that quickly fills with content about the search query.
Vetted’s AI chatbot (Image Credit: Kevin Indig)
Vetted AI is a shopping assistant that targets long-tail queries like [zep vs drano], [vornado mvh vs vh200], or [ugg ansley vs dakota].
It programmatically creates content for any product comparison you can think of.
Screenshot from search for [zep vs drano], Google, March 2024 (Image Credit: Kevin Indig)
Now contrast that with another site that gained traction in the SERPs over the last months: Reddit (see result No. 3 in the screenshot above).
Reddit is the opposite of AI chatbots: human, experiential, and unoptimized content.
Sundar Pich-AI
Google’s $60 million deal with Reddit is more than it seems at first.
Google surfacing more content from forums in the SERPs is a way to counterbalance AI content.
Verification is the ultimate AI watermarking.
Even though Reddit can’t prevent humans from using AI to create posts or comments, chances are lower because of two things Google search doesn’t have: Moderation and Karma.
Yes, Content Goblins have already taken aim at Reddit, but most of the 73 million daily active users provide useful answers.
Content moderators punish spam with bans or even kicks.
But the most powerful driver of quality on Reddit is Karma, “a user’s reputation score that reflects their community contributions.”
Through simple up or downvotes, users can gain authority and trustworthiness – two integral ingredients in Google’s quality systems.
The press release claims that the deal enables Google to “facilitate more content-forward displays of Reddit information that will make our products more helpful for our users and make it easier to participate in Reddit communities and conversations.“
That might be true, but the real reason for the Google-Reddit deal is Karma.
Google already has access to Reddit’s content: It crawls reddit.com many times every day and can use that content to train its machine learning models. It most likely already has.
Google also already shows Reddit content prominently in search through high organic ranks (Hidden Gem update) and the Discussions & Forums SERP Feature.
Reddit.com has seen the fastest growth in search of any domain and is now one of the web’s largest.
Reddit is at the same traffic level as Amazon (Image Credit: Kevin Indig)
But with access to Reddit’s API, Google can use Karma to train models on content humans value and potentially surface better Reddit answers in search, for example, by filtering out spammy or guideline-violating posts.
The implications go way beyond model training and lead right into the heart of what users want out of search.
Authentic Reviews
Reddit’s Karma is especially valuable when it comes to surfacing authentic experiences in search.
Google still has miles to go when it comes to product reviews.
A recent article from HouseFresh shows how big brands are not testing as thoroughly as they might pretend.
“As a team that has dedicated the last few years to testing and reviewing air purifiers, it’s disheartening to see our independent site be outranked by big-name publications that haven’t even bothered to check if a company is bankrupt before telling millions of readers to buy their products.”
You might also recall a research study from Germany that found “having a separate low-quality review section to support a site’s primary content is a successful and lucrative business model.” Publishers make money with reviews, often to survive.
A recent study by Schultheiß et al. investigates the compatibility between SEO and content quality on medical websites with a user study. The study finds an inverse relationship between a page’s optimization level and its perceived expertise, indicating that SEO may hurt at least subjective page quality.
Reddit, on the other hand, is unoptimized and human. You could say it’s in-ai-fficient.
Users go to Reddit, the prime source for in-ai-fficient reviews, when they want opinions from other users instead of optimized reviews from publishers or affiliates. For some products, people want reviews from other people.
There is a speed benefit to using Reddit’s API, too.
Similar to indexing APIs, Google gets all Reddit content via API instead of having to cough up crawl budget for the massive domain.
Wikipedia holds ~60 million pages.
Reddit had over 300 million posts in 2020 alone. The site is big and getting bigger.
From the partnership announcement:
“First, we’re pleased to announce a new Cloud partnership that enables Reddit to integrate new AI-powered capabilities using Vertex AI. Reddit intends to use Vertex AI to enhance search and other capabilities on the Reddit platform.”
It’s ironic that Reddit uses Google’s Vertex AI (enterprise Gemini models) to improve its search capabilities as part of the deal since it was Reddit’s poor search function that drove so many users to search for Reddit content on Google in the first place.
The Hidden Gems update, Google’s Discussions & Forums SERP Feature, and the eventual Google deal might have never happened without such demand for Reddit results.
Ai-fficient
Aifficiencies are incremental improvements from AI. Instead of doing new things, the biggest value add from AI so far is doing things faster and better.
I use ChatGPT a lot to come up with better spreadsheet formulas and macros.
I never learned RegEx well, and my JavaScript/Python skills merely prevent me from embarrassing myself in front of developers.
With LLMs, I can solve these problems quickly and independently.
This week, I categorized almost 20,000 keywords into eight core topics for a client and paid less than $20 in one hour. AI is NOS for no-code.
Websites are leveraging Aifficiencies the same way: not new but better.
Reddit uses Vertex AI, but examples like Zillow (I cover more in the second State of AI report) have already pulled the trigger.
Amazon and eBay lower the barrier to entry by allowing merchants to snap a picture of a product and automagically write title and description, including product features, with AI.
Amazon creates helpful AI review summaries (inspiration for SGE?).
Redfin allows visitors to customize interior design with AI.
AI makes products better instead of creating new ones.
However, when it comes to product advice and reviews, we want unoptimized, in-ai-fficient information. Just raw, authentic feedback.
We don’t want a biased list of products based on which ones rake in the biggest affiliate cut.
Users have grown up and can tell that most reviews on Google have financial incentives, just like the pitches “I think this resource would be helpful for your audience” and “Let’s connect and find business synergies” are really about backlinks and closed deals.
User Intent: Amateur Experience
One way to make reviews more authentic is to use amateur opinions.
If I were in the reviews business, I’d interview non-professionals and feature their opinions in reviews – but only for products with a lower need for expertise.
Google’s addition of “experience” in the quality rater guidelines is a direct hint that not every query demands expertise from professionals.
Some queries have a demand for low expertise and high experience. Just like we can use SERP Features to infer the user intent(s) for a query, a prominent Reddit result tells us that searchers value opinions and experiences from non-experts.
For example, when searching for the best GPU that fits your computer hardware, you probably want to hear from an expert. When searching for card games for couples, you most likely want to hear from other couples.
Google wants to surface more results that are not optimized for embeddings and backlinks but for human signals like Karma.
Helpful Content Updates are steps toward more authentic content. The Google-Reddit deal is the sequel.
The elusive five-star review used to be something you could only flaunt in a rotating reviews section on your website.
But today, Google has pulled these stars out of the shadows and features them front and center across branded SERPs and beyond.
Star ratings can help businesses earn trust from potential customers, improve local search rankings, and boost conversions.
This is your guide to how they work.
Stars And SERPs: What Is The Google Star Rating?
A Google star rating is a consumer-powered grading system that lets other consumers know how good a business is based on a score of one to five stars.
These star ratings can appear across maps and different Google search results properties like standard blue link search listings, ads, rich results like recipe cards, local pack results, third-party review sites, and on-app store results.
How Does The Google Star Rating Work?
When a person searches Google, they will see star ratings in the results. Google uses an algorithm and an average to determine how many stars are displayed on different review properties.
Google explains that the star score system operates based on an average of all review ratings for that business that have been published on Google.
It’s important to note that this average is not calculated in real-time and can take up to two weeks to update after a new review is created.
When users leave a review, they are asked to rate a business based on specific aspects of their customer experience, as well as the type of business being reviewed and the services they’ve included.
For example, plumbers may get “Install faucet” or “Repair toilet” as services to add, and Google also allows businesses to add custom services that aren’t listed.
When customers are prompted to give feedback, they can give positive or critical feedback, or they can choose not to select a specific aspect to review, in which case this feedback aspect is considered unavailable.
This combination of feedback is what Google uses to determine a business’s average score by “dividing the number of positive ratings by the total number of ratings (except the ones where the aspect was not rated).”
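To illustrate with hypothetical numbers: if 20 reviewers rated the “Repair toilet” aspect and 15 of those ratings were positive, that aspect’s score would be 15 ÷ 20, or 75%. The overall star rating shown in search is simply the average of all published star ratings, so a business with individual ratings of 5, 4, 4, and 3 would display a 4.0 average.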
Google star ratings do have some exceptions in how they function.
For example, the UK and EU have certain restrictions that don’t apply to other regions, following recent scrutiny of fake reviews by the EU Consumer Protection Cooperation and the UK Competition and Markets Authority.
Additionally, the type of rating search property will determine the specifics of how it operates and how to gather and manage reviews there.
Keep reading to get an in-depth explanation of each type of Google star rating available on the search engine results pages (SERPs).
How To Get Google Star Ratings On Different Search Properties
As mentioned above, there are different types of Google star ratings available across search results, including the standard blue-link listings, ads, local pack results, rich snippets, third-party reviews, and app store results.
Here’s what the different types of star-rating results look like in Google and how they work on each listing type.
As the video points out, you need actual reviews to get your structured data markup to show.
Then, you can work with your development team to input the code on your site that indicates your average rating, highest, lowest, and total rating count.
Screenshot JSON-LD script on Google Developers, August 2021
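For orientation, here is a minimal sketch of what that kind of JSON-LD could look like (this is not the exact script from the screenshot; the product name and rating values are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "bestRating": "5",
    "worstRating": "1",
    "ratingCount": "89"
  }
}
</script>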
Once you add the rich snippet to your site, there is no clear timeline for when they will start appearing in the SERPs – that’s up to Google.
In fact, Google specifically mentions that reviews in properties like search can take longer to appear, and often, this delay is caused by business profiles being merged.
Adding schema is strongly encouraged. But even without it, if you own a retail store with ratings, Google may still show your star ratings in the search engine results.
They do this to ensure searchers are getting access to a variety of results. Google says:
“content on your website that’s been crawled and is related to retail may also be shown in product listings and annotations for free across Google.”
If you want star ratings to show up on Shopping Ads, you’ll have to pay for that.
Paid Ads And Google Stars
When Google Stars appear in paid search ads, they’re known as seller ratings, “an automated extension type that showcases advertisers with high ratings.”
These can appear in text ads, shopping ads, and free listings. Both the star rating and the total number of votes or reviews are displayed.
In addition to Google star ratings, shopping ads may include additional product information such as shipping details, color, material, and more, as shown below.
Screenshot from SERPs ads, Google, February 2024
Paid text ads were previously labeled as “ads” and recently have been upgraded to a “sponsored” label, as shown below.
Screenshot from SERPs ads, Google, February 2024
How To Get Google Stars On Paid Ads
To participate in free listings, sellers have to meet requirements such as the following:
Follow all the required policies around personally identifiable information, spam, malware, legal requirements, return policies, and more.
Submit a feed through the Google Merchant Center or have structured data markup on their website (as described in the previous section).
Again, some ecommerce sellers who do not have schema markup may still have their content show up in the SERPs.
For text ads and shopping ads to show star ratings, sellers are typically required to have at least 100 reviews in the last 12 months.
Paid advertisers must also meet a minimum number of stars for seller ratings to appear on their text ads. This helps higher-quality advertisers stand out from the competition.
Google treats reviews on a per-country basis, so the minimum review threshold of 100 also applies to only one region at a time.
For star ratings to appear on a Canadian ecommerce company’s ads, for example, they would have to have obtained a minimum of 100 reviews from within Canada in the last year.
Google considers reviews from its own Google Customer Reviews and also from approved third-party partner review sites from its list of 29 supported review partners, which makes it easier for sellers to meet the minimum review threshold each year.
Google also requests:
The domain that has ratings must be the same as the one that’s visible in the ad.
Google or its partners must conduct a research evaluation of your site.
The reviews included must be about the product or service being sold.
Local Pack Results And Google Stars
Local businesses have a handful of options for their business to appear on Google via Places, local map results, and a Google Business Profile page – all of which can show star ratings.
Consumers even have the option to sort local pack results by their rating, as shown in the image example below.
Screenshot from SERPs local pack, Google, February 2024
Customers may leave reviews directly on local business properties without being asked, but Google also encourages business owners to solicit reviews from their customers and shares best practices, including:
Asking your customers to leave you a review and make it easy for them to do so by providing a link to your review pages.
Making review prompts desktop and mobile-friendly.
Customers can also leave star ratings on other local review sites, as Google can pull from both to display on local business search properties. It can take up to two weeks to get new local reviews to show in your overall score.
Once customers are actively leaving reviews, Google Business Profile owners have a number of options to help them manage these:
Screenshot from Google Business Profile Help, Google, February 2024
Rich Results, Like Recipes, And Google Stars
Everybody’s gotta eat, and we celebrate food in many ways — one of which is recipe blogs.
While restaurants rely more on local reviews, organic search results, and even paid ads, food bloggers seek to have their recipes rated.
Similar to other types of reviews, recipe cards in search results show the average review rating and the total number of reviews.
Screenshot from search for [best vegan winter recipes], Google, February 2024
The outcome has become a point of contention in the food blogging community, since only three recipes per search can be seen on Google desktop results (as shown in the image above), and four on a mobile browser.
These coveted spots will attract clicks, leaving anyone who hasn’t mastered online customer reviews in the dust. That means that the quality of the recipe isn’t necessarily driving these results.
Google gives users the option to click “Show more” to see two additional rows of results:
Screenshot from SERPs, Google, February 2024
Searchers can continue to click the “Show more” button to see additional recipe results.
Anyone using Google Home can search for a recipe and get results through their phone:
Screenshot from Elfsight, February 2024
Similarly, recipe search results can be sent from the device to the Google Home assistant. Both methods will enable easy and interactive step-by-step recipe instructions using commands like “start recipe,” “next step,” or even “how much olive oil?”
How To Get Google Stars On Recipe Results
Similar to the steps to have stars appear on organic blue-link listings, food bloggers and recipe websites need to add schema to their websites in order for star ratings to show.
Required markup includes an image of the recipe in a BMP, GIF, JPEG, PNG, WebP, or SVG format.
Recommended Markup For Recipes
Aggregate rating.
Author.
Cook time, preparation time, and total duration.
Date published.
Description.
Keywords.
Nutrition information.
Prep time.
Recipe category by meal type, like “dinner.”
Region associated with the recipe.
Ingredients.
Instructions.
Yield or total serving.
Total time.
Video (and other related markup, if there is a video in the recipe).
To have recipes included in Google Assistant Guided Recipes, the following markup must be included:
recipeIngredient
recipeInstructions
To have the video property, add the contentUrl.
For example, here’s what the structured markup would look like for the recipeIngredient property:
Screenshot from Google Developer, February 2024
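For readers who can’t see the screenshot, here is a trimmed sketch of Recipe markup showing the recipeIngredient and recipeInstructions properties (the recipe name, ingredients, and instruction text are placeholders, and other properties such as the recipe image are omitted for brevity):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Example Winter Soup",
  "recipeIngredient": [
    "1 tablespoon olive oil",
    "1 diced onion",
    "2 cups vegetable broth"
  ],
  "recipeInstructions": [
    {
      "@type": "HowToStep",
      "text": "Saute the onion in olive oil, add the broth, and simmer for 20 minutes."
    }
  ]
}
</script>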
Third-Party Review Sites And Google Stars
Many software companies rely on third-party review sites to help inform their customer’s purchasing decisions.
Third-party review sites include any website a brand doesn’t own where a customer can submit a review, such as Yelp, G2, and many more.
Many of these sites, like Featured Customers shown below, can display star ratings within Google search results.
Screenshot from SERPs listing of a review site, Google, February 2024
Rich snippets from third-party reviews, such as stars, summary info, or ratings, can also appear on a Google Business Profile or map view from approved sites.
For local businesses, Google star ratings appear in different locations than the third-party reviews on a desktop:
Screenshot from SERPs listing of a review site, Google, February 2024
On mobile, ratings are displayed on a company’s Google Business Profile. Users need to click on Reviews or scroll down to see the third-party reviews:
Screenshot from SERPs listing of a review site, Google, February 2024
On a map, the results from third parties may be more prominent, like the Tripadvisor review that shows up for a map search of The Hilton in Vancouver (although it does not display a star rating even though Tripadvisor does provide star ratings):
Screenshot from SERPs listing of a review site, Google, February 2024
How To Get Google Stars On Third-Party Review Sites
The best way to get a review on a third-party review site depends on which site is best for the brand or the business.
For example, if you have active customers on Yelp or Tripadvisor, you may choose to engage with customers there.
Screenshot from SERPs listing of a review site, Google, February 2024
Similarly, if a software review site like Trustpilot shows up for your branded search, you could do an email campaign with your customer list asking them to leave you a review there.
Here are a few of the third-party review websites that Google recognizes:
Trustpilot.
Reevoo.
Bizrate – through Shopzilla.
When it comes to third-party reviews, Google reminds businesses that there is no way to opt out of third-party reviews, and they need to take up any issues with third-party site owners.
App Store Results And Google Stars
When businesses have an application as their core product, they typically rely on App Store and Google Play Store downloads.
Right from the SERPs, searchers can see an app’s star ratings, as well as the total votes and other important information, like whether the app is free or not.
Screenshot from SERP play store results, Google, February 2024
How To Get Google Stars On App Store Results
Businesses can list their iOS apps in the App Store or on the Google Play store, prompt customers to leave reviews there, and also respond to them.
Does The Google Star Rating Influence SEO Rankings?
John Mueller confirmed that Google does not factor star ratings or customer reviews into web search rankings. However, Google is clear that star ratings influence local search results and rankings:
“Google review count and review score factor into local search ranking. More reviews and positive ratings can improve your business’ local ranking.”
Even though they are not a ranking factor for non-local organic search, star ratings can serve as an important conversion element, helping you display social proof, build credibility, and increase your click-through rate from search engines (which may indirectly impact your search rankings).
For local businesses, both Google stars and third-party ratings appear in desktop and mobile searches, as seen above.
These ratings not only help local businesses rank above their competitors for key phrases, but they will also help convince more customers to click, which is every company’s search game.
How Do I Improve My Star Rating?
Businesses that want to improve their Google star rating should start by claiming their Google Business Profile and making sure all the information is complete and up to date.
If a company has already taken these steps and wants to offset a poor rating, they are going to need more reviews to offset the average.
Companies can get more Google reviews by making it easy for customers to leave one. The first step for a company is to get the link to leave a review inside their Google Business Profile:
Screenshot from Wordstream, February 2024
From there, companies can send this link out to customers directly (there are four options displayed right from the link as seen above), include it on social media, and even dedicate sections of their website to gathering more reviews and/or displaying reviews from other users.
It isn’t clear whether or not responding to reviews will help improve a local business’s ranking; however, it’s still a good idea for companies to respond to reviews on their Google Business Profile in order to improve their ratings overall.
That’s because responding to reviews can entice other customers to leave a review since they know they will get a response and because the owner is actually seeing the feedback.
For service businesses, Google provides the option for customers to rate aspects of the experience.
This is helpful since giving reviewers this option allows anyone who had a negative experience to rate just one aspect negatively rather than giving a one-star review overall.
Does Having A Star Rating On Google Matter? Yes! So Shoot For The Stars
Stars indicate quality to consumers, so they almost always improve click-through rates wherever they are present.
Consumers tend to trust and buy from brands with higher star ratings in local listings, paid ads, or even app downloads.
Many, many, many studies have demonstrated this phenomenon time and again. So, don’t hold back when it comes to reviews.
Do an audit of where your brand shows up in SERPs and get stars next to as many placements as possible.
The most important part of star ratings across Google, however, will always be the service and experiences companies provide that fuel good reviews from happy customers.
The world of search has seen massive change recently. Whether you’re still in the planning stages for this year or underway with your 2024 strategy, you need to know the new SEO trends to stay ahead of seismic search industry shifts.
It’s time to chart a course for SEO success in this changing landscape.
Watch this on-demand webinar as we explore exclusive survey data from today’s top SEO professionals and digital marketers to inform your strategy this year. You’ll also learn how to navigate SEO in the era of AI, and how to gain an advantage with these new tools.
You’ll hear:
The top SEO priorities and challenges for 2024.
The role of AI in SEO – how to get ahead of the anticipated disruption of SGE and AI overall, plus SGE-specific SEO priorities.
Winning SEO resourcing strategies and reporting insights to fuel success.
With Shannon Vize and Ryan Maloney, we’ll take a deep dive into the top trends, priorities, and challenges shaping the future of SEO.
Robots.txt is a useful and powerful tool to instruct search engine crawlers on how you want them to crawl your website. Managing this file is a key component of good technical SEO.
It is not all-powerful – in Google’s own words, “it is not a mechanism for keeping a web page out of Google” – but it can help prevent your site or server from being overloaded by crawler requests.
If you have this crawl block on your site, you must be certain it’s being used properly.
This is particularly important if you use dynamic URLs or other methods that generate a theoretically infinite number of pages.
In this guide, we will look at some of the most common issues with the robots.txt file, their impact on your website and your search presence, and how to fix these issues if you think they have occurred.
But first, let’s take a quick look at robots.txt and its alternatives.
What Is Robots.txt?
Robots.txt uses a plain text file format and is placed in the root directory of your website.
It must be in the topmost directory of your site. Search engines will simply ignore it if you place it in a subdirectory.
Despite its great power, robots.txt is often a relatively simple document and a basic robots.txt file can be created in seconds using an editor like Notepad. You can have fun with them and add additional messaging for users to find.
Image from author, February 2024
There are other ways to achieve some of the same goals that robots.txt is usually used for.
Individual pages can include a robots meta tag within the page code itself.
You can also use the X-Robots-Tag HTTP header to influence how (and whether) content is shown in search results.
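As a quick illustration of those two alternatives (the directives shown are common examples, not a recommendation for any particular page), a robots meta tag sits in the page’s head, while X-Robots-Tag is sent as an HTTP response header:

<meta name="robots" content="noindex, nofollow">

X-Robots-Tag: noindex, nofollow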
What Can Robots.txt Do?
Robots.txt can achieve a variety of results across a range of different content types:
Webpages can be blocked from being crawled.
They may still appear in search results, but they will not have a text description. Non-HTML content on the page will not be crawled either.
Media files can be blocked from appearing in Google search results.
This includes images, video, and audio files.
If the file is public, it will still “exist” online and can be viewed and linked to, but this private content will not show in Google searches.
Resource files like unimportant external scripts can be blocked.
But this means if Google crawls a page that requires that resource to load, the Googlebot robot will “see” a version of the page as if that resource did not exist, which may affect indexing.
You cannot use robots.txt to completely block a webpage from appearing in Google’s search results.
To achieve that, you must use an alternative method, such as adding a noindex meta tag to the head of the page.
How Dangerous Are Robots.txt Mistakes?
A mistake in robots.txt can have unintended consequences, but it’s often not the end of the world.
The good news is that by fixing your robots.txt file, you can recover from any errors quickly and (usually) in full.
“Web crawlers are generally very flexible and typically will not be swayed by minor mistakes in the robots.txt file. In general, the worst that can happen is that incorrect [or] unsupported directives will be ignored.
Bear in mind though that Google can’t read minds when interpreting a robots.txt file; we have to interpret the robots.txt file we fetched. That said, if you are aware of problems in your robots.txt file, they’re usually easy to fix.”
8 Common Robots.txt Mistakes
Robots.txt Not In The Root Directory.
Poor Use Of Wildcards.
Noindex In Robots.txt.
Blocked Scripts And Stylesheets.
No Sitemap URL.
Access To Development Sites.
Using Absolute URLs.
Deprecated & Unsupported Elements.
If your website behaves strangely in the search results, your robots.txt file is a good place to look for any mistakes, syntax errors, and overreaching rules.
Let’s take a look at each of the above mistakes in more detail and see how to ensure you have a valid robots.txt file.
1. Robots.txt Not In The Root Directory
Search robots can only discover the file if it’s in your root folder.
That’s why there should be only a forward slash between the .com (or equivalent domain) of your website, and the ‘robots.txt’ filename, in the URL of your robots.txt file.
If there’s a subfolder in there, your robots.txt file is probably not visible to the search robots, and your website is probably behaving as if there was no robots.txt file at all.
To fix this issue, move your robots.txt file to your root directory.
It’s worth noting that this will need you to have root access to your server.
Some content management systems will upload files to a “media” subdirectory (or something similar) by default, so you might need to circumvent this to get your robots.txt file in the right place.
2. Poor Use Of Wildcards
Robots.txt supports two wildcard characters:
Asterisk (*) – represents any instances of a valid character, like a Joker in a deck of cards.
Dollar sign ($) – denotes the end of a URL, allowing you to apply rules only to the final part of the URL, such as the filetype extension.
It’s sensible to adopt a minimalist approach to using wildcards, as they have the potential to apply restrictions to a much broader portion of your website.
It’s also relatively easy to end up blocking robot access from your entire site with a poorly placed asterisk.
Test your wildcard rules using a robots.txt testing tool to ensure they behave as expected. Be cautious with wildcard usage to prevent accidentally blocking or allowing too much.
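For illustration, here is a hedged sketch comparing a tightly scoped wildcard rule with one that blocks everything (the paths are hypothetical):

User-agent: *
# Applies only to URLs under /search/ that end in .php
Disallow: /search/*.php$

# A stray rule like the following would block the entire site:
# Disallow: /*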
3. Noindex In Robots.txt
This one is more common on websites that are over a few years old.
Google has stopped obeying noindex rules in robots.txt files as of September 1, 2019.
If your robots.txt file was created before that date or contains noindex instructions, you will likely see those pages indexed in Google’s search results.
The solution to this problem is to implement an alternative “noindex” method.
One option is the robots meta tag, which you can add to the head of any webpage you want to prevent Google from indexing.
4. Blocked Scripts And Stylesheets
It might seem logical to block crawler access to external JavaScripts and cascading stylesheets (CSS).
However, remember that Googlebot needs access to CSS and JS files to “see” your HTML and PHP pages correctly.
If your pages are behaving oddly in Google’s results, or it looks like Google is not seeing them correctly, check whether you are blocking crawler access to required external files.
A simple solution to this is to remove the line from your robots.txt file that is blocking access.
Or, if you have some files you do need to block, insert an exception that restores access to the necessary CSS and JavaScript.
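As a sketch of what such an exception might look like (the directory names are placeholders), you can disallow a folder while restoring access to the rendering resources inside it:

User-agent: *
Disallow: /assets/
# Exceptions so Googlebot can still fetch the files it needs to render pages
Allow: /assets/css/
Allow: /assets/js/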
5. No XML Sitemap URL
This is more about SEO than anything else.
You can include the URL of your XML sitemap in your robots.txt file.
Because this is the first place Googlebot looks when it crawls your website, this gives the crawler a headstart in knowing the structure and main pages of your site.
While this is not strictly an error – as omitting a sitemap should not negatively affect the actual core functionality and appearance of your website in the search results – it’s still worth adding your sitemap URL to robots.txt if you want to give your SEO efforts a boost.
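Adding the reference takes a single line anywhere in the file (the domain below is a placeholder):

Sitemap: https://www.example.com/sitemap.xml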
6. Access To Development Sites
Blocking crawlers from your live website is a no-no, but so is allowing them to crawl and index your pages that are still under development.
It’s best practice to add a disallow instruction to the robots.txt file of a website under construction so the general public doesn’t see it until it’s finished.
Equally, it’s crucial to remove the disallow instruction when you launch a completed website.
Forgetting to remove this line from robots.txt is one of the most common mistakes among web developers; it can stop your entire website from being crawled and indexed correctly.
If your development site seems to be receiving real-world traffic, or your recently launched website is not performing at all well in search, look for a universal user agent disallow rule in your robots.txt file:

User-agent: *
Disallow: /
If you see this when you shouldn’t (or don’t see it when you should), make the necessary changes to your robots.txt file and check that your website’s search appearance updates accordingly.
7. Using Absolute URLs
While using absolute URLs in things like canonicals and hreflang is best practice, for URLs in the robots.txt, the inverse is true.
Using relative paths in the robots.txt file is the recommended approach for indicating which parts of a site should not be accessed by crawlers.
Each rule names a directory or page, relative to the root domain, that the allow or disallow applies to for the user agent just mentioned.
When you use an absolute URL, there’s no guarantee that crawlers will interpret it as intended and that the disallow/allow rule will be followed.
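To make the difference concrete (the directory and domain are illustrative only):

User-agent: *
# Recommended: relative path
Disallow: /private-directory/
# Not recommended: absolute URL
# Disallow: https://www.example.com/private-directory/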
8. Deprecated & Unsupported Elements
While the guidelines for robots.txt files haven’t changed much over the years, two elements that are oftentimes included are:
Crawl-delay.
Noindex.
While Bing supports crawl-delay, Google doesn’t, yet webmasters often specify it anyway. You used to be able to set crawl settings in Google Search Console, but this option was removed toward the end of 2023.
Google announced it would stop supporting the noindex directive in robots.txt files in July 2019. Before this date, webmasters were able to use the noindex directive in their robots.txt file.
This was never a widely supported or standardized practice, and the preferred methods for noindex were the on-page robots meta tag or the X-Robots-Tag header applied at the page level.
How To Recover From A Robots.txt Error
If a mistake in robots.txt has unwanted effects on your website’s search appearance, the first step is to correct robots.txt and verify that the new rules have the desired effect.
Some SEO crawling tools can help so you don’t have to wait for the search engines to crawl your site next.
When you are confident that robots.txt is behaving as desired, you can try to get your site re-crawled as soon as possible.
Submit an updated sitemap and request a re-crawl of any pages that have been inappropriately delisted.
Unfortunately, you are at the whim of Googlebot – there’s no guarantee as to how long it might take for any missing pages to reappear in the Google search index.
All you can do is take the correct action to minimize that time as much as possible and keep checking until Googlebot implements the fixed robots.txt.
Final Thoughts
Where robots.txt errors are concerned, prevention is always better than the cure.
On a large revenue-generating website, a stray wildcard that removes your entire website from Google can have an immediate impact on earnings.
Edits to robots.txt should be made carefully by experienced developers, double-checked, and – where appropriate – subject to a second opinion.
If possible, test in a sandbox editor before pushing live on your real-world server to avoid inadvertently creating availability issues.
Remember, when the worst happens, it’s important not to panic.
Diagnose the problem, make the necessary repairs to robots.txt, and resubmit your sitemap for a new crawl.
Your place in the search rankings will hopefully be restored within a matter of days.
There have been significant changes to the way users interact with search engines, with SERPs moving beyond the basic blue links to a more dynamic and feature-rich structure.
As Google’s SERP features continue to evolve, it’s important to understand each update and how they impact your SEO efforts.
So if you’re a digital marketer or website owner looking to boost your search visibility and user engagement this year, we’ve got you covered.
On March 20, we’re doing a deep dive into all things SERP features, and giving you the tools you need to analyze them.
Join us live as we discuss what these SERP features are, why they’re important, and how you can snag some for your own business.
Not only will STAT’s Senior Search Scientist, Tom Capper, walk you through how to craft an end-to-end strategy, but he’ll also share the latest research on SERP features in 2024, including how frequently each is appearing, and how much visibility each is driving, with comparative analysis across device type and geographic market.
By the end of this session, you’ll know how to uncover strategic SERP feature insights in your space for content, competitive research, and on-page optimizations, in order to enhance your organic presence.
Google announced a new carousel rich result that can be used for local businesses, products, and events, which will show a scrolling horizontal carousel displaying all of the items in the list. It’s very flexible and can even be used to create a “top things to do in a city” list that combines hotels, restaurants, and events. This new feature is in beta, which means it’s being tested.
The new carousel rich result is for displaying lists in a carousel format. According to the announcement, the rich result is limited to the following types:
“LocalBusiness and its subtypes, for example:
Restaurant
Hotel
VacationRental
Product
Event”
An example of subtypes is Lodgings, which is a subset of LocalBusiness.
The carousel displays “tiles” that contain information from the webpage, such as prices, ratings, and images. The order of the items in the ItemList structured data is the order in which they will be displayed in the carousel.
Publishers must use the ItemList structured data in order to become eligible for the new rich result.
All information in the ItemList structured data must be on the webpage. Just like any other structured data, you can’t stuff the structured data with information that is not visible on the webpage itself.
There are two important rules when using this structured data:
The ItemList type must be the top level container for the structured data.
All the URLs in the list must point to different webpages on the same domain.
The requirement that ItemList be the top-level container means the structured data cannot be merged with other structured data in which the top-level container is something other than ItemList.
For example, the structured data must begin like this:
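(The exact snippet from Google’s documentation isn’t reproduced here; the following is a minimal sketch assuming a hotel-and-restaurant list, with placeholder business names and URLs.)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Restaurant",
        "name": "Example Trattoria",
        "url": "https://www.example.com/places/example-trattoria"
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "Hotel",
        "name": "Example Harbor Hotel",
        "url": "https://www.example.com/places/example-harbor-hotel"
      }
    }
  ]
}
</script>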
Be As Specific As Possible
Google’s guidelines recommend being as specific as possible, but if there isn’t a structured data type that closely matches the type of business, it’s okay to use the more generic LocalBusiness structured data type.
“Depending on your scenario, you may choose the best type to use. For example, if you have a list of hotels and vacation rentals on your page, use both Hotel and VacationRental types. While it’s ideal to use the type that’s closest to your scenario, you can choose to use a more generic type (for example, LocalBusiness).”
Can Be Used For Products
A super interesting use case for this structured data is for displaying a list of products in a carousel rich result.
The structured data for that begins as an ItemList structured data type like this:
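(Again, a minimal sketch rather than Google’s exact example; the product name, URL, and offer details are placeholders.)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Product",
        "name": "Example Espresso Machine",
        "url": "https://www.example.com/products/example-espresso-machine",
        "offers": {
          "@type": "Offer",
          "price": "199.99",
          "priceCurrency": "USD"
        }
      }
    }
  ]
}
</script>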
The idea that something is not a ranking factor yet nevertheless plays a role in ranking websites seems logically irreconcilable. Despite this seeming like a paradox that cancels itself out, SearchLiaison recently tweeted some comments that go a long way toward understanding how to think about E-E-A-T and apply it to SEO.
“You know this hasn’t always been there in Google and it’s something that we developed about ten to twelve or thirteen years ago. And it really is there to make sure that along the lines of what we talked about earlier is that it really is there to ensure that the content that people consume is going to be… it’s not going to be harmful and it’s going to be useful to the user. These are principles that we live by every single day.
And E-A-T, that template of how we rate an individual site based off of Expertise, Authoritativeness and Trustworthiness, we do it to every single query and every single result. So it’s actually very pervasive throughout everything that we do.
I will say that the YMYL queries, the Your Money or Your Life Queries, such as you know when I’m looking for a mortgage or when I’m looking for the local ER, those we have a particular eye on and we pay a bit more attention to those queries because clearly they’re some of the most important decisions that people can make.
So I would say that E-A-T has a bit more of an impact there but again, I will say that E-A-T applies to everything, every single query that we actually look at.”
How can something be a part of every single search query and not be a ranking factor, right?
Background, Experience & Expertise In Google Circa 2012
Something to consider is that in 2012 Google’s senior engineer at the time, Matt Cutts, said that experience and expertise brings a measure of quality to content and makes it worthy of ranking.
He discussed whether the website of a hypothetical person named “Jane” deserves to rank with articles that are merely original variations of what’s already in the SERPs.
Matt Cutts observed:
“While they’re not duplicates they bring nothing new to the table.
Google would seek to detect that there is no real differentiation between these results and show only one of them so we could offer users different types of sites in the other search results.
They need to ask themselves what really is their value add? …they need to figure out what… makes them special.
…if Jane is just churning out 500 words about a topic where she doesn’t have any background, experience or expertise, a searcher might not be as interested in her opinion.”
Matt then cites the example of Pulitzer Prize-Winning movie reviewer Roger Ebert as a person with the background, experience and expertise that makes his opinion valuable to readers and the content worthy of ranking.
Matt didn’t say that a webpage author’s background, experience and expertise were ranking factors. But he did say that these are the kinds of things that can differentiate one webpage from another and align it to what Google wants to rank.
He specifically said that Google’s algorithm detects if there is something different about it that makes it stand out. That was in 2012 but not much has changed because Google’s John Mueller says the same thing.
“So with that in mind, if you’re focused on kind of this small amount of content that is the same as everyone else then I would try to find ways to significantly differentiate yourselves to really make it clear that what you have on your website is significantly different than all of those other millions of ringtone websites that have kind of the same content.
…And that’s the same recommendation I would have for any kind of website that offers essentially the same thing as lots of other web sites do.
You really need to make sure that what you’re providing is unique and compelling and high quality so that our systems and users in general will say, I want to go to this particular website because they offer me something that is unique on the web and I don’t just want to go to any random other website.”
“Is it something the web has been waiting for? Or is it just another red widget?”
This thing about being compelling and different from other sites has been a part of Google’s algorithm for a while, just like the Googler in the video said, just like Matt Cutts said, and exactly like what Mueller has said as well.
Are they talking about signals?
E-E-A-T Algorithm Signals
We know there’s something in the algorithm that relates to someone’s expertise and background that Google’s looking for. The table is set and we can dig into the next step of what it all means.
A while back, I remember reading something Marie Haynes said about E-A-T: she called it a framework. And I thought, now that’s an interesting thing she just did; she’s conceptualizing E-A-T.
When SEOs discussed E-A-T it was always in the context of what to do in order to demonstrate E-A-T. So they looked at the Quality Raters Guide for guidance, which kind of makes sense since it’s a guide, right?
But what I’m proposing is that the answer isn’t really in the guidelines or anything that the quality raters are looking for.
The best way to explain it is to ask you to think about the biggest part of Google’s algorithm, relevance.
What’s relevance? Is it something you have to do? It used to be about keywords and that’s easy for SEOs to understand. But it’s not about keywords anymore because Google’s algorithm has natural language understanding (NLU). NLU is what enables machines to understand language in the way that it’s actually spoken (natural language).
So, relevance is just something that’s related or connected to something else. So, if I ask, how do I satiate my thirst? The answer can be water, because water quenches the thirst.
How is a site relevant to the search query: “how do I satiate my thirst?”
An SEO would answer the problem of relevance by saying that the webpage has to have the keywords that match the search query, which would be the words “satiate” and “thirst.”
The next step the SEO would take is to extract the related entities for “satiate” and “thirst” because every SEO “knows” they need to do entity research to understand how to make a webpage that answers the search query, “How do I satiate my thirst?”
Now that the SEO has their entities and their keywords they put it all together and write a 600 word essay that uses all their keywords and entities so that their webpage is relevant for the search query, “How do I satiate my thirst?”
I think we can stop now and see how silly that is, right? If someone asked you, “How do I satiate my thirst?” You’d answer, “With water” or “a cold refreshing beer” because that’s what it means to be relevant.
Relevance is just a concept. It doesn’t have anything to do with entities or keywords in today’s search algorithms because the machine is understanding search queries as natural language, even more so with AI search engines.
Similarly, E-E-A-T is also just a concept. It doesn’t have anything to do with author bios, LinkedIn profiles, it doesn’t have anything at all to do with making your content say that you handled the product that’s being reviewed.
“….just making a claim and talking about a ‘rigorous testing process’ and following an ‘E-E-A-T checklist’ doesn’t guarantee a top ranking or somehow automatically cause a page to do better.”
Here’s the part where SearchLiaison ties a bow around the gift of E-E-A-T knowledge:
“We talk about E-E-A-T because it’s a concept that aligns with how we try to rank good content.”
E-E-A-T Can’t Be Itemized On A Checklist
Remember how we established that relevance is a concept and not a bunch of keywords and entities? Relevance is just answering the question.
E-E-A-T is the same thing. It’s not something that you do. It’s closer to something that you are.
SearchLiaison elaborated:
“…our automated systems don’t look at a page and see a claim like “I tested this!” and think it’s better just because of that. Rather, the things we talk about with E-E-A-T are related to what people find useful in content. Doing things generally for people is what our automated systems seek to reward, using different signals.”
A Better Understanding Of E-E-A-T
I think it’s clear now how E-E-A-T isn’t something that’s added to a webpage or is something that is demonstrated on the webpage. It’s a concept, just like relevance.
A good way to think of it is this: if someone asks you a question about your family, you answer it. Most people are expert and experienced enough to answer that question. That’s what E-E-A-T is and how it should be treated when publishing content. Regardless of whether it’s YMYL content or a product review, the expertise is just like answering a question about your family; it’s just a concept.