Google’s Response To Experts Outranked By Redditors

An SEO asked on LinkedIn why an anonymous user on Reddit could outrank a credible website with a named author. Google’s answer gives a peek at what’s going on with search rankings and why Reddit can outrank expert articles.

Why Do Anonymous Redditors Outrank Experts?

The person asking the question wanted to know why an anonymous author on Reddit can outrank a named author who has “credibility,” such as one writing for a brand-name site like PCMag.

The person wrote:

“I was referring to how important the credibility of the writer is now. If we search for ‘best product under X amount,’ we see, let’s say, PCMag and Reddit both on the first page.

PCMag is a reliable source for that product, while Reddit has UGC and surely doesn’t guarantee authenticity. Where do you see this in terms of credibility?

In my opinion, Google must be focusing more on this, especially after the AI boom, where misinformation can be easily and massively spread.

Do you think this is an important factor in rankings anymore?”

This is the follow-up question, which points out what the SEO feels is wrong with Google’s search results:

“As we can see, Reddit, popular for anonymous use, ranks much higher than many other websites.

This means that content from anonymous users is acceptable.

Can I conclude that a blog without any ‘about’ page or ‘author profile’ can also perform as well?”

Relevance And Usefulness Versus Credibility

Google’s John Mueller answered the question by pointing out that there are multiple kinds of websites, not just sites that are perceived to be credible versus everyone else. The idea of credibility is one dimension of what a site can be, one quality of a website. Mueller’s answer was a reminder that search (and SEO) is multidimensional.

Google’s John Mueller answered:

“Both are websites, but also, they’re quite different, right? Finding the right tools for your needs, the right platforms for your audience and for your messages – it’s worth looking at more than just a simplification like that. Google aims to provide search results that are relevant & useful for users, and there’s a lot involved in that.

I feel this might fit, perhaps you have seen it before -“

Does Reddit Lack Credibility?

In some contexts, in my opinion, Reddit users lack a lot of credibility. When it comes to recipes, I’ll take the opinions of a recipe blogger or Serious Eats over what a random Redditor “thinks” a recipe should be.

The person asking the question mentioned product reviews as a topic where Reddit lacks credibility, yet ironically that’s a topic where Reddit shines. A person on Reddit who is sharing their hands-on experience using a brand of air fryer or mobile phone is the epitome of what Google is trying to rank for reviews, because it’s the opinion of someone with days, weeks, months, or years of actual experience with a product.

Saying that UGC product reviews are useful doesn’t invalidate professional product reviews. It’s possible that both UGC and professional reviews have value, right? And I think that’s the point John Mueller was trying to get across about not simplifying search to one ranking criterion, one dimension.

This is a dimension of search that the person asking the question overlooked: the hands-on experience of the reviewer. It illustrates what Mueller means when he says that “it’s worth looking at more than just a simplification” of what’s ranking in the search results.

OTOH… Feels Like A Slap In The Face

There are many high-quality sites with original photos, actual reviews, and content based on real experience that are no longer ranking in the search results. I know because I have seen many of these sites that, in my opinion, should be ranking but are not. Googlers have expressed the possibility that a future update will help more quality sites bounce back, and many expert publishers are counting on that.

Nevertheless, it surely feels like a slap in the face for an expert author to see an anonymous Redditor outranking them in Google’s search results.

Multidimensional Approach To SEO

A common issue I see in how some digital marketers and bloggers debug the search engine results pages (SERPs) is that they see it through one, two, or three dimensions such as:

  • Keywords
  • Expertise
  • Credibility
  • Links

Reviewing the SERPs to understand why Google is ranking something is a good idea. But reviewing it with just a handful of dimensions, a limited number of “signals,” can be frustrating and counterproductive.

It was only a few years ago that SEOs convinced themselves that “author signals” were a critical part of ranking and now almost everyone (finally) understands that this was all a misinterpretation of what Google and Googlers said (despite the Googlers consistently denying that authorship was a ranking signal).

The “authorship” SEO trend is an example of a one dimensional approach to SEO that overlooked the multidimensional quality of how Google ranks web pages.

There are thousands of contexts that contribute to what is ranked, like solving a problem from the user perspective, interpreting user needs, adapting to cultural and language nuances, nationwide trends, local trends, and so on. There are also ranking contexts (dimensions) that are related to Google’s Core Topicality Systems which are used to understand search queries and web pages.

Ranking web pages, from Google’s perspective, is a multidimensional problem. What that means is that reducing a search ranking problem to one dimension, like the anonymity of User Generated Content, inevitably leads to frustration. Broadening the perspective leads to better SEO.

Read the discussion on LinkedIn:

Can I conclude that a blog without any ‘about’ page or ‘author profile’ can also perform as well?

Featured Image by Shutterstock/Master1305

How To Spot SEO Myths: 26 Common SEO Myths, Debunked

SEO is a complex, vast, and sometimes mysterious practice. There are a lot of aspects to SEO that can lead to confusion.

Not everyone will agree on what SEO entails – where technical SEO stops and development begins, for example.

What also doesn’t help is the vast amount of misinformation that goes around. There are a lot of “experts” online and not all of them should bear that self-proclaimed title. How do you know who to trust?

Even Google employees can sometimes add to the confusion. They struggle to define their own updates and systems and sometimes offer advice that conflicts with previously given statements.

The Dangers Of SEO Myths

The issue is that we simply don’t know exactly how the search engines work. Due to this, much of what we do as SEO professionals is trial and error and educated guesswork.

When you are learning about SEO, it can be difficult to test out all the claims you hear.

That’s when the SEO myths begin to take hold. Before you know it, you’re proudly telling your line manager that you’re planning to “AI Overview optimize” your website copy.

A lot of the time, SEO myths can be busted with a pause and some consideration.

How, exactly, would Google be able to measure that? Would that actually benefit the end user in any way?

There is a danger in SEO of considering the search engines to be omniscient, and because of this, wild myths about how they understand and measure our websites start to grow.

What Is An SEO Myth?

Before we debunk some common SEO myths, we should first understand what forms they take.

Untested Wisdom

Myths in SEO tend to take the form of handed-down wisdom that isn’t tested.

As a result, something that might well have no impact on driving qualified organic traffic to a site gets treated like it matters.

Minor Factors Blown Out Of Proportion

SEO myths might also be something that has a small impact on organic rankings or conversion but is given too much importance.

This might be a “tick box” exercise that is hailed as being a critical factor in SEO success, or simply an activity that might only cause your site to eke ahead if everything else with your competition was truly equal.

Outdated Advice

Myths can arise simply because something that used to be effective in helping sites rank and convert well no longer works but is still being advised.

Over time, the algorithms have grown smarter. The public is more averse to being marketed to.

Simply put, what was once good advice is now defunct.

Google Being Misunderstood

Many times, the start of a myth is Google itself.

Unfortunately, a slightly obscure or less-than-straightforward piece of advice from a Google representative gets misunderstood and taken too far.

Before we know it, a new optimization service is being sold off the back of a flippant comment a Googler made in jest.

SEO myths can be based on fact – or perhaps, more accurately, these are SEO legends.

In the case of Google-born myths, it tends to be that the fact has been so distorted by the SEO industry’s interpretation of the statement that it no longer resembles useful information.

26 Common SEO Myths

So, now that we know what causes and perpetuates SEO myths, let’s find out the truth behind some of the more common ones.

1. The Google Sandbox And Honeymoon Effects

Some SEO professionals believe that Google will automatically suppress new websites in the organic search results for a period of time before they are able to rank more freely.

Others suggest there is a sort of Honeymoon Period, during which Google will rank new content highly to test what users think of it.

The content would be promoted to ensure more users see it. Signals like click-through rate and bounces back to the search engine results pages (SERPs) would then be used to measure if the content is well received and deserves to remain ranked highly.

There is, however, the Google Privacy Sandbox, which is designed to help maintain people’s privacy online. This is a different sandbox from the one that allegedly suppresses new websites.

When asked specifically about the Honeymoon Effect and the rankings Sandbox, John Mueller answered:

“In the SEO world, this is sometimes called kind of like a sandbox where Google is like keeping things back to prevent new pages from showing up, which is not the case.

Or some people call it like the honeymoon period where new content comes out and Google really loves it and tries to promote it.

And it’s again not the case that we’re explicitly trying to promote new content or demote new content.

It’s just, we don’t know and we have to make assumptions.

And then sometimes those assumptions are right and nothing really changes over time.

Sometimes things settle down a little bit lower, sometimes a little bit higher.”

So, there is no systematic promotion or demotion of new content by Google, but what you might be noticing is that Google’s assumptions are based on the rest of the website’s rankings.

  • Verdict: Officially? It’s a myth.

2. Duplicate Content Penalty

This is a myth that I hear a lot. The idea is that if you have content on your website that is duplicated elsewhere on the web, Google will penalize you for it.

The key to understanding what is really going on here is knowing the difference between algorithmic suppression and manual action.

A manual action, the situation that can result in webpages being removed from Google’s index, will be actioned by a human at Google.

The website owner will be notified through Google Search Console.

An algorithmic suppression occurs when your page cannot rank well due to it being caught by a filter from an algorithm.

Essentially, having copy that is taken from another webpage might mean you can’t outrank that other page.

The search engines may determine that the original host of the copy is more relevant to the search query than yours.

As there is no benefit to having both in the search results, yours gets suppressed. This is not a penalty. This is the algorithm doing its job.

There are some content-related manual actions, but essentially, copying one or two pages of someone else’s content is not going to trigger them.

It is, however, potentially going to land you in other trouble if you have no legal right to use that content. It also can detract from the value your website brings to the user.

What about content that is duplicated across your own site? Mueller clarifies that duplicate content is not a negative ranking factor. If there are multiple pages with the same content, Google may choose one to be the canonical page, and the others will not be ranked.

  • Verdict: SEO myth.

3. PPC Advertising Helps Rankings

This is a common myth. It’s also quite quick to debunk.

The idea is that Google will favor websites that spend money with it through pay-per-click advertising. This is simply false.

Google’s algorithm for ranking organic search results is completely separate from the one used to determine PPC ad placements.

Running a paid search advertising campaign through Google while carrying out SEO might benefit your site for other reasons, but it won’t directly benefit your ranking.

  • Verdict: SEO myth.

4. Domain Age Is A Ranking Factor

This claim is seated firmly in the “confusing causation and correlation” camp.

Because a website has been around for a long time and is ranking well, age must be a ranking factor.

Google has debunked this myth itself many times.

In July 2019, Mueller replied to a post on Twitter.com (recovered through Wayback Machine) that suggested that domain age was one of “200 signals of ranking,” saying, “No, domain age helps nothing.”

JohnMu tweet: Image from Twitter.com recovered through Wayback Machine, June 2024

The truth behind this myth is that an older website has had more time to do things well.

For instance, a website that has been live and active for 10 years may well have acquired a high volume of relevant backlinks to its key pages.

A website that has been running for less than six months will be unlikely to compete with that.

The older website appears to be ranking better, and the conclusion is that age must be the determining factor.

  • Verdict: SEO myth

5. Tabbed Content Affects Rankings

This idea is one that has roots going back a long way.

The premise is that Google will not assign as much value to content sitting behind a tab or accordion – for example, text that is not viewable on the first load of a page.

Google again debunked this myth in March 2020, but it has been a contentious idea among many SEO professionals for years.

In September 2018, Gary Illyes, Webmaster Trends Analyst at Google, answered a tweet thread about using tabs to display content.

His response:

“AFAIK, nothing’s changed here, Bill: we index the content, and its weight is fully considered for ranking, but it might not get bolded in the snippets. It’s another, more technical question of how that content is surfaced by the site. Indexing does have limitations.”

If the content is visible in the HTML, there is no reason to assume that it is being devalued just because it is not apparent to the user on the first load of the page. This is not an example of cloaking, and Google can easily fetch the content.

As long as there is nothing else stopping the text from being viewed by Google, it should be weighted the same as copy that isn’t in tabs.
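To make this concrete, here is a minimal, hypothetical sketch of tabbed content where both panels ship in the initial HTML. Because the hidden panel’s text is present in the source, crawlers can still fetch and index it; content fetched from the server only after a user clicks would be a different story.

```html
<!-- Hypothetical tabbed layout: both panels are in the HTML on
     first load. The "Reviews" panel is hidden with the hidden
     attribute, not fetched later, so crawlers still see its text. -->
<div class="tabs">
  <button aria-controls="specs">Specs</button>
  <button aria-controls="reviews">Reviews</button>

  <section id="specs">Full specification copy here…</section>
  <section id="reviews" hidden>
    Review copy here – not visible on first load, but present in
    the source and therefore indexable.
  </section>
</div>
```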

Want more clarification on this? Then check out this SEJ article that discusses this subject in detail.

  • Verdict: SEO myth.

6. Google Uses Google Analytics Data In Rankings

This is a common fear among business owners.

They study their Google Analytics reports. They feel their average sitewide bounce rate is too high, or their time on page is too low.

So, they worry that Google will perceive their site to be low quality because of that. They fear they won’t rank well because of it.

The myth is that Google uses the data in your Google Analytics account as part of its ranking algorithm.

It’s a myth that has been around for a long time.

Illyes has again debunked this idea simply with, “We don’t use *anything* from Google analytics [sic] in the ‘algo’.”

Gary Illyes tweet: Image from Twitter.com recovered from Wayback Machine, June 2024

If we think about this logically, using Google Analytics data as a ranking factor would be really hard to police.

For instance, using filters could manipulate data to make it seem like the site was performing in a way that it isn’t really.

What is good performance anyway?

High “time on page” might be good for some long-form content.

Low “time on page” could be understandable for shorter content.

Is either one right or wrong?

Google would also need to understand the intricate ways in which each Google Analytics account had been configured.

Some might be excluding all known bots, and others might not. Some might use custom dimensions and channel groupings, and others haven’t configured anything.

Using this data reliably would be extremely complicated to do. Consider the hundreds of thousands of websites that use other analytics programs.

How would Google treat them?

  • Verdict: SEO myth.

This myth is another case of “correlation, not causation.”

A high sitewide bounce rate might be indicative of a quality problem, or it might not be. Low time on page could mean your site isn’t engaging, or it could mean your content is quickly digestible.

These metrics give you clues as to why you might not be ranking well; they aren’t the cause of it.

7. Google Cares About Domain Authority

PageRank is a link analysis algorithm used by Google to measure the importance of a webpage.

Google used to display a page’s PageRank score – a number up to 10 – on its toolbar. It stopped updating the PageRank displayed in toolbars in 2013.

In 2016, Google confirmed that the PageRank toolbar metric was not going to be used going forward.

In the absence of PageRank, many other third-party authority scores have been developed.

Commonly known ones are:

  • Moz’s Domain Authority and Page Authority scores.
  • Majestic’s Trust Flow and Citation Flow.
  • Ahrefs’ Domain Rating and URL Rating.

Some SEO pros use these scores to determine the “value” of a page.

That calculation can never be an entirely accurate reflection of how a search engine values a page, however.

SEO pros will sometimes refer to the ranking power of a website – often in conjunction with its backlink profile – and this, too, is known as the domain’s authority.

You can see where the confusion lies.

Google representatives have dispelled the notion of a domain authority metric used by them.

John Mueller said in 2022:

“We don’t use domain authority. We generally try to have our metrics as granular as possible, sometimes that’s not so easy, in which case we look at things a bit broader (e.g., we’ve talked about this in regards to some of the older quality updates).”

Tweet by JohnMu: Image from Twitter.com recovered through Wayback Machine, June 2024

  • Verdict: SEO myth.

8. Longer Content Is Better

You will have definitely heard it said before that longer content ranks better.

More words on a page automatically make yours more rank-worthy than your competitor’s. This is “wisdom” that is often shared around SEO forums with little evidence to substantiate it.

There are a lot of studies that have been released over the years that state facts about the top-ranking webpages, such as “on average pages in the top 10 positions in the SERPs have over 1,450 words on them.”

It would be quite easy for someone to take this information in isolation and assume it means that pages need approximately 1,500 words to rank on Page 1. That isn’t what the study is saying, however.

Unfortunately, this is an example of correlation, not necessarily causation.

Just because the top-ranking pages in a particular study happened to have more words on them than the pages ranking 11th and lower does not make word count a ranking factor.

Mueller dispelled this myth yet again in a Google SEO Office Hours in February 2021.

“From our point of view the number of words on a page is not a quality factor, not a ranking factor.”

For more information on how content length can impact SEO, check out Sam Hollingsworth’s article.

  • Verdict: SEO myth.

9. LSI Keywords Will Help You Rank

What exactly are LSI keywords? LSI stands for “latent semantic indexing.”

It is a technique used in information retrieval that allows concepts within the text to be analyzed and relationships between them identified.

Words have nuances dependent on their context. The word “right” has a different connotation when paired with “left” than when it is paired with “wrong.”

Humans can quickly gauge concepts in a text. It is harder for machines to do so.

The ability of machines to understand the context and linking between entities is fundamental to their understanding of concepts.

LSI was a huge step forward for a machine’s ability to understand text. What it isn’t is a matter of simple synonyms.

Unfortunately, the concept of LSI has been reduced by the SEO community to the idea that using words that are similar or thematically linked will boost rankings for words that aren’t expressly mentioned in the text.

It’s simply not true. Google has gone far beyond LSI in its understanding of text with the introduction of BERT, as just one example.

For more about what LSI is and how it does or doesn’t affect rankings, take a look at this article.

  • Verdict: SEO myth.

10. SEO Takes 3 Months

“SEO takes at least three months to have an effect.” This claim helps us get out of sticky conversations with our bosses or clients, and it leaves a lot of wiggle room if you aren’t getting the results you promised.

It is fair to say that there are some changes that will take time for the search engine bots to process.

There is then, of course, some time to see if those changes are having a positive or negative effect. Then more time might be needed to refine and tweak your work.

That doesn’t mean that any activity you carry out in the name of SEO is going to have no effect for three months. Day 90 of your work will not be when the ranking changes kick in. There is a lot more to it than that.

If you are in a very low-competition market, targeting niche terms, you might see ranking changes as soon as Google recrawls your page. A competitive term could take much longer to see changes in rank.

A study by Semrush suggested that, of the 28,000 domains analyzed, only 19% started ranking in the top 10 positions within six months and maintained those rankings for the rest of the 13-month study.

This study indicates that newer pages struggle to rank high.

However, there is more to SEO than ranking in the top 10 of Google.

For instance, a well-positioned Google Business Profile listing with great reviews can pay dividends for a company. Bing, Yandex, and Baidu might make it easier for your brand to conquer the SERPs.

A small tweak to a page title could see an improvement in click-through rates. That could be the same day if the search engine were to recrawl the page quickly.

Although it can take a long time to see first-page rankings in Google, it is naïve of us to reduce SEO success down to just that.

Therefore, “SEO takes 3 months” simply isn’t accurate.

  • Verdict: SEO myth.

11. Bounce Rate Is A Ranking Factor

Bounce rate is the percentage of visits to your website that result in no interactions beyond landing on the page. It is typically measured by a website’s analytics program, such as Google Analytics.

Some SEO professionals have argued that bounce rate is a ranking factor because it is a measure of quality.

Unfortunately, it is not a good measure of quality.

There are many reasons why a visitor might land on a webpage and leave again without interacting further with the site. They may well have read all the information they needed on that page and left the site to call the company and book an appointment.

In that instance, the visitor bouncing has resulted in a lead for the company.

Although a visitor leaving a page having landed on it could be an indicator of poor quality content, it isn’t always. Therefore, it wouldn’t be reliable enough for a search engine to use as a measure of quality.

“Pogo-sticking,” or a visitor clicking on a search result and then returning to the SERPs, would be a more reliable indicator of the quality of the landing page.

It would suggest that the content of the page was not what the user was after, so much so that they have returned to the search results to find another page or re-search.

John Mueller cleared this up (again) during Google Webmaster Central Office Hours in June 2020. He was asked if sending users to a login page would appear to be a “bounce” to Google and damage their rankings:

“So, I think there is a bit of misconception here, that we’re looking at things like the analytics bounce rate when it comes to ranking websites, and that’s definitely not the case.”

Back in July 2018, during another Google Webmaster Central Office Hours, he also said:

“We try not to use signals like that when it comes to search. So that’s something where there are lots of reasons why users might go back and forth, or look at different things in the search results, or stay just briefly on a page and move back again. I think that’s really hard to refine and say, ‘well, we could turn this into a ranking factor.’”

So, why does this keep coming up? Well, for a lot of people, it’s because of this one paragraph in Google’s How Search Works:

“Beyond looking at keywords, our systems also analyze if content is relevant to a query in other ways. We also use aggregated and anonymised interaction data to assess whether Search results are relevant to queries.”

The issue with this is that Google doesn’t specify what this “aggregated and anonymised interaction data” is. This has led to a lot of speculation and, of course, arguments.

My opinion? Until we have some more conclusive studies, or hear something else from Google, we need to keep testing to determine what this interaction data is.

For now, regarding the traditional definition of a bounce, I’m leaning towards “myth.”

In itself, bounce rate (measured through the likes of Google Analytics) is a very noisy, easily manipulated figure. Could something akin to a bounce be a ranking signal? Absolutely, but it will need to be a reliable, repeatable data point that genuinely measures quality.

In the meantime, if your pages are not satisfying user intent, that is definitely something you need to work on – not simply because of bounce rate.

Fundamentally, your pages should encourage users to interact or, if they aren’t that sort of page, at least ensure visitors leave your site with a positive brand association.

  • Verdict: SEO myth.

12. It’s All About Backlinks

Backlinks are important – that’s without much contention within the SEO community. However, exactly how important is still debated.

Some SEO pros will tell you that backlinks are one of the many tactics that will influence rankings, but they are not the most important. Others will tell you it’s the only real game-changer.

What we do know is that the effectiveness of links has changed over time. Back in the wild pre-Jagger days, link-building consisted of adding a link to your website wherever you could.

Forum comments, spun articles, and irrelevant directories were all good sources of links.

It was easy to build effective links. It’s not so easy now.

Google has continued to make changes to its algorithms that reward higher-quality, more relevant links and disregard or penalize “spammy” links.

However, the power of links to affect rankings is still great.

There will be some industries that are so immature in SEO that a site can rank well without investing in link-building, purely through the strength of its content and technical efficiency.

That’s not the case with most industries.

Relevant backlinks will, of course, help with ranking, but they need to go hand-in-hand with other optimizations. Your website still needs to have relevant content, and it must be crawlable.

If you want your visitors to actually do something when they hit your website, it’s definitely not all about backlinks.

Ranking is only one part of getting converting visitors to your site. The content and usability of the site are extremely important in user engagement.

Following the slew of Helpful Content updates and a better understanding of what Google considers E-E-A-T, we know that content quality is extremely important.

Backlinks can definitely help to indicate that a page would be useful to a reader, but there are many other factors that would suggest that, too.

  • Verdict: SEO myth.

13. Keywords In URLs Are Very Important

Cram your URLs full of keywords. It’ll help.

Unfortunately, it’s not quite as powerful as that.

John Mueller has said several times that keywords in a URL are a very minor, lightweight ranking signal.

In a Google SEO Office Hours in 2021, he affirmed again:

“We use the words in a URL as a very, very lightweight factor. And from what I recall, this is primarily something that we would take into account when we haven’t had access to the content yet.

So, if this is the absolute first time we see this URL and we don’t know how to classify its content, then we might use the words in the URL as something to help rank us better.

But as soon as we’ve crawled and indexed the content there, then we have a lot more information.”

If you are looking to rewrite your URLs to include more keywords, you are likely to do more damage than good.

Redirecting URLs en masse should only be done when necessary, as there is always a risk when restructuring a site.

For the sake of adding keywords to a URL? Not worth it.
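If a URL genuinely must change, a single permanent redirect is the right tool. Here is a minimal sketch for an Apache .htaccess file, with hypothetical paths:

```apache
# Hypothetical example: one 301 (permanent) redirect for a URL that
# had to change anyway – not a site-wide rewrite to inject keywords.
Redirect 301 /old-page/ https://www.example.com/new-keyword-page/
```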

  • Verdict: SEO myth.

14. Website Migrations Are All About Redirects

SEO professionals hear this too often. If you are migrating a website, all you need to do is remember to redirect any URLs that are changing.

If only this one were true.

In actuality, website migration is one of the most fraught and complicated procedures in SEO.

A website changing its layout, content management system (CMS), domain, and/or content can all be considered a website migration.

In each of those examples, there are several aspects that could affect how the search engines perceive the quality and relevance of the pages to their targeted keywords.

As a result, there are numerous checks and configurations that need to occur if the site is to maintain its rankings and organic traffic – ensuring tracking hasn’t been lost, maintaining the same content targeting, and making sure the search engine bots can still access the right pages.

All of this needs to be considered when a website is significantly changing.

Redirecting URLs that are changing is a very important part of website migration. It is in no way the only thing to be concerned about.

  • Verdict: SEO myth.

15. Well-Known Websites Will Always Outrank Unknown Websites

It stands to reason that a larger brand will have resources that smaller brands do not. As a result, more can be invested in SEO.

More exciting content pieces can be created, leading to a higher volume of backlinks acquired. The brand name alone can lend more credence to outreach attempts.

The real question is, does Google algorithmically or manually boost big brands because of their fame?

This one is a bit contentious.

Some people say that Google favors big brands. Google says otherwise.

In 2009, Google released an algorithm update named “Vince.” This update had a huge impact on how brands were treated in the SERPs.

Brands that were well-known offline saw ranking increases for broad competitive keywords. It stands to reason that brand awareness can help with discovery through Search.

It’s not necessarily time for smaller brands to throw in the towel.

The Vince update falls very much in line with other Google moves towards valuing authority and quality.

Big brands are often more authoritative on broad-level keywords than smaller contenders.

However, small brands can still win.

Long-tail keyword targeting, niche product lines, and local presence can all make smaller brands more relevant to a search result than established brands.

Yes, the odds are stacked in favor of big brands, but it’s not impossible to outrank them.

  • Verdict: Neither entirely true nor entirely a myth.

16. Your Page Needs To Include ‘Near Me’ To Rank Well For Local SEO

It’s understandable that this myth is still prevalent.

There is still a lot of focus on keyword search volumes in the SEO industry, sometimes at the expense of considering user intent and how the search engines understand it.

When a searcher is looking for something with local intent, i.e., a place or service relevant to a physical location, the search engines will take this into consideration when returning results.

With Google, you will likely see the Google Maps results as well as the standard organic listings.

The Maps results are clearly centered around the location searched. However, so are the standard organic listings when the search query denotes local intent.

So, why do “near me” searches confuse some?

A typical keyword research exercise might yield something like the following:

  • “pizza restaurant manhattan” – 110 searches per month.
  • “pizza restaurants in manhattan” – 110 searches per month.
  • “best pizza restaurant manhattan” – 90 searches per month.
  • “best pizza restaurants in manhattan” – 90 searches per month.
  • “best pizza restaurant in manhattan” – 90 searches per month.
  • “pizza restaurants near me” – 90,500 searches per month.

With search volume like that, you would think [pizza restaurants near me] would be the one to rank for, right?

It is likely, however, that people searching for [pizza restaurant manhattan] are in the Manhattan area or planning to travel there for pizza.

[pizza restaurant near me] has 90,500 searches across the USA. The likelihood is that the vast majority of those searchers are not looking for Manhattan pizzas.

Google knows this and, therefore, will serve pizza restaurant results relevant to the searcher’s location.

Therefore, the “near me” element of the search becomes less about the keyword and more about the intent behind the keyword. Google will just consider it to be the location the searcher is in.

So, do you need to include “near me” in your content to rank for those [near me] searches?

No, you need to be relevant to the location the searcher is in.
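Rather than sprinkling “near me” into copy, you can help search engines tie your pages to a physical location. One common approach is schema.org local business markup; here is a minimal, hypothetical JSON-LD sketch for the pizza example (all business details are invented):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Pizza Co.",
  "servesCuisine": "Pizza",
  "url": "https://www.example.com/",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "New York",
    "addressRegion": "NY",
    "postalCode": "10001",
    "addressCountry": "US"
  }
}
</script>
```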

  • Verdict: SEO myth.

17. Better Content Equals Better Rankings

It’s prevalent in SEO forums and X (formerly Twitter) threads. The common complaint is, “My competitor is ranking above me, but I have amazing content, and theirs is terrible.”

The cry is one of indignation. After all, shouldn’t search engines reward sites for their “amazing” content?

This is both a myth and sometimes a delusion.

The quality of content is a subjective consideration. If it is your own content, it’s harder still to be objective.

Perhaps in Google’s eyes, your content isn’t better than your competitors’ for the search terms you are looking to rank for.

Perhaps you don’t meet searcher intent as well as they do. Maybe you have “over-optimized” your content and reduced its quality.

In some instances, better content will equal better rankings. In others, the technical performance of the site or its lack of local relevance may cause it to rank lower.

Content is one factor within the ranking algorithms.

  • Verdict: SEO myth.

18. You Need To Blog Every Day

This is a frustrating myth because it seems to have spread outside of the SEO industry.

Google loves frequent content. You should add new content or tweak existing content daily for “freshness.”

Where did this idea come from?

Google rolled out an algorithm update in 2011 – the Freshness update – that rewards fresher results in the SERPs for certain queries.

This is because, for some queries, the fresher the results, the better the likelihood of accuracy.

For instance, if you searched for [royal baby] in the UK in 2013, you were served news articles about Prince George. Searching again in 2015, you would have seen pages about Princess Charlotte.

In 2018, you would have seen reports about Prince Louis at the top of the Google SERPs, and in 2019, it would have been baby Archie.

If you were to search [royal baby] in 2021, shortly after the birth of Lilibet, then seeing news articles on Prince George would likely be unhelpful.

In this instance, Google discerns the user’s search intent and decides showing articles related to the newest UK royal baby would be better than showing an article that is arguably more rank-worthy due to authority, etc.

What this algorithm update doesn’t mean is that newer content will always outrank older content. Google decides if the “query deserves freshness” or not.

If it does, then the age of content becomes a more important ranking factor.

This means that if you are creating content purely to make sure it is newer than competitors’ content, you are not necessarily going to benefit.

If the query you are looking to rank for does not deserve freshness – e.g., [who is Prince William’s third child?], a fact that will not change – then the age of content will not play a significant part in rankings.

If you are writing content every day thinking it is keeping your website fresh and, therefore, more rank-worthy, then you are likely wasting time.

It would be better to write well-considered, researched, and useful content pieces less frequently and reserve your resources to make those highly authoritative and shareable.

  • Verdict: SEO myth.

19. You Can Optimize Copy Once & Then It’s Done

The phrase “SEO-optimized copy” is a common one in agency-land.

It’s used as a way to explain the process of creating copy that will be relevant to frequently searched queries.

The trouble with this is that it suggests that once you have written that copy – and ensured it adequately answers searchers’ queries – you can move on.

Unfortunately, over time, how searchers look for content might change. The keywords they use and the type of content they want could alter.

The search engines, too, may change what they feel is the most relevant answer to the query. Perhaps the intent behind the keyword is perceived differently.

The layout of the SERPs might alter, meaning videos are being shown at the top of the search results where previously it was just webpage results.

If you look at a page only once and then don’t continue to update it and evolve it with user needs, then you risk falling behind.

  • Verdict: SEO myth.

20. Google Respects The Declared Canonical URL As The Preferred Version For Search Results

This can be very frustrating. You have several pages that are near duplicates of each other. You know which one is your main page, the one you want to rank – the “canonical.” You tell Google that through the rel=canonical link element.

You’ve chosen it. You’ve identified it in the HTML.

Google ignores your wishes, and another of the duplicate pages ranks in its place.

The idea that Google will take your chosen page and treat it like the canonical out of a set of duplicates isn’t a challenging one.

It makes sense that the website owner would know best which page should be the one that ranks above its cousins. However, Google will sometimes disagree.

There may be instances where another page from the set is chosen by Google as a better candidate to show in the search results.

This could be because that page receives more backlinks from external sites than your chosen page. It could be that it’s included in the sitemap or is linked to from your main navigation.

Essentially, the canonical tag is a signal – one of many that will be taken into consideration when Google chooses which page from a set of duplicates should rank.

If you have conflicting signals on your site, or externally, then your chosen canonical page may be overlooked in favor of another page.

Want to know if Google has selected another URL to be the canonical despite your canonical tag? In Google Search Console, in the Index Coverage report, you might see this: “Duplicate, Google chose different canonical than user.”

Google’s support documents helpfully explain what this means:

“This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Google has indexed the page that we consider canonical rather than this one.”
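For reference, the canonical hint itself is a single link element in the head of each duplicate, pointing at the preferred URL (the URLs here are hypothetical):

```html
<!-- On https://www.example.com/shoes?sort=price and any other
     near-duplicate, declare the preferred URL. Remember: this is
     a hint Google weighs against other signals, not a directive. -->
<link rel="canonical" href="https://www.example.com/shoes" />
```

Aligning your other signals – sitemap entries and internal links that point at the same preferred URL – makes it more likely Google will respect your choice.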

  • Verdict: SEO myth.

21. Google Has 3 Top Ranking Factors

It’s links, content, and RankBrain, right?

This idea that these are the three top ranking factors seems to come from a WebPromo Q&A in 2016 with Andrei Lipattsev, a search quality senior strategist at Google at the time (recovered through Wayback Machine; find this discussion at around the 30-minute mark).

When questioned on the “other two” top ranking factors (the questioner assumed that RankBrain was one), Lipattsev stated that links pointing to a site and content were the other two. He does clarify by saying:

“Third place is a hotly contested issue. I think… It’s a funny one. Take this with a grain of salt. […] And so I guess, if you do that, then you’ll see elements of RankBrain having been involved in here, rewriting this query, applying it like this over here… And so you’d say, ‘I see this two times as often as the other thing, and two times as often as the other thing’. So it’s somewhere in number three.

It’s not like having three links is ‘X’ important, and having five keywords is ‘Y’ important, and RankBrain is some ‘Z’ factor that is also somehow important, and you multiply all of that … That’s not how this works.”

However it started, the concept prevails. A good backlink profile, great copy, and “RankBrain”-type signals are what matter most for rankings, according to many SEO pros.

What we have to take into consideration when reviewing this idea is John Mueller’s response to a question in a 2017 English Google Webmaster Central office-hours hangout.

Mueller is asked if there is a one-size-fits-all approach to the top three ranking signals in Google. His answer is a clear “No.”

He follows that statement with a discussion around the timeliness of searches and how that might require different search results to be shown.

He also mentions that depending on the context of the search, different results may need to be shown, for instance, brand or shopping.

He continues to explain that he doesn’t think that there is one set of ranking factors that can be declared the top three that apply to all search results all the time.

The “How Search Works” documentation clearly states:

“To give you the most useful information, Search algorithms look at many factors and signals, including the words of your query, relevance and usability of pages, expertise of sources, and your location and settings.

The weight applied to each factor varies depending on the nature of your query. For example, the freshness of the content plays a bigger role in answering queries about current news topics than it does about dictionary definitions.”

  • Verdict: Neither entirely true nor entirely a myth.

22. Use The Disavow File To Proactively Maintain A Site’s Link Profile

To disavow or not disavow — this question has popped up a lot over the years since Penguin 4.0.

Some SEO professionals are in favor of adding any link that could be considered spammy to their site’s disavow file. Others are more confident that Google will ignore them anyway and save themselves the trouble.

It’s definitely more nuanced than that.

In a 2019 Webmaster Central Office Hours Hangout, Mueller was asked about the disavow tool and whether we should have confidence that Google is ignoring medium (but not very) spammy links.

His answer indicated that there are two instances where you might want to use a disavow file:

  • In cases where a manual action has been given.
  • And where you might think if someone from the webspam team saw it, they would issue a manual action.

You might not want to add every spammy link to your disavow file. In practice, that could take a long time if you have a very visible site that accrues thousands of these links a month.

There will be some links that are obviously spammy, and their acquisition is not a result of activity on your part.

However, where they are a result of some less-than-awesome link building strategies (buying links, link exchanges, etc.) you may want to proactively disavow them.
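If you do decide to disavow, the file itself is plain text uploaded through Google’s disavow tool: one URL or domain per line, with # marking comments. A minimal sketch with hypothetical domains:

```text
# Hypothetical disavow file
# Disavow one specific spammy page that links to us:
https://spam.example.net/paid-links-page.html

# Disavow every link from an entire domain:
domain:link-farm.example.org
```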

Read Roger Montti’s full breakdown of the 2019 exchange with John Mueller to get a better idea of the context around this discussion.

  • Verdict: Not a myth, but don’t waste your time unnecessarily.

23. Google Values Backlinks From All High Authority Domains

The better the website authority, the bigger the impact it will have on your site’s ability to rank. You will hear that in many SEO pitches, client meetings, and training sessions.

However, that’s not the whole story.

For one, it’s arguable whether Google has a concept of domain authority (see “Google Cares About Domain Authority” above).

And more importantly, there is a lot that goes into Google’s calculation of whether a link will impact a site’s ability to rank highly or not.

Relevancy, contextual clues, nofollow link attributes – none of these should be ignored when chasing a link from a high “domain authority” website.

John Mueller also threw a cat among the pigeons during a live Search Off the Record podcast recorded at BrightonSEO in 2022 when he said:

“And to some extent, links will always be something that we care about because we have to find pages somehow. It’s like how do you find a page on the web without some reference to it? But my guess is over time, it won’t be such a big factor as sometimes it is today. I think already, that’s something that’s been changing quite a bit.”

  • Verdict: Myth.

24. You Cannot Rank A Page Without Lightning-Fast Loading Speed

There are many reasons to make your pages fast: usability, crawlability, and conversion. Arguably, it is important for the health and performance of your website, and that should be enough to make it a priority.

However, is it something that is absolutely key to ranking your website?

As this Google Search Central post from 2010 suggests, it was definitely something that factored into the ranking algorithms. Back when it was published, Google stated:

“While site speed is a new signal, it doesn’t carry as much weight as the relevance of a page. Currently, fewer than 1% of search queries are affected by the site speed signal in our implementation and the signal for site speed only applies for visitors searching in English on Google.com at this point.”

Is it still only affecting such a low percentage of visitors?

In 2021, the Google Page Experience system, which incorporates the Core Web Vitals for which speed is important, rolled out on mobile. It was followed in 2022 with a rollout of the system to desktop.

This was met with a flurry of activity from SEO pros, trying to get ready for the update.

Many perceived it to be something that would make or break their site’s ranking potential. However, over time, Google representatives have downplayed the ranking effect of Core Web Vitals.

More recently, in May 2023, Google introduced Interaction to Next Paint (INP) to the Core Web Vitals to replace First Input Delay (FID).

Google claims that INP helps to deal with some of the limitations found with FID. This change in how a page’s responsiveness is measured shows that Google still cares about accurately measuring user experience.
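If you want to see how real users experience your pages, Google’s open-source web-vitals JavaScript library reports these metrics from the field. A minimal sketch (logging to the console; a real setup would send the values to your analytics):

```typescript
// Minimal field-measurement sketch using the `web-vitals` library
// (https://github.com/GoogleChrome/web-vitals).
import { onCLS, onINP, onLCP } from 'web-vitals';

// Each callback fires with the final metric for the current page.
onLCP((metric) => console.log('LCP:', metric.value, 'ms'));
onCLS((metric) => console.log('CLS:', metric.value));
onINP((metric) => console.log('INP:', metric.value, 'ms'));
```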

From Google’s previous statements and its recent focus on Core Web Vitals, we can see that load speed continues to be a ranking factor.

However, it will not necessarily cause your website to dramatically increase or decrease in rankings.

Google representatives Gary Illyes, Martin Splitt, and John Mueller hypothesized in 2021 during a “Search off the Record” podcast about the weighting of speed as a ranking factor.

Their discussion drew out the thinking around page load speed as a ranking metric and how it would need to be considered a fairly lightweight signal.

They went on to talk about it being more of a tie-breaker, as you can make an empty page lightning-fast, but it will not serve much use for a searcher.

John Mueller reinforced this in 2022 during Google SEO Office Hours when he said:

“Core Web Vitals is definitely a ranking factor. We have that for mobile and desktop now. It is based on what users actually see and not kind of a theoretical test of your pages […] What you don’t tend to see is big ranking changes overall for that.

But rather, you would see changes for queries where we have similar content in the search results. So if someone is searching for your company name, we would not show some random blog, just because it’s a little bit faster, instead of your homepage.

We would show your homepage, even if it’s very slow. On the other hand, if someone is searching for, I don’t know, running shoes, and there are lots of people writing about running shoes, then that’s where the speed aspect does play a bit more of a role.”

With this in mind, can we consider page speed a major ranking factor?

My opinion is no: page speed is definitely one of the ways Google decides which pages should rank above others, but it is not a major one.

  • Verdict: Myth.

25. Crawl Budget Isn’t An Issue

Crawl budget – the idea that every time Googlebot visits your website, there is a limited number of resources it will visit – isn’t a contentious issue. However, how much attention should be paid to it is.

For instance, many SEO professionals will consider crawl budget optimization a central part of any technical SEO roadmap. Others will only consider it if a site reaches a certain size or complexity.

Google is a company with finite resources. It cannot possibly crawl every single page of every site every time its bots visit them. Therefore, some of the sites that get visited might not see all of their pages crawled every time.

Google has helpfully created a guide for owners of large and frequently updated websites to help them understand how to enable their sites to be crawled.

In the guide, Google states:

“If your site does not have a large number of pages that change rapidly, or if your pages seem to be crawled the same day that they are published, you don’t need to read this guide; merely keeping your sitemap up to date and checking your index coverage regularly is adequate.”

Therefore, it would seem that Google is in favor of some sites paying attention to its advice on managing crawl budget, but doesn’t consider it necessary for all.

For some sites, particularly ones that have a complex technical setup and many hundreds of thousands of pages, managing crawl budget is important. For those with a handful of easily crawled pages, it isn’t.
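“Keeping your sitemap up to date,” as Google’s guide puts it, is mostly a matter of accurate lastmod values so crawlers can prioritize pages that have actually changed. A minimal sketch with hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Hypothetical entries; lastmod should reflect real changes. -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/latest-post/</loc>
    <lastmod>2024-06-10</lastmod>
  </url>
</urlset>
```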

  • Verdict: SEO myth.

26. There Is A Right Way To Do SEO

This is probably a myth in many industries, but it seems prevalent in SEO. There is a lot of gatekeeping in SEO social media, forums, and chats.

Unfortunately, it’s not that simple.

We know some core tenets about SEO.

Usually, these are statements from search engine representatives that have been dissected, tested, and ultimately declared true.

The rest is a result of personal and collective trial and error, testing, and experience.

Processes are extremely valuable within SEO business functions, but they have to evolve and be applied appropriately.

Different websites within different industries will respond to changes in ways others would not. Altering a meta title so it is under 60 characters long might help the click-through rate for one page and not for another.

Ultimately, we have to hold any SEO advice we’re given lightly before deciding whether it is right for the website we are working on.

  • Verdict: SEO myth.

When Can Something Appear To Be A Myth?

Sometimes an SEO technique can be written off as a myth by others purely because they have not experienced success from carrying out this activity for their own site.

It is important to remember that every website has its own industry, set of competitors, technology stack, and other factors that make it unique.

Blanket application of techniques to every website and expecting them to have the same outcome is naive.

Someone may not have had success with a technique when they have tried it in their highly competitive vertical.

It doesn’t mean it won’t help someone in a less competitive industry have success.

Causation & Correlation Being Confused

Sometimes, SEO myths arise because of an inappropriate connection between an activity that was carried out and a rise in organic search performance.

If an SEO has seen a benefit from something they did, then it is natural that they would advise others to try the same.

Unfortunately, we’re not always great at separating causation and correlation.

Just because rankings or click-through rates increased around the same time as you implemented a new tactic doesn’t mean it caused the increase. There could be other factors at play.

Soon, an SEO myth will arise from an overeager SEO who wants to share what they incorrectly believe to be a golden ticket.

Steering Clear Of SEO Myths

It can save you from experiencing headaches, lost revenue, and a whole lot of time if you learn to spot SEO myths and act accordingly.

Test

The key to not falling for SEO myths is making sure you can test advice whenever possible.

If you have been given the advice that structuring your page titles a certain way will help your pages rank better for their chosen keywords, then try it with one or two pages first.

This can help you measure whether making a change across many pages will be worth the time before you commit to it.

Is Google Just Testing?

Sometimes, there will be a big uproar in the SEO community because of changes in the way Google displays or orders search results.

These changes are often tested in the wild before they are rolled out to more search results.

Once a big change has been spotted by one or two SEO pros, advice on how to optimize for it begins to spread.

Remember the favicons in the desktop search results? The upset it caused the SEO industry (and Google users in general) was vast.

Suddenly, articles sprang up about the importance of favicons in attracting users to your search results. There was barely time to study whether favicons would impact the click-through rate that much.

Because just like that, Google changed it back.

Before you jump for the latest SEO advice being spread around Twitter as a result of a change by Google, wait to see if it will hold.

It could be that the advice that appears sound now will quickly become a myth if Google rolls back changes.

Featured Image: Search Engine Journal/Paulo Bobita

Google Says These Are Not Good Signals

Google’s Gary Illyes answered a question about authorship, sharing insights into why Google places less trust in signals that are under the direct control of site owners and SEOs, and providing a better understanding of what site owners and SEOs should focus on when optimizing a website.

The question that Illyes answered was in the context of a live interview at a search conference in May 2024. The interview went largely unnoticed but it’s full of great information related to digital marketing and how Google ranks web pages.

Authorship Signals

Someone asked whether Google would bring back authorship signals. Authorship has been a fixation for some SEOs, based on Google’s encouragement that SEOs and site owners review the Search Quality Raters Guidelines to understand what Google aspires to rank. SEOs, however, took the encouragement too literally and started to parse the document for ranking signal ideas instead.

Digital marketers came to see EEAT (Expertise, Experience, Authoritativeness, and Trustworthiness) as a set of actual signals that Google’s algorithms look for, and from there came the idea that authorship signals were important for ranking.

The idea of authorship signals is not far-fetched, because Google at one time created a way for site owners and SEOs to pass along metadata about webpage authorship, but Google eventually abandoned that idea.
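For context, the author metadata SEOs declare today typically takes the form of schema.org structured data. A minimal, hypothetical sketch – and, as Illyes goes on to explain, exactly the kind of self-declared markup that is easy to spoof:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Headline",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://www.example.com/authors/jane-doe"
  }
}
</script>
```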

SEO-Controlled Markup Is Untrustworthy

Google’s Gary Illyes answered the question about authorship signals and very quickly, within the same sentence, shared that, in Google’s experience, SEO-controlled data on the web page (markup) tends to become spammy (implying that it’s untrustworthy).

This is the question as relayed by the interviewer:

“Are Google planning to release some authorship sooner or later, something that goes back to that old authorship?”

Google’s Gary Illyes answered:

“Uhm… I don’t know of such plans and honestly I’m not very excited about anything along those lines, especially not one that is similar to what we had back in 2011 to 2013 because pretty much any markup that SEOs and site owners have access to will be in some form spam.”

Gary next went into greater detail, saying that SEO- and author-controlled markup are not good signals.

Here is how he explained it:

“And generally they are not good signals. That’s why rel-canonical, for example is not a directive but a hint. And that’s why Meta description is not a directive, but something that we might consider and so on.

Having something similar for authorship, I think would be a mistake.”

The concept that SEO-controlled data is not a good signal is important to understand, because many in search marketing believe they can manipulate Google by spoofing authorship signals with fake author profiles, with reviews that pretend to be hands-on, and with metadata (like titles and meta descriptions) that is crafted specifically to rank for keywords.

What About Algorithmically Determined Authorship?

Gary then turned to the idea of algorithmically determined authorship signals, and it may surprise some that he describes those signals as lacking in value. This may come as a blow to SEOs and site owners who have spent significant amounts of time updating their web pages to improve their authorship data.

The idea that “authorship signals” are important for ranking is something some SEOs created all by themselves; it’s not an idea that Google encouraged. In fact, Googlers like John Mueller and SearchLiaison have consistently downplayed the necessity of author profiles for years.

Gary explained about algorithmically determined authorship signals:

“Having something similar for authorship, I think would be a mistake. If it’s algorithmically determined, then perhaps it would be more accurate or could be higher accuracy, but honestly I don’t necessarily see the value in it.”

The interviewer commented about rel-canonicals sometimes being a poor source of information:

“I’ve seen canonical done badly a lot of times myself, so I’m glad to hear that it is only a suggestion rather than a rule.”

Gary’s response to the observation about poor canonicals is interesting because he doesn’t downplay the importance of “suggestions” but implies that some of them are stronger, while still falling short of a directive. A directive is something that Google is obligated to obey, like a noindex meta tag.

Gary explained about rel-canonicals being a strong suggestion:

“I mean it’s it’s a strong suggestion, but still it’s a suggestion.”

Gary affirmed that even though rel=canonical is a suggestion, it’s a strong one. That implies a relative scale of how much Google trusts certain inputs from publishers. In the case of a canonical, Google’s stronger trust is probably a reflection of the fact that it’s in a publisher’s best interest to get it right, whereas other data, like authorship, could be prone to exaggeration or outright deception and is therefore less trustworthy.
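
To make the hint-versus-directive distinction concrete, here’s an illustrative sketch (not Google’s code): it pulls both signals from a page with BeautifulSoup, treating the noindex robots meta tag as binding while the declared canonical is just one input among many.

```python
# Illustrative sketch: noindex is a directive a crawler must obey;
# rel=canonical is a hint it may weigh against other evidence.
from bs4 import BeautifulSoup

html = """
<head>
  <link rel="canonical" href="https://example.com/page"/>
  <meta name="robots" content="noindex, follow"/>
</head>
"""

soup = BeautifulSoup(html, "html.parser")
robots = soup.find("meta", attrs={"name": "robots"})
canonical = soup.find("link", rel="canonical")

# Directive: if noindex is present, the page must stay out of the index.
if robots and "noindex" in robots["content"].lower():
    print("noindex directive found: page must not be indexed")

# Hint: a real system would weigh this against redirects, internal
# links, sitemaps, and duplicate-content analysis before choosing.
if canonical:
    print("canonical hint:", canonical["href"])
```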

What Does It All Mean?

Gary’s comments offer a foundation for deciding what to focus on when optimizing a web page. Gary (and other Googlers) have said multiple times that authorship is not really something Google is looking for. That’s something SEOs invented, not something Google encouraged.

This also provides guidance on not overestimating the importance of metadata that is controlled by a site owner or SEO.

Watch the interview starting at about the two minute mark:

Featured Image by Shutterstock/Asier Romero

Google Search Now Supports Labeling AI Generated Or Manipulated Images via @sejournal, @martinibuster

Google Search Central updated their documentation to reflect support for labeling images that were extended or manipulated with AI. Google also quietly removed the “AI generated” metadata from Beta status, indicating that the “AI Generated” label is now fully supported in search.

IPTC Photo Metadata

The International Press Telecommunications Council (IPTC) is a standards making body that among other things creates standards for photo metadata. Photo metadata enables a photograph to be labeled with information about the photo, like information about copyright, licensing and image descriptions.

Although the standard is curated by an international press standards organization, the metadata standards are used by Google Images in contexts outside of Google News. The metadata allows Google Images to show additional information about the image.

Google’s documentation explains the use case and benefit of the metadata:

“When you specify image metadata, Google Images can show more details about the image, such as who the creator is, how people can use an image, and credit information. For example, providing licensing information can make the image eligible for the Licensable badge, which provides a link to the license and more detail on how someone can use the image.”

AI Image Manipulation Metadata

Google quietly adopted the metadata standards pertaining to images manipulated with the kinds of AI algorithms typically used for image manipulation, like convolutional neural networks (CNNs) and generative adversarial networks (GANs).

There are two forms of AI image manipulation that are covered by the new metadata:

  • Inpainting
  • Outpainting

Inpainting

Inpainting is generally understood as enhancing an image in order to restore or reconstruct it, filling in the missing parts. But inpainting is also any algorithmic manipulation that adds to an image.

Outpainting

Outpainting is the algorithmic process of extending an image beyond the borders of the original photograph, adding more to it than was in the original image.

Google now supports labeling images that were manipulated in both of those ways with a new value of the Digital Source Type metadata property called compositeWithTrainedAlgorithmicMedia.

compositeWithTrainedAlgorithmicMedia

While the new property looks like structured data, it’s not Schema structured data. It’s metadata that’s embedded in a digital image.

This is what was added to Google’s documentation:

“Digital Source Type

compositeWithTrainedAlgorithmicMedia: The image is a composite of trained algorithmic media with some other media, such as with inpainting or outpainting operations.”
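
For publishers who want their images to qualify for the label, here’s a sketch of one way to embed the property, assuming ExifTool is installed. The XMP-iptcExt:DigitalSourceType tag name and the IPTC NewsCodes URI follow the published specs, but verify them against your ExifTool version before relying on this.

```python
# Sketch: embed the IPTC digital source type in an image via ExifTool.
import subprocess

COMPOSITE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/"
    "compositeWithTrainedAlgorithmicMedia"
)

def label_composite_ai(path: str) -> None:
    """Mark an image as a composite with trained algorithmic media."""
    subprocess.run(
        ["exiftool", f"-XMP-iptcExt:DigitalSourceType={COMPOSITE}", path],
        check=True,  # raise if exiftool reports an error
    )

label_composite_ai("hero-image.jpg")  # hypothetical file name
```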

Label For “AI Generated” – algorithmicMedia Metadata

Google also lifted the Beta status of the algorithmicMedia metadata specifications, which means that images that are created with AI can now be labeled as AI Generated if the algorithmicMedia metadata is embedded within an image.

This is the documentation before the change:

“algorithmicMedia: The image was created purely by an algorithm not based on any sampled training data (for example, an image created by software using a mathematical formula).

Beta: Currently, this property is in beta and only available for IPTC photo metadata. Adding this property makes your image eligible for display with an AI-generated label, but you may not see the label in Google Images right away, as we’re still actively developing it.”

The change to the documentation removed the entire second paragraph, eliminating any mention of Beta status. Curiously, this change is not reflected in Google’s changelog.

Google’s Search Central documentation changelog noted:

“Supporting a new IPTC digital source type
What: Added compositeWithTrainedAlgorithmicMedia to the IPTC photo metadata documentation.

Why: Google can now extract the compositeWithTrainedAlgorithmicMedia IPTC NewsCode.”

Read Google’s updated documentation:

Image metadata in Google Images

Featured Image by Shutterstock/Roman Samborskyi

Critical SERP Features Of Google’s Shopping Marketplace via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Google’s launch and pullback of AI Overviews (AIOs) caught the most attention in the SEO scene over the last two months.

However, a change with at least the same significance flew under the radar: Google’s transformation from search engine to marketplace for shopping queries.

Yes, AIOs are impactful: In my initial analysis, I found a negative impact of -8.9% when a page is cited in an AIO compared to ranking at the top of the classic web search results.

I then found that Google pulled 50-66% of AIOs back. However, Google shows a whole slew of SERP features and AI features for ecommerce queries that are at least as impactful as AIOs.

To better understand the key trends for shopping queries, I analyzed 35,305 keywords across categories like fashion, beds, plants, and automotive in the US over the last five months using SEOClarity.

The results:

  • Product listings appear more often in position 1 in June compared to February 2024.
  • SERP features like Discussions & Forums gained visibility and opened a new playground for marketers.
  • SERP features fluctuate in visibility and introduce a lot of noise in SEO metrics.

Google Shopping Marketplace

In Ecommerce Shifts, where I explain Google’s transformation from search engine to ecommerce marketplace, I summarized how Google has merged the web results and the Shopping tab for shopping searches in response to Amazon’s long-standing dominance:

  • Google has fully transitioned into a shopping marketplace by adding product filters to search result pages and implementing a direct checkout option.
  • These new features create an ecommerce search experience within Google Search and may significantly impact the organic traffic merchants and retailers rely on.
  • Google has quietly introduced a direct checkout feature that allows merchants to link free listings directly to their checkout pages.
  • Google’s move to a shopping marketplace was likely driven by the need to compete with Amazon’s successful advertising business.
  • Google faces the challenge of balancing its role as a search engine with the need to generate revenue through its shopping marketplace, especially considering its dependence on partners for logistics.

To illustrate with an example:

  1. Say you are looking for kayaks (summertime!).
  2. On desktop (logged in), Google will now show you product filters on the left sidebar and product carousels in the middle, on top of classic organic results – and ads, of course.
Google search for kayaks (Image Credit: Kevin Indig)
  3. On mobile, you get product filters at the top, ads above organic results, and product carousels in the form of popular products.
Google search for kayaks on mobile (Image Credit: Kevin Indig)
  4. This experience doesn’t look very different from Amazon, which is the whole point.
Amazon results (Image Credit: Kevin Indig)

Google’s new shopping experience lets users explore products on Amazon, Walmart, eBay, Etsy, & Co.

From an SEO perspective, the prominent position of the product grid (listings) and filters likely has a significant impact on CTR, organic traffic, and, ultimately, revenue.

Product Listings Appear More Often In Position 1

In my analysis, 30,172 out of 35,305 keywords (85.6%) show product listings, which are the free product carousels. That makes product listings the most visible SERP feature in shopping search.

In February, product listings showed up for 39% of queries in position 1 and 15% of queries in position 3.

In July, that number shifted to 43% for position 1 and 13.6% for position 3. Google moved product listings higher up the SERPs.

Google product listings by position (Image Credit: Kevin Indig)

The shift from web links to product images makes product listings a cornerstone feature in Google’s transformation. The increased visibility means Google doubles down on the new model.

Discussions & Forums Gain Visibility

After product listings (85.6% of queries), image carousels (61.8% of queries) are the most common SERP features.

SERP features by occurrence (Image Credit: Kevin Indig)

Image carousels are highly impactful because shopping is a visual act. Seeing the right product can very quickly trigger a purchase, as opposed to customers being stuck in the Messy Middle for longer.

Retailers and ecommerce brands put a lot of effort into high-quality product pictures, and they need to spend equal time optimizing those images for Google Search, even though organic traffic from image results is usually much lower than from web rankings.

Google now tests “generate image with AI,” a feature that lets users generate product images with prompts and then see similar (real) products.

It’s a powerful application of AI that, again, flies under the AIO radar but could also be impactful by making it easier for users to find things they want.

Image Credit: Kevin Indig

Visibility for most SERP features remained relatively unchanged between February and July, with one exception: Discussions & Forums grew from 28.7% to 34% of all queries (+5.3 percentage points).

SERP features, February vs. June 2024 (Image Credit: Kevin Indig)

The change in Discussions & Forums SERP features is in line with Reddit’s unprecedented SEO visibility gain over the last 12 months. The domain now operates at the traffic level of Facebook and Amazon.

Google’s Discussions & Forums feature highlights threads in forums like Reddit, Quora, and others. People visit forums when they are looking for authentic, unincentivized opinions from other consumers. Many review articles are biased, and it seems consumers know it.

As a result, Google compensates for lower review quality with more user-generated content from forums. In Free Content, I referenced a study from Germany titled “Is Google getting worse?” that found:

  • “An overall downward trend in text quality in all three search engines.”
  • “Higher-ranked pages are on average more optimized, more monetized with affiliate marketing, and they show signs of lower text quality.”

Discussions & Forums show that high visibility doesn’t equal high impact for SERP features.

SERP Features And Their Impact Fluctuate

SERP features are commonly assumed to show up at a stable rate in Search, but Google constantly tests them.

As a result, SERP features that impact click-through rates can introduce a lot of noise into common SEO data (CTR, clicks, even revenue).

At the same time, Google switching some features on and off can help SEO pros understand the impact of SERP features on SEO metrics.

A good example is the Things To Know feature (TTK), which answers two common questions about a product with links to websites.

Things To Know feature (Image Credit: Kevin Indig)

After months of stable visibility, Google suddenly reduced the number of TTKs by 37.5% for a month before bringing it back to previous levels.

Sites that were linked in TTK might have seen less organic traffic during that month. Since TTK isn’t reported in Search Console, those sites might wonder why their organic traffic dropped even though ranks might be stable.

Things To Know SERP feature (Image Credit: Kevin Indig)

Coming back to the kayak example from earlier, Google tests variations like deals and carousel segments (“Kayaks For Beginners”).

Kayaks For Beginners carousel (Image Credit: Kevin Indig)

You can imagine how hard this makes getting stable data and why it’s so critical to monitor SERP features.
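
One simple way to monitor them is to compute each feature’s share of tracked keywords per month and watch for sudden drops or spikes. A minimal sketch, assuming a hypothetical rank-tracker CSV export with month, keyword, and feature columns:

```python
# Sketch: share of tracked keywords showing each SERP feature per month.
import csv
from collections import defaultdict

feature_kws = defaultdict(set)   # (month, feature) -> keywords showing it
all_kws = defaultdict(set)       # month -> every keyword tracked

with open("serp_features.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        all_kws[row["month"]].add(row["keyword"])
        feature_kws[(row["month"], row["feature"])].add(row["keyword"])

for (month, feature), kws in sorted(feature_kws.items()):
    share = 100 * len(kws) / len(all_kws[month])
    print(f"{month} {feature}: {share:.1f}% of tracked keywords")
```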


Featured Image: Paulo Bobita/Search Engine Journal

Google Struggles To Boost Search Traffic On Its iPhone Apps via @sejournal, @MattGSouthern

According to a report by The Information, Google is working to reduce its reliance on Apple’s Safari browser, but progress has been slower than anticipated.

As Google awaits a ruling on the U.S. Department of Justice’s antitrust lawsuit, its arrangement with Apple is threatened.

The current agreement, which makes Google the default search engine on Safari for iPhones, could be in jeopardy if the judge rules against Google.

To mitigate this risk, Google encourages iPhone users to switch to its Google Search or Chrome apps for browsing. However, these efforts have yielded limited success.

Modest Gains In App Adoption

Over the past five years, Google has increased the percentage of iPhone searches conducted through its apps from 25% to the low 30s.

While this represents progress, it falls short of Google’s internal target of 50% by 2030.

The company has employed various marketing strategies, including campaigns showcasing features like Lens image search and improvements to the Discover feed.

Despite these efforts, Safari’s preinstalled status on iPhones remains an obstacle.

Financial Stakes & Market Dynamics

The financial implications of this struggle are considerable for both Google and Apple.

In 2023, Google reportedly paid over $20 billion to Apple to maintain its status as the default search engine on Safari.

By shifting more users to its apps, Google aims to reduce these payments and gain leverage in future negotiations.

Antitrust Lawsuit & Potential Consequences

The ongoing antitrust lawsuit threatens Google’s business model.

If Google loses the case, it could potentially lose access to approximately 70% of searches conducted on iPhones, which account for about half of the smartphones in the U.S.

This outcome could impact Google’s mobile search advertising revenue, which exceeded $207 billion in 2023.

New Initiatives & Leadership

To address these challenges, Google has brought in new talent, including former Instagram and Yahoo product executive Robby Stein.

Stein is now tasked with leading efforts to shift iPhone users to Google’s mobile apps, exploring ways to make the apps more compelling, including the potential use of generative AI.

Looking Ahead

With the antitrust ruling on the horizon, Google’s ability to attract users to its apps will determine whether it maintains its search market share.

We’ll be watching closely to see how Google navigates these challenges and if it can reduce its reliance on Safari.


Featured Image: photosince/shutterstock

Technical SEO Strategy: Expert Tips To Maximize Website Performance via @sejournal, @lorenbaker

Wondering why your carefully crafted content isn’t climbing the search rankings? 

You might be overlooking a crucial piece of the puzzle: technical SEO. 

It’s easy to get lost in content optimization and on-page SEO, but the real game-changer lies behind the scenes. 

Technical SEO is the backbone of your website’s performance, ensuring that search engines can find, crawl, and index your pages effectively.

So if your site’s technical foundation hasn’t been a top priority, you could be missing out on major ranking opportunities. 

But it’s never too late to pivot – if you’re ready to start maximizing your web performance and outranking your competition, our upcoming webinar is one you won’t want to miss.  

Join us live on July 17, as we lay out an actionable framework for auditing and improving your technical SEO across four key pillars:

  1. Discoverability is all about how easily search engines can find your website and its pages.
  2. Crawlability ensures that search engine bots can navigate and access your site without any issues (see the quick-check sketch after this list).
  3. Indexability means your pages can be stored in the search engine’s database and shown in search results.
  4. User Experience (UX) focuses on making sure your site is easy for visitors to navigate and enjoyable to use.
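
As a quick illustration of the crawlability and indexability pillars, here is a minimal sketch (my own, not from the webinar) that checks whether a URL is blocked by robots.txt and whether its HTML carries a noindex robots meta tag:

```python
# Quick crawlability/indexability sanity check for a single URL.
import urllib.robotparser
from urllib.parse import urljoin, urlparse
import requests

def quick_check(url: str, user_agent: str = "Googlebot") -> None:
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))

    # Crawlability: is the bot allowed to fetch this URL at all?
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    rp.read()
    print("crawlable:", rp.can_fetch(user_agent, url))

    # Indexability: a crude string check for illustration only; a real
    # audit would parse the HTML and also check X-Robots-Tag headers.
    html = requests.get(url, timeout=10).text.lower()
    print("noindex present:", 'name="robots"' in html and "noindex" in html)

quick_check("https://www.example.com/some-page")  # hypothetical URL
```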

Our presenters, Steven van Vessum, Director of Organic Marketing at Conductor, and Alexandra Dristas, Principal Solutions Consultant at Conductor, will explore ways you can implement core technical SEO best practices.

You’ll also learn which to prioritize based on impact, as well as how to maintain these improvements moving forward.

In this webinar, we’ll cover the following topics: 

  • Optimizing for Discoverability: Learn how creating a clear sitemap and well-organized site architecture helps search engines find and index your pages efficiently.
  • Improving Crawl Budget: Ensure search engine bots focus on valuable pages rather than getting stuck in loops or wasting resources on low-priority content. 
  • Leveraging Schema and Headings: How using Schema markup and optimizing your heading structure can help improve indexability in search results.
  • Core Web Vitals and Accessibility: Discover best practices to provide a seamless and satisfying experience for all visitors.
  • Monitoring Technical SEO: Learn the top tools and processes to continuously identify and fix technical issues, maintaining optimal site performance.

Don’t miss this opportunity to elevate your technical SEO strategy and boost your search visibility. 

Plus, if you stick around after the presentation, Steven and Alexandra will be answering questions live in our Q&A session. 

Sign up now and get the expert insights you need to rank higher on SERPs.

The Three Pillars Of SEO: Authority, Relevance, And Experience via @sejournal, @marktraphagen

If there’s one thing we SEO pros are good at, it’s making things complicated.

That’s not necessarily a criticism.

Search engine algorithms, website coding and navigation, choosing and evaluating KPIs, setting content strategy, and more are highly complex tasks involving lots of specialized knowledge.

But as important as those things all are, at the end of the day, there is really just a small set of things that will make most of the difference in your SEO success.

In SEO, there are really just three things – three pillars – that are foundational to achieving your SEO goals.

  • Authority.
  • Relevance.
  • Experience (of the users and bots visiting the site).

Nutritionists tell us our bodies need protein, carbohydrates, and fats in the right proportions to stay healthy. Neglect any of the three, and your body will soon fall into disrepair.

Similarly, a healthy SEO program involves a balanced application of authority, relevance, and experience.

Authority: Do You Matter?

In SEO, authority refers to the importance or weight given to a page relative to other pages that are potential results for a given search query.

Modern search engines such as Google use many factors (or signals) when evaluating the authority of a webpage.

Why does Google care about assessing the authority of a page?

For most queries, there are thousands or even millions of pages available that could be ranked.

Google wants to prioritize the ones that are most likely to satisfy the user with accurate, reliable information that fully answers the intent of the query.

Google cares about serving users the most authoritative pages for their queries because users that are satisfied by the pages they click through to from Google are more likely to use Google again, and thus get more exposure to Google’s ads, the primary source of its revenue.

Authority Came First

Assessing the authority of webpages was the first fundamental problem search engines had to solve.

Some of the earliest search engines relied on human evaluators, but as the World Wide Web exploded, that quickly became impossible to scale.

Google overtook all its rivals because its creators, Larry Page and Sergey Brin, developed the idea of PageRank, using links from other pages on the web as weighted citations to assess the authoritativeness of a page.

Page and Brin realized that links were an already-existing system of constantly evolving polling, in which other authoritative sites “voted” for pages they saw as reliable and relevant to their users.

Search engines use links much like we might treat scholarly citations; the more relevant scholarly papers cite a source document, the more important that document is considered to be.

The relative authority and trustworthiness of each of the citing sources come into play as well.

So, of our three fundamental categories, authority came first because it was the easiest to crack, given the ubiquity of hyperlinks on the web.

The other two, relevance and user experience, would be tackled later, as machine learning/AI-driven algorithms developed.

Links Still Primary For Authority

The big innovation that made Google the dominant search engine in a short period was that it used an analysis of links on the web as a ranking factor.

This started with a paper by Larry Page and Sergey Brin called The Anatomy of a Large-Scale Hypertextual Web Search Engine.

The essential insight behind this paper was that the web is built on the notion of documents inter-connected with each other via links.

Since putting a link on your site to a third-party site might cause a user to leave your site, there was little incentive for a publisher to link to another site unless it was really good and of great value to their site’s users.

In other words, linking to a third-party site acts a bit like a “vote” for it, and each vote could be considered an endorsement of the linked page as one of the best resources on the web for a given topic.

Then, in principle, the more votes you get, the better and the more authoritative a search engine would consider you to be, and you should, therefore, rank higher.

Passing PageRank

A significant piece of the initial Google algorithm was based on the concept of PageRank, a system for evaluating which pages are the most important based on scoring the links they receive.

So, a page that has large quantities of valuable links pointing to it will have a higher PageRank and will, in principle, be likely to rank higher in the search results than other pages without as high a PageRank score.

When a page links to another page, it passes a portion of its PageRank to the page it links to.

Thus, pages accumulate more PageRank based on the number and quality of links they receive.
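
Here’s a toy sketch of that classic PageRank iteration on a made-up four-page web. It illustrates the vote-passing idea only; Google’s production systems have long since moved beyond the original formulation.

```python
# Toy PageRank: each page splits its score across its outbound links,
# with a standard damping factor of 0.85.
DAMPING, ITERATIONS = 0.85, 50

links = {          # page -> pages it links to (made-up data)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(ITERATIONS):
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)   # PageRank passed per link
        for target in outlinks:
            new[target] += DAMPING * share
    rank = new

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 4))   # "c" wins: it has the most inbound votes
```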


Not All Links Are Created Equal

So, more votes are better, right?

Well, that’s true in theory, but it’s a lot more complicated than that.

PageRank scores range from a base value of one to values that likely exceed trillions.

Higher PageRank pages can have a lot more PageRank to pass than lower PageRank pages. In fact, a link from one page can easily be worth more than one million times a link from another page.


But the PageRank of the source page of a link is not the only factor in play.

Google also looks at the topic of the linking page and the anchor text of the link, but those have to do with relevance and will be referenced in the next section.

It’s important to note that Google’s algorithms have evolved a long way from the original PageRank thesis.

The way that links are evaluated has changed in significant ways – some of which we know, and some of which we don’t.

What About Trust?

You may hear many people talk about the role of trust in search rankings and in evaluating link quality.

For the record, Google says it doesn’t have a concept of trust it applies to links (or ranking), so you should take those discussions with many grains of salt.

These discussions began because of a Yahoo patent on the concept of TrustRank.

The idea was that if you started with a seed set of hand-picked, highly trusted sites and then counted the number of clicks it took you to go from those sites to yours, the fewer clicks, the more trusted your site was.

Google has long said it doesn’t use this type of metric.

However, in 2013, Google was granted a patent related to evaluating the trustworthiness of links. We should note, though, that the existence of a granted patent does not mean it’s used in practice.

For your own purposes, however, if you want to assess a site’s trustworthiness as a link source, using the concept of trusted links is not a bad idea.

If a site does any of the following, then it probably isn’t a good source for a link:

  • Sells links to others.
  • Has less-than-great content.
  • Otherwise doesn’t appear reputable.

Google may not be calculating trust the way you do in your analysis, but chances are good that some other aspect of its system will devalue that link anyway.

Fundamentals Of Earning & Attracting Links

Now that you know that obtaining links to your site is critical to SEO success, it’s time to start putting together a plan to get some.

The key to success is understanding that Google wants this entire process to be holistic.

Google actively discourages, and in some cases punishes, schemes to get links in an artificial way. This means certain practices are seen as bad, such as:

  • Buying links for SEO purposes.
  • Going to forums and blogs and adding comments with links back to your site.
  • Hacking people’s sites and injecting links into their content.
  • Distributing poor-quality infographics or widgets that include links back to your pages.
  • Offering discount codes or affiliate programs as a way to get links.
  • And many other schemes where the resulting links are artificial in nature.

What Google really wants is for you to make a fantastic website and promote it effectively, with the result that you earn or attract links.

So, how do you do that?

Who Links?

The first key insight is understanding who it is that might link to the content you create.

Here is a chart that profiles the major groups of people in any given market space (based on research by the University of Oklahoma):

(Chart: the technology adoption curve – innovators, early adopters, early majority, late majority, laggards)

Who do you think are the people that might actually place links?

It’s certainly not the laggards, and it’s also not the early or late majority.

It’s the innovators and early adopters. These are the people who write on media sites or have blogs and might add links to your site.

There are also other sources of links, such as locally oriented sites like the local chamber of commerce or local newspapers.

You might also find some opportunities with colleges and universities if they have pages that relate to some of the things you’re doing in your market space.

Relevance: Will Users Swipe Right On Your Page?

You have to be relevant to a given topic.

Think of every visit to a page as an encounter on a dating app. Will users “swipe right” (thinking, “This looks like a good match!”)?

If you have a page about Tupperware, it doesn’t matter how many links you get – you’ll never rank for queries related to used cars.

This defines a limitation on the power of links as a ranking factor, and it shows how relevance also impacts the value of a link.

Consider a page on a site that is selling a used Ford Mustang. Imagine that it gets a link from Car and Driver magazine. That link is highly relevant.

Also, think of this intuitively. Is it likely that Car and Driver magazine has some expertise related to Ford Mustangs? Of course it does.

In contrast, imagine a link to that Ford Mustang from a site that usually writes about sports. Is the link still helpful?

Probably, but not as helpful because there is less evidence to Google that the sports site has a lot of knowledge about used Ford Mustangs.

In short, the relevance of the linking page and the linking site impacts how valuable a link might be considered.

What are some ways that Google evaluates relevance?

The Role Of Anchor Text

Anchor text is another aspect of links that matters to Google.


The anchor text helps Google confirm what the content on the page receiving the link is about.

For example, if the anchor text is the phrase “iron bathtubs” and the page has content on that topic, the anchor text, plus the link, acts as further confirmation that the page is about that topic.

Thus, links factor into both the page’s relevance and its authority.

Be careful, though: you don’t want to aggressively obtain links to your page that all use your main keyphrase as the anchor text.

Google also looks for signs that you are manually manipulating links for SEO purposes.

One of the simplest indicators is if your anchor text looks manually manipulated.

Internal Linking

There is growing evidence that Google uses internal linking to evaluate how relevant a site is to a topic.

Properly structured internal links connecting related content are a way of showing Google that you have the topic well-covered, with pages about many different aspects.

By the way, anchor text is as important when creating internal links as it is for external, inbound links.

Your overall site structure is related to internal linking.

Think strategically about where your pages fall in your site hierarchy. If it makes sense for users, it will probably be useful to search engines.
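
As a starting point for auditing internal links and their anchor text, here’s a minimal single-page sketch using requests and BeautifulSoup; a full audit would crawl the whole site and map the hierarchy.

```python
# Sketch: list internal links and anchor text found on one page.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def internal_links(page_url: str):
    site = urlparse(page_url).netloc
    html = requests.get(page_url, timeout=10).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        href = urljoin(page_url, a["href"])
        if urlparse(href).netloc == site:
            # Empty or generic anchors ("click here") are worth fixing:
            # anchor text tells search engines what the target is about.
            yield href, a.get_text(strip=True)

for href, anchor in internal_links("https://www.example.com/"):  # hypothetical
    print(anchor or "(no anchor text)", "->", href)
```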

The Content Itself

Of course, the most important indicator of the relevance of a page has to be the content on that page.

Most SEO professionals know that assessing content’s relevance to a query has become way more sophisticated than merely having the keywords a user is searching for.

Due to advances in natural language processing and machine learning, search engines like Google have vastly increased their competence in being able to assess the content on a page.

What are some things Google likely looks for in determining what queries a page should be relevant for?

  • Keywords: While the days of keyword stuffing as an effective SEO tactic are (thankfully) way behind us, having certain words on a page still matters. My company has numerous case studies showing that merely adding key terms that are common among top-ranking pages for a topic is often enough to increase organic traffic to a page (a naive version of that kind of gap analysis is sketched after this list).
  • Depth: The top-ranking pages for a topic usually cover the topic at the right depth. That is, they have enough content to satisfy searchers’ queries and/or are linked to/from pages that help flesh out the topic.
  • Structure: Structural elements like H1, H2, and H3, bolded topic headings, and schema-structured data may help Google better understand a page’s relevance and coverage.
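
Here’s a naive bag-of-words sketch of that kind of gap analysis. It’s only an illustration of the idea, with made-up page text, and not a claim about how Google scores content.

```python
# Sketch: terms common across top-ranking pages but missing from yours.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def terms(text: str) -> set:
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS and len(w) > 2}

competitors = [                                  # made-up example text
    "cast iron bathtubs resist chips and hold heat well",
    "enameled cast iron bathtubs: weight, heat retention, and cost",
    "why cast iron bathtubs keep water hot longer",
]
my_page = "our bathtubs come in many colors and sizes"

counts = Counter()
for doc in competitors:
    counts.update(terms(doc))

# Terms used by at least two competitors but absent from your page.
gap = [t for t, n in counts.most_common() if n >= 2 and t not in terms(my_page)]
print("consider covering:", gap)
```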

What About E-E-A-T?

E-E-A-T is a Google initialism standing for Experience, Expertise, Authoritativeness, and Trustworthiness.

It is the framework of the Search Quality Raters Guidelines, a document used to train Google’s Search Quality Raters.

Search Quality Raters evaluate pages that rank in search for a given topic using defined E-E-A-T criteria to judge how well each page serves the needs of a search user who visits it as an answer to their query.

Those ratings are accumulated in aggregate and used to help tweak the search algorithms. (They are not used to affect the rankings of any individual site or page.)

Of course, Google encourages all site owners to create content that makes a visitor feel that it is authoritative, trustworthy, and written by someone with expertise or experience appropriate to the topic.

The main thing to keep in mind is that the more YMYL (Your Money or Your Life) your site is, the more attention you should pay to E-E-A-T.

YMYL sites are those whose main content addresses things that might have an effect on people’s well-being or finances.

If your site is YMYL, you should go the extra mile in ensuring the accuracy of your content, and displaying that you have qualified experts writing it.

Building A Content Marketing Plan

Last but certainly not least, create a real plan for your content marketing.

Don’t just suddenly start doing a lot of random stuff.

Take the time to study what your competitors are doing so you can invest your content marketing efforts in a way that’s likely to provide a solid ROI.

One approach is to pull their backlink profiles using one of the many tools that can do this.

With this information, you can see what types of links they’ve been getting and, based on that, figure out what links you need to get to beat them.

Take the time to do this exercise and also to map which links are going to which pages on the competitors’ sites, as well as what each of those pages rank for.

Building out this kind of detailed view will help you scope out your plan of attack and give you some understanding of what keywords you might be able to rank for.

It’s well worth the effort!

In addition, study the competitor’s content plans.

Learn what they are doing and carefully consider what you can do that’s different.

Focus on developing a clear differentiation in your content for topics that are in high demand with your potential customers.

This is another investment of time that will be very well spent.

Experience

As we traced above, Google started by focusing on ranking pages by authority, then found ways to assess relevance.

The third evolution of search was evaluating the site and page experience.

This actually has two separate but related aspects: the technical health of the site and the actual user experience.

We say the two are related because a site that is technically sound is going to create a good experience for both human users and the crawling bots that Google uses to explore, understand a site, and add pages to its index, the first step to qualifying for being ranked in search.

In fact, many SEO pros (and I’m among them) prefer to speak of SEO not as Search Engine Optimization but as Search Experience Optimization.

Let’s talk about the human (user) experience first.

User Experience

Google realized that authoritativeness and relevancy, as important as they are, were not the only things users were looking for when searching.

Users also want a good experience on the pages and sites Google sends them to.

What is a “good user experience”? It includes at least the following:

  • The page the searcher lands on is what they would expect to see, given their query. No bait and switch.
  • The content on the landing page is highly relevant to the user’s query.
  • The content is sufficient to answer the intent of the user’s query but also links to other relevant sources and related topics.
  • The page loads quickly, the relevant content is immediately apparent, and page elements settle into place quickly (all aspects of Google’s Core Web Vitals; a sketch for checking them follows this list).
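
Here’s a sketch that pulls field Core Web Vitals data for a URL from the public PageSpeed Insights v5 API. The endpoint is real, but the exact response keys shown are assumptions from past responses, so inspect the live JSON before relying on them.

```python
# Sketch: fetch field (CrUX) Core Web Vitals categories for a URL.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def core_web_vitals(url: str) -> None:
    data = requests.get(
        API, params={"url": url, "strategy": "mobile"}, timeout=60
    ).json()
    metrics = data.get("loadingExperience", {}).get("metrics", {})
    for name, values in metrics.items():
        # category is typically FAST, AVERAGE, or SLOW
        print(name, "->", values.get("category"))

core_web_vitals("https://www.example.com/")  # hypothetical URL
```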

In addition, many of the suggestions above about creating better content also apply to user experience.

Technical Health

In SEO, the technical health of a site is how smoothly and efficiently it can be crawled by Google’s search bots.

Broken connections or even things that slow down a bot’s progress can drastically affect the number of pages Google will index and, therefore, the potential traffic your site can qualify for from organic search.

The practice of maintaining a technically healthy site is known as technical SEO.

The many aspects of technical SEO are beyond the scope of this article, but you can find many excellent guides on the topic, including Search Engine Journal’s Advanced Technical SEO.

In summary, Google wants to rank pages that it can easily find, that satisfy the query, and that make it as easy as possible for the searcher to identify and understand what they were searching for.

What About the Google Leak?

You’ve probably heard by now about the leak of Google documents containing thousands of labeled API calls and many thousands of attributes for those data buckets.

Many assume that these documents reveal the secrets of the Google algorithms for search. But is that a warranted assumption?

No doubt, perusing the documents is interesting and reveals many types of data that Google may store or may have stored in the past. But some significant unknowns about the leak should give us pause.

  • As Google has pointed out, we lack context around these documents and how they were used internally by Google, and we don’t know how out of date they may be.
  • It is a huge leap from “Google may collect and store data point x” to “therefore data point x is a ranking factor.”
  • Even if we assume the document does reveal some things that are used in search, we have no indication of how they are used or how much weight they are given.

Given those caveats, it is my opinion that while the leaked documents are interesting from an academic point of view, they should not be relied upon for actually forming an SEO strategy.

Putting It All Together

Search engines want happy users who will come back to them again and again when they have a question or need.

They create and sustain happiness by providing the best possible results that satisfy that question or need.

To keep their users happy, search engines must be able to understand and measure the relative authority of webpages for the topics they cover.

When you create content that is highly useful (or engaging or entertaining) to visitors – and when those visitors find your content reliable enough that they would willingly return to your site or even seek you out above others – you’ve gained authority.

Search engines work hard to continually improve their ability to match the human quest for trustworthy authority.

As we explained above, that same kind of quality content is key to earning the kinds of links that assure the search engines you should rank highly for relevant searches.

That can be either content on your site that others want to link to or content that other quality, relevant sites want to publish, with appropriate links back to your site.

Focusing on these three pillars of SEO – authority, relevance, and experience – will increase the opportunities for your content and make link-earning easier.

You now have everything you need to know for SEO success, so get to work!

Featured Image: Paulo Bobita/Search Engine Journal

Google Gives Exact Reason Why Negative SEO Doesn’t Work via @sejournal, @martinibuster

Google’s Gary Illyes’ answer to a question about negative SEO provides useful insights into the technical details of how Google prevents low-quality spam links from affecting normal websites.

The answer about negative SEO was given in an interview in May and has gone unnoticed until now.

Negative SEO

Negative SEO is the practice of sabotaging a competitor with an avalanche of low quality links. The idea is that Google will assume that the competitor is spamming and knock them out of the search engine results pages (SERPs).

The practice of negative SEO originated in the online gambling space where the rewards for top ranking are high and the competition is fierce. I first heard of it around the mid-2000s (probably before 2010) when someone involved in the gambling space told me about it.

Virtually all websites that rank for meaningful search queries attract low-quality links; there is nothing unusual about it, and it’s always been this way. The concept of negative SEO became more prominent after the Penguin link spam update caused site owners to become more aware of the state of their inbound links.

Does Negative SEO Cause Harm?

The person interviewing Gary Illyes was taking questions from the audience.

She asked:

“Does negative SEO via spammy link building, a competitor throwing tens of thousands of links at another competitor, does that kind of thing still harm people or has Google kind of pushed that off to the side?”

Google’s Gary Illyes answered the question by first asking the interviewer if she remembered the Penguin update, to which she answered yes.

He then explained his experience reviewing examples of negative SEO that site owners and SEOs had sent him. He said that out of hundreds of cases he reviewed, there was only one that might have actually been negative SEO, and even then the web spam team wasn’t 100% sure.

Gary explained:

“Around the time we released Penguin, there was tons and tons of tons of complaints about negative SEO, specifically link based negative SEO and then very un-smartly, I requested examples like show me examples, like show me how it works and show me that it worked.

And then I got hundreds, literally hundreds of examples of alleged negative SEO and all of them were not negative SEO. It was always something that was so far away from negative SEO that I didn’t even bother looking further, except one that I sent to the web spam team for double checking and that we haven’t made up our mind about it, but it could have been negative SEO.

With this, I want to say that the fear about negative SEO is much bigger than or much larger than it needs to be, we disable insane numbers of links…”

The above is Gary’s experience of negative SEO. Next he explains the exact reason why “negative SEO links” have no effect.

Links From Irrelevant Topics Are Not Counted

At about the 30-minute mark of the interview, Gary confirmed something interesting about how links are evaluated that is important to understand. Google has, for a very long time, examined the context of the site that’s linking out to match it to the site being linked to, and if they don’t match up, Google won’t pass the PageRank signal.

Gary continued his answer:

“If you see links from completely irrelevant sites, be that p–n sites or or pure spam sites or whatever, you can safely assume that we disabled the links from those sites because, one of the things is that we try to match the the topic of the target page plus whoever is linking out, and if they don’t match then why on Earth would we use those links?

Like for example if someone is linking to your flower page from a Canadian casino that sells Viagra without prescription, then why would we trust that link?

I would say that I would not worry about it. Like, find something else to worry about.”

Google Matches Topics From Page To Page

There was a time, in the early days of SEO, when thousands of links from non-matching topics could boost a site to the top of Google’s search results. Some link builders used to offer “free” traffic counter widgets to universities that, when placed in the footer, would contain a link back to their client sites, and those links used to work. But Google tightened up on those kinds of links.

What Gary said about links having to be relevant matches up with what link builders have known for at least twenty years. The concept of off-topic links not being counted by Google was understood way back in the days when people did reciprocal links.
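
To make the topic-matching idea concrete, here’s a toy sketch using TF-IDF cosine similarity (scikit-learn). It only illustrates why a casino page and a flower page score as unrelated; it is not how Google actually evaluates links.

```python
# Toy topic match: compare a target page against two linking pages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

target = "flower delivery roses tulips bouquets for every occasion"
relevant_linker = "gardening guide to growing roses and tulips at home"
spam_linker = "casino bonus codes win big jackpot slots no prescription"

vectors = TfidfVectorizer().fit_transform([target, relevant_linker, spam_linker])
scores = cosine_similarity(vectors[0], vectors[1:])

print("relevant linker similarity:", round(scores[0][0], 3))  # > 0
print("spam linker similarity:", round(scores[0][1], 3))      # 0.0
```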

Although I can’t remember everything every Googler has ever said about negative SEO, this seems to be one of the rare occasions that a Googler offered a detailed reason why negative SEO doesn’t work.

Watch Gary Illyes answer the question at the 26 minute mark:

Featured Image by Shutterstock/MDV Edwards

42 Facebook Statistics & Facts For 2024 via @sejournal, @annabellenyst

Don’t believe what you may have heard; Facebook is still a dominant social media force in 2024.

With over 3 billion active users, it remains a key player for businesses, marketers, and social media enthusiasts.

And despite the rise of newer, shinier platforms, Facebook’s expansive reach and diverse user base are still unrivaled, making it a powerful channel for both personal and business engagement.

In this article, we’ll highlight the latest Facebook statistics and facts, providing a comprehensive overview of its reach, user behavior, and influence.

Facebook Overview

1. Facebook is the world’s most-used social platform in 2024, with over 3 billion global active users.

2. It is the third most-used app globally among mobile users, trailing only WhatsApp and YouTube.

3. Facebook ranks third in terms of time spent (behind TikTok and YouTube), with users spending an average of 19 hours and 47 minutes on the Android app per month.

4. 64.1% of Facebook Android users open the app every day.

5. Facebook is the third most visited website in the US, with an estimated 2.90 billion monthly visits in April 2024.

6. Of its monthly US visitors, roughly 50.07% are mobile users, and 49.93% are using a desktop.

7. Globally, users spend an average of 3 minutes and 42 seconds on Facebook per app session.

8. Facebook is the second most searched query globally, with a search volume of 584.9 million.

9. Facebook is the fourth most downloaded social networking app in the US, behind Threads, WhatsApp, and Telegram.

(Source) (Source) (Source) (Source) (Source) (Source) (Source)

Facebook Company Background

10. Facebook was founded in 2004 by Mark Zuckerberg, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes.

11. The platform was originally launched as ‘TheFacebook’ on February 4, 2004. In August of 2005, it rebranded to Facebook.

12. Mark Zuckerberg is the current CEO of Facebook.

13. Facebook is headquartered in Menlo Park, California.

14. Facebook has 69,329 employees in 2024, a decrease of 10% year-over-year.

(Source) (Source) (Source)

Facebook Financial Performance

15. As of May 2024, Meta, Facebook’s parent company, has a market cap of $562.19 billion.

16. Meta generated $36.46 billion in revenue in Q1 2024, reflecting a 27% increase year-over-year.

17.  The company reported a net income of $12.37 billion in Q1 2024 – a significant 117% uptick from Q1 of 2023.

(Source) (Source)

Facebook User Statistics

18. Facebook had an average of 2.11 billion daily active users (DAUs) in 2023.

19. Facebook has approximately 3.07 billion monthly active users (MAUs).

20. That figure represents 37.7% of the total population and 57% of total internet users.

21. Facebook saw a 3.4% increase in MAUs between April 2023 and April 2024.

22. More than two-thirds of the world’s total internet users visit Facebook monthly.

23. English is the most represented language among Facebook users (53.8%), followed by Spanish (14.9%) and Hindi (8.5%).

24. Approximately seven in 10 US adults report ever using Facebook, second only to YouTube (83%).

25. A third of US teens aged 13-17 use Facebook, a decrease from 71% in 2014-2015.

26. In 2024, 56.8% of Facebook users are male.

(Source) (Source) (Source) (Source) (Source)

Facebook Statistics By Location

27. 1.37 billion of Facebook’s MAUs are based in the Asia Pacific, making it the largest segment of the app’s users.

28. Europe and the US & Canada make up the next largest user groups.

29. Facebook’s global audience size, April 2023:

Country Active Facebook Users
India 369.9 million
US 186.4 million
Indonesia 135.1 million
Brazil 114.2 million
Mexico 93.3 million
The Philippines 91.9 million
Vietnam 75.6 million
Bangladesh 54.2 million
Thailand 51.6 million
Egypt 47.0 million

(Source) (Source)

Facebook Advertising

30. Advertisers can reach 2.24 billion users on Facebook in 2024, representing 41.3% of all internet users and 27.7% of the total population.

31. Among active Facebook users, 53.8% say they use the platform to follow or research brands and products. This ranks the platform second behind Instagram (62.7%) and ahead of TikTok (47.4%).

32. Male users aged 25-34 years old make up the largest portion of Facebook’s advertising audience (18.4%), followed by those aged 18-24 years old (13.5%).

33. Ad impressions on Meta’s Family of Apps (FoA), which includes Facebook, Instagram, WhatsApp, and Messenger, increased by 28% YoY in 2023.

(Source) (Source) (Source)

Facebook User Activities And Engagement

34. Messaging friends and family is the most common activity among active users, with 72.6% doing so regularly.

35. Posting or sharing photos or videos is a common activity for 63.2% of Facebook users.

36. Almost 60% of users leverage Facebook to keep up to date with news and current events.

37. Facebook is the go-to platform for news for three in 10 Americans, making it the most popular social platform for this purpose.

(Source) (Source)

Facebook Content And Engagement

38. Link posts account for 44.5% of Facebook posts.

39. Photo posts follow at 33.4%.

40. Video posts make up 18.9% of content.

41. Photo posts receive an average engagement rate of 0.35%, followed by video posts at 0.23%, and album posts at 0.22%.

(Source)

Most Followed Facebook Pages

42. The top 10 most followed Facebook pages are:

Brand Followers*
1 Facebook App 188 million
2 Cristiano Ronaldo 168 million
3 Samsung 161 million
4 Mr. Bean 140 million
5 5-Minute Crafts 126 million
6 Shakira 124 million
7 Real Madrid C.F. 121 million
7 CGTN 121 million
9 Will Smith 116 million
9 Lionel Messi 116 million

*Facebook followers as of January 2024

(Source)

In Summary

Say what you will about Facebook, but its enduring relevance is undeniable.

With extensive reach, a broad user base, and significant advertising potential, Facebook will remain a cornerstone of any social media strategy in 2024.

By understanding these trends and user behaviors – and leveraging many of the insights covered above – you can maximize the potential of Facebook to drive engagement, awareness, and impact.

Featured Image: Kaspars Grinvalds/Shutterstock