How To Spot SEO Myths: 26 Common SEO Myths, Debunked via @sejournal, @HelenPollitt1

SEO is a complex, vast, and sometimes mysterious practice. There are a lot of aspects to SEO that can lead to confusion.

Not everyone will agree with what SEO entails – where technical SEO stops and development begins.

What also doesn’t help is the vast amount of misinformation that goes around. There are a lot of “experts” online and not all of them should bear that self-proclaimed title. How do you know who to trust?

Even Google employees can sometimes add to the confusion. They struggle to define their own updates and systems and sometimes offer advice that conflicts with previously given statements.

The Dangers Of SEO Myths

The issue is that we simply don’t know exactly how the search engines work. Due to this, much of what we do as SEO professionals is trial and error and educated guesswork.

When you are learning about SEO, it can be difficult to test out all the claims you hear.

That’s when the SEO myths begin to take hold. Before you know it, you’re proudly telling your line manager that you’re planning to “AI Overview optimize” your website copy.

SEO myths can be busted a lot of the time with a pause and some consideration.

How, exactly, would Google be able to measure that? Would that actually benefit the end user in any way?

There is a danger in SEO of treating the search engines as omniscient, and because of this, wild myths about how they understand and measure our websites start to grow.

What Is An SEO Myth?

Before we debunk some common SEO myths, we should first understand what forms they take.

Untested Wisdom

Myths in SEO tend to take the form of handed-down wisdom that isn’t tested.

As a result, something that might well have no impact on driving qualified organic traffic to a site gets treated like it matters.

Minor Factors Blown Out Of Proportion

SEO myths might also be something that has a small impact on organic rankings or conversion but are given too much importance.

This might be a “tick box” exercise that is hailed as a critical factor in SEO success, or simply an activity that might only nudge your site ahead if everything else between you and your competition were truly equal.

Outdated Advice

Myths can arise simply because what used to be effective in helping sites rank and convert well no longer does, yet is still being advised.

Over time, the algorithms have grown smarter, and the public has become more averse to being marketed to.

Simply, what was once good advice is now defunct.

Google Being Misunderstood

Many times, the start of a myth is Google itself.

Unfortunately, a slightly obscure or less-than-straightforward piece of advice from a Google representative gets misunderstood and taken too far.

Before we know it, a new optimization service is being sold off the back of a flippant comment a Googler made in jest.

SEO myths can be based on fact, so perhaps these are, more accurately, SEO legends.

In the case of Google-born myths, it tends to be that the fact has been so distorted by the SEO industry’s interpretation of the statement that it no longer resembles useful information.

26 Common SEO Myths

So, now that we know what causes and perpetuates SEO myths, let’s find out the truth behind some of the more common ones.

1. The Google Sandbox And Honeymoon Effects

Some SEO professionals believe that Google will automatically suppress new websites in the organic search results for a period of time before they are able to rank more freely.

Others suggest there is a sort of Honeymoon Period, during which Google will rank new content highly to test what users think of it.

The content would be promoted to ensure more users see it. Signals like click-through rate and bounces back to the search engine results pages (SERPs) would then be used to measure if the content is well received and deserves to remain ranked highly.

There is, however, the Google Privacy Sandbox, which is designed to help maintain people’s privacy online. This is a different sandbox from the one that allegedly suppresses new websites.

When asked specifically about the Honeymoon Effect and the rankings Sandbox, John Mueller answered:

“In the SEO world, this is sometimes called kind of like a sandbox where Google is like keeping things back to prevent new pages from showing up, which is not the case.

Or some people call it like the honeymoon period where new content comes out and Google really loves it and tries to promote it.

And it’s again not the case that we’re explicitly trying to promote new content or demote new content.

It’s just, we don’t know and we have to make assumptions.

And then sometimes those assumptions are right and nothing really changes over time.

Sometimes things settle down a little bit lower, sometimes a little bit higher.”

So, there is no systematic promotion or demotion of new content by Google, but what you might be noticing is that Google’s assumptions are based on the rest of the website’s rankings.

  • Verdict: Officially? It’s a myth.

2. Duplicate Content Penalty

This is a myth that I hear a lot. The idea is that if you have content on your website that is duplicated elsewhere on the web, Google will penalize you for it.

The key to understanding what is really going on here is knowing the difference between algorithmic suppression and manual action.

A manual action, the situation that can result in webpages being removed from Google’s index, is applied by a human reviewer at Google.

The website owner will be notified through Google Search Console.

An algorithmic suppression occurs when your page cannot rank well due to it being caught by a filter from an algorithm.

Essentially, having copy that is taken from another webpage might mean you can’t outrank that other page.

The search engines may determine that the original host of the copy is more relevant to the search query than yours.

As there is no benefit to having both in the search results, yours gets suppressed. This is not a penalty. This is the algorithm doing its job.

There are some content-related manual actions, but essentially, copying one or two pages of someone else’s content is not going to trigger them.

It is, however, potentially going to land you in other trouble if you have no legal right to use that content. It also can detract from the value your website brings to the user.

What about content that is duplicated across your own site? Mueller clarifies that duplicate content is not a negative ranking factor. If there are multiple pages with the same content, Google may choose one to be the canonical page, and the others will not be ranked.

  • Verdict: SEO myth.

3. PPC Advertising Helps Rankings

This is a common myth. It’s also quite quick to debunk.

The idea is that Google will favor websites that spend money with it through pay-per-click advertising. This is simply false.

Google’s algorithm for ranking organic search results is completely separate from the one used to determine PPC ad placements.

Running a paid search advertising campaign through Google while carrying out SEO might benefit your site for other reasons, but it won’t directly benefit your ranking.

  • Verdict: SEO myth.

4. Domain Age Is A Ranking Factor

This claim is seated firmly in the “confusing causation and correlation” camp.

Because a website has been around for a long time and is ranking well, age must be a ranking factor.

Google has debunked this myth itself many times.

In July 2019, Mueller replied to a post on Twitter.com (recovered through Wayback Machine) that suggested that domain age was one of “200 signals of ranking” saying, “No, domain age helps nothing.”

JohnMu tweet: Image from Twitter.com recovered through Wayback Machine, June 2024

The truth behind this myth is that an older website has had more time to do things well.

For instance, a website that has been live and active for 10 years may well have acquired a high volume of relevant backlinks to its key pages.

A website that has been running for less than six months will be unlikely to compete with that.

The older website appears to be ranking better, and the conclusion is that age must be the determining factor.

  • Verdict: SEO myth.

5. Tabbed Content Affects Rankings

This idea is one that has roots going back a long way.

The premise is that Google will not assign as much value to content sitting behind a tab or accordion, i.e., text that is not viewable on the first load of the page.

Google again debunked this myth in March 2020, but it has been a contentious idea among many SEO professionals for years.

In September 2018, Gary Illyes, Webmaster Trends Analyst at Google, answered a tweet thread about using tabs to display content.

His response:

“AFAIK, nothing’s changed here, Bill: we index the content, and its weight is fully considered for ranking, but it might not get bolded in the snippets. It’s another, more technical question of how that content is surfaced by the site. Indexing does have limitations.”

If the content is visible in the HTML, there is no reason to assume that it is being devalued just because it is not apparent to the user on the first load of the page. This is not an example of cloaking, and Google can easily fetch the content.

As long as there is nothing else stopping the text from being viewed by Google, it should be weighted the same as copy that isn’t in tabs.
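One quick sanity check here is to confirm the tab copy actually appears in the raw HTML the server returns, rather than being injected later or withheld from crawlers. A minimal Python sketch, using a hypothetical page snippet and target copy:

```python
from html import unescape
import re

def copy_in_source(html: str, copy_text: str) -> bool:
    """Check whether a piece of copy is present in the raw HTML a crawler
    would fetch, ignoring tags, entities, and whitespace differences."""
    text = re.sub(r"<[^>]+>", " ", html)        # strip tags
    text = re.sub(r"\s+", " ", unescape(text))  # decode entities, normalize spaces
    return re.sub(r"\s+", " ", copy_text).strip() in text

# Hypothetical page: the tab's copy is hidden with CSS on first load,
# but it is still present in the HTML, so it can be indexed.
page = """
<div class="tab" style="display:none">
  <p>Our returns policy lasts 30&nbsp;days.</p>
</div>
"""
print(copy_in_source(page, "Our returns policy lasts 30 days."))  # True
```

In practice, you would run this against the HTML fetched for the live URL; if the copy only appears after a user interaction triggers a script, it may not be indexed the same way.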

Want more clarification on this? Then check out this SEJ article that discusses this subject in detail.

  • Verdict: SEO myth.

6. Google Uses Google Analytics Data In Rankings

This is a common fear among business owners.

They study their Google Analytics reports. They feel their average sitewide bounce rate is too high, or their time on page is too low.

So, they worry that Google will perceive their site to be low quality because of that. They fear they won’t rank well because of it.

The myth is that Google uses the data in your Google Analytics account as part of its ranking algorithm.

It’s a myth that has been around for a long time.

Illyes has again debunked this idea, simply stating, “We don’t use *anything* from Google analytics [sic] in the ‘algo.’”

Gary Illyes tweet: Image from Twitter.com recovered from Wayback Machine, June 2024

If we think about this logically, using Google Analytics data as a ranking factor would be really hard to police.

For instance, using filters could manipulate data to make it seem like the site was performing in a way that it isn’t really.

What is good performance anyway?

High “time on page” might be good for some long-form content.

Low “time on page” could be understandable for shorter content.

Is either one right or wrong?

Google would also need to understand the intricate ways in which each Google Analytics account had been configured.

Some might be excluding all known bots, and others might not. Some might use custom dimensions and channel groupings, and others haven’t configured anything.

Using this data reliably would be extremely complicated to do. Consider the hundreds of thousands of websites that use other analytics programs.

How would Google treat them?

This myth is another case of confusing correlation with causation.

A high sitewide bounce rate might be indicative of a quality problem, or it might not be. Low time on page could mean your site isn’t engaging, or it could mean your content is quickly digestible.

These metrics give you clues as to why you might not be ranking well; they aren’t the cause of it.

  • Verdict: SEO myth.

7. Google Cares About Domain Authority

PageRank is a link analysis algorithm used by Google to measure the importance of a webpage.

Google used to display a page’s PageRank score, a number up to 10, on its toolbar. It stopped updating the PageRank displayed in toolbars in 2013.

In 2016, Google confirmed that the PageRank toolbar metric was not going to be used going forward.

In the absence of PageRank, many other third-party authority scores have been developed.

Commonly known ones are:

  • Moz’s Domain Authority and Page Authority scores.
  • Majestic’s Trust Flow and Citation Flow.
  • Ahrefs’ Domain Rating and URL Rating.

Some SEO pros use these scores to determine the “value” of a page.

That calculation can never be an entirely accurate reflection of how a search engine values a page, however.

SEO pros will sometimes refer to the ranking power of a website, often in conjunction with its backlink profile, and this, too, is known as the domain’s authority.

You can see where the confusion lies.

Google representatives have dispelled the notion of a domain authority metric used by them.

John Mueller said in 2022:

“We don’t use domain authority. We generally try to have our metrics as granular as possible, sometimes that’s not so easy, in which case we look at things a bit broader (e.g., we’ve talked about this in regards to some of the older quality updates).”

Tweet by JohnMu: Image from Twitter.com recovered through Wayback Machine, June 2024
  • Verdict: SEO myth.

8. Longer Content Is Better

You will have definitely heard it said before that longer content ranks better.

More words on a page automatically make yours more rank-worthy than your competitor’s. This is “wisdom” that is often shared around SEO forums with little evidence to substantiate it.

There are a lot of studies that have been released over the years that state facts about the top-ranking webpages, such as “on average pages in the top 10 positions in the SERPs have over 1,450 words on them.”

It would be quite easy for someone to take this information in isolation and assume it means that pages need approximately 1,500 words to rank on Page 1. That isn’t what the study is saying, however.

Unfortunately, this is an example of correlation, not necessarily causation.

Just because the top-ranking pages in a particular study happened to have more words on them than the pages ranking 11th and lower does not make word count a ranking factor.

Mueller dispelled this myth yet again in a Google SEO Office Hours in February 2021.

“From our point of view the number of words on a page is not a quality factor, not a ranking factor.”

For more information on how content length can impact SEO, check out Sam Hollingsworth’s article.

  • Verdict: SEO myth.

9. LSI Keywords Will Help You Rank

What exactly are LSI keywords? LSI stands for “latent semantic indexing.”

It is a technique used in information retrieval that allows concepts within the text to be analyzed and relationships between them identified.

Words have nuances dependent on their context. The word “right” has a different connotation when paired with “left” than when it is paired with “wrong.”

Humans can quickly gauge concepts in a text. It is harder for machines to do so.

The ability of machines to understand the context and linking between entities is fundamental to their understanding of concepts.

LSI is a huge step forward for a machine’s ability to understand text. What it isn’t is synonyms.

Unfortunately, the concept of LSI has been distorted by the SEO community into the belief that using words that are similar or thematically linked will boost rankings for words that aren’t expressly mentioned in the text.

It’s simply not true. Google has gone far beyond LSI in its understanding of text with the introduction of BERT, as just one example.

For more about what LSI is and how it does or doesn’t affect rankings, take a look at this article.

  • Verdict: SEO myth.

10. SEO Takes 3 Months

“SEO takes at least three months to have an effect.” This claim helps us get out of sticky conversations with our bosses or clients, and it leaves a lot of wiggle room if you aren’t getting the results you promised.

It is fair to say that there are some changes that will take time for the search engine bots to process.

There is then, of course, some time to see if those changes are having a positive or negative effect. Then more time might be needed to refine and tweak your work.

That doesn’t mean that any activity you carry out in the name of SEO is going to have no effect for three months. Day 90 of your work will not be when the ranking changes kick in. There is a lot more to it than that.

If you are in a very low-competition market, targeting niche terms, you might see ranking changes as soon as Google recrawls your page. A competitive term could take much longer to see changes in rank.

A study by Semrush suggested that of the 28,000 domains they analyzed, only 19% of domains started ranking in the top 10 positions within six months and managed to maintain those rankings for the rest of the 13-month study.

This study indicates that newer pages struggle to rank high.

However, there is more to SEO than ranking in the top 10 of Google.

For instance, a well-positioned Google Business Profile listing with great reviews can pay dividends for a company. Bing, Yandex, and Baidu might make it easier for your brand to conquer the SERPs.

A small tweak to a page title could see an improvement in click-through rates. That could be the same day if the search engine were to recrawl the page quickly.

Although it can take a long time to see first-page rankings in Google, it is naïve of us to reduce SEO success down to just that.

Therefore, “SEO takes 3 months” simply isn’t accurate.

  • Verdict: SEO myth.

11. Bounce Rate Is A Ranking Factor

Bounce rate is the percentage of visits to your website that result in no interactions beyond landing on the page. It is typically measured by a website’s analytics program, such as Google Analytics.
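As a quick illustration of the metric itself, here is a minimal Python sketch, using made-up session data, that computes bounce rate as the share of sessions with no interaction beyond the landing page:

```python
def bounce_rate(sessions):
    """Bounce rate: the share of sessions with no interaction beyond
    landing on the page. `sessions` is a list of per-session interaction
    counts (hypothetical data, not a real analytics export)."""
    bounces = sum(1 for interactions in sessions if interactions == 0)
    return bounces / len(sessions)

# 2 of 5 sessions had no further interaction, so a 40% bounce rate.
print(bounce_rate([0, 3, 1, 0, 2]))  # 0.4
```

Note that real analytics tools define "interaction" differently (events, pageviews, engagement timers), which is part of why the number is so easy to misread.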

Some SEO professionals have argued that bounce rate is a ranking factor because it is a measure of quality.

Unfortunately, it is not a good measure of quality.

There are many reasons why a visitor might land on a webpage and leave again without interacting further with the site. They may well have read all the information they needed on that page and left the site to call the company and book an appointment.

In that instance, the visitor bouncing has resulted in a lead for the company.

Although a visitor leaving a page having landed on it could be an indicator of poor quality content, it isn’t always. Therefore, it wouldn’t be reliable enough for a search engine to use as a measure of quality.

“Pogo-sticking,” or a visitor clicking on a search result and then returning to the SERPs, would be a more reliable indicator of the quality of the landing page.

It would suggest that the content of the page was not what the user was after, so much so that they have returned to the search results to find another page or re-search.

John Mueller cleared this up (again) during Google Webmaster Central Office Hours in June 2020. He was asked if sending users to a login page would appear to be a “bounce” to Google and damage their rankings:

“So, I think there is a bit of misconception here, that we’re looking at things like the analytics bounce rate when it comes to ranking websites, and that’s definitely not the case.”

Back on another Google Webmaster Central Office Hours in July 2018, he also said:

“We try not to use signals like that when it comes to search. So that’s something where there are lots of reasons why users might go back and forth, or look at different things in the search results, or stay just briefly on a page and move back again. I think that’s really hard to refine and say, “well, we could turn this into a ranking factor.”

So, why does this keep coming up? Well, for a lot of people, it’s because of this one paragraph in Google’s How Search Works:

“Beyond looking at keywords, our systems also analyze if content is relevant to a query in other ways. We also use aggregated and anonymised interaction data to assess whether Search results are relevant to queries.”

The issue with this is that Google doesn’t specify what this “aggregated and anonymised interaction data” is. This has led to a lot of speculation and of course, arguments.

My opinion? Until we have some more conclusive studies, or hear something else from Google, we need to keep testing to determine what this interaction data is.

For now, regarding the traditional definition of a bounce, I’m leaning towards “myth.”

In itself, bounce rate (measured through the likes of Google Analytics) is a very noisy, easily manipulated figure. Could something akin to a bounce be a ranking signal? Absolutely, but it will need to be a reliable, repeatable data point that genuinely measures quality.

In the meantime, if your pages are not satisfying user intent, that is definitely something you need to work on – not simply because of bounce rate.

Fundamentally, your pages should encourage users to interact, or if not that sort of page, at least leave your site with a positive brand association.

  • Verdict: SEO myth.

12. It’s All About Backlinks

Backlinks are important – that’s without much contention within the SEO community. However, exactly how important is still debated.

Some SEO pros will tell you that backlinks are one of the many tactics that will influence rankings, but they are not the most important. Others will tell you it’s the only real game-changer.

What we do know is that the effectiveness of links has changed over time. Back in the wild pre-Jagger days, link-building consisted of adding a link to your website wherever you could.

Forum comments, spun articles, and irrelevant directories were all good sources of links.

It was easy to build effective links. It’s not so easy now.

Google has continued to make changes to its algorithms that reward higher-quality, more relevant links and disregard or penalize “spammy” links.

However, the power of links to affect rankings is still great.

There will be some industries that are so immature in SEO that a site can rank well without investing in link-building, purely through the strength of their content and technical efficiency.

That’s not the case with most industries.

Relevant backlinks will, of course, help with ranking, but they need to go hand-in-hand with other optimizations. Your website still needs to have relevant content, and it must be crawlable.

If you want your traffic to actually do something when they hit your website, it’s definitely not all about backlinks.

Ranking is only one part of getting converting visitors to your site. The content and usability of the site are extremely important in user engagement.

Following the slew of Helpful Content updates and a better understanding of what Google considers E-E-A-T, we know that content quality is extremely important.

Backlinks can definitely help to indicate that a page would be useful to a reader, but there are many other factors that would suggest that, too.

  • Verdict: SEO myth.

13. Keywords In URLs Are Very Important

Cram your URLs full of keywords. It’ll help.

Unfortunately, it’s not quite as powerful as that.

John Mueller has said several times that keywords in a URL are a very minor, lightweight ranking signal.

In a Google SEO Office Hours in 2021, he affirmed again:

“We use the words in a URL as a very, very lightweight factor. And from what I recall, this is primarily something that we would take into account when we haven’t had access to the content yet.

So, if this is the absolute first time we see this URL and we don’t know how to classify its content, then we might use the words in the URL as something to help rank us better.

But as soon as we’ve crawled and indexed the content there, then we have a lot more information.”

If you are looking to rewrite your URLs to include more keywords, you are likely to do more damage than good.

Redirecting URLs en masse should only be done when necessary, as there is always a risk when restructuring a site.

For the sake of adding keywords to a URL? Not worth it.

  • Verdict: SEO myth.

14. Website Migrations Are All About Redirects

SEO professionals hear this too often. If you are migrating a website, all you need to do is remember to redirect any URLs that are changing.

If only this one were true.

In actuality, website migration is one of the most fraught and complicated procedures in SEO.

A website changing its layout, content management system (CMS), domain, and/or content can all be considered a website migration.

In each of those examples, there are several aspects that could affect how the search engines perceive the quality and relevance of the pages to their targeted keywords.

As a result, there are numerous checks and configurations that need to occur if the site is to maintain its rankings and organic traffic – ensuring tracking hasn’t been lost, maintaining the same content targeting, and making sure the search engine bots can still access the right pages.

All of this needs to be considered when a website is significantly changing.

Redirecting URLs that are changing is a very important part of website migration. It is in no way the only thing to be concerned about.
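The redirect piece itself still deserves rigor, though. One common check is verifying a redirect map against crawl results, flagging legacy URLs that don’t return a permanent (301) redirect to their planned destination. A minimal Python sketch with hypothetical URLs and crawl data (in practice, the crawl results would come from an HTTP client or a crawling tool):

```python
def check_redirects(plan, crawl):
    """plan: {old_url: intended_new_url}
    crawl: {old_url: (status_code, location_header)}
    Returns the old URLs whose redirect is missing or wrong."""
    problems = []
    for old, expected in plan.items():
        status, location = crawl.get(old, (None, None))
        # Anything other than a 301 to the planned destination is flagged.
        if status != 301 or location != expected:
            problems.append(old)
    return problems

plan = {"/old-shoes": "/shoes", "/old-hats": "/hats"}
crawl = {"/old-shoes": (301, "/shoes"), "/old-hats": (302, "/hats")}
print(check_redirects(plan, crawl))  # ['/old-hats'] – temporary redirect, not 301
```

A fuller version would also follow redirect chains and confirm the final destination returns a 200, but even this basic pass catches the most common migration mistakes.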

  • Verdict: SEO myth.

15. Well-Known Websites Will Always Outrank Unknown Websites

It stands to reason that a larger brand will have resources that smaller brands do not. As a result, more can be invested in SEO.

More exciting content pieces can be created, leading to a higher volume of backlinks acquired. The brand name alone can lend more credence to outreach attempts.

The real question is, does Google algorithmically or manually boost big brands because of their fame?

This one is a bit contentious.

Some people say that Google favors big brands. Google says otherwise.

In 2009, Google released an algorithm update named “Vince.” This update had a huge impact on how brands were treated in the SERPs.

Brands that were well-known offline saw ranking increases for broad competitive keywords. It stands to reason that brand awareness can help with discovery through Search.

It’s not necessarily time for smaller brands to throw in the towel.

The Vince update falls very much in line with other Google moves towards valuing authority and quality.

Big brands are often more authoritative on broad-level keywords than smaller contenders.

However, small brands can still win.

Long-tail keyword targeting, niche product lines, and local presence can all make smaller brands more relevant to a search result than established brands.

Yes, the odds are stacked in favor of big brands, but it’s not impossible to outrank them.

  • Verdict: Neither entirely truth nor myth.

16. Your Page Needs To Include ‘Near Me’ To Rank Well For Local SEO

It’s understandable that this myth is still prevalent.

There is still a lot of focus on keyword search volumes in the SEO industry, sometimes at the expense of considering user intent and how the search engines understand it.

When a searcher is looking for something with local intent, i.e., a place or service relevant to a physical location, the search engines will take this into consideration when returning results.

With Google, you will likely see the Google Maps results as well as the standard organic listings.

The Maps results are clearly centered around the location searched. However, so are the standard organic listings when the search query denotes local intent.

So, why do “near me” searches confuse some?

A typical keyword research exercise might yield something like the following:

  • “pizza restaurant manhattan” – 110 searches per month.
  • “pizza restaurants in manhattan” – 110 searches per month.
  • “best pizza restaurant manhattan” – 90 searches per month.
  • “best pizza restaurants in manhattan” – 90 searches per month.
  • “best pizza restaurant in manhattan” – 90 searches per month.
  • “pizza restaurants near me” – 90,500 searches per month.

With search volume like that, you would think [pizza restaurants near me] would be the one to rank for, right?

It is likely, however, that people searching for [pizza restaurant manhattan] are in the Manhattan area or planning to travel there for pizza.

[pizza restaurant near me] has 90,500 searches across the USA. The likelihood is that the vast majority of those searchers are not looking for Manhattan pizzas.

Google knows this and, therefore, will serve pizza restaurant results relevant to the searcher’s location.

Therefore, the “near me” element of the search becomes less about the keyword and more about the intent behind the keyword. Google will just consider it to be the location the searcher is in.

So, do you need to include “near me” in your content to rank for those [near me] searches?

No, you need to be relevant to the location the searcher is in.

  • Verdict: SEO myth.

17. Better Content Equals Better Rankings

It’s prevalent in SEO forums and X (formerly Twitter) threads. The common complaint is, “My competitor is ranking above me, but I have amazing content, and theirs is terrible.”

The cry is one of indignation. After all, shouldn’t search engines reward sites for their “amazing” content?

This is both a myth and sometimes a delusion.

The quality of content is a subjective consideration. If it is your own content, it’s harder still to be objective.

Perhaps in Google’s eyes, your content isn’t better than your competitors’ for the search terms you are looking to rank for.

Perhaps you don’t meet searcher intent as well as they do. Maybe you have “over-optimized” your content and reduced its quality.

In some instances, better content will equal better rankings. In others, the technical performance of the site or its lack of local relevance may cause it to rank lower.

Content is one factor within the ranking algorithms.

  • Verdict: SEO myth.

18. You Need To Blog Every Day

This is a frustrating myth because it seems to have spread outside of the SEO industry.

The myth goes: Google loves frequent content, so you should add new content or tweak existing content daily for “freshness.”

Where did this idea come from?

Google had an algorithm update in 2011 that rewarded fresher results in the SERPs.

This is because, for some queries, the fresher the results, the better the likelihood of accuracy.

For instance, if you had searched for [royal baby] in the UK in 2013, you would have been served news articles about Prince George. Searching again in 2015, you would have seen pages about Princess Charlotte.

In 2018, you would have seen reports about Prince Louis at the top of the Google SERPs, and in 2019 it would have been baby Archie.

If you were to search [royal baby] in 2021, shortly after the birth of Lilibet, then seeing news articles on Prince George would likely be unhelpful.

In this instance, Google discerns the user’s search intent and decides showing articles related to the newest UK royal baby would be better than showing an article that is arguably more rank-worthy due to authority, etc.

What this algorithm update doesn’t mean is that newer content will always outrank older content. Google decides if the “query deserves freshness” or not.

If it does, then the age of content becomes a more important ranking factor.

This means that if you are creating content purely to make sure it is newer than competitors’ content, you are not necessarily going to benefit.

If the query you are looking to rank for does not deserve freshness, e.g., [who is Prince William’s third child?], a fact that will not change, then the age of content will not play a significant part in rankings.

If you are writing content every day thinking it is keeping your website fresh and, therefore, more rank-worthy, then you are likely wasting time.

It would be better to write well-considered, researched, and useful content pieces less frequently and reserve your resources to make those highly authoritative and shareable.

  • Verdict: SEO myth.

19. You Can Optimize Copy Once & Then It’s Done

The phrase “SEO optimized” copy is a common one in agency-land.

It’s used as a way to explain the process of creating copy that will be relevant to frequently searched queries.

The trouble with this is that it suggests that once you have written that copy – and ensured it adequately answers searchers’ queries – you can move on.

Unfortunately, over time, how searchers look for content might change: the keywords they use and the type of content they want could alter.

The search engines, too, may change what they feel is the most relevant answer to the query. Perhaps the intent behind the keyword is perceived differently.

The layout of the SERPs might alter, meaning videos are being shown at the top of the search results where previously it was just webpage results.

If you look at a page only once and then don’t continue to update it and evolve it with user needs, then you risk falling behind.

  • Verdict: SEO myth.

20. Google Respects The Declared Canonical URL As The Preferred Version For Search Results

This can be very frustrating. You have several pages that are near duplicates of each other. You know which one is your main page, the one you want to rank, the “canonical.” You tell Google that through the specially selected “rel=canonical” tag.

You’ve chosen it. You’ve identified it in the HTML.
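For reference, that declaration is a single link element in the head of each near-duplicate page, pointing at the preferred URL (the URL below is a placeholder):

```html
<!-- On each near-duplicate page, point to the version you want to rank -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```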

Google ignores your wishes, and another of the duplicate pages ranks in its place.

The idea that Google will take your chosen page and treat it like the canonical out of a set of duplicates isn’t a challenging one.

It makes sense that the website owner would know best which page should be the one that ranks above its cousins. However, Google will sometimes disagree.

There may be instances where another page from the set is chosen by Google as a better candidate to show in the search results.

This could be because the page receives more backlinks from external sites than your chosen page. It could be that it’s included in the sitemap or linked to from your main navigation.

Essentially, the canonical tag is a signal – one of many that will be taken into consideration when Google chooses which page from a set of duplicates should rank.

If you have conflicting signals on your site, or externally, then your chosen canonical page may be overlooked in favor of another page.

Want to know if Google has selected another URL to be the canonical despite your canonical tag? In Google Search Console, in the Index Coverage report, you might see this: “Duplicate, Google chose different canonical than user.”

Google’s support documents helpfully explain what this means:

“This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Google has indexed the page that we consider canonical rather than this one.”

  • Verdict: SEO myth.

21. Google Has 3 Top Ranking Factors

It’s links, content, and RankBrain, right?

This idea that these are the three top ranking factors seems to come from a WebPromo Q&A in 2016 with Andrei Lipattsev, a search quality senior strategist at Google at the time (recovered through Wayback Machine; find this discussion at around the 30-minute mark).

When questioned on the “other two” top ranking factors (the questioner assumed that RankBrain was one), Lipattsev stated that links pointing to a site and content were the other two. He did clarify, saying:

“Third place is a hotly contested issue. I think… It’s a funny one. Take this with a grain of salt. […] And so I guess, if you do that, then you’ll see elements of RankBrain having been involved in here, rewriting this query, applying it like this over here… And so you’d say, ‘I see this two times as often as the other thing, and two times as often as the other thing’. So it’s somewhere in number three.

It’s not like having three links is ‘X’ important, and having five keywords is ‘Y’ important, and RankBrain is some ‘Z’ factor that is also somehow important, and you multiply all of that … That’s not how this works.”

However it started, the concept prevails. A good backlink profile, great copy, and RankBrain-type signals are what matter most for rankings, according to many SEO pros.

What we have to take into consideration when reviewing this idea is John Mueller’s response to a question in a 2017 English Google Webmaster Central office-hours hangout.

Mueller is asked if there is a one-size-fits-all approach to the top three ranking signals in Google. His answer is a clear “No.”

He follows that statement with a discussion around the timeliness of searches and how that might require different search results to be shown.

He also mentions that depending on the context of the search, different results may need to be shown, for instance, brand or shopping.

He continues to explain that he doesn’t think that there is one set of ranking factors that can be declared the top three that apply to all search results all the time.

Within the “How Search Works” documentation it clearly states:

“To give you the most useful information, Search algorithms look at many factors and signals, including the words of your query, relevance and usability of pages, expertise of sources, and your location and settings.

The weight applied to each factor varies depending on the nature of your query. For example, the freshness of the content plays a bigger role in answering queries about current news topics than it does about dictionary definitions.”

  • Verdict: Not entirely true or myth.

22. Use The Disavow File To Proactively Maintain A Site’s Link Profile

To disavow or not disavow — this question has popped up a lot over the years since Penguin 4.0.

Some SEO professionals are in favor of adding any link that could be considered spammy to their site’s disavow file. Others are more confident that Google will ignore them anyway and save themselves the trouble.

It’s definitely more nuanced than that.

In a 2019 Webmaster Central Office Hours Hangout, Mueller was asked about the disavow tool and whether we should have confidence that Google is ignoring medium (but not very) spammy links.

His answer indicated that there are two instances where you might want to use a disavow file:

  • In cases where a manual action has been given.
  • And where you might think that, if someone from the webspam team saw it, they would issue a manual action.

You might not want to add every spammy link to your disavow file. In practice, that could take a long time if you have a very visible site that accrues thousands of these links a month.

There will be some links that are obviously spammy, and their acquisition is not a result of activity on your part.

However, where they are a result of some less-than-awesome link building strategies (buying links, link exchanges, etc.) you may want to proactively disavow them.
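If you do go that route, the disavow file itself is just a plain .txt list uploaded via Search Console’s disavow tool: one URL per line, a `domain:` prefix to disavow an entire domain, and `#` for comments. The entries below are placeholders:

```text
# Links bought from a link network (example entries only)
domain:spammy-directory.example
https://link-farm.example/widgets/page1.html
```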

Read Roger Montti’s full breakdown of the 2019 exchange with John Mueller to get a better idea of the context around this discussion.

  • Verdict: Not a myth, but don’t waste your time unnecessarily.

23. Google Values Backlinks From All High Authority Domains

The better the website authority, the bigger the impact it will have on your site’s ability to rank. You will hear that in many SEO pitches, client meetings, and training sessions.

However, that’s not the whole story.

For one, it’s arguable whether Google has a concept of domain authority (see “Google Cares About Domain Authority” above).

And more importantly, it is the understanding that there is a lot that goes into Google’s calculations of whether a link will impact a site’s ability to rank highly or not.

Relevancy, contextual clues, nofollow link attributes. None of these should be ignored when chasing a link from a high “domain authority” website.

John Mueller also threw a cat among the pigeons during a live Search Off the Record podcast recorded at BrightonSEO in 2022 when he said:

“And to some extent, links will always be something that we care about because we have to find pages somehow. It’s like how do you find a page on the web without some reference to it? But my guess is over time, it won’t be such a big factor as sometimes it is today. I think already, that’s something that’s been changing quite a bit.”

  • Verdict: Myth.

24. You Cannot Rank A Page Without Lightning-Fast Loading Speed

There are many reasons to make your pages fast: usability, crawlability, and conversion. Arguably, it is important for the health and performance of your website, and that should be enough to make it a priority.

However, is it something that is absolutely key to ranking your website?

As this Google Search Central post from 2010 suggests, it was definitely something that factored into the ranking algorithms. Back when it was published, Google stated:

“While site speed is a new signal, it doesn’t carry as much weight as the relevance of a page. Currently, fewer than 1% of search queries are affected by the site speed signal in our implementation and the signal for site speed only applies for visitors searching in English on Google.com at this point.”

Is it still only affecting such a low percentage of queries?

In 2021, the Google Page Experience system, which incorporates the Core Web Vitals for which speed is important, rolled out on mobile. It was followed in 2022 with a rollout of the system to desktop.

This was met with a flurry of activity from SEO pros, trying to get ready for the update.

Many perceived it to be something that would make or break their site’s ranking potential. However, over time, Google representatives have downplayed the ranking effect of Core Web Vitals.

More recently, in May 2023, Google introduced Interaction to Next Paint (INP) to the Core Web Vitals to replace First Input Delay (FID).

Google claims that INP helps to deal with some of the limitations found with FID. This change in how a page’s responsiveness is measured shows that Google still cares about accurately measuring user experience.

From Google’s previous statements and recent focus on Core Web Vitals, we can see that load speed continues to be a ranking factor.

However, it will not necessarily cause your website to dramatically increase or decrease in rankings.

Google representatives Gary Illyes, Martin Splitt, and John Mueller hypothesized in 2021 during a “Search off the Record” podcast about the weighting of speed as a ranking factor.

Their discussion drew out the thinking around page load speed as a ranking metric and how it would need to be considered a fairly lightweight signal.

They went on to talk about it being more of a tie-breaker, as you can make an empty page lightning-fast, but it will not serve much use for a searcher.

John Mueller reinforced this in 2022 during Google SEO Office Hours when he said:

“Core Web Vitals is definitely a ranking factor. We have that for mobile and desktop now. It is based on what users actually see and not kind of a theoretical test of your pages […] What you don’t tend to see is big ranking changes overall for that.

But rather, you would see changes for queries where we have similar content in the search results. So if someone is searching for your company name, we would not show some random blog, just because it’s a little bit faster, instead of your homepage.

We would show your homepage, even if it’s very slow. On the other hand, if someone is searching for, I don’t know, running shoes, and there are lots of people writing about running shoes, then that’s where the speed aspect does play a bit more of a role.”

With this in mind, can we consider page speed a major ranking factor?

My opinion is no: page speed is one of the ways Google decides which pages should rank above others, but not a major one.

  • Verdict: Myth.

25. Crawl Budget Isn’t An Issue

Crawl budget – the idea that every time Googlebot visits your website, there is a limited number of resources it will visit – isn’t a contentious issue. However, how much attention should be paid to it is.

For instance, many SEO professionals will consider crawl budget optimization a central part of any technical SEO roadmap. Others will only consider it if a site reaches a certain size or complexity.

Google is a company with finite resources. It cannot possibly crawl every single page of every site every time its bots visit them. Therefore, some of the sites that get visited might not see all of their pages crawled every time.

Google has helpfully created a guide for owners of large and frequently updated websites to help them understand how to enable their sites to be crawled.

In the guide, Google states:

“If your site does not have a large number of pages that change rapidly, or if your pages seem to be crawled the same day that they are published, you don’t need to read this guide; merely keeping your sitemap up to date and checking your index coverage regularly is adequate.”

Therefore, it would seem that Google is in favor of some sites paying attention to its advice on managing crawl budget, but doesn’t consider it necessary for all.

For some sites, particularly ones that have a complex technical setup and many hundreds of thousands of pages, managing crawl budget is important. For those with a handful of easily crawled pages, it isn’t.
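For the smaller sites in that second group, “keeping your sitemap up to date” mostly means accurate `<loc>` and `<lastmod>` entries. A minimal sitemap entry looks like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per canonical page; keep <lastmod> accurate -->
  <url>
    <loc>https://www.example.com/blog/post/</loc>
    <lastmod>2024-07-09</lastmod>
  </url>
</urlset>
```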

  • Verdict: SEO myth.

26. There Is A Right Way To Do SEO

This is probably a myth in many industries, but it seems prevalent in SEO. There is a lot of gatekeeping in SEO social media, forums, and chats.

Unfortunately, it’s not that simple.

We know some core tenets about SEO.

Usually, something is stated by a search engine representative that has been dissected, tested, and ultimately declared true.

The rest is a result of personal and collective trial and error, testing, and experience.

Processes are extremely valuable within SEO business functions, but they have to evolve and be applied appropriately.

Different websites within different industries will respond to changes in ways others would not. Altering a meta title so it is under 60 characters long might help the click-through rate for one page and not for another.

Ultimately, we have to hold any SEO advice we’re given lightly before deciding whether it is right for the website we are working on.

  • Verdict: SEO myth.

When Can Something Appear To Be A Myth?

Sometimes an SEO technique can be written off as a myth by others purely because they have not experienced success from carrying out this activity for their own site.

It is important to remember that every website has its own industry, set of competitors, the technology powering it, and other factors that make it unique.

Blanket application of techniques to every website and expecting them to have the same outcome is naive.

Someone may not have had success with a technique when they have tried it in their highly competitive vertical.

It doesn’t mean it won’t help someone in a less competitive industry have success.

Causation & Correlation Being Confused

Sometimes, SEO myths arise because of an inappropriate connection between an activity that was carried out and a rise in organic search performance.

If an SEO has seen a benefit from something they did, then it is natural that they would advise others to try the same.

Unfortunately, we’re not always great at separating causation and correlation.

Just because rankings or click-through rates increased around the same time as you implemented a new tactic doesn’t mean it caused the increase. There could be other factors at play.

Soon, an SEO myth will arise from an overeager SEO who wants to share what they incorrectly believe to be a golden ticket.

Steering Clear Of SEO Myths

It can save you from experiencing headaches, lost revenue, and a whole lot of time if you learn to spot SEO myths and act accordingly.

Test

The key to not falling for SEO myths is making sure you can test advice whenever possible.

If you have been given the advice that structuring your page titles a certain way will help your pages rank better for their chosen keywords, then try it with one or two pages first.

This can help you measure whether making a change across many pages will be worth the time before you commit to it.

Is Google Just Testing?

Sometimes, there will be a big uproar in the SEO community because of changes in the way Google displays or orders search results.

These changes are often tested in the wild before they are rolled out to more search results.

Once a big change has been spotted by one or two SEO pros, advice on how to optimize for it begins to spread.

Remember the favicons in the desktop search results? The upset that caused in the SEO industry (and among Google users in general) was vast.

Suddenly, articles sprang up about the importance of favicons in attracting users to your search results. There was barely time to study whether favicons would impact the click-through rate that much.

Because just like that, Google changed it back.

Before you jump on the latest SEO advice being spread around Twitter as a result of a change by Google, wait to see if it will hold.

It could be that the advice that appears sound now will quickly become a myth if Google rolls back changes.

Featured Image: Search Engine Journal/Paulo Bobita

Part 1: How To Launch, Manage, & Grow An Affiliate Program Step-By-Step via @sejournal, @rollerblader

A value-adding affiliate program is among the highest-value, lowest-risk, and most reliable revenue channels. This three-part series will teach you how to launch, manage, and grow a value-adding affiliate program.

First, we should define “value-adding.” For this guide, value-adding is traffic that does not intercept your own efforts. If you lose SEO rankings, get banned on social media, or your email and SMS lists are destroyed, your affiliates will continue to be able to send you the same volume of customers and sales, helping you stay afloat.

But there are risks to the channel, and it is a labor-intensive marketing strategy. Unless you are a major brand, there is no massive group of people waiting to promote your product or service and drive sales to you. This is why having a proper plan to launch, manage, and grow your affiliate program is vital, and these three guides teach you how to do that.

Over the last 20 or so years, I’ve helped companies of all sizes and across the world launch, manage, and close down affiliate programs. I’m a two-time winner of the Affiliate Summit Pinnacle Award, which at the time required nominations from the international affiliate community and a vote by its board of directors.

I currently manage affiliate programs and coach companies and in-house managers. In the past, I also managed an affiliate CPA network for a year. I’ve been on all sides of the equation.

This guide is based on my experience and is intended to help you launch, grow, or remove stagnation from your affiliate program. It’s packed with pro tips to help you with attribution and answer your questions when something feels off, and you’re not getting explanations that sit right, like “It’s part of the customer journey or lifecycle.”

So, let’s start with a definition of an affiliate program because there is a lot of confusion between programs and networks. Then, we will go into the rest of part 1. Each part of the series gets more advanced, so if this is too easy, keep reading.

What Is An Affiliate Program?

An affiliate program is a marketing channel in which a company pays a third party on a revenue-sharing basis to promote its products, services, or offers.

The affiliate program is tracked via a software solution known as an affiliate or CPA network or through an analytics platform.

Now that we have a definition of what an affiliate program is, let’s get into the post.

This topic is split into three parts, so watch out for part 2!

Definitions

The jargon around affiliate programs can get confusing; the following is how we define each term in this guide. Please note that the wording can change based on the country and language.

For example, we say “affiliate program” in the USA, but in the UK, you may hear “affiliate scheme.” It’s the same thing.

  • Affiliate (also known as a publisher) – The person, company, or entity that promotes a brand, service, or product on a performance basis.
  • Affiliate network – A tracking platform that traditionally hosts ecommerce stores with multiple products, or single or multiple lead forms for SaaS, service providers, aggregators, or services, and earns its money through override fees on transactions and annual software usage fees.
  • Affiliate program (also known as scheme) – A store, service provider, or company and aggregator that pays other people, companies, or groups to promote their offering on a revenue-sharing or mixed payment model.
  • CPA network – Similar to an affiliate network, but does single offers or multiple private offers for a long-form, lead form, or landing page type of deal. Instead of ecommerce stores and sites, you may find subscriptions, bundles, and other types of “deals” or “offers” vs. selling individual products or shopping experiences.
  • Offer – Normally found on CPA networks, not affiliate networks, an offer is a commissionable service, bundle, or lead gen that pays a fee for a specific action, including downloads, form fills, and completed purchases.
  • OPM (also known as affiliate management company, consultant, or affiliate marketing agency) – Stands for outsourced program management.
  • Intent to purchase or convert – Commonly used to define where the person is in their customer journey. It is often confused with value-adding; the two are not equal or one and the same. “High-intent to purchase” or “relevant traffic” can often be used to disguise behaviors that are financially damaging to the company if allowed in the affiliate program.
  • FTC disclosures – These are advertising, endorsement, and relationship disclosures the FTC requires when promoting a product, service, brand, or app in order to receive some form of compensation.

Value add – The level of influence an affiliate click or interaction has on the decision to purchase:

  • High value – Partners that introduce new users to the brand and have their own traffic. Without this partner, the brand would not gain exposure to the audience or have sales.
  • Mid value – This touch point can be a review that helps convince a customer to convert or brings a customer back who either did not know the brand offered the product or service or forgot the brand existed.
  • Low value – An interaction that likely would have occurred without the partner, but there was at least some level of influence. This could be reviews, some end-of-sale touchpoints, or mid-shopping interceptions.
  • No value – When an affiliate has a touch point that does not influence the decision but takes a commission. This includes coupon codes that leak from influencers or partnerships, some end-of-sale and mid-sale touch points via browser extensions, and websites (including mass media) showing up for “your brand + coupons” in Google.

Now that you have the jargon, let’s jump into the guide.

Setting Goals And Expectations

The first step in launching or rebuilding an affiliate program is to set clear goals and expectations. Some companies do not care if their partners add value; they just need to show that there is a program and sales occur in it.

This is most common with large brands, inexperienced affiliate managers, and agencies that use a “set it and forget it” or automated strategy.

Other brands want customer acquisition, brand exposure, and new traffic sources so they can increase revenue and win back previous customers. It is up to you to define the goals for your company and program.

Side note: I’ve heard from C-level and marketing executives who say they do not care if the affiliates add value or not; they just want to keep the board or the C-suite happy. Other times, they need to spend their budget to keep their budget, so they turn their heads the other way, knowing their company is taking a loss. The network reps tell me similar things, and that is why low- and no-value partners will continue to thrive.

Based on the goals you set, you’ll be able to define what is needed in a platform and how to locate and recruit partners that meet your goals and see success with the channel. Proper affiliate platform selection is vital.

Not all platforms offer video creative or advanced HTML/JavaScript for advanced tools. Some have a great reputation in your niche but only do offers vs. ecommerce sales, so you won’t be able to grow or scale if you work with them and want traditional affiliates.

If compliance is important, note that not all networks give you direct access to the partners in your affiliate program, and some block referring URLs. This means you don’t know whether your partners are making false claims (including medical claims), failing to follow brand guidelines, or omitting advertising disclosures.

To pick a tracking platform for your affiliate program, ask yourself these questions:

  • Do I want new customers or not?
  • Will I be OK with revenue losses if AOV (average order value) increases, and can I do a controlled test before I launch?
      ◦ This is a common talking point used by voucher/coupon and loyalty browser extensions to get into programs. They will say that allowing them to interact with customers already in the shopping process increases conversions or AOV.
      ◦ You must have an unbiased third party run the test, which means no affiliate networks, affiliate managers, or affiliate agencies. None of these groups is unbiased, as all are incentivized to allow these touchpoints.
  • What types of creatives will I need to provide in order to achieve my goals?
  • Am I okay with not being able to forecast profitability, as the entire channel is out of my control?
  • Knowing this is a labor-intensive channel, can I dedicate the resources and take the financial loss during the first year or two to test its viability? Or would my time and money be better focused on PPC, social media, SEO, win-backs, co-marketing, offline advertising, etc.? If I don’t have the time, can I afford to take a loss on an agency for a year while they try it for me?
  • What is the potential market opportunity, and have I tested the conversions from it? This refers to how much traffic is out there that you cannot reach on your own if your goal is a value-adding affiliate program.

Pro tip: Launching on multiple networks just to get access to all affiliates is a bad idea 99.99% of the time. You’ll need to add custom logic code to your shopping cart to prevent paying out to multiple networks and to track all affiliate network clicks with a custom internal attribution system.

If you don’t have custom click attribution, the wrong network will get credit for the sale when two are involved, and you’ll end up choosing the wrong one to stick with. Don’t make this mistake as so many do.

Forecasting If An Affiliate Program Makes Sense Or Can Be Profitable

If all your affiliates are doing is intercepting your own traffic through browser extensions or by showing up in Google or Bing for your brand + coupons, you can forecast affiliate sales based on total site conversions.

These partners grow and shrink as your own efforts grow and shrink, because they are intercepting your own customers on your own website.

The more customers you have, the more they can intercept and the more they make. The less you have, the less they have to intercept and the less they make.

With that said, you can make a forecast for high-value affiliates that bring sales you would not have had on your own. This involves using data points from other channels. I’ll use non-review and non-coupon SEO affiliates for the example.

  • Start by using Google’s Keyword Planner or a keyword estimator from your favorite SEO tool to find estimated search volumes.
  • Combine the volume with your own data points for conversions. (For example, if you have a 5% conversion rate from PPC for the phrase “best blue tshirts” and there are 10,000 people searching each month, having affiliates show up for this phrase in SEO lets you forecast potential revenue if they send you the traffic.)
  • Combine this with your other data points for a more complete opportunity, including social media influencers, YouTube, and co-marketing.

Here’s A Formula To Use For A Basic Affiliate Program Profitability Forecast

2,000 visitors at 5% conversions with an AOV of $50 = $5,000.

With a 10% commission, 20% network fee, and operating cost of $2 per order, your profit is $4,200 (there is a net cost of $800 in the example above).

Last, add in anything you pay your affiliate manager, including bonuses and design costs for banners.

If you pay your affiliate manager $2,000 per month, your profit will be $2,200 per month, or $26,400 per year. The customer acquisition cost (CAC) is amazing!
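The arithmetic above can be sketched in a few lines of Python. This is an illustration, not a standard formula: the function name is mine, and the 20% network fee is assumed to be an override on the affiliate commission, which is how the example’s $800 total cost works out.

```python
def affiliate_profit(visitors, conversion_rate, aov,
                     commission_rate=0.10, network_fee_rate=0.20,
                     cost_per_order=2.00, manager_cost=0.0):
    """Basic affiliate-program profitability forecast (illustrative)."""
    orders = visitors * conversion_rate          # 2,000 * 5% = 100 orders
    revenue = orders * aov                       # 100 * $50 = $5,000
    commission = revenue * commission_rate       # 10% of revenue
    network_fee = commission * network_fee_rate  # assumed: 20% override on commission
    operating = orders * cost_per_order          # $2 per order
    return revenue - commission - network_fee - operating - manager_cost

profit = affiliate_profit(2000, 0.05, 50)                           # $4,200
with_manager = affiliate_profit(2000, 0.05, 50, manager_cost=2000)  # $2,200/month
```

Multiplying the monthly figure by 12 gives the $26,400 annual number used above.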

Bonus tip: Look at how many customers come back and purchase again. If you are not paying on the second or third sale but keep the touchpoint in your records, then each additional sale from this acquisition counts as revenue with a higher ROAS (return on ad spend).

In the situation above, you may find that this affiliate traffic leads to a large LTV (lifetime value) customer, so maybe you take a loss on the first sale for partners with a higher PLTV (predicted lifetime value).

You may lose on the first sale, but you don’t have to pay for that same customer multiple times, and the affiliate continues to send you more like them because your affiliates are being paid fairly.

Move On To Part Two: Types Of Affiliates & Onboarding

Now that you know what the terminology means, how to forecast profitability, and can set goals and expectations for your affiliate program, let’s look at the types of affiliates, the tools they’ll need, ways to activate them, and communications strategies in part two.


Featured Image: Roman Samborskyi/Shutterstock

New Ecommerce Tools: July 9, 2024

Our rundown of new tools for merchants this week includes social commerce, flexible payments, luxury shopping, fulfillment, AI-powered pricing, and shopping in the metaverse.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: July 9, 2024

Walmart helps sell used collectibles. Walmart Marketplace is allowing sellers to list collectors’ items without incurring referral fees until September 30, 2024, along with additional perks. Sellers can (i) display the conditions of collectibles through a new grading system, (ii) build anticipation around a new drop by offering pre-orders for products, and (iii) select a preferred return policy: no returns, a 15-day return for sealed items, or the standard 30-day Walmart return policy.

Web page of Walmart Marketplace

Walmart Marketplace

Captiv8 integrates with TikTok Shop. Captiv8, an influencer marketing platform for enterprise brands, has expanded its partnership with TikTok to include an integration with TikTok Shop. The integration into Captiv8’s Commerce Suite enhances its social commerce capabilities, strengthening efforts to advance the creator economy and maximize the earning potential for creators. The collaboration follows the recent launch of Captiv8’s Brand Exclusive Storefronts.

Klarna partners with Adobe Commerce on flexible payment options. Klarna, the buy-now-pay-later service, has partnered with Adobe Commerce to enable merchants to implement Klarna’s BNPL and other payment options, such as direct payments, pay-after-delivery, and installment plans. According to Adobe Commerce, consumers are embracing the flexibility of BNPL services, with Adobe Analytics data showing over 11% growth in 2024.

e.l.f. Beauty to sell physical items on Roblox. e.l.f. Beauty is testing real-world commerce, powered by Walmart, on the Roblox virtual universe gaming platform. e.l.f. has created a virtual kiosk within its e.l.f. UP! experience on Roblox, where U.S. visitors can purchase Roblox-exclusive limited-edition physical products. The e.l.f. UP! experience is developed in partnership with Egen (cloud development) and Supersocial (metaverse services).

Web page of e.l.f. UP! on Roblox

e.l.f. UP! on Roblox

Luxury platform Senser expands to the U.S. Senser, a luxury shopping platform, is entering the U.S. market. Senser partners with over 2,500 brands across 60 countries, offering a curated selection of more than 500,000 luxury items. The platform serves over 2 million high-net-worth customers. All products are shipped directly from European boutiques, allowing consumers to purchase their desired items at 30-60% off retail prices while enjoying a VIP-level professional fashion service experience.

Mollie and Riverty partner on BNPL invoicing. Mollie, a Europe-based financial service provider, is partnering with buy-now-pay-later provider Riverty, supported by Bertelsmann, the media company. Mollie customers now have an additional option to pay later — up to 30 days. Riverty’s solution is integrated with Mollie, allowing customers to activate it in their Mollie dashboard. Riverty is available for Mollie customers in Belgium, Germany, Austria, and the Netherlands. Mollie now offers more than 30 different payment methods.

Dunnhumby unveils AI-powered assortment tool for products. Dunnhumby, a provider of customer data science, has launched an assortment solution, harnessing AI to enable retailers to identify and curate product ranges. Retailers can deliver localized assortments tailored to the unique preferences of a customer base. Retailers can use the planogram feature to optimize layouts based on customer behavior and implement merchandising rules, restrictions, and AI-powered predictive analytics to quantify product arrangements.

Home page of Dunnhumby

Dunnhumby

Google Merchant Center releases Merchant API Beta. Google has released the Merchant API Beta, an update of the Content API for Shopping. The Merchant API can help showcase products with new features, modular design with isolated sub-API updates, and improved alignment with Google’s API improvement proposals. It allows multiple API feeds, enhanced management of supplemental feeds, and support for other shopping feeds, such as promotions. New features include simplified promotions statuses, local feeds, and more.

Co-op partners with Walmart Commerce Technologies for online fulfillment. Co-op is partnering with Walmart Commerce Technologies to implement its online fulfillment technology, Store Assist. The app allows retailers to manage pickup, third-party marketplace, ship-from-store, and last-mile delivery orders all in one place. This streamlined omnichannel fulfillment workflow will enhance in-store processes and operations, remove the need for colleagues to switch between quick commerce apps or different devices, and allow for faster delivery times to support Co-op’s commerce growth ambitions.

Amazon Q in Connect utilizes step-by-step guides to assist customer service agents. Amazon Q in Connect, a generative-AI powered assistant for contact center agents, now recommends step-by-step guides in real time, which agents use to resolve customers’ issues. Amazon Q in Connect uses real-time customer conversations to detect their intent and provides a guided workflow to solve the problem.

HyperFinity launches AI-powered pricing for retailers. HyperFinity, a decision intelligence platform for retailers, has launched an AI-powered pricing product. According to HyperFinity, the tool presents data and insights in an easy-to-consume format, replacing spreadsheets. Users can model commercial scenarios based on price change decisions to understand margin impact as well as create AI-driven actions against every product in the range.

Home page of HyperFinity

HyperFinity

13 Custom GPTs to Generate Content

Custom GPTs are versions of ChatGPT that OpenAI and its users create for specific tasks or topics. Custom GPTs offer innovative, time-saving shortcuts for businesses.

Here is a list of custom GPTs from OpenAI’s GPT Store to generate content. There are custom GPTs to produce images, videos, logos, graphs and slides, copy for ads and content marketing, and more.

Custom GPTs for Content

Copywriter GPT generates copy for marketing campaigns. Define your product’s campaign goal to generate copy and an ad.

Screenshot of the web page for Copywriter GPT

Copywriter GPT

DALL-E is a text-to-image generative AI model developed by OpenAI. Generate images from text prompts.

Canva provides visual design elements, such as presentations, logos, and social media posts. Quickly generate a promotional poster for a sale or an inspirational social media story.

Logo Creator generates professional logo designs and app icons. Define the style, simplicity, and type of logo and then refine it.

Screenshot of the web page for Logo Creator

Logo Creator

Write For Me is a tool for generating tailored, relevant content with a specified word count. Just specify the count and subject.

Image Generator specializes in generating images using a mix of professional and friendly tones. It can generate visual scenes for your product or design a logo.

AI Humanizer is a converter that humanizes AI-generated content. Create human-like content while maintaining meaning and quality.

Screenshot of the web page for AI Humanizer

AI Humanizer

Video GPT by VEED generates video for social media. Create a script and generate a video with AI avatars, text-to-speech, music, and stock footage.

Data Analyst, by OpenAI, analyzes and visualizes data. Upload data directly to ChatGPT, and then ask questions. Identify trends and generate charts to visualize the results.

Consensus is an assistant to research scientific literature and academic papers. Ask questions, then prompt it to create content based on the results.

Screenshot of the web page for Consensus

Consensus

Video AI generates videos with voice-overs in any language. Quickly create a marketing or how-to video. Generate a script from text prompts, or convert text to video.

Fully SEO Optimized Article including FAQs produces search-engine-optimized content, including titles, meta descriptions, and tags.

Presentation and Slides GPT generates a presentation or slide deck to convey an idea or introduce a product.

Screenshot of the web page for Presentation and Slides GPT

Presentation and Slides GPT

Google Says These Are Not Good Signals via @sejournal, @martinibuster

Google’s Gary Illyes’ answer to a question about authorship offered insights into why Google places less trust in signals that are under the direct control of site owners and SEOs, and a better understanding of what site owners and SEOs should focus on when optimizing a website.

The question Illyes answered came up during a live interview at a search conference in May 2024. The interview went largely unnoticed, but it’s full of great information related to digital marketing and how Google ranks web pages.

Authorship Signals

Someone asked whether Google would bring back authorship signals. Authorship has been a fixation for some SEOs, based on Google’s encouragement that SEOs and site owners review the Search Quality Raters Guidelines to understand what Google aspires to rank. SEOs, however, took the encouragement too literally and started to parse the document for ranking signal ideas instead.

Digital marketers came to see the concept of EEAT (Expertise, Experience, Authoritativeness, and Trustworthiness) as actual signals that Google’s algorithms were looking for and from there came the idea that authorship signals were important for ranking.

The idea of authorship signals is not far-fetched: Google at one time created a way for site owners and SEOs to pass along metadata about webpage authorship, but it eventually abandoned the idea.

SEO-Controlled Markup Is Untrustworthy

Google’s Gary Illyes answered the question about authorship signals and very quickly, within the same sentence, shared that Google’s experience with SEO-controlled data on the web page (markup) tends to become spammy (implying that it’s untrustworthy).

This is the question as relayed by the interviewer:

“Are Google planning to release some authorship sooner or later, something that goes back to that old authorship?”

Google’s Gary Illyes answered:

“Uhm… I don’t know of such plans and honestly I’m not very excited about anything along those lines, especially not one that is similar to what we had back in 2011 to 2013 because pretty much any markup that SEOs and site owners have access to will be in some form spam.”

Gary next went into greater detail, saying that SEO- and author-controlled markup are not good signals.

Here is how he explained it:

“And generally they are not good signals. That’s why rel-canonical, for example is not a directive but a hint. And that’s why Meta description is not a directive, but something that we might consider and so on.

Having something similar for authorship, I think would be a mistake.”

The concept of SEO-controlled data not being a good signal is important to understand because many in search marketing believe that they can manipulate Google by spoofing authorship signals with fake author profiles, with reviews that pretend to be hands-on, and with metadata (like titles and meta descriptions) that is specifically crafted to rank for keywords.

What About Algorithmically Determined Authorship?

Gary then turned to the idea of algorithmically determined authorship signals, and it may surprise some that he describes those signals as lacking in value. This may come as a blow to SEOs and site owners who have spent significant amounts of time updating their web pages to improve their authorship data.

The concept of the importance of “authorship signals” for ranking is something that some SEOs created all by themselves; it’s not an idea that Google encouraged. In fact, Googlers like John Mueller and SearchLiaison have consistently downplayed the necessity of author profiles for years.

Gary explained about algorithmically determined authorship signals:

“Having something similar for authorship, I think would be a mistake. If it’s algorithmically determined, then perhaps it would be more accurate or could be higher accuracy, but honestly I don’t necessarily see the value in it.”

The interviewer commented about rel-canonicals sometimes being a poor source of information:

“I’ve seen canonical done badly a lot of times myself, so I’m glad to hear that it is only a suggestion rather than a rule.”

Gary’s response to the observation about poor canonicals is interesting because he doesn’t downplay the importance of “suggestions” but implies that some of them are stronger although still falling short of a directive. A directive is something that Google is obligated to obey, like a noindex meta tag.
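To make the distinction concrete, both of the tags below go in a page’s head section, but Google treats them very differently (the URL is a placeholder for illustration):

```html
<!-- A hint: Google will usually consolidate signals to this URL,
     but it may choose a different canonical on its own -->
<link rel="canonical" href="https://example.com/preferred-page/">

<!-- A directive: Google is obligated to obey it and will keep
     the page out of its index -->
<meta name="robots" content="noindex">
```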

Gary explained about rel-canonicals being a strong suggestion:

“I mean it’s it’s a strong suggestion, but still it’s a suggestion.”

Gary affirmed that even though rel=canonical is a suggestion, it’s a strong suggestion. That implies a relative scale of how much Google trusts certain inputs that publishers make. In the case of a canonical, Google’s stronger trust in rel-canonical is probably a reflection of the fact that it’s in a publisher’s best interest to get it right, whereas other data, like authorship, could be prone to exaggeration or outright deception and is therefore less trustworthy.

What Does It All Mean?

Gary’s comments should give a foundation for setting the correct course on what to focus on when optimizing a web page. Gary (and other Googlers) have said multiple times that authorship is not really something that Google is looking for. That’s something that SEOs invented, not something that Google encouraged.

This also provides guidance on not overestimating the importance of metadata that is controlled by a site owner or SEO.

Watch the interview starting at about the two minute mark:

Featured Image by Shutterstock/Asier Romero

Google Search Now Supports Labeling AI Generated Or Manipulated Images via @sejournal, @martinibuster

Google Search Central updated their documentation to reflect support for labeling images that were extended or manipulated with AI. Google also quietly removed the “AI generated” metadata from Beta status, indicating that the “AI Generated” label is now fully supported in search.

IPTC Photo Metadata

The International Press Telecommunications Council (IPTC) is a standards-making body that, among other things, creates standards for photo metadata. Photo metadata enables a photograph to be labeled with information about the photo, such as copyright, licensing, and image descriptions.

Although the standard is maintained by an international press standards organization, the metadata standards the IPTC curates are used by Google Images in contexts outside of Google News. The metadata allows Google Images to show additional information about the image.

Google’s documentation explains the use case and benefit of the metadata:

“When you specify image metadata, Google Images can show more details about the image, such as who the creator is, how people can use an image, and credit information. For example, providing licensing information can make the image eligible for the Licensable badge, which provides a link to the license and more detail on how someone can use the image.”

AI Image Manipulation Metadata

Google quietly adopted the metadata standards pertaining to images manipulated with the AI algorithms typically used for image manipulation, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs).

There are two forms of AI image manipulation that are covered by the new metadata:

  • Inpainting
  • Outpainting

Inpainting

Inpainting is generally understood as enhancing an image to restore or reconstruct it, filling in the missing parts. But inpainting also covers any algorithmic manipulation that adds to an image.

Outpainting

Outpainting is the algorithmic process of extending an image beyond the borders of the original photograph, adding content that was not in the original image.

Google now supports labeling images manipulated in both of those ways with a new Digital Source Type metadata property called compositeWithTrainedAlgorithmicMedia.

compositeWithTrainedAlgorithmicMedia

While the new property looks like structured data, it’s not Schema structured data. It’s metadata that’s embedded in a digital image.

This is what was added to Google’s documentation:

“Digital Source Type

compositeWithTrainedAlgorithmicMedia: The image is a composite of trained algorithmic media with some other media, such as with inpainting or outpainting operations.”
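For reference, the property is carried in the image’s embedded XMP packet rather than in the page’s HTML. A simplified sketch of what that embedded metadata can look like, using the IPTC Extension schema and the IPTC NewsCodes URI for the new value (the structure is abbreviated; consult the IPTC Photo Metadata Standard for the full packet layout):

```xml
<!-- Abbreviated XMP fragment embedded in the image file, not page markup -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/">
  <rdf:Description rdf:about="">
    <Iptc4xmpExt:DigitalSourceType>http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia</Iptc4xmpExt:DigitalSourceType>
  </rdf:Description>
</rdf:RDF>
```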

Label For “AI Generated” – algorithmicMedia Metadata

Google also lifted the Beta status of the algorithmicMedia metadata specifications, which means that images that are created with AI can now be labeled as AI Generated if the algorithmicMedia metadata is embedded within an image.

This is the documentation before the change:

“algorithmicMedia: The image was created purely by an algorithm not based on any sampled training data (for example, an image created by software using a mathematical formula).

Beta: Currently, this property is in beta and only available for IPTC photo metadata. Adding this property makes your image eligible for display with an AI-generated label, but you may not see the label in Google Images right away, as we’re still actively developing it.”

The change removed the entire second paragraph and, with it, any mention of Beta status. Curiously, this change is not reflected in Google’s changelog.

Google’s Search Central documentation changelog noted:

“Supporting a new IPTC digital source type
What: Added compositeWithTrainedAlgorithmicMedia to the IPTC photo metadata documentation.

Why: Google can now extract the compositeWithTrainedAlgorithmicMedia IPTC NewsCode.”

Read Google’s updated documentation:

Image metadata in Google Images

Featured Image by Shutterstock/Roman Samborskyi

Critical SERP Features Of Google’s Shopping Marketplace via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Google’s launch and pullback of AI Overviews (AIOs) caught the most attention in the SEO scene over the last two months.

However, a change with at least the same significance flew under the radar: Google’s transformation from search engine to marketplace for shopping queries.

Yes, AIOs are impactful: In my initial analysis, I found a negative impact of -8.9% when a page is cited in an AIO compared to ranking at the top of the classic web search results.

I then found that Google pulled 50-66% of AIOs back. However, Google shows a whole slew of SERP features and AI features for ecommerce queries that are at least as impactful as AIOs.

To better understand the key trends for shopping queries, I analyzed 35,305 keywords across categories like fashion, beds, plants, and automotive in the US over the last five months using SEOClarity.

The results:

  • Product listings appear more often in position 1 in June compared to February 2024.
  • SERP features like Discussions & Forums gained visibility and opened a new playground for marketers.
  • SERP features fluctuate in visibility and introduce a lot of noise in SEO metrics.

Google Shopping Marketplace

In Ecommerce Shifts, I explain Google’s shift from search engine to ecommerce marketplace. In short, Google has merged the web results and shopping tab for shopping searches as a response to Amazon’s long-standing dominance:

  • Google has fully transitioned into a shopping marketplace by adding product filters to search result pages and implementing a direct checkout option.
  • These new features create an ecommerce search experience within Google Search and may significantly impact the organic traffic merchants and retailers rely on.
  • Google has quietly introduced a direct checkout feature that allows merchants to link free listings directly to their checkout pages.
  • Google’s move to a shopping marketplace was likely driven by the need to compete with Amazon’s successful advertising business.
  • Google faces the challenge of balancing its role as a search engine with the need to generate revenue through its shopping marketplace, especially considering its dependence on partners for logistics.

To illustrate with an example:

  1. Say you are looking for kayaks (summertime!).
  2. On desktop (logged-in), Google will now show you product filters on the left sidebar and product carousels in the middle on top of classic organic results – and ads, of course.
Google search for kayaks. Image Credit: Kevin Indig
  3. On mobile, you get product filters at the top, ads above organic results, and product carousels in the form of popular products.
Google search for kayaks on mobile. Image Credit: Kevin Indig
  4. This experience doesn’t look very different from Amazon, which is the whole point.
Amazon results. Image Credit: Kevin Indig

Google’s new shopping experience lets users explore products on Amazon, Walmart, eBay, Etsy, & Co.

From an SEO perspective, the prominent position of product grid (listings) and filters likely significantly impacts CTR, organic traffic, and, ultimately, revenue.

Product Listings Appear More Often In Position 1

30,172 out of 35,305 keywords (85.6%) show product listings, which are the free product carousels, in my analysis. It’s the most visible SERP feature in shopping search.

In February, product listings showed up for 39% of queries in position 1 and 15% of queries in position 3.

In July, that number shifted to 43% for position 1 and 13.6% for position 3. Google moved product listings higher up the SERPs.

Google product listings by position. Image Credit: Kevin Indig

The shift from web links to product images makes product listings a cornerstone feature in Google’s transformation. The increased visibility means Google doubles down on the new model.

Discussions & Forums Gain Visibility

After product listings (85.6% of queries), image carousels (61.8% of queries) are the most common SERP features.

SERP features by occurrence. Image Credit: Kevin Indig

Image carousels are highly impactful because shopping is a visual act. Seeing the right product can very quickly trigger a purchase, as opposed to customers being stuck in the Messy Middle for longer.

Retailers and ecommerce brands put a lot of effort into high-quality product pictures and need to spend equal time optimizing images for Google Search, even though organic traffic is usually much lower than web ranks.

Google now tests “generate image with AI,” a feature that lets users generate product images with prompts and then see similar (real) products.

It’s a powerful application of AI that, again, flies under the AIO radar but could also be impactful by making it easier for users to find things they want.

Image Credit: Kevin Indig

Visibility for most SERP features remained relatively unchanged between February and July, with one exception: Discussions & Forums grew from 28.7% to 34% of all queries (+5.3 percentage points).

SERP features, February vs. June 2024. Image Credit: Kevin Indig

The change in Discussions & Forums SERP features is in line with Reddit’s unprecedented SEO visibility gain over the last 12 months. The domain now operates at the traffic level of Facebook and Amazon.

Google’s Discussions & Forums feature highlights threads in forums like Reddit, Quora, and others. People visit forums when they are looking for authentic and unincentivized opinions from other consumers. Many review articles are biased, and it seems consumers know.

As a result, Google compensates for lower review quality with more user-generated content from forums. In Free Content, I referenced a study from Germany titled “Is Google getting worse?” that found:

  • “An overall downward trend in text quality in all three search engines.”
  • “Higher-ranked pages are on average more optimized, more monetized with affiliate marketing, and they show signs of lower text quality.”

Discussions & Forums show that high visibility doesn’t equal high impact for SERP features.

SERP Features And Their Impact Fluctuate

SERP features are commonly assumed to show up at a stable rate in Search, but Google constantly tests them.

As a result, SERP features that impact click-through rates can introduce a lot of noise into common SEO data (CTR, clicks, even revenue).

At the same time, Google switching some features on and off can help SEO pros understand the impact of SERP features on SEO metrics.

A good example is the Things To Know feature (TTK), which answers two common questions about a product with links to websites.

Things To Know feature. Image Credit: Kevin Indig

After months of stable visibility, Google suddenly reduced the number of TTKs by 37.5% for a month before bringing it back to previous levels.

Sites that were linked in TTK might have seen less organic traffic during that month. Since TTK isn’t reported in Search Console, those sites might wonder why their organic traffic dropped even though ranks might be stable.

Things to Know SERP feature. Image Credit: Kevin Indig

Coming back to the Kayak example from earlier, Google tests variations like deals and carousel segments (“Kayaks For Beginners”).

Kayaks for beginners. Image Credit: Kevin Indig

You can imagine how hard this makes getting stable data and why it’s so critical to monitor SERP features.


Featured Image: Paulo Bobita/Search Engine Journal

Poor Marketing Kills Ecommerce Dropshipping

Dropshipping is a good way to source products without much investment. Unfortunately, this seemingly turn-key model has little barrier to entry and thus attracts many competitors with razor-thin margins and no clear way to differentiate.

Yet creating a successful dropshipping business is not impossible, provided the would-be entrepreneur understands the growth and profit challenges.

Dropshipping Boost

I once heard ecommerce dropshipping described as “the perfect business model for anyone who wants all the stress and frustration of running a business without any of the pesky profits to worry about.”

While this description is a little unfair to an industry with estimated sales of $351.8 billion in 2024, according to Oberlo, a dropship provider, it also hints at the profit challenges of starting or scaling such a business.

As a business model, ecommerce dropshipping is attractive for a reason: it is relatively easy to start and very low risk.

There are at least four reasons an entrepreneur might be attracted to dropshipping.

  • Little or no investment. There is no need to purchase inventory upfront.
  • Low risk. Merchants only pay for products they sell, minimizing the risk.
  • Access to products. Stores can offer a variety of products without worrying about storage or investment. When I led ecommerce for a retail chain, we would use drop shippers to add complementary products to our site, boosting average order value.
  • Flexibility. Sellers can change product offerings based on market trends without significant financial risk.

All of these features focus on product sourcing and financial investment. The trade-off, however, is a marketing problem.

A Marketing Business

Selling drop-shipped items is a choice to focus on attracting and converting customers rather than developing and sourcing products.

Effectively, when you start or scale a dropshipping business, you prefer solving marketing problems rather than sourcing.

And there will be marketing problems. The top three are likely customer acquisition limits, undifferentiated products, and customer relationships.

Not much CAC

Think for a moment about a traditional retailer that orders products from a manufacturer at wholesale prices, warehouses the items, and sells them for, say, a 25% margin.

Thus, a $100 sale will result in a $25 gross profit. If the retailer wanted a return on advertising spend of 4:1, it could invest $6.25 to acquire a customer — that would be its target customer acquisition cost.

The dropshipping supply chain is longer and more expensive by comparison. More parties take a cut of the profit, and some take significant percentages because they carry the inventory risk.

A typical margin for a store selling a drop-shipped item may be as low as 10%, according to Shopify. So, the same $100 sale will result in $10 of margin. A 4:1 ROAS puts this shop’s target CAC at $2.50.
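Following the article’s arithmetic (where the 4:1 ratio is applied to gross profit), target CAC is simply gross profit per order divided by the ROAS target. A minimal sketch with the figures from the example:

```python
def target_cac(order_value: float, margin_rate: float, roas: float) -> float:
    """Maximum spend to acquire a customer while hitting the ROAS target.

    Gross profit per order = order_value * margin_rate; dividing by the
    ROAS ratio gives the allowable acquisition cost, per the article's math.
    """
    return order_value * margin_rate / roas

# Traditional retailer: 25% margin on a $100 sale, 4:1 target
print(target_cac(100, 0.25, 4))            # 6.25

# Dropship store: 10% margin on the same $100 sale
print(round(target_cac(100, 0.10, 4), 2))  # 2.5
```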

If a retailer and a dropship shop sell identical items — a real possibility — the marketing challenge is clear: the dropship store must acquire customers for less.

Same products

Selling an identical product exacerbates the already anemic CAC. Yet selling the same products is what most dropship-based stores do.

This t-shirt is available on a specialty t-shirt shop, AliExpress, and the Dsers-AliExpress Dropshipping app.

Consider Dsers-AliExpress Dropshipping, an app for Shopify. The app takes an item from AliExpress and adds it directly to a Shopify store. It will do this for any Shopify store, potentially placing the identical AliExpress item in dozens or even hundreds of shops.

Hence it’s not enough to market a store’s products. Operating an ecommerce dropshipping business requires differentiating from many others.

Customer relationships

Marketing tasks should not end when a sale is consummated. Some of the best tactics focus on retaining and engaging those buyers afterward.

Thus building strong customer relationships is crucial, especially in a dropshipping business where the same products might be available from multiple sources at similar prices.

That means investing time in content marketing, email marketing, retargeting, and social media marketing.

SEO Tools for Keyword Intent, Audience

AI is upending keyword research and analysis for search engine optimization.

I’ve addressed AI’s ability to generate new terms and identify search intent for an existing term. Yet the industry continues to innovate and evolve. Here are three AI tools for analyzing keywords, target audiences, content, intent, and more.

TermSniper

TermSniper identifies search intent for any keyword and suggests content tips for optimizing. Type a keyword, and the tool will provide an analysis.

I typed “best web hosting.” Here’s TermSniper’s analysis:

The primary search intent of this search term is to find and compare the best web hosting services available in 2024. The secondary search intent is to get recommendations and reviews of specific web hosting providers from trusted sources.

Insights

  • Tone of voice: Informational
  • User’s next action: Continue research
  • Webpage format: Listicle
  • Expert author needed: Not necessarily
  • Brand authority needed: High to medium
  • Incentive: Yes, various offers like discounts and promotions are common
  • CTR boost: Yes, the use of numbers like “2024,” specific terms like “Top Picks,” and mentions of well-known brands

The tool pulls keywords that can match the search intent. For “best web hosting,” those keywords were “best,” “bluehost,” “wordpress,” and “millions.”

TermSniper scores the related keywords from 1 to 100 depending on how many high-ranking pages include each in their page copy and meta tags. The tool recommends adding words scoring 20 to 100 into your meta title, H1, and first paragraph. Add words scoring 1 to 19 throughout the rest of your page in a natural fashion.
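The placement rule boils down to a threshold check on each keyword’s score. A hypothetical sketch of that guidance (the function and the example scores are illustrative, not part of the tool):

```python
def placement_for(score: int) -> str:
    """Suggest on-page placement for a related keyword based on its score.

    Thresholds follow TermSniper's guidance: scores of 20-100 go in
    prominent spots; scores of 1-19 go elsewhere in the page copy.
    """
    if 20 <= score <= 100:
        return "meta title, H1, first paragraph"
    if 1 <= score <= 19:
        return "body copy, used naturally"
    return "skip"

# Hypothetical scores for the related keywords from the example
for kw, score in {"best": 85, "bluehost": 40, "wordpress": 22, "millions": 8}.items():
    print(f"{kw}: {placement_for(score)}")
```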

TermSniper offers five free credits to test, with one keyword analysis per credit. Paid plans start at $2 per credit with a $10 minimum.

SEO.ai

SEO.ai offers AI-driven research that can limit keyword lists by the target audience. Provide a base term, click on the “Audience keywords” tab, and describe your target audience. The tool will find related keywords matching that audience.

For “best web hosting,” I described the target audience as “Beginner users who are planning to start a site.”

SEO.ai’s keyword suggestions that match my audience were:

  • “reliable providers,”
  • “starter plans,”
  • “money-back,”
  • “low-cost.”

SEO.ai provides keyword suggestions to match a target audience. Click image to enlarge.

I can sort those keywords by search volume or ranking difficulty. Based on those metrics, I can launch a dedicated page, add the words to existing pages, or both.

SEO.ai offers a 7-day free trial with a credit card. Paid plans start at $49 per month.

Free SEO Keyword Research & SERP Analyzer

Free SEO Keyword Research & SERP Analyzer is a custom GPT for all ChatGPT users. It can scrape any Google search-result page (or any page) and analyze keyword usage, intent, and the target audience.

For “best web hosting,” the tool’s SERP analysis was as follows:

  • User Intent: Users searching for “best web hosting” are likely looking for comprehensive reviews, comparisons, and recommendations to make an informed decision on web hosting services.
  • Content Freshness: Most articles are from 2024, ensuring up-to-date information on the latest web hosting trends and services.
  • Authority & Trust: The top results come from reputable tech review sites like TechRadar, PCMag, and CNET, as well as a highly engaged community platform (Reddit).

As with any ChatGPT or custom GPT prompts, users can refine the prompt as needed. For example, follow-up prompts could be:

  • “What are the most common keywords found in search snippets for this query?”
  • “What types of pages rank for this search query?”

Users can also identify any URL ranking for a target query and prompt the custom GPT to analyze its keywords and usefulness and compare it to users’ own pages.