Twitter Now Puts Recommended Tweets In Everyone’s Feeds via @sejournal, @MattGSouthern

Twitter is expanding recommended tweets to all users, including people who managed to avoid them until now.

In an announcement, the company states:

“We want to ensure everyone on Twitter sees the best content on the platform, so we’re expanding recommendations to all users, including those who may not have seen them in the past.”

Twitter links to a blog post from September that explains how recommendations work.

In short – recommendations are algorithmically selected tweets from accounts you don’t follow.

By surfacing recommended tweets alongside other content in the main feed, Twitter aims to help users discover other accounts they may be interested in following.

This change doesn’t necessarily mean that people who saw recommended tweets will start seeing more of them.

Twitter’s announcement emphasizes this update applies to people who weren’t already seeing recommended tweets.

However, actual users report seeing an uptick in recommended tweets. Some are even getting recommendations in their notification tab.

If you don’t enjoy recommended tweets or see too many, you can switch to the chronological feed and avoid them entirely.

The ratio on Twitter’s announcement tweet will give you an idea of the consensus toward recommendations and the lack of options for turning them off.

A vocal contingent of Twitter users would prefer to see more relevant recommendations or have more ways to turn them off.

It was only a few months ago that Twitter CEO Elon Musk criticized the recommendation algorithm, saying he prefers the chronological feed.

Could Twitter’s expansion of recommended tweets mean there are improvements to the algorithm?

You can customize your recommendations by tapping the three-dot icon next to individual tweets.

From there, you can say you’re “not interested in this tweet,” or go a step further and mute the account so Twitter doesn’t recommend it to you again.

Admittedly, it’s a cumbersome way to improve Twitter’s recommendation system, but it’s the only alternative to switching to ‘Latest Tweets.’


Featured Image: Roman Samborskyi/Shutterstock

Google Updates Display & Video 360 Account Level Suspension Policies via @sejournal, @brookeosmundson

Throughout the year, Google has made numerous updates and clarifications to its disapproval policies.

These updates have mostly revolved around providing more context to advertisers on the nature of ad disapprovals.

Earlier this year, Google updated its ad destination policy, unavailable video policy, and 3-strike rule.

To round out the year, Google is updating its account-level suspensions for Display and Video 360 users in December 2022.

What’s Changing?

Google is adding a dedicated page for Display & Video 360 users to its main Disapprovals and Suspensions documentation.

The new page will clarify what violations would constitute an account suspension. These include:

  • Circumventing systems
  • Coordinated deceptive practices
  • Counterfeit
  • Promotion of unauthorized pharmacies
  • Unacceptable business practices
  • Trade Sanctions violation
  • Sexually explicit content

While the new Display and Video 360 page isn’t available yet, expect to see additional context on the violations above to ensure you avoid them.

Repeat Violations Can Result In Suspension

In addition to the listed violations above, Google announced that a partner account could also risk suspension.

If an advertiser within a partner account has been found to violate policies repeatedly, the account itself can be suspended. To be clear, this doesn’t mean any violation could get an account suspended.

Google’s announcement stated that a suspension applies when the partner account or advertiser has “repeatedly or predominantly engaged in egregious policy violations.”

Per Google’s definition, egregious violations are:

  • “A violation so serious that it is unlawful or poses significant harm to our users or our digital advertising ecosystem. Egregious violations often reflect that the advertiser’s overall business does not adhere to Google Ads policies or that one violation is so severe that we cannot risk future exposure to our users.”

So, what can you do if that happens?

The good news is that the partner and advertiser can appeal account suspensions if hit with one.

Summary

Google is constantly reviewing its ad and account policies and violations. Consistent updates throughout the year have given advertisers more clarity on the nature of disapprovals, as well as more guidance on how to troubleshoot these violations.

Because Google relies on both human review and automation to detect violations, it’s always a good idea to spot-check your account for any disapprovals each week.

As Google rolls out the new policy page, we will update this article to include the link.


Featured Image: paper cut design/Shutterstock

Finance Marketing: How To Form A Successful Content Strategy via @sejournal, @sejournal

As a financial service business, you’re facing a unique set of challenges when it comes to creating content.

  1. Finance isn’t a particularly glamorous or entertaining subject to write about, which can make it tough to engage your readers.
  2. Heavy regulations and strict scrutiny in Google’s search results limit what you can say, as well as how you can say it.

So, how can you overcome these challenges to form an effective content strategy?

How do you create finance content that’s responsible and accurate yet still compelling and convincing?

Our new ebook, Content Marketing For Finance, walks you through how you can develop a content strategy that respects the rigorous demands of the financial space while truly connecting with your target audience.

“Audience is at the heart of every content marketing strategy and should always be kept top of mind,” writes author Chandal Nolasco da Silva.

Download your copy and learn how to meet your customers at each stage of their journey and create the kind of content that consistently converts.

What’s Inside This Finance Content Marketing Ebook?

This pocket guide has all the insights you need to navigate the ins and outs of content marketing within the finance industry.

Topics covered include:

  • Content marketing principles, best practices, and how to apply them specifically to finance.
  • Solutions to the unique challenges of finance marketing: slow adaptation to change, difficulty getting buy-in for digital efforts, and managing complex content and content marketing in an industry with high scrutiny on advertising.
  • Key marketing channels for finance and how to use them effectively.

Key Takeaways:

The contents of this marketing ebook can help you navigate complex issues, such as the:

  • Very long sales cycles in the B2B space, as well as the long delays at the bottom of the funnel. The finance industry has been notoriously slow to digitize, so new products and services are dealing with slow movers that are resistant to change.
  • Stark reality of required due diligence processes with lots of different stakeholders involved. There can be complications with regulators, operational delays, reference checks, or other risk-reduction processes involved. These are increasingly important and lengthy, depending on the institution or firm size involved.
  • Fact that sometimes digital channels don’t perform as well as they do in other industries; instead, more traditional ways of doing business, like in-person meetings, are sometimes better. Money is involved, after all.

If you’re a financial service professional looking to step up your content strategy for 2023, download the ebook now!


Google Answers If Splitting A Long Article Could Result In Thin Content via @sejournal, @martinibuster

In a Google Search Office Hours video, Googler Lizzi Sassman answered a question about thin content, clarifying a common misperception about what thin content really is.

Thin Content

The word thin means lacking thickness or width.

So when we hear the term “thin content” it’s not uncommon to think of thin content as a webpage with not much content on it.

The actual definition of thin content is more along the lines of content that lacks any added value.

Examples include a cookie-cutter page that barely differs from other pages, and even a webpage copied from a retailer or manufacturer with nothing additional added to it.

Google’s Product Review Update weeds out, among other things, thin review pages that are little more than product summaries.

The hallmark qualities of thin pages are that they lack originality, are barely different from other pages, and/or do not offer any particular added value.

Doorway pages are a form of thin content. These are webpages designed to rank for specific keywords. An example can be pages created to rank for a keyword phrase and different city names, where all the pages are virtually the same except for the names of the cities.

Are Short Articles Thin Content?

The person asking the question wanted to know if splitting up a long article into shorter articles would result in thin content.

This is the question asked:

“Would it be considered thin content if an article covering a lengthy topic was broken down into smaller articles and interlinked?”

Lizzi Sassman answered:

“Well, it’s hard to know without looking at that content.

But word count alone is not indicative of thin content.

These are two perfectly legitimate approaches: it can be good to have a thorough article that deeply explores a topic, and it can be equally just as good to break it up into easier to understand topics.

It really depends on the topic and the content on that page, and you know your audience best.

So I would focus on what’s most helpful to your users and that you’re providing sufficient value on each page for whatever the topic might be.”

Splitting a Long Article Into Multiple Pages

What the person asking the question may have meant is whether it’s okay to split one lengthy article across multiple interlinked pages, which is called pagination.

With pagination, a site visitor clicks to the next page to keep reading the content.

The Googler assumed that the person asking the question was splitting a long article into shorter articles devoted to the multiple topics that the lengthy article covered.

The non-live nature of Google’s new version of SEO office-hours didn’t allow the Googler to ask a follow-up question to verify if she was understanding the question correctly.

In any case, pagination is a fine way to break up a lengthy article.

Google Search Central has a page about pagination best practices.
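
Per those best practices, each page in a paginated series gets its own crawlable URL, the pages are linked sequentially with regular anchor tags, and later pages are not canonicalized back to page one. As a rough illustration only (the URLs are placeholders), the second part of a split-up article might contain:

    <!-- https://example.com/long-guide/part-2 -->
    <link rel="canonical" href="https://example.com/long-guide/part-2">

    <!-- body of part two -->

    <a href="https://example.com/long-guide/part-1">Part 1</a>
    <a href="https://example.com/long-guide/part-3">Part 3</a>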

Citation

Listen to the Google SEO Office Hours video at the 12:05 minute mark

Google: Links Have Less Impact Today Than In The Past via @sejournal, @martinibuster

In a Google SEO office hours video, a Googler answered a question about backlinks and rankings and offered the interesting fact that backlinks have less impact as a ranking signal than they did in the past.

Backlinks Ranking Signal

Links and anchor text signals made Google a better search engine than the competition when it was first introduced.

SEO used to primarily be about optimizing titles, headings, and content with keywords.

After Google rose to prominence, it became clear that links were the key to better rankings.

Whole industries rose to service the need for links, such as web directories and link selling brokers.

Various link building techniques also emerged, such as reciprocal linking, comment spam, forum spam, and so on.

For years, Google was largely losing the war against link spam. The turning point came in 2012 with the introduction of the Penguin algorithm, as well as other updates to Google’s infrastructure (such as Hummingbird) that allowed Google to perform link-related ranking functions at an increasingly massive scale.

Today, Google is able to evaluate links in such a way that low-quality links are discarded.

Links continue to be an important ranking factor, but it has been a mystery how much impact they have today.

John Mueller speculated recently that links may begin playing a decreasing role in ranking, saying:

“…it’s something where I imagine, over time, the weight on the links at some point will drop off a little bit as we can figure out a little bit better how the content fits in within the context of the whole web.”

Backlinks Have Less Impact Today

It is interesting to hear a Googler say that links have less impact today, because it was previously understood that the reduction in importance was something that would happen in the future.

But perhaps the key point to keep in mind is that the strength of the link signal is being compared to when Google first started.

The remark about links came in response to a question about why Google still uses backlinks as a ranking factor if link building campaigns are not allowed.

This is the question:

“Why does Google keep using backlinks as a ranking factor if link building campaigns are not allowed?

Why can’t Google find other ranking factors that can’t be easily manipulated like backlinks?”

Google’s answer:

“There are several things to unpack here.

First, backlinks as a signal has a lot less significant impact compared to when Google Search first started out many years ago.

We have robust ranking signals, hundreds of them, to make sure that we are able to rank the most relevant and useful results for all queries.”

That is definitely true: links have a lot less impact today than when Google first started, mainly because fewer kinds of links (like directory links and paid links) have the ability to impact search rankings.

It’s unclear if the Googler was making a reference to more than just the kinds of links that still have an impact.

The Googler continued:

“Second, full link building campaigns, which are essentially link spam according to our spam policy.

We have many algorithms capable of detecting unnatural links at scale and nullify them.

This means that spammers or SEOs spending money on links truly have no way of knowing if the money they spent on link building is actually worth it or not, since it’s really likely that they’re just wasting money building all these spammy links and they were already nullified by our systems as soon as we see them.”

Links and Site Promotion Are Still Important

Links have a function that goes beyond just ranking. Google discovers webpages through links.

Google’s own documentation not only cites links as how Google discovers web pages, it encourages publishers to promote their sites.

The documentation says:

“Google also finds pages through links from other pages. Learn how to encourage people to discover your site by Promoting your site.

…Chances are, there are a number of sites that cover topic areas similar to yours. Opening up communication with these sites is usually beneficial. Hot topics in your niche or community could spark additional ideas for content or building a good community resource.”

The quantity of links pointing to a site still indicates how important a site is.

The linking patterns created by natural links also help Google understand what a site is about through the resulting link graph.

Follow Up Questions

The Googler’s statements seem to require follow-up questions.

  • Did the Googler mean that links that Google uses for ranking have less impact than in the past?
  • What about link building campaigns that are centered on telling others about a site and asking for a link? Are those considered spam?
  • When the Googler referenced “link building campaigns” were they talking about campaigns to pay for guest posts or link insertions into existing articles?

The answers given are good starting points but this new format for the Google office hours is not conducted live.

That means there is no way to ask follow up questions, which makes some of the answers less useful.

Citation

Featured image by Shutterstock/Asier Romero

Listen to the Google Office Hours at the 6:08 minute mark

Google Adds 2 New Metrics To GA4 Reports via @sejournal, @MattGSouthern

Google Analytics is adding two new metrics to GA4 properties that provide more insight into how many pages visitors view and how long they stay.

Views per session and average session duration are now available in Explorations and Reporting Customization in GA4.

Views per session tracks the number of app screens or webpages people look at during a single visit, while average session duration measures the time users spend on the website.

An announcement from Carly Boddy, Product Manager at Google Analytics, shows how to add these metrics when building custom reports.

If you track views per session and average session duration for Universal Analytics (UA) properties, there are a few key differences to know regarding GA4.

You’re likely to see a difference in session counts, which can vary from business to business based on the following factors:

  • Geography: UA properties count a new session at midnight even if the user hasn’t left, which means session counts might be higher than in GA4. Consider the time zones of your users and how likely they are to cross the midnight threshold to restart a session.
  • UTM Tagging: UTM tagging on your website will reset the session in UA, which means you may see a much higher count of sessions in UA than in GA4.
  • Filters: The data in UA reporting may be subject to view filters that exclude data. The data in GA4 reporting for GA360 customers may be subject to filters that define which data from a source property appears in a sub-property.
  • Estimation: GA4 properties use a statistical estimate of the number of sessions on your website or app by estimating the number of unique session IDs, while UA properties don’t estimate the number of sessions. GA4 properties more efficiently count sessions with high accuracy and low error rate.
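
For teams that pull reporting data programmatically rather than through the GA4 interface, both metrics are also exposed by the Google Analytics Data API as screenPageViewsPerSession and averageSessionDuration. The sketch below uses the Data API’s Python client and is only an illustration; the property ID is a placeholder, and authentication is assumed to be configured separately via Application Default Credentials.

    # Minimal sketch: querying views per session and average session duration
    # via the GA4 Data API (google-analytics-data Python client).
    # "properties/123456789" is a placeholder property ID.
    from google.analytics.data_v1beta import BetaAnalyticsDataClient
    from google.analytics.data_v1beta.types import (
        DateRange,
        Dimension,
        Metric,
        RunReportRequest,
    )

    client = BetaAnalyticsDataClient()  # uses Application Default Credentials

    request = RunReportRequest(
        property="properties/123456789",
        dimensions=[Dimension(name="sessionDefaultChannelGroup")],
        metrics=[
            Metric(name="screenPageViewsPerSession"),  # views per session
            Metric(name="averageSessionDuration"),     # reported in seconds
        ],
        date_ranges=[DateRange(start_date="28daysAgo", end_date="yesterday")],
    )

    for row in client.run_report(request).rows:
        print(
            row.dimension_values[0].value,  # channel group
            row.metric_values[0].value,     # views per session
            row.metric_values[1].value,     # average session duration
        )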

Source: @carly_boddy on Twitter
Featured Image: photo_gonzo/Shutterstock

Google: Disavowing Random Links Flagged By Tools Is A Waste Of Time via @sejournal, @martinibuster

Google’s John Mueller answered a question about using the link disavow tool and offered a tip about the best way to use it, specifically mentioning links flagged by tools.

Although this tool was introduced ten years ago, there is still much confusion about how to use it properly.

Link Disavow Tool

The link disavow tool was introduced by Google in October 2012.

The disavow tool followed in the wake of the Penguin algorithm from April 2012, which ushered in a period of unprecedented chaos in the search marketing community because so many people were buying and selling links.

This period of openly buying and selling links came to a stop in April 2012, when the Penguin algorithm update was released and thousands of websites lost rankings.

Getting paid links removed was a huge pain for site owners because they had to request removal from every site, one by one.

There were so many link removal requests that some site owners started charging a fee to remove the links.

The SEO community begged Google for an easier way to disavow links, and in response to popular demand, Google released the link disavow tool in October 2012 for the express purpose of disavowing spam links that a site owner was responsible for.

The idea of a link disavow tool was something that had been kicking around for many years, at least since 2007.

Google resisted releasing that tool until after the Penguin update.

Google’s official announcement from October 2012 explained:

“If you’ve been notified of a manual spam action based on “unnatural links” pointing to your site, this tool can help you address the issue.

If you haven’t gotten this notification, this tool generally isn’t something you need to worry about.”

Google also offered details of what kinds of links could trigger a manual action:

“We send you this message when we see evidence of paid links, link exchanges, or other link schemes that violate our quality guidelines.”

John Mueller’s Advice on the Link Disavow Tool

Mueller answered a question about disavowing links to a domain property and as a side note offered advice on the proper use of the tool.

The question asked was:

“The disavow feature in Search Console is currently unavailable for domain properties. What are the options then?”

John Mueller answered:

“Well, if you have domain level verification in place, you can verify the prefix level without needing any additional tokens.

Verify that host and do what you need to do.”

Mueller then added a comment about the proper way to use the link disavow tool.

Mueller continued his answer:

“Also, keep in mind that disavowing random links that look weird or that some tool has flagged, is not a good use of your time.

It changes nothing.

Use the disavow tool for situations where you actually paid for links and can’t get them removed afterwards.”

Toxic Link Tools and Random Links

Many third party tools use proprietary algorithms to score backlinks according to how spammy or toxic the tool company feels they are.

Those toxicity scores may accurately rank how bad certain links appear to be, but they don’t necessarily correlate with how Google ranks and uses links.

Toxic link tool scores are just opinions.

The tools are useful for generating an automated backlink review, especially when they highlight negative links that you thought were good.

However, the only links one should be disavowing are the links one knows are paid for or are a part of a link scheme.
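
For anyone who does fall into that category, the disavow file itself is just a plain UTF-8 .txt file uploaded through Search Console’s disavow tool, with one URL or domain per line and # for comment lines. A minimal illustration of the format (the domains below are placeholders, not real sites):

    # Paid links we requested removal for but couldn't get taken down
    domain:spammy-link-network-example.com

    # Disavow a single page rather than an entire domain
    https://paid-guest-post-example.net/sponsored-article/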

Should You Believe Anecdotal Evidence of Toxic Links?

Many people experience ranking losses and when checking their backlinks are shocked to discover a large amount of extremely low quality webpages linking to their websites.

Naturally it’s assumed that this is the reason for the ranking drops and a never-ending cycle of link disavowing commences.

In those cases it may be useful to consider that there is some other reason for the change in rankings.

One case that stands out is when someone came to me about a negative SEO attack. I took a look at the links and they were really bad, exactly as described.

There were hundreds of adult themed spam links with exact match anchor text on unrelated adult topics pointing to his website.

Those backlinks fit the definition of a negative SEO attack.

I was curious, so I privately contacted a Googler by email. They emailed me back the next day and confirmed that negative SEO was not the reason why the site had lost rankings.

The real cause for the loss of rankings was that the site was affected by the Panda algorithm.

What triggered the Panda algorithm was low quality content that the site owner had created.

I have seen this many times since then, where the real problem was that the site owner was unable to objectively review their own content so they blamed links.

It’s helpful to keep in mind that what seems like the obvious reason for a loss in rankings is not necessarily the actual reason, it’s just the easiest to blame because it’s obvious.

But as John Mueller said, disavowing links that a tool has flagged and that aren’t paid links is not a good use of time.

Citation

Featured image by Shutterstock/Asier Romero

Listen to the Google SEO Office Hours video at the 1:10 minute mark

Google: Noindexed Pages Do Not Impact Crawl Budget via @sejournal, @MattGSouthern

Google’s Search Relations team confirms that noindexed pages don’t adversely impact a website’s crawl budget, no matter how many a site has.

This topic is addressed not once but thrice during the November 2022 edition of Google’s SEO office-hours Q&A session.

Google Search Advocates John Mueller and Gary Illyes take turns answering three similar questions from people concerned they have too many noindexed pages on their sites.

From Mueller’s and Illyes’ responses, we learn there’s no such thing as “too many” noindexed pages. Further, unless your website has over a million pages, there’s no need to worry about crawl budget.

Here’s a quick recap of each question and answer.

Question 1: Excessive Noindexed Pages

At the 8:23 mark in Google’s November 2022 office-hours, Illyes addresses a question asking if an “excessive” number of noindexed pages is an issue for discovery or indexing.

Illyes says noindex is a tool to help websites keep content out of search engines. Google encourages using the noindex tag when necessary, and for that reason, there are no adverse effects associated with it.

“Noindex is a very powerful tool that search engines support to help you, this site owner, keep content out of their indexes. For this reason, it doesn’t carry any unintended effects when it comes to crawling and indexing. For example, having many pages with noindex will not influence how Google crawls and indexes your site.”
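
For context, noindex is usually applied either as a robots meta tag in a page’s HTML or as an X-Robots-Tag HTTP response header (the latter is handy for non-HTML files such as PDFs). A minimal illustration:

    <!-- Robots meta tag inside the page's <head> -->
    <meta name="robots" content="noindex">

    HTTP/1.1 200 OK
    X-Robots-Tag: noindex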

Question 2: Ratio Of Indexed/Noindexed Pages

The following question comes up at the 10:22 mark:

“Should we keep an eye on the ratio between indexed and non-indexed pages in Search Console in order to better recognize possibly wasted crawl budget on non-indexed pages?”

Mueller debunks the idea that websites should attempt to balance their indexed and noindexed pages.

Further, crawl budget is a factor that few sites need to think about. Mueller says:

“No, there is no magic ratio to watch out for. Also, for a site that’s not gigantic, with less than a million pages, perhaps, you really don’t need to worry about the crawl budget of your website. It’s fine to remove unnecessary internal links, but for small to medium-sized sites, that’s more of a site hygiene topic than an SEO one.”

Question 3: Noindexed Pages Linked From Spammy Sites

At the 11:26 mark, a question comes up about noindexed pages that’s slightly different from the previous two.

The question reads:

“A lot of SEOs are complaining about having millions of URLs flagged as excluded by noindex in Google Search Console. All to nonsense internal search pages linked from spammy sites. Is this a problem for crawl budget?”

Illyes reiterates the earlier point about noindex being a tool for sites to use as needed.

Assuming the pages are intentionally noindexed, there’s no need to worry about Search Console flagging them. It doesn’t mean you’re doing anything wrong.

Illyes states:

“Noindex is there to help you keep things out of the index, and it doesn’t come with unintended negative effects, as we said previously. If you want to ensure that those pages or their URLs, more specifically, don’t end up in Google’s index, continue using noindex and don’t worry about crawl budget.”


Source: Google

Google Shares New Info About Vulnerabilities Found In Chrome via @sejournal, @MattGSouthern

Google security researchers are sharing new information about vulnerabilities detected in Chrome, Firefox, and Windows.

In a blog post, Google’s Threat Analysis Group (TAG) details the steps taken since discovering a commercial spyware operation with ties to Variston IT.

Based in Barcelona, Spain, Variston IT claims to provide custom security solutions. However, the company is connected to an exploitation framework called “Heliconia.”

Heliconia works in three ways:

  • It exploits a Chrome renderer bug to run malware on a user’s operating system.
  • It deploys a malicious PDF document containing an exploit for Windows Defender.
  • It utilizes a set of Firefox exploits for Windows and Linux machines.

The Heliconia exploit was used as early as December 2018 with the release of Firefox 64.

New information released by Google reveals Heliconia was likely used in the wild as a zero-day exploit.

Heliconia poses no risk to users today, as Google says it cannot detect active exploitation. Google, Mozilla, and Microsoft fixed the bugs in early 2021 and 2022.

Although Heliconia is patched, commercial spyware is a growing problem, Google says:

“TAG’s research underscores that the commercial surveillance industry is thriving and has expanded significantly in recent years, creating risk for Internet users around the globe. Commercial spyware puts advanced surveillance capabilities in the hands of governments who use them to spy on journalists, human rights activists, political opposition and dissidents.”

To protect yourself against Heliconia and other exploits like it, it’s essential to keep your internet browsers and operating system up to date.

TAG’s research into Heliconia is available in Google’s new blog post, which Google is publishing to raise awareness about the threat of commercial spyware.


Source: Google

Featured Image: tomfallen/Shutterstock

Ex-Googler Answers Why Google Search is Getting Worse via @sejournal, @martinibuster

An ex-Googler named Marissa Mayer appeared on the Freakonomics podcast to discuss the topic of whether Google is getting worse. Mayer suggested that asking why Google Search is getting worse is the wrong question. Her explanation of what is wrong turns the spotlight back on the web itself.

Why Marissa Mayer’s Opinion Matters

Marissa Mayer was employee #20 at Google, where she oversaw engineers, became director of consumer web products, and was part of the three-person team that worked on creating AdWords.

Mayer worked on many projects, including Google Images, News, Maps, and Gmail. She was at one point in charge of Local, Maps, and Location Services.

She eventually left Google to become the president and CEO of Yahoo! for five years.

There are few people in the world with her level of expert knowledge of and history with search, which makes her views about the current state of search of great interest.

Freakonomics Podcast: Is Google Getting Worse?

The host of the podcast started out the show by describing how in their experience Google is not as good as it used to be.

Freakonomics:

“The power of that revelation faded, as revelations do, and we all began to take Google for granted.

When you needed some information, you just typed a few words into the search box and, very quickly, you got the answer you were looking for, usually from an authoritative source.

But today? To me, at least, it doesn’t feel the same.

My search results just don’t seem as useful.

I feel like I’m seeing more ads, more links that might as well be ads, and more links to spammy web pages.”

Marissa Mayer Says Google is Just a Window

Marissa Mayer agreed that the search experience is different today.

But in her opinion the problem isn’t Google. The way she sees it, Google is only a window onto the Internet.

Mayer shared her opinion:

“I do think the quality of the Internet has taken a hit.

…When I started at Google, there were about 30 million web pages, so crawling them all and indexing them all was relatively straightforward.

It sounds like a lot, but it’s small.

Today, I think there was one point where Google had seen more than a trillion URLs.”

The host of the show asked if the increase in the number of URLs is the reason why search results are worse.

Mayer answered:

“When you see the quality of your search results go down, it’s natural to blame Google and be like, ‘Why are they worse?’

To me, the more interesting and sophisticated thought is if you say, ‘Wait, but Google’s just a window onto the web. The real question is, why is the web getting worse?’ “

Why is the Web Getting Worse?

The host of the show went along with the idea that the problem is the Internet getting worse and, as Marissa suggested, asked her why the web is getting worse.

Mayer offered an explanation that deflects from Google and lays blame for poor search results on the web itself.

She explained the reason why the web is worse:

“I think because there’s a lot of economic incentive for misinformation, for clicks, for purchases.

There’s a lot more fraud on the web today than there was 20 years ago.

And I think that the web has been able to grow and develop as quickly as it has because of less regulation and because it’s so international.

But we also have to take the flipside of that.

In a relatively unregulated space, there’s going to be, you know, economic mis-incentives that can sometimes degrade quality.

And that does put a lot of onus on the brokers who are searching that information to try and overcome that. And it’s difficult.

It kind of has to be more, in my view, an ecosystem-style reaction, rather than just a simple correction from one actor.”

Is the Problem Really the Internet?

The idea that the Internet is low quality because it is relatively unregulated is debatable.

There are government agencies dedicated to protecting consumers from fraudulent online activities. One example is the United States Federal Trade Commission’s guidelines on advertising, endorsements, and marketing. These rules are the reason why websites disclose when they profit from affiliate links.

Google itself also regulates the Internet through its publishing guidelines. Failure to abide by Google’s guidelines can result in exclusion from the search results.

Google’s ability to regulate the Internet extends to the quality of content itself, as evidenced by the fact that out of eight algorithm updates in 2022, six focused on spam, product reviews, and demoting unhelpful content.

It could be said that Google’s algorithm updates prove that Google is more focused on fixing Internet content than on improving the technology for returning relevant search results.

That so much of Google’s effort is focused on encouraging an “ecosystem-style reaction” aligns with Marissa Mayer’s observation that the problem with search is the websites and not Google.

Is Google Search worse because websites today are worse, or is the problem with Google itself, and the company just can’t see it?


Citation

Listen to the Freakonomics podcast:

Is Google Getting Worse?

Featured image by Shutterstock/Asier Romero