Google Now Uses Open Graph Title Tag (og:title) For Title Links via @sejournal, @MattGSouthern

In an update to its search documentation, Google has expanded the list of sources it uses to generate title links in search results.

Google now includes the og:title meta tag as one of the elements it considers when automatically creating title links for web pages.

Screenshot from: developers.google.com/search/updates, August 2024.

Title links, which appear as clickable headlines for search results, give people a quick introduction to a webpage and how well it matches their search.

Google’s system for generating title links has long relied on various on-page elements. Adding og:title expands the list of criteria Google uses.

Understanding og:title

The og:title tag allows you to specify a title for your content that may differ from the traditional HTML title tag. This can be useful for optimizing how a page appears when shared on social networks or, now, in search results.

The og:title tag is part of the Open Graph protocol, a set of meta tags developed by Facebook that allows any web page to become a rich object in a social graph.

While the tag is primarily used to control how content appears on social media platforms, Google’s inclusion of it among its title link sources indicates a broader use of Open Graph data.
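
To see how the two values can differ on a given page, here is a minimal audit sketch. It is our own illustration, not anything from Google’s documentation, and it assumes the requests and beautifulsoup4 packages are installed and uses a placeholder URL.

```python
# Minimal audit sketch: fetch a page and compare its <title> element with
# its og:title meta tag, e.g. <meta property="og:title" content="...">.
# Assumes the requests and beautifulsoup4 packages; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def get_candidate_titles(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title_element = soup.find("title")
    og_title = soup.find("meta", property="og:title")

    return {
        "title_element": title_element.get_text(strip=True) if title_element else None,
        "og:title": og_title.get("content") if og_title else None,
    }

if __name__ == "__main__":
    # Both values are candidates Google may consider for the title link.
    print(get_candidate_titles("https://example.com/"))
```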

Impact On SEO & Content Strategy

With this update, you may need to pay closer attention to og:title tags, ensuring they accurately represent page content while remaining engaging for searchers.

Google’s documentation now lists the following sources for automatically determining title links:

  1. Content in <title> elements
  2. Main visual title shown on the page
  3. Heading elements, such as <h1> elements
  4. Content in og:title meta tags
  5. Other large and prominent text through style treatments
  6. Other page content
  7. Anchor text on the page
  8. Text within links pointing to the page
  9. Website structured data

While Google says its title link generation is automated, understanding the sources it uses can help you influence how pages appear in search.

Best Practices Remain Unchanged

Google’s best practices for title links remain largely unchanged. The company recommends creating unique, descriptive titles for each page, avoiding keyword stuffing, and ensuring titles accurately reflect page content.

Note that changes to these elements may take time to be reflected in search results, as pages must be recrawled and reprocessed.


Featured Image: Sir. David/Shutterstock

Google Explains Why Low-Quality Traffic Can’t Impact Rankings via @sejournal, @martinibuster

Google’s Martin Splitt addressed a question about website trustworthiness and whether competitors can negatively impact it. He explained how Google assesses site trustworthiness and clarified why factors like links and site traffic don’t have a negative influence on Google’s perception of trustworthiness.

Trustworthiness

Googlers, research papers, and patents mention the trustworthiness of websites, but there is no actual trust metric in use at Google. It was confirmed a long time ago that there are multiple signals that together indicate whether a site can be trusted, but that’s not a trust algorithm; those are just signals.

When Googlers talk about whether a site is trustworthy, it’s probably best not to overthink it; they’re simply talking about whether a site can be trusted.

Can A Competitor Create Negative Trustworthiness Signals?

The person asking the question was worried about a competitor that was sending bot traffic to their site in what they felt was an effort to make their site appear to be untrustworthy by Google’s algorithm.

That may be a reference to the SEO idea that Google uses click metrics to rank web pages. However, most research papers about clicks describe using clicks to validate search results, not to rank web pages; it’s generally a quality assurance thing.

This is the question that was asked:

“Do I have to be concerned about bad actors trying to make our site appear untrustworthy by sending spam or fake traffic to my site? Since site trustworthiness is binary.”

Binary means it’s either this or that. In this case the person asking the question probably means a site is either trustworthy or untrustworthy with no gray areas in between.

Martin Splitt downplayed the idea of a binary quality to trustworthiness and outright denied that traffic could influence how Google sees a site.

He answered:

“It’s not really binary and just by sending traffic from questionable sources to a site, that won’t be ‘tainted’.”

“Spam or fake traffic” is not something that can negatively influence trust.

Martin explained that if a site itself is spammy then it’s going to be seen as spammy. He then confirmed that what other sites do in terms of linking or traffic has no effect on whether a site looks spammy or not.

He continued:

“If a site itself does shady things, such as spam, malware, sure, that’s a problem, but nobody gets to choose or control where traffic or links are coming from, so that’s not something Google Search will look at to judge a website’s trustworthiness.”

Bot Traffic Doesn’t Affect How Google Sees A Site

Pretty much every website experiences high levels of bot activity from hackers probing for vulnerabilities. Some bots repeatedly hit a site looking for non-existent pages. That’s just the state of the web; every site experiences it.

So what Martin said about third parties being unable to make another site appear to be untrustworthy makes sense, especially when it’s understood that all sites have low quality inbound links and low quality bot traffic.

Watch the SEO Office Hours podcast at the 18:48 minute mark:

Featured Image by Shutterstock/Krakenimages.com

Google Shows How To Block Bots And Boost Site Performance via @sejournal, @martinibuster

Google’s Martin Splitt answered a question about malicious bots that impact site performance, offering suggestions every SEO and site owner should know and put into action.

Malicious Bots Are An SEO Problem

Many SEOs who do site audits overlook security and bot traffic because it’s not widely understood among digital marketers that security events impact site performance and can explain why a site is inadequately crawled. Improving Core Web Vitals will do nothing when a poor security posture is contributing to poor site performance.

Every website is under attack, and excessive crawling can trigger a “500 server error” response code, signaling an inability to serve web pages and hindering Google’s ability to crawl the site.

How To Defend Against Bot Attacks

The person asking the question wanted Google’s advice on how to fight back against the waves of scraper bots impacting their server performance.

This is the question asked:

“Our website is experiencing significant disruptions due to targeted scraping by automated software, leading to performance issues, increased server load, and potential data security concerns. Despite IP blocking and other preventive measures, the problem persists. What can we do?”

Google’s Martin Splitt suggested identifying the service that is serving as the source of the attacks and notifying them of an abusive use of their services. He also recommended the firewall capabilities of a CDN (Content Delivery Network).

Martin answered:

“This sounds like somewhat of a distributed denial-of-service issue if the crawling is so aggressive that it causes performance degradation.

You can try identifying the owner of the network where the traffic is coming from, thank “their hoster” and send an abuse notification. You can use WHOIS information for that, usually.

Alternatively, CDNs often have features to detect bot traffic and block it and by definition they take the traffic away from your server and distribute it nicely, so that’s a win. Most CDNs recognize legitimate search engine bots and won’t block them but if that’s a major concern for you, consider asking them before starting to use them.”

Will Google’s Advice Work?

Identifying the cloud provider or server data center that’s hosting the malicious bots is good advice. But there are many scenarios where that won’t work.

Three Reasons Why Contacting Resource Providers Won’t Work

1. Many Bots Are Hidden

Bots often use VPNs and the open source Tor network to hide their origin, defeating attempts to identify the cloud service or web host providing their infrastructure. Hackers also hide behind compromised home and business computers, called botnets, to launch their attacks. There’s no way to identify them.

2. Bots Switch IP Addresses

Some bots respond to IP blocking by instantly switching to a different network to immediately resume their attack. An attack can originate from a German server and when blocked will switch to a network provider in Asia.

3. Inefficient Use Of Time

Contacting network providers about abusive users is futile when the source of the traffic is obfuscated or from hundreds of sources. Many site owners and SEOs might be surprised to discover how intensive the attacks on their websites are. Even taking action against a small group of offenders is an inefficient use of time because there are literally millions of other bots that will replace the ones blocked by a cloud provider.

And what about botnets made up of thousands of compromised computers around the world? Think you have time to notify all of those ISPs?

Those are three reasons why notifying infrastructure providers is not a viable approach to stopping bots that impact site performance. Realistically, it’s a futile and inefficient use of time.

Use A WAF To Block Bots

Using a web application firewall (WAF) is a good idea, and that’s the function Martin Splitt was pointing to when he mentioned using a CDN (content delivery network). A CDN like Cloudflare sends browsers and crawlers the requested web page from the server located closest to them, speeding up site performance and reducing the load on the site owner’s server.

A CDN also typically includes a WAF, which automatically blocks malicious bots. Martin’s suggestion to use a CDN is definitely a good option, especially because it has the additional benefit of improving site performance.

An option Martin didn’t mention is a WordPress WAF plugin like Wordfence. Wordfence automatically shuts down bots based on their behavior. For example, if a bot requests an excessive number of pages, it will create a temporary IP block. If the bot rotates to another IP address, it will identify the crawling behavior and block it again.
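
To make the behavior-based blocking idea concrete, below is a simplified sketch of a sliding-window rate limiter of the kind a WAF might apply. It illustrates the general concept only; it is not Wordfence’s or any CDN’s actual implementation, and the thresholds are arbitrary examples.

```python
# Simplified sketch of behavior-based bot blocking: an IP that requests
# too many pages inside a short window gets a temporary block. The
# thresholds are arbitrary examples, not values used by any real WAF.
import time
from collections import defaultdict, deque

MAX_REQUESTS = 120      # requests allowed per window
WINDOW_SECONDS = 60     # length of the sliding window
BLOCK_SECONDS = 900     # how long a temporary IP block lasts

recent_requests = defaultdict(deque)  # ip -> timestamps of recent requests
blocked_until = {}                    # ip -> time when the block expires

def allow_request(ip: str) -> bool:
    now = time.time()

    # Refuse requests from IPs still inside a temporary block.
    if blocked_until.get(ip, 0.0) > now:
        return False

    # Drop timestamps that have fallen outside the sliding window.
    log = recent_requests[ip]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()

    log.append(now)
    if len(log) > MAX_REQUESTS:
        # Excessive crawling behavior detected: block this IP temporarily.
        blocked_until[ip] = now + BLOCK_SECONDS
        return False
    return True
```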

Another solution to consider is a SaaS platform like Sucuri that offers a WAF and a CDN to speed up performance. Both Wordfence and Sucuri are trustworthy providers of WordPress security and they come with limited but effective free versions.

Listen to the question and answer at the 6:36 minute mark of the Google SEO Office Hours podcast:

Featured Image by Shutterstock/Krakenimages.com

Google Says Title Tags “Maybe” Impact Rankings via @sejournal, @martinibuster

Google’s John Mueller offered a surprising explanation about the ranking impact of title tags. His answer challenged the SEO belief that title tags are a critical ranking factor and clarified their actual role.

Mueller also discussed the proper use of meta descriptions.

Title Elements

The purpose of title tags is to provide a general description of the topic of a webpage.

Google’s SEO Starter Guide shows how to write titles:

“…a good title is unique to the page, clear and concise, and accurately describes the contents of the page. For example, your title could include the name of your website or business, other bits of important information like the physical location of the business, and maybe some information about what the particular page has to offer for users.”

The official W3C documentation defines the purpose of the title tag like this:

“The title element represents the document’s title or name. Authors should use titles that identify their documents even when they are used out of context, for example in a user’s history or bookmarks, or in search results. The document’s title is often different from its first heading, since the first heading does not have to stand alone when taken out of context.”

Meta Description

The meta description describes the web page (that’s why it’s called a meta description).

The official W3C HTML documentation says:

“description
The value must be a free-form string that describes the page. The value must be appropriate for use in a directory of pages, e.g. in a search engine. There must not be more than one meta element with its name attribute set to the value description per document.”

Google’s SEO Starter Guide explains that meta descriptions are “occasionally” used for generating the snippet shown in the search results:

“The snippet is sourced from the actual content of the page the search result is linking to… Occasionally the snippet may be sourced from the contents of the meta description tag, which is typically a succinct, one- or two-sentence summary of the page. A good meta description is short, unique to one particular page, and includes the most relevant points of the page.”

And Google’s Meta Description Best Practices recommends:

“A meta description tag generally informs and interests users with a short, relevant summary of what a particular page is about. They are like a pitch that convince the user that the page is exactly what they’re looking for.”

Notice that neither the official HTML documentation nor Google recommends using the title tag or meta description as a place to park keywords or as a call to action to get people to visit the site. Those are practices that SEOs invented, and they’re probably why so many title tags get rewritten: because they’re done incorrectly.
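
A simple way to act on that guidance is to check pages against the rules quoted above: one non-empty, descriptive title element and no more than one meta description per document. The sketch below is our own illustration, not an official tool from Google or the W3C, and it assumes the requests and beautifulsoup4 packages and a placeholder URL.

```python
# Audit sketch based on the guidance quoted above: every page should have
# one descriptive <title> and at most one meta description.
import requests
from bs4 import BeautifulSoup

def audit_title_and_description(url: str) -> list:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    issues = []

    title = soup.find("title")
    if title is None or not title.get_text(strip=True):
        issues.append("missing or empty <title> element")

    descriptions = soup.find_all("meta", attrs={"name": "description"})
    if len(descriptions) > 1:
        issues.append("more than one meta description on the page")
    elif not descriptions:
        issues.append("no meta description (Google may source the snippet from page content)")

    return issues

if __name__ == "__main__":
    print(audit_title_and_description("https://example.com/"))
```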

The Ranking Impact Of Title Tags

Google’s John Mueller directly answered and confirmed that changing the title element can maybe impact rankings. He didn’t say anything about the title tag being a ranking factor; he only said that changing the title tag can maybe impact rankings.

This is the question:

“We have a website with satisfying ranks and now our product added new features. We need to modify the page meta title & description, does that affect the current rankings?”

John Mueller answered that maybe changing the title tag could change the rankings in the search results.

Here’s his answer:

“Yes, or better, maybe. Changing things like titles or headings on pages can result in changes in Search.”

Why did Mueller say maybe?

He didn’t explain why.

But his answer confirms that changing your title elements doesn’t automatically cause a change in rankings.

My opinion (based on my experience) is that title elements are just content and changing your content can affect rankings.

Mueller’s answer about meta descriptions implied that they don’t have an effect on rankings but can affect the snippet shown in the search results. That’s something the digital marketing community is already on board with.

This is what he said about meta descriptions:

“Similarly, changing the meta description on a page can result in changes with how the snippet of a page is shown in Search. This is expected, and usually something that SEOs or site-owners focus on in an attempt to improve things in search.”

Title Tags Maybe Can Change Rankings

Mueller’s answer might come as a surprise to some because the belief that the title tag is an important ranking factor has been part of SEO tradition since the beginning of SEO more than 20 years ago, when search engines were relatively primitive compared to today. Some in the SEO community, for whatever reason, continue the decades-old tradition of treating title elements as a huge ranking factor.

Mueller confirmed that changing the title element can maybe impact search results, which matches what some people have experienced, with rankings moving in either direction.

Listen to the question and answer at the 19:29 minute mark:

Featured Image by Shutterstock/Krakenimages.com

Google’s Revamped Documentation Shows 4 Reasons To Refresh Content via @sejournal, @martinibuster

Google significantly revamped its documentation about ranking pages that contain video content. While the changelog lists three areas that changed, a review of the content provides a case study of four considerations for digital marketers and publishers when refreshing content to improve relevance for site visitors—and Google.

What Changed

The documentation that was updated relates to ranking web pages that contain videos. The purpose of the documentation is to communicate best practices for optimizing videos for higher visibility in Google’s search results.

Google’s changelog indicated that there were three major changes to the Video SEO best practices page.

  • Clarified video indexing criteria
  • Updated technical requirements
  • Added a new section about dedicated watch pages for each video

This is what the changelog shows was changed:

“Improving the Video SEO documentation

What: Overhauled the video SEO best practices. Notably, we clarified the video indexing criteria and technical requirements, added a new watch page section, and expanded our examples.

Why: Based on feedback submissions, we revisited our video SEO guidance to clarify what’s eligible for a video result and how site owners can make it easier for Google to find their videos.”

Four Reasons To Refresh Content

There’s a common misinterpretation that encourages changing content annually because “Google loves fresh content,” which is a gross misunderstanding of the Freshness Algorithm. Content shouldn’t be changed without purpose—otherwise, it’s just “rearranging the furniture” instead of truly “redesigning the space.”

Google’s reasons for updating the content offer a mini case study of four things publishers and businesses should consider when freshening up their content.

These are the four reasons for changing the Video SEO content:

  1. Remove Outdated Content
  2. Improved Information Density
  3. Add Fresh Information
  4. Update For Brevity And Clarity

1. Remove Outdated Content

The old version of the documentation was written when video as web content was a “growing format” and the changes reflect that the times have changed, rendering the old content out of date.

“Video is a growing format for content creation and consumption on the web, and Google indexes videos from millions of different sites to serve to users. “

Video is no longer a growing format on the web. The editors of the page were right to remove that passage because it no longer made sense.

Takeaway: Always keep up to date with how your readers perceive the topic. Failure to do this will make the content look less authoritative and trustworthy.

2. Improved Information Density

Information density in this context describes the ability of content to communicate ideas and topics with the least amount of words and with the highest amount of clarity.

An opening sentence should reflect the topic of the entire web page, but the original opening sentence did a poor job of communicating that. It stated that “Video is a growing format,” which did not reflect the topic of the page.

This is the new opening sentence:

“If you have videos on your site, following these video SEO best practices can help more people find your site through video results on Google.”

The new sentence accurately describes what the entire web page is about in just 25 words. Here’s something really cool: the second sentence remains exactly the same between the old and revised versions.

Takeaway: The lesson here is to revise what needs to be revised and don’t make changes when the original works just fine.

3. Add Fresh Information

An important change that all publishers should consider is to update content with fresh content that reflects how topics evolve over time. Products, laws, how consumers use services and products, everything undergoes some kind of change over time.

Google added content about tools available in Google Search Console that enable publishers to monitor the performance of their video content pages.

4. Update For Brevity And Clarity

Another reason for changing some of the content was to make it more concise and easier to read, with simplified language. One of the subtle changes was swapping the phrase “landing page” for “watch page.” This seemingly small change clarifies the meaning of the sentence by making it clear that Google is referring to a page where videos are watched. Previously the documentation made zero references to “watch page,” and now it uses that phrase 21 times, introducing consistency to the page’s message.

Many Reasons To Update Content

Every publisher should consider reviewing their content on a regular basis, whether that’s once a year for a smaller site or chunking it up and tackling different sections monthly. A content review is a great way to keep content relevant to users and to discover new topics for content. Sometimes it’s better to break a topic out of a web page and create a dedicated page for it.

Read the updated documentation:
Video SEO best practices

Compare it to the old documentation at Archive.org:
Video SEO best practices

Featured Image by Shutterstock/Cast Of Thousands

Google Cautions On Improper 404 Handling via @sejournal, @martinibuster

Google’s John Mueller addressed whether numerous 404 errors negatively impact rankings and provided a clear explanation of the best practices for handling them.

404 (Not Found) Status Code

404 is the code that a server sends when a browser or a crawler requests a web page that the server couldn’t find. It only means that the page was not found.

The official W3C documentation doesn’t use the word “error” in its definition of 404. That said, the 400 series of codes (400, 404, 410, etc.) are classified as Client Error Responses. A client is a browser or a crawler, so a client error response means that the server is telling the browser or crawler that their request is in error. It doesn’t mean that the website is in error.

This is the official W3C definition of a 404 Page Not Found response:

“The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists. A 404 status code does not indicate whether this lack of representation is temporary or permanent; the 410 (Gone) status code is preferred over 404 if the origin server knows, presumably through some configurable means, that the condition is likely to be permanent.”
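
To see which of these codes a given URL actually returns, a quick check of the raw HTTP status is enough. The snippet below is a minimal example that assumes the requests package and uses a placeholder URL.

```python
# Check the raw HTTP status code a URL returns (placeholder URL shown).
# 404 means "not found"; 410 means the resource is gone permanently.
import requests

response = requests.get("https://example.com/no-such-page", allow_redirects=False)
print(response.status_code)
```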

Will 404 Errors Affect Rankings?

The person asking the question wanted to know if a lot of 404 responses will affect rankings. Google’s John Mueller answered the question, then explained the right way to “fix” 404 error responses and cautioned about when not to “fix” them. I put “fix” in quotation marks because 404 responses are not always something that needs fixing.

Here’s the question:

“My website has a lot of 404s. Would I lose my site’s rankings if I don’t redirect them?”

John Mueller answered:

“First off, the 404s wouldn’t affect the rest of your site’s rankings.”

Addressing 404s With Redirects

Mueller next discussed the use of redirects for stopping 404 responses from happening. A redirect is a server response that tells the client that the web page they are requesting has been moved to another URL. A 301 redirect tells the browser or crawler that the URL has permanently moved to another URL.

When To Use Redirects For 404s

Redirecting a web page that no longer exists to another web page is sometimes the right way to handle 404 page not found responses.

This is how Mueller explains the proper use of redirects for “fixing” 404 responses:

“Redirects can play a role in dealing with old pages, but not always. For example, if you have a genuine replacement product, such as a new cup that functionally replaces a cup which is no longer produced, then redirecting is fine.”

When Not To Use Redirects For 404s

Next, he explained when not to use redirects for 404s, noting that it’s a poor experience to show a web page that is irrelevant to what site visitors expect to see.

Mueller explains:

“On the other hand, if you just have similar pages, then don’t redirect. If the user clicked on your site in search of a knife, they would be frustrated to only see spoons. It’s a terrible user-experience, and doesn’t help in search. “

It’s Okay To Show 404 Responses

Mueller next explained that it’s okay to show 404 responses because it’s the right response for when a browser or crawler asks for a page that doesn’t exist on a server anymore.

He explained:

“Instead, return an HTTP 404 result code. Make a great 404 page. Maybe even make a 404 page that explains why spoons are superior to knives, if you can make that argument. Just don’t blindly redirect to a similar page, a category page, or your homepage. If you’re unsure, don’t redirect. Accept that 404s are fine, they’re a normal part of a healthy website.”
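
Mueller’s two scenarios, a permanent redirect when there is a genuine replacement and a real 404 page for everything else, can be sketched in a few lines. The example below uses Flask purely as an illustration; the framework choice, routes, and products are our own and not from the podcast.

```python
# Illustration of both patterns described above, using Flask only as an
# example framework. The routes and products are hypothetical.
from flask import Flask, redirect

app = Flask(__name__)

# The old cup has a genuine functional replacement, so a 301 is appropriate.
@app.route("/products/old-cup")
def old_cup():
    return redirect("/products/new-cup", code=301)

@app.route("/products/new-cup")
def new_cup():
    return "The new cup that replaced the old one"

# Pages that no longer exist and have no real replacement get a helpful
# 404 page instead of a blind redirect to a category page or the homepage.
@app.errorhandler(404)
def not_found(error):
    return "That page is gone. Here are some things you might be looking for instead.", 404

if __name__ == "__main__":
    app.run()
```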

Always Investigate Error Responses

Something that Mueller didn’t mention is that 404 responses should always be investigated. Don’t stop investigating just because the page doesn’t exist and there’s no other page to redirect it to. Sometimes there’s a real problem that needs solving.

404 By Internal Links

For example, some 404s are caused by broken internal linking where a URL is misspelled. You can “fix” that by redirecting the wrong URL to the correct URL but that’s not fixing the problem because the real problem is the broken link itself.

404 Caused By Outgoing Links

Some 404s are caused by linking to pages that no longer exist. Linking to pages that don’t exist makes it look like the page is abandoned. It’s a poor user experience to link to a non-existent web page, and it is never a “normal part of a healthy website.” So either link to the right page, link to something else, or don’t link to anything at all.

404s Caused By Inbound Links

There is another type of 404 response that Mueller didn’t talk about that needs looking into. Sometimes another site misspells your URL when linking to you, and when that happens the right response is a 301 redirect to the correct URL. You can try contacting the site to ask them to fix their mistake, but it’s easier to just add the redirect and move on with your life.

Listen to the question and answer at the 2:08 minute mark:

Featured Image by Shutterstock/Krakenimages.com

Google Confirms AI Overviews Affected By Core Updates via @sejournal, @MattGSouthern

In a recent LinkedIn exchange, Google’s Senior Search Analyst John Mueller confirmed that core algorithm updates impact the search engine’s AI-powered overviews.

This info gives us a clearer picture of how AI is being woven into Google’s search results.

Responding to a question on LinkedIn, Mueller stated:

“These are a part of search, and core updates affect search, so yes.”

This backs up what folks in the SEO industry have noticed—the sources used in AI overviews seem to change after major algorithm updates.

Background On AI Overviews

Google rolled out AI overviews in US search results a few months back.

These summaries use a special version of Google’s Gemini AI to generate answers at the top of search results. The AI pulls info from different websites and combines it into a short, easy-to-read overview.

The Impact Of Core Updates

Core updates are broad changes to Google’s search algorithms and systems, typically rolled out several times a year.

These updates are intended to improve the quality of search results by reassessing how content is evaluated and ranked.

Google’s most recent core update, launched on August 15, is still rolling out. The company advises waiting until the update is finished before analyzing the impact.

Looking Ahead

As Google keeps integrating AI into search, publishers need more clarity around how core algorithm updates impact these features.

Mueller’s confirmation helps, but there’s still a lot we don’t know. There are still many questions about what makes content show up in AI overviews and whether it’s different from what makes websites rank high in regular search results.


Featured Image: Veroniksha/Shutterstock

6 Ways Spammers Exploit Google With Reddit via @sejournal, @martinibuster

A search marketer named Lars Lofgren felt exasperated by the avalanche of spam hitting Reddit and decided to expose the tricks that spammers are using to exploit Google’s preference for ranking Reddit discussions.

Reddit’s Spam Problem

Lars wrote that he decided to expose how spammers are exploiting Reddit because he’s a Redditor who is appalled by how spammy Reddit has become, which in turn is making Google’s search results spammier.

I spoke with Lars by email and he explained his motivation:

“I’m in the SEO industry and I believe in producing great content. That’s always been what motivates me. Produce great content, make Google users happy, Google features my stuff, and everyone wins.

Starting last fall, Google has begun featuring Reddit everywhere. In theory, this can be good. Reddit has tons of authentic conversations. But it’s anonymous, not moderated well, and has no accountability. So great content across the internet is getting buried below spammy Reddit content.

I want to be able to search Google and reliably find great content again, not spam. And a lot of business owners that run great websites have had to do layoffs or even shut down their websites because of this.”

How Spammers Exploit Reddit

Lars wrote an article that exposes six ways spammers are using Reddit to promote affiliate sites and get rewarded by Google. In our discussion, I asked him if links were being commoditized on Reddit, and he said it wouldn’t surprise him.

“I wouldn’t be surprised at all if moderators are selling links. …I’d be shocked if it wasn’t happening somewhere on Reddit. Or a company could just buy the link outright, lots of companies do this with blog posts that rank.”

1. Become A Moderator

Moderators have the power to pin a comment to any post. If anyone objects to the spammy activity the moderators can simply ban the users and make them go away.

In our conversation, Lars explained how moderators spam their subreddits and why Google’s uncontrolled ranking of Reddit discussions has corrupted them:

“A moderator of a subreddit can publish a comment of their own to any post in their subreddit. Usually it’ll be a new comment that they make to the conversation. And then pin the comment to the very top of the conversation so everyone sees it.

They can’t modify comments from other users but they can pin any comment of their own. This moderation power is there for communicating to their community, like when a conversation gets too controversial and a moderator wants to slow it down.

Some moderators have realized that their subreddit is being featured for lucrative terms in Google. So they go looking for posts that rank, then pin comments in those posts with a link that makes them a lot of money personally.”

2. Find Posts That Google Ranks

Lars wrote that another tactic spammers use is to identify posts that Google is ranking and then add a new pinned comment that contains an affiliate link. A pinned comment is permanently lodged at the top of a discussion, which ensures maximum exposure and engagement with users who arrive at Reddit from Google’s search results.

He writes:

“If you were a mod of that subreddit, you could go into that comment thread, add a new comment with your own affiliate link, then pin that comment to the top of the thread. And that’s EXACTLY what someone has done…”

3. Spam A Trusted Subreddit

The next technique doesn’t even involve becoming a mod. Lars described how a spammer just needs to find a subreddit with a history of ranking in the SERPs. The next step is to post a question, then use a sock puppet account to answer it with a link.

As a moderator at WebmasterWorld for around 20 years, and a forum owner as well, I can confirm that that technique is an old one known as Tag Teaming. Tag Teaming is where a person posts a question then subsequently answers the question with a different account. That second account is called a sock puppet.

Successfully using sock puppets against a seasoned and determined moderator won’t work. But from my experience as a forum owner and a moderator, I know that most moderators are just enthusiasts and rarely have any idea when someone is abusing their forum. So Lars’s insistence that spammers are using sock puppets is absolutely valid.

4. Create Engagement Bait

Another tactic Lars exposes involves dropping a mention of a product or website in a Reddit discussion instead of a link. Spammers do this because a mention has a higher likelihood of surviving moderator scrutiny.

What spammers do is post a discussion that purposely inspires empathy and causes other Redditors to jump in with their advice and experiences. The key to this strategy is to use a sock puppet to post an answer early in the discussion so that once the thread begins trending, the post containing the spam will rank higher.

Lars writes:

“Use all your Reddit accounts to pump up the conversation and get it trending. Once that’s done, the community should run with it.

If you hit an emotional pain really well and your product mention looks natural, the subreddit will go crazy. Everyone will jump in, empathize, offer their own suggestions, argue, upvote, the whole thing.”

There’s a similar strategy called Linkbaiting that uses emotional triggers to promote engagement. One of the legendary SEOs from the past, Todd Malicoat, wrote about these triggers 17 years ago (crediting some of them to the people who invented them). Using emotionally triggering topics is an old but proven tactic.

5. Build A New Subreddit

Lars said that another spam tactic is to build a new subreddit and seed it with sock puppet conversations on topics that are highly ranked in similar subreddits. Lars explained that Reddit’s recommendation engine will begin surfacing the spam posts to users interested in those topics, which helps the spammy subreddit grow so that the spammer can do whatever they want with it once it’s mature.

6. Aged Reddit Accounts Can Be Bought

Lars revealed that there is a market for aged Reddit accounts with posting histories, which can be purchased for as little as $150. Buying them avoids having to create an army of sock puppets and build posting histories so the accounts appear legitimate.

Everything Google Ranks Is A Commodity

I’ve been in SEO for twenty-five years, and one thing that never changes is that everything Google ranks becomes a commodity. If it ranks, there is a legion of spammers and hackers who want to exploit it. So it’s not surprising to me that spammers have worked out how to successfully exploit Reddit.

The spam situation is bad news for Google because now it has to police Reddit discussions to weed out the spam. Is Google up to the task?

Read the article by Lars Lofgren:

The Sleazy World of Reddit Marketing, Everything is Fake

Featured Image by Shutterstock/Luis Molinero

WordPress Cache Plugin Vulnerability Affects +5 Million Websites via @sejournal, @martinibuster

Up to 5 million installations of the LiteSpeed Cache WordPress plugin are vulnerable to an exploit that allows hackers to gain administrator rights and upload malicious files and plugins.

The vulnerability was first reported to Patchstack, a WordPress security company, which notified the plugin developer and waited until the vulnerability was patched before making a public announcement.

Patchstack founder Oliver Sild discussed this with Search Engine Journal and provided background information about how the vulnerability was discovered and how serious it is.

Sild shared:

“It was reported to through the Patchstack WordPress Bug Bounty program which offers bounties to security researchers who report vulnerabilities. The report qualified for a $14,400 USD bounty. We work directly with both the researcher and the plugin developer to ensure vulnerabilities get patched properly before public disclosure.

We’ve monitored the WordPress ecosystem for possible exploitation attempts since the beginning of August and so far there are no signs of mass-exploitation. But we do expect this to become exploited soon though.”

Asked how serious this vulnerability is, Sild responded:

“It’s a critical vulnerability, made particularly dangerous because of its large install base. Hackers are definitely looking into it as we speak.”

What Caused The Vulnerability?

According to Patchstack, the compromise arose from a plugin feature that creates a temporary user to crawl the site and generate a cache of its web pages. A cache is a copy of web page resources that is stored and delivered to browsers when they request a web page. A cache speeds up web pages by reducing the number of times a server has to fetch from a database in order to serve them.
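
As a rough illustration of the general caching idea, unrelated to LiteSpeed’s actual implementation, a page cache can be as simple as storing rendered HTML keyed by URL and serving the stored copy until it expires.

```python
# Toy page cache: serve a stored copy of rendered HTML when one exists,
# instead of rebuilding the page from the database on every request.
# This illustrates the general idea only, not LiteSpeed Cache itself.
import time

CACHE_TTL = 300          # seconds a cached copy stays fresh
page_cache = {}          # url -> (rendered_html, time_stored)

def render_from_database(url: str) -> str:
    # Stand-in for the expensive work a CMS does to build a page.
    return f"<html><body>Rendered {url} at {int(time.time())}</body></html>"

def serve_page(url: str) -> str:
    cached = page_cache.get(url)
    if cached and time.time() - cached[1] < CACHE_TTL:
        return cached[0]                      # cache hit: skip the database
    html = render_from_database(url)
    page_cache[url] = (html, time.time())     # cache miss: store a fresh copy
    return html
```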

The technical explanation by Patchstack:

“The vulnerability exploits a user simulation feature in the plugin which is protected by a weak security hash that uses known values.

…Unfortunately, this security hash generation suffers from several problems that make its possible values known.”

Recommendation

Users of the LiteSpeed WordPress plugin are encouraged to update their sites immediately because hackers may be hunting down WordPress sites to exploit. The vulnerability was fixed in version 6.4.1 on August 19th.

Users of the Patchstack WordPress security solution receive instant mitigation of vulnerabilities. Patchstack is available in a free version and the paid version costs as little as $5/month.

Read more about the vulnerability:

Critical Privilege Escalation in LiteSpeed Cache Plugin Affecting 5+ Million Sites

Featured Image by Shutterstock/Asier Romero

Data Confirms Disruptive Potential Of SearchGPT via @sejournal, @martinibuster

Researchers analyzed SearchGPT’s responses to queries and identified how it may impact publishers, B2B websites, and e-commerce, discovering key differences between SearchGPT, AI Overviews, and Perplexity.

What is SearchGPT?

SearchGPT is a prototype natural language search engine created by OpenAI that combines a generative AI model with the most current web data to provide contextually relevant answers in a natural language interface that includes citations to relevant online sources.

OpenAI has not offered detailed information about how SearchGPT accesses web information. But the fact that it uses generative AI models means that it likely uses Retrieval Augmented Generation (RAG), a technology that connects an AI language model to indexed web data to give it access to information that it wasn’t trained on. This enables AI search to provide contextually relevant answers that are up to date and grounded with authoritative and trustworthy web sources.
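
The retrieval-augmented generation pattern mentioned above can be sketched in a few lines. Keep in mind this is a generic illustration with hypothetical stand-in functions; OpenAI has not published how SearchGPT retrieves or ranks web data.

```python
# Generic sketch of retrieval-augmented generation (RAG). The retriever and
# model calls below are hypothetical stand-ins, not OpenAI's API or
# SearchGPT's actual pipeline.

def search_index(question: str, top_k: int = 5) -> list:
    # Hypothetical stand-in for a web-scale retriever; returns canned text.
    return [{"text": "Example passage relevant to the question."}] * top_k

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for a generative language model call.
    return f"(answer grounded in a prompt of {len(prompt)} characters)"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve current, relevant documents from an index of web data.
    documents = search_index(question, top_k=5)

    # 2. Augment the prompt with the retrieved text so the answer is
    #    grounded in up-to-date sources instead of only training data.
    context = "\n\n".join(doc["text"] for doc in documents)
    prompt = (
        "Answer the question using only the sources below and cite them.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate the final answer with citations.
    return generate_answer(prompt)

if __name__ == "__main__":
    print(answer_with_rag("What is SearchGPT?"))
```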

How BrightEdge Analyzed SearchGPT

BrightEdge used a pair of search marketing research tools developed for enterprise users to help identify search and content opportunities and emerging trends, and to conduct deep competitor analysis.

They used their proprietary DataCube X and the BrightEdge Generative Parser™ to extract data points from SearchGPT, AI Overviews and Perplexity.

Here’s how it was done:

“BrightEdge compared SearchGPT, Google’s AI Overviews, and Perplexity.

To evaluate SearchGPT against Google’s AI Overviews and Perplexity, BrightEdge utilized DataCube X alongside BrightEdge Generative Parser ™ to identify a high-volume term and question based on exact match volumes. These queries were then input into all three engines to evaluate their approach, intent interpretation, and answer-sourcing methods.

This comparative study employs real, popular searches within each sector to accurately reflect the performance of these engines for typical users.”

DataCube X was used to identify high-volume keywords and questions; all volumes were based on exact matches.

Each search engine was analyzed for:

  1. Approach to the query
  2. Ability to interpret intent
  3. Method of sourcing answers

SearchGPT Versus Google AI Overviews

Research conducted by BrightEdge indicates that SearchGPT offers comprehensive answers while Google AI Overviews (AIO) provides answers that are more concise but also has an edge with surfacing current trends.

The difference found is that SearchGPT in its current state is better for deep research and Google AIO excels at giving quick answers that are also aware of current trends.

Strength: BrightEdge’s report indicates that SearchGPT answers rely on a diverse set of authoritative web resources that reflect academic, industry-specific, and government sources.

Weakness: The results of the report imply that SearchGPT’s weakness in a comparison with AIO is in the area of trends, where Google AIO was found to be more articulate.

SearchGPT Versus Perplexity

The researchers concluded that Perplexity offers concise answers that are tightly focused on topicality. This suggests that Perplexity, which styles itself as an “answer engine,” shares strengths with Google’s AIO in terms of providing concise answers. If I were to speculate, I would say this might reflect a focus on satisfaction metrics that are biased toward more immediate answers.

Strength: Because SearchGPT seems to be tuned more for research and high-quality information sources, it could be said to have an edge over Perplexity as a more comprehensive and potentially more trustworthy research tool.

Weakness: Perplexity was found to be a more concise source of answers, excelling at summarizing online sources of information for answers to questions.

SearchGPT’s focus on facilitating research makes sense because the eventual context of SearchGPT is as a complement to ChatGPT.

Is SearchGPT A Competitor To Google?

SearchGPT is not a direct competitor to Google because OpenAI’s stated plan is to incorporate it into ChatGPT rather than to offer it as a standalone search engine.

This is how OpenAI explains it:

“We also plan to get feedback on the prototype and bring the best of the experience into ChatGPT.

…Please note that we plan to integrate the SearchGPT experience into ChatGPT in the future. SearchGPT combines the strength of our AI models with information from the web to give you fast and timely answers with clear, relevant sources.”

Is SearchGPT then a competitor to Google? The more appropriate question is if ChatGPT is building toward disrupting the entire concept of organic search.

Google has done a fair job of exhausting and disenchanting users with ads, tracking and data mining their personal lives.  So it’s not implausible that a more capable version of ChatGPT could redefine how people get answers.

BrightEdge’s research discovered that SearchGPT’s strength was in facilitating trustworthy research. That makes even more sense with the understanding that SearchGPT is currently planned to be integrated into ChatGPT, not as a competitor to Google but as a competitor to the concept of organic search.

Takeaways: What SEOs And Marketers Need To Know

The major takeaways from the research can be broken down into five ways SearchGPT is better than Google AIO and Perplexity.

1. Diverse Authoritative Sources.
The research shows that SearchGPT consistently surfaces answers from authoritative and trustworthy sources.

“Its knowledge base spans academic resources, specialized industry platforms, official government pages, and reputable commercial websites.”

2. Comprehensive Answers
BrightEdge’s analysis showed that SearchGPT delivers comprehensive answers on any topic, simplifying them into clear, understandable responses.

3. Proactive Query Interpretation
This is really interesting: the researchers discovered that SearchGPT not only understood the user’s immediate information need but also answered questions with an expanded breadth of coverage.

BrightEdge explained it like this:

“Its initial response often incorporates additional relevant information, illustrative examples, and real-world applications.”

4. Pragmatic And Practical
SearchGPT tended to provide practical answers that were good for ecommerce search queries. BrightEdge noted:

“It frequently offers specific product suggestions and recommendations.”

5. Wide-Ranging Topic Expertise
The research paper noted that SearchGPT correctly used industry jargon, even for esoteric B2B search queries. The researchers explained:

“This approach caters to both general users and industry professionals alike.”

Read the research results on SearchGPT here.

Featured Image by Shutterstock/Khosro