Google Gives Exact Reason Why Negative SEO Doesn’t Work via @sejournal, @martinibuster

Google’s Gary Illyes answered a question about negative SEO, providing useful insights into the technical details of how Google prevents low-quality spam links from affecting normal websites.

The answer about negative SEO was given in an interview in May and went largely unnoticed until now.

Negative SEO

Negative SEO is the practice of sabotaging a competitor with an avalanche of low quality links. The idea is that Google will assume that the competitor is spamming and knock them out of the search engine results pages (SERPs).

The practice of negative SEO originated in the online gambling space where the rewards for top ranking are high and the competition is fierce. I first heard of it around the mid-2000s (probably before 2010) when someone involved in the gambling space told me about it.

Virtually all websites that rank for meaningful search queries attract low-quality links, and there is nothing unusual about it; it’s always been this way. The concept of negative SEO became more prominent after the Penguin link spam update caused site owners to become more aware of the state of their inbound links.

Does Negative SEO Cause Harm?

The person interviewing Gary Illyes was taking questions from the audience.

She asked:

“Does negative SEO via spammy link building, a competitor throwing tens of thousands of links at another competitor, does that kind of thing still harm people or has Google kind of pushed that off to the side?”

Google’s Gary Illyes answered the question by first asking the interviewer if she remembered the Penguin update to which she answered yes.

He then explained his experience reviewing examples of negative SEO that site owners and SEOs had sent him. He said that out of hundreds of cases he reviewed there was only one case that might have actually been negative SEO but that the web spam team wasn’t 100% sure.

Gary explained:

“Around the time we released Penguin, there was tons and tons of tons of complaints about negative SEO, specifically link based negative SEO and then very un-smartly, I requested examples like show me examples, like show me how it works and show me that it worked.

And then I got hundreds, literally hundreds of examples of alleged negative SEO and all of them were not negative SEO. It was always something that was so far away from negative SEO that I didn’t even bother looking further, except one that I sent to the web spam team for double checking and that we haven’t made up our mind about it, but it could have been negative SEO.

With this, I want to say that the fear about negative SEO is much bigger than or much larger than it needs to be, we disable insane numbers of links…”

The above is Gary’s experience of negative SEO. Next he explains the exact reason why “negative SEO links” have no effect.

Links From Irrelevant Topics Are Not Counted

At about the 30-minute mark of the interview, Gary confirmed something interesting and important to understand about how links are evaluated. Google has, for a very long time, examined the context of the site that’s linking out to match it to the site that’s being linked to; if they don’t match up, Google won’t pass the PageRank signal.

Gary continued his answer:

“If you see links from completely irrelevant sites, be that p–n sites or or pure spam sites or whatever, you can safely assume that we disabled the links from those sites because, one of the things is that we try to match the the topic of the target page plus whoever is linking out, and if they don’t match then why on Earth would we use those links?

Like for example if someone is linking to your flower page from a Canadian casino that sells Viagra without prescription, then why would we trust that link?

I would say that I would not worry about it. Like, find something else to worry about.”
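Google hasn’t published how this topical matching works, but the idea can be illustrated with a toy sketch: score the topical overlap between the linking page and the target page, and ignore (“disable”) the link when the overlap is too low. The tokenizer, the Jaccard overlap measure, and the threshold below are illustrative assumptions, not Google’s actual method.

```python
import re

def topic_terms(text: str) -> set[str]:
    # Toy "topic" extraction: lowercase word tokens, ignoring very short words.
    return {t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 3}

def link_counts(source_text: str, target_text: str, threshold: float = 0.1) -> bool:
    # Jaccard overlap between the two pages' term sets as a stand-in for
    # topical relevance; below the threshold, the link is "disabled".
    source, target = topic_terms(source_text), topic_terms(target_text)
    if not source or not target:
        return False
    overlap = len(source & target) / len(source | target)
    return overlap >= threshold

flower_page = "Seasonal flower arrangements, rose and tulip bouquet delivery"
garden_blog = "Growing tulip and rose varieties for flower arrangements at home"
casino_page = "Online casino bonus codes and no-prescription pharmacy offers"

print(link_counts(garden_blog, flower_page))  # topically related: counted
print(link_counts(casino_page, flower_page))  # unrelated: ignored
```

Real systems presumably use far richer representations than word overlap, but the structure is the same: an off-topic source fails the relevance check, so the link simply contributes nothing.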

Google Matches Topics From Page To Page

There was a time, in the early days of SEO, when thousands of links from non-matching topics could boost a site to the top of Google’s search results. Some link builders used to offer “free” traffic counter widgets to universities that, when placed in the footer, contained a link back to their client sites, and those links used to work. But Google tightened up on those kinds of links.

What Gary said about links having to be relevant matches up with what link builders have known for at least twenty years. The concept of off-topic links not being counted by Google was understood way back in the days when people did reciprocal links.

Although I can’t remember everything every Googler has ever said about negative SEO, this seems to be one of the rare occasions that a Googler offered a detailed reason why negative SEO doesn’t work.

Watch Gary Illyes answer the question at the 26 minute mark:

Featured Image by Shutterstock/MDV Edwards

Instagram Algorithm Shift: Why ‘Sends’ Matter More Than Ever via @sejournal, @MattGSouthern

In a recent Instagram Reel, Adam Mosseri, the head of Instagram, revealed a top signal the platform uses to rank content: sends per reach.

This metric measures the number of people who share a post with friends through direct messages (DMs) relative to the total number of viewers.
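Mosseri doesn’t give a formula, but as described, the signal is a simple ratio. The function below is an illustrative sketch of that ratio, not Instagram’s actual implementation.

```python
def sends_per_reach(dm_sends: int, reach: int) -> float:
    # Ratio of DM shares to unique viewers; a higher value suggests
    # content people want to pass along privately.
    if reach == 0:
        return 0.0
    return dm_sends / reach

# A reel seen by 20,000 accounts and sent in DMs 500 times:
print(sends_per_reach(500, 20_000))  # 0.025
```

The takeaway is that the numerator (sends) is normalized by the denominator (reach), so a small account whose audience shares heavily can score better on this signal than a large account with passive viewers.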

Mosseri advises creating content people want to share directly with close friends and family, saying it can improve your reach over time.

This insight helps demystify Instagram’s ranking algorithms and can assist your efforts to improve visibility on the platform.

Instagram’s ‘Sends Per Reach’ Ranking Signal

Mosseri describes the sends per reach ranking signal and its reasoning:

“Some advice: One of the most important signals we use in ranking is sends per reach. So out of all the people who saw your video or photo, how many of them sent it to a friend in a DM? At Instagram we’re trying to be a place where people can be creative, but in a way that brings people together.

We want to not only be a place where you passively consume content, but where you discover things you want to tell your friends about.

A reel that made you laugh so hard you want to send it to your brother or sister. Or a soccer highlight that blew your mind and you want to send it to another fan. That kind of thing.

So, don’t force it as a creator. But if you can, think about making content that people would want to send to a friend, or to someone they care about.”

The emphasis on sends as a ranking factor aligns with Instagram’s desire to become a platform where users discover and share content that resonates with them personally.

Advice For Creators

While encouraging creators to produce shareworthy content, Mosseri cautioned against forced attempts to game the system.

However, prompting users to share photos and videos via DM is said to boost reach.

What Does This Mean For You?

Getting people to share posts and reels with friends can improve reach, resulting in more engagement and leads.

Content creators and businesses can use this information to refine their Instagram strategies.

Rather than seeing Instagram’s focus on shareable content as an obstacle, consider it an opportunity to experiment with new approaches.

If your reach has been declining lately, and you can’t figure out why, this may be the factor that brings it back up.


Featured Image: soma sekhar/Shutterstock

Google Shows How To Beat Reddit & Big Brands via @sejournal, @martinibuster

In an interview published on YouTube, Google’s Gary Illyes offered advice on what small sites should consider doing if they want to compete against Reddit, Amazon and other big brand websites.

About Big Brand Dominance

Google’s Gary Illyes answered questions about SEO back in May that went underreported so I’m correcting that oversight this month. Gary answered a question about how to compete against Reddit and big brands.

While it may appear that Gary is skeptical that Reddit is dominating, he isn’t disputing that perception; it’s simply not the focus of his answer. The context is larger than Reddit because his answer addresses the core issue of competing against big brands in the search engine results pages (SERPs).

This is the question that an audience member asked:

“Since Reddit and big publishers dominate nowadays in the SERPS for many keywords, what can the smaller brands do besides targeting the long tail keywords?”

The History Of Big Brands In The SERPs

Gary’s answer encompasses the entire history of big brands in the SERPs and the SEO response to that. About.com was a website about virtually any topic of interest and it used to rank for just about everything. It was like the Wikipedia of its day and many SEOs resented how About.com used to rank so well.

He first puts that context into his answer, that this complaint about Reddit is part of a long history of various brands ranking at the top of the SERPs then washing out of the SERPs as trends change.

Gary answered:

“So before I joined Google I was doing some SEO stuff for big publishers. …SEO type. Like I was also server manager like a cluster manager.

So, I would have had the same questions and in fact back in the day we saw these kind of questions all the time.

Now it’s Reddit. Back then it was Amazon. A few years before that, it was I think …About.com.

Pretty much every two years the name that you would put there …changes.”

Small Sites Can Outcompete Big Brands

Gary next shares that the history of SEO is also about small sites figuring out how to outcompete the bigger sites. This is also true. Some big sites started as small sites that figured out a way to outcompete larger big brand sites. For example, Reviewed.com, before it was purchased by USA Today, was literally started by a child whose passion for the topic contributed to it becoming massively successful.

Gary says that there are two things to do:

  1. Wait until someone else figures out how to outcompete and then copy them
  2. Or figure it out yourself and lead the way

But of course, if you wait for someone else to show the way it’s probably too late.

He continued:

“It seems that people always figure out ways to compete with whoever would be the second word in that question.

So it’s not like, oh my God, like everything sucks now and we can retire. It’s like, one thing you could do is to wait it out and let someone else come up with something for you that you can use to compete with Reddit and the big publishers that allegedly dominate nowadays the SERPs.

Or you sit down and you start thinking about how can you employ some marketing strategies that will boost you to around the same positions as the big publishers and Reddit and whatnot.

One of the most inspiring presentations I’ve seen was the empathetic marketing… do that. Find a way to compete with these positions in the SERPs because it is possible, you just have to find the the angle to compete with them.”

Gary is right. Big brands are slowed down by bureaucracy and scared to take chances. As I mentioned about Reviewed.com, a good strategy can outrun the big brands all day long. I know this from my own experience and from knowing others who have done the same thing, including the founder of Reviewed.com.

Long Tail Keywords & Other Strategies

Gary next talked about long tail keywords. A lot of newbie SEO gurus define long tail keyword phrases as phrases that contain a lot of words. That’s 100% wrong. Long tail keyword phrases are phrases that searchers rarely use. It’s the rarity of keyword use that makes them long tail, not how many words are in the phrase.

The context of this part of Gary’s answer is that the person asking the question essentially dismissed long tail search queries as the crumbs that the big brands leave behind for small sites.

Gary explains:

“And also the other thing is that, like saying that you are left with the long tail keywords. It’s like we see like 15 to even more percent of new long tail keywords every single day.

There’s lots of traffic in long tail keywords. You you can jump on that bandwagon and capture a ton of traffic.”

Something left unmentioned is that conquering long tail keyword phrases is one way to create awareness that a site is about a topic. People come for the long tail and return for the head phrases (the queries with more traffic).

The problem with some small sites is that they’re trying to hit the big traffic keywords without first showing relevance in the long tail. Starting small and building up toward big is one of the secrets of successful sites.

Small Sites Can Be Powerful

Gary is right: there is a lot of traffic in the long tail and in emerging trends. The thing that small sites need to remember is that big sites move slowly and have to get through layers of bureaucracy to make a strategic decision. The stakes for them are also higher, so they aren’t prone to taking big swings either. Speed and the ability to make bold moves are the small site’s superpower. Exercise it.

I know from my own experience and from working with clients that it’s absolutely possible to outrank big sites that have been around for years. The history of SEO is littered with small sites that outpaced the slower-moving bigger sites.

Watch Gary answer this question at the 20 minute mark:

Featured Image by Shutterstock/Volodymyr TVERDOKHLIB

Google Explains Reasons For Crawled Not Indexed via @sejournal, @martinibuster

Back in May Google’s Gary Illyes sat for an interview at the SERP Conf 2024 conference in Bulgaria and answered a question about the causes of crawled but not indexed, offering multiple reasons that are helpful for debugging and fixing this error.

Although the interview happened in May, the video of the interview went underreported and not many people have actually watched it. I only heard of it because the always awesome Olesia Korobka (@Giridja) recently drew attention to the interview in a Facebook post.

So even though the interview happened in May, the information is still timely and useful.

Reason For Crawled – Currently Not Indexed

Crawled – Currently Not Indexed refers to a status in the Google Search Console Page Indexing report, indicating that a page was crawled by Google but was not indexed.

During a live interview someone submitted a question, asking:

“Can crawled but not indexed be a result of a page being too similar to other stuff already indexed?

So is Google suggesting there is enough other stuff already and your stuff is not unique enough?”

Google’s search console documentation doesn’t provide an answer as to why Google may crawl a page and not index it, so it’s a legitimate question.

Gary Illyes answered that yes, one of the reasons could be that there is already other content that is similar. But he also goes on to say that there are other reasons, too.

He answered:

“Yeah, that that could be one thing that it can mean. Crawled but not indexed is, ideally we would break up that category into more granular chunks, but it’s super hard because of how the data internally exists.

It can be a bunch of things, dupe elimination is one of those things, where we crawl the page and then we decide to not index it because there’s already a version of that or an extremely similar version of that content available in our index and it has better signals.

But yeah, but it it can be multiple things.”
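Google’s dupe elimination is internal and far more sophisticated than anything public, but the basic idea Gary describes can be sketched: normalize page content, fingerprint it, and index only the version with the strongest signals. The normalization step and the numeric “signals” score below are invented for illustration.

```python
import hashlib

def fingerprint(html_text: str) -> str:
    # Crude normalization: collapse whitespace and lowercase before hashing,
    # so trivially different copies of the same content collide.
    normalized = " ".join(html_text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def choose_canonical(pages: list[dict]) -> dict[str, str]:
    # pages: [{"url": ..., "content": ..., "signals": float}, ...]
    # For each content fingerprint, keep only the URL with the best signals;
    # the rest would surface as "crawled, currently not indexed".
    best: dict[str, dict] = {}
    for page in pages:
        key = fingerprint(page["content"])
        if key not in best or page["signals"] > best[key]["signals"]:
            best[key] = page
    return {key: page["url"] for key, page in best.items()}

pages = [
    {"url": "https://example.com/a", "content": "Widget guide", "signals": 0.9},
    {"url": "https://example.net/b", "content": "widget   GUIDE", "signals": 0.4},
]
print(choose_canonical(pages))  # only example.com/a survives deduplication
```

In this toy version, the two pages hash to the same fingerprint, so only the one with better signals is indexed, which mirrors the scenario Gary describes of a duplicate losing out to a version with better signals.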

General Quality Of Site Can Impact Indexing

Gary then called attention to another reason why Google might crawl but choose not to index a site, saying that it could be a site quality issue.

Illyes then continued his answer:

“And the general quality of the of the site, that can matter a lot of how many of these crawled but not indexed you see in search console. If the number of these URLs is very high that could hint at general quality issues.

And I’ve seen that a lot since February, where suddenly we just decided that we are indexing a vast amount of URLs on a site just because …our perception of the site has changed.”

Other Reasons For Crawled Not Indexed

Gary next offered other reasons why URLs might be crawled but not indexed, saying that Google’s perception of the site may have changed, or that there could be a technical issue.

Gary explained:

“…And one possibility is that when you see that number rising, that the perception of… Google’s perception of the site has changed, that could be one thing.

But then there could also be that there was an error, for example on the site and then it served the same exact page to every single URL on the site. That could also be one of the reasons that you see that number climbing.

So yeah, there could be many things.”

Takeaways

Gary provided answers that should help debug why a web page might be crawled but not indexed by Google.

  • Content is similar to content already ranked in the search engine results pages (SERPs)
  • Exact same content exists on another site that has better signals
  • General site quality issues
  • Technical issues

Although Illyes didn’t elaborate on what he meant about another site with better signals, I’m fairly certain that he’s describing the scenario when a site syndicates its content to another site and Google chooses to rank the other site for the content and not the original publisher.

Watch Gary answer this question at the 9 minute mark of the recorded interview:

Featured Image by Shutterstock/Roman Samborskyi

Google’s AI Overviews Coincide With Drop In Mobile Searches via @sejournal, @MattGSouthern

A new study by search industry expert Rand Fishkin has revealed that Google’s rollout of AI overviews in May led to a noticeable decrease in search volume, particularly on mobile devices.

The study, which analyzed millions of Google searches in the United States and European Union, sheds light on the unexpected consequences of AI integration.

AI Overviews Rollout & Reversal

In May 2024, Google rolled out AI overviews in the United States, which generate summaries for many search queries.

However, the feature was met with mixed reactions and was quickly dialed back by the end of the month.

In a blog post published on May 30, Google admitted to inaccurate or unhelpful AI overviews, particularly for unusual queries.

Google says it implemented over a dozen technical improvements to its systems in response.

A subsequent study by SE Ranking found the frequency of these summaries decreased, with only 8% of searches now triggering an AI Overview. However, when shown, these overviews are now longer and more detailed, averaging 25% more content.

SE Ranking also noted that after expansion, AI overviews typically link to fewer sources, usually around four.

Decline In Mobile Searches

Fishkin’s analysis reveals that the introduction of AI Overviews coincided with a marked decline in mobile searches in May.

While desktop searches saw a slight increase, the drop in mobile searches was significant, considering that mobile accounts for nearly two-thirds of all Google queries.

This finding suggests that users may have been less inclined to search on their mobile devices when confronted with AI-generated summaries.

Fishkin commented:

“The most visible changes in May were shared by both the EU and US, notably… Mobile searches fell a considerable amount (if anything spooked Google into rolling back this feature, I’d put my money on this being it).”

He adds:

“If I were running Google, that dip in mobile searches (remember, mobile accounts for almost 2/3rds of all Google queries) would scare the stock-price-worshiping-crap outta me.”

Impact On Overall Search Behavior

Despite the dip in mobile searches, the study found that search behavior remained relatively stable during the AI overviews rollout.

The number of clicks per search on mobile devices increased slightly, while desktop clicks per search remained flat.

This indicates that while some users may have been deterred from initiating searches, those who did engage with the AI Overviews still clicked on results at a similar or slightly higher rate than the previous months.

Implications For Google & the Search Industry

The study highlights the challenges Google faces in integrating AI-generated content into its search results.

Additionally, the research found other concerning trends in Google search behavior:

  • Low Click-through Rates: Only 360 out of every 1,000 Google searches in the US result in clicks to non-Google websites. The EU fares slightly better with 374 clicks per 1,000 searches.
  • Zero-click Searches Dominate: Nearly 60% of searches in both regions end without any clicks, classified as “zero-click searches.”
  • Google’s Self-referral Traffic: About 30% of clicks from US searches go to Google-owned properties, with a somewhat lower percentage in the EU.

Why SEJ Cares

This study underscores the need for adaptable SEO strategies.

As an industry, we may need to shift focus towards optimizing for zero-click searches and diversifying traffic sources beyond Google.

The findings also raise questions about the future of AI in search.

While major tech companies continue to invest in AI technologies, this study suggests that implementation may not always yield the expected results.


Featured Image: Marco Lazzarini/Shutterstock

GraphRAG Is A Better RAG And Now It’s Free via @sejournal, @martinibuster

Microsoft is making publicly available a new technology called GraphRAG, which enables chatbots and answer engines to connect the dots across an entire dataset, outperforming standard Retrieval-Augmented Generation (RAG) by large margins.

What’s The Difference Between RAG And GraphRAG?

RAG (Retrieval-Augmented Generation) is a technology that enables an LLM to reach into a database like a search index and use that as a basis for answering a question. It can be used to bridge a large language model and a conventional search engine index.

The benefit of RAG is that it can use authoritative and trustworthy data in order to answer questions. RAG also enables generative AI chatbots to use up to date information to answer questions about topics that the LLM wasn’t trained on. This is an approach that’s used by AI search engines like Perplexity.

The upside of RAG is related to its use of embeddings. Embeddings are a way of representing the semantic relationships between words, sentences, and documents. This representation enables the retrieval part of RAG to match a search query to text in a database (like a search index).

But the downside of using embeddings is that it limits RAG to matching text at a granular level (as opposed to a global reach across the data).

Microsoft explains:

“Since naive RAG only considers the top-k most similar chunks of input text, it fails. Even worse, it will match the question against chunks of text that are superficially similar to that question, resulting in misleading answers.”
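The “top-k most similar chunks” behavior in the quote can be sketched with a toy retriever. Real RAG systems use dense learned embeddings; the bag-of-words cosine similarity below is a deliberately simple stand-in to show the mechanism and its limitation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts (real RAG uses dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve_top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Naive RAG: rank chunks by similarity to the query and keep the top k.
    # Anything relevant only in aggregate, across many chunks, is missed.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "novorossiya is a political movement in ukraine",
    "the factory reported production figures for june",
    "officials discussed the movement novorossiya at a briefing",
]
print(retrieve_top_k("what is novorossiya", chunks))
```

The retriever can only surface chunks that superficially resemble the query, which is exactly why a “what has X done?” question with no directly matching chunk fails under naive RAG.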

The innovation of GraphRAG is that it enables an LLM to answer questions based on the overall dataset.

What GraphRAG does is create a knowledge graph out of the indexed documents, also known as unstructured data. The obvious example of unstructured data is web pages. So when GraphRAG creates a knowledge graph, it’s creating a “structured” representation of the relationships between various “entities” (like people, places, concepts, and things) that is more easily understood by machines.

GraphRAG creates what Microsoft calls “communities” of general themes (high level) and more granular topics (low level). An LLM then creates a summarization of each of these communities, a “hierarchical summary of the data” that is then used to answer questions. This is the breakthrough because it enables a chatbot to answer questions based more on knowledge (the summarizations) than depending on embeddings.

This is how Microsoft explains it:

“Using an LLM to summarize each of these communities creates a hierarchical summary of the data, providing an overview of a dataset without needing to know which questions to ask in advance. Each community serves as the basis of a community summary that describes its entities and their relationships.

…Community summaries help answer such global questions because the graph index of entity and relationship descriptions has already considered all input texts in its construction. Therefore, we can use a map-reduce approach for question answering that retains all relevant content from the global data context…”
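The map-reduce step the excerpt describes can be sketched as follows. The `llm` callable here is a hypothetical stand-in for a real model call; what the sketch shows is the structure from Microsoft’s description, namely mapping a partial answer over each community summary, then reducing the partials into one global answer.

```python
from typing import Callable

def graphrag_answer(question: str,
                    community_summaries: list[str],
                    llm: Callable[[str], str]) -> str:
    # Map: ask the model for a partial answer against each community summary.
    partials = [
        llm(f"Using only this summary, answer '{question}':\n{summary}")
        for summary in community_summaries
    ]
    # Reduce: combine the partial answers into one global answer.
    combined = "\n".join(partials)
    return llm(f"Combine these partial answers to '{question}':\n{combined}")

# With a stub "model" that echoes the last prompt line, the call structure
# is visible without any real LLM:
echo = lambda prompt: prompt.splitlines()[-1]
print(graphrag_answer("What has Novorossiya done?",
                      ["Community A: political movements", "Community B: factories"],
                      echo))
```

Because every community summary was built from the whole corpus up front, the reduce step can answer global questions that no single retrieved chunk could.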

Examples Of RAG Versus GraphRAG

The original GraphRAG research paper illustrated the superiority of the GraphRAG approach in being able to answer questions for which there is no exact match data in the indexed documents. The example uses a limited dataset of Russian and Ukrainian news from the month of June 2023 (translated to English).

Simple Text Matching Question

The first question used as an example was “What is Novorossiya?” and both RAG and GraphRAG answered it, with GraphRAG offering a more detailed response.

The short answer by the way is that “Novorossiya” translates to New Russia and is a reference to Ukrainian lands that were conquered by Russia in the 18th century.

The second example question required that the machine make connections between concepts within the indexed documents, what Microsoft calls a “query-focused summarization (QFS) task,” which is different from a simple text-based retrieval task. It requires what Microsoft calls “connecting the dots.”

The question asked of the RAG and GraphRAG systems:

“What has Novorossiya done?”

This is the RAG answer:

“The text does not provide specific information on what Novorossiya has done.”

GraphRAG answered the question of “What has Novorossiya done?” with a two paragraph answer that details the results of the Novorossiya political movement.

Here’s a short excerpt from the two paragraph answer:

“Novorossiya, a political movement in Ukraine, has been involved in a series of destructive activities, particularly targeting various entities in Ukraine [Entities (6494, 912)]. The movement has been linked to plans to destroy properties of several Ukrainian entities, including Rosen, the Odessa Canning Factory, the Odessa Regional Radio Television Transmission Center, and the National Television Company of Ukraine [Relationships (15207, 15208, 15209, 15210)]…

…The Office of the General Prosecutor in Ukraine has reported on the creation of Novorossiya, indicating the government’s awareness and potential concern over the activities of this movement…”

The above is just some of the answer which was extracted from the limited one-month dataset, which illustrates how GraphRAG is able to connect the dots across all of the documents.

GraphRAG Now Publicly Available

Microsoft announced that GraphRAG is publicly available for use by anybody.

“Today, we’re pleased to announce that GraphRAG is now available on GitHub, offering more structured information retrieval and comprehensive response generation than naive RAG approaches. The GraphRAG code repository is complemented by a solution accelerator, providing an easy-to-use API experience hosted on Azure that can be deployed code-free in a few clicks.”

Microsoft released GraphRAG in order to make the solutions based on it more publicly accessible and to encourage feedback for improvements.

Read the announcement:

GraphRAG: New tool for complex data discovery now on GitHub

Featured Image by Shutterstock/Deemerwha studio

Robots.txt Turns 30: Google Highlights Hidden Strengths via @sejournal, @MattGSouthern

In a recent LinkedIn post, Gary Illyes, Analyst at Google, highlights lesser-known aspects of the robots.txt file as it marks its 30th year.

The robots.txt file, which controls how search engine crawlers access a site, has been a mainstay of SEO practices since its inception.

Here’s one of the reasons why it remains useful.

Robust Error Handling

Illyes emphasized the file’s resilience to errors.

“robots.txt is virtually error free,” Illyes stated.

In his post, he explained that robots.txt parsers are designed to ignore most mistakes without compromising functionality.

This means the file will continue operating even if you accidentally include unrelated content or misspell directives.

He elaborated that parsers typically recognize and process key directives such as user-agent, allow, and disallow while overlooking unrecognized content.
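A minimal sketch of the lenient parsing Illyes describes: recognized directives are collected, while comments, typos, and stray content are silently skipped. Real parsers (per RFC 9309) do much more, such as grouping rules under their user-agent lines; this is illustrative only.

```python
def parse_robots_txt(text: str) -> list[tuple[str, str]]:
    KNOWN = {"user-agent", "allow", "disallow", "sitemap"}
    rules = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip line comments
        if ":" not in line:
            continue  # blank lines and stray prose are ignored
        field, _, value = line.partition(":")
        field = field.strip().lower()
        if field in KNOWN:
            rules.append((field, value.strip()))
        # Unknown or misspelled directives are silently dropped
        # rather than failing the whole file.
    return rules

sample = """\
# Blocks the staging area -- dev note
User-agent: *
Disallow: /staging/  # keep this until launch
Disalow: /typo/      # misspelled: ignored, not fatal
random prose that someone pasted in by accident
"""
print(parse_robots_txt(sample))  # [('user-agent', '*'), ('disallow', '/staging/')]
```

Note how the misspelled `Disalow` line and the pasted prose don’t break anything; the valid directives still apply, which is the error tolerance Illyes is describing.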

Unexpected Feature: Line Comments

Illyes pointed out the presence of line comments in robots.txt files, a feature he found puzzling given the file’s error-tolerant nature.

He invited the SEO community to speculate on the reasons behind this inclusion.

Responses To Illyes’ Post

The SEO community’s response to Illyes’ post provides additional context on the practical implications of robots.txt’s error tolerance and the use of line comments.

Andrew C., Founder of Optimisey, highlighted the utility of line comments for internal communication, stating:

“When working on websites you can see a line comment as a note from the Dev about what they want that ‘disallow’ line in the file to do.”

Screenshot from LinkedIn, July 2024.

Nima Jafari, an SEO Consultant, emphasized the value of comments in large-scale implementations.

He noted that for extensive robots.txt files, comments can “help developers and the SEO team by providing clues about other lines.”

Screenshot from LinkedIn, July 2024.

Providing historical context, Lyndon NA, a digital marketer, compared robots.txt to HTML specifications and browsers.

He suggested that the file’s error tolerance was likely an intentional design choice, stating:

“Robots.txt parsers were made lax so that content might still be accessed (imagine if G had to ditch a site, because someone borked 1 bit of robots.txt?).”

Screenshot from LinkedIn, July 2024.

Why SEJ Cares

Understanding the nuances of the robots.txt file can help you optimize sites better.

While the file’s error-tolerant nature is generally beneficial, it could potentially lead to overlooked issues if not managed carefully.

What To Do With This Information

  1. Review your robots.txt file: Ensure it contains only necessary directives and is free from potential errors or misconfigurations.
  2. Be cautious with spelling: While parsers may ignore misspellings, this could result in unintended crawling behaviors.
  3. Leverage line comments: Comments can be used to document your robots.txt file for future reference.

Featured Image: sutadism/Shutterstock

WordPress Takes Bite Out Of Plugin Attacks via @sejournal, @martinibuster

WordPress announced over the weekend that they were pausing plugin updates and initiating a force reset on plugin author passwords in order to prevent additional website compromises due to the ongoing Supply Chain Attack on WordPress plugins.

Supply Chain Attack

Hackers have been attacking plugins directly at the source using password credentials exposed in previous data breaches (unrelated to WordPress itself). The hackers are looking for compromised credentials belonging to plugin authors who reuse the same passwords across multiple websites.

WordPress Takes Action To Block Attacks

Some plugins have been compromised, and the WordPress community has rallied to clamp down on further plugin compromises by instituting a forced password reset and encouraging plugin authors to use two-factor authentication.

WordPress also temporarily blocked all new plugin updates at the source unless they received team approval in order to make sure that a plugin is not being updated with malicious backdoors. By Monday WordPress updated their post to confirm that plugin releases are no longer paused.

The WordPress announcement on the forced password reset:

“We have begun to force reset passwords for all plugin authors, as well as other users whose information was found by security researchers in data breaches. This will affect some users’ ability to interact with WordPress.org or perform commits until their password is reset.

You will receive an email from the Plugin Directory when it is time for you to reset your password. There is no need to take action before you’re notified.”

A discussion in the comments section between a WordPress community member and the author of the announcement revealed that WordPress did not directly contact plugin authors identified as using “recycled” passwords because there was evidence that some users on the data breach list had credentials that were in fact safe (false positives). WordPress also discovered that some accounts assumed to be safe were in fact compromised (false negatives). That is what led to the current action of forcing password resets.

Francisco Torres of WordPress answered:

“You’re right that specifically reaching out to those individuals mentioning that their data has been found in data breaches will make them even more sensitive, but unfortunately as I’ve already mentioned that might be inaccurate for some users and there will be others that are missing. What we’ve done since the beginning of this issue is to individually notify those users that we’re certain have been compromised.”

Read the official WordPress announcement:

Password Reset Required for Plugin Authors

Featured Image by Shutterstock/Aleutie