Gemini 2.5 Flash Update: Clearer Answers, Better Image Understanding via @sejournal, @MattGSouthern

Google updates Gemini 2.5 Flash with clearer step-by-step help, more structured responses, and stronger image understanding, now live in the Gemini app.

  • Gemini 2.5 Flash adds step-by-step guidance aimed at homework and complex topics.
  • Responses are formatted with headers, lists, and tables for faster scanning.
  • Image understanding can explain detailed diagrams and turn notes into flashcards.
Marketing Is 4th Most Exposed To GenAI, Indeed Study Finds via @sejournal, @MattGSouthern

Marketing professionals face one of the highest levels of potential AI disruption across all occupations, with 69% of marketing job skills positioned for transformation by generative AI, according to new data from Indeed.

The analysis evaluated nearly 2,900 work skills against U.S. job postings and found that marketing is the fourth most exposed profession, trailing only software development, data and analytics, and accounting.

The Shift From Doing To Directing

Indeed’s GenAI Skill Transformation Index groups skills into four levels: minimal, assisted, hybrid, and full transformation.

For marketing professionals, the majority of affected skills fall into hybrid transformation, where AI handles routine execution while humans provide oversight, validation, and strategic direction.

Indeed writes:

“Human oversight will remain critical when applying these skills, but GenAI can already perform a significant portion of routine work.”

That covers tasks AI can complete reliably in standard cases, with people stepping in to manage exceptions, interpret ambiguous situations, and ensure quality control.

What Marketing Skills Are Most at Risk?

Administrative, documentation, and text-processing tasks show high transformation potential, where AI already performs well at information retrieval, drafting, and analysis.

Communication-related work sits in the hybrid zone for many occupations. In one example from the report, communication skills appear in 23% of nursing postings and are classified as “hybrid.” This illustrates how routine language tasks are increasingly AI-assistable while human judgment remains essential.

How the Study Scored Skills

The study used multiple large language models and based its ratings on consistent results from OpenAI’s GPT-4.1 and Anthropic’s Claude Sonnet 4, noting that model performance varies.

The team evaluated each skill on two dimensions: problem-solving requirements and physical necessity. Marketing scores high on problem-solving and low on physical necessity, making many skills strong candidates for AI transformation.

A Change From Previous Research

Earlier Hiring Lab work found zero skills “very likely” to be fully replaced by GenAI.

In this update, the report identifies 19 skills (0.7% of the ~2,900 analyzed) that cross that “very likely” threshold. The authors frame this as incremental progress toward end-to-end automation for narrow, well-structured tasks, not broad replacement.

The Broader Employment Picture

Across the labor market, 26% of jobs on Indeed could be highly transformed by GenAI, 54% are moderately transformed, and 20% show low exposure.

These are measures of potential transformation. Actual outcomes depend on adoption, workflow design, and reskilling.

The report notes:

“Any realized impacts will depend entirely on whether and how businesses adopt and integrate GenAI tools…”

Marketing vs. Other Professions

Software development tops the list with 81% of skills facing transformation, followed by data and analytics (79%) and accounting (74%).

On the other end, nursing shows 33% skill transformation, with core patient-care responsibilities remaining human-centered.

Marketing’s position reflects its reliance on cognitive, screen-based work that AI can increasingly assist.

Not All AI Models Are Equal

The report emphasizes that model choice matters. Different models varied in output quality and stability, so teams should test tools against their own use cases rather than assume uniform performance.

Looking Ahead

The report’s authors, Annina Hering and Arcenis Rojas, created the GenAI Skill Transformation Index to reflect the level of transformation rather than simple replacement.

They advise developing skills that complement AI, such as strategy, creative problem-solving, and the ability to validate and interpret AI-generated outputs.

The timeline for these changes will vary by company size, industry, and digital maturity.

But the overall trend is clear: roles are evolving from hands-on task execution to overseeing AI and developing strategies. Those who stay ahead by adopting hybrid workflows will likely be in the best position.


Featured Image: Roman Samborskyi/Shutterstock

SISTRIX Reports Sharp Drop In ChatGPT Web Searches via @sejournal, @MattGSouthern

SISTRIX reports that ChatGPT is triggering live web searches far less often for people who use the app without logging in.

In daily spot-checks over the last two weeks, the share of answers that called the web fell from above 15% to below 2.5%. SISTRIX does not assign a cause and notes the observation applies to anonymous sessions.

What Changed

SISTRIX says it “analyses numerous ChatGPT responses to a wide variety of prompts” each day and recently “noticed that ChatGPT uses web searches significantly less frequently.”

It adds that, “at least when using the app without an account,” the measured rate of responses completed via a web search declined sharply in the period reviewed.

SISTRIX doesn’t publish a sample size, list of prompts, or detection method in the post.

SISTRIX also writes that ChatGPT has “traditionally” relied on Bing for web lookups and references rumors of Google data being used, but it doesn’t claim a direct link between any specific backend change and the measured decline.

Related Context

Microsoft Bing Search APIs Retirement

Microsoft announced that the Bing Search APIs were retired on August 11.

Some third-party tools have migrated to alternatives. This doesn’t prove a change inside ChatGPT, but it’s a relevant ecosystem shift.

Google’s SERP Access Changes

SISTRIX separately documented that Google no longer supports the “num=100” parameter and now returns 10 results per request, increasing the effort required to collect SERP data at scale.

Again, this is context rather than causation.

Recent ChatGPT Product Notes

OpenAI’s release notes list “improvements to search in ChatGPT” on September 16, without detailing backend sourcing.

That update may be unrelated to the SISTRIX measurement, but is worth noting in the same timeframe.

Why This Matters

If ChatGPT is consulting the web less frequently in anonymous sessions, you might notice fewer answers citing current sources and a greater reliance on the model’s internal knowledge for those users.

This could influence how often recent news is referenced in responses for users who aren’t logged in, although the behavior may differ for Plus or Enterprise accounts.

Looking Ahead

SISTRIX’s observation is limited to a specific time frame and anonymous usage. Currently, there’s no confirmed information from OpenAI about how frequently ChatGPT performs live lookups overall, and SISTRIX hasn’t provided a reason for the recent drop.

The most cautious conclusion is that one independent measurement showed a sharp short-term decline, which deserves further testing.


Featured Image: matakeris.creative/Shutterstock

Google App Adds Search Live For Real-Time Visual Search via @sejournal, @MattGSouthern

Google has rolled out Search Live in English in the United States, bringing real-time, camera-aware conversations to the Google app on Android and iOS.

You can tap the new Live icon under the search bar, or open Google Lens and choose Live to start an interactive voice conversation that can also see what your camera sees.

Rajan Patel, VP of Engineering for Search at Google, highlights the launch in a post on X:

How It Works

Search Live has two entry points. In the Google app, you can start a voice conversation and optionally enable video input.

Look for the icon shown below:

Image Credit: Google

In Lens, camera sharing is on by default so you can immediately ask questions about what is in front of you and get follow-ups with links to dig deeper on the web.

Google highlights practical scenarios such as hands-free trip planning, quick how-to guidance for hobbies, step-by-step troubleshooting for electronics without typing model numbers, support for school projects, and picking a board game by scanning several boxes at once.

See it in action in this launch video:

Why This Matters

Search Live moves queries from typed text to camera and voice, with answers arriving while people are actively engaged in tasks.

You can capture these searchers by prioritizing content that answers specific, in-the-moment questions. Ensure that your visual information is accurate and easily recognizable.

Local businesses should consider keeping storefront photos, product imagery, and key details current since people can now point, ask, and get links in real time.

Looking Ahead

Search Live is only launching in English in the U.S. for now, but Google says more languages and regions are coming.

This launch continues Google’s push to move everyday search beyond the keyboard. Businesses that prepare their content and visuals for that shift will be better positioned when the rollout expands.

Newfold Digital Sells MarkMonitor As Part Of Strategic Refocus via @sejournal, @martinibuster

London-headquartered corporate domain management company Com Laude announced the acquisition of its competitor, MarkMonitor, previously one of the holdings of Newfold Digital.

Newfold Digital Simplifies Portfolio

Newfold Digital owns many top Internet brands, including Yoast, Bluehost, Register.com, and Domain.com, all of which serve small and medium-sized businesses. This divestiture may signal that Newfold Digital is shifting away from the enterprise market and focusing its portfolio of web services on the SMB end of the market.

The official Newfold Digital press release states:

“The sale is part of Newfold Digital’s strategy to simplify its portfolio and double down on the areas where it can deliver the greatest value to customers – its core brands, Bluehost and Network Solutions.”

Stu Homan, Head of MarkMonitor, commented:

“With this acquisition, Markmonitor has found owners who value our dedicated corporate services as much as our customers do. Com Laude is deeply committed to preserving and building upon our ability to continue to deliver industry-leading customer service while growing to new levels with dedication and investment.

Our entire team is excited to bring Com Laude’s advanced tools and services to our customers, and to be part of the most exciting development in corporate domain services since Markmonitor invented the white glove service model twenty-six years ago.”

Prior to the acquisition, Com Laude was a competitor of MarkMonitor, offering similar services with key differentiators, such as an AI-powered domain management dashboard.

Com Laude is headquartered in London, United Kingdom, while MarkMonitor is based in Boise, Idaho, which, although not commonly regarded as a center of Internet commerce or technology, is a growing regional technology hub.

Benjamin Crawford, CEO of Com Laude, remarked:

“Markmonitor is the best-known name in domain services for corporate customers, having virtually invented the category twenty-six years ago, and since then grown a long list of blue-chip customers with its “white glove” customer service. Com Laude offers market leading advanced tools and bespoke services in domains and online brand protection, developed for the world’s largest companies and most valuable brands. Together we will be uniquely positioned to protect and grow the digital presence of any company that needs assistance with its domain names, internet infrastructure and security, online brand protection, internet policy and compliance, and online strategy.”

Read Com Laude’s announcement:

Com Laude to Acquire Markmonitor in a Landmark Transaction

Featured Image by Shutterstock/thodonal88

Are AI Search Summaries Making Evergreen Articles Obsolete? via @sejournal, @martinibuster

Ahrefs’ Tim Soulo recently posted that AI is making evergreen content obsolete and no longer worth the investment because AI summaries leave fewer clicks for publishers. He posits that it may be more profitable to focus on trending topics, calling the approach Fast SEO. Is publishing evergreen content no longer a viable strategy?

The Reason For Evergreen Content

Evergreen content can be a basic topic that generally doesn’t change much from year to year. For example, the answer to how to change a tire will generally always be the same.

The promise of evergreen content was that it represents a steady source of traffic. Once a web page is ranking for evergreen topics, publishers basically just have to make sure that it’s updated if the topic has changed in some way.

Does AI Break The Evergreen Content Promise?

Tim Soulo is suggesting that evergreen content, which can be easy to answer with a summary, is less likely to send a click because AI summarizes the answer and satisfies the user, who may not need to visit a website.

Soulo tweeted:

“The era of “evergreen SEO content” is over. We’re entering the era of “fast SEO.”

There’s little point in writing yet another “Ultimate Guide To ___.” Most evergreen topics have already been covered to death and turned into common knowledge. Google is therefore happy to give an AI answer, and searchers are fine with that.

Instead, the real opportunity lies in spotting and covering new trends — or even setting them yourself.”

Is Fast SEO The Future Of Publishing?

Fast SEO is another way of describing trending topics. Trending topics have always been around; it’s why Google invented the freshness algorithm, to satisfy users with up-to-date content when a “query deserves freshness.”

Soulo’s idea is that trending topics are not the kind of content that AI summarizes. Perplexity is the exception; it has an entire content discovery section called Perplexity Discover that’s dedicated to showing trending news articles.

Fast SEO is about spotting and seizing short-lived content opportunities. These can be new developments, shifts in the industry or perceptions, or cultural moments.

His tweet captures the current feeling within the SEO and publishing communities that AI is the reason for diminishing traffic from Google.

The Evergreen Content Situation Is Worse Than Imagined

A technical issue that Soulo didn’t mention but is relevant here is that it’s challenging to create an “Ultimate Guide To X, Y, Z” or the “Definitive Guide To Bla, Bla, Bla” and expect it to be fresh and different from what is already published.

The barrier to entry for evergreen content is higher now than it’s ever been for several reasons:

  • There are more people publishing content.
  • People are consuming multiple forms of content (text, audio, and video).
  • Search algorithms are focused on quality, which shuts out those who focus harder on SEO than they do on people.
  • User behavior signals are more reliable than traditional link signals, and SEOs still haven’t caught on to this, making it harder to rank.
  • Query Fan-Out is causing a huge disruption in SEO.

Why Query Fan-Out Is A Disruption

Evergreen content is an uphill struggle, compounded by the seeming inevitability that AI will summarize it and, because of Query Fan-Out, send the click to another website, one cited because it answers a follow-up question to the initial search query.

Query Fan-Out displays answers to the initial query alongside answers to related follow-up questions. If the user is satisfied with the summary of the initial query, they may become interested in one of the follow-ups, and a page cited for that follow-up, not one targeting the initial query, will get the click.

This completely changes what it means to target a search query. How does an SEO target a follow-up question? Maybe, instead of targeting the main high-traffic query, it may make sense to target the follow-up queries with evergreen content.

Evergreen Content Publishing Still Has Life

There is another side to this story, and it’s about user demand. Foundational questions stick around for a long time. People will always search “how to tie a bowtie” or “how to set up WordPress.” Many users prefer the stability of an established guide that has been reviewed and updated by a trusted brand. It’s not about being a brand; it’s about being the kind of site that is trusted, well-liked, and recommended.

A strong resource can become the canonical source for a topic, ranking for years and generating the kind of user behavior signals that reinforce its authority and signal the quality of being trusted.

Trend-driven content, by contrast, often delivers only a brief spike before fading. A newsroom model is difficult to maintain because it requires constant work to be first and be the best.

The Third Way: Do It All

The choice between producing evergreen content and trending topics doesn’t have to be binary; there’s a third option where you can do it all. Evergreen and trending topics can complement each other because each side provides opportunities for driving traffic to the other. Fresh, trend-driven content can link back to the evergreen, and this can be reversed to send readers to fresh content from the evergreen.

Trend-driven content sometimes becomes evergreen itself. But in general, creating evergreen content requires deep planning, quality execution, and marketing. Somebody’s going to get the click from evergreen content; it might as well be you.

Featured Image by Shutterstock/Stokkete

Pew: Most Americans Want AI Labels, Few Trust Detection via @sejournal, @MattGSouthern

A new Pew Research Center survey reveals a gap between people’s desire to know when AI is used in content and their confidence in being able to identify it.

Seventy-six percent say it’s extremely or very important to know whether pictures, videos, or text were made by AI or by people. Only 12% feel confident they could tell the difference themselves.

Pew Research Center wrote:

“Americans feel strongly that it’s important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don’t trust their own ability to spot AI-generated content.”

This confidence gap reflects a rising unease with AI.

Half of Americans believe that the increased presence of AI in daily life raises more concerns than excitement, while just 10% are more excited than worried.

What Pew Research Found

People Want More Control

About 60% of Americans want more control over AI in their lives, an increase from 55% last year.

They’re open to AI helping with daily tasks, but still want clarity on where AI ends and human involvement begins.

When People Accept vs. Reject AI

Most support the use of AI in data-intensive tasks, such as weather prediction, financial crime detection, fraud investigation, and drug development.

About two-thirds oppose AI in personal areas such as religious guidance and matchmaking.

Younger Audiences Are More Aware

Awareness of AI is highest among adults under 30, with 62% claiming they’ve heard a lot about it, compared to only 32% of those 65 and older.

But this awareness doesn’t lead to optimism. Younger adults are more likely than seniors to believe that AI will negatively impact creative thinking and the development of meaningful relationships.

Creativity Concerns

More Americans believe AI will negatively impact essential human skills.

Fifty-three percent think it will reduce creative thinking, and 50% feel it will hinder the ability to connect with others, with only a few expecting improvements.

This suggests labeling alone isn’t sufficient. Human input must also be evident in the work.

Why This Matters

People are generally not against AI, but they do want to know when AI is involved. Being open about AI use can help build trust.

Brands that go the transparent route might find themselves at an advantage in creating connections with their audience.

For more insights, see the full report.


Featured Image: Roman Samborskyi/Shutterstock

Review Signals Gain Influence In Top Google Local Rankings via @sejournal, @MattGSouthern

A new analysis from Search Atlas quantifies the interaction between proximity and reviews in local rankings.

Proximity drives visibility overall, while review signals become stronger differentiators in the highest positions.

This study examines 3,269 businesses across the food, health, law, and beauty sectors.

It shows that for positions 1–21, proximity influences 55% of decisions, while review count accounts for 19%. In the top ten, proximity’s influence decreases to 36%, but review count increases to 26%, with review keyword relevance reaching 22%.

Search Atlas writes:

“Proximity is the top driver of local visibility.”

The study also notes:

“Proximity does not always dominate in elite positions.”

What It Means

You’ll have a better chance of achieving top results by focusing on earning more reviews and naturally incorporating service-specific terms into reviews, rather than relying on your pin’s location on the map.

The report suggests that Google understands review text semantically. Using service-specific language in reviews can help your rankings for high-value queries.

How To Apply This

Think of proximity as your default setting. It’s fixed, so focus your attention on the inputs you can control.

When crafting your review requests, aim for natural, service-specific language. For instance, “best dentist for whitening” tends to work better than “great service.”

Also, ensure that your GBP name and profile details are aligned. The research shows that matching your business name to the search intent, such as “Downtown Dental Clinic” for someone searching “dentist near me,” can make a positive difference.

Sector Behavior

While the overall pattern remains consistent, shoppers can exhibit different behaviors across categories.

Per the report:

  • For Law, proximity tends to be the most important factor, with reviews playing a secondary role.
  • In Beauty, reputation signals are more influential. While proximity is still key, review volume and keywords are also important.
  • When it comes to Food, review content and profile relevance become especially valuable, particularly in crowded markets.
  • Health balances proximity with strong reviews and service alignment in reviews.

Looking Ahead

This study quantifies something practitioners have long suspected: proximity earns you a look, but review content helps you secure the top spot in the close contest.

If you can’t change your location, shape the language around it.

For more data on GBP ranking factors, see the full report.

Methods & Limits

The authors applied XGBoost to grid visibility, GBP metadata, website content, and reviews, achieving a global model that explains approximately 92–93% of the variance.

They emphasize that feature importance indicates correlation, not causation. Additionally, they warn that proximity might be overstated due to fixed grid collection and note that their results represent a snapshot in time.
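Feature-importance figures like those the study reports are commonly derived with techniques such as permutation importance: shuffle one feature's values and measure how much the model's error grows. Here is a minimal pure-Python sketch on toy data; the "model" and the data are hand-coded for illustration and have nothing to do with the study's actual XGBoost pipeline.

```python
import random

random.seed(0)

# Toy data: "visibility" depends strongly on proximity, weakly on review count.
n = 500
rows = [(random.random(), random.random()) for _ in range(n)]
y = [3.0 * p + 1.0 * r for p, r in rows]

# Stand-in "model" that already knows the true relationship.
def model(row):
    p, r = row
    return 3.0 * p + 1.0 * r

def mse(data, targets):
    return sum((model(row) - t) ** 2 for row, t in zip(data, targets)) / len(targets)

baseline = mse(rows, y)  # 0.0 here, since the model is perfect on this data

def permutation_importance(col):
    """Error increase after shuffling one column across all rows."""
    shuffled_vals = [row[col] for row in rows]
    random.shuffle(shuffled_vals)
    perturbed = [
        (v, row[1]) if col == 0 else (row[0], v)
        for row, v in zip(rows, shuffled_vals)
    ]
    return mse(perturbed, y) - baseline

imp_proximity = permutation_importance(0)
imp_reviews = permutation_importance(1)

# Shuffling proximity hurts predictions more than shuffling reviews,
# so it earns the higher importance score. Note this measures association
# within the model, not causation.
assert imp_proximity > imp_reviews
```

This also illustrates the authors' caveat: a high importance score only means the model leans on that feature, not that changing the feature would change real-world rankings.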

Use these insights as guidance, not a strict rulebook.


Featured Image: Roman Samborskyi/Shutterstock

LLMs.txt For AI SEO: Is It A Boost Or A Waste Of Time? via @sejournal, @martinibuster

Many popular WordPress SEO plugins and content management platforms offer the ability to generate LLMs.txt for the purpose of improving visibility in AI search platforms. With so many popular SEO plugins and CMS platforms offering LLMs.txt functionality, one might come away with the impression that it is the new frontier of SEO. The fact, however, is that LLMs.txt is just a proposal, and no AI platform has signed on to use it.

So why are so many companies rushing to support a standard that no one actually uses? Some SEO tools offer it because their users are asking for it, while many users feel they need to adopt LLMs.txt simply because their favorite tools provide it. A recent Reddit discussion on this very topic is a good place to look for answers.
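For readers curious what the proposal actually looks like: per the llms.txt draft specification, it is a Markdown file served at the site root with an H1 title, a short blockquote summary, and sections of links pointing to Markdown versions of key pages. A hypothetical example (the site name and URLs are illustrative, not from any real deployment):

```markdown
# Example Store

> Example Store sells handmade ceramics and publishes care guides.

## Guides

- [Caring for ceramics](https://example.com/guides/care.md): cleaning and storage tips
- [Shipping policy](https://example.com/shipping.md): rates and delivery times

## Optional

- [Company history](https://example.com/about.md)
```

As the article notes, no AI platform has committed to fetching this file, so publishing one is speculative at best.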

Third Party SEO Tool And LLMs.txt

Google’s John Mueller addressed the LLMs.txt confusion in a recent Reddit discussion. The person asking the question was concerned because an SEO tool flagged the file as a 404 (missing), and the tool seemed to imply that it was needed.

Their question was:

“Why is SEMRush showing that the /llm.txt is a 404? Yes, I. know I don’t have one for the website, but, I’ve heard it’s useless and not needed. Is that true?

If i need it, how do i build it?

Thanks”

The Redditor seems to be confused by the Semrush audit, which appears to imply that an LLMs.txt is needed. I don’t know what they saw in the audit, but this is what the official Semrush audit documentation says about the usefulness of LLMs.txt:

“If your site lacks a clear llms.txt file it risks being misrepresented by AI systems.

…This new check makes it easy to quickly identify any issues that may limit your exposure in AI search results.”

Their documentation says that it’s a “risk” to not have an LLMs.txt, but the fact is that there is no risk at all because no AI platform uses it. That may be why the Redditor asked, “If i need it, how do I build it?”

LLMs.txt Is Unnecessary

Google’s John Mueller confirmed that LLMs.txt is unnecessary.

He explained:

“Good catch! Especially in SEO, it’s important to catch misleading & bad information early, before you invest time into doing something unnecessary. Question everything.”

Why AI Platforms May Choose To Not Use LLMs.txt

Aside from John Mueller’s many informal statements about the uselessness of LLMs.txt, I don’t think there are any formal statements from AI platforms as to why they don’t use LLMs.txt and their associated .md markdown texts. There are, however, many good reasons why an AI platform would choose not to use it.

The biggest reason not to use LLMs.txt is that it is inherently untrustworthy. On-page content is relatively trustworthy because it is the same for users as it is for an AI bot.

A sneaky SEO could add things to structured data and markdown texts that don’t exist in the regular HTML content in order to get their content to rank better. It is naive to think that an SEO or publisher would not use .md files to trick AI platforms.

For example, unscrupulous SEOs add hidden text and AI prompts within HTML content. A research paper from 2024 (Adversarial Search Engine Optimization for Large Language Models) showed that manipulation of LLMs was possible using a technique they called Preference Manipulation Attacks.

Here’s a quote from that research paper (PDF):

“…an attacker can trick an LLM into promoting their content over competitors. Preference Manipulation Attacks are a new threat that combines elements from prompt injection attacks… Search Engine Optimization (SEO)… and LLM ‘persuasion.’

We demonstrate the effectiveness of Preference Manipulation Attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). Our attacks are black-box, stealthy, and reliably manipulate the LLM to promote the attacker’s content. For example, when asking Bing to search for a camera to recommend, a Preference Manipulation Attack makes the targeted camera 2.5× more likely to be recommended by the LLM.”

The point is that if there’s a loophole to be exploited, someone will think it’s a good idea to take advantage of it, and that’s the problem with creating a separate file for AI chatbots: people will see it as the ideal place to spam LLMs.

It’s safer to rely on on-page content than on a markdown file that can be altered exclusively for AI. This is why I say that LLMs.txt is inherently untrustworthy.

What SEO Plugins Say About LLMs.txt

The makers of Squirrly WordPress SEO plugin acknowledge that they provided the feature only because their users asked for it, and they assert that it has no influence on AI search visibility.

They write:

“I know that many of you love using Squirrly SEO and want to keep using it. Which is why you’ve asked us to bring this feature.

So we brought it.

But, because I care about you:

– know that LLMs txt will not help you magically appear in AI search. There is currently zero proof that it helps with being promoted by AI search engines.”

They strike a good balance between giving users what they want while also letting them know it’s not actually needed.

While Squirrly is at one end saying (correctly) that LLMs.txt doesn’t boost AI search visibility, Rank Math is on the opposite end saying that AI chatbots actually use the curated version of the content presented in the markdown files.

Rank Math is generally correct in its description of what an LLMs.txt is and how it works, but it overstates the usefulness by suggesting that AI chatbots use the curated LLMs.txt and the associated markdown files.

They write:

“So when an AI chatbot tries to summarize or answer questions based on your site, it doesn’t guess—it refers to the curated version you’ve given it. This increases your chances of being cited properly, represented accurately, and discovered by users in AI-powered results.”

We know for a fact that AI chatbots do not use a curated version of the content. They don’t even use structured data; they just use the regular HTML content.

Yoast SEO is a little more conservative, occupying a position in the center between Squirrly and Rank Math, explaining the purpose of LLMs.txt but not overstating the benefits by hedging with words like “can” and “could.” That is a fair way to describe LLMs.txt, although I like Squirrly’s approach that says, you asked for it, here it is, but don’t expect a boost in search performance.

The LLMs.txt Misinformation Loop

The conversation around LLMs.txt has become a self-reinforcing loop: business owners and SEOs feel anxiety over AI visibility and feel they must do something, viewing LLMs.txt as the something they can do.

SEO tool providers are compelled to provide the LLMs.txt option, reinforcing the belief that it’s a necessity, unintentionally perpetuating the cycle of misunderstanding.

Concern over AI visibility has led to the adoption of LLMs.txt, which at this stage is only a proposed standard that no AI platform currently uses.

Featured Image by Shutterstock/James Delia

Google Answers SEO Question About Keyword Cannibalization via @sejournal, @martinibuster

Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.

What Is Keyword/Content Cannibalization?

There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.

The problem with saying that something is keyword cannibalization is that it does not identify something specific about the content that is wrong. That is why there are people asking John Mueller about it, simply because it is an ill-defined and unhelpful SEO concept.

SEO Confusion

The SEO was confused about the recent &num=100 change, where Google is blocking rank trackers from scraping the search results (SERPs) at the rate of 100 results at a time. Some rank trackers are floating the idea of only showing ranking data for the top 20 search results. This affects rank trackers’ ability to scrape the SERPs and has no effect on Google Search Console other than to show more accurate results.

The SEO was under the wrong impression that Search Console was no longer showing impressions from results beyond the top twenty. This is false.

Mueller didn’t address that question; it is just a misunderstanding on the part of the SEO.

Here is the question that was asked:

“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?

If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”

Different Pages Ranking For Same Query

Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.

Mueller explained:

“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)

All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1”. You need to look at the details, you need to know your site, and your potential users.

Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”

Actual SEO Problems

Multiple pages ranking for the same keyword phrases is not a problem; it’s a good thing and not a reason for concern. Multiple pages not ranking for keywords is a problem.

Here are some real reasons why pages on the same topic may fail to rank:

  • The pages are too long and consequently are unfocused.
  • The pages contain off-topic passages.
  • The pages are insufficiently linked internally.
  • The pages are thin.
  • The pages are virtually duplicates of the other pages in the group.

The above are just a few real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring they are cannibalizing each other is not real. It’s not something to worry about because keyword cannibalization is just a catchall phrase that masks all the actual reasons I just listed.

Takeaway

The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.

Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.

What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.

Featured Image by Shutterstock/Roman Samborskyi