AI-powered search is a new way for shoppers to discover products. ChatGPT, Perplexity, Claude, Gemini, and even AI Overviews answer shopping-related questions directly — no additional clicks required.
For brands, that’s a double-edged sword. The good news is the potential for additional exposure. The challenge is the loss of organic search traffic (see the Semrush study) and the difficulty of surfacing the company and its products in those AI-generated answers.
A growing set of generative engine optimization (GEO) tools promises to fix this problem by measuring and improving how products and brands appear in the responses.
Few GEO platforms offer SKU-level capability — tracking and optimizing individual products in AI answers. Most focus on page-level optimization and citations, making it difficult to bulk update products with optimized content.
Nonetheless, I recently evaluated over a dozen of these GEO platforms to see which are viable for small and mid-sized businesses. Below are three recommendations with use cases, overviews, and limitations.
GEO Tools for SMBs
Writesonic
Writesonic focuses on product page optimization. It lets merchants rewrite and optimize individual product pages or articles for genAI, then publish them directly to WordPress or export them to Shopify or BigCommerce.
Here’s the workflow:
Identify target pages. Manually select SKUs with poor organic search traffic, using Search Console, Shopify analytics, or other SEO tools.
Analyze in Writesonic. Paste product page content into Writesonic or connect via API.
Optimize with the Content Score. Edit the pages in real time, guided by Writesonic’s Content Score metric.
Update product pages. Export and publish the optimized content to the ecommerce platform, keeping metadata and formatting intact.
Overview
Pricing: Tiered plans start at $49 per month.
Ease of use: Self-service, minimal learning curve.
Integrations: Direct with WordPress; export for Shopify and BigCommerce.
Content optimization: Strong, with rewrites of product pages and articles.
Limitations
Does not surface underperforming SKUs on its own.
No historical performance tracking.
No SKU-level competitive benchmarking.
Peec AI
Peec AI provides competitive benchmarking, showing merchants where their products and brands appear in AI-generated answers and how they compare to competitors. Peec AI doesn’t (yet) create or publish content, but its SKU-level gap analysis can guide optimization.
To use:
Identify visibility gaps. Track which prompts cite your brand and products, and those of competitors.
Analyze competitors. Monitor competitor product visibility at the SKU level for missed opportunities.
Export data. Pull CSV files (or link via API) to feed into your search engine, content, or analytics tools.
Refine on-page content. Update product pages in Shopify, BigCommerce, or other platforms, closing identified gaps.
Overview
Pricing: Tiered plans start at €89 per month ($103).
Ease of use: Simple dashboards; quick start.
Integrations: No direct cart integrations.
Content optimization: Monitoring only; no optimization tools.
Limitations
Does not optimize or publish product content.
Profound
Profound is primarily a measurement platform, monitoring how brands appear across AI-powered search engines. It doesn’t optimize or publish content, but it offers deep discovery and measurement capabilities that can inform SKU-level strategy.
To use:
Identify visibility gaps. Use Profound’s dashboards to track your products, categories, or brand in AI answers.
Analyze competitors. Benchmark against competitors to pinpoint missed opportunities and find high-impact prompts to target.
Surface related prompts. Filter by geography, category, or topic to find prompts that align with your products for potential conversions.
Use insights to optimize content. Export reports or integrate with analytics and SEO tools to guide on-site optimization.
Overview
Pricing: $499 per month with custom plans available.
Ease of use: Training required to interpret the data fully.
Integrations: No direct ecommerce cart integrations.
Content optimization: None. Focus is on measurement.
Limitations
Does not optimize or publish product content.
Getting Started
Merchants do not require expensive tools to improve genAI visibility. To start:
Audit your presence. Use free trials or affordable tools such as Peec AI to see how your products appear in AI answers.
Identify high-intent prompts. Ask the genAI platforms, “Identify the most common customer questions about [product/category] by analyzing Reddit, Quora, product reviews, support tickets, and forums.”
Start small. Pick a half-dozen products and categories to track monthly. Adjust and expand over time.
AI may produce first-time customers, but loyalty programs, email marketing, and standout service will bring them back.
Marketers today spend their time on keyword research to uncover opportunities, closing content gaps, making sure pages are crawlable, and aligning content with E-E-A-T principles. Those things still matter. But in a world where generative AI increasingly mediates information, they are not enough.
The difference now is retrieval. It doesn’t matter how polished or authoritative your content looks to a human if the machine never pulls it into the answer set. Retrieval isn’t just about whether your page exists or whether it’s technically optimized. It’s about how machines interpret the meaning inside your words.
That brings us to two factors most people don’t think about much, but which are quickly becoming essential: semantic density and semantic overlap. They’re closely related, often confused, but in practice, they drive very different outcomes in GenAI retrieval. Understanding them, and learning how to balance them, may help shape the future of content optimization. Think of them as part of the new on-page optimization layer.
Image Credit: Duane Forrester
Semantic density is about meaning per token. A dense block of text communicates maximum information in the fewest possible words. Think of a crisp definition in a glossary or a tightly written executive summary. Humans tend to like dense content because it signals authority, saves time, and feels efficient.
Semantic overlap is different. Overlap measures how well your content aligns with a model’s latent representation of a query. Retrieval engines don’t read like humans. They encode meaning into vectors and compare similarities. If your chunk of content shares many of the same signals as the query embedding, it gets retrieved. If it doesn’t, it stays invisible, no matter how elegant the prose.
This concept is already formalized in natural language processing (NLP) evaluation. One of the most widely used measures is BERTScore (https://arxiv.org/abs/1904.09675), introduced by researchers in 2020. It compares the embeddings of two texts, such as a query and a response, and produces a similarity score that reflects semantic overlap. BERTScore is not a Google SEO tool. It’s an open-source metric rooted in the BERT model family, originally developed by Google Research, and has become a standard way to evaluate alignment in natural language processing.
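To make that concrete, here is a minimal sketch of how semantic overlap can be approximated with off-the-shelf embeddings and cosine similarity rather than BERTScore itself. The sentence-transformers model name and the example texts are placeholders, not anything prescribed by the research.

```python
# A rough proxy for semantic overlap: embed a query and candidate passages,
# then compare them with cosine similarity. Higher similarity ~ higher overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, not a recommendation

query = "how does retrieval-augmented generation work"
passages = [
    "RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.",
    "Retrieval-augmented generation (RAG) retrieves relevant content chunks, compares "
    "their embeddings to the user's query, and passes the aligned chunks to a large "
    "language model to generate an answer.",
]

query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# util.cos_sim returns a similarity matrix; row 0 holds the query-vs-passage scores.
scores = util.cos_sim(query_vec, passage_vecs)[0]
for text, score in zip(passages, scores):
    print(f"{float(score):.3f}  {text[:60]}...")
```

Run against a real query set, the longer, entity-rich passage will usually score closer to the query than the terse one, which is the overlap effect described above.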
Now, here’s where things split. Humans reward density. Machines reward overlap. A dense sentence may be admired by readers but skipped by the machine if it doesn’t overlap with the query vector. A longer passage that repeats synonyms, rephrases questions, and surfaces related entities may look redundant to people, but it aligns more strongly with the query and wins retrieval.
In the keyword era of SEO, density and overlap were blurred together under optimization practices. Writing naturally while including enough variations of a keyword often achieved both. In GenAI retrieval, the two diverge. Optimizing for one doesn’t guarantee the other.
This distinction is recognized in evaluation frameworks already used in machine learning. BERTScore, for example, shows that a higher score means greater alignment with the intended meaning. That overlap matters far more for retrieval than density alone. And if you really want to deep-dive into LLM evaluation metrics, this article is a great resource.
Generative systems don’t ingest and retrieve entire webpages. They work with chunks. Large language models are paired with vector databases in retrieval-augmented generation (RAG) systems. When a query comes in, it is converted into an embedding. That embedding is compared against a library of content embeddings. The system doesn’t ask “what’s the best-written page?” It asks “which chunks live closest to this query in vector space?”
This is why semantic overlap matters more than density. The retrieval layer is blind to elegance. It prioritizes alignment and coherence through similarity scores.
Chunk size and structure add complexity. Too small, and a dense chunk may miss overlap signals and get passed over. Too large, and a verbose chunk may rank well but frustrate users with bloat once it’s surfaced. The art is in balancing compact meaning with overlap cues, structuring chunks so they are both semantically aligned and easy to read once retrieved. Practitioners often test chunk sizes in the 200-500 token range and the 800-1,000 token range to find the balance that fits their domain and query patterns.
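As a rough illustration of that retrieval step, here is a minimal sketch of chunking and embedding-similarity lookup, assuming the sentence-transformers library and an in-memory list standing in for a real vector database. The chunk size and model are illustrative, not recommendations.

```python
# Minimal RAG-style retrieval: split text into fixed-size word chunks, embed them,
# and return the chunks that sit closest to the query embedding in vector space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def chunk_words(text: str, size: int = 300) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, document: str, top_k: int = 3) -> list[tuple[float, str]]:
    chunks = chunk_words(document)
    chunk_vecs = model.encode(chunks, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, chunk_vecs)[0]       # similarity per chunk
    ranked = sorted(zip(scores.tolist(), chunks), reverse=True)
    return ranked[:top_k]                                 # the closest chunks win retrieval
```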
Microsoft Research offers a striking example. In a 2025 study analyzing 200,000 anonymized Bing Copilot conversations, researchers found that information gathering and writing tasks scored highest in both retrieval success and user satisfaction. Retrieval success didn’t track with compactness of response; it tracked with overlap between the model’s understanding of the query and the phrasing used in the response. In fact, in 40% of conversations, the overlap between the user’s goal and the AI’s action was asymmetric. Retrieval happened where overlap was high, even when density was not. Full study here.
This reflects a structural truth of retrieval-augmented systems. Overlap, not brevity, is what gets you in the answer set. Dense text without alignment is invisible. Verbose text with alignment can surface. The retrieval engine cares more about embedding similarity.
This isn’t just theory. Semantic search practitioners already measure quality through intent-alignment metrics rather than keyword frequency. For example, Milvus, a leading open-source vector database, highlights overlap-based metrics as the right way to evaluate semantic search performance. Their reference guide emphasizes matching semantic meaning over surface forms.
The lesson is clear. Machines don’t reward you for elegance. They reward you for alignment.
There’s also a needed shift in how we think about structure. Most people see bullet points as shorthand: quick, scannable fragments. That works for humans, but machines read them differently. To a retrieval system, a bullet is a structural signal that defines a chunk. What matters is the overlap inside that chunk. A short, stripped-down bullet may look clean but carry little alignment. A longer, richer bullet, one that repeats key entities, includes synonyms, and phrases ideas in multiple ways, has a higher chance of retrieval. In practice, that means bullets may need to be fuller and more detailed than we’re used to writing. Brevity doesn’t get you into the answer set. Overlap does.
If overlap drives retrieval, does that mean density doesn’t matter? Not at all.
Overlap gets you retrieved. Density keeps you credible. Once your chunk is surfaced, a human still has to read it. If that reader finds it bloated, repetitive, or sloppy, your authority erodes. The machine decides visibility. The human decides trust.
What’s missing today is a composite metric that balances both. We can imagine two scores:
Semantic Density Score: This measures meaning per token, evaluating how efficiently information is conveyed. This could be approximated by compression ratios, readability formulas, or even human scoring.
Semantic Overlap Score: This measures how strongly a chunk aligns with a query embedding. This is already approximated by tools like BERTScore or cosine similarity in vector space.
Together, these two measures give us a fuller picture. A piece of content with a high density score but low overlap reads beautifully, but may never be retrieved. A piece with a high overlap score but low density may be retrieved constantly, but frustrate readers. The winning strategy is aiming for both.
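Neither score is a standard industry metric yet, but both can be roughed out today. The sketch below approximates density with a simple compression ratio and overlap with embedding cosine similarity; the model choice and the idea of using zlib as a density proxy are assumptions for illustration only.

```python
# Rough approximations of the two proposed scores:
#  - density: how compressible the text is (less compressible ~ more meaning per token)
#  - overlap: cosine similarity between the chunk and the query embedding
import zlib
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model

def density_score(text: str) -> float:
    raw = text.encode("utf-8")
    # Less compressible text tends to carry more information per character.
    return len(zlib.compress(raw)) / len(raw)

def overlap_score(text: str, query: str) -> float:
    vecs = model.encode([text, query], convert_to_tensor=True)
    return float(util.cos_sim(vecs[0], vecs[1]))

query = "what does vitamin D do for bones"
chunk = ("Vitamin D, also called calciferol, supports calcium absorption, "
         "bone growth, and bone density, helping prevent osteoporosis.")
print(density_score(chunk), overlap_score(chunk, query))
```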
Imagine two short passages answering the same query:
Dense version: “RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.”
Overlap version: “Retrieval-augmented generation, often called RAG, retrieves relevant content chunks, compares their embeddings to the user’s query, and passes the aligned chunks to a large language model for generating an answer.”
Both are factually correct. The first is compact and clear. The second is wordier, repeats key entities, and uses synonyms. The dense version scores higher with humans. The overlap version scores higher with machines. Which one gets retrieved more often? The overlap version. Which one earns trust once retrieved? The dense one.
Let’s consider a non-technical example.
Dense version: “Vitamin D regulates calcium and bone health.”
Overlap-rich version: “Vitamin D, also called calciferol, supports calcium absorption, bone growth, and bone density, helping prevent conditions such as osteoporosis.”
Both are correct. The second includes synonyms and related concepts, which increases overlap and the likelihood of retrieval.
This Is Why The Future Of Optimization Is Not Choosing Density Or Overlap, It’s Balancing Both
Just as the early days of SEO saw metrics like keyword density and backlinks evolve into more sophisticated measures of authority, the next wave will hopefully formalize density and overlap scores into standard optimization dashboards. For now, it remains a balancing act. If you choose overlap, it’s likely a safe-ish bet, as at least it gets you retrieved. Then, you have to hope the people reading your content as an answer find it engaging enough to stick around.
The machine decides if you are visible. The human decides if you are trusted. Semantic density sharpens meaning. Semantic overlap wins retrieval. The work is balancing both, then watching how readers engage, so you can keep improving.
But not all high-intent visibility translates to sales-ready traffic, especially in long sales cycles or complex buying journeys.
As we move deeper into an era where keywords that translate directly into attributable clicks matter less, a focus on traffic quality is as important as ever.
Don’t get me wrong. Quality has long been a critical component and key performance indicator (KPI): most of us in B2B and lead generation know our conversion rates and which funnel stage a visitor’s intent suggests.
Now, however, we have to scrutinize intent and traffic quality even more closely.
When SEO teams over-prioritize high-intent visibility under the false assumption that intent equals urgency, we can find that we’re not getting the conversions or leads we expect.
In this article, I’m unpacking seven ways to help go deeper with mapping visibility to actual funnel stages, differentiating decision-makers vs. influencers, and building SEO content that nurtures, qualifies, and educates before the handoff to sales.
1. The Myth: “If It’s A Bottom-Of-Funnel Topic, The Traffic Is Ready To Convert”
While we may think about our website, our content, and analytics data in customer journeys and conversion funnels, our target audience doesn’t.
Maybe my nerdy search brain thinks that way when I’m consuming content on a website I’m interested in, but most of the world doesn’t.
There are too many variables that impact someone’s conversion decision to fully unpack here. I can personally tell you that a couple of times, I filled out forms while lying on the floor next to one of my kids in bed, falling asleep.
And, other times, I sat with a tab open on my giant desktop work monitor for months before eventually finding that tab again and filling out the form.
Those are two extreme examples, but as more possible ways to be found (e.g., AI search) enter play, we’re going to see myriad behaviors and paths that we couldn’t have anticipated just a few years ago.
Things are not going to make sense in a simple way. What might seem like a home-run conversion can be frustrating when you dig into the data, and content that felt like top of funnel yet converts quickly will be equally (but pleasantly) surprising.
2. B2B Searchers Are Often Researching, Comparing, Or Gathering Info, Not Buying
Beyond the intent challenges and varying entry points and sources I noted above, we have a hand to play at times as well in how and where someone converts.
Our best-converting content can still stand in the way of getting the actual conversion, much like the head-scratchers we find when someone converts quickly without seemingly spending much time on the site or in “research” mode.
With more answers being given within search engines and large language models (LLMs), a lot of the research is done before someone gets to our site.
That being said, whether our content is helping inform AI, getting us found off-site, or doing the traditional education work on our site, we have to understand that, even on what might seem like a high-intent page, someone might still be in research and information-gathering mode.
They might be seeking pricing (if we disclose it) or building their own deck of us, plus competitors, to help with their own decisions for buying or outreach.
When we make too many assumptions, put someone on a singular navigation path, or take away options, we risk losing the opportunity for them to continue their research journey.
We’ve got to find a balance between prominent calls-to-action (CTAs) and long-form content so there’s more flexibility for the user based on what their intent is in that visitor or session that we worked so hard to get.
3. How To Differentiate Between User Intent And Sales-Readiness
I’ve talked a lot about intent already. What I haven’t unpacked is how sales-ready someone is.
Our brand story, content, and user experience can be persuasive and do the job of getting a form submission or phone call.
However, if someone isn’t “sales-ready,” they likely are going to consume everything up to the point of a conversion action, then leave. They may come back often, reach that same point, and leave again.
This might lead us to think there’s something wrong with the form, or the CTA, or the content itself. Sure, that could be the case and should be validated. But, it could also be that they simply aren’t ready to buy.
As an agency owner, I operate a B2B business that relies on lead generation, and I can personally validate this: we’ve received a lot of seemingly bottom-of-the-funnel traffic, yet prospects have told my team they were ready to buy but not ready to talk to anyone, because they had been told to slow down the process or await final budget approval before reaching out.
It’s frustrating, but that’s seemingly the nature of the world economically these past couple of years (probably not a “hot take”).
4. Why One-Size-Fits-All CTAs (E.g., “Request A Quote”) Often Fail In B2B
I admit that I’ve been guilty of slapping the one-size-fits-all, generic CTA in the footer or sidebar of all pages of a site.
As I noted earlier, we need to expect the unexpected with matching intent to content and funnel levels. We should definitely review and evaluate our CTAs.
In line with my note above about someone being close to a conversion but not sales-ready: if we can offer other areas of value that don’t involve a direct sales process, such as additional content they can subscribe to or ways to get further acquainted with us (e.g., webinars, Q&As), then we can keep engaging and stay in front of them in a way that’s welcome based on where they are right now.
5. Using Content To Build Trust And Qualification, Not Just Capture Form Fills
When we rush someone to a form submission and they’re not ready to buy, not prepared for the sales process, or qualified, we often get feedback from sales about discrepancies related to marketing qualified leads (MQLs) vs. sales qualified leads (SQLs) or how leads are accepted by sales.
Wasting internal sales time is frustrating, and pushing someone who wasn’t prepared for (or qualified for) the process is a loss for both sides.
Building trust through quality content, differentiation, setting expectations on what happens after the form submission, and other trust signals like transparency of pricing can go a long way to ensuring higher rates of conversions to customers.
Don’t forget that quality trumps quantity if we look at additional metrics and KPIs in our marketing-to-revenue process.
6. How To Structure B2B Content Around Intent Clusters, Not Just Funnel Stages
If you aren’t convinced yet from what I’ve shared about how user behavior can differ from what we might expect or predict, then maybe thinking about content specifically will help.
In the push toward zero-click searches and AI search, which has shifted focus away from specific keywords and toward visibility, one piece remains consistently important: the content you create.
In topics, clusters, or however you want to think about how the content is organized on your website, you still need to focus on how it is presented to the user.
Starting with user intent, mapping it to where the prospect is in the funnel, and working backwards, we can see where we have content gaps and what we need to answer every possible question and move the prospect forward in the process.
This will serve you well for search engines, LLMs, and AI-generated search results, both today and tomorrow.
While we used to approach topics (and in some cases still do) as driven by keywords, I’m advocating for thinking about topics in terms of how someone moves through a customer journey.
What questions are they asking at the phase they are in? Have we anticipated everything? Have we accidentally assumed too much about their knowledge or their sales-readiness?
We’re not going to be able to think of everything. Much like with long-tail keywords and queries, we can see people doing a lot more research and probing in AI search.
My company got a lead from ChatGPT a few months ago, and we could see that they visited our site seven times from ChatGPT in the process before eventually filling out our form. This is not user behavior that we would have planned for or anticipated just a few years ago.
7. Creating SEO Content For Both Decision-Makers And Gatekeepers
We can’t control who comes to our website. Humans aren’t as blockable as robots and web crawlers. However, we don’t need to be worried about those who might not be the ultimate prospect or decision-maker.
Whether you’re seeing traffic from AI engines, search engines, or those that never convert and seem like unqualified human visitors, I encourage you to still work on building your authority position, be helpful with your content, and to know that you might be helping get critical information to gatekeepers (human or systems) that go further upstream to a human who is a decision-maker.
Whether you’re educating the search committee for a Request for Proposal (RFP) process, an assistant or intern doing field research, or something automated trying to learn so it can feed good info to a decision-maker, it isn’t wasted effort, even though narrow thinking and reviewing of conversion metrics at the bottom of the funnel may make it seem that way.
Final Thoughts
Customer journeys, funnel thinking, search intent, and how they all work together to generate conversions and leads in B2B can be complex and hard to track. It is getting even harder.
That doesn’t mean that we should give up or try to force everyone through a narrow funnel or one-size-fits-all approach.
We can’t predict all the ways that our content will be understood, consumed, and engaged with.
What we can do is be helpful, leverage a strong brand, be transparent, and do everything we can to present users (and other sources) with a complete picture of our products, services, and how we are the right fit (or not) for our website visitors.
Leveraging our moments of visibility to generate quality traffic, while understanding that the bottom of the funnel isn’t a slam dunk to convert, can go a long way toward engaging and re-engaging bottom-of-the-funnel traffic and earning every conversion we deserve.
AI search has completely changed the way customers make decisions. If you’re still just tracking data instead of driving sales from AI search, you are missing out.
Join Bart Góralewicz, Co-Founder of ZipTie.dev, on September 3, 2025, for an insightful webinar on how to map customer journeys in AI search and turn those insights into measurable sales.
Brands that win in AI search are not just watching their metrics. They are understanding how customers discover and decide to buy. This session will give you the tools to drive higher conversions and grow revenue with AI search.
Register now to learn practical strategies you can apply right away. Can’t attend live? No problem! Sign up, and we will send you the recording.
In 2023, I wrote about a provocative “what if”: What if Google added an AI chatbot to Search, effectively cannibalizing itself?
Fast-forward to 2025, and it’s no longer hypothetical.
With AI Overviews (AIOs) and AI Mode, Google has not only eaten into its own search product, but it has also taken a big bite out of publishers.
Cannibalization is usually framed as a risk. But in the right circumstances, it can be a growth driver, or even a survival tactic.
In today’s Memo, I’m revisiting product cannibalization through a fresh AI-era lens, including:
What cannibalization really is (and why it’s not always bad).
Iconic examples from Netflix, Apple, Amazon, Google, and Instagram.
How Google’s AI shift meets the definition of self-cannibalization – and where it doesn’t.
The four big marketing implications if your brand cannibalizes itself in the AI-boom landscape (for premium subscribers).
Image Credit: Kevin Indig
Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!
Today’s Memo is an updated version of my previous guide on product cannibalization.
Previously, I wrote about how adding an AI chatbot to Google Search would mean that Google would be cannibalizing itself – which only a few companies in history have successfully accomplished.
In this updated Memo, I’ll walk you through successful product cannibalization examples while we revisit how Google has cannibalized itself through a refreshed lens.
Because … Google has effectively cannibalized itself with the incorporation of AI Overviews (AIOs) and AI Mode, but they haven’t found a way to monetize them yet.
And publishers and brands are suffering as a result.
So who wins here? (Does anyone?) Only time will tell.
Product cannibalization is the replacement of a product with a new one from the same company, typically measured in sales revenue.
Even though most definitions say that cannibalization occurs when revenue is flat while two products trade market share, there are a number of examples that show revenue can grow as a result of cannibalization.
Product cannibalization, or market cannibalization, is often seen as something bad – but it can be good or even necessary.
Let’s consider a few examples of product cannibalization you’re likely already familiar with:
Hardware.
Retail.
SaaS/Tech.
Hardware companies, for example, need to bring out better and newer chips on a regular basis. The lifecycle of AI training chips is often less than a year because new architectures and higher processing capabilities quickly make the previous generation obsolete.
Right now, chips are the hottest commodity in tech – companies building and training AI models need them in massive quantities, and as soon as the next generation is released, the old one loses most of its value.
As a result, chipmakers are forced to cannibalize their own products, advancing designs to stay competitive not only with rival manufacturers but also with their own previous breakthroughs.
But there are stark differences between cannibalization in retail and tech.
Retail cannibalization is driven by seasonal changes or consumer trends, while tech product cannibalization is primarily a result of progress.
In fashion, for example, consumers prefer this year’s collection over last season’s. The new collections cannibalize old ones.
In tech, new technology leads companies to replace old products.
New PlayStation consoles, for example, significantly replace sales from older ones – especially since they’re backward compatible with games.
Another example? The growth of the headless content management system (CMS), which increasingly replaces the coupled CMS and pushes content management providers to offer new products and features.
Netflix made several product pivots in its history, but two stand out the most:
The switch from DVD rental to streaming, and
Subscription-only memberships to ad-supported revenue.
On November 3, 2022, Netflix launched an ad-supported plan for $6.99/month on top of its standard and premium plans. (It has since increased to $7.99/month. See image below.)
During the pandemic, Netflix’s subscriber numbers skyrocketed, but they came back to earth like Falcon 9 when Covid receded: Enter the “Basic with ads” subscription that promoted retention.
Image Credit: Kevin Indig
Another challenge for Netflix? Competitors. Lots of them – and with legacy media histories.
Initially, the strategy of creating original content and making the experience seamless across many countries resulted in strong growth.
But when competitors like HBO, Disney, and Paramount launched similar products with original content, growth slowed down.
When Netflix launched the ad-supported plan, only 0.1% of existing users made the switch, but 9% of new users chose it (see below).
A look at other streaming platforms suggests the share will increase over time. Here’s a quick look at percentages of subscribers on ad-supported plans across platforms:
Hulu – 57%.
Paramount+ – 44%.
Peacock – 76%.
HBO Max – 21%.
Image Credit: Kevin Indig
However, Netflix’s new plan is not technically considered product cannibalization but partial cannibalization based on price.
The product is the same, but through the new plan, it’s now accessible to a new customer segment that previously wouldn’t have considered Netflix.
We can conclude that the new ad-supported plan is not the same type of cannibalization as Netflix’s DVD-to-streaming pivot.
In 2007, internet connections became strong enough to open the door to streaming. Netflix was not the first company to provide movie streaming, but the first one to be successful at it.
The company paved the way by incentivizing DVD rental customers to engage online, for example, with rental queues on Netflix’s website. But ultimately, the pivot was the result of technological progress.
Another product that saw the light of day for the first time in 2007?
The iPhone.
When it launched, the iPhone had all the features of the iPod and more, making it a case of full product cannibalization.
As a result, the share of revenue from the iPod significantly decreased once the iPhone launched (see image below).
Even though you could argue it’s a “regular case” of market cannibalization when looking at revenue streams from each product, it was a technological step-change instead of partial cannibalization based on pricing.
Image Credit: Kevin Indig
However, big steps in technology don’t always lead to a desired replacement of an old product.
Take the Amazon Kindle, for example.
In 2007 – the same year as Netflix’s streaming product and the iPhone (something was up that year) – Amazon brought its new ebook reader, the Kindle, to market.
It made such an impact that people predicted the death of paper books. (And librarians everywhere laughed while booksellers braced themselves.)
But over 10 years later, ebooks stabilized at 20% market share, while print books captured 80%.
The reason is that publishers got into pricing battles with Amazon and Apple, which also started to play an important role in the ebook market. (It’s a long story, but you can read about it here.)
Amazon attempted to cannibalize its core business (books) with the Kindle (ebooks), but couldn’t make product pricing work, which resulted in ebooks often being more expensive than print editions. Yikes.
The technology changed, but consumers weren’t incentivized to use it.
Let’s look at two final examples here. These two companies acquired or copied competitors to control partial cannibalization:
YouTube videos are technically better answers to many search queries than web results. Google saw this very early on and smartly acquired YouTube. Video results took some time to fill more space in the Google SERP, even though they technically cannibalize web results. But today, they’re often some of the most visually impactful results (and often the first results) that searchers see.
Instagram saw the success of Snapchat stories and decided to copy the feature in order to mitigate competitor growth. Despite the cannibalization of regular Instagram posts, net engagement with Stories was greater. (And speaking of YouTube, you could argue that YouTube Shorts follow the same principle.)
With all this in mind, we can say there is full and partial cannibalization based on how many features a new product replaces.
Pricing changes, copied features, or acquisitions lead to partial cannibalization that doesn’t result in the same revenue growth as full cannibalization.
Full cannibalization requires two conditions to be true:
The new product must be built on a technological step change, and
Customers need to be incentivized to use it.
With this knowledge foundation in place, let’s examine the shifts in the Google Search product over the last 12-24 months.
Let’s apply these product cannibalization principles to the case of Google vs. ChatGPT & Co.
In the original content of this memo (published in 2023), I shared the following:
If Google were to add AI to Search in a similar way as Neeva & Co (see previous article about Early attempts at integrating AI in Search), the following conditions would be true:
AI Chatbots are a technological step-change.
Customers are incentivized to use AI Chatbots because they give quick and good answers to most questions.
However, not all conditions are true:
AI Chatbots don’t provide the full functionality of Google Search.
It’s not cheaper to integrate an AI Chatbot with Search.
I’ve been clear about my hypothesis for a while now. As I shared in my 2025 Halftime Report:
I personally believe that AI Mode won’t launch [fully in the SERP] before Google has figured out the monetization model. And I predict that searchers will see way fewer ads but much better ones and displayed at a better time.
Google won’t show AI Mode everywhere, because adoption is generational (see the UX study of AIOs for more info). I think AI Mode will launch at a broader scale (like showing up for more queries overall) when Google figures out monetization.
Plus, ChatGPT is not yet monetizing, so advertisers go to Google and Meta – for now. And that’s my hypothesis as to why Google Search is continuing to grow.
Keep in mind, to successfully cannibalize your existing product, you need customers to want to use it. And according to a recent report from Garrett Sussman over at iPullrank, over 50% of users who tried Google’s AI Mode once didn’t return. [Source] (So it seems Google’s still figuring out the incentivizing part.)
Even with the advancements we’ve seen in the last six to 12 months with AI models – and the inclusion of live web search and product recommendations into AI chats – I’d argue that they’re useful for information-driven or generative queries but lack the databases needed to give good answers to product or service searches.
Let’s take a look at an example:
If you input “best plumber in Chicago” or “best toaster” into ChatGPT, I’d argue you’d actually get lower-quality results – for now – than if you input the same queries into Google. (Go try it for yourself and let me know what you find. But here’s a walk-through with Amanda Johnson hopping in to illustrate this below.)
At the same time, these product and service queries are the queries that search engines with an ad revenue business model can monetize best.
It was said that ChatGPT cost at least $100,000 per day to run when it first crossed 1 million users in December 2022. By 2023, it was costing about $700,000 a day. [Source]
Today, it’s likely to be a significant multiple of that.
Keep in mind, Google Search sees billions of search queries every day.
Even with Google’s advanced infrastructure and talent, AI chatbots are costly.
And they can (still) be slow – even with the advancements they’ve made in the last 12 months. Current and classic Google Search systems (like Featured Snippets and People Also Ask) might provide a much faster answer.
But, alas, here we are in 2025, and Google is cannibalizing its own product via AIOs and AI Mode.
Right now, according to Similarweb data, usage of the AI Mode tab on Google.com in the U.S. has slightly dipped and now sits at just over 1%. [Source, Source]
Google AIOs are now seen by more than 1.5 billion searchers every month, and they sit front and center. But engagement is falling. Users are spending less time on Google and clicking fewer pages. [Source]
But Google has to compete with not only other search engines that provide an AI-chat-forward experience, but also with ChatGPT & Co. themselves.
Below, I’ve listed important considerations for your brand if you’re weighing product cannibalization as a strategy.
You’ll want to:
Reframe cannibalization as a strategic option for the brand rather than a failure.
Use the full vs. partial cannibalization lens.
Test the two success conditions.
Protect your core offerings while you experiment.
Use competitive cannibalization defensively.
Monitor, learn, and adjust.
In the next section, for premium subscribers, I’ll walk you through what to watch out for if you decide to use product cannibalization as a growth strategy.
1. Reframe Cannibalization As A Strategic Option
Don’t default to seeing product cannibalization as a failure; assess if it can protect market share or accelerate growth.
Audit your product line and GTM strategy to identify areas where you could self-disrupt before a competitor does.
2. Use The Full Vs. Partial Cannibalization Lens
Full cannibalization works best when there’s a tech leap and strong customer incentives.
Example: Apple iPhone replacing iPod – all iPod features plus far more capability led to the iPod’s rapid decline.
Partial cannibalization via pricing, features, or acquisitions is less risky but may not deliver big growth.
Example: Netflix ad-supported plan – same streaming product, but a lower-cost tier opened the door to new segments and reduced churn risk.
Map current and future offerings against these two categories to decide your approach.
3. Test The 2 Success Conditions
A cannibalizing product is more likely to succeed when both are true:
Tech Leap: Offers a meaningfully better way to solve the problem.
Example: Netflix DVD → Streaming in 2007 leveraged faster internet speeds to change the delivery model entirely.
Customer Incentive: Lower cost, better performance, more convenience, or status.
Example: YouTube acquisition by Google made richer, more visual answers possible in Search, improving the user experience.
If both apply → pursue full cannibalization.
If one applies → pursue partial cannibalization with controlled scope.
4. Protect Your Core While You Experiment
Identify high-revenue segments and shield them from early disruption.
Example: Google keeping AI Mode away from highly monetizable queries like “best credit card” until the ad model is ready.
Test self-disruption in lower-stakes markets to validate demand before scaling.
Example: Instagram Stories rolled out in a way that boosted net engagement while protecting the feed’s ad inventory.
5. Use Competitive Cannibalization Defensively
When a rival launches a threat, choose between:
Acquire: Google acquiring YouTube to control the rise of video as a search answer format.
Copy: Instagram adopting Stories from Snapchat to stop user migration and grow engagement.
Differentiate: Amazon Kindle – a tech leap that tried to move readers from print to digital, but without a price advantage, adoption plateaued.
6. Monitor, Learn, And Adjust
Track engagement, revenue mix, and adoption by segment.
Example: Similarweb data on AI Mode – U.S. usage holding just over 1%, signaling limits to adoption speed.
Adjust rollout pace based on generational adoption patterns and competitor moves.
Example: Google AIO engagement drop – showing that placement alone doesn’t guarantee sustained user interest.
A good example of how to do this is Chegg.
The company has been obliterated by Google’s AI Overviews and even sued Google. Chegg’s value was answers to homework questions, but since almost every student uses ChatGPT for that, their value chain broke. How is the company reacting to this life-ending threat?
Chegg has launched a new tool, Solution Scout, that allows students to compare answers from ChatGPT & Co. with Chegg’s archive.
Instead of trying to beat AI Chatbots, Chegg hits them where it hurts: in the hallucinations.
LLMs can make stuff up, which is especially painful when it comes to learning and taking tests. Imagine you spend hours internalizing the wrong facts!
Solution Scout validates AI answers with Chegg’s archive of human-sourced material. It compares the answer from foundational models and highlights differences and consensus.
Featured Image: Paulo Bobita/Search Engine Journal
Google’s AI-generated results reshape how people search, and Google has said that websites should expect traffic fluctuations and that prior success in organic Search does not guarantee future success in the new ecosystem.
This is a big claim. Whether “Hidden Gems” are getting more visibility in modern Search has been debated, and I’m working through as much data as possible to identify whether Google’s claims above have substance.
Google’s Hidden Gems initiative is its effort to highlight genuine, first‑hand content from smaller corners of the web.
It was first revealed in May 2023 and fully integrated into the core algorithm later that year, with official acknowledgment in mid-November 2023.
It targets posts with first-hand knowledge, personal insights, and unique experiences usually found on forums, blogs, social platforms, and niche sites.
Rather than favoring only high-authority domains, it now surfaces these overlooked “gems” because they offer genuine, practical perspectives from creators and brands, rather than visibility powered by traditional SEO metrics and big brand budgets.
Hidden Gems has the objective of:
Improving how we (Google) rank results in Search overall, with a greater focus on content with unique expertise and experience.
This brings me to the travel sector and the notion of Hidden Gems.
It has been a long-held belief in the travel sector that Google favors bigger travel brands. When I worked in regional agencies that had travel clients, this was almost a party line when pitching SME and challenger travel websites.
Now that search is evolving and we’re seeing more and more Search features either powered by or directly interfacing with AI, is this an opportunity for challenger travel brands to gain further visibility within Google’s Search ecosystem?
Methodology
To investigate, we analyzed a dataset of 5,824 URLs surfaced in Google’s AI-generated results for travel-related queries.
As part of the methodology, we also reviewed traditional SEO metrics such as estimated site traffic, domain rating, and total number of domain keywords to validate a qualitative review of whether a site functions as a powerful travel brand or a challenger brand.
Each URL was manually reviewed and tagged based on whether Google identified it as a Hidden Gem. We compared their visibility, domain authority, and how often they appeared in AI results.
Quantity Vs. Frequency
The dataset revealed a nuanced dynamic: While Hidden Gems were more diverse, they were not more dominant.
From the 5,824 cited URLs, we identified 1,371 unique domains. We classified 590 of these as Hidden Gem domains compared to 781 established domains.
However, those 781 established domains appeared 4,576 times in total, a much higher return rate than the 1,248 total appearances of the Hidden Gems.
This suggests that while AI mode is surfacing a wide variety of lesser-known sources, it still leans heavily on established brands for repeated visibility.
As you would expect, the domains we identified as not being “Hidden Gems” were weighted toward higher DR scores.
Image from author, August 2025
By contrast, the domains we identified as being Hidden Gems were not weighted in the opposite direction, but instead much more evenly spread out.
Image from author, August 2025
In other words, Google is sampling widely from the long tail but serving frequently from the head of the distribution.
Authority Still Has A Role
While traditional SEO has long placed emphasis on authority metrics like Domain Rating (DR) or Domain Authority (DA), our analysis shows that their influence may be diminishing in the context of AI-led search.
This shift aligns with broader trends we’ve observed in Google’s evolving ranking systems.
Instead of relying heavily on link-based authority, AI Overviews and similar experiences appear to prioritize content that demonstrates depth, originality, and strong alignment with user intent.
Authority hasn’t disappeared, but it’s been repositioned. Rather than acting as a gatekeeper for visibility, it’s now one of many factors, often taking a back seat to how well a piece of content anticipates and satisfies the user’s informational needs in the moment.
What This Means For Travel Brands
Hidden Gems are showing up in Google’s AI results, but they’re not displacing the giants. They’re appearing alongside them, offering more variety but less dominance.
For challenger brands, this represents both an opportunity and a challenge.
First-Hand Content Gains Ground
The opportunity is clear: Content that is specific, genuine, and useful is getting noticed, even from smaller or lesser-known sites.
AI-powered results seem to be more willing to include pages that deliver practical insights, first-hand experience, and niche relevance, even if they lack the traditional signals of authority.
This creates new openings for brands that previously struggled to compete on backlinks or brand strength alone.
Repetition And Recall Still Matter
But the challenge is equally clear in that visibility is not evenly distributed.
While Google may sample from a broader range of sources, the repetition and prominence still favor the dominant travel brands.
These brands appear more frequently, benefit from greater brand recall, and are more likely to be clicked simply because they’re familiar.
So for newer or challenger brands, the question becomes: How do you turn presence into preference?
Where Should I Be Focusing?
Consistency Of Presence
It starts with consistency. One or two appearances in AI Overviews won’t move the needle.
Travel brands need to think about sustained visibility, showing up across a range of topics, formats, and moments in the user journey.
That means building out content that doesn’t just answer common queries but anticipates nuanced needs, inspires curiosity, and offers unique, first-hand insight.
Clarity Of Voice
Next comes clarity of voice. AI systems are increasingly sensitive to content that signals credibility, experience, and originality.
Brands that find and articulate a clear editorial voice, whether that’s luxury travel with a local twist, slow travel for sustainability, or adventure itineraries from people who’ve actually been there, are more likely to stand out.
Intent Understanding
Finally, there’s intent understanding. Challenger brands must shift from thinking in keywords to thinking in moments.
What’s the user trying to imagine, plan, solve, or feel at this point in their journey? How can your content speak directly to that?
A New Definition Of Authority
The travel giants still have scale on their side, but challenger brands now have a better chance to earn visibility through authenticity and depth. That’s a different kind of authority, one rooted in relevance and resonance.
For travel SEOs willing to rethink what authority means, and for brands ready to invest in meaningful, user-first content, AI-powered search is no longer just a threat. It’s an invitation.
Not to play the same game the giants are playing, but to play a different one, and win in different ways.
A software engineer from New York got so fed up with the irrelevant results and SEO spam in search engines that he decided to create a better one. Two months later, he has a demo search engine up and running. Here is how he did it, and four important insights about what he feels are the hurdles to creating a high-quality search engine.
One of the motives for creating a new search engine was the perception that mainstream search engines contain an increasing amount of SEO spam. After two months, the software engineer wrote about his creation:
“What’s great is the comparable lack of SEO spam.”
Neural Embeddings
The software engineer, Wilson Lin, decided that the best approach would be neural embeddings. He created a small-scale test to validate the approach and noted that the embeddings approach was successful.
Chunking Content
The next phase was deciding how to process the data: should it be divided into blocks of paragraphs or sentences? He decided that the sentence level was the most granular level that made sense because it enabled identifying the most relevant answer within a sentence while also enabling the creation of larger paragraph-level embedding units for context and semantic coherence.
But he still had problems with identifying context with indirect references that used words like “it” or “the” so he took an additional step in order to be able to better understand context:
“I trained a DistilBERT classifier model that would take a sentence and the preceding sentences, and label which one (if any) it depends upon in order to retain meaning. Therefore, when embedding a statement, I would follow the “chain” backwards to ensure all dependents were also provided in context.
This also had the benefit of labelling sentences that should never be matched, because they were not “leaf” sentences by themselves.”
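Wilson’s classifier and training data aren’t published, so the following is only a sketch of the idea: split text into sentences, decide whether each one leans on earlier context, and embed the whole backward chain together. A crude pronoun heuristic stands in here for his trained DistilBERT model.

```python
# Illustrative only: sentence-level chunking where a sentence that leans on earlier
# context (e.g., starts with "it", "this", "these") is embedded together with the
# sentence(s) it depends on, following the chain backwards.
import re

PRONOUN_OPENERS = ("it", "this", "that", "these", "those", "the")

def split_sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def depends_on_previous(sentence: str) -> bool:
    # Stand-in heuristic; the original project used a trained DistilBERT classifier.
    first_word = sentence.split(maxsplit=1)[0].lower().strip(",")
    return first_word in PRONOUN_OPENERS

def embedding_units(text: str) -> list[str]:
    sentences = split_sentences(text)
    units = []
    for i, sentence in enumerate(sentences):
        chain = [sentence]
        j = i
        while j > 0 and depends_on_previous(sentences[j]):
            j -= 1
            chain.insert(0, sentences[j])   # walk the dependency chain backwards
        units.append(" ".join(chain))
    return units
```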
Identifying The Main Content
A challenge for crawling was developing a way to ignore the non-content parts of a web page in order to index what Google calls the Main Content (MC). What made this challenging was the fact that all websites use different markup to signal the parts of a web page, and although he didn’t mention it, not all websites use semantic HTML, which would make it vastly easier for crawlers to identify where the main content is.
So he basically relied on HTML tags, like the paragraph tag (<p>), to identify which parts of a web page contained the content and which parts did not.
This is the list of HTML tags he relied on to identify the main content:
blockquote – A quotation
dl – A description list (a list of descriptions or definitions)
ol – An ordered list (like a numbered list)
p – Paragraph element
pre – Preformatted text
table – The element for tabular data
ul – An unordered list (like bullet points)
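His exact extraction rules aren’t published, but a rough reconstruction of the approach, using BeautifulSoup and the tag whitelist above, might look like this sketch.

```python
# Pull text only from a whitelist of content-bearing HTML tags, ignoring nav,
# headers, footers, scripts, and other boilerplate markup.
from bs4 import BeautifulSoup

CONTENT_TAGS = ["blockquote", "dl", "ol", "p", "pre", "table", "ul"]

def extract_main_content(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()                      # drop obvious non-content elements
    blocks = []
    for element in soup.find_all(CONTENT_TAGS):
        # Skip elements nested inside another content block to avoid duplicate text.
        if element.find_parent(CONTENT_TAGS):
            continue
        text = element.get_text(" ", strip=True)
        if text:
            blocks.append(text)
    return "\n\n".join(blocks)
```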
Issues With Crawling
Crawling was another part that came with a multitude of problems to solve. For example, he discovered, to his surprise, that DNS resolution was a fairly frequent point of failure. The type of URL was another issue, where he had to block any URL from crawling that was not using the HTTPS protocol.
These were some of the challenges:
“They must have https: protocol, not ftp:, data:, javascript:, etc.
They must have a valid eTLD and hostname, and can’t have ports, usernames, or passwords.
Canonicalization is done to deduplicate. All components are percent-decoded then re-encoded with a minimal consistent charset. Query parameters are dropped or sorted. Origins are lowercased.
Some URLs are extremely long, and you can run into rare limits like HTTP headers and database index page sizes.
Some URLs also have strange characters that you wouldn’t think would be in a URL, but will get rejected downstream by systems like PostgreSQL and SQS.”
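The write-up doesn’t include the canonicalization code itself, so the following is only a simplified sketch of those rules using Python’s urllib; the specific checks and query handling are assumptions.

```python
# Simplified URL canonicalization along the lines described above: require https,
# reject credentials and ports, lowercase the origin, and sort query parameters.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalize(url: str) -> str | None:
    parts = urlsplit(url.strip())
    if parts.scheme.lower() != "https":              # reject ftp:, data:, javascript:, ...
        return None
    if not parts.hostname or parts.port or parts.username or parts.password:
        return None                                  # no ports, usernames, or passwords
    host = parts.hostname.lower()                    # lowercase the origin
    query = urlencode(sorted(parse_qsl(parts.query)))  # decode, sort, re-encode parameters
    path = parts.path or "/"
    return urlunsplit(("https", host, path, query, ""))  # drop any fragment
```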
Storage
At first, Wilson chose Oracle Cloud because of the low cost of transferring data out (egress costs).
He explained:
“I initially chose Oracle Cloud for infra needs due to their very low egress costs with 10 TB free per month. As I’d store terabytes of data, this was a good reassurance that if I ever needed to move or export data (e.g. processing, backups), I wouldn’t have a hole in my wallet. Their compute was also far cheaper than other clouds, while still being a reliable major provider.”
But the Oracle Cloud solution ran into scaling issues. So he moved the project over to PostgreSQL, experienced a different set of technical issues, and eventually landed on RocksDB, which worked well.
He explained:
“I opted for a fixed set of 64 RocksDB shards, which simplified operations and client routing, while providing enough distribution capacity for the foreseeable future.
…At its peak, this system could ingest 200K writes per second across thousands of clients (crawlers, parsers, vectorizers). Each web page not only consisted of raw source HTML, but also normalized data, contextualized chunks, hundreds of high dimensional embeddings, and lots of metadata.”
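Routing keys to a fixed shard count like that usually comes down to a stable hash modulo the number of shards. A minimal sketch (not his actual code) might look like this:

```python
# Route a record key (e.g., a canonical URL) to one of 64 fixed shards using a
# stable hash, so every client computes the same shard without coordination.
import hashlib

NUM_SHARDS = 64

def shard_for(key: str) -> int:
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

print(shard_for("https://example.com/some-page"))  # deterministic value in [0, 63]
```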
GPU
Wilson used GPU-powered inference to generate semantic vector embeddings from crawled web content using transformer models. He initially used OpenAI embeddings via API, but that became expensive as the project scaled. He then switched to a self-hosted inference solution using GPUs from a company called Runpod.
He explained:
“In search of the most cost effective scalable solution, I discovered Runpod, who offer high performance-per-dollar GPUs like the RTX 4090 at far cheaper per-hour rates than AWS and Lambda. These were operated from tier 3 DCs with stable fast networking and lots of reliable compute capacity.”
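The inference code isn’t shown, but batched GPU embedding with an open transformer model generally follows the pattern below; the model name, batch size, and CUDA device are assumptions for illustration.

```python
# Batched embedding generation on a GPU: encode crawled chunks in large batches
# so the accelerator stays saturated, and normalize vectors for cosine search.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")  # example model, GPU assumed

def embed_chunks(chunks: list[str]):
    # Returns a (len(chunks), dim) array of unit-length embeddings.
    return model.encode(
        chunks,
        batch_size=256,               # tune to available GPU memory
        normalize_embeddings=True,    # unit vectors -> dot product equals cosine
        show_progress_bar=False,
    )
```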
Lack Of SEO Spam
The software engineer claimed that his search engine had less search spam and used the example of the query “best programming blogs” to illustrate his point. He also pointed out that his search engine could understand complex queries and gave the example of inputting an entire paragraph of content and discovering interesting articles about the topics in the paragraph.
Four Takeaways
Wilson listed many discoveries, but here are four that may be of interest to digital marketers and publishers interested in this journey of creating a search engine:
1. The Size Of The Index Is Important
One of the most important takeaways Wilson learned from two months of building a search engine is that the size of the search index is important because, in his words, “coverage defines quality.”
2. Crawling And Filtering Are The Hardest Problems
Although crawling as much content as possible is important for surfacing useful content, Wilson also learned that filtering low-quality content was difficult because it required balancing the need for quantity against the pointlessness of crawling a seemingly endless web of useless or junk content, so a reliable way of filtering out the useless content was necessary.
This is actually the problem that Sergey Brin and Larry Page solved with PageRank. PageRank modeled user behavior: the choices and votes of humans who validate web pages with links. Although PageRank is nearly 30 years old, the underlying intuition remains so relevant today that the AI search engine Perplexity uses a modified version of it for its own search engine.
3. Limitations Of Small-Scale Search Engines
Another takeaway he discovered is that there are limits to how successful a small independent search engine can be. Wilson cited the inability to crawl the entire web as a constraint which creates coverage gaps.
4. Judging Trust And Authenticity At Scale Is Complex
Automatically determining originality, accuracy, and quality across unstructured data is non-trivial.
Wilson writes:
“Determining authenticity, trust, originality, accuracy, and quality automatically is not trivial. …if I started over I would put more emphasis on researching and developing this aspect first.
Infamously, search engines use thousands of signals on ranking and filtering pages, but I believe newer transformer-based approaches towards content evaluation and link analysis should be simpler, cost effective, and more accurate.”
Interested in trying the search engine? You can find it here, and you can read the full technical details of how he built it here.
Someone posted details of what they described as a novel negative SEO attack that appeared to involve Core Web Vitals performance poisoning. Google’s John Mueller and Chrome’s Barry Pollard assisted in figuring out what was going on.
The person posted on Bluesky, tagging Google’s John Mueller and Rick Viscomi, the latter a DevRel Engineer at Google.
“Hey we’re seeing a weird type of negative SEO attack that looks like core web vitals performance poisoning, seeing it on multiple sites where it seems like an intentional render delay is being injected, see attached screenshot. Seeing across multiple sites & source countries
…this data is pulled by webvitals-js. At first I thought dodgy AI crawler but the traffic pattern is from multiple countries hitting the same set of pages and forging the referrer in many cases”
The significance of the reference to “webvitals-js” is that the degraded Core Web Vitals data comes from what’s hitting the server, actual performance scores recorded on the website itself, not the CrUX data, which we’ll discuss next.
Could This Affect Rankings?
The person making the post did not say if the “attack” had impacted search rankings, although that is unlikely, given that website performance is a weak ranking factor and less important than things like content relevance to user queries.
Google’s John Mueller responded, sharing his opinion that it’s unlikely to cause an issue, and tagging Chrome Web Performance Developer Advocate Barry Pollard (@tunetheweb) in his response.
Mueller said:
“I can’t imagine that this would cause issues, but maybe @tunetheweb.com has seen things like this or would be keen on taking a look.”
Barry Pollard wondered if it’s a bug in the web-vitals library and asked the original poster if it’s reflected in the CrUX data (Chrome User Experience Report), which is a record of actual user visits to websites.
The person who posted about the issue responded that the CrUX report does not reflect the page speed issues.
They also stated that the website in question is experiencing a cache-bypass DoS (denial-of-service) attack, in which an attacker sends a massive number of page requests crafted to slip past the CDN or local cache. Because each request bypasses the cache, the origin server has to render every page itself instead of serving a cached copy, which strains server resources and slows it down.
The local web-vitals script is recording the performance degradation of those visits, but it is likely not registering with the CrUX data because that comes from actual Chrome browser users who have opted in to sharing their web performance data.
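For readers who want to check this kind of discrepancy themselves, CrUX field data can be queried directly through the CrUX API. The sketch below is a minimal Python example; the API key, origin, and metric choice are placeholders, not details from the reported attack.

```python
# Minimal sketch of checking CrUX directly, which is roughly what
# Pollard's question amounts to: does the public field data show the
# same degradation the on-site web-vitals script recorded?
import requests

CRUX_API_KEY = "YOUR_API_KEY"  # placeholder for your own Google API key
ENDPOINT = f"https://chromeuserexperience.googleapis.com/v1/records:queryRecord?key={CRUX_API_KEY}"

resp = requests.post(ENDPOINT, json={
    "origin": "https://example.com",           # or "url" for a single page
    "metrics": ["largest_contentful_paint"],   # the metric allegedly poisoned
})
resp.raise_for_status()

lcp = resp.json()["record"]["metrics"]["largest_contentful_paint"]
print(lcp["percentiles"]["p75"])  # 75th-percentile LCP from real Chrome users
```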
So What’s Going On?
Judging by the limited information in the discussion, it appears that a DoS attack is slowing down server response times, which in turn is affecting page speed metrics on the server. The Chrome User Experience Report (CrUX) data is not reflecting the degraded response times, which could be because the CDN is handling the page requests for the users recorded in CrUX. There’s a remote chance that the CrUX data isn’t fresh enough to reflect recent events but it seems logical that users are getting cached versions of the web page and thus not experiencing degraded performance.
I think the bottom line is that the CWV scores themselves will not have an effect on rankings. And given that real users will hit the cache layer if there’s a CDN, the DoS attack probably won’t affect rankings in an indirect way either.
While industry professionals debate the nomenclature of SEO, GEO, and AEO, and whether ChatGPT or Google’s AI Overviews will replace traditional search, a more fundamental shift is happening that could disrupt the entire industry’s business model.
To get a better understanding of this, I spoke to 25-year veteran and SEO pioneer Duane Forrester about some of his recent articles on the shift away from traditional SEO and on how SEO roles are changing and adapting.
Duane previously worked at Microsoft as a senior program manager of SEO, where he helped to launch Bing Webmaster Tools and bring Schema.org to life. He has a deep understanding of how search engines work and has now turned his attention to adapting to the realities of AI-powered search and digital discovery.
His belief is that the real disruption isn’t AI replacing search engines; it’s the rise of AI agents. These “Agentic AI” systems will empower individuals to work like small agencies, and the jobs that thrive will be those that can effectively manage an AI team.
The Rise Of Agentic AI: Virtual Team Members
In Duane’s recent article “SEO’s Existential Threat is AI, but Not in the Way You Think,” he said it’s the rise of AI agents and retrieval-based systems that are already transforming how people interact with information, quietly eroding SEO’s return on investment. So I asked him why agents, not SERPs, are the future.
Duane explained:
“The most significant development isn’t AI replacing search engines; it’s the emergence of Agentic AI systems that can be given tasks and execute them autonomously … This is really a personal thing and I’ve been following this since I worked at Microsoft. I did some early work with Cortana with that program and training it for language recognition.”
Within six months, Duane predicts, professionals will routinely instruct AI agents to perform work while they focus on higher-value activities. The result is that individuals will be able to operate much more like a small agency.
“If I can create a process and the process is largely executed by agents, then the 100% of my time that I can devote can be reapportioned to human-in-the-loop analysis.
This is going to be the way for us to create virtual players on our team and to do specific tasks to enable us to define the most valuable use of our time, whatever it happens to be. That valuable use of time for some people may be closing their next client. It may simply be the sales cycle. For other people who, maybe, lack knowledge and experience, it may actually be executing on what you promised the client.”
However, Duane thinks that developing people management skills will be critical to success:
“If you step into the world of Agentic AI and you’re going down that path, you better have people management skills because you’re going to need them. That’s the skill set that will prove most valuable to managing Agentic AI work. You have to think of them not necessarily as humans, but as systems that need guidance.”
He said the most dramatic changes will impact content creators, but not in the way many expect.
Duane thinks that traditional writing roles face automation, but professionals who adapt will become more valuable than ever.
“If your full-time job is sitting down writing, that’s in jeopardy,” Duane acknowledges.
“The new model transforms writers from creators to instructors, managing multiple AI agents across different clients simultaneously. Instead of spending hours researching and writing, professionals can brief a dozen agents in minutes and focus on editing, refining tone, and ensuring accuracy.”
“You can tell a dozen agents for a dozen clients to all start and you can get them all started in less than two minutes and then in about 10 minutes have all of the output that you now will go in and edit one by one.”
Paradoxically, he thinks the role most in demand will be that of quality, experienced writers, but only those who learn to embrace and integrate AI efficiently and to manage an AI team of writers effectively.
By becoming a “human in the loop” editor who guides AI output, an experienced writer can add value in ways machines can’t: refining tone, ensuring factual accuracy, and aligning copy with brand voice and client needs.
“I recently wrote about a Microsoft survey that showed the overlay of how AI can do a job versus humans doing that same job … their point was, if you’re in these jobs, you kind of want to figure out how to pivot to something different.”
Strategic Roles Remain Safe
The jobs most vulnerable to AI are the repetitive ones that an AI can do faster, more easily, and more cheaply than a human.
While these execution-focused roles face disruption, strategic positions like CMOs remain relatively protected. These roles survive because they require experience-based decision-making that AI cannot replicate.
“It’s going to be harder to replace that level of experience because the system doesn’t have the experience,” Duane emphasizes.
The distinction isn’t about seniority but about the nature of the work. Repetitive tasks get automated first, while roles requiring strategic thinking, relationship building, and complex problem-solving remain human-dominated.
CMOs are considered “safe” not because they are senior, but because they are thinking in terms of strategy. They succeed by analyzing consumer behavior, identifying monetization opportunities, and aligning products with customer problems, capabilities that demand human insight and industry knowledge.
“They’re watching consumer behavior, and they’re trying to tease out from the consumer behavior: How do we make money from that? How do we align our product to solve a customer’s problem? And then that generates more sales. That’s the job of the CMO.
And then everything else under it, which is building and maintaining the team, running all the groups, and making sure everything is on track. It’s going to be harder to replace that level of experience because the system doesn’t have the experience.”
Preparing For The Future
Success in these evolving times requires immediate action on hiring and training. Companies must update job descriptions today to reflect skills needed in two to three years, or develop comprehensive training programs for existing staff.
“The people you’re hiring today, in theory, should still be with you in a couple of years. And if they are still with you in a couple of years and you don’t hire these new skills today, well then, you better have a training plan to get them there.”
I compared the current transformation with the early days of SEO, when pioneers navigated uncharted territory. Today’s professionals face a similar challenge of adapting to work alongside AI systems or risking obsolescence.
The future belongs to those who can embrace AI as a productivity multiplier rather than a replacement threat. Those who learn to instruct, guide, and optimize AI agents will find themselves more valuable than ever, while those who resist change may find themselves left behind.
“This isn’t just about surviving disruption,” Duane concluded. “It’s about positioning yourself to benefit from it.”
Watch the full video interview with Duane Forrester below.
Google’s Gary Illyes answered a question about why Google doesn’t use social sharing as a ranking factor, explaining that it’s about the inability to control certain kinds of external signals.
Kenichi Suzuki Interview With Gary Illyes
Kenichi Suzuki (LinkedIn profile), of Faber Company (LinkedIn profile), is a respected Japanese search marketing expert who has at least 25 years of experience in digital marketing. I last saw him speak at a Pubcon session a few years back, where he shared his findings on qualities inherent to sites that Google Discover tended to show.
Suzuki published an interview with Gary Illyes, where he asked a number of questions about SEO, including this one about SEO, social media, and Google ranking factors.
Gary Illyes is an Analyst at Google (LinkedIn profile) who has a history of giving straightforward answers that dispel SEO myths and sometimes startle, like the time recently when he said that links play less of a role in ranking than most SEOs tend to believe. Gary used to be a part of the web publishing community before working at Google, and he was even a member of the WebmasterWorld forums under the nickname Methode. So I think Gary knows what it’s like to be a part of the SEO community and how important good information is, and that’s reflected in the quality of answers he provides.
Are Social Media Shares Or Views Google Ranking Factors?
The question about social media and ranking factors was asked by Rio Ichikawa (LinkedIn profile), also of Faber Company. She asked Gary whether social media views and shares were ranking signals.
Gary’s answer was straightforward, with zero ambiguity: no. The interesting part of his answer was his explanation of why Google doesn’t use them and is unlikely to ever use them as a ranking factor.
Ichikawa asked the following question:
“All right then. The next question. So this is about the SEO and social media. Is the number of the views and shares on social media …used as one of the ranking signals for SEO or in general?”
Gary answered:
“For this we have basically a very old, very canned response and something that we learned or it’s based on something that we learned over the years, or particularly one incident around 2014.
The answer is no. And for the future is also likely no.
And that’s because we need to be able to control our own signals. And if we are looking at external signals, so for example, a social network’s signals, that’s not in our control.
So basically if someone on that social network decides to inflate the number, we don’t know if that inflation was legit or not, and we have no way knowing that.”
Easily Gamed Signals Are Unreliable For SEO
External signals that Google can’t control but can be influenced by an SEO are untrustworthy. Googlers have expressed similar opinions about other things that are easily manipulated and therefore unreliable as ranking signals.
Some SEOs might say, “If that’s true, then what about structured data? Those are under the control of SEOs, but Google uses them.”
Yes, Google uses structured data, but not as a ranking factor; it just makes websites eligible for rich results. Additionally, stuffing structured data with content that isn’t visible on the web page is a violation of Google’s guidelines and can lead to a manual action.
A recent example is the LLMs.txt protocol proposal, which is essentially dead in the water precisely because it is unreliable, in addition to being superfluous. Google’s John Mueller has said that the LLMs.txt protocol is unreliable because it could easily be misused to show highly optimized content for ranking purposes, and that it is analogous to the keywords meta tag, which SEOs stuffed with every keyword they wanted their web pages to rank for.
Mueller said:
“To me, it’s comparable to the keywords meta tag – this is what a site-owner claims their site is about … (Is the site really like that? well, you can check it. At that point, why not just check the site directly?)”
The content within an LLMs.txt file and its associated files is completely in the control of SEOs and web publishers, which makes it unreliable.
Another example is the author byline. Many SEOs promoted author bylines as a way to show “authority” and influence Google’s understanding of Expertise, Experience, Authoritativeness, and Trustworthiness. Some SEOs, predictably, invented fake LinkedIn profiles to link from their fake author bios in the belief that author bylines were a ranking signal. The irony is that the ease of abusing author bylines should have been reason enough for the average SEO to dismiss them as a ranking-related signal.
In my opinion, the key statement in Gary’s answer is this:
“…we need to be able to control our own signals.”
I think that the SEO community, moving forward, really needs to rethink some of the unconfirmed “ranking signals” they believe in, like brand mentions, and just move on to doing things that actually make a difference, like promoting websites and creating experiences that users love.
Watch the question and answer at about the ten minute mark: