Google’s John Mueller says small businesses may be hurting their search visibility by choosing generic keyword domains instead of building distinctive brand names.
Speaking on a recent episode of Search Off the Record, Mueller and fellow Search Advocate Martin Splitt discussed common challenges for photography websites.
During the conversation, Mueller noted that many small business owners fall into a “generic domain” trap that can make it harder to connect the business name with its work.
Why Keyword Domains Can Be a Problem
The topic came up when Splitt mentioned that his photography site uses a German term for “underwater photo” as its domain. Mueller responded:
“I see a lot of small businesses make the mistake of taking a generic term and calling it their brand.”
He explained that businesses choosing keyword-rich domains often end up competing with directories, aggregators, and other established sites targeting the same phrases.
Even if the domain name exactly matches a service, there’s little room to stand out in search.
The Advantage Of A Distinct Brand
Mueller contrasted this with using a unique business name:
“If your brand were Martin Splitt Photos then people would be able to find you immediately.”
When customers search for a brand they remember, competition drops. Mentions and links from other websites also become clearer signals to search engines, reducing the chance of confusion with similarly named businesses.
Lost Opportunities For Word-of-Mouth
Relying on a generic keyword domain can also make offline marketing less effective.
If a potential client hears about a business at an event but can’t remember its exact generic name, finding it later becomes more difficult.
Mueller noted:
“If you’ve built up a reputation as being kind of this underwater photography guy and they remember your name, it’s a lot easier to find you with a clear brand name.”
Why This Matters
For service providers like photographers, event planners, or contractors, including the service and location in a domain name can feel like a shortcut to local rankings.
Mueller’s advice suggests otherwise: location targeting can be achieved through content, structured data, and Google Business Profile optimization, without giving up a distinctive brand.
Looking Ahead
While Mueller didn’t recommend immediate rebrands for existing sites, he made it clear that unique, brandable domains give small businesses a defensible advantage in search and marketing.
For those still choosing a domain, the long-term benefits of memorability and differentiation can outweigh any short-term keyword gains.
Join Wayne Cichanski on August 20, 2025 for an exclusive webinar sponsored by iQuanti. Learn how to adapt your SEO strategy and site architecture for AI-driven queries and remain competitive in this new search era.
In this session, you’ll discover:
Why user experience, schema, and site architecture are now just as important as keywords
How to position your brand for discovery in AI-driven queries, not just rankings
Why this session is essential:
With generative AI reshaping search results across platforms like Google, Bing, and ChatGPT, it is crucial to rethink how your content is structured and how people interact with your brand in AI search. Do not get left behind. Optimize for AI-driven search now.
Register today for actionable insights and a roadmap to success in the AI search era. If you cannot attend live, do not worry. Sign up anyway and we will send you the full recording.
Marketers spent decades perfecting the funnel: awareness, consideration, conversion. We built personas. We mapped content to stages. We watched users click, scroll, bounce, convert. Everything was visible.
But GenAI doesn’t show its hand.
The funnel still exists; it’s just hidden inside the model. Every time someone prompts ChatGPT or Perplexity, they reveal their place in a decision journey.
Not by filling out a form or triggering a pixel, but through the prompt fingerprint embedded in their question.
That’s the new funnel. You’re still being evaluated. Still being chosen. But the targeting is now invisible, inferred, and dynamic.
And most marketers have no idea it’s happening. In fairness, I think only the cohort portion of this is actively happening today.
The ad system I explore here is purely theoretical (though Google appears to be working in a similar direction, and its rollout could realistically come soon; links below).
TL;DR: This article doesn’t just explain how I think GenAI is reshaping audience targeting; it introduces three new concepts I think you’ll need to understand the next evolution of paid media: Prompt Fingerprints, Embedding Fingerprints, and Intent Vector Bidding.
The funnel isn’t gone. It’s embedded. And it’s about to start building and placing ads on its own.
About the terminology:
Prompt Fingerprint and Intent Vector Bidding, I believe, are net-new terms for our industry, coined here to describe how future LLM-based systems could group users and auction ad space.
Conceptually, Intent Vector Bidding aligns with work already being done behind the scenes at Google (and I’m sure elsewhere), though I don’t believe they use this phrase.
Embedding Fingerprint draws from AI research but is reframed here as a brand-side construct to power targeting and retrieval inside GenAI systems.
This article was written over the last three weeks of July, and I was happy to find an article on August 4 talking about the concepts I’m exploring for a future paid ads bidding system.
Coincidental, but validating. The link to that article is below.
Image credit: Duane Forrester
What Cohort Targeting Used To Be
In the pre-AI era, cohort targeting was built around observable behaviors.
Retargeting audiences built from cookies and pixels.
Segments shaped by demographics, location, and device.
Lookalikes trained on customer traits and CRM lists.
We mapped campaigns to persona types and funnel stages. A 42-year-old dad in Ohio was mid-funnel if he clicked a product video. An 18-year-old in Mumbai was top-funnel if he downloaded an ebook.
These were guesses (often good ones), but still blunt instruments. And they were built on identifiers that don’t necessarily survive the GenAI shift.
Prompts Are The New Personas
Large language models don’t need to know who you are. They don’t really need to track you. They don’t care where you came from. They only care what you ask, and how you ask it.
Every prompt is vectorized. That means it’s turned into a mathematical representation of meaning, called an embedding. These vectors capture everything the model can glean from your input:
Topical domain.
Familiarity and depth.
Sentiment and urgency.
Stage of intent.
LLMs use this signal to group prompts with similar meaning, even if they come from completely different types of people.
And that’s how new cohorts can form. Not from identity. From intent.
Right now, most marketers are still optimizing for keywords, and missing the bigger picture. Keywords describe what someone is searching for. Prompt fingerprints describe why and how.
Someone asking “quietest portable generator for camping” isn’t just looking for a product; they’re signaling lifestyle priorities (minimal noise, portability, outdoor use) and stage (comparison shopping).
That single prompt tells the model far more than any demographic profile ever could.
And crucially, that person is joining a cohort of other prompters asking similar questions in similar ways. If your content isn’t semantically aligned with that group, it’s not just less visible. It’s excluded.
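To make that mechanics concrete, here is a toy sketch of how two prompts with shared intent score as similar even when the wording differs. The bag-of-words “embedding” is a deliberate simplification for illustration; real systems use dense vectors from an embedding model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # Production systems use dense vectors from an embedding API.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

p1 = embed("quietest portable generator for camping")
p2 = embed("low noise portable generator for camping trips")
p3 = embed("history of the roman empire")

print(round(cosine(p1, p2), 2))  # similar intent -> high score
print(round(cosine(p1, p3), 2))  # unrelated -> near zero
```

Two different people, phrasing the generator question differently, land near each other in this space; the Roman-history prompt lands nowhere close. That proximity, not demographics, is what forms the cohort.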
New Concept: Prompt Fingerprint
A unique embedding signature derived from a user’s language, structure, and inferred intent within a prompt. This fingerprint is your new persona.
It’s what the model actually sees and what it uses to determine which answers (and potentially which ads) you receive. (More on those ads later!)
When Context Creates The Cohort
Let’s say the Toronto Maple Leafs just won the Stanley Cup (hey, a guy can dream, right?!). Across the city, thousands of people start prompting:
“Where to celebrate in Toronto tonight?”
“Best bars near Scotiabank Arena open late?”
“Leafs’ victory parade time and location?”
None of these users knows each other. Some are teenagers, others are retirees. Some are local, others are visiting. Some are hardcore fans, some just like to party. But to the model, they’re now a momentary cohort: a group connected by real-time context, not long-term traits.
This is a fundamental break from everything digital marketers are used to. We’ve always grouped people by identity: age, interests, behavior, psychographics. But LLMs group people by situational similarity.
That creates new marketing opportunities and new blind spots.
Imagine you sell travel gear. A major snowstorm is forecast to slam into the Northeast U.S.
Within hours, prompts spike around early departures, snowproof duffel bags, and waterproof boots. A travel-stress cohort forms: people trying to escape before the storm hits. They’re not a segment you planned for. They’re a moment the system saw before you did.
If your content or product is aligned with that moment, you need a system that detects, matches, and delivers immediately. That’s what makes system-embedded ad tech essential.
You’re not buying audiences anymore. You’re buying alignment with the now, with a moment in time.
And this part is real today.
While the inner workings of commercial GenAI systems remain opaque, cluster-like behavior is often visible within a single platform session.
When you ask a string of similar questions in one ChatGPT or Gemini session, you may encounter repeated phrasing, brand mentions, or answer structure. That consistency suggests the model is grouping prompts by embedded meaning, not demographics or declared traits.
I can’t find studies or recorded examples of this behavior, so please drop a comment if you have a source for such data; I keep hearing about it, but haven’t found anything dedicated.
Looking Forward
Entire classes of micro-cohorts may form and disappear within hours. To reach them, you’ll need AI-powered, system-embedded ad systems that can:
Detect the cohort’s emergence through real-time prompt patterns.
Generate ads aligned with the cohort’s immediate need.
Place and optimize those ads before the window closes.
Humans can’t move at that speed. AI can. And it has to because the opportunity vanishes with the context.
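As a rough illustration of the detection step above, a minimal sketch of grouping a prompt stream into emergent cohorts might look like the following. The greedy clustering, the toy bag-of-words embedding, and the thresholds are all illustrative assumptions, not how any production system works.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_cohorts(prompts, threshold=0.3, min_size=2):
    # Greedy single-pass grouping: each prompt joins the first cluster
    # whose seed prompt it resembles, otherwise it starts a new cluster.
    clusters = []
    for p in prompts:
        v = embed(p)
        for c in clusters:
            if cosine(v, c[0]) >= threshold:
                c.append(v)
                break
        else:
            clusters.append([v])
    return [c for c in clusters if len(c) >= min_size]

stream = [
    "early flights out before the snowstorm",
    "waterproof boots for snow travel",
    "snowproof duffel bag for quick trip",
    "best pizza in chicago",
    "leave early before snowstorm hits flights",
    "snow travel boots waterproof",
]
print(len(detect_cohorts(stream)))  # two emergent storm-related cohorts
```

Even this crude pass separates the storm-travel prompts from the pizza query. A real system would do this continuously over millions of prompts, which is why the detect-generate-place loop has to be automated.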
Sidebar: What I Think Is Real Vs. What I Think Is Coming
Prompt Fingerprints – Live Today: Every GenAI system turns your prompt into a vector embedding. It’s already the foundation of how models interpret meaning.
Cohort Clustering by Prompt Similarity – Active Now: You can observe this in tools like ChatGPT and Gemini. Similar prompts return similar answers, meaning the system is clustering users based on shared intent.
Embedding Fingerprints – Possible Today: If brands structure their content for vectorization, they can create an embedding signature that aligns with relevant prompts. Most don’t yet.
Intent Vector Bidding – Emerging Theory: Not in the market yet, but close. Given current ad platform trends, this kind of bidding system is likely being explored widely across platforms.
Why Old-School Personas Will Work Less Effectively
Age. Income. ZIP code. None of that maps cleanly in vector space.
In the GenAI era, two people with radically different demographics might prompt in nearly identical ways and be served the same answers as a result.
It’s not about who you are. It’s about how your question fits into the model’s understanding of the world.
The classic marketing persona is much less reliable as a targeting unit. I’m suggesting the new unit is the Prompt Fingerprint, and marketers who ignore that shift may find themselves omitted from the conversation entirely.
The Funnel Is Still There — You Just Can’t See It
Here’s the thing: LLMs do understand funnel stages.
They just don’t label them the way marketers do. They infer them from phrasing, specificity, and structure.
TOFU: “Best folding kayaks for beginners”
MOFU: “Oru Inlet vs. Tucktec comparison”
BOFU: “Oru kayak discount codes July 2025”
These are prompt-level indicators of funnel stage. And if your content doesn’t align with how those prompts are formed, it likely won’t get retrieved.
Want to stay visible? Start mapping your content to the language patterns of funnel-stage prompts, not just to topics or keywords.
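As a starting point, that funnel-stage mapping can be prototyped with simple phrasing heuristics. The pattern lists below are illustrative assumptions, not a production taxonomy; a realistic version would use an LLM classifier over embeddings.

```python
import re

# Heuristic sketch: infer funnel stage from prompt phrasing.
# Checked in order, so transactional signals win over comparison signals.
STAGE_PATTERNS = [
    ("BOFU", re.compile(r"\b(discount|coupon|promo code|buy|price|deal)\b", re.I)),
    ("MOFU", re.compile(r"\b(vs\.?|versus|compare|comparison|review)\b", re.I)),
    ("TOFU", re.compile(r"\b(best|beginner|how to|what is|guide)\b", re.I)),
]

def infer_stage(prompt: str) -> str:
    for stage, pattern in STAGE_PATTERNS:
        if pattern.search(prompt):
            return stage
    return "UNKNOWN"

print(infer_stage("Best folding kayaks for beginners"))   # TOFU
print(infer_stage("Oru Inlet vs. Tucktec comparison"))    # MOFU
print(infer_stage("Oru kayak discount codes July 2025"))  # BOFU
```

A library like this, run over your own prompt research, gives you a first-cut map of which funnel-stage language your content does and does not cover.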
Embedding Fingerprints: The New Targeting Payload
It’s not just prompts that get vectorized. Your content does, too.
Every product page, blog post, or ad you write forms its own Embedding Fingerprint, a vector signature that reflects what your message actually means in the model’s understanding.
Repurposed Concept: Embedding Fingerprint
Originally used in machine learning to describe the vector signature of a piece of data, this concept is reframed here for content strategy.
An embedding fingerprint becomes the reusable vector signature tied to a brand, product, or message – a semantic identity that determines cohort alignment in GenAI systems.
If your content’s fingerprint aligns closely with a user’s prompt fingerprint, it’s more likely to be retrieved. If not, it’s effectively invisible, no matter how “optimized” it may be in traditional terms.
Intent Vector Bidding: A Possible New Advertising Paradigm
So, what happens when GenAI systems all start monetizing this behavior?
You could get a new kind of auction. One where the bid isn’t for a keyword or a user profile, per se, but for alignment.
New Concept: Intent Vector Bidding
A real-time ad bidding mechanism where placement is determined by alignment between a user’s prompt intent vector and an advertiser’s content vector.
To be clear: this is not live today in any public, commercial ad platform that I am aware of. But I think it’s well within reach. Models already understand alignment. Prompt clustering is already happening.
What’s missing is the infrastructure to let advertisers fully plug in. And you can bet the major players (OpenAI, Google, Meta, Microsoft, Amazon, etc.) are already thinking this way. Google is already looking at this openly.
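To ground the idea, here is a toy sketch of what an alignment-weighted auction could look like. Everything in it (the advertisers, bids, bag-of-words embeddings, and the 0.2 alignment gate) is hypothetical; no commercial platform is known to run this mechanism today.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical advertisers: (name, content fingerprint, max bid in dollars).
advertisers = [
    ("MeshTentCo", embed("breathable mesh tent high airflow summer camping"), 2.00),
    ("WinterGearCo", embed("insulated four season tent winter snow"), 3.50),
]

def run_auction(prompt, min_alignment=0.2):
    intent = embed(prompt)
    # Alignment is the gatekeeper: poorly aligned ads are excluded no
    # matter how high the bid; among the rest, rank by bid x alignment.
    scored = [(bid * cosine(intent, fp), name)
              for name, fp, bid in advertisers
              if cosine(intent, fp) >= min_alignment]
    return max(scored)[1] if scored else None

print(run_auction("best tent for camping in extreme summer heat"))
```

Note the outcome: the lower bidder wins the summer-heat prompt because its content vector aligns, while the higher bidder is excluded entirely. That inversion of money-first auctions is the whole point of the concept.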
We’ve Been Heading Here All Along
The shift toward LLM-native ad platforms might sound radical, but in reality, we’ve been headed this way for over a decade.
Step by step, platform by platform, advertisers have been ceding control to automation, often without realizing they were walking toward full autonomy.
Before we trace the path, please keep in mind that while I do have some background in the paid ad world, it’s much less than many of you.
I’m attempting to keep my date ranges and tech evolutions accurate, and I believe they are, but others may have a different view.
My point here isn’t historical precision; it’s to demonstrate a continual, directional progression, not to nail down which day of which year Google did X.
And, I’ll add, maybe I’m entirely off base with my thinking here, but it’s still been interesting to map all this out, especially since Google has already been digging in on a similar concept.
1. From Manual Control To Rule-Based Efficiency
Early 2000s – 2015
In the early days of search and display, marketers controlled everything: keyword targeting, match types, ad copy, placements, and bidding.
Power users lived inside tools like AdWords Editor, manually optimizing bids by time of day, device type, and conversion rate.
Automation started small, with rule-based scripts for bid adjustments, budget caps, and geo-targeting refinements. You were still the pilot, just with some helpful instruments.
2. From Rule-Based Logic To AI-Guided Bidding
2015 – 2018
Then came Smart Bidding.
Google introduced Target CPA, Target ROAS, and Enhanced CPC: bid strategies powered by machine learning models that ingested real-time auction data (device, time, location, conversion likelihood) and made granular decisions on your behalf.
Marketers set the goal, but the system chose the path. Control shifted from how to what result you want. This was a foundational step toward AI-defined outcomes.
3. From AI-Guided Bidding To Creative Automation
2018 – 2023
Next came the automation of the message itself.
Responsive Search Ads let advertisers upload multiple headlines and descriptions and Google handled the permutations and combinations.
Meta and TikTok adopted similar dynamic creative formats.
Then Google launched Performance Max (2021), a turning point that eliminated keywords entirely.
You provide assets and conversion goals.
The system decides where and when to show your ads, whether across Search, YouTube, Display, Gmail, Maps, and more.
Targeting becomes opaque. Placement becomes invisible. Strategy becomes trust.
You’re no longer steering the vehicle. You’re defining the destination and trusting the algorithm to get you there efficiently.
4. From Creative Automation To Generative Execution
2023–2025
The model doesn’t just optimize messages anymore; it writes them.
Meta’s AI Sandbox generates headlines and CTAs from a prompt.
TikTok’s Creative Assistant produces hook-driven video scripts on demand.
Third-party tools and GPT-based agents build full ad campaigns, including copy and targeting.
Google’s Veo 3 and Veo 3 Fast, now live on Vertex AI, generate polished ads and social clips from text or image-to-video inputs, optimized for rapid iteration and programmatic use.
This isn’t sci-fi. It’s what’s coming to market today.
5. What Comes Next – And Why It’s Inevitable
The final leap is where you don’t submit an ad, you instead submit your business.
A fully LLM-native ad platform would:
Accept your brand’s value propositions, certifications, product specs, creative assets, brand guidelines, company vision statements, and guardrails.
Monitor emergent cohorts in real time based on prompt clusters and conversation spikes.
Inject your brand into those moments if, and only if, your business’s vector aligns with the cohort’s intent.
Charge you automatically for participation in that alignment.
You wouldn’t target. You wouldn’t build campaigns. You’d just feed the system and monitor how well it performs as a semantic extension of your business.
The ad platform becomes a meaning-based proxy for your company, an intent-aware agent acting on your behalf.
That’s not speculative science fiction. It’s a natural endpoint of the road we’re already on, I believe. Performance Max removed the steering wheel. Generative AI threw out the copywriter. Prompt-aligned retrieval will take care of the rest.
Building The LLM-Native Ad Platform
This is a theoretical suggestion of what could be our future for paid ads within AI-generated answer systems.
To make Intent Vector Bidding real at scale, the underlying ad platform will have to evolve dramatically. I don’t see this as a plug-in bolted onto legacy PPC infrastructure.
It will be a fully native layer inside LLM-based systems, one that replaces both creative generation and ad placement management.
Here’s how it could work:
1. Advertiser Input Shifts From Campaigns To Data Feeds
Instead of building ads manually, businesses upload:
Targeted keywords, concepts, and product entities.
Business limitations: geography, availability, compliance.
Structured value props and pricing tiers.
2. The System Becomes The Creative + Placement Engine
The LLM:
Detects emerging prompt cohorts.
Matches intent vectors to advertiser fingerprints.
Constructs and injects ads on the fly, using aligned assets and messaging.
Adjusts tone and detail based on prompt stage (TOFU vs BOFU).
3. Billing Becomes Automated And Embedded
Accounts are pre-funded or credit-card linked.
Ad spend is triggered by real-time participation in retrieval or output injection.
No ad reps. No auctions you manage. Just vector-aligned outcomes billed per engagement, view, or inclusion.
Ad creation and placement collapse into a single priced service, with the system managing everything in real time.
If you want some more thoughts on this concept, or one that’s closely related, Cindy Krum was recently on Shelley Walsh’s IMHO show, where she talked about whether she thinks Google will put ads inside Gemini’s answers, and it was an interesting discussion.
Marketers and ad teams won’t be eliminated. Instead, they’ll become the data stewards and strategic interpreters of the system.
Expectation setting: Clients will need help understanding why their content shows up (or doesn’t) in GenAI outputs.
Data maintenance: The system is only as good as the assets you feed it, and relevance and freshness matter.
Governance and constraints: Humans will define ethical limits, messaging boundaries, and exclusions.
Training and iteration: AI ad visibility will rely on live outputs and observed responses, not static dashboards. You’ll tune prompts, inputs, and outputs based on what the system retrieves and how often it surfaces your content.
In this model, the ad strategist becomes part translator, part data curator, part retrieval mechanic.
And the ad platform? It becomes autonomous, context-driven, and functionally invisible, until you realize your product’s already been included in the buyer’s decision … and you’ve been billed accordingly.
A Closer Look: Intent Vector Bidding In Action
Imagine you’re an outdoor gear brand and there’s a sudden heatwave hitting the Pacific Northwest. Across Oregon and Washington, people begin prompting:
“Best ultralight tents for summer hiking”
“Camping gear for extreme heat”
“Stay cool while backpacking in July”
The model recognizes a spike in semantically similar prompts, corroborated by signals from news sources. A heatwave cohort forms.
At the same time, your brand has a product page and ad copy about breathable mesh tents and high-vent airflow systems.
If your content has been vectorized (or if your system embeds an ad payload with a strong Embedding Fingerprint), it’s eligible to enter the auction.
But this isn’t a bid based on demographic data or historical retargeting. It’s based on how closely your product vector aligns with the live cohort’s prompt vectors.
The LLM chooses the most semantically aligned match. The better your alignment, the more likely your product is included in the AI’s answer, or inserted into the contextual ad slot within the response.
No campaign setup. No segmented audience targeting. Just semantic match at machine speed. This is where creative, product, and performance converge, and that convergence rewrites what it means to “win” in modern advertising.
What Marketers Can Do Right Now
There’s no dashboard that will tell you which Prompt Fingerprints you’re aligned with. That’s the hard part.
But you can start by thinking like a model until tools start to develop features that allow you to model your Prompt Fingerprint.
Start with:
Simulated prompt testing: Use GPT-4 (or Gemini or any other) to generate sample queries by funnel stage and see what brands get retrieved.
Create content for multi-cohort resonance: for example, a camping blog that aligns with both eco-conscious minimalists and adventure-seeking parents.
Build your own prompt libraries: Classify by intent stage, specificity, and phrasing. Use these to guide creative briefs, content chunking, and SEO.
Track AI summaries: In platforms like Perplexity, Gemini, and ChatGPT, your brand might influence answers even when you’re not explicitly mentioned. Your goal is to become the attributed source, not just a silent contributor.
In this new, genAI version of search, you’re no longer optimizing for page views. You’re optimizing for retrievability by semantic proximity.
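The simulated prompt testing above can be scripted. This sketch uses a canned stand-in for the model call (`ask_model` is a placeholder you would replace with a real API client for GPT-4, Gemini, or another system) and counts which brands surface across a prompt set:

```python
import re
from collections import Counter

# Hypothetical stand-in for a real LLM call. Replace with an actual API
# client; the responses here are canned purely for illustration.
def ask_model(prompt: str) -> str:
    canned = {
        "best folding kayaks for beginners": "Popular picks include Oru and Tucktec models.",
        "oru inlet vs. tucktec comparison": "The Oru Inlet is lighter; the Tucktec is cheaper.",
        "oru kayak discount codes july 2025": "Check Oru's site for current promotions.",
    }
    return canned.get(prompt.lower(), "")

def brand_mentions(prompts, brands):
    # Tally how often each brand appears in the answers to a prompt set.
    counts = Counter()
    for p in prompts:
        answer = ask_model(p)
        for b in brands:
            if re.search(rf"\b{re.escape(b)}\b", answer, re.I):
                counts[b] += 1
    return counts

prompts = [
    "Best folding kayaks for beginners",
    "Oru Inlet vs. Tucktec comparison",
    "Oru kayak discount codes July 2025",
]
print(brand_mentions(prompts, ["Oru", "Tucktec"]))
```

Run against a live model on a schedule, a tally like this becomes a crude share-of-voice tracker for AI answers, per funnel stage.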
The Rise Of The Prompt-Native Brand
Some brands will begin designing entire messaging strategies around prompt behavior. These prompt-native brands won’t wait for traffic to arrive. They’ll engineer their content to surf the wave of prompt clusters as they form.
Product copy structured to match MOFU queries.
Comparison pages written in prompt-first language.
AI ad copy tuned by cohort spike detection.
And eventually, new brands will emerge that never even needed a traditional website. Their entire presence will exist in AI conversations.
Built, tuned, and served directly into LLMs via vector-aligned content and Intent Vector Bids.
Wrapping Up
This is the next funnel, and it’s not a page. It’s a probability field. The funnel didn’t disappear. It just went invisible.
In traditional marketing, we mapped clear stages (awareness, interest, decision) and built content to match. That funnel still exists. But now it lives inside the model. It’s inferred, not declared. It’s shaped by prompts, not click paths.
And if your content doesn’t align with what the model sees in that moment, you’re missing from the retrieval.
Over the past decade, digital marketers have witnessed a dramatic shift in how search budgets are allocated. Companies once funded SEO teams alongside PPC teams, but a PPC-first mindset has come to dominate the inbound marketing space.
Where Have SEO Budgets Gone?
Today, more than $150 billion is spent annually on paid search in the United States alone, while only $50 billion is invested in SEO.
With Google Ads, every dollar has a direct, reportable outcome:
Impressions.
Clicks.
Conversions.
SEO, by contrast, has long been a black box.
As a result, agencies and the clients that hire them followed the money, even when SEO’s results were higher.
PPC’s Direct Attribution Makes PPC Look More Important, But SEO Still Dominates
Hard facts:
SEO drives 5x more traffic than PPC.
Companies pay 3x more on PPC than SEO.
Image created by MarketBrew, August 2025
You Can Now Trace ROI Back To SEO
As a result, many SEO professionals and agencies want a way back to organic. Now, there is one, and it’s powered by attribution.
Attribution Is The Key To Measurable SEO Performance
Instead of sitting on the edge of the search engine’s black box, guessing what might happen, we can now go inside the SEO black box, to simulate how the algorithms behave, factor by factor, and observe exactly how rankings react to each change.
With this model in place, you are no longer stuck saying “trust us.”
You can say, “Here’s what we changed. Here’s how rankings moved. Here’s the value of that movement.” Whether the change was a new internal link structure or a content improvement, it’s now visible, measurable, and attributable.
For the first time, SEO teams have a way to communicate performance in terms executives understand: cause, effect, and value.
This transparency is changing the way agencies operate. It turns SEO into a predictable system, not a gamble. And it arms client-facing teams with the evidence they need to justify the budget, or win it back.
How Agencies Are Replacing PPC With Measurable Organic SEO
For agencies, attribution opens the door to something much bigger than better reporting; it enables a completely new kind of offering: performance-based SEO.
Traditionally, SEO services have been sold as retainers or hourly engagements. Clients pay for effort, not outcomes. With attribution, agencies can now flip that model and say: You only pay when results happen.
Enter MarketBrew’s AdShift feature, which models this value and success as shown here:
Screenshot from a video by MarketBrew, August 2025
The AdShift tool starts with a keyword you enter, then discovers the competitive URLs in that keyword’s top similarity cluster: your own website plus up to four top-ranking competitors.
Screenshot of PPC vs. MarketBrew comparison dashboard by MarketBrew, August 2025
AdShift averages CPC and search volume across all keywords and URLs, giving you a reliable market-wide estimate of the monthly PPC investment your brand would need to rank #1.
Screenshot of a dashboard by MarketBrew, August 2025
AdShift then calculates the percentage of your PPC spend that can be replaced to fund SEO.
This lets you model your own Performance Plan, with variable discounts on MarketBrew license fees, always priced at less than 50% of the PPC fee for the clicks replaced by new SEO traffic.
Screenshot of a dashboard by MarketBrew, August 2025
AdShift then simulates the selected PPC replacement plan option, based on the keyword footprint, so you can instantly see the savings from the associated Performance Plans.
That’s the heart of the PPC replacement plan: a strategy you can use to gradually shift a client’s paid search budget into measurable, performance-based SEO.
What Is A PPC Replacement Plan? Trackable SEO.
A PPC replacement plan is a strategy in which agencies gradually shift their clients’ paid search budgets into organic investments, with measurable outcomes and shared performance incentives.
Here’s how it works:
Benchmark Paid Spend: Identify the current Google Ads budget, e.g., $10,000 per month or $120,000 per year.
Forecast Organic Value: Use search engine modeling to predict the lift in organic traffic from specific SEO tasks.
Execute & Attribute: Complete tasks and monitor real-time changes in rankings and traffic.
Charge on Impact: Instead of billing for time, bill for results, often at a fraction of the client’s former ad spend.
This is not about replacing all paid spend.
Branded queries and some high-value targets may remain in PPC. But for the large, expensive middle of the keyword funnel, agencies can now offer a smarter path: predictable, attributable organic results, at a lower cost-per-click, with better margins.
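Worked through with hypothetical figures (the budget, CPC, replacement rate, and fee fraction below are all assumptions for illustration, not MarketBrew’s actual pricing), the four steps look like this:

```python
# Hypothetical worked example of the four replacement-plan steps.
monthly_ppc_budget = 10_000.0   # step 1: benchmark paid spend
avg_cpc = 2.50                  # assumed average cost per click
ppc_clicks = monthly_ppc_budget / avg_cpc

# Step 2: forecast organic lift. Assume modeling predicts the SEO
# tasks will replace 40% of those paid clicks.
replacement_rate = 0.40
replaced_clicks = ppc_clicks * replacement_rate

# Steps 3-4: execute, attribute, and charge on impact. Bill a fee per
# replaced click at a fraction (here 50%) of the former CPC.
fee_per_click = avg_cpc * 0.50
performance_fee = replaced_clicks * fee_per_click

client_savings = replaced_clicks * avg_cpc - performance_fee
print(f"Replaced clicks/month: {replaced_clicks:.0f}")
print(f"Performance fee: ${performance_fee:,.0f}")
print(f"Client savings: ${client_savings:,.0f}")
```

Under these assumptions, the client pays half of what those clicks used to cost, and the agency is paid on delivered results rather than hours.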
And most importantly, instead of lining Google’s pockets with PPC revenue, your investments begin to fuel both organic and LLM searches!
Real-World Proof That SEO Attribution Works
Agencies exploring this new attribution-powered model aren’t just intrigued … they’re energized. For many, it’s the first time in years that SEO feels like a strategic growth engine, not just a checklist of deliverables.
“We’ve pitched performance SEO to three clients this month alone,” said one digital strategy lead. “The ability to tie ranking improvements to specific tasks changed the entire conversation.”
“Instead of walking into meetings looking to justify an SEO retainer, we enter with a blueprint representing a SEO/GEO/AEO Search Engine’s ‘digital twin’ with the AI-driven tasks that show exactly what needs to be changed and the rankings it produces. Clients don’t question the value … they ask what’s next.”
Several agencies report that new business wins are increasing simply because they offer something different. While competitors stick to vague SEO promises or expensive PPC management, partners leveraging attribution offer clarity, accountability, and control.
And when the client sees that they’re paying less and getting more, it’s not a hard sell, it’s a long-term relationship.
A Smarter, More Profitable Model for Agencies and SEOs
The traditional agency model in search has become a maze of expectations.
Managing paid search may deliver short-term wins, but it comes down to a bidding war in which only those with the biggest budgets win. SEO, meanwhile, has often felt like a thankless task: necessary but underappreciated, valuable but difficult to prove.
Attribution changes that.
For agencies, this is a path back to profitability and positioning. With attribution, you’re not just selling effort … you’re selling outcomes. And because the work is modeled and measured in advance, you can confidently offer performance plans that are both client-friendly and agency-profitable.
For SEOs, this is about getting the credit they deserve. Attribution allows practitioners to demonstrate their impact in concrete terms. Rankings don’t just move; they move because of you. Traffic increases aren’t vague; they’re connected to your specific strategies.
Now, you can show this.
Most importantly, this approach rebuilds trust.
Clients no longer have to guess what’s working. They see it. In dashboards, in forecasts, in side-by-side comparisons of where they were and where they are now. It restores SEO to a place of clarity and control where value is obvious, and investment is earned.
The industry has been waiting for this. And now, it’s here.
From PPC Dependence to Organic Dominance — Now Backed by Data
Search budgets have long been upside down, pouring billions into paid clicks that capture a mere fraction of user attention, while underfunding the organic channel that delivers lasting value.
Why? Because SEO lacked attribution.
That’s no longer the case.
Today, agencies and SEO professionals have the tools to prove what works, forecast what’s next, and get paid for the real value they deliver. It’s a shift that empowers agencies to move beyond bidding-war PPC management and into lower-cost, higher-ROAS, performance-based SEO.
This isn’t just a new service model; it’s a rebalancing of power in search.
Organic is back. It’s measurable. It’s profitable. And it’s ready to take center stage again.
The only question is: will you be the agency or brand that leads the shift or watch as others do it first?
Generative AI platforms such as ChatGPT, Perplexity, and Claude now run live web searches for many prompts. Ensuring a site is crawlable by AI bots is therefore essential for earning mentions and citations on those platforms.
Here’s how to optimize a website for AI crawlers.
Disable JavaScript
Make sure your pages are readable with JavaScript disabled.
Unlike Google’s crawler, AI bots are immature. Many tests from industry practitioners confirm AI crawlers cannot always render JavaScript.
Most publishers and businesses no longer worry about JavaScript crawlability since Google has rendered those pages for years. Hence there’s a huge number of JavaScript-heavy sites.
The Chrome browser can render a site without JavaScript. To activate:
Go to your site using Chrome.
Open Web Developer tools at View > Developer > Developer Tools.
Click Settings (behind the gear icon) on the right side of the panel.
Scroll down and check the option “Disable JavaScript” under “Debugger.”
Disable JavaScript in Chrome’s Developer Tools panel.
Now browse your site, making sure:
All essential content is visible, especially behind tabs and drop-down menus.
The navigation menu and other links are clickable.
For video embeds, there’s an option to click to the original video, access a transcript, or both.
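The same check can be scripted. Below is a minimal Python sketch that approximates what a non-rendering AI crawler sees: it strips script and style blocks from the raw HTML and reports whether key phrases survive. The sample page and phrases are hypothetical, and a real audit would fetch live pages rather than a string.

```python
import re

def visible_without_js(html: str, required_phrases: list[str]) -> dict[str, bool]:
    """Report which phrases appear in raw HTML, i.e., what a
    non-rendering AI crawler would see before any JavaScript runs."""
    # Strip script/style blocks; their contents are never visible text.
    text = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html,
                  flags=re.DOTALL | re.IGNORECASE)
    # Drop remaining tags to approximate the visible text.
    text = re.sub(r"<[^>]+>", " ", text).lower()
    return {phrase: phrase.lower() in text for phrase in required_phrases}

# Hypothetical page: the headline is in the HTML, but the pricing
# is injected client-side by JavaScript.
html = """
<html><body>
  <h1>Underwater Photography Portfolio</h1>
  <div id="pricing"></div>
  <script>document.getElementById('pricing').innerText = 'From $200/session';</script>
</body></html>
"""
print(visible_without_js(html, ["Underwater Photography", "From $200"]))
# The JS-injected pricing is reported as missing.
```

A page that passes this kind of check is readable even by bots that never execute JavaScript.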
You can use Aiso, an AI optimization platform, to ensure AI bots can access and crawl your site. With a free trial, the platform allows a few free checks. Go to the “Website crawlability” section and enter your URL.
The tool will conduct a thorough review with suggestions on improving access for AI crawlers and even show the appearance of pages with JavaScript disabled.
Aiso can review a site’s use of JavaScript and suggest improvements for AI bot access.
Ensure AI Access
Make sure your site allows access for AI bots. Some content management platforms and plugins disallow AI access by default — site owners are often unaware.
To check, review your robots.txt file at [yoursite.com]/robots.txt.
The AI platforms themselves can interpret the file to confirm it allows access. Paste your robots.txt URL into a ChatGPT prompt, for example, and request an analysis.
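You can also check programmatically with Python's standard-library robots.txt parser. The robots.txt content below is a made-up example; the bot names are publicly documented user agents, but verify them against each vendor's current documentation before relying on this list.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt; in practice, fetch [yoursite.com]/robots.txt.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# Documented AI crawler user agents (confirm against vendor docs).
ai_bots = ["GPTBot", "PerplexityBot", "ClaudeBot"]

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

for bot in ai_bots:
    allowed = rp.can_fetch(bot, "/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

In this example, the explicit `GPTBot` rule blocks OpenAI's crawler while the wildcard rule leaves the other bots with full access.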
Schema markup makes it easier for AI bots to extract essential information from a page (or bypass a block) without crawling it in full.
For example, many website FAQ sections have collapsible elements that prevent access to AI bots. Schema’s FAQPage Type replicates all questions and answers, enabling bot visibility.
Similarly, Schema’s Article Type can communicate context and authorship of content.
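As an illustration, here is a minimal FAQPage JSON-LD block generated with Python. The question and answer are placeholders; the JSON output would be embedded in the page inside a `<script type="application/ld+json">` tag so bots can read the Q&A without expanding any collapsible elements.

```python
import json

# Minimal FAQPage structured data per schema.org.
# The question/answer content here is a placeholder.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you offer underwater photo sessions?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, sessions are available year-round in heated pools.",
            },
        },
    ],
}

print(json.dumps(faq_schema, indent=2))
```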
In 2024, Google turned the SERP into a storefront.
In 2025, it turned it into a marketplace with an AI-based mind of its own.
Over the past 12 months, Google has layered AI into nearly every inch of the shopping search experience by merging organic results with product listings, rolling out AI Overviews that replace traditional product grids, and introducing a full-screen “AI Mode.”
Meanwhile, ChatGPT is inching closer to becoming a personalized shopping assistant, but for now, the most dramatic shifts for SEOs are still happening inside Google.
To understand the impact, I revisited a set of 35,000+ U.S. shopping queries I first analyzed in July 2024.
In today’s Memo, I’m breaking down the state of Google Shopping SERPs in 2025. A year later, the landscape looks … different:
AI Overviews have started to displace classic ecommerce SERP features.
Image packs dominate the page.
Discussion forums are on the decline.
Plus, an exclusive comparison of 2024 vs. 2025 ecommerce SERP features and a full, detailed checklist of optimizations for the SERP features that matter most today (available for premium subscribers. I show you exactly how I do this).
This memo breaks down exactly what’s changed in Google’s shopping SERPs over the past year. Let’s goooooo.
In the last 12 months, Google hasn’t just transformed itself into a publisher that serves up content to answer queries right in the SERP (via AI Overviews and AI Mode). It’s also built out an extensive marketplace for shopping queries.
However, Google now provides a whole slew of SERP features and AI features for ecommerce queries that are at least as impactful as AIOs and AI Mode.
Meanwhile, ChatGPT & Co. are starting to include product recommendations with links, reviews, buy buttons, and recommendations directly in the chat. (But this analysis focuses on Google results only.)
To better understand the key trends for Google shopping queries, in July 2024 I used seoClarity to analyze 35,305 U.S. keywords across product categories like fashion, beds, plants, and automotive, covering the preceding five months.
We’re revisiting that data today, examining those same keywords and categories for July 2025.
The results:
AI Overviews have started to replace product grids.
Ecommerce SERPs are increasingly visual.
There are more question-related SERP features (like People Also Ask) and less UGC.
Fewer videos are appearing across the SERPs for product-related searches.
About the data:
This data specifically covers Google search results and features. It doesn’t include ChatGPT, Perplexity, etc. However, we’ll touch on this briefly below.
Over 35,000 search queries were analyzed, and the same group was examined in both July 2024 and July 2025.
The search queries analyzed include product-related queries across a broad spectrum, from brand terms (like Walmart) to individual products (iPads) and categories (e-bikes).
If you’re curious about the exact list of Google shopping SERP features included in this analysis, they’re included at the bottom of this memo.
Web results and the shopping tab for shopping searches were combined as a response to Amazon’s long-standing dominance.
The shopping tab still exists, sure.
But for product-related searches, the main search page and the Google shopping experience look incredibly similar, with the Shopping tab streamlined to a product-grid experience only.
Google has fully transitioned into a shopping marketplace by adding product filters to search result pages and implementing a direct checkout option.
These new features create an ecommerce search experience within Google Search and may significantly impact the organic traffic merchants and retailers rely on.
Google has quietly introduced a direct checkout feature that allows merchants to link free listings directly to their checkout pages.
Google’s move to a shopping marketplace was likely driven by the need to compete with Amazon’s successful advertising business.
Google faces the challenge of balancing its role as a search engine with the need to generate revenue through its shopping marketplace, especially considering its dependence on partners for logistics.
And now?
Google’s layered AI and personalized SERP features into the shopping experience as well.
Below are the Google SERP features I’ll be examining in this year-over-year (YoY) analysis, specifically, with a quick synopsis if you’re not familiar.
Images: A horizontal carousel of image results related to the query pulled from product pages or image-rich content; usually appear at the top or mid-page and link to Google Images or directly to source pages.
Products: Displays a visual grid or carousel of products with titles, images, prices, reviews, and merchants. This includes free product listings (organic) and Product Listing Ads (PLAs) (paid).
People Also Ask (PAA): Related questions users frequently ask. Clicking a question reveals a source link. (These often inform Google’s understanding of search intent and user curiosity.)
Things To Know: An AI-driven feature that breaks a topic into subtopics and frequently misunderstood concepts. Found mostly on broad, educational, or commercial-intent queries, this is Google’s way of guiding users deeper into a topic and understanding deeper search intent.
Discussion and Forums: Highlights relevant threads from platforms like Reddit, Quora, and niche forums. Answers are often community-generated and authentic. Replaced some traditional “People Also Ask” real estate for shopping or reviews queries.
Knowledge Graph: Displays structured facts about a person, brand, product, or topic-sourced from trusted databases. Appears in a right-hand sidebar or embedded box.
Buying Guide: A feature that explains what to consider when shopping for a product, e.g., “What to look for in a DSLR camera.” Usually placed mid-page for commerce-intent queries. It mimics a human assistant or product expert’s advice. Contains snippets and links to sources.
Local Listing: Shows local business listings with map, ratings, hours, and quick call/location links. Prominent in searches with local intent like “shoe store near me” or “coffee shops in Detroit.”
AI Overview: Generative AI summary at the top of the SERP that answers the query using information synthesized from multiple sources. For shopping queries, it often includes product summaries.
Video: A carousel or block of video content, mostly from YouTube, but also from other video-hosting platforms. May include timestamps, captions, or “key moments” for long videos.
Answer Box (a.k.a. Featured Snippet): A direct answer to a query extracted from a single web page, shown at the top of the SERP in a stylized box. Often used for factual or how-to queries. Includes the source link.
Free Product Listings: Organic product results submitted via Google Merchant Center feeds. These listings show in the Shopping tab and occasionally in the main SERP product grid (distinct from paid Shopping ads).
From sources across the web: A content block showing opinions or quotes on a product or topic from a variety of sites. Often used in AI Overviews or product reviews to surface aggregated user sentiment or editorial input.
FAQ: An expandable schema-driven block showing common questions and answers sourced from a specific page. Typically appears under a site’s organic result when FAQ schema is properly implemented.
PPC: Sponsored links shown at the top or bottom of the SERP, marked “Sponsored” or “Ad.” These can show up as text, product images/grids, etc.
In addition to the standard SERP features tracked in this analysis via the above list, here’s a look at the current Google shopping marketplace SERP features and/or elements (like toggle filters) that we’re dealing with at the halfway point of 2025.
AI Mode (Full-Screen): Interactive, immersive full-page AI shopping experience with filters and buy links.
Shopping filters inline: Dynamic filters (brand, color, price) within AI Mode and Shopping grids.
Virtual try-on: This feature was recently released. It’s a generative AI module showing clothes on diverse body types (expanding by category).
Price tracking/alerts: Users can track price drops and get alerts via Gmail or Chrome. Honestly, a pretty great tool.
Popular stores/top stores: Scrollable carousel of prominent retailers for the product category.
Product sites (EU market): Organic feature that shows prominent ecommerce domains (due to regulatory changes in the EU).
Trending products/popular products: Highlights products rising in popularity based on recent search activity.
Merchant star ratings: Display review scores and counts in summaries or tiles.
Free shipping/returns labels: Highlighted callouts in product tiles.
“Verified by Google” merchant badges: Google-trusted seller icon in some listings.
Quick comparison panels: Side-by-side spec or feature comparisons (this is an early-stage rollout, similar to Amazon’s product comparison panel or module).
To illustrate with an example, let’s say you are looking for kayaks (summertime!).
On desktop (logged-in), Google will now show you product filters on the left sidebar and “Popular products” carousels in the middle on top of classic organic results, but under ads, of course.
Image Credit: Kevin Indig
Directly under the shopping product grids, you have traditional organic results along with an on-SERP Buying Guide, similar to People Also Ask questions (which is also included further down the page).
Both the Buying Guide and People Also Ask features deliver answers with links to original content.
Image Credit: Kevin Indig
On mobile, you get product filters at the top, ads above organic results, and product carousels in the form of Popular products or “Products for you.”
Image Credit: Kevin Indig
This experience doesn’t look very different from Amazon … which is the whole point.
Image Credit: Kevin Indig
Google’s shopping experience lets users explore products on a variety of marketplaces, like Amazon, Walmart, eBay, Etsy, & Co.
From an SEO perspective, the prominent position of product grids (listings) and filters likely has a significant impact on CTR, organic visibility, and ultimately, revenue.
But let’s take a look at the same search via AI Mode.
Below is the desktop experience via Chrome.
I’ve zoomed out here so you get the whole view, but it takes the user two to three scrolls to get to the product grid when in a standard view.
Image Credit: Kevin Indig
Here on mobile, getting to product recommendations takes several scrolls. In one instance, I received a result that included a list of places near me in my city where I could get a kayak.
Image Credit: Kevin Indig
Keeping the current Google shopping SERP experience in mind, here’s what the data shows.
This is the most noteworthy shift found in the data, as you can probably guess.
Since March 2025, when Google began rolling out AI Overviews more aggressively, they’ve also started replacing (organic) product grids.
Image Credit: Kevin Indig
The graph above might look like it represents minimal changes when you examine it in a timeline view, but you can see the trend even better when moving AIOs to a second y-axis (below).
Image Credit: Kevin Indig
I expect AI Overviews to still show the product grids searchers have become accustomed to, although they might take a different form.
When searching for [which camera tripod should I buy?], for example, we find an AI Overview at the top with specific product recommendations.
Image Credit: Kevin Indig
Of course, AI Mode takes that a step further with richer product recommendations and buying guides.
(Shoutout to The New York Times and the other five sources for this AI Mode answer … which now don’t see an ad impression or affiliate click.)
Image Credit: Kevin Indig
As a result of this shift, which I predict will only increase over time, tracking your brand mentions and product links in AI Overviews becomes critical. Skip this at your own risk.
Here, you’ll see the increase in image packs over time, with a big shift in March 2025.
Image packs for ecommerce-related queries grew from ~60% in 2024 to a new baseline of over 90% of keywords in 2025.
Image Credit: Kevin Indig
Also, notice how Google systematically tests SERP layouts between core updates (e.g., the dip in the graph above happens between the March and June 2025 Core Updates).
Having strong product images, which are properly optimized, continues to be crucial for ecommerce search.
Since January 2025, Google has shown more People Also Ask (PAA) features at the cost of Discussions & Forums.
Even though Reddit is the second most visible site on the web, I’m surprised to see more PAA – two years after Google removed FAQ rich snippets from the SERPs.
Image Credit: Kevin Indig
This is something you want to consider tracking for queries that are directly related to your products, if you’re not doing so already. (You can do this in classic SEO tools like Semrush or Ahrefs, for example.)
Since August 2024, Google has systematically reduced the number of videos in the ecommerce search results.
Image Credit: Kevin Indig
It seems that images have taken over a lot of the real estate that videos used to own.
Image Credit: Kevin Indig
As a result, videos are less important in ecommerce search, while images are increasingly important.
If you’ve been creating and optimizing videos and haven’t seen the SEO results you wanted for your products/site, this could be your signal to invest in other types of content.
While this analysis covers Google SERP data specifically, it’d be a miss not to discuss the new shopping features in ChatGPT.
However, we don’t yet have months and months of data on LLM-based conversational product recommendations to give us good, clear information, so I anticipate there will be more analysis ahead once more time passes.
ChatGPT’s shopping experience is starting to look a lot like Google’s – but with a twist: Instead of viewing lists of blue links or multiple product grids, it curates a conversational shortlist with minimal product listings included.
No affiliate links and no paid ads (yet).
Image Credit: Kevin Indig
OpenAI integrates real-time product data from tools like Klarna and Shopify, allowing ChatGPT to surface up-to-date prices, availability, reviews, and product details in a shoppable card-style format.
ChatGPT also offers a “Why you might like this” and “What people are saying” generative summary when a specific product is clicked.
Image Credit: Kevin Indig
OpenAI offers the following guidance about how these products are selected [source]:
A product appears in the visual carousel when ChatGPT perceives it’s relevant to the user’s intent. ChatGPT assesses intent based on the user’s query and other available context, such as memories or custom instructions….
When determining which products to surface, ChatGPT considers:
• Structured metadata from third-party providers (e.g., price, product description) and other third-party content (e.g., reviews).
• Model responses generated by ChatGPT before it considers any new search results. Learn more.
• OpenAI safety standards.
Depending on the user’s needs, some of these factors will be more relevant than others. For example, if the user specifies a budget of $30, ChatGPT will focus more on price, whereas if price isn’t important, it may focus on other aspects instead.
OpenAI also explains how merchants are selected for products [source]:
When a user clicks on a product, we may show a list of merchants offering it. This list is generated based on merchant and product metadata we receive from third-party providers. Currently, the order in which we display merchants is predominantly determined by these providers….
To that end, we’re exploring ways for merchants to provide us their product feeds directly, which will help ensure more accurate and current listings. If you’re interested in participating, complete the interest form here, and we’ll notify you once submissions open.
That being said, it takes some trial and error to trigger product recommendations directly in the chat.
For instance, the prompt [can you help me find the best kayaks for beginners] results in an output that includes product recommendations, while the query [what are the best kayaks for beginners] results in a list without shopping results, features, or links.
Prompts with action-oriented language like “can you help me” and “will you find” may have a higher likelihood of offering shopping results directly in the chat, while queries like “what is the best” and “what are the best” and “compare the features of” may result in a variety of recommendations.
Image Credit: Kevin Indig
Featured Image: Paulo Bobita/Search Engine Journal
Perplexity published a response to Cloudflare’s claims that it disrespects robots.txt and engages in stealth crawling. Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.
Perplexity AI Assistants Fetch On Demand
According to Perplexity, its system does not store or index content ahead of time. Instead, it fetches webpages only in response to specific user questions. For example, when a user asks for recent restaurant reviews, the assistant retrieves and summarizes relevant content on demand. This, the company says, contrasts with how traditional crawlers operate, systematically indexing vast portions of the web without regard to immediate user intent.
Perplexity compared this on-demand fetching to Google’s user-triggered fetches. Although that is not an apples-to-apples comparison because Google’s user-triggered fetches are in the service of reading text aloud or site verification, it’s still an example of user-triggered fetching that bypasses robots.txt restrictions.
In the same way, Perplexity argues that its AI operates as an extension of a user’s request, not as an autonomous bot crawling indiscriminately. The company states that it does not retain or use the fetched content for training its models.
Criticizes Cloudflare’s Infrastructure
Perplexity also criticized Cloudflare’s infrastructure for failing to distinguish between malicious scraping and legitimate, user-initiated traffic, suggesting that Cloudflare’s approach to bot management risks overblocking services that are acting responsibly. Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.
Perplexity makes a strong case for the claim that Cloudflare is blocking legitimate bot traffic and says that Cloudflare’s decision to block its traffic was based on a misunderstanding of how its technology works.
Cloudflare announced that they delisted Perplexity’s crawler as a verified bot and are now actively blocking Perplexity and all of its stealth bots from crawling websites. Cloudflare acted in response to multiple user complaints against Perplexity related to violations of robots.txt protocols, and a subsequent investigation revealed that Perplexity was using aggressive rogue bot tactics to force its crawlers onto websites.
Cloudflare Verified Bots Program
Cloudflare has a system called Verified Bots that whitelists bots in their system, allowing them to crawl the websites that are protected by Cloudflare. Verified bots must conform to specific policies, such as obeying the robots.txt protocols, in order to maintain their privileged status within Cloudflare’s system.
Perplexity was found to be violating Cloudflare’s requirements that bots abide by the robots.txt protocol and refrain from using IP addresses that are not declared as belonging to the crawling service.
Cloudflare Accuses Perplexity Of Using Stealth Crawling
Cloudflare observed various activities indicative of highly aggressive crawling, with the intent of circumventing the robots.txt protocol.
Stealth Crawling Behavior: Rotating IP Addresses
Perplexity circumvents blocks by using rotating IP addresses, changing ASNs, and impersonating browsers like Chrome.
Perplexity has a list of official IP addresses that crawl from a specific ASN (Autonomous System Number). These IP addresses help identify legitimate crawlers from Perplexity.
An ASN is part of the Internet networking system that provides a unique identifying number for a group of IP addresses. For example, users who access the Internet via an ISP do so with a specific IP address that belongs to an ASN assigned to that ISP.
When blocked, Perplexity attempted to evade the restriction by switching to different IP addresses that are not listed as official Perplexity IPs, including entirely different ones that belonged to a different ASN.
Stealth Crawling Behavior: Spoofed User Agent
The other sneaky behavior that Cloudflare identified was that Perplexity changed its user agent in order to circumvent attempts to block its crawler via robots.txt.
For example, Perplexity’s bots are identified with the following user agents:
PerplexityBot
Perplexity-User
Cloudflare observed that Perplexity responded to user agent blocks by switching to a different user agent that posed as a person browsing with Chrome 124 on a Mac. That’s a practice called spoofing, where a rogue crawler identifies itself as a legitimate browser.
“Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36”
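One common defense against this kind of spoofing checks whether a request claiming a known crawler's user agent actually originates from the operator's declared IP ranges. The Python sketch below illustrates the idea; the ranges are documentation-reserved placeholders, not Perplexity's real published ranges, which you would fetch from the vendor.

```python
import ipaddress

# Placeholder ranges (RFC 5737 documentation blocks), standing in for
# a crawler operator's published official IP ranges.
declared_ranges = [ipaddress.ip_network(r)
                   for r in ["192.0.2.0/24", "198.51.100.0/24"]]

def is_declared_bot_ip(ip: str) -> bool:
    """A request whose user agent claims to be a known crawler should
    also originate from one of the operator's declared IP ranges;
    otherwise, treat it as potentially spoofed."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in declared_ranges)

print(is_declared_bot_ip("192.0.2.17"))   # inside a declared range
print(is_declared_bot_ip("203.0.113.9"))  # outside: possible spoofing
```

This is essentially the check Perplexity failed when its traffic arrived from undeclared IPs on different ASNs.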
Cloudflare Delists Perplexity
Cloudflare announced that Perplexity is delisted as a verified bot and that they will be blocked:
“The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling.”
Takeaways
Violation Of Cloudflare’s Verified Bots Policy: Perplexity violated Cloudflare’s Verified Bots policy, which grants crawling access to trusted bots that follow common-sense rules like honoring the robots.txt protocol.
Perplexity Used Stealth Crawling Tactics: Perplexity used undeclared IP addresses from different ASNs and spoofed user agents to crawl content after being blocked from accessing it.
User Agent Spoofing: Perplexity disguised its bot as a human user by posing as Chrome on a Mac operating system in attempts to bypass filters that block known crawlers.
Cloudflare’s Response: Cloudflare delisted Perplexity as a Verified Bot and implemented new blocking rules to prevent the stealth crawling.
SEO Implications: Cloudflare users who want Perplexity to crawl their sites may wish to check whether Cloudflare is blocking the Perplexity crawlers and, if so, enable crawling via their Cloudflare dashboard.
Cloudflare delisted Perplexity as a Verified Bot after discovering that it repeatedly violated the Verified Bots policies by disobeying robots.txt. To evade detection, Perplexity also rotated IPs, changed ASNs, and spoofed its user agent to appear as a human browser. Cloudflare’s decision to block the bot is a strong response to aggressive bot behavior on the part of Perplexity.
AI is rapidly changing the rules of SEO. From generative ranking to vector search, the new rules are not only technical but also reshaping how business leaders make decisions.
Join Dan Taylor on August 14, 2025, for an exclusive SEJ Webinar tailored for C-suite executives and senior leaders. In this session, you’ll gain essential insights to understand and communicate SEO performance in the age of AI.
Here’s what you’ll learn:
AI Search Is Impacting Everything. Are You Ready?
AI search is already here, and it’s impacting everything from SEO KPIs to customer journeys. This webinar will give you the tools to lead your teams through the shift with confidence and precision.
Register now for a business-first perspective on AI search innovation. If you can’t attend live, don’t worry. Sign up anyway, and we’ll send you the full recording.
You’ve probably seen the headlines like: “AI will kill SEO,” “AI will replace marketing roles,” or the latest panic: “Is your digital marketing job safe?”
Well, maybe not those exact headlines, but you get the idea, and I’m sure you have seen something similar.
Let’s clear something up: AI is not making SEO irrelevant. It’s making certain tasks obsolete. And yes, some jobs built entirely around those tasks are at risk.
A recent Microsoft study analyzed over 200,000 Bing Copilot interactions to measure task overlap between human job functions and AI-generated outputs. Their findings are eye-opening:
Translators and Interpreters: 98% overlap with AI tasks.
Writers and Authors: 88% overlap.
Public Relations Specialists: 79% overlap.
SEO as a field wasn’t directly named in the study, but many roles common within SEO map tightly to these job categories.
If you write, edit, report, research, or publish content as part of your daily work, this isn’t a hypothetical shift. It’s already happening.
AI isn’t replacing SEO. It’s changing what “search engine optimization” means, and where and how value is measured.
In traditional SEO, the focus was clear:
Rank high.
Earn the click.
Optimize the page for humans and crawlers.
That still matters. But, in AI-powered search systems, the sequence is different:
Content is chunked behind the scenes: paragraphs, lists, and answers are sliced and stored in vector form.
Prompts trigger retrieval: the LLM pulls relevant chunks, often based on embeddings rather than keywords alone. (So, concepts and relationships, not keywords per se.)
Only a few chunks make it into the answer. Everything else is invisible, no matter how high it once ranked.
This new paradigm shifts the rules of engagement. Instead of asking, “Where do I rank?” the better question is, “Was my content even retrieved?” That makes this a binary system, not a sliding scale.
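The retrieval step can be illustrated with a toy sketch: chunks and a query are compared as vectors, and only the top-k chunks ever reach the model's context. The three-dimensional embeddings and chunk labels below are invented for illustration; production systems use learned vectors with hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy chunk embeddings (invented for illustration).
chunks = {
    "chunk_a: how to choose a tripod": [0.9, 0.1, 0.0],
    "chunk_b: tripod brand history":   [0.4, 0.8, 0.1],
    "chunk_c: camera lens cleaning":   [0.0, 0.2, 0.9],
}
query = [1.0, 0.2, 0.0]  # embedding for "which tripod should I buy?"

top_k = 2  # only the top-k chunks ever reach the LLM's context
ranked = sorted(chunks, key=lambda c: cosine(query, chunks[c]), reverse=True)
retrieved = ranked[:top_k]
print(retrieved)        # these chunks can be cited in the answer
print(ranked[top_k:])   # everything else is invisible, regardless of rank
```

The binary nature of the system is visible in the cutoff: a chunk either makes the top-k and can be cited, or it contributes nothing at all.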
In this new world of retrieval, the direct answer to the question, “Where do I rank?” could be “ChatGPT,” “Perplexity,” “Claude,” or “CoPilot,” instead of a numbered position.
In some ways, this isn’t as big a shift as some folks would have you believe. After all, as the old joke asks, “Where do you hide a dead body?” To which the correct answer is “…on Page 2 of Google’s results!”
Morbid humor aside, the implication is that no one goes there, so there’s no value. That sentiment glosses over the real, nuanced details that click-through rate data shows us (the top of page 2 typically has better CTRs than the bottom of page 1), but it does serve up a meta point: if you’re not in the first few results on a traditional SERP, the drop-off in CTRs is precipitous.
So, it could be argued that with most “answers” in today’s generative AI systems composed of a very limited set of references, AI-based systems offer a new display path for consumers … but ultimately, those consumers will only interact with about the same number of results they historically engaged with.
I mean, if we only ever really clicked on the top 3 results (generalizing here), and the rest were surplus to needs, then cutting an AI-sourced answer down to some words with only 1, 2 or 3 cited results amounts to a similar situation in terms of raw numbers of choice for consumers … 1, 2 or 3 clickable options.
Regardless, it does mark a shift in terms of work items and workflows, and here’s how that shift shows up across some core SEO tasks. Obviously, there could be many more, but these examples help set the stage:
Keyword research becomes embedding relevance and semantic overlap. It’s not about the exact phrase match in a gen AI result. It’s about aligning your language with the concepts AI understands. It’s about the concept of query fan-out (not new, by the way, but very important now).
Meta tag and title optimization become chunked headers and contextual anchor phrases. AI looks for cues inside content to determine chunk focus.
Backlink building becomes trust signal embedding and source transparency. Instead of counting links, AI asks: Does this source feel credible and citable?
Traffic analytics becomes retrieval testing and AI response monitoring. The question isn’t just how many visits you got; it’s whether your content shows up at all in AI-generated responses.
What this means for teams:
Your title tag isn’t just a headline; it’s a semantic hook for AI retrieval.
Content format matters more: bullets, tables, lists, and schema win because they’re easier to cite.
You need to test with prompts to see if your content is actually getting surfaced.
None of this invalidates traditional SEO. But, the visibility layer is moving. If you’re not optimizing for retrieval, you’re missing the first filter, and ranking doesn’t matter if you’re never in the response set.
The SEO Job Risk Spectrum
Microsoft’s study didn’t target SEO directly, but it mapped 20+ job types by their overlap with current AI tasks. I used those official categories to extrapolate risk within SEO job functions.
Image Credit: Duane Forrester
High Risk – Immediate Change Needed
SEO Content Writers
Mapped to: Writers & Authors (88% task overlap in the study, meaning AI can already perform 88% of those tasks today).
Why: These roles often involve creating repeatable, factual content, precisely the kind of output AI handles well today (to a degree, anyway). Think meta descriptions, product overviews, and FAQ pages.
The writing isn’t disappearing, but humans aren’t always required for first drafts anymore. Final drafts, yes, but first? No. And I’m not debating how factual the content is that an AI produces.
We all know the pitfalls, but I’ll say this: If your boss is telling you your job is going away, and your argument is “but AIs hallucinate,” think about whether that’s going to change the outcome of that meeting.
Link Builders/Outreach Specialists
Mapped to: Public Relations Specialists (79% overlap).
Why: Cold outreach and templated link negotiation can now be automated.
AI can scan for unlinked mentions, generate outreach messages, and monitor link placement outcomes, cutting into the core responsibilities of these roles.
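As an illustration of how automatable the “scan for unlinked mentions” piece is, here is a minimal sketch using only Python’s standard library. The brand name and HTML are placeholders, and production tools handle nesting, brand variants, and crawling far more robustly.

```python
from html.parser import HTMLParser

class UnlinkedMentionFinder(HTMLParser):
    """Counts brand mentions that appear outside anchor (<a>) tags --
    a rough sketch of unlinked-mention detection."""

    def __init__(self, brand: str):
        super().__init__()
        self.brand = brand.lower()
        self.in_link = 0
        self.unlinked_mentions = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        # Only count mentions that are not already wrapped in a link.
        if not self.in_link and self.brand in data.lower():
            self.unlinked_mentions += 1

page_html = """
<p>We loved working with Acme Photos last year.</p>
<p>See <a href="https://example.com">Acme Photos</a> for details.</p>
"""
finder = UnlinkedMentionFinder("Acme Photos")
finder.feed(page_html)
print(finder.unlinked_mentions)  # 1 -- only the first mention lacks a link
```

From there, generating an outreach message and tracking whether the link lands are both templated steps, which is precisely why this role sits in the high-risk band.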
Moderate Risk – Upskill To Stay Relevant
SEO Analysts
Mapped to: Market Research Analysts (~65% overlap).
Why: Data gathering and trend reporting are susceptible to automation. But, analysts who move into interpreting retrieval patterns, building AI visibility reports, or designing retrieval experiments can thrive.
Admittedly, SEO is a bit more specialized, but bottom or top of this stack, the risk remains moderate. This one, however, is heavily dependent on your actual job tasks.
Technical SEOs
Mapped to: Web Developers (not perfect, but as close as the study got).
Why: Less overlap with generative AI, but still pressured to evolve. Embedding hygiene, chunk structuring, and schema precision are now foundational.
The most valuable technical SEOs are becoming AI optimization architects. Not leaving their traditional work behind, but adopting new workflows.
Content Strategists/Editors
Mapped to: Editors & Technical Writers.
Why: Editing for humans and tone alone is out. Editing for retrievability is in. Strategists now must prioritize chunking, citation density, and clarity of topic anchors, not just user readability.
Or, at least, now consider that LLM bots are de facto users as well.
Lower Risk – Expanded Value And Influence
SEO Managers/Leads
Mapped to: Marketing Managers.
Why: Managers who understand both traditional and AI SEO have more leverage than ever. They’re responsible for team alignment, training decisions, and tool adoption.
This is a growth role, if guided by data, not gut instinct. Testing is life here.
CMOs/Strategy Executives
Mapped to: Marketing Executives.
Why: Strategic thinking isn’t automatable. AI can suggest, but it can’t set priorities across brand, trust, and investment.
Executives who understand how AI affects visibility will steer their companies more effectively, especially in content-heavy verticals.
Tactical Response By Role Type
Every job category on the risk curve deserves practical action.
Now, let’s look at how people in SEO roles can pivot, strengthen, or evolve, based on clear, verifiable capabilities.
High-Risk Roles: SEO Content Writers, Editors, Link Builders
Shift from traditional copywriting to creating structured, retrieval-friendly content.
Focus on chunk-based writing: short Q&A blocks, bullet-based explanations, and schema-rich snippets.
Learn AI prompt testing: Use platforms like ChatGPT or Google Gemini to query key topics and see if your content is surfaced without requiring a click.
Use gen AI visibility tools verified to support AI search tracking:
Profound tracks your brand’s appearance in AI search results across platforms like ChatGPT, Perplexity, and Google AI Overviews. You can see where you’re cited and which topics AI engines associate with you.
SERPRecon offers AI-powered content outlines and helps reverse-engineer AI overview logic to show what keywords and phrasing matter most. So, use a tool like this, then take the output as the basis for your query fan-out work.
Collaborate with data teams on embedding accuracy and chunk performance.
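To make “chunk-based writing” tangible, here is a toy sketch that splits content into self-contained Q&A chunks, the shape AI systems find easiest to retrieve and cite. The “Q:” line convention and the sample copy are assumptions for illustration; real pipelines also enforce token limits and attach metadata.

```python
import re

def chunk_by_questions(text: str) -> list[dict]:
    """Split content into self-contained Q&A chunks keyed by their question.
    A toy sketch: assumes each section starts with a 'Q: ...' line."""
    chunks = []
    for block in re.split(r"\n(?=Q: )", text.strip()):
        lines = block.strip().splitlines()
        question = lines[0].removeprefix("Q: ").strip()
        answer = " ".join(line.strip() for line in lines[1:] if line.strip())
        chunks.append({"question": question, "answer": answer})
    return chunks

content = """Q: How deep can you shoot without strobes?
Natural light works well down to about 10 meters in clear water.

Q: What housing do you recommend for beginners?
Start with a polycarbonate housing rated for your exact camera model."""

for chunk in chunk_by_questions(content):
    print(chunk["question"], "->", chunk["answer"][:40])
```

Each chunk stands alone: a clear question anchor plus a direct answer, which is what gives it a chance of being lifted into an AI response.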
Moderate-Risk Roles: SEO Analysts, Technical SEOs, Content Strategists
Expand traditional ranking reports with retrievability diagnostics:
Use prompt simulations that probe content retrieval in real time across AI engines.
Audit embedding and semantic alignment at the paragraph or chunk level.
Employ tools like those mentioned to analyze AI Overviews and generate content improvement outlines.
Monitor AI visibility gaps through new dashboards:
Track citation share versus competitors.
Identify topic clusters where your domain is cited less.
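One way to picture citation-share tracking: tally which domains an AI engine cited across a batch of test prompts, then compare your share against competitors. The domains and observations below are hypothetical; real dashboards would also segment by topic cluster and platform.

```python
from collections import Counter

def citation_share(citations_per_prompt: list[list[str]]) -> dict[str, float]:
    """Share of total citations each domain earned across a set of test prompts.
    A minimal sketch of a citation-share metric."""
    counts = Counter(domain for cited in citations_per_prompt for domain in cited)
    total = sum(counts.values())
    return {domain: round(n / total, 2) for domain, n in counts.most_common()}

# Hypothetical citations observed in AI answers to five test prompts.
observed = [
    ["yourbrand.com", "competitor-a.com"],
    ["competitor-a.com"],
    ["yourbrand.com", "competitor-b.com"],
    ["competitor-a.com", "competitor-b.com"],
    ["competitor-a.com"],
]
print(citation_share(observed))
# competitor-a.com leads here: a visibility gap worth investigating by topic
```

Run the same prompt set on a schedule and the deltas become your trend line, the AI-era analogue of a rank tracker.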
Understand structured data and schema:
Use markup to clearly define entities, relationships, and context for AI systems.
Prioritize formats like FAQPage, HowTo, and Product schema, where applicable. These are easier for LLMs and AI Overviews to cite.
Align semantic clarity within chunks to schema-defined roles (e.g., question/answer pairs, step lists) to improve retrievability and surface relevance.
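As a concrete starting point for the schema work above, here is a minimal FAQPage object built as a Python dict and serialized to JSON-LD. The types (FAQPage, Question, Answer) come from schema.org; the question and answer text are placeholders.

```python
import json

# Minimal FAQPage JSON-LD. The schema.org types are real; the content is illustrative.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How deep can you shoot without strobes?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Natural light works well down to about 10 meters in clear water.",
            },
        }
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

Note how the markup mirrors the chunk structure discussed earlier: one question, one direct answer, explicitly typed, which is what makes it easy for an LLM or AI Overview to lift and cite.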
Join or lead internal “AI-SEO Workshops”:
Teach teams how to test content visibility in ChatGPT, Perplexity, or Google AI Overviews.
Share experiments in prompt engineering, chunk format outcomes, and schema effectiveness.
Lower-Risk Roles: SEO Managers, Digital Leads, CMOs
Sponsor retraining initiatives for semantic and vector-led SEO practices.
Revise hiring briefs and job descriptions to include skills like embedding knowledge, prompt testing, schema fluency, and chunk analysis.
Implement AI-visibility dashboards using dedicated tools:
Benchmark brand presence across search engines and generative platforms.
Use insights to guide future content and authority decisions.
Keep traditional SEO strong alongside AI tactics:
Technical optimization, speed, quality of content, etc., still matter.
Hybrid success requires both sides working in sync.
Set internal AI literacy standards:
Offer training on retrieval engineering, LLM behavior, and chunk visibility.
Ensure everyone understands AI’s core behaviors, what it cites, and what it ignores.
Reframing The Opportunity
This isn’t a “get out now” scenario for these jobs. It’s a “rebuild your toolkit” moment.
High overlap doesn’t mean you’re obsolete. It means the old version of your job won’t hold value without adaptation. And what gets automated away often wasn’t the best part of the job anyway.
AI isn’t replacing SEO, it’s distilling it. What’s left is:
Strategy that aligns with machine logic and user needs.
Content structure that supports fast retrieval, not just ranking.
Authority based on more and deeper (sometimes implied) trust signals, not just age or backlinks. Think E-E-A-T++.
Think of it this way: AI strips away the boilerplate. What’s left is your real contribution. Your judgment. Your design. Your clarity.
New opportunity lanes are forming right now:
Writers who evolve into retrievability engineers.
Editors who become semantic format strategists.
Technical SEOs who own chunk structuring and indexing hygiene.
Analysts who specialize in AI visibility benchmarking.
These aren’t job titles (yet), but the work is happening. If you’re in a role that touches content, structure, trust, or performance, now is the time to sharpen your relevance, not to fear automation.
Final Word
The fundamentals still matter. Technical SEO, content quality, and UX don’t go away; they evolve alongside AI.
No, SEO isn’t dying, it’s becoming more strategic, more semantic, more valuable. AI-driven retrievability is already redefining visibility. Are you ready to adapt?