Google Ecommerce SERP Features 2025 Vs. 2024 via @sejournal, @Kevin_Indig

In 2024, Google turned the SERP into a storefront.

In 2025, it turned it into a marketplace with an AI-based mind of its own.

Over the past 12 months, Google has layered AI into nearly every inch of the shopping search experience by merging organic results with product listings, rolling out AI Overviews that replace traditional product grids, and introducing a full-screen “AI Mode.”

Meanwhile, ChatGPT is inching closer to becoming a personalized shopping assistant, but for now, the most dramatic shifts for SEOs are still happening inside Google.

To understand the impact, I revisited a set of 35,000+ U.S. shopping queries I first analyzed in July 2024.

In today’s Memo, I’m breaking down the state of Google Shopping SERPs in 2025. A year later, the landscape looks … different:

  • AI Overviews have started to displace classic ecommerce SERP features.
  • Image packs dominate the page.
  • Discussion forums are on the decline.

Plus, an exclusive comparison of 2024 vs. 2025 ecommerce SERP features and a full, detailed checklist of optimizations for the SERP features that matter most today (available to premium subscribers, where I show you exactly how I do this).

This memo breaks down exactly what’s changed in Google’s shopping SERPs over the past year. Let’s goooooo.


In the last 12 months, Google hasn’t just transformed itself into a publisher that serves up content to answer queries right in the SERP (via AI Overviews and AI Mode). It’s also built out an extensive marketplace for shopping queries.

Beyond AIOs and AI Mode, Google now provides a whole slew of SERP features and AI features for ecommerce queries that are at least as impactful.

Meanwhile, ChatGPT & Co. are starting to include product recommendations with links, reviews, and buy buttons directly in the chat. (But this analysis focuses on Google results only.)

To better understand the key trends for Google shopping queries, in July 2024 I analyzed 35,305 keywords across product categories like fashion, beds, plants, and automotive in the U.S., covering the preceding five months of data, using seoClarity.

We’re revisiting that data today, examining those same keywords and categories for July 2025.

The results:

  1. AI Overviews have started to replace product grids.
  2. Ecommerce SERPs are increasingly visual.
  3. There are more question-related SERP features (like People Also Ask), less UGC.
  4. Fewer videos are appearing across the SERPs for product-related searches.

About the data:

  • This data specifically covers Google search results and features. It doesn’t include ChatGPT, Perplexity, etc. However, we’ll touch on this briefly below.
  • Over 35,000 search queries were analyzed, and the same group was examined in both July 2024 and July 2025.
  • The search queries analyzed include product-related queries across a broad spectrum, from brand terms (like Walmart) to individual products (iPads) and categories (e-bikes).
  • If you’re curious about the exact list of Google shopping SERP features included in this analysis, they’re included at the bottom of this memo.

Before we dig into the findings…

In its shift from search engine to ecommerce marketplace (and from search engine to publisher), Google has merged as much as possible into the SERP page.

Web results and the shopping tab for shopping searches were combined as a response to Amazon’s long-standing dominance.

The shopping tab still exists, sure.

But for product-related searches, the main search page and the Google shopping experience look incredibly similar, with the Shopping tab streamlined to a product-grid experience only.

In June 2024, I reported in Critical SERP Features of Google’s shopping marketplace:

  • Google has fully transitioned into a shopping marketplace by adding product filters to search result pages and implementing a direct checkout option.
  • These new features create an ecommerce search experience within Google Search and may significantly impact the organic traffic merchants and retailers rely on.
  • Google has quietly introduced a direct checkout feature that allows merchants to link free listings directly to their checkout pages.
  • Google’s move to a shopping marketplace was likely driven by the need to compete with Amazon’s successful advertising business.
  • Google faces the challenge of balancing its role as a search engine with the need to generate revenue through its shopping marketplace, especially considering its dependence on partners for logistics.

And now?

Google’s layered AI and personalized SERP features into the shopping experience as well.

Below are the Google SERP features I’ll be examining in this year-over-year (YoY) analysis, with a quick synopsis of each in case you’re not familiar.

  • Images: A horizontal carousel of image results related to the query, pulled from product pages or image-rich content; it usually appears at the top or mid-page and links to Google Images or directly to source pages.
  • Products: Displays a visual grid or carousel of products with titles, images, prices, reviews, and merchants. This includes free product listings (organic) and Product Listing Ads (PLAs) (paid).
  • People Also Ask (PAA): Related questions users frequently ask. Clicking a question reveals a source link. (These often inform Google’s understanding of search intent and user curiosity.)
  • Things To Know: An AI-driven feature that breaks a topic into subtopics and frequently misunderstood concepts. Found mostly on broad, educational, or commercial-intent queries, this is Google’s way of guiding users deeper into a topic and understanding deeper search intent.
  • Discussion and Forums: Highlights relevant threads from platforms like Reddit, Quora, and niche forums. Answers are often community-generated and authentic. Replaced some traditional “People Also Ask” real estate for shopping or reviews queries.
  • Knowledge Graph: Displays structured facts about a person, brand, product, or topic, sourced from trusted databases. Appears in a right-hand sidebar or embedded box.
  • Buying Guide: A feature that explains what to consider when shopping for a product, e.g., “What to look for in a DSLR camera.” Usually placed mid-page for commerce-intent queries. It mimics a human assistant or product expert’s advice. Contains snippets and links to sources.
  • Local Listing: Shows local business listings with map, ratings, hours, and quick call/location links. Prominent in searches with local intent like “shoe store near me” or “coffee shops in Detroit.”
  • AI Overview: Generative AI summary at the top of the SERP that answers the query using information synthesized from multiple sources. For shopping queries, it often includes product summaries.
  • Video: A carousel or block of video content, mostly from YouTube, but also from other video-hosting platforms. May include timestamps, captions, or “key moments” for long videos.
  • Answer Box (a.k.a. Featured Snippet): A direct answer to a query extracted from a single web page, shown at the top of the SERP in a stylized box. Often used for factual or how-to queries. Includes the source link.
  • Free Product Listings: Organic product results submitted via Google Merchant Center feeds. These listings show in the Shopping tab and occasionally in the main SERP product grid (distinct from paid Shopping ads).
  • From sources across the web: A content block showing opinions or quotes on a product or topic from a variety of sites. Often used in AI Overviews or product reviews to surface aggregated user sentiment or editorial input.
  • FAQ: An expandable schema-driven block showing common questions and answers sourced from a specific page. Typically appears under a site’s organic result when FAQ schema is properly implemented.
  • PPC: Sponsored links shown at the top or bottom of the SERP, marked “Sponsored” or “Ad.” These can show up as text, product images/grids, etc.
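Since the FAQ feature above is driven by FAQ structured data, here is what a minimal FAQPage payload looks like, sketched in Python for illustration (the question and answer text are hypothetical example content):

```python
import json

# Minimal FAQPage JSON-LD, per schema.org's FAQPage type.
# The question/answer text here is hypothetical example content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do inflatable kayaks hold up for beginners?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Modern inflatable kayaks use rigid drop-stitch "
                        "floors and are a popular entry-level choice.",
            },
        }
    ],
}

# Serialize for embedding in the page's markup.
payload = json.dumps(faq_schema, indent=2)
print(payload)
```

The serialized JSON would be embedded in a `<script type="application/ld+json">` tag on the page the FAQ block should attach to.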

In addition to the standard SERP features tracked in this analysis via the above list, here’s a look at the current Google shopping marketplace SERP features and/or elements (like toggle filters) that we’re dealing with at the halfway point of 2025.

  • AI Mode (Full-Screen): Interactive, immersive full-page AI shopping experience with filters and buy links.
  • Shopping filters inline: Dynamic filters (brand, color, price) within AI Mode and Shopping grids.
  • Virtual try-on: This feature was recently released. It’s a generative AI module showing clothes on diverse body types (expanding by category).
  • Price tracking/alerts: Users can track price drops and get alerts via Gmail or Chrome. Honestly, a pretty great tool.
  • Popular stores/top stores: Scrollable carousel of prominent retailers for the product category.
  • Product sites (EU market): Organic feature that shows prominent ecommerce domains (due to regulatory changes in the EU).
  • Trending products/popular products: Highlights products rising in popularity based on recent search activity.
  • Merchant star ratings: Display review scores and counts in summaries or tiles.
  • Free shipping/returns labels: Highlighted callouts in product tiles.
  • “Verified by Google” merchant badges: Google-trusted seller icon in some listings.
  • Quick comparison panels: Side-by-side spec or feature comparisons (this is an early-stage rollout, similar to Amazon’s product comparison panel or module).

To illustrate with an example, let’s say you are looking for kayaks (summertime!).

On desktop (logged in), Google will now show you product filters in the left sidebar and “Popular products” carousels in the middle, above classic organic results but below ads, of course.

Image Credit: Kevin Indig

Directly under the shopping product grids, you have traditional organic results along with an on-SERP Buying Guide, similar to People Also Ask questions (which is also included further down the page).

Both the Buying Guide and People Also Ask features deliver answers with links to original content.

Image Credit: Kevin Indig

On mobile, you get product filters at the top, ads above organic results, and product carousels in the form of Popular products or “Products for you.”

Image Credit: Kevin Indig

This experience doesn’t look very different from Amazon … which is the whole point.

Image Credit: Kevin Indig

Google’s shopping experience lets users explore products on a variety of marketplaces, like Amazon, Walmart, eBay, Etsy, & Co.

From an SEO perspective, the prominent position of product grids (listings) and filters likely has a significant impact on CTR, organic visibility, and ultimately, revenue.

But let’s take a look at the same search via AI Mode.

Below is the desktop experience via Chrome.

I’ve zoomed out here so you get the whole view, but it takes the user two to three scrolls to get to the product grid when in a standard view.

Image Credit: Kevin Indig

Here on mobile, getting to product recommendations takes several scrolls. In one instance, I received a result that included a list of places near me in my city where I could get a kayak.

Image Credit: Kevin Indig

Keeping the current Google shopping SERP experience in mind, here’s what the data shows.

This is the most noteworthy shift found in the data, as you can probably guess.

Since March 2025, when Google began rolling out AI Overviews more aggressively, they’ve also started replacing (organic) product grids.

Image Credit: Kevin Indig

The graph above might look like it represents minimal changes when you examine it in a timeline view, but you can see the trend even better when moving AIOs to a second y-axis (below).

Image Credit: Kevin Indig

I expect AI Overviews to still show the product grids searchers have become accustomed to, although they might take a different form.

When searching for [which camera tripod should I buy?], for example, we find an AI Overview at the top with specific product recommendations.

Image Credit: Kevin Indig

Of course, AI Mode takes that a step further with richer product recommendations and buying guides.

(Shoutout to The New York Times and the other five sources for this AI Mode answer … which now don’t see an ad impression or affiliate click.)

Image Credit: Kevin Indig

As a result of this shift, which I predict will only increase over time, tracking your brand mentions and product links in AI Overviews becomes critical. Skip this at your own risk.

Here, you’ll see the increase in image packs over time, with a big shift in March 2025.

Image packs for ecommerce-related queries grew from ~60% in 2024 to a new baseline of over 90% of keywords in 2025.

Image Credit: Kevin Indig

Also, notice how Google systematically tests SERP layouts between core updates (e.g., the dip in the graph above happens between the March and June 2025 Core Updates).

Having strong product images, which are properly optimized, continues to be crucial for ecommerce search.

Since January 2025, Google has shown more People Also Ask (PAA) features at the cost of Discussions & Forums.

Even though Reddit is the second most visible site on the web, I’m surprised to see more PAA – two years after Google removed FAQ rich snippets from the SERPs.

Image Credit: Kevin Indig

This is something you want to consider tracking for queries that are directly related to your products, if you’re not doing so already. (You can do this in classic SEO tools like Semrush or Ahrefs, for example.)

Since August 2024, Google has systematically reduced the number of videos in the ecommerce search results.

Image Credit: Kevin Indig

It seems that images have taken over a lot of the real estate that videos used to own.

Image Credit: Kevin Indig

As a result, videos are less important in ecommerce search, while images are increasingly important.

If you’ve been creating and optimizing videos and haven’t seen the SEO results you wanted for your products/site, this could be your signal to invest in other types of content.

While this analysis covers Google SERP data specifically, it’d be a miss not to discuss the new shopping features in ChatGPT.

However, we don’t yet have enough longitudinal data on LLM-based conversational product recommendations to draw clear conclusions, so I anticipate more analysis ahead once more time passes.

ChatGPT’s shopping experience is starting to look a lot like Google’s, but with a twist: Instead of viewing lists of blue links or multiple product grids, it curates a conversational shortlist with minimal product listings included.

No affiliate links and no paid ads (yet).

Image Credit: Kevin Indig

OpenAI integrates real-time product data from tools like Klarna and Shopify, allowing ChatGPT to surface up-to-date prices, availability, reviews, and product details in a shoppable card-style format.

ChatGPT also offers a “Why you might like this” and “What people are saying” generative summary when a specific product is clicked.

Image Credit: Kevin Indig

OpenAI offers the following guidance about how these products are selected [source]:

A product appears in the visual carousel when ChatGPT perceives it’s relevant to the user’s intent. ChatGPT assesses intent based on the user’s query and other available context, such as memories or custom instructions….

When determining which products to surface, ChatGPT considers:

• Structured metadata from third-party providers (e.g., price, product description) and other third-party content (e.g., reviews).

• Model responses generated by ChatGPT before it considers any new search results. Learn more.

• OpenAI safety standards.

Depending on the user’s needs, some of these factors will be more relevant than others. For example, if the user specifies a budget of $30, ChatGPT will focus more on price, whereas if price isn’t important, it may focus on other aspects instead.

OpenAI also explains how merchants are selected for products [source]:

When a user clicks on a product, we may show a list of merchants offering it. This list is generated based on merchant and product metadata we receive from third-party providers. Currently, the order in which we display merchants is predominantly determined by these providers….

To that end, we’re exploring ways for merchants to provide us their product feeds directly, which will help ensure more accurate and current listings. If you’re interested in participating, complete the interest form here, and we’ll notify you once submissions open.

That being said, it takes some trial and error to trigger product recommendations directly in the chat.

For instance, the prompt [can you help me find the best kayaks for beginners] results in an output that includes product recommendations, while the query [what are the best kayaks for beginners] results in a list without shopping results, features, or links.

Prompts with action-oriented language like “can you help me” and “will you find” may have a higher likelihood of offering shopping results directly in the chat, while queries like “what is the best” and “what are the best” and “compare the features of” may result in a variety of recommendations.

Image Credit: Kevin Indig
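That phrasing pattern is easy to encode as a rough heuristic. The sketch below is my own illustrative classifier (not anything OpenAI documents), flagging prompts whose phrasing is action-oriented:

```python
import re

# Hypothetical heuristic based on the informal tests described above:
# action-oriented prompts ("can you help me...", "will you find...")
# seemed more likely to trigger shopping results. Illustrative only.
ACTION_PATTERNS = [
    r"^can you help me\b",
    r"^will you find\b",
    r"^help me (find|pick|choose)\b",
]

def is_action_oriented(prompt: str) -> bool:
    """Return True if the prompt opens with action-oriented phrasing."""
    text = prompt.strip().lower()
    return any(re.search(pattern, text) for pattern in ACTION_PATTERNS)

print(is_action_oriented("Can you help me find the best kayaks for beginners"))
print(is_action_oriented("What are the best kayaks for beginners"))
```

Running both example prompts through the classifier separates the action-oriented phrasing from the purely informational question.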

Featured Image: Paulo Bobita/Search Engine Journal

Perplexity Says Cloudflare Is Blocking Legitimate AI Assistants via @sejournal, @martinibuster

Perplexity published a response to Cloudflare’s claims that it disrespects robots.txt and engages in stealth crawling. Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.

Perplexity AI Assistants Fetch On Demand

According to Perplexity, its system does not store or index content ahead of time. Instead, it fetches webpages only in response to specific user questions. For example, when a user asks for recent restaurant reviews, the assistant retrieves and summarizes relevant content on demand. This, the company says, contrasts with how traditional crawlers operate, systematically indexing vast portions of the web without regard to immediate user intent.

Perplexity compared this on-demand fetching to Google’s user-triggered fetches. Although that is not an apples-to-apples comparison because Google’s user-triggered fetches are in the service of reading text aloud or site verification, it’s still an example of user-triggered fetching that bypasses robots.txt restrictions.

In the same way, Perplexity argues that its AI operates as an extension of a user’s request, not as an autonomous bot crawling indiscriminately. The company states that it does not retain or use the fetched content for training its models.

Criticizes Cloudflare’s Infrastructure

Perplexity also criticized Cloudflare’s infrastructure for failing to distinguish between malicious scraping and legitimate, user-initiated traffic, suggesting that Cloudflare’s approach to bot management risks overblocking services that are acting responsibly. Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.

Perplexity makes a strong case for the claim that Cloudflare is blocking legitimate bot traffic and says that Cloudflare’s decision to block its traffic was based on a misunderstanding of how its technology works.

Read Perplexity’s response:

Agents or Bots? Making Sense of AI on the Open Web

Cloudflare Delists And Blocks Perplexity From Crawling Websites via @sejournal, @martinibuster

Cloudflare announced that they delisted Perplexity’s crawler as a verified bot and are now actively blocking Perplexity and all of its stealth bots from crawling websites. Cloudflare acted in response to multiple user complaints against Perplexity related to violations of robots.txt protocols, and a subsequent investigation revealed that Perplexity was using aggressive rogue bot tactics to force its crawlers onto websites.

Cloudflare Verified Bots Program

Cloudflare has a system called Verified Bots that whitelists bots in their system, allowing them to crawl the websites that are protected by Cloudflare. Verified bots must conform to specific policies, such as obeying the robots.txt protocols, in order to maintain their privileged status within Cloudflare’s system.

Perplexity was found to be violating Cloudflare’s requirements that bots abide by the robots.txt protocol and refrain from using IP addresses that are not declared as belonging to the crawling service.

Cloudflare Accuses Perplexity Of Using Stealth Crawling

Cloudflare observed various activities indicative of highly aggressive crawling, with the intent of circumventing the robots.txt protocol.

Stealth Crawling Behavior: Rotating IP Addresses

Perplexity circumvents blocks by using rotating IP addresses, changing ASNs, and impersonating browsers like Chrome.

Perplexity has a list of official IP addresses that crawl from a specific ASN (Autonomous System Number). These IP addresses help identify legitimate crawlers from Perplexity.

An ASN is part of the Internet networking system that provides a unique identifying number for a group of IP addresses. For example, users who access the Internet via an ISP do so with a specific IP address that belongs to an ASN assigned to that ISP.

When blocked, Perplexity attempted to evade the restriction by switching to IP addresses not listed as official Perplexity IPs, including addresses belonging to entirely different ASNs.
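Verifying a crawler the way Cloudflare does at scale boils down to checking the connecting IP against the bot’s published ranges. A minimal sketch, using made-up CIDR blocks in place of Perplexity’s real published ranges:

```python
import ipaddress

# Hypothetical published ranges for a crawler. Real operators
# (Perplexity, Googlebot, etc.) publish their actual CIDR blocks;
# these are IANA documentation ranges used as placeholders.
OFFICIAL_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1, placeholder
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2, placeholder
]

def is_declared_crawler_ip(ip: str) -> bool:
    """True if the connecting IP falls inside the bot's declared ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in OFFICIAL_RANGES)

# A request from a declared range passes; a rotated IP from
# another ASN fails the check and can be treated as undeclared.
print(is_declared_crawler_ip("192.0.2.44"))
print(is_declared_crawler_ip("203.0.113.9"))
```

A crawler that rotates to IPs outside its declared ranges fails this check even if its user agent string still looks legitimate.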

Stealth Crawling Behavior: Spoofed User Agent

The other sneaky behavior that Cloudflare identified was that Perplexity changed its user agent in order to circumvent attempts to block its crawler via robots.txt.

For example, Perplexity’s bots are identified with the following user agents:

  • PerplexityBot
  • Perplexity-User

Cloudflare observed that Perplexity responded to user agent blocks by using a different user agent that posed as a person browsing with Chrome 124 on a Mac system. That’s a practice called spoofing, where a rogue crawler identifies itself as a legitimate browser.

According to Cloudflare, Perplexity used the following stealth user agent:

“Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36”
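That spoofed UA is exactly why robots.txt blocks fail: The protocol matches rules against the declared user agent string. A minimal sketch with Python’s standard-library parser (the robots.txt content is hypothetical) shows the gap:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the site blocks PerplexityBot
# but allows all other user agents.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The spoofed browser UA reported by Cloudflare.
CHROME_UA = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/124.0.0.0 Safari/537.36")

# The declared bot UA is blocked by the rule...
print(rp.can_fetch("PerplexityBot", "https://example.com/page"))
# ...but the same crawler presenting a browser UA sails through.
print(rp.can_fetch(CHROME_UA, "https://example.com/page"))
```

Because robots.txt is purely honor-based string matching, a crawler that changes its declared name sidesteps the block entirely, which is why Cloudflare falls back to network-level signals.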

Cloudflare Delists Perplexity

Cloudflare announced that Perplexity is delisted as a verified bot and that they will be blocked:

“The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling.”

Takeaways

  • Violation Of Cloudflare’s Verified Bots Policy
    Perplexity violated Cloudflare’s Verified Bots policy, which grants crawling access to trusted bots that follow common-sense rules like honoring the robots.txt protocol.
  • Perplexity Used Stealth Crawling Tactics
    Perplexity used undeclared IP addresses from different ASNs and spoofed user agents to crawl content after being blocked from accessing it.
  • User Agent Spoofing
    Perplexity disguised its bot as a human user by posing as Chrome on a Mac operating system in attempts to bypass filters that block known crawlers.
  • Cloudflare’s Response
    Cloudflare delisted Perplexity as a Verified Bot and implemented new blocking rules to prevent the stealth crawling.
  • SEO Implications
    Cloudflare users who want Perplexity to crawl their sites may wish to check if Cloudflare is blocking the Perplexity crawlers, and, if so, enable crawling via their Cloudflare dashboard.

Cloudflare delisted Perplexity as a Verified Bot after discovering that it repeatedly violated the Verified Bots policies by disobeying robots.txt. To evade detection, Perplexity also rotated IPs, changed ASNs, and spoofed its user agent to appear as a human browser. Cloudflare’s decision to block the bot is a strong response to aggressive bot behavior on the part of Perplexity.

How AI Search Should Be Shaping Your CEO’s & CMO’s Strategy [Webinar] via @sejournal, @theshelleywalsh

AI is rapidly changing the rules of SEO. From generative ranking to vector search, the new rules are not only technical but also reshaping how business leaders make decisions.

Join Dan Taylor on August 14, 2025, for an exclusive SEJ Webinar tailored for C-suite executives and senior leaders. In this session, you’ll gain essential insights to understand and communicate SEO performance in the age of AI.

Here’s what you’ll learn:

AI Search Is Impacting Everything. Are You Ready?

AI search is already here, and it’s impacting everything from SEO KPIs to customer journeys. This webinar will give you the tools to lead your teams through the shift with confidence and precision.

Register now for a business-first perspective on AI search innovation. If you can’t attend live, don’t worry. Sign up anyway, and we’ll send you the full recording.

Which SEO Jobs AI Will Reshape & Which Might Disappear via @sejournal, @DuaneForrester

You’ve probably seen the headlines like: “AI will kill SEO,” “AI will replace marketing roles,” or the latest panic: “Is your digital marketing job safe?”

Well, maybe not those exact headlines, but you get the idea, and I’m sure you have seen something similar.

Let’s clear something up: AI is not making SEO irrelevant. It’s making certain tasks obsolete. And yes, some jobs built entirely around those tasks are at risk.

A recent Microsoft study analyzed over 200,000 Bing Copilot interactions to measure task overlap between human job functions and AI-generated outputs. Their findings are eye-opening:

  • Translators and Interpreters: 98% overlap with AI tasks.
  • Writers and Authors: 88% overlap.
  • Public Relations Specialists: 79% overlap.

SEO as a field wasn’t directly named in the study, but many roles common within SEO map tightly to these job categories.

If you write, edit, report, research, or publish content as part of your daily work, this isn’t a hypothetical shift. It’s already happening.

(Source: the Microsoft AI job-impact study, as summarized by Business Insider; BI links to Microsoft, which in turn links to the original PDF of the study.)

What’s Actually Changing

AI isn’t replacing SEO. It’s changing what “search engine optimization” means, and where and how value is measured.

In traditional SEO, the focus was clear:

  • Rank high.
  • Earn the click.
  • Optimize the page for humans and crawlers.

That still matters. But, in AI-powered search systems, the sequence is different:

  1. Content is chunked behind the scenes: paragraphs, lists, and answers are sliced and stored in vector form.
  2. Prompts trigger retrieval: the LLM pulls relevant chunks, often based on embeddings, not just keywords. (So, concepts and relationships, not keywords per se.)
  3. Only a few chunks make it into the answer. Everything else is invisible, no matter how high it once ranked.
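The three steps above can be sketched end to end. This toy example uses bag-of-words vectors as stand-ins for real learned embeddings; the retrieval logic is the same idea:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words vector. Real systems use dense
    learned embeddings, but the retrieval math works the same way."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Step 1: content is chunked and stored in vector form.
chunks = [
    "lightweight travel tripod picks for photography trips",
    "our return policy covers all tripod accessories for 30 days",
    "how to clean a camera sensor at home",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Step 2: a prompt triggers retrieval against the index.
query = embed("which lightweight tripod for travel photography")

# Step 3: only the top chunk makes it into the answer;
# the rest are invisible, no matter how well they once ranked.
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])
```

Note the binary nature of the outcome: the policy and sensor-cleaning chunks score lower and simply never reach the answer, which is the “retrieved or invisible” dynamic described above.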

This new paradigm shifts the rules of engagement. Instead of asking, “Where do I rank?” the better question is, “Was my content even retrieved?” That makes this a binary system, not a sliding scale.

In this new world of retrieval, the direct answer to the question, “Where do I rank?” could be “ChatGPT,” “Perplexity,” “Claude,” or “Copilot,” instead of a numbered position.

In some ways, this isn’t as big a shift as some folks would have you believe. After all, as the old joke asks, “Where do you hide a dead body?” To which the correct answer is “…on Page 2 of Google’s results!”

Morbid humor aside, the implication is that no one goes there, so there’s no value. That sentiment glosses over the real, nuanced click-through-rate data (the top of page 2 typically has better CTRs than the bottom of page 1), but it does serve up a meta point: If you’re not in the first few results on a traditional SERP, the drop-off in CTR is precipitous.

So, it could be argued that, since most “answers” in today’s generative AI systems comprise a very limited set of references, AI-based systems offer a new display path for consumers, but ultimately those consumers will only be interacting with the same number of results they historically engaged with.

I mean, if we only ever really clicked on the top 3 results (generalizing here), and the rest were surplus to needs, then cutting an AI-sourced answer down to some words with only 1, 2 or 3 cited results amounts to a similar situation in terms of raw numbers of choice for consumers … 1, 2 or 3 clickable options.

Regardless, it does mark a shift in terms of work items and workflows, and here’s how that shift shows up across some core SEO tasks. Obviously, there could be many more, but these examples help set the stage:

  • Keyword research becomes embedding relevance and semantic overlap. It’s not about the exact phrase match in a gen AI result. It’s about aligning your language with the concepts AI understands. It’s about the concept of query fan-out (not new, by the way, but very important now).
  • Meta tag and title optimization become chunked headers and contextual anchor phrases. AI looks for cues inside content to determine chunk focus.
  • Backlink building becomes trust signal embedding and source transparency. Instead of counting links, AI asks: Does this source feel credible and citable?
  • Traffic analytics becomes retrieval testing and AI response monitoring. The question isn’t just how many visits you got; it’s whether your content shows up at all in AI-generated responses.

What this means for teams:

  • Your title tag isn’t just a headline; it’s a semantic hook for AI retrieval.
  • Content format matters more: bullets, tables, lists, and schema win because they’re easier to cite.
  • You need to test with prompts to see if your content is actually getting surfaced.

None of this invalidates traditional SEO. But, the visibility layer is moving. If you’re not optimizing for retrieval, you’re missing the first filter, and ranking doesn’t matter if you’re never in the response set.

The SEO Job Risk Spectrum

Microsoft’s study didn’t target SEO directly, but it mapped 20+ job types by their overlap with current AI tasks. I used those official categories to extrapolate risk within SEO job functions.

Image Credit: Duane Forrester

High Risk – Immediate Change Needed

SEO Content Writers

Mapped to: Writers & Authors (88% task overlap in the study, i.e., an AI can already perform 88% of these tasks today).

Why: These roles often involve creating repeatable, factual content, precisely the kind of output AI handles well today (to a degree, anyway). Think meta descriptions, product overviews, and FAQ pages.

The writing isn’t disappearing, but humans aren’t always required for first drafts anymore. Final drafts, yes, but first? No. And I’m not debating how factual the content is that an AI produces.

We all know the pitfalls, but I’ll say this: If your boss is telling you your job is going away, and your argument is “but AIs hallucinate,” think about whether that’s going to change the outcome of that meeting.

Link Builders/Outreach Specialists

Mapped to: Public Relations Specialists (79% overlap).

Why: Cold outreach and templated link negotiation can now be automated.

AI can scan for unlinked mentions, generate outreach messages, and monitor link placement outcomes, cutting into the core responsibilities of these roles.

Moderate Risk – Upskill To Stay Relevant

SEO Analysts

Mapped to: Market Research Analysts (~65% overlap).

Why: Data gathering and trend reporting are susceptible to automation. But, analysts who move into interpreting retrieval patterns, building AI visibility reports, or designing retrieval experiments can thrive.

Admittedly, SEO is a bit more specialized, but bottom or top of this stack, the risk remains moderate. This one, however, is heavily dependent on your actual job tasks.

Technical SEOs

Mapped to: Web Developers (not perfect, but as close as the study got).

Why: Less overlap with generative AI, but still pressured to evolve. Embedding hygiene, chunk structuring, and schema precision are now foundational.

The most valuable technical SEOs are becoming AI optimization architects. Not leaving their traditional work behind, but adopting new workflows.

Content Strategists/Editors

Mapped to: Editors & Technical Writers.

Why: Editing for humans and tone alone is out. Editing for retrievability is in. Strategists now must prioritize chunking, citation density, and clarity of topic anchors, not just user readability.

Or, at least, now consider that LLM bots are de facto users as well.

Lower Risk – Expanded Value And Influence

SEO Managers/Leads

Mapped to: Marketing Managers.

Why: Managers who understand both traditional and AI SEO have more leverage than ever. They’re responsible for team alignment, training decisions, and tool adoption.

This is a growth role, if guided by data, not gut instinct. Testing is life here.

CMOs/Strategy Executives

Mapped to: Marketing Executives.

Why: Strategic thinking isn’t automatable. AI can suggest, but it can’t set priorities across brand, trust, and investment.

Executives who understand how AI affects visibility will steer their companies more effectively, especially in content-heavy verticals.

Tactical Response By Role Type

Every job category on the risk curve deserves practical action.

Now, let’s look at how people in SEO roles can pivot, strengthen, or evolve, based on clear, verifiable capabilities.

High-Risk Roles: SEO Content Writers, Editors, Link Builders

  • Shift from traditional copywriting to creating structured, retrieval-friendly content.
  • Focus on chunk-based writing: short Q&A blocks, bullet-based explanations, and schema-rich snippets.
  • Learn AI prompt testing: Use platforms like ChatGPT or Google Gemini to query key topics and see if your content is surfaced without requiring a click.
  • Use gen AI visibility tools verified to support AI search tracking:
    • Profound tracks your brand’s appearance in AI search results across platforms like ChatGPT, Perplexity, and Google Overviews. You can see where you’re cited and which topics AI engines associate with you.
    • SERPRecon offers AI-powered content outlines and helps reverse-engineer AI overview logic to show what keywords and phrasing matter most. So, use a tool like this, then take the output as the basis for your query fan-out work.
  • Reinvent your role:
    • Write in chunks that AI can cite.
    • Embed trust signals (clear sourcing, authoritativeness).
    • Collaborate with data teams on embedding accuracy and chunk performance.
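The chunk-based writing point above can be made concrete: a retrieval-friendly page is one a script could split cleanly into self-contained, heading-anchored units. Here is a minimal sketch; the splitting rule and sample text are illustrative assumptions, not a standard.

```python
def chunk_by_heading(markdown_text):
    """Split an article into heading-anchored chunks so each unit
    carries its own topic anchor; clean splits suggest citable chunks."""
    chunks, current = [], []
    for line in markdown_text.splitlines():
        if line.startswith("#") and current:  # a new heading starts a new chunk
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

# Illustrative article: two short Q&A-style sections
doc = (
    "# What is INP?\n"
    "INP measures how quickly a page responds to interaction.\n"
    "# How do I improve INP?\n"
    "Break up long JavaScript tasks and defer non-critical work."
)
for chunk in chunk_by_heading(doc):
    print(chunk.splitlines()[0])  # prints the topic anchor of each chunk
```

If a page can’t be split this cleanly, because answers sprawl across sections without anchors, an AI retriever has the same problem a script does.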

Moderate-Risk Roles: SEO Analysts, Technical SEOs, Content Strategists

  • Expand traditional ranking reports with retrievability diagnostics:
    • Use prompt simulations that probe content retrieval in real-time across AI engines.
    • Audit embedding and semantic alignment at the paragraph or chunk level.
  • Employ tools like those mentioned to analyze AI Overviews and generate content improvement outlines.
  • Monitor AI visibility gaps through new dashboards:
    • Track citation share versus competitors.
    • Identify topic clusters where your domain is cited less.
  • Understand structured data and schema:
    • Use markup to clearly define entities, relationships, and context for AI systems.
    • Prioritize formats like FAQPage, HowTo, and Product schema, where applicable. These are easier for LLMs and AI Overviews to cite.
    • Align semantic clarity within chunks to schema-defined roles (e.g., question/answer pairs, step lists) to improve retrievability and surface relevance.
  • Join or lead internal “AI-SEO Workshops”:
    • Teach teams how to test content visibility in ChatGPT, Perplexity, or Google Overviews.
    • Share experiments in prompt engineering, chunk format outcomes, and schema effectiveness.
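As a concrete reference point for the FAQPage recommendation above, here is a minimal JSON-LD snippet; the question and answer text are illustrative, while the property names come from the Schema.org vocabulary.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Interaction to Next Paint (INP)?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "INP measures how quickly a page visually responds after a user interacts with it."
      }
    }
  ]
}
```

Because each question and answer lives in its own labeled field, an AI system can lift a single Q&A pair without parsing the whole page, which is exactly the chunk-level citability this section is about.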

Lower-Risk Roles: SEO Managers, Digital Leads, CMOs

  • Sponsor retraining initiatives for semantic and vector-led SEO practices.
  • Revise hiring briefs and job descriptions to include skills like embedding knowledge, prompt testing, schema fluency, and chunk analysis.
  • Implement AI-visibility dashboards using dedicated tools:
    • Benchmark brand presence across search engines and generative platforms.
    • Use insights to guide future content and authority decisions.
  • Keep traditional SEO strong alongside AI tactics:
    • Technical optimization, speed, quality of content, etc., still matter.
    • Hybrid success requires both sides working in sync.
  • Set internal AI literacy standards:
    • Offer training on retrieval engineering, LLM behavior, and chunk visibility.
    • Ensure everyone understands AI’s core behaviors, what it cites, and what it ignores.

Reframing The Opportunity

This isn’t a “get out now” scenario for these jobs. It’s a “rebuild your toolkit” moment.

High overlap doesn’t mean you’re obsolete. It means the old version of your job won’t hold value without adaptation. And what gets automated away often wasn’t the best part of the job anyway.

AI isn’t replacing SEO, it’s distilling it. What’s left is:

  • Strategy that aligns with machine logic and user needs.
  • Content structure that supports fast retrieval, not just ranking.
  • Authority based on more, deeper, sometimes implied, trust signals, not just age or backlinks. Like E-E-A-T++.

Think of it this way: AI strips away the boilerplate. What’s left is your real contribution. Your judgment. Your design. Your clarity.

New opportunity lanes are forming right now:

  • Writers who evolve into retrievability engineers.
  • Editors who become semantic format strategists.
  • Technical SEOs who own chunk structuring and indexing hygiene.
  • Analysts who specialize in AI visibility benchmarking.

These aren’t job titles (yet), but the work is happening. If you’re in a role that touches content, structure, trust, or performance, now is the time to sharpen your relevance, not to fear automation.

Final Word

The fundamentals still matter. Technical SEO, content quality, and UX don’t go away; they evolve alongside AI.

No, SEO isn’t dying, it’s becoming more strategic, more semantic, more valuable. AI-driven retrievability is already redefining visibility. Are you ready to adapt?

This post was originally published on Duane Forrester Decodes.


Featured Image: /Shutterstock

Next Steps for AI Shopping

For two decades, search has driven ecommerce. Whether through Google, Amazon, or on‑site, the sequence was simple: a shopper types a query, compares results, and makes a purchase. Search engine marketing, organic and paid, enhanced performance.

That model is evolving.

Generative AI “agents” can now help shoppers compare products, prices, and options, often in a single conversational step. Shoppers initiate the interactions; agents then search relevant sites and respond based on the query, past preferences, and purchase history.

What Is Agentic Shopping?

“Agentic” refers to autonomous systems that can independently act and decide. Agentic shopping uses AI agents, such as ChatGPT, Gemini, Perplexity, and Claude, to guide the buying journey for a user. Think of it as a personal shopping assistant that interprets a request, searches multiple retailers, and surfaces relevant product and price options.

Retailgentic, the Substack publication of ReFiBuy, a soon-to-launch agentic tool provider, recently published its test of Comet, Perplexity’s AI-driven browser:

Shopper: This bed set is pretty expensive, but I like it. Can you find it cheaper?

Comet: I searched for a better price on the Allied Home Down Alternative XLT Dorm Kit (comforter, pillow, and mattress pad, Twin XL) and it is still listed at $84.99 at Target — the standard price for this exact bundled set. I didn’t find this specific 3‑piece kit for less elsewhere.

However, other retail sites (like Kohl’s and Macy’s) offer Twin XL bedding pieces or comforter sets individually.

Instead of hopping from site to site, the shopper gets an answer in one dialog.

Why It Matters

Shoppers are warming to AI shopping, though unevenly by age. A February 2025 New Consumer survey (PDF) of approximately 3,000 U.S. residents found that 64% of Gen Zs (ages mid-teens to late 20s) and Millennials (late 20s to early 40s) are “very” or “somewhat” comfortable interacting with an AI shopping advisor, versus 40% for Gen Xs (mid 40s to early 60s).

AI platforms are capitalizing:

  • ChatGPT now embeds Shop Pay, Shopify’s hosted checkout and payment tool. Shoppers can discover, evaluate, and purchase goods from Shopify-powered merchants without leaving the chat, turning conversational AI into a sales channel.
  • Perplexity’s agent‑led checkout, in partnership with PayPal, enables purchases, travel bookings, and event ticket sales directly in chat.
  • Structured product feeds in Perplexity can ingest clean, up‑to‑date product data, such as from beauty brand Ulta (powered by Rithum, my employer), for accurate pricing, attributes, and real‑time recommendations.

Next Steps

There’s no definitive AI playbook, but merchants can still prepare.

Audit product data

Universal standards for AI product feeds don’t (yet) exist, but you’re likely in good shape if you already maintain a product feed, such as for Google Shopping. Make sure it includes all key attributes: size, color, material, weight, and use cases.

Track AI visibility

Test how your products appear in genAI platforms. Brands and manufacturers can prompt with their own name to see how it surfaces. Even better, try prompts that shoppers might use, and see how AI ranks or references your products compared with competitors. For example, “Find me the best backpack that fits two days of clothes and fits under an airplane seat” or “List the highest-rated cordless drills from DeWalt under $200.”

Multiple Channels

Widespread use of AI shopping is far from certain.

Adoption varies. Younger shoppers are more comfortable, older shoppers less so.

Accuracy is uneven. AI can show outdated prices, inventory, and product details because many platforms scrape product data, which is prone to errors, instead of using product feeds. In ChatGPT, products unrelated to a query sometimes appear in comparison carousels.

AI shopping agents could become an important revenue channel, but they’re not a replacement for direct customer relationships, traditional search, or advertising. Make your product data AI‑ready while continuing to diversify your sales mix.

Invest in multiple channels, customer engagement, and building a brand that can thrive regardless of how shoppers discover products.

Google Backtracks On Plans For URL Shortener Service via @sejournal, @martinibuster

Google announced that they will continue to support some links created by the deprecated goo.gl URL shortening service, saying that 99% of the shortened URLs receive no traffic. They were previously going to end support entirely, but after receiving feedback, they decided to continue support for a limited group of shortened URLs.

Google URL Shortener

Google announced in 2018 that they were deprecating the Google URL Shortener, no longer accepting new URLs for shortening but continuing to support existing URLs. Seven years later, they noticed that 99% of the shortened links did not receive any traffic at all, so on July 18 of this year, Google announced they would end support for all shortened URLs by August 25, 2025.

After receiving feedback, they changed their plan on August 1 and decided that they would move ahead with ending support for URLs that do not receive traffic, but continue servicing shortened URLs that still receive traffic.

Google’s announcement explained:

“While we previously announced discontinuing support for all goo.gl URLs after August 25, 2025, we’ve adjusted our approach in order to preserve actively used links.

We understand these links are embedded in countless documents, videos, posts and more, and we appreciate the input received.

…If you get a message that states, “This link will no longer work in the near future”, the link won’t work after August 25 and we recommend transitioning to another URL shortener if you haven’t already.

…All other goo.gl links will be preserved and will continue to function as normal.”

If you have a goo.gl redirected link, Google recommends visiting the link to check whether it displays a warning message. If it does, move the link to another URL shortener. If it doesn’t display the warning, the link will continue to function.

Featured Image by Shutterstock/fizkes

Google Confirms It Uses Something Similar To MUVERA via @sejournal, @martinibuster

Google’s Gary Illyes answered questions during the recent Search Central Live Deep Dive in Asia about whether or not they use the new Multi‑Vector Retrieval via Fixed‑Dimensional Encodings (MUVERA) retrieval method and also if they’re using Graph Foundation Models.

MUVERA

Google recently announced MUVERA in a blog post and a research paper: a method that improves retrieval by turning complex multi-vector search into fast single-vector search. It compresses sets of token embeddings into fixed-dimensional vectors that closely approximate their original similarity. This lets it use optimized single-vector search methods to quickly find good candidates, then re-rank them using exact multi-vector similarity. Compared to older systems like PLAID, MUVERA is faster, retrieves fewer candidates, and still improves recall, making it a practical solution for large-scale retrieval.

The key points about MUVERA are:

  • MUVERA converts multi-vector sets into fixed vectors using Fixed Dimensional Encodings (FDEs), which are single-vector representations of multi-vector sets.
  • These FDEs (Fixed Dimensional Encodings) match the original multi-vector comparisons closely enough to support accurate retrieval.
  • MUVERA retrieval uses MIPS (Maximum Inner Product Search), an established search technique used in retrieval, making it easier to deploy at scale.
  • Reranking: After using fast single-vector search (MIPS) to quickly narrow down the most likely matches, MUVERA re-ranks them using Chamfer similarity, a more detailed multi-vector comparison method. This final step restores the full accuracy of multi-vector retrieval, so you get both speed and precision.
  • MUVERA is able to find more of the precisely relevant documents with a lower processing time than the state-of-the-art retrieval baseline (PLAID) it was compared to.
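To make the two-stage pipeline tangible, here is a toy sketch in Python. This is not Google’s implementation: the SimHash-style bucketing, dimensions, corpus, and candidate count are all illustrative assumptions. But the shape follows the points above: hash tokens into partition buckets, sum per bucket to get an FDE, search with fast inner products, then re-rank candidates with exact Chamfer similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, PLANES = 8, 3  # token embedding size; 2**PLANES partition buckets

# Random hyperplanes hash each token embedding into a partition bucket
hyperplanes = rng.normal(size=(PLANES, DIM))

def fde(token_embeddings):
    """Fixed Dimensional Encoding: bucket tokens by which side of each
    hyperplane they fall on, sum embeddings per bucket, concatenate."""
    buckets = np.zeros((2 ** PLANES, DIM))
    for tok in token_embeddings:
        idx = sum(1 << i for i, h in enumerate(hyperplanes) if h @ tok > 0)
        buckets[idx] += tok
    return buckets.reshape(-1)

def chamfer(query_tokens, doc_tokens):
    """Chamfer similarity: each query token scores its best doc token."""
    return (query_tokens @ doc_tokens.T).max(axis=1).sum()

# Toy corpus: each document is a small set of token embeddings
docs = [rng.normal(size=(5, DIM)) for _ in range(20)]
# A query built to resemble tokens from document 7
query = docs[7][:3] + 0.05 * rng.normal(size=(3, DIM))

# Stage 1: fast single-vector candidate search (inner product over FDEs)
doc_fdes = np.stack([fde(d) for d in docs])
candidates = np.argsort(doc_fdes @ fde(query))[-5:]

# Stage 2: exact multi-vector re-ranking with Chamfer similarity
best = max(candidates, key=lambda i: chamfer(query, docs[i]))
print(best)
```

The speed win comes from stage 1: every document is a single fixed-length vector, so candidate generation is one matrix-vector product instead of a token-by-token comparison against the whole corpus.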

Google Confirms That They Use MUVERA

José Manuel Morgal (LinkedIn profile) put the question to Google’s Gary Illyes, whose response was to jokingly ask what MUVERA was before confirming that Google uses a version of it:

This is how the question and answer was described by José:

“An article has been published in Google Research about MUVERA and there is an associated paper. Is it currently in production in Search?

His response was to ask me what MUVERA was haha and then he commented that they use something similar to MUVERA but they don’t name it like that.”

Does Google Use Graph Foundation Models (GFMs)?

Google recently published a blog announcement about an AI breakthrough called a Graph Foundation Model.

Google’s Graph Foundation Model (GFM) is a type of AI that learns from relational databases by turning them into graphs, where rows become nodes and the connections between tables become edges.

Unlike older approaches, such as conventional machine learning models and graph neural networks (GNNs), that only work on one dataset, GFMs can handle new databases with different structures and features without retraining on the new data. GFMs use a large AI model to learn how data points relate across tables. This lets GFMs find patterns that regular models miss, and they perform much better in tasks like detecting spam in Google’s scaled systems. GFMs are a big step forward because they bring foundation-model flexibility to complex structured data.
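The rows-to-nodes transformation can be sketched in a few lines. The tables and keys below are invented for illustration; the point is only the conversion described above: rows become graph nodes, and foreign-key references between tables become edges.

```python
# Toy relational tables (invented for illustration), linked by a foreign key
customers = [
    {"id": "c1", "name": "Acme"},
    {"id": "c2", "name": "Globex"},
]
orders = [
    {"id": "o1", "customer_id": "c1", "total": 120.0},
    {"id": "o2", "customer_id": "c1", "total": 80.0},
    {"id": "o3", "customer_id": "c2", "total": 45.0},
]

# Rows become nodes, namespaced by table so IDs can't collide
nodes = {f"customers/{row['id']}": row for row in customers}
nodes.update({f"orders/{row['id']}": row for row in orders})

# Foreign-key references become edges connecting the two tables
edges = [(f"orders/{o['id']}", f"customers/{o['customer_id']}") for o in orders]

print(len(nodes), len(edges))  # 5 3
```

A model operating on this graph can use the connections, e.g., that three orders share two customers, which a table-by-table model never sees.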

Graph Foundation Models represent a notable achievement because their improvements are not incremental. They are an order-of-magnitude improvement, with performance gains of 3x to 40x in average precision.

José next asked Illyes if Google uses Graph Foundation Models and Gary again jokingly feigned not knowing what José was talking about.

He related the question and answer:

“An article has been published in Google Research about Graph Foundation Models for data, this time there are not paper associated with it. Is it currently in production in Search?

His answer was the same as before, asking me what Graph Foundation Models for data was, and he thought it was not in production. He did not know because there are not associated paper and on the other hand, he commented me that he did not control what is published in Google Research blog.”

Gary expressed his opinion that Graph Foundation Model was not currently used in Search. At this point, that’s the best information we have.

Is GFM Ready For Scaled Deployment?

The official Graph Foundation Model announcement says it was tested in an internal task, spam detection in ads, which strongly suggests that real internal systems and data were used, not just academic benchmarks or simulations.

Here is what Google’s announcement relates:

“Operating at Google scale means processing graphs of billions of nodes and edges where our JAX environment and scalable TPU infrastructure particularly shines. Such data volumes are amenable for training generalist models, so we probed our GFM on several internal classification tasks like spam detection in ads, which involves dozens of large and connected relational tables. Typical tabular baselines, albeit scalable, do not consider connections between rows of different tables, and therefore miss context that might be useful for accurate predictions. Our experiments vividly demonstrate that gap.”

Takeaways

Google’s Gary Illyes confirmed that a form of MUVERA is in use at Google. His answer about GFMs was less definitive: he framed it as an opinion, saying he thinks it’s not in production.

Featured Image by Shutterstock/Krakenimages.com

Merging SEO And Content Using Your Knowledge Graph to AI-Proof Content via @sejournal, @marthavanberkel

New AI platforms, powered by generative technologies like Google’s Gemini, Microsoft’s Copilot, Grok, and countless specialized chatbots, are rapidly becoming the front door for digital discovery.

We’ve entered an era of machine-led discovery, where AI systems aggregate, summarize, and contextualize content across multiple platforms.

Users today no longer follow a linear journey from keyword to website. Instead, they engage in conversations and move fluidly between channels and experiences.

These shifts are being driven by new types of digital engagement, including:

  • AI-generated overviews, such as AI Overviews in Google, that pull data from many sources.
  • Conversational search, such as ChatGPT and Gemini, where follow-up questions replace traditional browsing.
  • Social engagement, with platforms like TikTok equipped with their own generative search features, engaging entire generations in interactive journeys of discovery.

The result is a new definition of discoverability and a need to rethink how you manage your brand across these experiences.

It’s not enough to optimize your brand’s website for search engines. You must ensure your website content is machine-consumable and semantically connected to appear in AI-generated results.

This is why forward-thinking organizations are turning to schema markup (structured data) and building content knowledge graphs to manage the data layer that powers both traditional search and emerging AI platforms.

Semantic structured data transforms your content into a machine-readable network of information, enabling your brand to be recognized, connected, and potentially included in AI-driven experiences across channels.

In this article, we’ll explore how SEO and content teams can partner to build a content knowledge graph that fuels discoverability in the age of AI, and why this approach is critical for enterprise brands aiming to future-proof their digital presence.

Why Schema Markup Is Your Strategic Data Layer

You may be asking, “Schema markup – is that not just for rich results (visual changes in SERP)?”

Schema markup is no longer just a technical SEO tactic for achieving rich results; it can also be used to define the content on your website and its relationship to other entities within your brand.

When you apply markup in a connected way, AI and search systems can make more accurate inferences, resulting in better matching to user queries or prompts.

In May 2025, Google and Microsoft both reiterated that the use of structured data makes your content “machine-readable” and eligible for certain features. [Editor’s note: Although, Gary Illyes recently said to avoid excessive use and that Schema is not a ranking factor.]

Schema markup can be a strategic foundation for creating a data layer that feeds AI systems. While schema markup is a technical SEO approach, it all starts with content.

When You Implement Schema Markup, You’re:

Defining Entities

Schema markup clarifies the “things” your content is about, such as products, services, people, locations, and more.

It provides precise tags that help machines recognize and categorize your content accurately.

Establishing Relationships

Beyond defining individual entities (a.k.a. topics), schema markup describes how those entities connect to each other and to broader topics across the web.

This creates a web of meaning that mirrors how humans understand context and relationships.

Providing Machine-Readable Context

Schema markup helps make your content machine-readable.

It enables search engines and AI tools to confidently identify, interpret, and surface your content in relevant contexts, which can help your brand appear where it is most relevant.

Enterprise SEO and content teams can work together to implement schema markup to create a content knowledge graph, a structured representation of your brand’s expertise, offerings, and topic authority.

When you do this, the data you put into search and AI platforms is ready for large language models (LLMs) to make accurate inferences, which can help with consumer visibility.

What Is A Content Knowledge Graph?

A content knowledge graph organizes your website’s data into a network of interconnected entities and topics, all defined by implementing schema markup based on the Schema.org vocabulary. This graph serves as a digital map of your brand’s expertise and topical authority.

Imagine your website as a library. Without a knowledge graph, AI systems trying to read your site have to sift through thousands of pages, hoping to piece together meaning from scattered words and phrases.

With a content knowledge graph:

  • Entities are defined. Machines know precisely who, what, and where you’re talking about.
  • Topics are connected. Machines can better understand and infer how subjects relate. For example, machines can infer that “cardiology” encompasses entities like heart disease, cholesterol, or specific medical procedures.
  • Content becomes query-ready. Your content becomes structured data that AI can reference, cite, and include in responses.

When your content is organized into a knowledge graph, you’re effectively supplying AI platforms with information about your products, services, and expertise.

This becomes a powerful control point for how your brand is represented in AI search experiences.

Rather than leaving it to chance how AI systems interpret your web content, you can help to proactively shape the narrative and ensure machines have the right signals to potentially include your brand in conversations, summaries, and recommendations.

Your organization’s leaders should be aware this is now a strategic issue, not just a technical one.

A content knowledge graph gives you some influence over how your organization’s expertise and authority are recognized and distributed by AI systems, which can impact discoverability, reputation, and competitive advantage in a rapidly evolving digital landscape.

This structure can improve your chances of appearing in AI-generated answers and equips your content and SEO teams with data-driven insights to guide your content strategy and optimization efforts.

How Enterprise SEO And Content Teams Can Build A Content Knowledge Graph

Here’s how enterprise teams can operationalize a content knowledge graph to future-proof discoverability and unify SEO and content strategies:

1. Define What You Want To Be Known For

Enterprise brands should start by identifying their core topical authority areas. Ask:

  • Which topics matter most to our audience and brand?
  • Where do we want to be the recognized authority?
  • What new topics are emerging in our industry that we should own?

These strategic priorities shape the pillars of your content knowledge graph.

2. Use Schema Markup To Define Key Entities

Next, use schema markup to:

  • Identify key entities tied to your priority topics, such as products, services, people, places, or concepts.
  • Connect those entities to each other through Schema.org properties, such as “about,” “mentions,” or “sameAs.”
  • Ensure consistent entity definitions across your entire site so that AI systems can reliably identify and understand entities and their relationships.

This is how your content becomes machine-readable and more likely to be accurately included in AI-driven results and recommendations.
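Here is a hypothetical snippet showing those connective properties in use; the headline, entity names, and URL are illustrative, not a recommendation for any specific markup.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "A Patient's Guide to Cardiology",
  "about": {
    "@type": "Thing",
    "name": "Cardiology",
    "sameAs": "https://en.wikipedia.org/wiki/Cardiology"
  },
  "mentions": [
    { "@type": "MedicalCondition", "name": "Heart disease" },
    { "@type": "MedicalCondition", "name": "High cholesterol" }
  ]
}
```

The `about` and `sameAs` pair anchors the page to a disambiguated entity, while `mentions` records the related entities; repeated consistently across pages, these statements accumulate into the knowledge graph.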

3. Audit Your Existing Content Against Your Content Knowledge Graph

Instead of just tracking keywords, enterprises should audit their content based on entity coverage:

  • Are all priority entities represented on your site?
  • Do you have “entity homes” (pillar pages) that serve as authoritative hubs for those priority entities?
  • Where are there gaps in entity coverage that could limit your presence in search and AI responses?
  • What content opportunities exist to improve coverage of priority entities where these gaps have been identified?

A thorough audit provides a clear roadmap for aligning your content strategy with how machines interpret and surface information, ensuring your brand has the potential to be discoverable in evolving AI-driven search experiences.

4. Create Pillar Pages And Fill Content Gaps

Based on your findings from Step 3, create dedicated pillar pages for high-priority entities where needed. These become the authoritative source that:

  • Defines the entity.
  • Links to supporting content, including case studies, blog posts, or service pages.
  • Signals to search engines and AI systems where to find reliable information about that entity.

Supporting content can then be created to expand on subtopics and related entities that link back to these pillar pages, ensuring comprehensive coverage of topics.

5. Measure Performance By Entity And Topic

Finally, enterprises should track how well their content performs at the entity and topic levels:

  • Which entities drive impressions and clicks in AI-powered search results?
  • Are there emerging entities gaining traction in your industry that you should cover?
  • How does your topical authority compare to competitors?

This data-driven approach enables continuous optimization, helping you to stay visible as AI search evolves.

Why SEO And Content Teams Are The Heroes Of The AI Search Evolution

In this new landscape, where AI generates answers before users ever reach your website, schema markup and content knowledge graphs provide a critical control point.

They enable your brand to signal its authority to machines, support the possibility of accurate inclusion in AI results and overviews, and inform SEO and content investment based on data, not guesswork.

For enterprise organizations, this isn’t just an SEO tactic; it’s a strategic imperative that could protect visibility and brand presence in the new digital ecosystem.

So, the question remains: What does your brand want to be known for?

Your content knowledge graph is the infrastructure that ensures AI systems, and by extension, your future customers, know the answer.

Featured Image: Urbanscape/Shutterstock

2025 Core Web Vitals Challenge: WordPress Versus Everyone via @sejournal, @martinibuster

The Core Web Vitals Technology Report shows the top-ranked content management systems by Core Web Vitals (CWV) for the month of June (July’s statistics aren’t out yet). The breakout star this year is an e-commerce platform, which is notable because shopping sites generally have poor performance due to the heavy JavaScript and image loads necessary to provide shopping features.

This comparison also looks at the Interaction to Next Paint (INP) scores because they don’t mirror the CWV scores. INP measures how quickly a website responds visually after a user interacts with it. The phrase “next paint” refers to the moment the browser visually updates the page in response to a user’s interaction.

A poor INP score can mean that users will be frustrated with the site because it’s perceived as unresponsive. A good INP score correlates with a better user experience because of how quickly the website performs.
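For reference, a site counts as having a “good” Core Web Vitals assessment when all three metrics meet Google’s published “good” thresholds at the 75th percentile of page loads: LCP at or under 2.5 seconds, INP at or under 200 ms, and CLS at or under 0.1. A small sketch (the sample field values are made up):

```python
# Google's published "good" thresholds, assessed at the 75th percentile
THRESHOLDS = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1}

def passes_cwv(p75_metrics):
    """True when all three Core Web Vitals are 'good' at the 75th percentile."""
    return all(p75_metrics[name] <= limit for name, limit in THRESHOLDS.items())

# Hypothetical field data for two sites
print(passes_cwv({"lcp_ms": 2100, "inp_ms": 180, "cls": 0.05}))  # True
print(passes_cwv({"lcp_ms": 2100, "inp_ms": 320, "cls": 0.05}))  # False (slow INP)
```

This all-or-nothing pass rule is why a platform can post strong INP numbers yet still score poorly on overall CWV, as the rankings below show.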

Core Web Vitals Technology Report

The HTTP Archive Technology Report combines two public datasets:

  1. Chrome UX Report (CrUX)
  2. HTTP Archive

1. Chrome UX Report (CrUX)
CrUX obtains its data from Chrome users who opt into providing usage statistics reporting as they browse over 8 million websites. This data includes performance on Core Web Vitals metrics and is aggregated into monthly datasets.

2. HTTP Archive
HTTP Archive obtains its data from lab tests by tools like WebPageTest and Lighthouse that analyze how pages are built and whether they follow performance best practices. Together, these datasets show how websites perform and what technologies they use.

The CWV Technology Report combines data from HTTP Archive (which tracks websites through lab-based crawling and testing) and CrUX (which collects real-user performance data from Chrome users), and that’s where the Core Web Vitals performance data of content management systems comes from.

#1 Ranked Core Web Vitals (CWV) Performer

The top-performing content management system is Duda. A remarkable 83.63% of websites on the Duda platform received a good CWV score. Duda has consistently ranked #1, and this month continues that trend.

For Interaction to Next Paint scores, Duda ranks in the second position.

#2 Ranked CWV CMS: Shopify

The next position is occupied by Shopify. 75.22% of Shopify websites received a good CWV score.

This is extraordinary because shopping sites are typically burdened with excessive JavaScript to power features like product filters, sliders, image effects, and other tools that shoppers rely on to make their choices. Shopify, however, appears to have largely solved those issues and is outperforming other platforms, like Wix and WordPress.

In terms of INP, Shopify is ranked #3, at the upper end of the rankings.

#3 Ranked CMS For CWV: Wix

Wix comes in third place, just behind Shopify. 70.76% of Wix websites received a good CWV score. In terms of INP scores, 86.82% of Wix sites received a good INP score. That puts them in fourth place for INP.

#4 Ranked CMS: Squarespace

67.66% of Squarespace sites had a good CWV score, putting them in fourth place for CWV, just a few percentage points behind the No. 3 ranked Wix.

That said, Squarespace ranks No. 1 for INP, with a total of 95.85% of Squarespace sites achieving a good INP score. That’s a big deal because INP is a strong indicator of a good user experience.

#5 Ranked CMS: Drupal

59.07% of sites on the Drupal platform had a good CWV score. That’s more than half of sites, considerably lower than Duda’s 83.63% score but higher than WordPress’s score.

But when it comes to INP, Drupal ranks last, with only 85.5% of sites achieving a good score.

#6 Ranked CMS: WordPress

Only 43.44% of WordPress sites had a good CWV score. That’s over fifteen percentage points lower than fifth-ranked Drupal. So WordPress isn’t just last in terms of CWV performance; it’s last by a wide margin.

WordPress performance hasn’t been getting better this year either. It started 2025 at 42.58%, then went up a few points in April to 44.93%, then fell back to 43.44%, finishing June at less than one percentage point higher than where it started the year.

WordPress is in fifth place for INP scores, with 85.89% of WordPress sites achieving a good INP score, just 0.39 points above Drupal, which is in last place.

But that’s not the whole story about the WordPress INP scores. WordPress started the year with a score of 86.05% and ended June with a slightly lower score.

INP Rankings By CMS

Here are the rankings for INP, with the percentage of sites exhibiting a good INP score next to the CMS name:

  1. Squarespace 95.85%
  2. Duda 93.35%
  3. Shopify 89.07%
  4. Wix 86.82%
  5. WordPress 85.89%
  6. Drupal 85.5%

As you can see, positions 3–6 are all bunched together in the eighty percent range, with only a 3.57 percentage point difference between the last-placed Drupal and the third-ranked Shopify. So, clearly, all the content management systems deserve a trophy for INP scores. Those are decent scores, especially for Shopify, which earned a second-place ranking for CWV and third place for INP.
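The “bunched together” claim is easy to verify from the list above. A quick sketch that recomputes each platform’s gap to last-placed Drupal, using the figures copied from the ranking:

```python
# INP pass rates from the ranking above (percent of sites with a good score).
inp_scores = {
    "Squarespace": 95.85,
    "Duda": 93.35,
    "Shopify": 89.07,
    "Wix": 86.82,
    "WordPress": 85.89,
    "Drupal": 85.50,
}

# Gap between each CMS and the lowest score, in percentage points.
worst = min(inp_scores.values())
gaps = {cms: round(score - worst, 2) for cms, score in inp_scores.items()}

print(gaps["Shopify"])  # → 3.57 (third-ranked Shopify vs. last-placed Drupal)
```

The same dictionary confirms how tight the bottom of the table is: Wix sits 1.32 points above Drupal and WordPress just 0.39 points above it.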

Takeaways

  • Duda Is #1
    Duda leads in Core Web Vitals (CWV) performance, with 83.63% of sites scoring well, maintaining its top position.
  • Shopify Is A Strong Performer
    Shopify ranks #2 for CWV, a surprising performance given the complexity of e-commerce platforms, and scores well for INP.
  • Squarespace #1 For User Experience
    Squarespace ranks #1 for INP, with 95.85% of its sites showing good responsiveness, indicating an excellent user experience.
  • WordPress Performance Scores Are Stagnant
    WordPress lags far behind, with only 43.44% of sites passing CWV and no signs of positive momentum.
  • Drupal Also Lags
    Drupal ranks last in INP and fifth in CWV, with over half its sites passing but still underperforming against most competitors.
  • INP Scores Are Generally High Across All CMSs
    Overall INP scores are close among the bottom four platforms, suggesting that INP scores are relatively high across all content management systems.

Find the Looker Studio rankings here (must be logged into a Google account to view).

Featured Image by Shutterstock/Krakenimages.com