Organic Search Winners Share 5 Traits

Google’s March 2026 core algorithm update concluded on April 8. The search giant doesn’t provide recovery guidelines for businesses whose rankings have decreased. That leaves search-engine optimizers to devise tactics, modeled on the winners, to help losing sites regain organic visibility.

A just-published study by SEO pro Cyrus Shepard of Zyppy Signal is an example. He analyzed organic search traffic of 400 winning and losing websites over the past 12 months and classified them by business model, content types, creator profiles, and other definable traits. From there, he identified five characteristics of winning sites.

Here are Cyrus’s five features of sites that consistently maintain prominent organic rankings on Google.

Proprietary assets

Of the 400 analyzed sites, 92.9% of the winners own proprietary assets that are difficult to replicate, such as datasets, products, images, or studies.

For example, a fashion ecommerce site may use its user data to report trends in colors or seasonality. A site with extensive product reviews could repurpose them into shopping guides.

Completes a task

According to the study, 83.7% of winning websites help searchers do something: buy, download, or search.

Winning sites tend to help users accomplish whatever they set out to do. Losing sites may offer meaningful info on a topic, but the searcher must go elsewhere to complete the task.

The solution may be a unique product or an interactive tool. For example, a tutorial site could offer interactive tools, quizzes, and workbooks to help students practice math.

Niche expertise

Expertise within a niche was a trait of 75.9% of the winners.

Winning sites tend to focus on a topic in which they have deep knowledge and experience.

Those sites become go-to authorities for specialized subjects. Hyper-specific travel blogs, for example, often outrank global travel brands.

Unique product or service

A unique product or service is a trait of 70.2% of sites that consistently rank well across core updates. Cyrus’s study found that informational sites (news publishers and affiliate sites) lost the most traffic and that offering a product may be the answer.

For example, a recipe site can sell a subscription meal plan, a book, or access to a private cooking community.

Strong brand

A strong brand, meaning a destination site, was a trait of 32.6% of organic search winners. Cyrus found a high correlation between winning in organic search and having a strong profile of branded search terms.

The more searchers query a business’s name, the more that site functions as a destination, which is a strong signal to Google. In other words, treat your brand search metrics as a key performance indicator.

I’ll add one feature for 2026 that Cyrus doesn’t address: sites that rank prominently in organic search offer something that AI cannot easily replicate.

How To Build AI Visibility In 90 Days [Webinar] via @sejournal, @hethr_campbell

AI search has changed how buyers discover solutions. Here’s how to make sure they find you.

Why AI Visibility Is Now a Growth Priority

Platforms like ChatGPT, Perplexity, and Google AI Overviews are now active discovery channels for buyers. Marketing leaders who understand those signals are building durable visibility. Those who don’t are quietly losing ground.

What You’ll Learn in This Free SEO Webinar

  • Which AI visibility signals actually drive discoverability in 2026
  • A phased 90-day framework that helps you audit your baseline, run AI-native experiments, then scale what works
  • How funded startups are restructuring teams and budgets around this shift

About the Speaker

Jason Shafton is Founder & CEO of Winston Francois, a growth consulting firm. He’s led growth and marketing at Google, Headspace, and Kajabi, and has built AI visibility playbooks across 10+ venture and PE-backed startups navigating this exact transition.

Register Free

This is one hour of tactical, experience-backed frameworks, built for founders, CMOs, and marketing leaders who are ready to act.

Google May Have To Share Search Data With Rivals via @sejournal, @MattGSouthern

The European Commission has sent preliminary findings to Google proposing measures that would require it to share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA.

Under the proposal, Google must share four categories of anonymized data on fair, reasonable, and non-discriminatory (FRAND) terms.

The categories are ranking, query, click, and view data. The Commission says the aim is to allow third-party search engines to “optimise their search services and contest Google Search’s position.”

The measures are not yet binding. A public consultation is open until May, and a final decision is due by July 27.

What’s In The Proposal

The Commission’s proposed measures cover six areas:

  • Eligibility criteria for data beneficiaries, including AI chatbots with search capabilities
  • The extent of search data that Google is required to share
  • Methods and intervals for sharing data
  • Anonymization standards for personal data
  • Guidelines for determining FRAND pricing
  • Procedures for how beneficiaries access the data

The data will be available to eligible third parties operating search engines in the EEA, including AI chatbot providers that qualify as such.

This is an Article 6(11) proceeding, following the Commission’s opening of the case on January 27. A separate Article 6(7) proceeding addresses Android interoperability for third-party AI. Both aim to turn broad DMA obligations into specific, enforceable rules.

AI Chatbots Are Eligible

Eligibility criteria for qualifying AI chatbots are what change the picture for AI search visibility.

Under the proposal, AI chatbots meeting the DMA’s definition of online search engines could access Google’s anonymized search data. Qualified AI search products might use this data to improve their retrieval and ranking systems.

The proposed measures specify data sharing methods, frequency, access, and pricing, with technical details to be finalized.

Google Is Pushing Back

Google opposed the proposal in a statement provided to multiple outlets. Clare Kelly, Senior Competition Counsel at Google, said in a statement to Engadget:

“Hundreds of millions of Europeans trust Google with their most sensitive searches — including private questions about their health, family, and finances — and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections. We will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security.”

Google also told The Register the investigation appears to be driven “at least in part by OpenAI,” which it claims is “seeking to take advantage of the DMA to harvest data from Google in ways not anticipated by the drafters of the DMA.”

The company is fighting on several DMA fronts. Brussels sent preliminary findings in 2025 on a separate Article 6(5) self-preferencing case. In February, Google began testing search result changes in the EU to address that proceeding.

Why This Matters

The measures are preliminary and, if adopted, applicable only in the EEA. Anonymization and pricing details remain open through the May consultation.

The longer-term issue is whether AI chatbot eligibility survives the final decision in July.

If the EU proposal is adopted with eligibility for AI chatbots, eligible products serving EU/EEA users could access anonymized signals from Google Search.

The proposal doesn’t give AI chatbots access to Google’s index but instead allows access to data similar to what Alphabet uses to optimize its search services, which differs from current AI search data sources.

Looking Ahead

The public consultation closes on May 1, and the Commission will assess the feedback before issuing a final, binding decision for Google by July 27.

These proceedings do not constitute a non-compliance finding, but separate DMA enforcement can impose fines up to 10% of global turnover. The next milestone for AI visibility practitioners is the consultation outcome.

If the Commission maintains eligibility for AI chatbots, the focus shifts to how quickly data-sharing arrangements enable AI tools to compete for citation visibility.


Featured Image: Samuel Boivin/Shutterstock

What Search Engines Trust Now: Authority, Freshness & First-Party Signals via @sejournal, @cshel

Search has not become more chaotic. It has become more continuous.

If the last two years have felt like a blur of updates, volatility, and shifting guidance, you’re not imagining it. What’s changed is not just what search engines value. It’s how those values are evaluated.

The traditional model – periodic updates, relatively stable ranking signals, and long feedback loops – has been replaced by something faster and less discrete. Search engines now run on AI systems that continuously test, interpret, and refine results, so what looks like constant algorithm change is actually ongoing model adjustment.

It’s this shift that has redefined what search engines trust.

The Algorithm Isn’t Static Anymore

For years, SEO operated on a predictable rhythm: core updates arrived, the rankings shifted, and then the industry analyzed the damage, identified patterns, and adapted.

That model assumed a relatively stable system punctuated by updates, but that assumption no longer holds.

Modern search systems incorporate multiple layers of AI-driven evaluation, including ranking systems, retrieval mechanisms, and answer-generation layers. These systems do not wait for quarterly updates. They iterate constantly, adjusting weighting, refining interpretation, and recalibrating outputs in near real time.

What we’re left with is a shorter signal half-life. What worked six months ago may still matter, but it is being re-evaluated continuously rather than periodically.

This is why it feels like we’re in a persistent state of chaos. The system is never settled; it’s always learning.

From Ranking To Evaluation

Traditional SEO focused on ranking documents. Pages competed as whole units, evaluated on signals like links, relevance, and technical accessibility. That model still exists, but it is no longer the full picture.

AI-driven search introduces a second layer: retrieval and synthesis. Instead of simply ranking pages, systems increasingly extract and recombine information from multiple sources to produce answers. This changes the competitive unit: pages still rank, but fragments are what get used.

In practical terms, your content is no longer evaluated solely as a document or single URL. It is evaluated as an entire collection of potential answers. Each section, paragraph, and list becomes a candidate for inclusion in AI-generated responses.

Why does this distinction matter? Because it shifts the role of trust. Search engines are not just deciding which page deserves to rank; they are deciding which source is trustworthy enough to be a resource.

Redefining “Trust” In Search

Trust used to feel like a score – it was a combination of authority signals, content quality, and technical hygiene that resulted in stable rankings.

Today, trust behaves more like a probability – it is continuously evaluated, recalculated, and reinforced based on new data. It is not assigned once and retained. It is earned repeatedly.

How is trust determined? There are three factors that dominate the evaluation: authority, freshness, and first-party signals. Each plays a distinct role in how AI-driven systems determine what to surface.

Authority: The Entry Point

Authority has always mattered, no question, but what has changed is where it sits in the process. In an AI-driven system, authority functions as a filter. It determines whether your content is even considered. Not all sources get equal treatment because not all sources are considered authoritative. These systems are biased toward entities they recognize – brands, authors, and domains that have demonstrated consistent expertise and visibility across the web.

A certain quantity of backlinks is no longer a reliable proxy for authority. Entity-level authoritative presence requires more proof than just links. The search engines build an understanding of who you are (and your authority) based on:

  • Mentions across other authoritative sites.
  • Consistent authorship and topical focus.
  • Brand recognition within a subject area.
  • Inclusion in structured knowledge systems.

These signals create what can be thought of as “entity gravity.” The stronger your presence, the more likely your content is to be included in the candidate set for retrieval.

The key distinction is that authority does not guarantee visibility; it guarantees eligibility. Without it, your content may be well-written, well-structured, and technically sound – and still be ignored.

Authority Comes Before Structure

There is a common misconception that better formatting or clearer writing alone can improve visibility in AI-driven search. Sorry, but it cannot, at least not in isolation.

Authority determines whether your content is selected. Structure determines whether it can be used. So, if your brand lacks recognition, your content may never be retrieved. If your content lacks structure, it may be retrieved but never cited. Both layers are required for this to work well.

This is why entity-building efforts, like PR, partnerships, thought leadership, and brand presence, have become inseparable from SEO. They influence not just rankings, but inclusion.

Freshness: The Signal Of Ongoing Relevance

Freshness has also evolved, or maybe it’s more accurate to say that it’s diverged.

In the past, all types of content benefited from freshness, and that freshness factor was often tied to recency. Newer content could reliably receive a temporary boost, especially for time-sensitive queries.

Today, that old kind of freshness only benefits time-sensitive publishers like news outlets. For everyone else, freshness is less about when something was published and more about whether it is being maintained.

When we’re looking at how freshness is evaluated for non-news publishers (i.e., everyone else), we see that AI-driven systems prioritize sources that demonstrate ongoing relevance. This includes:

  • Regularly updated content.
  • Clear timestamps and revision history.
  • Reinforcement of key topics over time.
  • Alignment with current information and context.

Outdated content introduces risk. If a system cannot determine whether information is still accurate (especially at grounding), it is less likely to include it in a synthesized answer.

Freshness, in this sense, becomes a trust reinforcement loop. Updating content signals continued expertise. It reduces uncertainty. It increases the likelihood of inclusion.

Please do not confuse this with rewriting everything constantly. It means maintaining the content that matters.

First-Party Signals: The Ground Truth

The third big shift is the dramatically increasing importance of first-party signals. AI systems are designed to synthesize information, but they still depend on source material. The quality of that material directly affects the quality of the output. As a result, systems favor content that represents original, verifiable input rather than recycled summaries.

First-party signals include:

  • Original research and data.
  • Proprietary insights and analysis.
  • Direct product or service information.
  • First-hand experience and expertise.

These signals reduce ambiguity. They provide a clear source of truth. They are easier to attribute and harder to replicate.

This is one of the reasons the “content at scale” model has struggled in recent years. Large volumes of derivative content offer little new information. They increase noise without increasing value.

AI systems are not looking for more content; they are looking for better inputs. If your content does not add something unique, it is unlikely to be selected.

The Hidden Layer: Usability

So we know that authority gets you considered, freshness keeps you relevant, and first-party signals establish credibility. But none of that matters if your content cannot be used, and this is where many sites fail.

A page can rank well and still have no presence in AI-generated answers. When that happens, it is rarely a ranking issue. It is an extractability issue.

AI systems do not read pages the way humans do. They do not navigate, interpret, and synthesize in a leisurely, exploratory way. They retrieve what is easy to extract and move on.

Content that performs well in this environment tends to share a few characteristics:

  • Clear, descriptive headings.
  • Logical hierarchy (H1, H2, H3).
  • One primary idea per paragraph.
  • Direct, declarative statements.
  • Lists and tables where appropriate.
  • Key points introduced early, not buried.

This is not about writing style. It is about reducing friction.

If a system has to reinterpret your content to isolate the answer, it is less likely to use it. If it can lift a sentence or a list directly, it is more likely to include it. In this sense, structure is not cosmetic. It is functional.

Why “Good SEO” Isn’t Always Enough

Many teams are encountering a frustrating pattern: They rank well, traffic is stable, but they are absent from AI-generated answers.

The first instinct is to look for ranking issues. Then, when that doesn’t fix the problem, teams move on to re-optimizing keywords, building more links, or publishing more content. None of these solutions addresses the real problem.

Ranking determines whether you are visible in search results. Retrieval determines whether you are used in answers. Those are not the same system. A page can perform well in traditional SEO metrics and still fail to provide clean, extractable segments for AI systems. When that happens, competitors with clearer structure or stronger authority are more likely to be cited, even if they rank lower.

This is not a contradiction; rather, it is a shift in evaluation.

Practical Implications

The implications for SEO are straightforward, even if the execution is not.

First, please stop treating updates as isolated events. They are outputs of a continuous system. Optimizing for long-term direction is more effective than reacting to short-term volatility.

Second, invest in authority at the entity level. Build recognition beyond your own site. Where and how you are mentioned matters as much as what you publish.

Third, maintain your content. Freshness is not a one-time signal. It is an ongoing demonstration of relevance.

Fourth, prioritize first-party value. Original insights, data, and expertise are more durable than derivative content.

Finally, structure for usability. Make your content easy to extract, not just easy to read.

Trust Is Now Dynamic

Search engines no longer assign trust once and move on. They evaluate it continuously, so you need to continuously monitor and maintain your trust signals.

Authority determines whether you are considered. Freshness determines whether you remain relevant. First-party signals determine whether you are credible. Structure determines whether you are usable.

All four are required.

If your content cannot be selected, extracted, and trusted quickly, it does not matter how well it ranks. That is the shift, and it is not going away.

Featured Image: beast01/Shutterstock

Google Lists Best Practices For Read More Deep Links via @sejournal, @MattGSouthern

Google updated its snippet documentation today with a new section on “Read more” deep links in Search results. The section outlines three best practices for increasing the likelihood that a page appears with these deep links.

What A Read More Deep Link Is

Google defines the feature as “a link within a snippet that leads users to a specific section on that page.”

The examples in the documentation show the link appearing inside the snippet area of a standard Search result.

Screenshot from: developers.google.com/search/docs/appearance/snippet, April 2026.

The Three Best Practices

Google lists three best practices that can increase the likelihood of these links appearing.

First, content must be immediately visible to a human on page load. Content hidden behind expandable sections or tabbed interfaces can reduce that likelihood, per Google’s guidance.

Second, avoid using JavaScript to control the user’s scroll position on page load. One example Google gives is forcing the user’s scroll to the top of the page.

Third, if the page uses history API calls or window.location.hash modifications on page load, keep the hash fragment in the URL. Removing it breaks deep linking behavior.
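
To illustrate the third practice, here is a minimal TypeScript sketch of a page-load handler that rewrites the URL with the History API while preserving the hash fragment. The function name and the tracking parameter being stripped are hypothetical examples, not part of Google’s documentation.

```typescript
// Minimal sketch: clean up the URL on page load without dropping the
// hash fragment that "Read more" deep links rely on.
// The removed query parameter ("ref") is a hypothetical example.
function normalizeUrlOnLoad(): void {
  const url = new URL(window.location.href);

  // Example cleanup: remove a tracking parameter (hypothetical).
  url.searchParams.delete("ref");

  // This would break deep linking: rewriting the URL without the hash.
  // history.replaceState(null, "", url.pathname + url.search);

  // This keeps deep linking intact: the hash fragment stays in the URL.
  history.replaceState(
    null,
    "",
    url.pathname + url.search + window.location.hash
  );
}

window.addEventListener("DOMContentLoaded", normalizeUrlOnLoad);
```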

More Context

Read more deep links are one type of anchor URL that appears in Search Console performance reports. John Mueller previously addressed those hashtag URLs, confirming that they come from Google and link to page sections.

Before today’s addition, the documentation was last revised in 2024. That change clarified that page content, not the meta description, is the primary source of search snippets.

Why This Matters

For websites, the new guidance outlines what can increase the likelihood that a Read more deep link will appear.

Pages using accordion UI patterns, tabbed content, or forced-scroll JavaScript may reduce that likelihood. Teams working with single-page applications should ensure that hash fragments remain in URLs during page loads.

Looking Ahead

This is a documentation clarification, not a new SERP feature. Read more deep links have appeared in Search for some time. What’s new is the written guidance on how to increase that likelihood.

Developers working on JavaScript-heavy sites should test how their pages handle scroll position and hash fragments on initial load. Today’s update provides clearer signals on what can reduce the likelihood of a “Read more” link appearing.


Featured Image: Blossom Stock Studio/Shutterstock

Winning Google Ads Campaign Structures For DTC Ecommerce via @sejournal, @MenachemAni

You’ve got a whole library of winning ads from Meta to run on Google, but you don’t want to spend a ton of time setting up campaigns or becoming a Google guru. So, you take your existing creatives and pop them into Performance Max, spin up some ad copy, and let Google do its thing.

One campaign, one budget, and your entire product line targeting a broad audience – just like Meta taught you. When we audit ecommerce brands expanding to Google, this is the thinking we often see reflected in a highly consolidated account setup.

The logic makes sense if you think in Meta terms. Consolidate spend, let the algorithm find buyers, and scale what converts. It works on Meta because the platform is built on interest-based targeting. You define a pool, feed it plenty of creatives, and the system shows it to the right people.

Except … Google doesn’t work that way. Targeting is driven by active search intent, so a consolidated, broad structure doesn’t give the algorithm better signal – just noise. Your account ends up burning through your $20,000/month budget without the architecture needed to distinguish between demand that was already on its way to being captured and truly net new revenue.

If you live in the world of direct-to-consumer (DTC) and ecommerce brands and operate this way, you aren’t being careless. You’ve mastered one of the most competitive paid channels available and are simply applying that expertise to a platform that operates on entirely different principles.

Let me fix it.

Why Account Structure Is Vital To Success

Every search query in Google is a person telling you something – not a demographic or an interest category inferred from content they’ve engaged with, but an explicit, real-time signal that someone is looking for what you offer right now.

That signal is the foundation of everything Google Ads is. Smart Bidding reads it, query matching acts on it, the auction gives it weight, and your campaign structure puts you in a position to capitalize on it.

This is why structure in Google Ads carries more consequence than it does on many other paid channels. Campaigns without clear segmentation and defined boundaries prevent the algorithm from learning efficiently. This spreads budget across queries that don’t reflect the same intent and makes you compete against yourself, leading to outcomes that don’t map to your actual business goals.

The other dimension is economics. Different products carry different margins, average order values, and conversion rates. A structure that treats all of them the same can’t divert spend toward products where it actually makes sense. You end up with an account that converts but doesn’t necessarily generate optimal returns.

And here’s a secret: Sometimes I don’t run PMax at all. And if I do, I set it up so it won’t just recycle Meta traffic but will focus on as much net new business as possible (even blocking brand, retargeting, and existing customers can’t get you to 100% net new). But if you have a very heavy Meta presence and PMax looks like it will over-index on recycling traffic, I’d move toward Shopping so we can move the needle.

3 Mistakes That Erode Efficiency For Google Ecommerce

1. Launching Every Campaign Type At Once

The instinct to go broad from day one is understandable. You have products to sell with multiple campaign types available to you and a budget ready to deploy. So you build out brand Search, Shopping, Performance Max, and YouTube, and wait for the data to come in.

The problem is that each of those campaigns needs impressions, clicks, and conversions to learn. When you split a less-than-astronomical budget across five campaign types, none of them gets enough volume to learn efficiently. Visibility is low across the board, data is slow to compound, and Google’s machine learning systems are starved of the information they need to improve your account.

Your account is running, but it isn’t moving. At the end of the quarter, you’ll still have no meaningful insights and won’t be able to optimize with confidence.

A smarter approach could be to start with just a couple of campaigns, like Search plus Shopping. This lets you get wider product visibility without being constrained by budget. Once those campaigns have data behind them and are generating returns, you layer in PMax, YouTube, and other formats one by one.

This way, each new move has a foundation to build on rather than competing for scraps.

2. Putting The Same Products In Multiple Campaigns

When your flagship product lives across multiple campaigns, they compete against each other in the same auction. That means a split budget, divided impressions, and not enough conversion momentum for any campaign to become meaningfully better.

Reporting is just as damaging. Sales come through, but you can’t tell which campaign was responsible. Attribution, which is already murky when two platforms are involved, gets harder. And optimization decisions get made with incomplete data.

Clean product segmentation across your account solves all three problems. Each product has a home, which makes performance readable. And when something isn’t working, you know exactly where to look.

3. Segmenting Performance Max Asset Groups By Audience Signal

Performance Max gives you audience signals as an input – customer lists, past purchasers, site visitors. The temptation is to use those signals as the basis for how you divide your asset groups. One group for past buyers, one for prospecting, one for lapsed customers.

The problem is that audience membership has nothing to do with the economics of what you’re selling. A past buyer and a new visitor can both be in the market for your highest-margin product. Structuring asset groups around who they are rather than what you’re selling means your budget isn’t organized around the products that actually matter most to your business.

A more effective approach is to build asset groups around shared product themes – bestsellers, new releases, bundles, seasonal offers. This way, the creative, the budget, and the optimization signal are all pointed at a coherent set of products with similar business value. Performance Max can still find the right audience. Your job is to give it the right product context to work with.

3 Proven Examples Of Google Ads Account Structure For Ecommerce

Example 1: Single-Product DTC Brand

A brand selling one hero product with a few variants (sizes, colors, or bundles) doesn’t need a complex account structure, just a disciplined one.

Start with two campaigns:

  • Branded search captures anyone searching for you by name (high intent), protects your brand equity, and tends to convert at a lower cost – so remember not to use automated bidding.
  • Either Performance Max or Shopping to drive product discovery.
  • If you choose PMax, divide asset groups by variant type rather than audience: one for the core product, one for bundles, one for any subscription or multi-unit offers. This keeps creative and budget in line with how the product is actually sold rather than who you think is buying it.

Adding both retail campaign types, or YouTube, before the first two layers capture enough conversion data only splinters your budget and stops the algorithm from learning anything meaningful to optimize against.

Example 2: Multi-Product DTC Brand With Bestsellers

Brands with larger catalogs make a common structural mistake: treating all SKUs equally. A single PMax campaign with one asset group covering 40 items gives Google no basis for prioritization and will spend where it finds the path of least resistance, which isn’t always where your margins are.

The better approach is to build asset groups around product tiers.

  • Bestsellers – products with the strongest sales velocity and healthiest margins – get their own asset group with dedicated creative and the largest share of budget.
  • New releases get a separate asset group because they need impression volume to gather data and shouldn’t compete directly with proven performers.
  • Include lower-margin, specialty, or slow-moving SKUs but cap their spend, or exclude them from PMax entirely and handle them through a Shopping campaign where you have more direct control.

This structure makes performance readable by economic impact level. When a bestseller starts to slip, you see it immediately. And when a new release gains traction, you can promote it without disrupting the rest of the account.

Example 3: Seasonal DTC Brands

For brands with strong seasonal demand, like gifting or back to school, the structural challenge is running seasonal campaigns without damaging the learning of evergreen ones. The approach here is to treat seasonal pushes as additions to the account, not replacements.

  • Evergreen PMax stays live and funded at a baseline level throughout the year.
  • When a seasonal moment approaches, a separate PMax campaign is layered on with its own budget, asset groups built around the seasonal offer, and a defined run window.
  • Seasonal spend is then contained so that when it ends, the evergreen campaign’s learning history is unaffected.
  • When the seasonal campaign winds down, asset groups are paused rather than deleted. Conversion data accumulated during each period is preserved and available when the next seasonal cycle begins, which shortens the relearning period significantly compared to building a new campaign from scratch each time.

Make This Read Worthwhile: Product Segmentation Exercise

Meta finds customers by matching your offer to people’s interests. Google finds customers who are actively looking. What both platforms share is that the systems are increasingly in charge of the operational side: Smart Bidding, Advantage+, Performance Max. These tools make decisions about who sees your ads, when, and at what cost. The advertiser’s job has shifted from button pusher to signal architect.

On Google, that starts with how your campaigns and product/asset groups are organized.

Your Next Step To Value

Before you change any settings or adjust any budgets, try this product segmentation exercise.

  • Pull your catalog and group SKUs by shared characteristics: bestsellers, new releases, bundles, seasonal offers, margin tiers. The goal is to understand which products belong together and which need their own dedicated focus.
  • Once you have that, look at whether retargeting is siloed or folded into your broader activity. It should be a standalone campaign, as blending it with prospecting dilutes performance data and makes it harder to read what’s actually driving new customer acquisition.

These two steps alone will give you a clearer foundation than many DTC brands have as they start layering in Google Ads as a channel.

Featured Image: Summit Art Creations/Shutterstock

68 Million AI Crawler Visits Show What Drives AI Search Visibility via @sejournal, @martinibuster

A new analysis of 858,457 sites hosted on the Duda platform shows how AI crawlers are interacting with websites at scale. The data offers a clearer view of how crawling activity is growing and what SEOs and businesses should do to increase traffic from AI search.

AI Crawling Has Already Reached Scale

AI crawling is growing quickly, with more requests tied to real-time answers and most of that activity coming from a single provider. The data reveals a pattern that shows which sites are being crawled and, more importantly, why.

Year-Over-Year Growth In LLM Referrals

LLM referral traffic has increased sharply over the past year, with multiple platforms showing meaningful gains from very different starting points.

AI Referral Traffic Patterns

  • Total LLM referrals: 93,484 to 161,469 (+72.7%)
  • ChatGPT: 81,652 to 136,095 (+66.7%)
  • Claude: 106 to 2,488 (23x growth)
  • Copilot: 22 to 9,560 (from near-zero)
  • Perplexity: 11,533 to 13,157 (+14.1%)

Growth is not happening evenly, but across the board, referral traffic from AI systems is increasing. That makes AI-generated discovery a growing source of traffic, not a marginal one.

Crawlers Are Increasingly Fetching Content To Ground Answers

AI crawlers are no longer used primarily for indexing, with most activity now tied to retrieving content in real time to generate answers for users.

Most crawling is now happening in response to user queries rather than for building an index, which changes how content is accessed and used.

  • User Fetch (real-time answers): 56.9% of all crawler activity, driven almost entirely by ChatGPT
  • Training (model learning): 28.8%, split across GPTBot and other model crawlers
  • Discovery (content indexing): 14.3%, distributed across multiple systems
  • ChatGPT User Fetch volume: ~39.8 million visits

The trends are largely driven by ChatGPT, which is responsible for nearly all real-time retrieval activity. That means the move toward answer-based crawling is not evenly distributed, but concentrated in one platform shaping how content is accessed. This trend may change with Google’s new Google-Agent crawler.
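
For site owners who want to see a similar split in their own logs, here is a rough TypeScript sketch that buckets requests into the same three activity types by user-agent substring. The crawler names come from public crawler documentation (GPTBot, ChatGPT-User, OAI-SearchBot, ClaudeBot, PerplexityBot), but treat the exact list and the category mapping as assumptions to verify against each provider; log parsing itself is out of scope here.

```typescript
// Minimal sketch: classify crawler hits into the activity types above.
// User-agent substrings are examples from public documentation; verify them.
type CrawlCategory = "user_fetch" | "training" | "discovery" | "other";

const UA_CATEGORIES: Array<[string, CrawlCategory]> = [
  ["ChatGPT-User", "user_fetch"], // real-time fetches for ChatGPT answers
  ["GPTBot", "training"],         // OpenAI model-training crawler
  ["OAI-SearchBot", "discovery"], // OpenAI search indexing
  ["ClaudeBot", "training"],
  ["PerplexityBot", "discovery"],
];

function categorize(userAgent: string): CrawlCategory {
  const match = UA_CATEGORIES.find(([needle]) => userAgent.includes(needle));
  return match ? match[1] : "other";
}

// Tally categories across user-agent strings pulled from access logs.
function tally(userAgents: string[]): Record<CrawlCategory, number> {
  const counts: Record<CrawlCategory, number> = {
    user_fetch: 0,
    training: 0,
    discovery: 0,
    other: 0,
  };
  for (const ua of userAgents) counts[categorize(ua)] += 1;
  return counts;
}
```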

Market Concentration In AI Crawling

AI crawler activity is heavily concentrated, with OpenAI responsible for the vast majority of requests, reflecting its position as the primary tool users rely on to find and retrieve information.

  • OpenAI: 55.8 million visits (81.0%)
  • Anthropic (Claude): 11.5 million (16.6%)
  • Perplexity: 1.3 million (1.8%)
  • Google (Gemini): 380,000 (0.6%)

Most AI crawling activity comes from OpenAI, which aligns with ChatGPT’s role as a primary tool for finding and retrieving information. Claude follows at a much smaller share, suggesting a different usage pattern, while the rest of the market accounts for a minimal portion of crawler activity.

Scale And What That Actually Means

AI crawling is already operating across a large portion of the web, reaching hundreds of thousands of sites and generating tens of millions of requests in a single month.

More than half of all sites in the dataset received at least one AI crawler visit, showing that this activity is not limited to a small subset of websites.

  • Total sites analyzed: 858,457
  • Sites with at least one AI crawler visit: 506,910 (59%)
  • Total AI crawler visits (Feb 2026): 68.9 million

AI crawling is not isolated to high-profile or heavily trafficked sites. It is already widespread, with consistent activity across a majority of the web.

The Relationship Between Crawling and Real Traffic

Sites that allow AI systems to crawl them consistently show stronger engagement across multiple metrics.

What the data actually shows is:

  1. Sites that allow AI crawling receive significantly more human traffic
  2. Higher-traffic sites are more likely to be crawled

Sites that allow crawling by AI systems receive significantly more human traffic, averaging 527.7 sessions compared to 164.9 for sites that are not crawled. This does not establish causation, but it shows a clear alignment between sites that attract human visitors and how often AI systems revisit them.

  • Average human traffic (AI-crawled vs not): 527.7 vs 164.9 (3.2x higher)
  • Average form completions: 4.17 vs 1.57 (2.7x higher)
  • Average click-to-call: 8.62 vs 3.46 (2.5x higher)
  • Sites with 10K+ sessions: 90.5% crawl rate

AI systems are not discovering weak or inactive sites and lifting them up. They are returning to sites that already attract human visitors. For marketers, that shifts the focus away from trying to “get crawled” and toward building real audience demand, since visibility in AI systems appears to follow it.

What Correlates With More Crawling

The research compared sites that include specific third-party integrations, structured features, and content depth with those that do not and found which ones mattered most for AI crawler activity and referrals.

Across the dataset, 59% of sites received at least one AI crawler visit in February 2026. Sites that are crawled more often tend to combine three types of signals: external integrations, structured business data, and content depth.

1. External Integrations

These integrations connect the site to external systems that validate and distribute business information.

  • Yext integration: 97.1% crawl rate vs ~58% without (+38.9pp)
  • Reviews integrations: 89.8% crawl rate vs 58.8% without, 376.9 average crawler visits

Sites that are connected to external data and review systems are crawled at higher rates and in higher volumes, indicating that AI systems rely on these integrations as signals that a business is real, verifiable, and worth revisiting.

2. Structured Site Features And Business Data

These are built into the site and help AI systems understand and verify business identity.

  • Google Business Profile sync: 92.8% crawl rate vs 58.9% without, 415.6 average crawler visits
  • Local schema: 72.3% vs 55.2% (+17.1pp), 22.3% adoption
  • Dynamic pages: 69.4% vs 58.2% (+11.2pp)
  • Ecommerce: 54.2% vs 59.2% (-5.0pp)

Sites that clearly define their business identity and structure their information in a machine-readable way are crawled more often, showing that AI systems favor sites they can easily interpret, verify, and extract information from.

3. Content Depth (Volume Of Usable Data)

Sites with more content provide more opportunities for AI systems to retrieve, reference, and reuse information in responses.

  • Sites with 50+ blog posts: 1,373.7 average crawler visits vs 41.6 with no blog (~33x higher)

Sites with more content are crawled far more often, indicating that AI systems may return to sources that offer a larger supply of usable information to draw from when generating answers.

Local Business Schema Completeness = More Crawling

This part of the research focuses specifically on local business schema, comparing how the completeness of schema implementation for communicating business details relates to AI crawler activity. The fields measured include business name, phone number, address, hours, and social profiles.

  • No local schema fields: 55.2% crawl rate
  • 10–11 completed schema fields: 82% crawl rate
  • Sites with more complete local schema show a 26.8 percentage point higher crawl rate (82% vs 55.2%)

Sites that provide more complete local business information in structured form are crawled more often and receive more crawler visits. As more of these fields are filled in, both crawl rate and crawl frequency increase.

The data shows that clearly defined local business data makes a site easier for AI systems to identify, verify, and subsequently revisit, all the prerequisites for receiving traffic from AI search.
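
As a concrete illustration of what complete markup can look like, here is a minimal TypeScript sketch that assembles a schema.org LocalBusiness object covering the fields the study measured (name, phone, address, hours, social profiles) and injects it as JSON-LD. Every business detail shown is a placeholder.

```typescript
// Minimal sketch: build and inject LocalBusiness structured data covering
// the fields measured in the study. All business details are placeholders.
const localBusinessSchema = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Bakery",
  telephone: "+1-555-010-0000",
  address: {
    "@type": "PostalAddress",
    streetAddress: "123 Main St",
    addressLocality: "Springfield",
    addressRegion: "IL",
    postalCode: "62701",
    addressCountry: "US",
  },
  openingHours: ["Mo-Fr 08:00-18:00", "Sa 09:00-14:00"],
  sameAs: [
    "https://www.facebook.com/examplebakery",
    "https://www.instagram.com/examplebakery",
  ],
};

// Serialize into a JSON-LD script tag so crawlers can read it.
const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(localBusinessSchema);
document.head.appendChild(script);
```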

Takeaways

AI crawling is a parallel method for content discovery, and the research shows clear patterns among the sites that crawlers visit most often.

  • AI crawling operates alongside traditional search, changing how content is accessed and reused
  • Sites with structured local signals, deeper content, and more complete schema are crawled more often
  • Multiple reinforcing signals appear together on the same sites, not in isolation
  • The data shows direction, not causation, but the patterns are consistent

The data shows that sites that make it easy for AI crawlers to index and revisit them tend to perform better. Sites that present clear, structured, and verifiable information, while continuing to build real audience demand, are more likely to be revisited by AI systems and to benefit from traffic generated through AI search.

Read the research: Duda study finds AI-optimized websites drive 320% more traffic to local businesses

Featured Image by Shutterstock/Preaapluem

Mixed Reports on AI Ecommerce Traffic

Consumers arriving from AI search and chat may be high-intent and ready to buy, but the early evidence is uneven and easily misread.

AI-referred visitors are engaging more deeply and converting at higher rates, according to the April 2026 Adobe Digital Insights “Quarterly AI Traffic Report” (PDF).

Premium Engagement

AI-referred visitors in March were 42% more likely to purchase, according to Adobe, generating 37% more revenue per visit than visitors from other channels.

Consumers from AI platforms:

  • Spent 48% longer on site,
  • Visited 13% more pages,
  • Bounced 32% less.

In short, Adobe’s report positions AI as a strong customer acquisition channel.

Early Data

Yet other analyses suggest the channel is nascent and driving only modest visits. For example, “ChatGPT Referrals to E-Commerce Websites,” an October 2025 study by German university professors Maximilian Kaiser and Christian Schulze, found that ChatGPT accounted for less than 0.2% of ecommerce traffic.

Compared with more established channels such as email, advertising, and organic search, the available datasets are tiny, especially for high-intent shoppers.

Moreover, performance almost certainly varies by store size, product category, and brand recognition. For small and midsize ecommerce companies, the implication is not to chase volume but to understand how AI is reshaping product discovery and prepare for it.

Mixed Reports

Adobe is not the first to suggest that AI is a premium ecommerce acquisition channel. Google claims that clicks from AI Overviews are more likely to convert than clicks from traditional organic listings.

Likewise, Similarweb’s “State of Ecommerce 2025” report stated that “AI search has become a high-intent growth channel.”

Traffic to ecommerce sites from OpenAI’s ChatGPT converted at roughly 11.4%, according to Similarweb, compared to 5.3% from organic search.

However, conversions vary depending on the report. Schulze and Kaiser’s analysis found ChatGPT-referred traffic converted about twice as well as paid social, but it underperformed most other channels. Organic search, for example, showed about a 13% higher conversion rate than AI referrals, while affiliate (86% more likely to convert) and paid search (45% more) performed significantly better.

These findings are noteworthy, in part, because the paper analyzed 12 months of first-party data — from August 2024 through July 2025 — across 973 ecommerce websites and $20 billion in order revenue. The data included nearly 50,000 transactions attributed to ChatGPT referrals and 164 million from traditional channels.

The professors also found that engagement varied. AI visitors, according to the report, were less likely to bounce than visitors from other channels, which matches the Adobe findings, but the data also implies fewer pages visited and less time on site, perhaps suggesting a different browsing pattern.

Easy to Misread

So which report is correct?

They might all be right. The differences between Adobe’s analysis and the findings of Kaiser and Schulze may accurately reflect each dataset.

Factors that might skew the numbers include:

  • Measurement. Adobe emphasized post-click performance, including engagement, conversion rate, and revenue per visit. Kaiser and Schulze relied on last-click attribution, which can undercount AI’s role in earlier research and consideration.
  • Definition of AI traffic. Adobe groups “generative AI traffic” broadly across multiple tools and interfaces. The academic study isolates ChatGPT referrals.
  • Geography. Adobe’s data is U.S.-focused. The academic dataset spans 49 countries, where adoption, trust, and shopping behavior most certainly differ.
  • Timing. The academic study collected data from August 2024 through July 2025, an early phase of AI shopping. Adobe’s data reflects more recent usage, after rapid improvements in tools and consumer familiarity.
  • Channel maturity. AI traffic represents a minor share of visits. Small samples can exaggerate differences, especially when comparing across merchants, categories, and brands.

Taken together, these differences are a healthy reminder that AI chat, search, and shopping are a moving target.

AI Is Vital

AI as an acquisition channel is early, uneven, and unclear.

Nonetheless, AI already influences how shoppers discover products. It is the most important new discovery channel since the internet itself.

Measure its impact, optimize for AI visibility, and iterate quickly. The ecommerce industry may be in the midst of a once-in-a-generation shift. Merchants who adapt early are far better positioned than those who wait.

Selling To AI: The Complete Guide To Agentic Commerce via @sejournal, @slobodanmanic

For 30 years, checkout has been a page. A form with fields for name, address, credit card number. Whether it was Amazon’s one-click patent or Apple Pay’s fingerprint, the innovation was always about making that form faster to get through.

The form itself never went away. Now it is.

This is the final article in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO. Part 2 explained how to get your content cited in AI responses. Part 3 mapped the protocols forming the infrastructure layer. Part 4 got technical with how AI agents perceive your website. This article covers the commerce layer: how AI agents find products, complete purchases, and handle payments without ever loading a checkout page.

In September 2025, Stripe and OpenAI launched Instant Checkout inside ChatGPT. In January 2026, Google and Shopify unveiled the Universal Commerce Protocol at the National Retail Federation conference. Two open standards. Two competing visions for the same shift: checkout becoming a protocol, not a page.

Throughout this article, we draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

How We Got Here

Every generation of commerce technology has solved the same problem: reducing the friction between “I want something” and “I have it.” Agentic commerce is not a break from this pattern. It’s the pattern’s logical conclusion.

1994: The first online purchase. On Aug. 11, 1994, Phil Brandenberger used his credit card to buy Sting’s Ten Summoner’s Tales CD for $12.48 from a website called NetMarket. The New York Times covered it the next day. NetMarket’s 21-year-old CEO, Daniel Kohn, told the paper: “Even if the N.S.A. was listening in, they couldn’t get his credit card number.” Netscape’s SSL protocol, released that same year, made it possible.

The friction removed: You no longer had to go to a physical store.

Late 1990s: Comparison shopping. Within a few years, websites like BizRate (1996), mySimon (1998), and PriceGrabber (1999) let buyers see prices across multiple merchants instantly. Google entered the space in 2002 with Froogle, later renamed Google Product Search in 2007, then Google Shopping in 2012.

The friction removed: You no longer had to visit each store to compare.

1998: The store adapts to you. Amazon deployed item-to-item collaborative filtering at scale, the algorithm behind “customers who bought this also bought.” Greg Linden, Brent Smith, and Jeremy York published the underlying research in IEEE Internet Computing in 2003. In 2017, the journal named it the best paper in its 20-year history.

The friction removed: You no longer had to know exactly what you wanted.

2015: Commerce moves into conversations. Chris Messina, then Developer Experience Lead at Uber, coined the term “conversational commerce” in a January 2015 Medium post, describing “delivering convenience, personalization, and decision support while people are on the go.” In April 2016, Mark Zuckerberg launched the Facebook Messenger Platform, declaring: “I’ve never met anyone who likes calling a business.” Meanwhile, in China, WeChat had already proved the model. Its Mini Programs, launched January 2017, generated 800 billion yuan (~$115 billion) in transactions by 2019.

The friction removed: You no longer had to open a store’s website.

2014-2023: Voice and social commerce. Amazon Echo launched in November 2014, promising you could buy things without a screen. The promise was mostly unfulfilled. Social commerce had better luck: TikTok Shop, launched in the U.S. in September 2023, reached $33.2 billion in global sales by 2024. Content became the storefront.

The friction removed: Purchase intent was created inside the feed, not searched for.

2024: AI starts shopping for you. Within months, every major platform launched AI shopping features. Amazon introduced Rufus in February, a conversational assistant trained on its product catalog. Google rebuilt Shopping with AI in October, drawing on 50 billion product listings. Perplexity launched “Buy with Pro” in November, turning a search engine into a store.

The friction removed: AI did the research, comparison, and recommendation for you.

2025: The buyer disappears. In January, OpenAI launched Operator, an agent that navigated websites, filled forms, and completed purchases autonomously. In May, Google announced “Buy for Me” at I/O 2025. In September, Instant Checkout went live in ChatGPT.

The friction removed: The last one. The human no longer needs to be there for the transaction to happen.

Each of these shifts was about the same thing: removing one more step between wanting and having. Agentic commerce removes the final step: doing it yourself.

Checkout Is No Longer A Page

Here’s the shift in one sentence: In traditional commerce, the seller builds the checkout experience. In agentic commerce, the agent does.

When you buy something on a website today, you interact with the merchant’s checkout page. They designed the form, they chose the layout, they control the flow. You fill in your details, click “Buy,” and the payment processes.

In agentic commerce, the AI agent presents the checkout information within its own interface. ChatGPT shows you the product, the price, the shipping options, within the chat. You confirm. The agent handles the rest. The merchant never renders a page. They receive an API call.

Stripe’s agentic commerce guide puts it directly: “The parts of commerce that used to be user experience problems are becoming protocol problems.” Instead of optimizing button colors and form layouts, merchants are defining API endpoints and product feeds. Discovery, comparison, and checkout are all handled by the agent. The merchant’s job shifts to supplying structured product data and processing the order.

Emily Glassberg Sands, Stripe’s Head of Information and Data Science, framed the broader implications: “Agents don’t just change who’s at the checkout. They change who’s doing the searching, the deciding, the trusting. All of it.”

I discussed this with Jes Scholz, who ran digital across 140+ ecommerce brands at Ringier, on the podcast. Her experience was clear: Agents browse in text mode, and if they can’t parse your site cleanly, they leave. No second chances.

This isn’t theoretical. As of February 2026, several agentic commerce implementations are live. ChatGPT Instant Checkout is available to U.S. users on Free, Plus, and Pro plans. Etsy, Instacart, and Walmart are among the merchants processing orders through it. Shopify’s Agentic Storefronts are active by default for eligible merchants, syndicating products to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity simultaneously. Perplexity launched Instant Buy with PayPal in November 2025, allowing purchases directly within the chat interface with merchants like Wayfair, Abercrombie & Fitch, and thousands more via BigCommerce and Wix.

Every major AI company is moving in this direction. Anthropic, the company behind Claude, has been equally explicit about its commerce plans. In February 2026, Anthropic confirmed it is building features for “agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end,” while committing to keeping the experience ad-free with no sponsored links or third-party product placements. Claude already connects to Stripe, PayPal, and Square via MCP integrations. And in June 2025, Anthropic published Project Vend, a research experiment where Claude autonomously operated a physical retail store for a month, managing inventory, pricing, supplier relations, and customer interactions. The results were instructive: The agent performed well at supplier discovery and customer service, but sold items at a loss and hallucinated payment details. A useful preview of both the potential and the current limitations.

Two open protocols are making this possible. Both launched within four months of each other.

The Agentic Commerce Protocol

The Agentic Commerce Protocol (ACP) is an open standard co-developed by OpenAI and Stripe, announced Sept. 29, 2025. Licensed under Apache 2.0, it defines how AI agents complete purchases on behalf of users.

ACP uses a four-party model: the buyer (discovers and approves), the AI agent (presents products and handles checkout UI), the merchant (processes the order and payment), and the payment service provider (handles payment credentials securely). The merchant remains the merchant of record. They process the payment, handle fulfillment, manage returns. The agent is an intermediary, not a marketplace.

The protocol defines four API endpoints:

  • Create Checkout: The agent sends a product SKU; the merchant generates a cart with pricing, shipping, and payment options.
  • Update Checkout: Modifies quantities, shipping method, or customer details mid-flow.
  • Complete Checkout: The agent sends a payment token; the merchant processes the payment and returns an order confirmation.
  • Cancel Checkout: Signals cancellation; the merchant releases reserved inventory.

The responsibility shift is worth spelling out:

  • Checkout UI: Seller in traditional checkout; Agent under ACP.
  • Payment credential collection: Seller in traditional checkout; Agent under ACP.
  • Cart and data model: Seller in both.
  • Payment processing: Seller in both.

The agent handles what the buyer sees. The seller handles what happens after they click “Buy.” ACP can be implemented as either a REST API or an MCP server, connecting naturally to the protocol ecosystem covered in Part 3.
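
As a rough sketch of what that flow looks like from the agent side, the TypeScript below walks through the create and complete steps as REST calls. The base URL, endpoint paths, and field names are illustrative placeholders rather than the official ACP schema, which lives in the published spec.

```typescript
// Minimal sketch of an ACP-style checkout flow from the agent's side.
// Endpoint paths and field names are illustrative, not the official schema.
const MERCHANT_API = "https://merchant.example.com/acp"; // hypothetical base URL

interface CheckoutSession {
  id: string;
  status: string;
  total: { amount: number; currency: string };
}

async function createCheckout(sku: string, quantity: number): Promise<CheckoutSession> {
  const res = await fetch(`${MERCHANT_API}/checkout_sessions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ items: [{ sku, quantity }] }),
  });
  // Merchant returns a cart with pricing, shipping, and payment options.
  return res.json();
}

async function completeCheckout(sessionId: string, paymentToken: string): Promise<CheckoutSession> {
  const res = await fetch(`${MERCHANT_API}/checkout_sessions/${sessionId}/complete`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ payment_token: paymentToken }), // e.g., a Shared Payment Token
  });
  // Merchant processes the payment and returns the order confirmation.
  return res.json();
}
```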

Stripe’s Agentic Commerce Suite, launched Dec. 11, 2025, makes ACP adoption practical. Ahmed Gharib, Stripe’s Product Lead for Agentic Commerce, described it as “a low-code solution enabling businesses to sell across multiple AI agents via a single integration.” Without it, connecting to each AI agent individually would take up to six months of bespoke engineering per platform.

The Suite has three components: product discovery (sync your catalog and Stripe distributes it to AI agents), checkout (powered by Stripe’s Checkout Sessions API, handling taxes and shipping), and payments (using Shared Payment Tokens and Stripe Radar for fraud detection). Merchants connect their existing product catalog or upload directly to Stripe, then select which AI agents to sell through from the Stripe Dashboard.

The ecosystem is growing quickly. Beyond OpenAI, Stripe lists Microsoft Copilot, Anthropic, Perplexity, Vercel, and Replit as AI platform partners. On the ecommerce side, Squarespace, Wix, WooCommerce, BigCommerce, and commercetools have integrated. Salesforce announced ACP support in October 2025. Support for Shopify’s 1 million+ U.S. merchants is coming soon.

The Universal Commerce Protocol

Four months after ACP launched, a different coalition unveiled a second standard.

The Universal Commerce Protocol (UCP) was co-developed by Shopify and Google, announced Jan. 11, 2026 at the National Retail Federation conference in New York. Google CEO Sundar Pichai presented it. Additional co-developers include Etsy, Wayfair, Target, and Walmart. Over 20 companies endorsed it at launch, including Mastercard, Visa, Best Buy, Home Depot, Macy’s, American Express, and Stripe. I broke down UCP and its strategic implications the week it launched on the podcast.

Where ACP is tightly focused on the checkout flow, UCP is designed as a full commerce standard covering discovery through post-purchase. Its architecture is modeled after TCP/IP, with three layers:

| Layer | Purpose |
|---|---|
| Shopping Service | Core primitives: checkout sessions, line items, totals, messages, status |
| Capabilities | Major functional areas (Checkout, Orders, Catalog), each independently versioned |
| Extensions | Domain-specific schemas, added via composition without a central registry |

UCP is protocol-agnostic. It supports REST, MCP, A2A, and AP2 (Agent Payments Protocol, Google’s standard for agent-initiated payments). ACP currently supports REST and MCP.

Discovery works through a published profile at /.well-known/ucp, similar to how A2A agents publish their capabilities at /.well-known/agent-card.json (covered in Part 3). Both agents and merchants declare their capabilities, and on each request, the system computes the intersection of what they can do together. Ashish Gupta, VP/GM of Merchant Shopping at Google, described the logic: “The shift to agentic commerce will require a shared language across the ecosystem.”
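As a rough illustration of that discovery step, the sketch below fetches a merchant's well-known UCP profile and computes the capability intersection. The profile shape used here (a simple `capabilities` array) is an assumption for illustration, not the actual UCP schema.

```typescript
// Illustrative capability discovery against a merchant's /.well-known/ucp profile.
// The profile shape (a "capabilities" array) is an assumption, not the real UCP schema.
async function negotiateCapabilities(merchantOrigin: string): Promise<string[]> {
  const res = await fetch(`${merchantOrigin}/.well-known/ucp`);
  if (!res.ok) throw new Error(`No UCP profile found at ${merchantOrigin}`);
  const profile: { capabilities?: string[] } = await res.json();

  // The agent intersects what it can do with what the merchant declares it supports.
  const agentCapabilities = ["checkout", "orders"];
  return (profile.capabilities ?? []).filter((c) => agentCapabilities.includes(c));
}

// Example usage (hypothetical merchant):
// negotiateCapabilities("https://shop.example.com").then(console.log);
```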

The two protocols reflect different strategic positions. ACP, built by the company running the AI agent (OpenAI) and the company processing the payment (Stripe), is optimized for getting transactions through ChatGPT quickly. UCP, built by the company hosting the merchants (Shopify) and the company running search (Google), is designed for a multi-agent future where many AI platforms compete for the same shoppers.

| Dimension | ACP (Stripe + OpenAI) | UCP (Shopify + Google) |
|---|---|---|
| Launched | Sept. 29, 2025 | Jan. 11, 2026 |
| Focus | Checkout flow | Full commerce journey |
| Transport | REST, MCP | REST, MCP, A2A, AP2 |
| Payment | Shared Payment Tokens (Stripe) | AP2 with cryptographic Mandates |
| Discovery | Structured product feeds | /.well-known/ucp endpoint |
| Integration effort | Days (existing Stripe merchants) | Weeks to months |
| Coalition | OpenAI, Stripe, Salesforce | Google, Shopify, Mastercard, Visa |

The good news for merchants: These aren’t mutually exclusive. Shopify merchants can serve both simultaneously. The same products appear in ChatGPT via ACP and in Google AI Mode via UCP. Shopify’s Agentic Storefronts handle the multi-protocol complexity, syndicating catalog data across ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from a single admin panel.

Vanessa Lee, Shopify’s VP of Product, framed the company’s position: “Agentic commerce has so much potential to redefine shopping and we want to make sure it can scale.”

The Trust Problem: Payments Without People

Both protocols face the same foundational challenge: How do you process a payment when the person with the credit card isn’t the one at the checkout?

Traditional commerce treats credential possession as a trust signal. If you have the card number, the expiry date, and the CVV, you’re probably the cardholder. Agentic commerce breaks this assumption. The agent has been authorized to act on the buyer’s behalf, but it’s not the buyer. As Stripe’s Kevin Miller wrote in his October 2025 blog post: “Trust can’t be inferred. It has to be explicitly granted, scoped, and enforced in code.”

Javelin Strategy & Research, cited by Visa, describes this as the shift from “card-not-present” to “person-not-present” transactions. It’s a useful framing. Card-not-present fraud was the defining challenge of ecommerce. Person-not-present fraud is the defining challenge of agentic commerce.

Shared Payment Tokens

Stripe’s solution is the Shared Payment Token (SPT), a new payment primitive designed specifically for agent transactions. Here’s how it works:

  1. The buyer saves a payment method with the AI platform (e.g., ChatGPT).
  2. When approving a purchase, the AI platform issues an SPT scoped to the specific merchant, capped at the checkout amount, with a time limit.
  3. The AI platform sends the SPT to the merchant via ACP.
  4. The merchant creates a Stripe PaymentIntent using the token.
  5. Stripe processes the payment, applying fraud detection in real time.

The buyer’s actual card details are never shared with the merchant or the agent. Each token is programmable (scoped by merchant, time, and amount), reusable across platforms, and revocable at any time. For existing Stripe merchants, enabling SPTs requires “as little as one line of code.”
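Here is a hedged sketch of step 4 from the merchant's side, using Stripe's Node SDK: the merchant turns the scoped token it received from the agent into a charge. How a Shared Payment Token is actually passed is defined in Stripe's agentic commerce documentation; the `payment_method` field below is a stand-in assumption.

```typescript
// Conceptual merchant-side charge for step 4 of the SPT flow. How the Shared Payment
// Token is actually passed is defined in Stripe's agentic commerce docs; payment_method
// below is a stand-in assumption, and the amount must stay within the token's scoped cap.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);

async function chargeSharedPaymentToken(sharedPaymentToken: string, amountCents: number) {
  return stripe.paymentIntents.create({
    amount: amountCents,
    currency: "usd",
    payment_method: sharedPaymentToken, // placeholder for the SPT
    confirm: true,
  });
}
```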

The Payment Networks Respond

The card networks have launched their own standards. Visa introduced the Trusted Agent Protocol in October 2025, an open framework built on HTTP Message Signatures that helps merchants distinguish legitimate AI agents from malicious bots. Developed in collaboration with Cloudflare, it incorporates feedback from Adyen, Checkout.com, Microsoft, Shopify, Stripe, and Worldpay, among others.

Mastercard launched Agent Pay in April 2025, introducing “Agentic Tokens” that build on its existing tokenization infrastructure. Each agent action uses permissions and limits defined by the consumer. Mastercard CEO Michael Miebach described agent-led payments as a “significant paradigm shift” for the industry. U.S. issuers were enabled in November 2025, with global rollout in early 2026.

PayPal joined the ACP ecosystem on October 28, 2025, enabling PayPal wallets for ChatGPT checkout and building an ACP server that connects its global merchant catalog without requiring individual merchant integrations.

Google launched its own payment standard in parallel. The Agent Payments Protocol (AP2), announced September 2025 with 60+ industry partners, uses Verifiable Digital Credentials and a cryptographic Mandate system to create tamper-evident proof of user consent at every step of the transaction. AP2 is payment-agnostic, supporting credit and debit cards, real-time bank transfers, and even stablecoins via a Coinbase x402 extension. It’s integrated directly into UCP.
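The mandate idea is easier to see in code. The sketch below is purely conceptual: a small record of what the user approved, signed so any party can detect tampering. Field names and the signing scheme are assumptions for illustration, not the AP2 wire format or its Verifiable Credential structure.

```typescript
// Conceptual "mandate": a tamper-evident record of what the user consented to.
// Field names and signing scheme are illustrative assumptions, not the AP2 format.
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

const { privateKey, publicKey } = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

const mandate = JSON.stringify({
  buyer: "user_123",
  merchant: "shop.example.com",
  maxAmountCents: 5000,
  expiresAt: new Date(Date.now() + 15 * 60 * 1000).toISOString(),
});

// The user's wallet signs the mandate; merchants and networks can verify it wasn't altered.
const signature = createSign("SHA256").update(mandate).sign(privateKey, "base64");
const isValid = createVerify("SHA256").update(mandate).verify(publicKey, signature, "base64");
console.log({ signature, isValid });
```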

Fraud Without Fingerprints

Traditional fraud detection relies on human behavioral signals: mouse movements, typing patterns, browsing behavior, session duration. AI agents have none of these. A legitimate agent transaction can look indistinguishable from a sophisticated bot attack.

Stripe addressed this by building what they describe as “the world’s first AI foundation model for payments,” a transformer-based model trained on tens of billions of transactions. The model treats each charge as a token and behavior sequences as context, ingesting signals including IPs, payment methods, geography, device characteristics, and merchant traits. When SPTs are used, Stripe Radar relays risk signals including dispute likelihood, card testing detection, and stolen card indicators to help “differentiate between high-intent agents and low-trust automated bots.”

The attack surface is also novel. Researchers demonstrated in a June 2025 study that ecommerce agents are susceptible to visual prompt injection: malicious content embedded in product listings can hijack agent behavior during shopping tasks. All agents tested were vulnerable. A separate study accepted to IEEE S&P 2026 found that 13% of randomly selected ecommerce websites had already exposed their chatbot plugins to indirect prompt injection via third-party content like product reviews. And a January 2025 paper on authenticated delegation argues that for agentic commerce to function at scale, the industry needs standardized mechanisms to “explicitly delegate authority to agents, transparently identify those agents as AI, and enforce human-centered choices around security and permissions.” SPTs, the Trusted Agent Protocol, and Agent Pay are all early answers to that challenge.

The concern is real on the consumer side, too: 88% of consumers surveyed by Javelin worry that AI will be used for identity fraud, according to Visa’s analysis. Building trust infrastructure that works for agent transactions is the prerequisite for agentic commerce scaling beyond early adopters.

→ Read More: Trust In AI Shopping Is Limited As Shoppers Verify On Websites

Who’s Already Selling to AI

Despite the infrastructure still being built, adoption is moving fast.

AI platforms with commerce capabilities:

ChatGPT (Instant Checkout), Perplexity (Instant Buy with PayPal), Google AI Mode, and Microsoft Copilot are live shopping surfaces today, with Anthropic’s Claude commerce features in development, as covered above.

Merchants and brands on board:

The early adopter list reads like a mall directory. URBN (parent of Anthropologie, Free People, and Urban Outfitters), Etsy, Coach, Kate Spade, Glossier, Vuori, Spanx, SKIMS, Ashley Furniture, Revolve, and Halara are among those onboarding to Stripe’s Agentic Commerce Suite. Walmart and Instacart are live on ChatGPT. Gymshark, Everlane, and Monos are live on Google AI Mode via UCP.

Ecommerce platforms enabling it:

Shopify’s 1 million+ U.S. merchants are eligible for ChatGPT integration. BigCommerce, Wix, Squarespace, WooCommerce, and commercetools have integrated with Stripe’s Suite. Salesforce Commerce Cloud announced ACP support in October 2025, with new Agentforce agents for merchant, buyer, and personal shopper workflows.

The Market

The market projections vary widely, which tells you how early we are. McKinsey projects $1 trillion in U.S. retail revenue orchestrated by agents by 2030, scaling to $3-5 trillion globally. Gartner predicts 90% of B2B purchases will be handled by AI agents within three years, intermediating $15 trillion in spending by 2028. Forrester predicts that by 2026, one-third of retail marketplace projects will be abandoned as answer engines siphon their traffic.

The consumer side is more cautious. A Contentsquare survey of 1,300 U.S. consumers found 30% willing to let an AI agent complete a purchase on their behalf. A YouGov survey of 1,287 U.S. adults found 65% trust AI to compare prices, but only 14% trust it to actually place an order. Among Gen Z, that number rises to 20%. The gap between “I’ll let AI help me shop” and “I’ll let AI buy for me” is where we are right now.

But the traffic is already there. AI-driven traffic to U.S. retail websites grew 4,700% year-over-year by mid-2025, according to Adobe Analytics. Shopify reported that orders attributed to AI searches grew 11x since January 2025. OpenAI estimates approximately 2% of all ChatGPT queries are shopping-related, roughly 50 million shopping queries daily across a user base of 700 million weekly users.

Academic research is starting to reveal what happens when agents do the buying. A Columbia Business School and Yale study (August 2025) introduced ACES, the first agentic ecommerce simulator, and tested six frontier models, including Claude and GPT-4. They found that AI shopping agents exhibit “choice homogeneity,” concentrating demand on a small number of products and showing strong position biases in how listings are ranked. The researchers warn of winner-take-all dynamics and the emergence of “AI-SEO,” where sellers optimize listings specifically for agent behavior rather than human preferences. A February 2026 study on personalized product curation found that current agentic systems remain “largely insufficient” for tailored product recommendations in open-web settings. The agents are getting better at buying. They’re not yet great at buying the right thing for a specific person.

The infrastructure is being built regardless of whether consumers are fully ready. When they are, the businesses that are prepared will be the ones the agents can find.

How To Get Started

The good news: For most businesses, the entry point is simpler than you’d expect.

If you’re on Shopify, you may already be selling to AI. Agentic Storefronts are active by default for eligible U.S. merchants. Your products are syndicated to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from your existing Shopify admin. Check your dashboard for the agentic channel settings and ensure your product data (descriptions, images, categories) is clean and complete.

If you’re on Stripe, enabling Shared Payment Tokens for ACP requires as little as one line of code. The Agentic Commerce Suite handles catalog syndication, checkout, and fraud detection. Connect your product catalog, select which AI agents to sell through, and you’re live.

If you’re on BigCommerce, Wix, Squarespace, or WooCommerce, integrations with Stripe’s Suite are available. BigCommerce described the shift from “months of bespoke engineering work” per AI platform to “a single, configurable integration.”

Regardless of platform, the protocol integrations get you connected. But agents still need to find and understand your products. This is where the work from Part 2 (getting cited) and Part 4 (being agent-readable) converges with commerce.

Audit your product data. Agents parse your catalog programmatically. Every product needs:

  • A descriptive, specific title (“Men’s Organic Cotton Crew Neck T-Shirt, Navy,” not “Blue Shirt”).
  • A complete description including materials, dimensions, care instructions, and use cases.
  • Accurate, real-time pricing and stock availability.
  • High-quality images with descriptive alt text.
  • Consistent categorization across your catalog.

Add structured markup. At minimum, every product page should include Product schema with name, description, image, sku, and brand, plus nested Offer schema with price, priceCurrency, availability, and seller. If you have reviews, add AggregateRating. This is the machine-readable layer that agents parse when direct protocol integrations aren’t available. I talked about this with Duane Forrester, who co-launched Schema.org while at Bing, on the podcast. His argument: consistent structured data builds what he calls “machine comfort bias,” where AI systems develop a preference for sources that have proven reliable over time.
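Here's a minimal sketch of that markup, built as a typed object and serialized into a JSON-LD script tag for the product page template. The product values are invented for illustration; map them from your real catalog.

```typescript
// Minimal Product + Offer structured data, serialized as JSON-LD for a product page.
// Product values here are invented for illustration; pull them from your catalog.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Men's Organic Cotton Crew Neck T-Shirt, Navy",
  description: "Midweight organic cotton tee. Machine washable. Sizes S-XXL.",
  image: "https://example.com/images/navy-crew-tee.jpg",
  sku: "TEE-NVY-001",
  brand: { "@type": "Brand", name: "Example Apparel" },
  aggregateRating: { "@type": "AggregateRating", ratingValue: "4.7", reviewCount: "212" },
  offers: {
    "@type": "Offer",
    price: "34.00",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
    seller: { "@type": "Organization", name: "Example Apparel" },
  },
};

// Embed in the page template:
const jsonLdTag = `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
```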

Test your agent visibility. Open ChatGPT, Perplexity, and Google AI Mode, and ask them to recommend products in your category. If yours don’t appear, agents can’t sell them. View your product pages in reader mode or a text-based browser to see what agents see when they visit your site directly (covered in Part 4).
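One way to approximate that check programmatically: fetch the raw HTML without executing JavaScript and confirm the details you care about are present server-side. The function below is a rough sketch; the URL and strings are placeholders.

```typescript
// Rough agent-readability check: fetch raw HTML (no JavaScript rendering) and confirm
// that key product details and structured data are present server-side.
async function checkAgentReadable(productUrl: string, mustContain: string[]) {
  const res = await fetch(productUrl, { headers: { "User-Agent": "readability-check/1.0" } });
  const html = await res.text();
  const missing = mustContain.filter((needle) => !html.includes(needle));
  return { ok: missing.length === 0, missing };
}

// Placeholder URL and strings; swap in your own product page, price, name, and markup.
// checkAgentReadable("https://example.com/p/navy-crew-tee", ["$34.00", "Organic Cotton", "application/ld+json"]).then(console.log);
```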

Track agent-driven traffic. ChatGPT appends utm_source=chatgpt.com to referral links. Perplexity and other AI platforms leave similar referral signatures. Set up segments in your analytics to isolate AI-referred visits and monitor conversion rates separately from human traffic. The numbers are small now, but the 4,700% year-over-year growth in AI traffic to retail means they won’t stay small.
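A small sketch of that segmentation logic, usable in a tag manager or server-side analytics pipeline, follows. The chatgpt.com parameter is documented above; the other hostnames are assumptions to verify against your own referral logs.

```typescript
// Classify a pageview as AI-referred based on UTM parameters and referrer hostname.
// utm_source=chatgpt.com is documented above; the other hostnames are assumptions
// to verify against your own referral logs.
function classifyAiReferral(pageUrl: string, referrer: string): string | null {
  const params = new URL(pageUrl).searchParams;
  if (params.get("utm_source") === "chatgpt.com") return "chatgpt";

  const aiHosts = ["perplexity.ai", "copilot.microsoft.com", "gemini.google.com"];
  let host = "";
  try {
    host = new URL(referrer).hostname;
  } catch {
    return null; // empty or malformed referrer
  }
  const match = aiHosts.find((h) => host === h || host.endsWith(`.${h}`));
  return match ?? null;
}
```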

Walmart CEO Doug McMillon put it directly: “For many years now, ecommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change.”

Whether it changes next quarter or next year for your business depends on whether your product data is ready when the agents come looking.

Key Takeaways

  • Checkout is becoming a protocol, not a page. In agentic commerce, the AI agent handles the interface; the merchant processes the order. Two open standards, ACP (Stripe + OpenAI) and UCP (Shopify + Google), define how this works.
  • Both protocols are open and growing fast. ACP launched in September 2025 and powers Instant Checkout in ChatGPT. UCP launched in January 2026 with endorsements from Mastercard, Visa, Walmart, and Target. They’re complementary, not mutually exclusive. Shopify merchants can serve both simultaneously.
  • Shared Payment Tokens solve the “person-not-present” problem. When the buyer isn’t at the checkout, traditional trust signals break down. SPTs are programmable, scoped, time-limited, and revocable, letting agents initiate payments without ever seeing the buyer’s card details.
  • Payment networks are building their own standards. Visa’s Trusted Agent Protocol and Mastercard’s Agent Pay provide authentication and fraud frameworks specific to agent transactions. PayPal joined the ACP ecosystem. The payments infrastructure for agentic commerce is taking shape across the industry.
  • Major brands are already live. Etsy, Walmart, Instacart, Glossier, SKIMS, Coach, and dozens more are selling through AI agents today. Ecommerce platforms, including Shopify, BigCommerce, Wix, Squarespace, and WooCommerce, have integrations available.
  • Consumer trust is lagging behind infrastructure. Only 14% of consumers currently trust AI to place orders on their behalf. But AI-driven traffic to retail grew 4,700% in a year. The infrastructure is being built for the adoption curve that follows.

This is the final article in a five-part series on the agentic web. Part 1 framed the shift from SEO to AAIO. Part 2 covered how to get cited by AI. Part 3 mapped the protocols. Part 4 explained how agents perceive your website. This article covered where it all leads: transactions.

The thread connecting all five parts is straightforward. Structured data helps AI find you. Clean content helps AI cite you. Accessible HTML helps AI navigate you. Structured commerce protocols help AI buy from you. It’s the same principle at every layer: Make your business machine-readable, and the machines will do business with you.

Kevin Miller, Stripe’s Head of Payments, captured the moment: “Stripe spent the last 15 years optimizing commerce for human shoppers. Now, we’re starting to do the same with agents.”

The agents are already shopping. The question is whether they can find your store.


This post was originally published on No Hacks.


Featured Image: showcake/Shutterstock

AI Adoption Outpaced The PC & Internet: Dive Into The Stanford Report Data via @sejournal, @MattGSouthern

Stanford’s Human-Centered Artificial Intelligence Institute published its 2026 AI Index Report. The report runs over 400 pages across nine chapters covering technical performance, investment, workforce effects, and public sentiment.

The number getting the most attention is that Generative AI reached 53% adoption among the global population within three years of ChatGPT’s launch. That’s faster than either the personal computer or the internet reached comparable levels.

For anyone working in search, the report contains data that connects directly to the changes you’ve been navigating all year.

What The Report Found

This is the ninth annual AI Index, and it covers a lot of ground. A few findings matter most for the search industry.

In terms of capability, frontier models now exceed human performance on PhD-level science questions and in competitive mathematics. AI agents handling real-world tasks improved from a 20% success rate in 2025 to 77% today. Coding benchmarks that models struggled with a year ago are now nearly solved.

On investment, global corporate AI investment hit $581 billion in 2025, up 130% from the prior year. US private AI investment reached $285 billion. More than 90% of frontier models now come from private companies, not academic labs.

Regarding workforce effects, employment among software developers aged 22 to 25 has dropped by nearly 20% since 2024. A similar pattern appeared in customer service and other roles with higher AI exposure.

Transparency is declining. The Foundation Model Transparency Index fell from 58 to 40. The most capable models now disclose the least about their training data, parameters, and methods. Of the 95 most notable models launched last year, 80 were released without their training code.

The Adoption Number Everyone Is Citing

Understanding what the 53% figure includes, and what it doesn’t, matters for how you interpret it.

The comparison to PCs and the internet is based on research by the St. Louis Fed, Vanderbilt, and Harvard Kennedy School. The team compared adoption rates by years since each technology’s first mass-market product. The IBM PC launched in 1981. Commercial internet traffic opened in 1995. ChatGPT launched in November 2022.

At comparable points after launch, generative AI adoption runs well ahead of both earlier technologies.

But the comparison isn’t apples-to-apples, and the researchers said so themselves. Harvard’s David Deming pointed out that AI is built on top of PCs and the internet. People already had the hardware and the connectivity. Nobody needed to buy new equipment or wait for connectivity to reach their area. AI adoption rode on decades of prior technology investment.

Adoption numbers also vary depending on who’s counting and how. The Stanford report puts US adoption at 28%, ranking the country 24th globally. The St. Louis Fed’s own tracker puts US adoption at 54% as of August 2025. Same country, nearly double the rate, measured differently. The Fed team even revised its earlier estimate upward from 39% to 44% after changing the order of its survey questions.

“Adoption” also doesn’t distinguish intensity. Someone who signed up for a free ChatGPT account and tried it once counts the same as someone who uses it eight hours a day. The Stanford report notes that most users access free or near-free tiers. That’s a different picture than the one the headline number implies.

None of this means the adoption data is wrong. Generative AI is spreading faster than comparable technologies did at the same stage. But the speed of adoption alone doesn’t tell you how deeply it’s embedded in workflows or how much it’s changing search behavior specifically.

The Jagged Frontier

The report’s most useful concept for search professionals might be its “jagged frontier” of AI capability.

The same models that win gold at the International Mathematical Olympiad read analog clocks correctly only 50% of the time. IEEE Spectrum reported that Claude Opus 4.6 scores at the top of Humanity’s Last Exam while reading clocks at just 8.9% accuracy. Models that ace PhD-level science questions still struggle with video understanding and multi-step planning.

Ray Perrault, co-director of the AI Index steering committee, told IEEE Spectrum that benchmarks don’t map cleanly to real-world results. Knowing a model scores 75% on a legal reasoning benchmark “tells us little about how well it would fit in a law practice’s activities,” he said.

Search professionals have seen similar unevenness in AI search products. Ahrefs research showed that AI Mode and AI Overviews cite different URLs for the same queries, with only 13% overlap. Google’s Robby Stein acknowledged that the system pulls AI Overviews back when people don’t engage with them. Those signals suggest AI search performance is uneven across contexts, even if Google hasn’t fully explained where those differences are most pronounced.

Stanford’s data suggest that strong benchmark performance doesn’t guarantee reliable results across all tasks or query types. Whether that unevenness improves with future models is an open question the report doesn’t answer.

What’s Happening To Transparency

What the report says about transparency connects directly to search.

The Foundation Model Transparency Index dropped from 58 to 40 in a single year. The most capable models score lowest. Google, Anthropic, and OpenAI have all stopped disclosing dataset sizes and training duration for their latest models. 80 of the 95 most notable models launched in 2025 shipped without training code.

TechCrunch noted a disconnect between expert optimism about AI and public anxiety about it. The US reported the lowest trust in its government’s ability to regulate AI among the countries surveyed, at 31%.

For context on the index itself, a drop from 58 to 40 could indicate that companies are becoming more secretive. It could also reflect that the index penalizes closed-source models by design, and the most capable models happen to be closed-source. Both explanations can be true at the same time.

What matters for practitioners is the implication. The models powering AI Overviews, AI Mode, and ChatGPT Search are getting more capable and less explainable simultaneously. You’re optimizing for systems where the companies building them are sharing less about how they work, not more.

The report’s acknowledgments disclose that Stanford HAI receives financial support from Google, OpenAI, and others, and that the report was produced with assistance from ChatGPT and Claude.

The Entry-Level Question

Employment among software developers aged 22 to 25 dropped nearly 20% since 2024, according to the report. Older developers’ headcounts grew over the same period. A similar pattern appeared in customer service roles.

At first glance, that looks like AI replacing entry-level work. But the report included a caveat that complicates that conclusion. Unemployment is rising across many occupations, and workers least exposed to AI have seen it rise more than those most exposed.

That doesn’t rule out AI as a factor. It means the 20% decline could reflect AI displacement, broader hiring slowdowns, companies restructuring their entry-level hiring, or all three at once. The report presents correlation, not causation.

For search and content teams, the signal is directional even if the cause is mixed. The Stanford data is consistent with what the Tufts AI Jobs Risk Index showed earlier this year. Roles that involve assembling information from existing sources face more pressure than roles that require judgment, experience, and original analysis.

Why This Matters For Search Professionals

Even with its caveats, the adoption speed explains the pace of what you’ve been seeing.

Google expanded AI Overviews to 1.5 billion monthly users by Q1 2025. AI Mode reached 75 million daily active users by Q3 2025, then went global. Google expanded Search Live to 200+ countries. Personal Intelligence rolled out to free US users this year.

The adoption curve helps explain why Google has been expanding AI search features at this pace. It doesn’t tell us how much of that usage is happening inside search rather than standalone AI tools.

The “jagged frontier” means you can’t make blanket assumptions about AI search quality across query categories. A query type that returns accurate AI Overviews today might produce hallucinations for slight variations of the same query. Monitoring needs to happen at the query level, not the category level. Search Console doesn’t currently separate AI Overview or AI Mode performance from traditional search metrics, which makes this harder.

The decline in transparency affects how well you can understand why your content appears or doesn’t appear in AI-generated answers. When Google shares less about the models powering its search features, the feedback loop between what you publish and what gets surfaced becomes harder to read.

Speaking at SEJ Live, Shelley Walsh referenced Grant Simmons’ concept of “golden knowledge”: content built on original data, firsthand experience, and depth that AI summaries can’t replicate from training data. The Stanford report’s data on adoption speed and model limitations support that position. The models are fast and widely used, but they’re uneven. Content that fills the gaps where AI is unreliable has a structural advantage.

What The Report Doesn’t Tell Us

The Stanford report doesn’t break out search-specific adoption data. We don’t know what percentage of that 53% uses AI via search specifically, rather than via ChatGPT, Gemini, or other standalone tools.

Google’s AI search usage numbers are limited. The company reported that AI Overviews reached 1.5 billion monthly users in Q1 2025 and that AI Mode reached 75 million daily active users in Q3 2025. Updated figures may come with Alphabet’s next earnings call.

The report also can’t tell us whether the jagged frontier problem is improving or worsening in search applications. The benchmark data shows models improving overall, but the clock-reading example shows that improvement isn’t uniform. Whether AI Overviews and AI Mode are getting more reliable for the specific queries that matter to your business requires your own monitoring, not aggregate benchmark data.

Looking Ahead

The Stanford report lands one week after Google’s March core update completed. Alphabet’s next earnings call will likely include updated AI search usage numbers.

The adoption data doesn’t predict what search will look like by year-end. But it does confirm that AI-first behavior isn’t speculative anymore. The question is whether Google’s AI search products will get reliable enough to match the pace of adoption.


Featured Image: n_a vector/Shutterstock