What’s The Biggest Technical SEO Blind Spot From Over-Relying On Tools? – Ask An SEO via @sejournal, @HelenPollitt1

We are fortunate to have a wide range of SEO tools available, designed to help us understand how our websites might be crawled, indexed, used, and ranked. They often have a similar interface of bold charts, color-coded alerts, and a score that sums up the “health” of your website, which is perfect for those of us high-achievers who love to be graded.

But these tools can be a curse as well as a blessing, so today’s question is a really important one:

“What’s the biggest technical SEO blind spot caused by SEOs over-relying on tools instead of raw data?”

It’s the false sense of completeness. The belief that the tool is showing you the full picture, when in reality, you’re only seeing a representative model of it.

Everything else (mis-prioritization, conflicting insights, and misguided fixes) flows from that single issue.

Why Technical SEO Tools “Feel Complete” But Aren’t

Technical SEO tools are a critical part of an SEO’s toolkit. They provide insight into how a website is functioning as well as how it may be perceived by users and search bots.

A Snapshot In Time Of The State Of Your Website

With a lot of the tools currently on the market, you are presented with a snapshot of the website at the point you set the crawler or report to run. This is helpful for spot-checking issues and fixes. It can be highly beneficial in spotting technical issues that could cause problems in the future, before they have made an impact.

However, they don’t necessarily show how issues have developed over time, or what might be the root cause.

Prioritized List Of Issues

The tools often help to cut through the noise of data by providing prioritized lists of issues. They may even give you a checklist of items to address. This can be very helpful for marketers who haven’t got much experience in SEO and need a hand knowing where to start.

All of these give the illusion that the tool is showing a complete picture of how a search engine perceives your site. But that picture is far from complete.

What’s Missing From Technical SEO Tools

Every tool is constrained in some way. They apply their own crawl limits, assumptions about site structure, prioritization algorithms, and data sampling or aggregation.

Even when tools integrate with each other, they are still stitching together partial views.

By contrast, raw data shows what actually happened, not what could happen or what a tool infers.

In technical SEO, raw data can include:

  • Server log files showing which URLs search engine bots actually requested.
  • Google Search Console crawl and performance data.
  • CrUX field data reflecting real user experiences.
  • Analytics data showing how visitors actually behave on the site.

Without these, you are often diagnosing a simulation of your site and not the real thing.

Joined Up Data

These tools will often only report on data from their own crawl findings. Sometimes it is possible to link tools together, so your crawler can ingest information from Google Search Console, or your keyword tracking tool uses information from Google Analytics. However, they are largely independent of each other.

This means you may well be missing critical information about your website by only looking at one or two of the tools. For a holistic understanding of a website’s potential or actual performance, multiple data sets may be needed.

For example, looking at a crawling tool will not necessarily give you clarity over how the website is currently being crawled by the search engines, just how it potentially could be crawled. For more accurate crawl data, you would need to look at the server log files.
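
If you want to see what actually got crawled rather than what could be, the log file is the place to start. Below is a minimal, illustrative Python sketch that counts Googlebot requests per URL from a standard combined-format access log; the file path and regex are assumptions you would adapt to your own server setup, and in practice you would also verify hits via reverse DNS, since user agents can be spoofed.

```python
# Minimal sketch: count Googlebot requests per URL from a combined-format
# access log. The log path and regex are assumptions; adapt them to your
# server's log configuration.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical path
# Combined format: IP - - [date] "METHOD /path HTTP/x" status size "referrer" "user-agent"
LINE_RE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[^"]+" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"')

crawled = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE_RE.search(line)
        if m and "Googlebot" in m.group("ua"):
            crawled[m.group("path")] += 1

# URLs Googlebot actually requested, most-crawled first
for path, hits in crawled.most_common(20):
    print(f"{hits:6d}  {path}")
```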

Non-Comparable Metrics

The reverse of this issue is that using too many of these tools in parallel can lead to confusing perspectives on what is going well or not with the website. What do you do if the tools provide conflicting priorities? Or the number of issues doesn’t match up?

Looking at the data through the lens of a tool means an extra layer can be added that makes the data non-comparable across tools. For example, sampling could be occurring, or a different prioritization algorithm used. This might result in two tools giving conflicting results or recommendations.

Some Tools Give Simulations Rather Than Actual Data

The other potential pitfall is that, sometimes, the data provided through these reports is simulated rather than actual data. Simulated “lab” data is not the same as actual bot or user data. This can lead to false assumptions and incorrect conclusions being drawn.

In this context, “simulated” doesn’t mean the data is fabricated. It means the tool is recreating conditions to estimate how a page might behave, rather than measuring what actually did happen.

A common example of lab vs. real data is found in speed tests. Tools like Lighthouse simulate page load performance under controlled conditions.

For example, a Lighthouse mobile test runs under throttled network conditions simulating a slow 4G connection. That lab result might show an LCP of 4.5s. But CrUX field data, reflecting real users across all their devices and connections, might show a 75th percentile LCP of 2.8s, because many of your actual visitors are on faster connections.

The lab result is helpful for debugging, but it doesn’t reflect the distribution of real user experiences in real-world scenarios.
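
If you want to put field data next to your lab numbers, the Chrome UX Report (CrUX) API exposes the same 75th-percentile metrics that real users generate. The sketch below is a rough illustration of querying it with Python’s standard library; it assumes you have an API key with the CrUX API enabled, and the exact request and response fields should be checked against Google’s current CrUX API documentation.

```python
# Rough sketch: pull the 75th-percentile LCP that real Chrome users saw
# (CrUX field data) for an origin, to compare against a Lighthouse lab run.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

payload = json.dumps({
    "origin": "https://www.example.com",
    "formFactor": "PHONE",
    "metrics": ["largest_contentful_paint"],
}).encode("utf-8")

req = urllib.request.Request(ENDPOINT, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)

lcp = record["record"]["metrics"]["largest_contentful_paint"]
print("Field LCP p75 (ms):", lcp["percentiles"]["p75"])
```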

Why This Is Important

Understanding the difference between the false sense of completeness shown through tools and the actual experience of users and bots revealed by raw data can be critical.

As an example, a crawler could flag 200 pages with missing meta descriptions. It suggests you address these missing meta descriptions as a matter of urgency.

Looking at server logs reveals something different. Googlebot only crawls 50 of those pages. The remaining 150 are effectively undiscovered due to poor internal linking. GSC data shows impressions are concentrated on a small subset of the URLs.

If you follow the tool, you spend time writing 200 meta descriptions.

If you follow the raw data, you fix internal linking, thereby unlocking crawlability for 150 pages that currently don’t have visibility in the search engines at all.
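
One way to ground a tool’s issue list in raw data is to join it against crawl activity from your logs. The sketch below is illustrative only: it assumes hypothetical CSV exports (missing_meta.csv from the crawler, googlebot_hits.csv from your log pipeline) and simply splits flagged URLs into those Googlebot has requested and those it has never seen.

```python
# Illustrative sketch: join a crawler's "missing meta description" export
# with Googlebot hits counted from server logs, to see which flagged URLs
# search engines actually visit. File names and column headings are
# assumptions; adjust to whatever your crawler and log pipeline produce.
import csv
from collections import Counter

# URLs flagged by the crawling tool (hypothetical export with a "url" column)
flagged = set()
with open("missing_meta.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        flagged.add(row["url"])

# Googlebot hits per URL, e.g., produced by the log-parsing sketch earlier
# and saved with "url" and "hits" columns (hypothetical file)
googlebot_hits = Counter()
with open("googlebot_hits.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        googlebot_hits[row["url"]] = int(row["hits"])

crawled = [u for u in flagged if googlebot_hits[u] > 0]
ignored = [u for u in flagged if googlebot_hits[u] == 0]

print(f"Flagged URLs Googlebot actually requested: {len(crawled)}")
print(f"Flagged URLs Googlebot never requested:    {len(ignored)}")
# The second group is usually an internal-linking and discovery problem,
# not a meta description problem.
```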

The Risk Of This Completeness Blind Spot

The “completeness” blind spot caused by over-reliance on technical tools has a lot of knock-on effects. Through the false sense of completeness, key aspects are overlooked, and, as a result, time and effort are misdirected.

Losing Your Industry Context

Tools often make recommendations without the context of your industry or organization. When SEOs rely too much on the tools and not the data, they may not apply the additional contextual overlay that is important for a high-performing technical SEO strategy.

Optimizing For The Tool, Not Users

When following the recommendations of a tool rather than looking at the raw data itself, there can be a tendency to optimize for the “green tick” of the tool, and not what’s best for users. For example, any tool that provides a scoring system for technical health can lead SEOs to make changes to the site purely so the score goes up, even if it is actually detrimental to users or their search visibility.

Ignoring The Best Way Forward By Following The Tool

For complex situations that require a nuanced approach, there is a risk that over-relying on tools rather than the raw data leads SEOs to ignore that complexity in favor of following the tools’ recommendations. Think of times when you have needed to ignore a tool’s alerts or recommendations because following them would lead to pages on your site being indexed that shouldn’t be, or pages being crawlable that you would rather were not. Without the overall context of your strategy for the site, tools cannot possibly know when a “noindex” is good or bad. Therefore, they tend to report in a very black-and-white manner, which can go against what is best for your site.

Final Thought

Overall, there is a very real risk that by accessing all of your technical SEO data only through tools, you may, at best, be nudged towards actions that do not serve your overall SEO goals, or, at worst, do harm to your site.

Featured Image: Paulo Bobita/Search Engine Journal

Google Adds New Task-Based Search Features via @sejournal, @martinibuster

Google introduced new features for Search that continue its evolution into a more task-oriented tool, enabling users to launch AI agents directly from AI Mode and complete more tasks. This is a trend that all SEOs and online businesses need to be aware of.

Rose Yao, Product leader in Search, posted about the new features on X. The first tool is a toggle that enables users to track hotel prices directly from the search bar.

Yao explained:

“To help you save $$, today we launched hotel price tracking on Search! Use the new tracking toggle to get an email if prices drop for your dream hotel. Available now, globally”

An accompanying official blog post further explained the new tool:

“You can already track hotel prices at the city level, and launching today, you can now track prices for individual hotels, too. To get started on desktop, head to Search and look up a specific hotel by name, then tap the new price tracking toggle. On mobile, you’ll find the price tracking option under the Prices tab after you search. Either way, you’ll get an email alert if rates change significantly during your chosen dates, so you can jump on those price drops and snag a great deal.”

Agentic Search From AI Mode

Google’s CEO, Sundar Pichai, recently shared that the future of search is task-based, with a reliance on AI agents that can complete tasks for users. This announcement brings Google search closer to that paradigm by introducing agentic search directly from AI Mode. This new feature launches an AI agent from AI Mode that will call local stores.

Yao explained:

“Agentic calling in AI Mode for finding last-minute travel gear.

When you just need that *one thing* before you leave but don’t know who’s got it in stock, you can ask AI Mode to save you the stress. Just search for what you need “near me” and Google AI will call local stores directly to get the details you need.”

This feature has been available on Google Search since November 2025, but it’s now rolling out to AI Mode.

Canvas Tool

AI Mode in Search has a Canvas tool that can accomplish planning tasks for users. The official blog post describes it:

“AI Mode in Search can transform your scattered research into a cohesive travel plan. Just head to AI Mode, select the Canvas tool from the plus (+) menu and describe your ideal trip. AI Mode will craft a custom itinerary in the Canvas side panel, including options for flights and hotels, as well as local attractions laid out on a map.”

The results can be further refined by the user. Travel planning with the Canvas tool is currently only available in the United States.

Three Featured Travel Tools

Those are the three travel-related features that Yao announced on X. The official blog post lists seven features related to travel, not all of which are new. For example, saving a boarding pass to Google Wallet is not a new feature.

Google’s Seven Travel-Related Search Features

  1. Build a custom trip plan with AI Mode in Search
  2. Save money with hotel price tracking on Search
  3. Let Google take the hassle out of booking restaurants
  4. Ask Google to call nearby stores for last-minute shopping
  5. Translate and communicate with confidence
  6. Ask Maps for the best stops on your summer trips
  7. Make airport travel easier with Google Wallet

Transformation Of Search Continues

The main takeaways are:

  • Search is on a path toward becoming task oriented
  • Features like hotel tracking, AI calling, and Canvas show Google handling real-world actions, not just queries
  • Sundar Pichai’s “task-based” vision is already live in product features, not theoretical
  • AI Mode acts as an execution layer, turning search into a tool that does things on behalf of users
  • Local intent is becoming more actionable, with AI directly interacting with businesses
  • The traditional “ten blue links” model is being replaced by an interface that organizes and completes workflows
  • Visibility in search is increasingly tied to whether your business can be used by these systems, not just found

Google Search is becoming less about answering queries and more about helping users with their everyday tasks. In that mode, it changes the role of a website from a destination into a data source and service endpoint.

For marketers, that creates an opportunity to help businesses become aware of these changes and get ready for them.

If AI agents are calling stores, tracking prices, and assembling plans, then the winners are not just the best-ranked pages but the ones that use accurately structured HTML elements as well as Schema.org structured markup (a minimal structured data sketch follows the list below). The winners are the businesses whose data is structured, accessible, and actionable enough for those agents to use.

What this means:

  • Treat product availability, pricing, hours, and inventory as critical inputs, not just content
  • Ensure local listings, structured data, and third-party integrations are accurate and consistent
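
As a hedged illustration of what “structured, accessible, and actionable” can look like in practice, the sketch below emits schema.org Product markup as JSON-LD with price and availability. The values are placeholders, and real markup should be validated against the schema.org definitions and Google’s structured data documentation.

```python
# Hedged illustration: emitting Product structured data (JSON-LD) so that
# price and availability are machine-readable. All values are placeholders.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Travel Adapter",  # placeholder product
    "sku": "ADAPT-001",
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://www.example.com/products/travel-adapter",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```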

Google Search is transforming into a task-based user interface. Task-based agentic search is not hype; it’s real, and these new features are part of that transformation. The old ten blue links paradigm is steadily fading away, and what’s replacing it is the concept of search as an interface for navigating the modern world.

Read more about Google’s task-based agentic search. On a related note, research based on 68 million AI crawler visits shows what successful websites do to drive better AI search performance for local business sites.

Featured Image by Shutterstock/Sergio Reis

Organic Search Winners Share 5 Traits

Google’s March 2026 core algorithm update concluded on April 8. The search giant doesn’t provide recovery guidelines for businesses whose rankings have decreased. It’s left to search engine optimizers to create tactics that align with the winners and help losing sites recover organic visibility.

A just-published study by SEO pro Cyrus Shepard of Zyppy Signal is an example. He analyzed organic search traffic of 400 winning and losing websites over the past 12 months and classified them by business model, content types, creator profiles, and other definable traits. From there, he identified five characteristics of winning sites.

Here are Cyrus’s five features of sites that consistently maintain prominent organic rankings on Google.

Proprietary assets

Of the 400 analyzed sites, 92.9% of the winners own proprietary assets that are difficult to replicate, such as datasets, products, images, or studies.

For example, a fashion ecommerce site may use its user data to report trends in colors or seasonality. A site with extensive product reviews could repurpose them into shopping guides.

Completes a task

According to the study, 83.7% of winning websites help searchers do something: buy, download, or search.

Winning sites tend to help users accomplish whatever they’re looking for. Losing sites may offer meaningful info on topics, but the searcher must go elsewhere to complete the task.

The solution may be a unique product or an interactive tool. For example, a tutorial site could offer interactive tools, quizzes, and workbooks to help students practice math.

Niche expertise

Expertise within a niche was a trait of 75.9% of the winners.

Winning sites tend to focus on a topic in which they have deep knowledge and experience.

Those sites become go-to authorities for specialized subjects. Hyper-specific travel blogs, for example, often outrank global travel brands.

Unique product or service

A unique product or service is a trait of 70.2% of sites that consistently rank well across core updates. Cyrus’s study found that informational sites (news publishers and affiliate sites) lost the most traffic and that offering a product may be the answer.

For example, a recipe site can sell a subscription meal plan, a book, or access to a private cooking community.

Strong brand

A strong brand, a destination site, was a trait of 32.6% of organic search winners. Cyrus found a high correlation between winning in organic search and having a strong profile of branded search terms.

The more searchers query a business’s name, the more that site is a destination and a strong signal to Google. Treat your brand search metrics as a key performance indicator, in other words.

I’ll add one feature for 2026 that Cyrus doesn’t address: sites that rank prominently in organic search offer something that AI cannot easily replicate.

What Search Engines Trust Now: Authority, Freshness & First-Party Signals via @sejournal, @cshel

Search has not become more chaotic. It has become more continuous.

If the last two years have felt like a blur of updates, volatility, and shifting guidance, you’re not imagining it. What’s changed is not just what search engines value. It’s how those values are evaluated.

The traditional model (the model we’re accustomed to) – periodic updates, relatively stable ranking signals, and long feedback loops – has been replaced by something faster and less discrete. Search engines are now heavily influenced by, and increasingly run on, AI systems that continuously test, interpret, and refine results, so what looks like constant algorithm change is actually ongoing model adjustment.

It’s this shift that has redefined what search engines trust.

The Algorithm Isn’t Static Anymore

For years, SEO operated on a predictable rhythm: core updates arrived, the rankings shifted, and then the industry analyzed the damage, identified patterns, and adapted.

That model assumed a relatively stable system punctuated by updates, but that assumption no longer holds.

Modern search systems incorporate multiple layers of AI-driven evaluation, including ranking systems, retrieval mechanisms, and answer-generation layers. These systems do not wait for quarterly updates. They iterate constantly, adjusting weighting, refining interpretation, and recalibrating outputs in near real time.

What we’re left with is a shorter signal half-life. What worked six months ago may still matter, but it is being re-evaluated continuously rather than periodically.

This is why it feels like we’re in a persistent state of chaos. The system is never settled; it’s always learning.

From Ranking To Evaluation

Traditional SEO focused on ranking documents. Pages competed as whole units, evaluated on signals like links, relevance, and technical accessibility. That model still exists, but it is no longer the full picture.

AI-driven search introduces a second layer: retrieval and synthesis. Instead of simply ranking pages, systems increasingly extract and recombine information from multiple sources to produce answers. This changes the competitive unit: pages still rank, but fragments are what get used.

In practical terms, your content is no longer evaluated solely as a document or single URL. It is evaluated as an entire collection of potential answers. Each section, paragraph, and list becomes a candidate for inclusion in AI-generated responses.

Why does this distinction matter? Because it shifts the role of trust. Search engines are not just deciding which page deserves to rank; they are deciding which source is trustworthy enough to be a resource.

Redefining “Trust” In Search

Trust used to feel like a score – it was a combination of authority signals, content quality, and technical hygiene that resulted in stable rankings.

Today, trust behaves more like a probability – it is continuously evaluated, recalculated, and reinforced based on new data. It is not assigned once and retained. It is earned repeatedly.

How is trust determined? There are three factors that dominate the evaluation: authority, freshness, and first-party signals. Each plays a distinct role in how AI-driven systems determine what to surface.

Authority: The Entry Point

Authority has always mattered, no question, but what has changed is where it sits in the process. In an AI-driven system, authority functions as a filter. It determines whether your content is even considered. Not all sources get equal treatment because not all sources are considered authoritative. These systems are biased toward entities they recognize – brands, authors, and domains that have demonstrated consistent expertise and visibility across the web.

A certain quantity of backlinks is no longer a reliable proxy for authority. Entity-level authoritative presence requires more proof than just links. The search engines build an understanding of who you are (and your authority) based on:

  • Mentions across other authoritative sites.
  • Consistent authorship and topical focus.
  • Brand recognition within a subject area.
  • Inclusion in structured knowledge systems.

These signals create what can be thought of as “entity gravity.” The stronger your presence, the more likely your content is to be included in the candidate set for retrieval.

The key distinction is that authority does not guarantee visibility; it guarantees eligibility. Without it, your content may be well-written, well-structured, and technically sound – and still be ignored.

Authority Comes Before Structure

There is a common misconception that better formatting or clearer writing alone can improve visibility in AI-driven search. Sorry, but it cannot, at least not in isolation.

Authority determines whether your content is selected. Structure determines whether it can be used. So, if your brand lacks recognition, your content may never be retrieved. If your content lacks structure, it may be retrieved but never cited. Both layers are required for this to work well.

This is why entity-building efforts, like PR, partnerships, thought leadership, and brand presence, have become inseparable from SEO. They influence not just rankings, but inclusion.

Freshness: The Signal Of Ongoing Relevance

Freshness has also evolved, or maybe it’s more accurate to say that it’s diverged.

In the past, all types of content benefited from freshness, and that freshness factor was often tied to recency. Newer content could reliably receive a temporary boost, especially for time-sensitive queries.

Today, that old kind of freshness only benefits time-sensitive publishers like news outlets. For everyone else, freshness is less about when something was published and more about whether it is being maintained.

When we’re looking at how freshness is evaluated for non-news publishers (i.e., everyone else), we see that AI-driven systems prioritize sources that demonstrate ongoing relevance. This includes:

  • Regularly updated content.
  • Clear timestamps and revision history.
  • Reinforcement of key topics over time.
  • Alignment with current information and context.

Outdated content introduces risk. If a system cannot determine whether information is still accurate (especially at grounding), it is less likely to include it in a synthesized answer.

Freshness, in this sense, becomes a trust reinforcement loop. Updating content signals continued expertise. It reduces uncertainty. It increases the likelihood of inclusion.

Please do not confuse this with rewriting everything constantly. It means maintaining the content that matters.
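
One low-effort way to find the content that needs maintaining is to read the lastmod dates out of your own sitemap. The sketch below assumes a standard sitemap at a placeholder URL and flags URLs that haven’t been updated in a year; it’s a starting point for a review queue, not a freshness score.

```python
# Small sketch of "maintain the content that matters": read lastmod dates
# from an XML sitemap and flag URLs untouched for over a year.
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
cutoff = datetime.now(timezone.utc) - timedelta(days=365)

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

stale = []
for url in tree.findall("sm:url", NS):
    loc = url.findtext("sm:loc", default="", namespaces=NS)
    lastmod = url.findtext("sm:lastmod", default="", namespaces=NS)
    if not lastmod:
        stale.append((loc, "no lastmod"))
        continue
    # lastmod may be a date or a full timestamp; keep just the date part
    modified = datetime.fromisoformat(lastmod[:10]).replace(tzinfo=timezone.utc)
    if modified < cutoff:
        stale.append((loc, lastmod))

for loc, when in stale:
    print(f"{when:>12}  {loc}")
```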

First-Party Signals: The Ground Truth

The third big shift is the dramatically increasing importance of first-party signals. AI systems are designed to synthesize information, but they still depend on source material. The quality of that material directly affects the quality of the output. As a result, systems favor content that represents original, verifiable input rather than recycled summaries.

First-party signals include:

  • Original research and data.
  • Proprietary insights and analysis.
  • Direct product or service information.
  • First-hand experience and expertise.

These signals reduce ambiguity. They provide a clear source of truth. They are easier to attribute and harder to replicate.

This is one of the reasons the “content at scale” model has struggled in recent years. Large volumes of derivative content offer little new information. They increase noise without increasing value.

AI systems are not looking for more content; they are looking for better inputs. If your content does not add something unique, it is unlikely to be selected.

The Hidden Layer: Usability

So we know that authority gets you considered, freshness keeps you relevant, and first-party signals establish credibility. But none of that matters if your content cannot be used, and this is where many sites fail.

A page can rank well and still have no presence in AI-generated answers. When that happens, it is rarely a ranking issue. It is an extractability issue.

AI systems do not read pages the way humans do. They do not navigate, interpret, and synthesize in a leisurely, exploratory way. They retrieve what is easy to extract and move on.

Content that performs well in this environment tends to share a few characteristics:

  • Clear, descriptive headings.
  • Logical hierarchy (H1, H2, H3).
  • One primary idea per paragraph.
  • Direct, declarative statements.
  • Lists and tables where appropriate.
  • Key points introduced early, not buried.

This is not about writing style. It is about reducing friction.

If a system has to reinterpret your content to isolate the answer, it is less likely to use it. If it can lift a sentence or a list directly, it is more likely to include it. In this sense, structure is not cosmetic. It is functional.
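
A quick way to sanity-check extractability is simply to look at the heading outline a parser sees. The following standard-library Python sketch prints the H1-H3 outline of a page at a placeholder URL; it is a rough structural check, not a substitute for a full content audit.

```python
# Quick structural check: print the H1-H3 outline of a page so you can see
# whether an extraction-oriented system gets a clean hierarchy.
import urllib.request
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None
        self.outline = []  # list of (level, text)

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.current = int(tag[1])
            self.outline.append((self.current, ""))

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.current = None

    def handle_data(self, data):
        if self.current and self.outline:
            level, text = self.outline[-1]
            self.outline[-1] = (level, (text + data).strip())

URL = "https://www.example.com/some-article"  # placeholder
with urllib.request.urlopen(URL) as resp:
    parser = HeadingOutline()
    parser.feed(resp.read().decode("utf-8", errors="replace"))

for level, text in parser.outline:
    print("  " * (level - 1) + f"H{level}: {text}")
```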

Why “Good SEO” Isn’t Always Enough

Many teams are encountering a frustrating pattern: They rank well, traffic is stable, but they are absent from AI-generated answers.

The first instinct is to look for ranking issues. Then, when that doesn’t fix the problem, teams move on to re-optimizing keywords, building more links, or publishing more content. These are solutions that do not address the real problem.

Ranking determines whether you are visible in search results. Retrieval determines whether you are used in answers. Those are not the same system. A page can perform well in traditional SEO metrics and still fail to provide clean, extractable segments for AI systems. When that happens, competitors with clearer structure or stronger authority are more likely to be cited, even if they rank lower.

This is not a contradiction; rather, it is a shift in evaluation.

Practical Implications

The implications for SEO are straightforward, even if the execution is not.

First, please stop treating updates as isolated events. They are outputs of a continuous system. Optimizing for long-term direction is more effective than reacting to short-term volatility.

Second, invest in authority at the entity level. Build recognition beyond your own site. Where and how you are mentioned matters as much as what you publish.

Third, maintain your content. Freshness is not a one-time signal. It is an ongoing demonstration of relevance.

Fourth, prioritize first-party value. Original insights, data, and expertise are more durable than derivative content.

Finally, structure for usability. Make your content easy to extract, not just easy to read.

Trust Is Now Dynamic

Search engines no longer assign trust once and move on. They evaluate it continuously, so you need to continuously monitor and maintain your trust signals.

Authority determines whether you are considered. Freshness determines whether you remain relevant. First-party signals determine whether you are credible. Structure determines whether you are usable.

All four are required.

If your content cannot be selected, extracted, and trusted quickly, it does not matter how well it ranks. That is the shift, and it is not going away.

Featured Image: beast01/Shutterstock

68 Million AI Crawler Visits Show What Drives AI Search Visibility via @sejournal, @martinibuster

A new analysis of 858,457 sites hosted on the Duda platform shows how AI crawlers are interacting with websites at scale. The data offers a clearer view of how crawling activity is growing and what SEOs and businesses should do to increase traffic from AI search.

AI Crawling Has Already Reached Scale

AI crawling is growing quickly, with more requests tied to real-time answers and most of that activity coming from a single provider. The data reveals a pattern that shows which sites are being crawled and, more importantly, why.

Year-Over-Year Growth In LLM Referrals

LLM referral traffic has increased sharply over the past year, with multiple platforms showing meaningful gains from very different starting points.

AI Referral Traffic Patterns

  • Total LLM referrals: 93,484 to 161,469 (+72.7%)
  • ChatGPT: 81,652 to 136,095 (+66.7%)
  • Claude: 106 to 2,488 (23x growth)
  • Copilot: 22 to 9,560 (from near-zero)
  • Perplexity: 11,533 to 13,157 (+14.1%)

Growth is not happening evenly, but across the board, referral traffic from AI systems is increasing. That makes AI-generated discovery a growing source of traffic, not a marginal one.

Crawlers Are Increasingly Fetching Content To Ground Answers

AI crawlers are no longer used primarily for indexing, with most activity now tied to retrieving content in real time to generate answers for users.

Most crawling is now happening in response to user queries rather than for building an index, which changes how content is accessed and used.

  • User Fetch (real-time answers): 56.9% of all crawler activity, driven almost entirely by ChatGPT
  • Training (model learning): 28.8%, split across GPTBot and other model crawlers
  • Discovery (content indexing): 14.3%, distributed across multiple systems
  • ChatGPT User Fetch volume: ~39.8 million visits

The trends are largely driven by ChatGPT, which is responsible for nearly all real-time retrieval activity. That means the move toward answer-based crawling is not evenly distributed, but concentrated in one platform shaping how content is accessed. This trend may change with Google’s new Google-Agent crawler.
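
If you want to see this split in your own logs, you can bucket AI crawler hits by user-agent token. The sketch below is illustrative: the mapping of tokens such as GPTBot, ChatGPT-User, and OAI-SearchBot to training, user-fetch, and discovery purposes is an assumption based on each vendor’s published crawler names, and both the tokens and their roles change over time, so check current documentation before relying on it.

```python
# Illustrative sketch: bucket AI crawler hits from an access log into the
# three purposes the study uses (user fetch, training, discovery) by
# user-agent token. The token-to-purpose mapping is an assumption and
# will drift over time; verify against each provider's crawler docs.
from collections import Counter

UA_BUCKETS = {
    "ChatGPT-User": "user_fetch",   # fetches made on behalf of ChatGPT users
    "GPTBot": "training",           # OpenAI model-training crawler
    "OAI-SearchBot": "discovery",   # OpenAI search indexing
    "ClaudeBot": "training",
    "PerplexityBot": "discovery",
}

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as f:  # hypothetical path
    for line in f:
        for token, bucket in UA_BUCKETS.items():
            if token in line:
                counts[bucket] += 1
                break

total = sum(counts.values()) or 1
for bucket, hits in counts.most_common():
    print(f"{bucket:12s} {hits:8d}  ({hits / total:.1%})")
```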

Market Concentration In AI Crawling

AI crawler activity is heavily concentrated, with OpenAI responsible for the vast majority of requests, reflecting its position as the primary tool users rely on to find and retrieve information.

  • OpenAI: 55.8 million visits (81.0%)
  • Anthropic (Claude): 11.5 million (16.6%)
  • Perplexity: 1.3 million (1.8%)
  • Google (Gemini): 380,000 (0.6%)

Most AI crawling activity comes from OpenAI, which aligns with ChatGPT’s role as a primary tool for finding and retrieving information. Claude follows at a much smaller share, suggesting a different usage pattern, while the rest of the market accounts for a minimal portion of crawler activity.

Scale And What That Actually Means

AI crawling is already operating across a large portion of the web, reaching hundreds of thousands of sites and generating tens of millions of requests in a single month.

More than half of all sites in the dataset received at least one AI crawler visit, showing that this activity is not limited to a small subset of websites.

  • Total sites analyzed: 858,457
  • Sites with at least one AI crawler visit: 506,910 (59%)
  • Total AI crawler visits (Feb 2026): 68.9 million

AI crawling is not isolated to high-profile or heavily trafficked sites. It is already widespread, with consistent activity across a majority of the web.

The Relationship Between Crawling and Real Traffic

Sites that allow AI systems to crawl them consistently show stronger engagement across multiple metrics.

What the data actually shows is:

  1. Sites that allow AI crawling receive significantly more human traffic
  2. Higher-traffic sites are more likely to be crawled

Sites that allow crawling by AI systems receive significantly more human traffic, averaging 527.7 sessions compared to 164.9 for sites that are not crawled. This does not establish causation, but it shows a clear alignment between sites that attract human visitors and how often AI systems revisit them.

  • Average human traffic (AI-crawled vs not): 527.7 vs 164.9 (3.2x higher)
  • Average form completions: 4.17 vs 1.57 (2.7x higher)
  • Average click-to-call: 8.62 vs 3.46 (2.5x higher)
  • Sites with 10K+ sessions: 90.5% crawl rate

AI systems are not discovering weak or inactive sites and lifting them up. They are returning to sites that already attract human visitors. For marketers, that shifts the focus away from trying to “get crawled” and toward building real audience demand, since visibility in AI systems appears to follow it.

What Correlates With More Crawling

The research compared sites that include specific third-party integrations, structured features, and content depth with those that do not and found which ones mattered most for AI crawler activity and referrals.

Across the dataset, 59% of sites received at least one AI crawler visit in February 2026. Sites that are crawled more often tend to combine three types of signals: external integrations, structured business data, and content depth.

1. External Integrations

These integrations connect the site to external systems that validate and distribute business information.

  • Yext integration: 97.1% crawl rate vs ~58% without (+38.9pp)
  • Reviews integrations: 89.8% crawl rate vs 58.8% without, 376.9 average crawler visits

Sites that are connected to external data and review systems are crawled more often and revisited more frequently, indicating that AI systems rely on these integrations as signals that a business is real, verifiable, and worth revisiting.

2. Structured Site Features And Business Data

These are built into the site and help AI systems understand and verify business identity.

  • Google Business Profile sync: 92.8% crawl rate vs 58.9% without, 415.6 average crawler visits
  • Local schema: 72.3% vs 55.2% (+17.1pp), 22.3% adoption
  • Dynamic pages: 69.4% vs 58.2% (+11.2pp)
  • Ecommerce: 54.2% vs 59.2% (-5.0pp)

Sites that clearly define their business identity and structure their information in a machine-readable way are crawled more often, showing that AI systems favor sites they can easily interpret, verify, and extract information from.

3. Content Depth (Volume Of Usable Data)

Sites with more content provide more opportunities for AI systems to retrieve, reference, and reuse information in responses.

  • Sites with 50+ blog posts: 1,373.7 average crawler visits vs 41.6 with no blog (~33x higher)

Sites with more content are crawled far more often, indicating that AI systems may return to sources that offer a larger supply of usable information to draw from when generating answers.

Local Business Schema Completeness = More Crawling

This part of the research focuses specifically on local business schema, comparing how the completeness of schema implementation for communicating business details relates to AI crawler activity. The fields measured include business name, phone number, address, hours, and social profiles.

  • No local schema fields: 55.2% crawl rate
  • 10–11 completed schema fields: 82% crawl rate
  • Sites with more complete local schema show a 26.8 percentage point higher crawl rate (82% vs 55.2%)

Sites that provide more complete local business information in structured form are crawled more often and receive more crawler visits. As more of these fields are filled in, both crawl rate and crawl frequency increase.

The data shows that clearly defined local business data makes a site easier for AI systems to identify, verify, and subsequently revisit, all the prerequisites for receiving traffic from AI search.
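
As a rough way to audit your own markup against this idea, the sketch below counts how many of the kinds of fields the study measures (name, phone, address, hours, social profiles) are present in a LocalBusiness JSON-LD block. The field list approximates the study’s categories rather than reproducing its exact methodology, and the markup shown is a placeholder.

```python
# Rough sketch: score how "complete" a LocalBusiness JSON-LD block is
# against an approximation of the fields the study measures.
import json

MEASURED_FIELDS = ["name", "telephone", "address", "openingHours", "sameAs"]

# Placeholder markup as it might appear in a page's application/ld+json script
markup = """
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Roasters",
  "telephone": "+1-555-010-0199",
  "address": {"@type": "PostalAddress", "streetAddress": "123 Main Street",
              "addressLocality": "Springfield", "postalCode": "62701"},
  "openingHours": ["Mo-Fr 07:00-18:00"],
  "sameAs": ["https://www.facebook.com/examplecoffee"]
}
"""

data = json.loads(markup)
present = [f for f in MEASURED_FIELDS if data.get(f)]
missing = [f for f in MEASURED_FIELDS if not data.get(f)]

print(f"Completed fields: {len(present)}/{len(MEASURED_FIELDS)}: {', '.join(present)}")
if missing:
    print("Missing:", ", ".join(missing))
```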

Takeaways

AI crawling is a parallel method for content discovery and the research shows clear patterns for sites that are visited by crawlers most often.

  • AI crawling operates alongside traditional search, changing how content is accessed and reused
  • Sites with structured local signals, deeper content, and more complete schema are crawled more often
  • Multiple reinforcing signals appear together on the same sites, not in isolation
  • The data shows direction, not causation, but the patterns are consistent

The data shows that sites that make it easy for AI crawlers to index and revisit them tend to perform better. Interestingly, sites that present clear, structured, and verifiable information, while continuing to build real audience demand, are more likely to be revisited by AI systems and benefit from traffic generated through AI search.

Read the research: Duda study finds AI-optimized websites drive 320% more traffic to local businesses

Featured Image by Shutterstock/Preaapluem

Selling To AI: The Complete Guide To Agentic Commerce via @sejournal, @slobodanmanic

For 30 years, checkout has been a page. A form with fields for name, address, credit card number. Whether it was Amazon’s one-click patent or Apple Pay’s fingerprint, the innovation was always about making that form faster to get through.

The form itself never went away. Now it is.

This is the final article in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO. Part 2 explained how to get your content cited in AI responses. Part 3 mapped the protocols forming the infrastructure layer. Part 4 got technical with how AI agents perceive your website. This article covers the commerce layer: how AI agents find products, complete purchases, and handle payments without ever loading a checkout page.

In September 2025, Stripe and OpenAI launched Instant Checkout inside ChatGPT. In January 2026, Google and Shopify unveiled the Universal Commerce Protocol at the National Retail Federation conference. Two open standards. Two competing visions for the same shift: checkout becoming a protocol, not a page.

Throughout this article, we draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

How We Got Here

Every generation of commerce technology has solved the same problem: reducing the friction between “I want something” and “I have it.” Agentic commerce is not a break from this pattern. It’s the pattern’s logical conclusion.

1994: The first online purchase. On Aug. 11, 1994, Phil Brandenberger used his credit card to buy Sting’s Ten Summoner’s Tales CD for $12.48 from a website called NetMarket. The New York Times covered it the next day. NetMarket’s 21-year-old CEO, Daniel Kohn, told the paper: “Even if the N.S.A. was listening in, they couldn’t get his credit card number.” Netscape’s SSL protocol, released that same year, made it possible.

The friction removed: You no longer had to go to a physical store.

Late 1990s: Comparison shopping. Within a few years, websites like BizRate (1996), mySimon (1998), and PriceGrabber (1999) let buyers see prices across multiple merchants instantly. Google entered the space in 2002 with Froogle, later renamed Google Product Search in 2007, then Google Shopping in 2012.

The friction removed: You no longer had to visit each store to compare.

1998: The store adapts to you. Amazon deployed item-to-item collaborative filtering at scale, the algorithm behind “customers who bought this also bought.” Greg Linden, Brent Smith, and Jeremy York published the underlying research in IEEE Internet Computing in 2003. In 2017, the journal named it the best paper in its 20-year history.

The friction removed: You no longer had to know exactly what you wanted.

2015: Commerce moves into conversations. Chris Messina, then Developer Experience Lead at Uber, coined the term “conversational commerce” in a January 2015 Medium post, describing “delivering convenience, personalization, and decision support while people are on the go.” In April 2016, Mark Zuckerberg launched the Facebook Messenger Platform, declaring: “I’ve never met anyone who likes calling a business.” Meanwhile, in China, WeChat had already proved the model. Its Mini Programs, launched January 2017, generated 800 billion yuan (~$115 billion) in transactions by 2019.

The friction removed: You no longer had to open a store’s website.

2014-2023: Voice and social commerce. Amazon Echo launched in November 2014, promising you could buy things without a screen. The promise was mostly unfulfilled. Social commerce had better luck: TikTok Shop, launched in the U.S. in September 2023, reached $33.2 billion in global sales by 2024. Content became the storefront.

The friction removed: Purchase intent was created inside the feed, not searched for.

2024: AI starts shopping for you. Within months, every major platform launched AI shopping features. Amazon introduced Rufus in February, a conversational assistant trained on its product catalog. Google rebuilt Shopping with AI in October, drawing on 50 billion product listings. Perplexity launched “Buy with Pro” in November, turning a search engine into a store.

The friction removed: AI did the research, comparison, and recommendation for you.

2025: The buyer disappears. In January, OpenAI launched Operator, an agent that navigated websites, filled forms, and completed purchases autonomously. In May, Google announced “Buy for Me” at I/O 2025. In September, Instant Checkout went live in ChatGPT.

The friction removed: The last one. The human no longer needs to be there for the transaction to happen.

Each of these shifts was about the same thing: removing one more step between wanting and having. Agentic commerce removes the final step: doing it yourself.

Checkout Is No Longer A Page

Here’s the shift in one sentence: In traditional commerce, the seller builds the checkout experience. In agentic commerce, the agent does.

When you buy something on a website today, you interact with the merchant’s checkout page. They designed the form, they chose the layout, they control the flow. You fill in your details, click “Buy,” and the payment processes.

In agentic commerce, the AI agent presents the checkout information within its own interface. ChatGPT shows you the product, the price, the shipping options, within the chat. You confirm. The agent handles the rest. The merchant never renders a page. They receive an API call.

Stripe’s agentic commerce guide puts it directly: “The parts of commerce that used to be user experience problems are becoming protocol problems.” Instead of optimizing button colors and form layouts, merchants are defining API endpoints and product feeds. Discovery, comparison, and checkout are all handled by the agent. The merchant’s job shifts to supplying structured product data and processing the order.

Emily Glassberg Sands, Stripe’s Head of Information and Data Science, framed the broader implications: “Agents don’t just change who’s at the checkout. They change who’s doing the searching, the deciding, the trusting. All of it.”

I discussed this with Jes Scholz, who ran digital across 140+ ecommerce brands at Ringier, on the podcast. Her experience was clear: Agents browse in text mode, and if they can’t parse your site cleanly, they leave. No second chances.

This isn’t theoretical. As of February 2026, several agentic commerce implementations are live. ChatGPT Instant Checkout is available to U.S. users on Free, Plus, and Pro plans. Etsy, Instacart, and Walmart are among the merchants processing orders through it. Shopify’s Agentic Storefronts are active by default for eligible merchants, syndicating products to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity simultaneously. Perplexity launched Instant Buy with PayPal in November 2025, allowing purchases directly within the chat interface with merchants like Wayfair, Abercrombie & Fitch, and thousands more via BigCommerce and Wix.

Every major AI company is moving in this direction. Anthropic, the company behind Claude, has been equally explicit about its commerce plans. In February 2026, Anthropic confirmed it is building features for “agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end,” while committing to keeping the experience ad-free with no sponsored links or third-party product placements. Claude already connects to Stripe, PayPal, and Square via MCP integrations. And in June 2025, Anthropic published Project Vend, a research experiment where Claude autonomously operated a physical retail store for a month, managing inventory, pricing, supplier relations, and customer interactions. The results were instructive: The agent performed well at supplier discovery and customer service, but sold items at a loss and hallucinated payment details. A useful preview of both the potential and the current limitations.

Two open protocols are making this possible. Both launched within four months of each other.

The Agentic Commerce Protocol

The Agentic Commerce Protocol (ACP) is an open standard co-developed by OpenAI and Stripe, announced Sept. 29, 2025. Licensed under Apache 2.0, it defines how AI agents complete purchases on behalf of users.

ACP uses a four-party model: the buyer (discovers and approves), the AI agent (presents products and handles checkout UI), the merchant (processes the order and payment), and the payment service provider (handles payment credentials securely). The merchant remains the merchant of record. They process the payment, handle fulfillment, manage returns. The agent is an intermediary, not a marketplace.

The protocol defines four API endpoints:

  • Create Checkout: Agent sends a product SKU; merchant generates a cart with pricing, shipping, and payment options.
  • Update Checkout: Modifies quantities, shipping method, or customer details mid-flow.
  • Complete Checkout: Agent sends a payment token; merchant processes payment and returns order confirmation.
  • Cancel Checkout: Signals cancellation; merchant releases reserved inventory.

The responsibility shift is worth spelling out:

  • Checkout UI: Seller (traditional) vs. Agent (ACP)
  • Payment credential collection: Seller (traditional) vs. Agent (ACP)
  • Cart and data model: Seller in both
  • Payment processing: Seller in both

The agent handles what the buyer sees. The seller handles what happens after they click “Buy.” ACP can be implemented as either a REST API or an MCP server, connecting naturally to the protocol ecosystem covered in Part 3.
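
To make the lifecycle concrete, here is a conceptual Python sketch of the merchant side of an ACP-style checkout: create, update, complete, and cancel operating on an in-memory session. The function names, fields, and catalog are illustrative assumptions, not the protocol’s actual schema, which should be taken from the published ACP specification.

```python
# Conceptual sketch of the merchant side of an ACP-style checkout lifecycle
# (create -> update -> complete -> cancel). Field names are illustrative,
# not the ACP specification's exact schema.
import uuid

CATALOG = {"SKU-123": {"name": "Example Mug", "unit_price_cents": 1800}}
SESSIONS = {}

def create_checkout(sku: str, quantity: int) -> dict:
    item = CATALOG[sku]
    session = {
        "id": f"chk_{uuid.uuid4().hex[:8]}",
        "items": [{"sku": sku, "quantity": quantity}],
        "total_cents": item["unit_price_cents"] * quantity,
        "status": "open",
    }
    SESSIONS[session["id"]] = session
    return session

def update_checkout(session_id: str, quantity: int) -> dict:
    session = SESSIONS[session_id]
    sku = session["items"][0]["sku"]
    session["items"][0]["quantity"] = quantity
    session["total_cents"] = CATALOG[sku]["unit_price_cents"] * quantity
    return session

def complete_checkout(session_id: str, payment_token: str) -> dict:
    session = SESSIONS[session_id]
    # In a real integration, the payment token (e.g., a Shared Payment Token)
    # would be passed to the payment provider here.
    session["status"] = "completed"
    session["order_id"] = f"ord_{uuid.uuid4().hex[:8]}"
    return session

def cancel_checkout(session_id: str) -> dict:
    session = SESSIONS[session_id]
    session["status"] = "canceled"  # release any reserved inventory here
    return session

if __name__ == "__main__":
    s = create_checkout("SKU-123", 2)
    s = update_checkout(s["id"], 3)
    s = complete_checkout(s["id"], payment_token="spt_example")
    print(s["status"], s["order_id"], s["total_cents"])
```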

Stripe’s Agentic Commerce Suite, launched Dec. 11, 2025, makes ACP adoption practical. Ahmed Gharib, Stripe’s Product Lead for Agentic Commerce, described it as “a low-code solution enabling businesses to sell across multiple AI agents via a single integration.” Without it, connecting to each AI agent individually would take up to six months of bespoke engineering per platform.

The Suite has three components: product discovery (sync your catalog and Stripe distributes it to AI agents), checkout (powered by Stripe’s Checkout Sessions API, handling taxes and shipping), and payments (using Shared Payment Tokens and Stripe Radar for fraud detection). Merchants connect their existing product catalog or upload directly to Stripe, then select which AI agents to sell through from the Stripe Dashboard.

The ecosystem is growing quickly. Beyond OpenAI, Stripe lists Microsoft Copilot, Anthropic, Perplexity, Vercel, and Replit as AI platform partners. On the ecommerce side, Squarespace, Wix, WooCommerce, BigCommerce, and commercetools have integrated. Salesforce announced ACP support in October 2025. Shopify’s 1 million+ US merchants are coming soon.

The Universal Commerce Protocol

Four months after ACP launched, a different coalition unveiled a second standard.

The Universal Commerce Protocol (UCP) was co-developed by Shopify and Google, announced Jan. 11, 2026 at the National Retail Federation conference in New York. Google CEO Sundar Pichai presented it. The co-developers include Etsy, Wayfair, Target, and Walmart. Over 20 companies endorsed it at launch, including Mastercard, Visa, Best Buy, Home Depot, Macy’s, American Express, and Stripe. I broke down UCP and its strategic implications the week it launched on the podcast.

Where ACP is tightly focused on the checkout flow, UCP is designed as a full commerce standard covering discovery through post-purchase. Its architecture is modeled after TCP/IP, with three layers:

  • Shopping Service: Core primitives such as checkout sessions, line items, totals, messages, and status.
  • Capabilities: Major functional areas (Checkout, Orders, Catalog), each independently versioned.
  • Extensions: Domain-specific schemas, added via composition without a central registry.

UCP is protocol-agnostic. It supports REST, MCP, A2A, and AP2 (Agent Payments Protocol, Google’s standard for agent-initiated payments). ACP currently supports REST and MCP.

Discovery works through a published profile at /.well-known/ucp, similar to how A2A agents publish their capabilities at /.well-known/agent-card.json (covered in Part 3). Both agents and merchants declare their capabilities, and on each request, the system computes the intersection of what they can do together. Ashish Gupta, VP/GM of Merchant Shopping at Google, described the logic: “The shift to agentic commerce will require a shared language across the ecosystem.”
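
A hypothetical sketch of that discovery step: fetch a merchant’s /.well-known/ucp profile and intersect its declared capabilities with the agent’s. The JSON shape (a “capabilities” list) is an assumption made for illustration; the real profile format is defined by the UCP specification.

```python
# Hypothetical illustration of UCP-style discovery: fetch a merchant profile
# and compute the intersection of capabilities. The JSON field names are
# assumptions for illustration only; consult the UCP spec for the real shape.
import json
import urllib.request

AGENT_CAPABILITIES = {"checkout", "orders"}  # what this agent can drive

def fetch_merchant_capabilities(origin: str) -> set:
    with urllib.request.urlopen(f"{origin}/.well-known/ucp") as resp:
        profile = json.load(resp)
    # Assumed field name: a list of capability identifiers
    return set(profile.get("capabilities", []))

merchant = fetch_merchant_capabilities("https://shop.example.com")  # placeholder origin
usable = AGENT_CAPABILITIES & merchant
print("Capabilities both sides support:", sorted(usable))
```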

The two protocols reflect different strategic positions. ACP, built by the company running the AI agent (OpenAI) and the company processing the payment (Stripe), is optimized for getting transactions through ChatGPT quickly. UCP, built by the company hosting the merchants (Shopify) and the company running search (Google), is designed for a multi-agent future where many AI platforms compete for the same shoppers.

How the two compare, dimension by dimension (ACP, Stripe + OpenAI, vs. UCP, Shopify + Google):

  • Launched: ACP on Sept. 29, 2025; UCP on Jan. 11, 2026.
  • Focus: ACP covers the checkout flow; UCP covers the full commerce journey.
  • Transport: ACP supports REST and MCP; UCP supports REST, MCP, A2A, and AP2.
  • Payment: ACP uses Shared Payment Tokens (Stripe); UCP uses AP2 with cryptographic Mandates.
  • Discovery: ACP relies on structured product feeds; UCP publishes a /.well-known/ucp endpoint.
  • Integration effort: days for existing Stripe merchants (ACP); weeks to months (UCP).
  • Coalition: ACP is backed by OpenAI, Stripe, and Salesforce; UCP by Google, Shopify, Mastercard, and Visa.

The good news for merchants: These aren’t mutually exclusive. Shopify merchants can serve both simultaneously. The same products appear in ChatGPT via ACP and in Google AI Mode via UCP. Shopify’s Agentic Storefronts handle the multi-protocol complexity, syndicating catalog data across ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from a single admin panel.

Vanessa Lee, Shopify’s VP of Product, framed the company’s position: “Agentic commerce has so much potential to redefine shopping and we want to make sure it can scale.”

The Trust Problem: Payments Without People

Both protocols face the same foundational challenge: How do you process a payment when the person with the credit card isn’t the one at the checkout?

Traditional commerce treats credential possession as a trust signal. If you have the card number, the expiry date, and the CVV, you’re probably the cardholder. Agentic commerce breaks this assumption. The agent has been authorized to act on the buyer’s behalf, but it’s not the buyer. As Stripe’s Kevin Miller wrote in his October 2025 blog post: “Trust can’t be inferred. It has to be explicitly granted, scoped, and enforced in code.”

Javelin Strategy & Research, cited by Visa, describes this as the shift from “card-not-present” to “person-not-present” transactions. It’s a useful framing. Card-not-present fraud was the defining challenge of ecommerce. Person-not-present fraud is the defining challenge of agentic commerce.

Shared Payment Tokens

Stripe’s solution is the Shared Payment Token (SPT), a new payment primitive designed specifically for agent transactions. Here’s how it works:

  1. The buyer saves a payment method with the AI platform (e.g., ChatGPT).
  2. When approving a purchase, the AI platform issues an SPT scoped to the specific merchant, capped at the checkout amount, with a time limit.
  3. The AI platform sends the SPT to the merchant via ACP.
  4. The merchant creates a Stripe PaymentIntent using the token.
  5. Stripe processes the payment, applying fraud detection in real time.

The buyer’s actual card details are never shared with the merchant or the agent. Each token is programmable (scoped by merchant, time, and amount), reusable across platforms, and revocable at any time. For existing Stripe merchants, enabling SPTs requires “as little as one line of code.”
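
As a hedged sketch of step 4 above, here is how a merchant might turn a Shared Payment Token into a charge with Stripe’s Python library. How the token is attached and confirmed (shown here as payment_method with confirm=True) is an assumption for illustration; the exact parameters belong to Stripe’s Shared Payment Token documentation.

```python
# Hedged sketch: the merchant turns the Shared Payment Token it received
# over ACP into a charge via a Stripe PaymentIntent. How the token is
# attached (shown as payment_method) is an assumption; follow Stripe's
# Shared Payment Token docs for the real parameters and confirmation flow.
import stripe

stripe.api_key = "sk_test_..."  # merchant's secret key (placeholder)

def charge_shared_payment_token(spt: str, amount_cents: int, currency: str = "usd"):
    intent = stripe.PaymentIntent.create(
        amount=amount_cents,   # capped by the token's scope
        currency=currency,
        payment_method=spt,    # assumed parameter for the shared token
        confirm=True,          # attempt the charge immediately
    )
    return intent.id, intent.status

if __name__ == "__main__":
    pid, status = charge_shared_payment_token("spt_example_token", 2499)
    print(pid, status)
```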

The Payment Networks Respond

The card networks have launched their own standards. Visa introduced the Trusted Agent Protocol in October 2025, an open framework built on HTTP Message Signatures that helps merchants distinguish legitimate AI agents from malicious bots. Developed in collaboration with Cloudflare, it incorporates feedback from Adyen, Checkout.com, Microsoft, Shopify, Stripe, and Worldpay, among others.

Mastercard launched Agent Pay in April 2025, introducing “Agentic Tokens” that build on its existing tokenization infrastructure. Each agent action uses permissions and limits defined by the consumer. Mastercard CEO Michael Miebach described agent-led payments as a “significant paradigm shift” for the industry. U.S. issuers were enabled in November 2025, with global rollout in early 2026.

PayPal joined the ACP ecosystem on October 28, 2025, enabling PayPal wallets for ChatGPT checkout and building an ACP server that connects its global merchant catalog without requiring individual merchant integrations.

Google launched its own payment standard in parallel. The Agent Payments Protocol (AP2), announced September 2025 with 60+ industry partners, uses Verifiable Digital Credentials and a cryptographic Mandate system to create tamper-evident proof of user consent at every step of the transaction. AP2 is payment-agnostic, supporting credit and debit cards, real-time bank transfers, and even stablecoins via a Coinbase x402 extension. It’s integrated directly into UCP.

Fraud Without Fingerprints

Traditional fraud detection relies on human behavioral signals: mouse movements, typing patterns, browsing behavior, session duration. AI agents have none of these. A legitimate agent transaction can look indistinguishable from a sophisticated bot attack.

Stripe addressed this by building what they describe as “the world’s first AI foundation model for payments,” a transformer-based model trained on tens of billions of transactions. The model treats each charge as a token and behavior sequences as context, ingesting signals including IPs, payment methods, geography, device characteristics, and merchant traits. When SPTs are used, Stripe Radar relays risk signals including dispute likelihood, card testing detection, and stolen card indicators to help “differentiate between high-intent agents and low-trust automated bots.”

The attack surface is also novel. Researchers demonstrated in a June 2025 study that ecommerce agents are susceptible to visual prompt injection: malicious content embedded in product listings can hijack agent behavior during shopping tasks. All agents tested were vulnerable. A separate study accepted to IEEE S&P 2026 found that 13% of randomly selected ecommerce websites had already exposed their chatbot plugins to indirect prompt injection via third-party content like product reviews. And a January 2025 paper on authenticated delegation argues that for agentic commerce to function at scale, the industry needs standardized mechanisms to “explicitly delegate authority to agents, transparently identify those agents as AI, and enforce human-centered choices around security and permissions.” SPTs, the Trusted Agent Protocol, and Agent Pay are all early answers to that challenge.

The concern is real on the consumer side, too. 88% of consumers surveyed by Javelin are concerned that AI will be used for identity fraud, according to Visa’s analysis. Building trust infrastructure that works for agent transactions is the prerequisite for agentic commerce scaling beyond early adopters.


Who’s Already Selling to AI

Despite the infrastructure still being built, adoption is moving fast.

AI platforms with commerce capabilities:

ChatGPT Instant Checkout is live for U.S. users on Free, Plus, and Pro plans; Google AI Mode supports agentic checkout via UCP; Perplexity offers Instant Buy with PayPal; Microsoft Copilot is an AI platform partner on Stripe’s Agentic Commerce Suite; and Anthropic has confirmed agentic commerce features for Claude are in development.

Merchants and brands on board:

The early adopter list reads like a mall directory. URBN (parent of Anthropologie, Free People, and Urban Outfitters), Etsy, Coach, Kate Spade, Glossier, Vuori, Spanx, SKIMS, Ashley Furniture, Revolve, and Halara are among those onboarding to Stripe’s Agentic Commerce Suite. Walmart and Instacart are live on ChatGPT. Gymshark, Everlane, and Monos are live on Google AI Mode via UCP.

Ecommerce platforms enabling it:

Shopify’s 1 million+ U.S. merchants are eligible for ChatGPT integration. BigCommerce, Wix, Squarespace, WooCommerce, and commercetools have integrated with Stripe’s Suite. Salesforce Commerce Cloud announced ACP support in October 2025, with new Agentforce agents for merchant, buyer, and personal shopper workflows.

The Market

The market projections vary widely, which tells you how early we are. McKinsey projects $1 trillion in U.S. retail revenue orchestrated by agents by 2030, scaling to $3-5 trillion globally. Gartner predicts 90% of B2B purchases will be handled by AI agents within three years, intermediating $15 trillion in spending by 2028. Forrester predicts that by 2026, one-third of retail marketplace projects will be abandoned as answer engines steal traffic.

The consumer side is more cautious. A Contentsquare survey of 1,300 U.S. consumers found 30% willing to let an AI agent complete a purchase on their behalf. A YouGov survey of 1,287 U.S. adults found 65% trust AI to compare prices, but only 14% trust it to actually place an order. Among Gen Z, that number rises to 20%. The gap between “I’ll let AI help me shop” and “I’ll let AI buy for me” is where we are right now.

But the traffic is already there. AI-driven traffic to U.S. retail websites grew 4,700% year-over-year by mid-2025, according to Adobe Analytics. Shopify reported that orders attributed to AI searches grew 11x since January 2025. OpenAI estimates approximately 2% of all ChatGPT queries are shopping-related, roughly 50 million shopping queries daily across a user base of 700 million weekly users.

Academic research is starting to reveal what happens when agents do the buying. A Columbia Business School and Yale study (August 2025) introduced ACES, the first agentic ecommerce simulator, and tested six frontier models, including Claude and GPT-4. They found that AI shopping agents exhibit “choice homogeneity,” concentrating demand on a small number of products and showing strong position biases in how listings are ranked. The researchers warn of winner-take-all dynamics and the emergence of “AI-SEO,” where sellers optimize listings specifically for agent behavior rather than human preferences. A February 2026 study on personalized product curation found that current agentic systems remain “largely insufficient” for tailored product recommendations in open-web settings. The agents are getting better at buying. They’re not yet great at buying the right thing for a specific person.

The infrastructure is being built regardless of whether consumers are fully ready. When they are, the businesses that are prepared will be the ones the agents can find.

How To Get Started

The good news: For most businesses, the entry point is simpler than you’d expect.

If you’re on Shopify, you may already be selling to AI. Agentic Storefronts are active by default for eligible U.S. merchants. Your products are syndicated to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from your existing Shopify admin. Check your dashboard for the agentic channel settings and ensure your product data (descriptions, images, categories) is clean and complete.

If you’re on Stripe, enabling Shared Payment Tokens for ACP requires as little as one line of code. The Agentic Commerce Suite handles catalog syndication, checkout, and fraud detection. Connect your product catalog, select which AI agents to sell through, and you’re live.

If you’re on BigCommerce, Wix, Squarespace, or WooCommerce, integrations with Stripe’s Suite are available. BigCommerce described the shift from “months of bespoke engineering work” per AI platform to “a single, configurable integration.”

Regardless of platform, the protocol integrations get you connected. But agents still need to find and understand your products. This is where the work from Part 2 (getting cited) and Part 4 (being agent-readable) converges with commerce.

Audit your product data. Agents parse your catalog programmatically (a minimal audit sketch follows this list). Every product needs:

  • A descriptive, specific title (“Men’s Organic Cotton Crew Neck T-Shirt, Navy,” not “Blue Shirt”).
  • A complete description including materials, dimensions, care instructions, and use cases.
  • Accurate, real-time pricing and stock availability.
  • High-quality images with descriptive alt text.
  • Consistent categorization across your catalog.
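
Here is a minimal sketch of how that checklist could be spot-checked in bulk, assuming you can export your catalog as a list of records. The field names and thresholds are illustrative; adapt them to whatever your platform actually exports.

```python
# Minimal catalog audit sketch. Field names and thresholds are illustrative,
# not any specific platform's schema.
REQUIRED_FIELDS = ["title", "description", "price", "availability", "image_url", "image_alt", "category"]

def audit_product(product: dict) -> list[str]:
    """Return a list of problems found in one product record."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not product.get(f)]
    title = product.get("title", "")
    if title and len(title.split()) < 4:
        problems.append(f"title too generic: '{title}'")
    description = product.get("description", "")
    if description and len(description) < 200:
        problems.append("description likely too thin (add materials, dimensions, use cases)")
    return problems

catalog = [
    {"title": "Blue Shirt", "description": "A nice shirt.", "price": 29.00,
     "availability": "InStock", "image_url": "https://example.com/shirt.jpg",
     "image_alt": "", "category": "Apparel"},
]

for product in catalog:
    for problem in audit_product(product):
        print(f"{product.get('title', '?')}: {problem}")
```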

Add structured markup. At minimum, every product page should include Product schema with name, description, image, sku, and brand, plus nested Offer schema with price, priceCurrency, availability, and seller. If you have reviews, add AggregateRating. This is the machine-readable layer that agents parse when direct protocol integrations aren’t available. I talked about this with Duane Forrester, who co-launched Schema.org while at Bing, on the podcast. His argument: consistent structured data builds what he calls “machine comfort bias,” where AI systems develop a preference for sources that have proven reliable over time.
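
To make that minimum concrete, here is one way to generate the markup programmatically: a short sketch that emits a JSON-LD block containing the fields listed above. The product values are placeholders, and many ecommerce platforms will emit equivalent markup for you natively.

```python
import json

# Minimal Product + Offer JSON-LD matching the fields described above.
# All values are placeholders.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Men's Organic Cotton Crew Neck T-Shirt, Navy",
    "description": "Midweight organic cotton tee. Machine wash cold. Crew neck, regular fit.",
    "image": "https://www.example.com/images/navy-crew-tee.jpg",
    "sku": "TEE-NVY-M",
    "brand": {"@type": "Brand", "name": "Example Apparel"},
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.7", "reviewCount": "213"},
    "offers": {
        "@type": "Offer",
        "price": "38.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "seller": {"@type": "Organization", "name": "Example Apparel"},
    },
}

# Embed the output in the product page inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(product_schema, indent=2))
```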

Test your agent visibility. Open ChatGPT, Perplexity, and Google AI Mode, and ask them to recommend products in your category. If yours don’t appear, agents can’t sell them. View your product pages in reader mode or a text-based browser to see what agents see when they visit your site directly (covered in Part 4).

Track agent-driven traffic. ChatGPT appends utm_source=chatgpt.com to referral links. Perplexity and other AI platforms leave similar referral signatures. Set up segments in your analytics to isolate AI-referred visits and monitor conversion rates separately from human traffic. The numbers are small now, but the 4,700% year-over-year growth in AI traffic to retail means they won’t stay small.
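
If your analytics tool doesn't offer a ready-made segment for this, the classification logic is straightforward. A rough sketch follows, assuming you can export hits with their landing URL and referrer; the referrer domains listed are assumptions for illustration, not a verified or exhaustive set.

```python
from urllib.parse import urlparse, parse_qs

# Rough classifier for AI-referred visits based on utm_source values and
# referrer domains. The domain set is illustrative, not exhaustive.
AI_SOURCES = {"chatgpt.com", "perplexity.ai", "copilot.microsoft.com", "gemini.google.com"}

def is_ai_referred(landing_url: str, referrer: str) -> bool:
    utm_source = parse_qs(urlparse(landing_url).query).get("utm_source", [""])[0]
    referrer_host = urlparse(referrer).netloc.removeprefix("www.")
    return utm_source in AI_SOURCES or referrer_host in AI_SOURCES

hits = [
    ("https://shop.example.com/tee?utm_source=chatgpt.com", ""),
    ("https://shop.example.com/tee", "https://www.google.com/"),
]
ai_share = sum(is_ai_referred(url, ref) for url, ref in hits) / len(hits)
print(f"AI-referred share of sampled hits: {ai_share:.0%}")
```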

Walmart CEO Doug McMillon put it directly: “For many years now, ecommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change.”

Whether it changes next quarter or next year for your business depends on whether your product data is ready when the agents come looking.

Key Takeaways

  • Checkout is becoming a protocol, not a page. In agentic commerce, the AI agent handles the interface; the merchant processes the order. Two open standards, ACP (Stripe + OpenAI) and UCP (Shopify + Google), define how this works.
  • Both protocols are open and growing fast. ACP launched in September 2025 and powers Instant Checkout in ChatGPT. UCP launched in January 2026 with endorsements from Mastercard, Visa, Walmart, and Target. They’re complementary, not mutually exclusive. Shopify merchants can serve both simultaneously.
  • Shared Payment Tokens solve the “person-not-present” problem. When the buyer isn’t at the checkout, traditional trust signals break down. SPTs are programmable, scoped, time-limited, and revocable, letting agents initiate payments without ever seeing the buyer’s card details.
  • Payment networks are building their own standards. Visa’s Trusted Agent Protocol and Mastercard’s Agent Pay provide authentication and fraud frameworks specific to agent transactions. PayPal joined the ACP ecosystem. The payments infrastructure for agentic commerce is taking shape across the industry.
  • Major brands are already live. Etsy, Walmart, Instacart, Glossier, SKIMS, Coach, and dozens more are selling through AI agents today. Ecommerce platforms, including Shopify, BigCommerce, Wix, Squarespace, and WooCommerce, have integrations available.
  • Consumer trust is lagging behind infrastructure. Only 14% of consumers currently trust AI to place orders on their behalf. But AI-driven traffic to retail grew 4,700% in a year. The infrastructure is being built for the adoption curve that follows.

This is the final article in a five-part series on the agentic web. Part 1 framed the shift from SEO to AAIO. Part 2 covered how to get cited by AI. Part 3 mapped the protocols. Part 4 explained how agents perceive your website. This article covered where it all leads: transactions.

The thread connecting all five parts is straightforward. Structured data helps AI find you. Clean content helps AI cite you. Accessible HTML helps AI navigate you. Structured commerce protocols help AI buy from you. It’s the same principle at every layer: Make your business machine-readable, and the machines will do business with you.

Kevin Miller, Stripe’s Head of Payments, captured the moment: “Stripe spent the last 15 years optimizing commerce for human shoppers. Now, we’re starting to do the same with agents.”

The agents are already shopping. The question is whether they can find your store.

More Resources:


This post was originally published on No Hacks.


Featured Image: showcake/Shutterstock

Google Bans Back Button Hijacking, Agentic Search Grows – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: the updates cover what Google considers spam, what happens when you report it, and what agentic search looks like in practice.

Here’s what matters for you and your work.

Google’s New Spam Policy Targets Back Button Hijacking

Google added back button hijacking to its spam policies, with enforcement beginning June 15. The behavior is now an explicit violation under the malicious practices category.

Key facts: Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Pages engaging in the behavior face manual spam actions or automated demotions.

Why This Matters

Google called out that some back button hijacking originates from included libraries or advertising platforms, which means the liability sits with the publisher even when the behavior comes from a vendor.

You have two months to audit every script running on your site, including ad libraries and recommendation widgets you didn’t write yourself.

Sites that receive a manual action after June 15 can submit a reconsideration request through Search Console once the offending code is removed.

What SEO Professionals Are Saying

Daniel Foley Carter, SEO Consultant, summed up the community reaction on LinkedIn:

“So basically, that spammy thing you do to try and stop users leaving? Yeah, don’t do it.”

Manish Chauhan, SEO Head at Groww, added on LinkedIn that he was:

“glad this is being addressed. It always felt like a short-term hack for pageviews at the cost of user trust.”

Read our full coverage: New Google Spam Policy Targets Back Button Hijacking

Spam Reports May Now Trigger Manual Actions

Google updated its report-a-spam documentation on April 14 to say user submissions may now trigger manual actions against sites found violating spam policies. The previous guidance said spam reports were used to improve spam detection systems rather than to take direct action.

Key facts: Google may use spam reports to take manual action against violations. If Google issues a manual action, the report text is sent verbatim to the reported website through Search Console.

Why This Matters

Google now states that spam reports can be used to initiate manual actions, making reports explicitly part of its enforcement process in official documentation.

This also raises concerns about potential abuse, as grudge reports and competitor sabotage may become more appealing when reports have a tangible impact. Therefore, the true test will be the quality of reports that Google actually considers.

What SEO Professionals Are Saying

Gagan Ghotra, SEO Consultant, wrote on LinkedIn about why the change may lead to better reports:

“Now spam reports have direct relation to Google issuing manual actions against domains. Google announced if there is a spam report from a user and based upon that report Google decide to issue manual action against a domain then Google will just send the user submitted content in report to the site owner (Search Console – Manual Action report) and will ask them to fix those things. Seems like Google was getting too many generic spam reports and now as the incentive to report are aligned. That’s why I guess people are going to submit reports which have a lot of relevant information detailing why/how a specific site is violating Google’s spam policies.”

Read Roger Montti’s full coverage: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Agentic Restaurant Booking Expands In AI Mode

Google expanded agentic restaurant booking in AI Mode to additional markets on April 10, including the UK and India. Robby Stein, VP of Product for Google Search, announced the rollout on X.

Key facts: Searchers can describe group size, time, and preferences to AI Mode, which scans booking platforms simultaneously for real-time availability. The booking itself is completed through Google partners rather than directly on restaurant websites.

Why This Matters

Restaurant booking shows how task completion within search works. For local SEOs and marketers, traffic patterns shift: users now often stay within Google during discovery, with bookings routed through partners.

This depends on Google booking partners, which may limit visibility for restaurants outside those platforms, making presence on Google-supported booking sites more important than the restaurant’s own website. This model may or may not extend to other experiences.

What SEO Professionals Are Saying

Glenn Gabe, SEO and AI Search Consultant at G-Squared Interactive, flagged the rollout on X:

“I feel like this is flying under the radar -> Google rolls out worldwide agentic restaurant booking via AI Mode. TBH, not sure how many people would use this in AI Mode versus directly in Google Maps or Search (where you can already make a reservation), but it does show how Google is moving quickly to scale agentic actions.”

Aleyda Solís, SEO Consultant and Founder at Orainti, noted a key limitation in a LinkedIn post:

“Google expands agentic restaurant booking in AI Mode globally: You still need to complete the booking via Google partners though.”

Read Roger Montti’s full coverage: Google’s Task-Based Agentic Search Is Disrupting SEO Today, Not Tomorrow

Theme Of The Week: Google Gets Specific

What counts as spam, what happens when spam gets reported, and what agentic search looks like all got clearer definitions this week.

Back button hijacking becomes a named violation with an enforcement date. Google’s documentation now says spam reports may be used for manual actions, not just fed into detection systems. Agentic search becomes a live product for restaurant reservations in specific markets rather than a talking point about the future.

The compliance work, the reporting mechanics, and the agentic experience are now all defined clearly enough to be tracked directly rather than just forecast.

Top Stories Of The Week:

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

Your AI Visibility Strategy Doesn’t Work Outside English via @sejournal, @DuaneForrester

This series has been written in English, tested in English, and grounded in research conducted primarily in English. Every framework discussed here (vector index hygiene, cutoff-aware content calendaring, community signals, machine-readable content APIs) was conceived by an English-speaking practitioner, stress-tested against English-language queries, and validated against benchmarks that, as this article will show, are themselves English-weighted by design. That is not a disclaimer, but it is the central problem this article is about.

The AI visibility discourse at large carries the same limitation. One 2024 study analyzing AI evaluation datasets found that over 75% of major LLM benchmarks are designed for English tasks first, with non-English testing treated as an afterthought. The strategies built on top of those benchmarks inherit the same bias.

Enterprise brands are not the villains in this story. Translation-first search content strategies produced imperfect results globally, but markets had learned to live with the nuanced failures. Traditional search indexed what existed, ranked it imperfectly, and the degradation was quiet enough that no one filed a complaint. LLMs raise the bar in a way search never did, and the reason is structural, which is what the rest of this article examines.

The Platform Map

Before optimizing AI visibility in any market, a brand needs to answer a question the English-centric visibility discourse rarely asks: Which AI system are your target customers actually using? The answer varies more dramatically by region than most global marketing teams have accounted for.

In China, a market of 1.4 billion people, ChatGPT and Gemini are not accessible. The AI visibility contest happens entirely within a separate ecosystem. Baidu’s ERNIE Bot crossed 200 million monthly active users in January 2026, and Baidu holds the leading position in AI search market share, according to Quest Mobile. But Baidu is no longer operating in a vacuum. ByteDance’s Doubao surpassed 100 million daily active users by end of 2025, and Alibaba’s Qwen exceeded 100 million monthly active users in the same period. A brand’s English-optimized content architecture is not underperforming in this ecosystem. It simply does not exist there.

South Korea tells a different version of the same story. Naver captured 62.86% of the South Korean search market in 2025 (more than double Google’s share) and since March 2025 has been deploying AI Briefing, a generative search module powered by its proprietary HyperCLOVA X model, with plans for up to 20% of all Korean searches to surface AI-generated answers by end of 2025. Naver is also a closed ecosystem where results route to internal Naver properties, not necessarily the open web. Western brands whose structured data and llms.txt implementation was designed for open-web crawlers are operating with architecture that was never built to reach Naver’s retrieval layer. China and Korea alone account for well over a billion AI-active users on platforms a standard global visibility strategy does not touch.

The Map Is Far Bigger Than We’re Drawing

Those two markets are the ones that get cited because their scale is impossible to ignore. But the platforms being built outside the English-dominant orbit extend considerably further, and the breadth of what has launched in the last two years deserves attention on its own terms.

Europe

  • France – Mistral AI’s Le Chat was the No. 1 free app in France after its February 2025 launch; the French military awarded Mistral a deployment contract through 2030, and France committed €109 billion in AI infrastructure investment at the 2025 AI Action Summit.
  • Germany – Aleph Alpha trains in five languages with EU regulatory compliance by design, backed by Bosch and SAP.
  • Italy – Velvet AI (Almawave/Sapienza Università di Roma) is built specifically for Italian language and cultural context, designed for EU AI Act compliance from inception.
  • European Union – The OpenEuroLLM initiative, launched in 2025, is developing a family of open LLMs covering all 24 official EU languages.
  • Switzerland – Apertus (EPFL/ETH Zurich/Swiss National Supercomputing Centre, September 2025) supports over 1,000 languages with 40% non-English training data, including Swiss German and Romansh.

Middle East

  • UAE/Abu Dhabi – Falcon (Technology Innovation Institute) ranges from 7B to 180B parameters; Falcon Arabic, launched May 2025, outperforms models up to 10 times its size on Arabic benchmarks.
  • Saudi Arabia – HUMAIN, backed by the sovereign wealth fund, is framed as a full-stack national AI ecosystem.

South and Southeast Asia

  • India – Bhashini (Ministry of Electronics and IT) has produced over 350 AI-powered language models; BharatGen, launched June 2025, is India’s first government-funded multimodal LLM.
  • Singapore / Southeast Asia – SEA-LION (AI Singapore) supports 11 Southeast Asian languages; Malaysia, Thailand, and Vietnam have deployed MaLLaM, OpenThaiGPT, and GreenMind-Medium-14B-R1, respectively.

Latin America

  • 12-country consortium – Latam-GPT launched September 2025, led by Chile’s CENIA with over 30 regional institutions, trained on court decisions, library records, and school textbooks, with an initial Indigenous language tool for Rapa Nui.

Africa/Eastern Europe

  • Sub-Saharan Africa – Lelapa AI’s InkubaLM supports Swahili, Yoruba, IsiXhosa, Hausa, and IsiZulu; Nigeria launched a national multilingual LLM in 2024.
  • Russia/Ukraine – GigaChat (Sberbank) is the dominant domestically deployed Russian AI assistant; Ukraine announced a national LLM in December 2025, built with Kyivstar and trained on Ukrainian historical and library data.

This list is not really meant to be exhaustive, but it is meant to be disorienting.

Every entry above represents a retrieval ecosystem, a cultural signal hierarchy, and a community proof-point structure that a North American-optimized AI visibility strategy does not reach. But the more important observation is about which direction these models were built in.

The old content strategy model was centrifugal: the brand sits at the center, creates content, translates it, and pushes it outward into markets. Traditional search accommodated this because crawlers are indifferent to cultural authenticity: they index what is there. The imperfect results were tolerated because most markets had no better alternative.

These regional models were built in the opposite direction. A government mandate, a national corpus, a specific cultural identity, a language’s syntactic logic: that is the origin point. The model was trained on what that place knows about itself. A brand’s translated content arrives as a foreign object with no parametric presence, carrying the syntactic and cultural signatures of its origin language. Translation does not retrofit cultural fit into a model that was built without you in it.

And this does not stop at the English/non-English boundary. Even within English, regional identity shapes what a model treats as native. Irish English carries vocabulary (craic, gas, giving out) that exists nowhere else. Australian idiom, Singaporean English, and Nigerian Pidgin all have distinct fingerprints. A U.S. brand’s content may read as subtly foreign to a model trained predominantly on British or Irish corpora. The direction of the problem is the same regardless of whether the language is technically shared. Often these aren’t just words; they’re compressed cultural signals. A literal translation gives you the category but strips out intensity, intent, emotional tone, social expectation, or shared history.

The Embedding Quality Gap

The reason translation does not solve this is not just strategic. It’s structural, and it lives in the embedding layer.

Retrieval in AI systems depends on semantic similarity calculations. Content is encoded as a vector, queries are encoded as vectors, and the system identifies matches by measuring distance in that vector space. The accuracy of those matches depends entirely on how well the embedding model represents the language in question. Embedding models are not language-neutral. (I think of this as a kind of cultural parametric distance, or a language vector bias issue.)
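
To make the mechanic concrete, here is a minimal retrieval-by-similarity sketch using an off-the-shelf multilingual embedding model (the model named below is just one common example, not a recommendation): encode documents and a query, then rank by cosine similarity. The quality of those rankings depends entirely on how well the model represents the query’s language.

```python
from sentence_transformers import SentenceTransformer, util

# Minimal retrieval-by-similarity sketch. The model is one common multilingual
# example, not a recommendation; scores will vary by model and language.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

documents = [
    "Our enterprise platform automates invoice reconciliation for finance teams.",
    "Nuestra plataforma empresarial automatiza la conciliación de facturas.",
]
query = "software para conciliar facturas automáticamente"  # Spanish-language query

doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]

for doc, score in zip(documents, scores):
    print(f"{float(score):.3f}  {doc}")
# If the embedding model under-represents the query's language, the right
# document can score lower than it should, with no visible error anywhere.
```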

The most rigorous current evidence comes from the Massive Multilingual Text Embedding Benchmark (MMTEB), published at ICLR 2025. Even across more than 250 languages and 500 evaluation tasks, the benchmark’s own task distribution is skewed toward high-resource languages. The benchmarks practitioners use to evaluate whether their embedding architecture works in other languages are themselves English-weighted. A leaderboard score that looks reassuring may be measuring performance on a test that does not represent the language actually in use.

The structural cause is well documented. The Llama 3.1 model series, positioned at release as state-of-the-art in multilingual performance, was trained on 15 trillion tokens, of which only 8% was declared non-English. This is not a Llama-specific problem; it reflects the composition of the large-scale web corpora used to train most foundation models, where English content is overrepresented at every stage: crawl filtering, quality scoring, and final dataset construction. Research comparing English and Italian information retrieval performance, published May 2025, found that while multilingual embedding models bridge the general-domain gap between the two languages reasonably well, performance consistency decreases substantially in specialized domains, precisely the domains enterprise brands operate in.

The embedding gap does not produce obvious errors. It produces quietly degraded retrieval: content that should surface does not, and there is no visible failure signal. The dashboards stay green. The gap only becomes visible when someone tests in the actual market language.

When Translation Isn’t Enough

Below the embedding layer sits a problem that is harder to instrument: Cultural context shapes what a model treats as relevant in the first place. Research published in 2024 by Cornell University researchers found that when five GPT models were asked questions from a widely used global cultural values survey, responses consistently aligned with the values of English-speaking and Protestant European countries. The models were not asked to translate anything; they were asked to reason, and their default frame of reference was shaped by the cultural composition of their training data.

Consider a brand headquartered outside France, but operating in France. Their content, even if professionally translated, was likely written by non-French-speaking teams with non-French-market authority signals: the institutional citations, the comparison frameworks, the professional register. Mistral was built on French corpora, with French institutional relationships and French media partnerships as its baseline for what counts as authoritative. A Canadian brand’s French content, for example, is tolerated by a French-speaking human reader. Whether it clears the threshold for a model trained on native French content as its definition of relevance is a different question entirely.

The community signals argument from the previous article in this series applies here with a regional dimension. The platforms that drive AI retrieval through community consensus differ by market. In China, Xiaohongshu now processes approximately 600 million daily searches (nearly half of Baidu’s query volume) with over 80% of users searching before purchasing and 90% saying social results directly influence their decisions. The community signals that matter for AI visibility in China are not the ones a strategy built around English-language review platforms is generating.

A brand may have excellent English-language retrieval infrastructure, strong community signals in Western markets, and a well-architected machine-readable content layer, and still be effectively invisible in Korea, structurally disadvantaged in Japan, and culturally misaligned in Brazil. This is not a failure of execution as much as a failure of assumption about which direction the optimization flows.

What Enterprise Teams Should Do

An honest note before the framework: The documented, auditable evidence base for enterprise-level non-English AI visibility strategies does not yet exist in a form that holds up to scrutiny. Work is being done, but a citable case study requires a defined baseline, a measurable intervention, a controlled timeframe, and independently validated results. A practitioner’s assertion that their work applies to your situation is not that. The absence of rigorous case data is a reason to build with intellectual honesty about what is validated versus directional, not a reason to wait. With that in mind, here’s what you can do today:

Audit AI visibility per language and per market, not globally. Query performance in English tells you nothing about performance in Japanese, and performance with global AI platforms tells you nothing about performance inside Naver’s AI Briefing. The audit needs to happen at the market level, using queries constructed in the local language by native speakers, not translated from English.
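
One way to keep that discipline operationally is to maintain the audit as a per-market matrix rather than a single global score. A minimal sketch follows; the platforms and queries are illustrative placeholders, and the native-language queries should come from native speakers rather than from translating the English set.

```python
# Per-market visibility audit skeleton. Platforms and queries are placeholders;
# the point is the structure: each market gets its own platforms, its own
# native-language queries, and its own results, not a shared global score.
audit_matrix = {
    "ko-KR": {
        "platforms": ["Naver AI Briefing", "ChatGPT"],
        "queries": ["재무팀을 위한 송장 자동화 소프트웨어"],  # written by a native speaker
    },
    "fr-FR": {
        "platforms": ["Le Chat", "ChatGPT", "Google AI Mode"],
        "queries": ["logiciel de rapprochement de factures pour PME"],
    },
}

def record_result(market: str, platform: str, query: str, brand_mentioned: bool) -> dict:
    """One audit observation: was the brand surfaced for this query on this platform?"""
    return {"market": market, "platform": platform, "query": query, "mentioned": brand_mentioned}

results = [
    record_result("ko-KR", "Naver AI Briefing", audit_matrix["ko-KR"]["queries"][0], brand_mentioned=False),
]
print(results)
```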

Map the AI platforms that matter in each target market before optimizing. The list in the previous section is a starting point, not a permanent reference, as this landscape shifts quarterly. Optimization work (structured data, content APIs, entity signals) needs to be built toward the platforms that actually serve each market.

Build localized content, not translated content. The four-layer machine-readable architecture discussed in this series applies in every language. But a translated version of an English content API is not a localized one. Entity relationships, cultural authority signals, and community proof points all need to be rebuilt for local context. The optimization direction is inward from the market, not outward from the brand.

Accept that English-English is not a single market either. The same structural logic applies within English. A U.S. brand’s content may carry American syntactic and cultural signatures that read as subtly foreign to models trained on predominantly British, Irish, or Australian corpora. Regional English is not a rounding error. It is evidence of the same underlying principle operating on a smaller scale.

Accept that a single global AI visibility strategy is insufficient. The frameworks developed in English, including the ones in this series, are a starting point for one slice of the global market. Extending them globally requires treating each major market as a distinct optimization problem: different platforms, different embedding architectures, different cultural retrieval logic, and a different direction of trust.

Image Credit: Duane Forrester

There is real work to be done. If we step back and look at the big picture again, it’s clear that markets that were once willing to live with the nuanced failures of translation-first content strategies are increasingly operating on platforms built to serve them natively, and that gap is widening. You know I like to name things when the industry hasn’t gotten there yet, so here it is: this is the Language Vector Bias problem. And the brands that start closing it now are not catching up to a solved problem. They are getting ahead of the most consequential visibility gap we aren’t really talking about.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: Billion Photos/Shutterstock; Paulo Bobita/Search Engine Journal

Machine-First Architecture: AI Agents Are Here And Your Website Isn’t Ready, Says NoHacks Podcast Host via @sejournal, @theshelleywalsh

AI agents are already here. Not as a concept, not as a demo, but shipping inside browsers used by billions of people. Every major tech company has launched either a browser with AI built in or an extension that acts on your behalf.

Anthropic’s Claude for Chrome can navigate websites, fill forms, and perform multi-step operations on your behalf. Google announced Gemini in Chrome with agentic browsing capabilities, including auto browse, which can act on webpages for you. OpenClaw, the open-source AI agent, connects large language models directly to browsers, messaging apps, and system tools to execute tasks autonomously.

For more understanding about optimizing for agents, I spoke to Slobodan Manic, who recently wrote a five-part series on optimizing websites for AI agents. His perspective sits at the intersection of technical web performance and where AI agent interaction is actually heading.

From Slobodan Manic’s testing, almost every website is structurally broken for this shift.

“It started with us going to AI and asking questions. And now AI is coming to us and meeting us where we are. From my testing, I noticed that websites are nowhere near being ready for this shift because structurally almost every website is broken.”

The Single Biggest Thing That’s Changed

I started by asking Slobodan what’s changed in the last six to nine months that means SEOs need to pay attention to AI agents right now.

“Every major tech company has launched either a browser that has AI in it that can do things for you or some kind of extension that gets into Chrome. Claude has a plugin for Chrome that can do things for you, not just analyze web pages, summarize web pages, but actually perform operations.”

When ChatGPT first launched in 2023 and made AI widely accessible, we asked it questions, much as we typed basic queries into search engines 25 years ago. We are now becoming more sophisticated and fluid with our prompting as we realize that AI can do so much more than [write me an email to politely decline an invitation].

Agents represent an even bigger shift to a different dynamic, where AI can complete tasks on our behalf and run complex systems. [Check my emails and delete any that are spam, sort them into a priority group, and surface what needs my immediate attention and provide a qualified response to anything on a basic query, plus make appointments in my calendar for any meeting invites].

Understanding and taking advantage of the possibilities is something we are all trying to figure out right now. What we should be aware of is that most websites aren’t built or ready for this agentic world.

Websites Are Becoming Optional, Or Are They?

I have a theory that brand websites are becoming hubs, the central point that connects all of your content assets online. But Slobodan has gone further. He’s written about websites becoming optional for the end user, with pages built by machines for machines and the interaction happening through closed system interfaces. I asked him to expand on that vision and what kind of timeframe we’re realistically looking at.

“First I’ll say that this is not fully happening today. This is still near to mid future. This is not March 2026,” he clarified. But the signals are concrete.

“Google had a patent granted in January that will let them use AI to rewrite the landing page for you if your landing page is not good enough. And then we have all these other companies including Google that announced Gemini browsing for you inside Chrome. So we have an end-to-end AI system that does everything while humans just wait for results.”

He was careful not to overstate it. People still like to browse, read, and compare things. Websites aren’t disappearing.

“Just the same way as mobile traffic has not killed desktop traffic even if it’s taken a bigger share of traffic overall, higher percentage of overall traffic while the desktop traffic is staying flat in terms of absolute numbers, I think this is another lane that will open where things will be happening without a human being involved in every step.”

His timeline for this: “Within a year we can have this become a reality. Not majority, but if Google starts rewriting landing pages using AI, we will see this happening probably 2027, if not sooner.”

When Checkout Becomes A Protocol

Slobodan has written that checkout is becoming a protocol, not a page. If an AI agent can buy on your behalf without ever loading a brand’s website, I asked, “What does that mean for how brands build trust and differentiate when the customer never sees their site?”

“If you’re building trust in a checkout page, you’re doing it wrong. Let’s start there. That I firmly believe. This is not to do with AI. This was never the right place to build trust,” he responded.

Slobodan pointed to every Shopify checkout page that looks identical. “There’s no trust built there. It’s just a machine-readable page that looks the same for everyone, for every brand. You’re supposed to be doing your job before the user needs to pay you.”

This is where he referenced Jono Alderson, and the concept of upstream engineering. “Moving upstream and doing work there and not on the website is the only way to move forward for anyone whose job is optimizing websites. That’s SEO, that’s CRO, that’s content, that’s anyone doing any kind of website work.”

He summarized it best: “Your website is a part of the equation. Your website is not the equation. And that’s the biggest structural shift that people need to make to survive moving forward.”

What SEOs And Brands Should Actually Do Now

I asked what SEOs and brands can practically start doing to transition over the next year. His answer reframed how we should think about the website itself.

“If your website was your storefront, and it was for decades, people come to you, people do business there. It needs to be a warehouse and a storefront moving forward or you’re not going to survive. Simple as that.”

“We had all those bookstores that were selling books in the ’90s and then Amazon shows up and then you need to be a warehouse. You need to exist in two planes at the same time for the near future at least. So focusing only on your website is the most wrong thing you can do moving forward.”

His main area of focus right now is what he calls machine-first architecture. The principle is to build for machines before you build for humans.

“You don’t build your website for humans until you’ve built it for machines. When you’re working on a product page, there’s no Figma, there’s no design, there’s no copy. You start with your schema. What is your schema supposed to say? What is the meaning of the page? You start with the meaning and then from that build into a web page as it’s built for humans.”

He compared it directly to the mobile-first shift. “That did not mean no desktop. That meant do the more difficult version of it first and then do the easy thing. Trust me, it’s a lot more complicated to add meaning and structure to a page that’s already been designed than to do it the other way.”

And it extends beyond the website. “If you’re saying something on your website, you better check all of your profiles everywhere online, what people are saying about you. It’s everything everywhere all at once. But this is what optimization has become and what it needs to be.”

I also put to him the argument that optimizing for LLMs is fundamentally different from SEO. His response was unequivocal.

“Hard disagree. The hardest possible disagree. If you were doing things the right way, working on the foundations and checking every box that has to be checked, it’s not different at all.”

Where he sees a difference is in the speed of consequences. “With AI in the mix, you just get exposed much faster and the consequences are much greater. There’s nothing different other than those two things.”

This echoed something I’ve felt strongly. The cycle is moving more quickly, but there’s so much similarity with what happened at the foundation of this industry 25 to 30 years ago, which I raised in my SEO Pioneers series. We’re feeling our way through in the same way. And Slobodan agreed.

“They figured this out once and maybe we should ask them how to figure it out again.”

Vibe Coding Is A Trap, Deep Work Is The Moat

For my last question, I put it to Slobodan that he’s said vibe coding is a trap and deep work is the only moat left. For the SEO practitioner feeling overwhelmed, what’s the one thing they should actually do this week?

“It’s really the foundations. I hate to give the boring answer, but it’s really fixing every single foundational thing that you have on your website or your website presence.”

He’s watched the industry chase one shiny tool after another. “There’s always a new shiny toy to work on while your website doesn’t work with JavaScript disabled. Just ignore all of that until you’ve fixed every single broken foundation you have on your website.”

On vibe coding specifically, he was precise: “I don’t like the term vibe coding. It just suggests that you have no idea what you’re doing and you’re happy about it. That’s the way that sounds to me. The concept of AI-assisted coding, it’s there. It’s great. It’s not going away.”

“But just focus on what you should be doing first before you use AI to do it faster.”

What resonated with me is how well this applies to writing, too. AI is brilliant at confidently producing a draft that, at first glance, looks great. But when you actually read it, you realize it’s just somebody confidently talking nonsense.

Slobodan nailed the core problem: “You need to know what good is and what good looks like. Because AI will always give you something. If you don’t know enough about that specific thing, it will always look good from the outside. And there’s a reason why everyone is okay with vibing everything except for their own profession, because they try it and they see that the results are just horrific.”

Build For Machines First, Everything Else Follows

The one thing to take away from this conversation is to build for machines first, then humans. Not because human user experience won’t matter, but because getting the machine layer right first makes the human layer better.

Your website is no longer the only version of your business that people, or agents, will encounter. The brands that treat it as part of a wider ecosystem rather than the whole ecosystem are the ones that will come through this transition in the strongest position.

Watch the full video interview with Slobodan Manic here, or on YouTube.

Thank you to Slobodan for sharing his insights and being my guest on IMHO.

More Resources:


This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

Google’s Patent On Autonomous Search Results via @sejournal, @martinibuster

The United States Patent Office recently published Google’s continuation of a patent for a search system that detects when there is no satisfactory answer for a query and waits to deliver the answer automatically once it becomes available.

Search And AI Assistant

The patent, published in February 2026, is a continuation of an older patent, with the main change being that it applies the invention within the context of an AI assistant. The invention solves the problem of answering a question when no satisfactory answer is available at the time a user makes the query. It waits until a satisfactory answer exists, then circles back to the user with it, without them having to ask again.

The patent is titled, Autonomously providing search results post-facto, including in assistant context. Although the patent mentions quality thresholds, those thresholds are defined in the sense of whether the answer meets the user’s needs.

The patent describes six scenarios that would trigger the invention:

  1. When no search results meet defined quality or authoritative-answer criteria.
  2. When results exist but fail to provide a definitive or authoritative answer that satisfies those criteria.
  3. When no results meet quality criteria because the information is not yet available.
  4. When a query seeks a specific answer and no result satisfies the required criteria.
  5. When a resource later satisfies the defined criteria after previously lacking required information.
  6. When a previously available resource is refined or updated so that it now meets the criteria.

Useful And Complete Answers

Google’s patent says that the invention is a solution for times when there are no useful or complete answers because the information does not yet exist or is not good enough, forcing users to keep searching repeatedly.

The system checks if results meet:

  • A quality standard.
  • An authoritativeness standard.
  • A completeness standard.

If the current answers don’t meet those standards, the system stores the query and monitors for new or updated information. Once something satisfactory becomes available, it sends the results to the user without them searching again.
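
A minimal sketch of that store-monitor-deliver flow is below. This is only an illustration of the behavior the patent describes, not Google’s implementation; the criteria check and the function names are invented stand-ins for the patent’s quality, authoritativeness, and completeness thresholds.

```python
from dataclasses import dataclass

# Illustrative sketch of the patent's described flow: if no result meets the
# quality criteria, store the query and deliver an answer proactively once a
# satisfying result appears. Not Google's code; meets_criteria() is a stand-in.

@dataclass
class PendingQuery:
    user_id: str
    query: str

pending: list[PendingQuery] = []

def meets_criteria(result: dict | None) -> bool:
    return bool(result) and result.get("quality", 0) >= 0.8 and result.get("authoritative", False)

def handle_query(user_id: str, query: str, best_result: dict | None) -> str:
    if meets_criteria(best_result):
        return best_result["answer"]
    pending.append(PendingQuery(user_id, query))
    return "No good answer yet. I'll follow up when one is available."

def on_index_update(query: str, new_result: dict) -> None:
    """Called when new or updated content relevant to a stored query appears."""
    if not meets_criteria(new_result):
        return
    for item in [p for p in pending if p.query == query]:
        pending.remove(item)
        # Delivery could be a push notification or a later assistant conversation,
        # on the same device or a different one in the user's ecosystem.
        print(f"notify {item.user_id}: {new_result['answer']}")

print(handle_query("u1", "concert ticket on-sale date", None))
on_index_update("concert ticket on-sale date",
                {"answer": "Tickets go on sale Friday at 10 a.m.", "quality": 0.9, "authoritative": True})
```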

Follow-Up Questions Are Not Necessary

What is novel about the invention is that it enables follow-up delivery of results after the original query without requiring a new follow-up question. It also surfaces search results proactively in notifications or assistant conversations.

At a later time, when new or updated information becomes available that satisfies the criteria, the system proactively delivers that information to the user. This delivery can occur through notifications, within an unrelated interaction, or during a later conversation with an automated assistant.

The system may also optionally notify the user that no good results are currently available and ask if they want to be informed when better results appear.

This transforms search from a one-time, user-initiated action into a persistent, ongoing process in which the system continues working in the background and updates the user when meaningful information becomes available.

Cross-Device Continuity

An interesting feature of this invention is that it can reach out to the user across multiple devices.

Here is where it’s outlined:

[0012] “In some implementations, the query is received on an additional computing device that is in addition to the computing device for which the content is provided for presentation to the user.”

This capability is highlighted again in section [0067]:

“For example, the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device.”

The output can also be delivered cross-device, as visual and/or audible content or through an automated assistant, and the information can be presented while the user is interacting with the assistant in a different context; the patent describes an "ecosystem" of devices.

Lastly, the patent explains that the information can be surfaced when the user is interfacing with the automated assistant in a completely different context:

[0040]”…the content may be provided for presentation to the user via the same computing device the user utilized to submit the query and/or via a separate computing device. The content may be provided for presentation in various forms. For example, the content may be provided as a visual and/or audible push notification on a mobile computing device of the user, and may be surfaced independent of the user again submitting the query and/or another query.

Also, for example, the content may be presented as visual and/or audible output of an automated assistant during a dialog session between the user and the automated assistant, where the dialog session is unrelated to the query and/or another query seeking similar information.”

Takeaways

The patent (Autonomously providing search results post-facto, including in assistant context) is in line with Google’s vision of task-based agentic search, where AI assistants help users accomplish things. The patent could apply to an AI agent that is asked for tickets to an event before the tickets are available, or to making restaurant reservations once the dates open up. Both scenarios are examples of task-based agentic search (TBAS).

Here are seven takeaways:

  1. The system stores data associated with the user about unresolved queries, allowing it to track unanswered information needs over time rather than treating each search as a one-off event.
  2. It delivers results within future interactions, including unrelated assistant conversations, not just through standalone notifications.
  3. The notifications can happen across an ecosystem of devices.
  4. A lack of results is defined by failing to meet quality criteria, which can mean the information is absent, the answer is not yet available, or the answer is not available from authoritative sources.
  5. The system focuses on queries that seek specific answers, rather than general informational searches.
  6. It supports cross-device continuity, enabling a query on one device to be fulfilled later on another.
  7. The design reduces repeated searches by eliminating the need for users to check back; the system autonomously circles back when the information is available.

Featured Image by Shutterstock/uyabdami