Mixed Reports on AI Ecommerce Traffic

Consumers arriving from AI search and chat may be high-intent and ready to buy, but the early evidence is uneven and easily misread.

AI-referred visitors are engaging more deeply and converting at higher rates, according to the April 2026 Adobe Digital Insights “Quarterly AI Traffic Report” (PDF).

Premium Engagement

AI-referred visitors in March were 42% more likely to purchase, according to Adobe, generating 37% more revenue per visit than visitors from other channels.

Consumers from AI platforms:

  • Spent 48% longer on site,
  • Visited 13% more pages,
  • Bounced 32% less.

In short, Adobe’s report positions AI as a strong customer acquisition channel.

Early Data

Yet other analyses suggest the channel is nascent and driving only modest visits. For example, “ChatGPT Referrals to E-Commerce Websites,” an October 2025 study by German university professors Maximilian Kaiser and Christian Schulze, found that ChatGPT accounted for less than 0.2% of ecommerce traffic.

Compared with more established channels such as email, advertising, and organic search, the available datasets are tiny, especially for high-intent shoppers.

Moreover, performance almost certainly varies by store size, product category, and brand recognition. For small and midsize ecommerce companies, the implication is not to chase volume but to understand how AI is reshaping product discovery and prepare for it.

Mixed Reports

Adobe is not the first to suggest that AI is a premium ecommerce acquisition channel. Google claims that clicks on AI Overviews are more likely to convert than clicks from traditional organic listings.

Likewise, Similarweb’s “State of Ecommerce 2025” report stated that “AI search has become a high-intent growth channel.”

Traffic to ecommerce sites from OpenAI’s ChatGPT converted at roughly 11.4%, according to Similarweb, compared to 5.3% from organic search.

However, conversions vary depending on the report. Schulze and Kaiser’s analysis found ChatGPT-referred traffic converted about twice as well as paid social, but it underperformed most other channels. Organic search, for example, showed about a 13% higher conversion rate than AI referrals, while affiliate (86% more likely to convert) and paid search (45% more) performed significantly better.

These findings are noteworthy, in part, because the paper analyzed 12 months of first-party data — from August 2024 through July 2025 — across 973 ecommerce websites and $20 billion in order revenue. The data included nearly 50,000 transactions attributed to ChatGPT referrals and 164 million from traditional channels.

The professors also found that engagement varied. AI visitors, according to the report, were less likely to bounce than visitors from other channels, which matches Adobe’s findings. But the data also implies fewer pages visited and less time on site, perhaps suggesting a different browsing pattern.

Easy to Misread

So which report is correct?

They might all be right. The differences between Adobe’s analysis and the findings of Kaiser and Schulze may accurately reflect each dataset.

Factors that might skew the numbers include:

  • Measurement. Adobe emphasized post-click performance, including engagement, conversion rate, and revenue per visit. Kaiser and Schulze relied on last-click attribution, which can undercount AI’s role in earlier research and consideration.
  • Definition of AI traffic. Adobe groups “generative AI traffic” broadly across multiple tools and interfaces. The academic study isolates ChatGPT referrals.
  • Geography. Adobe’s data is U.S.-focused. The academic dataset spans 49 countries, where adoption, trust, and shopping behavior almost certainly differ.
  • Timing. The academic study collected data from August 2024 through July 2025, an early phase of AI shopping. Adobe’s data reflects more recent usage, after rapid improvements in tools and consumer familiarity.
  • Channel maturity. AI traffic represents a minor share of visits. Small samples can exaggerate differences, especially when comparing across merchants, categories, and brands.

Taken together, these differences are a healthy reminder that AI chat, search, and shopping are a moving target.

AI Is Vital

AI as an acquisition channel is early, uneven, and unclear.

Nonetheless, AI already influences how shoppers discover products, and it may be the most important discovery channel since the internet itself.

Measure its impact, optimize for AI visibility, and iterate quickly. The ecommerce industry may be in the midst of a once-in-a-generation shift. Merchants who adapt early are far better positioned than those who wait.

Selling To AI: The Complete Guide To Agentic Commerce via @sejournal, @slobodanmanic

For 30 years, checkout has been a page. A form with fields for name, address, credit card number. Whether it was Amazon’s one-click patent or Apple Pay’s fingerprint, the innovation was always about making that form faster to get through.

The form itself never went away. Now it is going away.

This is the final article in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO. Part 2 explained how to get your content cited in AI responses. Part 3 mapped the protocols forming the infrastructure layer. Part 4 got technical with how AI agents perceive your website. This article covers the commerce layer: how AI agents find products, complete purchases, and handle payments without ever loading a checkout page.

In September 2025, Stripe and OpenAI launched Instant Checkout inside ChatGPT. In January 2026, Google and Shopify unveiled the Universal Commerce Protocol at the National Retail Federation conference. Two open standards. Two competing visions for the same shift: checkout becoming a protocol, not a page.

Throughout this article, we draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

How We Got Here

Every generation of commerce technology has solved the same problem: reducing the friction between “I want something” and “I have it.” Agentic commerce is not a break from this pattern. It’s the pattern’s logical conclusion.

1994: The first online purchase. On Aug. 11, 1994, Phil Brandenberger used his credit card to buy Sting’s Ten Summoner’s Tales CD for $12.48 from a website called NetMarket. The New York Times covered it the next day. NetMarket’s 21-year-old CEO, Daniel Kohn, told the paper: “Even if the N.S.A. was listening in, they couldn’t get his credit card number.” Netscape’s SSL protocol, released that same year, made it possible.

The friction removed: You no longer had to go to a physical store.

Late 1990s: Comparison shopping. Within a few years, websites like BizRate (1996), mySimon (1998), and PriceGrabber (1999) let buyers see prices across multiple merchants instantly. Google entered the space in 2002 with Froogle, later renamed Google Product Search in 2007, then Google Shopping in 2012.

The friction removed: You no longer had to visit each store to compare.

1998: The store adapts to you. Amazon deployed item-to-item collaborative filtering at scale, the algorithm behind “customers who bought this also bought.” Greg Linden, Brent Smith, and Jeremy York published the underlying research in IEEE Internet Computing in 2003. In 2017, the journal named it the best paper in its 20-year history.

The friction removed: You no longer had to know exactly what you wanted.

2015: Commerce moves into conversations. Chris Messina, then Developer Experience Lead at Uber, coined the term “conversational commerce” in a January 2015 Medium post, describing “delivering convenience, personalization, and decision support while people are on the go.” In April 2016, Mark Zuckerberg launched the Facebook Messenger Platform, declaring: “I’ve never met anyone who likes calling a business.” Meanwhile, in China, WeChat had already proved the model. Its Mini Programs, launched January 2017, generated 800 billion yuan (~$115 billion) in transactions by 2019.

The friction removed: You no longer had to open a store’s website.

2014-2023: Voice and social commerce. Amazon Echo launched in November 2014, promising you could buy things without a screen. The promise was mostly unfulfilled. Social commerce had better luck: TikTok Shop, launched in the U.S. in September 2023, reached $33.2 billion in global sales by 2024. Content became the storefront.

The friction removed: Purchase intent was created inside the feed, not searched for.

2024: AI starts shopping for you. Within months, every major platform launched AI shopping features. Amazon introduced Rufus in February, a conversational assistant trained on its product catalog. Google rebuilt Shopping with AI in October, drawing on 50 billion product listings. Perplexity launched “Buy with Pro” in November, turning a search engine into a store.

The friction removed: AI did the research, comparison, and recommendation for you.

2025: The buyer disappears. In January, OpenAI launched Operator, an agent that navigated websites, filled forms, and completed purchases autonomously. In May, Google announced “Buy for Me” at I/O 2025. In September, Instant Checkout went live in ChatGPT.

The friction removed: The last one. The human no longer needs to be there for the transaction to happen.

Each of these shifts was about the same thing: removing one more step between wanting and having. Agentic commerce removes the final step: doing it yourself.

Checkout Is No Longer A Page

Here’s the shift in one sentence: In traditional commerce, the seller builds the checkout experience. In agentic commerce, the agent does.

When you buy something on a website today, you interact with the merchant’s checkout page. They designed the form, they chose the layout, they control the flow. You fill in your details, click “Buy,” and the payment processes.

In agentic commerce, the AI agent presents the checkout information within its own interface. ChatGPT shows you the product, the price, the shipping options, within the chat. You confirm. The agent handles the rest. The merchant never renders a page. They receive an API call.

Stripe’s agentic commerce guide puts it directly: “The parts of commerce that used to be user experience problems are becoming protocol problems.” Instead of optimizing button colors and form layouts, merchants are defining API endpoints and product feeds. Discovery, comparison, and checkout are all handled by the agent. The merchant’s job shifts to supplying structured product data and processing the order.

Emily Glassberg Sands, Stripe’s Head of Information and Data Science, framed the broader implications: “Agents don’t just change who’s at the checkout. They change who’s doing the searching, the deciding, the trusting. All of it.”

I discussed this with Jes Scholz, who ran digital across 140+ ecommerce brands at Ringier, on the podcast. Her experience was clear: Agents browse in text mode, and if they can’t parse your site cleanly, they leave. No second chances.

This isn’t theoretical. As of February 2026, several agentic commerce implementations are live. ChatGPT Instant Checkout is available to U.S. users on Free, Plus, and Pro plans. Etsy, Instacart, and Walmart are among the merchants processing orders through it. Shopify’s Agentic Storefronts are active by default for eligible merchants, syndicating products to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity simultaneously. Perplexity launched Instant Buy with PayPal in November 2025, allowing purchases directly within the chat interface with merchants like Wayfair, Abercrombie & Fitch, and thousands more via BigCommerce and Wix.

Every major AI company is moving in this direction. Anthropic, the company behind Claude, has been equally explicit about its commerce plans. In February 2026, Anthropic confirmed it is building features for “agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end,” while committing to keeping the experience ad-free with no sponsored links or third-party product placements. Claude already connects to Stripe, PayPal, and Square via MCP integrations. And in June 2025, Anthropic published Project Vend, a research experiment where Claude autonomously operated a physical retail store for a month, managing inventory, pricing, supplier relations, and customer interactions. The results were instructive: The agent performed well at supplier discovery and customer service, but sold items at a loss and hallucinated payment details. A useful preview of both the potential and the current limitations.

Two open protocols are making this possible. Both launched within four months of each other.

The Agentic Commerce Protocol

The Agentic Commerce Protocol (ACP) is an open standard co-developed by OpenAI and Stripe, announced Sept. 29, 2025. Licensed under Apache 2.0, it defines how AI agents complete purchases on behalf of users.

ACP uses a four-party model: the buyer (discovers and approves), the AI agent (presents products and handles checkout UI), the merchant (processes the order and payment), and the payment service provider (handles payment credentials securely). The merchant remains the merchant of record. They process the payment, handle fulfillment, manage returns. The agent is an intermediary, not a marketplace.

The protocol defines four API endpoints:

  • Create Checkout: The agent sends a product SKU; the merchant generates a cart with pricing, shipping, and payment options.
  • Update Checkout: Modifies quantities, shipping method, or customer details mid-flow.
  • Complete Checkout: The agent sends a payment token; the merchant processes the payment and returns an order confirmation.
  • Cancel Checkout: Signals cancellation; the merchant releases reserved inventory.
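As a rough sketch of that four-call lifecycle, here is a hypothetical in-memory merchant. The class, method names, and payload shapes below are illustrative only, not the published ACP schema:

```python
import uuid


class CheckoutSession:
    """Toy model of an ACP-style checkout session (illustrative, not the spec)."""

    def __init__(self, sku: str, quantity: int, unit_price: int):
        self.id = str(uuid.uuid4())
        self.sku = sku
        self.quantity = quantity
        self.unit_price = unit_price  # minor units, e.g. cents
        self.status = "open"

    @property
    def total(self) -> int:
        return self.quantity * self.unit_price


class Merchant:
    """Sketch of the four merchant-side responsibilities described above."""

    def __init__(self, catalog: dict):
        self.catalog = catalog  # sku -> unit price in cents
        self.sessions: dict[str, CheckoutSession] = {}

    def create_checkout(self, sku: str, quantity: int = 1) -> CheckoutSession:
        # Agent sends a SKU; merchant generates a cart with pricing.
        session = CheckoutSession(sku, quantity, self.catalog[sku])
        self.sessions[session.id] = session
        return session

    def update_checkout(self, session_id: str, quantity: int) -> CheckoutSession:
        # Modify quantities (or shipping, customer details) mid-flow.
        session = self.sessions[session_id]
        session.quantity = quantity
        return session

    def complete_checkout(self, session_id: str, payment_token: str) -> dict:
        # A real merchant would forward the token to its PSP here.
        session = self.sessions[session_id]
        session.status = "completed"
        return {"order_id": session.id, "charged": session.total}

    def cancel_checkout(self, session_id: str) -> None:
        # Signal cancellation and release reserved inventory.
        self.sessions[session_id].status = "canceled"


# Agent-side flow: create, adjust quantity, complete with a payment token.
merchant = Merchant({"TSHIRT-NAVY-M": 2499})
session = merchant.create_checkout("TSHIRT-NAVY-M")
merchant.update_checkout(session.id, quantity=2)
order = merchant.complete_checkout(session.id, payment_token="spt_demo")
print(order["charged"])  # 4998
```

The point of the sketch is the division of labor: the agent drives the calls and owns the UI, while the merchant owns pricing, inventory, and payment processing.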

The responsibility shift is worth spelling out:

  • Checkout UI: the seller in traditional checkout; the agent under ACP.
  • Payment credential collection: the seller in traditional checkout; the agent under ACP.
  • Cart and data model: the seller in both.
  • Payment processing: the seller in both.

The agent handles what the buyer sees. The seller handles what happens after they click “Buy.” ACP can be implemented as either a REST API or an MCP server, connecting naturally to the protocol ecosystem covered in Part 3.

Stripe’s Agentic Commerce Suite, launched Dec. 11, 2025, makes ACP adoption practical. Ahmed Gharib, Stripe’s Product Lead for Agentic Commerce, described it as “a low-code solution enabling businesses to sell across multiple AI agents via a single integration.” Without it, connecting to each AI agent individually would take up to six months of bespoke engineering per platform.

The Suite has three components: product discovery (sync your catalog and Stripe distributes it to AI agents), checkout (powered by Stripe’s Checkout Sessions API, handling taxes and shipping), and payments (using Shared Payment Tokens and Stripe Radar for fraud detection). Merchants connect their existing product catalog or upload directly to Stripe, then select which AI agents to sell through from the Stripe Dashboard.

The ecosystem is growing quickly. Beyond OpenAI, Stripe lists Microsoft Copilot, Anthropic, Perplexity, Vercel, and Replit as AI platform partners. On the ecommerce side, Squarespace, Wix, WooCommerce, BigCommerce, and commercetools have integrated. Salesforce announced ACP support in October 2025. Shopify’s 1 million+ U.S. merchants are coming soon.

The Universal Commerce Protocol

Four months after ACP launched, a different coalition unveiled a second standard.

The Universal Commerce Protocol (UCP) was co-developed by Shopify and Google, announced Jan. 11, 2026 at the National Retail Federation conference in New York. Google CEO Sundar Pichai presented it. The co-developers include Etsy, Wayfair, Target, and Walmart. Over 20 companies endorsed it at launch, including Mastercard, Visa, Best Buy, Home Depot, Macy’s, American Express, and Stripe. I broke down UCP and its strategic implications the week it launched on the podcast.

Where ACP is tightly focused on the checkout flow, UCP is designed as a full commerce standard covering discovery through post-purchase. Its architecture is modeled after TCP/IP, with three layers:

  • Shopping Service: core primitives such as checkout sessions, line items, totals, messages, and status.
  • Capabilities: major functional areas (Checkout, Orders, Catalog), each independently versioned.
  • Extensions: domain-specific schemas, added via composition without a central registry.

UCP is protocol-agnostic. It supports REST, MCP, A2A, and AP2 (Agent Payments Protocol, Google’s standard for agent-initiated payments). ACP currently supports REST and MCP.

Discovery works through a published profile at /.well-known/ucp, similar to how A2A agents publish their capabilities at /.well-known/agent-card.json (covered in Part 3). Both agents and merchants declare their capabilities, and on each request, the system computes the intersection of what they can do together. Ashish Gupta, VP/GM of Merchant Shopping at Google, described the logic: “The shift to agentic commerce will require a shared language across the ecosystem.”
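A toy version of that intersection logic might look like the following; the profile shape and capability names are invented for illustration, not taken from the UCP spec:

```python
# Hypothetical /.well-known/ucp profiles, reduced to capability sets.
agent_profile = {
    "capabilities": ["checkout", "orders", "catalog", "returns"],
}
merchant_profile = {
    "capabilities": ["checkout", "catalog"],
}


def negotiate(agent: dict, merchant: dict) -> set[str]:
    """Per request, the usable surface is what both sides declare."""
    return set(agent["capabilities"]) & set(merchant["capabilities"])


shared = negotiate(agent_profile, merchant_profile)
print(sorted(shared))  # ['catalog', 'checkout']
```

Because neither side needs a central registry, either party can add a capability or extension and it simply starts appearing in the intersection once both declare it.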

The two protocols reflect different strategic positions. ACP, built by the company running the AI agent (OpenAI) and the company processing the payment (Stripe), is optimized for getting transactions through ChatGPT quickly. UCP, built by the company hosting the merchants (Shopify) and the company running search (Google), is designed for a multi-agent future where many AI platforms compete for the same shoppers.

At a glance:

  • Launched: ACP on Sept. 29, 2025; UCP on Jan. 11, 2026.
  • Focus: ACP covers the checkout flow; UCP the full commerce journey.
  • Transport: ACP supports REST and MCP; UCP supports REST, MCP, A2A, and AP2.
  • Payment: ACP uses Shared Payment Tokens (Stripe); UCP uses AP2 with cryptographic Mandates.
  • Discovery: ACP relies on structured product feeds; UCP publishes a /.well-known/ucp endpoint.
  • Integration effort: days for existing Stripe merchants on ACP; weeks to months for UCP.
  • Coalition: ACP is backed by OpenAI, Stripe, and Salesforce; UCP by Google, Shopify, Mastercard, and Visa.

The good news for merchants: These aren’t mutually exclusive. Shopify merchants can serve both simultaneously. The same products appear in ChatGPT via ACP and in Google AI Mode via UCP. Shopify’s Agentic Storefronts handle the multi-protocol complexity, syndicating catalog data across ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from a single admin panel.

Vanessa Lee, Shopify’s VP of Product, framed the company’s position: “Agentic commerce has so much potential to redefine shopping and we want to make sure it can scale.”

The Trust Problem: Payments Without People

Both protocols face the same foundational challenge: How do you process a payment when the person with the credit card isn’t the one at the checkout?

Traditional commerce treats credential possession as a trust signal. If you have the card number, the expiry date, and the CVV, you’re probably the cardholder. Agentic commerce breaks this assumption. The agent has been authorized to act on the buyer’s behalf, but it’s not the buyer. As Stripe’s Kevin Miller wrote in his October 2025 blog post: “Trust can’t be inferred. It has to be explicitly granted, scoped, and enforced in code.”

Javelin Strategy & Research, cited by Visa, describes this as the shift from “card-not-present” to “person-not-present” transactions. It’s a useful framing. Card-not-present fraud was the defining challenge of ecommerce. Person-not-present fraud is the defining challenge of agentic commerce.

Shared Payment Tokens

Stripe’s solution is the Shared Payment Token (SPT), a new payment primitive designed specifically for agent transactions. Here’s how it works:

  1. The buyer saves a payment method with the AI platform (e.g., ChatGPT).
  2. When approving a purchase, the AI platform issues an SPT scoped to the specific merchant, capped at the checkout amount, with a time limit.
  3. The AI platform sends the SPT to the merchant via ACP.
  4. The merchant creates a Stripe PaymentIntent using the token.
  5. Stripe processes the payment, applying fraud detection in real time.

The buyer’s actual card details are never shared with the merchant or the agent. Each token is programmable (scoped by merchant, time, and amount), reusable across platforms, and revocable at any time. For existing Stripe merchants, enabling SPTs requires “as little as one line of code.”
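A minimal sketch of those scoping rules, with invented field names (the real SPT is an opaque Stripe primitive, not this object):

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class ScopedToken:
    """Toy model of a shared payment token's constraints (not Stripe's format)."""

    merchant_id: str
    max_amount: int  # cap in minor units, e.g. cents
    expires_at: float  # Unix timestamp
    revoked: bool = False

    def authorizes(self, merchant_id: str, amount: int,
                   now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return (
            not self.revoked
            and merchant_id == self.merchant_id  # scoped to one merchant
            and amount <= self.max_amount        # capped at the checkout total
            and now < self.expires_at            # time-limited
        )


token = ScopedToken("acct_demo", max_amount=4998, expires_at=time.time() + 600)
print(token.authorizes("acct_demo", 4998))   # True
print(token.authorizes("acct_other", 4998))  # False: wrong merchant
token.revoked = True
print(token.authorizes("acct_demo", 4998))   # False: revocable at any time
```

Each check mirrors one property from the description above: merchant scope, amount cap, time limit, and revocation, with the card details never entering the picture.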

The Payment Networks Respond

The card networks have launched their own standards. Visa introduced the Trusted Agent Protocol in October 2025, an open framework built on HTTP Message Signatures that helps merchants distinguish legitimate AI agents from malicious bots. Developed in collaboration with Cloudflare, it has feedback from Adyen, Checkout.com, Microsoft, Shopify, Stripe, and Worldpay, among others.

Mastercard launched Agent Pay in April 2025, introducing “Agentic Tokens” that build on its existing tokenization infrastructure. Each agent action uses permissions and limits defined by the consumer. Mastercard CEO Michael Miebach described agent-led payments as a “significant paradigm shift” for the industry. U.S. issuers were enabled in November 2025, with global rollout in early 2026.

PayPal joined the ACP ecosystem on October 28, 2025, enabling PayPal wallets for ChatGPT checkout and building an ACP server that connects its global merchant catalog without requiring individual merchant integrations.

Google launched its own payment standard in parallel. The Agent Payments Protocol (AP2), announced September 2025 with 60+ industry partners, uses Verifiable Digital Credentials and a cryptographic Mandate system to create tamper-evident proof of user consent at every step of the transaction. AP2 is payment-agnostic, supporting credit and debit cards, real-time bank transfers, and even stablecoins via a Coinbase x402 extension. It’s integrated directly into UCP.

Fraud Without Fingerprints

Traditional fraud detection relies on human behavioral signals: mouse movements, typing patterns, browsing behavior, session duration. AI agents have none of these. A legitimate agent transaction can look indistinguishable from a sophisticated bot attack.

Stripe addressed this by building what they describe as “the world’s first AI foundation model for payments,” a transformer-based model trained on tens of billions of transactions. The model treats each charge as a token and behavior sequences as context, ingesting signals including IPs, payment methods, geography, device characteristics, and merchant traits. When SPTs are used, Stripe Radar relays risk signals including dispute likelihood, card testing detection, and stolen card indicators to help “differentiate between high-intent agents and low-trust automated bots.”

The attack surface is also novel. Researchers demonstrated in a June 2025 study that ecommerce agents are susceptible to visual prompt injection: malicious content embedded in product listings can hijack agent behavior during shopping tasks. All agents tested were vulnerable. A separate study accepted to IEEE S&P 2026 found that 13% of randomly selected ecommerce websites had already exposed their chatbot plugins to indirect prompt injection via third-party content like product reviews. And a January 2025 paper on authenticated delegation argues that for agentic commerce to function at scale, the industry needs standardized mechanisms to “explicitly delegate authority to agents, transparently identify those agents as AI, and enforce human-centered choices around security and permissions.” SPTs, the Trusted Agent Protocol, and Agent Pay are all early answers to that challenge.

The concern is real on the consumer side, too. 88% of consumers surveyed by Javelin are concerned that AI will be used for identity fraud, according to Visa’s analysis. Building trust infrastructure that works for agent transactions is the prerequisite for agentic commerce scaling beyond early adopters.


Who’s Already Selling to AI

Despite the infrastructure still being built, adoption is moving fast.

Merchants and brands on board:

The early adopter list reads like a mall directory. URBN (parent of Anthropologie, Free People, and Urban Outfitters), Etsy, Coach, Kate Spade, Glossier, Vuori, Spanx, SKIMS, Ashley Furniture, Revolve, and Halara are among those onboarding to Stripe’s Agentic Commerce Suite. Walmart and Instacart are live on ChatGPT. Gymshark, Everlane, and Monos are live on Google AI Mode via UCP.

Ecommerce platforms enabling it:

Shopify’s 1 million+ U.S. merchants are eligible for ChatGPT integration. BigCommerce, Wix, Squarespace, WooCommerce, and commercetools have integrated with Stripe’s Suite. Salesforce Commerce Cloud announced ACP support in October 2025, with new Agentforce agents for merchant, buyer, and personal shopper workflows.

The Market

The market projections vary widely, which tells you how early we are. McKinsey projects $1 trillion in U.S. retail revenue orchestrated by agents by 2030, scaling to $3-5 trillion globally. Gartner predicts 90% of B2B purchases will be handled by AI agents within three years, intermediating $15 trillion in spending by 2028. Forrester predicts that by 2026, one-third of retail marketplace projects will be abandoned as answer engines steal traffic.

The consumer side is more cautious. A Contentsquare survey of 1,300 U.S. consumers found 30% willing to let an AI agent complete a purchase on their behalf. A YouGov survey of 1,287 U.S. adults found 65% trust AI to compare prices, but only 14% trust it to actually place an order. Among Gen Z, that number rises to 20%. The gap between “I’ll let AI help me shop” and “I’ll let AI buy for me” is where we are right now.

But the traffic is already there. AI-driven traffic to U.S. retail websites grew 4,700% year-over-year by mid-2025, according to Adobe Analytics. Shopify reported that orders attributed to AI searches grew 11x since January 2025. OpenAI estimates approximately 2% of all ChatGPT queries are shopping-related, roughly 50 million shopping queries daily across a user base of 700 million weekly users.

Academic research is starting to reveal what happens when agents do the buying. A Columbia Business School and Yale study (August 2025) introduced ACES, the first agentic ecommerce simulator, and tested six frontier models, including Claude and GPT-4. They found that AI shopping agents exhibit “choice homogeneity,” concentrating demand on a small number of products and showing strong position biases in how listings are ranked. The researchers warn of winner-take-all dynamics and the emergence of “AI-SEO,” where sellers optimize listings specifically for agent behavior rather than human preferences. A February 2026 study on personalized product curation found that current agentic systems remain “largely insufficient” for tailored product recommendations in open-web settings. The agents are getting better at buying. They’re not yet great at buying the right thing for a specific person.

The infrastructure is being built regardless of whether consumers are fully ready. When they are, the businesses that are prepared will be the ones the agents can find.

How To Get Started

The good news: For most businesses, the entry point is simpler than you’d expect.

If you’re on Shopify, you may already be selling to AI. Agentic Storefronts are active by default for eligible U.S. merchants. Your products are syndicated to ChatGPT, Google AI Mode, Microsoft Copilot, and Perplexity from your existing Shopify admin. Check your dashboard for the agentic channel settings and ensure your product data (descriptions, images, categories) is clean and complete.

If you’re on Stripe, enabling Shared Payment Tokens for ACP requires as little as one line of code. The Agentic Commerce Suite handles catalog syndication, checkout, and fraud detection. Connect your product catalog, select which AI agents to sell through, and you’re live.

If you’re on BigCommerce, Wix, Squarespace, or WooCommerce, integrations with Stripe’s Suite are available. BigCommerce described the shift from “months of bespoke engineering work” per AI platform to “a single, configurable integration.”

Regardless of platform, the protocol integrations get you connected. But agents still need to find and understand your products. This is where the work from Part 2 (getting cited) and Part 4 (being agent-readable) converges with commerce.

Audit your product data. Agents parse your catalog programmatically. Every product needs:

  • A descriptive, specific title (“Men’s Organic Cotton Crew Neck T-Shirt, Navy,” not “Blue Shirt”).
  • A complete description including materials, dimensions, care instructions, and use cases.
  • Accurate, real-time pricing and stock availability.
  • High-quality images with descriptive alt text.
  • Consistent categorization across your catalog.
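That checklist can be audited programmatically. A minimal sketch, assuming a simple list-of-dicts catalog with hypothetical field names mirroring the bullets above:

```python
# Field names mirror the checklist above; adapt them to your feed's schema.
REQUIRED_FIELDS = ["title", "description", "price",
                   "availability", "image_alt", "category"]


def audit_product(product: dict) -> list[str]:
    """Return the checklist fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not product.get(f)]


catalog = [
    {"title": "Men's Organic Cotton Crew Neck T-Shirt, Navy",
     "description": "100% organic cotton, machine washable, true to size.",
     "price": "24.99", "availability": "in_stock",
     "image_alt": "Navy crew neck t-shirt, front view",
     "category": "Apparel > Tops"},
    {"title": "Blue Shirt", "price": "19.99"},  # the bad example from above
]

for product in catalog:
    gaps = audit_product(product)
    if gaps:
        print(product["title"], "missing:", ", ".join(gaps))
# Blue Shirt missing: description, availability, image_alt, category
```

Running a pass like this over an exported feed surfaces the thin listings that an agent will skip over or misdescribe.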

Add structured markup. At minimum, every product page should include Product schema with name, description, image, sku, and brand, plus nested Offer schema with price, priceCurrency, availability, and seller. If you have reviews, add AggregateRating. This is the machine-readable layer that agents parse when direct protocol integrations aren’t available. I talked about this with Duane Forrester, who co-launched Schema.org while at Bing, on the podcast. His argument: consistent structured data builds what he calls “machine comfort bias,” where AI systems develop a preference for sources that have proven reliable over time.
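Product, Offer, and AggregateRating are real schema.org types; a minimal JSON-LD payload for a product page, with placeholder values, could be generated like this:

```python
import json

# schema.org Product with nested Offer, as described above.
# All names, URLs, and values below are placeholders.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Men's Organic Cotton Crew Neck T-Shirt, Navy",
    "description": "100% organic cotton crew neck t-shirt, machine washable.",
    "image": "https://example.com/img/tshirt-navy.jpg",
    "sku": "TSHIRT-NAVY-M",
    "brand": {"@type": "Brand", "name": "Example Apparel"},
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "seller": {"@type": "Organization", "name": "Example Store"},
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
}

# Emit as the body of a <script type="application/ld+json"> tag in the template.
print(json.dumps(product_jsonld, indent=2))
```

The generated JSON goes inside a script tag of type application/ld+json on the product page, where both search engines and agents can parse it without rendering the UI.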

Test your agent visibility. Open ChatGPT, Perplexity, and Google AI Mode, and ask them to recommend products in your category. If yours don’t appear, agents can’t sell them. View your product pages in reader mode or a text-based browser to see what agents see when they visit your site directly (covered in Part 4).

Track agent-driven traffic. ChatGPT appends utm_source=chatgpt.com to referral links. Perplexity and other AI platforms leave similar referral signatures. Set up segments in your analytics to isolate AI-referred visits and monitor conversion rates separately from human traffic. The numbers are small now, but the 4,700% year-over-year growth in AI traffic to retail means they won’t stay small.
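A starting point for that segmentation follows. The chatgpt.com UTM tag is the documented signal mentioned above; the other domains in this sketch are assumptions to verify against your own referrer logs:

```python
from urllib.parse import parse_qs, urlparse

# chatgpt.com is the documented UTM value; the rest are illustrative guesses.
AI_SOURCES = {"chatgpt.com", "perplexity.ai", "copilot.microsoft.com"}


def is_ai_referred(landing_url: str, referrer: str = "") -> bool:
    """Classify a visit as AI-referred via utm_source or referrer domain."""
    qs = parse_qs(urlparse(landing_url).query)
    utm = qs.get("utm_source", [""])[0]
    ref_host = urlparse(referrer).hostname or ""
    return utm in AI_SOURCES or any(ref_host.endswith(d) for d in AI_SOURCES)


print(is_ai_referred("https://shop.example/p/1?utm_source=chatgpt.com"))  # True
print(is_ai_referred("https://shop.example/p/1",
                     "https://www.perplexity.ai/search"))                 # True
print(is_ai_referred("https://shop.example/p/1",
                     "https://www.google.com/"))                          # False
```

A filter like this, applied in a log pipeline or as the definition of an analytics segment, lets you report AI-referred conversion rates separately from human search and paid traffic.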

Walmart CEO Doug McMillon put it directly: “For many years now, ecommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change.”

Whether it changes next quarter or next year for your business depends on whether your product data is ready when the agents come looking.

Key Takeaways

  • Checkout is becoming a protocol, not a page. In agentic commerce, the AI agent handles the interface; the merchant processes the order. Two open standards, ACP (Stripe + OpenAI) and UCP (Shopify + Google), define how this works.
  • Both protocols are open and growing fast. ACP launched in September 2025 and powers Instant Checkout in ChatGPT. UCP launched in January 2026 with endorsements from Mastercard, Visa, Walmart, and Target. They’re complementary, not mutually exclusive. Shopify merchants can serve both simultaneously.
  • Shared Payment Tokens solve the “person-not-present” problem. When the buyer isn’t at the checkout, traditional trust signals break down. SPTs are programmable, scoped, time-limited, and revocable, letting agents initiate payments without ever seeing the buyer’s card details.
  • Payment networks are building their own standards. Visa’s Trusted Agent Protocol and Mastercard’s Agent Pay provide authentication and fraud frameworks specific to agent transactions. PayPal joined the ACP ecosystem. The payments infrastructure for agentic commerce is taking shape across the industry.
  • Major brands are already live. Etsy, Walmart, Instacart, Glossier, SKIMS, Coach, and dozens more are selling through AI agents today. Ecommerce platforms, including Shopify, BigCommerce, Wix, Squarespace, and WooCommerce, have integrations available.
  • Consumer trust is lagging behind infrastructure. Only 14% of consumers currently trust AI to place orders on their behalf. But AI-driven traffic to retail grew 4,700% in a year. The infrastructure is being built for the adoption curve that follows.

This is the final article in a five-part series on the agentic web. Part 1 framed the shift from SEO to AAIO. Part 2 covered how to get cited by AI. Part 3 mapped the protocols. Part 4 explained how agents perceive your website. This article covered where it all leads: transactions.

The thread connecting all five parts is straightforward. Structured data helps AI find you. Clean content helps AI cite you. Accessible HTML helps AI navigate you. Structured commerce protocols help AI buy from you. It’s the same principle at every layer: Make your business machine-readable, and the machines will do business with you.

Kevin Miller, Stripe’s Head of Payments, captured the moment: “Stripe spent the last 15 years optimizing commerce for human shoppers. Now, we’re starting to do the same with agents.”

The agents are already shopping. The question is whether they can find your store.

This post was originally published on No Hacks.


Featured Image: showcake/Shutterstock

AI Adoption Outpaced The PC & Internet: Dive Into The Stanford Report Data via @sejournal, @MattGSouthern

Stanford’s Human-Centered Artificial Intelligence Institute published its 2026 AI Index Report. The report runs over 400 pages across nine chapters covering technical performance, investment, workforce effects, and public sentiment.

The number getting the most attention is that Generative AI reached 53% adoption among the global population within three years of ChatGPT’s launch. That’s faster than either the personal computer or the internet reached comparable levels.

For anyone working in search, the report contains data that connects directly to the changes you’ve been navigating all year.

What The Report Found

This is the ninth annual AI Index, and it covers a lot of ground. A few findings matter most for the search industry.

In terms of capability, frontier models now exceed human performance on PhD-level science questions and in competitive mathematics. AI agents handling real-world tasks improved from a 20% success rate in 2025 to 77% today. Coding benchmarks that models struggled with a year ago are now nearly solved.

On investment, global corporate AI investment hit $581 billion in 2025, up 130% from the prior year. US private AI investment reached $285 billion. More than 90% of frontier models now come from private companies, not academic labs.

Regarding workforce effects, employment among software developers aged 22 to 25 has dropped by nearly 20% since 2024. A similar pattern appeared in customer service and other roles with higher AI exposure.

Transparency is declining. The Foundation Model Transparency Index fell from 58 to 40. The most capable models now disclose the least about their training data, parameters, and methods. Of the 95 most notable models launched last year, 80 were released without their training code.

The Adoption Number Everyone Is Citing

Understanding the 53% figure, what it includes, and what it doesn’t, matters for how you interpret it.

The comparison to PCs and the internet is based on research by the St. Louis Fed, Vanderbilt, and Harvard Kennedy School. The team compared adoption rates by years since each technology’s first mass-market product. The IBM PC launched in 1981. Commercial internet traffic opened in 1995. ChatGPT launched in November 2022.

At comparable points after launch, generative AI adoption runs well ahead of both earlier technologies.

But the comparison isn’t apples-to-apples, and the researchers said so themselves. Harvard’s David Deming pointed out that AI is built on top of PCs and the internet. People already had the hardware and the connectivity. Nobody needed to buy new equipment or wait for connectivity to reach their area. AI adoption rode on decades of prior technology investment.

Adoption numbers also vary depending on who’s counting and how. The Stanford report puts US adoption at 28%, ranking the country 24th globally. The St. Louis Fed’s own tracker puts US adoption at 54% as of August 2025. Same country, nearly double the rate, measured differently. The Fed team even revised its earlier estimate upward from 39% to 44% after changing the order of its survey questions.

“Adoption” also doesn’t distinguish intensity. Someone who signed up for a free ChatGPT account and tried it once counts the same as someone who uses it eight hours a day. The Stanford report notes that most users access free or near-free tiers. That’s a different picture than the one the headline number implies.

None of this means the adoption data is wrong. Generative AI is spreading faster than comparable technologies did at the same stage. But the speed of adoption alone doesn’t tell you how deeply it’s embedded in workflows or how much it’s changing search behavior specifically.

The Jagged Frontier

The report’s most useful concept for search professionals might be its “jagged frontier” of AI capability.

The same models that win gold at the International Mathematical Olympiad read analog clocks correctly only 50% of the time. IEEE Spectrum reported that Claude Opus 4.6 scores at the top of Humanity’s Last Exam while reading clocks at just 8.9% accuracy. Models that ace PhD-level science questions still struggle with video understanding and multi-step planning.

Ray Perrault, co-director of the AI Index steering committee, told IEEE Spectrum that benchmarks don’t map cleanly to real-world results. Knowing a model scores 75% on a legal reasoning benchmark “tells us little about how well it would fit in a law practice’s activities,” he said.

Search professionals have seen similar unevenness in AI search products. Ahrefs research showed that AI Mode and AI Overviews cite different URLs for the same queries, with only 13% overlap. Google’s Robby Stein acknowledged that the system pulls AI Overviews back when people don’t engage with them. Those signals suggest AI search performance is uneven across contexts, even if Google hasn’t fully explained where those differences are most pronounced.

Stanford’s data suggest that strong benchmark performance doesn’t guarantee reliable results across all tasks or query types. Whether that unevenness improves with future models is an open question the report doesn’t answer.

What’s Happening To Transparency

What the report says about transparency connects directly to search.

The Foundation Model Transparency Index dropped from 58 to 40 in a single year. The most capable models score lowest. Google, Anthropic, and OpenAI have all stopped disclosing dataset sizes and training duration for their latest models. Of the 95 most notable models launched in 2025, 80 shipped without training code.

TechCrunch noted a disconnect between expert optimism about AI and public anxiety about it. The US reported the lowest trust in its government’s ability to regulate AI among the countries surveyed, at 31%.

For context on the index itself, a drop from 58 to 40 could indicate that companies are becoming more secretive. It could also reflect that the index penalizes closed-source models by design, and the most capable models happen to be closed-source. Both explanations can be true at the same time.

What matters for practitioners is the implication. The models powering AI Overviews, AI Mode, and ChatGPT Search are getting more capable and less explainable simultaneously. You’re optimizing for systems where the companies building them are sharing less about how they work, not more.

The report’s acknowledgments disclose that Stanford HAI receives financial support from Google, OpenAI, and others, and that the report was produced with assistance from ChatGPT and Claude.

The Entry-Level Question

Employment among software developers aged 22 to 25 dropped nearly 20% since 2024, according to the report. Older developers’ headcounts grew over the same period. A similar pattern appeared in customer service roles.

At first glance, that looks like AI replacing entry-level work. But the report included a caveat that complicates that conclusion. Unemployment is rising across many occupations, and workers least exposed to AI have seen it rise more than those most exposed.

That doesn’t rule out AI as a factor. It means the 20% decline could reflect AI displacement, broader hiring slowdowns, companies restructuring their entry-level hiring, or all three at once. The report presents correlation, not causation.

For search and content teams, the signal is directional even if the cause is mixed. The Stanford data is consistent with what the Tufts AI Jobs Risk Index showed earlier this year. Roles that involve assembling information from existing sources face more pressure than roles that require judgment, experience, and original analysis.

Why This Matters For Search Professionals

Even with its caveats, the adoption speed explains the pace of what you’ve been seeing.

Google expanded AI Overviews to 1.5 billion monthly users by Q1 2025. AI Mode reached 75 million daily active users by Q3 2025, then went global. Google expanded Search Live to 200+ countries. Personal Intelligence rolled out to free US users this year.

The adoption curve helps explain why Google has been expanding AI search features at this pace. It doesn’t tell us how much of that usage is happening inside search rather than standalone AI tools.

The “jagged frontier” means you can’t make blanket assumptions about AI search quality across query categories. A query type that returns accurate AI Overviews today might hallucinate with slight variations. Monitoring needs to happen at the query level, not the category level. Search Console doesn’t currently separate AI Overview or AI Mode performance from traditional search metrics, which makes this harder.
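
Query-level monitoring can be sketched as a small aggregation over your own logged observations. All field names here are hypothetical; the point is that a stable category average can hide queries swinging in opposite directions.

```python
from collections import defaultdict

# Sketch: log, per query, whether an AI Overview appeared and whether your
# site was cited, then aggregate at both levels. Field names are invented.

def summarize(observations):
    """observations: dicts with keys 'query', 'category',
    'ai_overview_shown' (bool), 'cited' (bool)."""
    per_query = defaultdict(lambda: [0, 0, 0])     # [shown, cited, total]
    per_category = defaultdict(lambda: [0, 0, 0])
    for o in observations:
        for bucket in (per_query[o["query"]], per_category[o["category"]]):
            bucket[0] += int(o["ai_overview_shown"])
            bucket[1] += int(o["cited"])
            bucket[2] += 1
    def rate(b):
        return {"overview_rate": b[0] / b[2], "citation_rate": b[1] / b[2]}
    return ({q: rate(b) for q, b in per_query.items()},
            {c: rate(b) for c, b in per_category.items()})
```

In the toy case where one query always triggers an AI Overview and another never does, the category-level rate sits at 50% and tells you nothing about either query, which is the argument for monitoring per query.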

The decline in transparency affects how well you can understand why your content appears or doesn’t appear in AI-generated answers. When Google shares less about the models powering its search features, the feedback loop between what you publish and what gets surfaced becomes harder to read.

Speaking at SEJ Live, Shelley Walsh referenced Grant Simmons’s concept of “golden knowledge”: content built on original data, firsthand experience, and depth that AI summaries can’t replicate from training data. The Stanford report’s data on adoption speed and model limitations support that position. The models are fast and widely used, but they’re uneven. Content that fills the gaps where AI is unreliable has a structural advantage.

What The Report Doesn’t Tell Us

The Stanford report doesn’t break out search-specific adoption data. We don’t know what percentage of that 53% uses AI via search specifically, rather than via ChatGPT, Gemini, or other standalone tools.

Google’s AI search usage numbers are limited. The company reported that AI Overviews reached 1.5 billion monthly users in Q1 2025, and AI Mode reached 75 million daily active users in Q3 2025. Updated figures should be included in the next earnings call.

The report also can’t tell us whether the jagged frontier problem is improving or worsening in search applications. The benchmark data shows models improving overall, but the clock-reading example shows that improvement isn’t uniform. Whether AI Overviews and AI Mode are getting more reliable for the specific queries that matter to your business requires your own monitoring, not aggregate benchmark data.

Looking Ahead

The Stanford report lands one week after Google’s March core update completed. Alphabet’s next earnings call will likely include updated AI search usage numbers.

The adoption data doesn’t predict what search will look like by year-end. But it does confirm that AI-first behavior isn’t speculative anymore. The question is whether Google’s AI search products will get reliable enough to match the pace of adoption.


Featured Image: n_a vector/Shutterstock

The case for fixing everything

The handsome new book Maintenance: Of Everything, Part One, by the tech industry legend Stewart Brand, promises to be the first in a series offering “a comprehensive overview of the civilizational importance of maintenance.” One of Brand’s several biographers described him as a mainstay of both counterculture and cyberculture, and with Maintenance, Brand wants us to understand that the upkeep and repair of tools and systems has profound impact on daily life. As he puts it, “Taking responsibility for maintaining something—whether a motorcycle, a monument, or our planet—can be a radical act.”

Radical how? This volume doesn’t say. In an outline for the overall work, Brand says his goal is to “end with the nature of maintainers and the honor owed them.”

The idea that maintainers are owed anything, much less honor, might surprise some readers. Actually, maintenance and repair have been hot topics in academia since the mid-2010s. I played some role in that movement as a cofounder of the Maintainers, a global, interdisciplinary network dedicated to the study of maintenance, repair, care, and all the work that goes into keeping the world going.

Brand is right, too, that maintainers haven’t gotten the laurels they deserve. Over the past few decades, scholars have shown that work from oiling tools to replacing worn parts to updating code bases all tends to be lower in status than “innovation.” Maintenance gets neglected in many organizational and social settings. (Just look at some American infrastructure!) And as the right-to-­repair movement has shown, companies in pursuit of greater profits have frequently locked us out of being able to do repairs or greatly reduced the maintainable life of their products. It’s hard to think of any other reason to put a computer in the door of a refrigerator.

Some of Brand’s earlier work helped inspire those insights. But his new book makes me think he doesn’t see things that way. For Brand, maintenance seems to be a solitary act, profound but more about personal success and fulfillment than tending to a shared world or making it better.


Born in 1938, Brand is 87 years old. A sense hangs over the book—with its battles against corrosion, rust, and decay, with its attempts to keep things going even as they inevitably falter—of someone looking over life and pondering its end. Maintenance: Of Everything connects to every stage of Brand’s life. It’s worth reviewing where it falls in that arc. Brand has always been interested in tools and fixing things, but rarely has he focused on the systems that need the most care. 

More than a half-century ago, Brand was a member of the Merry Pranksters, a countercultural, LSD-centered hippie collective famously led by Ken Kesey, the author of One Flew Over the Cuckoo’s Nest. In 1966, Brand co-produced the Trips Festival, where bands like the Grateful Dead and Big Brother and the Holding Company performed for thousands amid psychedelic light shows.


In some ways, the Trips Festival set a paradigm for the rest of his life’s work. Brand’s biographers have described him as a network celebrity—someone who got ahead by bringing people together, building coalitions of influential figures who could boost his signal. As Kesey put it in 1980, “Stewart recognizes power. And cleaves to it.” 

Brand applied this network logic to the undertaking he will always be best remembered for: the Whole Earth Catalog. First published in 1968 and aimed at hippies and members of the nascent back-to-the-land movement, the publication had the motto “Access to tools.” Its pages were full of Quonset huts, geodesic domes, solar panels, well pumps, water filters, and other technologies for life off the grid. It was a vision that might feel progressive or left-leaning, but the libertarian, rugged-individualist philosophy of eschewing corrupt systems and remaking civilization alone stood in contrast to the more collective movements pushing for deep social change at the time—like civil rights, feminism, and environmentalism.

That vision also led straight to the empowerment that came with new digital tools, and to Silicon Valley. In 1985, Brand published the Whole Earth Software Catalog, the last of the series, and also cofounded the WELL—the Whole Earth ’Lectronic Link, a pioneering online community famous for, among other things, facilitating the trade of Grateful Dead bootlegs. He also wrote a hagiographic book about the MIT Media Lab, known for its corporate-sponsored research into new communications tech. “The Lab would cure the pathologies of technology not with economics or politics but with technology,” Brand wrote. Again, not collective action, not policymaking: tools. And Brand then cofounded the Global Business Network, a group of pricey consulting futurists that further connected him to MIT, Stanford, and the Valley. Brand had literally helped bring about the modern digital revolution.

His attention then turned toward its upkeep. Brand’s 1994 book, How Buildings Learn: What Happens After They’re Built, argued against high-modernist architectural ideas. Nearly all buildings eventually get remade, he argued, but he especially favored cheap, simple structures that inhabitants could easily retool to suit changing needs. In some ways, Brand was recapitulating the liberated—or libertarian—philosophy of the Whole Earth Catalog: People can remake their world, if they have access to tools. In a chapter titled “The Romance of Maintenance,” he asked readers to see the beauty, value, and occasional pleasures of fixer-uppers of all kinds.

This chapter was a touchstone for many of us in the academic subfield of maintenance studies. Researchers in disciplines like history, sociology, and anthropology, as well as artists and practitioners in fields like libraries, IT, and engineering, all started trying to understand the realities and, yes, romance of maintenance and repair. Brand joined and contributed to Listservs, attended conferences, chatted with intellectual leaders. So it’s a bit uncharitable when he writes that his new book is “the first to look at maintenance in general.” He knows better. The real question, though, is what his work has to teach us that others have not said before. In this first volume, the answer is unclear.


Maintenance: Of Everything, Part One is an odd book. If so much of Brand’s thinking has been about access to tools, he now asks, in a more extended way: How are our tools maintained? But where Brand began his career with a catalog, in this volume we get … what? A digest? An almanac? An encyclopedia? Its form and riotous variety fit no genre easily. 

The book has two chapters. The first, “The Maintenance Race,” recounts the story of three men who took part in the Golden Globe, a round-the-world race for solo sailors held in 1968. Each of the sailors, Brand explains, had a different philosophy of maintenance. One neglected it and hoped for the best. He died. Another thought of and prepared for everything in advance, and while he didn’t win the race, he completed it and once held the record for the “world’s longest recorded nonstop solo sailing voyage.” The final sailor won and did so through heroic acts of perseverance; his style was “Whatever comes, deal with it,” Brand explains. Structured like a fairy tale and unremittingly romantic, the story—like most of the anecdotes in the book—focuses on the derring-do of vigorous white guys. The strategy is no secret. Brand’s outline explains: “Start with a dramatic contest of maintenance styles under life-critical conditions—a true story told as a fable.” This myth is meant to inspire. 

The second chapter, “Vehicles (and Weapons),” is over 150 pages long. It has five sections, multiple subsections, five subsections designated “digressions,” one called a “subdigression,” two “postscripts,” and several “footnotes” that are not footnotes in a formal sense but, rather, further addenda. At times, it all feels like notes for a future work. Brand makes no apology for the book’s woolliness. “All I can offer here,” he writes, “is to muse across a representative of maintenance domains and see what emerges.” Perhaps the most charitable reading of the potpourri is that it represents the return of a Merry Prankster, offering us a riotously varied light show. It’s a good book to leave on a table and occasionally open to a random page for entertainment. But it often seems as if it does not know what it wants to say or be. 

“Vehicles (and Weapons)” begins by paraphrasing two famous works of maintenance philosophy, Robert M. Pirsig’s Zen and the Art of Motorcycle Maintenance and Matthew B. Crawford’s Shop Class as Soulcraft. Maintenance involves both “problem finding” and “problem solving.” While much repair work is marked by anxiety, impatience, and boredom, it also offers positive values and outcomes. “Motorcycle maintainers take heart from what they repair for—the glory of the ride,” Brand writes. 

The beauty and triumph of cheapness is a running theme throughout the work, harking back to How Buildings Learn. Henry Ford’s Model T won out over early electric vehicles and hugely expensive luxury vehicles like Rolls-Royce’s Silver Ghost because it was cheap and easier to maintain. The three most popular cars in human history—the Ford Model T, the Volkswagen Bug, and the Lada “Classic” from Russia—all privileged cheapness, “retained their basic design for decades, and … invited repair by the owner.” Or, to be fair, maybe demanded it? For every hobbyist who delighted in being able to self-reliantly keep a VW running, there must have been thousands who appreciated how cheap it was and hated that it broke a lot. Brand never points to social research, like surveys, that might help us know people’s feelings on such matters.

Other sections recount how Americans created interchangeable parts (enabling not only cheap mass production but also easy maintenance), examine how maintenance works with assault rifles and in war, and track the history of technical manuals from the early modern period to the age of YouTube. These stories are solid, but they’re also well known to students of technology, and nearly all are recycled from the work of others, featuring many large block quotes. The volume breaks little new ground. 

Brand treats maintenance as an unalloyed good. But the field of maintenance studies has moved on, burrowing into the domain’s ironies, complexities, and difficulties. A simple example: In most cases, it is environmentally far better to retire and recycle an internal-combustion vehicle and buy an electric one than to keep the polluting beast going forever. Maintaining a gas-guzzler or a coal-­burning power plant isn’t a radical act but a regressive one. Also, maintenance can become a life-breaking burden on the poor, and it falls inequitably on the shoulders of women and people of color. Keeping existing systems going can be a way of avoiding tough, necessary change—like making technological systems more accessible for people with disabilities. In this volume, Brand is uninterested in such difficult trade-offs. He avoids any question of how politics shapes these issues, or how they shape politics.

This avoidance comes out most clearly in a section of “Vehicles (and Weapons)” that talks about Elon Musk—a character of “unique mastery,” Brand informs us. He tells us that Bill Gates once shorted Tesla’s stock, only to lose $1.5 billion. The lesson is clear: Elon won. 

In what political and social vision is money the best way to keep the score? Brand rightly points out that electric vehicles have fewer moving parts and, in that sense, are more maintainable than internal-combustion vehicles. He celebrates Musk most of all because his products “have all proven to be game changers in part because they combine ingenious design with surprisingly low cost.” Again, it’s Brand’s “cheap, available tools” hypothesis. But there’s a real superficiality and lack of follow-through in thinking here: Teslas remain luxury vehicles whose sales have slumped since federal tax subsidies disappeared. The company has faced several right-to-repair lawsuits; there’s even a law review article on the topic. Musk is in no sense a maintenance hero. Yet Brand writes that with his companies, “Musk may have done more practical world saving than any other business leader of his time.” By the time Brand was writing this book, the controversies surrounding Musk for at least flirting with antisemitism, racism, sexism, authoritarianism, and more were quite clear. About this, the book says not a word.

Maintenance: Of Everything, Part One
Stewart Brand
STRIPE PRESS, 2026

For sure, Brand needn’t agree with Musk’s critics, but failing to even broach the subject is tone deaf and out of touch. Others have argued that Silicon Valley’s “Move fast and break things” mentality undermines healthy maintenance. Brand doesn’t raise the idea—even to dismiss it. 

It could be that with Maintenance: Of Everything, Part One Brand is just getting going; that in subsequent volumes he’ll have something more coherent to say; that he’ll raise really hard questions and try to answer them. But given his track record, we might reasonably doubt it. Kesey said Brand cleaves to power; he certainly doesn’t question it. 

Lee Vinsel is an associate professor of science, technology, and society at Virginia Tech and host of Peoples & Things, a podcast about human life with technology.

How robots learn: A brief, contemporary history

Roboticists used to dream big but build small. They’d hope to match or exceed the extraordinary complexity of the human body, and then they’d spend their career refining robotic arms for auto plants. Aim for C-3PO; end up with the Roomba. 

The real ambition for many of these researchers was the robot of science fiction—one that could move through the world, adapt to different environments, and interact safely and helpfully with people. For the socially minded, such a machine could help those with mobility issues, ease loneliness, or do work too dangerous for humans. For the more financially inclined, it would mean a bottomless source of wage-free labor. Either way, a long history of failure left most of Silicon Valley hesitant to bet on helpful robots.

That has changed. The machines are yet unbuilt, but the money is flowing: Companies and investors put $6.1 billion into humanoid robots in 2025 alone, four times what was invested in 2024. 

What happened? A revolution in how machines have learned to interact with the world. 

Imagine you’d like a pair of robot arms installed in your home purely to do one thing: fold clothes. How would it learn to do that? You could start by writing rules. Check the fabric to figure out how much deformation it can tolerate before tearing. Identify a shirt’s collar. Move the gripper to the left sleeve, lift it, and fold it inward by exactly this distance. Repeat for the right sleeve. If the shirt is rotated, turn the plan accordingly. If the sleeve is twisted, correct it. Very quickly the number of rules explodes, but a complete accounting of them could produce reliable results. This was the original craft of robotics: anticipating every possibility and encoding it in advance.

Around 2015, the cutting edge started to do things differently: Build a digital simulation of the robotic arms and the clothes, and give the program a reward signal every time it folds successfully and a ding every time it fails. This way, it gets better by trying all sorts of techniques through trial and error, with millions of iterations—the same way AI got good at playing games.
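
The reward-signal idea can be illustrated with a toy loop. This hill-climbing sketch is far simpler than real policy training over millions of simulated episodes, but it shows how the reward alone, with no hand-written folding rules, steers the system toward a good fold. The numbers are invented.

```python
import random

# Toy sketch of reward-driven trial and error: the "simulator" scores a
# fold by how close the attempted fold distance lands to a hidden ideal.
# The learner never sees the ideal directly; only the reward reveals it.

IDEAL_FOLD_CM = 21.0  # hidden from the learner

def reward(fold_cm: float) -> float:
    """Higher reward the closer the attempted fold is to the ideal."""
    return -abs(fold_cm - IDEAL_FOLD_CM)

def learn_fold(trials: int = 5000, seed: int = 0) -> float:
    rng = random.Random(seed)
    best, best_r = 0.0, reward(0.0)
    for _ in range(trials):
        candidate = best + rng.uniform(-1.0, 1.0)  # perturb the current best
        r = reward(candidate)
        if r > best_r:                             # keep only improvements
            best, best_r = candidate, r
    return best
```

After a few thousand trials the learned fold distance sits close to the hidden ideal, purely because better-rewarded attempts were kept, which is the core of the trial-and-error approach described above.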

The arrival of ChatGPT in 2022 catalyzed the current boom. Trained on vast amounts of text, large language models work not through trial and error but by learning to predict what word should come next in a sentence. Similar models adapted to robotics were soon able to absorb pictures, sensor readings, and the position of a robot’s joints and predict the next action the machine should take, issuing dozens of motor commands every second.
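
The resulting control pattern, observe, predict the next action, apply it, repeat, can be sketched as follows. The trivial proportional "policy" here stands in for a learned model and is purely illustrative.

```python
from typing import Sequence

# Sketch of the next-action prediction loop described above. A real system
# feeds images, sensor readings, and joint positions into a learned model;
# here a toy proportional controller plays the model's role.

def step_policy(joints: Sequence[float], targets: Sequence[float], gain: float = 0.5):
    """Predict one motor command per joint from the current observation."""
    return [gain * (t - j) for j, t in zip(joints, targets)]

def run_control_loop(joints, targets, steps: int = 50):
    """Issue a command each tick, the way a model issues dozens per second."""
    joints = list(joints)
    for _ in range(steps):
        commands = step_policy(joints, targets)             # "next action"
        joints = [j + c for j, c in zip(joints, commands)]  # apply to motors
    return joints
```

The structure, not the toy math, is the point: the model is queried fresh at every tick with the latest observation, rather than executing a plan written in advance.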

This conceptual shift—to reliance on AI models that ingest large amounts of data—seems to work whether that helpful robot is supposed to talk to people, move through an environment, or even do complicated tasks. And it was paired with other ideas about how to accomplish this new way of learning, like deploying robots even if they aren’t yet perfect so they can learn from the environment they’re meant to work in. Today, Silicon Valley roboticists are dreaming big again. Here’s how that happened. 


Jibo

A movable social robot carried out conversations long before the age of LLMs.

An MIT robotics researcher named Cynthia Breazeal introduced an armless, legless, faceless robot called Jibo to the world in 2014. It looked, in fact, like a lamp. Breazeal’s aim was to create a social robot for families, and the idea pulled in $3.7 million in a crowdsourced funding campaign. Early preorders cost $749.

The early Jibo could introduce itself and dance to entertain kids, but that was about it. The vision was always for it to become a sort of embodied assistant that could handle everything from scheduling and emails to telling stories. It earned a number of devoted users, but ultimately the company shut down in 2019.

A crowdfunding campaign started in 2014 and drew 4,800 Jibo preorders.
COURTESY OF MIT MEDIA LAB

In retrospect, one thing that Jibo really needed was better language capabilities. It was competing against Apple’s Siri and Amazon’s Alexa, and all those technologies at the time relied on heavy scripting. In broad terms, when you spoke to them, software would translate your speech into text, analyze what you wanted, and create a response pulled from preapproved snippets. Those snippets could be charming, but they were also repetitive and downright robotic. That was especially a challenge for a robot that was supposed to be social and family oriented. 

What has happened since, of course, is a revolution in how machines can generate language. Voice mode from any leading AI provider is now engaging and impressive, and multiple hardware startups are trying (and failing) to build products that take advantage of it. 

But that comes with a new risk: While scripted conversations can’t really go off the rails, ones generated by AI certainly can. Some popular AI toys have, for example, talked to kids about how to find matches and knives. 


Dactyl

A robot hand trained with simulations tries to model the unpredictability and variation of the real world.

By 2018, every leading robotics lab was trying to scrap the old scripted rules and train robots through trial and error. OpenAI tried to train its robotic hand, Dactyl, virtually, with digital models of the hand and of the palm-size cubes Dactyl was supposed to manipulate. The cubes had letters and numbers on their faces; the model might set a task like “Rotate the cube so the red side with the letter O faces upward.”

Here’s the problem: A robotic hand might get really good at doing this in its simulated world, but when you take that program and ask it to work on a real version in the real world, the slight differences between the two can cause things to go awry. Colors might be slightly different, or the deformable rubber in the robot’s fingertips could turn out to be stretchier than it was in simulation.

A Dactyl robot hand holds a Rubik’s Cube
Dactyl, part of OpenAI’s first attempt at robotics, was trained in simulation to solve Rubik’s Cubes.
COURTESY OF OPENAI

The solution is called domain randomization. You essentially create millions of simulated worlds that all vary slightly and randomly from one another. In each one the friction might be less, or the lighting more harsh, or the colors darkened. Exposure to enough of this variation means the robots will be better able to manipulate the cube in the real world. The approach worked on Dactyl, and one year later it was able to use the same core techniques to do something harder: solving Rubik’s Cubes (though it worked only 60% of the time, and just 20% when the scrambles were particularly hard). 
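
Domain randomization itself is simple to sketch: draw each simulated world's parameters from ranges instead of fixing them, so the policy never overfits to one exact simulation. The parameter names and ranges below are invented for illustration.

```python
import random

# Sketch of domain randomization: every training episode gets a world whose
# physics and rendering parameters are sampled rather than fixed. A policy
# trained across many such worlds is less brittle in the real one.

def sample_sim_world(rng: random.Random) -> dict:
    return {
        "friction":        rng.uniform(0.4, 1.2),  # surface friction coeff.
        "fingertip_stiff": rng.uniform(0.5, 2.0),  # rubber stiffness scale
        "light_intensity": rng.uniform(0.3, 1.5),  # rendering brightness
        "cube_size_cm":    rng.uniform(4.8, 5.2),  # slight size variation
    }

def make_training_worlds(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    return [sample_sim_world(rng) for _ in range(n)]
```

Because the real world's friction, stiffness, and lighting fall somewhere inside those sampled ranges, reality becomes just one more variation the policy has already seen.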

Still, the limits of simulation mean that this technique plays a far smaller role today than it did in 2018. OpenAI shuttered its robotics effort in 2021 but has recently started the division up again, reportedly focusing on humanoids. 


RT-2

Training on images from across the internet helps robots translate language into action.

Around 2022, Google’s robotics team was up to some strange things. It spent 17 months handing people robot controllers and filming them doing everything from picking up bags of chips to opening jars. The team ended up cataloguing 700 different tasks.

The point was to build and test one of the first large-scale foundation models for robotics. As with large language models, the idea was to input lots of text, tokenize it into a format an algorithm could work with, and then generate an output. Google’s RT-1 received input about what the robot was looking at and how the many parts of the robotic arm were positioned; then it took an instruction and translated it into motor commands to move the robot. When it had seen tasks before, it carried out 97% of them successfully; it succeeded at 76% of the instructions it hadn’t seen before. 

a robot at a table of small toys
The model RT-2, for Robotic Transformer 2, incorporated internet data to help robots process what they were seeing.
COURTESY OF GOOGLE DEEPMIND

The second iteration, RT-2, came out the following year and went even further. Instead of training on data specific to robotics, it went broad: It trained on more general images from across the internet, like the vision-language models lots of researchers were working on at the time. That allowed the robot to interpret where certain objects were in the scene.

“All these other things were unlocked,” says Kanishka Rao, a roboticist at Google DeepMind who led work on both iterations. “We could do things now like ‘Put the Coke can near the picture of Taylor Swift.’” 

In 2025, Google DeepMind further fused the worlds of large language models and robotics, releasing a Gemini Robotics model with improved ability to understand commands in natural language. 


RFM-1

An AI model that allows robotic arms to act like coworkers.

In 2017, before OpenAI shuttered its first robotics team, a group of its engineers spun out a project called Covariant, aiming to build not sci-fi humanoids but the most pragmatic of all robots: an arm that could pick up and move things in warehouses. After building a system based on foundation models similar to Google’s, Covariant deployed this platform in warehouses like those operated by Crate & Barrel and treated it as a data collection pipeline. 

By 2024, Covariant had released a robotics model, RFM-1, that you could interact with like a coworker. If you showed an arm many sleeves of tennis balls, for example, you could then instruct it to move each sleeve to a separate area. And the robot could respond, perhaps predicting that it wouldn’t be able to get a good grip on the item and then asking for advice on which particular suction cups it should use. 

This sort of thing had been done in experiments, but Covariant was launching it at significant scale. The company now had cameras and data collection machines in every customer location, feeding back even more data for the model to train on.

a warehouse robot arm lifts object with many suckers to place in a bin
A Covariant robot demonstrates “induction”—the common warehouse task of placing objects on sorters or conveyors.
COURTESY OF COVARIANT

It wasn’t perfect. In a demo in March 2024 with an array of kitchen items, the robot struggled when it was asked to “return the banana” to its original location. It picked up a sponge, then an apple, then a host of other items before it finally accomplished the task. 

It “doesn’t understand the new concept” of retracing its steps, cofounder Peter Chen told me at the time. “But it’s a good example: it might not work well yet in the places where you don’t have good training data.”

Chen and fellow founder Pieter Abbeel were soon hired by Amazon, which is currently licensing Covariant’s robotics model (Amazon did not respond to questions about how it’s being used, but the company runs an estimated 1,300 warehouses in the US alone). 


Digit

Companies are putting this humanoid to the test in real-world settings.

The new investment dollars flowing to robotics startups are aimed largely at robots shaped not like lamps or arms but like people. Humanoid robots are supposed to be able to seamlessly enter the spaces and jobs where humans currently work, avoiding the need to retool assembly lines to accommodate new shapes such as giant arms. 

It’s easier said than done. In the rare cases where humanoids appear in real warehouses, they’re often confined to test zones and pilot programs. 

Digit humanoid robot putting a plastic bin on a conveyor belt
Amazon and other companies are using Digit to help move shipping totes.
COURTESY OF AGILITY ROBOTICS

That said, Agility’s humanoid Digit appears to be doing some real work. The design, with exposed joints and a distinctly unhuman head, is driven more by function than by sci-fi aesthetics. Amazon, Toyota, and GXO (a logistics giant with customers like Apple and Nike) have all deployed it, making it one of the first examples of a humanoid robot that companies see as providing actual cost savings rather than novelty. Their Digits spend their days picking up, moving, and stacking shipping totes.

The current Digit is still a long way from the humanlike helper Silicon Valley is betting on, though. It can lift only 35 pounds, for example, and every time Agility makes Digit stronger, its battery gets heavier and it has to recharge more often. And standards organizations say humanoids need stricter safety rules than most industrial robots, because they’re designed to be mobile and spend time in proximity to people. 

But Digit shows that this revolution in robot training isn’t converging on a single method. Agility relies on simulation techniques like those OpenAI used to train its hand, and the company has worked with Google’s Gemini models to help its robots adapt to new environments. That’s where more than a decade of experiments have gotten the industry: Now it’s building big.

The Download: bad news for inner Neanderthals, and AI warfare’s human illusion

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The problem with thinking you’re part Neanderthal

There’s a theory that many of us have an “inner Neanderthal.” The idea is that Homo sapiens and a cousin species once bred, leaving some people today with a trace of Neanderthal DNA. 

This DNA is arguably the 21st century’s most celebrated discovery in human evolution. But in 2024, a pair of French geneticists called into question the theory’s very foundations. 

They proposed that what scientists interpret as interbreeding could instead be explained by population structure—the way genes concentrate in smaller, isolated groups.

Find out what it all means for human evolution.

—Ben Crair

This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.

Why having “humans in the loop” in an AI war is an illusion

—Uri Maoz

AI is starting to shape real wars. It’s at the center of a legal battle between Anthropic and the Pentagon, playing a growing role in the conflict with Iran, and raising questions about how much humans should remain “in the loop.”

Under Pentagon guidelines, human oversight is meant to provide accountability, context, and security. But the idea of “humans in the loop” is a comforting distraction.

The real danger isn’t that machines will act without oversight; it’s that human overseers have no idea what the machines are actually “thinking.” Thankfully, science may offer a way forward.

Read the full op-ed on the urgent need for new safeguards around AI warfare.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Despite blacklisting Anthropic, the White House wants its new model
Trump officials are negotiating access to Mythos. (Axios)
+ Anthropic said it was too dangerous for a public release. (Bloomberg $)
+ Finance ministers are alarmed about the security risks. (BBC)
+ Anthropic just rolled out a model that’s less risky than Mythos. (CNBC)
+ The Pentagon has pursued a culture war against the company. (MIT Technology Review)

2 Sam Altman’s side hustles have raised conflict-of-interest concerns
His opaque investments could influence decisions at OpenAI. (WSJ $)
+ A jury will soon decide if OpenAI abandoned its founding mission. (Wired $)
+ The company is making a big play for science. (MIT Technology Review)

3 A Starlink outage during drone tests exposed the Pentagon’s SpaceX reliance
It was one of several Navy test disruptions linked to Starlink. (Reuters $)
+ The DoD is also tapping Ford and GM for military innovations. (NYT $)

4 Data center delays threaten to choke AI expansion
40% of this year’s projects are at risk of falling behind schedule. (FT $)
+ Partly because no one wants a data center in their backyard. (MIT Technology Review)

5 Alibaba just released its own version of a world model
Happy Oyster is the latest attempt to extend AI’s ability to comprehend physical reality. (SCMP)
+ But they still need to understand cause and effect. (FT $)

6 Google’s Gemini is now generating AI images tailored to personal data
By analyzing users’ Google services and data. (Quartz)
+ Google says it will cut the need for detailed prompts. (TechCrunch)

7 OpenAI is beefing up its agentic coding and development system
Its Codex update is a direct shot at Claude Code. (The Verge)
+ But not everyone is convinced about AI coding. (MIT Technology Review)

8 Europe’s online age verification app is here
It’s available for free to any company that wants it. (Wired $) 

9 Smartglasses are giving Korean theaters hope of a K-Pop moment
Their AI-powered translations are taking the shows to the world. (NYT $)

10 Global voice actors are fighting Hollywood’s AI push
Their voices are training the models that are replacing them. (Rest of World)

Quote of the day

“There’s this dark period between now and some time in the future where the advantage is very much offensive AI.” 

—Rob Joyce, former director of cybersecurity at the National Security Agency, tells Bloomberg how AI is creating new hacking threats.

One More Thing

COURTESY OF NOVEON MAGNETICS


The race to produce rare earth elements

Access to rare earth elements will determine which countries meet their goals for lowering emissions or generating energy from non-fossil-fuel sources. But some nations, including the US, are worried about the supply of these elements. 

China dominates the market, while extraction in the US is limited. As a result, scientists and companies are exploring unconventional sources. Read the full story on their search for critical minerals.


—Mureji Fatunde

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ This ska cover of Rage Against the Machine is an upbeat way to start a revolution.
+ We finally know how far Stretch Armstrong can really stretch.
+ Customize these ambient sounds to wash away disruptive thoughts.
+ Here’s proof childhood dreams can come true: a girl guiding a seal to perform tricks. 

Iris Founder Talks AI-Powered Finance

I’ve interviewed a slew of impressive entrepreneurs on this podcast. Drew Fallon is among the most versatile. He and I last spoke in 2022 when he had co-founded a tattoo skincare company. Before that he was an investment banker.

He now runs Iris, an AI-driven financial modeling platform, while also tracking and reporting on consumer-focused M&A transactions.

In our recent conversation, he shared the benefits of agent-powered automation, common merchant use cases, and, yes, the enterprise M&A boom in 2026.

Our entire audio is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: What the heck do you do?

Drew Fallon: I’m the founder and CEO of Iris. We work with brands to deploy AI agents and automate many of their financial and operational workflows.

Prior to Iris, I was a co-founder of the tattoo skincare company Mad Rabbit for about five years, serving as CFO and COO. Before that I was an investment banker. Iris launched two years ago.

Bandholz: I’ve seen your social media posts announcing M&A deals. How do you obtain that info?

Fallon: I’ve got a handful of AI agents that crawl the web. They know what I’ve written and care about. They will surface those types of stories to me. I then pick them and blast them out.

The last couple of weeks have been crazy. Unilever scooped up Grüns, the nutritional gummy snacks, for $1.2 billion. The Finnish Long Drink, a citrus-flavored alcoholic beverage, has just sold to the Mark Anthony Group, the company that owns White Claw. Huel, a British meal-replacement company, sold for $1.1 billion to Danone, the global food and beverage giant.

A lot is going on now, but very few big deals occurred in 2025. You had Poppi and Siete Foods, both acquired by PepsiCo. But overall the year was pretty lackluster for M&A.

But now we’re seeing deals of all sizes. There was a lot of pent-up demand, in part from private equity firms that had raised a lot of money.

Bandholz: Should today’s brands focus on mass consumers or on high-price-point niches?

Fallon: I would avoid price-conscious shoppers, especially if I were an emerging brand. It’s much better to pursue a high-dollar niche. Beardbrand, your company, is a good example. Not every dude with a beard will spend the money on your products, but those who really care about their beard will.

We’re seeing good traction with premium supplements, beauty, apparel, and food and beverage niches.

Bandholz: Tell us more about Iris’s use of AI.

Fallon: We started the company roughly when ChatGPT launched. I knew I had to be involved with that industry. Think of Iris as the data infrastructure to deploy AI agents. We integrate with Shopify, Amazon, Walmart, Facebook, Gusto, Rippling, bank accounts, credit cards, Bill.com, QuickBooks, and others.

We operate like a centralized data warehouse. We transform the data so AI agents can use it easily. Our agents are purpose-built for automating finance workflows. But the Iris infrastructure could create all sorts of agents. We’ve chosen to tackle financial models, inventory needs, business intelligence dashboards, cash flow forecasts — pretty much anything that an internal or fractional CFO would do.

For example, we help merchants determine how much to spend on customer acquisition. We’ll analyze variables such as gross margin, channel mix, operating expenses, and cash balances. A client could ask us for the profitability of $60, $70, or $80 CAC. We’ll provide the trade-offs for each and suggest the best channels for scaling.
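The arithmetic behind that kind of CAC trade-off can be sketched in a few lines of Python. The figures below are hypothetical, not Iris’s actual model, which weighs many more variables:

```python
def contribution_per_customer(aov, gross_margin, cac):
    """First-order profit per acquired customer: margin dollars minus CAC."""
    return aov * gross_margin - cac

# Hypothetical merchant: $120 average order value, 65% gross margin.
AOV, MARGIN = 120.0, 0.65
for cac in (60, 70, 80):
    profit = contribution_per_customer(AOV, MARGIN, cac)
    print(f"CAC ${cac}: ${profit:.2f} contribution per customer")
```

Even this toy version shows the shape of the answer: at a $120 AOV and 65% margin, an $80 CAC loses money on the first order, so scaling it only makes sense if repeat purchases cover the gap.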

Our inventory planning models are demand-driven. We first predict sales, then we look at the historical product mix, both seasonally and in aggregate. From there, it’s a basic mathematical model to estimate product distribution, such as 15% for beard oil, 25% for balm, and so on.

We can also model inventory velocity in December versus July, for example.
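A stripped-down version of that demand-driven mix calculation might look like this; the forecast and the mix percentages are hypothetical:

```python
def plan_inventory(predicted_units, product_mix):
    """Split a total demand forecast across SKUs by historical mix.

    product_mix maps SKU -> share of unit sales (shares sum to 1.0).
    Illustrative only; a real model would also adjust for seasonality.
    """
    return {sku: round(predicted_units * share) for sku, share in product_mix.items()}

# Hypothetical December forecast of 10,000 units, using the mix from the text.
mix = {"beard oil": 0.15, "balm": 0.25, "wash": 0.35, "kit": 0.25}
plan = plan_inventory(10_000, mix)
print(plan)  # {'beard oil': 1500, 'balm': 2500, 'wash': 3500, 'kit': 2500}
```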

Bandholz: How can people hire you or reach out?

Fallon: Our site is IrisFinance.co. I’m on X and LinkedIn. My Substack newsletter is “Making Cents.”

Tips and tricks to write SEO-friendly blog posts in the AI era

It is no secret that publishing SEO-friendly blog posts is one of the easiest and most effective ways to drive organic traffic and improve SERP rankings. And in the era of artificial intelligence, blog posts matter more than ever: they help establish brand authority by consistently delivering fresh, valuable content that can be cited in AI-generated answers.

In this guide, we will share a practical, detailed approach to writing SEO-friendly blog content that not only ranks on Google SERPs but is also surfaced by AI models.

Table of contents

Key takeaways

  • An SEO-friendly blog post now means writing for search intent first, keeping content clear and quotable for AI systems
  • Key factors for SEO-friendly blog posts include trustworthiness, machine-readability, answer-first structure, and topical authority
  • Conduct thorough keyword research and find readers’ questions to match search intent effectively
  • Use clear headings, improve readability, include inclusive language, and add relevant media to engage readers
  • Write compelling meta titles and descriptions, link to existing content, and focus on building authority to enhance visibility

What does an SEO-friendly blog post mean in the AI era?

The way people search for information has changed, and with it, the meaning of an SEO-friendly blog post. Before the rise of generative AI, writing an SEO-friendly blog post mostly meant this:

‘Writing content with the intention of ranking highly in search engine results pages (SERPs). The content is optimized for specific target keywords, easy to read, and provides value to the reader.’

That definition is not wrong. But it is no longer complete.

In the AI era, an SEO-friendly blog post is written with search intent first, answering a user’s question clearly and efficiently. It is not just about placing keywords in the right spots. It is about creating an information-dense piece with accurate, well-structured, and quotable sentences that AI systems can confidently extract and surface as direct answers.

The new definition clearly shows that strong SEO foundations still matter, and they matter more than ever. What has changed is how content is evaluated and discovered. Search engines and AI models now look beyond clicks and rankings to understand whether your content is trustworthy, helpful, and easy to interpret.

Here are some of the factors that play a key role in determining whether a blog post is truly SEO-friendly:

  • Trustworthiness (E-E-A-T): Demonstrating real-world experience, expertise, and credibility helps your content stand out from low-value AI-generated rehashes
  • Machine-readability: Clear structure, clean HTML, and technical signals such as schema markup help search engines and AI systems understand what your content is about
  • Answer-first structure: Placing concise, direct answers at the beginning of sections makes it easier for AI models to extract and reference your content
  • Topical authority: Publishing interconnected, in-depth content around a subject is far more effective than creating isolated blog posts

9 tips to write SEO-friendly blogs for LLM and SERP visibility

Now we get to the core of this guide. Below are some foundational tips to help you plan and write SEO-friendly blog posts that are genuinely helpful, easy to understand, and focused on solving real reader problems. When done right, these practices not only improve search visibility but also shape how your brand is perceived by both users and AI systems.

1. Conduct thorough keyword research

Before you start writing a single word, start with solid keyword research. This step helps you understand how people search for a topic, which terms carry demand, and how competitive those searches are. It also ensures your content aligns with real user intent instead of assumptions.

You can use tools like Google Keyword Planner, Ahrefs, or Semrush for this. Personally, I prefer using Semrush’s Keyword Magic Tool because it quickly surfaces thousands of relevant keyword ideas around a single topic.

Keyword Magic Tool by Semrush for the relevant keyword list

Here’s how I usually approach it. I enter a broad keyword related to my topic, for example, ‘SEO.’ The tool then returns an extensive list of related keywords along with important metrics. I mainly focus on three of them:

  • Search intent, to understand what the user is really looking for
  • Keyword Difficulty (KD%), to estimate how hard it is to rank
  • Search volume, to gauge demand

This combination helps me choose keywords that are realistic to rank for and meaningful for readers.

If you use Yoast SEO, this process becomes even easier. Semrush is integrated into Yoast SEO (both free and Premium), giving you keyword suggestions directly in Yoast SEO. With a single click, you can access relevant keyword data while writing, making it easier to create focused, useful content from the start.

Looking for keyphrase suggestions? When you’ve set a focus keyword in Yoast SEO, you can click on ‘Get related keyphrases’ and our Semrush integration will help you find high-performing keyphrases!

Also read: How to use the Semrush related keyphrases feature in Yoast SEO for WordPress

2. Finding readers’ questions

Keyword research tells you what people search for. Questions tell you why they search.

When you actively look for the questions your audience is asking, you move closer to matching search intent. This is especially important in the AI era, where search engines and AI models prioritize clear, answer-driven content.

For example, consider these two queries:

What are the key features of good running shoes?

This shows informational intent. The searcher wants to understand what makes a running shoe good.

What are the best running shoes?

This suggests a transactional or commercial intent. The searcher is likely comparing options before making a purchase.

Both questions are valid, but they require very different content approaches.

There are two simple ways I usually find relevant questions. The first is by checking the People also ask section in Google search results. By typing in a broad keyphrase, you can see related questions that Google itself considers relevant.

people also ask section on google serps
The People also ask section showing questions related to the broad keyphrase ‘SEO’

The second method is to use the Questions filter in Semrush’s Keyword Magic Tool. This helps uncover question-based queries directly tied to your main topic.

Apart from these methods, I also like using Google’s AI Overview and AI mode as a quick research layer. When I search for my main topic, I pay close attention to AI-cited sources, as they often surface the broad questions people are actively asking. The structured points and highlighted terms usually reflect the answers and subtopics that matter most to users. If I want to go deeper, I click “Show more,” which reveals additional angles and follow-up questions I might not have considered initially.

google ai overview citing resources
AI cited sources by Google AI Overview

Finding and answering these questions helps you do lightweight online audience research and create content that feels genuinely helpful. It also increases the chances of your blog post being referenced in AI-generated answers, since LLMs are designed to surface clear responses to specific questions.

3. Structure your content with headings and subheadings

In our 2026 SEO predictions, we highlighted that editorial quality is no longer just about good writing. It has become a machine-readability requirement. Content that is clearly structured is easier to understand, reuse, and surface across both search and AI-driven experiences.

How LLMs use headings

AI models rely on headings to identify topics, questions, and answers within a page. When your content is broken into clear sections, it becomes easier for them to extract key information and include it in AI-generated summaries.

Why headings still matter for SEO

Headings help search engines understand the hierarchy of your content and the main points you are trying to rank for. They also improve scannability and usability, especially on mobile devices, and increase the chances of earning featured snippets.

Good structure has always been a core SEO principle. In the AI era, it remains one of the simplest and most effective ways to improve visibility and discoverability.

4. Focus on readability aspects

An SEO-friendly blog post should be easy to read before it can rank or get picked up by AI systems. Readability helps readers stay engaged and helps search engines and AI models better understand your content.

A few key readability aspects to focus on while writing:

  • Avoid passive voice where possible
    Active sentences are clearer and more direct. They make it easier for readers to understand who is doing what, and they reduce ambiguity for AI systems processing your content.
  • Use transition words
    Transition words like “because,” “for example,” and “however” guide readers through your content. They improve flow and make it easier to follow relationships between sentences and paragraphs.
  • Keep sentences and paragraphs short
    Long, complex sentences reduce clarity. Breaking content into shorter sentences and paragraphs improves scannability and comprehension.
  • Avoid consecutive sentences starting in the same way
    Varying sentence structure keeps your writing engaging and prevents it from sounding repetitive or robotic.
The readability analysis in the Yoast SEO for WordPress metabox

If you are a WordPress or Shopify user, Yoast SEO (or Yoast SEO for Shopify) can help here. Its readability analysis checks for passive voice, transition words, sentence length, and other clarity signals while you write. If you prefer drafting in Google Docs, you can use the Yoast SEO Google Docs add-on to get the same readability feedback before publishing.
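For a sense of what such checks involve, here is a deliberately naive Python sketch (not Yoast’s actual analysis) that flags overlong sentences and looks for a few transition words:

```python
import re

# A tiny illustrative set; real readability tools use much larger lists.
TRANSITIONS = {"because", "however", "for example", "therefore", "meanwhile"}

def readability_report(text, max_words_per_sentence=20):
    """Rough, heuristic readability checks: sentence length and transitions."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences
                      if len(s.split()) > max_words_per_sentence]
    has_transitions = any(t in text.lower() for t in TRANSITIONS)
    return {"sentences": len(sentences),
            "too_long": len(long_sentences),
            "uses_transitions": has_transitions}

report = readability_report(
    "Short sentences are clear. However, very long sentences that ramble on "
    "and on without pause tend to lose readers before the point arrives."
)
print(report)
```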

Use Yoast SEO in Google Docs

Optimize as you draft for SEO, inclusivity, and readability. The Yoast SEO Google Docs add-on lets you export content ready for WordPress, no reformatting required.

Good readability is not just about pleasing algorithms. It helps readers understand your message more quickly and makes your content easier to reuse in AI-generated responses.

5. Use inclusive language

Inclusive language helps ensure your content is respectful, clear, and welcoming to a broader audience. It avoids assumptions about gender, ability, age, or background, and focuses on people-first communication.

From an SEO and AI perspective, inclusive language also improves clarity. Content that avoids vague or biased terms is easier to interpret, digest, and trust. This directly supports brand perception, especially when your content is surfaced in AI-generated responses.

Yoast SEO supports this through its inclusive language check, which flags potentially non-inclusive terms and suggests better alternatives. This feature is available in Yoast SEO, Yoast SEO Premium, and in the Yoast SEO Google Docs add-on, making it easier to build inclusive habits directly into your writing workflow.

Inclusive language ensures your content is intentional, thoughtful, and clear, aligning closely with what modern SEO and AI systems value.

6. Add relevant media and interaction points

A well-written blog post should not feel like a long block of text. Adding the right media and interaction points helps guide readers through your content, keeps them engaged, and encourages them to take action.

Why media matters

Media elements such as images, videos, embeds, and infographics make your content easier to consume and more engaging. Blog posts that include images receive 94% more views than those without, in part because visuals break up large blocks of text and make pages easier to scan.

Video content plays an even bigger role. Embedded videos help explain complex ideas faster and can significantly improve organic visibility compared to text-only posts. Together, these elements encourage readers to stay longer on your page, which is a strong signal of content quality for search engines and AI systems alike.

Media also improves accessibility. Properly optimized images with descriptive alt text make content usable for screen readers, while original visuals, screenshots, or diagrams help reinforce credibility and expertise.

Use interaction points to guide and engage readers

Interaction does not always mean complex features. Even simple elements can significantly improve engagement when used well.

Table of contents and sidebar CTA used as interaction points in a Yoast blog post

A table of contents, for example, allows readers to jump directly to the section they care about most.

Other interaction points include clear calls to action (CTAs) that guide readers to the next step, relevant recommendations that encourage users to keep exploring your site, and social sharing buttons that make it easy to amplify your content. Interactive elements like polls, quizzes, or embedded tools further encourage participation and increase time on page.

7. Plan your content length

Content length still matters, but not in the way many people think it does.

A common question is what the ideal word count is for a blog post that performs well. A 2024 study by Backlinko found that while longer content tends to attract more backlinks, the average page ranking on Google’s first page contains around 1,500 words.

That said, this should not be treated as a fixed benchmark. The ideal length is the one that fully answers the user’s question. In an AI-driven era, publishing long content that adds little value or is padded with unnecessary fluff can do more harm than good.

If a topic genuinely requires a longer format, breaking the content into clear subheadings makes a big difference. I personally prefer structuring long articles this way because it improves readability, helps readers navigate the page more easily, and makes the content easier for search engines and AI systems to understand.

Must read: How to use headings on your site

If you use Yoast SEO or Yoast SEO Premium, the text length check can help here. It exists to prevent pages from being too thin to provide real value. Pages with very low word counts often lack context and struggle to demonstrate relevance or expertise. Yoast SEO flags such cases as a warning, while clearly indicating that adding more words alone does not guarantee better rankings.

Think of word count as a guideline, not a goal. Your focus should always be on clarity, completeness, and usefulness.

8. Link to your existing content

Internal linking is one of the most underrated SEO practices, yet it does a lot of heavy lifting behind the scenes.

By linking to relevant content within your site, you help readers discover additional resources and help search engines understand how your content is connected. Over time, this strengthens topical authority and signals that your site consistently covers a subject in depth.

Good internal linking follows a few simple principles:

  • Link only when it adds value and feels natural in context
  • Use clear, descriptive anchor text so users and search engines know what to expect
  • Avoid linking to outdated URLs or pages that redirect, as this wastes crawl signals

Internal links also keep readers engaged longer by guiding them to related articles. This improves overall site engagement while reinforcing your expertise on a topic.
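A naive sketch of internal-link suggestion might look like this (Yoast SEO Premium’s real feature uses smarter prominent-word analysis than exact phrase matching; the posts and URLs below are made up):

```python
def suggest_internal_links(draft_text, published_posts):
    """Suggest posts to link to when their topic phrase appears in the draft.

    published_posts maps a topic phrase to that post's URL.
    """
    lower = draft_text.lower()
    return [url for phrase, url in published_posts.items()
            if phrase.lower() in lower]

# Hypothetical published posts on this site.
posts = {
    "keyword research": "/blog/keyword-research-guide",
    "meta description": "/blog/meta-descriptions",
    "site structure": "/blog/site-structure",
}
draft = "Start with keyword research, then write a strong meta description."
print(suggest_internal_links(draft, posts))
```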

From an AI and search perspective, internal linking plays an even bigger role. Modern search systems analyze content structure, metadata hierarchies, schema markup, and internal links to assess topical depth and clarity. Well-linked content clusters make it easier for search engines and AI systems to understand what your site is about and which pages are most important.

For WordPress users, Yoast SEO Premium offers internal linking suggestions directly in the editor. This makes it easier to spot relevant linking opportunities as you write, helping you build stronger content connections without interrupting your workflow.

9. Write compelling meta titles and descriptions

Meta titles and meta descriptions help users decide whether to click on your content. While meta descriptions are not a direct ranking factor, they strongly influence click-through rates, making them an essential part of writing SEO-friendly blog posts.

A good meta title clearly communicates what the page is about. Place your main keyword near the beginning, keep it concise, and aim for roughly 55-60 characters so it doesn’t get truncated in search results.

Meta descriptions act like a short invitation. They should explain what the reader will gain from clicking and why it matters. Instead of stuffing keywords, focus on clarity and usefulness. Mention what aspects of the topic your content covers and how it helps the reader. Simple language works best.

Pro tip: Using action-oriented verbs such as “learn,” “discover,” or “read” can also encourage clicks and make your description more engaging.
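The length guidance above is easy to check in a build step. A minimal sketch, assuming the ~60-character title ceiling mentioned above and a ~155-character description ceiling (a common rule of thumb, not a figure from this article):

```python
def check_metadata(title: str, description: str,
                   title_max: int = 60, desc_max: int = 155) -> list[str]:
    """Return warnings for metadata likely to be truncated in search results."""
    warnings = []
    if len(title) > title_max:
        warnings.append(f"title is {len(title)} chars (aim for ~55-{title_max})")
    if len(description) > desc_max:
        warnings.append(f"description is {len(description)} chars (max ~{desc_max})")
    return warnings

print(check_metadata(
    "How to Write SEO-Friendly Blog Posts: A Practical Guide",
    "Learn how search intent, structure, and readability make blog posts easier to find.",
))  # → [] (both within limits)
```

Character counts are only a proxy (search engines truncate by pixel width, which varies by device), so treat the thresholds as guardrails rather than hard rules.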

If you use Yoast SEO Premium, this process becomes much easier. The AI-powered meta title and description generation feature helps you create relevant, well-structured metadata in just one click. It follows SEO best practices while producing descriptions and titles that are clear, engaging, and aligned with search intent.

Bonus tips

Once you have the fundamentals in place, a few extra refinements can go a long way. The following bonus tips help improve usability, clarity, and long-term discoverability. They are not mandatory, but when applied thoughtfully, they can make your blog posts more helpful for readers and easier to surface across search engines and AI-driven experiences.

1. Add a table of contents

A table of contents (TOC) helps readers quickly understand what your blog post covers and jump straight to the section they care about. This is especially useful for long-form content, where users often scan rather than scroll from top to bottom.

From an SEO perspective, a TOC improves structure and readability and can create jump links in search results, which may increase click-through rates. It reduces bounce rates by helping users find answers faster and improves accessibility by offering clear navigation.

By the way, did you know Yoast can help you here too? Yes, the Yoast SEO Internal linking blocks feature lets you add a TOC block to your blog post that automatically includes all the headings with just one click!
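To illustrate what a TOC with jump links actually is, here is a minimal sketch that builds one from Markdown headings. This is an illustration only, not how the Yoast block works internally, and the slug rules are simplified:

```python
import re

def build_toc(markdown: str) -> list[str]:
    """Build Markdown jump links from ## and ### headings."""
    toc = []
    for line in markdown.splitlines():
        m = re.match(r"^(#{2,3})\s+(.*)", line)
        if m:
            level, text = len(m.group(1)), m.group(2).strip()
            # Simplified slug: lowercase, non-alphanumerics collapsed to "-"
            anchor = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
            indent = "  " * (level - 2)  # nest ### entries under ##
            toc.append(f"{indent}- [{text}](#{anchor})")
    return toc

doc = ("## Why word count matters\nBody...\n"
       "### A guideline, not a goal\n"
       "## Internal linking")
print("\n".join(build_toc(doc)))
```

Each entry links to an in-page anchor, which is what enables both quick navigation for readers and jump links in search results.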

2. Add key takeaways

Key takeaways help readers quickly grasp the main points of your blog post without having to read the whole post. This is especially helpful for time-constrained users who want quick, actionable insights.

Summaries also support SEO by reinforcing topic relevance and improving content comprehension for search engines and AI systems. Well-written takeaways might increase visibility in featured snippets and “People also ask” results.

If you use Yoast SEO Premium, the Yoast AI Summarize feature can generate key takeaways for your content in just one click, making it easier to add concise summaries without extra effort.

3. Add an FAQ section

An FAQ section gives you space to answer specific questions your readers may still have after reading your post. This improves user experience by addressing concerns directly and building trust.

FAQs also help search engines better understand your content by clearly outlining common questions and answers related to your topic. While they can support rankings, their real value lies in reducing friction, improving clarity, and even supporting conversions by clearing doubts.
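One common way to make an FAQ section machine-readable is schema.org's FAQPage structured data. A minimal sketch that serializes question-answer pairs as JSON-LD (the sample question is a placeholder):

```python
import json

def faq_jsonld(faqs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs as schema.org FAQPage JSON-LD."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("Does word count affect rankings?",
     "Not directly; depth and clarity matter more than length."),
]))
```

The resulting JSON would typically be embedded in a `<script type="application/ld+json">` tag. Note that the visible FAQ content should match the markup exactly.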

4. Use short, descriptive permalinks

A permalink is the permanent URL of your blog post. Short, descriptive permalinks are easier to read, easier to share, and more likely to be clicked.

Good permalinks clearly describe what the page is about, avoid unnecessary words, and include the main topic where relevant. They improve usability and help search engines understand page context at a glance.
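Generating such permalinks can be automated. A minimal slug-builder sketch (the stop-word list and six-word cap are illustrative choices, not rules from this article):

```python
import re
import unicodedata

# Short function words that rarely add meaning to a URL; illustrative list
STOP_WORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "is"}

def slugify(title: str, max_words: int = 6) -> str:
    """Turn a post title into a short, descriptive URL slug."""
    # Normalize accented characters to plain ASCII (e.g. "Café" -> "Cafe")
    ascii_title = (unicodedata.normalize("NFKD", title)
                   .encode("ascii", "ignore").decode("ascii"))
    words = re.findall(r"[a-z0-9]+", ascii_title.lower())
    kept = [w for w in words if w not in STOP_WORDS][:max_words]
    return "-".join(kept)

print(slugify("How to Write an SEO-Friendly Blog Post in 2026"))
# → how-write-seo-friendly-blog-post
```

Dropping stop words and capping the length keeps the main topic visible while avoiding the long, cluttered URLs that auto-generated permalinks often produce.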

5. Focus on building authority (EEAT aspect)

Building authority is critical, especially for sites that cover sensitive or high-impact topics. Demonstrating Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) helps both users and search engines trust your content.

This includes citing reliable sources, showing real-world experience, maintaining consistent quality, and clearly communicating who is behind the content. Strong E-E-A-T signals are especially important for YMYL topics, where accuracy and credibility matter most.

6. Plan content distribution

Writing a great blog post is only half the work. Distribution helps your content reach the right audience.

Sharing posts on social media, repurposing key insights into newsletters, and earning backlinks from relevant sites can drive more traffic and visibility. Distribution also increases engagement signals and helps your content gain traction faster, which supports long-term SEO performance.

Always write for your readers!

"In AI-driven search, retrieval beats ranking. Clarity, structure, and language alignment now decide if your content gets seen." – Carolyn Shelby

This perfectly sums up what writing SEO-friendly blog posts looks like today. Success is no longer just about rankings. It is about being clear, helpful, and easy to understand for both readers and AI systems.

Throughout this guide, we focused on the fundamentals that still matter: understanding search intent, structuring content well, improving readability, using inclusive language, and supporting your writing with media, internal links, and thoughtful metadata. These are not new tricks. They are strong SEO foundations, adapted for how search and discovery work in the AI era.

If there is one takeaway, it is this: always write for your readers first. When your content genuinely helps people, answers their questions, and respects how they search and read, it naturally becomes easier to surface across SERPs and AI-driven experiences.

Good SEO has not changed. It has simply become more human.

Google Bans Back Button Hijacking, Agentic Search Grows – SEO Pulse

Welcome to this week’s Pulse: the updates cover what Google considers spam, what happens when you report it, and what agentic search looks like in practice.

Here’s what matters for you and your work.

Google’s New Spam Policy Targets Back Button Hijacking

Google added back button hijacking to its spam policies, with enforcement beginning June 15. The behavior is now an explicit violation under the malicious practices category.

Key facts: Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Pages engaging in the behavior face manual spam actions or automated demotions.

Why This Matters

Google called out that some back button hijacking originates from included libraries or advertising platforms, which means the liability sits with the publisher even when the behavior comes from a vendor.

You have two months to audit every script running on your site, including ad libraries and recommendation widgets you didn’t write yourself.
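A crude first pass for such an audit is to search bundled scripts for History API usage. A minimal sketch (the patterns and sample snippet are illustrative; matches are candidates for manual review, not proof of hijacking, since legitimate single-page-app code uses the same APIs):

```python
import re

# History API calls worth a closer look; legitimate code also uses these,
# so a hit means "review this script", not "this script is spam".
SUSPECT_PATTERNS = [
    r"history\.pushState",
    r"history\.replaceState",
    r"onpopstate",
    r'addEventListener\(\s*["\']popstate',
]

def flag_history_usage(script_source: str) -> list[str]:
    """Return the History API patterns found in a script's source."""
    return [p for p in SUSPECT_PATTERNS if re.search(p, script_source)]

# Hypothetical vendor snippet showing the classic back-button trap:
vendor_js = """
window.addEventListener('popstate', function () {
  history.pushState(null, '', location.href);  // re-pushes the current page
});
"""
print(flag_history_usage(vendor_js))
```

Running this across your bundled JavaScript, including third-party ad and widget code, gives you a shortlist of scripts to inspect by hand before the enforcement date.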

Sites that receive a manual action after June 15 can submit a reconsideration request through Search Console once the offending code is removed.

What SEO Professionals Are Saying

Daniel Foley Carter, SEO Consultant, summed up the community reaction on LinkedIn:

“So basically, that spammy thing you do to try and stop users leaving? Yeah, don’t do it.”

Manish Chauhan, SEO Head at Groww, added on LinkedIn that he was:

“glad this is being addressed. It always felt like a short-term hack for pageviews at the cost of user trust.”

Read our full coverage: New Google Spam Policy Targets Back Button Hijacking

Spam Reports May Now Trigger Manual Actions

Google updated its report-a-spam documentation on April 14 to say user submissions may now trigger manual actions against sites found violating spam policies. The previous guidance said spam reports were used to improve spam detection systems rather than to take direct action.

Key facts: Google may use spam reports to take manual action against violations. If Google issues a manual action, the report text is sent verbatim to the reported website through Search Console.

Why This Matters

Google now states that spam reports can be used to initiate manual actions, making reports explicitly part of its enforcement process in official documentation.

This also raises concerns about potential abuse, as grudge reports and competitor sabotage may become more appealing when reports have a tangible impact. Therefore, the true test will be the quality of reports that Google actually considers.

What SEO Professionals Are Saying

Gagan Ghotra, SEO Consultant, wrote on LinkedIn about why the change may lead to better reports:

“Now spam reports have direct relation to Google issuing manual actions against domains. Google announced if there is a spam report from a user and based upon that report Google decide to issue manual action against a domain then Google will just send the user submitted content in report to the site owner (Search Console – Manual Action report) and will ask them to fix those things. Seems like Google was getting too many generic spam reports and now as the incentive to report are aligned. That’s why I guess people are going to submit reports which have a lot of relevant information detailing why/how a specific site is violating Google’s spam policies.”

Read Roger Montti’s full coverage: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Agentic Restaurant Booking Expands In AI Mode

Google expanded agentic restaurant booking in AI Mode to additional markets on April 10, including the UK and India. Robby Stein, VP of Product for Google Search, announced the rollout on X.

Key facts: Searchers can describe group size, time, and preferences to AI Mode, which scans booking platforms simultaneously for real-time availability. The booking itself is completed through Google partners rather than directly on restaurant websites.

Why This Matters

Restaurant booking shows how task completion within search works. For local SEOs and marketers, traffic patterns shift: users now often stay within Google during discovery, with bookings routed through partners.

The flow runs through Google’s booking partners, which may limit visibility for restaurants outside those platforms and makes a presence on Google-supported booking sites more important than the restaurant’s own website. Whether this model extends to other verticals remains to be seen.

What SEO Professionals Are Saying

Glenn Gabe, SEO and AI Search Consultant at G-Squared Interactive, flagged the rollout on X:

“I feel like this is flying under the radar -> Google rolls out worldwide agentic restaurant booking via AI Mode. TBH, not sure how many people would use this in AI Mode versus directly in Google Maps or Search (where you can already make a reservation), but it does show how Google is moving quickly to scale agentic actions.”

Aleyda Solís, SEO Consultant and Founder at Orainti, noted a key limitation in a LinkedIn post:

“Google expands agentic restaurant booking in AI Mode globally: You still need to complete the booking via Google partners though.”

Read Roger Montti’s full coverage: Google’s Task-Based Agentic Search Is Disrupting SEO Today, Not Tomorrow

Theme Of The Week: Google Gets Specific

What counts as spam, what happens when spam gets reported, and what agentic search looks like all got clearer definitions this week.

Back button hijacking becomes a named violation with an enforcement date. Google’s documentation now says spam reports may be used for manual actions, not just fed into detection systems. Agentic search becomes a live product for restaurant reservations in specific markets rather than a talking point about the future.

Now the compliance work, the reporting mechanics, and the agentic experience are all concrete enough to be tracked directly rather than merely forecast.

Featured Image: Roman Samborskyi/Shutterstock