Google Ads Posts GEO Partner Manager Role via @sejournal, @MattGSouthern

Google’s Large Customer Sales team has posted a role titled “GEO Partner Manager, Performance Solutions” on Google Careers. The listing is a single job posting inside Google’s ads sales organization.

The term “GEO” appears seven times across the listing, including the title. “Generative Engine Optimization” is spelled out twice. Other references include “GEO players,” “GEO ecosystem,” and “GEO/AEO companies.”

The listing says the role will “shape the GEO ecosystem to prioritize Google surfaces.” Responsibilities include influencing partners to prioritize Google-owned surfaces in their tools and methodologies, as well as in “Share of Model” analysis. “Share of Model” is an industry term for a brand’s presence in AI-generated answers.

Why This Matters

The terminology is worth noting because it sits alongside a different public position from Google’s search side. In July, Google’s Gary Illyes said standard SEO is sufficient for AI Overviews and AI Mode, and that specialized AEO or GEO optimization is not needed. As of publication, Google has not publicly updated that guidance.

Large Customer Sales manages relationships with major advertisers and agencies. The role’s alignment with the 3P Measurement team places it firmly inside Google’s ad-side partner work.

Microsoft and Google are in different places here, and the categories of evidence differ. In March, Bing added “GEO” to its official webmaster guidelines, defining the term and placing it alongside SEO as a named category. Bing’s AI Performance dashboard, launched in February, was positioned as a step toward GEO tooling.

The Google listing, by contrast, is one job posting inside an ads sales team. Both are adoption signals, but not the same level of commitment.

Looking Ahead

The language reflects how one team inside Google’s ads organization frames this work today. It doesn’t carry the same weight as a documentation update, a public statement from Google Search, or a policy change.

Whether similar GEO language appears in other Google job listings across Ads, Cloud, or Search would indicate whether this is a pattern or a single team’s choice.

For brands working with GEO or AEO partners, the listing is worth noting: it indicates Google’s ads team wants partner tools and methodologies to prioritize Google surfaces.


Featured Image: Jack_the_sparow/Shutterstock

WooCommerce Stores Can Now Sell Products Via YouTube Videos via @sejournal, @martinibuster

Google and WooCommerce announced today that the Google for WooCommerce extension now enables merchants to sell products directly through YouTube. The update connects WooCommerce stores to YouTube channels, enabling merchants to tap into an audience of 2.7 billion shoppers.

Merchants can tag products in videos and Shorts, where they appear as shoppable cards during playback and in a dedicated shopping tab on the channel.

  • The cards are pulled from the merchant’s existing product catalog
  • They stay synced automatically through Google Merchant Center
  • The same data is reused across YouTube, Shopping, and ads

Connect WooCommerce Stores To YouTube Shoppers

WooCommerce is an open source eCommerce platform built on WordPress that helps merchants manage products, payments, and orders. Google supports online selling through tools such as Merchant Center and Google Ads, which make product data available across search results, shopping listings, and ads. The Google for WooCommerce extension connects these systems so merchants can manage product data in one place and use it across Google channels.

The update adds YouTube Shopping as a direct sales channel for WooCommerce stores. Merchants can link their store to a YouTube channel and tag products from their catalog in videos and Shorts. Tagged products appear as clickable items while the video plays and remain visible in a shopping tab on the channel.

A product feed syncs automatically with Google Merchant Center, including titles, descriptions, prices, and inventory levels. This same data feeds Google Shopping listings and ad campaigns, so merchants do not need to update each channel separately and can keep product information consistent across search, ads, and video.

Performance Max campaigns use this same Merchant Center feed to generate ads in formats such as video thumbnails, display ads, and text headlines. Google runs experiments in real time and adjusts spend based on conversion trends, while merchants set budgets and return-on-ad-spend goals. While YouTube Shopping enables product tagging within videos, Performance Max handles automated ad creative that can run across YouTube and other Google channels using the same underlying data.

The extension also supports Performance Max campaigns for businesses that sell services, such as bookings or appointments, which do not require a product catalog. These campaigns focus on actions like form submissions, phone calls, or scheduling, expanding the tool beyond physical product sales.

Takeaways

YouTube now serves two roles for WooCommerce merchants:

  1. A place where products are discovered:
    YouTube is the world’s second-largest search engine and the largest platform for researching products via video. It enables merchants to reach an audience of 2.7 billion shoppers.
  2. And a place where those products can be purchased immediately:
    YouTube Shopping is now a direct sales channel for WooCommerce stores. Merchants can tag products in videos and Shorts so they appear as shoppable cards while viewers are watching.

For merchants, this means they can create videos about their products that can directly lead to sales. In terms of SEO, videos are content that can rank across multiple search surfaces, and now they can lead to sales too.

Featured Image by Shutterstock/So happy 59

ChatGPT Ads Now Offer CPC Bidding Between $3 And $5: Report via @sejournal, @MattGSouthern

Digiday reports that an early version of ChatGPT’s ads manager, available to a subset of pilot advertisers, now shows cost-per-click bids ranging from $3 to $5, based on screenshots reviewed and verified by the publication.

Until now, advertisers in the pilot have paid on a CPM basis, meaning a flat rate per 1,000 impressions served. CPC pricing lets buyers pay only when a user clicks. Digiday reported the option is available to marketers already testing advertising in the pilot, not as a broad rollout. OpenAI didn’t respond to Digiday’s request for comment.

Pricing Has Been Falling Since Launch

The CPC addition follows a drop in ChatGPT ad pricing since the pilot launched on February 9, 2026.

CPMs have fallen from $60 at launch to as low as $25 in some cases, per Digiday’s earlier reporting. Digiday also reported the minimum spend commitment has fallen from $250,000 at launch to $50,000, alongside the quiet release of a self-serve ads manager that gives a subset of pilot advertisers the ability to monitor impressions and clicks in real time.

What CPC Pricing Means For Buyers

CPM and CPC pricing serve different advertiser bases. Brand advertisers tend to plan around CPM. Performance marketers, who account for the majority of online ad spend, prefer to pay for clicks rather than impressions.

Adding CPC bidding opens the channel to a buyer category that has largely sat out the pilot. Nicole Greene, VP analyst at Gartner, told Digiday that the pricing change lets advertisers directly compare their results on OpenAI with those on other major platforms.

What ChatGPT clicks are worth depends on where they land relative to existing channels. According to ad agency Adthena (cited by Digiday), Meta CPCs run three to five times cheaper than Google Search, not because Meta’s inventory is worse, but because the intent behind those clicks is different. Social platform users tend to browse without a specific goal, while search users typically have one in mind.

The pricing drops ChatGPT into the same intent-and-value debate advertisers already face when comparing social clicks with search clicks.

Why This Matters

CPC bidding moves ChatGPT advertising into a territory where performance marketers can plan campaigns and compare costs directly against Google and Meta. Combined with the lower minimum spend, the channel is accessible to a wider buyer base than the enterprise tier that defined its launch.

SEJ’s Brooke Osmundson covered the implications for paid media teams in her analysis of whether ChatGPT Ads warrant real budget yet.

A CPM-only enterprise pilot has, in roughly 10 weeks, become a self-serve channel with a $50,000 minimum, lower CPMs, and now CPC pricing visible to a subset of advertisers. Each step down has opened the channel to a different category of buyer.

Looking Ahead

Paid media teams running search and social campaigns should compare ChatGPT’s clicks for intent quality and conversions. Measurement tools are limited and inconsistent, so teams must plan proxy measurement until OpenAI’s reporting improves.

OpenAI is hiring its first advertising marketing science leader, per Digiday. Until that role is filled, advertisers will be evaluating ChatGPT clicks largely on faith.

Google Ads Makes Call Recording Default For AI Lead Calls via @sejournal, @MattGSouthern

Google Ads has enabled call recording by default for eligible call flows associated with AI-qualified call leads, with exceptions for prior opt-outs and certain sensitive verticals.

A new Google support page describes the feature, which uses AI to evaluate phone conversations instead of relying on call duration alone to count conversions.

What Changed

Google Ads previously classified a phone call as a conversion primarily based on its duration. Google’s documentation says the new system analyzes call recordings to identify signals of intent, such as a caller asking about specific services, scheduling a consultation, or indicating readiness to purchase.

Google describes the classification as tiered.

  • Primary signal: call recording. If recording is on, AI evaluates the conversation and only qualified calls count as conversions.
  • Secondary signal: call duration. If a call can’t be recorded, duration determines whether it counts.
  • Tertiary signal: ad interaction. If no Google forwarding number is available, ad interaction data is used.

Call Details reports now include an AI-generated summary of each call and hashtags such as “#HighIntent” or “#ConsultationScheduled.”

Call Recording Defaults And Exceptions

Google’s settings page says call recording will remain off for advertisers who have already turned it off and for accounts Google has identified as operating in healthcare or financial services.

Advertisers in those categories can manually enable recording at any time, according to Google.

To turn recording off, advertisers can go to Admin > Account settings > Call ads > Call recording and select Off.

Where It Works

Call recording and AI-qualified conversions are currently limited to calls in which both the calling and receiving phone numbers are in the United States or Canada. Calls must route through a Google Forwarding Number, which requires call reporting to be enabled at the account level.

Only calls to call ads, call assets, and calls from website visits are eligible. Calls from location assets are not supported at this time.

Privacy And Compliance

Google’s settings page says callers will hear an automated message at the start of the call notifying them the conversation is being recorded for quality purposes. Advertisers agree to the Call Ads Supplemental Terms when using the feature and acknowledge they have given notice to employees or other parties who may participate in calls.

Google also says that recordings are used to evaluate lead quality, monitor spam and fraud, and improve the accuracy of conversion reporting.

Advertisers using call recording should review whether Google’s automated notification complies with their own legal obligations regarding recorded calls.

Why This Matters

Advertisers that don’t plan to use AI-qualified call leads are still producing recordings Google analyzes for lead quality, spam, and fraud, unless they turn recording off.

Smart Bidding now optimizes against AI-classified qualified calls when recording is on, and falls back to call duration when it isn’t.

Looking Ahead

Advertisers who prefer call duration as the primary signal can turn recording off in account settings. The duration threshold itself can be adjusted under Goals > Summary > Phone call leads > AI-qualified call leads.


Featured Image: El editorial/Shutterstock

Google Adds New Task-Based Search Features via @sejournal, @martinibuster

Google introduced new features for Search that continue its evolution into a more task-oriented tool, enabling users to launch AI agents directly from AI Mode and complete more tasks. This is a trend that all SEOs and online businesses need to be aware of.

Rose Yao, a product leader in Search, posted about the new features on X. The first tool is a toggle that enables users to track hotel prices directly from the search bar.

Yao explained:

“To help you save $$, today we launched hotel price tracking on Search! Use the new tracking toggle to get an email if prices drop for your dream hotel. Available now, globally”

An accompanying official blog post further explained the new tool:

“You can already track hotel prices at the city level, and launching today, you can now track prices for individual hotels, too. To get started on desktop, head to Search and look up a specific hotel by name, then tap the new price tracking toggle. On mobile, you’ll find the price tracking option under the Prices tab after you search. Either way, you’ll get an email alert if rates change significantly during your chosen dates, so you can jump on those price drops and snag a great deal.”

Agentic Search From AI Mode

Google’s CEO, Sundar Pichai, recently shared that the future of search is task-based, with a reliance on AI agents that can complete tasks for users. This announcement brings Google Search closer to that paradigm by introducing agentic search directly from AI Mode. The new feature launches an AI agent from AI Mode that calls local stores.

Yao explained:

“Agentic calling in AI Mode for finding last-minute travel gear.

When you just need that *one thing* before you leave but don’t know who’s got it in stock, you can ask AI Mode to save you the stress. Just search for what you need “near me” and Google AI will call local stores directly to get the details you need.”

This feature has been available on Google Search since November 2025 but it’s now rolling out to AI Mode.

Canvas Tool

AI Mode in Search has a Canvas tool that can accomplish planning tasks for users. The official blog post describes it:

“AI Mode in Search can transform your scattered research into a cohesive travel plan. Just head to AI Mode, select the Canvas tool from the plus (+) menu and describe your ideal trip. AI Mode will craft a custom itinerary in the Canvas side panel, including options for flights and hotels, as well as local attractions laid out on a map.”

The results can be further refined by the user. Travel planning with the Canvas tool is currently only available in the United States.

Three Featured Travel Tools

Those are the three travel-related features that Yao announced on X. The official blog post lists seven features related to travel, not all of which are new. For example, saving a boarding pass to Google Wallet is not a new feature.

Google’s Seven Travel Related Search Features

  1. Build a custom trip plan with AI Mode in Search
  2. Save money with hotel price tracking on Search
  3. Let Google take the hassle out of booking restaurants
  4. Ask Google to call nearby stores for last-minute shopping
  5. Translate and communicate with confidence
  6. Ask Maps for the best stops on your summer trips
  7. Make airport travel easier with Google Wallet

Transformation Of Search Continues

The main takeaways are:

  • Search is on a path toward becoming task oriented
  • Features like hotel tracking, AI calling, and Canvas show Google handling real-world actions, not just queries
  • Sundar Pichai’s “task-based” vision is already live in product features, not theoretical
  • AI Mode acts as an execution layer, turning search into a tool that does things on behalf of users
  • Local intent is becoming more actionable, with AI directly interacting with businesses
  • The traditional “ten blue links” model is being replaced by an interface that organizes and completes workflows
  • Visibility in search is increasingly tied to whether your business can be used by these systems, not just found

Google Search is becoming less about answering queries and more about helping users with their everyday tasks. In that mode, it changes the role of a website from a destination into a data source and service endpoint.

For marketers, that creates an opportunity to help businesses understand these changes and prepare for them.

If AI agents are calling stores, tracking prices, and assembling plans, then the winners are not just the best-ranked pages but the ones that use accurately structured HTML elements as well as Schema.org structured markup. The winners are the businesses whose data is structured, accessible, and actionable enough for those agents to use.

What this means:

  • Treat product availability, pricing, hours, and inventory as critical inputs, not just content
  • Ensure local listings, structured data, and third-party integrations are accurate and consistent

Google Search is transforming into a task-based user interface. Task-based agentic search is not hype; it’s real, and these new features are part of that transformation. The old ten blue links paradigm is steadily fading away, and what’s replacing it is the concept of search as an interface for navigating the modern world.

Read more about Google’s task-based agentic search. On a related note, research based on 68 million AI crawler visits shows what successful websites do to drive better AI search performance for local business sites.

Featured Image by Shutterstock/Sergio Reis

Google May Have To Share Search Data With Rivals via @sejournal, @MattGSouthern

The European Commission has sent preliminary findings to Google proposing measures to share search data with rival search engines, including AI chatbots that qualify as online search engines under the DMA, across the EU and EEA.

Under the proposal, Google must share four categories of anonymized data on fair, reasonable, and non-discriminatory (FRAND) terms.

The categories are ranking, query, click, and view data. The Commission says the aim is to allow third-party search engines to “optimise their search services and contest Google Search’s position.”

The measures are not yet binding. A public consultation is open until May, and a final decision is due by July 27.

What’s In The Proposal

The Commission’s proposed measures cover six areas:

  • Eligibility criteria for data beneficiaries, including AI chatbots with search capabilities
  • The extent of search data that Google is required to share
  • Methods and intervals for sharing data
  • Anonymization standards for personal data
  • Guidelines for determining FRAND pricing
  • Procedures for how beneficiaries access the data

The data will be available to eligible third parties operating search engines in the EEA, including AI chatbot providers that qualify as such.

This is an Article 6(11) proceeding following the Commission’s opening of proceedings on January 27. A separate Article 6(7) proceeding addresses Android interoperability for third-party AI. Both aim to turn broad DMA obligations into specific, enforceable rules.

AI Chatbots Are Eligible

Eligibility criteria for qualifying AI chatbots are what change the picture for AI search visibility.

Under the proposal, AI chatbots meeting the DMA’s definition of online search engines could access Google’s anonymized search data. Qualified AI search products might use this data to improve their retrieval and ranking systems.

The proposed measures specify data sharing methods, frequency, access, and pricing, with technical details to be finalized.

Google Is Pushing Back

Google opposed the proposal in a statement provided to multiple outlets. Clare Kelly, Senior Competition Counsel at Google, said in a statement to Engadget:

“Hundreds of millions of Europeans trust Google with their most sensitive searches — including private questions about their health, family, and finances — and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections. We will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security.”

Google also told The Register the investigation appears to be driven “at least in part by OpenAI,” which it claims is “seeking to take advantage of the DMA to harvest data from Google in ways not anticipated by the drafters of the DMA.”

The company is fighting on several DMA fronts. Brussels sent preliminary findings in 2025 on a separate Article 6(5) self-preferencing case. In February, Google began testing search result changes in the EU to address that proceeding.

Why This Matters

The measures are preliminary and, if adopted, applicable only in the EEA. Anonymization and pricing details remain open through the May consultation.

The longer-term issue is whether AI chatbot eligibility survives the final decision in July.

If the EU proposal is adopted with eligibility for AI chatbots, eligible products serving EU/EEA users could access anonymized signals from Google Search.

The proposal doesn’t give AI chatbots access to Google’s index but instead allows access to data similar to what Alphabet uses to optimize its search services, which differs from current AI search data sources.

Looking Ahead

The public consultation closes on May 1, and the Commission will assess the feedback before making a final, binding decision by July 27, which will apply to Google.

These proceedings do not constitute a non-compliance finding, but separate DMA enforcement can impose fines up to 10% of global turnover. The next milestone for AI visibility practitioners is the consultation outcome.

If the Commission maintains eligibility for AI chatbots, the focus shifts to how quickly data-sharing arrangements enable AI tools to compete for citation visibility.


Featured Image: Samuel Boivin/Shutterstock

Google Lists Best Practices For Read More Deep Links via @sejournal, @MattGSouthern

Google updated its snippet documentation today with a new section on “Read more” deep links in Search results. The section outlines three best practices for increasing the likelihood that a page appears with these deep links.

What A Read More Deep Link Is

Google defines the feature as “a link within a snippet that leads users to a specific section on that page.”

The examples in the documentation show the link appearing inside the snippet area of a standard Search result.

Screenshot from: developers.google.com/search/docs/appearance/snippet, April 2026.

The Three Best Practices

Google lists three best practices that can increase the likelihood of these links appearing.

First, content must be immediately visible to a human on page load. Content hidden behind expandable sections or tabbed interfaces can reduce that likelihood, per Google’s guidance.

Second, avoid using JavaScript to control the user’s scroll position on page load. One example Google gives is forcing the user’s scroll to the top of the page.

Third, if the page uses history API calls or window.location.hash modifications on page load, keep the hash fragment in the URL. Removing it breaks deep linking behavior.
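
The second and third practices both concern URL handling on page load. A common pattern that breaks deep linking is rewriting the address bar with `history.replaceState` using only the pathname, which silently drops the fragment a “Read more” link targets. The sketch below shows one fragment-preserving way to normalize a URL; the helper name `cleanUrlKeepHash` is illustrative, not from Google’s documentation:

```javascript
// Strip the query string (e.g. tracking parameters) from a URL while
// preserving the hash fragment, so "Read more" deep links still resolve
// to the intended section of the page.
function cleanUrlKeepHash(href) {
  const url = new URL(href);
  url.search = ""; // drop ?utm_* and similar parameters
  return url.pathname + url.hash; // keep the fragment intact
}

// Browser usage: rewrite the address bar on load without losing the hash.
// history.replaceState(null, "", cleanUrlKeepHash(window.location.href));
```

If a site must normalize its URLs on load, carrying `url.hash` through the rewrite, as above, keeps Google’s section links functional.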

More Context

Read more deep links are one type of anchor URL that appears in Search Console performance reports. John Mueller previously addressed those hashtag URLs, confirming that they come from Google and link to page sections.

Before today’s addition, the documentation was last revised in 2024. That change clarified page content, not the meta description, as the primary source of search snippets.

Why This Matters

For websites, the new guidance outlines what can increase the likelihood that a Read more deep link will appear.

Pages using accordion UI patterns, tabbed content, or forced-scroll JavaScript may reduce that likelihood. Teams working with single-page applications should ensure that hash fragments remain in URLs during page loads.

Looking Ahead

This is a documentation clarification, not a new SERP feature. Read more deep links have appeared in Search for some time. What’s new is the written guidance on how to increase that likelihood.

Developers working on JavaScript-heavy sites should test how their pages handle scroll position and hash fragments on initial load. Today’s update provides clearer signals on what can reduce the likelihood of a “Read more” link appearing.


Featured Image: Blossom Stock Studio/Shutterstock

68 Million AI Crawler Visits Show What Drives AI Search Visibility via @sejournal, @martinibuster

A new analysis of 858,457 sites hosted on the Duda platform shows how AI crawlers are interacting with websites at scale. The data offers a clearer view of how crawling activity is growing and what SEOs and businesses should do to increase traffic from AI search.

AI Crawling Has Already Reached Scale

AI crawling is growing quickly, with more requests tied to real-time answers and most of that activity coming from a single provider. The data reveals a pattern showing which sites are being crawled and, more importantly, why.

Year-Over-Year Growth In LLM Referrals

LLM referral traffic has increased sharply over the past year, with multiple platforms showing meaningful gains from very different starting points.

AI Referral Traffic Patterns

  • Total LLM referrals: 93,484 to 161,469 (+72.7%)
  • ChatGPT: 81,652 to 136,095 (+66.7%)
  • Claude: 106 to 2,488 (23x growth)
  • Copilot: 22 to 9,560 (from near-zero)
  • Perplexity: 11,533 to 13,157 (+14.1%)

Growth is not happening evenly, but across the board, referral traffic from AI systems is increasing. That makes AI-generated discovery a growing source of traffic, not a marginal one.

Crawlers Are Increasingly Fetching Content To Ground Answers

AI crawlers are no longer used primarily for indexing, with most activity now tied to retrieving content in real time to generate answers for users.

Most crawling is now happening in response to user queries rather than for building an index, which changes how content is accessed and used.

  • User Fetch (real-time answers): 56.9% of all crawler activity, driven almost entirely by ChatGPT
  • Training (model learning): 28.8%, split across GPTBot and other model crawlers
  • Discovery (content indexing): 14.3%, distributed across multiple systems
  • ChatGPT User Fetch volume: ~39.8 million visits

The trends are largely driven by ChatGPT, which is responsible for nearly all real-time retrieval activity. That means the move toward answer-based crawling is not evenly distributed, but concentrated in one platform shaping how content is accessed. This trend may change with Google’s new Google-Agent crawler.

Market Concentration In AI Crawling

AI crawler activity is heavily concentrated, with OpenAI responsible for the vast majority of requests, reflecting its position as the primary tool users rely on to find and retrieve information.

  • OpenAI: 55.8 million visits (81.0%)
  • Anthropic (Claude): 11.5 million (16.6%)
  • Perplexity: 1.3 million (1.8%)
  • Google (Gemini): 380,000 (0.6%)

Most AI crawling activity comes from OpenAI, which aligns with ChatGPT’s role as a primary tool for finding and retrieving information. Claude follows at a much smaller share, suggesting a different usage pattern, while the rest of the market accounts for a minimal portion of crawler activity.

Scale And What That Actually Means

AI crawling is already operating across a large portion of the web, reaching hundreds of thousands of sites and generating tens of millions of requests in a single month.

More than half of all sites in the dataset received at least one AI crawler visit, showing that this activity is not limited to a small subset of websites.

  • Total sites analyzed: 858,457
  • Sites with at least one AI crawler visit: 506,910 (59%)
  • Total AI crawler visits (Feb 2026): 68.9 million

AI crawling is not isolated to high-profile or heavily trafficked sites. It is already widespread, with consistent activity across a majority of the web.

The Relationship Between Crawling and Real Traffic

Sites that allow AI systems to crawl them consistently show stronger engagement across multiple metrics.

What the data actually shows is:

  1. Sites that allow AI crawling receive significantly more human traffic
  2. Higher-traffic sites are more likely to be crawled

Sites that allow crawling by AI systems receive significantly more human traffic, averaging 527.7 sessions compared to 164.9 for sites that are not crawled. This does not establish causation, but it shows a clear alignment between sites that attract human visitors and how often AI systems revisit them.

  • Average human traffic (AI-crawled vs not): 527.7 vs 164.9 (3.2x higher)
  • Average form completions: 4.17 vs 1.57 (2.7x higher)
  • Average click-to-call: 8.62 vs 3.46 (2.5x higher)
  • Sites with 10K+ sessions: 90.5% crawl rate

AI systems are not discovering weak or inactive sites and lifting them up. They are returning to sites that already attract human visitors. For marketers, that shifts the focus away from trying to “get crawled” and toward building real audience demand, since visibility in AI systems appears to follow it.

What Correlates With More Crawling

The research compared sites that include specific third-party integrations, structured features, and content depth with those that do not and found which ones mattered most for AI crawler activity and referrals.

Across the dataset, 59% of sites received at least one AI crawler visit in February 2026. Sites that are crawled more often tend to combine three types of signals: external integrations, structured business data, and content depth.

1. External Integrations

These integrations connect the site to external systems that validate and distribute business information.

  • Yext integration: 97.1% crawl rate vs ~58% without (+38.9pp)
  • Reviews integrations: 89.8% crawl rate vs 58.8% without, 376.9 average crawler visits

Sites connected to external data and review systems are crawled more often, indicating that AI systems rely on these integrations as signals that a business is real, verifiable, and worth revisiting.

2. Structured Site Features And Business Data

These are built into the site and help AI systems understand and verify business identity.

  • Google Business Profile sync: 92.8% crawl rate vs 58.9% without, 415.6 average crawler visits
  • Local schema: 72.3% vs 55.2% (+17.1pp), 22.3% adoption
  • Dynamic pages: 69.4% vs 58.2% (+11.2pp)
  • Ecommerce: 54.2% vs 59.2% (-5.0pp)

Sites that clearly define their business identity and structure their information in a machine-readable way are crawled more often, showing that AI systems favor sites they can easily interpret, verify, and extract information from.

3. Content Depth (Volume Of Usable Data)

Sites with more content provide more opportunities for AI systems to retrieve, reference, and reuse information in responses.

  • Sites with 50+ blog posts: 1,373.7 average crawler visits vs 41.6 with no blog (~33x higher)

Sites with more content are crawled far more often, indicating that AI systems may return to sources that offer a larger supply of usable information to draw from when generating answers.

Local Business Schema Completeness = More Crawling

This part of the research focuses specifically on local business schema, comparing how the completeness of schema implementation for communicating business details relates to AI crawler activity. The fields measured include business name, phone number, address, hours, and social profiles.

  • No local schema fields: 55.2% crawl rate
  • 10–11 completed schema fields: 82% crawl rate
  • Sites with more complete local schema show a 26.8 percentage point higher crawl rate (82% vs 55.2%)

Sites that provide more complete local business information in structured form are crawled more often and receive more crawler visits. As more of these fields are filled in, both crawl rate and crawl frequency increase.

The data suggests that clearly defined local business data makes a site easier for AI systems to identify, verify, and revisit, all prerequisites for receiving traffic from AI search.
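For reference, the fields the study measured (business name, phone, address, hours, social profiles) map to standard schema.org LocalBusiness properties. Below is a minimal, hypothetical sketch, with placeholder values not drawn from the research, showing what a more complete implementation looks like and how a simple completeness count works:

```python
import json

# Hypothetical LocalBusiness JSON-LD covering the field types the study
# measured. All names and values are illustrative placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Bakery",
    "telephone": "+1-555-010-0000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "openingHours": ["Mo-Fr 08:00-18:00", "Sa 09:00-14:00"],
    "sameAs": [
        "https://www.facebook.com/examplebakery",
        "https://www.instagram.com/examplebakery",
    ],
}

# Count how many of the measured field types are present, mirroring the
# completeness scale the study reports (0 up to 10-11 completed fields).
measured_fields = ["name", "telephone", "address", "openingHours", "sameAs"]
completed = sum(1 for field in measured_fields if local_business.get(field))
print(f"Completed schema fields: {completed} of {len(measured_fields)}")

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(local_business, indent=2)
```

The study's completeness scale counts more granular fields than this sketch, but the principle is the same: each filled property is one more machine-readable fact an AI system can verify.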

Takeaways

AI crawling is a parallel method of content discovery, and the research shows clear patterns among the sites that crawlers visit most often.

  • AI crawling operates alongside traditional search, changing how content is accessed and reused
  • Sites with structured local signals, deeper content, and more complete schema are crawled more often
  • Multiple reinforcing signals appear together on the same sites, not in isolation
  • The data shows direction, not causation, but the patterns are consistent

The data shows that sites that make it easy for AI crawlers to index and revisit them tend to perform better. Sites that present clear, structured, and verifiable information, while continuing to build real audience demand, are more likely to be revisited by AI systems and to benefit from traffic generated through AI search.

Read the research: Duda study finds AI-optimized websites drive 320% more traffic to local businesses

Featured Image by Shutterstock/Preaapluem

AI Adoption Outpaced The PC & Internet: Dive Into The Stanford Report Data via @sejournal, @MattGSouthern

Stanford’s Human-Centered Artificial Intelligence Institute published its 2026 AI Index Report. The report runs over 400 pages across nine chapters covering technical performance, investment, workforce effects, and public sentiment.

The number getting the most attention is that Generative AI reached 53% adoption among the global population within three years of ChatGPT’s launch. That’s faster than either the personal computer or the internet reached comparable levels.

For anyone working in search, the report contains data that connects directly to the changes you’ve been navigating all year.

What The Report Found

This is the ninth annual AI Index, and it covers a lot of ground. A few findings matter most for the search industry.

In terms of capability, frontier models now exceed human performance on PhD-level science questions and in competitive mathematics. AI agents handling real-world tasks improved from a 20% success rate in 2025 to 77% today. Coding benchmarks that models struggled with a year ago are now nearly solved.

On investment, global corporate AI investment hit $581 billion in 2025, up 130% from the prior year. US private AI investment reached $285 billion. More than 90% of frontier models now come from private companies, not academic labs.

Regarding workforce effects, employment among software developers aged 22 to 25 has dropped by nearly 20% since 2024. A similar pattern appeared in customer service and other roles with higher AI exposure.

Transparency is declining. The Foundation Model Transparency Index fell from 58 to 40. The most capable models now disclose the least about their training data, parameters, and methods. Of the 95 most notable models launched last year, 80 were released without their training code.

The Adoption Number Everyone Is Citing

Understanding what the 53% figure includes, and what it doesn't, matters for how you interpret it.

The comparison to PCs and the internet is based on research by the St. Louis Fed, Vanderbilt, and Harvard Kennedy School. The team compared adoption rates by years since each technology’s first mass-market product. The IBM PC launched in 1981. Commercial internet traffic opened in 1995. ChatGPT launched in November 2022.

At comparable points after launch, generative AI adoption runs well ahead of both earlier technologies.

But the comparison isn’t apples-to-apples, and the researchers said so themselves. Harvard’s David Deming pointed out that AI is built on top of PCs and the internet. People already had the hardware and the connectivity. Nobody needed to buy new equipment or wait for connectivity to reach their area. AI adoption rode on decades of prior technology investment.

Adoption numbers also vary depending on who’s counting and how. The Stanford report puts US adoption at 28%, ranking the country 24th globally. The St. Louis Fed’s own tracker puts US adoption at 54% as of August 2025. Same country, nearly double the rate, measured differently. The Fed team even revised its earlier estimate upward from 39% to 44% after changing the order of its survey questions.

“Adoption” also doesn’t distinguish intensity. Someone who signed up for a free ChatGPT account and tried it once counts the same as someone who uses it eight hours a day. The Stanford report notes that most users access free or near-free tiers. That’s a different picture than the one the headline number implies.

None of this means the adoption data is wrong. Generative AI is spreading faster than comparable technologies did at the same stage. But the speed of adoption alone doesn’t tell you how deeply it’s embedded in workflows or how much it’s changing search behavior specifically.

The Jagged Frontier

The report’s most useful concept for search professionals might be its “jagged frontier” of AI capability.

The same models that win gold at the International Mathematical Olympiad read analog clocks correctly only 50% of the time. IEEE Spectrum reported that Claude Opus 4.6 scores at the top of Humanity’s Last Exam while reading clocks at just 8.9% accuracy. Models that ace PhD-level science questions still struggle with video understanding and multi-step planning.

Ray Perrault, co-director of the AI Index steering committee, told IEEE Spectrum that benchmarks don’t map cleanly to real-world results. Knowing a model scores 75% on a legal reasoning benchmark “tells us little about how well it would fit in a law practice’s activities,” he said.

Search professionals have seen similar unevenness in AI search products. Ahrefs research showed that AI Mode and AI Overviews cite different URLs for the same queries, with only 13% overlap. Google’s Robby Stein acknowledged that the system pulls AI Overviews back when people don’t engage with them. Those signals suggest AI search performance is uneven across contexts, even if Google hasn’t fully explained where those differences are most pronounced.

Stanford’s data suggest that strong benchmark performance doesn’t guarantee reliable results across all tasks or query types. Whether that unevenness improves with future models is an open question the report doesn’t answer.

What’s Happening To Transparency

What the report says about transparency connects directly to search.

The Foundation Model Transparency Index dropped from 58 to 40 in a single year. The most capable models score lowest. Google, Anthropic, and OpenAI have all stopped disclosing dataset sizes and training duration for their latest models. Of the 95 most notable models launched in 2025, 80 shipped without training code.

TechCrunch noted a disconnect between expert optimism about AI and public anxiety about it. The US reported the lowest trust in its government’s ability to regulate AI among the countries surveyed, at 31%.

For context on the index itself, a drop from 58 to 40 could indicate that companies are becoming more secretive. It could also reflect that the index penalizes closed-source models by design, and the most capable models happen to be closed-source. Both explanations can be true at the same time.

What matters for practitioners is the implication. The models powering AI Overviews, AI Mode, and ChatGPT Search are getting more capable and less explainable simultaneously. You’re optimizing for systems where the companies building them are sharing less about how they work, not more.

The report’s acknowledgments disclose that Stanford HAI receives financial support from Google, OpenAI, and others, and that the report was produced with assistance from ChatGPT and Claude.

The Entry-Level Question

Employment among software developers aged 22 to 25 dropped nearly 20% since 2024, according to the report. Older developers’ headcounts grew over the same period. A similar pattern appeared in customer service roles.

At first glance, that looks like AI replacing entry-level work. But the report included a caveat that complicates that conclusion. Unemployment is rising across many occupations, and workers least exposed to AI have seen it rise more than those most exposed.

That doesn’t rule out AI as a factor. It means the 20% decline could reflect AI displacement, broader hiring slowdowns, companies restructuring their entry-level hiring, or all three at once. The report presents correlation, not causation.

For search and content teams, the signal is directional even if the cause is mixed. The Stanford data is consistent with what the Tufts AI Jobs Risk Index showed earlier this year. Roles that involve assembling information from existing sources face more pressure than roles that require judgment, experience, and original analysis.

Why This Matters For Search Professionals

Even with its caveats, the adoption speed explains the pace of what you’ve been seeing.

Google expanded AI Overviews to 1.5 billion monthly users by Q1 2025. AI Mode reached 75 million daily active users by Q3 2025, then went global. Google expanded Search Live to 200+ countries. Personal Intelligence rolled out to free US users this year.

The adoption curve helps explain why Google has been expanding AI search features at this pace. It doesn’t tell us how much of that usage is happening inside search rather than standalone AI tools.

The “jagged frontier” means you can’t make blanket assumptions about AI search quality across query categories. A query type that returns accurate AI Overviews today might produce hallucinated answers under slight variations in phrasing. Monitoring needs to happen at the query level, not the category level. Search Console doesn’t currently separate AI Overview or AI Mode performance from traditional search metrics, which makes this harder.

The decline in transparency affects how well you can understand why your content appears or doesn’t appear in AI-generated answers. When Google shares less about the models powering its search features, the feedback loop between what you publish and what gets surfaced becomes harder to read.

Shelley Walsh, speaking at SEJ Live, referenced Grant Simmons’ concept of “golden knowledge”: content built on original data, firsthand experience, and depth that AI summaries can’t replicate from training data. The Stanford report’s data on adoption speed and model limitations support that position. The models are fast and widely used, but they’re uneven. Content that fills the gaps where AI is unreliable has a structural advantage.

What The Report Doesn’t Tell Us

The Stanford report doesn’t break out search-specific adoption data. We don’t know what percentage of that 53% uses AI via search specifically, rather than via ChatGPT, Gemini, or other standalone tools.

Google’s AI search usage numbers are limited. The company reported that AI Overviews reached 1.5 billion monthly users in Q1 2025, and AI Mode reached 75 million daily active users in Q3 2025. Updated figures may arrive in the next earnings call.

The report also can’t tell us whether the jagged frontier problem is improving or worsening in search applications. The benchmark data shows models improving overall, but the clock-reading example shows that improvement isn’t uniform. Whether AI Overviews and AI Mode are getting more reliable for the specific queries that matter to your business requires your own monitoring, not aggregate benchmark data.

Looking Ahead

The Stanford report lands one week after Google’s March core update completed. Alphabet’s next earnings call will likely include updated AI search usage numbers.

The adoption data doesn’t predict what search will look like by year-end. But it does confirm that AI-first behavior isn’t speculative anymore. The question is whether Google’s AI search products will get reliable enough to match the pace of adoption.

Featured Image: n_a vector/Shutterstock