Bing Previews AI Citation Share For Webmaster Tools via @sejournal, @MattGSouthern

Microsoft previewed four new AI reporting features for Bing Webmaster Tools: citation share, grounding query-intent labels, grounding query topic labels, and Generative Engine Optimization (GEO)-focused recommendations.

Krishna Madhavan, Principal Product Manager at Microsoft AI and Bing, previewed the features during a presentation at SEO Week in New York City. Slides shared by attendees on X show four additions to the AI Performance dashboard.

Citation Share would show the percentage of citations a site captures within a specific grounding query, sitting alongside the raw citation counts already available in the dashboard.

Grounding Query Intent would classify queries into 15 predefined intent labels. Visible labels in the shared screenshots include Learning, Informational Search, Navigational, Research, Comparison, Planning, Conversational, and Content Filtered.

Grounding Query Topic would group queries under topic labels, giving sites a second classification layer alongside intent.

The fourth addition, GEO-focused recommendations, would surface guidance tied to AI visibility. The slide shows recommendation areas, including content structure and crawlability, indexing and canonicalization signals, structured data adoption, and structured data quality.

Microsoft hasn’t published an official blog post about these features. The information available comes from attendee screenshots of the presentation.

https://x.com/ClaraSoteras/status/2048768514677244182?s=20

Why This Matters

The AI Performance dashboard launched in public preview in February, giving sites their first look at how often Microsoft Copilot and Bing AI summaries cite their content. Microsoft expanded it in March with a feature that mapped grounding queries to the specific pages cited for them.

Citation Share would build on that. Citation counts show raw visibility, while a share metric adds competitive context: whether a site captures most of the citations for a query or splits them with other sources.
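Microsoft hasn’t said how the metric will be calculated, so treat the following as an illustration of the concept only. In this hypothetical sketch (domains and counts are made up), share is simply a site’s citations divided by all citations returned for one grounding query:

```python
# Hypothetical illustration of a citation-share metric; Microsoft hasn't
# published the actual formula. Domains and counts are made up.
citations_for_query = {"yoursite.com": 6, "competitor-a.com": 3, "competitor-b.com": 1}

total_citations = sum(citations_for_query.values())
for site, count in citations_for_query.items():
    print(f"{site}: {count} citations, {count / total_citations:.0%} citation share")
```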

The intent and topic classifications could fix data limits in the dashboard. Queries vary in phrasing, making trend spotting hard. Grouping by intent and topic allows sites to gauge visibility against shared categories instead of individual phrases.

The GEO recommendations are the least defined of the four. The labels suggest the focus areas are familiar SEO basics like crawlability, indexing, canonicalization, and structured data, but Microsoft hasn’t specified how recommendations are generated or triggered.

Looking Ahead

Microsoft hasn’t announced release dates for any of the four features. Details on Citation Share calculation, intent and topic taxonomies, and GEO recommendation methods remain undocumented publicly.

Treat these as previews, not shipped features. Watch for official Bing Webmaster or Microsoft Advertising blog posts confirming scope and timing.

GoDaddy Transferred A Domain By Mistake And Refused To Fix It via @sejournal, @martinibuster

GoDaddy allegedly transferred a domain name away from its longtime registrant without authorization and without the required documentation. The victim spent nearly ten hours with customer service only to be told there was nothing GoDaddy could do to fix the problem.

Domain Transfer Happened On A Saturday

Interestingly, the rogue domain transfer happened on a Saturday, which could be an important detail: some domain registrars outsource their customer service on weekends, and I have heard of other occasions where mistakes occurred because of weaker quality control. I know of a case where high-value domain names worth six to seven figures were stolen on a weekend. An attacker manipulated the weekend customer service staff into changing the email address on the account, enabling the thief to transfer all of the one- and two-word domains to another account.

What happened with this specific domain was not a case of robbery but something worse. A weekend customer service rep made a mistake while processing a legitimate domain name change for another GoDaddy customer and, instead of initiating the change on the correct domain, transferred the victim’s domain.

Compounding the error, GoDaddy’s weekend customer service failed to follow their own protocol for preventing unauthorized transfers, thereby allowing the domain to be transferred to someone else.

32 Calls And Nearly 10 Hours Of Phone Calls

The process of getting GoDaddy to reverse its mistake was a bureaucratic nightmare. The victim placed thirty-two phone calls and spent 9.6 hours on the phone with GoDaddy’s customer service.

“Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did but did not receive any type of response when emailing that address. Of course Lee didn’t really feel like this was the appropriate level of urgency for this issue. He asked for a supervisor who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.

On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.”

Making matters worse, every call to GoDaddy generated a new case number, and none of the case numbers were tied to any of the previous ones.

GoDaddy’s Response

After four days of trying to reach someone at GoDaddy who could resolve the problem, the company finally responded with the following resolution:

“After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.”

GoDaddy’s response contained links to how to dispute a domain name change at ICANN, the global organization that manages the domain name system; instructions on how to look up domain name registration information; and a customer support page about contacting legal representation.

That’s it.

Error Fixed, But Not By GoDaddy

The person who wrote about the issue said that they contacted a friend within GoDaddy who was then able to have the matter properly dealt with. Ultimately the error was not fixed by GoDaddy but by the innocent person who discovered someone else’s domain name in their GoDaddy account.

As previously stated, the entire fiasco began with a GoDaddy mistake on a legitimate domain change request: GoDaddy applied the change to the victim’s domain instead of the domain specified in the request. The person who ended up with the victim’s domain name in their account contacted the victim, and between the two of them they began the process of transferring the domain back to the rightful registrant.

Domain Name Ownership Is Non-Existent

A common mistake made by many developers and business owners is believing that they own a domain name. That is incorrect; nobody owns a domain name. Domain names are registered, never owned. The registration entitles the registrant to use the domain name, but they never actually own it. That is how the domain name system works, and it’s part of the reason this issue played out the way it did. However, the problem in this case was due solely to a mistake by GoDaddy.

The post that detailed the nightmare refers to GoDaddy’s “domain ownership protection” services, but that’s not actually what the product is called. There is no such thing as domain name ownership protection.

What GoDaddy sells is a Domain Protection service that protects against unauthorized transfers and accidental expiration. The victim paid for that protection, but because the error was GoDaddy’s own mistake, the protection did nothing for the victim; the domain change went through without the proper documentation.

Read the blog post about how GoDaddy made a mistake and not only failed to fix the problem but never even acknowledged making one:

GoDaddy Gave a Domain to a Stranger Without Any Documentation

Featured Image by Shutterstock/AVA Bitter

Google’s AI Overviews Cut Organic Clicks 38%, Field Study Finds via @sejournal, @MattGSouthern

A randomized field experiment finds Google’s AI Overviews reduce organic clicks to external websites by 38% on queries where they appear, while self-reported search satisfaction stays nearly unchanged when the summaries are removed.

The working paper by researchers at the Indian School of Business and Carnegie Mellon University was posted to SSRN this month. Authors Saharsh Agarwal and Ananya Sen describe it as the first randomized field experiment to test how AI Overviews affect user behavior in a real browsing environment.

How The Experiment Worked

Agarwal and Sen built a Chrome extension that randomly assigned 1,065 U.S. participants to one of three groups. People were recruited from Prolific and used Chrome on desktop. They also had to meet minimum browsing-history thresholds, so the sample reflects active desktop Chrome users rather than all Google users.

The control group saw Google Search normally. A “Hide AIO” group had the extension remove AI Overviews in real time. A third group was redirected to Google’s AI Mode for all searches. The study ran for two weeks per participant between January and February 2026.

Researchers pre-registered the experiment with the AEA RCT Registry before data collection. Over 95% of users in the Hide AIO group did not detect any changes during the study.

What The Researchers Found

AI Overviews appeared on 42% of queries. Removing them increased outbound clicks from 0.38 to 0.61 per search, which means the summaries cut outbound organic clicks by 38% on triggered queries (0.38 is roughly 62% of 0.61). With AI Overviews shown, the share of zero-click searches rose from 54% to 72%.

Effects were strongest when AI Overviews appeared at the top of the page, which occurred 85% of the time. Removing top-position AI Overviews nearly doubled outbound clicks, while removing lower-placed ones had no effect.

Sponsored clicks and search frequency remained steady, indicating that AI Overviews substitute for organic visits rather than reducing how much people search or shifting clicks to ads.

The User Experience Finding

The endline survey used a 1-to-5 Likert scale to assess participants’ search experience. Responses from the control and Hide AIO groups were nearly identical across all measures, including satisfaction, information quality, and ease of finding information.

The researchers wrote that AI Overviews “divert traffic away from publishers without delivering measurable improvements in user experience.”

How AI Mode Compared

Participants directed to AI Mode had lower outbound click rates, higher zero-click rates, and lower satisfaction at endline compared to other groups.

The authors note that these results are exploratory, as higher attrition, some uninstalling of the extension, or finding workarounds may have influenced the outcomes.

Why This Matters

Independent measurements of the impact of AI Overviews on traffic have mostly been correlational. Pew Research found users click 8% of the time with AI Overviews, compared to 15% without. Ahrefs analyzed GSC data and reported a 58% drop in click-through rate for top-ranking pages when AI Overviews appeared.

This experiment adds a different approach by randomly assigning users to see AI Overviews or not, isolating the causal effect.

Google VP Liz Reid claims AI Overviews cut “bounce clicks,” but provides no data backing the user-benefit side. The Agarwal and Sen paper tested a related question with a randomized design, finding no measurable change in satisfaction or ease of finding information.

Looking Ahead

The paper is a draft on SSRN and has not been peer-reviewed. The authors plan to add more results, and we will provide an update if the findings change.

The High CPC Paradox: When Expensive Clicks Are A Sign Of Success

Cost-per-click (CPC) remains one of the most closely scrutinized metrics in digital advertising for both business owners and expert practitioners. This is understandable; it’s a tangible, easy-to-track metric that offers immediate gratification when it drops and immediate anxiety when it rises. After all, if your average CPC increases from $2 to $5, it’s natural to assume your campaign is performing worse.

However, it’s strategically wrong to evaluate your CPC in isolation. In modern Google Ads account structures, particularly those using Smart Bidding, I’ve noticed that a higher CPC is frequently a sign of account health, while a rock-bottom CPC can be a huge red flag.

We’ll explore why this paradox exists, delineate the scenarios where high CPCs signal success versus inefficiency, and use a real-life case study to illustrate the problem with focusing on CPCs – and what high-value metrics you should prioritize instead.

Why High CPCs Often Signal High Quality

If you transition from manual bidding to smart bidding strategies like maximize conversions or target ROAS, you will likely notice an immediate increase in your average CPC. It can be jarring, but this is a fundamental feature of how the algorithm operates.

Remember, cheap clicks are cheap for a reason: Your competitors didn’t want them! If you focus solely on driving down CPCs, you risk optimizing your account for the low-quality “leftover” traffic. However, when you use smart bidding, while you still pay per click, you are not optimizing for clicks; you are optimizing for the probability of a conversion, and potentially even the probable value of a conversion. This is how you align your business goals with your Google Ads campaigns’ goals, and the unintended (but necessary) side effect may be higher CPCs.

If this occurs, recognize that you are now bidding on conversion probabilities, not keywords. In the old world of manual CPC, you bid a flat rate for a keyword. In the new world, Google’s smart bidding algorithms analyze millions of data points in real time – including device, location, time of day, operating system, browsing history, audience membership, and even the unique query itself – to assess user intent.

The algorithm is designed to bid aggressively for users who signal a high likelihood of converting. For example, if a user is searching for your specific solution, has a history of converting on similar offers, and is searching during business hours, the system will bid higher to win that auction. You are paying a premium to ensure your ad appears before the most valuable users.

Conversely, the algorithm bids down (or not at all) on users who are unlikely to convert. These might be users who frequently click ads but never buy, or users searching with low-intent informational queries. By avoiding these low-value clicks, your overall traffic volume may decrease, and/or your average cost per click may rise, because you have removed the “cheap” denominator from your equation.

The result should be expensive traffic, but traffic that actually turns into revenue.

In some industries like insurance, law, or emergency services, CPCs can reach an eye-watering $100 or $150 per click. This is simply the cost of doing business in a competitive market where a single client is worth thousands of dollars. If your Average Order Value is high, a high CPC is not a bug; it is a feature of a healthy, competitive auction and a reflection of what those clicks are worth to your business.

If High CPCs Often Indicate Quality, What Do Low CPCs Indicate?

If you are seeing CPCs under $1.00 for non-brand search campaigns, you should investigate immediately. Extremely low costs may mean you are purchasing inventory that your competitors have rejected.

  • Junk Inventory: Low CPCs often indicate you are inadvertently opted into the Google Display Network or Search Partners. These networks frequently drive lower-intent traffic compared to the primary Search Engine Results Page (SERP).
  • Broad Match or AI Max mis-matches: Cheap clicks can result from loose keyword matching, where your ads appear for irrelevant, low-competition queries. The root cause is usually a poor conversion tracking setup and/or the wrong bid strategy; you’ll want to fix both.

However, it is also possible that you’re lucky! I’ve seen non-brand CPCs in the $0.10 to $0.90 range, in 2026, for niches like alcohol and hair salons. Low competition and high-quality ads can mean you get to enjoy low CPCs with zero consequences. Sadly, this is usually the exception, not the rule.

Context Matters: The Non-Search Exception

It is critical to note that the logic of “High CPC = High Quality” changes significantly when you move away from Search. In non-search campaigns, you are interrupting users rather than capturing active intent, so the metrics behave differently.

  • Display & Demand Gen: On the GDN, “good” metrics are often misleading. A high CTR (typically above 1%) is usually a sign of accidental clicks or bot activity. While CPCs here are generally low, extremely low costs (pennies) typically signal placement on low-quality sites. This is why prioritizing the higher-quality inventory on Demand Gen, like Discover and Gmail, is often worth it, even with slightly higher CPCs than Display.
  • Video (YouTube): High CPCs on Video are meaningless because the primary goal is views, not clicks. You should be optimizing for cost per view (CPV) or cost per reach (CPM), not CPC.
  • Performance Max: Since PMax blends all of these networks, CPC serves as even less of a diagnostic tool. A very low average CPC ($0.10-$0.50) can suggest the campaign is leaning heavily on Display/Video inventory. A higher CPC can indicate it is successfully winning auctions in Search and Shopping. Your Channel Performance Report will be a more useful optimization tool than looking at blended CPC.

The Counter-Argument: When High CPCs Are A Red Flag

While high CPCs can indicate quality, they are not a free pass to ignore your costs altogether. There are specific scenarios where a high CPC is still a warning sign of inefficiency. This is where your judgment as a skilled practitioner needs to come in:

1. Your Quality Score Is Low

If your Quality Score is low (specifically 5 or below), then you are overpaying for your clicks to compensate.

The Fix: Check your keyword report, add the Quality Score columns, and see which component is the most “Below Average”: Expected CTR, ad relevance, or landing page experience. Optimize accordingly.

2. You Are Over-Invested (Diminishing Returns)

It is possible to capture too much of the market. In my experience, if you are reaching 60%+ impression share on non-brand search in a competitive industry, your CPCs are likely inflated because you are paying a premium to capture the very last, most expensive sliver of available traffic.

The Fix: Switch from a maximize strategy to a target strategy, so that Google Ads isn’t forcing your budget to be spent in full. Or, expand your keyword set through additional keywords and/or broader keywords to open up new pockets of opportunity.

3. The Math Doesn’t Work (The Rule Of 2)

High CPCs are a problem if they break your business economics. Even if the traffic is high quality, if the cost of the click exceeds the revenue you can expect to make from that visit, the ads will never be profitable.

The Fix: For a quick and crude test, compare your average CPC to your revenue per session (Conversion Rate x Average Order Value). If your CPC is $2 but you only make $1 per visit on average, you are losing money on every click. Work on your conversion rate so that you are better equipped to handle this high-quality traffic.
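Here is a minimal sketch of that crude test. All of the numbers are placeholders, not benchmarks; plug in your own conversion rate and average order value:

```python
# Crude profitability check: compare average CPC to revenue per session.
# All numbers are placeholders for illustration.
conversion_rate = 0.02        # 2% of sessions convert
average_order_value = 150.00  # dollars per order
revenue_per_session = conversion_rate * average_order_value  # $3.00

average_cpc = 2.00
if average_cpc < revenue_per_session:
    print(f"OK: a click costs ${average_cpc:.2f} and returns ${revenue_per_session:.2f} on average")
else:
    print(f"Problem: a click costs ${average_cpc:.2f} but returns only ${revenue_per_session:.2f}")
```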

4. Irrelevant Matching

Sometimes, high CPCs occur because you are bidding on keywords that match to irrelevant but expensive queries. For example, a branding agency bidding on “branding agency” might match to “marketing agencies” – a highly competitive term that probably doesn’t align with their specialty.

The Fix: Keep an eye on your search terms report, and either restrict your match types or add negatives as needed.

5. Seasonality And Auction Dynamics

CPCs can spike due to external factors like Q4 seasonality or a new competitor entering the auction. While this isn’t a “mistake,” it is a warning that your efficiency is about to drop – or has already dropped – through factors beyond your control.

The Fix: Keep an eye on your impression share and auction insights, so that you can quickly spot anomalies and plan accordingly. For seasonal businesses, analyze year-over-year data as well as month-over-month, so that seasonal swings don’t take you by surprise.

Case Study: The $29 Click That Saved The Account

It’s one thing to know that, in theory, higher CPCs are better. It’s another thing to believe it, trust it, and let it happen to your campaigns. Allow me to share a real-life example with you from a local lead generation business.

The Challenge

My Google Ads coaching client, a digital marketing agency that specializes in home services businesses, hired me after becoming dissatisfied with their white-label PPC freelancer. The Google Ads campaign for one of their electrician clients was performing poorly, and he was threatening to fire the agency.

When we looked in the account, here’s what we saw:

  • Search campaign with 2,100 keywords on manual CPC.
  • Average CPC: $1.77.
  • Conversion rate: 1.5%.
  • Conversions (leads): 6 per month.
  • Search impression share: under 10%.

The Change

I recommended a structural overhaul: a Search campaign with just 23 exact match keywords, with overhauled ad text to fix spelling errors (yes, really) and add clear value propositions like “No Call Out Fee.” I also switched the bid strategy from manual CPC to maximize conversions.

The Immediate Result

Four days after launching the new strategy, my client emailed me in a panic. The average CPC had skyrocketed from $1.77 to $29. He assumed that we had “broken” the campaign and asked, “Why am I paying $29 for a click?”

The Immediate Outcome

Despite the CPC sticker shock, the Search campaign was actually performing significantly better after just four days. The CPC had jumped from $1.77 to $29 per click under maximize conversions bidding, but the conversion rate had also jumped from 1.5% to 27%. That meant that even though we were only four days into the new structure, the cost per lead had already decreased from $121 to $107.
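A back-of-envelope check shows why the scarier CPC was still the better deal. These are approximations from the quoted CPC and conversion rates, so they won’t exactly match the account’s reported cost-per-lead figures:

```python
# Cost per lead ≈ cost per click / conversion rate.
# Approximations based on the figures quoted in the case study.
def cost_per_lead(cpc: float, conversion_rate: float) -> float:
    return cpc / conversion_rate

print(f"Manual CPC:           ${cost_per_lead(1.77, 0.015):.0f} per lead")  # ≈ $118
print(f"Maximize conversions: ${cost_per_lead(29.00, 0.27):.0f} per lead")  # ≈ $107
```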

High CPCs were the price of admission for quality leads in a competitive big city.

The Unexpected Plot Twist

The story didn’t end there. A few days later, the account’s “Auto-Apply Recommendations” surreptitiously added broad match keywords. Any Google Ads practitioner knows that this can tank performance, but because the campaign was on a smart bidding strategy with sufficient conversion data, it actually improved performance even further. (I promise Google didn’t pay me to say that!)

In the two weeks that broad match keywords were turned on, the campaign generated 34 leads at an average CPA of $48.

Compare this to the month prior, when the electrician only got six leads from Google Ads at $121 cost per lead. Now, he was getting 34 leads in just two weeks, for a fraction of the cost – and anecdotally, he told my client that most were high quality.

The Victim Of Success

The problem eventually became too much success; the electrician was a small business owner and simply couldn’t handle the volume of leads from Google Ads. My client had to pause most of his ad groups, bringing lead volume back down.

But this case perfectly illustrates the high CPC paradox: A low CPC ($1.77) delivered junk volume. A high CPC ($29.00) proved the concept and delivered quality. A blended approach (broad match + smart bidding) eventually settled the metrics in the middle, but we never would have gotten there if we had optimized for cheap clicks from day one.

In Google Ads, Prioritize CPA And ROAS

As Google’s algorithms get smarter and more pervasive, our role as Google Ads practitioners continues to shift. We are no longer day-traders trying to buy individual clicks for pennies. We are investors looking for a return.

Stop optimizing for CPC. Instead, focus on cost per acquisition (CPA) or return on ad spend (ROAS). If you are acquiring customers within your target efficiency, the cost of the individual click is irrelevant. As our electrician found out, a $29 click that converts is infinitely more valuable than a $1.77 click that doesn’t.

More Resources:


Featured Image: ImageFlow/Shutterstock

The Technical SEO Audit Needs A New Layer via @sejournal, @slobodanmanic

The standard technical SEO audit checks crawlability, indexability, website speed, mobile-friendliness, and structured data. That checklist was designed for one consumer: Googlebot.

This is how it’s always been.

In 2026, your website has at least a dozen additional non-human consumers. AI crawlers like GPTBot, ClaudeBot, and PerplexityBot train models and power AI search results. User-triggered agents like the newly announced Google-Agent, or its “siblings” Claude-User and ChatGPT-User, browse websites on behalf of specific humans in real time. A Q1 2026 analysis across Cloudflare’s network found that 30.6% of all web traffic now comes from bots, with AI crawlers and agents making up a growing share. Your technical audit needs to account for all of them.

Here are the five layers to add to your existing technical SEO audit.

Layer 1: AI Crawler Access

Your robots.txt was probably written for Googlebot, Bingbot, and maybe a few scrapers. AI crawlers need their own robots.txt rules, and they need to be separate from Googlebot and Bingbot.

What To Check

Review your robots.txt for rules targeting AI-specific user agents: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider, AppleBot-Extended, CCBot, and ChatGPT-User. If none of these appear, you’re running on defaults, and those defaults might not reflect what you actually want. Never accept the defaults unless you know they are exactly what you need.
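One quick way to run that review is a small script that fetches robots.txt and reports which of these user agents are mentioned at all. This is a minimal sketch: the URL is a placeholder, and a mention is not the same as a correct rule, but it shows where you are silently running on defaults.

```python
# Minimal robots.txt coverage check: which AI user agents are named explicitly?
# example.com is a placeholder; matching is a simple substring check.
import urllib.request

AI_AGENTS = [
    "GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Bytespider",
    "Applebot-Extended", "CCBot", "ChatGPT-User", "OAI-SearchBot",
]

with urllib.request.urlopen("https://www.example.com/robots.txt") as response:
    robots_txt = response.read().decode("utf-8", errors="replace").lower()

for agent in AI_AGENTS:
    status = "explicit rule present" if agent.lower() in robots_txt else "no rule (defaults apply)"
    print(f"{agent}: {status}")
```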

The key is making a conscious decision per crawler rather than blanket allowing or blocking everything. Not all AI crawlers serve the same purpose. AI crawler traffic can be split into three categories: training crawlers that collect data for model training (89.4% of AI crawler traffic according to Cloudflare data), search crawlers that power AI search results (8%), and user-triggered agents like Google-Agent and ChatGPT-User that browse on behalf of a specific human in real time (2.2%). Each category warrants a different robots.txt decision.

Cloudflare Radar data showing traffic volume by crawl purpose (Q1 2026); Screenshot by author, April 2026

The crawl-to-referral ratios from Cloudflare’s Radar report can make this an informed decision for you. Anthropic’s ClaudeBot crawls about 20,600 pages for every single referral it returns. OpenAI’s ratio is 1,300:1. Meta sends no referrals. Blocking OpenAI’s OAI-SearchBot or PerplexityBot reduces your visibility in ChatGPT Search and Perplexity’s AI answers. Blocking training-focused crawlers like CCBot or Meta’s crawler stops data extraction by providers that send zero traffic back. The crawl-to-referral ratios tell you who is taking without giving.

There is one crawler that requires special attention. Google added Google-Agent to its official list of user-triggered fetchers on March 20, 2026. Google-Agent identifies requests from AI systems running on Google infrastructure that browse websites on behalf of users. Unlike traditional crawlers, Google-Agent ignores robots.txt. Google’s position is that since a human initiated the request, the agent acts as a user proxy rather than an autonomous crawler. Blocking Google-Agent requires server-side authentication, not robots.txt rules. This is both interesting and important for the future, even if it’s beyond the scope of this article.

Official documentation for each crawler:

Layer 2: JavaScript Rendering

Googlebot renders JavaScript using headless Chromium. There is nothing new about that. What is new and different is that virtually every major AI crawler does not render JavaScript.

| Crawler | Renders JavaScript |
| --- | --- |
| GPTBot (OpenAI) | No |
| ClaudeBot (Anthropic) | No |
| PerplexityBot | No |
| CCBot (Common Crawl) | No |
| AppleBot | Yes |
| Googlebot | Yes |

AppleBot (which uses a WebKit-based renderer) and Googlebot are the only major crawlers that render JavaScript. Four of the six major web crawlers (GPTBot, ClaudeBot, PerplexityBot, and CCBot) fetch static HTML only, making server-side rendering a requirement for AI search visibility, not an optimization. If your content lives in client-side JavaScript, it is invisible to the crawlers training OpenAI, Anthropic, and Perplexity’s models and powering their AI search products.

What To Check

Run curl -s [URL] on your critical pages and search the output for key content like product names, prices, or service descriptions. If that content isn’t in the curl response, GPTBot, ClaudeBot, and PerplexityBot can’t see it either. Alternatively, use View Source in your browser (not Inspect Element, which shows the rendered DOM after JavaScript execution) and check whether the important information is present in the raw HTML.

Curl fetch of the No Hacks homepage (Image from author, April 2026)
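If you prefer to script the check, the same idea works in a few lines of Python. This is a rough sketch: the URL and key phrases are placeholders, and it only confirms whether critical content appears in the unrendered HTML that non-rendering crawlers receive.

```python
# Fetch raw HTML without executing JavaScript and look for critical content.
# URL and phrases are placeholders; adapt them to your own pages.
import urllib.request

URL = "https://www.example.com/product"
KEY_PHRASES = ["Acme Widget 3000", "$199", "Free shipping"]

request = urllib.request.Request(URL, headers={"User-Agent": "static-html-audit"})
raw_html = urllib.request.urlopen(request).read().decode("utf-8", errors="replace")

for phrase in KEY_PHRASES:
    marker = "FOUND  " if phrase in raw_html else "MISSING"
    print(f"{marker} {phrase!r}")
```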

Single-page applications (SPAs) built with React, Vue, or Angular are particularly at risk unless they use server-side rendering (SSR) or static site generation (SSG). A React SPA that renders product descriptions, pricing, or key claims entirely on the client side is sending AI crawlers a blank page with a link to the JavaScript bundle.

The fix isn’t complicated. Server-side rendering (SSR), static site generation (SSG), or pre-rendering solves this for every major framework. Next.js supports SSR and SSG natively for React, Nuxt provides the same for Vue, and Angular Universal handles server rendering for Angular applications. The audit just needs to flag which pages depend on client-side JavaScript for critical content.

Layer 3: Structured Data For AI

Structured data has been part of technical SEO audits for years, but the evaluation criteria need updating. The question is no longer just “does this page have schema markup?” It’s “does this markup help AI systems understand and cite this content?”

What To Check

  • JSON-LD implementation (preferred over Microdata and RDFa for AI parsing).
  • Schema types that go beyond the basics: Organization, Article, Product, FAQ, HowTo, Person.
  • Entity relationships: sameAs, author, publisher connections that link your content to known entities.
  • Completeness: are all relevant properties populated, or are you just checking a box using skeleton schemas with name and URL?
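As a point of reference, here is a minimal sketch of what “complete and connected” means in practice. Every name, URL, and date is a placeholder; the point is the entity relationships (author, publisher, sameAs), not the specific values.

```python
# Hypothetical JSON-LD sketch built in Python so it can be printed or templated.
# All values are placeholders; note the author/publisher/sameAs connections.
import json

organization = {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "foundingDate": "2012-04-01",
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "datePublished": "2026-04-01",
    "author": {
        "@type": "Person",
        "name": "Jane Author",
        "sameAs": "https://www.linkedin.com/in/jane-author",
    },
    "publisher": organization,
}

print(json.dumps(article, indent=2))  # paste into a <script type="application/ld+json"> tag
```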

Why This Matters Now

Microsoft’s Bing principal product manager Fabrice Canel confirmed in March 2025 that schema markup helps LLMs understand content for Copilot. The Google Search team stated in April 2025 that structured data gives an advantage in search results.

No, you can’t win with schema alone. Yes, it can help.

The data density angle matters too. The GEO research paper by Princeton, Georgia Tech, the Allen Institute for AI, and IIT Delhi (presented at ACM KDD 2024, first to publicly use the term “GEO”) found that adding statistics to content improved AI visibility by 41%. Yext’s analysis found that data-rich websites earn 4.3x more AI citations than directory-style listings. Structured data contributes to data density by giving AI systems machine-readable facts rather than requiring them to extract meaning from prose.

An important caveat: No peer-reviewed academic studies exist yet on schema’s impact on AI citation rates specifically. The industry data is promising and consistent, but treat these numbers as indicators rather than guarantees.

W3Techs reports that approximately 53% of the top 10 million websites use JSON-LD as of early 2026. If your website isn’t among them, you’re missing signals that both traditional and AI search systems use to understand your content.

Duane Forrester, who helped build Bing Webmaster Tools and co-launched Schema.org, argues that schema markup is only step one. As AI agents continue moving from simply interpreting pages to making decisions, brands will also need to publish operational truth (pricing, policies, constraints) in machine-verifiable formats with versioning and cryptographic signatures. Publishing machine-verifiable source packs is beyond the scope of a standard audit today, but auditing structured data completeness and accuracy is the foundation verified source packs build on.

Layer 4: Semantic HTML And The Accessibility Tree

The first three layers of the AI-readiness audit cover crawler access (robots.txt), JavaScript rendering, and structured data. The final two address how AI agents actually read your pages and what signals help them discover and evaluate your content.

Most SEOs evaluate HTML for search engine consumption. Agentic browsers like ChatGPT Atlas, Chrome with auto browse, and Perplexity Comet don’t parse pages the way Googlebot does. They read the accessibility tree instead.

The accessibility tree is a parallel representation of your page that browsers generate from your HTML. It strips away visual styling, layout, and decoration, keeping only the semantic structure: headings, links, buttons, form fields, labels, and the relationships between them. Screen readers like VoiceOver and NVDA have used the accessibility tree for decades to make websites usable for people with visual impairments. AI agents now use the same tree to understand and interact with web pages.

And the reason is simple: efficiency. Processing screenshots is both more expensive and slower than working with the accessibility tree.

What an accessibility tree looks like in Google Chrome (Image from author, April 2026)

This matters because the accessibility tree exposes what your HTML actually communicates, not what your CSS (or JS) makes it look like. A <div> styled to look like a button doesn’t appear as a button in the accessibility tree. An image without alt text means nothing. A heading hierarchy that skips from H1 to H4 creates a broken structure that both screen readers and AI agents will struggle to navigate.

Microsoft’s Playwright MCP, the standard tool for connecting AI models to browser automation, uses accessibility snapshots rather than raw HTML or screenshots. Playwright MCP’s browser_snapshot function returns an accessibility tree representation because it’s more compact and semantically meaningful for LLMs. OpenAI’s documentation states that ChatGPT Atlas uses ARIA tags to interpret page structure when browsing websites.

Web accessibility and AI agent compatibility are now the same discipline. Proper heading hierarchy (H1-H6) creates meaningful sections that AI systems use for content extraction. Semantic elements like <nav>, <main>, <article>, and <footer> tell machines what role each content block plays. Form labels and descriptive button text make interactive elements understandable to agents that parse the accessibility tree instead of rendering visual design.

What To Check

  • Heading hierarchy: logical H1-H6 structure that machines can use to understand content relationships.
  • Semantic elements: nav, main, article, section, aside, header, footer, used appropriately.
  • Form inputs: every input has a label, every button has descriptive text.
  • Interactive elements: clickable things use <button> or <a>, not a <div> with a click handler.

  • Accessibility tree: run a Playwright MCP snapshot or test with VoiceOver/NVDA to see what agents actually see.
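A small script can flag the most common structural break, skipped heading levels, before you reach for a full accessibility audit. This is an illustrative sketch using only the standard library; the URL is a placeholder.

```python
# Flag heading-level jumps (e.g., H1 -> H4) that break the structure
# screen readers and AI agents rely on. URL is a placeholder.
from html.parser import HTMLParser
import urllib.request

class HeadingAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.previous_level = 0

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.previous_level and level > self.previous_level + 1:
                print(f"Skipped level: h{self.previous_level} -> h{level}")
            self.previous_level = level

raw_html = urllib.request.urlopen("https://www.example.com").read().decode("utf-8", errors="replace")
HeadingAudit().feed(raw_html)
```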

Somehow, things are getting worse on this front. The WebAIM Million 2026 report found that the average web page now has 56.1 accessibility errors, up 10.1% from 2025.

ARIA (Accessible Rich Internet Applications) usage increased 27% in a single year. ARIA is a set of HTML attributes that add extra semantic information to elements, telling screen readers and AI agents things like “this div is actually a dialog” or “this list functions as a menu.” But what’s critical is this: pages with ARIA present had significantly more errors (59.1 on average) than pages without ARIA (42 on average). Adding ARIA without understanding it makes things worse, not better, because incorrect ARIA overrides the browser’s default accessibility tree interpretation with wrong information. Start with proper semantic HTML. Add ARIA only when native elements aren’t sufficient.

Technical SEOs do not need to become accessibility experts. But treating accessibility as someone else’s problem is no longer viable when the same tree that screen readers parse is now the primary interface between AI agents and your website.

Sidenote: The Markdown Shortcut Doesn’t Work

Serving raw markdown files to AI crawlers instead of HTML can result in a 95% reduction in token usage per page. However, Google Search Advocate John Mueller called this “a stupid idea” in February 2026 on Bluesky. Mueller’s argument was this: “Meaning lives in structure, hierarchy and context. Flatten it and you don’t make it machine-friendly, you make it meaningless.” LLMs were trained on normal HTML pages from the beginning and have no problems processing them. The answer isn’t to create a flat, simplified version for machines. It’s to make the HTML itself properly structured. Well-written semantic HTML already is the machine-readable format. Besides, that simplified version already exists in the accessibility tree, and it is what AI agents already use.

Layer 5: AI Discoverability Signals

The final layer covers signals that don’t fit neatly into traditional audit categories but directly affect how AI systems discover and evaluate your website.

llms.txt (dis-honourable mention). Listed first for one reason only: ask any LLM what you should do to make your website more visible to AI systems, and llms.txt will be at or near the top of the list. It’s their world, I guess. The llms.txt specification provides a simple markdown file that helps AI agents understand your website’s purpose, structure, and key content. No large-scale adoption data has been published yet, and its actual impact on AI citations is unproven. But LLMs consistently recommend it, which means AI-powered audit tools and consultants will flag its absence. It takes minutes to create and costs nothing to maintain.

OK, now that we’ve got that out of the way, let’s look at what might really matter.

AI crawler analytics. Are you monitoring AI bot traffic? Cloudflare’s AI Audit dashboard shows which AI crawlers visit, how often, and which pages they hit. If you’re not on Cloudflare, check server logs for Google-Agent, ChatGPT-User, and ClaudeBot user agent strings. Google publishes a user-triggered-agents.json file containing IP ranges that Google-Agent uses, so you can verify whether incoming requests are genuinely from Google rather than spoofed user agent strings.
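If you’re not on Cloudflare, a rough log tally is enough to see whether these agents are showing up at all. A minimal sketch, assuming a plain-text access log where the user agent string appears on each request line; the log path and agent list are placeholders.

```python
# Count requests whose log line contains a known AI agent user agent substring.
# Log path is a placeholder; adjust the agent list to the crawlers you care about.
from collections import Counter

AGENT_SUBSTRINGS = ["Google-Agent", "ChatGPT-User", "ClaudeBot", "PerplexityBot", "GPTBot"]
hits = Counter()

with open("/var/log/nginx/access.log", encoding="utf-8", errors="replace") as log_file:
    for line in log_file:
        for agent in AGENT_SUBSTRINGS:
            if agent in line:
                hits[agent] += 1

for agent, count in hits.most_common():
    print(f"{agent}: {count} requests")
```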

Entity definition. Does your website clearly define what the business is, who runs it, and what it does? Not in marketing copy, but in structured, machine-parseable markup. Organization schema should include name, URL, logo, founding date, and sameAs links to verified profiles on LinkedIn, Crunchbase, and Wikipedia. Person schema for key people should connect them to the organization via author and employee properties. AI systems need to resolve your identity as a distinct entity before they can confidently recommend you over competitors with similar names or offerings. Don’t slap this on top of your website when your designer is done with their work. Start here; it will make your life easier.

Content position. Where you place information on the page directly affects whether AI systems cite it. Kevin Indig’s analysis of 98,000 ChatGPT citation rows across 1.2 million responses found that 44.2% of all AI citations come from the top 30% of a page. The bottom 10% earns only 2.4-4.4% of citations regardless of industry. Duane Forrester calls this “dog-bone thinking”: strong at the beginning and end, weak in the middle, a pattern Stanford researchers have confirmed as the “lost in the middle” phenomenon. Audit your key pages: are the most important claims and data points in the first 30%, or buried in the middle?

Content extractability. Pull any key claim from your page and read it in isolation. Does it still make sense without the surrounding paragraphs? AI retrieval systems like ChatGPT, Perplexity, and Google AI Overviews extract and cite individual passages, and sentences that rely on “this,” “it,” or “the above” for meaning become unusable when pulled from their original context. Ramon Eijkemans’ excellent utility-writing framework maps these principles to documented retrieval mechanisms: self-contained sentences, explicit entity relationships, and quotable anchor statements that AI systems can confidently cite without additional inference.

The Audit Checklist

| Check | Tool/Method | What You’re Looking For |
| --- | --- | --- |
| AI crawler robots.txt | Manual review | Conscious per-crawler decisions |
| JavaScript rendering | curl, View Source, Lynx browser | Critical content in static HTML |
| Structured data | Schema validator, Rich Results Test | Complete, connected JSON-LD |
| Semantic HTML | axe DevTools, Lighthouse | Proper elements, heading hierarchy |
| Accessibility tree | Playwright MCP snapshot, screen reader | What agents actually see |
| AI bot traffic | Cloudflare, server logs | Volume, pages hit, patterns |

From Audit To Action

This audit identifies gaps. Fixing them requires a sequence, because some fixes depend on others. Optimizing content structure before establishing a machine-readable identity means agents can extract your information, but can’t confidently attribute it to your brand. I wrote Machine-First Architecture to provide that sequence: identity, structure, content, interaction, each pillar building on the previous one.

Why The Technical SEO Audit Is Where This Belongs

None of this is technically SEO. Robots.txt rules for AI crawlers don’t affect Google rankings. Accessibility tree optimization doesn’t move keyword positions. Content position scoring has nothing to do with search indexing.

But most of it did grow out of technical SEO. Crawl management, structured data, semantic HTML, JavaScript rendering, server log analysis: these are skills technical SEOs already have. The audit methodology transfers directly. The consumer it serves is what changed.

The websites that get cited in AI responses, that work when Chrome auto browse visits them, and that show up when someone asks ChatGPT for a recommendation won’t be the ones with the best content alone. They’ll be the ones whose technical foundation made that content accessible to machines. Technical SEOs are the people best equipped to build that foundation. The old audit template just needs a new section to reflect it.

More Resources:


Featured Image: Anton Vierietin/Shutterstock

At 55, Email Is Vital for Marketers

Email turned 55 this month. Despite its age, the messaging technology remains vital for ecommerce. It’s one of the few owned media channels a store has.

1971 was a notable year in technology. Intel introduced the first microprocessor. Bell Labs released the Unix operating system. IBM created the floppy disk.

And Ray Tomlinson, an American software engineer, invented email that year while working at BBN Technologies. He chose the now-familiar “@” symbol for addresses and sent the first message on April 23.

Fast forward to 2026, and email offers a direct connection between a business and its audience of customers.

Owned Media

That direct connection is especially important in an AI world. Many of the online channels sellers use to reach their prospects are changing. Ecommerce product discovery is shifting toward AI-driven search, AI recommendations, and agentic shopping.

At its most basic, “email is a way to connect party A to party B,” said Adam Rosen, CEO of the Email Outreach Company.

Advertising is a relationship by proxy. Algorithms decide visibility on social platforms and organic search results. Email lists, by contrast, are owned, providing control, consistency, and a reliable way to reach customers.

Rosen described the email newsletters his company operates as “direct” marketing, though his company owns the subscribers. Nonetheless, email newsletters can attract an audience, keep it engaged, and convert it into sales.

The idea is to focus on a topic related to the products a store sells.

Get the Newsletter

Ecommerce marketers have three primary ways to build a newsletter audience.

  • Start from scratch. Select an email service provider, develop content, and control every aspect of the newsletter — owned media, in other words.
  • Build it, but with help. Rosen’s company and similar services can provide their own subscribers to help launch a newsletter.
  • Buy an established newsletter. Several sites, such as LetterTrader, Flippa, and Acquire.com, offer newsletters for sale.

Grow the Audience

Not surprisingly, the hardest part of developing an email sales channel is building the audience. Acquisition is not necessarily organic.

Rosen, for example, said much of his subscriber growth comes from advertising.

Common tactics include sponsoring other newsletters, running ads on Meta or LinkedIn, and using recommendation networks such as SparkLoop. There are even newsletter growth agencies, including GrowLetter, The Feed Media, and Boletin Growth.

Yet ecommerce companies already advertise to drive immediate sales. Why allocate budget to newsletter growth?

The choice comes down to revenue per subscriber. Marketers who choose subscriber growth bet that, over the long haul, newsletters will generate more profit.

Engagement is the key.

Newsletter content must match the audience’s interests and expectations. A golf newsletter should appeal to golfers. A travel newsletter should reflect how travelers think and plan. Formats can vary. Some audiences respond to short blurbs and images. Others prefer longer, text-driven analysis.

Regardless, each issue delivers content alongside links to products or offers. Over time, the pattern becomes familiar. Readers come to expect both useful information and relevant recommendations. Merchants sell more products more often.

The Fully Non-Human Web: No One Builds The Page, No One Visits It via @sejournal, @slobodanmanic

In January 2026, Google was granted patent US12536233B1. Six engineers worked on it, and it describes a system that scores a landing page on conversion rate, bounce rate, and design quality. If the landing page falls below a threshold, the system generates an AI replacement personalized to the searcher. The advertiser never sees it. Never approves it. Might not even know it happened.

The debate around this patent has centered on scope: Is it limited to shopping ads, or does it signal something broader? That’s the wrong question.

The right question: What happens when you combine AI-generated pages with AI agents that browse, shop, and transact on behalf of humans?

For the first time, we have the infrastructure for a web where no human creates the page and no human visits it. Both sides can be non-human. That changes everything.

The Supply Side: AI-Generated Pages

The supply side of the web has always been human. Someone designs a page, writes copy, publishes it. Three developments are changing that.

Google’s patent US12536233B1 is the most direct: Score a landing page on conversion rate, bounce rate, and design quality, then replace underperforming pages with AI-generated versions. The replacement pages draw on the searcher’s full search history, previous queries, click behavior, location, and device data. Google builds personalized landing pages no advertiser can match, because no advertiser has access to cross-query behavioral data at that scale. Barry Schwartz covered the patent on Search Engine Land, describing a system where Google could automatically create custom landing pages, replacing organic results. Glenn Gabe called Google’s AI landing page patent potentially more controversial than AI Overviews. Roger Montti at Search Engine Journal argued the patent’s scope is limited to shopping and ads. Both camps agree: the technology to score and replace landing pages with AI exists and works.

NLWeb, Microsoft’s open project, takes a different approach. NLWeb turns any website into a natural language interface using existing Schema.org markup and RSS feeds. An AI agent querying an NLWeb-enabled site doesn’t load a page at all. The agent asks a structured question, NLWeb returns a structured answer. The rendered page becomes optional.

WebMCP goes further still. With WebMCP, a website registers tools with defined input/output schemas that AI agents discover and call as functions. A product search becomes a function call. A checkout becomes an API request. WebMCP eliminates the “page” concept entirely, dissolving the web page as a unit of content into a set of callable capabilities.

Each mechanism works differently, but the direction is the same: the page is becoming something generated, queried, or bypassed entirely. The human-designed, human-published web page is no longer the only way content reaches an audience.

The Demand Side: AI Agents As Visitors

The demand side shifted faster. In 2024, bots surpassed human traffic for the first time in a decade, accounting for 51% of all web activity. Cloudflare’s data shows AI “user action” crawling (agents actively doing things, not just indexing) grew 15x during 2025. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026, up from less than 5% in 2025. The scale is hard to overstate.

Agentic browsers are the most visible shift. Chrome’s auto browse turned 3 billion Chrome installations into potential AI agent launchpads. Google’s Gemini scrolls, clicks, fills forms, and completes multi-step tasks autonomously inside Chrome. Perplexity’s Comet browser conducts deep research across multiple sites simultaneously. Microsoft’s Edge Copilot Mode handles multi-step workflows from within the browser sidebar. The full agentic browser landscape now includes over a dozen consumer and developer tools, all browsing on behalf of humans.

Commerce agents have moved past browsing into buying. OpenAI launched Instant Checkout to let users purchase products directly inside ChatGPT, powered by Stripe’s Agentic Commerce Protocol (ACP). OpenAI killed the feature in March 2026 after near-zero purchase conversions and only a dozen merchant integrations out of over a million promised. The failure was execution, not concept: Alibaba’s Qwen app processed 120 million orders in six days in February 2026 because Alibaba owns the AI model, the marketplace, the payment rails (Alipay), and the logistics. OpenAI tried to replicate agentic commerce without owning the stack. Google and Shopify’s Universal Commerce Protocol (UCP) connects over 20 companies, including Walmart, Target, and Mastercard, in a framework designed for AI agents to handle commerce from product discovery through checkout. Shopify auto-opted over a million merchants into agentic shopping experiences with ChatGPT, Copilot, and Perplexity. The transaction happens in an AI conversation. No checkout page loads.

Agent-to-agent communication removes the human from both ends. Google’s Agent-to-Agent (A2A) protocol lets AI agents from different vendors discover each other’s capabilities and collaborate on tasks without human mediation. A travel planning agent negotiates directly with a booking agent. A procurement agent evaluates supplier agents across vendors. Over 150 organizations support A2A, including Salesforce, SAP, and PayPal, making agent-to-agent commerce and coordination a production reality.

When Both Sides Go Non-Human

Until now, one side of the web was always human. A person built the page, or a person visited it. Usually both.

Google’s patent closes the circuit.

Here’s what a complete non-human flow might look like. A user tells their AI assistant they need running shoes. The assistant queries product data through NLWeb or WebMCP, no page load needed. The assistant evaluates options by checking inventory across retailers via A2A. If the user needs to review a comparison, Google generates a landing page personalized to that specific user’s search history and preferences. The assistant completes checkout through ACP or UCP using Shared Payment Tokens. The user receives a confirmation.

The human’s role in that entire flow: stating intent and approving the purchase. Discovery, page generation, product evaluation, and transaction completion are all handled by AI systems. The human touches only the two endpoints of the chain.

Every piece of technology in that chain exists in production today. Chrome auto browse is live for 3 billion Chrome users. A2A has 150+ organizational supporters. ACP underpins Stripe’s agentic commerce infrastructure (ChatGPT’s Instant Checkout failed on execution, not protocol). UCP connects Shopify, Google, Walmart, and Target. Patent US12536233B1 is granted. No single company has assembled the full loop yet, but every component is operational.

Who’s Building The Non-Human Web

Here’s where it gets interesting. Map out who’s building what, and a pattern emerges:

| Layer | What | Who |
| --- | --- | --- |
| Page generation | AI landing pages | Google |
| Content-as-API | WebMCP, NLWeb | Google, Microsoft |
| Agent infrastructure | MCP, A2A | Anthropic, Google |
| Agent browsers | Chrome, Comet, Copilot | Google, Perplexity, Microsoft |
| Agent commerce | ACP, UCP | Stripe + OpenAI, Shopify + Google |
| Edge delivery | Markdown for Agents | Cloudflare |

Google appears in five of six layers: page generation (patent US12536233B1), content-as-API (WebMCP), agent infrastructure (A2A), agent browsers (Chrome auto browse), and commerce (UCP). Google is positioning itself to mediate the non-human web the same way Google mediates the human one through Search.

The Agentic AI Foundation (AAIF), formed under the Linux Foundation with Anthropic, OpenAI, Google, and Microsoft as platinum members, provides the governance layer. The AAIF functions as the W3C for the agentic web: the vendor-neutral body that decides which protocols become standards for agent interoperability.

What Website Owners Need To Know

This isn’t an optimization checklist. It’s three structural shifts in what your website is for.

Your Data Layer Is Your Website

Google’s patent generates landing pages from product feed data, making product feeds the most important asset an ecommerce business maintains. NLWeb queries Schema.org markup instead of rendering pages, making structured markup the front door to your content. WebMCP exposes site capabilities as function calls, making tool definitions the user interface agents interact with.

Structured data, product feeds, JSON-LD, and API surfaces have traditionally been treated as backend infrastructure. In the non-human web, these data layers become the primary way a business reaches customers. Product feed accuracy (specs, pricing, stock levels, images) matters more than homepage design when AI systems generate the page from that feed.

Trust Is The Moat

AI can generate a page. It cannot generate a reason to seek you out by name.

Direct traffic, email subscribers, community members, and brand reputation persist when the page itself becomes replaceable. An AI agent can build a product page, but no AI agent can build the trust that makes a consumer (or their agent) request a specific brand by name.

The brands that matter in the non-human web are the ones people tell their agents to find. “Get me a fleece jacket” is a commodity query. “Get me a fleece jacket from Patagonia” is a brand moat.

The Measurement Problem

How do you measure a page you didn’t build? How do you A/B test against something Google generates dynamically? How do you attribute a conversion that happened inside ChatGPT, initiated by an agent acting on behalf of a user who never saw your website?

Traditional web analytics (page views, sessions, bounce rate, time on site) assume two things: a human visitor and a page you control. On the non-human web, neither assumption holds. A Google-generated landing page isn’t yours. A ChatGPT checkout session doesn’t register in your analytics.

I don’t have a clean answer here, and neither does anyone else. Measurement is the genuinely unsolved problem of the non-human web. New metrics will need to track agent discoverability, agent conversion rate, and data feed quality. But as of March 2026, the measurement infrastructure hasn’t caught up to the technology it needs to measure.

Four Predictions For 2026-2027

Four things to watch over the next 12-18 months.

Google ships patent US12536233B1, or something like it. The technology for scoring and replacing landing pages exists. The business incentive exists. Google has a history of introducing features in ads first, then expanding (Google Shopping went from free to paid to essential). AI-generated landing pages will likely appear in shopping ads first, then broaden to other verticals. Landing page quality scores in Google Ads serve as the early warning system for which pages Google considers replaceable.

Agent traffic becomes measurable. Analytics platforms will need to distinguish human sessions from agent sessions. BrightEdge reports AI agents account for roughly 33% of organic search activity as of early 2026. WP Engine’s traffic data shows 1 AI bot visit for every 31 human visits by Q4 2025, up from 1 per 200 at the start of that year. Agent traffic ratios will accelerate further as Chrome auto browse rolls out globally beyond the US. New metrics around agent conversion rate and agent discoverability will emerge from necessity.

The protocol stack consolidates. MCP, A2A, NLWeb, and WebMCP form a coherent stack covering tool access, agent communication, content querying, and browser-level integration. Expect more interoperability between these protocols and fewer competing standards. The Agentic AI Foundation (AAIF) accelerates consolidation. Within 18 months, “does your site support MCP?” will be as standard a question as “is your site mobile-friendly?”

Brand differentiation gets harder and more important. When AI generates pages and agents do the shopping, the only defensible position is being the brand people (and their agents) seek out by name. Direct relationships, owned audiences, trust signals. Everything else is a commodity.

The Web Splits In Two

When Shopify auto-opted merchants into agentic shopping, I asked whether your website just became optional. The answer is more nuanced than optional or essential. It’s becoming something different.

The web isn’t dying. It’s splitting.

The transactional web (product listings, checkout flows, information retrieval, comparison shopping) is going non-human first. AI generates the landing pages. AI agents visit and transact on those pages. Humans approve decisions at the endpoints. Google’s patent lives in the transactional web, and the economics of conversion optimization push hardest toward automation in this layer.

The experiential web (brand storytelling, community, content that rewards sustained attention, design that creates emotional response) stays human. Not because AI can’t generate brand experiences, but because the value of those experiences comes from the human connection behind them. Nobody tells their agent to “go enjoy a brand experience on my behalf.”

Your website’s new job description: data source for the agents, trust anchor for the humans, brand home for both. The companies that treat their structured data, product feeds, and API surfaces with the same care they give their homepage design are the ones that show up in both worlds.

The non-human web isn’t replacing the human web. It’s growing alongside it. Your job is to show up in both.

This was originally published on No Hacks.



AI Overview CTR Fell 61%, But Clicks Didn’t Collapse via @sejournal, @MattGSouthern

Brand-cited AI Overview CTR fell 61% from Q3 to Q4, according to a new report from Seer Interactive, but the clicks on those pages barely moved.

The drop looks alarming on a dashboard, but it isn’t quite what it seems. Seer’s analysis of 5.47 million queries across 53 brands shows what’s actually happening.

What Happened In Q4

In September, brand-cited pages in AI Overviews received 15.8 million impressions and 398,798 clicks, with a CTR of 2.52%.

In October, impressions doubled to 33.1 million, and clicks increased slightly to 400,271, but CTR dropped to 1.21% as rapid impression growth outpaced clicks.

This isn’t a performance collapse. It’s arithmetic: the denominator grew much faster than the numerator.

November Is A Different Story

November’s impressions rose to 39.5 million, but clicks dropped to 301,783, and CTR fell to 0.76%.

Something pulled clicks down while visibility increased, and Seer’s data can’t explain why. For Q4 as a whole, the two patterns combine into the 61% figure, which is why it’s worth analyzing months separately in Search Console data.
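Recomputing CTR from the raw counts makes the month-level difference obvious. The snippet below just redoes the arithmetic on the figures cited above; it is a sketch of the math, not Seer’s methodology.

```python
# Monthly CTR recomputed from the impression and click counts cited above
# (Seer Interactive's reported figures; only the arithmetic is shown here).
months = {
    "September": {"impressions": 15_800_000, "clicks": 398_798},
    "October":   {"impressions": 33_100_000, "clicks": 400_271},
    "November":  {"impressions": 39_500_000, "clicks": 301_783},
}

for month, counts in months.items():
    ctr = counts["clicks"] / counts["impressions"] * 100
    print(f"{month}: {ctr:.2f}% CTR "
          f"({counts['clicks']:,} clicks / {counts['impressions']:,} impressions)")

# September ~2.52%, October ~1.21%, November ~0.76%.
# October's drop is almost entirely a bigger denominator (clicks held steady);
# November's drop also reflects fewer clicks, which is the pattern that warrants attention.
```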

What Seer Can’t Tell You

The agency is clear on one limit: it can’t determine whether the October impression surge happened because Google served AI Overviews on more queries where brands were already cited, or because the brands earned citations through their SEO. Both explanations fit, and neither can be confirmed without account-level analysis.

Websites with similar data face the same ambiguity. Growing impressions are good if earned, but noise if they result from Google’s decisions. Your dashboard might not clarify this without account-level query analysis.

How This Fits With Past AIO CTR Coverage

Several studies show lower CTRs when AI Overviews appear. Ahrefs analyzed 146 million results and found a 20.5% AIO trigger rate, which was higher for informational and question queries.

A SISTRIX analysis in Germany reported a 59% drop in CTR at position one with AIOs, and Pew Research found that U.S. users clicked 8% of the time with AIOs versus 15% without.

Seer’s October data raises the question of whether a falling CTR on cited pages always means fewer clicks or can indicate greater visibility with the same click count.

Other Findings Worth Noting

Brand-cited pages get about 120% more clicks per impression than uncited pages on AIO SERPs, but cited pages still lag no-AIO pages by 38%. A citation helps, but it doesn’t restore pre-AIO click performance.

Seer reports that organic CTR on AIO SERPs rose from 1.3% in December 2025 to 2.4% in February 2026, but calls this a leveling off rather than a recovery and advises against forecasting based on two months’ data.

Why This Matters

A falling CTR in your Q4 data doesn’t necessarily mean you’re losing clicks; check impressions for the same period before assuming there’s a problem.

Benchmarks show general trends, but your data tells your specific story. If clicks stay flat or grow while impressions rise faster, that’s a different situation from an actual decline in traffic.
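One way to run that check is to look at month-over-month growth in clicks and impressions side by side before looking at CTR. The sketch below assumes a CSV exported from Search Console’s performance report with date, clicks, and impressions columns; the filename and column names are assumptions to adjust for your own export.

```python
import pandas as pd

# Assumes a Search Console performance export with "date", "clicks",
# and "impressions" columns; filename and column names are illustrative.
df = pd.read_csv("search_console_export.csv", parse_dates=["date"])

monthly = (
    df.set_index("date")
      .resample("MS")[["clicks", "impressions"]]
      .sum()
)
monthly["ctr_pct"] = monthly["clicks"] / monthly["impressions"] * 100
monthly["clicks_mom_pct"] = monthly["clicks"].pct_change() * 100
monthly["impressions_mom_pct"] = monthly["impressions"].pct_change() * 100

# Falling CTR with flat or rising clicks points to impression growth,
# not lost traffic; falling clicks alongside falling CTR is the real warning sign.
print(monthly.round(2))
```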

Looking Ahead

The main thing to watch is whether added AI Overview visibility starts driving more clicks, or whether cited pages continue absorbing more impressions without much traffic upside.

If that pattern holds, the value of being cited may look different from what CTR alone suggests. You may need to separate visibility, clicks, and citation coverage before deciding whether AI Overview exposure is helping or simply changing how performance gets measured.



Google Pushes “Bounce Clicks” Explanation For AI Overview Traffic Loss via @sejournal, @MattGSouthern

Google’s head of Search, Liz Reid, told Bloomberg’s Odd Lots podcast that AI Overviews are reducing “bounce clicks” from publisher pages, continuing an argument she has made in public appearances since last year.

Reid appeared on the April 23 episode of Odd Lots. Hosts Joe Weisenthal and Tracy Alloway asked how AI Overviews affect publisher traffic and ad revenue.

What Reid Said

Reid described what she called “bounce clicks” as the category of clicks AI Overviews are reducing.

She said users who quickly click and return to search no longer need to visit the page because they get the fact from the Overview. Those wanting to read longer still click through. She acknowledged fewer ad clicks for some queries but said increased query volume balances this. The argument aligns with Reid’s points in other public appearances.

The Pattern

Reid published a Google blog post in August stating that organic click volume from Google Search to websites was “relatively stable” year-over-year and that “quality clicks,” defined as visits where users don’t quickly click back, had increased.

In an October Wall Street Journal interview, she explicitly used the phrase “bounced clicks” and said that ad revenue with AI Overviews had been relatively stable.

The Bloomberg appearance makes the same basic case Reid made in August, describing some lost clicks as low-value visits where users would have quickly returned to Search.

What Reid Didn’t Say

In none of those three appearances has Reid provided supporting data.

Her August blog post included no charts, percentages, or year-over-year comparisons. On Bloomberg, she told Weisenthal and Alloway that Google tracks whether people come to search more often as one of its key signals, without providing numbers.

Weisenthal and Alloway asked about traffic and monetization, but the interview didn’t include follow-up questions requesting evidence for Reid’s explanation.

Google has not publicly shared data that would let outside observers test that distinction.

What Independent Data Shows

Chartbeat data published in the Reuters Institute’s Journalism and Technology Trends and Predictions 2026 report found that global publisher Google search traffic dropped by roughly a third. Google Discover referrals fell 21% year-over-year across more than 2,500 publisher websites.

Seer Interactive’s analysis found that organic click-through rate for queries with AI Overviews fell from 1.76% in 2024 to 0.61% in 2025, a 61% drop. Seer noted those queries tend to be informational searches that historically had lower CTRs.

Pew Research Center’s study of 68,000 real search queries found users clicked on results 8% of the time when AI Overviews appeared, compared with 15% when they did not.

Digital Content Next, a trade body whose members include the New York Times, Condé Nast, and Vox, reported a median 10% year-over-year decline in Google search referrals across 19 member publishers between May and June 2025. DCN CEO Jason Kint said at the time that the member data offered “ground truth” about what was happening to publisher traffic.

Why This Matters

Reid’s “bounce clicks” description answers a question the data raises, but it answers it without data of its own. That’s worth keeping in mind when evaluating any public claim from a platform that controls the measurements.

A business owner can’t verify from Reid’s Bloomberg appearance whether AI Overviews are cutting only low-value clicks or cutting across query types. The independent data measures total clicks and click-through rates, not the subset of clicks Reid describes as low-value. If Google has internal data that separates the two, it hasn’t shared it in the eight months since the August blog post.

Looking Ahead

Reid said that Google measures how often people return to Search. That signal tracks Google’s retention. Publishers need a traffic metric, but Google hasn’t shared one. Until it does, “bounce clicks” should be treated as a claim rather than a finding.

Google’s Updates Push Search Further Into Task Completion via @sejournal, @MattGSouthern

Google announced three updates to Search and AI Mode this week, which Roger Montti reported for SEJ. Reading his article motivated me to examine these updates, the broader pattern, and their implications for search this year.

Looking at this in detail, it appears the updates push more of what used to be a results-page experience into task completion.

What Google Announced

Google launched individual hotel price tracking in Search, now available globally for signed-in users searching in English and Spanish. Email alerts notify users of rate changes during selected dates.

Additionally, in March, Canvas trip planning in AI Mode moved from Labs preview to general U.S. availability, allowing users to describe trips and receive custom itineraries with flights, hotels, and attractions that save automatically. Agent-powered store calling, first introduced in classic Search, will soon roll out to AI Mode, enabling Google’s AI to call nearby stores and check inventory using Gemini models and Duplex.

Rose Yao, Product Leader in Search, posted the updates on X. Additional detail sits in Google’s blog post.

The Pattern

These updates reflect Google’s product direction seen in research, patents, and executive statements since January.

In January, Google published the SAGE research paper on training agents for reasoning chains over four steps, laying groundwork for multi-step tasks in Search.

Pichai’s April interview made the language public. Pichai said, “A lot of what are just information-seeking queries will be agentic in Search.” Our deep dive tracked how his language shifted from “search will change” to specific descriptions of task completion.

Earlier this month, Montti argued that task-based agentic search was already changing SEO, citing Google’s global rollout of agentic restaurant booking as evidence that the future tense in Pichai’s language was already past tense in product.

A week ago, the U.S. Patent Office published a Google continuation patent titled “Autonomously providing search results post-facto” (our coverage). The filing describes a system that waits for answers when none are immediately available, then delivers them later through assistant interactions.

These updates continue in the same direction. Canvas moves from Labs preview to broader U.S. availability, approximately five months after its initial launch in November. Store calling has been introduced in AI Mode following its debut in Search last November. Additionally, hotel price tracking is now available in Search at the single-property level.

Microsoft’s recent news fits the same pattern. Sumit Chauhan, President of Microsoft’s Office Product Group, wrote in a company blog post that Copilot’s agentic capabilities are now generally available in Word, Excel, and PowerPoint:

“Copilot creates the most value when it performs the work—formatting, restructuring, building visuals, and transforming data—rather than just suggesting steps.”

The features are the default for Microsoft 365 Copilot and Premium subscribers, and available to Personal and Family plans. It’s unclear whether businesses will receive similar reporting for agent-driven surfaces, a point not addressed in Microsoft’s post.

The Vocabulary Hasn’t Settled

Google uses “agentic” in its product language and announcements, describing features like store calling and AI Mode as task-oriented. A SeatGeek partnership was called “Google’s Agentic AI Search Experience.” Other companies use similar agent-framing language.

Pichai envisions a future in which Search becomes “an agent manager” overseeing various tasks, a framing that positions Google as an orchestration layer on top of agents rather than a direct competitor to them.

Montti has used “task-based agentic search” in his recent SEJ coverage, sometimes shortened to TBAS. That’s his shorthand for this beat, not industry-standard terminology.

“Agentic” describes the capability. “Agent manager” refers to a specific architectural role that Google is claiming. “Task-based” centers the user’s goal. When three different labels show up in one month, the market is still working out what to call this.

Why This Matters For Search Professionals

Features introduced this week change the meaning of visibility across several business categories.

Local retailers now encounter a new discovery surface. When store calling arrives in AI Mode, Google’s agents, rather than users, will contact businesses to verify stock and details. Google hasn’t disclosed which stores its agents will contact first, how eligibility is decided, or whether specific business information influences the process.

An analysis of 68 million AI crawler visits across 858,457 Duda-hosted sites shows that sites with connections to Yext, Google Business Profile, and review systems were crawled more often than those without. These findings describe crawler behavior, not agent calls. It’s unknown if similar signals influence which stores are contacted.

Hotels and travel businesses now face individual-property price monitoring, and trip itineraries depend on Canvas’s selection logic. No reporting shows whether a hotel appeared in a Canvas plan, triggered a price alert, or was named in an AI Mode response.

Publishers face continued pressure from AI-driven summarization. Index Exchange analyzed 1,200 publishers on its exchange platform, finding that 69% experienced year-over-year declines in ad opportunities, with an average drop of 14%.

Declines varied across verticals. Health and careers publishers saw 40-50% ad drops, while news and politics publishers saw only 7% declines.

Vanessa Otero, Founder and CEO of Ad Fontes Media, told Index Exchange for the same piece:

“When it’s important enough that you want to be accurately and fully informed about some big international, national, or local event, a quality news site is still a much better experience than asking an AI chatbot, which may give a genericized or inaccurate answer. AI users already know this, which is why most news consumers still go direct to their trusted sites. News has always performed well for advertisers, and if the trend of news site resilience holds, this inventory will likely become the most valuable on the open web of the future.”

Travel publishers face pressure as Canvas compiles itineraries without citing sources, making it impossible for publications to know if their coverage influences trip plans.

Ecommerce retailers lack visibility into which stores get called, so they can’t determine if inventory feeds, listing accuracy, or Google Business Profile signals are effective.

Multi-platform coverage complicates strategy. Google’s agents favor structured data and verified profiles. Perplexity Computer routes across 19 models with diverse retrieval preferences. ChatGPT Atlas scrapes browser content directly. OpenAI’s Operator uses GUI vision to interact with rendered pages.

One business has multiple discovery mechanisms with varying technical needs. Single-strategy optimization no longer covers all surfaces.

What’s Still Invisible

Since our coverage flagged the measurement gap, it has widened.

Search professionals still can’t see whether their business was included in a Canvas trip plan. They can’t see whether an agent called them. They can’t see whether their hotel was surfaced in a price-tracking alert. And they can’t see how often their content was used to assemble someone else’s itinerary.

No new reporting surfaces shipped alongside the updates. Alphabet reported $63.1 billion in Google Search & Other advertising revenue for Q4 2025, up 17% year-over-year, with management crediting Search and Cloud acceleration and AI usage gains, yet no new tools arrived to help businesses track their role in AI-mediated search.

The pattern holds across platforms. ChatGPT referral data is limited to what OpenAI shares. Perplexity citation visibility is inside Perplexity. Google’s agent surfaces don’t cleanly map to Search Console.

Academic research on agent training continues to advance. Two April 2026 papers on arXiv show the pace. CW-GRPO, from Junzhe Wang and colleagues, proposes reinforcement-learning improvements for multi-turn search agents. SKILL0, developed by Zhengxi Lu and colleagues at Zhejiang University, trains agents to internalize skill packages. The result is agents that operate without instruction overhead during inference.

The training pipeline is evolving faster than the measurement pipeline businesses depend on. Search professionals can’t close that gap alone. Google, OpenAI, Perplexity, and Anthropic would all need to provide equivalent agent-surface reporting. None has publicly committed to doing so.

Looking Ahead

Pichai said that 2027 would be “an important inflection point for certain things.” He cited non-engineering workflows and some agentic business processes. Our coverage walked through that timeline.

May brings Google I/O and Microsoft Build. Both companies are likely to expand their agentic surfaces at those events, making reporting the most urgent thing to watch. If businesses can’t see their role in task-based search, they can’t optimize for it or argue about who should pay for it.

Two longer-running questions sit behind that. The first is monetization: pay-per-click worked when users clicked links, but store calling, Canvas planning, and price tracking don’t produce clicks, and no platform has described a replacement. The second is data standards: Schema.org was designed for search engine crawling, not for agents that need real-time inventory, booking availability, and action endpoints, and standards for agent-readable business data haven’t caught up.

What happens next depends on whether any platform builds the reporting alongside the capability. So far, none has described how it would. Until that changes, businesses will be optimizing for surfaces they can’t see. Next signals land at I/O and Build in three weeks.

More Resources