AI Overviews Clicks Get Tested, Earnings Tell Two Stories – SEO Pulse via @sejournal, @MattGSouthern

This week’s Pulse covers how AI Overviews affect click behavior, what independent research shows, and what earnings reports from Google and Microsoft reveal about search revenue.

Here’s what matters for you and your work.

Reid Repeats “Bounce Clicks” Argument On Bloomberg

Google’s head of Search, Liz Reid, told Bloomberg’s Odd Lots podcast that AI Overviews are reducing “bounce clicks” from publisher pages. She has made versions of this argument in public appearances since last year.

Key facts: Reid described bounce clicks as visits where users quickly click a page, get a fact, and leave, noting AI Overviews remove such visits rather than deeper ones. Google hasn’t provided data to verify this, and third-party analyses show lower click-through rates when AI Overviews are present.

Why This Matters

Reid’s explanation has stayed consistent across at least three public appearances over the past year. The argument is that lost clicks were low-value to begin with, so publishers aren’t losing the visits that matter. The problem is that Google still hasn’t shared the data behind that claim.

Until Google publishes traffic or engagement metrics that separate bounce clicks from deeper visits, the explanation is a narrative, not a finding.

Read our full coverage: Google Pushes “Bounce Clicks” Explanation For AI Overview Traffic Loss

Field Experiment Finds AI Overviews Cut Organic Clicks 38%

Researchers at the Indian School of Business and Carnegie Mellon University published a working paper testing the effects of AI Overviews on user behavior in a randomized field experiment.

Key facts: The study used a Chrome extension to assign 1,065 U.S. participants to three groups: normal Search, Search without AI Overviews, and AI Mode. When AI Overviews appeared, organic clicks dropped 38%, and zero-clicks rose 33%. Removing AI Overviews did not affect satisfaction, perceived quality, or ease of finding information.

Why This Matters

The authors describe their work as the first randomized experiment to isolate the causal effect of AI Overviews on clicks. Prior studies from Seer, Chartbeat, and Pew were observational or correlational. The randomized design allows the researchers to say that AI Overviews caused the click reduction, not just that the two appeared together.

The satisfaction finding puts pressure on Reid’s argument. If removing AI Overviews doesn’t reduce user satisfaction, it’s harder to argue that the lost clicks were primarily low-value visits.

Read our full coverage: Google’s AI Overviews Cut Clicks Without Satisfaction Gain: Report

Google Search Revenue Grew 19% In Q1

Alphabet reported Q1 2026 revenue of $109.9 billion. Google Search revenue hit $60.4 billion, up 19% year over year, accelerating from 17% growth in Q4 2025.

Key facts: CEO Sundar Pichai said queries are at an all-time high and that AI experiences are tied to increased Search usage. Google Cloud crossed the $20 billion quarterly revenue mark, up 63%. Pichai told analysts that more information about Search will come at Google I/O in May.

Why This Matters

The revenue growth doesn’t settle the click-impact question. Google reported higher Search revenue and more queries, but those numbers describe the ad business, not the publisher traffic side. Higher revenue is consistent with both “clicks are fine” and “clicks are down, but ad yield per query is up.”

Google’s AI features may be creating new ad opportunities, but the earnings data doesn’t show whether your pages are getting more or fewer clicks from AI-influenced results.

What People Are Saying

Matthew Scott Goldstein, Independent Analyst/Advisor/Consultant at .msg, wrote on LinkedIn:

“This is what extraction at scale looks like dressed up as innovation. The same content fueling AI Overviews, Gemini answers, and enterprise token volume is the content publishers have sued over, lost referral traffic over, and watched get re-monetized inside a closed product.”

Read our full coverage: Google Search Revenue Grew 19% In Q1, Pichai Cites AI

Microsoft Says Bing Reached 1 Billion Monthly Active Users

Microsoft announced during its Q3 FY2026 earnings call that Bing has reached 1 billion monthly active users for the first time. CEO Satya Nadella revealed the figure alongside an 18% overall revenue increase to $82.9 billion.

Key facts: Search ad revenue, excluding traffic acquisition costs, grew 12% year over year. Edge maintained browser market share gains for the 20th straight quarter. The segment that includes Bing was down 1% overall at $13.2 billion.

Why This Matters

The 1 billion MAU milestone is notable, but Bing’s global search share sits at about 5% per StatCounter’s March 2026 data. That gap suggests the MAU figure needs context. Microsoft hasn’t defined frequency, overlap, or how AI-related Bing usage is counted.

On the AI search measurement side, Microsoft previewed Citation Share and three other Bing Webmaster Tools features at SEO Week earlier this month. When those ship, they could give Bing Webmaster Tools users a clearer way to compare AI citation visibility against competitors on Bing.

Read our full coverage: Microsoft Says Bing Reached 1B Monthly Active Users

Theme Of The Week: Everyone Is Measuring A Different Part Of Search

Every story this week is about the same question asked from a different angle: What is AI doing to search traffic?

Reid says the lost clicks were low-value. The field experiment shows that the lost clicks came without any trade-off in user satisfaction. Google’s earnings say revenue is up 19%. Microsoft’s earnings say Bing hit a user milestone, but it still holds a 5% share. Each one measures something real, and none of them measure the same thing.

The gap between what platforms report and what publishers experience doesn’t appear to be closing. The public data needed to answer the click question directly still isn’t available. Per-query click behavior segmented by AI feature presence isn’t in any tool that Google or Microsoft has shipped.



Featured Image: PeopleImages/Shutterstock; Paulo Bobita/Search Engine Journal

New: AI Brief And Text Disclaimers Come To Google AI Max via @sejournal, @brookeosmundson

Google is rolling out two new features for AI Max that aim to address a common tension: the gap between manual control and automated execution.

The first new feature is called AI Brief, which allows advertisers to guide AI using natural language inputs.

The other feature announced was Text Disclaimers, which address a long-standing limitation for regulated industries.

If you’re already using AI Max or debating whether to adopt it, keep reading to understand how these can impact your campaigns.

AI Brief Gives Advertisers A Direct Way To Guide AI

Google Gemini powers the new AI Brief feature. Advertisers can guide AI Max using their own words by providing more context on the brand, messaging inputs, and audiences.

Google grouped this into three types of guidelines:

  • Messaging Guidelines: Tell AI Brief exactly what ads should or shouldn’t say. Use words like “always” or “never” to make it clear.
  • Matching Guidelines: Create search query boundaries for the types of searches you want to show up for, or to avoid.
  • Audience Guidelines: Tell AI Brief about the type of consumer you’re going after to serve them more tailored messages.

AI Brief for AI Max is rolling out in English for Search campaigns in the coming months. It will then gradually roll out to Shopping and Performance Max campaigns.

Text Disclaimers With Final URL Expansion (FUE)

For anyone in a regulated industry that needed more control over ad copy, this update’s for you.

Until now, text customization could only be used if FUE wasn’t enabled.

Advertisers that require specific legal or compliance language have often avoided Final URL Expansion. Missing required disclosures can create legal, brand, and approval risk.

Google’s new text disclaimers ensure required text always appears in your ads while FUE is enabled. This means advertisers can maintain their required ad compliance and still let AI serve a different landing page when it better aligns with a user’s search.

Per the announcement, text disclaimers are rolling out in the coming weeks globally in all languages.

What This Means For Advertisers

These are the types of updates that should make every marketer happy, in my opinion.

Google is giving advertisers a clearer way to communicate intent with their AI Brief, instead of having to rely on signals like past performance or feeds. We can now define how the system should approach messaging, matching, and audiences from the start.

That matters in accounts where nuance plays a role. Brand voice, product positioning, and audience differences are not always captured cleanly through existing inputs.

Text disclaimers are a huge opportunity, not only for highly regulated industries, but for any advertiser who needed strict text control for one reason or another.

Google deserves credit here by starting to build in controls that make automation usable for advertisers with stricter requirements.

There will still be a need to validate how these features perform in practice. Advertisers should monitor how well AI Brief translates guidance into actual outputs, and confirm that disclaimers are consistently applied across variations.

But this is a meaningful step toward broader adoption of AI Max across industries that have historically been more cautious.

Looking Ahead

With Google Marketing Live coming up, this feels like more groundwork for other AI Max announcements.

If these features land well, it wouldn’t be surprising to see Google expand on them with more industry-specific control or deeper guidance inputs tied to business data.

Will you test these features when they launch, now that some of the risk has been addressed?



Featured Image: Google/Edited by Author

AI Search Clicks Often Go To Local Domains: Report via @sejournal, @MattGSouthern

Aleyda Solis, founder of Orainti, analyzed 87 million AI search visits across 10 markets, finding most clicks go to local domains rather than global defaults.

Using Similarweb data, she examined more than 57,000 domain-market entries in the ‘click-producing layer.’ This layer includes visits to a domain after users click citations or links in AI-generated answers.

The analysis complicates the assumption that the biggest global brands automatically dominate AI search results.

The Main Pattern

In non-US markets, local domains with stronger signals drive the click layer. For example, Bol.com leads in Dutch ecommerce, MercadoLivre in Brazil, Bahn.de in Germany, and Lefrecce.it in Italy, ahead of global competitors like Amazon or Booking.com.

Solis suggests this reflects who has the usable answer locally, not brand size. For instance, Lefrecce has train route data for Milan to Rome, while Booking.com does not. Thus, AI search visibility often depends on local infrastructure.

Different Verticals, Different Rules

In ecommerce, five domains account for 50% of clicks, with platforms like Amazon dominating. Finance is less concentrated, needing 17 domains to reach that share, while travel is highly fragmented at 47. Within finance, Stripe ranks first in 7 of 10 markets, driven by demand from B2B, developers, merchants, and infrastructure rather than consumers.

PayPal leads in Germany and Italy. The investing sub-category accounts for 22.4% of finance AI clicks, with TradingView ranking in the top 20 across all markets. Travel discovery and booking are more dispersed. Italy’s ecommerce is concentrated: Amazon.it captures 46.2% of clicks, and over half when combined with Temu. UK travel requires 129 domains to reach 50% of clicks.

Growth Is Uneven

The report reveals churn behind overall growth. The median monthly growth for the top 50 domains was +20% in ecommerce, +25% in finance, and +29.1% in travel. Many markets and verticals saw about 30% to 40% of top domains decline, e.g., Spain ecommerce with 21 of 49 domains and France finance with 22 of 50.

Solis notes that weighted averages can be distorted by small-base spikes, citing domains like azulviagens.com.br and innovasport.com with large one-month jumps that warrant investigation rather than signaling trends. Momentum offers more insight than a static snapshot: a declining top domain may need more attention than a steady top-50 position.
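The report doesn’t publish its screening method, but the small-base caveat Solis raises is easy to operationalize. As a sketch, a growth calculation with a minimum-base flag might look like this (all domains, numbers, and the MIN_BASE threshold below are hypothetical, not from the report):

```python
# Hypothetical monthly AI-search visit counts per domain (illustrative only;
# not data from the Similarweb analysis).
visits = {
    "amazon.it": {"prev": 500_000, "curr": 560_000},
    "bol.com": {"prev": 300_000, "curr": 390_000},
    "tinyshop.example": {"prev": 400, "curr": 2_000},  # small-base spike
}

MIN_BASE = 1_000  # below this base, treat the % change as a prompt to investigate

for domain, v in visits.items():
    growth = (v["curr"] - v["prev"]) / v["prev"]
    note = " (small base: investigate, not a trend)" if v["prev"] < MIN_BASE else ""
    print(f"{domain}: {growth:+.1%}{note}")
```

A +400% jump from 400 visits gets flagged, while a +12% move on a 500,000-visit base is treated as signal.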

Why This Matters

For brands working across multiple markets, the data suggests that AI search competitors may not be the same competitors they track in traditional SEO.

In Italian travel, the key domain for rail intent may be Lefrecce.it. In Dutch ecommerce, it may be Bol.com. In German travel, it may be Bahn.de.

Solis recommends a straightforward audit question: who holds the operational data, structured inventory, or institutional trust that AI needs for category tasks in each market?

Looking Ahead

The report highlights three gaps for international brands: presence in AI-driven answers, click acquisition, and domain ownership of customer relationships.

Solis plans to update the analysis monthly. The next pull will show whether the local-domain pattern holds.


Featured Image: RobinRmD/Shutterstock

Microsoft Says Bing Reached 1B Monthly Active Users via @sejournal, @MattGSouthern

Microsoft announced that Bing has reached 1 billion monthly active users for the first time. CEO Satya Nadella revealed this figure during the Q3 FY2026 earnings call.

Revenue from search ads, excluding traffic acquisition costs, increased by 12% year over year. Additionally, Edge has gained browser market share for 20 consecutive quarters.

Overall, Microsoft reported total revenue of $82.9 billion for the quarter, marking an 18% increase.

Search & Advertising

The segment that includes Bing was down 1% overall at $13.2 billion. Search advertising was the bright spot, with CFO Amy Hood pointing to higher volume and revenue per search.

Nadella was direct about where the consumer business stands:

“When it comes to our consumer business, we are doing the foundational work required to win back fans and strengthen engagement across Windows, Xbox, Bing, and Edge. In the near term, we are focused on fundamentals, prioritizing quality and serving our core users better.”

Search ad growth has held in double digits for three straight quarters. It grew 16% in Q1 FY2026, 10% in Q2, and 12% this quarter. For Q4, Microsoft guided that growth to the high single digits, a step down.

Back in 2023, Microsoft reported 100 million daily active users when it first added AI to Bing. Going from 100 million daily to 1 billion monthly is a big jump, though it’s unclear whether Copilot interactions count toward that number.

Edge is part of the story, too. It typically defaults to Bing, so five years of Edge growth means more people landing on Bing without actively choosing it.

Why This Matters

Edge has gained share for five straight years, and search ad revenue has grown in double digits for three consecutive quarters.

Microsoft has also been building the measurement tools to go with it. Bing Webmaster Tools now maps grounding queries to cited pages, and Microsoft previewed Citation Share at SEO Week earlier this month.

Still, Bing’s global search share sits at about 5% worldwide per StatCounter’s March 2026 data. That gap between 1 billion MAU and 5% share suggests a lot of those users are low-frequency or showing up through default settings rather than choosing Bing.

Looking Ahead

Microsoft’s next earnings call will show whether search ad growth picks back up or settles into single digits.

The Citation Share feature Microsoft previewed at SEO Week hasn’t shipped yet. When it does, it could be the first tool for tracking how your site’s AI visibility on Bing compares to competitors.

Google Launches AI Max For Shopping and Travel Campaigns via @sejournal, @brookeosmundson

Google has officially expanded AI Max to Shopping and Travel campaigns as it hits one year in market.

The announcement comes ahead of Google’s annual Google Marketing Live event on May 20.

Google noted that AI Max has become the fastest-growing AI-powered Search ads product.

Both AI Max for Shopping and Travel campaigns are rolling out as closed betas globally in all languages.

Read on to understand how AI Max will work with Shopping campaigns and travel-specific vertical ads.

AI Max for Shopping Campaigns

Google confirmed that it will use an account’s linked Merchant Center feed to create dynamic Shopping ads that help answer “conversational queries.”

One of the reasons Google is expanding AI Max to the Shopping ads format is that it’s become more “difficult to manually meet every search with the right ad.”

AI Max for Shopping is meant to better capture long-tail searches and showcase your ad in a way that meets the user where they’re at.

Three components of AI Max for Shopping campaigns include:

  • Text customization: Creates ad copy for Shopping ads to better align with shopper intent and conversational searches
  • Final URL Expansion (FUE): Matches your website’s most relevant landing page(s) to the shopper’s intent
  • Optimal Format Selection: Automatically selects the best format (either text-only or Shopping ads) based on what’s most relevant to the individual shopper

Credit: Google, April 2026

Similar to the rollout of AI Max for Search, advertisers will be able to upgrade to AI Max with one click. However, you can turn off Final URL Expansion (FUE) at any time.

Existing product targeting controls and bidding structures will still stay in place.

Travel Ads Shifting to Search Campaigns For Travel

Not only is AI Max coming to Travel ad formats, but the way travel ads are managed is changing.

Google announced the shift to Search Campaigns for Travel, which brings in travel feeds and formats into standard Search campaigns.

The goal is to simplify workflow while providing more AI-powered campaign management.

Credit: Google, April 2026

Some of the benefits Google noted with this change include:

  • Consolidated buying door: Consolidates multiple campaign types into a single campaign, while retaining all previous features and advanced controls across travel formats.
  • Real-time enhancements: Utilize travel feed and keywords, as well as AI Max functionality.
  • New and unified reporting: Travel ad format data will now be in one view because of the migration to Search campaigns

What This Means For Advertisers

Google is expanding AI Max while many advertisers are still evaluating the first version of it.

But, for most advertisers, this isn’t yet available and it may be weeks or months before it rolls out to general availability.

Some accounts have seen positive results from broader query coverage and automated optimization. Others have questioned how much visibility they lose in exchange, especially as Dynamic Search Ads begin shifting into AI Max. For advertisers who relied on tighter controls, that hesitation is understandable.

In the meantime, while advertisers wait for AI Max expansion in their accounts, the best thing to do now is clean up the areas automation depends on.

That can include items like optimizing Shopping and Travel feeds, landing pages, and reviewing conversion tracking accuracy.

Additionally, if you’re already running AI Max for Search, keep a close eye on the types of queries your ads are already showing up for. A good negative keyword strategy going into this expansion can save time and money.

Looking Ahead

With Google Marketing Live coming up, this announcement likely sets the stage for a broader AI Max push.

It wouldn’t be surprising to see Google expand it beyond individual campaign types, along with more clarity on reporting and when advertisers should or shouldn’t use it.

Measurement will likely be part of that conversation as well, especially as advertisers continue asking where performance is actually coming from.

We will have a clearer picture soon, but AI Max is quickly becoming a bigger part of how Google expects campaigns to run moving forward.



Featured Image: Prostock-studio/Shutterstock

Google Search Revenue Grew 19% In Q1, Pichai Cites AI via @sejournal, @MattGSouthern

Alphabet reported Q1 2026 earnings, with Google Search & Other revenue rising 19% year over year to $60.4 billion. CEO Sundar Pichai tied the quarter’s Search performance to AI Overviews and AI Mode, saying people are “coming back to Search more.”

Q1 revenue was lower sequentially than Q4 2025, when Search & Other came in at $63.1 billion, but year-over-year growth increased from 17% to 19%. Total Alphabet revenue reached $109.9 billion, up 22%.

What Pichai Said About Search

In his prepared remarks, Pichai connected the Search number to AI experiences, stating:

“People love our AI experiences like AI Mode and AI Overviews, and they’re coming back to Search more.”

Pichai also said, “queries are at an all-time high.” He described “strong growth in both users and usage of AI Mode globally” without sharing an exact figure. Past Google disclosures put AI Mode at roughly 100 million monthly active users and 75 million daily.

Pichai said AI Overviews “are driving overall Search growth.” Liz Reid made a similar engagement argument on Bloomberg’s Odd Lots earlier this month, describing AI Overviews as reducing low-value clicks rather than reducing useful traffic.

New Data On Search Speed And AI Costs

Pichai shared two efficiency figures.

The first was latency. Pichai said:

“Even as we’ve brought new AI features into our results page, we’ve reduced Search latency by more than 35% over the past five years.”

The second was the cost of running AI responses. He continued:

“Since upgrading AI Overviews and AI Mode to Gemini 3, we’ve reduced the cost of core AI responses by more than 30% thanks to continued hardware and engineering breakthroughs.”

Search Updates Pichai Highlighted

Pichai highlighted three Search rollout items from the quarter.

Personal Intelligence “expanded broadly in the U.S.,” referring to Google’s March expansion of Personal Intelligence to free U.S. users.

Agentic experiences shipped to new countries. Pichai cited restaurant booking as an early example of what he has called “search as an agent manager.”

Search Live multimodal capabilities went global.

Why This Matters

Over the past year, SEO professionals worried AI Overviews would reduce clicks to sites by satisfying user intent on the results page. Q1 numbers challenge that idea. If AI were cannibalizing traditional search, query volume and revenue would flatten. Instead, both increased.

But this doesn’t mean concerns are unfounded. “All-time high queries” doesn’t imply all-time high publisher clicks. Google hasn’t disclosed click-through rates or revenue split between AI Mode and traditional ads. More queries could mean fewer clicks per query if AI answers resolve intent early.

However, the revenue growth indicates the search ecosystem is expanding, even as user interaction patterns shift.

Looking Ahead

Google’s earnings show AI features are expanding search, but key questions remain about monetization and click-through rates.

Pichai said more info about Search will be shared at Google I/O in May and Google Marketing Live.

Comparison Of AI Citation Patterns Offers Strategic SEO Insights via @sejournal, @martinibuster

BrightEdge published new data showing the different kinds of sites five AI search surfaces tend to show in generated answers. The data makes it possible to see how those differences shape which types of sites each AI engine shows, with strong implications for how to promote to each one.

The research focused on five AI search surfaces:

  1. ChatGPT
  2. Google AI Overviews
  3. Google AI Mode
  4. Google Gemini
  5. Perplexity

AI Engines Cite Different Sources But Recommend The Same Brands

The BrightEdge research compared the top cited website sources across AI engines to measure how much they overlap (Source Overlap). The data shows a wide discrepancy across the five AI search surfaces: the lowest level of overlapping source citations between any two surfaces was 16%, and the highest was 59%.

  • Lowest source overlap: 16%
  • Highest source overlap: 59%
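
BrightEdge doesn’t disclose its exact overlap formula. One common way to quantify agreement between two engines’ citation lists is the Jaccard index over their top cited domains; the domain sets below are hypothetical, for illustration only:

```python
# Hypothetical top-cited domain sets for two AI search surfaces
# (illustrative; not BrightEdge's actual data or necessarily its formula).
engine_a = {"wikipedia.org", "reddit.com", "nytimes.com", "mayoclinic.org", "gov.uk"}
engine_b = {"wikipedia.org", "reddit.com", "forbes.com", "healthline.com", "gov.uk"}

def citation_overlap(a: set[str], b: set[str]) -> float:
    """Jaccard index: domains cited by both engines / domains cited by either."""
    return len(a & b) / len(a | b)

print(f"Overlap: {citation_overlap(engine_a, engine_b):.0%}")  # 3 shared of 7 total
```

Two engines that cite entirely different sources score 0%; identical citation lists score 100%, which puts the reported 16%-59% range in context.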

Significant Agreement In Brand Citations

BrightEdge also measured brand name overlap between the five AI search surfaces and found more agreement: the lowest overlap between any two surfaces was 36%, and the highest was 55%.

  • Lowest brand overlap: 36%
  • Highest brand overlap: 55%

This suggests that brands tightly associated with specific products and services tend to perform similarly across most of the tested AI search surfaces. It may also reflect how widely those brands are cited by trusted websites, as well as user intent and expectations.

In my opinion, the takeaway here is that associating a brand with a product or service in a consumer’s mind is a powerful way to influence user expectations, which can then translate into branded search. The SEO community has been slow to pick up on this, even though Google has been hinting at user signals playing a strong role in rankings; Google has been doing this since at least 2004 (Navboost) and most directly with the brand navigation signals in search (Google’s brand signals patent).

Wide Divergence Of Cited Sources

BrightEdge analyzed citations from the five AI surfaces across three types of websites (Institutional, Commercial and Editorial, and User Generated Content) and discovered wide variance between all five engines, despite the convergence on citing strong brands.

Three Categories Of Sites Analyzed

  1. Institutional sites, including government, academic, and big brand industry leaders
  2. Commercial and editorial sites, including media, reviews, and listings
  3. User Generated Content (UGC), including forums, video platforms, and social content

The data shows that every engine draws from all three categories, but weights the mix differently: institutional sources range from a low citation rate of 10% to a high of 26% of citations. Citations of UGC sites range from a low of 0.2% to a high of 18% of citations.

The largest category overlap across all five search engines is found in citations of corporate brand, commercial, and editorial sites, ranging from a low of 37% on Gemini to a high of 51% on AI Overviews.

BrightEdge offers this takeaway about that data:

“Review sites, comparison content, trade press, retailer listings, and finance data are the sources AI most frequently reaches for. Investment in PR, trade coverage, review site visibility, and category comparison content translates into visibility across every engine, not just one.”

Something that BrightEdge doesn’t mention is that AI search engines surface sponsored articles from trusted websites that are clearly labeled to conform with FTC guidelines on native advertising and Google’s guidelines on sponsored posts. This enables companies to tightly associate their brands with specific products and services and increase the likelihood of being cited in AI search surfaces.

Gemini And AI Overviews Differ On Website Authoritativeness

The difference between the kinds of websites Gemini and Google AI Overviews use as sources shows that Gemini is more conservative, citing institutional sites at a much higher rate than user generated content (UGC). Institutional sites include government, academic, and big brand sites.

AI Overviews, on the other hand, trusts both institutional and UGC sources of information, with nearly twice as many citations going to UGC websites.

Authoritativeness of institutional versus UGC content:

  • Gemini: 26% institutional, 0.2% community
  • AI Overviews: 10% institutional, 18% community

Another revealing finding is the wide variance in the top-level domains cited by each AI search surface. Gemini tended to link out only to the most trustworthy and authoritative websites. For example, Gemini cited .gov and .org websites at higher rates than any of the other AI engines.

Gemini: 13% .gov, 23% .org

Gemini’s answers favor institutional websites over user generated content, citing institutional sites 26% of the time while citing UGC sites only a fraction of a percentage point. AI Overviews trusts UGC content to a vastly greater extent. Why is that?

It could be that the technologies underlying Gemini and AI Overviews differ. For example, Google’s FastSearch, which prioritizes speed over other ranking signals, may be a reason why UGC sites appear as sources more often in AI Overviews than in Gemini. It’s an interesting question.

I did an informal experiment by asking both Gemini and AI Overviews to compare the use of a specific op-amp (an electrical part) in a specific amplifier.

  • Gemini’s answer cited institutional sources (Texas Instruments and the amplifier’s manufacturer).
  • AI Overviews cited the two institutional websites but also multiple user generated content (UGC) sites.


AI Overviews citations of various UGC sites were useful in the context of this question because actual users shared their experiences with this op-amp as well as actual electronic measurements of the op-amp and comparisons to other ones.

.Edu Sites Not Authoritative?

Another interesting finding is that none of the AI search engines cite .edu websites often. Perplexity cited .edu sites at the highest rate of any engine, at just 3.2% of the time.

Those results contradict a longstanding belief in SEO circles that .edu sites are more authoritative. BrightEdge’s research suggests that .edu sites are not treated as authoritative sources for the kinds of questions users are asking AI search engines.

ChatGPT Cites A Higher Diversity Of Sources

The data also shows that ChatGPT draws on a more diverse set of sources, relying on its top ten sources only 18.5% of the time, with Google AI Mode close behind at 19.4%. Gemini (26.3%) and Perplexity (26.7%) concentrate more of their citations in their top ten sources.

Percentage Of Top 10 Sources

  • ChatGPT: 18.5%
  • Google AI Mode: 19.4%
  • Gemini: 26.3%
  • Perplexity: 26.7%

Gemini And Perplexity Rely On Authoritative Sites

Gemini and Perplexity tended to rely the most on authoritative websites. As already noted, Gemini trusted institutional sites the most and Perplexity cited .edu sites more than any of the other AI engines.

Perplexity showed a similar pattern of conservatively linking out to the most trusted and authoritative sites. BrightEdge’s report explains:

“Perplexity concentrates more of its citations in institutional medical, government, encyclopedic, and medical publisher sources than any other engine. Combined, those four categories account for approximately 30% of Perplexity’s citations.”

Five AI Engines, Five Distinct Citation Profiles

Here is the breakdown showing the citation distribution for each AI search surface, with Gemini and Perplexity showing a strong preference for authority sites.

Gemini

  • 26% institutional sites
  • 23% .org
  • 13% .gov
  • 0.2% UGC

Perplexity

  • 86% of brand mentions appear in position 5 or earlier
  • 30% of citations from institutional medical, government, encyclopedic, and publisher sources
  • 22% institutional sites
  • 3.2% .edu
  • 1.5% UGC sites

ChatGPT

  • Top 10 sources account for 18.5% of citations
  • 20% .org
  • 12% .gov
  • 0.5% UGC

Google AI Mode

  • Top 10 sources account for 19.4% of citations
  • 14% institutional sites
  • 7% UGC

Google AI Overviews

  • 18% UGC
  • 10.6% of citations from a single video platform
  • 10% institutional sites
  • 2.9% from a forum platform

Google AI Is Not One System

Google’s AI Mode and AI Overviews cite largely the same websites, with a 59% overlap in cited sites. Gemini has the least overlap with the other two:

  • Gemini vs AI Overviews: 34%
  • Gemini vs AI Mode: 27%

These differences show that Google’s AI systems rely on different mixes of sources, with Gemini diverging the most.
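BrightEdge doesn’t publish its overlap formula, but the idea can be sketched as a set comparison of each engine’s cited domains. A minimal sketch with hypothetical domain sets (the function and data below are illustrative assumptions, not BrightEdge’s method):

```python
# Hypothetical sketch: measure citation overlap between two engines as the
# share of one engine's cited domains that also appear in the other's.
def overlap(cited_a: set[str], cited_b: set[str]) -> float:
    """Percentage of engine A's cited domains that engine B also cites."""
    if not cited_a:
        return 0.0
    return len(cited_a & cited_b) / len(cited_a) * 100

# Illustrative domain sets (not real BrightEdge data)
ai_mode = {"example.org", "example.gov", "wiki.example.com", "news.example.com"}
ai_overviews = {"example.org", "wiki.example.com", "forum.example.net"}

print(round(overlap(ai_mode, ai_overviews), 1))  # 50.0
```

With real citation logs, the same comparison across all five engines would produce the kind of overlap matrix BrightEdge reports.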

Takeaways

The data provides a shorthand for each AI search surface: there is wide variance in source citations, with clear preferences for which kinds of sites each engine links to. If there is one big takeaway from the data, in my opinion it is the importance of establishing a brand connection to your products and services.

Other Takeaways

  • Gemini and Perplexity rely on high authority brand and institutional websites.
  • ChatGPT cites a broader range of sources, drawing on a wider mix of websites.
  • Google’s AI Overviews cite UGC sites more than any other AI search engine.
  • Gemini shows the least amount of overlap among the three Google AI systems.
  • AI Overviews and AI Mode show the highest level of overlap.
  • Citation overlap varies widely across all five AI engines, indicating major differences in source selection.

Read the BrightEdge report: Why AI Engines Cite Different Sources but Recommend the Same Brands

Featured Image by Shutterstock/Toey Andante

OpenAI Crawl Activity Tripled Since GPT-5, Data Shows via @sejournal, @MattGSouthern

OpenAI’s automated crawl activity is estimated to have roughly tripled after the launch of GPT-5, according to a new analysis from Botify and guest author Chris Long.

In Botify’s dataset, OpenAI’s search crawler is now generating more log events than its training crawler. That’s a reversal from the period before GPT-5.

Long, co-founder of the SEO consultancy Nectiv, analyzed roughly 7 billion OpenAI-bot log events from Botify’s enterprise client dataset spanning November 2024 through March 2026.

What The Data Shows

Two of the three OpenAI user agents Botify measured saw activity spike around the GPT-5 launch.

OAI-SearchBot, which retrieves content when ChatGPT performs web searches, recorded about 3.5x more events after August 2025. That works out to roughly 2.2 billion additional events in Botify’s dataset.

GPTBot, which collects training data, recorded about 2.9x more events over the same period. That is another 1.8 billion events.

The third user agent, ChatGPT-User, moved in the opposite direction. Long reports a 28% drop in ChatGPT-User log events between December 2025 and March 2026. ChatGPT-User fires when a ChatGPT session fetches a page on behalf of a user, so the drop measures logged user-initiated fetches rather than ChatGPT usage overall.

Long offers two possible readings. One is that fewer sessions may be triggering real-time page fetches. The other, suggested by Botify’s team, is that OpenAI may be relying more on stored or indexed resources, reducing the need to fetch pages in real time. Long does not pick between them.

Search Bot Now Outpaces Training Bot

Before GPT-5, OAI-SearchBot and GPTBot ran at roughly even volumes in Botify’s dataset, with a ratio of about 0.95 search events per training event. After GPT-5, that ratio rose to about 1.14.
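The before-and-after ratios are consistent with the reported per-crawler multipliers. A quick check using the article’s figures:

```python
# Sanity check on Botify's reported search-to-training crawl ratios.
pre_ratio = 0.95          # OAI-SearchBot events per GPTBot event, pre-GPT-5
search_multiplier = 3.5   # reported post-GPT-5 increase in OAI-SearchBot events
training_multiplier = 2.9 # reported post-GPT-5 increase in GPTBot events

post_ratio = pre_ratio * search_multiplier / training_multiplier
print(round(post_ratio, 2))  # 1.15, in line with the reported ~1.14
```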

The pattern lines up with what Dan Petrovic wrote in August 2025 about GPT-5, arguing that OpenAI was sourcing more answers from live search than from trained memory. Botify’s data is consistent with that read.

Industry Breakdown

The post-GPT-5 search bot increases varied by industry. Healthcare sites saw about 740% more OAI-SearchBot activity after launch; Media and Publishing, 702%; and Marketplaces, Software, and Retail, 190-216%.

Travel sites had the smallest rise at 30%. The balance between search and training crawling also varies by industry. Long reports a +256% OAI-SearchBot-to-GPTBot crawl difference for Media/Publishing, the largest gap. Software and Internet lean toward search, while Healthcare and Retail favor training at -50% and -33%, respectively, meaning GPTBot is more active there overall.

Botify and Long suggest OpenAI routes prompt types differently: news queries trigger live search, while health and product queries rely more on trained knowledge.

How OpenAI’s Crawl Compares To Google’s

Even after tripling, OpenAI’s crawl activity is much smaller than Google’s.

In Botify’s most recent 30-day window, Googlebot registered 18.2 billion events, compared with 887 million events from OpenAI’s crawlers combined. That puts OpenAI at about 5% of Google’s crawl volume.

A year earlier, the same comparison was 15 billion Google events to 207 million OpenAI events, or about 1.38%. The gap is closing, though Google’s crawl is still roughly 20 times larger in absolute terms.

Bingbot registered about 5.49 billion events in the most recent window, putting OpenAI at roughly 16% of Bing’s crawl volume.
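The crawl-share comparisons follow directly from the raw event counts reported above:

```python
# Recompute the crawl-share percentages from the raw event counts in the article.
google_events = 18_200_000_000   # Googlebot, most recent 30-day window
bing_events = 5_490_000_000      # Bingbot, same window
openai_events = 887_000_000      # all OpenAI crawlers combined

openai_vs_google = openai_events / google_events * 100
openai_vs_bing = openai_events / bing_events * 100
print(f"{openai_vs_google:.1f}% of Google's volume")  # 4.9%
print(f"{openai_vs_bing:.1f}% of Bing's volume")      # 16.2%

# Year-earlier comparison: 207M OpenAI events vs 15B Google events
prior_share = 207_000_000 / 15_000_000_000 * 100
print(f"{prior_share:.2f}% a year earlier")           # 1.38%
```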

Methodology & Commercial Context

The dataset is Botify’s, covering enterprise clients in retail, ecommerce, technology, publishing, travel, and marketplaces. The analysis was conducted by Long as a guest author on Botify’s blog.

For transparency, Botify sells log file analysis and AI bot management software, and the post promotes a follow-up webinar and a product demo.

The dataset skews toward large enterprise websites rather than a representative cross-section of the web.

Why This Matters

In Botify’s dataset, OAI-SearchBot now generates more log events than GPTBot. Sites that block only GPTBot are not blocking the bot OpenAI says is used to surface websites in ChatGPT search answers.

Sites that block OAI-SearchBot may be excluding themselves from ChatGPT search answers.
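The distinction matters in robots.txt, where each OpenAI crawler has its own user-agent token. A minimal sketch of a policy that blocks training crawls while staying eligible for ChatGPT search (OpenAI documents these three tokens; adjust the paths to your site):

```txt
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Allow the search crawler that surfaces sites in ChatGPT search answers
User-agent: OAI-SearchBot
Allow: /

# Allow user-initiated page fetches from ChatGPT sessions
User-agent: ChatGPT-User
Allow: /
```

A site that lists only GPTBot in robots.txt leaves both of the other crawlers unaffected, which is the gap described above.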

How This Fits With Other Reports

Botify’s findings line up with patterns other vendors have reported. An Alli AI analysis covered earlier this month found OpenAI’s ChatGPT-User made 3.6x more requests than Googlebot in a smaller WordPress-heavy sample. A Hostinger analysis found OAI-SearchBot’s website coverage reaching 55% while GPTBot coverage fell. Akamai’s recent bot traffic report showed OpenAI leading AI bot traffic to publishing sites.

The reports suggest that AI training crawls and AI search crawls need to be measured separately, especially as OAI-SearchBot activity grows.

Google Tests ‘Ask YouTube’ Conversational Search Experiment via @sejournal, @MattGSouthern

YouTube is testing “Ask YouTube,” a conversational search experience that returns AI-generated text summaries alongside cited videos and supports follow-up questions in a persistent thread.

YouTube describes the feature on its Premium Early Access page as “a new way to search on YouTube that feels more like a conversation.” Users can ask complex questions, receive results that combine video and text, and ask follow-ups to dive deeper.

How It Works

After opting in to the experimental feature, an “Ask YouTube” button appears in the search bar.

Screenshot from: YouTube, April 2026.

When a query is submitted, the page briefly loads, then displays a text summary, a primary cited video linked to a timestamped section, and galleries of longform videos and Shorts.

The experiment is available to Premium subscribers in the US who are 18 or older, searching in English on desktop, and runs until June 8.

How It Behaves In Practice

I tested the feature with a query about reactions to Anthropic’s Claude Opus 4.7 model. Here’s an example to illustrate how Ask YouTube presents results:

Screenshot from: YouTube, April 2026.

In my test, the page displayed a generated title (“User Reactions to Claude Opus 4.7”), a subhead, a summary paragraph, and an embedded video with a timestamp to a related section. Below are citations, related videos, and Shorts.

Follow-up questions can be asked within the same thread. Here’s an example of a follow-up question I asked: “how does it compare to GPT 5.5”

Screenshot from: YouTube, April 2026.

This response even included a comparison table with links to the videos it pulled the data from:

Screenshot from: YouTube, April 2026.

YouTube notes on its experiment page that “quality and accuracy may vary” and asks users to submit thumbs-up or thumbs-down feedback with optional rationale.

Why This Matters

This expands YouTube’s AI search testing beyond the carousel. YouTube first tested AI Overviews in search results last year, showing video clips for product and location queries. Ask YouTube now summarizes content as text upfront, with videos as supporting sources and related results.

For creators, the key question is what makes a video the main citation rather than a supporting item or an omission. YouTube hasn’t shared selection or ranking signals for Ask YouTube.

Looking Ahead

The experiment ends June 8 unless YouTube extends it. We’ll provide an update if YouTube publishes selection signals or rolls the feature out more broadly.


Featured Image: Stockinq/Shutterstock

Bing Previews AI Citation Share For Webmaster Tools via @sejournal, @MattGSouthern

Microsoft previewed four new AI reporting features for Bing Webmaster Tools: citation share, grounding query-intent labels, grounding query topic labels, and Generative Engine Optimization (GEO)-focused recommendations.

Krishna Madhavan, Principal Product Manager at Microsoft AI and Bing, previewed the features during a presentation at SEO Week in New York City. Slides shared by attendees on X show four additions to the AI Performance dashboard.

Citation Share would show the percentage of citations a site captures within a specific grounding query, sitting alongside the raw citation counts already available in the dashboard.

Grounding Query Intent would classify queries into 15 predefined intent labels. Visible labels in the shared screenshots include Learning, Informational Search, Navigational, Research, Comparison, Planning, Conversational, and Content Filtered.

Grounding Query Topic would group queries under topic labels, giving sites a second classification layer alongside intent.

The fourth addition, GEO-focused recommendations, would surface guidance tied to AI visibility. The slide shows recommendation areas, including content structure and crawlability, indexing and canonicalization signals, structured data adoption, and structured data quality.

Microsoft hasn’t published an official blog post about these features. The information available comes from attendee screenshots of the presentation.

https://x.com/ClaraSoteras/status/2048768514677244182?s=20

Why This Matters

The AI Performance dashboard launched in public preview in February, giving sites their first look at how often Microsoft Copilot and Bing AI summaries cite their content. Microsoft expanded it in March with a feature that mapped grounding queries to the specific pages cited for them.

Citation Share would expand that. Citation counts show visibility, while a share metric provides competitive context, indicating whether a site captures most citations for a query or shares them with other sites.

The intent and topic classifications could fix data limits in the dashboard. Queries vary in phrasing, making trend spotting hard. Grouping by intent and topic allows sites to gauge visibility against shared categories instead of individual phrases.

The GEO recommendations are the least defined of the four. The slide labels suggest the focus areas are familiar SEO basics like crawlability, indexing, canonicalization, and structured data, but Microsoft hasn’t specified how recommendations are generated or triggered.

Looking Ahead

Microsoft hasn’t announced release dates for any of the four features. Details on Citation Share calculation, intent and topic taxonomies, and GEO recommendation methods remain undocumented publicly.

Treat these as previews, not shipped features. Watch for official Bing Webmaster or Microsoft Advertising blog posts confirming scope and timing.