Google has launched Audio Overviews, a new test feature in Search Labs. It creates audio summaries of search results using Google’s latest Gemini AI models.
How Audio Overviews Work
Audio Overviews turn Google Search results into audio content. When Google thinks an audio overview might help, you’ll see an option to create a short audio summary right on the results page.
You can see how the interface looks in the example below:
Screenshot from: labs.google.com/search/experiment/ June 2025.
After clicking the button to generate the summary, Google will process the information in the SERP and create an audio snippet.
Google says the feature helps users “get a lay of the land” when searching for topics they are unfamiliar with.
Audio Overviews retains the primary value of Google Search by displaying web pages directly within the audio player. This allows users to click through to explore specific sources.
Technical Requirements and Limitations
To use Audio Overviews, you must sign up for the experiment through Search Labs, Google’s testing platform for new search features. The feature only works in English and only for users in the United States right now.
After clicking the “Generate Audio Overview” button, creation can take up to 40 seconds. Once it’s done, the audio plays directly on the page.
Google has built-in ways for users to give feedback with thumbs-up or thumbs-down ratings. This feedback will likely help Google refine the feature before making it available to a wider audience.
AI Content Considerations
Google is upfront about the technology being experimental. The company notes that “content and voices in this experience are created with AI” and warns that “generative AI is experimental, so there may be inaccuracies and audio glitches.”
While Google emphasizes that Audio Overviews direct users to source content, some publishers may see this as part of a broader trend that reduces click-throughs from search. If AI-generated summaries satisfy user intent too well, they could further shift attention away from original creators.
Google’s inclusion of visible web links in the audio player suggests an effort to maintain attribution. Still, it’s unclear how effective these links are at driving traffic compared to traditional search listings.
Looking Ahead
Audio Overviews mark another step in Google’s efforts to make Search more multimodal and accessible. By offering spoken summaries powered by generative AI, the company is testing how voice-first experiences might complement traditional search behaviors.
While the feature prioritizes linking to source content, its long-term impact on publisher traffic and content attribution remains to be seen.
As with other generative AI experiments in Search, how users respond will likely shape whether and how Google expands this format.
In an attempt to keep up with the LLMs, Google launched AI Overviews and just announced the AI Mode tab.
The expectation is that SERPs will become blended with a Large Language Model (LLM) interface, and the nature of how users search will adapt to conversations and journeys.
However, there is an issue with AI hallucinations and misinformation in LLM- and Google AI Overview-generated results, and it seems to be largely ignored, not just by Google but also by the news publishers it affects.
More worrying is that users are either unaware or prepared to accept the cost of misinformation for the sake of convenience.
Barry Adams is the authority on editorial SEO and works with the leading news publisher titles worldwide via Polemic Digital. Barry also founded the News & Editorial SEO Summit along with John Shehata.
“LLMs are incredibly dumb. There is nothing intelligent about LLMs. They’re advanced word predictors, and using them for any purpose that requires a basis in verifiable facts – like search queries – is fundamentally wrong.
But people don’t seem to care. Google doesn’t seem to care. And the tech industry sure as hell doesn’t care, they’re wilfully blinded by dollar signs.
I don’t feel the wider media are sufficiently reporting on the inherent inaccuracies of LLMs. Publishers are keen to say that generative AI could be an existential threat to publishing on the web, yet they fail to consistently point out GenAI’s biggest weakness.”
The post prompted me to speak to him in more detail about LLM hallucinations, their impact on publishing, and what the industry needs to understand about AI’s limitations.
You can watch the full interview with Barry on IMHO below, or continue reading the article summary.
Why Are LLMs So Bad At Citing Sources?
I asked Barry to explain why LLMs struggle with accurate source attribution and factual reliability.
Barry responded, “It’s because they don’t know anything. There’s no intelligence. I think calling them AIs is the wrong label. They’re not intelligent in any way. They’re probability machines. They don’t have any reasoning faculties as we understand it.”
He explained that LLMs operate by regurgitating answers based on training data, then attempting to rationalize their responses through grounding efforts and link citations.
Even with careful prompting to use only verified sources, these systems maintain a high probability of hallucinating references.
“They are just predictive text from your phone, on steroids, and they will just make stuff up and very confidently present it to you because that’s just what they do. That’s the entire nature of the technology,” Barry emphasized.
This confident presentation of potentially false information represents a fundamental problem with how these systems are being deployed in scenarios they’re not suited for.
Are We Creating An AI Spiral Of Misinformation?
I shared with Barry my concerns about an AI misinformation spiral where AI content increasingly references other AI content, potentially losing the source of facts and truth entirely.
Barry’s outlook was pessimistic, “I don’t think people care as much about truth as maybe we believe they should. I think people will accept information presented to them if it’s useful and if it conforms with their pre-existing beliefs.”
“People don’t really care about truth. They care about convenience.”
He argued that the last 15 years of social media have proven that people prioritize confirmation of their beliefs over factual accuracy.
LLMs facilitate this process even more than social media by providing convenient answers without requiring critical thinking or verification.
“The real threat is how AI is replacing truth with convenience,” Barry observed, noting that Google’s embrace of AI represents a clear step away from surfacing factual information toward providing what users want to hear.
Barry warned we’re entering a spiral where “entire societies will live in parallel realities and we’ll deride the other side as being fake news and just not real.”
Why Isn’t Mainstream Media Calling Out AI’s Limitations?
I asked Barry why mainstream media isn’t more vocal about AI’s weaknesses, especially given that publishers could save themselves by influencing public perception of Gen AI limitations.
Barry identified several factors: “Google is such a powerful force in driving traffic and revenue to publishers that a lot of publishers are afraid to write too critically about Google because they feel there might be repercussions.”
He also noted that many journalists don’t genuinely understand how AI systems work. Technology journalists who understand the issues sometimes raise questions, but general reporters for major newspapers often lack the knowledge to scrutinize AI claims properly.
Barry pointed to Google’s promise that AI Overviews would send more traffic to publishers as an example: “It turns out, no, that’s the exact opposite of what’s happening, which everybody with two brain cells saw coming a mile away.”
How Do We Explain The Traffic Reduction To News Publishers?
I noted research that shows users do click on sources to verify AI outputs, and that Google doesn’t show AI Overviews on top news stories. Yet, traffic to news publishers continues to decline overall.
Barry explained this involves multiple factors:
“People do click on sources. People do double-check the citations, but not to the same extent as before. ChatGPT and Gemini will give you an answer. People will click two or three links to verify.
Previously, users conducting their own research would click 30 to 40 links and read them in detail. Now they might verify AI responses with just a few clicks.
Additionally, while news publishers are less affected by AI Overviews, they’ve lost traffic on explainer content, background stories, and analysis pieces that AI now handles directly with minimal click-through to sources.”
Barry emphasized that Google has been diminishing publisher traffic for years through algorithm updates and efforts to keep users within Google’s ecosystem longer.
“Google is the monopoly informational gateway on the web. So you can say, ‘Oh, don’t be dependent on Google,’ but you have to be where your users are and you cannot have a viable publishing business without heavily relying on Google traffic.”
What Should Publishers Do To Survive?
I asked Barry for his recommendations on optimizing for LLM inclusion and how to survive the introduction of AI-generated search results.
Barry advised publishers to accept that search traffic will diminish while focusing on building a stronger brand identity.
“I think publishers need to be more confident about what they are and specifically what they’re not.”
He highlighted the Financial Times as an exemplary model because “nobody has any doubt about what the Financial Times is and what kind of reporting they’re signing up for.”
This clarity enables strong subscription conversion because readers understand the specific value they’re receiving.
Barry emphasized the importance of developing brand power that makes users specifically seek out particular publications, “I think too many publishers try to be everything to everybody and therefore are nothing to nobody. You need to have a strong brand voice.”
He cited the Daily Mail as an example of a publisher that succeeds through consistent brand identity, with users specifically searching for the brand name alongside topics, such as “Meghan Markle Daily Mail” or “Prince Harry Daily Mail.”
The goal is to build direct relationships that bypass intermediaries through apps, newsletters, and direct website visits.
The Brand Identity Imperative
Barry stressed that publishers covering similar topics with interchangeable content face existential threats.
He works with publishers where “they’re all reporting the same stuff with the same screenshots and the same set photos and pretty much the same content.”
Such publications become vulnerable because readers lose nothing by substituting one source for another. Success requires developing unique value propositions that make audiences specifically seek out particular publications.
“You need to have a very strong brand identity as a publisher. And if you don’t have it, you probably won’t exist in the next five to ten years,” Barry concluded.
Barry advised news publishers to focus on brand development, subscription models, and building content ecosystems that don’t rely entirely on Google. That may mean fewer clicks, but more meaningful, higher-quality engagement.
Moving Forward
Barry’s opinion and the reality of the changes AI is forcing are hard truths.
The industry requires honest acknowledgment of AI limitations, strategic brand building, and acceptance that easy search traffic won’t return.
Publishers have two options: continue chasing diminishing search traffic with the same content everyone else is producing, or invest in direct audience relationships that provide a sustainable foundation for quality journalism.
Thank you to Barry Adams for offering his insights and being my guest on IMHO.
The new AI Mode tab in Google’s results, currently only active in the U.S., enables users to get an AI-generated answer to their query.
You can ask a detailed question in AI Mode, and Google will provide a summarized answer.
Google AI Mode answer for the question [what are the best ways to grow your calf muscles], providing a detailed summary of exercises and tips (Image Credit: Barry Adams)
The critical process is what Google calls a “query fan-out” technique, where many related queries are performed in the background.
The results from these related queries are collected, summarized, and integrated into the AI-generated response to provide more detail, accuracy, and usefulness.
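To make the idea concrete, here is a minimal sketch of how a fan-out pipeline could be structured, assuming hypothetical expand_query, run_search, and fan_out helpers; it illustrates the concept, not Google’s implementation.

```python
# Minimal sketch of a query fan-out pipeline.
# expand_query, run_search, and fan_out are hypothetical stand-ins for illustration.

def expand_query(query: str) -> list[str]:
    """Generate related sub-queries for the original question."""
    # In practice an LLM would propose these; here they are hard-coded.
    return [
        f"{query} for beginners",
        f"{query} common mistakes",
        f"best exercises {query}",
    ]

def run_search(query: str) -> list[dict]:
    """Placeholder for a search backend returning ranked documents."""
    return [{"query": query, "title": f"Result for: {query}", "url": "https://example.com"}]

def fan_out(original_query: str) -> dict:
    """Run the original query plus its expansions, then merge the results."""
    sub_queries = [original_query] + expand_query(original_query)
    collected = [doc for q in sub_queries for doc in run_search(q)]
    # A real system would summarize `collected` with an LLM; here we just return it.
    return {"query": original_query, "sources": collected}

print(fan_out("how to grow calf muscles"))
```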
Having played with AI Mode since its launch, I have to admit it’s pretty good. I get useful answers, often with detailed explanations that give me the information I am looking for. It also means I have less need to click through to cited source websites.
I have to admit that, in many cases, I find myself reluctant to click on a source webpage, even when I want additional information. It’s simpler to ask AI Mode a follow-up question rather than click to a webpage.
Much of the web has become quite challenging to navigate. Clicking on an unknown website for the first time means having to brave a potential gauntlet of cookie-consent forms, email signup pop-ups, app install overlays, autoplay videos, and a barrage of intrusive ads.
The content you came to the page for is frequently hidden behind several barriers-to-entry that the average user will only persist with if they really want to read that content.
And then in many cases, the content isn’t actually there, or is incomplete and not quite what the user was looking for.
AI Mode removes that friction. You get most of the content directly in the AI-generated answer.
You can still click to a webpage, but often it’s easier to simply ask the AI a more specific follow-up question. No need to brave unusable website experiences and risk incomplete content after all.
AI Mode & News
Contrary to AI Overviews, AI Mode will provide summaries for almost any query, including news-specific queries:
AI Mode answer for the [latest news] query (Image Credit: Barry Adams)
Playing with AI Mode, I’ve seen some answers to news-specific queries that don’t even cite news sources, but link only to Wikipedia.
For contrast, the regular Google SERP for the same query features a rich Top Stories box with seven news stories.
With these types of results in AI Mode, the shelf life of news is reduced even further.
Where in search, you can rely on a Top Stories news box to persist for a few days after a major news event, in AI Mode, news sources can be rapidly replaced by Wikipedia links. This further reduces the traffic potential to news publishers.
A Google SERP for [who won roland garros 2025] with a rich Top Stories box vs. the AI Mode answer linking only to Wikipedia (Image Credit: Barry Adams)
There is some uncertainty about AI Mode’s traffic impact. I’ve seen examples of AI Mode answers that provide direct links to webpages in-line with the response, which could help drive clicks.
Google is certainly not done experimenting with AI Mode. We haven’t seen the final product yet, and because it’s an experimental feature that most users aren’t engaged with (see below), there’s not much data on CTR.
As an educated guess, the click-through rate from AI Mode answers to their cited sources is expected to be at least as low as, and probably lower than, the CTR from AI Overviews.
This means publishers could potentially see their traffic from Google search decline by 50% or more.
AI Mode User Adoption
The good news is that user adoption of AI Mode appears to be low.
The latest data from Similarweb shows that after an initial growth, usage of the AI Mode tab on Google.com in the U.S. has slightly dipped and now sits at just over 1%.
This makes it about half as popular as the News tab, which is not a particularly popular tab within Google’s search results to begin with.
It could be that Google’s users are satisfied with AI Overviews and don’t need expanded answers in AI Mode, or that Google hasn’t given enough visual emphasis to AI Mode to drive a lot of usage.
I suspect that Google may try to make AI Mode more prominent, perhaps by allowing users to click from an AI Overview into AI Mode (the same way you can click from a Top Stories box to the News tab), or by integrating it more prominently into the default SERP.
When user adoption of AI Mode increases, the impact will be keenly felt by publishers. Google’s CEO has reiterated their commitment to sending traffic to the web, but the reality appears to contradict that.
In some of its newest documentation about AI, Google strongly hints at diminished traffic and encourages publishers to “[c]onsider looking at various indicators of conversion on your site, be it sales, signups, a more engaged audience, or information lookups about your business.”
AI Mode Survival Strategies
Broad adoption of AI Mode, whatever form that may take, can have several impactful consequences for web publishers.
Worst case scenario, most Google search traffic to websites will disappear. If AI Mode becomes the new default Google result, expect to see a collapse of clicks from search results to websites.
Focusing heavily on optimizing for visibility in AI answers will not save your traffic, as the CTR for cited sources is likely to be very low.
In my view, publishers have roughly three strategies for survival:
1. Google Discover
Google’s Discover feed may soften the blow somewhat, especially with the rollout onto desktop Chrome browsers.
Expanded presence of Discover on all devices with a Chrome browser gives more opportunities for publishers to be visible and drive traffic.
However, a reliance on Discover as a traffic source can encourage bad habits. Disregarding Discover’s inherent volatility, the unfortunate truth is that clickbait headlines and cheap churnalism do well in the Discover feed.
Reducing reliance on search in favor of Discover is not a strategy that lends itself well to quality journalism.
There’s a real risk that, in order to survive a search apocalypse, publishers will chase after Discover clicks at any cost. I doubt this will result in a victory for content quality.
2. Traffic & Revenue Diversification
Publishers need to grow traffic and income from more channels than just search. Due to Google’s enormous monopoly in search, diversified traffic acquisition has been a challenge.
Google is the gatekeeper of most of the web’s traffic, so of course we’ve been focused on maximising that channel.
With the risk of a greatly diminished traffic potential from Google search, other channels need to pick up the slack.
We already mentioned Discover and its risks, but there are more opportunities for publishing brands to drive readers and growth.
Paywalls seem inevitable for many publishers. While I’m a fan of freemium models, publishers will have to decide for themselves what kind of subscription model they want to implement.
A key consideration is whether your output is objectively worth paying for. This is a question few publishers can honestly answer, so unbiased external opinions will be required to make the right business decision.
Podcasts have become a cornerstone of many publishers’ audience strategies, and for good reason. They’re easy to produce, and you don’t need that many subscribers to make a podcast economically feasible.
Another content format that can drive meaningful growth is video, especially short-form video that has multiplatform potential (YouTube, TikTok, Instagram, Discover).
Email newsletters are a popular channel, and I suspect this will only grow. The way many journalists have managed to grow loyal audiences on Substack is testament to this channel’s potential.
And while social media hasn’t been a key traffic driver for many years, it can still send significant visitor numbers. Don’t sleep on those Facebook open graph headlines (also valuable for Discover).
3. Direct Brand Visits
The third strategy, and probably the most important one, is to build a strong publishing brand that is actively sought out by your audience.
No matter the features that Google or any other tech intermediary rolls out, when someone wants to visit your website, they will come to you directly. Not even Google’s AI Mode would prevent you from visiting a site you specifically ask for.
A brand search for [daily mail] in Google AI Mode provides a link to the site’s homepage at the top of the response (Image credit: Barry Adams)
Brand strength translates into audience loyalty.
A recognizable publisher will find it easier to convince its readers to install their dedicated app, subscribe to their newsletters, watch their videos, and listen to their podcasts.
A strong brand presence on the web is also, ironically, a cornerstone of AI visibility optimization.
LLMs are, after all, regurgitators of the web’s content, so if your brand is mentioned frequently on the web (i.e., in LLMs’ training data), you are more likely to be cited as a source in LLM-generated answers.
Exactly how to build a strong online publishing brand is the real question. Without going into specifics, I’ll repeat what I’ve said many times before: You need to have something that people are willing to actively seek out.
If you’re just another publisher writing the same news that others are also writing, without anything that makes you unique and worthwhile, you’re going to have a very bad time. The worst thing you can be as a publisher is forgettable.
There is a risk here, too. In an effort to cater to a specific target segment, a publisher could fall victim to “audience capture”: feeding your audience what they want to hear rather than what’s true. We already see many examples of this, to the detriment of factual journalism.
It’s a dangerous pitfall that even the biggest news brands find difficult to navigate.
Optimizing For AI
In my previous article, I wrote a bit about how to optimize for AI Overviews.
I’ll expand on this in future articles with more tips, both technical and editorial, for optimizing for AI visibility.
Yesterday, I had hiking boots in my cart. Size selected, reviews read, I was even picturing myself on the trail. Then I hesitated. “Will these pinch my wide feet?” Three clicks later, I bounced.
These types of hesitations cost businesses millions.
We’ve gotten excellent at grabbing attention and driving traffic. But success comes down to attention coupled with intention.
The real challenge is optimizing for the micro-moments that determine conversions. Those moments where a finger hovers over “buy.” Eyes flick to the return policy. And then, that dreaded tab back to your competitor.
An essential skill for today’s marketers is conversion design, where we decode hesitation as a behavioral signal.
How do you guide attention toward action? How do you eliminate the friction that causes hesitation? AI can help us spot and solve for these in a way that we haven’t been able to previously.
78% of organizations now use AI in at least one business function according to McKinsey’s 2025 State of AI research, yet most aren’t applying it where it matters most: the critical seconds when attention converts to action.
Understanding The Hesitation Moment
Your visitors have done their research. They’re on your product page, comparing options, genuinely considering a purchase. Then doubt creeps in:
“Will this integration work with our current setup?”
“Is this jacket too warm for Seattle?”
“Can I trust this company with a project this important?”
These small but significant moments determine whether someone converts or walks away. Behavioral science calls this “ambiguity aversion,” our brain’s tendency to avoid uncertain outcomes.
AI is now giving us visibility into these hesitation patterns that were invisible before. Let’s look at how leading brands are responding.
Retail: Removing Size Uncertainty
A Fortune 100 retailer analyzed cart abandonment and discovered shoppers were lingering over size charts before dropping off.
Instead of simply displaying standard measurements, they built a system that detects hesitation patterns and immediately surfaces:
Photos of real customers with height/weight stats wearing that exact item.
One-click connection to a live sizing consultant.
90-day wear reviews showing how fit changed over time.
This resulted in 22% fewer returns and 37% higher conversion rates [Source: Anonymized client data].
Lululemon: AI-Powered Customer Segmentation
Google’s recent case study on Lululemon shows how the activewear brand used AI to address hesitation at scale.
Instead of treating all visitors the same, Lululemon’s AI identifies where customers are in their decision journey and adjusts messaging accordingly.
The results showed a substantial reduction in customer acquisition costs, increased new customer revenue from 6% to 15%, and an 8% boost in return on ad spend (ROAS). The strategy was so effective that it earned top honors at the Google Search Honours Awards in Canada.
B2B: Enterprise Software Hesitation
In B2B, hesitation moments are different but no less critical. Enterprise buyers often get stuck on key concerns like:
Integration compatibility: “Will this work with our existing systems?”
Implementation risk: “What if this disrupts our operations?”
Smart B2B companies use AI to detect these hesitation patterns:
Spending 60+ seconds on pricing pages, especially toggling between tiers.
Downloading technical specs, then immediately visiting competitor comparison pages.
Viewing implementation timelines multiple times without requesting a demo.
Leading SaaS platforms can trigger personalized responses based on these signals, such as custom ROI calculators, implementation case studies from similar companies, or direct connection to technical specialists.
Microsoft’s Conversational AI In Action
Microsoft’s data shows the power of AI in addressing customer hesitation in real-time. Their recent analysis reveals:
AI-powered ads deliver 25% higher relevance compared to traditional search ads.
Copilot ad conversions increased by 1.3x across all ad types since the November 2024 relaunch.
40% of users say well-placed AI-powered ads enhance their online experience.
AI has moved well beyond automating existing processes; it now anticipates uncertainty and responds in real time.
The Hesitation-To-Action Framework
Here’s how to start optimizing for hesitation reduction:
1. Identify Hesitation Moments
Use tools like the following (a simple pattern-detection sketch follows this list):
Heatmaps to see where users pause or hover, e.g., users hover over “compatibility” but don’t click, so add clarity to product specs.
Session recordings to watch actual user behavior, e.g., a user toggles pricing tiers, then exits, indicating confusion or doubt.
Behavioral tracking to identify patterns before drop-off, e.g., users who view the return policy are 2x more likely to abandon their cart.
Sales call logs to find commonly asked questions and concerns, e.g., “How long does onboarding take?” suggests adding a visual onboarding timeline.
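Here is that sketch: a minimal, illustrative way to flag hesitation signals from a session’s event stream. The event names, thresholds, and signal labels are assumptions for illustration, not any particular analytics vendor’s schema.

```python
# Minimal sketch of flagging hesitation signals from session events.
# Event names, thresholds, and signal labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    name: str              # e.g. "view_pricing", "toggle_tier", "view_return_policy"
    seconds_on_step: float

def hesitation_signals(events: list[Event]) -> list[str]:
    """Return the hesitation signals detected in one session."""
    signals = []
    pricing_time = sum(e.seconds_on_step for e in events if e.name == "view_pricing")
    if pricing_time >= 60:
        signals.append("long_dwell_on_pricing")
    if sum(1 for e in events if e.name == "toggle_tier") >= 3:
        signals.append("repeated_tier_toggling")
    if any(e.name == "view_return_policy" for e in events):
        signals.append("return_policy_check")
    return signals

session = [Event("view_pricing", 75.0), Event("toggle_tier", 2.0),
           Event("toggle_tier", 3.0), Event("toggle_tier", 1.5)]
print(hesitation_signals(session))  # ['long_dwell_on_pricing', 'repeated_tier_toggling']
```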
2. Create Confidence Content
Address uncertainty directly:
Technical specifications for B2B concerns, e.g., “Compare to Your Stack” chart.
Social proof from similar customers, e.g., Quotes from similar customers with similar concerns.
Transparent information about potential drawbacks, e.g., a “Who This Isn’t Right For” section builds trust (sometimes, showing a drawback increases trust more than another benefit).
Comparison tools that highlight advantages, e.g., “Compare us to [Competitor X]” chart, to keep people on site.
3. Deploy Behavioral Triggers
Implement AI-powered responses like the following (a simple signal-to-response mapping is sketched after this list):
Dynamic content that adapts based on user behavior, e.g., Lingers on “Team Plan” pricing tier? Show a testimonial from a similar-sized company.
Personalized chat prompts triggered by hesitation signals, e.g., Toggles pricing three times? Prompt: “Want help calculating ROI for your team size?”
Targeted offers that address specific concerns, e.g., Returning visitor? “Still deciding? Here’s 10% off.”
Smart recommendations based on similar customer patterns, e.g., Read three CRM blog posts? Show a case study on CRM integration.
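And here is the signal-to-response mapping referenced above: a minimal rules layer that pairs each detected hesitation signal with an on-page response. The signal names and response copy are illustrative assumptions, not a specific platform’s API.

```python
# Minimal sketch of a rules layer mapping hesitation signals to on-page responses.
# Signal names and response copy are illustrative assumptions.

TRIGGER_RULES = {
    "long_dwell_on_pricing": "Prompt: 'Want help calculating ROI for your team size?'",
    "repeated_tier_toggling": "Surface a testimonial from a similar-sized company",
    "return_policy_check": "Highlight the free-returns banner near the buy button",
    "returning_visitor": "Offer: 'Still deciding? Here's 10% off.'",
}

def responses_for(signals: list[str]) -> list[str]:
    """Pick a response for each recognized signal, in the order detected."""
    return [TRIGGER_RULES[s] for s in signals if s in TRIGGER_RULES]

print(responses_for(["long_dwell_on_pricing", "repeated_tier_toggling"]))
```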
4. Test And Optimize
Microsoft emphasizes the importance of continuous testing. 85% of marketers using generative AI report improved productivity across content and ad creation.
Start small:
Choose one campaign or conversion point to optimize, e.g., Demo sign-ups underperforming? Test new headline and CTA.
Monitor real-time insights to refine approaches, e.g., “See how it works” gets more clicks than “Get Started.”
Scale successful tactics across other touchpoints, e.g., Winning copy gets rolled into LinkedIn ads and webinar invites.
5. Solve For The Measurement Challenge
Lululemon’s success came from implementing what they called a “measurement trifecta by blending marketing mix modeling (MMM), experiments, and attribution to gain a more holistic view of performance.”
This comprehensive approach revealed:
How different activities influenced sales over time.
Which touchpoints were most effective in the customer journey.
Where hesitation was occurring and being resolved.
The Strategic Shift For Search And Social
SEO
AI Overviews (AIO) are changing how content gets discovered. It’s important to anticipate doubts before they form, structure answers for AI extraction, and prove claims with third-party data.
Create content that addresses hesitation at different stages of the buying journey. Your product pages need to rank and convert uncertain visitors into confident customers.
Paid Search
Use AI to detect behavioral signals that indicate hesitation. Adjust landing pages, ad copy, and bidding strategies based on where users are in their decision process.
Track micro-conversions that indicate reduced hesitation, such as time spent with size charts, clicks on customer reviews, and interactions with chat.
Social Media
Share case studies and video testimonials addressing common concerns.
Post behind-the-scenes content showing actual product usage.
Share first-party data and statistics as proof points.
Use polls to identify hesitation points in your audience.
Use sentiment analysis to identify hesitation in comments and messages.
For high impact, you need to earn trust in the seconds that matter most. AI gives us the power to see hesitation in real time and resolve it before it becomes regret.
Success often comes down to these micro-moments, these seconds when someone hovers between interest and action.
Master those micro-moments and everything else follows.
BrightEdge, the enterprise SEO platform, released new data showing distinctive patterns across major AI search and chatbot platforms, and called attention to potential disruption from Apple if it breaks with Google as the default search engine in Safari.
Desktop AI Traffic Dominance
One of the key findings in the BrightEdge data is that traffic to websites from AI chatbots and search engines is highest from desktop users. The exception is Google Search, which is reported to send more traffic from mobile devices than from desktop.
The report notes that 94% of the traffic from ChatGPT originates from desktop apps with just 6% of referrals coming from mobile apps. BrightEdge speculates that the reason why there’s less mobile traffic is because ChatGPT’s mobile app shows an in-app preview, requiring a user to execute a second click to navigate to an external site. This creates a referral bottleneck that doesn’t exist on the desktop.
But that doesn’t explain why Perplexity, Bing, and Google Gemini also show similar levels of desktop traffic dominance. Could it be a contextual difference, where users on desktop are using AI for business and mobile use is more casual? The fact that Google Search sends more mobile referral traffic than desktop could suggest a contextual reason for the disparity in mobile traffic from AI search and chatbots.
BrightEdge shared their insights:
“While Google maintains an overwhelming market share in overall search (89%) and an even stronger position on mobile (93%), its dominance is particularly crucial in mobile web search. BrightEdge data indicates that Apple phones alone account for 57% of Google’s mobile traffic to US and European brand websites. But with Safari being the default for around a billion users, any change to that default could reallocate countless search queries overnight.
Apple’s vendor-agnostic Apple Intelligence also suggests opportunities for seismic shifts in web search. While generative AI tools have surged in popularity through apps on IOS, mobile web search—where the majority of search still occurs—remains largely controlled by Google via Safari defaults. This makes Apple’s control of Safari the most valuable real estate in the mobile search landscape.”
Here are the traffic referral statistics provided by BrightEdge:
Google Search: Only major AI search with mobile majority traffic referrals (53% mobile vs 44% desktop)
ChatGPT: 94% desktop, just 6% mobile referrals
Perplexity: 96.5% desktop, 3.4% mobile
Bing: 94% desktop, 4% mobile
Google Gemini: 91% desktop, 5% mobile
Could Apple Play Kingmaker?
With Apple’s Worldwide Developers Conference (WWDC) nearing, one of the changes that many will be alert to is any announcement relative to the company’s Safari browser which controls the default search settings on nearly a billion devices. A change in search provider in Safari could initiate dramatic changes to who the new winners and losers are in web search.
Perplexity asserts that the outcome of changes to Safari browser defaults may impact search marketing calculations for the following reasons:
“58% of Google’s mobile traffic to brand websites comes from iPhones
Safari remains the default browser for nearly a billion users
Apple has not yet embedded AI-powered search into its mobile web stack”
Takeaways
Desktop Users Of AI Search Account For The Majority Of Referral Traffic: Most AI search referral traffic from ChatGPT, Perplexity, Bing, and Gemini comes from desktop usage, not mobile.
Google Search Is The Traffic Referral Outlier: Unlike other AI search tools, Google Search still delivers a majority of its traffic via mobile devices.
In-App Previews May Limit ChatGPT Mobile AI Referrals: ChatGPT’s mobile app requires an extra click to visit external sites, possibly explaining low mobile referral numbers.
Apple’s Position Is Pivotal To Search Marketing: Apple devices account for over half of Google’s mobile traffic to brand websites, giving Apple an outsized impact on mobile search traffic.
Safari Default And Greater Market Share: With Safari set as the default browser for nearly a billion users, Apple effectively controls the gate to mobile web search.
Perplexity Stands To Gain Market Share: If Apple switches Safari’s default search to Perplexity, the resulting shift in traffic could remake the competitive balance in search marketing.
Search Marketers Should Watch WWDC: Any change announced at Apple’s WWDC regarding Safari’s search engine could have a large-scale impact on search marketing.
BrightEdge data shows that desktop usage is the dominant source of traffic referrals from AI-powered search tools like ChatGPT, Perplexity, Bing, and Gemini, with Google Search as the only major platform that sends more traffic via mobile.
This pattern could suggest a behavioral split between desktop users, who may be performing work-related or research-heavy tasks, and mobile users, who may be browsing more casually. BrightEdge also points to a bottleneck built into the ChatGPT app that creates a one-click barrier to mobile traffic referrals.
BrightEdge’s data further cites Apple’s control over Safari, which is installed on nearly a billion devices, as a potential disruptor due to a possible change in the default search engine away from Google. Such a shift could significantly alter mobile search traffic patterns.
One of the industry’s SEO Rockstars recently shared his opinion about SEO for generative AI, calling attention to facts about Google and how the new AI search really works.
Greg Boser is a search marketing pioneer with a deep level of experience that few in the industry can match or even begin to imagine.
Digital Marketers And The History Of SEO
His post was in response to a tweet that, in his opinion, overstated the claim that SEO is losing dominance. Greg began his SEO rant by pointing out that some search marketers’ conception of SEO is outdated, but they’re so new to SEO that they don’t realize it.
For example, the practice of buying links is one of the oldest tactics in SEO, so old that newcomers to SEO gave it a new name, PBN (private blog network), as if giving link buying a new name changes it somehow. And by the way, I’ve never seen a PBN that was private. The moment you put anything out on the web Google knows about it. If an automated spambot can find it in literally five minutes, Google probably already knows about it, too.
Greg wrote:
“If anyone out there wants to write their own “Everything you think you know is wrong. GEO is the way” article, just follow these simple steps:
1. Frame “SEO” as everything that was a thing between 2000 – 2006. Make sure to mention buying backlinks and stuffing keywords. And try and convince people the only KPI was rankings.”
Google’s Organic Links
The second part of his post calls attention to the fact that Google has not been a ten-organic-links search engine for a long time. Google providing answers isn’t new.
He posted:
“2. Frame the current state of things as if it all happened in the last 2 weeks. Do not under any circumstances mention any of the following things from the past 15 years:
2009 – Rich Snippets
2011 – Knowledge Graph (things not strings)
2013 – Hummingbird (Semantic understanding of conversational queries)
2014 – Featured Snippets (direct answers at position “Zero”)
2015 – PPA Boxes (related questions anticipating follow-up questions)
2015 – RankBrain (machine learning to interpret ambiguous queries)
2019 – BERT (NLP to better understand context)
2021 – MUM (BERT on Steroids)
2023 – SGE (The birth of AIO)”
Overstate The Problem
The next part is a reaction to the naive marketing schtick that tries to stir up fear about AI search in order to present themselves as the answer.
He wrote:
“3. Overstate the complexity to create a sense of fear and anxiety and then close with “Your only hope is to hire a GEO expert”
Is AI Search Complex And Does It Change Everything?
I think it’s reasonable to say that AI Search is complex because Google’s AI Mode, and to a lesser extent AI Overviews, shows links to a wider range of search intents than regular searches used to. Even Google’s Rich Snippets were aligned to the search intent of the original search query.
That’s no longer the case with AIO and AI Mode search results. That’s the whole point about Query Fan-out (read about a patent that describes what Query Fan-out might be), that the original query is broken out into follow-up questions.
Greg Boser has a point though in a follow-up post where he said that the query fan-out technique is pretty similar to People Also Ask (PAA), Google’s just sticking it into the AI Mode results.
“Yeah the query fan thing is the rage of the day. It’s like PAA is getting memory holed.”
Is AI Mode A Serious Threat To SEO?
I agree with Greg to a certain extent that AI Mode is not a threat to SEO. The same principles about promoting your site, technical SEO and so on still apply. The big difference is that AI Mode is not directly answering the query but providing answers to the entire information journey. You can dismiss it as just PAA above the fold but that’s still a big deal because it complicates what you’re going to try to rank for.
So yeah, AI Search is different from anything we’ve seen before, but, as Greg points out, it’s still SEO, and adapting to change has always been a part of it.
Google has started rolling out interactive charts in AI Mode through Labs.
You can now ask complex financial questions and get both visual charts and detailed explanations.
The system builds these responses specifically for each user’s question.
Visual Analytics Come To AI Mode
Soufi Esmaeilzadeh, Director of Product Management for Search at Google, explained that you can ask questions like “compare the stock performance of blue chip CPG companies in 2024” and get automated research with visual charts.
Google does the research work automatically. It looks up individual companies and their stock prices without requiring you to perform manual searches.
You can ask follow-up questions like “did any of these companies pay back dividends?” and AI Mode will understand what you’re looking for.
Technical Details
Google uses Gemini’s advanced reasoning and multimodal capabilities to power this feature.
The system analyzes what users are requesting, pulls both current and historical financial data, and determines the most effective way to present the information.
Implications For Publishers
Financial websites that typically receive traffic from comparison content should closely monitor their analytics. Google now provides direct visual answers to complex financial questions.
Searchers might click through to external sites less often for basic comparison data. But this also creates opportunities. Publishers that offer deeper analysis or expert commentary may find new ways to add value beyond basic data visualization.
Availability & Access
The data visualization feature is currently available through AI Mode in Labs. This means it’s still experimental. Google hasn’t announced plans for wider rollout or expansion to other types of data beyond financial information.
Users who want to try it out can access it through Google’s Labs program. Labs typically tests experimental search features before rolling them out more widely.
Looking Ahead
The trend toward comprehensive, visual responses continues Google’s strategy of becoming the go-to source for information rather than just a gateway to other websites.
While currently limited to financial data, the technology could expand to other data-heavy industries.
The feature remains experimental, but it offers a glimpse into how AI-powered search may evolve.
Google has shared new details about how it designed and built AI Mode.
In a blog post, the company reveals the user research, design challenges, and testing that shaped its advanced AI search experience.
These insights may help you understand how Google creates AI-powered search tools. The details show Google’s shift from traditional keyword searches to natural language conversations.
User Behavior Drove AI Mode Creation
Google built AI Mode in response to the ways people were using AI Overviews.
Google’s research showed a disconnect between what searchers wanted and what was available.
Claudia Smith, UX Research Director at Google, explains:
“People saw the value in AI Overviews, but they didn’t know when they’d appear. They wanted them to be more predictable.”
The research also found people started asking longer questions. Traditional search wasn’t built to handle these types of queries well.
This shift in search behavior led to a question that drove AI Mode’s creation, explains Product Management Director Soufi Esmaeilzadeh:
“How do you reimagine a Search gen AI experience? What would that look like?”
AI “Power Users” Guided Development Process
Google’s UX research team identified the most important use cases as: exploratory advice, how-to guides, and local shopping assistance.
This insight helped the team understand what people wanted from AI-powered search.
Esmaeilzadeh explained the difference:
“Instead of relying on keywords, you can now pose complex questions in plain language, mirroring how you’d naturally express yourself.”
According to Esmaeilzadeh, early feedback suggests that the team’s approach was successful:
“They appreciate us not just finding information, but actively helping them organize and understand it in a highly consumable way, with help from our most intelligent AI models.”
Industry Concerns Around AI Mode
While Google presents an optimistic development story, industry experts are raising valid concerns.
John Shehata, founder of NewzDash, reports that sites are already “losing anywhere from 25 to 32% of all their traffic because of the new AI Overviews.” For news publishers, health queries show 26% AI Overview penetration.
Mordy Oberstein, founder of Unify Brand Marketing, analyzed Google’s I/O demonstration and found the examples weren’t as complex as presented. He shows how Google combined readily available information rather than showcasing advanced AI reasoning.
Google’s claims about improved user engagement have not been verified. During a recent press session, Google executives claimed AI search delivers “more qualified clicks” but admitted they have “no data to share” on these quality improvements.
Further, Google’s reporting systems don’t differentiate between clicks from traditional search, AI overviews, and AI mode. This makes independent verification impossible.
Shehata believes that the fundamental relationship between search and publishers is changing:
“The original model was Google: ‘Hey, we will show one or two lines from your article, and then we will give you back the traffic. You can monetize it over there.’ This agreement is broken now.”
What This Means
For SEO professionals and content marketers, Google’s insights reveal important changes ahead.
The shift from keyword targeting to conversational queries means content strategies need to focus on directly answering user questions rather than optimizing for specific terms.
The focus on exploratory advice, how-to content, and local help shows these content types may become more important in AI Mode results.
Shehata recommends that publishers focus on content with “deep analysis of a situation or an event” rather than commodity news that’s “available on hundreds and thousands of sites.”
He also notes a shift in success metrics: “Visibility, not traffic, is the new metric” because “in the new world, we will get less traffic.”
Looking Ahead
Esmaeilzadeh said significant work continues:
“We’re proud of the progress we’ve made, but we know there’s still a lot of work to do, and this user-centric approach will help us get there.”
Google confirmed that more AI Mode features shown at I/O 2025 will roll out in the coming weeks and months. This suggests the interface will keep evolving based on user feedback and usage patterns.
Microsoft Clarity announced its new Model Context Protocol (MCP) server, which enables developers, AI users, and SEOs to query Clarity analytics data with natural language prompts via AI.
The announcement listed the following ways users can access and interact with the data using MCP:
Query analytics data with natural prompts
Filter by dimensions like Browser, OS, Country/Region, or Device
Retrieve key metrics: Scroll Depth, Engagement Time, Total Traffic, etc.
Integrates with Claude for Desktop for AI-powered querying
The MCP server is a software package that needs to be installed and run on a server or local machine that supports Node.js 16+. It’s a Node.js-based server that acts as a bridge between AI tools (like Claude) and Clarity analytics data.
This is a new way to interact with data using natural language, where a user tells the AI client what analytics metric they want to see and for what period of time and the AI interface pulls the data from Microsoft Clarity and displays it.
Microsoft’s announcement says this is just the beginning of what is possible, and the company is encouraging feedback from users about features and improvements they’d like to see.
The current road map of features listed for the future:
“Higher API Limits: Increased daily limits for the Clarity data export API
Predictive Heatmaps: Predict engagement heatmaps by providing an image or a url
Deeper AI integration: Heatmap insights and more given the context
Multi-project support: for enterprise analytics teams
Ecosystem – Support more AI Agents and collaborate with more MCP servers”
New research reveals that Google’s AI Overviews tend to favor major news outlets.
The top 10 publishers capture nearly 80% of all news mentions. Meanwhile, smaller organizations struggle for visibility in AI-generated search results.
SE Ranking analyzed 75,550 AI Overview responses for this study. They found that only 20.85% cite any news source at all. This creates tough competition for limited citation spots.
Among those citations, three outlets dominate: BBC, The New York Times, and CNN account for 31% of all media mentions.
Citation Concentration
The research shows a winner-takes-all pattern in AI Overview citations. BBC leads with 11.37% of all mentions. This happens even though the study focused on U.S.-based queries.
The concentration gets worse when you look at the bigger picture. Just 12 outlets make up 40% of those studied. But they receive nearly 90% of mentions.
This leaves 18 remaining outlets sharing only 10% of citation opportunities.
The gap between major and minor outlets is notable. BBC appears 195 times more often than the Financial Times for the same keywords.
Several well-known outlets get little attention. Financial Times, MSNBC, Vice, TechCrunch, and The New Yorker together account for less than 1% of all news mentions.
The researchers explain the underlying cause:
“Well, Google mostly relies on well-known news sources in its AIOs, likely because they are seen as more trustworthy or relevant. This results in a strong bias toward major outlets, with smaller or lesser-known sources rarely mentioned. This makes it harder for these domains to gain visibility.”
Beyond Traditional Search Rankings
The concentration problem extends beyond citation counts.
40% of media URLs mentioned in AI Overviews appear in the top 10 traditional search results for the same keywords.
This means AI Overviews don’t just pull from the highest-ranking pages. Instead, they seem to favor sources based on authority signals and content quality.
The study measured citation inequality using something called a Gini coefficient. The score was 0.54, where 0 means perfect equality and 1 means maximum inequality. This shows moderate but significant imbalance in how AI Overviews distribute citations among news sources.
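For readers unfamiliar with the metric, here is a small sketch of how a Gini coefficient can be computed over citation counts. The counts are made up for illustration and are not SE Ranking’s data.

```python
# Minimal sketch of the Gini coefficient used to measure citation inequality.
# The citation counts below are made up for illustration.

def gini(counts: list[float]) -> float:
    """0 = every outlet cited equally, 1 = one outlet gets every citation."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the ordered cumulative distribution.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(round(gini([195, 120, 90, 40, 10, 5, 1]), 2))  # prints 0.55 for this skewed distribution
```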
The researchers noted:
“AIOs consistently favor a subset of high-profile domains, instead of evenly citing all sources.”
Paywalled Content Concerns
The research also reveals patterns about paywalled content use.
Among AI Overview responses that link to paywalled content, 69% contain copied segments of five or more words. Another 2% include longer copied segments over 10 words.
The paywall dependency is strong for premium publishers. Over 96% of New York Times citations in AI Overviews come from behind a paywall. The Washington Post shows an even higher rate at over 99%.
Despite this heavy use of paywalled material, only 15% of responses with long copied segments included attribution. This raises questions about content licensing and fair use in AI-generated summaries.
Attribution Patterns & Link Behavior
When AI Overviews do cite news media, they average 1.74 citations per response.
But here’s the catch: 91.35% of news media citations appear in the links section rather than the main text of AI responses.
Media outlets face another challenge with brand recognition. Outlets are four times more likely to be cited with a hyperlink than mentioned by name.
But over 26% of brand mentions still appear without links. This often happens because AI systems get information through aggregators rather than original publishers.
Query Type Makes a Difference
The type of search query affects citation chances.
News-related queries are 2.5 times more likely to include media citations than general queries. The rates are 20.85% versus 8.23%.
This suggests opportunities exist for publishers who can become go-to sources for specific news topics or breaking news. But the overall trend still favors big players.
What This Means
The research suggests that established outlets benefit from existing authority signals. This creates a cycle where citation success leads to more citation opportunities.
As AI Overviews become more common in search results, smaller publishers may see less organic traffic and fewer chances to grow their audience.
For smaller publishers trying to compete, SE Ranking offers this advice:
“To increase brand mentions in AIOs, get backlinks from the sources they already cite for your target keywords. This is one of the greatest factors for improving your inclusion chances.”
Researchers note that the technical infrastructure also matters:
“AI tools do observe certain restrictions based on website metadata. The schema.org markup, particularly the ‘isAccessibleForFree’ tag, plays a significant role in how content is treated.”
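As an illustration of the markup the researchers reference, the sketch below builds the kind of JSON-LD a paywalled article might carry, with “isAccessibleForFree” set to false. The headline and CSS selector are placeholder values, and this is a simplified example rather than a full paywalled-content specification.

```python
# Illustrative JSON-LD for a paywalled article using schema.org's
# "isAccessibleForFree" property. The headline and ".paywalled-body"
# selector are placeholders.
import json

article_markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example paywalled story",
    "isAccessibleForFree": False,
    "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": False,
        "cssSelector": ".paywalled-body",
    },
}

# This JSON-LD would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(article_markup, indent=2))
```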
For smaller publishers and content marketers, the data points to a clear strategy: focus on building authority in specific niches rather than trying to compete broadly across topics.
Some specialized outlets get higher text inclusion rates when cited. This suggests topic expertise can provide advantages in certain cases.
Looking Ahead
SE Ranking’s research shows that only 20.85% of AI Overviews reference news sources, with a few major publishers dominating, capturing 31% of citations.
Despite this concentration, opportunities exist. Publishers who establish authority in specific niches experience higher inclusion rates in AI Overviews.
Additionally, since 60% of cited content doesn’t rank in the top 10, traditional SEO metrics alone don’t guarantee visibility. Success now requires building the trust signals and topical authority that AI systems prioritize.