How AI Really Weighs Your Links (Analysis Of 35,000 Datapoints) via @sejournal, @Kevin_Indig

Before we jump in:

  • I hate to brag, but I will say I’m extremely proud to have placed 4th in the G50 SEO World Championships this past week.
  • I’m speaking at NESS, the global News & Editorial SEO Summit, on October 22. Growth Memo readers get 20% off with the code “kevin2025”.

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Historically, backlinks have always been one of the most reliable currencies of visibility in search results.

We know links matter for visibility in AI-based search, but how they work inside LLMs – including AI Overviews, Gemini, and ChatGPT & Co. – is still somewhat of a black box.

The rise of AI search models changes the rules of organic visibility and the competition for share of voice in LLM results.

So the question is, do backlinks still earn visibility in AI-based modalities of search… and if so, which ones?

If backlinks were the currency of the pre-LLM web, this week’s analysis is a first look at whether they’re still legal tender in the new AI search economy.

Together with Semrush, I analyzed 1,000 domains and their AI mentions against core backlink metrics.

Image Credit: Kevin Indig

The data surfaced four clear takeaways:

  1. Backlink-earned authority helps, but it’s not everything.
  2. Link quality outweighs volume.
  3. Most surprisingly, nofollow links pull real weight.
  4. Image links can move the needle on authority.

These findings help us all understand how AI models surface sites and expose which backlink levers marketers can pull to influence visibility.

Below, you’ll find the methodology, deeper data takeaways, and, for premium subscribers, recommendations (with benchmarks) to put these findings into action.

Methodology

For this analysis, I looked at the relationships between AI mentions and backlink metrics for 1,000 randomly selected web domains. All data is from the Semrush AI SEO Toolkit, Semrush’s AI visibility & search analytics platform.

Along with the Semrush team, I examined the number of mentions across:

  • ChatGPT.
  • ChatGPT with Search activated.
  • Gemini.
  • Google’s AI Overviews.
  • Perplexity.

(If you’re wondering where Claude.ai fits in this analysis, we didn’t include it at this time as its user base is generally less focused on web search and more on generative tasks.)

For the platforms above, we measured Share of Voice and the number of AI mentions against the following backlink metrics:

  • Total backlinks.
  • Unique linking domains.
  • Follow links.
  • Nofollow links.
  • Authority Score (a Semrush metric referred to as Ascore below).
  • Text links.
  • Image links.

In this analysis, I used two different ways of measuring correlation across the data: a Pearson correlation and a Spearman correlation.

If you are familiar with these concepts, skip to the next section where we dive into the results.

For everyone else, I’ll break these down so you have a better understanding of the findings below.

Both Pearson and Spearman are correlation coefficients – numbers between -1 and +1 that measure how strongly two different variables are related.

The closer the coefficient is to +1 or -1, the stronger the correlation. (Near 0 means a weak correlation or none at all.)

  • Pearson’s r measures the strength and direction of a linear relationship between two variables, using the raw values. It is sensitive to outliers, and if the relationship curves or has thresholds, Pearson under-measures it.
  • Spearman’s ρ (rho) measures the strength and direction of a monotonic relationship – whether values consistently move in the same (or opposite) direction, not necessarily in a straight line. It’s a rank correlation: it asks, “When one thing increases, does the other usually increase too?” That makes it more robust to outliers and better at capturing non-linear, monotonic patterns.

A gap between Pearson and Spearman correlation coefficients can mean the gains are non-linear.

In other words: There’s a threshold to cross. And that means the effect of X on Y doesn’t kick in right away.

Examining both the Pearson and Spearman coefficients can tell us if nothing (or very little) happens until you pass a certain point – and then once you exceed that point, the relationship shows up strongly.

Here’s a quick example of what an analysis that involves both coefficients can reveal:

Spending $500 (action X) on ads might not move the needle on sales growth (outcome Y). But once you cross, say, $5,000/month (action X), sales start growing steadily (outcome Y).

And that’s the end of your statistics lesson for today.
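To make the Pearson-versus-Spearman gap concrete, here’s a short illustrative sketch (synthetic data, not the study’s dataset) showing how a monotonic but non-linear relationship splits the two coefficients:

```python
# Illustrative sketch with synthetic data (not the study's dataset).
import numpy as np
from scipy.stats import pearsonr, spearmanr

authority = np.linspace(1, 100, 200)      # stand-in for an authority metric
mentions = np.exp(authority / 15)         # non-linear but strictly monotonic

r, _ = pearsonr(authority, mentions)      # linear correlation on raw values
rho, _ = spearmanr(authority, mentions)   # rank correlation

print(f"Pearson r:    {r:.2f}")   # noticeably below 1: the curve isn't a line
print(f"Spearman rho: {rho:.2f}") # 1.00: higher X always comes with higher Y
```

A Spearman coefficient sitting well above Pearson, as here, is exactly the threshold/non-linear signature described above.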

Image Credit: Kevin Indig

The first signal we examined was the strength of the relationship between the number of backlinks a site gets versus its AI Share of Voice.

Here’s what the data showed:

  • Authority Score has a moderate link to Share of Voice (SoV): Pearson ~0.23, Spearman ~0.36.
  • Higher authority means higher SoV, but the gains are uneven. There’s a threshold you need to cross.
  • Authority supports visibility, yet it does not explain most of the variance. What this means is that backlinks do have an impact on AI visibility, but there is more to the story, like your content, brand perceptions, etc.

Also, the number of unique linking domains matters more than the total number of backlinks.

In plain terms, your site is more likely to have a larger SoV when you have links from many different websites than from a huge number of links from just a few sites.

Image Credit: Kevin Indig

Across all models, the strongest relationship occurred between Authority Score (0.65 Pearson, 0.57 Spearman) and the number of mentions.

Here’s how Semrush defines the Authority Score measurement:

Authority Score is our compound metric that grades the overall quality of a website or a webpage. The higher the score, the more assumed weight a domain’s or webpage’s outbound links to another site could have.

It takes into account the number and quality of backlinks, organic traffic to link source pages, and the spamminess of the link profile.

Of course, Ascore is just a proxy for quality. LLMs have their own way of arriving at backlink quality. But the data shows that we can use Semrush’s Ascore as a good representative.

Most models weight this metric about equally for mentions, but ChatGPT Search and Perplexity weight it the least relative to the average.

Surprisingly, regular ChatGPT (without search activated) weighs Ascore the most out of all models.

Critical to know: Median mentions jump from ~21.5 in decile 8 to ~79.0 in decile 9. The relationship is non-linear. In other words, the biggest gains come when you hit the upper boundaries of authority, or Ascore in this case.

(For context, a decile is a way of splitting a dataset into 10 equal parts. Each segment, or decile, contains 10% of the data points when they’re sorted in order.)
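As a quick sketch of that bucketing (hypothetical scores, not the study’s data), here’s how deciles can be computed in Python:

```python
# Hypothetical example: split 1,000 domain scores into deciles.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, 1000)        # stand-in for Authority Score

# 9 inner cut points at the 10th, 20th, ..., 90th percentiles.
edges = np.percentile(scores, np.arange(10, 100, 10))
decile = np.digitize(scores, edges) + 1   # labels 1 (lowest 10%) .. 10 (highest 10%)

counts = np.bincount(decile, minlength=11)[1:]
print(counts.tolist())                    # each decile holds 100 of the 1,000 points
```

With deciles assigned, comparing median mentions per decile is what reveals the non-linear jump the analysis describes.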

Image Credit: Kevin Indig

Perhaps the most significant finding from this analysis is that it doesn’t matter much if the links are set to nofollow or not!

And this has huge implications.

Confirmation of the value of nofollow links is so important because these types of links tend to be easier to build than follow links.

This is where LLMs are distinctly different from search engines: We’ve known for a while that Google also counts nofollow links, but not how much or for what (crawling, ranking, etc.).

Once again, you won’t see big gains until you’re in the top 3 deciles, or the top 30% of the data points.

Follow links → Mentions:

  • Pearson 0.334, Spearman 0.504

Nofollow links → Mentions:

  • Pearson 0.340, Spearman 0.509

Breaking this down by model, Google’s AI Overviews and Perplexity weighed regular follow links the highest and nofollow links the least.

And interestingly, Gemini and ChatGPT weigh nofollow links the highest (over regular follow links).

Here’s my own theory as to why Gemini and ChatGPT weigh nofollow more:

With Gemini, I’m curious whether Google weighs nofollow links more heavily than we’ve believed in the past. And with ChatGPT, my hypothesis is that Bing also weighs nofollow links more heavily (once Google started doing it, too). But this is just a theory, and I don’t have the data to support it at this time.

Image Credit: Kevin Indig

Beyond text-based backlinks, we also tested if image-based backlinks carry the same weight.

And in some cases, they had a stronger relationship to mentions than text-based links.

But how strong?

  • Images vs mentions: Pearson 0.415, Spearman 0.538
  • Text links vs mentions: Pearson 0.334, Spearman 0.472

Image links really start to pay off once you already have some authority.

  • In low-Ascore deciles (deciles 1 and 2), the images → mentions tie is weak or negative.
  • From the mid decile tiers up, the relationship turns positive, then strengthens, and is strongest in the top deciles.

If you are targeting mention growth on Perplexity or Search-GPT, image links are especially productive.

  • Images correlate with mentions most on Perplexity and Search-GPT (Spearman ≈ 0.55 and 0.53), then ChatGPT/Gemini (≈ 0.49 – 0.52), then Google’s AI Overviews (≈ 0.46).

Featured Image: Paulo Bobita/Search Engine Journal

OpenAI Launches Sora iOS App Alongside Sora 2 Video Model via @sejournal, @MattGSouthern

OpenAI launched the Sora iOS app, beginning an invite-based rollout in the United States and Canada.

With Sora, OpenAI appears to be releasing its first non-ChatGPT consumer app and its first social product.

The app runs on the newly released Sora 2 model for video and synchronized audio.

What’s The Sora App?

Sora is positioned as a creation-first social experience rather than a public-broadcast platform.

It adds social features on top of Sora 2’s generation capabilities, including tools to remix videos and collaborate with friends inside the app.

Custom Feed

The app uses OpenAI’s language models to power a recommender algorithm that accepts natural language instructions.

Users can customize their feed through conversational commands rather than buried settings menus.

By default, the feed prioritizes content from people users follow or interact with.

The Sora team wrote:

“We are not optimizing for time spent in feed, and we explicitly designed the app to maximize creation, not consumption.”

Cameos

Sora centers on “cameos,” which let you place yourself or friends inside AI-generated scenes after a short one-time video and audio capture in the app.

OpenAI says people who appear in cameos control who can use their likeness and can revoke access or remove any video that includes it.

Content Creation

Beyond cameos and feed browsing, the app lets users create original videos through text prompts and remix other users’ generations.

The underlying Sora 2 model can follow multi-shot instructions, maintain world state across scenes, and generate synchronized dialogue and sound effects.

ChatGPT Pro subscribers can access an experimental higher-quality Sora 2 Pro model on sora.com, with app access planned.

The original Sora 1 Turbo remains available, and existing user content stays in personal libraries.

Monetization

OpenAI plans to keep Sora free initially, with generation limits determined by available compute resources.

The company’s revenue strategy involves charging users for extra generations when demand surpasses capacity. No plans for advertising or creator revenue sharing have been announced.

Availability

The app operates on an invite-only basis, with sign-ups available through the iOS app. The App Store listing is live.

Image Credit: Apple App Store

OpenAI says it made Sora invite-only to ensure users arrive with friends already in the app. The company cites feedback indicating that cameos drive the experience, making existing connections essential.

Looking Ahead

For marketers and creators, Sora serves as a new platform for distributing short, AI-generated videos, affirming OpenAI’s focus on developing consumer-oriented tools.

Sora’s adoption will largely depend on accessibility, real-world applications, and how well the feed encourages active creation instead of passive viewing.


Featured Image: Robert Way/Shutterstock

Google AI Mode Gets Visual + Conversational Image Search via @sejournal, @MattGSouthern

Google announced that AI Mode now supports visual search, letting you use images and natural language together in the same conversation.

The update is rolling out this week in English in the U.S.

What’s New

Visual Search Gets Conversational

Google’s update to AI Mode aims to address the challenge of searching for something that’s hard to describe.

You can start with text or an image, then refine results naturally with follow-up questions.

Robby Stein, VP of Product Management for Google Search, and Lilian Rincon, VP of Product Management for Google Shopping, wrote:

“We’ve all been there: staring at a screen, searching for something you can’t quite put into words. But what if you could just show or tell Google what you’re thinking and get a rich range of visual results?”

Google provides an example that begins with a search for “maximalist bedroom inspiration,” and is refined with “more options with dark tones and bold prints.”

Image Credit: Google

Each image links to its source, so searchers can click through when they find what they want.

Shopping Without Filters

Rather than using conventional filters for style, size, color, and brand, you can describe products conversationally.

For example, asking “barrel jeans that aren’t too baggy” will find suitable products, and you can narrow down options further with requests like “show me ankle length.”

Image Credit: Google

This experience is powered by the Shopping Graph, which spans more than 50 billion product listings from major retailers and local shops.

The company says over 2 billion listings are refreshed every hour to keep details such as reviews, deals, available colors, and stock status up to date.

Technical Foundation

Building on Lens and Image Search, the visual abilities now include Gemini 2.5’s advanced multimodal and language understanding.

Google introduces a technique called “visual search fan-out,” where it runs several related queries in the background to better grasp what’s in an image and the nuances of your question.

Plus, on mobile devices, you can search within a specific image and ask conversational follow-ups about what you see.

Image Credit: Google

Additional Context

In a media roundtable attended by Search Engine Journal, a Google spokesperson said:

  • When a query includes subjective modifiers, such as “too baggy,” the system may use personalization signals to infer what you likely mean and return results that better match that preference. The spokesperson didn’t detail which signals are used or how they are weighted.
  • For image sources, the systems don’t explicitly differentiate real photos from AI-generated images for this feature. However, ranking may favor results from authoritative sources and other quality signals, which can make real photos more likely to appear in some cases. No separate policy or detection standard was shared.

Why This Matters

For SEO and ecommerce teams, images are becoming even more essential. As Google gets better at understanding detailed visual cues, high-quality product photos and lifestyle images may boost your visibility.

Since Google updates the Shopping Graph every hour, it’s important to keep your product feeds accurate and up-to-date.

As search continues to become more visual and conversational, remember that many shopping experiences might begin with a simple image or a casual description instead of exact keywords.

Looking Ahead

The new experience is rolling out this week in English in the U.S. Google hasn’t shared timing for other languages or regions.

What OpenAI’s Research Reveals About The Future Of AI Search

The launch of ChatGPT in 2022 didn’t so much cause a shift in the search landscape as trigger a series of seismic events. And, like seismologists, the SEO industry needs data if it’s to predict future tremors and aftershocks – let alone prepare itself for what the landscape might reshape itself into once the ground has finally settled.

So, when OpenAI released a 65-page research paper on Sept. 15, 2025, titled “How People Use ChatGPT,” some of us were understandably excited to finally have some authoritative usage data from inside a major large language model (LLM).

Two key findings leap out:

  1. We’re closer to mass adoption of AI than most probably realize.
  2. How users interact with ChatGPT has fundamentally shifted in the past year.

For SEOs, this isn’t just another adoption study: It’s strategic intelligence about where AI search is heading.

Mass Adoption Is Closer Than You Think

How close is ChatGPT to the tipping point where it will accelerate into mass adoption?

Developed by sociologist Everett Rogers, the diffusion of innovation theory provides us with a useful framework to explain how new technologies spread through society in predictable stages. First, there are the innovators, accounting for 2.5% of the market. Then, the early adopters come along (13.5%), to be followed by the early majority (34%). At this point, ~50% of the potential market has adopted the technology. Anyone jumping on board after this point can safely be described as either the late majority (34%), or laggards (16%).

The tipping point happens at around 20%, when the new technology is no longer confined to innovators or early adopters but is gradually taken up by the early majority. It’s at this point that mainstream adoption accelerates rapidly.

Now, let’s apply this to ChatGPT’s data.

Since launching in late 2022, ChatGPT’s growth has been staggering. The new report reveals that, in the five-month period from February to July 2025, ChatGPT grew from 400 million to 700 million weekly active users (WAU), sending 18 billion messages per week. That represents an average compound growth of roughly 11-12% month-over-month.

700 million WAU is equivalent to around 10% of the global adult population; impressive, but not quite mass adoption. Yet.

(Side note: Back in April, Sam Altman gave a figure of ~800 million weekly active users when speaking at TED 2025. To avoid confusion, we’ll stick with the official figure of 700 million WAU quoted in OpenAI’s report.)

It’s estimated there were approximately 5.65 billion internet users globally at the start of July 2025. This is the total addressable market (TAM) available to ChatGPT.

20% of 5.65 billion = 1.13 billion WAU. That’s the tipping point.

Even if the growth rate slows to a more conservative 5-6% per month, ChatGPT would already have reached at least 770 million WAU as I write this. At that rate of growth, ChatGPT will cross the mass adoption threshold between December 2025 and August 2026, with April 2026 as the most likely midpoint.

Of course, if the rate of growth remains closer to 11-12%, we can expect to tip over into mass adoption even earlier.

Start Level (July 2025) | Growth (MoM) | Sept. 2025 (Approx.) | Crossing Window
700 million | 4% | 757.12 million | Aug 2026
700 million | 5% | 771.75 million | May 2026
700 million | 6% | 786.52 million | Apr 2026
700 million | 7% | 801.43 million | Mar 2026
700 million | 8% | 816.48 million | Feb 2026
700 million | 9% | 831.67 million | Jan 2026
700 million | 10% | 847.00 million | Jan 2026
700 million | 11% | 862.47 million | Dec 2025
700 million | 12% | 878.08 million | Dec 2025
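The crossing windows in the table above follow from simple compound growth. Here’s a minimal sketch, assuming the July 2025 start of 700 million WAU and the 1.13 billion tipping point derived earlier:

```python
# Sketch of the tipping-point projection: compound monthly growth from 700M WAU.
def months_to_cross(start_m: float = 700.0,
                    target_m: float = 1130.0,
                    growth: float = 0.05) -> int:
    """Months of steady month-over-month growth until WAU crosses the target."""
    months, wau = 0, start_m
    while wau < target_m:
        wau *= 1 + growth
        months += 1
    return months

# Counting from July 2025: 10 months at 5% growth lands in May 2026.
print(months_to_cross(growth=0.05))   # 10
print(months_to_cross(growth=0.12))   # 5  (December 2025)
```

Varying the `growth` argument reproduces each row’s crossing window.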

For SEOs, this timeline matters. We don’t have years to prepare for mass AI search adoption. We have months.

The window is rapidly closing for any brands not wanting to be left behind.

The Behavioral Revolution Hiding In Plain Sight

Buried within OpenAI’s usage data is perhaps the most significant finding for search marketers: a fundamental shift in how people are using AI tools.

In June 2024, non-work messages accounted for 53% of all ChatGPT interactions. By June 2025, this figure had climbed to 73%. This is a clear signal that ChatGPT is moving from workplace tool to everyday utility.

Things get even more interesting when we look at the intent behind those queries. OpenAI categorizes user interactions into three types:

  1. Asking (seeking information and guidance).
  2. Doing (generating content or completing tasks).
  3. Expressing (sharing thoughts or feelings with no clear intent).

The data reveals that “Asking” now makes up 51.6% of all interactions, compared to 34.6% for “Doing” and 13.8% for “Expressing.”

Let’s be clear: What ChatGPT categorizes as “Asking” is pretty much synonymous with what we think of as AI search. These are the queries that were once the exclusive domain of search engines.

Users are also increasingly satisfied with the quality of responses to “Asking” queries, rating interactions as either Good or Bad at a ratio of 4.45:1. For “Doing” interactions, the ratio of Good to Bad drops to 2.76.

The trend becomes even clearer when we break down interactions by topic. Three topics account for just under 78% of all messages.

  • Practical Guidance (29%).
  • Seeking Information (24%).
  • Writing (24%).

These figures are even more noteworthy when you consider that, in July 2024, “Writing” was easily the most common topic (36%), dropping 12 percentage points in just one year.

And while “Practical Guidance” has remained steady at 29%, “Seeking Information” has shot up 10 percentage points from 14%. What a difference a year makes.

And while “Writing” still accounts for 42% of all work-related messages, the nature of these requests has shifted. Instead of generating content from scratch, two-thirds of writing requests now focus on editing, translating, or summarizing text supplied by the user.

Whichever way you slice it, AI search is now the primary use case for ChatGPT, not content generation. But where does that leave traditional search?

The AI Wars: Battling For The Future Of Search

ChatGPT may be reshaping the landscape, but Google hasn’t been sitting idle.

Currently rolling out to 180 countries worldwide, AI Mode is Google’s biggest response yet to ChatGPT’s encroachment on its territory, setting the scene for what is likely to become a competitive struggle between Google and OpenAI to define and dominate AI search.

ChatGPT has an advantage in having largely established the conversational search behaviors we’re now seeing. Instead of piecing together information by clicking back and forth on links in the SERPs, ChatGPT provides users with complete answers in a fraction of the time.

Meanwhile, Google’s advantage is that AI Mode grounds responses against a highly sophisticated search infrastructure, drawing on decades of web indexing expertise, contextual authority, and myriad other signals.

The stakes are high. If Google doesn’t transition aggressively enough to seize ground in AI search and protect its overall search dominance, it risks becoming the next Ask Jeeves.

That’s why I wouldn’t be surprised at all to see AI Mode become Google’s primary search interface sooner rather than later.

Naturally, this would be a massive disruption to the traditional Google Ads model. Google’s recent launch of a new payment protocol suggests it is already hedging against the risk of falling ad revenue from traditional search.

With everything still so fluid, it’s virtually impossible to predict what the search landscape will eventually look like once the dust has settled and new business models have emerged.

Whichever platform ultimately dominates, it’s all but certain that AI search will be the victor.

Instead of focusing on what we don’t know and waiting for answers, brands can use what they do know about AI search to seize a strategic advantage.

Rethinking Traffic Value

With most websites only seeing ~1-2% of traffic coming from LLMs like ChatGPT, it would be tempting to dismiss AI search as insignificant, a distraction – at least for now.

But with ChatGPT about to hit mass adoption in months, this picture could change very rapidly.

Plus, AI search isn’t primarily about clicks. Users will often get the information they need from AI search without clicking on a single link. AI search is about influence, awareness, and decision support.

However, analyzing traffic from AI sources does reveal some interesting patterns.

Our own research indicates that, in some industries at least, LLM-referred visitors convert at a higher rate than traditional search traffic.

This makes sense. If someone has already engaged with your brand through one or more AI interactions and still chooses to visit your site, they’re doing so with more intent than someone clicking through in search of basic information. Perhaps they’re highly engaged in the topic and want to go deeper. Or perhaps the AI responses have answered their product queries, and they’re now ready to buy.

Even if it results in fewer clicks, this indirect form of brand exposure could become increasingly valuable as AI adoption reaches mass market levels.

If 1-2% of traffic currently comes from AI sources at 10% market adoption, what happens when we reach 20% or 30% adoption? AI-mediated traffic – with its higher conversion rate – could easily grow to 5-10% of total website visits within two years.

For many businesses, that’s enough to warrant strategic attention now.

Strategic Implications For Search Marketers

Traditional keyword optimization hasn’t been cutting it for a while. And things aren’t about to get any simpler for anyone hoping to capture the intent-driven queries dominating AI interactions.

Digital marketers and SEOs need to think beyond algorithms, considering aspects that aren’t always so easily captured in a spreadsheet, such as user goals and decision-making processes.

This doesn’t mean we should abandon those SEO fundamentals essential to healthy, scalable growth. And technical SEO remains as important as ever, including proper site structure, fast loading times, and crawlable content.

However, when it comes to the content itself, the emphasis needs to shift toward providing greater depth, expertise, and user value. AI systems are far more likely to reward original, comprehensive, and authoritative information over keyword-optimized but otherwise thin content.

In short, your content needs to be built for “Asking.”

Focus on the underlying needs of the user: information gathering, interpretation, or decision support. And plan your content around “answer objects.” These are modular content components designed to be reused and repurposed by AI when generating responses to specific queries.

Instead of traditional articles targeting specific keywords, build decision frameworks that include goals, options, criteria, trade-offs, and guardrails. Each of these components can provide useful material for AI to cite in responses, whichever AI system that might be.

Preparing for AI search isn’t about looking for ways to game an algorithm. It’s about creating genuinely useful content that helps users make decisions.

For many brands, this will mean moving away from individually optimized pages to entire content ecosystems.

The Way Ahead

OpenAI’s research gives us the most authoritative picture yet of AI search adoption and user behavior. The data shows that we’re approaching a tipping point where AI-mediated search will become mainstream, while user behavior has shifted dramatically toward information seeking over content generation.

Meanwhile, the competitive landscape remains extremely fluid.

The message is clear, for now at least: Build for “Asking.”

Start planning strategies around intent-driven, decision-supporting content now, while the landscape is still evolving.

The businesses that can establish their authority in AI responses now will be in the best position when AI search does reach mass adoption – regardless of which platforms ultimately dominate.


Featured Image: Collagery/Shutterstock

Brave Introduces Ask Brave, A Unified AI Search Interface via @sejournal, @MattGSouthern

Brave is rolling out Ask Brave, a unified search tool that combines AI chat features with regular search results.

It’s accessible on all browsers via the Brave Search homepage.

Ask Brave offers detailed answers, along with interactive elements like videos, webpages, and product listings, all within a single interface.

What’s ‘Ask Brave’?

Ask Brave builds on the company’s existing AI Answers feature, which Brave claims produces over 15 million responses daily.

The initial AI summarization tool was launched in 2023 as “Summarizer,” then renamed “Answer with AI,” and is now called “AI Answers.”

Josep M. Pujol, Chief of Search at Brave, says:

While AI Answers give our users quick summaries, Ask Brave provides longer answers, follow-ups, and a chat mode enhanced with Deep Research, and most importantly, contextually relevant enrichments such as videos, news articles, products, businesses, shopping, and more – in the right place, at the right time. Search makes it possible, LLMs glue it together. We anticipate that Ask Brave will generate millions more daily AI-powered answers with this powerful combination of search and chat, and look forward to deploying more useful AI-powered search tools for our users.

The company positions Ask Brave as a solution to a common frustration: switching between traditional search interfaces and chat tools. You can now access both from one entry point.

Grounded In Search

Brave reports Ask Brave achieves 94.9% accuracy on SimpleQA, using grounding tech with its Search API.

It taps into over 35 billion webpages to base responses on web info, reportedly reducing hallucinations and irrelevant results.

The Deep Research mode issues queries and analyzes thousands of pages to identify and address blind spots, Brave says.

Privacy

Brave affirms that Ask Brave follows its privacy-first policy.

Questions and chats aren’t used for training purposes. Conversations are encrypted, automatically deleted after 24 hours of inactivity, and IP addresses aren’t stored.

How To Use It

There are several ways to access Ask Brave:

  • Include double question marks (“??”) in queries when Brave Search is your default engine.
  • Click the “Ask” button on search.brave.com.
  • Choose the “Ask” tab on search results pages to switch traditional results to chat mode.
  • Directly set the homepage to the Ask Brave interface.

Broader Context

Brave claims Brave Search is the third-largest independent global search engine, handling about 1.5 billion monthly queries. The Brave browser reports over 97 million monthly active users worldwide, according to the company.

The launch lands as major search engines continue integrating AI into core experiences. Google has rolled out AI Mode across Search, while Microsoft has integrated Copilot into Bing and Edge.

Brave also offers a Search API that provides real-time data to AI language models.


Featured Image: bangla press/shutterstock

The Impact Of AI Overviews & How Publishers Need To Adapt via @sejournal, @MattGSouthern

Google rolled out AI Overviews to all U.S. users in May 2024. Since then, publishers have reported significant traffic losses, with some seeing click-through rates drop by as much as 89%. The question isn’t whether AI Overviews impact traffic, but how much damage they’re doing to specific content types.

Search (including Google Discover and traditional Google Search) consistently accounts for between 20% and 40% of referral traffic to most major publishers, making it their largest external traffic source. When DMG Media, which owns MailOnline and Metro, reports nearly 90% declines for certain searches, it’s a stark warning for traditional publishing.

After more than a year of AI Overviews (and Search Generative Experience), we have extensive data from publishers, researchers, and industry analysts. This article pulls together findings from multiple studies covering hundreds of thousands of keywords, tens of thousands of user searches, and real-world publisher experiences.

The evidence spans from Pew Research’s 46% average decline to DMG Media’s 89% worst-case scenarios. Educational platforms like Chegg report a 49% decline. But branded searches are actually increasing for some, suggesting there are survival strategies for those who adapt.

This article explains what’s really happening and why, including the types of content that face the biggest changes and which are staying relatively stable. You’ll understand why Google says clicks are “higher quality” even as publishers see traffic declines, and you’ll see what changes might make sense based on real data rather than guesses.

AI Overviews are the biggest change to search since featured snippets were introduced in 2014. They’re affecting the kinds of content publishers produce, and they’re increasing zero-click searches, which now make up 69% of all queries, according to Similarweb.

Whether your business relies on search traffic or you’re just watching industry trends, these patterns are significantly impacting digital marketing.

What we’re seeing is a new era in search and a change that is reshaping how online information is shared and how users interact with it.

AI Overview Studies: The Overwhelming Evidence

Google’s AI Overviews (AIO) have impacted traffic across most verticals and altered search behavior.

The feature, first introduced as Search Generative Experience (SGE) at Google I/O in May 2023, now appears in over 200 countries and 40 languages following a May 2025 expansion.

Independent research conducted throughout 2024 and 2025 shows click-through rate reductions ranging from 34% to 46% when AI summaries appear on search results pages.

Evidence from a variety of independent studies outlines the impact of AIO and shows a range of effects depending on the type of content and how it’s measured:

Reduced Click-Through Rates – Pew Research Center

A study by Pew Research Center provides a rigorous analysis. By tracking 68,000 real search queries, researchers found that users clicked on results 8% of the time when AI summaries appeared, compared to 15% without them. That’s a 46.7% relative reduction.
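Pew’s headline number is simple arithmetic on the two observed rates. A quick sketch, using the study’s figures:

```python
def relative_reduction(baseline: float, observed: float) -> float:
    """Percentage drop from the baseline rate to the observed rate."""
    return (baseline - observed) / baseline * 100

# Pew's observed click rates: 15% without AI summaries, 8% with them.
print(f"{relative_reduction(15.0, 8.0):.1f}%")  # → 46.7%
```

The same formula applied to other studies’ before/after CTRs makes their headline percentages directly comparable.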

Pew’s study tracked actual user behavior, rather than relying on estimates or keyword tools, validating publisher concerns.

Google questioned Pew’s methodology, claiming that the analysis period overlapped with algorithm testing unrelated to AI Overviews. However, the consistency of the decline whenever AI Overviews were present points to a real relationship, even if other factors played a role.

Position One Eroded – Ahrefs

Ahrefs’ analysis found that position one click-through rates dropped for informational keywords triggering AI Overviews.

Ryan Law, Director of Content Marketing at Ahrefs, stated on LinkedIn:

“AI Overviews reduce clicks by 34.5%. Google says being featured in an AI Overview leads to higher click-through rates… Logic disagrees, and now, so does our data.”

Law’s observation gets to the heart of a major contradiction: Google says appearing in AI Overviews helps publishers, but the math of fewer clicks suggests this is just corporate doublespeak to appease content creators.

His post garnered over 8,200 reactions, indicating widespread industry agreement with these findings.

More Zero-Click Searches – Similarweb

According to Similarweb data, zero-click searches increased from 56% to 69% between May 2024 and May 2025. While this captures trends beyond AI Overviews, the timing aligns with the rollout.

Zero-click searches work because they meet user needs. For example, when someone searches for “weather today” or a stock price, getting an instant answer without clicking is helpful. The issue comes when zero-click searches creep into areas where publishers used to offer in-depth content.

Stuart Forrest, global director of SEO digital publishing at Bauer Media, confirms the trend, telling the BBC:

“We’re definitely moving into the era of lower clicks and lower referral traffic for publishers.”

Forrest’s acknowledgment of this new reality shows that the industry as a whole is coming to terms with the end of the golden age of search traffic: not a dramatic collapse, but a steady decline in clicks as AI meets users’ needs before they ever leave Google’s ecosystem.

Search Traffic Decline – Digital Content Next

An analysis by Digital Content Next found a 10% overall search traffic decline among member publishers between May and June.

Although modest compared to DMG’s worst-case scenarios, this represents millions of lost visits across major publishers.

AIO Placement Volatility – Authoritas

An Authoritas report finds that AI Overview placements are more volatile than organic ones. Over a two- to three-month period, about 70% of the pages cited in AI Overviews changed, and these changes weren’t linked to traditional organic rankings.

This volatility is why some sites experience sudden traffic drops even when their blue-link rankings seem stable.

Click-Based Economy Collapse For News Publishers – DMG Media

A statement from DMG Media to the UK’s Competition and Markets Authority reveals click-through rates dropped by as much as 89% when AI Overviews appeared for their content.

Although this figure represents a worst-case scenario rather than an average, it highlights the potential for traffic losses for certain search types.

Additionally, AI Overviews affect click-through rates differently depending on device type.

The Daily Mail’s desktop CTR dropped from 25.23% to 2.79% when an AI Overview surfaced above a visible link (-89%), with mobile traffic declining by 87%; U.S. figures were similar.

These numbers indicate we’re facing more than just a temporary adjustment period. We’re witnessing a structural collapse of the click-based economy that has supported digital publishing since the early 2000s. With traffic declines approaching 90%, we’ve gone beyond optimization tactics and into existential crisis mode territory.

The submission to regulatory authorities suggests they’re confident in these numbers, despite their magnitude.

Educational Site Disruption – Chegg

Educational platforms are experiencing disruption from AI Overviews.

Learning platform Chegg reported a 49% decline in non-subscriber traffic between January 2024 and January 2025 in company statements accompanying their February antitrust lawsuit.

The decline coincided with AI Overviews answering homework and study questions that previously drove traffic to educational sites. Chegg’s lawsuit alleges that Google used content from educational publishers to train AI systems that now compete directly with those publishers.

Chegg’s case is a warning sign for educational content creators: If AI systems can successfully replace structured learning platforms, what’s the future for smaller publishers?

Reduced Visibility For Top Ranking Sites – Advanced Web Ranking

AI Overviews are dense and occupy significant vertical space, pushing organic results down the page and reducing their visibility.

Advanced Web Ranking found that across 8,000 keywords, AI Overviews average around 169 words and include about seven links when expanded.

Once expanded, the first organic result often appears about 1,674px down the page. That’s well below the fold on most screens, reducing visibility for even top-ranked pages.

Branded Searches: The Surprising Exception

While most query types are seeing traffic declines, branded searches show the opposite trend. According to Amsive’s research, branded queries with AI Overviews see an 18% increase in click-through rate.

Several related factors likely contribute to this brand advantage. When AI Overviews mention specific brands, it conveys authority and credibility in ways that generic content can’t replicate.

People seeing their preferred brand in an AI Overview may be more likely to click through to the official site. Additionally, AI Overviews for branded searches often include rich information like store hours, contact details, and direct links, making it easier for users to find what they need.

This pattern has strategic implications as companies that have invested in brand building have a strong defense against AI disruption. The 18% increase in branded terms versus a 34-46% decrease in generic terms (as shown above) creates a performance gap that will likely impact marketing budgets.

The brand advantage extends beyond direct brand searches. Queries combining brand names with product categories show smaller traffic declines than purely generic searches. This suggests that even partial brand recognition provides some protection against AI Overview disruption. Companies with strong brands can leverage this by ensuring their brand appears naturally in relevant conversations and content.

This brand premium creates a two-tier internet, where established brands flourish while smaller content creators struggle financially. The impact on information diversity and market competition is troubling.

Google’s Defense: Stable Traffic, Better Quality

Google maintains a consistent three-part defense of AI Overviews:

  • Increased search usage.
  • Improved click quality.
  • Stable overall traffic.

The company frames AI Overviews as enhancing rather than replacing traditional search, though this narrative faces increasing skepticism from publishers experiencing traffic declines.

The company’s blog post from May, introducing the global expansion, stated:

“AI Overviews is driving over 10% increase in usage of Google for the types of queries that show AI Overviews. This means that once people use AI Overviews, they are coming to do more of these types of queries.”

Although this statistic shows a rise in Google Search engagement, it has sparked intense debate and skepticism in the search and publishing worlds. Many experts accept that a 10% boost in AI Overview-driven searches may reflect changing user behavior, but warn that higher search volumes don’t automatically mean more traffic for content publishers.

A number of LinkedIn industry voices have publicly pushed back on Google’s 10% usage increase narrative. For example, Devansh Parashar writes:

“Google’s claim that AI Overviews have driven 10% more searches masks a troubling trend. Data from independent research firms, such as Pew, show that a majority of users do not click beyond the AI Overview— a figure that suggests Google’s LLM layer is quietly eating the web’s traffic pie.”

Similarly, Trevin Shirey points out concerns about the gap between increased engagement with search queries and the actual traffic publishers see:

“Although Google reports a surge in usage, many publishers are experiencing declines in organic click-through rates. This signals a silent crisis where users get quick answers from AI, but publishers are left behind.”

Google’s claim about increased usage needs to be read carefully: the increase applies only to the types of queries that show AI Overviews, not to overall search volume.

If users have to make multiple searches to find information they could have gotten in one click, their overall usage might go up, but their satisfaction could actually decrease.

In an August blog post, Google’s head of search, Liz Reid, claimed the volume of clicks from Google search to websites had been “relatively stable” year-over-year.

Reid also asserted that click quality had improved:

“With AI Overviews, people are searching more and asking new questions that are often longer and more complex. In addition, with AI Overviews people are seeing more links on the page than before. More queries and more links mean more opportunities for websites to surface and get clicked.”

A Google spokesperson told the BBC:

“More than any other company, Google prioritises sending traffic to the web, and we continue to send billions of clicks to websites every day.”

Google’s developer documentation states:

“We’ve seen that when people click from search results pages with AI Overviews, these clicks are higher quality (meaning, users are more likely to spend more time on the site).”

Publishers are understandably concerned and question the gap between Google’s description of stability and the data showing otherwise.

Jason Kint, CEO of Digital Content Next, notes:

“Since Google rolled out AI Overviews in your search results, median year-over-year referral traffic from Google Search to premium publishers down 10%.”

Kint’s data shatters Google’s carefully crafted image of stability, exposing what many publishers already suspect: The search giant’s promises are increasingly at odds with the realities reflected in their analytics dashboards and revenue reports.

The argument that higher-quality clicks are more valuable doesn’t provide much comfort when revenue is falling short. Even if engagement increases, losing such a large portion of clicks is a serious challenge for many ad-supported businesses.

Echoing these concerns, SEO Lead Jeff Domansky states:

“For publishers, AI Overviews are a direct hit to traffic and revenue models built around clicks and pageviews.”

Although Google claims that AI Overview clicks are of higher quality, many industry experts are skeptical.

Lily Ray, Vice President, SEO Strategy & Research at Amsive, highlights the lack of quality control on Google’s end:

“Since Google’s AI Overviews were launched, I (and many others) have shared dozens of examples of spam, misinformation, and inaccurate, biased, or incomplete results appearing in live AI Overview responses.”

And SEO specialist Barry Adams raises concerns about the quality and sustainability:

“Google’s AI Overviews are terrible at quoting the right sources… There is nothing intelligent about LLMs. They’re advanced word predictors, and using them for any purpose that requires a basis in verifiable facts – like search queries – is fundamentally wrong.”

Adams highlights a philosophical contradiction in AI Overviews: By relying on probabilistic language models to answer factual questions, Google may be misaligning technology with user needs.

This range of voices highlights a growing disconnect between Google’s hopeful engagement claims and the tough realities many publishers are facing as their referral traffic and revenue decrease.

Google hasn’t provided specific metrics defining “higher quality.” Publishers can’t verify these claims without access to comparative engagement data from AI Overview versus traditional search traffic.

Legal Challenges Mount

Publishers are seeking relief through regulatory and legal channels. In July, the Independent Publishers Alliance, tech justice nonprofit Foxglove, and the campaign group Movement for an Open Web filed a complaint with the UK’s Competition and Markets Authority. They claim that Google AI Overviews misuse publisher content, causing harm to newspapers.

The complaint urges the CMA to impose temporary measures that prevent Google from using publisher content in AI-generated responses without compensation.

It’s still unclear whether courts and regulators, which often move slowly, can act quickly enough to help publishers before market forces render any remedy irrelevant. It’s a classic example of regulation trying to keep pace with technological change.

The rapid growth of AI Overviews suggests that market realities may outstrip legal solutions.

Publisher Adaptations: Beyond Google Dependence

With threats looming, publishers are rushing to cut their reliance on Google. David Higgerson shares Reach’s approach in a statement to the BBC:

“We need to go and find where audiences are elsewhere and build relationships with them there. We’ve got millions of people who receive our alerts on WhatsApp. We’ve built newsletters.”

Instead of creating content for Google discovery, publishers need to develop direct relationships. Email newsletters, mobile apps, and podcast subscriptions provide traffic sources that aren’t affected by AI Overview disruptions.

Stuart Forrest stresses the importance of quality as a key differentiator:

“We need to make sure that it’s us being cited and not our rivals. Things like writing good quality content… it’s amazing the number of publishers that just give up on that.”

However, quality alone may not be enough if users never leave Google’s search results page. Publishers also need to master AI Overview optimization and understand how to make the most of remaining click opportunities.

Higgerson notes:

“Google doesn’t give us a manual on how to do it. We have to run tests and optimise copy in a way that doesn’t damage the primary purpose of the content.”

Another path that’s emerging is content licensing. Following News Corp and The Atlantic partnering with OpenAI, more publishers are exploring direct licensing relationships. These deals typically provide upfront payments and ongoing royalties for content usage in AI training, though terms remain confidential.

What We Don’t Know

There are still many uncertainties. The long-term trajectory of AI Mode, for example, could alter current patterns.

AI Mode

Google’s AI Mode may pose an even bigger threat than AI Overviews. This new interface displays search results in a conversational format instead of 10 blue links. Searchers have a back-and-forth with AI, with occasional reference links thrown in.

For publishers already struggling with AI-powered overviews, AI Mode could wipe out the rest of their traffic.

International Impact

The international effects outside English-language markets remain unmeasured. Since AI Overviews are available in over 200 countries and 40 languages, the impact likely varies by market. Factors like cultural differences in search behavior, language complexity, local competition dynamics, and varying digital literacy levels could lead to vastly different outcomes.

Most current research focuses on English-language markets in developed economies.

Content Creation

The feedback loop between AI Overviews and content creation could reshape what content gets produced and how information flows online.

If publishers stop creating certain types of content due to traffic losses, will AI Overview quality suffer as training data becomes stale?

Looking Ahead: Expanded AI Features

Google intends to continue expanding AI features despite mounting publisher concerns and legal challenges.

The company’s roadmap includes AI Mode international expansion and enhanced interactive features, including voice-activated AI conversations and multi-turn query refinement. Publishers should prepare for continued evolution rather than expecting stability in search traffic patterns.

Regulatory intervention may force greater transparency in the coming months. The Independent Publishers Alliance’s EU complaint requests detailed impact assessments and content usage documentation.

These proceedings could establish precedents affecting how AI systems can use publisher content.

Final Thoughts

The question isn’t whether AI Overviews affect traffic. Evidence overwhelmingly confirms they do. The question is how publishers adapt business models while maintaining sustainable operations.

The web is at a turning point, where the core agreement is being rewritten by the platforms that once promoted the open internet. Publishers who don’t acknowledge this change are jeopardizing their relevance in an AI-driven future.

Those who understand the impact, invest in brand building, and diversify traffic sources will be best positioned for success.


Featured Image: Roman Samborskyi/Shutterstock

Gemini 2.5 Flash Update: Clearer Answers, Better Image Understanding via @sejournal, @MattGSouthern

Google updates Gemini 2.5 Flash with clearer step-by-step help, more structured responses, and stronger image understanding, now live in the Gemini app.

  • Gemini 2.5 Flash adds step-by-step guidance aimed at homework and complex topics.
  • Responses are formatted with headers, lists, and tables for faster scanning.
  • Image understanding can explain detailed diagrams and turn notes into flashcards.

Marketing Is 4th Most Exposed To GenAI, Indeed Study Finds via @sejournal, @MattGSouthern

Marketing professionals face one of the highest levels of potential AI disruption across all occupations, with 69% of marketing job skills positioned for transformation by generative AI, according to new data from Indeed.

The analysis evaluated nearly 2,900 work skills against U.S. job postings and found that marketing is the fourth most exposed profession, trailing only software development, data and analytics, and accounting.

The Shift From Doing To Directing

Indeed’s GenAI Skill Transformation Index groups skills into four levels: minimal, assisted, hybrid, and full transformation.

For marketing professionals, the majority of affected skills fall into hybrid transformation, where AI handles routine execution while humans provide oversight, validation, and strategic direction.

Indeed writes:

“Human oversight will remain critical when applying these skills, but GenAI can already perform a significant portion of routine work.”

That covers tasks AI can complete reliably in standard cases, with people stepping in to manage exceptions, interpret ambiguous situations, and ensure quality control.

What Marketing Skills Are Most At Risk?

Administrative, documentation, and text-processing tasks show high transformation potential, where AI already performs well at information retrieval, drafting, and analysis.

Communication-related work sits in the hybrid zone for many occupations. In one example from the report, communication skills appear in 23% of nursing postings and are classified as “hybrid.” This illustrates how routine language tasks are increasingly AI-assistable while human judgment remains essential.

How the Study Scored Skills

The study used multiple large language models and based its ratings on consistent results from OpenAI’s GPT-4.1 and Anthropic’s Claude Sonnet 4, noting that model performance varies.

The team evaluated each skill on two dimensions: problem-solving requirements and physical necessity. Marketing scores high on problem-solving and low on physical necessity, making many skills strong candidates for AI transformation.

A Change From Previous Research

Earlier Hiring Lab work found zero skills “very likely” to be fully replaced by GenAI.

In this update, the report identifies 19 skills (0.7% of the ~2,900 analyzed) that cross that “very likely” threshold. The authors frame this as incremental progress toward end-to-end automation for narrow, well-structured tasks, not broad replacement.

The Broader Employment Picture

Across the labor market, 26% of jobs on Indeed could be highly transformed by GenAI, 54% are moderately transformed, and 20% show low exposure.

These are measures of potential transformation. Actual outcomes depend on adoption, workflow design, and reskilling.

The report notes:

“Any realized impacts will depend entirely on whether and how businesses adopt and integrate GenAI tools…”

Marketing vs. Other Professions

Software development tops the list with 81% of skills facing transformation, followed by data and analytics (79%) and accounting (74%).

On the other end, nursing shows 33% skill transformation, with core patient-care responsibilities remaining human-centered.

Marketing’s position reflects its reliance on cognitive, screen-based work that AI can increasingly assist.

Not All AI Models Are Equal

The report emphasizes that model choice matters. Different models varied in output quality and stability, so teams should test tools against their own use cases rather than assume uniform performance.

Looking Ahead

The report’s authors, Annina Hering and Arcenis Rojas, created the GenAI Skill Transformation Index to reflect the level of transformation rather than simple replacement.

They advise developing skills that complement AI, such as strategy, creative problem-solving, and the ability to validate and interpret AI-generated outputs.

The timeline for these changes will vary with company size, industry, and digital maturity.

But the overall trend is clear: roles are evolving from hands-on task execution to overseeing AI and developing strategies. Those who stay ahead by adopting hybrid workflows will likely be in the best position.


Featured Image: Roman Samborskyi/Shutterstock

When Agents Replace Websites via @sejournal, @DuaneForrester

Let’s talk about an agentic future. As task-completing agents move from concept to adoption, their impact on how we discover and transact online will be significant. Websites won’t vanish, but in many cases, their utility will shrink as agents become the new intermediary layer between people and answers. Domains will still exist, but their value as discovery assets is likely to erode. Building and maintaining a site will increasingly mean structuring it for agents to retrieve from, not just for people to browse, and the idea of domains appreciating as scarce assets will feel less connected to how discovery actually happens.

The growth trajectory for AI agents is already clear in the data. Grand View Research valued the global AI agents market at USD 5.40 billion in 2024, with forecasts reaching USD 50.31 billion by 2030 at an annual growth rate of about 45.8%. Regionally, the Asia-Pacific market was USD 1.30 billion in 2024 and is projected to expand to USD 14.15 billion by 2030, with China alone expected to grow from USD 402.6 million to USD 3.98 billion over the same period. Europe is following a similar path, climbing from USD 1.32 billion in 2024 to USD 11.49 billion by 2030. Longer-term, Precedence Research projects the global agentic AI market will rise from USD 7.55 billion in 2025 to nearly USD 199.05 billion by 2034, a compound growth rate of 43.84%. These forecasts from multiple regions show a consistent global pattern: adoption is accelerating everywhere, and the shift toward agentic systems is not theoretical; it is underway. These figures are about task-completing agents, not casual chat use.
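These projections are internally consistent: the compound annual growth rate (CAGR) can be recomputed from any pair of start and end values and the span in years. A minimal check, using the Precedence Research figures above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, end value, and span, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

# Precedence Research: USD 7.55B in 2025 to USD 199.05B in 2034 (a 9-year span).
print(f"{cagr(7.55, 199.05, 9):.2f}%")  # → 43.84%, matching the cited rate
```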

Image Credit: Duane Forrester

Do We Still Need Websites In An Agentic World?

It’s easy to forget how limited the internet felt in the 1990s. On AOL, you didn’t browse the web the way we think of it today. You navigated keywords. One word dropped you into chat rooms, news channels, or branded content. The open web was technically out there, but for most people, America Online WAS the internet.

That closed-garden model eventually gave way to the open web. Domains became navigation anchors. Owning a clean .com or a trusted extension like .org or .gov signaled legitimacy. Websites evolved into the front doors of digital identity, where brand credibility and consumer trust were built. Search rankings reinforced this. An exact-match domain once boosted visibility, and later the concept of “domain authority” helped indicate who showed up at the top of search results. For nearly three decades, websites have been the central hub of digital discovery and transactions.

But we may be circling back. Only this time, the keyword is no longer “AOL Keyword: Pizza Hut.” It’s your natural-language intent: “Book me a flight,” “Order flowers,” “Find me a dentist nearby.” And instead of AOL, the gatekeepers are LLMs and agentic systems.

From Navigation To Answers

The rise of agentic systems collapses the journey we’ve been used to. Where discovery once meant search, scanning results, clicking a domain, and navigating a site, it now means describing your intent and letting the system do the rest. You don’t need Expedia or United.com if your agent confirms your flight. You don’t need to touch OpenTable’s site if a reservation is placed automatically for tomorrow night. You don’t need to sift through Nike’s catalog if new running shoes just arrive at your door.

In this flow, the answer layer replaces the click, the task layer replaces the browsing session, and the source itself becomes invisible. The consumer no longer cares which site delivered the data or handled the transaction, as long as the result is correct.

Proof In Practice: WeChat

This shift isn’t hypothetical. In China, it’s already happening at scale. WeChat introduced Mini-Programs in 2017 as “apps within an app,” designed so users never need to leave the WeChat environment. By 2024, they had become mainstream: Recent reports suggest there are between 3.9 and 4.3 million WeChat Mini-Programs in the ecosystem today, with over 900 million monthly active users. And while Mini-Programs are closer to apps than AI agents, they demonstrate the same dynamic: consumers readily adopt a task-completion layer that replaces individual websites.

In food and beverage and hospitality, over 80% of top chain restaurants now run ordering or take-out flows directly through Mini-Programs, meaning customers never touch a separate website. International brands often prioritize Mini-Programs as their Chinese storefronts instead of building localized websites, since WeChat already handles discovery, product listings, payments, and customer service. Luxury brand LOEWE, for example, launched its 2024 “Crafted World” exhibition in Shanghai entirely via a WeChat Mini-Program, offering ticketing and interactive digital content without requiring users to leave the app.

For many domestic Chinese businesses, this has become the default strategy: their websites exist, if at all, as minimal shells, while the real customer experience lives entirely inside WeChat. The scale comparison is worth keeping in mind: WeChat serves over 1 billion monthly active users, while ChatGPT currently reports over 800 million weekly active users, which, summed across a month’s weeks, implies a volume roughly three times WeChat’s monthly figure (a rough comparison, since weekly and monthly actives aren’t directly equivalent). An agentic era of direct-to-consumer interaction facilitated by platforms like ChatGPT, WeChat, Claude, Gemini, and Copilot could bring a massive shift in consumer behavior.

Western Parallels

Western platforms are already moving in this direction. Instagram Checkout allows users to buy products directly inside Instagram, without ever visiting a retailer’s website; Shopify documents this integration. TikTok offers similar flows: its partnership with Shopify enables in-app checkout so the consumer never leaves TikTok. Even services like Uber now function as APIs inside larger ecosystems. You can book a ride from within another app and never open Uber directly.

In each case, the website still exists, but the consumer may never see it. Discovery, consideration, and conversion all happen inside the closed flow.

The AOL Parallel

The resemblance to the mid-1990s is striking. AOL’s big push came in that period, when its “Keyword” model positioned the service as the internet itself. Instead of typing URLs, people entered AOL Keywords and stayed inside AOL’s curated walls. By mid-1996, AOL had roughly 6 million U.S. subscribers doing this, representing about 13% of the nation’s estimated 44 million internet users at the time.

Today, the “keyword” has become your intent. The agent interprets it, makes the decision, and fulfills the request. The outcome is the same: a closed environment where the gateway controls visibility and access. Only this time, it’s powered by LLMs and APIs instead of dial-up modems.

This is not an isolated evolution. There’s mounting evidence that the open web itself is weakening. Google recently stated in a legal filing that “the open web is already in rapid decline … harming publishers who rely on open-web display advertising revenue.” That report was covered by Search Engine Roundtable.

Pew Research found that when Google displays AI-generated summaries in search results, users click links only 8% of the time, compared to 15% when no summary is present. That’s nearly a 50% decline in link clicks. Digital Content Next reported that premium publishers saw a 10% year-over-year drop in referral traffic from Google during a recent eight-week span.

The Guardian covered MailOnline’s specific case, where desktop click-through dropped 56% when AI summaries appeared, and mobile click-through fell 48%. Advertising spend tells a similar story. MarketingProfs reports that professionally produced news content is projected to receive just 51% of global content ad spend in 2025, down from 72% in 2019. Search Engine Land shows that open-web display ads have fallen from about 40% of Google AdWords impressions in 2019 to only 11% by early 2025.

The story is consistent. Consumers click less, publishers earn less, and advertisers move their budgets elsewhere. The open web will likely no longer be the center of gravity.

If websites lose their central role, what takes their place? Businesses will still need technical infrastructure, but the front door will change. Instead of polished homepages, structured data and APIs will feed agents directly. Verification layers like schema, certifications, and machine-readable credentials will carry more weight than design. Machine-validated authority (how often your brand is retrieved or cited by LLMs) will become a core measure of trust. And partnerships or API integrations will replace traditional SEO in ensuring visibility.
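
To make the "structured data feeding agents directly" idea concrete, here is a minimal sketch of the kind of schema.org JSON-LD markup an agent (or a crawler feeding one) can parse without ever rendering a page. The product name, SKU, and values are illustrative placeholders, not a real listing; the sketch simply builds the markup in Python and serializes it to the string that would sit in a `<script type="application/ld+json">` block.

```python
import json

# Hypothetical product entity; every value here is a placeholder.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "sku": "EX-RUN-001",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        # Clean availability and pricing are exactly the "non-negotiable
        # inputs" an agent needs to complete a purchase flow.
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
}

# Serialized form: machine-readable regardless of page design.
markup = json.dumps(product_jsonld, indent=2)
print(markup)
```

The point of the sketch is that none of this depends on layout, typography, or UX: the same facts a polished product page communicates visually are exposed here as fields an agent can retrieve and verify.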

This doesn’t mean websites vanish. They’ll remain important for compliance, long-form storytelling, and niches where users still seek a direct experience. But for mainstream interactions, the website is being demoted to plumbing.

And while design and user experience may lose ground to agentic flows, content itself remains critical. Agents still need to be fed with high-quality text, structured product data, verified facts, and fresh signals of authority. Video will grow in importance as agents surface summaries and clips in conversational answers. First-party user-generated content, especially reviews, will carry more weight as a trust signal. Product data, such as clean specs, accurate availability, and transparent pricing, will be a non-negotiable input to agent systems.

In other words, the work of SEO isn’t disappearing. Technical SEO remains the plumbing that ensures content is discoverable and accessible to machines. Content creation continues to matter, both because it fuels agent responses and because humans still consume it when they step beyond the agent flow. The shift is less about content’s relevance and more about where and how it gets consumed. Web design and UX work, however, will inevitably come under scrutiny as optional costs as the agent interface takes over consumer experiences.

One consequence of this shift is that brands risk losing their direct line to the customer. When an agent books the flight, orders the shoes, or schedules the dentist, the consumer’s loyalty may end up with the agent itself, not the underlying business. Just as Amazon’s marketplace turned many sellers into interchangeable storefronts beneath the Amazon brand, agentic systems may flatten brand differentiation unless companies build distinctive signals that survive mediation. That could mean doubling down on structured trust markers, recognizable product data, or even unique content assets that agents consistently retrieve. Without those, the relationship belongs to the agent, not you.

That potential demotion for websites carries consequences. Domains will still matter for branding, offline campaigns, and human recall, but their value as entry points to discovery is shrinking. The secondary market for “premium” domains is already showing signs of stress. Registries have begun cutting or eliminating premium tiers; .art, for example, recently removed over a million names from its premium list to reprice them downward. Investor commentary also points to weaker demand, with TechStartups noting in 2025 that domain sales are “crashing” as AI and shifting search behaviors reduce the perceived need for expensive keyword names.

We’ve seen this arc before. Families once paid hundreds of dollars for full sets of printed encyclopedias. Owning Britannica on your shelf was a marker of credibility and access to knowledge. Today, those same volumes can be found in thrift stores for pennies, eclipsed by digital access that made the scarcity meaningless. Domains are on a similar path. They will remain useful for identity and branding, but the assumption that a keyword .com will keep appreciating looks more like nostalgia than strategy.

Defensive portfolios across dozens of ccTLDs will be harder to justify, just as stocking encyclopedias became pointless once Wikipedia existed. Websites will remain as infrastructure, but their role as front doors will continue to shrink.

Marketing strategies must adapt. The focus will move from polishing landing pages to ensuring your data is retrievable, your brand is trusted by agents, and your authority is machine-validated. SEO, as we know it, will transform from competing for SERP rankings to competing for retrieval and integration into agent responses.

Another underappreciated consequence of all this is measurement. For decades, marketers have relied on web analytics: page views, bounce rates, conversions. Agentic systems obscure that visibility. If a customer never lands on your site but still books through an agent, you may gain the revenue but lose the data trail. New metrics will be needed. Not just whether a page ranks, but whether your content was retrieved, cited, or trusted inside agent flows. In that sense, the industry will need to redefine what “traffic” and “conversion” even mean when the interface is a conversation rather than a website.
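
One way to imagine those new metrics is a simple "agent visibility" score: given a sample of agent responses, how often is a brand mentioned, and how often is its domain actually cited as a source? The sketch below is hypothetical; the function name, the naive substring matching, and the sample responses are all illustrative, not an established measurement standard.

```python
from dataclasses import dataclass


@dataclass
class VisibilityStats:
    """Counts over a sample of agent responses."""
    responses: int
    mentions: int   # responses that name the brand at all
    citations: int  # responses that cite the brand's domain as a source

    @property
    def mention_rate(self) -> float:
        return self.mentions / self.responses if self.responses else 0.0


def score_agent_visibility(brand: str, domain: str,
                           responses: list[str]) -> VisibilityStats:
    # Naive case-insensitive substring matching; a real system would need
    # entity resolution, not string search.
    mentions = sum(brand.lower() in r.lower() for r in responses)
    citations = sum(domain.lower() in r.lower() for r in responses)
    return VisibilityStats(len(responses), mentions, citations)


# Illustrative sample of three agent responses.
sample = [
    "Acme Shoes offers free returns (source: acmeshoes.example).",
    "Several retailers stock trail runners at that price point.",
    "Acme Shoes is rated highly by long-distance runners.",
]
stats = score_agent_visibility("Acme Shoes", "acmeshoes.example", sample)
print(stats.mention_rate)  # 2 of 3 responses mention the brand
```

The gap between `mentions` and `citations` is itself informative: a brand that is often named but rarely cited is being talked about on the agent's terms, not its own.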

The Fear And The Possibility

The fear is obvious. We’ve been here before with AOL. A closed gateway can dominate visibility, commoditize brands, and reduce consumer choice. The open web and search engines broke us out of that in the late 1990s. No one wants to return to those walls.

But the possibility is also real. Businesses that adapt to agentic discovery (with structured signals, trusted data feeds, and machine-recognized authority) can thrive. The website may become plumbing, but plumbing matters. It carries the data that powers the experience.

So the real question isn’t whether websites will still exist. Ultimately, they will, in some format. The question is whether your business is still focused on decorating the door, or whether you’re investing in the pipes that agents actually use to deliver value.

This post was originally published on Duane Forrester Decodes.


Featured Image: Collagery/Shutterstock

SISTRIX Reports Sharp Drop In ChatGPT Web Searches via @sejournal, @MattGSouthern

SISTRIX reports that ChatGPT is triggering live web searches far less often for people who use the app without logging in.

In daily spot-checks over the last two weeks, the share of answers that called the web fell from above 15% to below 2.5%. SISTRIX does not assign a cause and notes the observation applies to anonymous sessions.

What Changed

SISTRIX says it “analyses numerous ChatGPT responses to a wide variety of prompts” each day and recently “noticed that ChatGPT uses web searches significantly less frequently.”

It adds that, “at least when using the app without an account,” the measured rate of responses completed via a web search declined sharply in the period reviewed.

SISTRIX doesn’t publish a sample size, list of prompts, or detection method in the post.

SISTRIX also writes that ChatGPT has “traditionally” relied on Bing for web lookups and references rumors of Google data being used, but it doesn’t claim a direct link between any specific backend change and the measured decline.

Related Context

Microsoft Bing Search APIs Retirement

Microsoft announced that the Bing Search APIs were retired on August 11.

Some third-party tools have migrated to alternatives. This doesn’t prove a change inside ChatGPT, but it’s a relevant ecosystem shift.

Google’s SERP Access Changes

SISTRIX separately documented that Google no longer supports the “num=100” parameter and now returns 10 results per request, increasing the effort required to collect SERP data at scale.

Again, this is context rather than causation.

Recent ChatGPT Product Notes

OpenAI’s release notes list “improvements to search in ChatGPT” on September 16, without detailing backend sourcing.

That update may be unrelated to the SISTRIX measurement, but is worth noting in the same timeframe.

Why This Matters

If ChatGPT is consulting the web less frequently in anonymous sessions, you might notice fewer answers citing current sources and a greater reliance on the model’s internal knowledge for those users.

This could influence how often recent news is referenced in responses for users who aren’t logged in, although the behavior may differ for Plus or Enterprise accounts.

Looking Ahead

SISTRIX’s observation is limited to a specific time frame and anonymous usage. Currently, there’s no confirmed information from OpenAI about how frequently ChatGPT performs live lookups overall, and SISTRIX hasn’t provided a reason for the recent drop.

The most cautious conclusion is that one independent measurement showed a sharp short-term decline, which deserves further testing.


Featured Image: matakeris.creative/Shutterstock