How AI Is Redefining Search And What Leaders Must Do Now via @sejournal, @TaylorDanRW

Artificial intelligence is transforming how people search, discover, and act on information. For chief marketing officers and senior leaders, this is not a question of whether SEO is “dead” but of how to adapt to a new era where visibility spans AI-driven assistants, multimodal tools, and fragmented user journeys.

Two forces drive this disruption: rapid advances in technology and the accelerating adoption of new search behaviors by younger demographics.

As these forces converge, traditional measures of success such as rankings, traffic, and clicks are losing relevance.

What matters now is the ability to understand where visibility is shifting, how decisions are being shaped earlier in the funnel, and how to build adaptive strategies that secure brand presence across an expanding digital ecosystem.

The Disruption At Hand

The launch of ChatGPT marked a tipping point for digital marketing. Within months, generative AI became a mainstream tool, offering users new ways to answer questions, evaluate products, and plan decisions.

Industry debate has since centered on labels such as SEO, GEO (Generative Engine Optimization), and AIEO. But the label is secondary; the disruption is structural.

Gartner predicts that traditional search engine volumes will fall by roughly 25% as users increasingly turn to AI-powered platforms and assistants. Even after a 25% decline, a base as large as Google’s is still measured in trillions of searches, but the shift is enough to destabilize established traffic models.

This does not spell the end of SEO. Instead, it signals a transformation of the internet itself. The way users seek and consume information is changing at the same pace as the technologies that enable it.

Why Visibility Is Changing Shape

Technology Drivers

Search is no longer confined to a search box. Google has introduced Circle to Search, Lens, AI Overviews, and AI Mode. Perplexity and ChatGPT are establishing themselves as discovery platforms. Each of these represents a new entry point for user journeys, many of which bypass the traditional search results page altogether.

User Drivers

Younger demographics are accelerating the shift. At Google’s Search Central Live event in Bangkok, new data showed that Gen Z is not abandoning Google entirely in favor of TikTok or other alternatives, as commonly assumed. Instead, they are adopting AI-enabled features inside Google at a higher rate than any other age group. One in 10 Gen Z searches already begins with Circle to Search or Lens, and one in five of those searches is commercial in nature.

The implication is clear: The next generation of consumers is interacting with the internet in ways that blend image recognition, voice, video, and AI assistance. Traditional keyword-driven search journeys are being replaced by multimodal, non-linear exploration.

The New Buyer Journey: The Dark Funnel

For years, marketers described the “funnel” as a linear path: awareness, consideration, decision. Today, that funnel is breaking apart.

AI intermediaries such as ChatGPT, Perplexity, or Google’s AI Overviews are now summarizing, curating, and interpreting information before users ever reach a brand-owned website. In many cases, research and decision-making occur entirely within these intermediaries.

At the same time, peer-generated content plays an outsized role. Reddit threads, product comparison lists, and third-party case studies are being pulled into AI-generated responses.

This ecosystem expands the number of sources that shape perception while reducing the likelihood that users visit a brand directly.

The result is a “dark funnel.” Purchase decisions are being made through fragmented, often opaque pathways that evade traditional tracking tools. For leaders, this means brand influence must extend beyond owned assets to encompass the broader ecosystem where AI models source their information.

Rethinking Organic Success Metrics

For nearly two decades, SEO success was measured through a narrow set of metrics such as keyword rankings, organic traffic, and click-through rates. In the AI-driven search environment, those measures are no longer sufficient.

Three shifts stand out:

  1. Cross-Channel Lift: SEO is often the first point of exposure, even if it does not capture the last click. Google Analytics 4 now makes it possible to measure this by analyzing how many users first encounter a brand through organic search before returning directly, via social, or through paid channels. This reframes SEO as a driver of brand lift across the marketing mix.
  2. Visibility In AI-Generated Citations: Being referenced in AI summaries does not always translate into immediate clicks, but it does influence perception and consideration. Success must account for brand presence within these outputs, even when user journeys bypass the website.
  3. Topic-Level Visibility: AI search retrieves information at a thematic level rather than matching individual keywords. Tracking topic visibility, breadth of coverage, and the quality of source material is becoming more valuable than measuring a single keyword position.

Traditional measures such as “average position” in Google Search Console are increasingly unreliable. AI citations are often recorded as position one, regardless of context, creating a distorted picture of performance.

Strategic Imperatives For Leaders

The changes unfolding in AI-driven search are structural, not cyclical. Leaders cannot treat them as temporary turbulence. Instead, the task is to create resilience and adaptability in marketing organizations by pursuing five imperatives:

1. Audit AI-Driven Traffic And Visibility

Leaders must first establish a baseline of how AI is already affecting their businesses. While AI referrals are still a small share of overall traffic, they represent an emerging channel with unique characteristics.

  • Practical Step: Use GA4 or Looker Studio to segment traffic from platforms such as ChatGPT, Gemini, and Copilot. These sources typically appear under “referral” in analytics, but regex filters can separate them cleanly.
  • Why It Matters: Treating AI traffic as a distinct channel allows organizations to analyze landing pages, conversions, and revenue, rather than dismissing it as “miscellaneous.”
  • Leadership Lens: Framing AI traffic as a channel elevates its importance in boardroom discussions and positions the organization to justify future investments in tooling, content, or partnerships.
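To make the "distinct channel" idea concrete, the bucketing that a regex filter performs can be sketched in a few lines of Python. The hostnames below are illustrative only, not an authoritative list; verify the referrer strings that actually appear in your own GA4 reports.

```python
import re

# Hypothetical AI-platform referrer hostnames. Actual strings seen in
# GA4 referral reports vary; check your own data before filtering.
AI_REFERRER_PATTERN = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|gemini\.google\.com|"
    r"copilot\.microsoft\.com|perplexity\.ai)",
    re.IGNORECASE,
)

def classify_channel(referrer: str) -> str:
    """Bucket a session's referrer into 'ai' or 'other_referral'."""
    if AI_REFERRER_PATTERN.search(referrer):
        return "ai"
    return "other_referral"

sessions = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search?q=best+crm",
    "https://news.example.com/article",
]
print([classify_channel(s) for s in sessions])
# → ['ai', 'ai', 'other_referral']
```

Once AI sources are isolated into their own bucket, landing pages, conversions, and revenue can be reported against them like any other channel.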

2. Track The Market, Not Just Internal Performance

A common misinterpretation is to view every decline in traffic as a failure of execution. In reality, shrinking demand in traditional search is often the root cause.

  • Practical Step: Compare organic and paid impressions for the same set of keywords. If both decline, the issue is demand-side, not execution-side. Layer this with Google Trends to visualize whether volumes are falling market-wide.
  • Why It Matters: This approach reframes the narrative from “our SEO team is underperforming” to “our market is shifting.” This distinction is crucial for maintaining stakeholder confidence.
  • Leadership Lens: CMOs who can separate market-driven shifts from operational gaps will have sharper conversations with the C-suite about resource allocation and risk.
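The demand-side versus execution-side check described above reduces to a simple comparison. The figures in this Python sketch are made up for illustration.

```python
# If organic AND paid impressions both fall for the same keyword set,
# the decline is likely demand-side; if only organic falls while paid
# holds, execution is the more likely culprit. Numbers are hypothetical.
def diagnose(organic_prev, organic_now, paid_prev, paid_now):
    organic_down = organic_now < organic_prev
    paid_down = paid_now < paid_prev
    if organic_down and paid_down:
        return "demand-side: the market itself is shrinking"
    if organic_down and not paid_down:
        return "execution-side: organic underperforms while demand holds"
    return "no organic decline"

print(diagnose(organic_prev=120_000, organic_now=90_000,
               paid_prev=80_000, paid_now=60_000))
# → demand-side: the market itself is shrinking
```

Layering Google Trends data over the same keyword set adds a market-wide view of whether volumes are falling everywhere.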

3. Invest In Top-Of-Funnel Presence Across The Ecosystem

AI models increasingly draw from third-party sites, reviews, and community forums when generating responses. This widens the playing field for visibility beyond a brand’s own domain.

  • Practical Step: Build a program to secure mentions in authoritative third-party contexts such as industry directories, product comparison lists, peer forums, and niche communities.
  • Why It Matters: Being present in these external ecosystems ensures that when AI models summarize options, your brand is more likely to appear in the conversation even if the user never reaches your website.
  • Example: For a travel brand, this might mean appearing not only in “best hotel” lists on major sites, but also in Reddit threads, YouTube reviews, and AI-cited blogs.
  • Leadership Lens: Leaders must expand their definition of SEO from domain optimization to ecosystem visibility. This is not an incremental task but a fundamental shift in scope.

4. Rethink The Funnel And Customer Journey

The traditional linear funnel is breaking apart. Users now move through fragmented journeys that blend passive discovery (social, video, peer reviews) with AI-assisted evaluation.

  • Practical Step: Map how AI intermediaries are reshaping specific stages of your funnel. Identify which queries are being absorbed into AI summaries and where direct interaction with your brand is reduced.
  • Why It Matters: In some cases, entire query categories may be “lost” to AI intermediaries. Recognizing these blind spots early allows marketers to find alternative pathways such as social amplification, partnerships, or paid distribution.
  • Example: A B2B software vendor may find that “best CRM for mid-size companies” is increasingly answered by AI summaries citing analyst reports and third-party reviews. To remain visible, the vendor must prioritize those external references rather than relying solely on owned content.
  • Leadership Lens: CMOs must lead organizations to think less about protecting a single funnel and more about orchestrating presence across a patchwork of fragmented pathways.

5. Measure Indirect Value And Cross-Channel Lift

SEO has always influenced channels beyond the last click, but AI disruption makes quantifying that influence more important than ever.

  • Practical Step: Use GA4’s Explore feature to track first-touch organic sessions that later convert through direct, social, or paid channels. Create custom segments that isolate cross-channel lift.
  • Why It Matters: This evidence shows how SEO fuels the broader marketing mix, even if conversions are attributed elsewhere. It strengthens the business case for continued investment in visibility.
  • Example: A retailer may find that 40% of “direct” purchases were first initiated by an organic search session weeks earlier. Without quantifying this, the value of SEO would be understated.
  • Leadership Lens: Demonstrating indirect value reframes SEO from a cost center to a growth driver, positioning CMOs to argue for resources with greater authority.
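As a rough illustration of the first-touch analysis described above, here is a toy Python sketch over a hypothetical session log. GA4's Explore interface does this interactively; none of the data below is real.

```python
# Toy event log: per user, sessions in time order as (channel, converted).
# First touch = channel of the first session. We count conversions whose
# first touch was organic but whose converting session was another channel.
from collections import Counter

journeys = {
    "u1": [("organic", False), ("direct", True)],
    "u2": [("paid", False), ("paid", True)],
    "u3": [("organic", False), ("social", False), ("direct", True)],
}

lift = Counter()
for sessions in journeys.values():
    first_touch = sessions[0][0]
    for channel, converted in sessions:
        if converted and first_touch == "organic" and channel != "organic":
            lift[channel] += 1

print(dict(lift))  # → {'direct': 2}
```

In this toy log, two of the three conversions were attributed to "direct" even though organic search was the first touch, which is exactly the understatement the retailer example describes.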

Closing Note On Execution

These imperatives are not one-time actions. They are ongoing disciplines that must evolve alongside user behavior and technological change. Leaders who embed them into their operating rhythm will be better prepared to adapt strategies, justify investments, and maintain visibility in an AI-led digital economy.

The Leadership Agenda

Understand Your Risk Exposure

Your audience determines your level of risk. Organizations serving younger, consumer-facing segments are already seeing accelerated adoption of AI search tools. For B2B businesses with locked-down environments, the shift may be slower, but it is coming.

Scrutinize Vendor Claims

Acronyms proliferate in times of disruption. What matters is not whether a vendor calls their practice SEO, GEO, or another label, but whether they can demonstrate measurable strategies for sustaining visibility in AI-led ecosystems.

Be Ready To Be Agile

A 12-month static plan is no longer viable. AI search strategies must be adaptive, continuously informed by data, and responsive to new entrants and technologies.

Visibility Beyond Search Requires New Metrics

SEO is not dead. It is evolving into a broader discipline of experience visibility, where brand presence must extend across AI models, multimodal search tools, and fragmented user journeys.

For leaders, the challenge is not to hold onto old metrics or frameworks, but to recognize that the internet is reshaping itself, that we are treading new ground, and that new ground brings uncertainty and risk.

Those who measure differently, broaden their presence, and align with user-driven change will not only withstand the disruption but also secure competitive advantage in the AI-led future.

Featured Image: SvetaZi/Shutterstock

Yoast Announces New AI Visibility Tool via @sejournal, @martinibuster

Yoast announced the release of its Brand Insights tool, which helps track and monitor brand sentiment and visibility in AI platforms like ChatGPT. The tool, currently in beta, marks a new direction for Yoast because it’s not a plugin and doesn’t require CMS access. The complete product is called Yoast SEO AI+.

The tool offers sentiment-tracking analysis by keywords, competitor rank benchmarking, citation analysis, and the ability to monitor specific brand questions.

The citation analysis is interesting because it tracks brand mentions. The sentiment analysis is also useful because it shows a graph based on keywords broken down by positive and negative sentiment.

Niko Körner, Senior Director of Product at Yoast, explained:

“With Yoast AI Brand Insights, our customers can not only track their brand’s visibility, sentiment, and credibility in AI platforms like ChatGPT, but also see how they compare against the competition. As AI answers become a new starting point for customer journeys, this competitive perspective is crucial to staying ahead.

We worked hard to create a simplified KPI that truly reflects brand performance in the age of AI. Our AI Visibility Index combines sentiment, rank in LLM answers, brand mentions, and citations into one clear metric.

Soon, we will also be launching actionable recommendations to help businesses improve their AI visibility. This launch is only the beginning, and we are already working on improvements and expanding support for more large language models.”

The new Yoast tool is modestly priced, a sign that Yoast is focusing on providing SEO tools for SMBs that are interested in getting ahead in AI search.

Read more here:
Find out how your brand shows up in AI answers – Yoast SEO AI+

Featured Image by Shutterstock/Xharites

How People Really Use LLMs And What That Means For Publishers

OpenAI released the largest study to date on how people really use ChatGPT. I have painstakingly synthesized the insights you and I should pay heed to, so you don’t have to wade through the plethora of useful and pointless ones.

TL;DR

  1. LLMs are not replacing search. But they are shifting how people access and consume information.
  2. Asking (49%) and Doing (40%) queries dominate the market and are increasing in quality.
  3. The top three use cases – Practical Guidance, Seeking Information, and Writing – account for 80% of all conversations.
  4. Publishers need to build linkable assets that add value. It can’t just be about chasing traffic from articles anymore.
Image Credit: Harry Clarkson-Bennett

Chatbot 101

A chatbot is a statistical model trained to generate a text response given some text input. Monkey see, monkey do.

The more advanced chatbots have a training process with two or more stages. In stage one (less colloquially known as “pre-training”), LLMs are trained to predict the next word in a string.

Like the world’s best accountant, they are both predictable and boring. And that’s not necessarily a bad thing. I want my chefs fat, my pilots sober, and my money men so boring they’re next in line to lead the Green Party.

Stage two is where things get a little fancier. In the “post-training” phase, models are trained to generate “quality” responses to a prompt. They are fine-tuned on different strategies, like reinforcement learning, to help grade responses.

Over time, the LLMs, like Pavlov’s dog, are either rewarded or reprimanded based on the quality of their responses.

In phase one, the model “understands” (definitely in inverted commas) a latent representation of the world. In phase two, its knowledge is honed to generate the best quality response.

At a temperature of zero, an LLM will generate exactly the same response time after time for a given prompt and model.

Higher temperatures (closer to 1.0) increase randomness and creativity. Lower temperatures (closer to 0) make the model far more deterministic and precise.

So, your use case determines the appropriate temperature settings. Coding should be set closer to zero. Creative, more content-focused tasks should be closer to one.
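The mechanics behind temperature can be shown in a few lines of Python: logits are divided by the temperature before softmax sampling, so lower values concentrate probability on the top token, and a temperature of zero reduces to greedy decoding. This is a generic sketch of the technique, not any vendor's implementation.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Temperature-scaled softmax sampling over next-token logits."""
    rng = rng or random.Random(0)
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

print(sample_with_temperature([2.0, 1.0, 0.1], 0))  # → 0 (greedy)
```

At temperature 1.0 the same logits yield a spread of tokens across repeated samples; as the temperature rises further, the distribution flattens and lower-scoring tokens are chosen more often.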

I have already talked about this in my article on how to build a brand post-AI. But I highly recommend reading this very good guide on how temperature scales work with LLMs and how they impact the user base.

What Does The Data Tell Us?

That LLMs are not a direct replacement for search. Not even close, IMO. This Semrush study highlighted that LLM super users increased the number of traditional searches they were doing. The expansion theory seems to hold true.

But they have brought on a fundamental shift in how people access and interact with information. Conversational interfaces have incredible value. Particularly in a workplace format.

Who knew we were so lazy?

1. Guidance, Seeking Information, And Writing Dominate

These top three use cases account for 80% of all human-robot conversations. Practical guidance, seeking information, and please help me write something bland and lacking any kind of passion or insight, wondrous robot.

I will concede that the majority of Writing queries are for editing existing work. Still. If I read something written by AI, I will feel duped. And deception is not an attractive quality.

2. Non-Work-Related Usage Is Increasing

  • Non-work-related messages grew from 53% of all usage to more than 70% by July 2025.
  • LLMs have become habitual. Particularly when it comes to helping us make the right decisions. Both in and out of work.

3. Writing Is The Most Common Workplace Application

  • Writing is the most common work use case, accounting for 40% of work-related messages on average in June 2025.
  • About two-thirds of all Writing messages are requests to modify existing user text rather than create new text from scratch.

I know enough people who just use LLMs to help them write better emails. I almost feel sorry for the tech bros that the primary use cases for these tools are so lacking in creativity.

4. Less So Coding

  • Computer coding queries are a relatively small share, at only 4.2% of all messages.*
  • This feels very counterintuitive, but specialist bots like Claude or tools like Lovable are better alternatives.
  • This is a point of note. Specialist LLM usage will grow and will likely dominate specific industries because they will be able to develop better quality outputs. The specialized stage two style training makes for a far superior product.

*Compared to 33% of work-related Claude conversations.

It’s important to note that other studies have some very different takes on what people use LLMs for, so this isn’t as cut and dried as we think. I’m sure things will continue to change.

5. Men No Longer Dominate

  • Early adopters were disproportionately male (around 80% with typically masculine names).
  • That number declined to 48% by June 2025, with active users now slightly more likely to have typically feminine names.

Sure, us men have our flaws. Throughout history maybe we’ve been a tad quick to battle and a little dominating. But good to see parity.

6. Asking And Doing Queries Dominate

  • 89% of all queries are Asking and Doing related.
  • 49% Asking and 40% Doing, with just 11% for Expressing.
  • Asking messages have grown faster than Doing messages over the last year, and are rated higher quality.
A ChatGPT-built table with examples of each query type – Asking, Doing, and Expressing (Image Credit: Harry Clarkson-Bennett)

7. Relationships And Personal Reflection Are Not Prominent

  • There have been a number of studies that state that LLMs have become personal therapists for people (see above).
  • However, relationships and personal reflection only account for 1.9% of total messages according to OpenAI.

8. The Bloody Youth (*Shakes Fist*)

Takeaways

I don’t think LLMs are a disaster for publishers. Sure, they don’t send any referral traffic and have started to remove citations outside of paid users (classic). But none of these tech-heads are going to give us anything.

It’s a race to the moon, and we’re the dog they sent on the test flight.

But if you’re a publisher with an opinion, an audience, and – hopefully – some brand depth and assets to hand, you’ll be ok. Although their crawling behavior is getting out of hand.

Shit-quality traffic and not a lot of it (Image Credit: Harry Clarkson-Bennett)

One of the most practical outcomes we as publishers can take from this data is the apparent change in intents. For eons, we’ve been lumbered with navigational, informational, commercial, and transactional.

Now we have Doing. Or Generating. And it’s huge.

Even simple tools can still drive fantastic traffic and revenue (Image Credit: Harry Clarkson-Bennett)

SEO isn’t dead for publishers. But we do need to do more than just keep publishing content. There’s a lot to be said for espousing the values of AI, while keeping it at arm’s length.

Think BBC Verify. Content that can’t be synthesized by machines because it adds so much value. Tools and linkable assets. Real opinions from experts pushed to the fore.

But it’s hard to scale that quality. Programmatic SEO can drive amazing value. As can tools. Tools that answer users’ “Doing” queries time after time. We have to build things that add value outside of the existing corpus.

And if your audience is generally younger and more trusting, you’re going to have to lean into this more.


This post was originally published on Leadership in SEO.


Featured Image: Roman Samborskyi/Shutterstock

Google AI Overviews Overlaps Organic Search By 54% via @sejournal, @martinibuster

New research from BrightEdge offers insights into how Google’s AI Overviews ranks websites across different verticals, with implications for what SEOs and publishers should be focusing on.

AIO And Organic Search

The data shows that 54% of the AI Overviews citations matched the web pages ranked in the organic search results, which means that 46% of citations do not overlap with organic search results. Could this be an artifact of Google’s FastSearch algorithm?

Google’s FastSearch is based on ranking signals generated by the RankEmbed deep-learning model that is trained on search logs and third-party quality raters. The search logs consist of user behavior data, what Google terms “click and query data.” Click data teaches the RankEmbed model about what users mean when they search.

Click behavior is feedback about queries and relevant documents, similar to how the ratings submitted by the quality raters teach RankEmbed about quality. User clicks are a behavioral signal of which documents are relevant. So, as a hypothetical example, if people who search for “How to” tend to click on videos and tutorials, this teaches the model that videos and tutorials tend to satisfy those kinds of queries. RankEmbed “learns” that documents that are semantically similar to a tutorial are good matches for that kind of query. The models aren’t learning in a human sense; they are identifying patterns in the click data.
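A toy example helps illustrate semantic matching of this kind: queries and documents are embedded as vectors in the same space, and retrieval favors the nearest neighbors rather than literal keyword matches. The vectors below are invented, and this is emphatically not Google's RankEmbed model, just the general idea.

```python
import math

# Made-up 3-dimensional "embeddings": a tutorial-style document sits
# close to a how-to query, a history article does not.
embeddings = {
    "how to fix a bike chain": [0.9, 0.1, 0.2],
    "video tutorial: bike chain repair": [0.85, 0.15, 0.25],
    "history of the bicycle": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = embeddings["how to fix a bike chain"]
docs = [k for k in embeddings if k != "how to fix a bike chain"]
best = max(docs, key=lambda d: cosine(query, embeddings[d]))
print(best)  # → video tutorial: bike chain repair
```

In a real system, the embedding geometry is itself shaped by click and rating data, which is how a pattern like "how-to queries land near tutorials" gets baked in.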

This doesn’t mean that the 54% of AIO-ranked sites are there because of traditional ranking factors. It could be that the FastSearch algorithm retrieves results that are similar to the regular search results 54% of the time.

Insight About Ranking Factors

BrightEdge’s data could be reflecting the complexity of Google’s FastSearch algorithm, which prioritizes speed and semantic matching of queries to documents without the use of traditional ranking signals like links. This is something that SEOs and publishers should stop and consider because it highlights the importance of content and also the importance of matching the type of content that users prefer to see.

So, if they’re querying about a product, they don’t expect to see a page with an essay about the product; they expect to see a page with the product.

Organic And AIO Overlap Evolved Over Time

When AIO launched, there was only about a 32% overlap between AIO and the classic organic search results. BrightEdge’s data shows that the overlap has grown over the sixteen months between the debut of AI Overviews and today.

Organic And AIO Match Depends On The Vertical

The 54/46 percentage split isn’t across the board. The percentage of AIO-ranked sites that match the organic search results varies according to the vertical.

Your Money Or Your Life (YMYL) content showed a higher rate of overlap between organic and AIO.

BrightEdge’s data shows:

  • Healthcare has a strong overlap: 75.3% overlap (began at 63.3%).
  • Education overlap has increased significantly: 72.6% overlap between organic and AIO, showing +53.2 percentage points growth, from 19.4% to 72.6%.
  • Insurance also experienced increased overlap: 68.6%. That’s a +47.7 percentage points growth from the 20.9% overlap when AIO was first introduced.
  • E-commerce has very little overlap with the organic search results: 22.9% overlap (only +0.6 percentage points change).

I’m going to speculate here and say that Healthcare, Education, and Insurance search results may have a strong overlap because the pool of authoritative sites that users expect to see may be smaller. This may mean that websites in these verticals may have to work hard to be the kind of site that users expect to see. A broad and simplified explanation is that FastSearch does not use traditional organic search ranking factors. It’s ranking the kinds of web pages that match user expectations, meet certain quality standards, and are semantically relevant to the query.

What Is Going On With E-Commerce?

E-commerce is the one area where overlap between organic and AIO remained relatively steady with very little change. BrightEdge notes that AIO coverage actually decreased by 7.6%. AIO may be a good fit for research but is not a good format for users who are ready to make a purchase.

Final Takeaways

Although BrightEdge recommends focusing on traditional SEO for sites in verticals that have over 60% overlap with organic search, it’s a good idea for all sites, regardless of vertical, to focus on traditional SEO and on precision, to match user expectations for each query, and to pay attention to what users are saying so they can react swiftly to changing trends.

BrightEdge offers the following advice:

“Step 1: Identify Your Overlap Profile. Measure what percentage of your AI Overview citations also rank organically and benchmark against the 54% average to understand where you stand.

Step 2: Match Strategy To Intent. High overlap (>60%) means focus on SEO; low overlap (<30%) …

Step 3: Monitor The Convergence. Track your overlap percentage monthly as it has grown +22% industry-wide in 16 months, watching for shifts like September 2024’s +5.4% jump.”

Read BrightEdge’s report:

AI Overview Citations Now 54% from Organic Rankings

ChannelAdvisor Founder Launches GEO for Merchants

Scot Wingo observed consumer shopping journeys while running ChannelAdvisor, the marketplace management firm he started in 2001. He says consumers approach the process in three stages: researching the market, finding suitable products, and buying the right item.

The acronym — ReFiBuy — is the name of his latest company. It’s a generative engine optimization platform for retailers and brands.

By any measure, Scot is an ecommerce pioneer. We first interviewed him in 2006, when he introduced us to marketplace selling.

Last week, I asked him about ReFiBuy. The entire audio of our conversation is embedded below. The transcript is edited for length and clarity.

Kerry Murdock: Tell us about your ecommerce journey.

Scot Wingo: It began in 1999 when I launched Auction Rover, an auction search engine. We sold it to GoTo.com, which became Overture, the company that invented paid search. The auction search engine wasn’t great after the dot-com bubble burst. But we had built the selling tools, which became ChannelAdvisor, which I launched in 2001.

Murdock: What is ReFiBuy, your new venture?

Wingo: The idea started with my experience at ChannelAdvisor. The company went public in 2013, and I was still CEO and founder. By 2015, running a public company had become a drag.

I resigned from the CEO role but stayed on the board. ChannelAdvisor was ultimately taken private by a private equity firm, which merged it with Commerce Hub. It’s now called Rithum.

In 2015 I launched an on-demand car care company called Spiffy. Then, in August of 2024, I decided to start what is now ReFiBuy. I wanted to do something in the AI world. I have a technical degree, and as a technologist, I thought AI would create much disruption, which creates opportunity.

So I was poking around, learning more about it. And then, in December 2024, Anthropic, the makers of Claude, published a paper on “agentic” AI that can perform tasks. Prior to that, large language models were read-only. The agentic component meant they could do things.

And that reminded me of a problem we had at ChannelAdvisor. Our clients were retailers and brands with large product catalogs. The issue for us was the absence of an industry standard for electronically storing and sharing the product info, such as specs, colors, dimensions, and weight.

Clients would send us a file of their product catalog in a disorganized mess. Yet we had a hundred marketplaces that wanted to receive beautiful, clean catalog files. So our job became catalog cleaners, to convert clients’ inventory files into a format acceptable to those external channels. Again, there was no industry standard.

We came up with algorithms for cleaning the catalog that worked only half the time. The other half required humans. Eventually we had 300 people in Bulgaria working on it, serving our 3,000 customers and 15 billion annual transactions.

That memory was my light bulb moment for agentic AI. Could we solve the product catalog problem for LLMs? We started working on it late last year.

Simultaneously, Perplexity introduced what we now call agentic commerce, or agentic shopping, where you can not only research products but also buy them.

That’s the inspiration for our name. ReFiBuy is “research, find, buy.” It’s the shopper’s journey.

We launched our Commerce Intelligence Engine last week. It ensures that the LLMs — Perplexity, Claude, ChatGPT, and others — have accurate, current, and comprehensive product catalog data from our clients, which are retailers and brands.

Murdock: How do you do that — organize the data and then ensure the LLMs digest it?

Wingo: We start with the product catalog. We take a traditional Google Shopping feed or even data from a merchant’s ecommerce site. We analyze it through the lens of an LLM, which helps us identify missing or incorrect components. We then recommend changes, fixes, and additions. LLMs want every piece of content that ties products to the context of prompts. That includes Schema.org markup, Reddit discussions, prompt history — much more than product data alone.

That’s our evaluation phase. Then we help our clients whitelist the right bots to crawl their sites. Most retailers and brands block all bots except for Google. Certainly there are good reasons to do that, as many bots are malicious or from competitors.

So we help merchants know which LLM bots to allow.

Murdock: How do you know that an LLM receives and stores your optimized data?

Wingo: We monitor product cards, the visual representations by LLMs of recommended products. We run thousands of prompts daily across all the LLM engines to ensure our clients’ products appear in those cards and that the data is accurate.

Our AI agents evaluate the cards and classify them into buckets. If our client owns the product card, our job is done. We have achieved Nirvana for that SKU. If our client’s item appears in a card of another merchant, there are 20 to 30 things that have likely gone wrong. Our AI agents detect it. Sometimes it’s as simple as a missing slash or an extra space in the file.

The agent also detects missing SKUs — when our clients’ goods don’t appear in the cards at all. That’s usually caused by an infrastructure problem with the crawler, or something is broken on the merchant’s site.

We keep cranking the process until we’ve optimized our clients’ entire catalog.

Murdock: What is the cost of ReFiBuy?

Wingo: It depends on the number of SKUs. We start at roughly $2,000 per month — $20,000 to $25,000 per year.

Murdock: Where can merchants learn more?

Wingo: We’re at ReFiBuy.ai. My Substack newsletter is “Retailgentic.”

Google Explains Expired Domains And Ranking Issues via @sejournal, @martinibuster

Google’s John Mueller answered a question about an expired domain that was unable to rank for relevant search queries, including its own brand name. The answer sheds light on how expired domains are handled by Google after they are re-registered.

History Of Expired Domains And SEO

Buying expired domains for their link profiles was a quick way to rank a website about 25 years ago. In those days, it was possible to see the PageRank associated with a domain through Google’s browser toolbar. If the domain was penalized, the PageRank meter would show this with a completely zeroed-out PageRank value. Thus, an SEO could buy an expired domain, regardless of the topic associated with it, point it to their website, and experience a boost in PageRank and rankings.

The expired domain effect was not limited to actual expired domains. A little-known loophole was that links to non-existent domain names could also accumulate PageRank. For example, many SEO forums used to link to domains like example-domain.com during the course of their discussions. SEOs would purchase those domains and benefit from the PageRank of all the websites linking to them.

Another related tactic was to crawl .edu and .org websites to identify domain name misspellings in (broken) links to external websites, register those domains, and within hours a site would have inbound links from authoritative web pages.

The expired domain loophole came to an end in the early 2000s after Google introduced domain PageRank resets. Interestingly, the domain reset also affected domain misspellings that had never been registered. So even that secret loophole was closed.

Google’s John Mueller, in his answer, seemed to provide some information about how the domain name reset works. Mueller specifically referred to the state of being a parked domain and then having that status removed internally within Google.

Expired Domain Is Not Ranking

A person posted about their expired domain issue on the SEO subreddit (r/SEO). They explained that they had recently launched a new website on an expired domain, and it was having trouble ranking for keywords, including its own branded keywords.

They explained:

“I launched a brand-new website on a new domain, everything looks solid:

Indexed in Google (shows up with site:domain).

No errors in Search Console.

Sitemap and robots.txt are clean.

Here’s the strange part: the site refuses to appear in SERPs for even the most basic branded queries. Not ranking for generic terms is one thing, but not showing up at all for my own company name (let’s call it Octigen GmbH)? That feels really odd.

Now, here’s the twist: this domain used to belong to a completely different company (also called Octigen) that went bust years ago. Old links still exist in forums, ecommerce sites, etc. I’m wondering if the domain’s past life could be holding it back — like a reputation penalty or some kind of lingering Google baggage.”

The person then asked the following questions:

  • “Can an old domain history actively suppress visibility, even if it’s re-verified, re-indexed, and fully rebuilt?
  • Is there a way to “reset” a domain’s reputation, or am I better off cutting losses and starting fresh?”

It Takes Time To “Shake Off” Old State Of Domain

Mueller answers the question with a reference to shaking off the previous “state” of a domain, which he describes as being unregistered or parked. Those are two different states of a domain.

Unregistered means the domain is not currently registered by anyone; for practical purposes it doesn't exist, even if it was registered at some point in the past.

A parked domain means that the domain is registered and the DNS is pointing to a holding page, maybe even showing some advertising.

Mueller said it takes time for the state of that domain to change within Google:

“Sometimes it just takes a lot of time for the old state of a domain to be shaken off (sometimes that’s also the case when it was parked for a while), and the site to be treated like something new / independent.”

Expired Domain Name Reset

What Mueller is talking about sounds a lot like what we used to talk about over twenty years ago: an expired domain reset. The ways in which Google treats domains may have changed since then, so what Mueller is talking about could be related to a different process, like understanding where a site fits on the Internet.

Could this mean that a domain “state,” such as parked or expired, results in some kind of index notation at Google?

Mueller continued his answer by saying there’s nothing he can do to manually indicate the domain’s state has changed:

“There’s nothing manual that you can / need to do here.”

But he did recommend checking Search Console to make sure there are no penalties associated with the site:

“I would double-check in Search Console to make sure that there are no URL removal requests pending, and that there’s nothing in the manual actions section, but I’m guessing you already did that.”

What To Do If An Expired Domain Is Not Ranking?

At this point, most SEOs would not want to be told to sit tight and wait for Google to discover a new website. The natural inclination would be to build natural links to the site and pursue other promotional activities. Short of link building, that's essentially what Mueller advised.

He wrote:

“My suggestion for you specifically would be to keep using it, and to try to grow your visibility on other channels in the meantime. For example, it looks like you’re findable via your Linkedin page, which links to your domain name. If you’re active on Linkedin, and using that wisely to reference your domain, users can find it that way.

Similarly, you could be active in other places, such as YouTube or other social media sites (The YT video for your company name is currently on a private profile, which can be ok, but which you could also do on a company-branded profile. Or, of course, a Reddit profile)

In short, make it easy for people to find your content regardless of location when they search for it, especially for your company name. From there, expanding to the kinds of searches that could lead users who don’t yet know your company to your content, would be the next step — and even there it’s useful to be active on various platforms.”

Expired Domains Can Be Tricky

It’s clear that expired domains have, in the past, gone through a reset process where the link equity of a domain drops off and the domain essentially starts at position zero.

Google’s ranking algorithms can give a new site a temporary ranking boost. That makes it difficult to say with certainty whether a website with an expired domain is ranking because of the residual effects from the domain or because of Google’s new site ranking boost.

What’s important to keep in mind is that promoting a new website is essential, regardless of whether it’s built on an expired domain or one that’s never been registered.

Featured Image by Shutterstock/Andrii Iemelianenko

The Impact Of AI Overviews & How Publishers Need To Adapt via @sejournal, @MattGSouthern

Google rolled out AI Overviews to all U.S. users in May 2024. Since then, publishers have reported significant traffic losses, with some seeing click-through rates drop by as much as 89%. The question isn’t whether AI Overviews impact traffic, but how much damage they’re doing to specific content types.

Search (including Google Discover and traditional Google Search) consistently accounts for between 20% and 40% of referral traffic to most major publishers, making it their largest external traffic source. When DMG Media, which owns MailOnline and Metro, reports nearly 90% declines for certain searches, it’s a stark warning for traditional publishing.

After more than a year of AI Overviews (and Search Generative Experience), we have extensive data from publishers, researchers, and industry analysts. This article pulls together findings from multiple studies covering hundreds of thousands of keywords, tens of thousands of user searches, and real-world publisher experiences.

The evidence spans from Pew Research’s 46% average decline to DMG Media’s 89% worst-case scenarios. Educational platforms like Chegg report a 49% decline. But branded searches are actually increasing for some, suggesting there are survival strategies for those who adapt.

This article explains what’s really happening and why, including the types of content that face the biggest changes and which are staying relatively stable. You’ll understand why Google says clicks are “higher quality” even as publishers see traffic declines, and you’ll see what changes might make sense based on real data rather than guesses.

AI Overviews are the biggest change to search since featured snippets were introduced in 2014. They’re affecting the kinds of content publishers produce, and they’re increasing zero-click searches, which now make up 69% of all queries, according to Similarweb.

Whether your business relies on search traffic or you’re just watching industry trends, these patterns are significantly impacting digital marketing.

What we’re seeing is a new era in search and a change that is reshaping how online information is shared and how users interact with it.

AI Overview Studies: The Overwhelming Evidence

Google’s AI Overviews (AIO) have impacted traffic across most verticals and altered search behavior.

The feature, first introduced as Search Generative Experience (SGE) at Google I/O in May 2023, now appears in over 200 countries and 40 languages following a May 2025 expansion.

Independent research conducted throughout 2024 and 2025 shows click-through rate reductions ranging from 34% to 46% when AI summaries appear on search results pages.

Evidence from a variety of independent studies outlines the impact of AIO and shows a range of effects depending on the type of content and how it’s measured:

Reduced Click Through Rates – Pew Research Center

A study by Pew Research Center provides a rigorous analysis. By tracking 68,000 real search queries, researchers found that users clicked on results 8% of the time when AI summaries appeared, compared to 15% without them. That’s a 46.7% relative reduction.
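The relative reduction Pew reports follows directly from the two click rates:

```python
# Pew's figures: 8% CTR when an AI summary appeared vs. 15% without.
with_aio, without_aio = 0.08, 0.15

relative_reduction = (without_aio - with_aio) / without_aio
print(f"{relative_reduction:.1%}")  # 46.7%
```

Note the distinction: the absolute drop is 7 percentage points, while the relative reduction (7/15) is the 46.7% figure cited.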

Pew’s study tracked actual user behavior, rather than relying on estimates or keyword tools, validating publisher concerns.

Google questioned Pew’s methodology, claiming that the analysis period overlapped with algorithm testing unrelated to AI Overviews. However, the decline and its connection to AI Overview presence suggest a notable relationship, even if other factors played a role.

Position One Eroded – Ahrefs

Ahrefs’ analysis found that position one click-through rates dropped for informational keywords triggering AI Overviews.

Ryan Law, Director of Content Marketing at Ahrefs, stated on LinkedIn:

“AI Overviews reduce clicks by 34.5%. Google says being featured in an AI Overview leads to higher click-through rates… Logic disagrees, and now, so does our data.”

Law’s observation gets to the heart of a major contradiction: Google says appearing in AI Overviews helps publishers, but the math of fewer clicks suggests this is just corporate doublespeak to appease content creators.

His post garnered over 8,200 reactions, indicating widespread industry agreement with these findings.

More Zero-Click Searches – Similarweb

According to Similarweb data, zero-click searches increased from 56% to 69% between May 2024 and May 2025. While this captures trends beyond AI Overviews, the timing aligns with the rollout.

Zero-click searches work because they meet user needs. For example, when someone searches for “weather today” or a stock price, getting an instant answer without clicking is helpful. The issue comes when zero-click searches creep into areas where publishers used to offer in-depth content.

Stuart Forrest, global director of SEO digital publishing at Bauer Media, confirms the trend, telling the BBC:

“We’re definitely moving into the era of lower clicks and lower referral traffic for publishers.”

Forrest’s acknowledgment of this new reality shows that the industry as a whole is coming to terms with the end of the golden age of search traffic. Not with a dramatic collapse, but with a steady decline in clicks as AI meets users’ needs before they ever leave Google’s ecosystem.

Search Traffic Decline – Digital Content Next

An analysis by Digital Content Next found a 10% overall search traffic decline among member publishers between May and June.

Although modest compared to DMG’s worst-case scenarios, this represents millions of lost visits across major publishers.

AIO Placement Volatility – Authoritas

An Authoritas report finds that AI Overview placements are more volatile than organic ones. Over a two- to three-month period, about 70% of the pages cited in AI Overviews changed, and these changes weren’t linked to traditional organic rankings.

This volatility is why some sites experience sudden traffic drops even when their blue-link rankings seem stable.

Click-Based Economy Collapse For News Publishers – DMG Media

A statement from DMG Media to the UK’s Competition and Markets Authority reveals click-through rates dropped by as much as 89% when AI Overviews appeared for their content.

Although this figure represents a worst-case scenario rather than an average, it highlights the potential for traffic losses for certain search types.

Additionally, there are differences in how AI Overviews affect click-through rates depending on the device type.

The Daily Mail’s desktop CTR dropped from 25.23% to 2.79% when an AI Overview surfaced above a visible link (-89%), with mobile traffic declining by 87%; U.S. figures were similar.

These numbers indicate we’re facing more than just a temporary adjustment period. We’re witnessing a structural collapse of the click-based economy that has supported digital publishing since the early 2000s. With traffic declines approaching 90%, we’ve gone beyond optimization tactics and into existential crisis territory.

The submission to regulatory authorities suggests they’re confident in these numbers, despite their magnitude.

Educational Site Disruption – Chegg

Educational platforms are experiencing disruption from AI Overviews.

Learning platform Chegg reported a 49% decline in non-subscriber traffic between January 2024 and January 2025 in company statements accompanying their February antitrust lawsuit.

The decline coincided with AI Overviews answering homework and study questions that previously drove traffic to educational sites. Chegg’s lawsuit alleges that Google used content from educational publishers to train AI systems that now compete directly with those publishers.

Chegg’s case is a warning sign for educational content creators: If AI systems can successfully replace structured learning platforms, what’s the future for smaller publishers?

Reduced Visibility For Top Ranking Sites – Advanced Web Ranking

AI Overviews are dense and tall, impacting the visibility of organic results.

Advanced Web Ranking found that across 8,000 keywords, AI Overviews average around 169 words and include about seven links when expanded.

Once expanded, the first organic result often appears about 1,674px down the page. That’s well below the fold on most screens, reducing visibility for even top-ranked pages.

Branded Searches: The Surprising Exception

While most query types are seeing traffic declines, branded searches show the opposite trend. According to Amsive’s research, branded queries with AI Overviews see an 18% increase in click-through rate.

Several related factors likely contribute to this brand advantage. When AI Overviews mention specific brands, it conveys authority and credibility in ways that generic content can’t replicate.

People seeing their preferred brand in an AI Overview may be more likely to click through to the official site. Additionally, AI Overviews for branded searches often include rich information like store hours, contact details, and direct links, making it easier for users to find what they need.

This pattern has strategic implications as companies that have invested in brand building have a strong defense against AI disruption. The 18% increase in branded terms versus a 34-46% decrease in generic terms (as shown above) creates a performance gap that will likely impact marketing budgets.

The brand advantage extends beyond direct brand searches. Queries combining brand names with product categories show smaller traffic declines than purely generic searches. This suggests that even partial brand recognition provides some protection against AI Overview disruption. Companies with strong brands can leverage this by ensuring their brand appears naturally in relevant conversations and content.

This brand premium creates a two-tier internet, where established brands flourish while smaller content creators struggle financially. The impact on information diversity and market competition is troubling.

Google’s Defense: Stable Traffic, Better Quality

Google maintains a consistent three-part defense of AI Overviews:

  • Increased search usage.
  • Improved click quality.
  • Stable overall traffic.

The company frames AI Overviews as enhancing rather than replacing traditional search, though this narrative faces increasing skepticism from publishers experiencing traffic declines.

The company’s blog post from May, introducing the global expansion, stated:

“AI Overviews is driving over 10% increase in usage of Google for the types of queries that show AI Overviews. This means that once people use AI Overviews, they are coming to do more of these types of queries.”

Although this statistic shows a rise in Google Search engagement, it’s sparked intense debate and skepticism in the search and publishing worlds. Many experts agree that a 10% boost in AI Overview-driven searches could be due to changes in user behavior, but also warn that higher search volumes don’t automatically mean more traffic for content publishers.

A number of LinkedIn industry voices have publicly pushed back on Google’s 10% usage increase narrative. For example, Devansh Parashar writes:

“Google’s claim that AI Overviews have driven 10% more searches masks a troubling trend. Data from independent research firms, such as Pew, show that a majority of users do not click beyond the AI Overview — a figure that suggests Google’s LLM layer is quietly eating the web’s traffic pie.”

Similarly, Trevin Shirey points out concerns about the gap between increased engagement with search queries and the actual traffic publishers see:

“Although Google reports a surge in usage, many publishers are experiencing declines in organic click-through rates. This signals a silent crisis where users get quick answers from AI, but publishers are left behind.”

Google’s claim about increased usage needs to be read carefully. The increase applies only to the types of queries that show AI Overviews, not to overall search volume.

If users have to make multiple searches to find information they could have gotten in one click, their overall usage might go up, but their satisfaction could actually decrease.

In an August blog post, Google’s head of search, Liz Reid, claimed the volume of clicks from Google search to websites had been “relatively stable” year-over-year.

Reid also asserted that click quality had improved:

“With AI Overviews, people are searching more and asking new questions that are often longer and more complex. In addition, with AI Overviews people are seeing more links on the page than before. More queries and more links mean more opportunities for websites to surface and get clicked.”

A Google spokesperson told the BBC:

“More than any other company, Google prioritises sending traffic to the web, and we continue to send billions of clicks to websites every day.”

Google’s developer documentation states:

“We’ve seen that when people click from search results pages with AI Overviews, these clicks are higher quality (meaning, users are more likely to spend more time on the site).”

Publishers are understandably concerned and question the differences between Google’s description of stability and the actual data showing otherwise.

Jason Kint, CEO of Digital Content Next, notes:

“Since Google rolled out AI Overviews in your search results, median year-over-year referral traffic from Google Search to premium publishers down 10%.”

Kint’s data shatters Google’s carefully crafted image of stability, exposing what many publishers already suspect: The search giant’s promises are increasingly at odds with the realities reflected in their analytics dashboards and revenue reports.

The argument that higher-quality clicks are more valuable doesn’t provide much comfort when revenue is falling short. Even if engagement increases, losing such a large portion of clicks is a serious challenge for many ad-supported businesses.

Echoing these concerns, SEO Lead Jeff Domansky states:

“For publishers, AI Overviews are a direct hit to traffic and revenue models built around clicks and pageviews.”

Although Google claims that AI Overview clicks are of higher quality, many industry experts are skeptical.

Lily Ray, Vice President, SEO Strategy & Research at Amsive, highlights the lack of quality control on Google’s end:

“Since Google’s AI Overviews were launched, I (and many others) have shared dozens of examples of spam, misinformation, and inaccurate, biased, or incomplete results appearing in live AI Overview responses.”

And SEO specialist Barry Adams raises concerns about the quality and sustainability:

“Google’s AI Overviews are terrible at quoting the right sources… There is nothing intelligent about LLMs. They’re advanced word predictors, and using them for any purpose that requires a basis in verifiable facts – like search queries – is fundamentally wrong.”

Adams highlights a philosophical contradiction in AI Overviews: By relying on probabilistic language models to answer factual questions, Google may be misaligning technology with user needs.

This range of voices highlights a growing disconnect between Google’s hopeful engagement claims and the tough realities many publishers are facing as their referral traffic and revenue decrease.

Google hasn’t provided specific metrics defining “higher quality.” Publishers can’t verify these claims without access to comparative engagement data from AI Overview versus traditional search traffic.

Legal Challenges Mount

Publishers are seeking relief through regulatory and legal channels. In July, the Independent Publishers Alliance, tech justice nonprofit Foxglove, and the campaign group Movement for an Open Web filed a complaint with the UK’s Competition and Markets Authority. They claim that Google AI Overviews misuse publisher content, causing harm to newspapers.

The complaint urges the CMA to impose temporary measures that prevent Google from using publisher content in AI-generated responses without compensation.

It’s still unclear whether courts and regulators, which often move slowly, can act quickly enough to help publishers before market forces make any potential remedies irrelevant. It’s a classic example of regulation struggling to keep pace with technological change.

The rapid growth of AI Overviews suggests that market realities may outstrip legal solutions.

Publisher Adaptations: Beyond Google Dependence

With threats looming, publishers are rushing to cut their reliance on Google. David Higgerson shares Reach’s approach in a statement to the BBC:

“We need to go and find where audiences are elsewhere and build relationships with them there. We’ve got millions of people who receive our alerts on WhatsApp. We’ve built newsletters.”

Instead of creating content for Google discovery, publishers need to develop direct relationships. Email newsletters, mobile apps, and podcast subscriptions provide traffic sources that aren’t affected by AI Overview disruptions.

Stuart Forrest stresses the importance of quality as a key differentiator:

“We need to make sure that it’s us being cited and not our rivals. Things like writing good quality content… it’s amazing the number of publishers that just give up on that.”

However, quality alone may not be enough if users never leave Google’s search results page. Publishers also need to master AI Overview optimization and understand how to make the most of remaining click opportunities.

Higgerson notes:

“Google doesn’t give us a manual on how to do it. We have to run tests and optimise copy in a way that doesn’t damage the primary purpose of the content.”

Another path that’s emerging is content licensing. Following News Corp and The Atlantic partnering with OpenAI, more publishers are exploring direct licensing relationships. These deals typically provide upfront payments and ongoing royalties for content usage in AI training, though terms remain confidential.

What We Don’t Know

There are still many uncertainties. The long-term trajectory of AI Mode, for example, could alter current patterns.

AI Mode

Google’s AI Mode may pose an even bigger threat than AI Overviews. This new interface displays search results in a conversational format instead of 10 blue links. Searchers have a back-and-forth with AI, with occasional reference links thrown in.

For publishers already struggling with AI-powered overviews, AI Mode could wipe out the rest of their traffic.

International Impact

The international effects outside English-language markets remain unmeasured. Since AI Overviews are available in over 200 countries and 40 languages, the impact likely varies by market. Factors like cultural differences in search behavior, language complexity, local competition dynamics, and varying digital literacy levels could lead to vastly different outcomes.

Most current research focuses on English-language markets in developed economies.

Content Creation

The feedback loop between AI Overviews and content creation could reshape what content gets produced and how information flows online.

If publishers stop creating certain types of content due to traffic losses, will AI Overview quality suffer as training data becomes stale?

Looking Ahead: Expanded AI Features

Google intends to continue expanding AI features despite mounting publisher concerns and legal challenges.

The company’s roadmap includes AI Mode international expansion and enhanced interactive features, including voice-activated AI conversations and multi-turn query refinement. Publishers should prepare for continued evolution rather than expecting stability in search traffic patterns.

Regulatory intervention may force greater transparency in the coming months. The Independent Publishers Alliance’s EU complaint requests detailed impact assessments and content usage documentation.

These proceedings could establish precedents affecting how AI systems can use publisher content.

Final Thoughts

The question isn’t whether AI Overviews affect traffic. Evidence overwhelmingly confirms they do. The question is how publishers adapt business models while maintaining sustainable operations.

The web is at a turning point, where the core agreement is being rewritten by the platforms that once promoted the open internet. Publishers who don’t acknowledge this change are jeopardizing their relevance in an AI-driven future.

Those who understand the impact, invest in brand building, and diversify traffic sources will be best positioned for success.

Featured Image: Roman Samborskyi/Shutterstock

Why Reddit Is Driving The Conversation In AI Search – User Journey Over Short Tail via @sejournal, @brentcsutoras

The How AI Search Can Drive Sales & Boost Conversions webinar, presented recently by Bartosz Góralewicz, touched on something that I think every marketer needs to understand about how people actually make decisions today.

This isn’t just about Reddit anymore; we’re talking about the future of how brands actually connect with customers when they’re making real decisions.

Image from author, September 2025

Bartosz shared some data from Cloudflare that’s wild: 10 years ago, Google crawled two pages for every one click. Six months ago? Six pages per click. Today, it’s 18 pages for every single click! OpenAI is crawling 1,500 pages for each click they send. And get this, in 2024, 60% of Google searches ended in zero clicks, as LLMs increasingly serve answers directly on the page, according to Justin Turner, Head of Thought Leadership at Reddit.

As Bartosz put it, quoting Cloudflare’s CEO: “People trust AI more and they’re just not following the footnotes anymore.”

But here’s what everyone’s missing: Reddit is just the messenger.

What Reddit Really Shows Us

Reddit appears in nearly 98% of product review searches because it’s solving a problem that traditional marketing content can’t touch. When someone searches “iPhone 16 vs Samsung S25,” they’ll find millions of YouTube views but almost no traditional search volume data.

The conversation is happening, just not where we’ve been looking. Turner’s research shows Reddit is the No. 1 most cited domain across all major AI platforms, accounting for 3.5% of all citations across AI models, nearly three times more than Wikipedia.

What Reddit provides, and what Google and OpenAI are paying for, is authentic peer advice instead of corporate marketing messages. Users want to feel understood, not sold to. They want contextual advice that feels like someone actually gets their specific problem.

As Bartosz explained it, when someone is researching a car, they don’t want to hear from paid bloggers. They want to talk to someone who actually drives the thing every day and can tell them the radio breaks 11 times in the first year. That’s the stuff you won’t find on the company website.

The Real Journey People Take

During our webinar, Bartosz walked through this perfect example from his own experience. He bought a wool carpet, discovered he couldn’t use his Dyson on it (voids the warranty), and now needed a suction-only vacuum.

Image from author, September 2025

Bartosz showed how this creates a progression that most marketers never see:

  • Stage 1: “Why can’t I use Dyson on wool carpet?”
  • Stage 2: “Suction only vacuums for wool carpets”
  • Stage 3: “Miele C1 suction only vacuum safe”

Each answer informs the next question. As Bartosz explained, understanding this progression isn’t just about Reddit; it’s about understanding how people actually think and research!

The thing is, sometimes, this entire customer journey condenses into one perfect answer. Bartosz showed us how, when someone asked, “Why is it bad to use Dyson on wool carpet?” Perplexity immediately recommended Miele as the solution. One conversation, massive conversion potential.

But as Bartosz emphasized, you can’t manufacture this by guessing. You have to listen to actual conversations and understand the real problems people are trying to solve. This is exactly why he created ZipTie.ai, to help brands identify those critical moments in customer conversations where they can genuinely solve problems rather than just promote products.

And here’s proof that this approach actually works: Turner’s data shows users referred from ChatGPT view 42% more pages per session than those referred from Google, showing more intent, deeper curiosity, and stronger engagement.

Why This Changes Everything

I’ve been waiting for this shift in marketing for years, waiting for it to come back to the actual science behind why people make decisions. The funnel is longer now, and people touch more places along the way. When you can find what people really need, content really is king again. But not content for content’s sake: problem-solving content is what people actually need.

Bartosz’s Miele example shows something that’s often overlooked. You wouldn’t see this journey in your regular website data or in traditional Google reporting. It’s not visible to most brands because we’re so conditioned to follow the logical marketing path that we miss the conversations happening right in front of us.

We started seeing it more clearly when people began giving us signals by writing on Reddit. Why are they doing that? Because they want validation. When you give them that validation through genuine problem-solving, it works!

The New Success Metrics

Bartosz talked about how we need to stop chasing old metrics. Rankings, clicks, and keywords still matter, but they’re not the whole story anymore.

Image from author, September 2025

As he put it, here’s what actually matters now:

  • Are you the recommended solution throughout the customer journey?
  • Do you show contextual relevance that makes users feel understood?
  • Can you track your influence through actual conversion paths?

As Bartosz said, “The teams that are going to win nowadays are going to be the teams that are going to solve the most amount, the biggest amount of problems that users have.”

The Authenticity Problem

To be authentic, you have to talk about positives and negatives. The biggest challenge I have in discovery calls with huge brands is that they tell me, “We cannot say we don’t do this or we don’t do that.”

But that’s exactly what you need to do!

I always tell people Reddit success comes down to three overlapping areas: what Redditors expect from you, what you honestly have to give, and where your business goals align. That overlap is your area of influence.

A TikTok campaign I did years ago started with 300 messages telling me to basically get lost (the actual wording was less polite). But once people realized we were real humans having real conversations, everything changed. People started editing their posts, sending improvement ideas, and giving us awards.

That’s the power of authentic engagement.

The Psychology Behind It All

People want to share every decision they make with somebody because it’s our nature to want to share responsibility. It’s a way of validating that we’re not total idiots; we at least explored the conversation. “I talked to my friend John and he said it was a good phone.”

But there’s more to it than just sharing responsibility. We’re also looking for validation that someone has actually experienced the issue, product, or service we’re researching and has real information to share about it.1 We want to hear from people who’ve been there, not from someone reading a spec sheet or writing content that’s been paid for, influenced, or even completely faked. There’s so little trust in traditional search results anymore because we know so much of what we find is compromised.

Also, we rarely have the right problem when we start searching. We think we need “the best vacuum” when what we really need is “a vacuum that won’t destroy my wool carpet.” It takes conversation and depth to uncover what the real problem actually is. That’s why those Reddit threads go so deep: People are working through layers of issues together.

Most importantly, we want to feel like we learned enough to come to our own decision. We don’t want someone to tell us what to buy; we want to feel smart about figuring it out ourselves with good information from people we trust.2

I’ve been talking about these concepts a lot lately, but this isn’t just my personal theory. This behavior is extensively researched across psychology, behavioral economics, and decision science. Studies consistently show that people actively seek to share decision responsibility to reduce regret and minimize the psychological burden of negative outcomes. Research demonstrates that individuals are more likely to join groups or seek validation after experiencing negative results, and that sharing responsibility helps shield people from the emotional consequences of bad decisions.

What This Means Going Forward

This approach works because it aligns with human psychology. When you understand the core element, solving users’ real problems, everything improves: your commercials, website copy, social media ads, and customer service all get better when you know what people actually need to feel comfortable making a decision.

Reddit just happens to be where these conversations are most visible right now. But the principles apply everywhere: Understand the real problems, join authentic conversations, and focus on solving issues rather than promoting solutions.

The brands that figure this out first will own the next phase of digital marketing. The ones that keep chasing traditional metrics will keep wondering why their traffic is declining while their competitors seem to effortlessly show up everywhere that matters.

Definitely take the time to understand your users’ journey. Don’t be lazy about it. Really understand what people need at each stage, what problems they’re actually trying to solve, and where they go to get the validation they need to make decisions.

It’s not complicated, but it requires you to slow down and actually listen to your customers instead of talking at them.

Sources:

  1. https://academic.oup.com/jcr/article-abstract/51/1/7/7672991?login=false
  2. https://acr-journal.com/article/consumer-trust-in-digital-brands-the-role-of-transparency-and-ethical-marketing-882/
  3. https://www.linkedin.com/pulse/convergence-product-marketing-seo-ai-search-era-ziptieai-aotnc/

Featured Image: Accogliente Design/Shutterstock

When Agents Replace Websites via @sejournal, @DuaneForrester

Let’s talk about an agentic future. As task-completing agents move from concept to adoption, their impact on how we discover and transact online will be significant. Websites won’t vanish, but in many cases, their utility will shrink as agents become the new intermediary layer between people and answers. Domains will still exist, but their value as discovery assets is likely to erode. Building and maintaining a site will increasingly mean structuring it for agents to retrieve from, not just for people to browse, and the idea of domains appreciating as scarce assets will feel less connected to how discovery actually happens.

The growth trajectory for AI agents is already clear in the data. Grand View Research valued the global AI agents market at USD 5.40 billion in 2024, with forecasts reaching USD 50.31 billion by 2030 at an annual growth rate of about 45.8%. Regionally, the Asia-Pacific market was USD 1.30 billion in 2024 and is projected to expand to USD 14.15 billion by 2030, with China alone expected to grow from USD 402.6 million to USD 3.98 billion over the same period. Europe is following a similar path, climbing from USD 1.32 billion in 2024 to USD 11.49 billion by 2030. Longer-term, Precedence Research projects the global agentic AI market will rise from USD 7.55 billion in 2025 to nearly USD 199.05 billion by 2034, a compound growth rate of 43.84%. These forecasts from multiple regions show a consistent global pattern: adoption is accelerating everywhere, and the shift toward agentic systems is not theoretical; it is underway. These figures are about task-completing agents, not casual chat use.
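As a sanity check, the compound annual growth rate implied by those forecasts can be recomputed from the endpoint figures alone. A minimal Python sketch, using only the dollar figures quoted above:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# Precedence Research: USD 7.55B (2025) -> USD 199.05B (2034)
print(f"{cagr(7.55, 199.05, 2034 - 2025):.2%}")  # -> 43.84%, matching the quoted rate

# Grand View Research: USD 5.40B (2024) -> USD 50.31B (2030)
# A 2024-base calculation gives roughly 45.1%; the quoted 45.8% likely
# uses a different base year or interim estimates, as is common in
# market-research reports.
print(f"{cagr(5.40, 50.31, 2030 - 2024):.2%}")
```

The Precedence figure reproduces exactly from its endpoints, which suggests the forecasts are internally consistent even if the vendors disagree on absolute market size.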

Image Credit: Duane Forrester

Do We Still Need Websites In An Agentic World?

It’s easy to forget how limited the internet felt in the 1990s. On AOL, you didn’t browse the web the way we think of it today. You navigated keywords. One word dropped you into chat rooms, news channels, or branded content. The open web was technically out there, but for most people, America Online WAS the internet.

That closed-garden model eventually gave way to the open web. Domains became navigation anchors. Owning a clean .com or a trusted extension like .org or .gov signaled legitimacy. Websites evolved into the front doors of digital identity, where brand credibility and consumer trust were built. Search rankings reinforced this. An exact-match domain once boosted visibility, and later the concept of “domain authority” helped indicate who showed up at the top of search results. For nearly three decades, websites have been the central hub of digital discovery and transactions.

But we may be circling back. Only this time, the keyword is no longer “AOL Keyword: Pizza Hut.” It’s your natural-language intent: “Book me a flight,” “Order flowers,” “Find me a dentist nearby.” And instead of AOL, the gatekeepers are LLMs and agentic systems.

From Navigation To Answers

The rise of agentic systems collapses the journey we’ve been used to. Where discovery once meant search, scanning results, clicking a domain, and navigating a site, it now means describing your intent and letting the system do the rest. You don’t need Expedia or United.com if your agent confirms your flight. You don’t need to touch OpenTable’s site if a reservation is placed automatically for tomorrow night. You don’t need to sift through Nike’s catalog if new running shoes just arrive at your door.

In this flow, the answer layer replaces the click, the task layer replaces the browsing session, and the source itself becomes invisible. The consumer no longer cares which site delivered the data or handled the transaction, as long as the result is correct.

Proof In Practice: WeChat

This shift isn’t hypothetical. In China, it’s already happening at scale. WeChat introduced Mini-Programs in 2017 as “apps within an app,” designed so users never need to leave the WeChat environment. By 2024, they had become mainstream: Recent reports suggest there are between 3.9 and 4.3 million WeChat Mini-Programs in the ecosystem today, with over 900 million monthly active users. And while Mini-Programs are closer to apps than to AI agents, the underlying pattern is the same: consumers adopting layers of task completion.

In food and beverage and hospitality, over 80% of top chain restaurants now run ordering or take-out flows directly through Mini-Programs, meaning customers never touch a separate website. International brands often prioritize Mini-Programs as their Chinese storefronts instead of building localized websites, since WeChat already handles discovery, product listings, payments, and customer service. Luxury brand LOEWE, for example, launched its 2024 “Crafted World” exhibition in Shanghai entirely via a WeChat Mini-Program, offering ticketing and interactive digital content without requiring users to leave the app.

For many domestic Chinese businesses, this has become the default strategy: their websites exist, if at all, as minimal shells, while the real customer experience lives entirely inside WeChat. And the scale is worth keeping in mind: WeChat serves over 1 billion monthly active users, while ChatGPT currently sees over 800 million users a week, which counted week by week adds up to roughly three times WeChat’s monthly volume. An agentic era of direct-to-consumer interaction facilitated by platforms like ChatGPT, WeChat, Claude, Gemini, and Copilot could bring a massive shift in consumer behavior.

Western Parallels

Western platforms are already moving in this direction. Instagram Checkout allows users to buy products directly inside Instagram, without ever visiting a retailer’s website. Shopify details this integration here. TikTok offers similar flows. Its partnership with Shopify enables in-app checkout so the consumer never leaves TikTok. Even services like Uber now function as APIs inside larger ecosystems. You can book a ride from within another app and never open Uber directly.

In each case, the website still exists, but the consumer may never see it. Discovery, consideration, and conversion all happen inside the closed flow.

The AOL Parallel

The resemblance to the mid-1990s is striking. AOL’s big push came in that period, when its “Keyword” model positioned the service as the internet itself. Instead of typing URLs, people entered AOL Keywords and stayed inside AOL’s curated walls. By mid-1996, AOL had roughly 6 million U.S. subscribers doing this, representing about 13% of the nation’s estimated 44 million internet users at the time.

Today, the “keyword” has become your intent. The agent interprets it, makes the decision, and fulfills the request. The outcome is the same: a closed environment where the gateway controls visibility and access. Only this time, it’s powered by LLMs and APIs instead of dial-up modems.

This is not an isolated evolution. There’s mounting evidence that the open web itself is weakening. Google recently stated in a legal filing that “the open web is already in rapid decline … harming publishers who rely on open-web display advertising revenue.” That report was covered by Search Engine Roundtable.

Pew Research found that when Google displays AI-generated summaries in search results, users click links only 8% of the time, compared to 15% when no summary is present. That’s nearly a 50% decline in link clicks. Digital Content Next reported that premium publishers saw a 10% year-over-year drop in referral traffic from Google during a recent eight-week span.

The Guardian covered MailOnline’s specific case, where desktop click-through dropped 56% when AI summaries appeared, and mobile click-through fell 48%. Advertising spend tells a similar story. MarketingProfs reports that professionally produced news content is projected to receive just 51% of global content ad spend in 2025, down from 72% in 2019. Search Engine Land shows that open-web display ads have fallen from about 40% of Google AdWords impressions in 2019 to only 11% by early 2025.

The story is consistent. Consumers click less, publishers earn less, and advertisers move their budgets elsewhere. The open web is unlikely to remain the center of gravity.

If websites lose their central role, what takes their place? Businesses will still need technical infrastructure, but the front door will change. Instead of polished homepages, structured data and APIs will feed agents directly. Verification layers like schema, certifications, and machine-readable credentials will carry more weight than design. Machine-validated authority (how often your brand is retrieved or cited by LLMs) will become a core measure of trust. And partnerships or API integrations will replace traditional SEO in ensuring visibility.
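To make “structured data and APIs will feed agents directly” concrete, here is a minimal sketch of machine-readable product data emitted as schema.org Product JSON-LD from Python. The product name, SKU, price, and rating are hypothetical placeholders, not a format mandated by any particular agent platform:

```python
import json

# Hypothetical product record; field names follow schema.org's Product
# and Offer types, which crawlers and agents can parse without rendering
# the page.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example suction-only vacuum",  # hypothetical product
    "sku": "EX-1234",                       # hypothetical SKU
    "description": "Suction-only vacuum safe for wool carpets.",
    "offers": {
        "@type": "Offer",
        "price": "299.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "132",
    },
}

# Embedded in a page as <script type="application/ld+json">...</script>,
# this supplies the clean specs, availability, and pricing described
# above as non-negotiable inputs to agent systems.
print(json.dumps(product_jsonld, indent=2))
```

The design point is that the trust signals live in the data itself: a page can be visually plain, yet still be the record an agent retrieves and cites.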

This doesn’t mean websites vanish. They’ll remain important for compliance, long-form storytelling, and niches where users still seek a direct experience. But for mainstream interactions, the website is being demoted to plumbing.

And while design and user experience may lose ground to agentic flows, content itself remains critical. Agents still need to be fed with high-quality text, structured product data, verified facts, and fresh signals of authority. Video will grow in importance as agents surface summaries and clips in conversational answers. First-party user-generated content, especially reviews, will carry more weight as a trust signal. Product data, such as clean specs, accurate availability, and transparent pricing, will be a non-negotiable input to agent systems.

In other words, the work of SEO isn’t disappearing. Technical SEO remains the plumbing that ensures content is discoverable and accessible to machines. Content creation continues to matter, both because it fuels agent responses and because humans still consume it when they step beyond the agent flow. The shift is less about content’s relevance and more about where and how it gets consumed. Web design and UX work, however, will inevitably come under scrutiny as optional costs as the agent interface takes over consumer experiences.

One consequence of this shift is that brands risk losing their direct line to the customer. When an agent books the flight, orders the shoes, or schedules the dentist, the consumer’s loyalty may end up with the agent itself, not the underlying business. Just as Amazon’s marketplace turned many sellers into interchangeable storefronts beneath the Amazon brand, agentic systems may flatten brand differentiation unless companies build distinctive signals that survive mediation. That could mean doubling down on structured trust markers, recognizable product data, or even unique content assets that agents consistently retrieve. Without those, the relationship belongs to the agent, not you.

That potential demotion for websites carries consequences. Domains will still matter for branding, offline campaigns, and human recall, but their value as entry points to discovery is shrinking. The secondary market for “premium” domains is already showing signs of stress. Registries have begun cutting or eliminating premium tiers; .art, for example, recently removed over a million names from its premium list to reprice them downward. Investor commentary also points to weaker demand, with TechStartups noting in 2025 that domain sales are “crashing” as AI and shifting search behaviors reduce the perceived need for expensive keyword names.

We’ve seen this arc before. Families once paid hundreds of dollars for full sets of printed encyclopedias. Owning Britannica on your shelf was a marker of credibility and access to knowledge. Today, those same volumes can be found in thrift stores for pennies, eclipsed by digital access that made the scarcity meaningless. Domains are on a similar path. They will remain useful for identity and branding, but the assumption that a keyword .com will keep appreciating looks more like nostalgia than strategy.

Defensive portfolios across dozens of ccTLDs will be harder to justify, just as stocking encyclopedias became pointless once Wikipedia existed. Websites will remain as infrastructure, but their role as front doors will continue to shrink.

Marketing strategies must adapt. The focus will move from polishing landing pages to ensuring your data is retrievable, your brand is trusted by agents, and your authority is machine-validated. SEO, as we know it, will transform from competing for SERP rankings to competing for retrieval and integration into agent responses.

Another underappreciated consequence of all this is measurement. For decades, marketers have relied on web analytics: page views, bounce rates, conversions. Agentic systems obscure that visibility. If a customer never lands on your site but still books through an agent, you may gain the revenue but lose the data trail. New metrics will be needed. Not just whether a page ranks, but whether your content was retrieved, cited, or trusted inside agent flows. In that sense, the industry will need to redefine what “traffic” and “conversion” even mean when the interface is a conversation rather than a website.

The Fear And The Possibility

The fear is obvious. We’ve been here before with AOL. A closed gateway can dominate visibility, commoditize brands, and reduce consumer choice. The open web and search engines broke us out of that in the late 1990s. No one wants to return to those walls.

But the possibility is also real. Businesses that adapt to agentic discovery (with structured signals, trusted data feeds, and machine-recognized authority) can thrive. The website may become plumbing, but plumbing matters. It carries the flow and information that powers the experience.

So the real question isn’t whether websites will still exist. Ultimately, they will, in some format. The question is whether your business is still focused on decorating the door, or whether you’re investing in the pipes that agents actually use to deliver value.

This post was originally published on Duane Forrester Decodes.


Featured Image: Collagery/Shutterstock