Are Google’s AI Travel Results Uncovering More Hidden Gems? [Data Study] via @sejournal, @TaylorDanRW

Google’s AI-generated results reshape how people search, and Google has said that websites should expect traffic fluctuations and that prior success in organic Search does not guarantee future success in the new ecosystem.

This is a big claim. Whether “Hidden Gems” are getting more visibility in modern Search has been debated, and I’m working through as much data as possible to identify whether Google’s claims above have substance.

Google’s Hidden Gems initiative is its effort to highlight genuine, first‑hand content from smaller corners of the web.

It was first revealed in May 2023 and fully integrated into the core algorithm later that year, with official acknowledgment in mid-November 2023.

It targets posts with first-hand knowledge, personal insights, and unique experiences usually found on forums, blogs, social platforms, and niche sites.

Instead of favoring only high-authority domains, it now surfaces these overlooked “gems” because they offer genuine, practical perspectives from creators and brands, rather than content powered by traditional SEO metrics and big-brand budgets.

Hidden Gems has the objective of:

Improving how we (Google) rank results in Search overall, with a greater focus on content with unique expertise and experience.

This brings me to the travel sector and the notion of Hidden Gems.

It has been a long-held belief in the travel sector that Google favors bigger travel brands. When I worked in regional agencies that had travel clients, this was almost a party line when pitching SME and challenger travel websites.

Now that search is evolving, with more and more Search features either powered by or directly interfacing with AI, is this an opportunity for challenger travel brands to gain further visibility within Google’s Search ecosystem?

Methodology

To investigate, we analyzed a dataset of 5,824 URLs surfaced in Google’s AI-generated results for travel-related queries.

As part of the methodology, we also reviewed traditional SEO metrics such as estimated site traffic, domain rating, and total number of domain keywords to validate a qualitative review of whether a site functions as a powerful travel brand or a challenger brand.

Each URL was manually reviewed and tagged based on whether Google identified it as a Hidden Gem. We compared their visibility, domain authority, and how often they appeared in AI results.
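For readers who want to reproduce this kind of aggregation, here is a minimal sketch (not the actual study workflow; the file and column names are hypothetical) that counts citations per domain and splits them by the Hidden Gem tag:

```python
import pandas as pd

# Hypothetical export: one row per cited URL, tagged during the manual review.
# Assumed columns: url, domain, domain_rating, is_hidden_gem (True/False)
df = pd.read_csv("ai_travel_citations.csv")

# Citations per domain, keeping the Hidden Gem tag.
appearances = (
    df.groupby(["domain", "is_hidden_gem"])
      .size()
      .reset_index(name="citations")
)

# Unique domains and total appearances for each group.
summary = appearances.groupby("is_hidden_gem").agg(
    unique_domains=("domain", "nunique"),
    total_appearances=("citations", "sum"),
)
print(summary)
```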

Quantity Vs. Frequency

The dataset revealed a nuanced dynamic: While Hidden Gems were more diverse, they were not more dominant.

From the 5,824 cited URLs, we identified 1,371 unique domains. We classified 590 of these as Hidden Gem domains compared to 781 established domains.

However, those 781 established domains appeared 4,576 times in total, a much higher return rate than the 1,248 total appearances of the Hidden Gems.

This suggests that while AI mode is surfacing a wide variety of lesser-known sources, it still leans heavily on established brands for repeated visibility.

As you would expect, the domains we identified as not being “Hidden Gems” were weighted toward higher DR values.

Image from author, August 2025

By contrast, the domains we identified as being Hidden Gems were not weighted in the opposite direction, but instead much more evenly spread out.

Image from author, August 2025

In other words, Google is sampling widely from the long tail but serving frequently from the head of the distribution.

Authority Still Has A Role

While traditional SEO has long placed emphasis on authority metrics like Domain Rating (DR) or Domain Authority (DA), our analysis shows that their influence may be diminishing in the context of AI-led search.

This shift aligns with broader trends we’ve observed in Google’s evolving ranking systems.

Instead of relying heavily on link-based authority, AI Overviews and similar experiences appear to prioritize content that demonstrates depth, originality, and strong alignment with user intent.

Authority hasn’t disappeared, but it’s been repositioned. Rather than acting as a gatekeeper for visibility, it’s now one of many factors, often taking a back seat to how well a piece of content anticipates and satisfies the user’s informational needs in the moment.

What This Means For Travel Brands

Hidden Gems are showing up in Google’s AI results, but they’re not displacing the giants. They’re appearing alongside them, offering more variety but less dominance.

For challenger brands, this represents both an opportunity and a challenge.

First-Hand Content Gains Ground

The opportunity is clear: Content that is specific, genuine, and useful is getting noticed, even from smaller or lesser-known sites.

AI-powered results seem to be more willing to include pages that deliver practical insights, first-hand experience, and niche relevance, even if they lack the traditional signals of authority.

This creates new openings for brands that previously struggled to compete on backlinks or brand strength alone.

Repetition And Recall Still Matter

But the challenge is equally clear: visibility is not evenly distributed.

While Google may sample from a broader range of sources, the repetition and prominence still favor the dominant travel brands.

These brands appear more frequently, benefit from greater brand recall, and are more likely to be clicked simply because they’re familiar.

So for newer or challenger brands, the question becomes: How do you turn presence into preference?

Where Should I Be Focusing?

Consistency Of Presence

It starts with consistency. One or two appearances in AI Overviews won’t move the needle.

Travel brands need to think about sustained visibility, showing up across a range of topics, formats, and moments in the user journey.

That means building out content that doesn’t just answer common queries but anticipates nuanced needs, inspires curiosity, and offers unique, first-hand insight.

Clarity Of Voice

Next comes clarity of voice. AI systems are increasingly sensitive to content that signals credibility, experience, and originality.

Brands that find and articulate a clear editorial voice, whether that’s luxury travel with a local twist, slow travel for sustainability, or adventure itineraries from people who’ve actually been there, are more likely to stand out.

Intent Understanding

Finally, there’s intent understanding. Challenger brands must shift from thinking in keywords to thinking in moments.

What’s the user trying to imagine, plan, solve, or feel at this point in their journey? How can your content speak directly to that?

A New Definition Of Authority

The travel giants still have scale on their side, but challenger brands now have a better chance to earn visibility through authenticity and depth. That’s a different kind of authority, one rooted in relevance and resonance.

For travel SEOs willing to rethink what authority means, and for brands ready to invest in meaningful, user-first content, AI-powered search is no longer just a threat. It’s an invitation.

Not to play the same game the giants are playing, but to play a different one, and win in different ways.

Featured Image: SvetaZi/Shutterstock

Tired Of SEO Spam, Software Engineer Creates A New Search Engine via @sejournal, @martinibuster

A software engineer from New York got so fed up with the irrelevant results and SEO spam in search engines that he decided to create a better one. Two months later, he has a demo search engine up and running. Here is how he did it, and four important insights about what he feels are the hurdles to creating a high-quality search engine.

One of the motives for creating a new search engine was the perception that mainstream search engines contained an increasing amount of SEO spam. After two months, the software engineer wrote about his creation:

“What’s great is the comparable lack of SEO spam.”

Neural Embeddings

The software engineer, Wilson Lin, decided that the best approach would be neural embeddings. He created a small-scale test to validate the approach and noted that the embeddings approach was successful.

Chunking Content

The next phase was how to process the data, like should it be divided into blocks of paragraphs or sentences? He decided that the sentence level was the most granular level that made sense because it enabled identifying the most relevant answer within a sentence while also enabling the creation of larger paragraph-level embedding units for context and semantic coherence.
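As a rough illustration of that idea (a naive sketch, not Wilson’s actual pipeline), sentence-level chunks can be kept alongside their parent paragraph so each sentence is embedded with wider context:

```python
import re

def chunk_page(paragraphs: list[str]) -> list[dict]:
    """Split page text into sentence-level chunks, keeping the parent
    paragraph so each sentence can be embedded with wider context."""
    chunks = []
    for p_idx, paragraph in enumerate(paragraphs):
        # Naive sentence splitter; a production system would use a real tokenizer.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
        for s_idx, sentence in enumerate(sentences):
            chunks.append({
                "sentence": sentence,
                "paragraph": paragraph,      # paragraph-level context unit
                "position": (p_idx, s_idx),  # where the sentence sits on the page
            })
    return chunks
```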

But he still had problems with identifying context with indirect references that used words like “it” or “the” so he took an additional step in order to be able to better understand context:

“I trained a DistilBERT classifier model that would take a sentence and the preceding sentences, and label which one (if any) it depends upon in order to retain meaning. Therefore, when embedding a statement, I would follow the “chain” backwards to ensure all dependents were also provided in context.

This also had the benefit of labelling sentences that should never be matched, because they were not “leaf” sentences by themselves.”

Identifying The Main Content

A challenge for crawling was developing a way to ignore the non-content parts of a web page in order to index what Google calls the Main Content (MC). What made this challenging was the fact that all websites use different markup to signal the parts of a web page, and although he didn’t mention it, not all websites use semantic HTML, which would make it vastly easier for crawlers to identify where the main content is.

So he basically relied on HTML tags, like the paragraph tag (<p>), to identify which parts of a web page contained the content and which parts did not.

This is the list of HTML tags he relied on to identify the main content (a minimal extraction sketch follows the list):

  • blockquote – A quotation
  • dl – A description list (a list of descriptions or definitions)
  • ol – An ordered list (like a numbered list)
  • p – Paragraph element
  • pre – Preformatted text
  • table – The element for tabular data
  • ul – An unordered list (like bullet points)
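A minimal sketch of that approach, assuming BeautifulSoup for parsing (the article doesn’t say which parser he used), keeps only the text inside those content-bearing tags:

```python
from bs4 import BeautifulSoup

# Tags the article lists as signals of main content.
CONTENT_TAGS = ["blockquote", "dl", "ol", "p", "pre", "table", "ul"]

def extract_main_content(html: str) -> str:
    """Keep only text inside content-bearing tags, dropping nav, ads, and scripts."""
    soup = BeautifulSoup(html, "html.parser")
    # Strip obvious non-content containers first.
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()
    blocks = [el.get_text(" ", strip=True) for el in soup.find_all(CONTENT_TAGS)]
    return "\n\n".join(b for b in blocks if b)
```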

Issues With Crawling

Crawling was another part that came with a multitude of problems to solve. For example, he discovered, to his surprise, that DNS resolution was a fairly frequent point of failure. The type of URL was another issue, where he had to block any URL from crawling that was not using the HTTPS protocol.

These were some of the challenges:

“They must have https: protocol, not ftp:, data:, javascript:, etc.

They must have a valid eTLD and hostname, and can’t have ports, usernames, or passwords.

Canonicalization is done to deduplicate. All components are percent-decoded then re-encoded with a minimal consistent charset. Query parameters are dropped or sorted. Origins are lowercased.

Some URLs are extremely long, and you can run into rare limits like HTTP headers and database index page sizes.

Some URLs also have strange characters that you wouldn’t think would be in a URL, but will get rejected downstream by systems like PostgreSQL and SQS.”
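A simplified sketch of those filtering and canonicalization rules, using Python’s standard urllib (the real implementation is more involved, and this version simply drops query parameters rather than sorting them):

```python
from urllib.parse import urlsplit, urlunsplit, quote, unquote

def accept_and_canonicalize(url: str) -> str | None:
    """Apply the kinds of filters described above; return a canonical URL or None."""
    parts = urlsplit(url)
    # Only https, no credentials, no explicit ports.
    if parts.scheme != "https" or parts.username or parts.password or parts.port:
        return None
    if not parts.hostname or "." not in parts.hostname:
        return None  # crude stand-in for a real eTLD + hostname check
    # Percent-decode then re-encode the path with a minimal charset,
    # lowercase the origin, and drop query parameters to deduplicate.
    path = quote(unquote(parts.path), safe="/-._~")
    canonical = urlunsplit(("https", parts.hostname.lower(), path or "/", "", ""))
    if len(canonical) > 2000:
        return None  # avoid rare header / index page-size limits
    return canonical
```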

Storage

At first, Wilson chose Oracle Cloud because of the low cost of transferring data out (egress costs).

He explained:

“I initially chose Oracle Cloud for infra needs due to their very low egress costs with 10 TB free per month. As I’d store terabytes of data, this was a good reassurance that if I ever needed to move or export data (e.g. processing, backups), I wouldn’t have a hole in my wallet. Their compute was also far cheaper than other clouds, while still being a reliable major provider.”

But the Oracle Cloud solution ran into scaling issues. So he moved the project over to PostgreSQL, experienced a different set of technical issues, and eventually landed on RocksDB, which worked well.

He explained:

“I opted for a fixed set of 64 RocksDB shards, which simplified operations and client routing, while providing enough distribution capacity for the foreseeable future.

…At its peak, this system could ingest 200K writes per second across thousands of clients (crawlers, parsers, vectorizers). Each web page not only consisted of raw source HTML, but also normalized data, contextualized chunks, hundreds of high dimensional embeddings, and lots of metadata.”
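The routing itself can be as simple as a stable hash of the URL modulo the fixed shard count; the sketch below illustrates that idea and is not his actual client code:

```python
import hashlib

NUM_SHARDS = 64  # fixed shard count, as described above

def shard_for(key: str) -> int:
    """Route a URL (or any key) to one of the fixed shards.

    A stable hash keeps routing consistent across the many clients
    (crawlers, parsers, vectorizers) writing to the same store.
    """
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Example: every client computes the same shard for the same URL.
print(shard_for("https://example.com/some-page"))  # deterministic value in 0-63
```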

GPU

Wilson used GPU-powered inference to generate semantic vector embeddings from crawled web content using transformer models. He initially used OpenAI embeddings via API, but that became expensive as the project scaled. He then switched to a self-hosted inference solution using GPUs from a company called Runpod.

He explained:

“In search of the most cost effective scalable solution, I discovered Runpod, who offer high performance-per-dollar GPUs like the RTX 4090 at far cheaper per-hour rates than AWS and Lambda. These were operated from tier 3 DCs with stable fast networking and lots of reliable compute capacity.”
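For a sense of what self-hosted GPU embedding inference looks like, here is a minimal sketch using the sentence-transformers library; the model name is an assumption, since the article doesn’t say which open model he ran:

```python
from sentence_transformers import SentenceTransformer

# Model choice is illustrative; the article doesn't name the model he used.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")

chunks = [
    "Sentence-level chunk one from a crawled page.",
    "Another contextualized chunk from the same page.",
]

# Batched GPU inference returns one dense vector per chunk.
embeddings = model.encode(chunks, batch_size=256, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384) for this particular model
```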

Lack Of SEO Spam

The software engineer claimed that his search engine had less search spam and used the example of the query “best programming blogs” to illustrate his point. He also pointed out that his search engine could understand complex queries and gave the example of inputting an entire paragraph of content and discovering interesting articles about the topics in the paragraph.

Four Takeaways

Wilson listed many discoveries, but here are four that may be of interest to digital marketers and publishers interested in this journey of creating a search engine:

1. The Size Of The Index Is Important

One of the most important takeaways Wilson learned from two months of building a search engine is that the size of the search index is important because, in his words, “coverage defines quality.”

2. Crawling And Filtering Are The Hardest Problems

Although crawling as much content as possible is important for surfacing useful content, Wilson also learned that filtering low quality content was difficult because it required balancing the need for quantity against the pointlessness of crawling a seemingly endless web of useless or junk content. He discovered that a way of filtering out the useless content was necessary.

This is actually the problem that Sergey Brin and Larry Page solved with PageRank. PageRank modeled user behavior, the choices and votes of humans who validate web pages with links. Although PageRank is nearly 30 years old, the underlying intuition remains so relevant today that the AI search engine Perplexity uses a modified version of it for its own search engine.

3. Limitations Of Small-Scale Search Engines

Another takeaway he discovered is that there are limits to how successful a small independent search engine can be. Wilson cited the inability to crawl the entire web as a constraint which creates coverage gaps.

4. Judging Trust And Authenticity At Scale Is Complex

Automatically determining originality, accuracy, and quality across unstructured data is non-trivial.

Wilson writes:

“Determining authenticity, trust, originality, accuracy, and quality automatically is not trivial. …if I started over I would put more emphasis on researching and developing this aspect first.

Infamously, search engines use thousands of signals on ranking and filtering pages, but I believe newer transformer-based approaches towards content evaluation and link analysis should be simpler, cost effective, and more accurate.”

Interested in trying the search engine? You can find it here, and you can read the full technical details of how he did it here.

Featured Image by Shutterstock/Red Vector

Google Answers Question About Core Web Vitals “Poisoning” via @sejournal, @martinibuster

Someone posted details of a novel negative SEO attack that they said appeared to be a Core Web Vitals performance poisoning attack. Google’s John Mueller and Chrome’s Barry Pollard assisted in figuring out what was going on.

The person posted on Bluesky, tagging Google’s John Mueller and Rick Viscomi, the latter a DevRel Engineer at Google.

They posted:

“Hey we’re seeing a weird type of negative SEO attack that looks like core web vitals performance poisoning, seeing it on multiple sites where it seems like an intentional render delay is being injected, see attached screenshot. Seeing across multiple sites & source countries

..this data is pulled by webvitals-js. At first I thought dodgy AI crawler but the traffic pattern is from multiple countries hitting the same set of pages and forging the referrer in many cases”

The significance of the reference to “webvitals-js” is that the degraded Core Web Vitals data is from what’s hitting the server, actual performance scores recorded on the website itself, not the CrUX data, which we’ll discuss next.

Could This Affect Rankings?

The person making the post did not say if the “attack” had impacted search rankings, although that is unlikely, given that website performance is a weak ranking factor and less important than things like content relevance to user queries.

Google’s John Mueller responded, sharing his opinion that it’s unlikely to cause an issue, and tagging Chrome Web Performance Developer Advocate Barry Pollard (@tunetheweb) in his response.

Mueller said:

“I can’t imagine that this would cause issues, but maybe @tunetheweb.com has seen things like this or would be keen on taking a look.”

Barry Pollard wondered if it’s a bug in the web-vitals library and asked the original poster if it’s reflected in the CrUX data (Chrome User Experience Report), which is a record of actual user visits to websites.

The person who posted about the issue responded to Pollard’s question by answering that the CrUX report does not reflect the page speed issues.

They also stated that the website in question is experiencing a cache-bypass DoS (denial-of-service) attack, which is when an attacker sends a massive number of web page requests that bypass a CDN or a local cache, causing stress to server resources.

The method employed by a cache-bypass DoS attack is to bypass the cache (whether that’s a CDN or a local cache) in order to get the server to serve a web page (instead of a copy of it from the cache or CDN), thus slowing down the server.

The local web-vitals script is recording the performance degradation of those visits, but it is likely not registering with the CrUX data because that comes from actual Chrome browser users who have opted in to sharing their web performance data.

So What’s Going On?

Judging by the limited information in the discussion, it appears that a DoS attack is slowing down server response times, which in turn is affecting page speed metrics on the server. The Chrome User Experience Report (CrUX) data is not reflecting the degraded response times, which could be because the CDN is handling the page requests for the users recorded in CrUX. There’s a remote chance that the CrUX data isn’t fresh enough to reflect recent events but it seems logical that users are getting cached versions of the web page and thus not experiencing degraded performance.

I think the bottom line is that CWV scores themselves will not have an effect on rankings. Given that actual users themselves will hit the cache layer if there’s a CDN, the DoS attack probably won’t have an effect on rankings in an indirect way either.

Ex-Microsoft SEO Pioneer On Why AI’s Biggest Threat To SEO Isn’t What You Think via @sejournal, @theshelleywalsh

While industry professionals debate the nomenclature of SEO, GEO, or AEO, and whether ChatGPT or Google’s AI Overviews will replace traditional search, a more fundamental shift is happening that could disrupt the entire industry business model.

To get a better understanding of this, I spoke to the 25-year veteran and SEO pioneer Duane Forrester to discuss some of his recent articles about the shift from traditional SEO and the impact on how SEO roles are changing and adapting.

Duane previously worked at Microsoft as a senior program manager of SEO, where he helped to launch Bing Webmaster Tools and bring Schema.org to life. He has a deep understanding of how search engines work and has now turned his attention to adapting to the realities of AI-powered search and digital discovery.

His belief is that the real disruption isn’t AI replacing search engines; it’s the rise of AI agents. These “Agentic AI” systems will empower individuals to work like small agencies, and the jobs that thrive will be those that can effectively manage an AI team.

The Rise Of Agentic AI: Virtual Team Members

In Duane’s recent article “SEO’s Existential Threat is AI, but Not in the Way You Think,” he said it’s the rise of AI agents and retrieval-based systems that are already transforming how people interact with information, quietly eroding SEO’s return on investment. So, I asked him how agents and not SERPs are the future.

Duane explained:

“The most significant development isn’t AI replacing search engines; it’s the emergence of Agentic AI systems that can be given tasks and execute them autonomously … This is really a personal thing and I’ve been following this since I worked at Microsoft. I did some early work with Cortana with that program and training it for language recognition.”

Within six months, Duane predicts, professionals will routinely instruct AI agents to perform work while they focus on higher-value activities. The impact is that individuals will be able to operate much more like a small agency.

“If I can create a process and the process is largely executed by agents, then the 100% of my time that I can devote can be reapportioned to human-in-the-loop analysis.

This is going to be the way for us to create virtual players on our team and to do specific tasks to enable us to define the most valuable use of our time, whatever it happens to be. That valuable use of time for some people may be closing their next client. It may simply be the sales cycle. For other people who, maybe, lack knowledge and experience, it may actually be executing on what you promised the client.”

However, Duane thinks that developing people management skills will be critical to success:

“If you step into the world of Agentic AI and you’re going down that path, you better have people management skills because you’re going to need them. That’s the skill set that will prove most valuable to managing Agentic AI work. You have to think of them not necessarily as humans, but as systems that need guidance.”

The Job Transformation: Writers As AI Instructors

I then asked Duane about his latest article, where he wrote about which SEO jobs AI will reshape and which might disappear.

He responded that the most dramatic changes will impact content creators, but not in the way many expect.

Duane thinks that traditional writing roles face automation, but professionals who adapt will become more valuable than ever.

“If your full-time job is sitting down writing, that’s in jeopardy,” Duane acknowledges.

“The new model transforms writers from creators to instructors, managing multiple AI agents across different clients simultaneously. Instead of spending hours researching and writing, professionals can brief a dozen agents in minutes and focus on editing, refining tone, and ensuring accuracy.”

“You can tell a dozen agents for a dozen clients to all start and you can get them all started in less than two minutes and then in about 10 minutes have all of the output that you now will go in and edit one by one.”

Paradoxically, he thinks the role most in demand will be quality experienced writers, but only those who learn how to embrace and integrate AI to be efficient and effectively manage an AI team of writers.

By becoming a “human in the loop” editor who can guide AI output, an experienced writer can add value in ways machines can’t by refining tone, ensuring factual accuracy, and aligning copy with brand voice and client needs.

“I recently wrote about a Microsoft survey that showed the overlay of how AI can do a job versus humans doing that same job … their point was, if you’re in these jobs, you kind of want to figure out how to pivot to something different.”

Strategic Roles Remain Safe

The jobs that are vulnerable to AI are those with a repetitive nature that can be done by an AI faster, easier, and cheaper than a human.

While these execution-focused roles face disruption, strategic positions like CMOs remain relatively protected. These roles survive because they require experience-based decision-making that AI cannot replicate.

“It’s going to be harder to replace that level of experience because the system doesn’t have the experience,” Duane emphasizes.

The distinction isn’t about seniority but about the nature of the work. Repetitive tasks get automated first, while roles requiring strategic thinking, relationship building, and complex problem-solving remain human-dominated.

CMOs are considered “safe” not because they are senior, but because they are thinking in terms of strategy. They succeed by analyzing consumer behavior, identifying monetization opportunities, and aligning products with customer problems, capabilities that demand human insight and industry knowledge.

“They’re watching consumer behavior, and they’re trying to tease out from the consumer behavior: How do we make money from that? How do we align our product to solve a customer’s problem? And then that generates more sales. That’s the job of the CMO.

And then everything else under it, which is building and maintaining the team, running all the groups, and making sure everything is on track. It’s going to be harder to replace that level of experience because the system doesn’t have the experience.”

Preparing For The Future

Success in these evolving times requires immediate action on hiring and training. Companies must update job descriptions today to reflect skills needed in two to three years, or develop comprehensive training programs for existing staff.

“The people you’re hiring today, in theory, should still be with you in a couple of years. And if they are still with you in a couple of years and you don’t hire these new skills today, well then, you better have a training plan to get them there.”

I compared the current transformation with the early days of SEO, when pioneers navigated uncharted territory. Today’s professionals face a similar challenge of adapting to work alongside AI systems or risking obsolescence.

The future belongs to those who can embrace AI as a productivity multiplier rather than a replacement threat. Those who learn to instruct, guide, and optimize AI agents will find themselves more valuable than ever, while those who resist change may find themselves left behind.

“This isn’t just about surviving disruption,” Duane concluded. “It’s about positioning yourself to benefit from it.”

Watch the full video interview with Duane Forrester below.

Duane is currently writing about the shift from traditional SEO to vector-driven retrieval and AI-generated answers at Duane Forrester Decodes and featured here at Search Engine Journal.

Thank you to Duane for offering his insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

Google Explains Why They Need To Control Their Ranking Signals via @sejournal, @martinibuster

Google’s Gary Illyes answered a question about why Google doesn’t use social sharing as a ranking factor, explaining that it’s about the inability to control certain kinds of external signals.

Kenichi Suzuki Interview With Gary Illyes

Kenichi Suzuki (LinkedIn profile), of Faber Company (LinkedIn profile), is a respected Japanese search marketing expert who has at least 25 years of experience in digital marketing. I last saw him speak at a Pubcon session a few years back, where he shared his findings on qualities inherent to sites that Google Discover tended to show.

Suzuki published an interview with Gary Illyes, where he asked a number of questions about SEO, including this one about SEO, social media, and Google ranking factors.

Gary Illyes is an Analyst at Google (LinkedIn profile) who has a history of giving straightforward answers that dispel SEO myths and sometimes startle, like the time recently when he said that links play less of a role in ranking than most SEOs tend to believe. Gary used to be a part of the web publishing community before working at Google, and he was even a member of the WebmasterWorld forums under the nickname Methode. So I think Gary knows what it’s like to be a part of the SEO community and how important good information is, and that’s reflected in the quality of answers he provides.

Are Social Media Shares Or Views Google Ranking Factors?

The question about social media and ranking factors was asked by Rio Ichikawa (LinkedIn profile), also of Faber Company. She asked Gary whether social media views and shares were ranking signals.

Gary’s answer was straightforward and with zero ambiguity. He said no. The interesting part of his answer was the explanation of why Google doesn’t use them and will never use them as a ranking factor.

Ichikawa asked the following question:

“All right then. The next question. So this is about the SEO and social media. Is the number of the views and shares on social media …used as one of the ranking signals for SEO or in general?”

Gary answered:

“For this we have basically a very old, very canned response and something that we learned or it’s based on something that we learned over the years, or particularly one incident around 2014.

The answer is no. And for the future is also likely no.

And that’s because we need to be able to control our own signals. And if we are looking at external signals, so for example, a social network’s signals, that’s not in our control.

So basically if someone on that social network decides to inflate the number, we don’t know if that inflation was legit or not, and we have no way knowing that.”

Easily Gamed Signals Are Unreliable For SEO

External signals that Google can’t control but can be influenced by an SEO are untrustworthy. Googlers have expressed similar opinions about other things that are easily manipulated and therefore unreliable as ranking signals.

Some SEOs might say, “If that’s true, then what about structured data? Those are under the control of SEOs, but Google uses them.”

Yes, Google uses structured data, but not as a ranking factor; they just make websites eligible for rich results. Additionally, stuffing structured data with content that’s not visible on the web page is a violation of Google’s guidelines and can lead to a manual action.

A recent example is the LLMs.txt protocol proposal, which is essentially dead in the water precisely because it is unreliable, in addition to being superfluous. Google’s John Mueller has said that the LLMs.txt protocol is unreliable because it could easily be misused to show highly optimized content for ranking purposes, and that it is analogous to the keywords meta tag, which was used by SEOs for every keyword they wanted their web pages to rank for.

Mueller said:

“To me, it’s comparable to the keywords meta tag – this is what a site-owner claims their site is about … (Is the site really like that? well, you can check it. At that point, why not just check the site directly?)”

The content within an LLMs.txt file and its associated files is completely under the control of SEOs and web publishers, which makes it unreliable.

Another example is the author byline. Many SEOs promoted author bylines as a way to show “authority” and influence Google’s understanding of Expertise, Experience, Authoritativeness, and Trustworthiness. Some SEOs, predictably, invented fake LinkedIn profiles to link from their fake author bios in the belief that author bylines were a ranking signal. The irony is that the ease of abusing author bylines should have been reason enough for the average SEO to dismiss them as a ranking-related signal.

In my opinion, the key statement in Gary’s answer is this:

“…we need to be able to control our own signals.”

I think that the SEO community, moving forward, really needs to rethink some of the unconfirmed “ranking signals” they believe in, like brand mentions, and just move on to doing things that actually make a difference, like promoting websites and creating experiences that users love.

Watch the question and answer at about the ten minute mark:

Featured Image by Shutterstock/pathdoc

AI Search Changes Everything – Is Your Organization Built To Compete? via @sejournal, @billhunt

Search has changed. Have you?

Search is no longer about keywords and rankings. It’s about relevance, synthesis, and structured understanding.

In the AI-powered era of Google Overviews, ChatGPT-style assistants, and concept-level rankings, traditional SEO tactics fall short.

Content alone won’t carry you. If your organization isn’t structurally and strategically aligned to compete in this new paradigm, you’re invisible even if you’re technically “ranking.”

This article builds on the foundation laid in my earlier article, “From Building Inspector To Commissioning Authority,” where I argued that SEO must shift from reactive inspection to proactive orchestration.

It also builds upon my exploration of the real forces reshaping search, including the rise of Delphic Costs, where brands are extracted from the customer journey without attribution, and the organizational imperative to treat visibility as everyone’s responsibility, not just a marketing key performance indicator (KPI).

And increasingly, it’s not just about your monetization. It’s about the platform.

The Three Shifts Reshaping Search

1. Google AI Overviews: The Answer Layer Supersedes The SERP

Google is bypassing traditional listings with AI-generated answers. These overviews synthesize facts, concepts, and summaries across multiple sources.

Your content may power the answer, but without attribution, brand visibility, or clicks. In this model, being the source is no longer enough; being the credited authority is the new battle.

2. Generative Assistants: New Gatekeepers To Discovery

Tools like ChatGPT, Perplexity, and Gemini collapse the search journey into a single query/answer exchange. They prioritize clarity, conceptual alignment, and structured authority.

They don’t care about the quantity of backlinks; they care about structured understanding. Organizations relying on domain authority or legacy SEO tactics are being leapfrogged by competitors who embrace AI-readable content.

3. Concept-Based Ranking: From Keywords To Entities And Context

Ranking is no longer determined by exact-match phrases. It’s determined by how well your content reflects and reinforces the concepts, entities, and context behind a query.

AI systems think in knowledge graphs, not spreadsheets. They interpret meaning through structured data, relationships between entities, and contextual signals.

These three shifts mean that success now depends on how well your organization can make its expertise machine-readable and contextually integrated into AI ecosystems.

A New Era Of Monetization And Data Harvesting

Search platforms have evolved from organizing information to owning outcomes. Their mission is no longer to guide users to your site; it’s to keep users inside their ecosystem.

The more they can answer in place, the more behavioral data they collect, and the more control they retain over monetization.

Today, your content competes not just with other brands but with the platforms themselves. They’re generating “synthetic content” derived from your data – packaged, summarized, and monetized within their interfaces.

As Dotdash Meredith CEO Neil Vogel put it: “We were in the business of arbitrage. We’d buy traffic for a dollar, monetize it for two. That game is over. We’re now in the business of high-quality content that platforms want to reward.”

Behavioral consequence: If your content can’t be reused, monetized, or trained against, it’s less likely to be shown.

Strategic move: Make your content AI-friendly, API-ready, and citation-worthy. Retain ownership of your core value. Structured licensing, schema, and source attribution matter more than ever.

This isn’t just about visibility. It’s about defensibility.

The Strategic Risks

Enterprises that treat search visibility as a content problem – not a structural one – are walking blind into four key risks:

  • Disintermediation: You lose traffic, attribution, and control when AI systems summarize your insights without directing users to you. In an AI-mediated search world, your value can be extracted while your brand is excluded.
  • Market Dilution: Nimbler competitors who better align with AI content requirements will surface more often, even if they have less experience or credibility. This creates a reverse trust dynamic: newcomers gain exposure by leveraging the machine’s strengths, while legacy players lose visibility.
  • Performance Blind Spots: Traditional KPIs no longer capture the real picture. Traffic may appear stable while influence and presence erode behind the scenes. Executive dashboards often miss this erosion because they’re still tuned to clicks, not concept penetration or AI inclusion.
  • Delphic Costs: This, as defined by Andrei Broder and Preston McAfee, refers to the expenses incurred when AI systems extract your expertise without attribution or downstream benefits, resulting in brand invisibility despite active contributions. Being referenced but not represented becomes a strategic liability.

Are You Built To Compete?

Here’s a five-pillar diagnostic framework to assess your organization’s readiness for AI search:

1. Content Structure

  • Do you use schema markup to define your content’s meaning?
  • Are headings, tables, lists, and semantic formats prioritized?
  • Is your content chunked in ways AI systems can easily digest?
  • Are your most authoritative explanations embedded in the page in clear, concise, answer-ready writing? (A minimal markup sketch follows this list.)
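As a minimal illustration of the first point, schema markup can be generated programmatically and embedded in the page; the values below are placeholders, and the right schema.org type depends on your content:

```python
import json

# Placeholder values; swap in your real page data and the schema.org type
# that matches your content (Article, FAQPage, Product, etc.).
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How To Plan A Slow-Travel Itinerary",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-08-01",
    "about": [{"@type": "Thing", "name": "slow travel"}],
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```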

2. Relevance Engineering

  • Do you map queries to concepts and entities?
  • Is your content designed for entity resolution, not just keyword targeting?
  • Are you actively managing topic clusters and knowledge structures?
  • Have you audited your internal linking and content silos to support knowledge graph connectivity?

3. Organizational Design (Shared Accountability)

  • Who owns “findability” in your organization?
  • Are SEO, content, product, and dev teams aligned around structured visibility?
  • Is there a commissioning authority that ensures strategy alignment from the start?
  • Do product launches and campaign rollouts include a visibility readiness review?
  • Are digital visibility goals embedded in executive and departmental KPIs?

In one example, a SaaS company I advised implemented monthly “findability sprints,” where product, dev, and content teams worked together to align schema, internal linking, and entity structure.

The result? A 30% improvement in AI-assisted surfacing – without publishing a single new page.

4. AI Feedback Loops

  • Are you tracking where and how your content appears in AI Overviews or assistants?
  • Do you have visibility into lost attribution or uncredited brand mentions?
  • Are you using tools or processes to monitor AI surface presence?
  • Have you incorporated AI visibility into your reporting cadence and strategic reviews?

5. Modern KPIs

  • Do your dashboards still prioritize traffic volume over influence?
  • Are you measuring presence in AI systems as part of performance?
  • Do your teams know what “visibility” actually means in an AI-dominant world?
  • Are your KPIs evolving to include citations, surface presence, and non-click influence metrics?

The Executive Mandate: From Visibility Theater To Strategic Alignment

Organizations must reframe search visibility as digital infrastructure, not a content marketing afterthought.

Just as commissioning authorities ensure a building functions as designed, your digital teams must be empowered to ensure your knowledge is discoverable, credited, and competitively positioned.

AI-readiness isn’t about writing more content. It’s about aligning people, process, and technology to match how AI systems access and deliver value. You can’t fix this with marketing alone. It requires a leadership-driven transformation.

Here’s how to begin:

  1. Reframe SEO as Visibility Engineering: Treat it as a cross-functional discipline involving semantics, structure, and systems design.
  2. Appoint a Findability or Answers Leader: This role connects the dots across content, code, schema, and reporting to ensure you are found and answering the market’s questions.
  3. Modernize Metrics: Track AI visibility, entity alignment, and concept-level performance – not just blue links.
  4. Run an AI Exposure Audit: Understand where you’re showing up, how you’re credited, and most critically, where and why you’re not. Just ask the AI system, and it will tell you exactly why you were not referenced.
  5. Reward Structural Alignment: Incentivize teams not just for publishing volume, but for findability performance. Celebrate contributions to visibility the same way you celebrate brand reach or campaign success. Make visibility a cross-team metric.

Final Thought: You Can’t Win If You’re Not Represented

AI is now the front end of discovery. If you’re not structured to be surfaced, cited, and trusted by machines, you’re losing silently.

You won’t fix this with a few blog posts or backlinks.

You fix it by building an organization designed to compete in the era of machine-mediated relevance.

This is your commissioning moment – not just to inspect the site after it’s built, but to orchestrate the blueprint from the start.

Welcome to the new search. Let’s build for it.

Featured Image: Master1305/Shutterstock

Earn 1,000+ Links & Boost Your SEO Visibility [Webinar] via @sejournal, @lorenbaker

Build the Authority You Need for AI-Driven Visibility

Struggling to get backlinks, even when your content is solid? 

You’re not alone. With Google’s AI Overviews and generative search dominating the results, traditional link-building tactics just don’t cut it anymore.

It’s time to earn the trust that boosts your brand’s visibility across Google, ChatGPT, and AI search engines.

Join Kevin Rowe, Founder & Head of Digital PR Strategy at PureLinq, on August 27, 2025, for an exclusive webinar. Learn the exact strategies Kevin’s team used to earn 1,000+ links and how you can replicate them without needing a massive budget or PR team.

What You’ll Learn:

  • How to identify media trends where your expertise is in demand.
  • The step-by-step process to create studies that can earn links on autopilot.
  • How to craft a story angle journalists will want to share.

Why This Webinar is Essential:

Earned links and citations are now key to staying visible in AI search results. This session will provide you with a proven, actionable playbook for boosting your SEO visibility and building the authority you need to succeed in this new era.

Register today to get the playbook for link-building success. Can’t attend live? Don’t worry, sign up anyway, and we’ll send you the full recording.

Is GEO the Same as SEO?

“Generative engine optimization” refers to tactics for increasing visibility in and traffic from AI answers. “Answer engine optimization” is synonymous with GEO, as are references to large language models, such as “LLM optimization.”

Whatever the name, optimizing for generative AI is different from traditional search engines. The distinction lies in the underlying technology:

  • LLM platforms don’t always perform a search to produce answers. The platforms use training data, which doesn’t typically have sources or URLs. It’s a knowledge base for accessing answers without necessarily knowing the origin.
  • Unlike search engines, LLMs don’t have an index or a cache of URLs. When they search, LLMs use external search engines, likely Google for ChatGPT.
  • After searching, AI crawlers go to the page, read it, and pull answers from it. AI crawlers are much less advanced than those of search engines and, accordingly, cannot render a page as easily as a Google crawler.

GEO-specific tactics include:

  • A brand in AI training data has long-term exposure in answers, but appearing in that data requires an approach beyond SEO. The keys are concise, relevant, problem-solving content on-site, and off-site exposure in reviews, forums, and other reputable mentions.
  • Being indexed by Google is more or less essential for AI answers, to a point. Additional optimization steps include (i) ensuring the site is accessible and crawlable by AI bots, (ii) structuring content to enable AI to pull answers easily, and (iii) optimizing for prompts, common needs, and, yes, keywords.
  • Keywords remain critical (and evolving) for GEO and SEO, although the former “fans out” to also answer likely follow-up prompts.

SEO

Reliance on Google varies by the genAI platform. ChatGPT, again, taps Google’s index. AI Overviews mostly summarize top-ranking URLs for the initial and fan-out queries. Higher rankings in organic search will likely directly elevate visibility in AI Overviews and Perplexity.

Google Search remains the most powerful discovery and visibility engine. And a brand that ranks high in Google is typically prominent, which drives visibility in AI Answers. As such, organic rankings also drive AI indirectly, through brand signals.

GEO

Thus GEO and SEO overlap. Pages that rank highly in organic search results will almost certainly end up in training data with elevated chances of appearing in AI answers.

Yet for training data, AI platforms continuously crawl sites with their own, limited bots and those of third-party providers, such as Common Crawl.

Hence AI platforms crawl pages via two paths: from links in organic search results and independently with their own (or outsourced) bots.

GEO kicks in when the bots reach a page. The sophistication of AI crawlers is much less than Google’s. GEO requires concise, relevant page content that’s easily accessed, without JavaScript, and succinctly summarizes a need and then answers it directly.

Structured data markup, such as from Schema.org, likely helps, too.

In short, a GEO-ready page has a clear purpose and clear answers, easily crawled.
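A quick way to sanity-check that is to look at the raw HTML the way a limited AI crawler would, without executing JavaScript. The sketch below (a hypothetical check, with placeholder URL and snippets) fetches a page and verifies that key answer text is already present in the unrendered markup:

```python
import requests
from bs4 import BeautifulSoup

def visible_without_js(url: str, answer_snippets: list[str]) -> dict:
    """Fetch raw HTML (no JavaScript execution, similar to a limited AI
    crawler) and check whether key answer text is already present."""
    html = requests.get(url, timeout=10, headers={"User-Agent": "geo-check/0.1"}).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True).lower()
    return {snippet: snippet.lower() in text for snippet in answer_snippets}

# Hypothetical usage: confirm the page's core answers survive without rendering.
print(visible_without_js(
    "https://example.com/what-is-geo",
    ["generative engine optimization", "answer engine optimization"],
))
```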

Google Rolls Out ‘Preferred Sources’ For Top Stories In Search via @sejournal, @MattGSouthern

Google is rolling out a new setting that lets you pick which news outlets you want to see more often in Top Stories.

The feature, called Preferred Sources, is launching today in English in the United States and India, with broader availability in those markets over the next few days.

What’s Changing

Preferred Sources lets you choose one or more outlets that should appear more frequently when they have fresh, relevant coverage for your query.

Google will also show a dedicated From your sources section on the results page. You will still see reporting from other publications, so Top Stories remains a mix of outlets.

Google Product Manager Duncan Osborn says the goal is to help you “stay up to date on the latest content from the sites you follow and subscribe to.”

How To Turn It On

Image Credit: Google
  1. Search for a topic that is in the news.
  2. Tap the icon to the right of the Top stories header.
  3. Search for and select the outlets you want to prioritize.
  4. Refresh the results to see the updated mix.

You can update your selections at any time. If you previously opted in to the experiment through Labs, your saved sources will carry over.

In early testing through Labs, more than half of participants selected four or more sources. That suggests people value seeing a range of outlets while still leaning toward publications they trust.

Why It Matters

For publishers, Preferred Sources creates a direct way to encourage loyal readers to see more of your coverage in Search.

Loyal audiences are more likely to add your site as a preferred source, which can increase the likelihood of showing up for them when you have fresh, relevant reporting.

You can point your audience to the new setting and explain how to add your site to their list. Google has also published help resources for publishers that want to promote the feature to followers and subscribers.

This adds another personalization layer on top of the usual ranking factors. Google says you will still see a diversity of sources, and that outlets only appear more often when they have new, relevant content.

Looking Ahead

Preferred Sources fits into Google’s push to let you customize Search while keeping a variety of perspectives in Top Stories.

If you have a loyal readership, this feature is another reason to invest in retention and newsletters, and to make it easy for readers to follow your coverage on and off Search.

Is AI Cutting Into Your SEO Conversions? via @sejournal, @Kevin_Indig

Since March 2025 in the U.S. (and May elsewhere), many sites have noticed an uncomfortable pattern: organic conversions slipping.

It’s easy to blame falling traffic from Google’s intensified AI Overviews.

But purchase intent doesn’t just vanish. Does it?

If your conversions are holding steady, congratulations. If they’re not, the reasons may be more layered than you think.

In today’s Memo, I’m breaking down the five biggest forces I see behind SEO conversion declines across industries:

  • Loss of top-of-the-funnel (TOFU) traffic (and why it matters more than you thought).
  • Platform shifts pulling demand into other ecosystems.
  • Channel shifts from organic to paid search.
  • Attribution leakage that hides organic’s true impact.
  • Macro factors pressuring conversion rates.

I’ll also walk you through the signals to check, how to measure each, and – inside the premium section – the exact process I use to identify which drivers are hitting a site the hardest.

Image Credit: Kevin Indig


How have your SEO conversions changed since Google intensified AI Overviews?

If they’ve grown – all the power to you!

If not, I’m seeing five underlying reasons that could be contributing to their decline across industry types:

  1. Loss of TOFU traffic.
  2. Platform shift.
  3. Channel shift.
  4. Attribution loss.
  5. Economic change.

Sites that are noticing an SEO conversion drop have seen it since 2025 (March in the U.S., May in other countries).

It’s logical to assume that the reason is a decline in organic traffic – makes sense – but purchase intent doesn’t just vanish.

Have your conversions gone to other sites, or could there be another explanation behind their decline?

Let’s dig in.

For decades, SEOs have created top-of-the-funnel content (like “what is X” or “why you need X”). This kind of content often has an unclear impact on the bottom line.

Now that organic clicks are dropping, conversions are dropping (to a lower degree) as well.

Was top-of-the-funnel content more impactful than we thought all along?

I raised that theory first in the AI Halftime Report:

AIOs are really mostly TOFU queries. In that case, TOFU content always had more impact on the bottom line than we were able to prove and we can expect the traffic decline to level off.

Or AIOs impact way more than MOFU and BOFU queries as well (which is what I think), and we’re in for a long decline of traffic.

If true, I expect revenue that’s attributed to organic search to decline at a lower rate – or not at all for certain companies – since purchase intent doesn’t just go away. Therefore, revenue results would relate more to our ability to influence purchase intent.

That’s where my concept of “Demand Activation” elasticity comes in.

In economics, price elasticity measures how much demand changes when prices change.

In marketing, Demand Activation elasticity describes how much eventual purchase behavior changes when you influence someone early in their journey.

Think about Demand Activation as how many potential customers you influence to buy from you.

If the “elasticity” is high, being visible at the very top of the funnel creates a disproportionate downstream impact on revenue, even if you can’t directly attribute it in analytics.

If this turns out to be correct, it’s an argument for earning AI visibility.

If Demand Activation has the impact I think it has, being visible in ChatGPT, AI Mode & Co. has stronger downstream effects than we can directly attribute. I’ve certainly seen more high-pipeline deals and purchases come from ChatGPT for some of my clients.

To illustrate the concept, let’s consider an economic example.

I’ve been searching for an excuse to write about the economic impact if Germany were to open stores on Sundays for a long time: Would people buy more if they could, or would purchases simply spread out across more days?

Studies by the EHI, IFO, and IW Köln show that people in Germany would actually buy more if stores were open on Sundays, especially non-food items. [1, 2, 3]

Stores in Germany do open a few Sundays a year.

And during those rare occasions, people shop more, especially for impulse buys.

Some research suggests that it’s mainly driven by events and tourism in higher spend areas, but looking at EU neighbors with an open-Sunday policy, like the Netherlands, we see consistently higher incremental retail spend.

To bring it back to Search, exposure early on in the user journey (as in “more open Sundays”) might have a stronger downstream impact (like more top-of-funnel visits) than we thought. Therefore, it could be critical to be broadly visible in LLMs.

Signals To Check:

1. TOFU traffic decline vs. MOFU/BOFU.

  • How to measure: In Search Console, filter queries using a TOFU regex (remove branded terms). Compare YOY clicks for TOFU vs. MOFU/BOFU. (A minimal regex sketch follows this list.)

2. Branded search volume change.

  • How to measure: Use Google Trends or a classic keyword tool (Ahrefs, Semrush) to track branded search volume over time. Correlate drops with TOFU traffic declines and conversions from organic.

3. Assisted conversions drop.

  • How to measure: In GA4 or another MTA model, compare YOY assisted conversions from organic search. A sharp drop suggests TOFU content was influencing downstream revenue.
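To make the first signal concrete, here is a minimal sketch of the TOFU regex approach against a Search Console query export; the brand terms, file name, and intent patterns are placeholders you would adapt to your own site:

```python
import re
import pandas as pd

# Hypothetical Search Console export with columns: query, clicks, year.
df = pd.read_csv("gsc_queries.csv")

BRAND_TERMS = re.compile(r"\b(acme|acmetravel)\b", re.I)        # placeholder brand terms
TOFU_PATTERN = re.compile(r"^(what|why|how|guide|ideas|best)\b", re.I)

df = df[~df["query"].str.contains(BRAND_TERMS)]                  # drop branded queries
df["stage"] = df["query"].str.match(TOFU_PATTERN).map({True: "TOFU", False: "MOFU/BOFU"})

# Year-over-year clicks by funnel stage.
print(df.pivot_table(index="stage", columns="year", values="clicks", aggfunc="sum"))
```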

Another explanation is that conversions are happening more on other platforms instead of Google Search.

While Google’s ad market share has grown over the last five years, search behavior has diversified across multiple ecosystems:

  • TikTok, YouTube, Reddit, LinkedIn, Instagram, niche forums – all of which have their own “search” layers.
  • YouTube has long been the second-largest search engine in the world.
  • Reddit is now the second-largest site in the U.S. (only Wikipedia is bigger), and Google is surfacing Reddit content more prominently, except in ecommerce.

The biggest shift, however, may be to LLMs.

ChatGPT alone sees 2.5 billion prompts per day.[4] While many prompts are additive to Google Search and exploratory in intent, it’s unlikely there’s no overlap with purchase-driven queries.

Why this is happening now:

  • Google’s increased integration of Reddit results (high-trust user content) changes click patterns.
  • New LLM model releases (ChatGPT o3, Gemini 2.5) improve quality and speed, keeping users inside AI environments longer.
  • AI-first platforms are beginning to feel less “experimental” and more like a default research tool.
Image Credit: Kevin Indig

Signals To Check:

1. Referral traffic from non-Google search platforms.

  • How to measure: In GA4, track YOY referral traffic from YouTube, Reddit, TikTok, LinkedIn. See if gains coincide with Google organic losses.

2. Share of search activity across platforms.

  • How to measure: Use Similarweb, Statcounter, or GWI to compare platform-specific search volumes and market share over time.

3. Self-reported attribution.

  • How to measure: Ask users to fill out a short survey about where they first and last saw your brand after signing up or buying.

It is also possible that the clicks that would have gone to organic search are now going to paid search. The logic is simple:

When AI Overviews or zero-click results satisfy most of the informational need, the only prominent offers left are often ads.

If users still want to explore a product or service after reading an AIO, they might be more likely to click the sponsored result than scroll further to organic links.

The timing matches AIO rollout phases. If we see Google reporting strong Search revenue growth while organic traffic declines, it is a sign the demand has not disappeared – it is just being monetized differently.

Image Credit: Kevin Indig

Alphabet’s Q1 2025 10‑Q [5] reveals that paid clicks from Google Search either grew or hit 0% growth in the last 10 quarters, but never declined.

Impressions (from Google Network), on the other hand, saw the opposite trend.

Whenever paid impressions drop, paid clicks tend to rise: lower ad inventory means advertisers have to pay more for the traffic that remains.

Q2 2025 earnings [6] highlighted that Search ad revenue grew 12% year‑over‑year. Industry benchmark data reveals that the average Google CPC in 2025 lands at $5.26 – up approximately 13% year-over-year.[7]

So, less ad inventory leads to higher CPCs and more paid clicks.

Since we don’t know how many AI Overviews Google shows ads for, we can’t say with certainty that more clicks are going to ads as a direct result, but the data does show that more clicks are going to ads.

Signals To Check:

1. Organic vs. paid search traffic share.

  • How to measure: In GA4, compare YOY sessions from organic search vs. paid search, and look for paid’s share increasing as organic drops (see the sketch after this list).

2. Paid search impression and click growth.

  • How to measure: Pull impressions and clicks from your Google Ads account (or industry benchmarks) over the last 12 months and compare to pre-AIO periods.

3. CPC and CPA trends.

  • How to measure: In Google Ads or industry benchmarks, track YOY CPC and CPA changes in your vertical. Rising CPC with organic decline suggests a mix shift.
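
Here is a minimal sketch of the mix-shift check, assuming a monthly GA4 export (channels.csv) with month, channel, and sessions columns; the file name, column names, and channel labels are assumptions.

```python
# A minimal sketch, assuming a monthly GA4 export (channels.csv) with columns:
# month (YYYY-MM), channel ("Organic Search", "Paid Search", ...), sessions.
# File name, column names, and channel labels are hypothetical.
import pandas as pd

df = pd.read_csv("channels.csv")
search = df[df["channel"].isin(["Organic Search", "Paid Search"])]

pivot = (search.pivot_table(index="month", columns="channel",
                            values="sessions", aggfunc="sum")
               .sort_index())
pivot["paid_share"] = pivot["Paid Search"] / (
    pivot["Paid Search"] + pivot["Organic Search"]
)

# A rising paid share while total search sessions are flat or falling
# points to a mix shift from organic to paid.
print(pivot[["paid_share"]].round(3))
print(pivot["paid_share"].diff().mean())  # average month-over-month change in paid share
```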

One (popular) possibility is that the influence of organic search has not changed much, but the way we measure it has.

Essentially, classic attribution methods are broken.

In the classic model, the path was:

Search → Click → Landing Page → Conversion

Now, the path may look more like:

Search (or AIO) → Brand Recall → Direct Visit → Conversion

AI Overviews answer the user’s question before the click, so when they are ready to buy, they bypass the search click entirely and go straight to the homepage or an app.

In analytics, that conversion shows up as direct traffic, not organic search.

Attribution leakage has always been a challenge for SEO, but AI-driven summaries and brand mentions make it worse.

Even as a demand capture channel, SEO often takes far more time between first and last touch to convert users than the default 90-day lookback window covers.

Often, the last touch goes to a paid channel, because advertising is what tips people over the edge.

Users also commonly switch devices during a purchase cycle, which makes attribution even harder. Lastly, most attribution tools are geared toward advertising.

If you only track last-click conversions, you may underestimate the true contribution of search visibility.

Signals To Check:

1. Direct conversions are up while organic conversions are down.

  • How to measure: In GA4, compare YOY direct channel conversions vs. organic, and look for inverse movement (see the sketch after this list).

2. Branded search stable or rising.

  • How to measure: Use Google Trends or a keyword tool to track branded search queries. Stability with organic session decline suggests clicks are being skipped.

3. Multi-touch attribution still shows search influence.

  • How to measure: In GA4 (data-driven model) or a dedicated attribution tool, check if search remains a common first or assist touchpoint even when last-click conversions fall.
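
Here is a minimal sketch of the inverse-movement check, assuming a monthly export (conversions.csv) with month, channel, and conversions columns; the layout is an assumption, so adapt it to your own GA4 export.

```python
# A minimal sketch, assuming a monthly export (conversions.csv) with columns:
# month (YYYY-MM), channel ("Direct", "Organic Search", ...), conversions.
# The file layout is hypothetical - adapt it to your own GA4 export.
import pandas as pd

df = pd.read_csv("conversions.csv")
pivot = (df[df["channel"].isin(["Direct", "Organic Search"])]
         .pivot_table(index="month", columns="channel",
                      values="conversions", aggfunc="sum")
         .sort_index())

yoy = pivot.pct_change(12)  # YOY change, assuming consecutive monthly rows
valid = yoy.dropna()

# Inverse movement (direct up while organic is down) hints that users
# remember the brand from an AI answer and return without a search click.
inverse = (valid["Direct"] > 0) & (valid["Organic Search"] < 0)
print(valid.round(3))
print(f"Months with inverse movement: {inverse.sum()} of {len(valid)}")
```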

Are SEO conversion rates down because people simply have less money?

There is credible evidence that macro conditions in the U.S. are weighing on conversion rates:

1. Price sensitivity and promotion dependence

Adobe reports that shoppers were unusually price elastic during the holiday season of 2024.

A 1% drop in price produced a roughly 1.03% rise in demand, indicating elevated sensitivity to discounts. That effect implies conversions were heavily promotion-led.[8]

Adobe’s Digital Price Index shows online prices have fallen for 33 straight months through May 2025, suggesting merchants are discounting to stimulate demand.

Sustained discounting typically lifts conversions only when price cuts are material, and it compresses margins.[9]
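
To make the elasticity figure concrete, here is a rough back-of-the-envelope sketch; the 20% discount and the baseline order and price figures are invented for illustration, not Adobe data. With elasticity near -1, a discount lifts orders roughly in proportion to the price cut, so conversions can rise while revenue stays flat or falls.

```python
# A minimal sketch of what a ~-1.03 price elasticity implies for a planned
# discount. The 20% discount and baseline figures are made-up illustrations.
elasticity = -1.03      # ~1.03% more demand per 1% price cut (Adobe, holiday 2024)
discount = 0.20         # hypothetical 20% price cut
baseline_orders = 1_000
baseline_price = 50.0

demand_lift = -elasticity * discount                  # +20.6% more orders
orders = baseline_orders * (1 + demand_lift)
revenue = orders * baseline_price * (1 - discount)

print(f"Orders: {baseline_orders} -> {orders:.0f} (+{demand_lift:.1%})")
print(f"Revenue: {baseline_orders * baseline_price:,.0f} -> {revenue:,.0f}")
```

In this invented scenario, orders rise about 21% while revenue falls, which is exactly the promotion-led conversion pattern described above.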

2. Consumer caution and mix shift

Salesforce’s Shopping Index commentary notes U.S. shoppers “buying less,” prioritizing essentials, and trading down in 2025.

It also cites 0% U.S. ecommerce sales growth in Q1 2025, consistent with a softer propensity to purchase.[10]

Consumer confidence has improved slightly but remains soft relative to 2024, which tends to dampen conversion rates.[11]

3. Household finance constraints

The New York Fed reports total household debt at a record $18.39 trillion in Q2 2025, with delinquency rates up from earlier periods and credit card balances at $1.21 trillion.

Higher borrowing costs and rising delinquencies constrain checkout conversion, especially for lower-income cohorts.[12]

4. Observed conversion pressure in digital benchmarks

Contentsquare’s 2025 Digital Experience Benchmark finds online conversion rates fell 6.1% year over year, attributing much of the drop to experience friction.

In context with the macro signals above, this supports a broader environment where it is harder to turn visits into orders without heavier incentives.[13]

But…

Overall, U.S. ecommerce dollars are still growing in many periods, including +5.6% year-over-year in Q1 2025 and strong holiday spend, so demand has not collapsed.

Growth is being “bought” through price cuts and promotions, which can mask weaker underlying conversion propensity.[14, 15]

Also, you could argue that these economic conditions have been in place for a few years.

Why would they impact SEO conversions so much now?

Signals To Check:

1. Organic conversion rate trend vs. other channels.

  • Track monthly SEO conversion rates alongside paid search, direct, and email.
  • If all channels decline in parallel, macroeconomic pressure is a likely driver.
  • If organic drops disproportionately, AI Overviews are likely adding to the decline.

2. Correlation with economic indicators.

  • Compare organic CR trends to macro metrics like CPI (as a proxy for inflation), the Consumer Confidence Index, and online price trends (Adobe DPI); see the sketch after this list.
  • Look for statistically significant correlations, like CR rising when CPI falls or confidence increases.
  • If patterns are similar across Paid Search and Direct, macroeconomic factors are likely influencing purchase readiness.

3. Promotion elasticity

  • Measure CR lift during promotions vs. baseline for organic, paid, and direct traffic.
  • A bigger lift than in prior years – especially if mirrored across channels – indicates conversions are increasingly discount-driven, a sign of macro pressure.
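
Here is a minimal sketch of the correlation check, assuming you have assembled a monthly CSV (cr_macro.csv) with organic and paid conversion rates plus macro metrics; the file and all column names are hypothetical and would need to be populated from GA4 and public data sources.

```python
# A minimal sketch, assuming a hand-built monthly CSV (cr_macro.csv) with columns:
# month, organic_cr, paid_cr, cpi_yoy, consumer_confidence. All names are
# hypothetical - populate them from GA4 and public macro data sources.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("cr_macro.csv").dropna()

for channel in ["organic_cr", "paid_cr"]:
    for macro in ["cpi_yoy", "consumer_confidence"]:
        r, p = pearsonr(df[channel], df[macro])
        print(f"{channel} vs. {macro}: r={r:.2f}, p={p:.3f}")

# If organic and paid conversion rates correlate with the same macro metrics
# to a similar degree, the economy is the more likely driver; if only organic
# moves, something search-specific (like AIOs) is the better suspect.
```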

If you’re experiencing a decline in SEO conversions in 2025, it’s likely not due to one specific reason.

In fact, it’s likely that all five options are playing into SEO conversion drops across the web.

To what degree each option has an impact varies from site to site and industry to industry.

That’s why it’s so important to run the analysis I recommend in each section above for your own data.

AI Mode will intensify the downward trend of SEO conversions.

I don’t think SEO will decline to zero because a small fraction of people will still click, even in AI Mode.

And Google won’t show AI Mode everywhere, because adoption is generational (see the UX study of AIOs for more info).

I think AI Mode will launch at a broader scale (like showing up for more queries overall) when Google figures out monetization.

Plus, ChatGPT is not yet monetizing, so advertisers go to Google and Meta – for now. And that’s my hypothesis as to why Google Search is continuing to grow.

At least for the time being.

It’ll be interesting to see what happens in the coming months.


Featured Image: Paulo Bobita/Search Engine Journal