Why Is Organic Traffic Down? Here’s How To Segment The Data via @sejournal, @torylynne

For an SEO, few things stoke panic like seeing a considerable decline in organic traffic. People are going to expect answers, if they aren’t asking for them already.

Getting to those answers isn’t always straightforward or simple, because SEO is neither of those things.

The success of an SEO investigation hinges on the ability to dig into the data, identify where exactly the performance decline is happening, and connect the dots to why it’s happening.

It’s a little bit like an actual investigation: Before you can catch the culprit or understand the motive, you have to gather evidence. In an SEO investigation, that’s a matter of segmenting data.

In this article, I’ll share some different ways to slice and dice performance data for valuable evidence that can help further your investigation.

Using Data To Confirm There’s An SEO Issue

Just because organic traffic is down doesn’t inherently mean that it’s an SEO problem.

So, before we dissect data to narrow down problem areas, the first thing we need to do is determine whether there’s actually an SEO issue at play.

After all, it could be something else altogether, in which case we’d be wasting resources chasing a problem that doesn’t exist.

Is This A Tracking Issue?

In many cases, what looks like a big traffic drop is just an issue with tracking on the site.

To determine whether tracking is functioning correctly, there are a couple of things we need to look for in the data.

The first is consistent drops across channels.

Zoom out of organic search and see what’s happening in other sources and channels.

If you’re seeing meaningful drops across email, paid, etc., that are consistent with organic search, then it’s more than likely that tracking isn’t working correctly.

The other thing we’re looking for here is inconsistencies between internal data and Google Search Console.

Of course, there’s always a bit of inconsistency between first-party data and GSC-reported organic traffic. But if those differences are significantly more pronounced for the time period in question, that hints at a tracking problem.

Is This A Brand Issue?

Organic search traffic from Google falls into two primary camps:

  • Brand traffic: Traffic driven by user queries that include the brand name.
  • Non-brand traffic: Traffic driven by brand-agnostic user queries.

Non-brand traffic is directly affected by SEO work, whereas brand traffic is mostly shaped by the work that happens in other channels.

When a user includes the brand in their search, they’re already brand-aware. They’re a return user or they’ve encountered the brand through marketing efforts in channels like PR, paid social, etc.

When marketing efforts in other channels are scaled back, the brand reaches fewer users. Since fewer people see the brand, fewer people search for it.

Or, if customers sour on the brand, there are fewer people using search to come back to the site.

Either way, it’s not an SEO problem. But in order to confirm that, we need to filter the data down.

Go to Performance in Google Search Console and exclude any queries that include your brand. Then compare the data against a previous period – usually YoY if you need to account for seasonality. Repeat the comparison with the filter flipped, so you’re only looking at queries that do include the brand name.

If non-brand traffic has stayed consistent, while brand traffic has dropped, then this is a brand issue.

filtering queries using regex in Google Search Console
Screenshot from Google Search Console, November 2025

Tip: Account for users misspelling your brand name by filtering queries using fragments. For example, at Gray Dot Co, we get a lot of brand searches for things like “Gray Company” and “Grey Dot Company.” By using the simple regex “gray|grey” I can catch brand search activity that would otherwise fall through the cracks.
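If you’d rather run the brand/non-brand comparison outside the GSC interface, here is a minimal Python sketch of the same split. It assumes two query-level CSV exports (one per period) with hypothetical "query" and "clicks" columns; adjust the file names, column names, and regex to your own exports and brand.

```python
import re

import pandas as pd

# Hypothetical GSC query exports for the two periods being compared.
current = pd.read_csv("gsc_queries_current.csv")
previous = pd.read_csv("gsc_queries_previous.csv")

# Catch misspellings and partial matches of the brand (per the tip above).
brand_pattern = re.compile(r"gray|grey", re.IGNORECASE)

def split_brand(df: pd.DataFrame) -> pd.Series:
    """Total clicks for brand vs. non-brand queries."""
    is_brand = df["query"].str.contains(brand_pattern, na=False)
    return pd.Series({
        "brand_clicks": df.loc[is_brand, "clicks"].sum(),
        "non_brand_clicks": df.loc[~is_brand, "clicks"].sum(),
    })

summary = pd.DataFrame({
    "current": split_brand(current),
    "previous": split_brand(previous),
})
summary["yoy_change_%"] = (summary["current"] / summary["previous"] - 1) * 100
print(summary.round(1))
```

If brand clicks have fallen while non-brand clicks are flat, the output makes the brand issue obvious at a glance.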

Is It Seasonal Demand?

The most obvious example of seasonal demand is holiday shopping on ecommerce sites.

Think about something like jewelry. Most people don’t buy jewelry every day; they buy it for special occasions. We can confirm that seasonality by looking at Google Trends.

Zooming out to the past five years of interest in “jewelry,” it clearly peaks in November and December.

Google Trends graph for interest in jewelry over the past five years
Screenshot from Google Trends, November 2025

As a site that sells jewelry, of course, traffic in Q1 is going to be down from Q4.

I use a pretty extreme example here to make my point, but in reality, seasonality is widespread and often more subtle. It impacts businesses where you might not expect much seasonality at all.

The best way to understand its impact is to look at organic search data year-over-year. Do the peaks and valleys follow the same patterns?

If so, then we need to compare data YoY to get a true sense of whether there’s a potential SEO problem.

Is It Industry Demand?

SEOs need to keep tabs on not just what’s happening internally, but also what’s going on externally. A big piece of that is checking the pulse of organic demand for the topics and products that are central to the brand.

Products fall out of vogue, technologies become obsolete, and consumer behavior changes – that’s just the reality of business. When there are fewer potential customers in the landscape, there are fewer clicks to win, and fewer sessions to drive.

Take cameras, for instance. As the cameras on our phones got more sophisticated, digital cameras became less popular. And as they became less popular, searches for cameras dwindled.

Now, they’re making a comeback with younger generations. More people searching, more traffic to win.

npr article headline why gen z loves the digital compact cameras that millennials used to covet
Screenshot from npr.com, November 2025

You can see all of this at play in the search landscape by turning to Google Trends: the downtrend in interest caused by advances in technology, and the uptrend boosted by shifts in societal trends.

Google Trends graph showing search interest in cameras since 2004
Screenshot from Google Trends, November 2025

When there are drops in industry, product, or topic demand within the landscape, we need to ask ourselves whether the brand’s organic traffic loss is proportional to the overall loss in demand.

Is Paid Search Cannibalizing Organic Search?

Even if a URL on the site ranks well in organic results, ads still sit higher on the SERP. So, if a site is running an ad for the same query it already ranks for, the ad is naturally going to capture a share of the clicks.

When businesses give their PPC budgets a boost, there’s potential for this to happen across multiple key SERPs.

Let’s say a site drives a significant chunk of its organic traffic from four or five product landing pages. If the brand introduces ads to those SERPs, clicks that used to go to the organic result start going to the ad.

That can have a significant impact on organic traffic numbers. But search users are still getting to the same URLs using the same queries.

To confirm, pull sessions by landing pages from both sources. Then, compare the data from before the paid search changes to the period following the change.

If major landing pages consistently show a positive delta in paid search that cancels out the negative delta in organic search, you’re not losing organic traffic; you’re lending it.

YoY comparison of sessions by landing page for paid search and organic search in GA4
Screenshot from Google Analytics, November 2025
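To make that before/after comparison concrete, here is a small pandas sketch. The file and column names are assumptions (an export of sessions by landing page per channel, with one column per period); swap in whatever your own GA4 exports use.

```python
import pandas as pd

# Hypothetical exports: sessions by landing page for each channel,
# with one column for the period before the paid change and one after.
organic = pd.read_csv("organic_landing_pages.csv")  # landing_page, sessions_before, sessions_after
paid = pd.read_csv("paid_landing_pages.csv")        # landing_page, sessions_before, sessions_after

organic["organic_delta"] = organic["sessions_after"] - organic["sessions_before"]
paid["paid_delta"] = paid["sessions_after"] - paid["sessions_before"]

merged = organic[["landing_page", "organic_delta"]].merge(
    paid[["landing_page", "paid_delta"]], on="landing_page", how="outer"
).fillna(0)

# Pages where the paid gain roughly offsets the organic loss point to
# cannibalization rather than a true loss of search traffic.
merged["net_delta"] = merged["organic_delta"] + merged["paid_delta"]
print(merged.sort_values("organic_delta").head(20))
```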

Segmenting Data To Find SEO Issues

Once we have confirmation that the organic traffic declines point to an SEO issue, we can start zooming in.

Segmenting data in different ways helps pinpoint problem areas and find patterns. Only then can we trace those issues to the cause and craft a strategy for recovery.

URL

Most SEOs are going to filter their organic traffic down by URL. It lets us see which pages are struggling and analyze those pages for potential improvements.

It also helps find patterns across pages that make it easier to isolate the cause of more widespread issues. For example, if the site is losing traffic across its product listing pages, it could signal that there’s a problem with the template for that page.

But segmenting by URL also helps us answer a very important question when we pair it with conversion data.

Do We Really Care About This Traffic?

Clicks only matter if they drive business-valuable user interactions like conversions or ad views. For some sites, like online publications, traffic is valuable in and of itself because users coming to the site are going to see ads. The site still makes money.

But for brands looking to drive conversions, it could just be empty traffic if it’s not helping drive that primary key performance indicator (KPI).

A top-of-funnel blog post might drive a lot of traffic because it ranks for very high-volume keywords. If that same blog post is a top traffic-driving organic landing page, a slip in rankings means a considerable organic traffic drop.

But the users entering those high-volume keywords might not be very qualified potential customers.

Looking at conversions by landing page can help brands understand whether the traffic loss is ultimately hurting the bottom line.

The best way to understand that is to turn to attribution.

First-touch attribution quantifies an organic landing page’s value in terms of the conversions it helps drive down the line. For most businesses, someone isn’t likely to convert the first time they visit the site. They usually come back and purchase.

Last-touch attribution, on the other hand, shows the organic landing pages that people come to when they’re ready to make a purchase. Both are valuable!

Query

Filtering performance by query can help understand which terms or topic areas to focus improvements on. That’s not new news.

Sometimes, it’s as easy as doing a period-over-period comparison in GSC, ordering by clicks lost, and looking for obvious patterns, i.e., are the queries with the most decline just subtle variants of one another?

If there aren’t obvious patterns and the queries in decline are more widespread, that’s where topic clustering can come into the mix.

Topic Clustering With AI

Using AI for topic clustering helps quickly identify any potential relationships between queries that are seeing performance dips.

Go to GSC and filter performance by query, looking for any YoY declines in clicks and average position.

YoY comparison in Google Search Console for clicks and average position by query
Screenshot from Google Search Console, November 2025

Then export this list of queries and use your favorite ML script to group the keywords into topic clusters.
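Any clustering approach will do. As one illustration, here is a hedged Python sketch that embeds the exported queries with a sentence-transformer model and groups them with k-means; the file name, column name, model, and cluster count are placeholders to adjust for your own data.

```python
import pandas as pd
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.cluster import KMeans

# Hypothetical export of declining queries from GSC.
queries = pd.read_csv("declining_queries.csv")["query"].dropna().unique().tolist()

# Embed each query, then group semantically similar queries together.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(queries)

n_clusters = 10  # pick a number that fits the size of your query list
labels = KMeans(n_clusters=n_clusters, random_state=42, n_init=10).fit_predict(embeddings)

clusters = pd.DataFrame({"query": queries, "cluster": labels})
for cluster_id, group in clusters.groupby("cluster"):
    print(f"Cluster {cluster_id}: {group['query'].tolist()[:10]}")
```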

The resulting list of semantic groupings can provide an idea of topics where a site’s authority is slipping in search.

In turn, it helps narrow the area of focus for content improvements and other optimizations to potentially build authority for the topics or products in question.

Identifying User Intent

When users search using specific terms, the type of content they’re looking for – and their objective – differs based on the query. These user expectations can be broken out into four different high-level categories:

  • Informational (top of funnel): Users are looking for answers to questions, explanations, or general knowledge about topics, products, concepts, or events.
  • Commercial (middle of funnel): Users are interested in comparing products, reading reviews, and gathering information before making a purchase decision.
  • Transactional (bottom of funnel): Users are looking to perform a specific action, such as making a purchase, signing up for a service, or downloading a file.
  • Navigational: Brand-familiar users are using the search engine as a shortcut to find a specific website or webpage.

By segmenting queries by user intent, we can identify the user objectives where the site, or specific pages on it, are falling short. It gives us a lens into the performance decline, making it easier to identify possible causes from the perspective of user experience.

If the majority of queries losing clicks and positions are informational, it could signal shortcomings in the site’s blog content. If the queries are consistently commercial, it might call for an investigation into how the site approaches product detail and/or listing pages.

GSC doesn’t provide user intent in its reporting, so this is where a third-party SEO tool can come into play. If you have position tracking set up and GSC connected, you can use the tool’s rankings report to identify queries in decline and their user intent.

If not, you can still get the data you need by using a mix of GSC and a tool like Ahrefs.

Device

This view of performance data is pretty simple, but it’s equally easy to overlook!

When the large majority of performance declines are attributed to only one device type, device data helps identify potential tech or UX issues within the mobile or desktop experience.

The important thing to remember is that any declines need to be considered proportionally. Take the metrics for the site below…

YoY comparison in Google Search Console of clicks by device type
Screenshot from Google Search Console, November 2025

At first glance, the data makes it look like there might be an issue with the desktop experience. But we need to look at things in terms of percentages.

Desktop: (1 – 648/1545) x 100 ≈ 58% decline

Mobile: (1 – 149/316) x 100 ≈ 53% decline

While desktop shows a much larger decline in terms of click count, the percentage of decline YoY is fairly similar across both desktop and mobile. So we’re probably not looking for anything device-specific in this scenario.
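If you want to sanity-check that math quickly for any pair of numbers, a tiny Python helper does it:

```python
def pct_decline(previous: float, current: float) -> float:
    """Year-over-year decline, expressed as a percentage."""
    return (1 - current / previous) * 100

# Click counts from the screenshot above.
print(f"Desktop: {pct_decline(1545, 648):.0f}% decline")  # ~58%
print(f"Mobile:  {pct_decline(316, 149):.0f}% decline")   # ~53%
```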

Search Appearance

Rich results and SERP features are an opportunity to stand out on the SERP and drive more traffic through enhanced results. Using the search appearance filter in Google Search Console, you can see traffic from different types of rich results and SERP features:

  • Forums.
  • AMP Top Story (AMP page + Article markup).
  • Education Q&A.
  • FAQ.
  • Job Listing.
  • Job Details.
  • Merchant Listing.
  • Product Snippet.
  • Q&A.
  • Review Snippet.
  • Recipe Gallery.
  • Video.

This is the full list of possible features with rich results (courtesy of SchemaApp), though you’ll only see filters for search appearances where the domain is currently positioned.

In most cases, Google is able to generate these types of results because there is structured data on pages. The notable exceptions are Q&A, translated results, and video.

So when there are significant traffic drops coming from a specific type of search appearance, it signals that there’s potentially a problem with the structured data that enables that search feature.

YoY comparison in Google Search Console for search appearance
Screenshot from Google Search Console, November 2025

You can investigate structured data issues in the Enhancements reports in GSC. The exception is product snippets, which nest under the Shopping menu. Either way, the reports only show up in your left-hand nav if Google is aware of relevant data on the site.

For example, the product snippets report shows why some snippets are invalid, as well as ways to potentially improve valid results.

Product snippets report in Google Search Console
Screenshot from Google Search Console, November 2025

This context is valuable as you begin to investigate the technical causes of traffic drops from specific search features. In this case, it’s clear that Google is able to crawl and utilize product schema on most pages – but there are some opportunities to improve that schema with additional data.
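For reference, product snippets are powered by schema.org Product markup, usually embedded as JSON-LD. The sketch below builds a minimal, placeholder example in Python; every value is invented, and properties like aggregateRating are the kind of optional fields a report like the one above may flag as opportunities.

```python
import json

# Minimal Product structured data (schema.org vocabulary) with placeholder values.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",
    "image": "https://www.example.com/images/example-product.jpg",
    "description": "Short description of the product.",
    "sku": "EX-001",
    "offers": {
        "@type": "Offer",
        "url": "https://www.example.com/products/example-product",
        "price": "49.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    # Optional enrichment that product snippet reports often suggest adding.
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "132",
    },
}

# Embed the output in a <script type="application/ld+json"> tag on the product page.
print(json.dumps(product_schema, indent=2))
```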

Featured Snippets

When featured snippets originally came on the scene, they were a major change to the SERP structure that resulted in a serious hit to traditional organic results.

Today, AI Overviews are doing the same. In fact, research from Seer shows that CTR has dropped 61% for queries that now include an AI overview (21% of searches). And that impact is outsized for informational queries.

In cases where rankings have remained relatively static, but traffic is dropping, there’s good reason to investigate whether this type of SERP change is a driver of loss.

While Google Search Console doesn’t report on featured snippets, People Also Ask (PAA) questions, or AI Overviews, third-party tools do.

In the third-party tool Semrush, you can use the Domain Overview report to check for featured snippet availability across keywords where the site ranks.

filtering to keyword with available AI overviews in the Semrush Domain Overview report
Screenshot from Semrush, November 2025

Do the keywords where you’re losing traffic have AI overviews? If you’re not cited, it’s time to start thinking about how you’re going to win that placement.

Search Type

Search type is another way to filter GSC data when you’re seeing traffic declines despite healthy, consistent rankings.

After all, web search is just one prong of Google Search. Think about it: How often do you use Google Image search? At least in my case, that’s fairly often.

Filter performance data by each search type (web, image, video, and news) to understand which one(s) are having the biggest impact on the performance decline. Then use that insight to start connecting the dots to the cause.

filtering to Google image search performance in Google Search Console
Screenshot from Google Search Console, November 2025

Images are a great example. One simple line in the robots.txt can block Google from crawling a subfolder that hosts multitudes of images. As those images disappear from image search results, any clicks from those results disappear in tandem.
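You can reproduce that scenario locally with Python’s built-in robots.txt parser to see how a single Disallow line takes an entire image folder out of play. The robots.txt contents and URL below are hypothetical.

```python
from urllib import robotparser

# A hypothetical robots.txt where one line blocks an image subfolder.
robots_txt = """
User-agent: *
Disallow: /assets/images/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot-Image falls under the wildcard group here, so these images
# can no longer be crawled for Google Images.
url = "https://www.example.com/assets/images/product-123.jpg"
print(rp.can_fetch("Googlebot-Image", url))  # False
```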

We don’t know to look for this issue until we slice the data accordingly!

Geography

If the business operates physically in specific cities and states, then it likely already has geo-specific performance tracking set up through a tool.

But online-only businesses shouldn’t dismiss geographic data – even at the city/state level! Declines are still a trigger to check geo-specific performance data.

Country

Just because the brand only sells and operates in one country doesn’t mean that’s where all the domain’s traffic is coming from. Drilling down by country in GSC allows you to see whether declines are coming from the country the brand is focused on or, potentially, another country altogether.

performance by country in Google Search Console
Screenshot from Google Search Console, November 2025

If it’s another country, it’s time to decide whether that matters. If the site is a publisher, it probably cares more about that traffic than an ecommerce brand that’s more focused on purchases in its country of operation.

Localization

When tools report positions at the country level, ranking shifts in specific markets fly under the radar. It certainly happens, and major markets can have a major traffic impact!

Tools like BrightLocal, Whitespark, and Semrush let you analyze SERP rankings one level deeper than GSC, providing data down to the city.

You can check for ranking discrepancies across cities by spot-checking a small sample of the keywords with the greatest declines in clicks.

If I’m an SEO at the University of Phoenix, which is an online university, I’m probably pretty excited about ranking #1 in the United States for “online business degree.”

top five serp results for online business degree in the United States
Screenshot from Semrush, November 2025

But if I drill down further, I might be a little distraught to find that the domain isn’t in the top five SERP results for users in Denver, CO…

top five serp results for online business degree in Denver, Colorado
Screenshot from Semrush, November 2025

…or Raleigh, North Carolina.

top five serp results for online business degree in Raleigh, North Carolina
Screenshot from Semrush, November 2025

Catch Issues Faster By Leveraging AI For Data Analysis

Data segmentation is an important piece of any traffic drop investigation, because humans can see patterns in data that bots don’t.

However, the opposite is true too. With anomaly detection tooling, you get the best of both worlds.

When combined with monitoring and alert notifications, anomaly detection makes it possible to find and fix issues faster. Plus, it enables you to find data patterns in any after-the-impact investigations.

All of this helps ensure that your analysis is comprehensive, and might even point out gaps for further investigation.

This Colab tool from Sam Torres can help get your site set up!
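If you just want a feel for the underlying idea (this is a toy sketch, not the Colab tool above), a rolling z-score over daily organic clicks is enough to flag days that fall well outside the recent baseline. The file and column names are assumptions.

```python
import pandas as pd

# Hypothetical daily clicks export (e.g., from the GSC API or a scheduled export).
df = pd.read_csv("daily_organic_clicks.csv", parse_dates=["date"]).sort_values("date")

# Rolling z-score: compare each day against the trailing 28-day baseline.
window = 28
baseline = df["clicks"].rolling(window)
df["z_score"] = (df["clicks"] - baseline.mean().shift(1)) / baseline.std().shift(1)

# Flag days that deviate sharply from the baseline in either direction.
anomalies = df[df["z_score"].abs() > 3]
print(anomalies[["date", "clicks", "z_score"]])
```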

Congrats, You’re Close To Closing This Case

As Sherlock Holmes would say about an investigation, “It is a capital mistake to theorize before one has data.” With the right data in hand, the culprits start to reveal themselves.

Data segmentation empowers SEOs to uncover leads that point to possible causes. By narrowing it down based on the evidence, we ensure more accuracy, less work, faster answers, and quicker recovery.

And while leadership might not love a traffic drop, they’re sure to love that.

Featured Image: Vanz Studio/Shutterstock

17 Data Reports That Every SEO Should Be Tracking in 2026 via @sejournal, @MattGSouthern

SEO and marketing are driven by the choices that you make, and those choices should be guided by clear, trustworthy data.

Having a range of sources that you track on a regular basis helps you to stay informed and to speak with authority in meetings with C-suite and clients.

Ground your strategies in real-world insights and answer questions such as: Should paid search budgets go up or down? Which international markets are worth expanding into? How is traffic shifting toward social platforms or retail media networks?

The following is a list of some well-known and some lesser-known reports to familiarize yourself with, so you always have qualified, data-backed answers behind your choices.

Financial & Markets Data

These high-level reports provide the map of the digital economy. They show where advertising dollars are flowing, where they are pooling, and where they might flow next.

They give you the “big picture” context for your own budget decisions, allowing you to speak the language of finance and justify your strategy with market-wide data.

IAB/PwC Internet Advertising Revenue Report

Cadence: Annual
Typical release: April
Access: Free, no registration required
Link

Why It Matters:

This report answers the question: Is the digital ad market still growing? For over 25 years, the IAB has been the definitive source for U.S. internet advertising revenue. Its historical data charted the shifts from dial-up to broadband, and then from desktop to mobile. Today, it’s charting the next great reallocation of capital. When your CFO wants authoritative numbers on the industry’s health, this is the gold standard.

The report surveys companies representing over 86% of U.S. internet ad revenue, meaning its figures are based on actual, verified spending. The format breakdowns show how much capital is flowing from established channels like traditional search into high-growth areas like social video, connected TV, and retail media.

Methodology & Limitations:

U.S.-only data; reflects reported revenue from participating companies, which may be delayed by one quarter compared to actual spending; excludes international markets and smaller ad networks below the survey threshold.

MAGNA Global Ad Forecast

Cadence: Biannual
Typical release: June & December
Access: Free summary; full datasets for IPG Mediabrands clients
Link

Why It Matters:

If the IAB report is a photograph of last year, MAGNA’s forecast is a detailed blueprint of the next 18 months. It helps you anticipate whether paid search costs (CPCs) are likely to spike based on an influx of advertiser demand. Their analysis is global, allowing you to see which regions are heating up and which are cooling down.

Their retail media breakouts are useful for making the case to invest in product feed optimization and marketplace SEO. For example, if MAGNA forecasts a 20% surge in retail media while projecting only 5% growth in search, it’s a signal that commercial intent is migrating.

The twice-yearly cadence is its secret weapon. The December update gives you fresh data for annual planning, while the June update allows for mid-year course corrections, making your strategy more agile.

Methodology & Limitations:

The forecast model relies on historical patterns, economic indicators, and advertiser surveys. It is subject to revision due to macroeconomic changes. Coverage varies globally, with the most robust data in North America and Europe, while forecasts for the China market carry higher uncertainty.

Global Entertainment & Media Outlook (PwC)

Cadence: Annual
Typical release: July
Access: Paid subscription; free overview and highlights
Link

Why It Matters:

This is your five-year planning guide, the ultimate tool for long-term strategic thinking.

While other reports focus on the next year, PwC projects revenue and user growth across 53 countries and 15+ media segments five years ahead. This macro view can help build a business case for large, multi-year investments.

Are you considering a major push into podcasting or developing a streaming video channel? This report’s audio and video forecasts can help you size the market and project a realistic timeline for ROI. Its search advertising forecasts by region can help you de-risk international expansion by prioritizing countries with high-growth projections.

The methodology takes into account regulatory changes, technology adoption curves, and demographic shifts. It can help you build strategies that are resilient to short-term fluctuations because they’re aligned with long-term trends.

Methodology & Limitations:

Full access is paid and limits how broadly information can be shared. Keep in mind, forecasts about the next five years are naturally uncertain and get updated every year. Also, these projections assume that regulatory environments stay stable, but changes can always happen.

Company Earnings Reports

While market reports offer an overview of the economy, the quarterly earnings from key companies reveal the reality of the platforms that are integral to the industry.

Financial data exposes the strategic priorities and weaknesses of search platforms, which can offer insights into where they might make significant changes.

Alphabet Quarterly Earnings

Cadence: Quarterly (fiscal year ends December 31)
Typical release: Q1 (late Apr), Q2 (late Jul), Q3 (late Oct), Q4 (late Jan/early Feb)
Access: Free
Link

Why It Matters:

This is the single most important quarterly report for anyone in search.

The key metric is revenue for the “Google Search & other” segment; its growth rate tells you if the core business is healthy, plateauing, or declining. Compare this to the growth rate of “YouTube ads” to see where user attention and ad dollars are shifting.

A secondary indicator to watch is “Google Cloud” revenue. As it grows, expect more integrations between Google’s enterprise tools and its core search products.

Pay close attention to Traffic Acquisition Costs (TAC), which includes the billions Google pays partners like Apple and Samsung to be the default search engine. If TAC is growing faster than Search revenue, it’s a major red flag that Google is paying more for traffic that is becoming less profitable.

In the current environment, the most critical part of the report is the management commentary and the analyst Q&A. Look for specific language about AI Overviews’ impact on query volume, user satisfaction, and any hint of revenue cannibalization.

Methodology & Limitations:

The “Google Search & other” segment bundles search with Maps, Gmail, and other properties, which prevents isolated analysis of search revenue. AI Overviews metrics are disclosed selectively and not on a comprehensive quarterly basis. Geographic revenue breakdowns are limited to broad regions.

Microsoft Quarterly Earnings

Cadence: Quarterly (fiscal year ends June 30)
Typical release: Q1 (Oct), Q2 (Jan), Q3 (Apr), Q4 (Jul)
Access: Free
Link

Why It Matters:

Microsoft’s report provides a direct scorecard for Bing’s performance via its “search and news advertising revenue” figures. This tells you whether the search engine is gaining or losing ground. Their integration of OpenAI’s models into Bing has made this a number to watch.

However, the bigger story often lies in their Intelligent Cloud and Productivity segments. Pay attention to commentary on the growth of Microsoft 365 Copilot and enterprise search features within Teams and SharePoint. This reveals how millions of professionals are finding information and getting answers without opening a traditional web browser.

Methodology & Limitations:

Search revenue is only reported as a percentage growth, not in actual dollar amounts, which makes market share calculations more complex. Details about enterprise search usage metrics are rarely shared openly. Geographic breakdowns are also limited. To estimate Bing’s market share, we need to infer from revenue growth compared to traffic data.

Amazon Quarterly Results

Cadence: Quarterly
Typical release: Q1 (late Apr), Q2 (late Jul), Q3 (late Oct), Q4 (late Jan/early Feb)
Access: Free
Link

Why It Matters:

This report tells you how much commercial search is shifting from Google to Amazon.

For years, Amazon’s advertising business has grown faster than its renowned AWS cloud unit. That’s an indicator of where brands are investing to capture customers at the point of purchase. The year-over-year ad revenue growth rate can help with justifying investment in Amazon SEO and enhanced content.

Look beyond the ad revenue to their commentary on logistics. When they discuss the expansion of their same-day delivery network, they are talking about widening their competitive moat against all other ecommerce and search players. A Prime member who can get a product in four hours has little incentive to start their product search on Google.

Also, look for the percentage of units sold by third-party sellers (typically 60-62%) to quantify the scale of the opportunity for brands on their marketplace.

Methodology & Limitations:

Advertising revenue is not separated by format (such as sponsored products, display, or video), and there is no disclosure of revenue by product category. International advertising revenue breakdowns are limited, and delivery network metrics are provided only selectively.

Apple Quarterly Results

Cadence: Quarterly
Typical release: Q1 (late Jan/early Feb), Q2 (late Apr/early May), Q3 (late Jul/early Aug), Q4 (late Oct/early Nov)
Access: Free
Link

Why It Matters:

Apple’s report is a barometer for the health of the mobile ecosystem and the impact of privacy. The key number is “Services” revenue, which includes the App Store, Apple Pay, and their burgeoning advertising business. When this number accelerates, expect more aggressive App Store features and search ads that can siphon traffic and clicks away from the mobile web.

Apple’s management commentary on privacy can have meaningful consequences for the digital marketing industry. Look for any hints about upcoming privacy features that could further limit tracking and attribution in search. Prior announcements around features like App Tracking Transparency on earnings calls gave marketers several months to prepare for the attribution shifts.

Snap Quarterly Results

Cadence: Quarterly
Typical release: Q1 (late Apr), Q2 (late Jul), Q3 (late Oct), Q4 (late Jan/early Feb)
Access: Free
Link

Why It Matters:

Snap’s daily active user growth and engagement patterns tell you where Gen Z discovers information. When DAU growth accelerates in markets where your organic search traffic is flat, younger audiences may not be using traditional search in those regions.

Snap reports specific metrics on AR lens usage. These metrics show you how users interact with visual and augmented reality content, previewing how visual search might evolve.

Methodology & Limitations:

Geographic breakdowns are limited to broad regions. Engagement metrics emphasize time spent rather than search or discovery behavior specifically. Revenue per user varies significantly by region, making it difficult to draw global conclusions. The data mainly reflects Gen Z behavior, not wider demographics.

Pinterest Quarterly Results

Cadence: Quarterly
Typical release: Q1 (late Apr/early May), Q2 (late Jul/early Aug), Q3 (late Oct/early Nov), Q4 (late Jan/early Feb)
Access: Free
Link

Why It Matters:

Pinterest’s monthly active user (MAU) growth shows you which markets embrace visual discovery. Their MAU growth rates by region reveal geographic patterns in visual search adoption, often previewing trends that influence how people search everywhere.

Their average revenue per user by region indicates where visual commerce drives revenue compared to just browsing, helping you decide whether Pinterest optimization deserves resources for your business.

Methodology & Limitations:

MAU counts only authenticated users, excluding logged-out traffic. ARPU includes all revenue types, not just search or discovery-related income. There is limited disclosure regarding search query volume or conversion behavior.

Internet Usage & Infrastructure

While earnings reports reveal the financial outcomes, they are lagging indicators of a more fundamental resource: human attention.

The following reports measure the underlying user behavior that drives these financial results, offering insight into audience attention and interaction.

Digital 2025 – Global Overview (We Are Social & Meltwater)

Cadence: Annual
Typical release: February
Access: Free with registration
Link

Why It Matters:

This report settles internal debates about platform usage with definitive, global data. It provides country-by-country breakdowns of everything from social media penetration and time spent on platforms to the most-visited websites and most-used search queries. It’s a reality check against media hype.

The platform adoption curves reveal which social networks are gaining momentum and which are stagnating. The data on time spent in social apps versus time on the “open web” is worth watching, as it provides some explanation for why website engagement metrics may be declining.

Methodology & Limitations:

Data is compiled from a variety of sources that use different methods. Some countries have smaller sample sizes, and certain metrics come from self-reported surveys. Generally, developed markets enjoy higher data quality compared to emerging markets. Additionally, how platform usage is defined can differ depending on the source.

Measuring Digital Development (ITU)

Cadence: Annual
Typical release: November
Access: Free
Link

Why It Matters:

The ITU, a specialized agency of the United Nations, provides the data for sizing total addressable markets for international SEO.

Its connectivity metrics show which countries have the infrastructure to support video-heavy or interactive content strategies, versus emerging markets where mobile-first, lightweight content is still essential.

The most actionable metric for spotting future growth is “broadband affordability.” History shows that when the cost of a basic internet plan in a developing country drops below 2% of average monthly income, that market is poised for growth.

Methodology & Limitations:

Government-reported data quality differs across countries, with some nations providing infrequent or incomplete reports. Affordability calculations rely on national averages that might not account for regional differences. Additionally, infrastructure metrics often lag behind actual deployment by one to two years.

Global Internet Phenomena

Cadence: Annual
Typical release: March-April
Access: Free with registration
Link

Why It Matters:

This report helps you understand what people are actually doing online by tracking which applications consume the most internet bandwidth. Its findings are often staggering. In nearly every market analyzed, video streaming is the number one consumer of bandwidth, often accounting for over 50-60% of all traffic.

This provides proof that optimizing for video is no longer a niche strategy; it’s the main way people consume information and entertainment. The application rankings show whether YouTube or TikTok is the dominant force in your target markets, revealing which platform deserves the lion’s share of your video optimization priority.

This data provides the “why” behind other trends, such as the explosive growth of YouTube’s ad revenue seen in Alphabet’s earnings.

Methodology & Limitations:

Based on ISP-level traffic data from Sandvine’s partner networks, coverage varies by region, with the strongest data in North America and Europe. Mobile and fixed broadband breakdowns are not always comparable. The data excludes encrypted traffic that can’t be categorized. Sampling includes large ISPs but does not cover the entire market.

Privacy & Policy

Knowing where your audience is operating is only part of the challenge; understanding the rules of engagement is equally important. As privacy regulations and policies evolve, they set new guidelines for digital marketing, fundamentally altering how we target and measure audiences.

Data Privacy Benchmark Study (Cisco)

Cadence: Annual
Typical release: January
Access: Free with registration
Link

Why It Matters:

This report helps turn the idea of privacy into business results that everyone can understand. It highlights how good privacy practices can positively influence sales, customer loyalty, and brand reputation.

Whenever you’re making a case for investing in responsible data handling or privacy-focused technologies, this report offers valuable ROI insights. It reveals how many consumers might turn away from a brand if privacy isn’t clear, giving you strong support to promote user-friendly policies that also boost profits.

Methodology & Limitations:

The survey methodology relies on self-reported data.  Respondents mainly come from large enterprises. The geographic focus is on developed markets. ROI figures are based on correlation rather than establishing causation. Privacy maturity levels are self-assessed by respondents rather than independently verified.

Ads Safety Report (Google)

Cadence: Annual
Typical release: March
Access: Free
Link

Why It Matters:

Google’s enforcement data provides a glimpse into what may soon impact organic search results. Often, policies first appear in Google Ads before making their way into search quality guidelines. Violations by publishers show which types of sites might get banned from AdSense, sometimes acting as a warning sign for future manual actions in organic search.

When enforcement becomes more active in fields like crypto, healthcare, or financial services, it usually indicates that stricter E-E-A-T standards are on their way for organic results. Google’s actions to block ads due to misrepresentation or coordinated deceptive behavior are sometimes followed by similar issues in organic results.

Methodology & Limitations:

Google-reported data shows enforcement priorities, not industry violation rates. Detection methods, thresholds, and policies lack transparency and are not fully disclosed. Enforcement patterns are unclear, with no independent verification of metrics.

Media Use & Trust

Digital News Report (Reuters Institute)

Cadence: Annual
Typical release: June
Access: Free
Link

Why It Matters:

The Reuters Institute report explores how content discovery differs across countries and demographics. The analysis of over 40 nations details the ways people find information, whether directly, through search, social media, or aggregators.

A key insight is the concept of “side-door” traffic, referring to visitors who arrive via social feeds, mobile alerts, or aggregator apps, rather than visiting a homepage or using traditional search. In most developed nations, this type of traffic now makes up the majority, even for leading publishers.

This highlights the need for a distributed content strategy, emphasizing that your brand and expertise should be discoverable in many channels beyond Google.

Methodology & Limitations:

The survey-based methodology uses ~2,000 online respondents per country, excludes offline or low-connectivity users, and overrepresents developed markets. Self-reported news habits may differ from actual behaviors, and the definition of “news” varies by culture and person.

Edelman Trust Barometer

Cadence: Annual
Typical release: January
Access: Free
Link

Why It Matters:

In today’s world of AI-generated content and widespread misinformation, trust is more important than ever. Edelman has been tracking public trust in four key institutions – Business, Government, Media, and NGOs – for over 20 years. Their findings offer a helpful guide for establishing your content’s authority.

When the data shows that “Business” is trusted more than “Media” in a particular country, it suggests that thought leadership from your company’s own qualified experts can be more believable and relatable than just quoting traditional news outlets.

The differences across generations and regions are especially useful to understand. They reveal which types of authority signals and credentials matter most for different audiences, giving you a clear, data-driven way to build E-E-A-T.

Methodology & Limitations:

Survey sample favors educated, high-income populations in most countries, with 1,000-1,500 respondents per nation. Trust is a self-reported perception, not behavioral; country choice focuses on larger economies. Trust in institutions may not reflect trust in brands or sources.

Digital Media Trends (Deloitte Insights)

Cadence: Annual
Typical release: April
Access: Free executive summary
Link

Why It Matters:

Deloitte monitors streaming service adoption, content consumption trends, and attention fragmentation, all of which influence your content strategy. Their research on subscription usage and churn rates shows how users distribute their entertainment budgets.

Their insights into ad-supported versus subscription preferences indicate which business models resonate with different demographics and content types. Data on cord-cutting and cord-never behaviors illustrate how various generations consume media.

Their analysis of social media patterns reveals declining platform popularity, providing early signals to diversify channels.

Methodology & Limitations:

This U.S.-focused study mainly involves higher-income digital adopters, with streaming behavior centered on entertainment, which may not reflect overall information habits. The sample size of 2,000-3,000 limits detailed demographics, and trends may lag six to 12 months behind mainstream adoption.

How To Use These Reports

Use this set of reports to connect market trends and signals with your search and content decisions.

Start with quarterly earnings and ad forecasts to calibrate budgets after core updates or seasonal swings. When planning campaigns, check platform adoption and discovery trends to decide where your audience is shifting and how they’re finding information.

For content format choices, compare attention and creative studies with what’s working in Search, YouTube, and short-form video to guide what you produce next.

Review earnings and forecasts each quarter when you set goals. Scan broader landscape studies when you refresh your annual plan. When something changes fast, cross-check at least two independent sources before you move resources. Look for consistency in your data and don’t act on one-off spikes.

Looking Ahead

The best SEO and marketing strategies are built on more than instinct; they’re grounded in data that stands up to scrutiny. By making these reports part of your regular reading cycle, you have a basis to make solid decisions that you can justify.

Each dataset offers a different lens that allows you to see both the macro trends shaping the industry and the micro signals to guide your next move.

The marketers who know where to find the right data and information are the ones who can be strategic and not reactionary.

Featured Image: Roman Samborskyi/Shutterstock

Subscription Pro on Growth, Churn, LTV

For 12 years Andrei Rebrov managed infrastructure and operations at Scentbird, a perfume subscription company he co-founded in 2013. He learned the importance of acquiring the right subscribers, those who stay and generate lifetime value for the business.

The key, he says, was accurate, timely analytics to assess channels, creative, and promos. Finsi, his new company, provides those metrics, enabling merchants to predict a prospect’s value over the long term.

In our recent conversation, I asked Andrei to share acquisition tactics, churn avoidance, product selection, and more.

Our entire audio dialog is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Tell our listeners who you are and what you do.

Andrei Rebrov: I’m the co-founder of Finsi, an analytics platform for subscription-based businesses, launched in 2024. We help companies acquire and retain profitable subscribers.

Before that, I spent 12 years building and scaling Scentbird, a perfume subscription service, where I served as CTO.

I handled much of the engineering, including coding the website, building back-office systems, and managing online payments and warehouse infrastructure. We launched in August 2014 and surpassed 1 million subscribers by the end of 2024. I left the company in March 2025.

We started Scentbird alongside subscription pioneers such as cosmetic brands Ipsy and Birchbox, and apparel provider Fabletics. We were inspired by Warby Parker’s “try before you buy” model, and we applied the concept to fragrances. We built our own platform, which gave us flexibility and scalability over the years.

We began with fragrances from other brands. Some were hesitant, but over time, Scentbird became a mutually beneficial partner, giving brands access to younger audiences, online shoppers, and consumers who wanted to try before committing to a full bottle. Our website’s motto became: “Date your fragrance before you marry it.”

Customers could select their monthly fragrances or receive a default “fragrance of the month.” Thousands chose the default, enabling the rapid collection of reviews and insights that brands could use to refine formulas, marketing copy, and strategies.

Subscription businesses require nonstop acquisition to stay in place. The challenge isn’t just reducing churn; it’s acquiring customers who will stay. Most SaaS companies separate acquisition and retention teams, which can create disconnects. Success comes from collaboration — aligning acquisition, retention, and operations — so the entire company functions as one system. Increasing customer acquisition spend is usually worthwhile if it improves lifetime value.

We were vertically integrated, which meant that fulfillment, logistics, and marketing had to move together. If one team outpaced the others, something would break quickly.

Bandholz: What drives profitable acquisition?

Rebrov: Accurate analytics. It’s one of the hardest parts of running a subscription business, and it’s a big reason I started Finsi. At Scentbird, we invested early in analytics because every acquisition channel behaves differently. Each has its own lifetime value, payback period, and acquisition cost, so analyzing them separately was essential.

We needed to understand what customers purchased through each channel and how these purchases affected retention. Traditional LTV calculations rely on historical data, which is typically dated. That delay makes it impossible to know if current strategies are working. To solve this, we built predictive LTV models that provided early insight — often within a month — so we could gauge the impact of new creatives and A/B tests faster.

For example, we tested a two-product-per-month plan. It initially lowered conversion rates, but predictive data revealed much stronger long-term value. That insight helped justify warehouse adjustments for the new fulfillment process.

We explored various customer acquisition channels. TikTok Shops became a top performer. Since it integrated only through Shopify, we built a faceless Shopify store connected to TikTok, routed orders through it, and shipped sample bundles to introduce users to the Scentbird experience before converting them into subscribers.

We grandfathered long-term subscribers to reward loyalty. Some stayed seven or eight years, though many churned within 12 months. Early, accurate analytics made it possible to balance growth and retention effectively.

Bandholz: What size company benefits from Finsi’s analytics?

Rebrov: It’s less about size and more about the growth stage. Each stage faces different challenges. One of the biggest is cash flow. Every physical SKU has its own lead time, so if inventory takes three months, companies must accurately forecast demand, churn, and cash flow. For early-stage brands, those with annual revenue under $10 million, we help stabilize operations and predict cash needs.

At $10 to $50 million, segmentation becomes crucial: identifying lapsed customers for personalized win-backs and recognizing high-value customers early to offer premium experiences.

At $50 to $150 million, the focus shifts to eliminating surprises and aligning systems, ensuring promotions run correctly and teams understand how one decision impacts another. Larger brands often expand into new product lines and face the same scaling issues again. Across all stages, success depends on accurate, unified data to guide smarter decisions.

Effective retention depends on understanding why customers cancel.

Bandholz: How do you do that?

Rebrov: We usually start with surveys to gather both structured and unstructured feedback. Multiple-choice questions provide quantifiable insights, but the real value lies in open-ended responses, where customers share their personal stories. Surveys let you reach thousands of people efficiently, but phone conversations are invaluable. Talking directly with customers often reveals unique motivations and use cases that spark creativity and guide product development.

Spending even 30 minutes on the phone with a few customers, especially loyal ones, can uncover more insights than analytics ever could.

Certain products naturally fit subscriptions. Examples are consumable supplements, protein powders, and snacks. But infrequently purchased goods are better suited for one-off sales.

Companies must decide early because it shapes their marketing strategy. For traditional ecommerce, profitability often depends on the first sale. You aim to cover acquisition, cost of goods, and shipping upfront, often by selling bundles.

For subscriptions, the focus shifts to lifetime value. Sellers can afford to lose money initially if they know the customer will stay long enough to become profitable. Predictive LTV helps qualify customers early and informs how much you can spend to acquire them.

Simplicity wins. Don’t confuse prospects with multiple purchase paths or offers. Ensure a “subscribe and save” offer is consistent and easy to understand.

The beauty of subscriptions lies in predictable cash flow. Yet rising acquisition costs make retention even more vital.

Bandholz: Where can people follow you, reach out to you, or hire your services?

Rebrov: Our site is Finsi.ai. I’m on LinkedIn.

How Brands Boost ROI with Smart Data

Ecommerce marketers know the challenge of delivering relevant promotions to prospects without violating privacy rules and norms. Yet many providers now offer solutions that do both — personalize offers and respect privacy — for much greater performance.

Two of those providers are my guests in this week’s episode. Sean Larkin is CEO of Fueled, a customer data platform for merchants. Francesco Gatti is CEO of Opensend, a repository of consumer demographic and behavior data.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us an overview of what you do.

Sean Larkin: I’m CEO and founder of Fueled, a customer data platform for ecommerce. We help brands strengthen the data signals sent to advertising and marketing platforms such as Meta to improve tracking and performance. Our team collaborates with companies such as Built Basics, Dr. Squatch, and Oats Overnight, ensuring accurate pixel data and confidence in their marketing metrics.

Francesco Gatti: I’m CEO and co-founder of Opensend. We help brands identify site visitors who haven’t provided contact information. This includes new users who show sufficient engagement for retargeting and returning shoppers browsing on different devices or browsers. Our technology links these sessions, enabling brands and platforms such as Klaviyo and Shopify to distinguish between returning visitors and new ones.

We also offer a persona tool that segments customers by detailed demographics and behavior, enabling personalized marketing. We integrate directly with Klaviyo and other email platforms. By enhancing Klaviyo accounts, we help merchants reach unidentifiable visitors and maximize ad spend. Our re-identification capabilities are critical, as consumers often use multiple devices and frequently replace them, which can disrupt tracking. We work with roughly 1,000 brands, including Oats Overnight, Regent, and Alexander Wang.

Bandholz: How can your tools track a consumer across devices?

Gatti: We see two main use cases. First is cross-device and cross-browser identification. Imagine Joe bought from you last year on his old iPhone. This year, he returns using a new iPhone or his work computer. Typically, you wouldn’t know it’s the same person. Our technology matches signals such as user-agent data against our consumer graph, which holds multiple devices per person, allowing you to recognize Joe regardless of the device or browser.

The second use case involves capturing emails from high-intent visitors. Suppose Joe clicks an Instagram ad, views several product pages for over two minutes, even adds items to his cart, but leaves without buying or subscribing.

Through data partnerships with publishers such as Sports Illustrated and Quizlet, where users provide their email addresses in exchange for free content or promotions, we can match Joe’s anonymous activity to his known email. We then send that email, plus his on-site behavior, to Klaviyo and similar platforms. This triggers an abandonment flow, allowing us to retarget him with personalized messages and increase the chance of conversion.

Bandholz: What are other ways brands use the data?

Gatti: Brands mainly set up automated flows and let them run. Like Fueled, we send data to email platforms and customer data systems, allowing them to trigger personalized actions automatically. The data enables Klaviyo to distinguish between new and returning visitors to show pop-ups only to first-timers.

Larkin: We integrate hashed emails into Meta. Match scores rise 30–50%, and return on ad spend improves because we can prove an ad drove a sale and retarget that shopper.

Gatti: Our identity graph stores multiple data points, including email addresses, phone numbers, postal addresses, IP addresses, and devices. Sharing that with Fueled feeds richer details into Meta’s conversion API, dramatically increasing match rates and targeting accuracy.

Larkin: Privacy rules now limit simple pixel tracking. Since iOS 17, identifiers are stripped, making it harder for ecommerce brands to track visitors and run effective ads. Fueled collects first-party data, and Opensend’s third-party graph restores lost signals. With conversion API integrations, brands send detailed data directly to platforms such as Meta for stronger targeting and email automation.

Bandholz: When should a brand start using data technologies like yours?

Larkin: It depends on scale. If you’re spending under $20,000 a month on ads, the free Shopify integrations with Meta or Google usually suffice. Fueled is twice the cost of our competitors because we offer hands-on audits, proactive monitoring, and direct Slack support. Our typical clients do $8 million or more in annual revenue, often over $20 million. Some entrepreneurs bring us in from day one for the data advantage. Still, most brands should wait until ad spend grows and minor optimizations have a significant financial impact.

Gatti: For Opensend it’s about traffic, not revenue. We recommend at least 30,000 monthly unique visitors so our filtering can produce quality new emails. Services that identify returning visitors across devices work best for sites with 100,000 monthly visitors, where a 10x ROI is common. Our plans start at $250 per month.

Visitors who never share an email address convert less often, but our filtering narrows the gap. At apparel brand True Classic, for example, we captured 390,000 emails over three years, saw 65% open rates, and delivered a 5x return on investment within three months. As these contacts move through remarketing with holiday offers and seasonal promotions, ROI continues to compound.

Bandholz: When should a company remove an unresponsive subscriber?

Gatti: It varies by brand, average order value, and overall marketing strategy. We work with both high-end luxury companies and billion-dollar tire sellers with very different approaches. In general, if you’ve sent 10 to 15 emails with zero engagement, it’s time to drop those contacts. Continuing to send won’t help.

Larkin: I’d add that many brands, including big ones, don’t plan their retargeting or abandonment flows, especially heading into Black Friday and Cyber Monday. The pressure to discount everything can lead to leaving money on the table. Opensend reveals customer intent, allowing you to adjust offers. Someone who reaches the checkout may not require the same discount as someone who has only added a product to their cart.

We partner with agencies such as Brand.co and New Standard Co that help us build smart strategies. My biggest recommendation for the holidays is to review your flows, decide when a large discount is necessary, and avoid giving away the farm. If you blanket customers with huge discounts, many will disappear once the sale ends.

Bandholz: Where can people follow you, find you, use your services?

Gatti: Our site is Opensend.com. I’m on LinkedIn.

Larkin: We’re Fueled.io. I’m also on LinkedIn.

The Behavioral Data You Need To Improve Your Users’ Search Journey via @sejournal, @SequinsNsearch

We’re more than halfway through 2025, and SEO has already changed names many times to account for the new mission of optimizing for the rise of large language models (LLMs): We’ve seen GEO (Generative Engine Optimization) floating around, AEO (Answer Engine Optimization), and even LEO (LLM Engine Optimization) has made an appearance in industry conversations and job titles.

However, while we are all busy finding new nomenclatures to factor in the machine part of the discovery journey, there is someone else in the equation that we risk forgetting about: the end beneficiary of our efforts, the user.

Why Do You Need Behavioral Data In Search?

Behavioral data is vital to understand what leads a user to a search journey, where they carry it out, and what potential points of friction might be blocking a conversion action, so that we can better cater to their needs.

And if we learned anything from the documents leaked from the Google trial, it is that user signals might actually be one of the many factors that influence rankings, something that was never fully confirmed by the company’s spokespeople, but that has also been uncovered by Mark Williams-Cook in his analysis of Google exploits and patents.

With search becoming more and more personalized, and data about users becoming less transparent now that simple search queries are expanding into full funnel conversations on LLMs, it’s important to remember that – while individual needs and experiences might be harder to isolate and cater for – general patterns of behavior tend to stick across the same population, and we can use some rules of thumb to get the basics right.

Humans often operate on a few basic principles aimed at preserving energy and resources, even in search:

  • Minimizing effort: following the path of least resistance.
  • Minimizing harm: avoiding threats.
  • Maximizing gain: seeking opportunities that present the highest benefit or rewards.

So while Google and other search channels might change the way we think about our daily job, the secret weapon we can use to future-proof our brands’ organic presence is to isolate some data about behavior, as it is, generally, much more predictable than algorithm changes.

What Behavioral Data Do You Need To Improve Search Journeys?

I would narrow it down to data that cover three main areas: discovery channel indicators, built-in mental shortcuts, and underlying users’ needs.

1. Discovery Channel Indicators

The days of starting a search on Google are long gone.

According to the Messy Middle research by Google, the exponential increase in information and channels available has driven a shift from linear search behaviors to a loop of exploration and evaluation that guides our purchase decisions.

And since users now have an overwhelming number of channels they can consult to research a product or a brand, it’s harder to cut through the noise. By knowing more about those users, we can make sure our strategy is laser-focused across content and format alike.

Discovery channel indicators give us information about:

  • How users are finding us beyond traditional search channels.
  • The demographic that we reach on some particular channels.
  • What drives their search, and what they are mostly engaging with.
  • The content and format that are best suited to capture and retain their attention in each one.

For example, we know that TikTok tends to be consulted for inspiration and to validate experiences through user-generated content (UGC), and that Gen Z and Millennials on social apps are increasingly skeptical of traditional ads (with skipping rates of 99%, according to a report by Bulbshare). What they favor instead is authentic voices, so they will seek out first-hand experiences on online communities like Reddit.

Knowing the different channels that users reach us through can inform organic and paid search strategy, while also giving us some data on audience demographics, helping us capture users that would otherwise be elusive.

So, make sure your channel data is mapped to reflect these new discovery channels, especially if you are relying on custom analytics. Not only will this ensure organic search gets the credit it’s owed, but it will also point to untapped potential you can lean into as searches become less and less trackable.

This data should be easily available to you via the referral and source fields in your analytics platform of choice, and you can also integrate a “How did you hear about us” survey for users who complete a transaction.
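If GA4 is your analytics platform of choice, you can also pull the same source and medium breakdown programmatically with the Data API. The sketch below is a minimal example, assuming the google-analytics-data Python client, application-default credentials, and a placeholder property ID.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="sessionSource"), Dimension(name="sessionMedium")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="yesterday")],
)

# Print sessions per discovery channel so new referrers stand out quickly.
for row in client.run_report(request).rows:
    source, medium = (v.value for v in row.dimension_values)
    print(source, medium, row.metric_values[0].value)
```

From there, it’s easy to flag sources you haven’t explicitly mapped yet, such as AI assistants or niche communities.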

And don’t forget about language models: With the recent rise in queries that start a search and complete an action directly on LLMs, it’s even harder to track all search journeys. This shifts our mission from being relevant for one specific query at a time to being visible for every intent we can cover.

This is even more important when we realize that everything contributes to the transactional power of a query, irrespective of how the search intent is traditionally labelled, since someone might decide to evaluate our offers and then drop out due to the lack of sufficient information about the brand.

2. Built-In Mental Shortcuts

The human brain is an incredible organ that allows us to perform several tasks efficiently every day, but its cognitive resources are not infinite.

This means that when we are carrying out a search, probably one of many that day, while we are also engaged in other tasks, we can’t allocate all of our energy to finding the perfect result among the infinite possibilities available. That’s why our attentional and decisional processes are often modulated by built-in mental shortcuts like cognitive biases and heuristics.

These terms are sometimes used interchangeably to refer to imperfect, yet efficient decisions, but there is a difference between the two.

Cognitive Biases

Cognitive biases are systematic, mostly unconscious errors in thinking that affect the way we perceive the world around us and form judgments. They can distort the objective reality of an experience, and the way we are persuaded into an action.

One common example of this is the serial position effect, which is made up of two biases: When we see an array of items in a list, we tend to remember best the ones we see first (primacy bias) and last (recency bias). And since cognitive load is a real threat to attention, especially now that we live in the age of 24/7 stimuli, primacy and recency biases are the reason why it’s recommended to lead with the core message, product, or item if there are a lot of options or content on the page.

Primacy and recency not only affect recall in a list, but also determine the elements that we use as a reference to compare all of the alternative options against. This is another effect called anchoring bias, and it is leveraged in UX design to assign a baseline value to the first item we see, so that anything we compare against it can either be perceived as a better or worse deal, depending on the goal of the merchant.

Among many others, some of the most common biases are:

  • Distance and size effects: As numbers increase in magnitude, it becomes harder for humans to make accurate judgments, which is why some tactics recommend expressing savings as larger whole numbers rather than fractions of the same value.
  • Negativity bias: We tend to remember and assign more emotional value to negative experiences rather than positive ones, which is why removing friction at any stage is so important to prevent abandonment.
  • Confirmation bias: We tend to seek out and prefer information that confirms our existing beliefs, and this is not only how LLMs operate to provide answers to a query, but it can be a window into the information gaps we might need to cover.

Heuristics

Heuristics, on the other hand, are rules of thumb that we employ as shortcuts at any stage of decision-making, and help us reach a good outcome without going through the hassle of analyzing every potential ramification of a choice.

A known heuristic is the familiarity heuristic, which is when we choose a brand or a product that we already know, because it cuts down on every other intermediate evaluation we would otherwise have to make with an unknown alternative.

Loss aversion is another common heuristic: On average, we are more likely to choose the less risky of two options with similar returns, even if this means we might miss out on a discount or a short-term benefit. An example of loss aversion is when we pay an added fee to protect our travel plans, or prefer products that we can return.

There are more than 150 biases and heuristics, so this is not an exhaustive list – but in general, getting familiar with which ones are most common among our users helps us smooth out the journey for them.

Isolating Biases And Heuristics In Search

Below, you can see how some queries can already reveal subtle biases that might be driving the search task.

Confirmation Bias:
  • Is [brand/products] the best for this [use case]?
  • Is this [brand/product/service] better than [alternative brand/product service]?
  • Why is [this service] more efficient than [alternative service]?

Familiarity Heuristic:
  • Is [brand] based in [country]?
  • [Brand]’s HQs
  • Where do I find [product] in [country]?

Loss Aversion:
  • Is [brand] legit?
  • [brand] returns
  • Free [service]

Social Proof:
  • Most popular [product/brand]
  • Best [product/brand]

You can use Regex to isolate some of these patterns and modifiers directly in Google Search Console, or you can explore other query tools like AlsoAsked.

If you’re working with large datasets, I recommend using a custom LLM or creating your own model for classifications and clustering based on these rules, so it becomes easier to spot a trend in the queries and figure out priorities.
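As a lightweight complement to GSC’s regex filter, you can run similar patterns over an exported query list before reaching for an LLM. The sketch below is illustrative only; the patterns and labels are assumptions you would tune to your own brand and query set.

```python
import re

# Hypothetical patterns mapping query modifiers to the biases/heuristics
# listed above; adjust them to your own queries before relying on the output.
PATTERNS = {
    "confirmation_bias": re.compile(r"\b(is|why is)\b.*\b(best|better than)\b", re.I),
    "familiarity_heuristic": re.compile(r"\b(based in|hq|headquarters|where do i find)\b", re.I),
    "loss_aversion": re.compile(r"\b(legit|scam|returns?|refund|free)\b", re.I),
    "social_proof": re.compile(r"\b(most popular|best[- ]selling|top rated|reviews?)\b", re.I),
}

def classify(query: str) -> list[str]:
    """Return every bias/heuristic whose pattern matches the query."""
    return [label for label, rx in PATTERNS.items() if rx.search(query)] or ["unclassified"]

queries = ["is acme the best running shoe for flat feet", "acme returns policy"]
for q in queries:
    print(q, "->", classify(q))
```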

These observations will also give you a window into the next big area.

3. Underlying Users’ Needs

While biases and heuristics can manifest a temporary need in a specific task, one of the most valuable things behavioral data can reveal is the need that drives the starting query and guides all of the subsequent actions.

Underlying needs don’t only become apparent from clusters of queries, but from the channels used in the discovery and evaluation loop, too.

For example, if we see high prominence of loss aversion based on our queries, paired with low conversion rates and high traffic on UGC videos for our product or brand, we can infer that:

  • Users need reassurance on their investment.
  • There is not enough information to cover this need on our website alone.

Trust is a big decision-mover, and one of the most underrated needs that brands often fail to fulfill as they take their legitimacy for granted.

However, sometimes we need to take a step back and put ourselves in the users’ shoes in order to see everything with fresh eyes from their perspective.

By mapping biases and heuristics to specific users’ needs, we can plan for cross-functional initiatives that span beyond pure SEO and are beneficial for the entire journey from search to conversion and retention.

How Do You Obtain Behavioral Data For Actionable Insights?

In SEO, we are used to dealing with a lot of quantitative data to figure out what’s happening on our channel. However, there is much more we can uncover via qualitative measures that can help us identify the reason something might be happening.

Quantitative data is anything that can be expressed in numbers: This can be time on page, sessions, abandonment rate, average order value, and so on.

Tools that can help us extract quantitative behavioral data are:

  • Google Search Console & Google Merchant Center: Great for high-level data like click-through rates (CTRs), which can flag mismatches between the user intent and the page or campaign served, as well as cannibalization instances and incorrect or missing localization.
  • Google Analytics, or any custom analytics platform your brand relies on: These give us information on engagement metrics, and can pinpoint issues in the natural flow of the journey, as well as points of abandonment. My suggestion is to set up custom events tailored to your specific goals, like sign-up form clicks or add to cart, in addition to the default engagement metrics (a minimal sketch follows this list).
  • Heatmaps and eye-tracking data: Both of these can give us valuable insights into visual hierarchy and attention patterns on the website. Heatmapping tools like Microsoft Clarity can show us clicks, mouse scrolls, and position data, uncovering not only areas that might not be getting enough attention, but also elements that don’t actually work. Eye-tracking data (fixation duration and count, saccades, and scan-paths) complement that information by showing which elements are capturing visual attention, as well as which ones are often not being seen at all.
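Custom events are usually wired up client-side with gtag or Google Tag Manager. If you also need to record a goal server-side, GA4’s Measurement Protocol accepts events over HTTP; this minimal sketch uses placeholder credentials and a hypothetical event name.

```python
import requests

# Placeholders: swap in your own GA4 measurement ID and API secret.
MEASUREMENT_ID = "G-XXXXXXX"
API_SECRET = "your_api_secret"

payload = {
    "client_id": "555.1234567890",  # the visitor's GA client ID
    "events": [
        {
            "name": "signup_form_click",  # hypothetical custom event tied to a goal
            "params": {"form_location": "pricing_page"},
        }
    ],
}

resp = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
print(resp.status_code)  # a 2xx response means the event was accepted
```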

Qualitative data, on the other hand, cannot be expressed in numbers as it usually relies on observations. Examples include interviews, heuristic assessments, and live session recordings. This type of research is generally more open to interpretation than its quantitative counterpart, but it’s vital to make sure we have the full picture of the user journey.

Qualitative data for search can be extracted from:

  • Surveys and CX logs: These can uncover common frustrations and points of friction for returning users and customers, which can guide better messaging and new page opportunities.
  • Scrapes of Reddit, Trustpilot, and online communities conversations: These give us a similar output as surveys, but expand the analysis of blockers to conversion to users that we haven’t acquired yet.
  • Live user testing: The least scalable but sometimes most rewarding option, as it can cut down on the inference required with quantitative data, especially when methods are combined (for example, live sessions can be paired with eye-tracking and narrated by the user at a later stage via Retrospective Think-Aloud, or RTA).

Behavioral Data In The AI Era

In the past year, our industry has been really good at two things: sensationalizing AI as the enemy that will replace us, and highlighting its big failures on the other end. And while it’s undeniable that there are still massive limitations, having access to AI presents unprecedented benefits as well:

  • We can use AI to easily tie up big behavioral datasets and uncover actionables that make the difference.
  • Even when we don’t have much data, we can generate a synthetic dataset based on a sample of our own data or a public one, to spot existing patterns and promptly respond to users’ needs.
  • We can generate predictions that can be used proactively for new initiatives to keep us ahead of the curve.

How Do You Leverage Behavioral Data To Improve Search Journeys?

Start by creating a series of dynamic dashboards with the measures you can obtain for each one of the three areas we talked about (discovery channel indicators, built-in mental shortcuts, and underlying users’ needs). These will allow you to promptly spot behavioral trends and collect actions that can make the journey smoother for the user at every step, since search now spans beyond the clicks on site.

Once you get new insights for each area, prioritize your actions based on expected business impact and effort to implement.

And bear in mind that behavioral insights are often transferable to more than one section of the website or the business, which can maximize returns across several channels.

Lastly, set up regular conversations with your product and UX teams. Even if your job title keeps you in search, business success is often channel-agnostic. This means that we shouldn’t only treat the symptom (e.g., low traffic to a page), but curate the entire journey, and that’s why we don’t want to work in silos on our little search island.

Your users will thank you. The algorithm will likely follow.

Featured Image: Roman Samborskyi/Shutterstock

Triple Whale’s Moby AI Gets Things Done

We’ve all heard the buzz surrounding agentic AI. What’s missing for many of us is how it can help our business. What is an AI agent? Can it really perform tasks and get things done?

I asked those questions and more to Anthony DelPizzo. He’s with Triple Whale, the Shopify-backed ecommerce analytics platform that has launched its own AI agent called Moby. It responds to ChatGPT-like prompts, suggests marketing channels, and even composes emails.

The entire audio of our conversation is embedded below. The transcript is condensed and edited for clarity.

Eric Bandholz: Who are you, and what do you do?

Anthony DelPizzo: I lead product marketing at Triple Whale and have been here for about nine months. Before that, I spent nearly four years at Klaviyo.

Triple Whale is an analytics platform for ecommerce brands. We merge fragmented data across marketing and sales into a single system and dashboard to help merchants make strategic decisions. To date, we’ve processed over $65 billion in gross merchandise volume.

We launched Moby, an agentic AI agent, about a month ago after a long testing phase. Moby is a set of AI tools that interact directly with merchants’ data. Think of it as ChatGPT focused on the platforms you already work with. Merchants can ask Moby both simple and complex questions and get answers tailored to their own data.

Moby Agents take it a step further. They’re akin to autonomous teammates that can analyze information, generate insights, and even take actions across ad platforms, marketing channels, operations, and more. The result could be much higher conversions or lower overhead.

Moby is built on Triple Whale’s massive data warehouse. It draws on benchmarks from that warehouse and works natively with metrics such as CAC and ROAS. Using that data, Moby can connect cleanly with large language models from Anthropic and OpenAI for each type of query.

Moby is embedded within the Triple Whale platform. It doesn’t just analyze; it can also perform tasks such as activating ads or drafting emails.

Bandholz: Do you share customer data with those LLMs?

DelPizzo: We have privacy agreements with all LLM partners. Data stays within Triple Whale’s private environment. We’re not sending entire datasets to Anthropic, OpenAI, or any other company. Instead, Moby provides context to the LLMs based on the prompt, allowing our customers to use the LLMs securely.

For example, a prompt could be, “How should I prepare for BFCM to grow revenue 30%?” Moby’s Deep Dive feature breaks requests like this into multiple steps, with each acting as an agent examining a different aspect of the business. The result is a structured plan merchants can use to prepare for Black Friday and Cyber Monday.

Merchants use Moby for general prompts and analysis, not just seasonal planning. We provide a prompting guide to help start with effective questions and then refine the queries through follow-ups.

Bandholz: Say I prompt Moby to analyze my sales, margins, and ads for guidance. What then?

DelPizzo: Moby would connect to your data as a Triple Whale client — product margins, SKUs, ad performance, Klaviyo, Attentive, logistics, and more. By analyzing these inputs, it can identify growth levers, such as which products or channels drove profit last year and which ones are trending now. For instance, if a brand has started performing well on AppLovin, the mobile ad platform, Moby might suggest scaling there for BFCM.

Triple Whale’s platform includes eight attribution models, along with post-purchase surveys, to track what’s driving results. We’ve also added marketing mix modeling to measure the impact from click and non-click channels, including Amazon. Moby can run correlations at a statistically significant level, which gives merchants confidence in the conclusions.

Based on that, it forecasts likely outcomes tied to business goals. If a brand wants to grow revenue by 30%, Moby highlights which levers — spending, channels, creative — are likely to help reach that target. Merchants can even see Moby’s reasoning step by step, like watching strategists think through a plan.

Moby’s analysis isn’t limited to numbers. Using AI vision, it can review ad creative, such as color choices, hooks, and copy. It also analyzes email performance by scanning HTML, subject lines, and preview text. It can draft email copy informed by this analysis, giving merchants ideas to test.

Bandholz: Can you cite anonymous customer wins from Moby?

DelPizzo: We rolled out early access to Moby and Moby Agents in February, five months ago. In April, a $100 million global brand used it during a four-day giveaway. On the final day, the team asked Moby, “What should we adjust in our plan?”

Moby responded with a detailed budget allocation by channel and predicted the revenue impact. They followed it exactly and ended up having their highest revenue day ever — 35% above their previous record, more than $200,000 higher.

Another example is LSKD, a fitness apparel brand in Australia with more than 50 stores. They used Moby to analyze the performance of their marketing channels. One agent uncovered over $100,000 in fraudulent spend from an influencer’s self-bidding, which saved the company that money. Since adopting Moby Agents, LSKD’s ROAS has grown about 40%.

Bandholz: How can merchants go wrong with Moby?

DelPizzo: The most common challenge is trying to adopt too much at once. Success usually comes from starting small. We provide a library of 70 pre-built agents, but using all of them right away can feel overwhelming.

The best outcomes are from teams that begin with a single agent, adapt it to their business, and build confidence with steady results. From there, they expand to other areas — maybe they start with the conversion rate optimization team, then retention, then other steps in the funnel. That gradual approach tends to be more sustainable.

Bandholz: Why use Moby instead of building a custom data tool with an LLM such as DeepSeek?

DelPizzo: One factor is the dataset it draws from. Moby is trained on $65 billion in GMV and has access to broad ecommerce benchmarks. It’s not about sharing brand-specific data but rather using aggregated insights to provide context — like knowing typical CAC or ROAS levels in different industries, or, say, margins for apparel versus skincare.

Another piece is the infrastructure. Building from scratch requires a unified schema for orders, events, and performance data. At Triple Whale, our large team of engineers has worked on this for years, and it’s still evolving. Without that groundwork, it’s hard to achieve the same level of ecommerce-specific intelligence.

Custom setups are possible, but Moby combines benchmarks, context, and infrastructure in a way that’s difficult to replicate.

Bandholz: Where can people support you, follow you, reach out?

DelPizzo: Our site is TripleWhale.com. Our socials include X and LinkedIn. I’m on LinkedIn.

Ad Attribution Gets a Crystal Ball

Advertising attribution is supposed to identify and assign credit to the actions and campaigns that lead to conversions. One might surmise that the process is simple with digital ads and large language models.

It is not.

Even the best forms of multi-touch attribution (MTA) are inexact owing to privacy regulations, platform changes, and the messy way shoppers move between websites and even physical stores.

Predictive Advantage

Imagine a retailer running Meta ads to drive traffic to its site. Those ads might inspire shoppers to buy later at Amazon. Contemporary attribution never sees those sales, so the ads look unprofitable. The marketing team might cut the campaign, not realizing it boosted revenue elsewhere.

The result is a blind spot. Marketers often undercount investments that create awareness, while lower-funnel ads look like heroes.

Yet MTA is better than last-touch attribution, and last-touch is better than guessing. But the next step toward understanding the impact of ads and marketing may be a form of predictive modeling similar to media mix modeling (MMM), but with channel-level accuracy.

Predictive attribution modeling “will take you at least to the campaign level,” said Cameron Bush, vice president of digital transformation at Meyer, a cookware manufacturer, as he described his experience.

“I have one campaign in Meta right now that I’m looking at in [Prescient AI, an attribution platform], where 100% of its revenue and MMM ROAS is being driven by Shopify,” Bush continued.

“The [campaign] right below it is 50/50 between Shopify and Amazon and has slightly higher ROAS. That’s a level of sophistication that I wouldn’t have had,” said Bush, comparing predictive models to MMM and MTA.

Screenshot of the Prescient AI dashboard for Meyer

Prescient AI forecasts each campaign’s impact on overall revenue, as illustrated by this example from Meyer. Click image to enlarge.

Decision-Making

Predictive modeling approaches the same goal as marketing mix modeling and multi-touch attribution.

Instead of piecing together every customer touchpoint, it models the relationships between spend and revenue across channels. Then it simulates outcomes, combining MMM-style aggregate measurement with campaign-level outputs, informing marketers:

  • Influence of channels and campaigns on each other and overall revenue.
  • Impact of top-of-funnel campaigns on downstream revenue.
  • Effect of changes to promotional and marketing spend on profit.

The challenge is what to do with that info.

“We look at Excel spreadsheets. We look at dashboards. We look at all this kind of stuff, and it gives us a really good picture of what is going on today. But it doesn’t tell me what to do,” said Cody Greco, co-founder and chief technology officer at Prescient AI.

The work of answering “what should I do now?” is passed to the marketer to forecast.

“The cool thing about predictive modeling is it actually helps answer the next rational question,” Greco said.

A marketer can ask, say, what happens if she doubles her spend on Instagram, and receive an answer with a high degree of confidence.
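A what-if answer like that typically comes from a model fit on historical spend and revenue, then queried with a hypothetical budget. The toy sketch below uses made-up numbers and a plain linear regression; it illustrates the idea, not Prescient AI’s methodology. Real MMM-style models also account for adstock, saturation, seasonality, and uncertainty.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative weekly spend per channel (columns: Meta, Google, Instagram)
# and total weekly revenue. These numbers are invented for the example.
spend = np.array([
    [5000, 2000, 1000],
    [5200, 2100,  900],
    [4800, 2500, 1200],
    [6000, 2300, 1500],
    [5500, 2200, 1100],
])
revenue = np.array([42000, 43500, 44800, 50500, 46000])

model = LinearRegression().fit(spend, revenue)

# "What happens if I double Instagram spend next week?"
baseline = np.array([[5500, 2200, 1100]])
scenario = baseline.copy()
scenario[0, 2] *= 2
print("Predicted lift:", model.predict(scenario)[0] - model.predict(baseline)[0])
```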

Media Buying

Predictive modeling could affect retail media buying in a few ways.

  • Branding and content. Understanding how top-of-funnel promotions and content marketing aid advertising conversions may reinvigorate branding.
  • Budget clarity. Reallocate investments for the best returns.
  • Automation. Placing bids and adjusting spend could, eventually, become automatic.

Contemporary attribution often drags marketing teams into debates over detailed metrics. Predictive modeling reduces those arguments, freeing teams to focus on creative and campaign planning.

Shift in Focus

Hence marketers who delegate channel and budget selection to predictive models could achieve a renaissance in creativity and content, according to Meyer’s Bush.

To be sure, predictive modeling doesn’t erase uncertainty or replace marketers. Yet if successful, it will change promotions for ecommerce and omnichannel businesses.

Think of it like weather forecasting. Marketers will not explain every raindrop; they will focus on whether you’ll need an umbrella tomorrow.

Privacy-Safe Attribution Avoids User Tracking

Four years after Apple broke mobile app attribution in iOS 14.5, an emerging class of privacy-safe aggregated modeling tools promises to bring back visibility without tracking individuals.

The approach uses large sets of anonymized data to infer which advertising campaigns, mobile views, and cross-device activity led to revenue.

It is the method behind Apple’s SKAdNetwork, Google’s Integrated Conversion Measurement (ICM), Meta’s Aggregated Event Measurement (AEM), and tools such as Predictive Aggregate Measurement (PAM) from Branch, a marketing and measurement firm.

“Marketers don’t need to know who bought something — they need to know what drove the sale,” said Irina Bukatik, vice president of product at Branch. “Predictive Aggregate Measurement gives them that clarity in a way that’s compliant, privacy-safe, and works across both app and web.”

Screenshot of Branch.io's attribution web page

Branch’s Predictive Aggregate Measurement infers attribution from aggregate performance signals.

Why It Matters

Merchants that sell through multiple channels — mobile app, website, physical store — know the importance of understanding advertising’s impact on sales.

Apple’s iOS changes in 2021 created blind spots, especially for tracking users across devices and channels.

PAM, AEM, ICM, and similar systems close that attribution gap. These privacy-preserving tools analyze large datasets and estimate which ads and touchpoints are likely responsible for conversions. Thus marketers can tell if a mobile view influenced a desktop purchase or if an app install led to repeat orders, all without violating privacy.

The payoff is relatively better budget allocation, campaign optimization, and confidence that ad spend is going to the channels that generate revenue.

How It Works

Instead of capturing click-by-click records tied to a shopper, these privacy-compliant systems collect conversion signals in bulk and combine them with other relevant campaign data.

The tools do not track individuals, and some add “noise” to obscure personally identifiable information.

From there, statistical models look for patterns that suggest which ads, channels, or touchpoints are likely responsible for a sale.

The process is probabilistic, meaning the tool does not know that a specific customer saw an Instagram ad before buying, but it can conclude, with a high degree of confidence, that the campaign influenced sales based on aggregate trends, explained Branch’s Bukatik.

The models weigh several factors, presumably including:

  • Time between impressions and actions.
  • Number of conversions following a campaign.
  • Cross-device behaviors such as mobile views and desktop purchases.
  • Historical campaign performance under similar conditions.

Imagine the old connect-the-dot worksheets from elementary school that let you trace the shape of a cat or a butterfly. iOS 14.5 and similar privacy updates erased some of the dots, but higher math can help complete the picture.
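To make the idea concrete, here is a deliberately simplified sketch of aggregated, privacy-preserving reporting: conversions are counted per campaign, small campaigns are withheld until they clear a volume threshold, and noise is added to the totals. It illustrates the general approach, not the actual SKAdNetwork, AEM, ICM, or PAM implementations.

```python
import random
from collections import Counter

# Toy conversion log keyed only by campaign name, never by user.
conversions = ["spring_sale", "spring_sale", "retargeting", "spring_sale",
               "brand_video", "retargeting", "spring_sale"]

THRESHOLD = 3       # hide campaigns below this volume (thresholding)
NOISE_SCALE = 1.0   # rough stand-in for calibrated statistical noise

report = {}
for campaign, count in Counter(conversions).items():
    if count < THRESHOLD:
        continue  # withheld until enough volume accumulates
    noisy = count + random.gauss(0, NOISE_SCALE)
    report[campaign] = max(0, round(noisy))

print(report)  # e.g. {'spring_sale': 4}
```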

Known Limits

Yet aggregated measurement is not a perfect replacement for the old, detailed, user-level tracking.

There are limits to the new systems’ accuracy.

  • Lower granularity. The tools lack the user-level detail of legacy tracking. Marketers cannot follow individual customer journeys end-to-end, complicating targeting, retargeting, and personalization.
  • Attribution delays. Frameworks such as Apple’s SKAdNetwork often delay reporting for privacy reasons. The result is slow optimization cycles, forcing marketing teams to wait before reallocating budget or testing new creative.
  • Thresholding. Some systems hide conversion data from smaller or niche campaigns until they reach a minimum volume to prevent identification. This too delays budget and creative decisions.

Limitations such as lower granularity are not as critical as they first appear. As Bukatik noted, in most cases “what a marketer wants to know is not whether someone clicked on the Facebook ad and purchased — it’s whether the Facebook ad drove the purchase.”

Adapting

For merchants, the continuing shift toward privacy-preserving aggregated measurement means building campaigns and reporting processes that work within the system’s constraints.

Start by focusing on bigger, more meaningful signals. Instead of chasing granular, click-by-click attribution across devices, set clear conversion events that matter, such as a first purchase, a new subscription, or a repeat order.

Consider these metrics as key performance indicators. Aggregated tools excel at gauging high-value actions.

Invest in creative and audience testing at the campaign level. A delay in reporting may require tests that run long enough to gather statistically significant results. Avoid overreacting to early data.

Blend first-party data from your ecommerce platform or loyalty program with aggregate reports. You won’t see individual journeys from ad click to checkout, but combining datasets can reveal channel lift, customer lifetime value, and repeat purchase behavior.

Finally, accept that modern attribution is increasingly probabilistic. The goal isn’t perfect precision but directional confidence — enough clarity to shift budget toward the channels, campaigns, and platforms likely to generate profitable growth.

Attribution Models for Ecommerce

My company helps merchants analyze and optimize marketing data. Clients’ most frequent questions involve attribution. What’s the source of truth? What drove the purchase? What prompted the visit to my site?

Let’s start with attribution tracking in Google Analytics.

Google Analytics

Google Analytics 4 now offers just two methods for attributing conversions:

  • “Data-driven” uses machine learning to distribute attribution across multiple sources based on users’ previous behavior, excluding direct traffic, although it appears to skew toward Google-owned channels.
  • “Last click” assigns all credit for a conversion to the last channel the shopper clicked through before purchasing, ignoring direct traffic.

Google Analytics 4 offers two methods for attributing conversions: “Data-driven” and “Last click.”

GA4 offers multiple attribution windows, depending on a business’s sales cycle. Some products require no research and are typically purchased in minutes. Others are complex and need much consideration. I typically set the window at 30, 60, or 90 days.

Rarely do an ecommerce platform’s conversion attribution reports match Google Analytics. Here’s why.

  • Technical errors, such as incorrect installation of pixels on Google or Meta ads, and mistakes with UTM parameters.
  • Privacy rules and regulations complicate tracking. Examples include the E.U.’s GDPR and cookie restrictions.
  • Non-digital promotions, such as ads on TV, print, radio, and billboards, do not appear in GA4.
  • Multiple touches. A consumer may see a product or brand offline, search for it on Google, click on a paid listing, and then abandon the journey. Later, the product may appear in the shopper’s Instagram feed, prompting the conversion. No attribution scenario can pinpoint the source(s), as it varies by shopper.
  • Repeat purchases. Some returning customers go directly to a website, while others respond to ads.

Despite the differences, Google Analytics remains the most-used attribution tool. It’s free, with an ecosystem of users, consultants, and resources. It’s a good choice for advertisers on Google-owned platforms, although it also captures referrals from other sources.

Other Methods

Still, merchants have other attribution options.

Ecommerce platforms. Shopify, for example, offers multiple attribution models — last click, last non-direct click, and first click — and multiple windows. Most platforms, including Shopify, show just one source per sale. Merchants with few marketing channels and single touchpoints can usually rely on their platform’s reporting.

Third-party tools. Segment, Adobe Analytics, and others utilize regression models for multi-touch attribution, similar to GA4’s Data-driven method of assigning a value to each source by channel or campaign. Third-party tools do the math but cost money. They are not as accurate as one would hope, in my experience.

Marketing platforms. Most marketing channels offer built-in reporting for performance tracking on that platform. Advertisers can monitor, for example, the creative, body text, and audience targeting. But in-platform reports are not ideal when contrasting, say, Google versus Meta.

Simplified approach. An easy-to-implement method is to compare daily sales from your ecommerce platform with GA4’s Data-driven conversion attribution reports. Treat the platform’s sales as the source of truth and assess how far GA4’s totals deviate from it. Apply that over- or under-reporting as a percentage adjustment to each channel’s attributed revenue to arrive at a return on investment per channel. Perhaps a TV ad or a brand campaign generated a sales boost. Neither would appear in GA4. While not exact, this simplified approach can provide a more accurate reflection of a channel’s impact on revenue.
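Here is a minimal sketch of that arithmetic with made-up numbers: platform sales act as the source of truth, GA4’s deviation becomes an adjustment factor, and each channel’s attributed revenue is scaled before computing ROI.

```python
# Illustrative figures only; replace with your own platform and GA4 exports.
platform_daily_sales = 25000.0   # from Shopify or another ecommerce platform
ga4_reported_sales = 21500.0     # sum of GA4 data-driven attributed revenue

# GA4 under-reported by this factor; apply it to each channel's attributed revenue.
adjustment = platform_daily_sales / ga4_reported_sales

channels = {
    # channel: (GA4-attributed revenue, ad spend)
    "google_ads": (9000.0, 2500.0),
    "meta_ads": (6500.0, 2200.0),
    "email": (6000.0, 400.0),
}

for name, (ga4_revenue, spend) in channels.items():
    adjusted_revenue = ga4_revenue * adjustment
    roi = (adjusted_revenue - spend) / spend if spend else float("inf")
    print(f"{name}: adjusted revenue {adjusted_revenue:,.0f}, ROI {roi:.1f}x")
```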

Here’s an example. My firm just analyzed sales attributions for an ecommerce health food client. We found (i) a strong sales correlation with both Google Ads and email marketing, (ii) a moderate correlation with Instagram ads, and (iii) a weak to non-existent correlation between sales and TikTok ads. However, we did see success with retargeting ads on TikTok.

No Perfect Model

I know of no perfect conversion attribution platform or technique. The purchase journeys of modern shoppers are too complex and varied. But we can consistently gauge the impact of a channel or campaign by establishing the right process for a merchant’s products, marketing tactics, and tech setup.

How to Track ChatGPT Traffic in GA4

ChatGPT is becoming a valuable traffic source. It may not appear in a Google Analytics overview because the volume is small, but ChatGPT traffic is often the most engaging source, even more than organic search.

I base those observations on my experience optimizing client sites for AI answers.

Screenshot of GA report showing engagement for ChatGPT traffic.

ChatGPT traffic is often the most engaging, per Google Analytics. Click image to enlarge.

I know of no studies examining why ChatGPT traffic performs well, but I have two theories:

  • Like organic search, ChatGPT provides solutions to problems, with occasional links to external sites to learn more.

The trend may change as genAI tools become mainstream. Until then, monitoring AI traffic is essential.

Track ChatGPT Referrals

In Google Analytics 4:

  • Go to Acquisition > Traffic acquisition,
  • Below the graph in the drop-down, choose “Session source / Medium,”
  • In the “Search” field, type “gpt” and press Enter to filter session sources.
Screenshot of GA4 traffic acquisition report for ChatGPT.

In GA4, go to Acquisition > Traffic acquisition. Click image to enlarge.

Then create custom reports to access the data quickly.

Some external tools can filter GA4 data traffic. For example, Databox allows users to add the report to its dashboard and even overlay other data, such as conversions:

Screenshot of the Databox report.

Databox allows users to add the GA4 report for ChatGPT. Click image to enlarge.

ChatGPT does not disclose actual user prompts, but we can surmise the content by exploring the landing pages of those users. Each page solves a problem. Thus the prompt presumably requested that solution.

Analyze ChatGPT Referrals

In GA4:

  • Go to Engagement > Landing Pages,
  • Click “Add filter” below “Landing page,”
  • Select “Session source / Medium,”
  • Select “Contains” and type “gpt,”
  • Click “Apply.”

Build a “gpt” traffic source filter in GA4. Click image to enlarge.

This will filter traffic sources to those containing “gpt” and sort the landing pages by the most clicks from ChatGPT.
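If you would rather pull the same view programmatically, the GA4 Data API supports an equivalent “contains gpt” filter. The sketch below assumes the google-analytics-data Python client, application-default credentials, and a placeholder property ID.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # reads application-default credentials

request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="landingPage"), Dimension(name="sessionSource")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="90daysAgo", end_date="yesterday")],
    dimension_filter=FilterExpression(
        filter=Filter(
            field_name="sessionSource",
            string_filter=Filter.StringFilter(
                match_type=Filter.StringFilter.MatchType.CONTAINS,
                value="gpt",
                case_sensitive=False,
            ),
        )
    ),
)

# Landing pages receiving sessions from sources containing "gpt".
for row in client.run_report(request).rows:
    page, source = (v.value for v in row.dimension_values)
    print(page, source, row.metric_values[0].value)
```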

The resulting report will help identify pages that ChatGPT cites to solve relevant problems. From there, query ChatGPT to see the context of those citations, as in:

This is my URL: [URL]. What prompts would trigger ChatGPT to cite the page as a solution?