Why Is Organic Traffic Down? Here’s How To Segment The Data via @sejournal, @torylynne

For an SEO, few things stoke panic like seeing a considerable decline in organic traffic. People are going to expect answers if they don’t already.

Getting to those answers isn’t always straightforward or simple, because SEO is neither of those things.

The success of an SEO investigation hinges on the ability to dig into the data, identify where exactly the performance decline is happening, and connect the dots to why it’s happening.

It’s a little bit like an actual investigation: Before you can catch the culprit or understand the motive, you have to gather evidence. In an SEO investigation, that’s a matter of segmenting data.

In this article, I’ll share some different ways to slice and dice performance data for valuable evidence that can help further your investigation.

Using Data To Confirm There’s An SEO Issue

Just because organic traffic is down doesn’t inherently mean that it’s an SEO problem.

So, before we dissect data to narrow down problem areas, the first thing we need to do is determine whether there’s actually an SEO issue at play.

After all, it could be something else altogether, in which case we’re wasting resources chasing a problem that doesn’t exist.

Is This A Tracking Issue?

In many cases, what looks like a big traffic drop is just an issue with tracking on the site.

To determine whether tracking is functioning correctly, there are a couple of things we need to look for in the data.

The first is consistent drops across channels.

Zoom out of organic search and see what’s happening in other sources and channels.

If you’re seeing meaningful drops across email, paid, etc., that are consistent with organic search, then it’s more than likely that tracking isn’t working correctly.

The other thing we’re looking for here is inconsistencies between internal data and Google Search Console.

Of course, there’s always a bit of inconsistency between first-party data and GSC-reported organic traffic. But if those differences are significantly more pronounced for the time period in question, that hints at a tracking problem.
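If you want to put a number on that comparison rather than eyeball it, a short script does the job. Here’s a minimal sketch, assuming you’ve exported daily organic sessions from your analytics platform and daily clicks from GSC to CSVs – the file names and column names are placeholders, not a prescribed format:

```python
# Rough check for tracking gaps: watch the ratio of analytics-reported organic
# sessions to GSC-reported clicks. A sudden drift in that ratio (rather than a
# drop in both numbers together) points toward a tracking problem.
import pandas as pd

ga_data = pd.read_csv("organic_sessions_daily.csv", parse_dates=["date"])  # columns: date, sessions
gsc_data = pd.read_csv("gsc_clicks_daily.csv", parse_dates=["date"])       # columns: date, clicks

merged = ga_data.merge(gsc_data, on="date", how="inner")
merged["sessions_per_click"] = merged["sessions"] / merged["clicks"]

# Flag days where the ratio lands well outside its typical range.
baseline = merged["sessions_per_click"].median()
merged["suspect"] = ~merged["sessions_per_click"].between(baseline * 0.5, baseline * 1.5)

print(merged.loc[merged["suspect"], ["date", "sessions", "clicks", "sessions_per_click"]])
```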

Is This A Brand Issue?

Organic search traffic from Google falls into two primary camps:

  • Brand traffic: Traffic driven by user queries that include the brand name.
  • Non-brand traffic: Traffic driven by brand-agnostic user queries.

Non-brand traffic is directly affected by SEO work, whereas brand traffic is mostly impacted by the work that happens in other channels.

When a user includes the brand in their search, they’re already brand-aware. They’re a returning user, or they’ve encountered the brand through marketing efforts in channels like PR, paid social, etc.

When marketing efforts in other channels are scaled back, the brand reaches fewer users. Since fewer people see the brand, fewer people search for it.

Or, if customers sour on the brand, there are fewer people using search to come back to the site.

Either way, it’s not an SEO problem. But in order to confirm that, we need to filter the data down.

Go to Performance in Google Search Console and exclude any queries that include your brand name. Then compare the data against a previous period – usually YoY if you need to account for seasonality. Repeat the comparison with the filter reversed, so you’re looking only at brand queries.

If non-brand traffic has stayed consistent, while brand traffic has dropped, then this is a brand issue.

filtering queries using regex in Google Search Console
Screenshot from Google Search Console, November 2025

Tip: Account for users misspelling your brand name by filtering queries using fragments. For example, at Gray Dot Co, we get a lot of brand searches for things like “Gray Company” and “Grey Dot Company.” By using the simple regex “gray|grey” I can catch brand search activity that would otherwise fall through the cracks.
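The same brand/non-brand split is easy to reproduce outside the GSC interface. Here’s a rough sketch in Python, assuming you’ve exported the query report to a CSV; the file name, column names, and the “gray|grey” pattern are placeholders you’d swap for your own brand and its misspellings:

```python
# Split an exported GSC query report into brand and non-brand buckets using a
# regex that also catches common misspellings of the brand name.
import pandas as pd

queries = pd.read_csv("gsc_queries.csv")  # columns: query, clicks, impressions

brand_pattern = r"gray|grey"  # placeholder - cover your brand name and its misspellings
queries["is_brand"] = queries["query"].str.contains(brand_pattern, case=False, regex=True)

print(queries.groupby("is_brand")[["clicks", "impressions"]].sum())
```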

Is It Seasonal Demand?

The most obvious example of seasonal demand is holiday shopping on ecommerce sites.

Think about something like jewelry. Most people don’t buy jewelry every day; they buy it for special occasions. We can confirm that seasonality by looking at Google Trends.

Zooming out to the past five years of interest in “jewelry,” it clearly peaks in November and December.

Google Trends graph for interest in jewelry over the past five years
Screenshot from Google Trends, November 2025

For a site that sells jewelry, traffic in Q1 is, of course, going to be down from Q4.

I use a pretty extreme example here to make my point, but in reality, seasonality is widespread and often more subtle. It impacts businesses where you might not expect much seasonality at all.

The best way to understand its impact is to look at organic search data year-over-year. Do the peaks and valleys follow the same patterns?

If so, then we need to compare data YoY to get a true sense of whether there’s a potential SEO problem.
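One way to answer that question with numbers instead of squinting at charts: roll daily sessions up to months and compare each month against the same month a year earlier. This is a minimal sketch that assumes a daily export of organic sessions; the file and column names are placeholders.

```python
# Seasonality check: compare each month with the same month one year earlier
# to separate seasonal dips from genuine decline.
import pandas as pd

daily = pd.read_csv("organic_sessions_daily.csv", parse_dates=["date"])  # columns: date, sessions
monthly = daily.set_index("date")["sessions"].resample("MS").sum()

yoy = pd.DataFrame({"this_year": monthly, "last_year": monthly.shift(12)}).dropna()
yoy["yoy_change_pct"] = (yoy["this_year"] / yoy["last_year"] - 1) * 100

print(yoy.tail(12))
```

If the year-over-year percentages hold steady even while month-over-month traffic drops, you’re most likely looking at seasonality rather than an SEO issue.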

Is It Industry Demand?

SEOs need to keep tabs on not just what’s happening internally, but also what’s going on externally. A big piece of that is checking the pulse of organic demand for the topics and products that are central to the brand.

Products fall out of vogue, technologies become obsolete, and consumer behavior changes – that’s just the reality of business. When there are fewer potential customers in the landscape, there are fewer clicks to win, and fewer sessions to drive.

Take cameras, for instance. As the cameras on our phones got more sophisticated, digital cameras became less popular. And as they became less popular, searches for cameras dwindled.

Now, they’re making a comeback with younger generations. More people searching, more traffic to win.

npr article headline why gen z loves the digital compact cameras that millennials used to covet
Screenshot from npr.com, November 2025

You can see all of this at play in the search landscape by turning to Google Trends: the downtrend in interest caused by advances in technology, and the uptrend boosted by shifts in societal trends.

Google Trends graph showing search interest in cameras since 2004
Screenshot from Google Trends, November 2025

When there are drops in industry, product, or topic demand within the landscape, we need to ask ourselves whether the brand’s organic traffic loss is proportional to the overall loss in demand.

Is Paid Search Cannibalizing Organic Search?

Even if a URL on the site ranks well in organic results, ads are still higher on the SERP. So, if a site is running an ad for the same query it already ranks for, then the ad is going to get more clicks by nature.

When businesses give their PPC budgets a boost, there’s potential for this to happen across multiple key SERPs.

Let’s say a site drives a significant chunk of its organic traffic from four or five product landing pages. If the brand introduces ads to those SERPs, clicks that used to go to the organic result start going to the ad.

That can have a significant impact on organic traffic numbers. But search users are still getting to the same URLs using the same queries.

To confirm, pull sessions by landing pages from both sources. Then, compare the data from before the paid search changes to the period following the change.

If major landing pages consistently show a positive delta that cancels out the negative delta in organic search, you’re not losing organic traffic; you’re lending it.

YoY comparison of sessions by landing page for paid search and organic search in GA4
Screenshot from Google Analytics, November 2025
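If you’d rather run that comparison in a script than in the GA4 interface, here’s a hedged sketch. It assumes you’ve exported sessions by landing page, channel, and period into a single CSV; the file name and column names are mine, not GA4’s.

```python
# Cannibalization check: for each landing page, does the paid-search gain
# roughly cancel out the organic loss between the two periods?
import pandas as pd

# columns: landing_page, channel ("organic"/"paid"), period ("before"/"after"), sessions
df = pd.read_csv("sessions_by_landing_page.csv")

pivot = df.pivot_table(
    index="landing_page", columns=["channel", "period"], values="sessions", aggfunc="sum"
).fillna(0)
pivot.columns = ["_".join(col) for col in pivot.columns]  # e.g., "organic_after"

pivot["organic_delta"] = pivot["organic_after"] - pivot["organic_before"]
pivot["paid_delta"] = pivot["paid_after"] - pivot["paid_before"]
pivot["net_delta"] = pivot["organic_delta"] + pivot["paid_delta"]

# Pages with a big organic loss but a near-zero net delta are being cannibalized,
# not abandoned by searchers.
print(pivot.sort_values("organic_delta")[["organic_delta", "paid_delta", "net_delta"]].head(10))
```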

Segmenting Data To Find SEO Issues

Once we have confirmation that the organic traffic declines point to an SEO issue, we can start zooming in.

Segmenting data in different ways helps pinpoint problem areas and find patterns. Only then can we trace those issues to the cause and craft a strategy for recovery.

URL

Most SEOs are going to filter their organic traffic down by URL. It lets us see which pages are struggling and analyze those pages for potential improvements.

It also helps find patterns across pages that make it easier to isolate the cause of more widespread issues. For example, if the site is losing traffic across its product listing pages, it could signal that there’s a problem with the template for that page.

But segmenting by URL also helps us answer a very important question when we pair it with conversion data.

Do We Really Care About This Traffic?

Clicks are only helpful if they help drive business-valuable user interactions like conversions or ad views. For some sites, like online publications, traffic is valuable in and of itself because users coming to the site are going to see ads. The site still makes money.

But for brands looking to drive conversions, it could just be empty traffic if it’s not helping drive that primary key performance indicator (KPI).

A top-of-funnel blog post might drive a lot of traffic because it ranks for very high-volume keywords. If that same blog post is a top traffic-driving organic landing page, a slip in rankings means a considerable organic traffic drop.

But the users entering those high-volume keywords might not be very qualified potential customers.

Looking at conversions by landing page can help brands understand whether the traffic loss is ultimately hurting the bottom line.

The best way to understand is to turn to attribution.

First-touch attribution quantifies an organic landing page’s value in terms of the conversions it helps drive down the line. For most businesses, someone isn’t likely to convert the first time they visit the site. They usually come back and purchase.

Last-touch attribution, on the other hand, shows the organic landing pages that people come to when they’re ready to make a purchase. Both are valuable!

Query

Filtering performance by query can help you understand which terms or topic areas to focus improvements on. That’s not new news.

Sometimes, it’s as easy as doing a period-over-period comparison in GSC, ordering by clicks lost, and looking for obvious patterns, i.e., are the queries with the most decline just subtle variants of one another?

If there aren’t obvious patterns and the queries in decline are more widespread, that’s where topic clustering can come into the mix.

Topic Clustering With AI

Using AI for topic clustering helps quickly identify any potential relationships between queries that are seeing performance dips.

Go to GSC and filter performance by query, looking for any YoY declines in clicks and average position.

YoY comparison in Google Search Console for clicks and average position by query
Screenshot from Google Search Console, November 2025

Then export this list of queries and use your favorite ML script to group the keywords into topic clusters.
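“Your favorite ML script” will vary, but here’s one possible sketch of that step: embed the declining queries with a sentence-transformer model and group them with k-means. The input file, column name, model choice, and cluster count are all assumptions to adjust, not a prescription.

```python
# Group declining queries into rough topic clusters using text embeddings.
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

queries = pd.read_csv("declining_queries.csv")["query"].dropna().tolist()

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(queries)

kmeans = KMeans(n_clusters=8, random_state=42)
labels = kmeans.fit_predict(embeddings)

clusters = pd.DataFrame({"query": queries, "cluster": labels})
for cluster_id, group in clusters.groupby("cluster"):
    print(f"\nCluster {cluster_id}:")
    print(group["query"].head(10).to_string(index=False))
```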

The resulting list of semantic groupings can provide an idea of topics where a site’s authority is slipping in search.

In turn, it helps narrow the area of focus for content improvements and other optimizations to potentially build authority for the topics or products in question.

Identifying User Intent

When users search using specific terms, the type of content they’re looking for – and their objective – differs based on the query. These user expectations can be broken out into four different high-level categories:

  • Informational (top of funnel): Users are looking for answers to questions, explanations, or general knowledge about topics, products, concepts, or events.
  • Commercial (middle of funnel): Users are interested in comparing products, reading reviews, and gathering information before making a purchase decision.
  • Transactional (bottom of funnel): Users are looking to perform a specific action, such as making a purchase, signing up for a service, or downloading a file.
  • Navigational: Brand-familiar users are using the search engine as a shortcut to find a specific website or webpage.

By segmenting on user intent, we can identify the user objectives where the site, or specific pages on it, is falling short. That gives us a lens into the performance decline, making it easier to identify possible causes from the perspective of user experience.

If the majority of queries losing clicks and position are informational, it could signal shortcomings in the site’s blog content. If the queries are consistently commercial, it might call for an investigation into how the site approaches product detail and/or listing pages.

GSC doesn’t provide user intent in its reporting, so this is where a third-party SEO tool can come into play. If you have position tracking set up and GSC connected, you can use the tool’s rankings report to identify queries in decline and their user intent.

If not, you can still get the data you need by using a mix of GSC and a tool like Ahrefs.
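If you’re working without a tool that labels intent, a crude first pass can come from simple keyword rules. This is only a heuristic sketch; the keyword lists and the brand term are placeholders, and real intent classification is messier than this.

```python
# Rough, rule-based intent bucketing for a list of declining queries.
import pandas as pd

BRAND_TERMS = ("gray dot",)  # placeholder brand name
TRANSACTIONAL = {"buy", "price", "pricing", "discount", "coupon", "order"}
COMMERCIAL = {"best", "review", "reviews", "vs", "compare", "top", "alternatives"}
INFORMATIONAL = {"how", "what", "why", "guide", "tips", "ideas"}

def classify(query: str) -> str:
    q = query.lower()
    words = set(q.split())
    if any(term in q for term in BRAND_TERMS):
        return "navigational"
    if words & TRANSACTIONAL or "near me" in q:
        return "transactional"
    if words & COMMERCIAL:
        return "commercial"
    if words & INFORMATIONAL:
        return "informational"
    return "unclassified"

queries = pd.read_csv("declining_queries.csv").dropna(subset=["query"])  # columns: query, clicks_change
queries["intent"] = queries["query"].map(classify)

print(queries.groupby("intent")["clicks_change"].sum().sort_values())
```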

Device

This view of performance data is pretty simple, but it’s equally easy to overlook!

When the large majority of performance declines are attributed to ONLY desktop or mobile, device data helps identify potential tech or UX issues within the mobile or desktop experience.

The important thing to remember is that any declines need to be considered proportionally. Take the metrics for the site below…

YoY comparison in Google Search Console of clicks by device type
Screenshot from Google Search Console, November 2025

At first glance, the data makes it look like there might be an issue with the desktop experience. But we need to look at things in terms of percentages.

Desktop: (1 - 648/1545) x 100 ≈ 58% decline

Mobile: (1 - 149/316) x 100 ≈ 53% decline

While desktop shows a much larger decline in terms of click count, the percentage of decline YoY is fairly similar across both desktop and mobile. So we’re probably not looking for anything device-specific in this scenario.
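If you want to sanity-check the math, the same proportional comparison takes only a few lines. The click counts below are just the illustrative numbers from the screenshot above.

```python
# Proportional YoY decline by device, using the illustrative counts above.
clicks = {"desktop": (1545, 648), "mobile": (316, 149)}  # (previous period, current period)

for device, (previous, current) in clicks.items():
    decline_pct = (1 - current / previous) * 100
    print(f"{device}: {decline_pct:.0f}% decline")
```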

Search Appearance

Rich results and SERP features are an opportunity to stand out on the SERP and drive more traffic through enhanced results. Using the search appearance filter in Google Search Console, you can see traffic from different types of rich results and SERP features:

  • Forums.
  • AMP Top Story (AMP page + Article markup).
  • Education Q&A.
  • FAQ.
  • Job Listing.
  • Job Details.
  • Merchant Listing.
  • Product Snippet.
  • Q&A.
  • Review Snippet.
  • Recipe Gallery.
  • Video.

This is the full list of possible features with rich results (courtesy of SchemaApp), though you’ll only see filters for search appearances where the domain is currently positioned.

In most cases, Google is able to generate these types of results because there is structured data on pages. The notable exceptions are Q&A, translated results, and video.

So when there are significant traffic drops coming from a specific type of search appearance, it signals that there’s potentially a problem with the structured data that enables that search feature.

YoY comparison in Google Search Console for search appearance
Screenshot from Google Search Console, November 2025

You can investigate structured data issues in the Enhancements reports in GSC. The exception is product snippets, which nest under the Shopping menu. Either way, the reports only show up in your left-hand nav if Google is aware of relevant data on the site.

For example, the product snippets report shows why some snippets are invalid, as well as ways to potentially improve valid results.

Product snippets report in Google Search Console
Screenshot from Google Search Console, November 2025

This context is valuable as you begin to investigate the technical causes of traffic drops from specific search features. In this case, it’s clear that Google is able to crawl and utilize product schema on most pages – but there are some opportunities to improve that schema with additional data.

Featured Snippets

When featured snippets originally came on the scene, they marked a major change to the SERP structure that resulted in a serious hit to traditional organic results.

Today, AI Overviews are doing the same. In fact, research from Seer shows that CTR has dropped 61% for queries that now include an AI overview (21% of searches). And that impact is outsized for informational queries.

In cases where rankings have remained relatively static, but traffic is dropping, there’s good reason to investigate whether this type of SERP change is a driver of loss.

While Google Search Console doesn’t report on featured snippets (for example, People Also Ask questions) or AI Overviews, third-party tools do.

In the third-party tool Semrush, you can use the Domain Overview report to check for featured snippet availability across keywords where the site ranks.

filtering to keyword with available AI overviews in the Semrush Domain Overview report
Screenshot from Semrush, November 2025

Do the keywords where you’re losing traffic have AI overviews? If you’re not cited, it’s time to start thinking about how you’re going to win that placement.

Search Type

Search type is another way to filter GSC data when you’re seeing traffic declines despite healthy, consistent rankings.

After all, web search is just one prong of Google Search. Think about it: How often do you use Google Image search? At least in my case, that’s fairly often.

Filter performance data by each of these search types to understand which one(s) are having the biggest impact on performance decline. Then use that insight to start connecting the dots to the cause.

filtering to Google image search performance in Google Search Console
Screenshot from Google Search Console, November 2025

Images are a great example. One simple line in the robots.txt can block Google from crawling a subfolder that hosts multitudes of images. As those images disappear from image search results, any clicks from those results disappear in tandem.

We don’t know to look for this issue until we slice the data accordingly!
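As an illustration, a rule like the one below is all it takes – the path is hypothetical – and Python’s built-in robots.txt parser can confirm what a given rule actually blocks.

```python
# Check whether a (hypothetical) robots.txt rule blocks an image URL from crawling.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /assets/images/",  # hypothetical subfolder hosting the site's images
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("Googlebot-Image", "https://example.com/assets/images/product.jpg"))  # False
```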

Geography

If the business operates physically in specific cities and states, then it likely already has geo-specific performance tracking set up through a tool.

But online-only businesses shouldn’t dismiss geographic data either – even at the city/state level! Declines are still a trigger to check geo-specific performance data.

Country

Just because the brand only sells and operates in one country doesn’t mean that’s where all the domain’s traffic is coming from. Drilling down by country in GSC allows you to see whether declines are coming from the country the brand is focused on or, potentially, another country altogether.

performance by country in Google Search Console
Screenshot from Google Search Console, November 2025

If it’s another country, it’s time to decide whether that matters. If the site is a publisher, it probably cares more about that traffic than an ecommerce brand that’s more focused on purchases in its country of operation.

Localization

When tools report rankings at the country level, ranking shifts in specific markets fly under the radar. It certainly happens, and major markets can have a major traffic impact!

Tools like BrightLocal, Whitespark, and Semrush let you analyze SERP rankings one level deeper than GSC, providing data down to the city.

You can check for ranking discrepancies across cities by spot-checking a small sample of the keywords with the greatest declines in clicks.

If I’m an SEO at the University of Phoenix, which is an online university, I’m probably pretty excited about ranking #1 in the United States for “online business degree.”

top five serp results for online business degree in the United States
Screenshot from Semrush, November 2025

But if I drill down further, I might be a little distraught to find that the domain isn’t in the top five SERP results for users in Denver, CO…

top five serp results for online business degree in Denver, Colorado
Screenshot from Semrush, November 2025

…or Raleigh, North Carolina.

top five serp results for online business degree in Raleigh, North Carolina
Screenshot from Semrush, November 2025

Catch Issues Faster By Leveraging AI For Data Analysis

Data segmentation is an important piece of any traffic drop investigation, because humans can see patterns in data that bots don’t.

However, the opposite is true too. With anomaly detection tooling, you get the best of both worlds.

When combined with monitoring and alert notifications, anomaly detection makes it possible to find and fix issues faster. Plus, it enables you to find data patterns in any after-the-impact investigations.

All of this helps ensure that your analysis is comprehensive, and might even point out gaps for further investigation.

This Colab tool from Sam Torres can help get your site set up!
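If you’d rather roll your own before wiring up a full monitoring stack, a basic version of anomaly detection takes only a few lines. This sketch (separate from the Colab tool above) flags days where clicks fall well outside a rolling baseline; the export format and thresholds are assumptions.

```python
# Flag days where daily organic clicks sit far outside a 28-day rolling baseline.
import pandas as pd

clicks = (
    pd.read_csv("gsc_daily_clicks.csv", parse_dates=["date"])  # columns: date, clicks
    .set_index("date")["clicks"]
)

rolling_mean = clicks.rolling(window=28, min_periods=14).mean()
rolling_std = clicks.rolling(window=28, min_periods=14).std()
z_scores = (clicks - rolling_mean) / rolling_std

anomalies = clicks[z_scores.abs() > 3]
print(anomalies)
```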

Congrats, You’re Close To Closing This Case

As Sherlock Holmes would say about an investigation, “It is a capital mistake to theorize before one has data.” With the right data in hand, the culprits start to reveal themselves.

Data segmentation empowers SEOs to uncover leads that point to possible causes. By narrowing it down based on the evidence, we ensure more accuracy, less work, faster answers, and quicker recovery.

And while leadership might not love a traffic drop, they’re sure to love that.



Featured Image: Vanz Studio/Shutterstock

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, but at times it can also be quite difficult. 

We collectively pitch dozens of ideas, and the editors meticulously review and debate the merits of each. We agonize over which ones might make the broadest impact, whether one is too similar to something we’ve featured in the past, and how confident we are that a recent advance will actually translate into long-term success. There is plenty of lively discussion along the way.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, I wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. 

These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. 

Male contraceptives 

There are several new treatments in the pipeline for men who are sexually active and wish to prevent pregnancy—potentially providing them with an alternative to condoms or vasectomies. 

Two of those treatments are now being tested in clinical trials by a company called Contraline. One is a gel that men would rub on their shoulder or upper arm once a day to suppress sperm production, and the other is a device designed to block sperm during ejaculation. (Kevin Eisenfrats, Contraline’s CEO, was recently named to our Innovators Under 35 list). A once-a-day pill is also in early-stage trials with the firm YourChoice Therapeutics. 

Though it’s exciting to see this progress, it will still take several years for any of these treatments to make their way through clinical trials—assuming all goes well.

World models 

World models have become the hot new thing in AI in recent months. Though they’re difficult to define, these models are generally trained on videos or spatial data and aim to produce 3D virtual worlds from simple prompts. They reflect fundamental principles, like gravity, that govern our actual world. The results could be used in game design or to make robots more capable by helping them understand their physical surroundings. 

Despite some disagreements on exactly what constitutes a world model, the idea is certainly gaining momentum. Renowned AI researchers including Yann LeCun and Fei-Fei Li have launched companies to develop them, and Li’s startup World Labs released its first version last month. And Google made a huge splash with the release of its Genie 3 world model earlier this year. 

Though these models are shaping up to be an exciting new frontier for AI in the year ahead, it seemed premature to deem them a breakthrough. But definitely watch this space. 

Proof of personhood 

Thanks to AI, it’s getting harder to know who and what is real online. It’s now possible to make hyperrealistic digital avatars of yourself or someone you know based on very little training data, using equipment many people have at home. And AI agents are being set loose across the internet to take action on people’s behalf. 

All of this is creating more interest in what are known as personhood credentials, which could offer a way to verify that you are, in fact, a real human when you do something important online. 

For example, we’ve reported on efforts by OpenAI, Microsoft, Harvard, and MIT to create a digital token that would serve this purpose. To get it, you’d first go to a government office or other organization and show identification. Then it’d be installed on your device and whenever you wanted to, say, log into your bank account, cryptographic protocols would verify that the token was authentic—confirming that you are the person you claim to be. 

Whether or not this particular approach catches on, many of us in the newsroom agree that the future internet will need something along these lines. Right now, though, many competing identity verification projects are in various stages of development. One is World ID by Sam Altman’s startup Tools for Humanity, which uses a twist on biometrics. 

If these efforts reach critical mass—or if one emerges as the clear winner, perhaps by becoming a universal standard or being integrated into a major platform—we’ll know it’s time to revisit the idea.  

The world’s oldest baby

In July, senior reporter Jessica Hamzelou broke the news of a record-setting baby. The infant developed from an embryo that had been sitting in storage for more than 30 years, earning him the bizarre honorific of “oldest baby.” 

This odd new record was made possible in part by advances in IVF, including safer methods of thawing frozen embryos. But perhaps the greater enabler has been the rise of “embryo adoption” agencies that pair donors with hopeful parents. People who work with these agencies are sometimes more willing to make use of decades-old embryos. 

This practice could help find a home for some of the millions of leftover embryos that remain frozen in storage banks today. But since this recent achievement was brought about by changing norms as much as by any sudden technological improvements, this record didn’t quite meet our definition of a breakthrough—though it’s impressive nonetheless.

The Download: four (still) big breakthroughs, and how our bodies fare in extreme heat

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, with plenty of lively discussion along the way, but at times it can also be quite difficult.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, we wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. Read the full story to learn what they are.

MIT Technology Review Narrated: The quest to find out how our bodies react to extreme temperatures 

Scientists hope to prevent deaths from climate change, but heat and cold are more complicated than we thought. Researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. 

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A CDC panel voted to recommend delaying the hepatitis B vaccine for babies
Overturning a 30-year policy that has contributed to a huge decline in the virus. (STAT)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

2 Critical climate risks are growing across the Arab region 
Drought is the most immediate problem countries are having to grapple with. (Ars Technica)
+ Why Tehran is running out of water. (Wired $)

3 Netflix is buying Warner Bros for $83 billion 
If approved, it’ll be one of the most significant mergers in Hollywood history. (NBC)
+ Trump says the deal “could be a problem” due to Netflix’s already huge market share. (BBC)

4 The EU is fining X $140 million 
For failing to comply with its new Digital Services Act. (NPR)
+ Elon Musk is now calling for the entire EU to be abolished. (CNBC)
+ X also hit back by deleting the European Commission’s account. (Engadget)

5 AI slop is ruining Reddit
Moderators are getting tired of fighting the rising tide of nonsense. (Wired $)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 Scientists have deeply mixed feelings about AI tools
They can boost researchers’ productivity, but some worry about the consequences of relying on them. (Nature $)
+ ‘AI slop’ is undermining trust in papers presented at computer science gatherings. (The Guardian)
+ Meet the researcher hosting a scientific conference by and for AI. (MIT Technology Review)

7 Australia is about to ban under 16s from social media
It’s due to come into effect in two days—but teens are already trying to maneuver around it. (New Scientist $)

8 AI is enshittifying the way we write 🖊🤖
And most people haven’t even noticed. (NYT $)
+ AI can make you more creative—but it has limits. (MIT Technology Review)

9 Tech founders are taking etiquette lessons
The goal is to make them better at pretending to be normal. (WP $)

10 Are we getting stupider? 
It might feel that way sometimes, but there’s little solid evidence to support it. (New Yorker $)

Quote of the day

“It’s hard to be Jensen day to day. It’s almost nightmarish. He’s constantly paranoid about competition. He’s constantly paranoid about people taking Nvidia down.” 

—Stephen Witt, author of ‘The Thinking Machine’, a book about Nvidia’s rise, tells the Financial Times what it’s like to be its founder and chief executive, Jensen Huang.

One more thing

fleet of ships at sea

COURTESY OF OCEANBIRD

How wind tech could help decarbonize cargo shipping

Inhabitants of the Marshall Islands—a chain of coral atolls in the center of the Pacific Ocean—rely on sea transportation for almost everything. For millennia they sailed largely in canoes, but much of their seafaring movement today involves big, bulky, diesel-fueled cargo ships that are heavy polluters.

They’re not alone. Cargo shipping is responsible for about 3% of the world’s annual greenhouse-­gas emissions, and that figure is currently on track to rise to 10% by 2050.

The islands have been disproportionately experiencing the consequences of human-made climate change: warming waters, more frequent extreme weather, and rising sea levels. Now their residents are exploring a surprisingly traditional method of decarbonizing their fleets. Read the full story.

—Sofia Quaglia

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Small daily habits can help build a life you enjoy.  
+ Using an air fryer to make an epic grilled cheese sandwich? OK, I’m listening.
+ I’m sorry but AI does NOT get to ruin em dashes for the rest of us. 
+ Daniel Clarke’s art is full of life and color. Check it out!

The State of AI: A vision of the world in 2030

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. You can read the rest of the series here.

In this final edition, MIT Technology Review’s senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about where AI will go next, and what our world will look like in the next five years.

(As part of this series, join MIT Technology Review’s editor in chief, Mat Honan, and editor at large, David Rotman, for an exclusive conversation with Financial Times columnist Richard Waters on how AI is reshaping the global economy. Live on Tuesday, December 9 at 1:00 p.m. ET. This is a subscriber-only event and you can sign up here.)

state of AI

Will Douglas Heaven writes: 

Every time I’m asked what’s coming next, I get a Luke Haines song stuck in my head: “Please don’t ask me about the future / I am not a fortune teller.” But here goes. What will things be like in 2030? My answer: same but different. 

There are huge gulfs of opinion when it comes to predicting the near-future impacts of generative AI. In one camp we have the AI Futures Project, a small donation-funded research outfit led by former OpenAI researcher Daniel Kokotajlo. The nonprofit made a big splash back in April with AI 2027, a speculative account of what the world will look like two years from now. 

The story follows the runaway advances of an AI firm called OpenBrain (any similarities are coincidental, etc.) all the way to a choose-your-own-adventure-style boom or doom ending. Kokotajlo and his coauthors make no bones about their expectation that in the next decade the impact of AI will exceed that of the Industrial Revolution—a 150-year period of economic and social upheaval so great that we still live in the world it wrought.

At the other end of the scale we have team Normal Technology: Arvind Narayanan and Sayash Kapoor, a pair of Princeton University researchers and coauthors of the book AI Snake Oil, who push back not only on most of AI 2027’s predictions but, more important, on its foundational worldview. That’s not how technology works, they argue.

Advances at the cutting edge may come thick and fast, but change across the wider economy, and society as a whole, moves at human speed. Widespread adoption of new technologies can be slow; acceptance slower. AI will be no different. 

What should we make of these extremes? ChatGPT came out three years ago last month, but it’s still not clear just how good the latest versions of this tech are at replacing lawyers or software developers or (gulp) journalists. And new updates no longer bring the step changes in capability that they once did. 

And yet this radical technology is so new it would be foolish to write it off so soon. Just think: Nobody even knows exactly how this technology works—let alone what it’s really for. 

As the rate of advance in the core technology slows down, applications of that tech will become the main differentiator between AI firms. (Witness the new browser wars and the chatbot pick-and-mix already on the market.) At the same time, high-end models are becoming cheaper to run and more accessible. Expect this to be where most of the action is: New ways to use existing models will keep them fresh and distract people waiting in line for what comes next. 

Meanwhile, progress continues beyond LLMs. (Don’t forget—there was AI before ChatGPT, and there will be AI after it too.) Technologies such as reinforcement learning—the powerhouse behind AlphaGo, DeepMind’s board-game-playing AI that beat a Go grand master in 2016—are set to make a comeback. There’s also a lot of buzz around world models, a type of generative AI with a stronger grip on how the physical world fits together than LLMs display.

Ultimately, I agree with team Normal Technology that rapid technological advances do not translate to economic or societal ones straight away. There’s just too much messy human stuff in the middle. 

But Tim, over to you. I’m curious to hear what your tea leaves are saying. 

Tim Bradshaw and Will Douglas Heaven

FT/MIT TECHNOLOGY REVIEW | ADOBE STOCK

Tim Bradshaw responds

Will, I am more confident than you that the world will look quite different in 2030. In five years’ time, I expect the AI revolution to have proceeded apace. But who gets to benefit from those gains will create a world of AI haves and have-nots.

It seems inevitable that the AI bubble will burst sometime before the end of the decade. Whether a venture capital funding shakeout comes in six months or two years (I feel the current frenzy still has some way to run), swathes of AI app developers will disappear overnight. Some will see their work absorbed by the models upon which they depend. Others will learn the hard way that you can’t sell services that cost $1 for 50 cents without a firehose of VC funding.

How many of the foundation model companies survive is harder to call, but it already seems clear that OpenAI’s chain of interdependencies within Silicon Valley make it too big to fail. Still, a funding reckoning will force it to ratchet up pricing for its services.

When OpenAI was created in 2015, it pledged to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” That seems increasingly untenable. Sooner or later, the investors who bought in at a $500 billion price tag will push for returns. Those data centers won’t pay for themselves. By that point, many companies and individuals will have come to depend on ChatGPT or other AI services for their everyday workflows. Those able to pay will reap the productivity benefits, scooping up the excess computing power as others are priced out of the market.

Being able to layer several AI services on top of each other will provide a compounding effect. One example I heard on a recent trip to San Francisco: Ironing out the kinks in vibe coding is simply a matter of taking several passes at the same problem and then running a few more AI agents to look for bugs and security issues. That sounds incredibly GPU-intensive, implying that making AI really deliver on the current productivity promise will require customers to pay far more than most do today.

The same holds true in physical AI. I fully expect robotaxis to be commonplace in every major city by the end of the decade, and I even expect to see humanoid robots in many homes. But while Waymo’s Uber-like prices in San Francisco and the kinds of low-cost robots produced by China’s Unitree give the impression today that these will soon be affordable for all, the compute cost involved in making them useful and ubiquitous seems destined to turn them into luxuries for the well-off, at least in the near term.

The rest of us, meanwhile, will be left with an internet full of slop and unable to afford AI tools that actually work.

Perhaps some breakthrough in computational efficiency will avert this fate. But the current AI boom means Silicon Valley’s AI companies lack the incentives to make leaner models or experiment with radically different kinds of chips. That only raises the likelihood that the next wave of AI innovation will come from outside the US, be that China, India, or somewhere even farther afield.

Silicon Valley’s AI boom will surely end before 2030, but the race for global influence over the technology’s development—and the political arguments about how its benefits are distributed—seem set to continue well into the next decade. 

Will replies: 

I am with you that the cost of this technology is going to lead to a world of haves and have-nots. Even today, $200+ a month buys power users of ChatGPT or Gemini a very different experience from that of people on the free tier. That capability gap is certain to increase as model makers seek to recoup costs. 

We’re going to see massive global disparities too. In the Global North, adoption has been off the charts. A recent report from Microsoft’s AI Economy Institute notes that AI is the fastest-spreading technology in human history: “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.” And yet AI is useless without ready access to electricity and the internet; swathes of the world still have neither. 

I still remain skeptical that we will see anything like the revolution that many insiders promise (and investors pray for) by 2030. When Microsoft talks about adoption here, it’s counting casual users rather than measuring long-term technological diffusion, which takes time. Meanwhile, casual users get bored and move on. 

How about this: If I live with a domestic robot in five years’ time, you can send your laundry to my house in a robotaxi any day of the week. 

JK! As if I could afford one. 

Further reading 

What is AI? It sounds like a stupid question, but it’s one that’s never been more urgent. In this deep dive, Will unpacks decades of spin and speculation to get to the heart of our collective technodream. 

AGI—the idea that machines will be as smart as humans—has hijacked an entire industry (and possibly the US economy). For MIT Technology Review’s recent New Conspiracy Age package, Will takes a provocative look at how AGI is like a conspiracy.

The FT examined the economics of self-driving cars this summer, asking who will foot the multi-billion-dollar bill to buy enough robotaxis to serve a big city like London or New York.

A plausible counter-argument to Tim’s thesis on AI inequalities is that freely available open-source (or more accurately, “open weight”) models will keep pulling down prices. The US may want frontier models to be built on US chips but it is already losing the global south to Chinese software.

5 Content Marketing Ideas for January 2026

Each new year is a time to reset, restart, and renew, even for content marketing.

In January 2026, ecommerce marketers can publish content celebrating the U.S.’s 250th year, share Wikipedia’s anniversary, take a deep dive into what makes products great, or even appreciate simple pleasures such as puzzles and Bloody Marys!

What follows are five content marketing ideas your business can use in January 2026.

250 Years

AI illustration of a U.S. flag

America’s 250th birthday will likely be a widespread event.

In 2026, the United States will celebrate its 250th year as a nation. The event will go by a few names (keywords) such as “anniversary,” “sestercentennial,” “quarter-millennial,” or “semiquincentennial.”

If America’s bicentennial in 1976 is an indication of what to expect in 2026, there will be promotions, parties, and opportunities. Many referred to the 1976 occasion as the “buycentennial” because of the increase in marketing and spending.

Promotional content can focus on patriotic products or emphasize history and how-tos.

For example, an apparel retailer could publish a 250-year fashion series that included videos, articles, and even interactive elements.

Collage of seven images showing apparel from 1776 to present.

Two hundred fifty years of fashion might reveal more continuity than change.

For how-to content, something as simple as “How to Celebrate the Sestercentennial” could work.

Bloody Mary Day

Photo of a Bloody Mary with various garnishes on a bar

The Bloody Mary is a classic cocktail, often heavily garnished.

The classic Bloody Mary cocktail is a mixture of vodka, tomato juice, and spices, including salt, Worcestershire sauce, and Tabasco.

While it has no curative properties, the Bloody Mary is a popular hangover remedy. As such, Americans celebrate Bloody Mary Day each January 1.

The idea is simple enough. Folks drink a lot on New Year’s Eve and wake up suffering from the aftereffects. The drink’s tomato juice is hydrating. The salt helps to restore electrolytes. The spices wake one up a bit. And, ultimately, the vodka prolongs recovery.

Bloody Marys offer plenty of content opportunities. For example, a travel retailer selling high-end luggage and travel accessories could focus on the cocktail’s history. Fernand Petiot invented the drink in 1921 while bartending at the famous Harry’s New York Bar in Paris. The establishment on 5 Rue Daunou is still open and remains a favorite for tourists.

Other article ideas include various Bloody Mary recipes, New Year’s recovery checklists, and entertainment ideas.

Wikipedia at 25

Wikipedia home page

Wikipedia home page.

On January 15, 2001, Larry Sanger and Jimmy Wales created Wikipedia, the human-edited encyclopedia that focuses on “verifiability, not truth.”

The result was an information source shunned by academics — scholarly papers do not cite it — but which approaches the accuracy of classic encyclopedias for many topics.

The platform has also faced a recent challenge from Elon Musk’s upstart Grokipedia, which attempts to challenge Wikipedia’s alleged inaccuracies and political biases.

Both the anniversary and recent publicity make Wikipedia a good topic for content marketers. A common (and entertaining) approach is calling out Wikipedia errors, such as those related to the products your business sells.

For example, music-and-pop-culture-related businesses could write about Wikipedia’s false claim that a member of the band Bilk left after being implicated in a Jamaican corned beef theft. Or the same shop might cover another Wikipedia claim that the U.S. military used Yoko Ono’s music during interrogations.

Those errors from April 2024 and August 2025, respectively, have been corrected, but the humor remains.

National Puzzle Day

Photo of human hands assembling a jigsaw puzzle on a table

Puzzle solving turns chaos into a satisfying sense of order.

Established in 2002, National Puzzle Day occurs each January 29. The occasion reminds us of the joy and benefits that puzzles provide.

The topic is relevant for many types of ecommerce businesses. Here are examples.

  • Woodworking supply shop: “10 Easy Jigsaw Puzzle Templates for Scrollsaw Beginners.”
  • Toy store: “The Secret World of Puzzle Makers.”

Marketers could also publish puzzles, such as themed crosswords, visual challenges, or various product-related games. And producing puzzle content could be a good way to try out AI-powered vibe coding.

Bill of Materials

Illustration of a backpack with highlights of its materials.

Product quality can hide in details that most consumers never see.

Consumers purchase some products on impulse and others through inference — cues of quality, durability, and craftsmanship, even when they can’t articulate why.

A “bill of materials” article or video describes a product’s construction or the sourcing of its materials. Such content might interview a supplier or take apart a product to show its inner workings.

The audience of potential customers may not remember each material name, but they will remember the impact, thinking, “These people know what they are making, and they are not afraid to show it.”

Google Disputes Report Claiming Ads Are Coming To Gemini In 2026 via @sejournal, @MattGSouthern

Google is publicly pushing back on an Adweek report that claimed the company told advertising clients it plans to bring ads to its Gemini AI chatbot next year.

Dan Taylor, Google’s Vice President of Global Ads, responded directly on X shortly after the story published, calling the report inaccurate and denying any plans to monetize the Gemini app.

The Original Report

Adweek’s Trishla Ostwal reported that Google had informed advertising clients about plans to introduce ads to Gemini. According to the exclusive story, Google representatives held calls with at least two advertising clients indicating that ad placements in Gemini were targeted for a 2026 rollout.

The agency buyers who spoke to Adweek remained anonymous. They said details on ad formats, pricing, and testing were unclear, and that Google had not shared prototypes or technical specifications about how ads would appear in the chatbot.

Notably, the report said this plan would be separate from advertisements in AI Mode, Google’s AI-powered search experience.

Google’s Response

Taylor disputed the claims publicly on X, writing: “This story is based on uninformed, anonymous sources who are making inaccurate claims. There are no ads in the Gemini app and there are no current plans to change that.”

Google’s official AdsLiaison account amplified the denial, reiterating that there are no ads in the Gemini app and no current plans to add them, and pointing out that ads currently appear in AI Overviews in English in the US, with expansion to more English-speaking countries, and are being tested in AI Mode.

Logan Kilpatrick, who works on Google’s Gemini team, responded to Taylor’s post with “thanks for clarifying!!”

Where Google Is Monetizing AI

While the Gemini app itself remains ad-free according to Google, the company is actively monetizing other AI-powered search experiences.

Google began showing ads in AI Overviews earlier this year and has been expanding that program to additional English-speaking countries. The company also continues testing advertisements within AI Mode.

Why This Matters

The question of how AI chatbots will be monetized has become increasingly relevant as these products gain mainstream adoption. Google, OpenAI, and other AI companies face pressure to generate revenue from expensive-to-run conversational AI products.

Just last week, code discovered in ChatGPT’s Android app suggested OpenAI may be building an advertising framework, though the company has not confirmed any plans to introduce ads.

For now, Google maintains that Gemini users won’t see ads in the chatbot app. Whether that position changes as the AI landscape evolves remains to be seen.

What Google’s 2025 Year in Review Tells Us About the Future of PPC via @sejournal, @brookeosmundson

As December quickly comes to a close, Google has released its 2025 Year in Review, a thorough list of product launches, upgrades, and improvements, all driven by AI.

These updates showed up across the board in Search, YouTube, Demand Gen, Performance Max, Merchant Center, and more.

Some updates felt like natural progressions from earlier releases. Others pushed Google’s vision for a more automated, more visual, and more data-informed ad system into clearer view.

For PPC managers and directors who spent the year testing generative AI, adjusting to new reporting controls, and rethinking creative workflows, Google’s recap is a useful way to understand what actually shaped paid media in 2025 and what still needs refinement.

The Biggest Releases of 2025

Before breaking down the themes and implications, here is a snapshot of the major updates Google highlighted in its year-end recap:

  • Ads in AI Overviews expanded to desktop and new global markets
  • AI Mode opened new mid-funnel inventory for deeper conversational queries
  • The launch of AI Max for Search, with new beta features being released in Q1 2026
  • Smart Bidding Exploration allowed for flexible ROAS targets
  • Full placement reporting expanded across the Search Partner Network
  • YouTube released Shoppable CTV, new Cultural Moments Sponsorship, new sports lineups, and a creator partnerships hub
  • Demand Gen added product feeds, target CPC bidding, campaign-level experiments, and channel controls
  • PMax gained channel-level reporting, full Search Terms, asset-level metrics, negative keyword lists, device targeting, and expanded search themes
  • App campaigns improved iOS measurement, Web-to-App flows, ROAS bidding, and conversion modeling
  • Merchant Center gained brand profiles, AI-powered visuals, loyalty tools, and priority fixes
  • Meridian introduced an open-sourced MMM approach with lower lift thresholds
  • Data Manager and Google tag gateway made data accuracy and consolidation easier
  • Asset Studio launched inside Google Ads with Nano Banana Pro powering image and video creation
  • Ads Advisor and Analytics Advisor delivered guided support for campaign building and analysis

Taken together, these updates show Google’s ongoing effort to blend automation with advertiser control, though some areas are maturing faster than others.

Below are details of some of the key updates worth digging into more.

How Google Repositioned Search for the Next Era

Google spent much of 2025 redefining how Search works, particularly around discovery moments and conversational intent. These shifts matter because they determine where ads can appear and how early advertisers can influence a buying journey.

Ads in AI Overviews

Google expanded Ads in AI Overviews across desktop and global markets. This placement sits inside AI-generated summaries and gives advertisers a chance to appear before users have clicked into a traditional results page. While Ads in AI Overviews was announced earlier this year, it wasn’t until the later part of 2025 that users started sharing screenshots of them in the wild.

AI Mode

Still in testing, AI Mode answers multi-step or nuanced queries with structured responses. Google now allows ads to appear below and within these responses when relevant. These moments previously had no paid inventory, so this is a new mid-funnel opportunity for advertisers who want to influence complex decision-making.

AI Max for Search

AI Max extended its feature set and remains one of Google’s fastest-growing Search products. Experiments, creative guidelines, and text customization give advertisers more agency over AI-generated assets. The challenge is managing expectations. AI Max simplifies setup but still requires strategic human oversight to shape relevance and cost efficiency.

Smart Bidding Exploration

Google cited an average 18 percent increase in unique converting query categories and a 19 percent conversion lift when advertisers used flexible ROAS targets. For brands that struggle to expand reach without overspending, this may become one of the most practical levers in 2026.

YouTube and Demand Gen Continued Their Growth Spurt

YouTube delivered some of Google’s most impactful upgrades this year. Shoppable CTV allows viewers to browse products directly on the big screen or pass the experience to their phone.

Cultural Moments Sponsorships created a packaged approach for brands that want presence during tentpole events. With new sports lineups across college and women’s leagues, Google is betting heavily on live and fandom-driven environments.

Demand Gen also saw meaningful improvement. Google noted a 26 percent increase in conversions per dollar driven by more than 60 AI-powered enhancements.

Combined with product feeds, channel controls, and full compatibility with Custom Experiments, Demand Gen now feels like a maturing format rather than an experimental successor to Discovery.

Performance Max Became More Transparent and More Controllable

Performance Max received a set of long overdue reporting and control features that changed how many advertisers worked inside the platform.

Channel reporting, full Search terms, asset-level insights, customer acquisition visibility, and segmentation options let PPC managers understand where performance originates. Negative keyword lists, device targeting, demographic controls, and expanded search themes finally gave advertisers the ability to tighten or expand performance intentionally rather than reactively.

For many teams, this was the year PMax felt less like a ‘take-it-or-leave-it’ automation tool and more like a high-powered campaign framework that needs guidance rather than blind trust.

Creativity Became a Central Focus

One theme that Google emphasized more strongly this year was creative quality and workflow efficiency. With Asset Studio and Nano Banana Pro, Google is signaling that creative is no longer a side component of performance. It is a core lever.

Asset Studio

The new in-platform creative workspace lets advertisers generate, edit, and review creative directly inside Google Ads. Nano Banana Pro now supports:

  • Natural language editing
  • Seasonal variations
  • Photorealistic product scenes
  • Multi-product compositions
  • Bulk image generation
  • Shareable assets for team review

For lean teams that struggle to produce enough visual variation for PMax, Demand Gen, or YouTube, this removes a major bottleneck. The quality still varies depending on brand style, texture, or lighting, but Google is clearly positioning AI-assisted creative as a foundational element in campaign setup.

Ad Preview and Workflow Support

Updated previews show ads across channels without guesswork, and shareable previews remove a lot of friction with internal stakeholders. This is one of Google’s more underrated releases because it directly solves a common workflow challenge: aligning creative teams and media teams without lengthy back-and-forth.

Google also introduced Ads Advisor, a guided AI assistant for campaign building and troubleshooting, which reduces operational burden for teams who manage multiple accounts or frequent experiments.

Why the iOS Measurement Updates Are More Important Than They Look

Buried within Google’s 2025 recap was an update most marketers will skim past, but one that app-focused advertisers immediately saw as one of the most meaningful improvements of the year.

Google expanded Web-to-App acquisition measurement for iOS, allowing advertisers to track when a user moves from a web campaign into an app install that ultimately leads to a valuable in-app action.

On the surface, this reads like a small reporting enhancement. In practice, it solves one of the most frustrating gaps in iOS app advertising since ATT went live in 2021.

For most advertisers who run traditional lead-gen or ecommerce campaigns, this update will feel distant. But for app marketers, it finally closes the loop on a user journey that used to look fragmented, inconsistent, or completely invisible.

Here’s what makes it so important:

  1. It brings back visibility that app advertisers lost years ago. After Apple’s App Tracking Transparency rollout, many advertisers lost the ability to see how web campaigns influenced app installs. That meant paid Search, Shopping, and even PMax often undervalued app growth, because installs and in-app actions didn’t get attributed correctly. Google’s new iOS Web-to-App measurement begins restoring that path, which helps app campaigns receive credit where it was previously impossible.
  2. It allows advertisers to optimize for higher-value actions, not just installs. Before this update, the disconnect between web traffic and app conversions often pushed advertisers toward shallow optimization goals. Now, Google can tie in-app action quality back to upstream campaigns. For app marketers, that means smarter bidding. For finance teams, it means cleaner forecasting.
  3. It makes cross-surface strategy practical again. Many app brands advertise across Search, YouTube, Shopping, and PMax but had to treat those touchpoints separately. This update reopens the door to a unified approach, where creative, bidding strategies, and budgets can align with actual user behavior instead of being fragmented by platform limitations.

App-focused teams have been navigating blind spots for years. They know how often web traffic influences app installs. They’ve seen how many high-value users start on mobile web before downloading. Without visibility, they’ve had to rely on directional data, blended reporting, or costly workarounds through MMP partners.

This update doesn’t solve every attribution limitation on iOS, but it does give app advertisers something they’ve wanted since ATT: a path to understanding the real value of web-driven app conversions.

It creates a more complete and realistic measurement loop, which is exactly what Google needs if it wants advertisers to invest confidently in App campaigns across Search, YouTube, Demand Gen, and Performance Max in 2026.

Where There’s Room for Improvement

A year-in-review should not only highlight progress but also acknowledge where advertisers still experience friction. My goal here is objective critique without negativity.

AI Overviews need clearer consistency

Advertisers still struggle to predict when AI Overviews will appear and how often ads surface within them. Before this becomes a must-have surface, Google needs more stability and clearer guidelines.

Creative control in AI Max is not fully predictable

Google is expanding customization settings, but advertisers still see unexpected rewrites or over-simplifications. More transparency around why AI chooses certain variations would help creative teams align expectations.

Asset Studio output varies by category

While the new tools are fast and flexible, certain product types still generate inconsistent or overly stylized visuals. This will improve, but brands that rely on strict visual identity may need hybrid workflows for now.

Measurement unification is still a challenge

Meridian is promising, but advertisers want easier alignment between Google’s lift results and those from Meta, Amazon, or independent MMM tools. The industry needs consistency, not isolated attribution logic.

These gaps do not diminish the significance of Google’s updates, but they remind us that AI-led advertising is still developing and requires both experimentation and skepticism.

Wrapping Up the Year

Google’s 2025 recap showed a platform that is evolving quickly but maturing steadily. Automation is no longer something advertisers fear or resist. The conversation has shifted to how PPC teams can direct these systems with clearer insight, smarter testing, and more intentional creative work.

If 2025 was about unlocking visibility and control, 2026 will be about applying those tools with discipline. Marketers who lean into experimentation, creative differentiation, and data strength will be the ones who stay ahead as Google’s ad ecosystem continues to change.

What was your biggest takeaway from Google’s updates this year?

Google Tests Social Channel Insights In Search Console via @sejournal, @MattGSouthern

Google Search Console is testing a feature that shows how linked social channels perform in Google Search, including clicks, impressions, and queries.

  • Search Console Insights now includes performance data for social channels Google has automatically linked to your website.
  • The feature shows clicks, impressions, top queries, and trending content for connected social profiles.
  • This is an early experiment available only to a limited set of sites with auto-detected social channels.

LLM Payments To Publishers: The New Economics Of Search via @sejournal, @MattGSouthern

For two decades, the arrangement between search engines and publishers was a symbiotic relationship where publishers allowed crawling, and search engines sent referral traffic back. That traffic helped to fund content creation for publishers through ads and subscriptions.

AI features are changing this, and the deal is starting to break down.

AI Overviews, ChatGPT, and answer engines keep users within their platforms instead of sending them to source sites. The result is that publishers are watching their traffic decline while AI companies crawl more content than ever.

New payment models are emerging to replace the old economics. Some involve usage-based revenue sharing, others are flat licensing deals worth millions, and a few have ended in court settlements. But the terms vary widely, and it’s unclear whether any model can sustain the content ecosystem that AI depends on.

This article examines the payment models taking shape, how different publishers are responding, and what SEO professionals should consider as the industry figures out sustainable economics.

How The Traffic Exchange Has Changed

When AI Overviews appear in results, the traffic loss is measurable, with only 8% of users clicking any link compared to 15% without AI summaries. That’s a 46.7% drop. Just 1% of users clicked citation links within the AI Overview itself.

Zero-click searches increased from 56% to 69% between 2024 and 2025. Organic traffic to U.S. websites declined from 2.3 billion visits to under 1.7 billion in the same period.

Digital Content Next surveyed premium publishers and found year-over-year traffic declines. Some sites hit double-digit percentage drops during peak impact weeks.

The crawl-to-referral ratio shows how unbalanced this is. Cloudflare’s analysis puts Google Search at roughly a 10:1 ratio, crawling about 10 pages for every referral sent back. OpenAI’s ratio was estimated at around 1,200:1 to 1,700:1.

Fewer pageviews mean fewer ad impressions, lower subscription conversions, and reduced affiliate revenue.

Payment Models Taking Shape

Three payment models are emerging.

1. Usage-Based Revenue Sharing

Perplexity launched its Comet Plus program in 2025. The company shares subscription revenue with publishers after keeping a cut for compute costs, though the exact split isn’t disclosed.

Publishers get paid when articles appear in Comet browser results, when they drive traffic through the browser, and when AI agents use content. Participants include TIME, Fortune, Los Angeles Times, Adweek, and Blavity.

ProRata offers a 50/50 split through its Gist.ai answer engine, backed by the News/Media Alliance, using attribution algorithms to track how much each article contributed.

These models tie pay to usage, but the pools stay small compared to traditional search revenue, and scaling depends on converting free users to paid subscribers.

2. Flat-Rate Licensing Deals

OpenAI has pursued licensing agreements with publishers. News Corp secured a multi-year deal reportedly worth hundreds of millions. Dotdash Meredith signed a reported $16 million agreement. Other deals include Financial Times, The Atlantic, Vox Media, and Associated Press.

These arrangements bundle three rights: training data access using archives to improve models, real-time content display with attribution in ChatGPT, and technology access letting publishers use OpenAI tools.

AI companies need both historical archives and current content, but this creates tiers where publishers with vast archives can negotiate deals while smaller publishers lack leverage.

Microsoft signed a reported $10 million deal with Informa’s Taylor & Francis for scholarly content. Google started licensing discussions with about 20 national news outlets in July. Most terms remain undisclosed.

3. Legal Settlements As Precedent

Anthropic settled with authors for $1.5 billion after Judge William Alsup’s June ruling in Bartz v. Anthropic. The ruling held that training on legally purchased books was fair use, while downloading books from pirate sites was infringement.

The settlement shows AI companies can afford to pay even while arguing in court they shouldn’t have to, and it provides a public benchmark other negotiations may reference, though specific terms remain sealed.

How Publishers Are Responding

Publishers have split into different camps.

Publishers Accepting Deals

Roger Lynch of Condé Nast said their OpenAI partnership “begins to make up for some of that revenue” lost from traditional search changes. Neil Vogel of Dotdash Meredith said “AI platforms should pay publishers for their content” when announcing their licensing agreement.

Publishers accepting deals cite new revenue streams, legal protection from copyright claims, influence over AI development, and recognition that AI search adoption appears inevitable, with many viewing early partnerships as positioning for future leverage.

Publishers Pursuing Litigation

The New York Times sued OpenAI and Microsoft in 2023. The complaint argues the companies created “a multi-billion-dollar for-profit business built in large part on the unlicensed exploitation of copyrighted works.”

Forbes declined a proposal from Perplexity, saying it “undervalued both our journalism and the Forbes brand.” By October 2024, lawsuits included News Corp properties against Perplexity, and eight daily newspapers against OpenAI and Microsoft.

Publishers refusing deals say the money’s too low, worry that accepting bad terms now legitimizes those terms going forward, and point out that AI summaries directly compete with their work.

Trade Organization Positions

Danielle Coffey, CEO of News/Media Alliance, said Google’s AI Mode practices are “parasitic, unsustainable and pose a real existential threat.” She suggests that AI systems are only as good as the content they are trained on.

Jason Kint of Digital Content Next noted that despite Google sending large monthly revenue checks through advertising, 78% of member digital revenue still comes from ads. Every point of search traffic lost “squeezes the budgets that fund investigative reporting.”

Both organizations demand that AI systems provide transparency, clearly attribute content, respect publishers’ roles, comply with competition laws, and not misrepresent original works.

The Emerging Division: Licensed Web Vs. Open Web

The payment model differences are creating two tiers of web content with different economics.

A “Licensed Web” consists of premium content behind APIs and licensing agreements. Publishers with vast archives, specialized expertise, or unique data sets are negotiating direct access deals with LLM companies. This content gets used for training and real-time retrieval with attribution and compensation.

The “Open Web” includes crawlable pages without licensing agreements: user-generated content, marketing material, commodity information, and sites lacking the leverage to negotiate terms. This content may still get crawled and used, but without direct compensation beyond minimal referral traffic.

This setup can lead to mismatched incentives. Publishers investing in differentiated, high-quality content may have licensing options to support their work. Meanwhile, those creating more easily replaceable information might struggle with commoditization, making it harder to find clear ways to earn revenue.

For practitioners, focus on developing your own research, unique data sets, specialized expertise, and original reporting. This increases both traditional search value and potential licensing value to AI platforms.

How Payment Models Are Reshaping SEO And Content Strategy

The shift from traffic to licensing is forcing changes across SEO.

The Citation Vs. Click Problem

Traditional SEO centered on rankings that drove clicks. LLM citations work differently: content appears in AI answers with attribution but generates fewer click-throughs. Lily Ray believes SEO is no longer just about ranking and traffic.

Practitioners are now tracking engagement quality, conversion rates, branded search, and direct traffic alongside traditional metrics. Some are quantifying AI citations across ChatGPT, Perplexity, and other platforms. This provides visibility into brand mentions even when referrals don’t materialize.

Bot Access Becomes A Business Decision

Publishers today find themselves making decisions about blocking content via robots.txt, choices that weren’t even on the table two years ago. The decision weighs AI visibility against potential traffic loss and the benefits of licensing.

Many content publishers are open to allowing bot access, valuing their presence in AI results more than guarding content that competitors also produce. News organizations prioritize speed and broad coverage for breaking stories, aiming to reach as many people as possible.

On the other hand, some publishers choose to restrict access to their high-value research and specialized insights, knowing that scarcity can give them stronger negotiating power. Those with paywalled analysis often block AI crawlers to protect their subscription models, ensuring they maintain control over their most valuable content.
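
Whichever camp a publisher falls into, a quick way to see where a site currently stands is to audit its live robots.txt against the user agents the major AI companies publish. Below is a minimal sketch using Python’s standard urllib.robotparser; the crawler names listed are the publicly documented AI/training user agents at the time of writing, and the domain is a placeholder, so verify both against current documentation before acting on the output.

```python
# Minimal sketch: audit which AI crawlers a site's robots.txt currently allows.
# User-agent tokens below are assumptions based on publicly documented AI crawlers;
# confirm them against each company's documentation.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder: swap in your own domain
AI_CRAWLERS = [
    "GPTBot",            # OpenAI training crawler
    "OAI-SearchBot",     # OpenAI search/answers crawler
    "ClaudeBot",         # Anthropic
    "PerplexityBot",     # Perplexity
    "Google-Extended",   # Google's AI training control token, distinct from Googlebot
    "CCBot",             # Common Crawl, widely used in training corpora
]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for agent in AI_CRAWLERS:
    allowed = parser.can_fetch(agent, f"{SITE}/")
    print(f"{agent:<16} {'allowed' if allowed else 'blocked'} at {SITE}/")
```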

ProRata and TollBit offer selective licensing as a middle ground. Publishers maintain AI visibility while getting paid. But AI companies haven’t widely adopted these platforms.

Measurement Systems Under Pressure

Traffic declines may trigger discussions with stakeholders who expect a recovery, and for sites that rely solely on advertising, this can be a challenging discussion to have.

Publishers are exploring alternative revenue models such as subscriptions, memberships, consulting, events, and affiliate partnerships, while also prioritizing email, newsletters, and apps.

Branded search remains more stable than overall traffic levels, emphasizing the importance of brand-building beyond search rankings.

Content Investment Questions

Payment uncertainty can make it hard to decide what content is worth investing in. Publishers with licensing deals might focus on what AI companies need for training or retrieval, while those without deals have to consider different factors.

The division between Licensed Web and Open Web influences these choices. Original research, unique data, and specialized expertise may justify different levels of investment compared to more common material.

Smaller publishers often lack the leverage of licensing. Creating high-quality content while competing with AI-generated summaries that don’t drive traffic raises ongoing questions about sustainability.

Content Sustainability Concerns

Revenue declines are forcing news organizations to cut staff, reducing investigative capacity and the production of original reporting.

The Society of Authors reports 12,000+ members have written letters saying they “do not consent” to AI training. That signals creative professionals may reconsider publishing if compensation doesn’t materialize.

More content is moving behind paywalls, which protects revenue but limits free information access. The News/Media Alliance warns that without fair compensation for publisher content, AI practices pose a significant threat to ongoing investment in journalism.

The challenge is that AI companies rely heavily on publishers for high-quality training data, yet AI systems that don’t send traffic make it harder for publishers to fund that content creation.

Right now, payment models might work well for large publishers with negotiating power, but mid-sized and small publishers face more uncertain financial situations.

Those with direct relationships to their audience and multiple sources of income are generally in a stronger position compared to those mainly relying on ads.

What’s Likely Next

Current LLM payment models don’t match what publishers earned from search traffic, and they also don’t reflect what AI companies extract through crawling.

Publishers are dividing into distinct camps, with some angling for deals while others are betting litigation will establish better terms than individual negotiations.

Trade organizations are pushing for regulatory solutions, but AI companies maintain their current approach works. OpenAI points to expanding partnerships and says deals provide fair value. Perplexity argues its revenue-sharing model aligns incentives. Google hasn’t announced plans beyond existing traffic-sharing arrangements.

What happens next depends on litigation outcomes, regulatory action, and whether market pressure forces AI platforms to improve terms.

Multiple paths forward remain possible, and for now, publishers face immediate decisions about bot access, content strategy, and revenue diversification without clarity on which approach will prove sustainable.

Featured Image: Roman Samborskyi/Shutterstock

Inside ChatGPT’s Confidential Report Visibility Metrics [Part 1] via @sejournal, @VincentTerrasi

A few weeks ago, I was given access to review a confidential OpenAI partner-facing report, the kind of dataset typically made available to a small group of publishers.

For the first time, from the report, we have access to detailed visibility metrics from inside ChatGPT, the kind of data that only a select few OpenAI site partners have ever seen.

This isn’t a dramatic “leak,” but rather an unusual insight into the inner workings of the platform, which will influence the future of SEO and AI-driven publishing over the next decade.

The consequences of this dataset far outweigh any single controversy: AI visibility is skyrocketing, but AI-driven traffic is evaporating.

This is the clearest signal yet that we are leaving the era of “search engines” and entering the era of “decision engines,” where AI agents surface, interpret, and synthesize information without necessarily directing users back to the source.

This forces every publisher, SEO professional, brand, and content strategist to fundamentally reconsider what online visibility really means.

1. What The Report Data Shows: Visibility Without Traffic

The report dataset covers a full month of visibility for a large media publisher. With surprising granularity, it breaks down how often a URL is displayed inside ChatGPT, where it appears inside the UI, how often users click on it, how many conversations it impacts, and the surface-level click-through rate (CTR) across different UI placements.

URL Display And User Interaction In ChatGPT

Image from author, November 2025

The dataset’s top-performing URL recorded 185,000 distinct conversation impressions, meaning it was shown in that many separate ChatGPT sessions.

Of these impressions, 3,800 were click events, yielding a conversation-level CTR of 2%. However, when counting multiple appearances within conversations, the numbers increase to 518,000 total impressions and 4,400 total clicks, reducing the overall CTR to 0.80%.
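
To make the two figures concrete, here is the arithmetic using the rounded numbers above; because the inputs are rounded, the results land close to, rather than exactly at, the reported 2% and 0.80%.

```python
# Conversation-level vs. total CTR, using the rounded figures quoted above.
conversation_impressions = 185_000   # distinct ChatGPT sessions the URL appeared in
conversation_clicks = 3_800

total_impressions = 518_000          # counts repeat appearances within conversations
total_clicks = 4_400

conversation_ctr = conversation_clicks / conversation_impressions
total_ctr = total_clicks / total_impressions

print(f"Conversation-level CTR: {conversation_ctr:.2%}")  # ~2.05%
print(f"Total CTR:              {total_ctr:.2%}")         # ~0.85%, reported as 0.80%
```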

This is an impressive level of exposure. However, it is not an impressive level of traffic.

Most other URLs performed dramatically worse:

  • 0.5% CTR (considered “good” in this context).
  • 0.1% CTR (typical).
  • 0.01% CTR (common).
  • 0% CTR (extremely common, especially for niche content).

This is not a one-off anomaly; it’s consistent across the entire dataset and matches external studies, including server log analyses by independent SEOs showing sub-1% CTR from ChatGPT sources.

We have experienced this phenomenon before, but never on this scale. Google’s zero-click era was the precursor. ChatGPT is the acceleration. However, there is a crucial difference: Google’s featured snippets were designed to provide quick answers while still encouraging users to click through for more information. In contrast, ChatGPT’s responses are designed to fully satisfy the user’s intent, rendering clicks unnecessary rather than merely optional.

2. The Surface-Level Paradox: Where OpenAI Shows The Most, Users Click The Least

The report breaks down every interaction into UI “surfaces,” revealing one of the most counterintuitive dynamics in modern search behavior. The response block, where LLMs place 95%+ of their content, generates massive impression volume, often 100 times more than other surfaces. However, CTR hovers between 0.01% and 1.6%, and, curiously, the better the answer quality, the lower the CTR.

LLM Content Placement And CTR Relationship

Image from author, November 2025

This is the new equivalent of “Position Zero,” except now it’s not just zero-click; it’s zero-intent-to-click. The psychology is different from that of Google. When ChatGPT provides a comprehensive answer, users interpret clicking as expressing doubt about the AI’s accuracy, indicating the need for further information that the AI cannot provide, or engaging in academic verification (a relatively rare occurrence). The AI has already solved their problem.

The sidebar tells a different story. This small area has far fewer impressions, but a consistently strong CTR ranging from 6% to 10% in the dataset. This is higher than Google’s organic positions 4 through 10. Users who click here are often exploring related content rather than verifying the main answer. The sidebar represents discovery mode rather than verification mode. Users trust the main answer, but are curious about related information.

Citations at the bottom of responses exhibit similar behavior, achieving a CTR of between 6% and 11% when they appear. However, they are only displayed when ChatGPT explicitly cites sources. These attract academically minded users and fact-checkers. Interestingly, the presence of citations does not increase the CTR of the main answer; it may actually decrease it by providing verification without requiring a click.

Search results are rarely triggered and usually only appear when ChatGPT determines that real-time data is needed. They occasionally show CTR spikes of 2.5% to 4%. However, the sample size is currently too small to be significant for most publishers, although these clicks represent the highest intent when they occur.

The paradox is clear: The more frequently OpenAI displays your content, the fewer clicks it generates. The less frequently it displays your content, the higher the CTR. This overturns 25 years of SEO logic. In traditional search, high visibility correlates with high traffic. In AI-native search, however, high visibility often correlates with information extraction rather than user referral.

“ChatGPT’s ‘main answer’ is a visibility engine, not a traffic engine.”

3. Why CTR Is Collapsing: ChatGPT Is An Endpoint, Not A Gateway

The comments and reactions on LinkedIn threads analyzing this data were strikingly consistent and insightful. Users don’t click because ChatGPT solves their problem for them. Unlike Google, where the answer is a link, ChatGPT provides the answer directly.

This means:

  • Satisfied users don’t click (they got what they needed).
  • Curious users sometimes click (they want to explore deeper).
  • Skeptical users rarely click (they either trust the AI or distrust the entire process).
  • Very few users feel the need to leave the interface.

As one senior SEO commented:

“Traffic stopped being the metric to optimize for. We’re now optimizing for trust transfer.”

Another analyst wrote:

“If ChatGPT cites my brand as the authority, I’ve already won the user’s trust before they even visit my site. The click is just a formality.”

This represents a fundamental shift in how humans consume information. In the pre-AI era, the pattern was: “I need to find the answer” → click → read → evaluate → decide. In the AI era, it has become: “I need an answer” → receive → trust → act, with no click required. AI becomes the trusted intermediary. The source becomes the silent authority.

Shift In Information Consumption

Image from author, November 2025

This marks the beginning of what some are calling “Inception SEO”: optimizing for the answer itself, rather than for click-throughs. The goal is no longer to be findable. The goal is to be the source that the AI trusts and quotes.

4. Authority Over Keywords: The New Logic Of AI Retrieval

Traditional SEO relies on indexation and keyword matching. LLMs, however, operate on entirely different principles. They rely on internal model knowledge wherever possible, drawing on trained data acquired through crawls, licensed content, and partnerships. They only fetch external data when the model determines that its internal knowledge is insufficient, outdated, or unverified.

When selecting sources, LLMs prioritize domain authority and trust signals, content clarity and structure, entity recognition and knowledge graph alignment, historical accuracy and factual consistency, and recency for time-sensitive queries. They then decide whether to cite at all based on query type and confidence level.

This leads to a profound shift:

  • Entity strength becomes more important than keyword coverage.
  • Brand authority outweighs traditional link building.
  • Consistency and structured content matter more than content volume.
  • Model trust becomes the single most important ranking factor.
  • Factual accuracy over long periods builds cumulative advantage.

“You’re no longer competing in an index. You’re competing in the model’s confidence graph.”

This has radical implications. The old SEO logic was “Rank for 1,000 keywords → Get traffic from 1,000 search queries.” The new AI logic is “Become the authoritative entity for 10 topics → Become the default source for 10,000 AI-generated answers.”

In this new landscape, a single, highly authoritative domain has the potential to dominate AI citations across an entire topic cluster. “Long-tail SEO” may become less relevant as AI synthesizes answers rather than matching specific keywords. Topic authority becomes more valuable than keyword authority. Being cited once by ChatGPT can influence millions of downstream answers.

5. The New KPIs: “Share Of Model” And In-Answer Influence

As CTR declines, brands must embrace metrics that reflect AI-native visibility. The first of these is “share of model presence”: how often your brand, entity, or URLs appear in AI-generated answers, regardless of whether they are clicked. This is analogous to “share of voice” in traditional advertising, but instead of measuring presence in paid media, it measures presence in the AI’s reasoning process.

LLM Decision Hierarchy

Image from author, November 2025

How to measure (a rough sampling sketch follows this list):

  • Track branded mentions in AI responses across major platforms (ChatGPT, Claude, Perplexity, Google AI Overviews).
  • Monitor entity recognition in AI-generated content.
  • Analyze citation frequency in AI responses for your topic area.
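
There is no official API for these numbers yet, so most teams approximate them by sampling prompts and checking whether the brand appears in the generated answers. The sketch below illustrates the idea against OpenAI’s Python client; the model name, prompt list, and brand strings are placeholders, and a real measurement would need a much larger, repeated sample run across several platforms.

```python
# Rough sketch: estimate "share of model presence" by sampling prompts and
# counting how often each brand is mentioned in the generated answers.
# Assumes the OpenAI Python client; model, prompts, and brands are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "What are the best running shoes for flat feet?",
    "Which CRM is best for a small ecommerce team?",
    # ...add a representative sample of queries for your topic area
]
BRANDS = ["Example Brand", "Competitor A", "Competitor B"]

mentions = {brand: 0 for brand in BRANDS}

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = (response.choices[0].message.content or "").lower()
    for brand in BRANDS:
        if brand.lower() in answer:
            mentions[brand] += 1

for brand, count in mentions.items():
    print(f"{brand}: mentioned in {count}/{len(PROMPTS)} answers ({count / len(PROMPTS):.0%})")
```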

LLMs are increasingly producing authoritative statements, such as “According to Publisher X…,” “Experts at Brand Y recommend…,” and “As noted by Industry Leader Z…”

This is the new “brand recall,” except it happens at machine speed and on a massive scale, influencing millions of users without them ever visiting your website. Being directly recommended by an AI is more powerful than ranking No. 1 on Google, as the AI’s endorsement carries algorithmic authority. Users don’t see competing sources; the recommendation is contextualized within their specific query, and it occurs at the exact moment of decision-making.

Then, there’s contextual presence: being part of the reasoning chain even when not explicitly cited. This is the “dark matter” of AI visibility. Your content may inform the AI’s answer without being directly attributed, yet still shape how millions of users understand a topic. When a user asks about the best practices for managing a remote team, for example, the AI might synthesize insights from 50 sources, but only cite three of them explicitly. However, the other 47 sources still influenced the reasoning process. Your authority on this topic has now shaped the answer that millions of users will see.

High-intent queries are another crucial metric. Narrow, bottom-of-funnel prompts still convert, showing a click-through rate (CTR) of between 2.6% and 4%. Such queries usually involve product comparisons, specific instructions requiring visual aids, recent news or events, technical or regulatory specifications requiring primary sources, or academic research requiring citation verification. The strategic implication is clear: Don’t abandon click optimization entirely. Instead, identify the 10-20% of queries where clicks still matter and optimize aggressively for those.

Finally, LLMs judge authority based on what might be called “surrounding ecosystem presence” and cross-platform consistency. This means internal consistency across all your pages; schema and structured data that machines can easily parse; knowledge graph alignment through presence in Wikidata, Wikipedia, and industry databases; cross-domain entity coherence, where authoritative third parties reference you consistently; and temporal consistency, where your authority persists over time.

This holistic entity SEO approach optimizes your entire digital presence as a coherent, trustworthy entity, not individual pages. Traditional SEO metrics cannot capture this shift. Publishers will require new dashboards to track AI citations and mentions, new tools to measure “model share” across LLM platforms, new attribution methodologies in a post-click world, and new frameworks to measure influence without direct traffic.

6. Why We Need An “AI Search Console”

Many SEOs immediately saw the same thing in the dataset:

“This looks like the early blueprint for an OpenAI Search Console.”

Right now, publishers cannot:

  • See how many impressions they receive in ChatGPT.
  • Measure their inclusion rate across different query types.
  • Understand how often their brand is cited vs. merely referenced.
  • Identify which UI surfaces they appear in most frequently.
  • Correlate ChatGPT visibility with downstream revenue or brand metrics.
  • Track entity-level impact across the knowledge graph.
  • Measure how often LLMs fetch real-time data from them.
  • Understand why they were selected (or not selected) for specific queries.
  • Compare their visibility to competitors.

Google had “Not Provided,” hiding keyword data. AI platforms may give us “Not Even Observable,” hiding the entire decision-making process. This creates several problems. For publishers, it’s impossible to optimize what you can’t measure; there’s no accountability for AI platforms, and asymmetric information advantages emerge. For the ecosystem, it reduces innovation in content strategy, concentrates power in AI platform providers, and makes it harder to identify and correct AI bias or errors.

Based on this leaked dataset and industry needs, an ideal “AI Search Console” would provide core metrics like impression volume by URL, entity, and topic; surface-level breakdowns; click-through rates and engagement metrics; conversation-level analytics showing unique sessions; and time-series data showing trends. It would show attribution and sourcing details: how often you’re explicitly cited versus implicitly used, which competitors appear alongside you, query categories where you’re most visible, and confidence scores indicating how much the AI trusts your content.

Diagnostic tools would explain why specific URLs were selected or rejected, what content quality signals the AI detected, your entity recognition status, knowledge graph connectivity, and structured data validation. Optimization recommendations would identify gaps in your entity footprint, content areas where authority is weak, opportunities to improve AI visibility, and competitive intelligence.

OpenAI and other AI platforms will eventually need to provide this data for several reasons. Regulatory pressure from the EU AI Act and similar regulations may require algorithmic transparency. Media partnerships will demand visibility metrics as part of licensing deals. Economic sustainability requires feedback loops for a healthy content ecosystem. And competitive advantage means the first platform to offer comprehensive analytics will attract publisher partnerships.

The dataset we’re analyzing may represent the prototype for what will eventually become standard infrastructure.

AI Search Console

Image from author, November 2025

7. Industry Impact: Media, Monetization, And Regulation

The comments raised significant concerns and opportunities for the media sector. The contrast between Google’s and OpenAI’s economic models is stark. Google contributes to media financing through neighbouring rights payments in the EU and other jurisdictions. It still sends meaningful traffic, albeit declining, and has established economic relationships with publishers. Google also participates in advertising ecosystems that fund content creation.

By contrast, OpenAI and similar AI platforms currently only pay select media partners under private agreements, send almost no traffic with a CTR of less than 1%, extract maximum value from content while providing minimal compensation, and create no advertising ecosystem for publishers.

AI Overviews already reduce organic CTR. ChatGPT takes this trend to its logical conclusion by eliminating almost all traffic. This will force a complete restructuring of business models and raise urgent questions: Should AI platforms pay neighbouring rights like search engines do? Will governments impose compensatory frameworks for content use? Will publishers negotiate direct partnerships with LLM providers? Will new licensing ecosystems emerge for training data, inference, and citation? How should content that is viewed but not clicked on be valued?

Several potential economic models are emerging. One model is citation-based compensation, where platforms pay based on how often content is cited or used. This is similar to music streaming royalties, though transparent metrics are required.

Under licensing agreements, publishers would license content directly to AI platforms, with tiered pricing based on authority and freshness. This is already happening with major outlets such as the Associated Press, Axel Springer, and the Financial Times. Hybrid attribution models would combine citation frequency, impressions, and click-throughs, weighted by query value and user intent, in order to create standardized compensation frameworks.

Regulatory mandates could see governments requiring AI platforms to share revenue with content creators, based on precedents in neighbouring rights law. This could potentially include mandatory arbitration mechanisms.

This would be the biggest shift in digital media economics since Google Ads. Platforms that solve this problem fairly will build sustainable ecosystems. Those that do not will face regulatory intervention and publisher revolts.

8. What Publishers And Brands Must Do Now

Based on the data and expert reactions, an emerging playbook is taking shape. Firstly, publishers must prioritize inclusion over clicks. The real goal is to be part of the solution, not to generate a spike in traffic. This involves creating comprehensive, authoritative content that AI can synthesize, prioritizing clarity and factual accuracy over tricks to boost engagement, structuring content so that key facts can be easily extracted, and establishing topic authority rather than chasing individual keywords.

Strengthening your entity footprint is equally critical. Every brand, author, product, and concept must be machine-readable and consistent. Publishers should ensure their entity exists on Wikidata and Wikipedia, maintain consistent NAP (name, address, phone number) details across all properties, implement comprehensive schema markup, create and maintain knowledge graph entries, build structured product catalogues, and establish clear entity relationships, linking companies to people, products, and topics.
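
As one concrete piece of that footprint, the sketch below builds a minimal schema.org Organization block with sameAs links tying the brand entity to its knowledge graph entries. Every name, URL, and identifier here is a placeholder to be replaced with your own verified profiles.

```python
# Minimal sketch: generate a schema.org Organization JSON-LD block with
# sameAs links that connect the brand entity to its knowledge graph entries.
# All names and URLs are placeholders for your own verified profiles.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Publisher",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/assets/logo.png",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Publisher",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/example-publisher",
    ],
    "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-555-0100",
        "contactType": "customer service",
    },
}

# Embed the output in a <script type="application/ld+json"> tag on key pages.
print(json.dumps(organization, indent=2))
```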

Building trust signals for retrieval is important because LLMs prioritize high-authority, clearly structured, low-ambiguity content. These trust signals include:

  • Authorship transparency, with clear author bios, credentials, and expertise.
  • Editorial standards, covering fact-checking, corrections policies, and sourcing.
  • Domain authority, built through age, backlink profile, and industry recognition.
  • Structured data, via schema implementation and rich snippets.
  • Factual consistency, maintaining accuracy over time without contradictions.
  • Expert verification, through third-party endorsements and citations.

Publishers should not abandon click optimization entirely. Instead, they should target bottom-of-funnel prompts that still demonstrate a measurable CTR of between 2% and 4%, where AI responses alone are insufficient.

Examples of high-CTR queries:

  • “How to configure [specific technical setup]” (requires visuals or code).
  • “Compare [Product A] vs [Product B] specs” (requires tables, detailed comparisons).
  • “Latest news on [breaking event]” (requires recency).
  • “Where to buy [specific product]” (transactional intent).
  • “[Company] careers” (requires job portal access).

Strategy: Identify the 10–20% of your topic space where AI cannot fully satisfy user intent, and optimize those pages for clicks.
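
One practical way to find those prompts is to pattern-match a Search Console performance export against high-intent phrasing. The sketch below is a rough illustration only; the file name, column names, and regex patterns are placeholders to adapt to your own export format and topic space.

```python
# Rough sketch: flag queries from a Search Console performance export that
# match high-click-intent patterns, so click optimization can focus there.
# Assumes a CSV with "query" and "clicks" columns; patterns are illustrative.
import csv
import re

INTENT_PATTERNS = {
    "comparison":    re.compile(r"\bvs\.?\b|\bcompare\b", re.I),
    "how_to_setup":  re.compile(r"\bhow to (configure|set up|install)\b", re.I),
    "transactional": re.compile(r"\bwhere to buy\b|\bprice\b|\bdiscount\b", re.I),
    "recency":       re.compile(r"\blatest\b|\bnews\b|\bupdate[sd]?\b", re.I),
}

flagged = []
with open("gsc_queries.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        query = row["query"]
        for intent, pattern in INTENT_PATTERNS.items():
            if pattern.search(query):
                flagged.append((intent, query, int(row["clicks"])))
                break

# The highest-click queries in each high-intent bucket are the pages to keep
# optimizing for clicks rather than for AI inclusion alone.
for intent, query, clicks in sorted(flagged, key=lambda r: -r[2])[:20]:
    print(f"{intent:<14} {clicks:>6}  {query}")
```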

In terms of content, it is important to lead with the most important information, use clear and definitive language, cite primary sources, avoid ambiguity and hedging unless accuracy requires it, and create content that remains accurate over long timeframes.

Perhaps the most important shift is mental: Stop thinking in terms of traffic and start thinking in terms of influence. Value has shifted from visits to the reasoning process itself. New success metrics should track how often you are cited by AI, the percentage of AI responses in your field that mention you, how your “share of model” compares with that of your competitors, whether you are building cumulative authority that persists across model updates, and whether AI recognizes you as the definitive source for your core topics.

The strategic focus shifts from “drive 1 million monthly visitors” to “influence 10 million AI-mediated decisions.”

Publishers must also diversify their revenue streams so that they are not dependent on traffic-based monetization. Alternative models include building direct relationships with audiences through email lists, newsletters, and memberships; offering premium content via paywalls, subscriptions, and exclusive access; integrating commerce through affiliate programmes, product sales, and services; forming B2B partnerships to offer white-label content, API access, and data licensing; and negotiating deals with AI platforms for direct compensation for content use.

Publishers that control the relationship with their audience rather than depending on intermediary platforms will thrive.

The Super-Predator Paradox

A fundamental truth about artificial intelligence is often overlooked: these systems do not generate content independently; they rely entirely on the accumulated work of millions of human creators, including journalism, research, technical documentation, and creative writing, which form the foundation upon which every model is built. This dependency is the reason why OpenAI has been pursuing licensing deals with major publishers so aggressively. It is not an act of corporate philanthropy, but an existential necessity. A language model that is only trained on historical data becomes increasingly disconnected from the current reality with each passing day. It is unable to detect breaking news or update its understanding through pure inference. It is also unable to invent ground truth from computational power alone.

This creates what I call the “super-predator paradox”: If OpenAI succeeds in completely disrupting traditional web traffic, causing publishers to collapse and the flow of new, high-quality content to slow to a trickle, the model’s training data will become increasingly stale. Its understanding of current events will degrade, and users will begin to notice that the responses feel outdated and disconnected from reality. In effect, the super-predator will have devoured its ecosystem and will now find itself starving in a content desert of its own creation.

The paradox is inescapable and suggests two very different possible futures. In one, OpenAI continues to treat publishers as obstacles rather than partners. This would lead to the collapse of the content ecosystem and the AI systems that depend on it. In the other, OpenAI shares value with publishers through sustainable compensation models, attribution systems, and partnerships. This would ensure that creators can continue their work. The difference between these futures is not primarily technological; the tools to build sustainable, creator-compensating AI systems largely exist today. Rather, it is a matter of strategic vision and willingness to recognize that, if artificial intelligence is to become the universal interface for human knowledge, it must sustain the world from which it learns rather than cannibalize it for short-term gain. The next decade will be defined not by who builds the most powerful model, but by who builds the most sustainable one: by who solves the super-predator paradox before it becomes an extinction event for both the content ecosystem and the AI systems that cannot survive without it.

Note: All data and stats cited above are from the OpenAI partner report, unless otherwise indicated.

Featured Image: Nadya_Art/Shutterstock