This post was sponsored by 10Web. The opinions expressed in this article are the sponsor’s own.
Not long ago, building a website meant a discovery call, a proposal, a sitemap, and a few weeks of back and forth. Today, we go from “I need a website” to “Why isn’t it live yet?” People are getting used to typing a short prompt and seeing an entire site structure, design, and first draft appear in minutes. That doesn’t replace all the strategy, UX, or growth work, but it changes expectations about how fast the first version should appear, and how teams work.
This shift puts pressure on everyone who sits between the user and the web: agencies, MSPs, hosting companies, domain registrars, and SaaS platforms. If your users can get an AI-generated site somewhere else in a few clicks, you better catch the wave or be forgotten.
That’s why the real competition is moving to those who control distribution and can embed an AI-native, white-label builder directly into products. WordPress still powers over 43% of all websites globally, and remains the default foundation for many of these distribution players.
Now that AI-native builders, reseller suites, and website builder APIs are available on top of WordPress, the question is: who will own that experience, and the recurring revenue that comes with it?
AI & Vibe Coding Is Turning Speed-To-Launch Into a Baseline
AI site builders and vibe coding tools have taught people a new habit: describe what you want, get a working draft of a site almost immediately.
Instead of filling out long briefs and waiting for mockups, users can:
Type or paste a business description,
Point to a few example sites,
Click generate,
And see a homepage, key inner pages, and placeholder copy appear in minutes.
For non-technical users, this is magic. For agencies and infrastructure providers, it’s a new kind of pressure. The baseline expectation has become seeing something live quickly and refining it afterward.
This demand is everywhere:
Small businesses want a site as soon as they buy a domain or sign up for SaaS.
Creators expect their website to follow them seamlessly from the tools they already use.
Teams inside larger organizations need landing pages and microsites created on demand, without long internal queues.
If you’re an agency, MSP, hosting provider, domain registrar, or SaaS platform, you’re now measured against that baseline, no matter what your stack was designed for. Bolting on a generic external builder isn’t enough. Users want websites inside the experience they trust and already pay you for, with your branding, your billing, and your support.
AI-native builders that are built directly into your stack are no longer a nice bonus but an essential part of your product.
With Vibe Coding Leveling The Field: What Is Your Differentiator?
In this environment, the biggest advantage doesn’t belong to whoever ships the flashiest AI demo. It belongs to whoever owns the distribution channels:
Agencies and MSPs, the ground-level players holding client relationships and trust.
Hosting and cloud providers where businesses park their infrastructure.
Domain registrars where the online journey starts.
SaaS platforms, already owning the critical data needed to reflect and sync with company websites.
These players already control the key moments when someone goes from thinking they need a website to taking action:
Buying a domain
Using a vertical SaaS product
Working with an MSP or agency retainer
Adding a new location, service, or product line
If, at those moments, the platform automatically provides an AI-generated, editable site under the same login, billing, and support, the choice of stack is made by default. Users simply stay with the builder that’s already built into the service or product they use.
This is why white-label builders, reseller suites, and website builder APIs matter. They give distribution owners the opportunity to:
Brand the website experience as their own
Decide on the underlying technology (e.g., AI-native WordPress)
Bundle sites with hosting, marketing, or other services
Keep the recurring revenue and data inside their ecosystem
In other words, as AI pushes the web toward instant presence, distribution owners who embed website creation into their existing flows become the gatekeepers of which tools, stacks, and platforms win.
How To Connect WordPress Development, SEO & Vibe Coding
For most distribution owners, WordPress is still the safest base to standardize on. It powers a huge share of the web, has a deep plugin and WooCommerce ecosystem, and a large talent pool, which makes it easier to run thousands of sites without being tied to a single vendor. Its open-source nature also allows full rebranding and custom flows, exactly what white-label providers need, while automated provisioning, multisite, and APIs make it a natural infrastructure layer for branded site creation at scale. The missing piece has been a truly AI-native, generation-first builder. The latest AI-powered WordPress tools are closing that gap and expanding what distribution owners can offer out of the box.
Use AI-Native WordPress & White Label Embeddable Solutions
Most of the visible WordPress innovation around AI and websites has happened in standalone AI builders or coding assistants, relying on scattered plugins and lightweight helpers. The CMS is solid, but the first version of a site is still mostly assembled by hand.
AI-native WordPress builders move AI into the core flow: from intent straight to a structured, production-ready WordPress site in one step. In 10Web’s case, Vibe for WordPress is the first to bring vibe coding to market with a React front end and deep WordPress integrations. Unlike earlier versions of the builder, or other website builders working off generic templates and content, Vibe for WordPress gives the customer unlimited freedom during and after website generation via chat-based AI and natural language.
For distribution owners, AI only matters if it is packaged in a way they can sell, support, and scale. At its core, 10Web’s White Label solution is a fully white-labeled AI website builder and hosting environment that partners brand as their own, spanning the dashboard, onboarding flows, and even the WordPress admin experience.
Instead of sending customers to a third-party tool, partners work in a multi-tenant platform where they can:
Brand the entire experience (logo, colors, custom domain).
Provision and manage WordPress sites, hosting, and domains at scale.
Package plans, track usage and overages, and connect their own billing and SSO.
In practice, a telco, registrar, or SaaS platform can offer AI-built WordPress websites under its own brand without building an editor, a hosting stack, or a management console from scratch.
APIs and White-Label: Quickly Code New Sites Or Allow Your Clients To Feel In Control
There is one fine but important nuance. Speed alone isn’t the deciding factor in who wins the next wave of web creation. Teams that can wire that speed directly into their distribution channels and workflows will be the first to the finish line.
White-label platforms and APIs are two sides of the same strategy. The reseller suite gives partners a turnkey, branded control center; the API lets them take the same capabilities and thread them through domain purchase flows, SaaS onboarding, or MSP client portals.
From there, partners can:
Generate sites and WooCommerce stores from prompts or templates.
Provision hosting, domains, and SSL, and manage backups and restore points via API.
Control plugins, templates, and vertical presets so each tenant or region gets a curated, governed stack.
Pull usage metrics, logs, and webhooks into their own analytics and billing layers.
MSPs and agencies treating websites as a packaged, recurring service see more predictable revenue and stickier client relationships. They bake “website included” into retainers, care plans, and bundles, using the white-label reseller dashboard to keep everything under their own brand.
As for SaaS platforms and vertical solutions, instead of just giving partners a branded dashboard, 10Web’s Website Builder API lets them embed AI-powered WordPress site creation and lifecycle management directly into their own products. At a high level, it’s a white-label AI builder you plug in via API so your users can create production-ready WordPress sites and stores in under a minute, without ever leaving your app.
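To make that concrete, here is a minimal, purely illustrative sketch of what calling such an API from your own product might look like. The endpoint, parameters, and response fields are hypothetical placeholders, not 10Web’s documented API; refer to the actual Website Builder API reference for real calls.

```python
import requests

# Hypothetical endpoint and fields for illustration only -- not 10Web's documented API.
API_BASE = "https://api.example-builder.com/v1"
API_KEY = "YOUR_PARTNER_API_KEY"

def create_site_for_tenant(business_description: str, tenant_id: str) -> dict:
    """Ask the (hypothetical) builder API to generate a WordPress site from a prompt."""
    response = requests.post(
        f"{API_BASE}/sites",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "tenant_id": tenant_id,          # your SaaS customer or MSP client
            "prompt": business_description,  # natural-language brief
            "preset": "local-services",      # curated vertical preset
            "provision_hosting": True,       # hosting, domain, SSL in one step
        },
        timeout=60,
    )
    response.raise_for_status()
    # Example shape: {"site_id": "...", "admin_url": "...", "status": "generating"}
    return response.json()

if __name__ == "__main__":
    site = create_site_for_tenant(
        "A family-run bakery in Austin that takes custom cake orders", "tenant-123"
    )
    print(site)
```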
In this model, when someone buys a domain, signs up for a SaaS tool, or comes under an MSP contract, they experience the AI website builder as a built-in part of the product. And the distribution owner, armed with white-label and API tools, is the one who captures the recurring value of that relationship.
The Next Wave
WordPress remains the foundation distribution owners trust, the layer they know can scale from a single landing page to thousands of client sites. With 10Web’s AI-native builder, reseller dashboard, and API, it isn’t playing catch-up anymore, but is quickly becoming the engine behind fast, governed, repeatable site creation.
For agencies, MSPs, cloud infrastructure providers, and SaaS platforms, that means they can sell websites as a packaged service. The winners of the next wave are the ones who wire AI-native, white-label WordPress into their distribution and turn “website included” into their default.
Google is expanding its Preferred Sources feature to English-language users worldwide and launching a pilot program to test AI-powered features with major news publishers.
The announcement includes updates to how links appear in AI Mode and a new feature that will highlight content from users’ news subscriptions.
Preferred Sources Goes Global
Preferred Sources in Search lets users customize Top Stories to see more from their favorite outlets. Google is now rolling it out globally for English-language users, with all supported languages following early next year.
Google shared usage data from the feature’s initial rollout. Nearly 90,000 unique sources have been selected by users, ranging from local blogs to global news outlets. Users who pick a preferred source click to that site twice as often on average.
Subscription Highlighting
A new feature will highlight links from users’ paid news subscriptions in search results. Google will also prioritize links from subscribed publications and show them in a dedicated carousel.
The feature launches first in the Gemini app in the coming weeks. AI Overviews and AI Mode will follow, though Google didn’t provide a timeline.
AI Mode Link Updates
Google is increasing the number of inline links in AI Mode and updating their design. The company is also adding contextual introductions to embedded links. These are short statements explaining why a particular link might be useful.
Web Guide, which organizes links into topic groups using AI, is now twice as fast and appearing on more searches for users opted into the experiment.
Publisher AI Pilot Program
Google announced a commercial partnership pilot with publishers including Der Spiegel, El País, Folha de S. Paulo, Infobae, Kompas, The Guardian, The Times of India, The Washington Examiner, and The Washington Post.
The pilot will test AI-powered features in Google News. These include article overviews on participating publications’ Google News pages and audio briefings for those who prefer listening. Google says these features will include attribution and link to articles.
Separate partnerships with Estadão, Antara, Yonhap, and The Associated Press will provide real-time information for the Gemini app.
Google says it has partnered with over 3,000 publications, platforms, and content providers in more than 50 countries in the last few years.
Why This Matters
If you’ve been watching how Google handles publisher relationships in the AI era, this announcement outlines their current approach. The Preferred Sources data suggests users who customize their sources engage more with those sites.
The subscription highlighting feature could affect how your subscribed audiences find your content across Google’s surfaces.
Looking Ahead
Preferred Sources is available now for English-language users globally. Full language support arrives early 2026.
The subscription highlighting feature starts in the Gemini app in the coming weeks. The publisher AI pilot has begun with participating publications in Google News. Google didn’t provide timelines for when AI Mode and AI Overviews will get subscription highlighting.
Every year, I hold myself accountable for my previous predictions by scoring them.
This year, I got three misses, two mixed results, and five hits:
1. Agentic LLM Models Reach +100 Million Users
Score: Miss
Thought Process: When I made this prediction, I assumed that once models improved at reasoning, usage would shift from chat to action, and agents would become the obvious next step.
Reality: While general LLM usage (like Gemini and ChatGPT) cleared +800 million weekly users, true autonomous agent adoption, where the AI performs complex actions like “buying a product” without oversight, remained a niche power-user feature with very wonky performance.
Google’s “Project Mariner” and OpenAI’s agent features only entered broad public beta in mid-2025.
Most consumers still use AI for information (chat) rather than action (agents).
I over-weighted the speed of productization and user trust, and under-weighted how slow people are to let software spend their money or act without supervision.
2. More AI Victims
Score: Hit
Thought Process: Here, I zoomed out from early signals like Chegg and Stack Overflow and treated them as the first visible cracks in a broader margin collapse. My bet was that whenever AI sits between buyers and a labor-intensive industry, the middle layer will feel the pain first.
Reality: By Q3 2025, major call center outsourcing firms faced a crisis as enterprise clients switched to “AI Voice First” support layers. Translation services continued to shrink as browser-based, real-time AI translation became native to OS updates.
In RWS Holdings’ (translation services) 2025 half-year report, Adjusted EBITDA plummeted 41%, and profit before tax fell nearly 60%.
In September 2025, Concentrix shares dropped ~9% in a single day after missing earnings expectations and cutting its full-year guidance. Enterprise clients aggressively switched to “AI Voice First” layers. Instead of hiring 100 agents for a support queue, a client might hire 10 agents for complex issues and use an AI voice agent for the rest. This destroyed the traditional “per-seat” billing model that BPOs rely on.
3. AI Automation Becomes The Default For Marketing Teams
Score: Hit
Thought Process: This call came from watching clients quietly stitch together AirOps, Make, Zapier, and custom scripts while headcount stayed flat or shrank. I expected economic pressure plus better tooling to push marketing toward “systems thinking,” where workflows matter more than channels.
Reality: “System building” became the primary skill on marketing job descriptions in 2025.
Recruitment data from Ashdown Group (2025 Marketing Job Market Report) showed that roles involving campaign automation and AI tool integration commanded a 7-9% salary premium over generalist marketing roles.
A 2025 HubSpot State of Marketing report found that 78% of B2B organizations had shifted to relying on marketing automation as their primary infrastructure.
With marketing budgets remaining tight, the reliance on “Team of One” structures powered by automation chains (Make, Zapier, custom AI workflows) became the industry standard.
4. AI Overviews Evolve
Score: Hit
Thought Process: I read AI Overviews as an experiment, not a finished product, and assumed Google would iterate toward more personalized, richer SERP formats once multimodal models matured. The underlying belief was that Google had to change the page itself to defend its moat against standalone LLMs.
Reality: Google tested its Web Guide SERP layout extensively as of November. For me (opted into SGE, Search Generative Experience), it’s still the default. Instead of a standard list of blue links or a single AI answer at the top, “Web Guide” breaks the entire search result page into AI-generated “buckets” or headlines.
In June 2025, Google began embedding YouTube Shorts and timestamped video clips directly into AI Overviews.
In December, Google started integrating AI Mode deeper into AI Overviews.
5. Reddit Becomes Part Of The Default Channel Mix
Score: Hit
Thought Process: I assumed that once Reddit showed up for everything from product searches to troubleshooting, marketers would have no choice but to treat it like a core performance channel, not a side project. When you compare ad revenue from Meta and Alphabet platforms with how visible they are in Search, Reddit’s upside becomes clear.
Reality: Reddit had a banner year in 2025. Ad revenue grew 61% YoY in Q1 2025 – and then another 68% in Q3 (to $585M). With Google Search continuing to prioritize forum discussions, Reddit became an unavoidable placement for advertisers seeking high-intent traffic.
Daily Active Users (DAUs): Reached 108.1 million in Q1 and climbed to 116 million by Q3 2025.
Reddit is the second largest site on Google by visibility (only Wikipedia is larger).
Reddit finally launched true ecommerce catalog ads (Dynamic Product Ads → DPA). Early beta tests in Q1 2025 showed these ads delivered a 2x higher return on ad spend (ROAS) compared to previous formats, making Reddit viable for “performance” marketers, not just “brand awareness” teams.
Of course, Reddit remains the most cited platform for most LLMs.
6. More Sites Cloak For LLMs
Score: Miss
Thought Process: I expected especially B2B sites to move tactically and fast by feeding bots a cleaner, more structured version of their sites once it became clear LLMs rewarded that pattern. Underneath sat the assumption that people would quietly bend the rules if there was upside and no obvious penalty.
Reality: The “Bot-Only Web” did not emerge via cloaking; it emerged via APIs and paywalls. Instead of creating optimized versions for bots (cloaking), most major publishers aggressively blocked bots via robots.txt and lawsuits (e.g., The New York Times vs. OpenAI continuing saga).
7. The Current Google Shopping Tab Will Become The Default
Score: Miss
Thought Process: I treated the Shopping tab as Google’s sandbox for a future Amazon competitor, similar to how past tab experiments leaked into the main SERP. The core belief was that Google would push harder on shoppable, personalized results in the face of AI pressure and ecommerce growth needs.
Reality: Google kept the main search tab distinct. While the Shopping experience became more personalized and AI-driven (often resembling a feed), Google did not replace the default search experience with the Shopping tab interface for commercial queries.
8. AI-Generated Audio And Video Hits Mass Adoption
Score: Hit
Thought Process: Here, I connected the dots between rapidly improving generative tools and the constant pressure on creators to ship more content with the same or less budget. I assumed that once tools like Sora, Veo, and ElevenLabs crossed a basic quality bar, they would seep into production pipelines even if audiences could still tell something was synthetic.
Reality: 2025 was the year of the “Synthetic Creator.” YouTube had to update its partner program policies in July 2025 specifically to manage the flood of AI-generated content. Despite the crackdown on “AI slop,” high-quality AI-generated B-roll and voiceovers became standard for millions of creators.
On July 15, 2025, YouTube officially updated its Partner Program (YPP) eligibility terms to include a specific clause against “Mass-Produced & Repetitious Synthetic Content.”
The policy explicitly demonetized channels that used “templated, programmatic generation” (slop) but protected creators who used AI tools for “production assistance” (B-roll, voiceovers, scripting) as long as there was “clear editorial oversight.”
9. The DOJ Remedies Reshape Google’s Default Search Deals
Score: Mixed
Thought Process: I read the DOJ case as a structural threat to Google’s distribution deals and assumed judges would eventually push on the default search arrangements. My framing was that even a partial unwind of exclusivity could shift how search power is negotiated, without instantly changing who “wins” search. Judge Mehta’s first conclusion sounded a lot like he would take a hard stance on remedies.
Reality: The DOJ remedy ruling in September 2025 was disappointing. A toothless tiger. The court prohibited Google from paying for default exclusivity on browsers (Chrome/Safari) and devices, but:
Google can still pay Apple to be the default, but the contract cannot say “Apple must not use anyone else.”
The judge did not enforce a “choice screen” like the European Union, which leaves the door open for Apple to voluntarily implement choice screens or offer alternatives (like ChatGPT or Perplexity) without losing Google’s payments entirely.
Also, Apple is licensing Gemini for Siri.
So, was it really a divorce or a forced transition to an open marriage?
10. Apple Or OpenAI Announces Smart Glasses
Score: Mixed
Thought Process: This prediction came from treating smart glasses as the logical next hardware surface for AI assistants, especially with Meta gaining traction. I assumed that OpenAI plus Jony Ive, or Apple’s need for a new device story, would pull a prototype into the public eye even if real adoption stayed years out.
Reality: On November 24, 2025, OpenAI CEO Sam Altman and designer Jony Ive officially confirmed that their joint hardware venture (under the startup “LoveFrom”) had a finished prototype during an interview hosted by Laurene Powell Jobs. The device was described as “screen-free” and “less intrusive than a phone,” aligning with the “smart glasses” or “AI Pin” form factor predictions.
However, it’s not yet confirmed that OpenAI will ship glasses. It might also be some sort of necklace device. Meanwhile, Apple did not announce a new product and is dealing with leadership issues instead. And then, just before I hit publish on this Memo, Google announced new smart glasses for 2026.
My Conclusion: This Is The Year “AI Deployment” Began
For those of us in tech and digital marketing, we’re going to remember 2025 as the year the AI-driven “pilot programs” ended and official “deployment” began.
Not only internally across our teams, workflows, and tech stacks, but we also watched classic search habits (informed by decades of human + search engine behavior) transform right in front of us.
We didn’t get the sci-fi future of agents buying our groceries (Pred No. 1) or widespread smart glasses (Pred No. 10) just yet.
Instead, we got something more pragmatically disruptive: A world where marketing teams are half the size but twice as technical, where BPO industries (business process outsourcing) are collapsing, and where “Googling it” increasingly means “Reading a Reddit thread summarized by an AI.”
Featured Image: Paulo Bobita/Search Engine Journal
This post was sponsored by Editorial.Link. The opinions expressed in this article are the sponsor’s own.
“How do you find link-building services? You don’t, they find you,” goes the industry joke. Merely think about backlinks, and dozens of pitches hit your inbox.
However, most of them offer spammy links with little long-term value. Link farms, PBNs, the lot.
This type of saturated market makes it hard to find a reputable link building agency that can navigate the current AI-influenced search landscape.
That’s why we’ve put together this guide.
We’ll share a set of steps that will help you vet link providers so you can find a reliable partner that will set you up for success in organic and AI search.
1. Understand How AI-Driven Search Changes Link Building
Before you can vet an agency, you must understand how the “AI-influenced” landscape is different. Many agencies are still stuck in the old playbook, which includes chasing guest posts, Domain Rating (DR), and raw link volume.
When vetting a service for AI-driven search, your criteria must shift from “How many links can you get?” to “Can you build discoverable authority that earns citations?”
This means looking for agencies that build your niche authority through tactics like original data studies, digital PR, and expert quotes, not just paid posts.
2. Verify Their Expertise and AI-Search Readiness
The first test is simple: do they practice what they preach?
Check Their Own AI & Search Visibility
Check the agency’s rankings in organic and AI search for major keywords in their sector.
Let’s say you want to vet Editorial.Link. If you search for “best link building services,” you will find it is one of the link providers listed in the AI Overviews.
Screenshot of Google’s AI Overviews, November 2025
An agency isn’t necessarily a waste of your time just because it doesn’t rank high; some services thrive on referrals and don’t focus on their own SEO.
However, if they do rank, that’s a major green flag. SEO is a highly competitive niche; ranking their own website demonstrates the expertise to deliver similar results for you.
Ensure Their Tactics Build Citation-Worthy Authority
A modern agency’s strategy should focus on earning citations.
Ask them these questions to see whether they’ve adapted:
Do they talk about AI visibility, citation tracking, or brand mentions?
Do they build links through original data studies, digital PR, and expert quotes?
Can they show examples of clients featured in AI Overviews, ChatGPT, or Perplexity answers?
Can they help you get a link from top listicles in your niche? Ahrefs’ data shows “Best X” list posts dominated the field, making up 43.8% of all pages referenced in the responses, with a huge gap between them and every other format. You can find relevant listicles in your niche using free services like listicle.com.
Screenshot of Listicle, November 2025
3. Scrutinize Their Track Record Via Reviews, Case Studies & Link Samples
Past performance is a strong indicator of future results.
Analyze Third-Party Reviews
Reviews on independent platforms like Clutch, Trustpilot, or G2 reveal genuine clients’ sentiment better than hand-picked testimonials on a website.
When studying reviews, look for:
Mentions of real campaigns or outcomes.
Verified client names or company profiles.
Recent activity, such as new reviews, shows a steady flow of new business.
The total number of reviews (the more, the more representative).
Patterns in negative reviews and how the agency responds to them.
Screenshot of Editorial.Link’s profile on Clutch, November 2025
Dig Into Their Case Studies
Case studies and customer stories offer proof of concept and provide insights into their processes, strategies, and industry fit.
While case studies with named clients are ideal, some top-tier agencies are bound by client NDAs for competitive reasons. Be wary if all their examples are anonymous and vague, but don’t dismiss a vendor just for protecting client confidentiality.
If the clients’ names are provided, don’t take any figures at face value.
Use an SEO tool to examine their link profiles. If you know the campaign’s timeframe, zero in on that period to see how many links they acquired, their quality, and their relevance.
Screenshot of Thrive Internet Marketing, November 2025
Audit Their Link Quality
Inspecting link quality is the ultimate litmus test.
An agency’s theoretical strategy doesn’t matter if its final product is spam. Ask for 3 – 5 examples of links they have built for recent clients.
Once you have the samples, don’t just look at the linking site’s DR. Audit them with this checklist:
Editorial relevance: Is the linking page topically relevant to the target page?
Site authority & traffic: Does the linking website have real, organic traffic?
Placement & context: Is the link placed editorially within the body of an article?
AI-citation worthiness: Is this an authoritative site Google AI Overview, ChatGPT, or Perplexity would cite (e.g., a reputable industry publication or a data-driven report)?
4. Evaluate Their Process, Pricing & Guarantees
A reliable link-building service is fully transparent about its process and what you’re paying for.
Look For A Transparent Process
Can you see what you’re paying for? A reliable service will outline its process or share a list of potential prospects before starting outreach.
Ask them for a sample report. Does it include anchor texts, website GEO, URLs, target pages, and publication dates? A vague “built 20 links” report doesn’t cut it.
Finally, check if they offer consulting services.
For example, can they help you choose target pages that will benefit from a link boost most?
Or are they just a link-placing service? The latter signals a lack of expertise.
Analyze Their Pricing Model
Price is a direct indicator of quality.
When someone offers links for $100 – $200 a pop, they are typically from PBNs or bulk guest posts, and they frequently disappear within months.
Valuable backlinks from trusted sites cost significantly more: $508.95 on average, according to the Editorial.Link report.
Prospecting, outreach, content creation, and communication require substantial time and effort.
Reputable agencies work on one of two models:
Retainer model: A fixed monthly fee for a consistent flow of links.
Custom outreach: Tailored campaigns with flexible volume and pricing.
Scrutinize Their “Guarantees” For Red Flags
This is where unrealistic promises expose low-quality vendors.
A reputable digital PR agency, for example, won’t guarantee the number of earned links. The final result depends on how well a story resonates with journalists.
The same applies to “guaranteed DR or DA.” These metrics don’t directly affect rankings, and it’s impossible to guarantee which websites will pick up a story.
Choosing A Link Building Partner For The AI Search Era
Not all link-building services have the necessary expertise to help you build visibility in the age of AI search.
When choosing your link-building partner, look for a proven track record, transparency, and adaptability.
A service with a strong search presence, demonstrable results, and a focus on AI visibility is a safer bet than one making unsubstantiated claims.
In 2006, Wired magazine editor Chris Anderson famously described the availability of niche products online as the “long tail.” Search optimizers adopted the term, calling queries of three words or more “long-tail keywords.”
Optimizing for long-tail searches has multiple benefits. Consumers searching on extended keywords tend to know what they want, and longer queries typically have less keyword competition. Yet the biggest benefit could now be AI visibility: Generative AI platforms such as ChatGPT fan out using multiword queries to answer user prompts.
Long-Tail Queries
A seed term plus modifiers
Any long-tail query consists of a seed term and one or more modifiers. For example, “shoes” is a seed term, and potential modifiers are:
“for women,”
“red,”
“near me,”
“on sale.”
Combining the seed term and modifiers — “red shoes for women,” “shoes on sale near me” — yields narrow queries that describe searchers’ needs, such as gender, color, location, and price.
Modifiers reflect the searcher’s intent and stage in a buying journey, from exploration to purchase. Thus, keyword research is the process of extending a core term with modifiers to optimize a site for buying journeys.
The more modifiers, the more specific the intent and, typically, the lesser the volume and clicks. Conversely, more modifiers improve the likelihood of conversions, provided the content of the landing page follows closely from that phrase. A query of “red shoes for women” should link to a page with women wearing red shoes.
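As a minimal sketch of that process, the snippet below expands a seed term with the modifier groups from the example above into long-tail candidates. In practice, you would pull modifiers from keyword research tools rather than hard-code them.

```python
from itertools import product

# Seed term and example modifiers from the text above.
seed = "shoes"
descriptors = ["", "red"]
audiences = ["", "for women"]
intents = ["", "on sale", "near me"]

def build_long_tail(seed: str, *modifier_groups: list[str]) -> list[str]:
    """Combine the seed with every mix of modifiers, skipping empty slots."""
    queries = set()
    for combo in product(*modifier_groups):
        parts = [combo[0], seed, *combo[1:]]  # descriptor before the seed, the rest after
        query = " ".join(p for p in parts if p)
        queries.add(query)
    return sorted(queries)

for q in build_long_tail(seed, descriptors, audiences, intents):
    print(q)  # e.g., "red shoes for women on sale", "shoes near me", ...
```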
Types of modifiers
A core term can have many modifiers, such as:
Location,
Description (“red”),
Price (typically from searchers eager to buy),
Brand,
Age and gender,
Questions (“how to clean shoes”).
Long-Tail Opportunities
Keyword research tools
Grouping keywords by modifier type can reveal your audience’s search patterns. Keyword research tools such as Semrush and others can filter lists by modifiers to reveal the most popular.
Semrush’s Keyword Magic Tool reveals the most popular modifiers for “shoes.”
Adjust Semrush’s “Advanced filters” to see queries that contain more words.
“Advanced filters” reveal queries that contain more words.
Search Console
Regular expressions (regex) in Search Console can identify longer queries, such as fan-out searches from ChatGPT and other genAI platforms. In Search Console, go to “Performance,” click “Add filter,” choose “Query,” and “Custom (regex).”
Then type:
([^" "]*\s){10,}?
This regex filters queries to those with more than 10 words. Change “10” to “5” or “25” to find queries longer than 5 or 25 words, respectively.
Regex in Search Console can identify longer queries, such as fan-out searches from ChatGPT and other genAI platforms.
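If you want to sanity-check the pattern before pasting it into Search Console, here is a small sketch that applies the same word-count regex to a couple of sample queries (the queries are made up for illustration).

```python
import re

# Same pattern as the Search Console filter: more than 10 words
# (i.e., at least 10 word-plus-whitespace groups from the start of the query).
long_query = re.compile(r'([^" "]*\s){10,}?')

sample_queries = [
    "red shoes",
    "what are the best waterproof red running shoes for women with wide feet",
]

for q in sample_queries:
    label = "long tail (>10 words)" if long_query.match(q) else "short"
    print(f"{label}: {q}")
```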
Keyword Dos and Don’ts
Search engines no longer match queries to exact word strings on web pages, focusing instead on the searcher’s intent or meaning. Hence a query for “red shoes for women” could produce an organic listing for “maroon slippers for busy moms.”
Keyword optimization circa 2025 reflects this evolution.
Avoid stuffing a page with keywords. Instead, enrich content with synonyms and related phrases.
Don’t create a page with variations of a single keyword. Group pages by modifiers and optimize for the entire group.
Include the main keyword in the page title and the H1 heading. Google could use either of those to create the all-important search snippet.
Assign products to only one category. Don’t confuse Google (and genAI platforms) by creating multiple categories for the same item to target different keywords.
Search Google (and genAI platforms) for your target query and study the results. Are there other opportunities, such as images and videos?
Don’t force an exact match keyword if it’s awkward or grammatically incorrect. Ask yourself, “How would I search for this item?” In other words, write for people, not search engines.
Google updated its core updates documentation to say smaller core updates happen on an ongoing basis, so sites can improve without waiting for named updates.
Google explicitly confirms it makes “smaller core updates” beyond the named updates announced several times per year.
Sites that improve their content can see ranking gains without waiting for the next major core update to roll out.
The documentation change addresses whether recovery between named updates is possible.
There are three inevitabilities in life. Death, taxes, and big tech companies dumping on the little guy. As zero-click searches reach an all-time high and content is stolen and repurposed for the gain of the almighty tech loser, there’s only one viable solution.
To paywall.
To create a value exchange that reduces reliance on third-party platforms. To become as self-sufficient as possible. Like an off-grid cabin or your mum’s basement, a paywall gives you a sense of security you just cannot put a price on.
Subscriber revenue is intrinsically more valuable to a business because it is predictable. Subscription and advertiser revenue are not created equal.
Don’t paywall everything. Use dynamic/metered paywalls and leave high-reach, generally lower-quality platforms like Google Discover free for email signups.
Subscription success relies on your USP. Whether that’s exclusive data, deep niche insights, or a certain vibe, you have to stand out.
The customer experience and understanding of your audience matter. Create habit-forming connections and products. Become an essential part of their life.
But What About Our Traffic?
Your traffic will decline. But guess what? You’re already hemorrhaging clicks and have been for some time. And traffic doesn’t pay the bills.
Two comparable pages, one with a paywall, one without (Image Credit: Harry Clarkson-Bennett)
The only way to sustain rankings over time is with high-quality engagement data. Navboost stores and uses 13 months of data to identify good vs bad clicks, click quality, the last longest click, and on-page interactions to establish the most relevant content. All at a query level.
Paywalls are not your friend when it comes to user engagement. Not for the masses. But for a small cohort of people who like you enough to pay, your engagement data will be excellent.
In an ultra-personalized world, you will still do well with the people who really matter.
We have data that pretty perfectly highlights the impact of a paywall on rankings. Over the course of three to four months in traditional search, your rankings start to steadily drop before settling into severe mediocrity. You’ve got to fight for every click. With great content, marketing, savviness. Everything.
We have used an image manager to try and generate a free-to-air badge. It rarely shows up unless there’s no featured image, but the idea is excellent (Image Credit: Harry Clarkson-Bennett)
In Google Discover – a highly personalized, click- and engagement-driven platform – this is even more pronounced. While Discover’s clickless traffic is lower quality, a small cohort of highly engaged users develops over time, and that’s who you can target with a paywall.
Unpaywall for the masses, build your owned channels, and paywall for the highly engaged. The platform will take care of the personalization for you.
So, maximize your value exchange with ads and email signups for most users, but don’t neglect those with a high return rate.
There’s some psychology involved in all of this. When a brand becomes widely known for paywalling, I suspect the likelihood of a click goes down as users know what to expect. Or maybe what not to expect.
This likely perpetuates over time, so you should clarify which articles are free to air.
Is Our Content Good Enough?
To nail SEO bingo, it depends. It depends on what your value is in the market. There is a lot of free stuff out there already. But broadly rubbish. So as long as the bar keeps dropping, we’ll all be fine.
I am old-ish. I like words. Writing great content isn’t easy and is usurped in many cases by richer, more visually striking content. Content that satisfies all types of users: scanners, deep readers, listeners, and get-the-answer-and-go-ers.
In some ways, you can satisfy all types of users more effectively than ever. I think you have to hit three of the four Es of content creation. Make it resonate, be consistent, and understand your audience. Whatever you create then stands a chance.
But that doesn’t mean creating great stuff is any easier. If you work for a traditional publisher, the chances are you’ve brought a spoon to a gun fight. The war for attention is being fought on all fronts, and straight words are losing.
Fortunately, not every subscription model relies on the quality of the prose. It might be that you have unique data, granular insights into a specific market, or are just a bloody good laugh.
Subscriptions come in all shapes and sizes.
Ultimately, it comes down to your market, marketing, positioning and your USP. You have to know and speak to your audience and you have to stand out. As Barry would say, if you’re forgettable, you’re doomed.
How Do We Know If People Will Pay?
When it comes to paying for news, some markets are far more “advanced” than others. The Scandinavian market is light-years ahead of almost everyone else. You have to do your research to understand your own market.
While it doesn’t align perfectly, it’s not surprising that those most likely to pay for news have higher income levels. Higher disposable income tends to create an environment where people buy more stuff.
And while the UK sits in a pretty shocking-looking position, almost 24 million of us pay the BBC license fee. That is, in essence, paying for news. Insert joke about BBC bias and woke cultural agendas here.
Cultural and societal factors really matter. As does your understanding of the market.
“Most heartening is what this represents as the wider information ecosystem fractures: audiences recognise the value of professional journalism and are willing to pay for it.”
In an era of slop, paying for something good is not a bad thing.
Macro And Micro Factors Are Influential
You can only control what you can control. But you shouldn’t dismiss the wider climate.
In the UK and arguably globally, there is a cost-of-living crisis. Globally, there have been a number of very significant geopolitical issues that affect the wider economy. Money doesn’t go as far as it once did, and most subscriptions are a luxury purchase.
Is a £20 or £30 monthly subscription more valuable than a £10 Netflix one? Or Spotify? These are questions you need to ask. Why would someone subscribe and stick around?
How far your money goes has been declining for some time… (Image Credit: Harry Clarkson-Bennett)
And we aren’t just competing with other publishers. While screen time and content consumption are at an all-time high, video consumption and the creator economy are booming.
So your pricing strategy, customer service, and overall experience are hugely important. You are almost certainly going to be a nice-to-have. So make sure your customer journey and path to conversion are premium, and your audience feels listened to.
The standard customer experience (Image Credit: Harry Clarkson-Bennett)
You need to speak to your audience. You don’t have to go into this blind. Forging real connections with people is not impossible and making them feel listened to will go a long way.
You can try to figure out what they really value, how much they’re willing to spend and what’s stopping them.
Should I Paywall Everything?
No. Content is designed to do different things, and not everything is a premium product, whatever journalists tell you. If you shut down your site entirely, you become too closed-off an ecosystem, in my opinion.
Commercial Content: If you have affiliate-led content, paywalling is a questionable decision. It may not be wrong per se, but think about whether the pros outweigh the cons. Typically, it’s a good gateway drug for the rest of your content. And makes some money.
Content You Can Get Elsewhere: Evergreen content of a comparable quality to what already exists in the wider corpus is not a profitable opportunity. I’d argue that leaving this free-to-air has more pros than cons. You can always unpaywall the 100 best albums of all time, but gate the richer, individual album reviews.
Lower-Quality Platforms: A user that comes from a platform like Discover is far less likely to convert than someone who comes from organic search. So think about the role each platform plays in your content access ecosystem.
Paywall Vs. Newsletter signup: It is far easier to convert people to a paying subscriber from a newsletter database than from an on-page paywall. And the user journey is far less interrupted. Building an owned channel is never a bad thing, so think about how engaged users are and whether an email would be a more effective starting point.
As of just a few months ago, the search giant asked that publishers with paywalls change the way they block content to help Google out. The lighter touch paywall solution (a JavaScript-based one) includes the full content in the server response.
“…Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed.
This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”
According to Google, they are struggling to determine the difference. So the problem is on us, not them. They (and I strongly suspect other LLMs) are ingesting this content and training their models on us whether we like it or not.
For those of you who haven’t heard of Common Crawl, it stores a corpus of open web data accessible to “researchers.” By researchers, we now mean tech bros who don’t want to pay for, surprisingly, anything.
“If you didn’t want your content on the internet, you shouldn’t have put your content on the internet.”
It doesn’t stop there either. Even if you block all non-whitelisted bots from accessing your site at a CDN level, you may have syndication partnerships in place. If so, it’s likely your content is making it out into the wider world.
The internet is not exactly a leakproof vessel. If you’re setting one up now, try to implement a server-side option.
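As a minimal sketch of what “server-side” means here, the example below only sends the full article body after the subscription check passes on the server; unsubscribed requests receive the teaser only. Flask is used purely for illustration, and the subscription lookup is a stand-in for whatever your paywall vendor or CRM provides.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for your real subscription store / paywall vendor API.
SUBSCRIBERS = {"token-abc123"}

ARTICLE = {
    "teaser": "First two paragraphs shown to everyone...",
    "full_body": "The premium content that should never reach a non-subscriber's browser.",
}

def is_subscriber(req) -> bool:
    """Check the token/session server-side before deciding what to render."""
    token = req.headers.get("X-Subscriber-Token", "")
    return token in SUBSCRIBERS

@app.route("/article/<slug>")
def article(slug):
    if is_subscriber(request):
        # Full content only goes out once subscription status is confirmed.
        return jsonify(slug=slug, body=ARTICLE["full_body"])
    # Everyone else gets the teaser; the rest is never in the response for JS to "hide".
    return jsonify(slug=slug, body=ARTICLE["teaser"], paywalled=True)

if __name__ == "__main__":
    app.run(debug=True)
```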
What Is The Right Paywall For Me?
I have written about the types of paywall available to you and the pros and cons of each. Generally, I think a metered or dynamic paywall is the best option for most publishers. At the very least, a freemium model. Something that gives people enough to draw them in.
And you can’t exactly draw them in if you just hard paywall everything.
You have to think of this as a full-blown marketing strategy. You need to know where people come from. How much of your content they have consumed. Whether it’s better to show them a newsletter signup as opposed to a paywall.
It is absolutely worth knowing that over time, a strong email database will convert far more effectively than a hard paywall.
So encouraging free signups and taking a longer-term view to conversions (you’ll need a good customer journey here) may be far more effective.
How Can I Set One Up?
There are a number of paywall management options out there for publishers. Leaky Paywall, Zephr, Piano. There are plenty.
The best ones integrate with your existing tech stacks, have excellent personalization and customization options, deploy ad-blocking strategies, and run flexible gating strategies.
Larger publishers tend to go with enterprise-level options with deep analytics and CRM integrations. Smaller publishers can work with lighter touch, cheaper operators. You really just need to scope out what will work best for you.
Particularly when it comes to monthly costs and revenue share options.
How Can I Map The Impact?
You’ll need to establish a few key things (a rough model using them follows the list):
The average drop in traffic you expect to see.
The subsequent loss of existing revenue (probably ad-related, but there may be some knock-on wider commercial impact).
The average value of a subscription (and the expected conversion rate).
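Using those three inputs, a rough back-of-envelope model might look like the sketch below. Every number is a made-up placeholder; plug in your own traffic, RPM, conversion, and retention assumptions.

```python
# All figures are hypothetical placeholders for illustration.
monthly_sessions = 1_000_000
expected_traffic_drop = 0.35           # average drop you expect after gating
ad_rpm = 8.00                          # ad revenue per 1,000 sessions

monthly_sub_price = 10.00
paywall_conversion_rate = 0.005        # share of remaining sessions that convert
avg_subscriber_lifetime_months = 14    # drives LTV, not just month-one revenue

lost_sessions = monthly_sessions * expected_traffic_drop
lost_ad_revenue = lost_sessions / 1000 * ad_rpm

remaining_sessions = monthly_sessions - lost_sessions
new_subs_per_month = remaining_sessions * paywall_conversion_rate
sub_ltv = monthly_sub_price * avg_subscriber_lifetime_months

print(f"Lost ad revenue per month: £{lost_ad_revenue:,.0f}")
print(f"New subscribers per month: {new_subs_per_month:,.0f}")
print(f"Lifetime value added per month of acquisition: £{new_subs_per_month * sub_ltv:,.0f}")
```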
Focusing on customer LTV shifts marketing from chasing traffic to building profitable, loyal audience relationships. It forces businesses to understand that not all audiences or subscriptions are created equal.
You generate more subs through paid media because the net is larger. But lots slip through the net. So you need a quality product (in both a product and marketing sense) alongside UX and customer service that reduces friction.
Search and owned channels are smaller, but users from them are far more likely to pay because they have taken an action to find you. In some cases, they actually want you in their inbox. The quality is higher, but the overall returns are lower.
So you just can’t treat everybody the same.
Closing Thoughts
Subscriber revenue is so valuable because it’s predictable. Subscription business models have boomed for that very reason. A pound of subscriber revenue is far more valuable than almost anything else, and it should be the focus of your business.
But that doesn’t mean you put all your eggs in one basket. You can have multiple subscription types on your website, and that can help you become habitual with all types of users. But you need to add value to their lives every day.
Puzzles, recipes, short and long-form videos, et al.
Businesses make money in many ways. A diverse business is resilient. Resilient to macro and micro factors that will decimate some publishers over the next few years. So talk to your audience, trial new ways of adding value, and commit when one works. Become habitual.
And, shock horror, people want to belong to something. So while the digital experience is crucial, making an effort to connect with people IRL matters.
“Should SEOs be focusing more on digital PR than traditional link building?”
Digital PR is synonymous with link building at this point, as SEOs needed a new way to package and resell the same service. Actual PR work will always be more valuable than link building because PR, whether digital or traditional, focuses on a core audience of customers and reaching specific demographics. This adds value to a business and drives revenue.
With that said, here’s how I’d define digital PR vs. link building if a client asked what the difference is.
Digital PR: Getting brand coverage and citations in media outlets, niche publications, trade journals, niche blogs, and websites that do not allow guest posting, paid links, or unvetted contributors with the goal of building brand awareness and driving traffic from the content.
Link Building: Getting links from websites as a way to try and increase SERP rankings. Traffic from the links, sales from the links, etc., are not being tracked, and the quality of the website can be questionable.
Digital PR is always going to be better than link building because you’re treating the technique as a business and not a scheme to try and game the rankings. Link building became a bad practice years ago as links became less relevant (they are still important, so I want to ensure that isn’t taken out of context), and we stopped doing link building completely. Quality content attracts links naturally, including media mentions. When this happens in a natural way, the website will begin rising, as the site has a lot of value for users, and search engines can tell when a site is quality.
If you’re building links without evaluating the impact they have on traffic and sales, you’re likely setting your site up for failure. Getting a ton of links, just like creating content en masse with AI/LLMs or article spinners, can grow a site quickly. That URL/domain can then burn to the ground just as fast.
That’s why when we purchase a link, an advertorial, or we’re doing a partnership, we always ask ourselves the following questions:
Is there an active audience on this website that is also coming back to the website via branded search for information?
Is the audience on this website part of our customer base?
Will the article we’re pitching or being featured in be helpful to the user, and is our product or service something that is part of the post naturally vs. being forced?
Are we ok with the link being nofollow or sponsored if we’re paying for the inclusion?
If the answer is yes to these four, then we’re good to go with the link. The active audience on the website and people returning by brand name means there is an audience that trusts them for information. If the readership, visitors, or customers are similar or the same demographics as our user base, then it makes sense we’d want to be in front of them where they go for information.
We may have knowledge that is helpful to the user, but if it is not on topic within the post, there is no reason for them to come through and use our services, buy our products, or subscribe to our newsletters. Instead, we’ll wait until there is a fit, so there is a direct “link” between the content we’re contributing, or being an expert on, and our website.
For the last question, our goal is always traffic and customer acquisition, not getting a link. The website owner controls this, and if they want to follow Google’s best practices (which we obviously recommend doing), we will still be happy if they mark it as sponsored or nofollow. This is the most important of the questions. Building links to game the SERPs is a bad idea; building a brand that people search for by name will overpower any link any day of the week. This is always our goal when it comes to Digital PR and link building. Driving that branded search.
So, that raises the question: Where do we go for digital PR?
Sources To Get Digital PR Mentions And Links
When we’re about to start a Digital PR campaign, we create lists of the following targets to reach out to.
Mass Media: Household names like magazines, news websites, and local media, where everyone in the area, the customers, or the country or world knows them by name. The only stipulation we apply is if they have an active category vs. only a few articles here and there. The active category means it is something interesting enough to their reader base that they’re investing in it, so our customers may be there.
Trade Publications: Conferences, associations, and non-profits, as well as industry insiders, will have websites and print publications that go out to members. Search Engine Journal could be considered a trade publication for the SEO and PPC industry, same with SEO Roundtable, and some of the communities like Webmaster World. They publish directly relevant content for search engine marketers and have active users, so if I were an SEO service provider or tool, this is where I’d be looking to get featured and ideally earn links from.
Niche Sites and Bloggers: There is no shortage of niche sites and content producers out there. The trick is finding ones that do not publicly allow guest contributions, advertorials, etc., and that do not link out to non-niche websites and content. This includes avoiding sites that got hacked and had link injections. Even if their “authority” is zero, there is value if they quality-control and all links and mentions are earned.
Influencers: Whether it is YouTube, Facebook group leaders, LinkedIn that is crawlable, or other channels, getting coverage from people with subscribers and an active audience can let search engines crawl the link back to your website. It may not boost your rankings, but it drives customers to you and helps with page discoverability if the link gets crawled. LLMs are also citing their content as sources, so there could be value for AIO, too.
Link building is not dead by any means; links still matter. You just don’t need to build them anymore. Focus on quality placements where an active audience is and where you have a chance at getting traffic and revenue. This is what will move the needle over the long run and help you grow in the SERPs that matter.
Featured Image: Paulo Bobita/Search Engine Journal
As an SEO, there are few things that stoke panic like seeing a considerable decline in organic traffic. People are going to expect answers if they don’t already.
Getting to those answers isn’t always straightforward or simple, because SEO is neither of those things.
The success of an SEO investigation hinges on the ability to dig into the data, identify where exactly the performance decline is happening, and connect the dots to why it’s happening.
It’s a little bit like an actual investigation: Before you can catch the culprit or understand the motive, you have to gather evidence. In an SEO investigation, that’s a matter of segmenting data.
In this article, I’ll share some different ways to slice and dice performance data for valuable evidence that can help further your investigation.
So, before we dissect data to narrow down problem areas, the first thing we need to do is determine whether there’s actually an SEO issue at play.
After all, it could be something else altogether, in which case we’re wasting resources chasing a problem that doesn’t exist.
Is This A Tracking Issue?
In many cases, what looks like a big traffic drop is just an issue with tracking on the site.
To determine whether tracking is functioning correctly, there are a couple of things we need to look for in the data.
The first is consistent drops across channels.
Zoom out of organic search and see what’s happening in other sources and channels.
If you’re seeing meaningful drops across email, paid, etc., that are consistent with organic search, then it’s more than likely that tracking isn’t working correctly.
The other thing we’re looking for here is inconsistencies between internal data and Google Search Console.
Of course, there’s always a bit of inconsistency between first-party data and GSC-reported organic traffic. But if those differences are significantly more pronounced for the time period in question, that hints at a tracking problem.
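A quick way to eyeball this is to line up the period-over-period change for each channel, and for GSC clicks, side by side. The numbers below are hypothetical; in practice, you would export them from your analytics platform and Search Console.

```python
# Hypothetical period-over-period sessions by source (previous vs. current).
channels = {
    "organic_search_internal": (120_000, 78_000),
    "organic_search_gsc_clicks": (118_000, 112_000),
    "email": (30_000, 19_000),
    "paid_search": (45_000, 29_000),
}

for channel, (previous, current) in channels.items():
    change = (current - previous) / previous
    print(f"{channel:28s} {change:+.1%}")

# If every channel (not just organic) drops by a similar amount while GSC clicks
# hold steady, suspect a tracking/tagging problem rather than an SEO issue.
```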
Is This A Brand Issue?
Organic search traffic from Google falls into two primary camps:
Brand traffic: Traffic driven by user queries that include the brand name.
Non-brand traffic: Traffic driven by brand-agnostic user queries.
Non-brand traffic is directly affected by SEO work, whereas brand traffic is mostly impacted by the work that happens in other channels.
When a user includes the brand in their search, they’re already brand-aware. They’re a return user or they’ve encountered the brand through marketing efforts in channels like PR, paid social, etc.
When marketing efforts in other channels are scaled back, the brand reaches fewer users. Since fewer people see the brand, fewer people search for it.
Or, if customers sour on the brand, there are fewer people using search to come back to the site.
Either way, it’s not an SEO problem. But in order to confirm that, we need to filter the data down.
Go to Performance in Google Search Console and exclude any queries that include your brand. Then compare the data against a previous period – usually YoY if you need to account for seasonality. Do the same for the brand-only view, filtering to queries that do include the brand name.
If non-brand traffic has stayed consistent, while brand traffic has dropped, then this is a brand issue.
Screenshot from Google Search Console, November 2025
Tip: Account for users misspelling your brand name by filtering queries using fragments. For example, at Gray Dot Co, we get a lot of brand searches for things like “Gray Company” and “Grey Dot Company.” By using the simple regex “gray|grey”, I can catch brand search activity that would otherwise fall through the cracks.
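To run the same split outside the GSC interface, here’s a minimal sketch that buckets an exported query report into brand and non-brand using that kind of regex and compares clicks YoY. The file and column names are assumptions about how you export the data:

```python
# Minimal sketch: split an exported GSC query report into brand vs. non-brand
# using a misspelling-tolerant regex, then compare clicks YoY for each bucket.
# Column names (query, clicks_current, clicks_previous) are assumed export fields.
import pandas as pd

df = pd.read_csv("gsc_queries_yoy.csv")
brand_pattern = r"gray|grey"  # adapt to your own brand and common misspellings

is_brand = df["query"].str.contains(brand_pattern, case=False, regex=True)

for label, bucket in (("Brand", df[is_brand]), ("Non-brand", df[~is_brand])):
    current, previous = bucket["clicks_current"].sum(), bucket["clicks_previous"].sum()
    change = (current - previous) / previous * 100 if previous else float("nan")
    print(f"{label}: {previous:,} -> {current:,} clicks ({change:+.1f}% YoY)")
```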
Is It Seasonal Demand?
The most obvious example of seasonal demand is holiday shopping on ecommerce sites.
Think about something like jewelry. Most people don’t buy jewelry every day; they buy it for special occasions. We can confirm that seasonality by looking at Google Trends.
Zooming out to the past five years of interest in “jewelry,” it clearly peaks in November and December.
Screenshot from Google Trends, November 2025
As a site that sells jewelry, of course, traffic in Q1 is going to be down from Q4.
I use a pretty extreme example here to make my point, but in reality, seasonality is widespread and often more subtle. It impacts businesses where you might not expect much seasonality at all.
The best way to understand its impact is to look at organic search data year-over-year. Do the peaks and valleys follow the same patterns?
If so, then we need to compare data YoY to get a true sense of whether there’s a potential SEO problem.
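One rough way to test that is to compare the monthly shape of organic clicks across years. Here’s a minimal sketch, assuming a hypothetical daily GSC export with date and clicks columns:

```python
# Minimal sketch: check whether this year's monthly organic clicks follow the same
# seasonal shape as last year's, using a simple correlation of monthly totals.
# Assumes a hypothetical daily GSC export with 'date' and 'clicks' columns.
import pandas as pd

df = pd.read_csv("gsc_daily_clicks.csv", parse_dates=["date"])
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month

monthly = df.pivot_table(index="month", columns="year", values="clicks", aggfunc="sum")
this_year, last_year = monthly.columns.max(), monthly.columns.max() - 1

# A high correlation suggests the peaks and valleys repeat, i.e., seasonality.
print(monthly[[last_year, this_year]])
print("Seasonal shape correlation:", monthly[last_year].corr(monthly[this_year]).round(2))
```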
Is It Industry Demand?
SEOs need to keep tabs on not just what’s happening internally, but also what’s going on externally. A big piece of that is checking the pulse of organic demand for the topics and products that are central to the brand.
Products fall out of vogue, technologies become obsolete, and consumer behavior changes – that’s just the reality of business. When there are fewer potential customers in the landscape, there are fewer clicks to win, and fewer sessions to drive.
Take cameras, for instance. As the cameras on our phones got more sophisticated, digital cameras became less popular. And as they became less popular, searches for cameras dwindled.
Now, they’re making a comeback with younger generations. More people searching, more traffic to win.
Screenshot from npr.com, November 2025
You can see all of this at play in the search landscape by turning to Google Trends: the downtrend in interest caused by advances in technology, and the uptrend boosted by shifts in societal trends.
Screenshot from Google Trends, November 2025
When there are drops in industry, product, or topic demand within the landscape, we need to ask ourselves whether the brand’s organic traffic loss is proportional to the overall loss in demand.
Is Paid Search Cannibalizing Organic Search?
Even if a URL on the site ranks well in organic results, ads still sit higher on the SERP. So, if a site is running an ad for the same query it already ranks for, the ad is naturally going to get more of the clicks.
When businesses give their PPC budgets a boost, there’s potential for this to happen across multiple, key SERPs.
Let’s say a site drives a significant chunk of its organic traffic from four or five product landing pages. If the brand introduces ads to those SERPs, clicks that used to go to the organic result start going to the ad.
That can have a significant impact on organic traffic numbers. But search users are still getting to the same URLs using the same queries.
To confirm, pull sessions by landing pages from both sources. Then, compare the data from before the paid search changes to the period following the change.
If major landing pages consistently show a positive delta in paid sessions that cancels out the negative delta in organic search, you’re not really losing that traffic; you’re just routing it through paid.
Screenshot from Google Analytics, November 2025
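Here’s a minimal sketch of that comparison, assuming a hypothetical export of sessions by landing page, channel, and period (the file layout and column names are placeholders, not a specific analytics report):

```python
# Minimal sketch: check whether paid search is absorbing clicks that organic used to get.
# Assumes a hypothetical export with landing_page, channel ('Organic Search' /
# 'Paid Search'), period ('before' / 'after' the paid change), and sessions columns.
import pandas as pd

df = pd.read_csv("sessions_by_landing_page.csv")
pivot = df.pivot_table(index="landing_page", columns=["channel", "period"],
                       values="sessions", aggfunc="sum", fill_value=0)

organic_delta = pivot["Organic Search"]["after"] - pivot["Organic Search"]["before"]
paid_delta = pivot["Paid Search"]["after"] - pivot["Paid Search"]["before"]

summary = pd.DataFrame({"organic_delta": organic_delta, "paid_delta": paid_delta})
summary["net_change"] = summary["organic_delta"] + summary["paid_delta"]

# Pages where paid gains roughly cancel organic losses point to cannibalization, not loss.
print(summary.sort_values("organic_delta").head(10))
```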
Segmenting Data To Find SEO Issues
Once we have confirmation that the organic traffic declines point to an SEO issue, we can start zooming in.
Segmenting data in different ways helps pinpoint problem areas and find patterns. Only then can we trace those issues to the cause and craft a strategy for recovery.
URL
Most SEOs are going to filter their organic traffic down by URL. It lets us see which pages are struggling and analyze those pages for potential improvements.
It also helps find patterns across pages that make it easier to isolate the cause of more widespread issues. For example, if the site is losing traffic across its product listing pages, it could signal that there’s a problem with the template for that page.
But segmenting by URL also helps us answer a very important question when we pair it with conversion data.
Do We Really Care About This Traffic?
Clicks are only helpful if they help drive business-valuable user interactions like conversions or ad views. For some sites, like online publications, traffic is valuable in and of itself because users coming to the site are going to see ads. The site still makes money.
But for brands looking to drive conversions, it could just be empty traffic if it’s not helping drive that primary key performance indicator (KPI).
A top-of-funnel blog post might drive a lot of traffic because it ranks for very high-volume keywords. If that same blog post is a top traffic-driving organic landing page, a slip in rankings means a considerable organic traffic drop.
But the users entering those high-volume keywords might not be very qualified potential customers.
Looking at conversions by landing page can help brands understand whether the traffic loss is ultimately hurting the bottom line.
The best way to understand is to turn to attribution.
First-touch attribution quantifies an organic landing page’s value in terms of the conversions it helps drive down the line. For most businesses, someone isn’t likely to convert the first time they visit the site. They usually come back and purchase.
Whereas, last-touch attribution shows the organic landing pages that people come to when they’re ready to make a purchase. Both are valuable!
Query
Filtering performance by query can help understand which terms or topic areas to focus improvements on. That’s not new news.
Sometimes, it’s as easy as doing a period-over-period comparison in GSC, ordering by clicks lost, and looking for obvious patterns, i.e., are the queries with the most decline just subtle variants of one another?
If there aren’t obvious patterns and the queries in decline are more widespread, that’s where topic clustering can come into the mix.
Topic Clustering With AI
Using AI for topic clustering helps quickly identify any potential relationships between queries that are seeing performance dips.
Go to GSC and filter performance by query, looking for any YoY declines in clicks and average position. Export those declining queries and have an LLM – or a simple clustering script like the sketch below – group them into semantic topics.
Screenshot from Google Search Console, November 2025
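If you’d rather keep the clustering step local instead of pasting queries into a chatbot, a lightweight script can do a rough first pass. This sketch assumes a hypothetical CSV of declining queries and uses TF-IDF plus KMeans as a stand-in for fancier semantic clustering:

```python
# Minimal sketch: group declining queries into rough topic clusters with TF-IDF + KMeans.
# Assumes a hypothetical export of declining queries with a 'query' column.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

queries = pd.read_csv("declining_queries.csv")["query"].dropna().tolist()

vectors = TfidfVectorizer(ngram_range=(1, 2), min_df=2).fit_transform(queries)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(vectors)

clusters = pd.DataFrame({"query": queries, "cluster": labels})
for cluster_id, group in clusters.groupby("cluster"):
    print(f"\nCluster {cluster_id}:")
    print(group["query"].head(5).to_string(index=False))
```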
The resulting list of semantic groupings can provide an idea of topics where a site’s authority is slipping in search.
In turn, it helps narrow the area of focus for content improvements and other optimizations to potentially build authority for the topics or products in question.
Identifying User Intent
When users search using specific terms, the type of content they’re looking for – and their objective – differs based on the query. These user expectations can be broken out into four different high-level categories:
Informational (top of funnel): Users are looking for answers to questions, explanations, or general knowledge about topics, products, concepts, or events.
Commercial (middle of funnel): Users are interested in comparing products, reading reviews, and gathering information before making a purchase decision.
Transactional (bottom of funnel): Users are looking to perform a specific action, such as making a purchase, signing up for a service, or downloading a file.
Navigational: Brand-familiar users are using the search engine as a shortcut to find a specific website or webpage.
By segmenting queries by intent, we can identify the user objectives where the site, or specific pages on it, is falling short. It gives us a lens into the performance decline, making it easier to identify possible causes from the perspective of user experience.
If the majority of queries losing clicks and position are informational, it could signal shortcomings in the site’s blog content. If the queries are consistently commercial, it might call for an investigation into how the site approaches product detail and/or listing pages.
GSC doesn’t provide user intent in its reporting, so this is where a third-party SEO tool can come into play. If you have position tracking set up and GSC connected, you can use the tool’s rankings report to identify queries in decline and their user intent.
If not, you can still get the data you need by using a mix of GSC and a tool like Ahrefs.
Device
This view of performance data is pretty simple, but it’s equally easy to overlook!
When the large majority of performance declines come from only desktop or only mobile, device data helps identify potential technical or UX issues within that specific experience.
The important thing to remember is that any declines need to be considered proportionally. Take the metrics for the site below…
Screenshot from Google Search Console, November 2025
At first glance, the data makes it look like there might be an issue with the desktop experience. But we need to look at things in terms of percentages.
Desktop: (1 – 648/1545) × 100 ≈ 58% decline
Mobile: (1 – 149/316) × 100 ≈ 53% decline
While desktop shows a much larger decline in terms of click count, the percentage of decline YoY is fairly similar across both desktop and mobile. So we’re probably not looking for anything device-specific in this scenario.
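If you’re pulling these numbers out of an export rather than the UI, the proportional check is a one-liner. The figures below are just the example values from the screenshot:

```python
# Minimal sketch: compare device-level declines proportionally rather than by raw clicks.
# The click counts are the example figures from the screenshot above.
def pct_decline(current: int, previous: int) -> float:
    return (1 - current / previous) * 100

print(f"Desktop: {pct_decline(648, 1545):.0f}% decline")   # ~58%
print(f"Mobile:  {pct_decline(149, 316):.0f}% decline")    # ~53%
```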
Search Appearance
Rich results and SERP features are an opportunity to stand out on the SERP and drive more traffic through enhanced results. Using the search appearance filter in Google Search Console, you can see traffic from different types of rich results and SERP features:
Forums.
AMP Top Story (AMP page + Article markup).
Education Q&A.
FAQ.
Job Listing.
Job Details.
Merchant Listing.
Product Snippet.
Q&A.
Review Snippet.
Recipe Gallery.
Video.
This is the full list of possible features with rich results (courtesy of SchemaApp), though you’ll only see filters for search appearances where the domain is currently positioned.
In most cases, Google is able to generate these types of results because there is structured data on pages. The notable exceptions are Q&A, translated results, and video.
So when there are significant traffic drops coming from a specific type of search appearance, it signals that there’s potentially a problem with the structured data that enables that search feature.
Screenshot from Google Search Console, November 2025
You can investigate structured data issues in the Enhancements reports in GSC. The exception is product snippets, which nest under the Shopping menu. Either way, the reports only show up in your left-hand nav if Google is aware of relevant data on the site.
For example, the product snippets report shows why some snippets are invalid, as well as ways to potentially improve valid results.
Screenshot from Google Search Console, November 2025
This context is valuable as you begin to investigate the technical causes of traffic drops from specific search features. In this case, it’s clear that Google is able to crawl and utilize product schema on most pages – but there are some opportunities to improve that schema with additional data.
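For reference, here’s a minimal sketch of the kind of Product markup these reports evaluate. The values are hypothetical placeholders; always validate real markup with Google’s Rich Results Test:

```python
# Minimal sketch: Product structured data of the kind that typically powers
# merchant listing / product snippet appearances. All values are hypothetical.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Gold Pendant",
    "sku": "EX-123",
    "image": "https://www.example.com/images/pendant.jpg",
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.7", "reviewCount": "132"},
}

# Embed in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(product_jsonld, indent=2))
```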
Featured Snippets
When featured snippets originally came on the scene, it was a major change to the SERP structure that resulted in a serious hit to traditional organic results.
Today, AI Overviews are doing the same. In fact, research from Seer shows that CTR has dropped 61% for queries that now include an AI overview (21% of searches). And that impact is outsized for informational queries.
In cases where rankings have remained relatively static, but traffic is dropping, there’s good reason to investigate whether this type of SERP change is a driver of loss.
While Google Search Console doesn’t report on featured snippets, People Also Ask (PAA) questions, or AI Overviews, third-party tools do.
In the third-party tool Semrush, you can use the Domain Overview report to check for featured snippet availability across keywords where the site ranks.
Screenshot from Semrush, November 2025
Do the keywords where you’re losing traffic have AI overviews? If you’re not cited, it’s time to start thinking about how you’re going to win that placement.
Search Type
Search type is another way to filter GSC data, especially when you’re seeing traffic declines despite healthy, consistent rankings.
After all, web search is just one prong of Google Search. Think about it: How often do you use Google Image search? At least in my case, that’s fairly often.
Filter performance data by each search type – Web, Image, Video, and News – to understand which one(s) are having the biggest impact on the decline. Then use that insight to start connecting the dots to the cause.
Screenshot from Google Search Console, November 2025
Images are a great example. One simple line in the robots.txt can block Google from crawling a subfolder that hosts multitudes of images. As those images disappear from image search results, any clicks from those results disappear in tandem.
We don’t know to look for this issue until we slice the data accordingly!
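A quick way to sanity-check that scenario is to test an image URL against the live robots.txt. This sketch uses Python’s standard-library robots.txt parser with a hypothetical domain and path:

```python
# Minimal sketch: check whether robots.txt blocks Googlebot-Image from an image path.
# The domain and image URL are hypothetical examples.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()

image_url = "https://www.example.com/assets/images/product-123.jpg"
for agent in ("Googlebot-Image", "Googlebot"):
    allowed = rp.can_fetch(agent, image_url)
    print(f"{agent}: {'allowed' if allowed else 'BLOCKED'} for {image_url}")
```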
Geography
If the business operates physically in specific cities and states, then it likely already has geo-specific performance tracking set up through a tool.
But online-only businesses shouldn’t dismiss geographic data either – even at the city/state level! Declines are still a trigger to check geo-specific performance.
Country
Just because the brand only sells and operates in one country doesn’t mean that’s where all the domain’s traffic is coming from. Drilling down by country in GSC allows you to see whether declines are coming from the country the brand is focused on or, potentially, another country altogether.
Screenshot from Google Search Console, November 2025
If it’s another country, it’s time to decide whether that matters. If the site is a publisher, it probably cares more about that traffic than an ecommerce brand that’s more focused on purchases in its country of operation.
Localization
When tools report positions at the country level, ranking shifts in specific markets fly under the radar. It certainly happens, and major markets can have a major traffic impact!
Tools like BrightLocal, Whitespark, and Semrush let you analyze SERP rankings one level deeper than GSC, providing data down to the city.
You can check for ranking discrepancies across cities by spot-checking a small sample of the keywords with the greatest declines in clicks.
If I’m an SEO at the University of Phoenix, which is an online university, I’m probably pretty excited about ranking #1 in the United States for “online business degree.”
Screenshot from Semrush, November 2025
But if I drill down further, I might be a little distraught to find that the domain isn’t in the top five SERP results for users in Denver, CO…
Screenshot from Semrush, November 2025
…or Raleigh, North Carolina.
Screenshot from Semrush, November 2025
Catch Issues Faster By Leveraging AI For Data Analysis
Data segmentation is an important piece of any traffic drop investigation, because humans can see patterns in data that bots don’t.
However, the opposite is true too. With anomaly detection tooling, you get the best of both worlds.
When combined with monitoring and alert notifications, anomaly detection makes it possible to find and fix issues faster. Plus, it helps you find data patterns in any after-the-fact investigations.
All of this helps ensure that your analysis is comprehensive, and might even point out gaps for further investigation.
As Sherlock Holmes would say about an investigation, “It is a capital mistake to theorize before one has data.” With the right data in hand, the culprits start to reveal themselves.
Data segmentation empowers SEOs to uncover leads that point to possible causes. By narrowing it down based on the evidence, we ensure more accuracy, less work, faster answers, and quicker recovery.
And while leadership might not love a traffic drop, they’re sure to love that.
A few weeks ago, I was given access to review a confidential OpenAI partner-facing report, the kind of dataset typically made available to a small group of publishers.
For the first time, the report gives us access to detailed visibility metrics from inside ChatGPT – the kind of data that only a select few OpenAI site partners have ever seen.
This isn’t a dramatic “leak,” but rather an unusual insight into the inner workings of the platform, which will influence the future of SEO and AI-driven publishing over the next decade.
The consequences of this dataset far outweigh any single controversy: AI visibility is skyrocketing, but AI-driven traffic is evaporating.
This is the clearest signal yet that we are leaving the era of “search engines” and entering the era of “decision engines,” where AI agents surface, interpret, and synthesize information without necessarily directing users back to the source.
This forces every publisher, SEO professional, brand, and content strategist to fundamentally reconsider what online visibility really means.
1. What The Report Data Shows: Visibility Without Traffic
The dataset covers a full month of visibility data for a large media publisher. With surprising granularity, it breaks down how often a URL is displayed inside ChatGPT, where it appears in the UI, how often users click on it, how many conversations it impacts, and the surface-level click-through rate (CTR) across different UI placements.
URL Display And User Interaction In ChatGPT
Image from author, November 2025
The dataset’s top-performing URL recorded 185,000 distinct conversation impressions, meaning it was shown in that many separate ChatGPT sessions.
Of these impressions, 3,800 were click events, yielding a conversation-level CTR of 2%. However, when counting multiple appearances within conversations, the numbers increase to 518,000 total impressions and 4,400 total clicks, reducing the overall CTR to 0.80%.
This is an impressive level of exposure. However, it is not an impressive level of traffic.
Most other URLs performed dramatically worse:
0.5% CTR (considered “good” in this context).
0.1% CTR (typical).
0.01% CTR (common).
0% CTR (extremely common, especially for niche content).
This is not a one-off anomaly; it’s consistent across the entire dataset and matches external studies, including server log analyses by independent SEOs showing sub-1% CTR from ChatGPT sources.
We have experienced this phenomenon before, but never on this scale. Google’s zero-click era was the precursor. ChatGPT is the acceleration. However, there is a crucial difference: Google’s featured snippets were designed to provide quick answers while still encouraging users to click through for more information. In contrast, ChatGPT’s responses are designed to fully satisfy the user’s intent, rendering clicks unnecessary rather than merely optional.
2. The Surface-Level Paradox: Where OpenAI Shows The Most, Users Click The Least
The report breaks down every interaction into UI “surfaces,” revealing one of the most counterintuitive dynamics in modern search behavior. The response block, where LLMs place 95%+ of their content, generates massive impression volume, often 100 times more than other surfaces. However, CTR hovers between 0.01% and 1.6%, and curiously, the lower the CTR, the better the quality of the answer.
LLM Content Placement And CTR Relationship
Image from author, November 2025
This is the new equivalent of “Position Zero,” except now it’s not just zero-click; it’s zero-intent-to-click. The psychology is different from that of Google. When ChatGPT provides a comprehensive answer, users interpret clicking as expressing doubt about the AI’s accuracy, indicating the need for further information that the AI cannot provide, or engaging in academic verification (a relatively rare occurrence). The AI has already solved their problem.
The sidebar tells a different story. This small area has far fewer impressions, but a consistently strong CTR ranging from 6% to 10% in the dataset. This is higher than Google’s organic positions 4 through 10. Users who click here are often exploring related content rather than verifying the main answer. The sidebar represents discovery mode rather than verification mode. Users trust the main answer, but are curious about related information.
Citations at the bottom of responses exhibit similar behavior, achieving a CTR of between 6% and 11% when they appear. However, they are only displayed when ChatGPT explicitly cites sources. These attract academically minded users and fact-checkers. Interestingly, the presence of citations does not increase the CTR of the main answer; it may actually decrease it by providing verification without requiring a click.
Search results are rarely triggered and usually only appear when ChatGPT determines that real-time data is needed. They occasionally show CTR spikes of 2.5% to 4%. However, the sample size is currently too small to be significant for most publishers, although these clicks represent the highest intent when they occur.
The paradox is clear: The more frequently OpenAI displays your content, the fewer clicks it generates. The less frequently it displays your content, the higher the CTR. This overturns 25 years of SEO logic. In traditional search, high visibility correlates with high traffic. In AI-native search, however, high visibility often correlates with information extraction rather than user referral.
“ChatGPT’s ‘main answer’ is a visibility engine, not a traffic engine.”
3. Why CTR Is Collapsing: ChatGPT Is An Endpoint, Not A Gateway
The comments and reactions on LinkedIn threads analyzing this data were strikingly consistent and insightful. Users don’t click because ChatGPT solves their problem for them. Unlike Google, where the answer is a link, ChatGPT provides the answer directly.
This means:
Satisfied users don’t click (they got what they needed).
Curious users sometimes click (they want to explore deeper).
Skeptical users rarely click (they either trust the AI or distrust the entire process).
Very few users feel the need to leave the interface.
As one senior SEO commented:
“Traffic stopped being the metric to optimize for. We’re now optimizing for trust transfer.”
Another analyst wrote:
“If ChatGPT cites my brand as the authority, I’ve already won the user’s trust before they even visit my site. The click is just a formality.”
This represents a fundamental shift in how humans consume information. In the pre-AI era, the pattern was: “I need to find the answer” → click → read → evaluate → decide. In the AI era, however, it has become: “I need an answer” → receive → trust → act, with no click required. AI becomes the trusted intermediary. The source becomes the silent authority.
Shift In Information Consumption
Image from author, November 2025
This marks the beginning of what some are calling “Inception SEO”: optimizing for the answer itself, rather than for click-throughs. The goal is no longer to be findable. The goal is to be the source that the AI trusts and quotes.
4. Authority Over Keywords: The New Logic Of AI Retrieval
Traditional SEO relies on indexation and keyword matching. LLMs, however, operate on entirely different principles. They rely on internal model knowledge wherever possible, drawing on trained data acquired through crawls, licensed content, and partnerships. They only fetch external data when the model determines that its internal knowledge is insufficient, outdated, or unverified.
When selecting sources, LLMs prioritize domain authority and trust signals, content clarity and structure, entity recognition and knowledge graph alignment, historical accuracy and factual consistency, and recency for time-sensitive queries. They then decide whether to cite at all based on query type and confidence level.
This leads to a profound shift:
Entity strength becomes more important than keyword coverage.
Consistency and structured content matter more than content volume.
Model trust becomes the single most important ranking factor.
Factual accuracy over long periods builds cumulative advantage.
“You’re no longer competing in an index. You’re competing in the model’s confidence graph.”
This has radical implications. The old SEO logic was “Rank for 1,000 keywords → Get traffic from 1,000 search queries.” The new AI logic is “Become the authoritative entity for 10 topics → Become the default source for 10,000 AI-generated answers.”
In this new landscape, a single, highly authoritative domain has the potential to dominate AI citations across an entire topic cluster. “Long-tail SEO” may become less relevant as AI synthesizes answers rather than matching specific keywords. Topic authority becomes more valuable than keyword authority. Being cited once by ChatGPT can influence millions of downstream answers.
5. The New KPIs: “Share Of Model” And In-Answer Influence
As CTR declines, brands must embrace metrics that reflect AI-native visibility. The first of these is “share of model presence”: how often your brand, entity, or URLs appear in AI-generated answers, regardless of whether they are clicked. This is analogous to “share of voice” in traditional advertising, but instead of measuring presence in paid media, it measures presence in the AI’s reasoning process.
LLM Decision Hierarchy
Image from author, November 2025
How to measure:
Track branded mentions in AI responses across major platforms (ChatGPT, Claude, Perplexity, Google AI Overviews).
Monitor entity recognition in AI-generated content.
Analyze citation frequency in AI responses for your topic area.
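Measuring this today mostly means sampling answers yourself. As a rough illustration, here’s a minimal sketch that counts brand mentions across a set of AI responses you’ve already collected for your target prompts (the file, columns, and brand pattern are all hypothetical):

```python
# Minimal sketch: a rough "share of model presence" count over saved AI answers.
# Assumes a hypothetical CSV with platform, prompt, and response_text columns,
# populated however you collect responses (manually or with a tracking tool).
import pandas as pd

df = pd.read_csv("ai_responses_sample.csv")
brand_pattern = r"example brand|examplebrand\.com"   # adapt to your brand name and domain

df["mentioned"] = df["response_text"].str.contains(brand_pattern, case=False, regex=True)

share = df.groupby("platform")["mentioned"].mean().mul(100).round(1)
print("Share of sampled answers mentioning the brand, by platform (%):")
print(share.to_string())
```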
LLMs are increasingly producing authoritative statements, such as “According to Publisher X…,” “Experts at Brand Y recommend…,” and “As noted by Industry Leader Z…”
This is the new “brand recall,” except it happens at machine speed and on a massive scale, influencing millions of users without them ever visiting your website. Being directly recommended by an AI is more powerful than ranking No. 1 on Google, as the AI’s endorsement carries algorithmic authority. Users don’t see competing sources; the recommendation is contextualized within their specific query, and it occurs at the exact moment of decision-making.
Then, there’s contextual presence: being part of the reasoning chain even when not explicitly cited. This is the “dark matter” of AI visibility. Your content may inform the AI’s answer without being directly attributed, yet still shape how millions of users understand a topic. When a user asks about the best practices for managing a remote team, for example, the AI might synthesize insights from 50 sources, but only cite three of them explicitly. However, the other 47 sources still influenced the reasoning process. Your authority on this topic has now shaped the answer that millions of users will see.
High-intent queries are another crucial metric. Narrow, bottom-of-funnel prompts still convert, showing a click-through rate (CTR) of between 2.6% and 4%. Such queries usually involve product comparisons, specific instructions requiring visual aids, recent news or events, technical or regulatory specifications requiring primary sources, or academic research requiring citation verification. The strategic implication is clear: Don’t abandon click optimization entirely. Instead, identify the 10-20% of queries where clicks still matter and optimize aggressively for those.
Finally, LLMs judge authority based on what might be called “surrounding ecosystem presence” and cross-platform consistency. This means internal consistency across all your pages; schema and structured data that machines can easily parse; knowledge graph alignment through presence in Wikidata, Wikipedia, and industry databases; cross-domain entity coherence, where authoritative third parties reference you consistently; and temporal consistency, where your authority persists over time.
This holistic entity SEO approach optimizes your entire digital presence as a coherent, trustworthy entity, not individual pages. Traditional SEO metrics cannot capture this shift. Publishers will require new dashboards to track AI citations and mentions, new tools to measure “model share” across LLM platforms, new attribution methodologies in a post-click world, and new frameworks to measure influence without direct traffic.
6. Why We Need An “AI Search Console”
Many SEOs immediately saw the same thing in the dataset:
“This looks like the early blueprint for an OpenAI Search Console.”
Right now, publishers cannot:
See how many impressions they receive in ChatGPT.
Measure their inclusion rate across different query types.
Understand how often their brand is cited vs. merely referenced.
Identify which UI surfaces they appear in most frequently.
Correlate ChatGPT visibility with downstream revenue or brand metrics.
Track entity-level impact across the knowledge graph.
Measure how often LLMs fetch real-time data from them.
Understand why they were selected (or not selected) for specific queries.
Compare their visibility to competitors.
Google had “Not Provided,” hiding keyword data. AI platforms may give us “Not Even Observable,” hiding the entire decision-making process. This creates several problems. For publishers, it’s impossible to optimize what you can’t measure; there’s no accountability for AI platforms, and asymmetric information advantages emerge. For the ecosystem, it reduces innovation in content strategy, concentrates power in AI platform providers, and makes it harder to identify and correct AI bias or errors.
Based on this leaked dataset and industry needs, an ideal “AI Search Console” would provide core metrics like impression volume by URL, entity, and topic, surface-level breakdowns, click-through rates, and engagement metrics, conversation-level analytics showing unique sessions, and time-series data showing trends. It would show attribution and sourcing details: how often you’re explicitly cited versus implicitly used, which competitors appear alongside you, query categories where you’re most visible, and confidence scores indicating how much the AI trusts your content.
Diagnostic tools would explain why specific URLs were selected or rejected, what content quality signals the AI detected, your entity recognition status, knowledge graph connectivity, and structured data validation. Optimization recommendations would identify gaps in your entity footprint, content areas where authority is weak, opportunities to improve AI visibility, and competitive intelligence.
OpenAI and other AI platforms will eventually need to provide this data for several reasons. Regulatory pressure from the EU AI Act and similar regulations may require algorithmic transparency. Media partnerships will demand visibility metrics as part of licensing deals. Economic sustainability requires feedback loops for a healthy content ecosystem. And competitive advantage means the first platform to offer comprehensive analytics will attract publisher partnerships.
The dataset we’re analyzing may represent the prototype for what will eventually become standard infrastructure.
AI Search Console
Image from author, November 2025
7. Industry Impact: Media, Monetization, And Regulation
The comments raised significant concerns and opportunities for the media sector. The contrast between Google’s and OpenAI’s economic models is stark. Google contributes to media financing through neighbouring rights payments in the EU and other jurisdictions. It still sends meaningful traffic, albeit declining, and has established economic relationships with publishers. Google also participates in advertising ecosystems that fund content creation.
By contrast, OpenAI and similar AI platforms currently only pay select media partners under private agreements, send almost no traffic with a CTR of less than 1%, extract maximum value from content while providing minimal compensation, and create no advertising ecosystem for publishers.
AI Overviews already reduce organic CTR. ChatGPT takes this trend to its logical conclusion by eliminating almost all traffic. This will force a complete restructuring of business models and raise urgent questions: Should AI platforms pay neighbouring rights like search engines do? Will governments impose compensatory frameworks for content use? Will publishers negotiate direct partnerships with LLM providers? Will new licensing ecosystems emerge for training data, inference, and citation? How should content that is viewed but not clicked on be valued?
Several potential economic models are emerging. One model is citation-based compensation, where platforms pay based on how often content is cited or used. This is similar to music streaming royalties, though transparent metrics are required.
Under licensing agreements, publishers would license content directly to AI platforms, with tiered pricing based on authority and freshness. This is already happening with major outlets such as the Associated Press, Axel Springer, and the Financial Times. Hybrid attribution models would combine citation frequency, impressions, and click-throughs, weighted by query value and user intent, in order to create standardized compensation frameworks.
Regulatory mandates could see governments requiring AI platforms to share revenue with content creators, based on precedents in neighbouring rights law. This could potentially include mandatory arbitration mechanisms.
This would be the biggest shift in digital media economics since Google Ads. Platforms that solve this problem fairly will build sustainable ecosystems. Those that do not will face regulatory intervention and publisher revolts.
8. What Publishers And Brands Must Do Now
Based on the data and expert reactions, an emerging playbook is taking shape. Firstly, publishers must prioritize inclusion over clicks. The real goal is to be part of the solution, not to generate a spike in traffic. This involves creating comprehensive, authoritative content that AI can synthesize, prioritizing clarity and factual accuracy over tricks to boost engagement, structuring content so that key facts can be easily extracted, and establishing topic authority rather than chasing individual keywords.
Strengthening your entity footprint is equally critical. Every brand, author, product, and concept must be machine-readable and consistent. Publishers should ensure their entity exists on Wikidata and Wikipedia, maintain consistent NAP (name, address, phone number) details across all properties, implement comprehensive schema markup, create and maintain knowledge graph entries, build structured product catalogues, and establish clear entity relationships, linking companies to people, products, and topics.
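As an illustration of the schema piece, here’s a minimal sketch of Organization markup that ties an entity to its knowledge graph presence via sameAs. Every name and URL below is a hypothetical placeholder:

```python
# Minimal sketch: Organization markup that connects the entity to its knowledge
# graph footprint via sameAs. All names and URLs are hypothetical placeholders.
import json

org_jsonld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Publisher",
    "url": "https://www.example-publisher.com",
    "logo": "https://www.example-publisher.com/logo.png",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://en.wikipedia.org/wiki/Example_Publisher",
        "https://www.linkedin.com/company/example-publisher",
    ],
    "founder": {"@type": "Person", "name": "Jane Doe"},
}

print(json.dumps(org_jsonld, indent=2))
```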
Building trust signals for retrieval is important because LLMs prioritize high-authority, clearly structured, low-ambiguity content. These trust signals include:
Authorship transparency, with clear author bios, credentials, and expertise.
Editorial standards, covering fact-checking, corrections policies, and sourcing.
Domain authority, built through age, backlink profile, and industry recognition.
Structured data, via schema implementation and rich snippets.
Factual consistency, maintaining accuracy over time without contradictions.
Expert verification, through third-party endorsements and citations.
Publishers should not abandon click optimization entirely. Instead, they should target bottom-funnel prompts that still demonstrate a measurable CTR of between 2% and 4%, because AI responses alone can’t fully satisfy those queries.
Examples of high-CTR queries:
“How to configure [specific technical setup]” (requires visuals or code).
“Latest news on [breaking event]” (requires recency).
“Where to buy [specific product]” (transactional intent).
“[Company] careers” (requires job portal access).
Strategy: Identify the 10–20% of your topic space where AI cannot fully satisfy user intent, and optimize those pages for clicks.
In terms of content, it is important to lead with the most important information, use clear and definitive language, cite primary sources, avoid ambiguity and hedging unless accuracy requires it, and create content that remains accurate over long timeframes.
Perhaps the most important shift is mental: Stop thinking in terms of traffic and start thinking in terms of influence. Value has shifted from visits to the reasoning process itself. New success metrics should track how often you are cited by AI, the percentage of AI responses in your field that mention you, how your “share of model” compares with that of your competitors, whether you are building cumulative authority that persists across model updates, and whether AI recognizes you as the definitive source for your core topics.
The strategic focus shifts from “drive 1 million monthly visitors” to “influence 10 million AI-mediated decisions.”
Publishers must also diversify their revenue streams so that they are not dependent on traffic-based monetization. Alternative models include building direct relationships with audiences through email lists, newsletters, and memberships; offering premium content via paywalls, subscriptions, and exclusive access; integrating commerce through affiliate programmes, product sales, and services; forming B2B partnerships to offer white-label content, API access, and data licensing; and negotiating deals with AI platforms for direct compensation for content use.
Publishers that control the relationship with their audience rather than depending on intermediary platforms will thrive.
The Super-Predator Paradox
A fundamental truth about artificial intelligence is often overlooked: these systems do not generate content independently; they rely entirely on the accumulated work of millions of human creators, including journalism, research, technical documentation, and creative writing, which form the foundation upon which every model is built. This dependency is the reason why OpenAI has been pursuing licensing deals with major publishers so aggressively. It is not an act of corporate philanthropy, but an existential necessity. A language model that is only trained on historical data becomes increasingly disconnected from the current reality with each passing day. It is unable to detect breaking news or update its understanding through pure inference. It is also unable to invent ground truth from computational power alone.
This creates what I call the “super-predator paradox”: If OpenAI succeeds in completely disrupting traditional web traffic, causing publishers to collapse and the flow of new, high-quality content to slow to a trickle, the model’s training data will become increasingly stale. Its understanding of current events will degrade, and users will begin to notice that the responses feel outdated and disconnected from reality. In effect, the super-predator will have devoured its ecosystem and will now find itself starving in a content desert of its own creation.
The paradox is inescapable and suggests two very different possible futures. In one, OpenAI continues to treat publishers as obstacles rather than partners. This would lead to the collapse of the content ecosystem and the AI systems that depend on it. In the other, OpenAI shares value with publishers through sustainable compensation models, attribution systems, and partnerships. This would ensure that creators can continue their work. The difference between these futures is not primarily technological; the tools to build sustainable, creator-compensating AI systems largely exist today. Rather, it is a matter of strategic vision and willingness to recognize that, if artificial intelligence is to become the universal interface for human knowledge, it must sustain the world from which it learns rather than cannibalize it for short-term gain. The next decade will be defined not by who builds the most powerful model, but by who builds the most sustainable one: by who solves the super-predator paradox before it becomes an extinction event for both the content ecosystem and the AI systems that cannot survive without it.
Note: All data and stats cited above are from the OpenAI partner report, unless otherwise indicated.