Google Confirms Smaller Core Updates Happen Continuously via @sejournal, @MattGSouthern

Google updated its core updates documentation to say smaller core updates happen on an ongoing basis, so sites can improve without waiting for named updates.

  • Google explicitly confirms it makes “smaller core updates” beyond the named updates announced several times per year.
  • Sites that improve their content can see ranking gains without waiting for the next major core update to roll out.
  • The documentation change addresses whether recovery between named updates is possible.
YouTube AI Enforcement Questioned As Channels Get Restored via @sejournal, @MattGSouthern

YouTube creators are raising concerns about the platform’s AI-driven moderation system. Multiple accounts describe sudden channel terminations for “spam, deceptive practices and scams,” followed by rapid appeal rejections with templated responses.

In some cases, channels have been restored only after the creator generated attention on X or Reddit. YouTube’s message to creators states the company has “not identified any widespread issues” with channel terminations and says only “a small percentage” of enforcement actions are reversed.

The gap between YouTube’s stated position and creators’ experiences is what’s driving the debate.

What Creators Are Reporting

The pattern appearing across X and Reddit threads follows a similar sequence.

Channels receive termination notices citing “spam, deceptive practices and scams.” Appeals get rejected within hours, sometimes minutes, with generic language. When channels are restored, creators say they receive no explanation of what triggered the ban or how to prevent future issues.

One documented case comes from YouTube creator “Chase Car,” who runs an EV news channel. In a detailed post on r/YouTubeCreators, they describe a sequence where their channel was demonetized by an automated system, cleared by a human reviewer, then terminated months later for spam.

The creator says they escalated the case to an EU-certified dispute body under the Digital Services Act. According to their account, the decision found the termination “was not rightful.” As of their most recent update, YouTube had not acted on the ruling.

Channels Restored After Public Attention

A subset of terminated channels have been reinstated after their cases gained visibility on social media.

Film analysis channel Final Verdict shared a thread documenting a sudden spam-related termination and later reinstatement after posts on X gained traction.

True crime channel The Dark Archive had their channel removed and later restored after tagging TeamYouTube publicly.

Streamer ProkoTV said their channel was restricted from live streaming after a spam warning. TeamYouTube later acknowledged an error and restored access.

These reversals confirm that some enforcement actions are incorrect by YouTube’s own standards. They also suggest that escalation on X can function as a parallel appeal route.

YouTube Acknowledges Some Errors

In a few cases, YouTube or its representatives have publicly admitted mistakes.

Dexerto reported on a creator whose 100,000-plus subscriber channel was banned over a comment they wrote on a different account at age 13. YouTube eventually apologized, telling the creator the ban “was a mistake on our end.”

Tech YouTuber Enderman, with 350,000 subscribers, said an automated system shut down their channel after linking it to an unrelated banned account. Dexerto highlighted the case after it spread on X.

YouTube’s Official Position

YouTube frames its enforcement differently than creators describe.

The company’s spam, deceptive practices, and scams policy explains why it takes action on fraud, impersonation, fake engagement, and misleading metadata. The policy notes that YouTube may act at the channel level if an account exists “primarily” to violate rules.

In a FAQ post, YouTube says the “vast majority” of terminations are upheld on appeal. The company says it’s “confident” in its processes while acknowledging “a handful” of incorrect terminations that were later reversed.

YouTube also offers a “Second Chances” pilot program that allows some creators to start new channels if they meet specific criteria and were terminated more than a year ago. The program doesn’t restore lost videos or subscribers.

YouTube’s CEO recently indicated the company plans to expand AI moderation tools. In an interview with Time, he said YouTube will proceed with expanded AI enforcement despite creator concerns.

Why This Matters

If you rely on YouTube as a core channel, these accounts raise practical concerns. A channel termination removes your entire presence, including subscribers and revenue potential. When appeals feel automated, you have limited visibility into what triggered the enforcement.

The Chase Car timeline shows an AI system can overturn a positive human verdict months later. Creators without large followings may have fewer options for escalation if formal appeals fail.

Looking Ahead

The EU’s Digital Services Act gives European users access to certified dispute bodies for moderation decisions. The Chase Car case could test how platforms respond to unfavorable rulings under that system.

YouTube says its appeals process is the correct channel for enforcement disputes. The company has not announced changes to its moderation approach in response to creator complaints.

Monitor YouTube’s official help community for any updates to appeal procedures or policy clarifications.


Featured Image: T. Schneider/Shutterstock

So You Want To Paywall?

There are three inevitabilities in life. Death, taxes, and big tech companies dumping on the little guy. As zero-click searches reach an all-time high and content is stolen and repurposed for the gain of the almighty tech loser, there’s only one viable solution.

To paywall.

To create a value exchange that reduces reliance on third-party platforms. To become as self-sufficient as possible. Like an off-grid cabin or your mum’s basement, a paywall gives you a sense of security you just cannot put a price on.

As we’re all finding, any kind of reliance on these guys doesn’t put us in a good position. They do not want to send us traffic.

TL;DR

  1. Subscriber revenue is intrinsically more valuable to a business because it is predictable. Subscription and advertiser revenue are not created equal.
  2. Don’t paywall everything. Use dynamic/metered paywalls and leave high-reach, generally lower-quality platforms like Google Discover free for email signups.
  3. Subscription success relies on your USP – whether that’s exclusive data, deep, niche insights, or a certain vibe – you have to stand out.
  4. The customer experience and understanding of your audience matter. Create habit-forming connections and products. Become an essential part of their life.

But What About Our Traffic?

Your traffic will decline. But guess what? You’re already hemorrhaging clicks and have been for some time. And traffic doesn’t pay the bills.

Two comparable pages, one with a paywall, one without (Image Credit: Harry Clarkson-Bennett)

The only way to sustain rankings over time is with high-quality engagement data. Navboost stores and uses 13 months of data to identify good vs bad clicks, click quality, the last longest click, and on-page interactions to establish the most relevant content. All at a query level.

Paywalls are not your friend when it comes to user engagement. Not for the masses. But for a small cohort of people who like you enough to pay, your engagement data will be excellent.

In an ultra-personalized world, you will still do well with the people who really matter.

We have data that pretty perfectly highlights the impact of a paywall on rankings. Over the course of three to four months in traditional search, your rankings steadily drop before settling into severe mediocrity. You’ve got to fight for every click. With great content, marketing, savviness. Everything.

We have used an image manager to try and generate a free-to-air badge. It rarely shows up unless there’s no featured image, but the idea is excellent (Image Credit: Harry Clarkson-Bennett)

In Google Discover – a highly personalized, click- and engagement-driven platform – this is even more pronounced. While Discover’s clickless traffic is lower quality, a small cohort of highly engaged users develops over time that you can target with a paywall.

Unpaywall for the masses, build your owned channels, and paywall for the highly engaged. The platform will take care of the personalization for you.

So, maximize your value exchange with ads and email signups for most users, but don’t neglect those with a high return rate.

There’s some psychology involved in all of this. When a brand becomes widely known for paywalling, I suspect the likelihood of a click goes down as users know what to expect. Or maybe what not to expect.

This likely perpetuates over time, so you should make clear which articles are free to air.

Is Our Content Good Enough?

To nail SEO bingo, it depends. It depends on what your value is in the market. There is a lot of free stuff out there already, but it’s broadly rubbish. So as long as the bar keeps dropping, we’ll all be fine.

I am old-ish. I like words. Writing great content isn’t easy, and it is usurped in many cases by richer, more visually striking content. Content that satisfies all types of users: scanners, deep readers, listeners, and get-the-answer-and-go-ers.

In some ways, you can satisfy all types of users more effectively than ever. I think you have to hit three of the four Es of content creation. Make it resonate, be consistent, and understand your audience. Whatever you create stands a chance.

But that doesn’t mean creating great stuff is any easier. If you work for a traditional publisher, the chances are you’ve brought a spoon to a gun fight. The war for attention is being fought on all fronts, and straight words are losing.

Fortunately, not every subscription model relies on the quality of the prose. It might be that you have unique data, granular insights into a specific market, or are just a bloody good laugh.

Subscriptions come in all shapes and sizes.

Ultimately, it comes down to your market, marketing, positioning and your USP. You have to know and speak to your audience and you have to stand out. As Barry would say, if you’re forgettable, you’re doomed.

How Do We Know If People Will Pay?

When it comes to paying for news, some markets are far more advanced than others. The Scandinavian market is light-years ahead of almost everyone else when it comes to paying for news. You have to do your research to understand:

  • How many people currently pay for news?
  • What demographic of person pays?
  • How saturated is the market already?
  • What is your niche?
Where your audience are matters a lot (Image Credit: Harry Clarkson-Bennett)

While it doesn’t align perfectly, it’s not surprising that those most likely to pay for news have higher income levels. Higher disposable income tends to create an environment where people buy more stuff.

Shocking, I know.

But that doesn’t tell the whole story. Norwegian news outlets have (apparently) a long history of trust with their audience and have never had access to free multi-day newspapers. Ditto other Scandinavian countries. In an age of rubbish and spin, trust and E-E-A-T are more important than ever.

And while the UK sits in a pretty shocking-looking position, almost 24 million of us pay for a BBC license fee. That is, in essence, paying for news. Insert joke about BBC bias and woke cultural agendas here.

Cultural and societal factors really matter. As does your understanding of the market.

It’s important to note that, according to Richard Reeves (AOP director), subscriptions have overtaken display advertising as the core source of digital revenue.

“Most heartening is what this represents as the wider information ecosystem fractures: audiences recognise the value of professional journalism and are willing to pay for it.”

In an era of slop, paying for something good is not a bad thing.

Macro And Micro Factors Are Influential

You can only control what you can control. But you shouldn’t dismiss the wider climate.

In the UK and arguably globally, there is a cost-of-living crisis. Globally, there have been a number of very significant geopolitical issues that affect the wider economy. Money doesn’t go as far as it once did, and most subscriptions are a luxury purchase.

Is a £20 or £30 monthly subscription more valuable than a £10 Netflix one? Or Spotify? These are questions you need to ask. Why would someone subscribe and stick around?

How far your money goes has been declining for some time… (Image Credit: Harry Clarkson-Bennett)

And we aren’t just competing with other publishers. While screen time and content consumption are at an all-time high, video consumption and the creator economy are booming.

It is quite literally a nearly half-a-trillion-dollar market.

Neither is a strength for traditional publishers. While there have been some very good success stories in recent times (see Wired turning their journalists into individual subscription machines), legacy publishers need to adapt.

So your pricing strategy, customer service, and overall experience are hugely important. You are almost certainly going to be a nice-to-have. So make sure your customer journey and path to conversion are premium, and your audience feel listened to.

The standard customer experience (Image Credit: Harry Clarkson-Bennett)

You need to speak to your audience. You don’t have to go into this blind. Forging real connections with people is not impossible and making them feel listened to will go a long way.

You can try to figure out what they really value, how much they’re willing to spend and what’s stopping them.

Should I Paywall Everything?

No. Content is designed to do different things, and not everything is a premium product. Whatever journalists will tell you. If you shut down your site entirely, you become too closed off an ecosystem in my opinion.

  • Commercial Content: If you have affiliate-led content, paywalling is a questionable decision. It may not be wrong per se, but think about whether the pros outweigh the cons. Typically, it’s a good gateway drug for the rest of your content. And makes some money.
  • Content You Can Get Elsewhere: Evergreen content of a comparable quality to what already exists in the wider corpus is not a profitable opportunity. I’d argue that leaving this free-to-air has more pros than cons. You can always unpaywall the 100 best albums of all time, but gate the richer, individual album reviews.
  • Lower-Quality Platforms: A user that comes from a platform like Discover is far less likely to convert than someone who comes from organic search. So think about the role each platform plays in your content access ecosystem.
  • Paywall Vs. Newsletter signup: It is far easier to convert people to a paying subscriber from a newsletter database than from an on-page paywall. And the user journey is far less interrupted. Building an owned channel is never a bad thing, so think about how engaged users are and whether an email would be a more effective starting point.

The Type Of Paywall Matters (Now More Than Ever)

LLMs do not respect paywalls. As it turns out, neither does Google.

I, for one, am stunned.

As of just a few months ago, the search giant asked that publishers with paywalls change the way they block content to help Google out. The lighter touch paywall solution (a JavaScript-based one) includes the full content in the server response.

“…Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed.

This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”

According to Google, they are struggling to determine the difference. So the problem is on us, not them. They (and I strongly suspect other LLMs) are ingesting this content and training their models on us whether we like it or not.

For those of you who haven’t heard of Common Crawl, it stores a corpus of open web data accessible to “researchers.” By researchers, we now mean tech bros who don’t want to pay for, surprisingly, anything.

According to their CEO:

“If you didn’t want your content on the internet, you shouldn’t have put your content on the internet.”

It doesn’t stop there either. Even if you block all non-whitelisted bots from accessing your site at a CDN level, you may have syndication partnerships in place. If so, it’s likely your content is making it out into the wider world.

The internet is not exactly a leakproof vessel. If you’re setting one up now, try to implement a server-side option.
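The difference between the two approaches can be sketched in a few lines. This is a minimal illustration, not a real paywall product: the article store, token check, and `render_article` function are all hypothetical stand-ins for whatever your CMS and subscription system actually provide.

```python
# Minimal sketch of server-side gating: the full article body never
# leaves the server unless the subscription check passes. A JavaScript
# paywall, by contrast, ships the full body and merely hides it.

ARTICLES = {
    "best-albums": {
        "teaser": "Our pick of the 100 best albums...",
        "body": "Full review text that only subscribers should receive.",
    }
}

def is_subscriber(session_token: str) -> bool:
    # Stand-in for a real session/entitlement lookup.
    return session_token == "valid-subscriber-token"

def render_article(slug: str, session_token: str) -> str:
    article = ARTICLES[slug]
    if is_subscriber(session_token):
        return article["teaser"] + "\n" + article["body"]
    # Crucially, the gated body is omitted from the response entirely,
    # rather than being sent to the browser and hidden with JavaScript.
    return article["teaser"] + "\n[Subscribe to keep reading]"
```

With this shape, a crawler or model scraping the anonymous response simply never sees the gated text, whereas the JavaScript approach Google describes leaves it sitting in the HTML for anyone to ingest.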

What Is The Right Paywall For Me?

I have written about the types of paywall available to you and the pros and cons of each. Generally, I think a metered or dynamic paywall is the best option for most publishers. At the very least, a freemium model. Something that gives people enough to draw them in.

And you can’t exactly draw them in if you just hard paywall everything.

You have to think of this as a full-blown marketing strategy. You need to know where people come from. How much of your content they have consumed. Whether it’s better to show them a newsletter signup as opposed to a paywall.

It is absolutely worth knowing that over time, a strong email database will convert far more effectively than a hard paywall.

So encouraging free signups and taking a longer-term view to conversions (you’ll need a good customer journey here) may be far more effective.

How Can I Set One Up?

There are a number of paywall management options out there for publishers. Leaky Paywall, Zephr, Piano. There are plenty.

The best ones integrate with your existing tech stacks, have excellent personalization and customization options, deploy ad-blocking strategies, and run flexible gating strategies.

Larger publishers tend to go with enterprise-level options with deep analytics and CRM integrations. Smaller publishers can work with lighter touch, cheaper operators. You really just need to scope out what will work best for you.

Particularly when it comes to monthly costs and revenue share options.

How Can I Map The Impact?

You’ll need to establish a few key things:

  • The average drop in traffic you expect to see.
  • The subsequent loss of existing revenue (probably ad-related, but there may be some knock-on wider commercial impact).
  • The average value of a subscription (and the expected conversion rate).
  • Your customer LTV.
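Those four inputs can be combined into a rough back-of-the-envelope model. Every figure below is an illustrative assumption plugged in for the sake of the arithmetic, not a benchmark.

```python
# Back-of-the-envelope model of a paywall's net revenue impact.
# All figures are illustrative assumptions, not benchmarks.

monthly_sessions = 500_000
expected_traffic_drop = 0.40        # average drop in traffic
ad_rpm = 5.0                        # ad revenue per 1,000 sessions (£)
paywall_conversion_rate = 0.005     # share of remaining visitors who subscribe
monthly_sub_price = 10.0            # £ per month
expected_ltv_months = 14            # average subscriber lifetime in months

lost_sessions = monthly_sessions * expected_traffic_drop
lost_ad_revenue = lost_sessions / 1_000 * ad_rpm

remaining_sessions = monthly_sessions - lost_sessions
new_subscribers = remaining_sessions * paywall_conversion_rate
subscriber_ltv = monthly_sub_price * expected_ltv_months
subscription_revenue = new_subscribers * subscriber_ltv

print(f"Lost ad revenue per month: £{lost_ad_revenue:,.0f}")
print(f"Subscriber LTV added per month: £{subscription_revenue:,.0f}")
```

Even a crude model like this makes the trade explicit: a large, low-value traffic loss against a small, high-value subscriber gain, which is exactly why LTV, not sessions, should anchor the decision.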

Focusing on customer LTV shifts marketing from chasing traffic to building profitable, loyal audience relationships. It forces businesses to understand that not all audiences or subscriptions are created equal.

You generate more subs through paid media because the net is larger. But lots slip through the net. So you need a quality product (in both a product and marketing sense) alongside UX and customer service that reduces friction.

Search and owned channels are smaller, but far more likely to pay because they have taken an action to find you. In some cases, they actually want you in their inbox. The quality is higher, but the overall returns are lower.

So you just can’t treat everybody the same.

Closing Thoughts

Subscriber revenue is so valuable because it’s predictable. Subscription business models have boomed for that very reason. A pound of subscriber revenue is far more valuable than almost anything else, and it should be the focus of your business.

But that doesn’t mean you put all your eggs in one basket. You can have multiple subscription types on your website, and that can help you become habitual with all types of users. But you need to add value to their lives every day.

Puzzles, recipes, short and long-form videos, et al.

Businesses make money in many ways. A diverse business is resilient. Resilient to macro and micro factors that will decimate some publishers over the next few years. So talk to your audience, trial new ways of adding value, and commit when one works. Become habitual.

And, shock horror, people want to belong to something. So while the digital experience is crucial, making an effort to connect with people IRL matters.


This post was originally published on Leadership in SEO.


Featured Image: beast01/Shutterstock

Ask An SEO: Digital PR Or Traditional Link Building, Which Is Better? via @sejournal, @rollerblader

This week’s ask an SEO question is:

“Should SEOs be focusing more on digital PR than traditional link building?”

Digital PR is synonymous with link building at this point, as SEOs needed a new way to package and resell the same service. Actual PR work will always be more valuable than link building because PR, whether digital or traditional, focuses on a core audience of customers and on reaching specific demographics. This adds value to a business and drives revenue.

With that said, here’s how I’d define digital PR vs. link building if a client asked what the difference is.

  • Digital PR: Getting brand coverage and citations in media outlets, niche publications, trade journals, niche blogs, and websites that do not allow guest posting, paid links, or unvetted contributors with the goal of building brand awareness and driving traffic from the content.
  • Link Building: Getting links from websites as a way to try and increase SERP rankings. Traffic from the links, sales from the links, etc., are not being tracked, and the quality of the website can be questionable.

Digital PR is always going to be better than link building because you’re treating the technique as a business, not a scheme to try and game the rankings. Link building became a bad practice years ago as links became less relevant. Links are still important – I want to ensure that isn’t taken out of context – but we stopped doing dedicated link building completely. Quality content attracts links naturally, including media mentions. When this happens in a natural way, the website will begin rising, as the site has a lot of value for users, and search engines can tell when a site is quality.

If you’re building links without evaluating the impact they have traffic and sales-wise, you’re likely setting your site up for failure. Getting a ton of links, just like creating content en masse with AI/LLMs or article spinners, can grow a site quickly. That URL/domain can then burn to the ground equally as fast.

That’s why when we purchase a link, an advertorial, or we’re doing a partnership, we always ask ourselves the following questions:

  • Is there an active audience on this website that is also coming back to the website via branded search for information?
  • Is the audience on this website part of our customer base?
  • Will the article we’re pitching or being featured in be helpful to the user, and is our product or service something that is part of the post naturally vs. being forced?
  • Are we ok with the link being nofollow or sponsored if we’re paying for the inclusion?

If the answer is yes to these four, then we’re good to go with the link. The active audience on the website and people returning by brand name means there is an audience that trusts them for information. If the readership, visitors, or customers are similar or the same demographics as our user base, then it makes sense we’d want to be in front of them where they go for information.

We may have knowledge that is helpful to the user, but if it is not on topic within the post, there is no reason for them to come through and use our services, buy our products, or subscribe to our newsletters. Instead, we’ll wait until there is a fit, so there is a direct “link” between the content we’re contributing, or being an expert on, and our website.

For the last question, our goal is always traffic and customer acquisition, not getting a link. The website owner controls this, and if they want to follow Google’s best practices (which we obviously recommend doing), we will still be happy if they mark it as sponsored or nofollow. This is the most important of the questions. Building links to game the SERPs is a bad idea; building a brand that people search for by name will overpower any link any day of the week. This is always our goal when it comes to Digital PR and link building. Driving that branded search.

So, that begs the question, where do we go for digital PR?

Sources To Get Digital PR Mentions And Links

When we’re about to start a Digital PR campaign, we create lists of the following targets to reach out to.

  • Mass Media: Household names like magazines, news websites, and local media, where everyone in the area, the customers, or the country or world knows them by name. The only stipulation we apply is if they have an active category vs. only a few articles here and there. The active category means it is something interesting enough to their reader base that they’re investing in it, so our customers may be there.
  • Trade Publications: Conferences, associations, and non-profits, as well as industry insiders will have websites and print publications that go out to members. Search Engine Journal could be considered a trade publication for the SEO and PPC industry, same with SEO Roundtable, and some of the communities like Webmaster World. They publish directly relevant content for search engine marketers and have active users, so if I was an SEO service provider or tool, this is where I’d be looking to get featured and ideally links from.
  • Niche Sites and Bloggers: There is no shortage of niche sites and content producers out there. The trick is finding ones that do not publicly allow guest contributions, advertorials, etc., and that do not link out to non-niche websites and content. This includes sites that got hacked and had link injections. Even if their “authority” is zero, there is value if they apply quality control and all links and mentions are earned.
  • Influencers: Whether it is YouTube, Facebook group leaders, LinkedIn that is crawlable, or other channels, getting coverage from people with subscribers and an active audience can let search engines crawl the link back to your website. It may not boost your rankings, but it drives customers to you and helps with page discoverability if the link gets crawled. LLMs are also citing their content as sources, so there could be value for AIO, too.

Link building is not dead by any means; links still matter. You just don’t need to build them anymore. Focus on quality where an active audience is and where you have a chance at getting traffic and revenue. This is what will move the needle for the long run and help you grow in SERPs that matter.


Featured Image: Paulo Bobita/Search Engine Journal

Why Is Organic Traffic Down? Here’s How To Segment The Data via @sejournal, @torylynne

As an SEO, there are few things that stoke panic like seeing a considerable decline in organic traffic. People are going to expect answers if they don’t already.

Getting to those answers isn’t always straightforward or simple, because SEO is neither of those things.

The success of an SEO investigation hinges on the ability to dig into the data, identify where exactly the performance decline is happening, and connect the dots to why it’s happening.

It’s a little bit like an actual investigation: Before you can catch the culprit or understand the motive, you have to gather evidence. In an SEO investigation, that’s a matter of segmenting data.

In this article, I’ll share some different ways to slice and dice performance data for valuable evidence that can help further your investigation.

Using Data To Confirm There’s An SEO Issue

Just because organic traffic is down doesn’t inherently mean that it’s an SEO problem.

So, before we dissect data to narrow down problem areas, the first thing we need to do is determine whether there’s actually an SEO issue at play.

After all, it could be something else altogether. In which case, we’re wasting resources chasing a problem that doesn’t exist.

Is This A Tracking Issue?

In many cases, what looks like a big traffic drop is just an issue with tracking on the site.

To determine whether tracking is functioning correctly, there are a couple of things we need to look for in the data.

The first is consistent drops across channels.

Zoom out of organic search and see what’s happening in other sources and channels.

If you’re seeing meaningful drops across email, paid, etc., that are consistent with organic search, then it’s more than likely that tracking isn’t working correctly.

The other thing we’re looking for here is inconsistencies between internal data and Google Search Console.

Of course, there’s always a bit of inconsistency between first-party data and GSC-reported organic traffic. But if those differences are significantly more pronounced for the time period in question, that hints at a tracking problem.

Is This A Brand Issue?

Organic search traffic from Google falls into two primary camps:

  • Brand traffic: Traffic driven by user queries that include the brand name.
  • Non-brand traffic: Traffic driven by brand-agnostic user queries.

Non-brand traffic is directly affected by SEO work, whereas brand traffic is mostly impacted by the work that happens in other channels.

When a user includes the brand in their search, they’re already brand-aware. They’re a return user or they’ve encountered the brand through marketing efforts in channels like PR, paid social, etc.

When marketing efforts in other channels are scaled back, the brand reaches fewer users. Since fewer people see the brand, fewer people search for it.

Or, if customers sour on the brand, there are fewer people using search to come back to the site.

Either way, it’s not an SEO problem. But in order to confirm that, we need to filter the data down.

Go to Performance in Google Search Console and exclude any queries that include your brand. Then compare the data against a previous period – usually YoY if you need to account for seasonality. Do the same for queries that do include the brand name.

If non-brand traffic has stayed consistent, while brand traffic has dropped, then this is a brand issue.

filtering queries using regex in Google Search Console
Screenshot from Google Search Console, November 2025

Tip: Account for users misspelling your brand name by filtering queries using fragments. For example, at Gray Dot Co, we get a lot of brand searches for things like “Gray Company” and “Grey Dot Company.” By using the simple regex expression “gray|grey” I can catch brand search activity that would otherwise fall through the cracks. 
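The same split can be applied to query data exported from Search Console. The sketch below uses Python's `re` module on a made-up query table; the brand pattern mirrors the "gray|grey" example above, and the queries and click counts are invented for illustration.

```python
import re

# Hypothetical export of GSC queries -> clicks. The brand pattern
# catches common spelling variants, mirroring the regex tip above.
brand_pattern = re.compile(r"gray|grey", re.IGNORECASE)

queries = {
    "gray dot co services": 120,
    "Grey Dot Company": 45,
    "how to segment seo data": 300,
    "organic traffic down": 210,
}

brand_clicks = sum(c for q, c in queries.items() if brand_pattern.search(q))
non_brand_clicks = sum(c for q, c in queries.items() if not brand_pattern.search(q))
```

Run the same split for the current period and the comparison period: if `non_brand_clicks` holds steady while `brand_clicks` falls, the decline is a brand problem rather than an SEO one.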

Is It Seasonal Demand?

The most obvious example of seasonal demand is holiday shopping on ecommerce sites.

Think about something like jewelry. Most people don’t buy jewelry every day; they buy it for special occasions. We can confirm that seasonality by looking at Google Trends.

Zooming out to the past five years of interest in “jewelry,” it clearly peaks in November and December.

Google Trends graph for interest in jewelry over the past five years
Screenshot from Google Trends, November 2025

As a site that sells jewelry, of course, traffic in Q1 is going to be down from Q4.

I use a pretty extreme example here to make my point, but in reality, seasonality is widespread and often more subtle. It impacts businesses where you might not expect much seasonality at all.

The best way to understand its impact is to look at organic search data year-over-year. Do the peaks and valleys follow the same patterns?

If so, then we need to compare data YoY to get a true sense of whether there’s a potential SEO problem.

Is It Industry Demand?

SEOs need to keep tabs on not just what’s happening internally, but also what’s going on externally. A big piece of that is checking the pulse of organic demand for the topics and products that are central to the brand.

Products fall out of vogue, technologies become obsolete, and consumer behavior changes – that’s just the reality of business. When there are fewer potential customers in the landscape, there are fewer clicks to win, and fewer sessions to drive.

Take cameras, for instance. As the cameras on our phones got more sophisticated, digital cameras became less popular. And as they became less popular, searches for cameras dwindled.

Now, they’re making a comeback with younger generations. More people searching, more traffic to win.

npr article headline why gen z loves the digital compact cameras that millennials used to covet
Screenshot from npr.com, November 2025

You can see all of this at play in the search landscape by turning to Google Trends: the downtrend in interest caused by advances in technology, and the uptrend driven by shifts in societal trends.

Google Trends graph showing search interest in cameras since 2004
Screenshot from Google Trends, November 2025

When there are drops in industry, product, or topic demand within the landscape, we need to ask ourselves whether the brand’s organic traffic loss is proportional to the overall loss in demand.

Is Paid Search Cannibalizing Organic Search?

Even if a URL on the site ranks well in organic results, ads still sit higher on the SERP. So, if a site is running an ad for the same query it already ranks for, the ad will naturally capture clicks that would otherwise go to the organic result.

When businesses give their PPC budgets a boost, there’s potential for this to happen across multiple key SERPs.

Let’s say a site drives a significant chunk of its organic traffic from four or five product landing pages. If the brand introduces ads to those SERPs, clicks that used to go to the organic result start going to the ad.

That can have a significant impact on organic traffic numbers. But search users are still getting to the same URLs using the same queries.

To confirm, pull sessions by landing pages from both sources. Then, compare the data from before the paid search changes to the period following the change.

If major landing pages consistently show a positive paid delta that cancels out the negative delta in organic search, you’re not losing that traffic; you’re just acquiring it through a different channel.
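That comparison can be sketched in a few lines. The page paths and session counts below are hypothetical stand-ins for a GA4 export:

```python
# Hypothetical sessions by landing page, exported from GA4 for the
# periods before and after the paid search changes.
organic_before = {"/product-a": 1200, "/product-b": 900}
organic_after = {"/product-a": 700, "/product-b": 650}
paid_before = {"/product-a": 100, "/product-b": 50}
paid_after = {"/product-a": 580, "/product-b": 310}

def channel_shift(page):
    """Return (organic delta, paid delta, net change) for one landing page."""
    organic_delta = organic_after.get(page, 0) - organic_before.get(page, 0)
    paid_delta = paid_after.get(page, 0) - paid_before.get(page, 0)
    return organic_delta, paid_delta, organic_delta + paid_delta

for page in organic_before:
    organic_delta, paid_delta, net = channel_shift(page)
    # A net change near zero suggests cannibalization rather than lost traffic.
    print(f"{page}: organic {organic_delta:+}, paid {paid_delta:+}, net {net:+}")
```

A page with a large negative net, by contrast, points to a genuine traffic loss rather than a channel shift.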

YoY comparison of sessions by landing page for paid search and organic search in GA4
Screenshot from Google Analytics, November 2025

Segmenting Data To Find SEO Issues

Once we have confirmation that the organic traffic declines point to an SEO issue, we can start zooming in.

Segmenting data in different ways helps pinpoint problem areas and find patterns. Only then can we trace those issues to the cause and craft a strategy for recovery.

URL

Most SEOs are going to filter their organic traffic down by URL. It lets us see which pages are struggling and analyze those pages for potential improvements.

It also helps find patterns across pages that make it easier to isolate the cause of more widespread issues. For example, if the site is losing traffic across its product listing pages, it could signal that there’s a problem with the template for that page.

But segmenting by URL also helps us answer a very important question when we pair it with conversion data.

Do We Really Care About This Traffic?

Clicks only matter if they drive business-valuable user interactions like conversions or ad views. For some sites, like online publications, traffic is valuable in and of itself because users coming to the site are going to see ads. The site still makes money.

But for brands looking to drive conversions, it could just be empty traffic if it’s not helping drive that primary key performance indicator (KPI).

A top-of-funnel blog post might drive a lot of traffic because it ranks for very high-volume keywords. If that same blog post is a top traffic-driving organic landing page, a slip in rankings means a considerable organic traffic drop.

But the users entering those high-volume keywords might not be very qualified potential customers.

Looking at conversions by landing page can help brands understand whether the traffic loss is ultimately hurting the bottom line.

The best way to understand is to turn to attribution.

First-touch attribution quantifies an organic landing page’s value in terms of the conversions it helps drive down the line. For most businesses, someone isn’t likely to convert the first time they visit the site. They usually come back and purchase.

Last-touch attribution, by contrast, shows the organic landing pages that people come to when they’re ready to make a purchase. Both are valuable!

Query

Filtering performance by query can help understand which terms or topic areas to focus improvements on. That’s not new news.

Sometimes, it’s as easy as doing a period-over-period comparison in GSC, ordering by clicks lost, and looking for obvious patterns, i.e., are the queries with the most decline just subtle variants of one another?

If there aren’t obvious patterns and the queries in decline are more widespread, that’s where topic clustering can come into the mix.

Topic Clustering With AI

Using AI for topic clustering helps quickly identify any potential relationships between queries that are seeing performance dips.

Go to GSC and filter performance by query, looking for any YoY declines in clicks and average position.

YoY comparison in Google Search Console for clicks and average position by query
Screenshot from Google Search Console, November 2025

Then export this list of queries and use your favorite ML script to group the keywords into topic clusters.
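Production pipelines typically embed the queries with a sentence-embedding model and cluster the vectors. As a deliberately simplified stand-in, the pure-Python sketch below greedily groups queries by token overlap (Jaccard similarity); the threshold value is an assumption to tune, and the query list is illustrative:

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b)

def cluster_queries(queries, threshold=0.3):
    """Greedily group queries whose token overlap clears the threshold."""
    clusters = []  # list of (accumulated token set, member queries)
    for query in queries:
        tokens = set(query.lower().split())
        for cluster_tokens, members in clusters:
            if jaccard(tokens, cluster_tokens) >= threshold:
                members.append(query)
                cluster_tokens |= tokens  # grow the cluster's vocabulary
                break
        else:
            clusters.append((tokens, [query]))
    return [members for _, members in clusters]

declining = [
    "online business degree",
    "business degree online",
    "best running shoes",
    "running shoes for women",
]
print(cluster_queries(declining))
```

Even this crude grouping surfaces the kind of shared-topic patterns you’re scanning for before investing in a heavier embedding-based approach.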

The resulting list of semantic groupings can provide an idea of topics where a site’s authority is slipping in search.

In turn, it helps narrow the area of focus for content improvements and other optimizations to potentially build authority for the topics or products in question.

Identifying User Intent

When users search using specific terms, the type of content they’re looking for – and their objective – differs based on the query. These user expectations can be broken out into four different high-level categories:

| User Intent | Objective |
| --- | --- |
| Informational (top of funnel) | Users are looking for answers to questions, explanations, or general knowledge about topics, products, concepts, or events. |
| Commercial (middle of funnel) | Users are interested in comparing products, reading reviews, and gathering information before making a purchase decision. |
| Transactional (bottom of funnel) | Users are looking to perform a specific action, such as making a purchase, signing up for a service, or downloading a file. |
| Navigational | Brand-familiar users are using the search engine as a shortcut to find a specific website or webpage. |

Segmenting by user intent shows us which user objectives the site – or specific pages on it – is failing to meet. It gives us a lens into performance decline, making it easier to identify possible causes from a user experience perspective.

If the majority of queries losing clicks and positions are informational, it could signal shortcomings in the site’s blog content. If the queries are consistently commercial, it might call for an investigation into how the site approaches product detail and/or listing pages.

GSC doesn’t provide user intent in its reporting, so this is where a third-party SEO tool can come into play. If you have position tracking set up and GSC connected, you can use the tool’s rankings report to identify queries in decline and their user intent.

If not, you can still get the data you need by using a mix of GSC and a tool like Ahrefs.

Device

This view of performance data is pretty simple, but it’s equally easy to overlook!

When the large majority of performance declines are attributed to ONLY desktop or mobile, device data helps identify potential tech or UX issues within the mobile or desktop experience.

The important thing to remember is that any declines need to be considered proportionally. Take the metrics for the site below…

YoY comparison in Google Search Console of clicks by device type
Screenshot from Google Search Console, November 2025

At first glance, the data makes it look like there might be an issue with the desktop experience. But we need to look at things in terms of percentages.

Desktop: (1 – 648/1545) × 100 ≈ 58% decline

Mobile: (1 – 149/316) × 100 ≈ 53% decline

While desktop shows a much larger decline in terms of click count, the percentage of decline YoY is fairly similar across both desktop and mobile. So we’re probably not looking for anything device-specific in this scenario.
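The same arithmetic as a small helper, using the click counts from the screenshot above:

```python
def pct_decline(before, after):
    """Percentage decline between two periods: (1 - after/before) * 100."""
    return (1 - after / before) * 100

# Click counts from the GSC device report above.
desktop = pct_decline(1545, 648)
mobile = pct_decline(316, 149)
print(f"Desktop: {desktop:.1f}%  Mobile: {mobile:.1f}%")
```

Running proportional comparisons like this for every segment keeps a big absolute number from masquerading as a device-specific problem.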

Search Appearance

Rich results and SERP features are an opportunity to stand out on the SERP and drive more traffic through enhanced results. Using the search appearance filter in Google Search Console, you can see traffic from different types of rich results and SERP features:

  • Forums.
  • AMP Top Story (AMP page + Article markup).
  • Education Q&A.
  • FAQ.
  • Job Listing.
  • Job Details.
  • Merchant Listing.
  • Product Snippet.
  • Q&A.
  • Review Snippet.
  • Recipe Gallery.
  • Video.

This is the full list of possible features with rich results (courtesy of SchemaApp), though you’ll only see filters for search appearances where the domain is currently positioned.

In most cases, Google is able to generate these types of results because there is structured data on pages. The notable exceptions are Q&A, translated results, and video.

So when there are significant traffic drops coming from a specific type of search appearance, it signals that there’s potentially a problem with the structured data that enables that search feature.
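For illustration, a minimal Product structured data block looks something like the JSON-LD below. The product name, image URL, and values are placeholders; Google’s structured data documentation lists the full set of required and recommended properties, and errors in fields like these are what surface in the reports discussed next:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "image": "https://www.example.com/product.jpg",
  "offers": {
    "@type": "Offer",
    "price": "49.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "112"
  }
}
```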

YoY comparison in Google Search Console for search appearance
Screenshot from Google Search Console, November 2025

You can investigate structured data issues in the Enhancements reports in GSC. The exception is product snippets, which nest under the Shopping menu. Either way, the reports only show up in your left-hand nav if Google is aware of relevant data on the site.

For example, the product snippets report shows why some snippets are invalid, as well as ways to potentially improve valid results.

Product snippets report in Google Search Console
Screenshot from Google Search Console, November 2025

This context is valuable as you begin to investigate the technical causes of traffic drops from specific search features. In this case, it’s clear that Google is able to crawl and utilize product schema on most pages – but there are some opportunities to improve that schema with additional data.

Featured Snippets

When featured snippets first came on the scene, they represented a major change to the SERP structure that resulted in a serious hit to traditional organic results.

Today, AI Overviews are doing the same. In fact, research from Seer shows that CTR has dropped 61% for queries that now include an AI overview (21% of searches). And that impact is outsized for informational queries.

In cases where rankings have remained relatively static, but traffic is dropping, there’s good reason to investigate whether this type of SERP change is a driver of loss.

While Google Search Console doesn’t report on featured snippets, People Also Ask (PAA) questions, or AI Overviews, third-party tools do.

In the third-party tool Semrush, you can use the Domain Overview report to check for featured snippet availability across keywords where the site ranks.

filtering to keyword with available AI overviews in the Semrush Domain Overview report
Screenshot from Semrush, November 2025

Do the keywords where you’re losing traffic have AI overviews? If you’re not cited, it’s time to start thinking about how you’re going to win that placement.

Search Type

Search type is another way to filter GSC data, particularly useful when you’re seeing traffic declines despite healthy, consistent rankings.

After all, web search is just one prong of Google Search. Think about it: How often do you use Google Image search? At least in my case, that’s fairly often.

Filter performance data by each search type (web, image, video, and news) to understand which one(s) are having the biggest impact on performance decline. Then use that insight to start connecting the dots to the cause.

filtering to Google image search performance in Google Search Console
Screenshot from Google Search Console, November 2025

Images are a great example. One simple line in the robots.txt can block Google from crawling a subfolder that hosts multitudes of images. As those images disappear from image search results, any clicks from those results disappear in tandem.

We wouldn’t know to look for this issue until we slice the data accordingly!
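That robots.txt scenario can be checked programmatically with the standard library. The disallow rule and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt with one line that blocks an image subfolder.
robots_txt = """\
User-agent: *
Disallow: /assets/images/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot-Image falls under the wildcard rule here, so these images
# can drop out of image search results along with their clicks.
blocked = not rp.can_fetch("Googlebot-Image", "https://www.example.com/assets/images/photo.jpg")
print(blocked)  # → True
```

Looping a sitemap’s image URLs through a check like this is a fast way to confirm or rule out a crawl block behind an image search decline.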

Geography

If the business operates physically in specific cities and states, then it likely already has geo-specific performance tracking set up through a tool.

But online-only businesses shouldn’t dismiss geographic data – even at the city/state level! Declines are still a trigger to check geo-specific performance.

Country

Just because the brand only sells and operates in one country doesn’t mean that’s where all the domain’s traffic is coming from. Drilling down by country in GSC allows you to see whether declines are coming from the country the brand is focused on or, potentially, another country altogether.

performance by country in Google Search Console
Screenshot from Google Search Console, November 2025

If it’s another country, it’s time to decide whether that matters. If the site is a publisher, it probably cares more about that traffic than an ecommerce brand that’s more focused on purchases in its country of operation.

Localization

When tools report rankings at the country level, ranking shifts in specific markets fly under the radar. It certainly happens, and major markets can have a major traffic impact!

Tools like BrightLocal, Whitespark, and Semrush let you analyze SERP rankings one level deeper than GSC, providing data down to the city.

You can check for ranking discrepancies across cities by sampling a handful of the keywords with the greatest declines in clicks.

If I’m an SEO at the University of Phoenix, which is an online university, I’m probably pretty excited about ranking #1 in the United States for “online business degree.”

top five serp results for online business degree in the United States
Screenshot from Semrush, November 2025

But if I drill down further, I might be a little distraught to find that the domain isn’t in the top five SERP results for users in Denver, CO…

top five serp results for online business degree in Denver, Colorado
Screenshot from Semrush, November 2025

…or Raleigh, North Carolina.

top five serp results for online business degree in Raleigh, North Carolina
Screenshot from Semrush, November 2025

Catch Issues Faster By Leveraging AI For Data Analysis

Data segmentation is an important piece of any traffic drop investigation, because humans can see patterns in data that bots don’t.

However, the opposite is true too. With anomaly detection tooling, you get the best of both worlds.

When combined with monitoring and alert notifications, anomaly detection makes it possible to find and fix issues faster. Plus, it enables you to find data patterns in any after-the-impact investigations.

All of this helps ensure that your analysis is comprehensive, and might even point out gaps for further investigation.
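As a simplified illustration of the idea (dedicated tooling does far more), a trailing z-score check can flag sudden drops in a daily sessions series. The numbers, window size, and threshold here are all illustrative assumptions:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=7, threshold=3.0):
    """Flag indices where a value deviates from the trailing window
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical daily organic sessions with a sudden drop on the last day.
sessions = [980, 1010, 995, 1005, 990, 1002, 998, 1001, 400]
print(detect_anomalies(sessions))
```

Wire a check like this to your analytics export and an alert channel, and the drop surfaces the day it happens instead of in next quarter’s report.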

This Colab tool from Sam Torres can help get your site set up!

Congrats, You’re Close To Closing This Case

As Sherlock Holmes would say about an investigation, “It is a capital mistake to theorize before one has data.” With the right data in hand, the culprits start to reveal themselves.

Data segmentation empowers SEOs to uncover leads that point to possible causes. By narrowing it down based on the evidence, we ensure more accuracy, less work, faster answers, and quicker recovery.

And while leadership might not love a traffic drop, they’re sure to love that.

Featured Image: Vanz Studio/Shutterstock

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, but at times it can also be quite difficult. 

We collectively pitch dozens of ideas, and the editors meticulously review and debate the merits of each. We agonize over which ones might make the broadest impact, whether one is too similar to something we’ve featured in the past, and how confident we are that a recent advance will actually translate into long-term success. There is plenty of lively discussion along the way.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, I wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. 

These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. 

Male contraceptives 

There are several new treatments in the pipeline for men who are sexually active and wish to prevent pregnancy—potentially providing them with an alternative to condoms or vasectomies. 

Two of those treatments are now being tested in clinical trials by a company called Contraline. One is a gel that men would rub on their shoulder or upper arm once a day to suppress sperm production, and the other is a device designed to block sperm during ejaculation. (Kevin Eisenfrats, Contraline’s CEO, was recently named to our Innovators Under 35 list). A once-a-day pill is also in early-stage trials with the firm YourChoice Therapeutics. 

Though it’s exciting to see this progress, it will still take several years for any of these treatments to make their way through clinical trials—assuming all goes well.

World models 

World models have become the hot new thing in AI in recent months. Though they’re difficult to define, these models are generally trained on videos or spatial data and aim to produce 3D virtual worlds from simple prompts. They reflect fundamental principles, like gravity, that govern our actual world. The results could be used in game design or to make robots more capable by helping them understand their physical surroundings. 

Despite some disagreements on exactly what constitutes a world model, the idea is certainly gaining momentum. Renowned AI researchers including Yann LeCun and Fei-Fei Li have launched companies to develop them, and Li’s startup World Labs released its first version last month. And Google made a huge splash with the release of its Genie 3 world model earlier this year. 

Though these models are shaping up to be an exciting new frontier for AI in the year ahead, it seemed premature to deem them a breakthrough. But definitely watch this space. 

Proof of personhood 

Thanks to AI, it’s getting harder to know who and what is real online. It’s now possible to make hyperrealistic digital avatars of yourself or someone you know based on very little training data, using equipment many people have at home. And AI agents are being set loose across the internet to take action on people’s behalf. 

All of this is creating more interest in what are known as personhood credentials, which could offer a way to verify that you are, in fact, a real human when you do something important online. 

For example, we’ve reported on efforts by OpenAI, Microsoft, Harvard, and MIT to create a digital token that would serve this purpose. To get it, you’d first go to a government office or other organization and show identification. Then it’d be installed on your device and whenever you wanted to, say, log into your bank account, cryptographic protocols would verify that the token was authentic—confirming that you are the person you claim to be. 

Whether or not this particular approach catches on, many of us in the newsroom agree that the future internet will need something along these lines. Right now, though, many competing identity verification projects are in various stages of development. One is World ID by Sam Altman’s startup Tools for Humanity, which uses a twist on biometrics. 

If these efforts reach critical mass—or if one emerges as the clear winner, perhaps by becoming a universal standard or being integrated into a major platform—we’ll know it’s time to revisit the idea.  

The world’s oldest baby

In July, senior reporter Jessica Hamzelou broke the news of a record-setting baby. The infant developed from an embryo that had been sitting in storage for more than 30 years, earning him the bizarre honorific of “oldest baby.” 

This odd new record was made possible in part by advances in IVF, including safer methods of thawing frozen embryos. But perhaps the greater enabler has been the rise of “embryo adoption” agencies that pair donors with hopeful parents. People who work with these agencies are sometimes more willing to make use of decades-old embryos. 

This practice could help find a home for some of the millions of leftover embryos that remain frozen in storage banks today. But since this recent achievement was brought about by changing norms as much as by any sudden technological improvements, this record didn’t quite meet our definition of a breakthrough—though it’s impressive nonetheless.

The Download: four (still) big breakthroughs, and how our bodies fare in extreme heat

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, with plenty of lively discussion along the way, but at times it can also be quite difficult.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, we wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. Read the full story to learn what they are

MIT Technology Review Narrated: The quest to find out how our bodies react to extreme temperatures 

Scientists hope to prevent deaths from climate change, but heat and cold are more complicated than we thought. Researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. 

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A CDC panel voted to recommend delaying the hepatitis B vaccine for babies
Overturning a 30-year policy that has contributed to a huge decline in the virus. (STAT)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

2 Critical climate risks are growing across the Arab region 
Drought is the most immediate problem countries are having to grapple with. (Ars Technica)
+ Why Tehran is running out of water. (Wired $)

3 Netflix is buying Warner Bros for $83 billion 
If approved, it’ll be one of the most significant mergers in Hollywood history. (NBC)
+ Trump says the deal “could be a problem” due to Netflix’s already huge market share. (BBC)

4 The EU is fining X $140 million 
For failing to comply with its new Digital Services Act. (NPR)
+ Elon Musk is now calling for the entire EU to be abolished. (CNBC)
+ X also hit back by deleting the European Commission’s account. (Engadget)

5 AI slop is ruining Reddit
Moderators are getting tired of fighting the rising tide of nonsense. (Wired $)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 Scientists have deeply mixed feelings about AI tools
They can boost researchers’ productivity, but some worry about the consequences of relying on them. (Nature $)
+ ‘AI slop’ is undermining trust in papers presented at computer science gatherings. (The Guardian)
+ Meet the researcher hosting a scientific conference by and for AI. (MIT Technology Review)

7 Australia is about to ban under 16s from social media
It’s due to come into effect in two days—but teens are already trying to maneuver around it. (New Scientist $)

8 AI is enshittifying the way we write 🖊🤖
And most people haven’t even noticed. (NYT $)
AI can make you more creative—but it has limits. (MIT Technology Review)

9 Tech founders are taking etiquette lessons
The goal is to make them better at pretending to be normal. (WP $)

10 Are we getting stupider? 
It might feel that way sometimes, but there’s little solid evidence to support it. (New Yorker $)

Quote of the day

“It’s hard to be Jensen day to day. It’s almost nightmarish. He’s constantly paranoid about competition. He’s constantly paranoid about people taking Nvidia down.” 

—Stephen Witt, author of ‘The Thinking Machine’, a book about Nvidia’s rise, tells the Financial Times what it’s like to be its founder and chief executive, Jensen Huang.

One more thing

fleet of ships at sea

COURTESY OF OCEANBIRD

How wind tech could help decarbonize cargo shipping

Inhabitants of the Marshall Islands—a chain of coral atolls in the center of the Pacific Ocean—rely on sea transportation for almost everything. For millennia they sailed largely in canoes, but much of their seafaring movement today involves big, bulky, diesel-fueled cargo ships that are heavy polluters.

They’re not alone. Cargo shipping is responsible for about 3% of the world’s annual greenhouse-­gas emissions, and that figure is currently on track to rise to 10% by 2050.

The islands have been disproportionately experiencing the consequences of human-made climate change: warming waters, more frequent extreme weather, and rising sea levels. Now their residents are exploring a surprisingly traditional method of decarbonizing their fleets. Read the full story.

—Sofia Quaglia

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Small daily habits can help build a life you enjoy.  
+ Using an air fryer to make an epic grilled cheese sandwich? OK, I’m listening
+ I’m sorry but AI does NOT get to ruin em dashes for the rest of us. 
+ Daniel Clarke’s art is full of life and color. Check it out!

The State of AI: A vision of the world in 2030

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. You can read the rest of the series here.

In this final edition, MIT Technology Review’s senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about where AI will go next, and what our world will look like in the next five years.

(As part of this series, join MIT Technology Review’s editor in chief, Mat Honan, and editor at large, David Rotman, for an exclusive conversation with Financial Times columnist Richard Waters on how AI is reshaping the global economy. Live on Tuesday, December 9 at 1:00 p.m. ET. This is a subscriber-only event and you can sign up here.)

state of AI

Will Douglas Heaven writes: 

Every time I’m asked what’s coming next, I get a Luke Haines song stuck in my head: “Please don’t ask me about the future / I am not a fortune teller.” But here goes. What will things be like in 2030? My answer: same but different. 

There are huge gulfs of opinion when it comes to predicting the near-future impacts of generative AI. In one camp we have the AI Futures Project, a small donation-funded research outfit led by former OpenAI researcher Daniel Kokotajlo. The nonprofit made a big splash back in April with AI 2027, a speculative account of what the world will look like two years from now. 

The story follows the runaway advances of an AI firm called OpenBrain (any similarities are coincidental, etc.) all the way to a choose-your-own-adventure-style boom or doom ending. Kokotajlo and his coauthors make no bones about their expectation that in the next decade the impact of AI will exceed that of the Industrial Revolution—a 150-year period of economic and social upheaval so great that we still live in the world it wrought.

At the other end of the scale we have team Normal Technology: Arvind Narayanan and Sayash Kapoor, a pair of Princeton University researchers and coauthors of the book AI Snake Oil, who push back not only on most of AI 2027’s predictions but, more important, on its foundational worldview. That’s not how technology works, they argue.

Advances at the cutting edge may come thick and fast, but change across the wider economy, and society as a whole, moves at human speed. Widespread adoption of new technologies can be slow; acceptance slower. AI will be no different. 

What should we make of these extremes? ChatGPT came out three years ago last month, but it’s still not clear just how good the latest versions of this tech are at replacing lawyers or software developers or (gulp) journalists. And new updates no longer bring the step changes in capability that they once did. 

And yet this radical technology is so new it would be foolish to write it off so soon. Just think: Nobody even knows exactly how this technology works—let alone what it’s really for. 

As the rate of advance in the core technology slows down, applications of that tech will become the main differentiator between AI firms. (Witness the new browser wars and the chatbot pick-and-mix already on the market.) At the same time, high-end models are becoming cheaper to run and more accessible. Expect this to be where most of the action is: New ways to use existing models will keep them fresh and distract people waiting in line for what comes next. 

Meanwhile, progress continues beyond LLMs. (Don’t forget—there was AI before ChatGPT, and there will be AI after it too.) Technologies such as reinforcement learning—the powerhouse behind AlphaGo, DeepMind’s board-game-playing AI that beat a Go grand master in 2016—are set to make a comeback. There’s also a lot of buzz around world models, a type of generative AI with a stronger grip on how the physical world fits together than LLMs display. 

Ultimately, I agree with team Normal Technology that rapid technological advances do not translate to economic or societal ones straight away. There’s just too much messy human stuff in the middle. 

But Tim, over to you. I’m curious to hear what your tea leaves are saying. 

Tim Bradshaw and Will Douglas Heaven

FT/MIT TECHNOLOGY REVIEW | ADOBE STOCK

Tim Bradshaw responds

Will, I am more confident than you that the world will look quite different in 2030. In five years’ time, I expect the AI revolution to have proceeded apace. But who gets to benefit from those gains will create a world of AI haves and have-nots.

It seems inevitable that the AI bubble will burst sometime before the end of the decade. Whether a venture capital funding shakeout comes in six months or two years (I feel the current frenzy still has some way to run), swathes of AI app developers will disappear overnight. Some will see their work absorbed by the models upon which they depend. Others will learn the hard way that you can’t sell services that cost $1 for 50 cents without a firehose of VC funding.

How many of the foundation model companies survive is harder to call, but it already seems clear that OpenAI’s chain of interdependencies within Silicon Valley makes it too big to fail. Still, a funding reckoning will force it to ratchet up pricing for its services.

When OpenAI was created in 2015, it pledged to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” That seems increasingly untenable. Sooner or later, the investors who bought in at a $500 billion price tag will push for returns. Those data centers won’t pay for themselves. By that point, many companies and individuals will have come to depend on ChatGPT or other AI services for their everyday workflows. Those able to pay will reap the productivity benefits, scooping up the excess computing power as others are priced out of the market.

Being able to layer several AI services on top of each other will provide a compounding effect. One example I heard on a recent trip to San Francisco: Ironing out the kinks in vibe coding is simply a matter of taking several passes at the same problem and then running a few more AI agents to look for bugs and security issues. That sounds incredibly GPU-intensive, implying that making AI really deliver on the current productivity promise will require customers to pay far more than most do today.

The same holds true in physical AI. I fully expect robotaxis to be commonplace in every major city by the end of the decade, and I even expect to see humanoid robots in many homes. But while Waymo’s Uber-like prices in San Francisco and the kinds of low-cost robots produced by China’s Unitree give the impression today that these will soon be affordable for all, the compute cost involved in making them useful and ubiquitous seems destined to turn them into luxuries for the well-off, at least in the near term.

The rest of us, meanwhile, will be left with an internet full of slop, unable to afford the AI tools that actually work.

Perhaps some breakthrough in computational efficiency will avert this fate. But the current AI boom means Silicon Valley’s AI companies lack the incentives to make leaner models or experiment with radically different kinds of chips. That only raises the likelihood that the next wave of AI innovation will come from outside the US, be that China, India, or somewhere even farther afield.

Silicon Valley’s AI boom will surely end before 2030, but the race for global influence over the technology’s development—and the political arguments about how its benefits are distributed—seem set to continue well into the next decade. 

Will replies: 

I am with you that the cost of this technology is going to lead to a world of haves and have-nots. Even today, $200+ a month buys power users of ChatGPT or Gemini a very different experience from that of people on the free tier. That capability gap is certain to increase as model makers seek to recoup costs. 

We’re going to see massive global disparities too. In the Global North, adoption has been off the charts. A recent report from Microsoft’s AI Economy Institute notes that AI is the fastest-spreading technology in human history: “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.” And yet AI is useless without ready access to electricity and the internet; swathes of the world still have neither. 

I remain skeptical that we will see anything like the revolution that many insiders promise (and investors pray for) by 2030. When Microsoft talks about adoption here, it’s counting casual users rather than measuring long-term technological diffusion, which takes time. Meanwhile, casual users get bored and move on. 

How about this: If I live with a domestic robot in five years’ time, you can send your laundry to my house in a robotaxi any day of the week. 

JK! As if I could afford one. 

Further reading 

What is AI? It sounds like a stupid question, but it’s one that’s never been more urgent. In this deep dive, Will unpacks decades of spin and speculation to get to the heart of our collective technodream. 

AGI—the idea that machines will be as smart as humans—has hijacked an entire industry (and possibly the US economy). For MIT Technology Review’s recent New Conspiracy Age package, Will takes a provocative look at how AGI is like a conspiracy.

The FT examined the economics of self-driving cars this summer, asking who will foot the multi-billion-dollar bill to buy enough robotaxis to serve a big city like London or New York.

A plausible counter-argument to Tim’s thesis on AI inequalities is that freely available open-source (or more accurately, “open weight”) models will keep pulling down prices. The US may want frontier models to be built on US chips, but it is already losing the Global South to Chinese software.

5 Content Marketing Ideas for January 2026

Each new year is a time to reset, restart, and renew, even for content marketing.

In January 2026, ecommerce marketers can publish content celebrating the U.S.’s 250th year, share Wikipedia’s anniversary, take a deep dive into what makes products great, or even appreciate simple pleasures such as puzzles and Bloody Marys!

What follows are five content marketing ideas your business can use in January 2026.

250 Years

AI illustration of a U.S. flag

America’s 250th birthday will likely be a widely celebrated event.

In 2026, the United States will celebrate its 250th year as a nation. The event will go by a few names (keywords) such as “anniversary,” “sestercentennial,” “quarter-millennial,” or “semiquincentennial.”

If America’s bicentennial in 1976 is an indication of what to expect in 2026, there will be promotions, parties, and opportunities. Many referred to the 1976 occasion as the “buycentennial” because of the increase in marketing and spending.

Promotional content can focus on patriotic products or emphasize history and how-tos.

For example, an apparel retailer could publish a 250-year fashion series that includes videos, articles, and even interactive elements.

Collage of seven images showing apparel from 1776 to present.

Two hundred fifty years of fashion might reveal more continuity than change.

For how-to content, something as simple as “How to Celebrate the Sestercentennial” could work.

Bloody Mary Day

Photo of a Bloody Mary with various garnishes on a bar

The Bloody Mary is a classic cocktail, often heavily garnished.

The classic Bloody Mary cocktail is a mixture of vodka, tomato juice, and spices, including salt, Worcestershire sauce, and Tabasco.

While it has no curative properties, the Bloody Mary is a popular hangover remedy. As such, Americans celebrate Bloody Mary Day each January 1.

The idea is simple enough. Folks drink a lot on New Year’s Eve and wake up suffering from the aftereffects. The drink’s tomato juice is hydrating. The salt helps to restore electrolytes. The spices wake one up a bit. And, ultimately, the vodka prolongs recovery.

Bloody Marys offer plenty of content opportunities. For example, a travel retailer selling high-end luggage and travel accessories could focus on the cocktail’s history. Fernand Petiot invented the drink in 1921 while bartending at the famous Harry’s New York Bar in Paris. The establishment on 5 Rue Daunou is still open and remains a favorite for tourists.

Other article ideas include various Bloody Mary recipes, New Year’s recovery checklists, and entertainment ideas.

Wikipedia at 25

Wikipedia home page

Wikipedia home page.

On January 15, 2001, Larry Sanger and Jimmy Wales created Wikipedia, the human-edited encyclopedia that focuses on “verifiability, not truth.”

The result was an information source shunned by academics — scholarly papers do not cite it — but which approaches the accuracy of classic encyclopedias for many topics.

The platform has also faced a recent challenge from Elon Musk’s upstart Grokipedia, which positions itself as a corrective to Wikipedia’s alleged inaccuracies and political biases.

Both the anniversary and recent publicity make Wikipedia a good topic for content marketers. A common (and entertaining) approach is calling out Wikipedia errors, such as those related to the products your business sells.

For example, music-and-pop-culture-related businesses could write about Wikipedia’s false claim that a member of the band Bilk left after being implicated in a Jamaican corned beef theft. Or the same shop might cover another Wikipedia claim that the U.S. military used Yoko Ono’s music during interrogations.

Those errors from April 2024 and August 2025, respectively, have been corrected, but the humor remains.

National Puzzle Day

Photo of human hands assembling a jigsaw puzzle on a table

Puzzle solving turns chaos into a satisfying sense of order.

Established in 2002, National Puzzle Day occurs each January 29. The occasion reminds us of the joy and benefits that puzzles provide.

The topic is relevant for many types of ecommerce businesses. Here are examples.

  • Woodworking supply shop: “10 Easy Jigsaw Puzzle Templates for Scrollsaw Beginners.”
  • Toy store: “The Secret World of Puzzle Makers.”

Marketers could also publish puzzles, such as themed crosswords, visual challenges, or various product-related games. And producing puzzle content could be a good way to try out AI-powered vibe coding.
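As a toy illustration of the kind of puzzle content a marketer could vibe-code, here is a minimal sketch of a word-scramble generator in Python. The product names are hypothetical examples, and a real implementation would be wrapped in a web page or interactive widget:

```python
import random

def scramble(word, seed=None):
    """Return a shuffled version of `word` that differs from the original."""
    rng = random.Random(seed)
    if len(set(word)) < 2:
        return word  # single repeated letter: nothing to shuffle
    letters = list(word)
    scrambled = word
    while scrambled == word:  # reshuffle until it actually looks scrambled
        rng.shuffle(letters)
        scrambled = "".join(letters)
    return scrambled

def make_puzzle(products):
    """Pair each product name with a scrambled clue."""
    return [(scramble(name), name) for name in products]

if __name__ == "__main__":
    # Hypothetical product names for illustration only
    for clue, answer in make_puzzle(["jigsaw", "crossword", "sudoku"]):
        print(f"Unscramble: {clue}  (answer: {answer})")
```

Even a snippet this small could anchor an interactive quiz, a social post, or an email teaser tied to National Puzzle Day.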

Bill of Materials

Illustration of a backpack with highlights of its materials.

Product quality can hide in details that most consumers never see.

Consumers purchase some products on impulse and others through inference, picking up on cues of quality, durability, and craftsmanship even when they can’t articulate why.

A “bill of materials” article or video describes a product’s construction or the sourcing of its materials. Such content might interview a supplier or take apart a product to show its inner workings.

The audience of potential customers may not remember each material name, but they will remember the impact, thinking, “These people know what they are making, and they are not afraid to show it.”

Google Disputes Report Claiming Ads Are Coming To Gemini In 2026 via @sejournal, @MattGSouthern

Google is publicly pushing back on an Adweek report that claimed the company told advertising clients it plans to bring ads to its Gemini AI chatbot next year.

Dan Taylor, Google’s Vice President of Global Ads, responded directly on X shortly after the story published, calling the report inaccurate and denying any plans to monetize the Gemini app.

The Original Report

Adweek’s Trishla Ostwal reported that Google had informed advertising clients about plans to introduce ads to Gemini. According to the exclusive story, Google representatives held calls with at least two advertising clients indicating that ad placements in Gemini were targeted for a 2026 rollout.

The agency buyers who spoke to Adweek remained anonymous. They said details on ad formats, pricing, and testing were unclear, and that Google had not shared prototypes or technical specifications about how ads would appear in the chatbot.

Notably, the report said this plan would be separate from advertisements in AI Mode, Google’s AI-powered search experience.

Google’s Response

Taylor disputed the claims publicly on X, writing: “This story is based on uninformed, anonymous sources who are making inaccurate claims. There are no ads in the Gemini app and there are no current plans to change that.”

Google’s official AdsLiaison account amplified the denial, reiterating that there are no ads in the Gemini app and no current plans to add them. The account also noted that ads currently appear in AI Overviews in English in the US, are expanding to more English-speaking countries, and are being tested in AI Mode.

Logan Kilpatrick, who works on Google’s Gemini team, responded to Taylor’s post with “thanks for clarifying!!”

Where Google Is Monetizing AI

While the Gemini app itself remains ad-free according to Google, the company is actively monetizing other AI-powered search experiences.

Google began showing ads in AI Overviews earlier this year and has been expanding that program to additional English-speaking countries. The company also continues testing advertisements within AI Mode.

Why This Matters

The question of how AI chatbots will be monetized has become increasingly relevant as these products gain mainstream adoption. Google, OpenAI, and other AI companies face pressure to generate revenue from expensive-to-run conversational AI products.

Just last week, code discovered in ChatGPT’s Android app suggested OpenAI may be building an advertising framework, though the company has not confirmed any plans to introduce ads.

For now, Google maintains that Gemini users won’t see ads in the chatbot app. Whether that position changes as the AI landscape evolves remains to be seen.