Google Downplays GEO – But Let’s Talk About Garbage AI SERPs via @sejournal, @martinibuster

Google’s Danny Sullivan and John Mueller’s Search Off The Record podcast offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.

SEO And The Next Generation Of Search

Google used to rank content based on keyword matching and PageRank was a way to extend that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.

What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”

So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are in a long-form format that answers three or more additional questions beyond the user’s initial query.

The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.

Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?

How To Write For Longform Answers

Given that Google is surfacing multi-paragraph answers, does it make sense to create content that’s organized into bite-sized chunks? And how does that affect how humans read content: will they like it or leave it?

Many SEOs recommend that publishers break the page up into “chunks” based on the intuition that AI understands content in chunks. But that’s an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through headings and HTML elements like ordered and unordered lists. A properly marked-up and formatted web page already has a logical structure that both a human and a machine can easily understand. Duh… right?
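To make that concrete, here is a minimal Python sketch (stdlib only, with invented sample HTML) showing that heading tags already give a machine clean section boundaries. Nothing here is from Google; it only illustrates the point that well-marked-up content is pre-chunked.

```python
# Split an HTML page into sections at its h1-h3 headings using only the
# standard library. The sample HTML below is invented for this demo.
from html.parser import HTMLParser

class SectionSplitter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sections = []      # list of [heading_text, body_text] pairs
        self.in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = True
            self.sections.append(["", ""])  # a heading starts a new section

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = False

    def handle_data(self, data):
        if not self.sections:
            return  # ignore text before the first heading
        idx = 0 if self.in_heading else 1
        self.sections[-1][idx] += data

html = "<h2>Care</h2><p>Wash cold.</p><h2>Fit</h2><p>Runs large.</p>"
splitter = SectionSplitter()
splitter.feed(html)
print([(h.strip(), t.strip()) for h, t in splitter.sections])
```

The headings alone produce the “chunks” SEOs are chasing, with no second version of the content required.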

It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers not to break their content up into chunks.

Danny said:

“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?

So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”

Danny talked about chunking with some Google engineers, and his takeaway from that conversation is to recommend against it. The second takeaway is that their systems are set up to access content the way human readers access it, and for that reason he says to craft content for humans.

Avoids Talking About Search Referrals

But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out affects referrals because Google is ranking a handful of pages across multiple queries for every one query a user makes. But compounding this situation, as you will see further on, is that the sites Google is ranking do not measure up.

Focus On The Big Picture

Danny Sullivan next discusses the downside of optimizing for a machine, explaining that the systems eventually improve, and that usually means optimizations for machines stop working.

He explained:

“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.

…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”

Let’s Talk About Garbage AI Search Results

I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and that he truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.

Subject Matter Expertise Is Missing

Google is intentionally hiding expert publications in the search results, tucked away under the More tab. To find expert content, a user has to click the More tab and then click the News tab.

How Google Hides Expert Web Pages


Google’s AI Mode Promotes Garbage And Sites Lacking Expertise

This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.

Google’s AI Mode cites the following pages:

1. An abandoned Medium blog from 2018 that only ever had two posts, both of which have broken images. That’s not authoritative.

2. An article published on LinkedIn, a business social networking website. That’s neither authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?

3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?

Screenshot Of Google’s Garbage AI Results

Google Hides The Good Stuff In More > News Tab

Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable websites. Instead, Google hides the high-quality web pages under the More tab.

Screenshot Of Hidden High-Quality Search Results

GEO Or SEO – It Doesn’t Matter

This whole thing about GEO or AEO and whether it’s all SEO doesn’t really matter. It’s all a bunch of hand waving and bluster. What matters is that Google is no longer ranking high quality sites and high quality sites are withering from a lack of traffic.

I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?

Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.

How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?

Listen to the podcast here:

Featured Image by Shutterstock/Kues

2026 Guide To Hiring A Link Building Agency In The AI Search Era via @sejournal, @jmoserr

This post was sponsored by uSERP. The opinions expressed in this article are the sponsor’s own.

Let’s get real. Most link building agencies are selling you an outdated playbook from 2015.

Volume. Guest posting on dead sites. Chasing domain ratings at all costs.

But if you’re a marketing leader in 2026, you know the game has changed.

I’ve spent the last decade completing over 575 link building campaigns and scaling my team at uSERP to 55+ people. I have worked with SaaS giants like monday.com and Robinhood.

I know firsthand that the gap between a bad backlinks agency and a great one is no longer just about rankings. It is about revenue.

Here’s what I have learned, and how you can use it to pick a skilled link building agency in the AI era.

Traditional link-building isn’t dead. But the old methods are broken.

For years, SEO agencies focused only on domain authority (DA) or domain rating (DR).

They built backlinks from any site with a high number of backlinks. They ignored readership and content quality.

But that approach is dangerous now.

Because search engines have evolved, links now serve two masters: Google’s algorithm and AI model training data.

Ignoring this means losing search engine rankings (and watching your bottom line suffer).

In fact, uSERP’s 2025 State of Backlinks Report, which surveyed 800 SEO professionals, found that 67.5% believe backlinks influence overall search results (a rise from 2023).

But it’s not just quantity. Quality and brand authority work together, month after month, to drive traffic.

This data forced us to pivot at uSERP. We stopped chasing vanity metrics like DR.

Instead, we started prioritizing traffic and relevance.

It turns out that a single link from a site that appears in a Perplexity answer is worth more than 10 links from high-DR sites with zero readership.

Agencies that fail to adapt are dying, and so are their clients.

So the bottom-line question is:

How can you pick a link building agency that catapults your business in this AI era instead of leaving you stranded?

Green Flags: What Separates Elite Agencies

It’s easy to promise the world on a sales call. It’s harder to deliver natural links that drive revenue.

When vetting partners, look for these specific green flags.

They Focus On AI Visibility, Not Just Rankings

Elite agencies don’t just track Google SERPs.

They track brand mentions in LLMs. They understand that a link is a citation. It validates your expertise to both humans and machines.

Ask them this: “Can you show me examples of clients appearing in AI-generated answers?”

If they stare blankly, walk away.

If they have a proven system, that’s a green flag. It means they know what they’re doing.

For example, we developed proprietary AI visibility tracking tools because we had to. It was the only way to measure impact.

Any agency you hire must discuss citations and how search engines use links to verify facts.

They Lead With Digital PR And Original Research

Content creation is the backbone of modern link acquisition.

You cannot just beg for links anymore. You have to earn them with a content-driven approach.

That is why digital PR was the most effective link-building tactic in 2025, according to our State of Backlinks Report.

The winning strategy is simple. Produce linkable assets, such as original studies, interactive tools, and expert commentary.

These assets generate inbound links naturally. They get cited by AI and compound over time.

For example, a SaaS brand might create a salary calculator. Journalists and publishers love this data.

This approach also shifts the dynamic from cold outreach to relationship-based link building. Even if you do cold outreach, you should expect better results because it’s a win-win for both parties, and you’re leading with quality content and data they can’t ignore.

They Are Transparent About Process And Pricing

A skilled backlinks agency has nothing to hide.

Vague promises are red flags. Detailed reporting on publishers, anchor text, and traffic estimates is a green flag.

They are also realistic about costs.

For example, our data show that most SEO professionals spend between $5,000 and $10,000 per month on link building.

If someone offers you 100 links for $500, that’s a liability, not a deal.

They should also provide a dashboard that includes your link inventory, KPIs, and how your content is driving traffic over time.

Transparency builds trust. Secrecy usually hides black hat link building tactics.

Let’s look at red flags you should stay far away from.

Red Flags That Scream “Run Away”

I have worked with 100+ clients who got burned by cheap link building providers. They saw temporary spikes, then got hit by core Google updates.

This is the price of buying temporary tactics. It’s the equivalent of shiny object syndrome that wastes time, money, and reputation for the sake of slightly higher initial traffic that evaporates after a couple of months.

Here are the warning signs.

Promises Of Specific Ranking Positions

“We will get you to #1 in 30 days.”

This is a lie.

No agency controls Google. They can influence probabilities, but they cannot guarantee outcomes.

Ranking factors are very complex. Plus, some are unknown, and agencies can only estimate probabilities based on experience and data.

Anyone guaranteeing a spot is selling snake oil.

PBNs And Bought Links

PBNs (Private Blog Networks) are poison for your site.

They’re fake “blogs” that exist for one reason: to pass authority. They violate Google’s rules and go against its spam policies.

If your agency is buying links off some “menu” or dropping niche edits on hacked, junk sites, that’s your cue to walk away.

Sure, these backlinks might temporarily boost your domain rating. But sooner or later, your search visibility winds up circling the drain.

Templated Outreach

If they use the same email template for everyone, they are failing.

Journalists receive dozens of these every day and just ignore or delete them. Website owners mark them as spam.

You need a personalized approach.

Sending thousands of generic emails daily reflects poorly on your brand.

No Link Monitoring

Link rot is a silent killer. Ahrefs found that 66.5% of links built from 2013 to 2024 are now dead.

Cheap agencies take your money and move on.

You need a partner who monitors their work. They must check for link rot and take steps to fix it to protect your investment and your brand’s organic growth.

The Questions You Must Ask Before Signing

Don’t just trust a Clutch profile. Grill potential partners with these questions.

1. “What Is Your Process For Vetting Publishers?”

They should talk about how they verify traffic and how they check for spammy sites. If they’re not even looking at a site’s keyword rankings, that’s a big red flag.

2. “Can I See Examples Of Client Results In AI Overviews?”

This separates modern agencies from the dinosaurs.

Ask how they measure AI visibility and impact in ChatGPT or Perplexity.

3. “What Is Your Typical Timeline?”

If they say “immediate results,” they are lying.

You could have a severe technical issue that, once fixed, could cause a permanent spike in traffic. But that’s a rare exception.

Real SEO services take time. BuzzStream’s 2025 State of Digital PR Report states that most campaigns deliver results within 3-6 months.

4. “How Do You Measure Success Beyond DR Increases?”

Domain rating is a vanity metric if it doesn’t lead to revenue. They should track growth in organic search traffic and referral traffic.

Ask about backlink gap analysis and see if they share a high-level step-by-step of their link building process.

Given the high rate of link rot, a replacement policy is essential. You need backlink management that protects your investment.

Decide if you want digital PR or traditional link building with AI enhancements. But make sure there’s accountability and a process that actively monitors and replaces rotten links.

What Success Actually Looks Like

Let’s look at a real example. When monday.com reached out to my company, uSERP, they had 100+ internal SEO staff but still needed help with content production and PR.

The competitors were winning in organic search, taking over primary keywords, and gaining market share.

So, we focused on untapped keywords first. We created helpful content and optimized it to land crucial backlinks from publications like Crunchbase and G2.

We focused on quality plus relevance first. Then monday.com earned volume as the natural downstream effect.

The result was a 77.84% increase in traffic to 1.2M+ monthly visitors.

This is the lens you need: relationship-building techniques that demonstrate real authority and value, resulting in ROI. Not just rankings.

Whether in the United States, the United Kingdom, or Canada, quality link building like this takes 60-90 days for early signals and 6-12 months for full impact. But the dividends last for years.

Picking a link building agency in 2026 isn’t about finding the cheapest option. It is about finding partners who understand the AI-first future.

You need transparency, AI visibility results, and digital PR expertise.

Avoid anyone selling the 2015 playbook. The winners focus on citations, AI brand mentions, and revenue growth. Everything else is just noise.

Start asking the hard questions. Look for the green flags and don’t settle for vanity metrics.

For more foundational strategies, check out our complete link building guide.


Image Credits

Featured Image: Image by Shutterstock. Used with permission.

In-Post Images: Images by uSERP. Used with permission.

Apple Selects Google’s Gemini For New AI-Powered Siri via @sejournal, @MattGSouthern

Apple is partnering with Google to power its AI features, including a major Siri upgrade expected later this year.

The companies announced the multi-year collaboration on Monday. Google’s Gemini models and cloud technology will serve as the foundation for the next generation of Apple Foundation Models.

“After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users,” the joint statement said.

What’s New

The partnership makes Gemini a foundation for Apple’s next-generation models. Apple’s models will continue running on its devices and Private Cloud Compute infrastructure while maintaining what the company calls its “industry-leading privacy standards.”

Neither company disclosed the deal’s financial terms. Bloomberg previously reported Apple had discussed paying about $1 billion annually for Google AI access, though that figure remains unconfirmed for the final agreement.

By November, Bloomberg reported Apple had chosen Google over Anthropic based largely on financial terms.

Existing OpenAI Partnership Remains

Apple currently integrates OpenAI’s ChatGPT into Siri and Apple Intelligence for complex queries that draw on the model’s broader knowledge base.

Apple told CNBC the company isn’t making changes to that agreement. OpenAI did not immediately respond to a request for comment.

The distinction appears to be between the foundational models powering Apple Intelligence overall versus the external AI connection available for certain queries.

Context

The deal arrives as Google’s AI position strengthens. Alphabet surpassed Apple in market capitalization last week for the first time since 2019.

The default-search deal between Google and Apple has been under scrutiny after U.S. District Judge Amit Mehta ruled Google holds an illegal monopoly in online search and related advertising. In September 2025, he did not require Google to divest Chrome or Android.

Apple had originally planned to launch an AI-powered Siri upgrade in 2025 but delayed the release.

“It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year,” Apple said at the time.

Google introduced its upgraded Gemini 3 model late last year. CEO Sundar Pichai said in October that Google Cloud signed more deals worth over $1 billion through the first three quarters of 2025 than in the previous two years combined.

Why This Matters

I covered this partnership in November when Bloomberg first reported Apple was paying Google to build a custom Gemini model for Siri. Today’s joint statement confirms what was then unattributed sourcing.

The confirmation matters because it extends Gemini’s reach into one of the largest device ecosystems in the world. Apple has said Siri fields 1.5 billion user requests per day across more than 2 billion active devices. That installed base gives Gemini distribution Google couldn’t match through its own products alone.

The competitive signal is clearer now too. Apple evaluated Anthropic and chose Google. Eddy Cue testified in May that Apple planned to add Gemini to Siri, but today’s announcement frames it as a deeper infrastructure partnership, not just another assistant option.

If Siri becomes meaningfully more capable at answering queries directly, the implications mirror what’s happening with AI Overviews and AI Mode in search. More queries could be resolved without users reaching external websites.

Looking Ahead

The upgraded Siri is expected to roll out later in 2026. The companies haven’t provided a specific launch date.

Apple maintaining its OpenAI integration alongside the Google partnership suggests both relationships will continue, at least for now. How Apple balances these two AI providers for different use cases will become clearer as the new features launch.

Google: AI Overviews Show Less When Users Don’t Engage via @sejournal, @MattGSouthern

AI Overviews don’t show up consistently across Google Search because the system learns where they’re useful and pulls them back when people don’t engage.

Robby Stein, Vice President of Product at Google Search, described in a CNN interview how Google tests the summaries, measures interaction, and reduces their appearance for certain kinds of searches where they don’t help.

How Google Decides When To Show AI Overviews

Stein explained that AI Overviews appear based on learned usefulness rather than showing up by default.

“The system actually learns where they’re helpful and will only show them if users have engaged with that and find them useful,” Stein said. “For many questions, people just ask like a short question or they’re looking for very specific website, they won’t show up because they’re not actually helpful in many many cases.”

He gave a concrete example. When someone searches for an athlete’s name, they typically want photos, biographical details, and social media links. The system learned people didn’t engage with an AI Overview for those queries.

“The system will learn that if it tried to do an AI overview, no one really clicked on it or engaged with it or valued it,” Stein said. “We have lots of metrics we look at that and then it won’t show up.”

What “Under The Hood” Queries Mean For Visibility

Stein described the system as sometimes expanding a search beyond what you type. Google “in many cases actually issues additional Google queries under the hood to expand your search and then brings you the most relevant information for a given question,” he said.

That may help explain why pages sometimes show up in AI Overview citations even when they don’t match your exact query wording. The system pulls in content answering related sub-questions or providing context.

For image-focused queries, AI Overviews integrate with image results. For shopping queries, they connect to product information. The system adapts based on what serves the question.
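The mechanism Stein describes can be sketched as a toy Python program. Everything below (the sub-query expansions and the mini “index”) is invented for the demo; Google’s actual system is not public. The point is only that one query can surface pages that never matched its exact wording.

```python
# Toy illustration of query "fan-out": one user query is expanded into
# several sub-queries, and cited pages are merged across all of them.
# Both the expansion table and the mini index are made up for this sketch.

TOY_INDEX = {
    "best backup power options": ["generatorguide.example", "wirecutter.example"],
    "portable power station vs generator": ["wirecutter.example", "powerblog.example"],
    "backup power cost comparison": ["powerblog.example", "homeadvisor.example"],
}

def expand_query(query: str) -> list[str]:
    """Stand-in for the hidden expansion step: return related sub-queries."""
    related = {
        "best backup power options": [
            "portable power station vs generator",
            "backup power cost comparison",
        ],
    }
    return [query] + related.get(query, [])

def fan_out(query: str) -> list[str]:
    """Run every sub-query and merge citations, deduplicating in order."""
    seen, merged = set(), []
    for sub in expand_query(query):
        for page in TOY_INDEX.get(sub, []):
            if page not in seen:
                seen.add(page)
                merged.append(page)
    return merged

print(fan_out("best backup power options"))
```

Note that two of the four merged pages never ranked for the literal query; they entered through the expanded sub-queries, which is exactly why citation tracking can’t rely on exact-match phrasing.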

Where AI Mode Fits In

Stein described AI Mode as the next step for complicated questions that need follow-up conversation. The design assumes you start in traditional Search, get an Overview if it helps, then go deeper into AI Mode when you need more.

“We really designed AI Mode to really help you go deeper with a pretty complicated question,” Stein said, citing examples like comparing cars or researching backup power options.

During AI Mode testing, Google saw “like a two to three … full increase in the query length” compared to typical Search queries. Users also started asking follow-up questions in a conversational pattern.

The longer AI Mode queries included more specificity. Stein’s example: instead of “things to do in Nashville,” users asked “restaurants to go to in Nashville if one friend has an allergy and we have dogs and we want to sit outside.”

Personalization Exists But Is Limited

Some personalization in AI Mode already exists. Users who regularly click video results might see videos ranked higher, for example.

“We are personalizing some of these experiences,” Stein said. “But right now that’s a smaller adjustment probably to the experience because we want to keep it as consistent as possible overall.”

Google’s focus is on maintaining consistency across users while allowing for individual preferences where it makes sense.

Why This Matters

In July 2024, research showed Google had dialed back AIO presence by 52%, from widespread appearance to showing for just 8% of queries. Stein’s description offers one possible explanation for that pattern.

If you’re tracking AIO presence week to week, the fluctuations may reflect user behavior patterns for different question types rather than algorithm changes.

The “under the hood” query expansion means content can appear in citations even without matching your exact phrasing. That matters when you’re explaining CTR drops internally or planning content for complex queries where Overviews are more likely to surface.

Looking Ahead

Google’s AI Overviews earn placement based on usefulness rather than appearing by default.

Personalization is limited today, but the direction is moving toward more tailored experiences that maintain overall consistency.

See the full interview with Stein below:


Featured Image: nwz/Shutterstock

Google Gemini Gains Share As ChatGPT Declines In Similarweb Data via @sejournal, @MattGSouthern

ChatGPT accounted for 64% of worldwide traffic share among gen AI chatbot websites as of January, while Google’s Gemini reached 21%, according to Similarweb’s Global AI Tracker.

Similarweb’s tracker measures total visits at the domain level, so it reflects people who go to these tools directly on the web. It doesn’t capture API usage, embedded assistants, or other integrations where much of AI usage now occurs.

ChatGPT Down, Gemini Up In A Year Of Share Gains

The share movement is easiest to see year-over-year.

A year ago, Similarweb estimated ChatGPT accounted for 86% of worldwide traffic among tracked chatbot sites. Now, that figure is 64%. Over the same period, Gemini rose from 5% to 21%.

Other tools are much smaller by this measure. DeepSeek was at 3.7%, Grok at 3.4%, and Perplexity and Claude both at 2.0%.

Google has been promoting Gemini through products like Android and Workspace, which may help explain why it’s gaining share among users who access these tools directly.

Winter Break Pulled Down Total Visits

Similarweb pointed to seasonality during the holiday period, writing on X:

“Driven by the winter break, the daily average visits to all tools dropped to August-September levels.”

That context matters because it helps distinguish overall category softness from shifts in market share.

Writing Tool Domain Traffic Declines

Writing and content generation sites were down 10% over the most recent 12-week window in Similarweb’s category view.

At the individual tool level, Similarweb’s table shows steep drops for several writing platforms. Growthbarseo was down 100%, while Jasper fell 16%, Writesonic dropped 17%, and Rytr declined 9%. Originality was up 17%.

These are still domain-level visit counts, so the clearest takeaway is that fewer people are going directly to specialized writing sites online. That can happen for several reasons, including users relying more on general assistants, switching to apps, or using these models through integrations.

Code Completion Shows Mixed Results

The developer tools category looked more mixed than the writing tools.

Similarweb’s code completion table shows Bolt down 39% over 12 weeks, while Cursor (up 8%), Replit (up 2%), and Base44 (up 49%) moved in different directions.

Traditional Search Looks Close To Flat

In Similarweb’s “disrupted sectors” view, traditional search traffic is down roughly 1% to 3% year-over-year across recent periods, which doesn’t indicate a sharp drop in overall search usage in this dataset.

The same table shows Reddit up 12% year-over-year and Quora down 53%, consistent with the idea that some Q&A behavior is being redistributed even as overall search remains relatively steady.

Why This Matters

When making sense of how AI is changing discovery and demand, these numbers can help you understand where direct, web-based attention is concentrating. That can influence which assistants you monitor for brand mentions, citations, and referral behavior.

That said, you should treat this as a snapshot, not the full picture. If your audience is interacting with AI through browsers, apps, or embedded assistants, your own analytics will be a better barometer than any domain-level tracker.

Looking Ahead

The next report should clarify whether category traffic rebounds after the holiday period and whether Gemini continues to gain share at the same pace. It will also be a useful read on whether writing tools stabilize or whether more of that usage continues to consolidate into general assistants and bundled experiences.


Featured Image: vfhnb12/Shutterstock

Most Major News Publishers Block AI Training & Retrieval Bots via @sejournal, @MattGSouthern

Most top news publishers block AI training bots via robots.txt, but they’re also blocking the retrieval bots that determine whether sites appear in AI-generated answers.

BuzzStream analyzed the robots.txt files of 100 top news sites across the US and UK and found 79% block at least one training bot. More notably, 71% also block at least one retrieval or live search bot.

Training bots gather content to build AI models, while retrieval bots fetch content in real time when users ask questions. Sites blocking retrieval bots may not appear when AI tools try to cite sources, even if the underlying model was trained on their content.
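Because each crawler identifies itself with its own user-agent token, the training/retrieval distinction can be expressed directly in robots.txt. The policy below is a hypothetical example, not any publisher’s actual file; it blocks OpenAI’s training crawler while allowing its live-search crawler, verified with Python’s stdlib robots.txt parser.

```python
# Hypothetical robots.txt that blocks GPTBot (training) but allows
# OAI-SearchBot (live retrieval), checked with urllib.robotparser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/article"))         # training crawl
print(rp.can_fetch("OAI-SearchBot", "https://example.com/article"))  # live retrieval
```

A site that copies a blanket “block everything AI” template, rather than separating groups like this, opts out of citations along with training.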

What The Data Shows

BuzzStream examined the top 50 news sites in each market based on Similarweb traffic share, then deduplicated the list. The study grouped bots into three categories: training, retrieval/live search, and indexing.

Training Bot Blocks

Among training bots, Common Crawl’s CCBot was the most frequently blocked at 75%, followed by Anthropic-ai at 72%, ClaudeBot at 69%, and GPTBot at 62%.

Google-Extended, which trains Gemini, was the least blocked training bot at 46% overall. US publishers blocked it at 58%, nearly double the 29% rate among UK publishers.

Harry Clarkson-Bennett, SEO Director at The Telegraph, told BuzzStream:

“Publishers are blocking AI bots using the robots.txt because there’s almost no value exchange. LLMs are not designed to send referral traffic and publishers (still!) need traffic to survive.”

Retrieval Bot Blocks

The study found 71% of sites block at least one retrieval or live search bot.

Claude-Web was blocked by 66% of sites, while OpenAI’s OAI-SearchBot, which powers ChatGPT’s live search, was blocked by 49%. ChatGPT-User was blocked by 40%.

Perplexity-User, which handles user-initiated retrieval requests, was the least blocked at 17%.

Indexing Blocks

PerplexityBot, which Perplexity uses to index pages for its search corpus, was blocked by 67% of sites.

Only 14% of sites blocked all AI bots tracked in the study, while 18% blocked none.

The Enforcement Gap

The study acknowledges that robots.txt is a directive, not a barrier, and bots can ignore it.

We covered this enforcement gap when Google’s Gary Illyes confirmed robots.txt can’t prevent unauthorized access. It functions more like a “please keep out” sign than a locked door.

Clarkson-Bennett raised the same point in BuzzStream’s report:

“The robots.txt file is a directive. It’s like a sign that says please keep out, but doesn’t stop a disobedient or maliciously wired robot. Lots of them flagrantly ignore these directives.”

Cloudflare documented that Perplexity used stealth crawling behavior to bypass robots.txt restrictions. The company rotated IP addresses, changed ASNs, and spoofed its user agent to appear as a browser.

Cloudflare delisted Perplexity as a verified bot and now actively blocks it. Perplexity disputed Cloudflare’s claims and published a response.

For publishers serious about blocking AI crawlers, CDN-level blocking or bot fingerprinting may be necessary beyond robots.txt directives.
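As a minimal sketch of what server- or CDN-level blocking can look like, the following nginx fragment (placed inside a `server` block) denies requests whose declared User-Agent matches known AI training crawlers. This is an illustrative assumption, not a recommended configuration: it still relies on the crawler sending its documented User-Agent string, so a spoofed agent of the kind Cloudflare described would get through, which is why fingerprinting-based approaches exist.

```
# Deny requests from known AI training crawlers by User-Agent.
# Case-insensitive match; does not catch spoofed user agents.
if ($http_user_agent ~* "(GPTBot|ClaudeBot|CCBot|anthropic-ai)") {
    return 403;
}
```

Unlike a robots.txt directive, this returns an actual HTTP 403 rather than asking the bot to comply voluntarily.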

Why This Matters

The retrieval-blocking numbers warrant attention here. In addition to opting out of AI training, many publishers are opting out of the citation and discovery layer that AI search tools use to surface sources.

OpenAI separates its crawlers by function: GPTBot gathers training data, while OAI-SearchBot powers live search in ChatGPT. Blocking one doesn’t block the other. Perplexity makes a similar distinction between PerplexityBot for indexing and Perplexity-User for retrieval.
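As a sketch, a publisher that wants to opt out of training while staying citable in live search could express that split in robots.txt, using the user-agent tokens named in this study (the `Allow` directive is honored by major crawlers, though compliance is ultimately voluntary):

```
# Opt out of training data collection
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Remain available to retrieval / live-search bots
User-agent: OAI-SearchBot
Allow: /

User-agent: Perplexity-User
Allow: /
```

The blocking percentages above suggest many publishers are not making this distinction and are blocking both categories at once.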

These blocking choices affect where AI tools can pull citations from. If a site blocks retrieval bots, it may not appear when users ask AI assistants for sourced answers, even if the model already contains that site’s content from training.

The Google-Extended pattern is worth watching. US publishers block it at twice the UK rate, though whether that reflects different risk calculations around Gemini’s growth or different business relationships with Google isn’t clear from the data.

Looking Ahead

The robots.txt method has limits, and sites that want to block AI crawlers may find CDN-level restrictions more effective than robots.txt alone.

Cloudflare’s Year in Review found GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives across top domains. The report also noted that most publishers use partial blocks for Googlebot and Bingbot rather than full blocks, reflecting the dual role Google’s crawler plays in search indexing and AI training.

For those tracking AI visibility, the retrieval bot category is what to watch. Training blocks affect future models, while retrieval blocks affect whether your content shows up in AI answers right now.


Featured Image: Kitinut Jinapuck/Shutterstock

Google’s Mueller Weighs In On SEO vs GEO Debate via @sejournal, @MattGSouthern

Google Search Advocate John Mueller says businesses that rely on referral traffic should think about how AI tools fit into the picture.

Mueller responded to a Reddit thread asking whether SEO is still enough or whether practitioners need to start considering GEO, a term some in the industry use for optimizing visibility in AI-powered answer engines like ChatGPT, Gemini, and Perplexity.

“If you have an online business that makes money from referred traffic, it’s definitely a good idea to consider the full picture, and prioritize accordingly,” Mueller wrote.

What Mueller Said

Mueller didn’t endorse or reject the GEO terminology. He framed the question in terms of practical business decisions rather than new optimization techniques.

“What you call it doesn’t matter, but ‘AI’ is not going away, but thinking about how your site’s value works in a world where ‘AI’ is available is worth the time,” he wrote.

He also pushed back on treating AI visibility as a universal priority. Mueller suggested practitioners look at their own data first.

Mueller added:

“Also, be realistic and look at actual usage metrics and understand your audience (what % is using ‘AI’? what % is using Facebook? what does it mean for where you spend your time?).”

Why This Matters

I’ve been tracking Mueller’s public statements for years, and this one lands differently than the usual “it depends” responses he’s known for. He’s reframing the GEO question as a resource allocation problem rather than a terminology debate.

The GEO conversation has picked up steam over the past year as AI answer engines started sending measurable referral traffic. I’ve covered the citation studies, the traffic analyses, and the research comparing Google rankings to LLM citations. What’s been missing is a clear signal from Google: is this a distinct discipline, or just rebranded SEO?

Mueller’s answer is consistent with what Google said at Search Central Live, when Gary Illyes emphasized that AI features share infrastructure with traditional Search. The message from both is that you probably don’t need a separate framework, but you do need to understand how discovery is changing.

What I find more useful is his emphasis on checking your own numbers. Current data shows ChatGPT referrals at roughly 0.19% of traffic for the average site. AI assistants combined still drive less than 1% for most publishers. That’s growing, but it’s not yet a reason to reorganize your entire strategy.

The industry has a habit of chasing trends that apply to some sites but not others. Mueller’s pushing back on that pattern. Look at what percentage of your audience actually uses AI tools before reallocating resources toward them.

Looking Ahead

The GEO terminology will likely stick, regardless of Google’s stance. Mueller’s framing puts the decision back on individual businesses to measure their own audience behavior.

For practitioners, this means the homework is in your analytics. If AI referrals are showing up in your traffic sources, they’re worth understanding. If they’re not, you have other priorities.


Featured Image: Roman Samborskyi/Shutterstock

The Guardian: Google AI Overviews Gave Misleading Health Advice via @sejournal, @MattGSouthern

The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.

The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the “vast majority” of AI Overviews are factual and helpful.

What The Guardian Reported Finding

The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.

One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect.” She added that following that guidance “could be really dangerous and jeopardise a person’s chances of being well enough to have treatment.”

The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could lead people to avoid seeking help.”

The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was “completely wrong information.”

Sophie Randall, director of the Patient Information Forum, said the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”

The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.

Google’s Response

Google disputed both the examples and the conclusions.

A spokesperson told The Guardian that many of the health examples shared were “incomplete screenshots,” but from what the company could assess they linked “to well-known, reputable sources and recommend seeking out expert advice.”

Google told The Guardian the “vast majority” of AI Overviews are “factual and helpful,” and that it “continuously” makes quality improvements. The company also argued that AI Overviews’ accuracy is “on a par” with other Search features, including featured snippets.

Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.

The Broader Accuracy Context

This investigation lands in the middle of a debate that’s been running since AI Overviews expanded in 2024.

During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.

I covered that launch, and the early accuracy problems quickly became part of the public narrative around AI summaries. The question then was whether the issues were edge cases or something more structural.

More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That’s more than double the overall baseline rate in the dataset.

Separate research on medical Q&A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses were not fully supported by the sources they cited, even when systems provided links.

Why This Matters

AI Overviews appear above ranked results. When the topic is health, errors carry more weight.

Publishers have spent years investing in documented medical expertise to meet Google’s quality standards for health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

The Guardian’s reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.

Looking Ahead

Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian indicates it expects AI Overviews to be judged like other Search features, not held to a separate standard.

AI-Generated Content Isn’t The Problem, Your Strategy Is

“If AI can write, why are we still paying writers?” For any CMO or senior manager on a budget, you’ve probably already had a version of this conversation. It’s a seductive idea. After all, humans are expensive and can take hours or even days to write a single article. So, why not replace them with clever machines and watch the costs go down while productivity goes up?

It’s understandable. Buffeted by years of high inflation, high interest rates, and disrupted supply chains, organizations around the world are cutting costs wherever they can. These days, instead of “cost cutting,” CFOs and executive teams prefer the term “cost transformation,” a new jargon for the same old problem.

Whatever you call it, marketing is one department that is definitely feeling the impact. According to Gartner, in 2020, the average marketing budget was 11% of overall company revenue. By 2023, this had fallen to 9.1%. Today, the average budget is 7.7%.

Of course, some organizations will have made these cuts under the assumption that AI makes larger teams and larger budgets unnecessary. I’ve already seen some companies slash their content teams to the bone, no doubt believing that all you need is a few people capable of crafting a decent prompt. Yet a different Gartner study found that 59% of CMOs say they lack the budget to execute their 2025 strategy. I guess they didn’t get the memo.

Meanwhile, some other organizations refuse to let AI near their content at all, for a variety of reasons. They might have concerns over quality control, data privacy, complexity, and so on. Or perhaps they’re hanging onto the belief that this AI thing is a fad or a bubble, and they don’t want to implement something that might come crashing down at any moment.

Both camps likely believe they’ve adopted the correct, rational, financially prudent approach to AI. Both are dangerously wrong. AI might not be the solution, but it’s also not the problem.

Beeching’s Axe

Spanish philosopher George Santayana once wrote: “Those who cannot remember the past are condemned to repeat it.” With that in mind, let me share a cautionary tale.

In the 1960s, British Railways (later British Rail) made one of the most short-sighted decisions in transport history. With the railway network hemorrhaging money, the Conservative Government appointed Dr. Richard Beeching, a physicist from ICI with no transport experience, as the new chairman of the British Transport Commission, tasked with cutting costs and making the railways profitable.

Beeching’s solution was simple: do away with all unprofitable routes, identified by assessing the passenger numbers and operational costs of each route in isolation. Between 1963 and 1970, Beeching’s cost-cutting axe led to the closure of 2,363 stations and over 5,000 miles of track (~30% of the rail network), with the loss of 67,700 jobs.

Decades later, the country is spending billions rebuilding some of those same routes. As it turned out, many of those “unprofitable” routes were vital not only to the health of the wider rail network, but also to the communities in those regions in ways that Beeching’s team of bean counters simply didn’t have the imagination to value.

I’m telling you this because, right now, a lot of businesses are carrying out their own version of the Beeching cuts.

The Data-Led Trap

There’s a crucial distinction between being data-led and data-informed. Understanding this could be the difference between implementing a sound content production strategy and repeating Beeching’s catastrophe.

Data-led thinking treats the available data as the complete picture. It looks for a pattern and adopts it as an undeniable truth that points towards a clear course of action. “AI generates content for a fraction of our current costs. Therefore, we should replace the writers.”

Data-informed thinking sets out to understand what might be behind the pattern, extrapolate what’s missing from the picture, and stress-test the conclusions. The data becomes a starting point for inquiry, not an endpoint for decisions. “What value isn’t captured in this data? What would replacing our writers with AI actually mean for the effectiveness of our content when our competitors can do the exact same thing with the exact same tools?”

That last question is the real challenge facing companies considering AI-generated content, but the answer won’t be found in a spreadsheet. If you can use AI to generate your content with minimal human input, so can everyone else. Very soon, everyone will be generating similar content on similar topics to target the same audiences, with recycled information and reheated “insights” drawn from the same online sources.

Why would ChatGPT somehow generate a better blog post for you than for anyone else asking for 1,200 words on the same topic? It wouldn’t. You need to add your own secret sauce.

There is no competitive advantage to be gained by relying on AI-generated content alone. None.

AI-generated content is not a silver bullet. It’s the minimum benchmark your content needs to significantly exceed if your brand and your content is to have any chance of standing out in today’s noisy online marketplace.

Unfortunately, while organizations know they need to have content, far too many senior decision-makers don’t fully understand why, never mind all the things an effective content strategy needs to accomplish.

Content Isn’t A Cost, It’s An Infrastructure

Marketing content is often looked down upon as somehow easier or less worthy than other forms of writing. Yet it arguably has the hardest job of all. Every article, ebook, LinkedIn post, brochure, and landing page has to tick off a veritable to-do list of strategic requirements.

Of course, your content needs to have something to say. It must work on an informational level, backed by solid research and journalism. However, each asset or article also has a strategic role to play: attracting audiences, nurturing prospects, or converting customers, while aligning with the brand’s carefully mapped out messaging at every stage.

Your content must build authority, earn trust, and demonstrate expertise. It must be memorable enough to aid brand awareness and recall, and distinctive enough to differentiate the brand from its competitors. It must be structured for search engines with the right entities, topics, and relationships, without losing the attention of busy humans who can click away at any second. Ideally, it should also include a couple of quote-worthy lines or interesting stats capable of attracting attention when the content is distributed on social media.

ChatGPT or Claude can certainly string a bunch of convincing sentences together. But if you think they can spin all those other plates for you at the same time, and to the same standard as a skilled content creator, you’re going to be disappointed. No matter how detailed and nuanced your prompt, something will always be missing. You’re still asking AI to synthesize something brilliant by recycling what’s already out there.

Which brings me to the most ironic part of this discussion. With the rapid adoption of AI-mediated search, your content now needs to become a source that large language models will confidently cite in responses to relevant queries.

Expecting AI to create content likely to be cited by AI is like watching a dog chasing its tail: futile and frustrating. If AI provided the information and insights contained in your content, it already has better, more authoritative sources. Why would AI cite content that contains little if any fresh information or insight?

If your goal is to increase your brand’s visibility in AI responses, then your content needs to offer what can’t easily be found elsewhere.

The Limitations Of Online Knowledge

Despite appearances, AI cannot think. It cannot understand, in the sense we usually mean it. As it currently stands, it cannot reason. It certainly cannot imagine. Words like these have emerged as common euphemisms for how AI generates responses, but they also set the wrong expectations.

AI also cannot use information that isn’t already available and crawlable online. While we like to think that somehow the internet is a massive store of the entirety of human knowledge, the reality is that it’s not even close.

So much of the world we live in simply cannot be captured as structured, digitized information. While AI can tell you when and where the next local collectables market is on, it can’t tell you which dealer has that hard-to-find comic you’ve been chasing for years. That’s the kind of information you can only find out by digging through lots of comic boxes on the day.

And then there are cultural histories and localized experiences that exist more in verbal traditions than in history books. AI can tell me plenty of stuff about the First World War. But if I ask it about the Iranian famine during WW1, it’s going to struggle because it’s not that well documented outside of Iranian history books. Most of my knowledge of the famine comes almost entirely from stories my great grandma told my mother, who then passed them on to me, like how she had to survive on just one almond per day. But you won’t find her stories in any book.

How can AI draw upon the wealth of personal experience and memories we all have? The greatest source of knowledge is human. It’s us. It’s always us.

But while AI can’t do your thinking for you, it can still help in many other ways.

→ Read More: Can You Use AI To Write For YMYL Sites? (Read The Evidence Before You Do)

You Still Need A Brain Behind The Bot

Let me be clear: I use AI every day. My team uses AI every day. You should, too. The problem isn’t the tool. The problem is treating the tool as a strategy, and an efficiency or cost reduction strategy at that. Of course, it isn’t only marketing teams hoping to reduce costs and boost productivity with generative AI. Another industry has already discovered that AI doesn’t actually replace anything.

A recent survey conducted by the Australian Financial Review (AFR) found that most law firms reported using AI tools. However, far from reducing headcount, 70% of surveyed firms increased their hiring of lawyers to vet, review, and sign off on AI-generated outputs.

This isn’t a failure in their AI strategy, because the strategy was never about reducing headcount. They’re using AI tools as digital assistants (research, drafting, document handling, etc.) to free up more time and headspace for the kinds of strategic and insightful thinking that generates real business value.

Similarly, AI isn’t a like-for-like replacement for your writers, designers, and other content creators. It’s a force multiplier for them, helping your team reduce the drudgery that can so often get in the way of the real work.

  • Summarizing complex information.
  • Transcribing interviews.
  • Creating outlines.
  • Drafting related content like social media posts.
  • Checking your content against the brand style guide to catch inconsistencies.

Some writers might even use AI to generate a very rough first draft of an article to get past that blank page. The key is to treat that copy as a starting point, not the finished article.

All these tasks are massive time-savers for content creators, freeing up more of their mental bandwidth for the high-value work AI simply can’t do as well.

AI can only synthesize content from existing information. It cannot create new knowledge or come up with fresh ideas. It cannot interview subject matter experts within your business to draw out hidden wisdom and insights. It cannot draw upon personal experiences or perspectives to make your content truly yours.

AI is also riddled with algorithmic biases, potentially skewing your content and your messaging without you even realizing. For example, the majority of AI training data is in the English language, creating a huge linguistic and cultural bias. It might require an experienced and knowledgeable eye to spot the subtle hallucinations or distortions.

While AI can certainly accelerate execution, you still need skilled, experienced creatives to do the real thinking and crafting.

You Don’t Know What You Have, Until It’s Gone

Until Beeching closed the line in 1969, the route between Edinburgh and Carlisle was a vital transport artery for the Scottish Borders. On paper, the line was unprofitable, at least according to Beeching’s simplistic methodology. However, the closure had massive knock-on effects, reducing access to jobs, education, and social services, as well as impacting tourism. Meanwhile, forcing people onto buses or into cars placed greater strain on other transport infrastructures.

While Beeching might have solved one narrowly defined problem, he had undermined the broader purpose of British Railways: the mobility of people in all parts of Great Britain. In effect, Beeching had shifted the consequences and cost pressures elsewhere.

The route was partially reopened in 2015 as The Borders Railway, costing an estimated £300 million to reinstate just 30 miles of line with seven stations.

Beeching’s cuts illustrate the folly of evaluating infrastructure (or content strategy) purely on narrow, short-term financial metrics.

Organizations that cut their teams in favor of AI are likely to find it isn’t so easy to reverse course and undo the damage a few years from now. Replacing your writers with AI risks eroding the connective tissue that characterizes your content ecosystem and anchors long-term performance: authority, context, nuance, trust, and brand identity.

Experienced content creators aren’t going to wait around for organizations to realize their true value. If enough of them leave the industry, and with fewer opportunities available for the next generation of creators to gain the necessary skills and experience, the talent pool is likely to shrink massively.

As with the Beeching cuts, rebuilding your content team is likely to cost you far more in the long term than you saved in the short term, particularly when you factor in the months or years of low-performing content in the meantime.

Know What You’re Cutting Before You Wield The Axe

According to your spreadsheet, AI-generated content may well be cheaper to produce. But the effectiveness of your content strategy doesn’t hinge on whether you can publish more for less. This isn’t a case of any old content will do.

So, beware of falling into the Beeching trap. Your content workflows might only seem “loss-making” on paper because the metrics you’re looking at don’t adequately capture all the ways your content delivers strategic value to your business.

Content is not a cost center. It never was. Content is the infrastructure of your brand’s discoverability, which makes it more important than ever in the AI era.

This isn’t a debate about “human vs. AI content.” It’s about equipping skilled people with the tools to help them create work worthy of being found, cited, and trusted.

So, before you start swinging the axe, ask yourself: Are you cutting waste, or are you dismantling the very system that makes your brand visible and credible in the first place?

More Resources:


Featured Image: IM Imagery/Shutterstock

Microsoft CEO, Google Engineer Deflect AI Quality Complaints via @sejournal, @MattGSouthern

Within a week of each other, Microsoft CEO Satya Nadella and Jaana Dogan, a Principal Engineer working on Google’s Gemini API, posted comments about AI criticism that shared a theme. Both redirected attention away from whether AI output is “good” or “bad” and toward how people are reacting to the technology.

Nadella published “Looking Ahead to 2026” on his personal blog, writing that the industry needs to “get beyond the arguments of slop vs sophistication.”

Days later, Dogan posted on X that “people are only anti new tech when they are burned out from trying new tech.”

The timing coincides with Merriam-Webster naming “slop” its Word of the Year. For publishers, these statements can land less like reassurance and more like a request to stop focusing on quality.

Nadella Urges A Different Framing Than “AI Slop”

Nadella’s post argues that the conversation should move past the “slop” label and focus on how AI fits into human life and work. He characterizes AI as “cognitive amplifier tools” and believes that 2026 is the year in which AI must “prove its value in the real world.”

He writes: “We need to get beyond the arguments of slop vs sophistication,” and calls for “a new equilibrium” that accounts for humans having these tools. In the same section, he also calls it “the product design question we need to debate and answer,” which makes the point less about ending debate and more about steering it toward product integration and outcomes.

Dogan’s “Burnout” Framing Came Days After A Claude Code Post

Dogan’s post framed anti-AI sentiment as burnout from trying new technology. The line was blunt: “People are only anti new tech when they are burned out from trying new tech. It’s understandable.”

A few days earlier, Dogan had posted about using Claude Code to build a working prototype from a description of distributed agent orchestrators. She wrote that the tool produced something in about an hour that matched patterns her team had been building for roughly a year, adding: “In 2023, I believed these current capabilities were still five years away.”

Replies to the “burnout” post pushed back on Dogan. Many responses pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable within everyday workflows.

Dogan is a Principal Engineer on Google’s Gemini API and is not speaking as an official representative of Google policy.

The Standards Platforms Enforce On Publishers Still Matter

I’ve written E-E-A-T guides for Search Engine Journal for years. Those pieces reflected Google’s long-running expectation that publishers demonstrate experience, expertise, and trust, especially for “Your Money or Your Life” topics like health, finance, and legal content.

That’s why the current disconnect lands so sharply for publishers. Platforms have quality standards for ranking and visibility, while AI products increasingly present information directly to users with citations that can be difficult to evaluate at a glance.

When Google executives have been asked about declining click-through rates, the public framing has included “quality clicks” rather than addressing the volume loss publishers measure on their side.

What The Traffic Data Shows

Pew Research Center tracked 68,879 real Google searches. When AI Overviews appeared, only 8% of users clicked any link, compared to 15% when AI summaries did not appear. That works out to a 46.7% drop.

Publishers can be told the remaining clicks are higher intent, but volume still matters. It’s what drives ad impressions, subscriptions, and affiliate revenue.

Separately, Similarweb data indicates that the share of news-related searches that resulted in no click-through to news sites rose from 56% to 69%.

The crawl-to-referral imbalance adds another layer. Cloudflare has estimated Google Search at about a 14:1 crawl-to-referral ratio, compared with far higher ratios for OpenAI (around 1,700:1) and Anthropic (73,000:1).

Publishers have long operated on an implicit trade where they allow crawling in exchange for distribution and traffic. Many now argue that AI features weaken that trade because content can be used to answer questions without the same level of referral back to the open web.

Why This Matters

These posts from Nadella and Dogan help show how the AI quality debate may get handled in 2026.

When people are urged to move past “slop vs sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.

We see clear signs of traffic declines, and the crawling-to-referral ratios are also measurable. The economic impact is real.

Looking Ahead

Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related issue.

I’m eager to see whether these companies make any changes to their product design in response to user feedback.


Featured Image: Jack_the_sparow/Shutterstock