Head Of WordPress AI Team Explains SEO For AI Agents via @sejournal, @martinibuster

James LePage, Director of AI Engineering at Automattic, is the founder and co-lead of the WordPress Core AI Team, which is tasked with coordinating AI-related projects within WordPress, including how AI agents will interact with the WordPress ecosystem. He shared his insights into what’s coming to the web in the context of AI agents, some of the implications for SEO, and what publishers should be thinking about.

AI Agents And Infrastructure

The first observation he made was that AI agents will use the same web infrastructure as search engines: the data that agents retrieve comes from the same classic search indexes.

He writes, somewhat provocatively:

“Agents will use the same infrastructure the web already has.

  • Search to discover relevant entities.
  • “Domain authority” and trust signals to evaluate sources.
  • Links to traverse between entities.
  • Content to understand what each entity offers.

I find it interesting how much money is flowing into AIO and GEO startups when the underlying way agents retrieve information is by using existing search indexes. ChatGPT uses Bing. Anthropic uses Brave. Google uses Google. The mechanics of the web don’t change. What changes is who’s doing the traversing.”

AI SEO = Longtail Optimization

LePage also said that schema structured data, semantic density, and interlinking between pages are essential for optimizing for AI agents. Notably, he said that the AI optimization AIO and GEO companies are selling is essentially basic long-tail query optimization.

He explained:

“AI intermediaries doing synthesis need structured, accessible content. Clear schemas, semantic density, good interlinking. This is the challenge most publishers are grappling with now. In fact there’s a bit of FUD in this industry. Billions of dollars flowing into AIO and GEO when much of what AI optimization really is is simply long-tail keyword search optimization.”

What Optimized Content Looks Like For AI Agents

LePage, who is involved in AI within the WordPress ecosystem, said that content should be organized in an “intentional” manner for agent consumption, by which he means structured markdown, semantic markup, and content that’s easy to understand.

A little further he explains what he believes content should look like for AI agent consumption:

“Presentations of content that prioritize what matters most. Rankings that signal which information is authoritative versus supplementary. Representations that progressively disclose detail, giving agents the summary first with clear paths to depth. All of this still static, not conversational, not dynamic, but shaped with agent traversal in mind.

Think of it as the difference between a pile of documents and a well-organized briefing. Both contain the same information. One is far more useful to someone trying to quickly understand what you offer.”

A little later in the article he offers a seemingly contradictory prediction about the role of content in an agentic AI future, reversing today’s formula of the well-organized briefing over the pile of documents: agentic AI will not need a website, just the content itself, a pile of documents.

Nevertheless, he recommends giving content structure so that information is well organized at the page level, with a clear hierarchy, and at the site level, where interlinking makes the relationships between documents clearer. He emphasizes that the content must communicate what it’s for.

He then adds that in the future, websites will have AI agents that communicate with external AI agents. This gets into the paradigm he mentioned of content being split off from the website so that data can be displayed in ways that make sense for a user, completely separate from today’s concept of visiting a website.

He writes:

“Think of this as a progression. What exists now is essentially Perplexity-style web search with more steps: gather content, generate synthesis, present to user. The user still makes decisions and takes actions. Near-term, users delegate specific tasks with explicit specifications, and agents can take actions like purchases or bookings within bounded authority. Further out, agents operate more autonomously based on standing guidelines, becoming something closer to economic actors in their own right.

The progression is toward more autonomy, but that doesn’t mean humans disappear from the loop. It means the loop gets wider. Instead of approving every action, users set guidelines and review outcomes.

…Before full site delegates exist, there’s a middle ground that matters right now.

The content an agent has access to can be presented in a way that makes sense for how agents work today. Currently, that means structured markdown, clean semantic markup, content that’s easy to parse and understand. But even within static content, there’s room to be intentional about how information is organized for agent consumption.”

His article, titled Agents & The New Internet (3/5), offers useful ideas on how to prepare for the agentic AI future.

Featured Image by Shutterstock/Blessed Stock

Google’s Mueller: Free Subdomain Hosting Makes SEO Harder via @sejournal, @MattGSouthern

Google’s John Mueller warns that free subdomain hosting services create unnecessary SEO challenges, even for sites doing everything else right.

The advice came in response to a Reddit post from a publisher whose site is indexed by Google but doesn’t appear in normal search results, despite using Digitalplat Domains, a free subdomain service on the Public Suffix List.

What’s Happening

Mueller told the site owner that they likely aren’t making technical mistakes. The problem is the environment they chose to publish in.

He wrote:

“A free subdomain hosting service attracts a lot of spam & low-effort content. It’s a lot of work to maintain a high quality bar for a website, which is hard to qualify if nobody’s getting paid to do that.”

The issue comes down to association. Sites on free hosting platforms share infrastructure with whatever else gets published there. Search engines struggle to differentiate quality content from the noise surrounding it.

Mueller added:

“For you, this means you’re basically opening up shop on a site that’s filled with – potentially – problematic ‘flatmates’. This makes it harder for search engines & co to understand the overall value of the site – is it just like the others, or does it stand out in a positive way?”

He also cautioned against cheap TLDs for similar reasons. The same dynamics apply when entire domain extensions become overrun with low-quality content.

Beyond domain choice, Mueller pointed to content competition as a factor. The site in question publishes on a topic already covered extensively by established publishers with years of work behind them.

“You’re publishing content on a topic that’s already been extremely well covered. There are sooo many sites out there which offer similar things. Why should search engines show yours?”

Why This Matters

Mueller’s advice here fits a pattern I’ve covered repeatedly over the years. Previously, Google’s Gary Illyes warned against cheap TLDs for the same reason. Illyes put it bluntly at the time, telling publishers that when a TLD is overrun by spam, search engines might not want to pick up sitemaps from those domains.

The free subdomain situation creates a unique problem. While the Public Suffix List theoretically tells Google to treat these subdomains as separate sites, the neighborhood signal remains strong. If the vast majority of subdomains on that host are spam, Google’s systems may struggle to identify your site as the one diamond in the rough.

This matters for anyone considering free hosting as a way to test an idea before investing in a real domain. The test environment itself becomes the test. Search engines evaluate your site in the context of everything else published under that same domain.

The competitive angle also deserves attention. New sites on well-covered topics face a high bar regardless of domain choice. Mueller’s point about established publishers having years of work behind them is a reality check about where the effort needs to go.

Looking Ahead

Mueller suggested that search visibility shouldn’t be the first priority for new publishers.

“If you love making pages with content like this, and if you’re sure that it hits what other people are looking for, then I’d let others know about your site, and build up a community around it directly. Being visible in popular search results is not the first step to becoming a useful & popular web presence, and of course not all sites need to be popular.”

For publishers starting out, focus on building direct traffic through promotion and community engagement. Search visibility tends to follow after a site establishes itself through other channels.


Featured Image: Jozef Micic/Shutterstock

Amazon Rules Product Discovery, for Now

The Amazon marketplace is the world’s most popular product search engine. Yet its dominance faces emerging challenges from AI and social commerce.

For more than 20 years, Amazon has made it easy for shoppers to discover products, compare options, read reviews, and buy.

A 2024 Jungle Scout survey (PDF) of 1,000 U.S. online shoppers found that 56% initiated product searches on the Amazon marketplace, compared to 42% on traditional search engines (such as Google), and 29% on Walmart.com.

Why Amazon?

Amazon’s Prime membership was a stroke of ecommerce genius. The service changes the way some consumers think about prices and shipping.

Products on Amazon’s marketplace are often more expensive than competitors’, and Prime costs $139 per year. But to many shoppers, there’s little reason to look elsewhere when shipping is free, fast, and reliable.

Selection

Moreover, Amazon’s product selection is massive and all-inclusive. Amazon itself sells more than 12 million products. Third-party sellers add upwards of 600 million, according to published reports. A shopper looking for an item will likely find it on Amazon.

Trust

Consumers trust Amazon. They assume products will arrive on time, with returns and refunds issued without hassle.

This trust is worth a lot. A 2025 Salsify report (PDF) found that 87% of shoppers have paid more for a product because they trust the brand. Those same consumers would likely search for products on a trusted marketplace.

Reviews

The volume of reviews on Amazon attracts shoppers.

Reviews serve as decision insurance. They reduce uncertainty and shorten the research cycle, especially for products where use cases matter. Instead of reading a handful of articles, comparing retailer sites, and searching Reddit threads, shoppers can pull social proof from thousands of real buyers without leaving Amazon.

That convenience changes behavior. The marketplace becomes a place for decision-making, not just to buy. So why not start a product search where other shoppers can guide you?

Mobile app

Amazon’s mobile app provides an advantage.

Searching for products in a mobile web browser is frustrating, even in 2026. Pages load slowly. Pop-ups appear. Cookie prompts get in the way. Shoppers must pinch and zoom, navigate cluttered menus, and jump between tabs.

Amazon’s app eliminates much of that friction for mobile consumers. The search box is always one tap away, filters are quick to apply, product pages are consistent, and the comparison process happens naturally through scrolling rather than clicking across multiple sites.

It’s a good experience, and shoppers use it.

Search iteration

“Search iteration” is the refinement of a query.

Consumers in product discovery mode typically have specific needs. Amazon search can route shoppers toward products they are likely to buy.

Brand and mindshare

Amazon is ubiquitous beyond products. Prime Video, Audible, Kindle, Fire TV, Echo devices, and Amazon’s creator and influencer content indirectly contribute to search dominance and habit.

Boston Consulting Group, for example, asserts that such “mindshare” is highly correlated with purchase consideration.

Put another way, the folks who watch Prime Video are likely to search for products on Amazon.

AI and Social

Taken together, these factors serve as a playbook for the leading product search engine and offer both lessons and dilemmas for merchants. A shop can, for example, decide to include products on Amazon solely for discovery benefits.

Another consideration is whether Amazon maintains its lead in product search.

Some 56% of respondents in the Jungle Scout 2024 survey began product searches on Amazon. But that percentage is down from the 61% reported by Jungle Scout in 2022 (PDF).

Something is chipping away at product search and discovery. In 2026, that “something” is likely AI and social.

AI commerce is likely to shift where the first query occurs, thus eroding Amazon’s product-search dominance.

As shoppers ask for “the best” product option, generative AI platforms will increasingly assemble shortlists from multiple sources, reducing the need to start with Amazon. AI will pull discovery and comparison out of the marketplace interface, although Amazon can still win the transaction.

Social commerce on TikTok, Instagram, and YouTube will increasingly resemble search engines for lifestyle-driven categories. Shoppers, especially younger ones, often arrive at Amazon with a product already selected.

In those cases, Amazon becomes the fulfillment destination rather than the discovery engine, which changes the economics of product search and advertising on the platform.

Google On Phantom Noindex Errors In Search Console via @sejournal, @martinibuster

Google’s John Mueller recently answered a question about phantom noindex errors reported in Google Search Console. Mueller asserted that these reports may be real.

Noindex In Google Search Console

A noindex robots directive is one of the few commands that Google must obey, and one of the few ways a site owner can exercise control over Googlebot, Google’s crawler.

And yet it’s not uncommon for Search Console to report being unable to index a page because of a noindex directive on a page that seemingly does not have one, at least none visible in the HTML code.

When Google Search Console (GSC) reports “Submitted URL marked ‘noindex’,” it is reporting a seemingly contradictory situation:

  • The site asked Google to index the page via an entry in a Sitemap.
  • The page sent Google a signal not to index it (via a noindex directive).

It’s a confusing message from Search Console: the page is preventing Google from indexing it, yet that’s not something the publisher or SEO can observe at the code level.

The person asking the question posted on Bluesky:

“For the past 4 months, the website has been experiencing a noindex error (in ‘robots’ meta tag) that refuses to disappear from Search Console. There is no noindex anywhere on the website nor robots.txt. We’ve already looked into this… What could be causing this error?”

Noindex Shows Only For Google

Google’s John Mueller answered the question, sharing that in the cases he’s examined where this kind of thing was happening, there was always a noindex being shown to Google.

Mueller responded:

“The cases I’ve seen in the past were where there was actually a noindex, just sometimes only shown to Google (which can still be very hard to debug). That said, feel free to DM me some example URLs.”

While Mueller didn’t elaborate on the causes, there are ways to troubleshoot this issue and find out what’s going on.

How To Troubleshoot Phantom Noindex Errors

It’s possible that there is code somewhere causing a noindex to be shown only to Google. For example, a page may at one time have had a noindex on it, and a server-side cache (like a caching plugin) or a CDN (like Cloudflare) cached the HTTP headers from that time. That would cause the old noindex header to be shown to Googlebot (because it frequently visits the site) while a fresh version is served to the site owner.

Checking the HTTP header is easy; there are many HTTP header checkers, like this one at KeyCDN or this one at SecurityHeaders.com.
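This check can also be automated with a short script that requests only a URL’s response headers and scans them for an X-Robots-Tag noindex. Below is a minimal sketch using Python’s standard library; the function names and the default user agent string are illustrative, not from any particular tool.

```python
# Minimal sketch: fetch only a URL's HTTP headers and flag any
# X-Robots-Tag noindex. Helper names here are illustrative.
from urllib.request import Request, urlopen

def has_noindex_header(headers):
    """True if any X-Robots-Tag header value contains 'noindex'.

    The header can appear multiple times and may carry a user-agent
    prefix (e.g. "googlebot: noindex"), so match the token anywhere.
    """
    values = [v for k, v in headers if k.lower() == "x-robots-tag"]
    return any("noindex" in v.lower() for v in values)

def check_url(url, user_agent="Mozilla/5.0"):
    """Return (status code, noindex flag) for a HEAD request."""
    req = Request(url, method="HEAD", headers={"User-Agent": user_agent})
    with urlopen(req) as resp:
        return resp.status, has_noindex_header(resp.getheaders())
```

Running `check_url` twice with different user agent strings can help reveal whether a cache or CDN is serving different headers to different visitors.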

A 520 server header response code is one that’s sent by Cloudflare when it’s blocking a user agent.

Screenshot: 520 Cloudflare Response Code


Below is a screenshot of a 200 server response code generated by Cloudflare:

Screenshot: 200 Server Response Code

I checked the same URL using two different header checkers; one returned a 520 (blocked) server response code, and the other returned a 200 (OK) response code. That shows how differently Cloudflare can respond to something like a header checker. Ideally, check with several header checkers to see whether there’s a consistent 520 response from Cloudflare.

In a situation where a web page is showing something exclusively to Google that is otherwise not visible to someone looking at the code, you need to get Google to look at the page for you, using an actual Google crawler from a Google IP address. The way to do this is to drop the URL into Google’s Rich Results Test. Google will dispatch a crawler from a Google IP address, and if something on the server (or a CDN) is showing a noindex, this will catch it. In addition to the structured data, the Rich Results Test will also provide the HTTP response and a snapshot of the web page showing exactly what the server serves to Google.

When you run a URL through the Google Rich Results Test, the request:

  • Originates from Google’s Data Centers: The bot uses an actual Google IP address.
  • Passes Reverse DNS Checks: If the server, security plugin, or CDN checks the IP, it will resolve back to googlebot.com or google.com.
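That double lookup — reverse DNS on the IP, then a forward lookup to confirm — is the same verification a server or security plugin can perform, and it can be sketched in a few lines. The googlebot.com and google.com suffixes are the ones Google documents for its crawlers; the function names are illustrative.

```python
# Sketch of the reverse-DNS verification described above: resolve the
# client IP to a hostname, check the Google crawler domain suffixes,
# then forward-resolve the hostname to confirm it maps back to the IP.
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(hostname):
    """True if the PTR hostname falls under Google's crawler domains."""
    return hostname.rstrip(".").lower().endswith(GOOGLE_SUFFIXES)

def verify_googlebot_ip(ip):
    """Reverse-resolve, check the suffix, then forward-confirm the IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # PTR lookup
        if not hostname_is_google(hostname):
            return False
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.error:
        return False
    return ip in addresses
```

The forward confirmation matters because anyone can publish a PTR record claiming to be Google; only the real owner of the IP range can make the forward lookup agree.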

If the page is blocked by noindex, the tool will be unable to provide any structured data results. It should show a status like “Page not eligible” or “Crawl failed.” If you see that, click the “View Details” link or expand the error section. It should show something like “Robots meta tag: noindex” or “‘noindex’ detected in ‘robots’ meta tag.”

This approach does not send the Googlebot user agent; it uses the Google-InspectionTool/1.0 user agent string. That means if the noindex is being served based on IP address, this method will catch it.

Another angle to check is the situation where a rogue noindex tag is written specifically to target the Googlebot user agent. You can spoof (mimic) the Googlebot user agent string with Google’s own User Agent Switcher extension for Chrome, or configure an app like Screaming Frog to identify itself with the Googlebot user agent, and that should catch it.

Screenshot: Chrome User Agent Switcher
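The same spoofing idea works from a script: download the page while identifying as Googlebot and scan the returned HTML for a robots meta noindex. A rough sketch — the user agent string is Googlebot’s published desktop UA, and the regex is a loose illustration rather than a full HTML parser:

```python
# Sketch: fetch a page with a spoofed Googlebot user agent and look
# for a noindex in any robots/googlebot meta tag. Illustrative only.
import re
from urllib.request import Request, urlopen

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Loosely match <meta name="robots" ...> or <meta name="googlebot" ...>
META_RE = re.compile(
    r'<meta[^>]+name=["\'](?:robots|googlebot)["\'][^>]*>', re.I)

def finds_noindex(html):
    """True if any robots/googlebot meta tag contains 'noindex'."""
    return any("noindex" in tag.lower() for tag in META_RE.findall(html))

def fetch_as_googlebot(url):
    """Download a page while presenting the Googlebot user agent."""
    req = Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Keep in mind that user agent spoofing only uncovers cloaking keyed to the user agent string; a noindex served based on Google’s IP ranges will still only surface through a tool like the Rich Results Test.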

Phantom Noindex Errors In Search Console

These kinds of errors can feel like a pain to diagnose, but before you throw your hands up in the air, take some time to see if any of the steps outlined here help identify the hidden cause of the issue.

Featured Image by Shutterstock/AYO Production

Three technologies that will shape biotech in 2026

Earlier this week, MIT Technology Review published its annual list of Ten Breakthrough Technologies. As always, it features technologies that made the news last year, and which—for better or worse—stand to make waves in the coming years. They’re the technologies you should really be paying attention to.

This year’s list includes tech that’s set to transform the energy industry, artificial intelligence, space travel—and of course biotech and health. Our breakthrough biotechnologies for 2026 involve editing a baby’s genes and, separately, resurrecting genes from ancient species. We also included a controversial technology that offers parents the chance to screen their embryos for characteristics like height and intelligence. Here’s the story behind our biotech choices.

A base-edited baby!

In August 2024, KJ Muldoon was born with a rare genetic disorder that allowed toxic ammonia to build up in his blood. The disease can be fatal, and KJ was at risk of developing neurological disorders. At the time, his best bet for survival involved waiting for a liver transplant.

Then he was offered an experimental gene therapy—a personalized “base editing” treatment designed to correct the specific genetic “misspellings” responsible for his disease. It seems to have worked! Three doses later, KJ is doing well. He took his first steps in December, shortly before spending his first Christmas at home.

KJ’s story is hugely encouraging. The team behind his treatment is planning a clinical trial for infants with similar disorders caused by different genetic mutations. The team members hope to win regulatory approval on the back of a small trial—a move that could make the expensive treatment (KJ’s cost around $1 million) more accessible, potentially within a few years.

Others are getting in on the action, too. Fyodor Urnov, a gene-editing scientist at the University of California, Berkeley, assisted the team that developed KJ’s treatment. He recently cofounded Aurora Therapeutics, a startup that hopes to develop gene-editing drugs for another disorder called phenylketonuria (PKU). The goal is to obtain regulatory approval for a single drug that can then be adjusted or personalized for individuals without having to go through more clinical trials.

US regulators seem to be amenable to the idea and have described a potential approval pathway for such “bespoke, personalized therapies.” Watch this space.

Gene resurrection

It was a big year for Colossal Biosciences, the biotech company hoping to “de-extinct” animals like the woolly mammoth and the dodo. In March, the company created what it called “woolly mice”—rodents with furry coats and curly whiskers akin to those of woolly mammoths.

The company made an even more dramatic claim the following month, when it announced it had created three dire wolves. These striking snow-white animals were created by making 20 genetic changes to the DNA of gray wolves based on genetic research on ancient dire wolf bones, the company said at the time.

Whether these animals can really be called dire wolves is debatable, to say the least. But the technology behind their creation is undeniably fascinating. We’re talking about the extraction and analysis of ancient DNA, which can then be introduced into cells from other, modern-day species.

Analysis of ancient DNA can reveal all sorts of fascinating insights into human ancestors and other animals. And cloning, another genetic tool used here, has applications not only in attempts to re-create dead pets but also in wildlife conservation efforts. Read more here.

Embryo scoring

IVF involves creating embryos in a lab and, typically, “scoring” them on their likelihood of successful growth before they are transferred to a person’s uterus. So far, so uncontroversial.

Recently, embryo scoring has evolved. Labs can pinch off a couple of cells from an embryo, look at its DNA, and screen for some genetic diseases. That list of diseases is increasing. And now some companies are taking things even further, offering prospective parents the opportunity to select embryos for features like height, eye color, and even IQ.

This is controversial for lots of reasons. For a start, there are many, many factors that contribute to complex traits like IQ (a score that doesn’t capture all aspects of intelligence at any rate). We don’t have a perfect understanding of those factors, or how selecting for one trait might influence another.

Some critics warn of eugenics. And others note that whichever embryo you end up choosing, you can’t control exactly how your baby will turn out (and why should you?!). Still, that hasn’t stopped Nucleus, one of the companies offering these services, from inviting potential customers to have their “best baby.” Read more here.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: cut through AI coding hype, and biotech trends to watch

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI coding is now everywhere. But not everyone is convinced.  

Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems.

The problem is that, right now, it’s not easy to know which is true.

As tech giants pour billions into large language models (LLMs), coding has been touted as the technology’s killer app. Executives enamored with the potential are pushing engineers to lean into an AI-powered future. But after speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem. Read the full story

—Edd Gent

Generative coding is one of our 10 Breakthrough Technologies this year. Learn more about why that is, and check out the rest of the list

This story was also part of our Hype Correction package. You can read the rest of the stories here.

The biotech trends to watch for in 2026

Earlier this week, MIT Technology Review published our annual list of Ten Breakthrough Technologies. 

This year’s list includes tech that’s set to transform the energy industry, artificial intelligence, space travel—and of course biotech and health. Our breakthrough biotechnologies for 2026 involve editing a baby’s genes and, separately, resurrecting genes from ancient species. We also included a controversial technology that offers parents the chance to screen their embryos for characteristics like height and intelligence. Here’s the story behind our biotech choices.

—Jessica Hamzelou

This story is from The Checkup, our weekly newsletter all about the latest in health and biotech. Sign up to receive it in your inbox every Thursday.

MIT Technology Review Narrated: What’s next for AI in 2026

Our AI writers have made some big bets for the coming year—read our story about the five hot trends to watch, or listen to it on Spotify, Apple, or wherever you get your podcasts.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Minnesota shows how governing and content creation have merged
In another era, we’d have just called this propaganda. (NPR)
MAGA influencers are just straight up lying about what is happening there. (Vox)
Activists are trying to identify individual ICE officers while protecting their own identities. (WP $)
+ A backlash against ICE is growing in Silicon Valley. (Wired $)
 
2 There’s probably more child abuse material online now than ever before
Of all Big Tech’s failures, this is surely the most appalling. (The Atlantic $)
US investigators are using AI to detect child abuse images made by AI. (MIT Technology Review)
Grok is still being used to undress images of real people. (Quartz)
 
3 ChatGPT wrote a suicide lullaby for a man who later killed himself
This shows it’s “still an unsafe product,” a lawyer representing a family in a tragically similar case said. (Ars Technica)
An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it. (MIT Technology Review)
 
4 Videos emerging from Iran show how bloody the crackdown has become
Iranians are finding ways around the internet blackout to show the rest of the world how many of them have been killed. (NBC)
Here’s how they’re getting around the blackout. (NPR)
 
5 China dominates the global humanoid robot market 🤖
A new report by analysts found its companies account for over 80% of all deployments. (South China Morning Post)
Just how useful are the latest humanoids, though? (Nature)
+ Why humanoid robots need their own safety rules. (MIT Technology Review)
 
6 How is Australia’s social media ban for kids going? 
It’s mixed—some teens welcome it, but others are finding workarounds. (CNBC)
 
7 Scientists are finding more objective ways to spot mental illness 
Biomarkers like voice cadence and heart rate are proving pretty reliable for diagnosing conditions like depression. (New Scientist $)
 
8 The Pebble smartwatch may be making a comeback
This could be the thing that tempts me back into buying wearables… (Gizmodo)
 
9 A new video game traps you in an online scam center
Can’t see the appeal myself, but… each to their own I guess? (NYT $)
 
10 Smoke detectors are poised to get a high-tech upgrade 
And one of the technologies boosting their capabilities is, of course, AI. (BBC)

Quote of the day

“I am very annoyed. I’m very disappointed. I’m seriously frustrated.” 

—Pfizer CEO Albert Bourla tells attendees at a healthcare conference this week his feelings about the anti-vaccine agenda Health Secretary Robert F. Kennedy Jr. has been implementing, Bloomberg reports.

One more thing

ARIEL DAVIS

How close are we to genuine “mind reading?”

Technically speaking, neuroscientists have been able to read your mind for decades. It’s not easy, mind you. First, you must lie motionless within an fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks.

If you do elect to endure claustrophobic hours in the scanner, the software will learn to generate a bespoke reconstruction of what you were seeing or listening to, just by analyzing how blood moves through your brain.

More recently, researchers have deployed generative AI tools, like Stable Diffusion and GPT, to create far more realistic, if not entirely accurate, reconstructions of films and podcasts based on neural activity. So how close are we to genuine “mind reading?” Read the full story.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Still keen to do a bit of reflecting on the year behind and the one ahead? This free guide might help!
+ Turns out British comedian Rik Mayall had some pretty solid life advice.
+ I want to stay in this house in São Paulo.
+ If you want to stop doomscrolling, it’s worth looking at your sleep habits. ($)

DIY Approach Fuels Craft Cocktail Brand

Chris Harrison says it all started with a single pot on a stove. He and two high school buddies launched Liber & Co., a manufacturer of premium cocktail syrups, with that tiny test batch in 2011 in Austin, Texas.

Fast forward to 2026, and batches are now in 1,500-gallon tanks and sold worldwide to restaurants, bars, and consumers. But the culture remains hands-on, do-it-yourself, and learn-by-doing.

Chris first appeared on the podcast in 2022. In our recent conversation, he shared the company’s origins, sourcing tactics, growth plans, and more. Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Who are you, and what do you do?

Chris Harrison: I’m a co-founder of Liber & Co. We make premium non-alcoholic cocktail syrups for bars, restaurants, coffee shops, and home consumers. We’re based in Georgetown, Texas, near Austin, and handle almost everything in-house: manufacturing, warehousing, marketing, ecommerce, wholesale, and even international sales.

Our founding team grew up together in the same small Texas town. We’re the same age, went to the same high school, and came from similar blue-collar backgrounds. We didn’t have a big professional network or capital to outsource everything, so if something needed to be done, we learned to do it ourselves.

We’re also food people. You can’t outsource being a foodie or understanding flavor. Even the best chefs are hands-on in the kitchen, tasting, adjusting, and refining. That mindset shaped Liber & Co. from the beginning. We wanted to be close to the product to understand the ingredients, sourcing, and flavor development firsthand. That do-it-yourself culture became part of our identity.

Bandholz: How did you learn production, moving from a kitchen to a manufacturing facility?

Harrison: It’s a long, incremental journey. We relied on research and trial and error. We started with a small stock pot on a stove, then moved to a 25-gallon pan, then a 200-gallon tank, and now we operate multiple 1,500-gallon tanks.

That gradual progression was critical. You can’t attempt too much without putting the business at risk. If we had jumped straight from a kitchen setup to our current scale, we would have made far more expensive mistakes. Iterating step by step gave us time to understand what worked and what didn’t. There aren’t many shortcuts when you’re building something physical.

Our product category also made things harder. Unlike breweries, which often follow well-established scaling paths, there wasn’t a clear blueprint for cocktail syrups. That meant a lot of independent study, testing equipment, ordering samples, and experimenting with processes. We made mistakes along the way, which were part of the learning curve.

Manufacturing your own product limits capacity. You can’t sell more than you can physically make. There’s no co-manufacturer to absorb demand — you are the bottleneck. That was especially true in the early days.

Early on, we did whatever it took to fulfill orders. I spent 18 hours straight in the kitchen more than once to fill large orders for H-E-B, the grocery chain. It was manual work: long days, minimal breaks, and just pushing through. Thirteen years later, we’re grateful we no longer have to operate that way.

Bandholz: How do you find ingredient suppliers?

Harrison: Most of our sourcing has come from research. That includes a lot of Googling, using ChatGPT and Gemini, and contacting suppliers directly. We typically send a detailed request for proposal outlining who we are, what we need, and our product specifications. Then we ask if they can meet those requirements, provide documentation, and send samples. From there, we test and evaluate.

We cast a wide net geographically. With ginger, for example, we looked at suppliers across Africa, China, Vietnam, and Hawaii before ultimately choosing a Peruvian source. Some leads come from word of mouth. Someone might say, “I saw great ginger in Peru.” I’ll track down the producer through Google or LinkedIn. That actually happened.

It takes persistence. My background is in biology, so I enjoy getting into the weeds, so to speak. We also try to maintain backup suppliers. Fresh produce is unpredictable; pineapple crops suffered globally this year, driving up prices. A frozen backup supply helped smooth costs, but sourcing is never easy or guaranteed.

Bandholz: Is frozen produce better than fresh?

Harrison: In many cases, yes, frozen can be better. Farmers can wait until fruit reaches peak ripeness before harvesting. For something like raspberries, they’ll test sugar content the day of harvest using a refractometer. They literally crush the fruit and measure Brix, the dissolved-sugar level. The U.S. Food and Drug Administration even publishes approved Brix ranges for various fruits, such as peaches, pomegranates, and raspberries.

Farmers aim to hit those targets because that’s where flavor, aroma, and sweetness are best. But it comes from ripening on the vine. Once harvested, the fruit must be used immediately or preserved. Freezing is one of the best ways to lock in that peak quality.

Frozen storage requires capital. Cold storage and refrigerated transportation are expensive, but the tradeoff is consistency and quality. The frozen supply chain has expanded significantly. We’re seeing more investment in large-scale frozen facilities across the country. Even in central Texas, companies are building new frozen warehouses. We use one in North Austin.

If you’re serious about sourcing high-quality food ingredients, the frozen cold chain is often the best option.

Plus, we typically purchase small portions. Large companies such as Smucker’s buy in massive bulk. We like buying from cooperatives of many smaller, independent farms. Certain regions grow crops naturally well. For raspberries, that’s the U.S. Pacific Northwest, parts of Washington and Oregon.

Those regions have family-run farms, often third-generation operations, managing anywhere from 20 to 200 acres. Around them are many similar farms, all growing the same crop in the same climate. That creates a strong network effect: consistent weather, shared knowledge, and reliable quality across the region.

Because these farms remain independent, you avoid some of the downsides of large, consolidated operations. There’s less pressure to cut corners, harvest early, or sacrifice quality to maximize margins. In our experience, the cooperative model prioritizes long-term quality and sustainability.

We might buy one or two truckloads of fruit per year — roughly 40,000 to 80,000 pounds. A cooperative, by contrast, may handle 400 or 500 truckloads in a single harvest. Being a small buyer reduces risk. If we relied on a single farm for everything, we’d be far more vulnerable to supply disruptions.

Bandholz: How do you plan to evolve the brand?

Harrison: We don’t feel limited. We’ve explored packaging formats beyond bottles, which we currently use for syrups. Cans are a natural extension for cocktails, mocktails, or even cannabis beverages. From a formulation, sourcing, and food safety perspective, we could make those products. Packaging is often the most expensive component of a finished product. It can feel like a constraint, but it’s more about investment and logistics than capability.

At our scale, outsourcing packaging formats is possible. Specialized manufacturers can handle canning at scale. The primary considerations are unit economics and lack of control. That’s a philosophical question as much as a business one.

Overall, we see opportunities to grow both vertically and horizontally. We can deepen what we already do with syrups or expand into new formats, product types, and channels. Brand evolution is more about strategy, resources, and willingness to experiment while maintaining quality and authenticity.

Bandholz: Where can people buy your syrups and get in touch?

Harrison: Our site is LiberAndCompany.com. I’m on LinkedIn.

ChatGPT To Begin Testing Ads In The United States via @sejournal, @brookeosmundson

Just today, OpenAI confirmed it will begin testing advertising in the United States for ChatGPT Free and ChatGPT Go users in the coming weeks, marking the first time ads will appear inside the ChatGPT experience.

The test coincides with the U.S. launch of ChatGPT Go, a low-cost subscription tier priced at $8 per month that has been available internationally since August.

The details reveal a cautious approach, with clear limits on where ads can appear, who will see them, and how they will be separated from ChatGPT’s responses.

Here’s what OpenAI shared, how the tests will work, and why this shift matters for users and advertisers alike.

What OpenAI Is Testing

ChatGPT ads are not being introduced as part of a broader redesign or monetization overhaul. Instead, OpenAI is framing this as a limited test, with narrow placement rules and clear separation from ChatGPT’s core function.

Ads will appear at the bottom of a response, only when there is a relevant sponsored product or service tied to the active conversation. They will be clearly labeled, visually distinct from organic answers, and dismissible.

Users will also be able to see why a particular ad is being shown and turn off ad personalization entirely if they choose.

Just as important is where ads will not appear.

OpenAI stated that ads will not be shown to users under 18 and will not be eligible to run near sensitive or regulated topics, including health, mental health, and politics. Conversations will not be shared with advertisers, and user data will not be sold.

Timing Ad Testing with the Go Tier Launch

The timing of the announcement doesn’t seem accidental.

Alongside the ad testing plans, OpenAI confirmed that ChatGPT Go is now available in the United States.

Priced at $8 per month, Go sits between the free tier and higher-cost subscriptions, offering expanded access to messaging, image generation, file uploads, and memory.

Ads are positioned as a way to support both the free tier and Go users, allowing more people to use ChatGPT with fewer restrictions without forcing an upgrade.

At the same time, OpenAI made it clear that Pro, Business, and Enterprise subscriptions will remain ad-free, reinforcing that paid tiers are still the preferred path for users who want an uninterrupted experience.

Explaining the Guardrails of Early Ad Testing

OpenAI spent as much time explaining what ads will not do as what they will.

The company was explicit that advertising will not influence ChatGPT’s responses. Answers are optimized for usefulness, not commercial outcomes. There is no intent to optimize for time spent, engagement loops, or other metrics commonly associated with ad-driven platforms.

This is a notable departure from how advertising has historically been introduced elsewhere on the internet. Rather than retrofitting ads into an existing product and adjusting incentives later, OpenAI is attempting to define the rules up front.

Whether those rules hold over time is an open question. But the clarity of the initial framework suggests OpenAI understands the risk of getting this wrong.

What Early Ad Formats Tell Us

OpenAI shared two examples of the ad formats it plans to test inside ChatGPT.

In the first example, a ChatGPT response provides recipe ideas for a Mexican dinner party. Below the response, a sponsored product recommendation appears for a grocery item. The ad is clearly labeled and visually separated from the organic answer.

Image credit: openai.com

In the second example, ChatGPT responds to a conversation about traveling to Santa Fe, New Mexico. A sponsored lodging listing appears below the response, labeled as sponsored. The example also shows a follow-up chat screen, indicating that users can continue interacting with ChatGPT after seeing the ad.

Image credit: openai.com

In both examples, the ads appear at the bottom of ChatGPT’s responses and are presented as separate from the main answer. OpenAI stated that these formats are part of its initial ad testing and may change as testing progresses.

Why This Matters for Advertisers

This is not something advertisers can plan for just yet.

There are no announced buying models, no targeting details, no measurement framework, and no indication of when access might expand beyond testing. OpenAI has been clear that this is not an open marketplace at the moment.

Still, the implications are hard to ignore. Ads placed alongside high-intent, problem-solving conversations could eventually represent a different kind of discovery environment. One where usefulness matters more than volume, and where poor creative or loose targeting would feel immediately out of place.

If this becomes a real channel, it is unlikely to reward the same tactics that work in search or social today.

How Marketers Are Reacting So Far

Early industry reaction has been measured, not alarmist.

Most commentary acknowledges that advertising inside ChatGPT was inevitable at this scale.

Lily Ray said she was curious to “see how this change impacts user experience.”

Most people in the comments of her post were not shocked by the news.

There is also skepticism, particularly around whether relevance can be maintained over time without pressure to expand inventory. That skepticism is warranted. History suggests that once ads work, the temptation to scale them follows.

For now, though, this feels less like an ad platform launch and more like OpenAI testing whether ads can exist inside a conversational interface without changing how people trust the product.

The Bigger Signal for AI Platforms

For users, OpenAI is expanding access while trying to preserve the trust that has made ChatGPT widely used. Introducing ads without blurring the line between answers and monetization sets a high bar, especially for a product people rely on for personal and professional tasks.

Outside of ChatGPT itself, this update shows how AI-first products may think about revenue differently than search or social networks. Ads are positioned as a way to support access, not as the product, with paid tiers remaining central.

OpenAI says it will adjust how ads appear based on user feedback once testing begins in the U.S.

For now, this is a limited test rather than a full advertising launch. Whether those boundaries hold will matter, not just for ChatGPT, but for how monetization inside conversational interfaces is expected to work.

AI Search in 2026: The 5 Article GEO & SEO Playbook For Modern Visibility via @sejournal, @contentful

In the SEO world, when we talk about how to structure content for AI search, we often default to structured data – Schema.org, JSON-LD, rich results, knowledge graph eligibility – the whole shooting match.

While that layer of markup is still useful in many scenarios, this isn’t another article about how to wrap your content in tags.

Structuring Content Isn’t The Same As Structured Data

Instead, we’re going deeper into something more fundamental and arguably more important in the age of generative AI: How your content is actually structured on the page and how that influences what large language models (LLMs) extract, understand, and surface in AI-powered search results.

Structured data is optional. Structured writing and formatting are not.

If you want your content to show up in AI Overviews, Perplexity summaries, ChatGPT citations, or any of the increasingly common “direct answer” features driven by LLMs, the architecture of your content matters: Headings. Paragraphs. Lists. Order. Clarity. Consistency.

In this article, I’m unpacking how LLMs interpret content — and what you can do to make sure your message is not just crawled, but understood.

How LLMs Actually Interpret Web Content

Let’s start with the basics.

Unlike traditional search engine crawlers that rely heavily on markup, metadata, and link structures, LLMs interpret content differently.

They don’t scan a page the way a bot does. They ingest it, break it into tokens, and analyze the relationships between words, sentences, and concepts using attention mechanisms.

They’re not looking for a tag or a JSON-LD snippet to tell them what a page is about. They’re looking for semantic clarity: Does this content express a clear idea? Is it coherent? Does it answer a question directly?

LLMs like GPT-4 or Gemini analyze:

  • The order in which information is presented.
  • The hierarchy of concepts (which is why headings still matter).
  • Formatting cues like bullet points, tables, bolded summaries.
  • Redundancy and reinforcement, which help models determine what’s most important.

This is why poorly structured content – even if it’s keyword-rich and marked up with schema – can fail to show up in AI summaries, while a clear, well-formatted blog post without a single line of JSON-LD might get cited or paraphrased directly.

Why Structure Matters More Than Ever In AI Search

Traditional search was about ranking; AI search is about representation.

When a language model generates a response to a query, it’s pulling from many sources – often sentence by sentence, paragraph by paragraph.

It’s not retrieving a whole page and showing it. It’s building a new answer based on what it can understand.

What gets understood most reliably?

Content that is:

  • Segmented logically, so each part expresses one idea.
  • Consistent in tone and terminology.
  • Presented in a format that lends itself to quick parsing (think FAQs, how-to steps, definition-style intros).
  • Written with clarity, not cleverness.

AI search engines don’t need schema to pull a step-by-step answer from a blog post.

But, they do need you to label your steps clearly, keep them together, and not bury them in long-winded prose or interrupt them with calls to action, pop-ups, or unrelated tangents.

Clean structure is now a ranking factor – not in the traditional SEO sense, but in the AI citation economy we’re entering.

What LLMs Look For When Parsing Content

Here’s what I’ve observed (both anecdotally and through testing across tools like Perplexity, ChatGPT Browse, Bing Copilot, and Google’s AI Overviews):

  • Clear Headings And Subheadings: LLMs use heading structure to understand hierarchy. Pages with proper H1–H2–H3 nesting are easier to parse than walls of text or div-heavy templates.
  • Short, Focused Paragraphs: Long paragraphs bury the lede. LLMs favor self-contained thoughts. Think one idea per paragraph.
  • Structured Formats (Lists, Tables, FAQs): If you want to get quoted, make it easy to lift your content. Bullets, tables, and Q&A formats are goldmines for answer engines.
  • Defined Topic Scope At The Top: Put your TL;DR early. Don’t make the model (or the user) scroll through 600 words of brand story before getting to the meat.
  • Semantic Cues In The Body: Words like “in summary,” “the most important,” “step 1,” and “common mistake” help LLMs identify relevance and structure. There’s a reason so much AI-generated content uses those “giveaway” phrases. It’s not because the model is lazy or formulaic. It’s because it actually knows how to structure information in a way that’s clear, digestible, and effective, which, frankly, is more than can be said for a lot of human writers.

A Real-World Example: Why My Own Article Didn’t Show Up

In December 2024, I wrote a piece about the relevance of schema in AI-first search.

It was structured for clarity and timeliness, and it was highly relevant to this conversation, but it didn’t show up in my research queries for this article (the one you are presently reading). The reason? I didn’t use the term “LLM” in the title or slug.

All of the articles returned in my search had “LLM” in the title. Mine said “AI Search” but didn’t mention LLMs explicitly.

You might assume that a large language model would understand that “AI search” and “LLMs” are conceptually related – and it probably does – but understanding that two things are related and choosing what to return based on the prompt are two different things.

Where does the model get its retrieval logic? From the prompt. It interprets your question literally.

If you say, “Show me articles about LLMs using schema,” it will surface content that directly includes “LLMs” and “schema” – not necessarily content that’s adjacent, related, or semantically similar, especially when it has plenty to choose from that contains the words in the query (a.k.a. the prompt).

So, even though LLMs are smarter than traditional crawlers, retrieval is still rooted in surface-level cues.

This might sound suspiciously like keyword research still matters – and yes, it absolutely does. Not because LLMs are dumb, but because search behavior (even AI search) still depends on how humans phrase things.

The retrieval layer – the layer that decides what’s eligible to be summarized or cited – is still driven by surface-level language cues.

What Research Tells Us About Retrieval

Even recent academic work supports this layered view of retrieval.

A 2023 research paper by Doostmohammadi et al. found that simpler keyword-matching techniques, such as a method called BM25, often led to better results than approaches focused solely on semantic understanding.

The improvement was measured through a drop in perplexity, which tells us how confident or uncertain a language model is when predicting the next word.

In plain terms: Even in systems designed to be smart, clear and literal phrasing still made the answers better.
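The BM25 scoring the paper benchmarks against is simple enough to sketch in a few lines of Python. The toy corpus, query, and parameter values below are illustrative, not taken from the paper:

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with classic BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        # Document frequency: how many docs in the corpus contain the term.
        n_t = sum(1 for d in corpus if term in d)
        idf = math.log((N - n_t + 0.5) / (n_t + 0.5) + 1)
        tf = doc.count(term)
        score += idf * (tf * (k1 + 1)) / (
            tf + k1 * (1 - b + b * len(doc) / avgdl)
        )
    return score

corpus = [
    "structuring content for llm retrieval with schema".split(),
    "ai search and structured data overview".split(),
    "a recipe for sourdough bread".split(),
]
query = "llm schema".split()
ranked = sorted(corpus, key=lambda d: bm25_score(query, d, corpus), reverse=True)
```

Note that the document containing the literal tokens “llm” and “schema” ranks first, while a semantically adjacent document that never uses those exact words scores zero. That is precisely the surface-level retrieval behavior described above.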

So, the lesson isn’t just to use the language these systems have been trained to recognize. The real lesson is this: If you want your content to be found, understand how AI search works as a system – a chain of prompts, retrieval, and synthesis – and make sure you’re aligned at the retrieval layer.

This isn’t about the limits of AI comprehension. It’s about the precision of retrieval.

Language models are incredibly capable of interpreting nuanced content, but when they’re acting as search agents, they still rely on the specificity of the queries they’re given.

That makes terminology, not just structure, a key part of being found.

How To Structure Content For AI Search

If you want to increase your odds of being cited, summarized, or quoted by AI-driven search engines, it’s time to think less like a writer and more like an information architect – and structure content for AI search accordingly.

That doesn’t mean sacrificing voice or insight, but it does mean presenting ideas in a format that makes them easy to extract, interpret, and reassemble.

Core Techniques For Structuring AI-Friendly Content

Here are some of the most effective structural tactics I recommend:

Use A Logical Heading Hierarchy

Structure your pages with a single clear H1 that sets the context, followed by H2s and H3s that nest logically beneath it.

LLMs, like human readers, rely on this hierarchy to understand the flow and relationship between concepts.

If every heading on your page is an H1, you’re signaling that everything is equally important, which means nothing stands out.

Good heading structure is not just semantic hygiene; it’s a blueprint for comprehension.
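One way to audit your own hierarchy is to pull the heading outline programmatically. A minimal sketch using only Python’s standard library, with invented sample markup for illustration:

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect (level, text) pairs for h1-h6 tags in document order."""
    def __init__(self):
        super().__init__()
        self.outline = []
        self._level = None

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self._level = int(tag[1])

    def handle_data(self, data):
        if self._level is not None and data.strip():
            self.outline.append((self._level, data.strip()))

    def handle_endtag(self, tag):
        if self._level is not None and tag == f"h{self._level}":
            self._level = None

html = """
<h1>Structuring Content For AI Search</h1>
<h2>How LLMs Parse Pages</h2>
<h3>Headings</h3>
<h2>Schema Still Helps</h2>
"""
parser = HeadingOutline()
parser.feed(html)
levels = [lvl for lvl, _ in parser.outline]
```

A quick scan of the resulting levels flags the problems discussed above: more than one H1, or a jump straight from H1 to H3, signals a broken blueprint.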

Keep Paragraphs Short And Self-Contained

Every paragraph should communicate one idea clearly.

Walls of text don’t just intimidate human readers; they also increase the likelihood that an AI model will extract the wrong part of the answer or skip your content altogether.

This is closely tied to readability metrics like the Flesch Reading Ease score, which rewards shorter sentences and simpler phrasing.

While it may pain those of us who enjoy a good, long, meandering sentence (myself included), clarity and segmentation help both humans and LLMs follow your train of thought without derailing.
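The Flesch Reading Ease score mentioned above is a fixed formula over word, sentence, and syllable counts, so it’s easy to approximate. A rough sketch, where the syllable counter is a naive vowel-group heuristic (so treat the scores as approximate, and the sample sentences as illustrative):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

short = "We make syrups. They taste great. Bars love them."
long_winded = ("Considering the multifaceted complexities inherent in "
               "contemporary content optimization methodologies, practitioners "
               "increasingly prioritize comprehensibility metrics.")
```

Short sentences with common words score dramatically higher than a single long, polysyllabic sentence, which is the same property that makes a passage easy for an LLM to segment and quote.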

Use Lists, Tables, And Predictable Formats

If your content can be turned into a step-by-step guide, numbered list, comparison table, or bulleted breakdown, do it. AI summarizers love structure, and so do users.

Frontload Key Insights

Don’t save your best advice or most important definitions for the end.

LLMs tend to prioritize what appears early in the content. Give your thesis, definition, or takeaway up top, then expand on it.

Use Semantic Cues

Signal structure with phrasing like “Step 1,” “In summary,” “Key takeaway,” “Most common mistake,” and “To compare.”

These phrases help LLMs (and readers) identify the role each passage plays.

Avoid Noise

Interruptive pop-ups, modal windows, endless calls-to-action (CTAs), and disjointed carousels can pollute your content.

Even if the user closes them, they’re often still present in the Document Object Model (DOM), and they dilute what the LLM sees.

Think of your content like a transcript: What would it sound like if read aloud? If it’s hard to follow in that format, it might be hard for an LLM to follow, too.

The Role Of Schema: Still Useful, But Not A Magic Bullet

Let’s be clear: Structured data still has value. It helps search engines understand content, populate rich results, and disambiguate similar topics.

However, LLMs don’t require it to understand your content.

If your site is a semantic dumpster fire, schema might save you, but wouldn’t it be better to avoid building a dumpster fire in the first place?

Schema is a helpful boost, not a magic bullet. Prioritize clear structure and communication first, and use markup to reinforce – not rescue – your content.
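When you do add markup to reinforce already well-structured content, even a minimal JSON-LD block helps. A sketch of generating one; the field values are placeholders, not a complete Article schema, and real pages should mirror their visible content exactly:

```python
import json

def article_jsonld(headline: str, author: str, date_published: str) -> str:
    """Render a minimal schema.org Article block as a JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    return ('<script type="application/ld+json">'
            + json.dumps(data)
            + "</script>")

tag = article_jsonld("How LLMs Parse Content", "Jane Doe", "2025-01-15")
```

The point of generating markup from the same data that renders the page is consistency: the schema reinforces what’s on the page rather than rescuing it.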

How Schema Still Supports AI Understanding

That said, Google has recently confirmed at Search Central Live in Madrid that its LLM (Gemini), which powers AI Overviews, does leverage structured data to help understand content more effectively.

In fact, at the event, John Mueller recommended using structured data because it gives models clearer signals about intent and structure.

That doesn’t contradict the point; it reinforces it. If your content isn’t already structured and understandable, schema can help fill the gaps. It’s a crutch, not a cure.

Schema is a helpful boost but not a substitute for structure and clarity.

In AI-driven search environments, we’re seeing content without any structured data show up in citations and summaries because the core content was well-organized, well-written, and easily parsed.

In short:

  • Use schema when it helps clarify the intent or context.
  • Don’t rely on it to fix bad content or a disorganized layout.
  • Prioritize content quality and layout before markup.

The future of content visibility is built on how well you communicate, not just how well you tag.

Conclusion: Structure For Meaning, Not Just For Machines

Optimizing for LLMs doesn’t mean chasing new tools or hacks. It means doubling down on what good communication has always required: clarity, coherence, and structure.

If you want to stay competitive, you’ll need to structure content for AI search just as carefully as you structure it for human readers.

The best-performing content in AI search isn’t necessarily the most optimized. It’s the most understandable. That means:

  • Anticipating how content will be interpreted, not just indexed.
  • Giving AI the framework it needs to extract your ideas.
  • Structuring pages for comprehension, not just compliance.
  • Anticipating and using the language your audience uses, because LLMs respond literally to prompts and retrieval depends on those exact terms being present.

As search shifts from links to language, we’re entering a new era of content design. One where meaning rises to the top, and the brands that structure for comprehension will rise right along with it.

Featured Image: Igor Link/Shutterstock

SEO Pulse: UCP Debate, Trends Gets Gemini, Health AIO Concerns via @sejournal, @MattGSouthern

Welcome to this week’s Pulse. Google is laying more groundwork for agent-led shopping, Google Trends is getting a Gemini helper inside Explore, and Google appears to have responded to a report we covered last week on AI Overviews health queries.

Here’s what matters for you and your work.

Universal Commerce Protocol (UCP) Brings Agent Checkout Closer

Google introduced the Universal Commerce Protocol as an open standard meant to help AI agents complete shopping tasks across merchants and platforms. The announcement landed around NRF and was framed as agent-based shopping infrastructure, not a consumer feature on its own.

Key facts: This story got attention for two reasons. First, it shows where Google wants AI Mode shopping to go next. Second, it triggered a familiar debate about personalization and pricing after critics connected Google’s “personalized upselling” language to surveillance pricing narratives. Google has pushed back on that framing, saying upselling means showing premium options and that its Direct Offers pilot cannot raise prices.

Why This Matters

I’ve been tracking this build-out since Google began expanding AI shopping features across Search and Gemini. The direction is consistent. Google keeps moving more of the purchase journey into its own interfaces, from product research to comparison to now checkout.

The question for ecommerce practitioners is which parts of the journey you still influence with classic SEO, which parts come down to feeds and structured data hygiene, and which parts are product decisions made inside Google’s surfaces. UCP doesn’t answer that question yet, but it clarifies the direction.

What SEO Professionals Are Saying

The most useful social commentary this week falls into “consumer risk” versus “plumbing and implementation.”

On the critique side, Lindsay Owens, executive director of Groundwork Collaborative, helped set the tone for the surveillance pricing argument around “personalized upselling.” Lee Hepner, senior legal counsel at the American Economic Liberties Project, posted along similar lines, treating individualized pricing as the bigger policy risk sitting behind these kinds of systems.

On the implementation side, Mani Fazeli, VP of Product at Shopify, described what Shopify sees as the point of UCP. He said it “models the entire shopping journey, not just payments” and that “merchants keep their business critical checkout customizations.”

Heiko Hotz, Generative AI Global Blackbelt at Google Cloud, framed it more bluntly from an agent-builder perspective. “Agents are great at reasoning, but they are terrible at navigating a visual website.” Eric Seufert, analyst and publisher of Mobile Dev Memo, weighed in from an incentives angle, arguing the endgame is keeping discovery, conversion, and optimization economically connected to paid media.

Read more: Google Announces AI Mode Checkout Protocol, Business Agent

Google Trends Explore Gets Gemini Suggestions

Google Trends is redesigning the Explore page with a Gemini-powered side panel that suggests related terms and makes comparisons easier.

Key facts: Google says the update can “automatically identify and compare relevant trends,” with the ability to compare up to eight terms and see more “top and rising” queries per term. The update is rolling out now.

Why This Matters

Google keeps making Trends more useful for the discovery phase of keyword research.

Trends has always been valuable, but it can be slow when you start with a vague idea and need to find the right comparison terms. The Gemini panel looks designed to reduce that friction. For practitioners who use Trends early in content planning, this could speed up the process of clustering related topics and spotting seasonal patterns.

What People Are Saying

Yossi Matias, vice president and head of Google Research, emphasized the Gemini side panel, which suggests related terms, supports comparisons of up to eight queries, and expands the “top” and “rising” query views.

In the SEO community, the initial framing is that this reduces friction in the Explore workflow by surfacing comparison terms faster, but there hasn’t been much detailed feedback yet beyond first impressions.

Read more: Google Trends Explore Redesign Announcement

Health AI Overviews Face Fresh Scrutiny After Guardian Reporting

After the Guardian published examples of AI Overviews giving misleading or potentially risky guidance on medical queries, Google stopped showing AI Overviews for some health searches.

Key facts: The Guardian’s reporting included examples involving pancreatic cancer diet advice and “normal range” explanations for liver tests that reviewers said lacked context. In follow-up coverage, multiple outlets reported that Google removed AI Overviews for certain medical searches after the reporting circulated. Google’s response leaned on two themes: Some examples were missing context or based on incomplete screenshots, and it says most AI Overviews are supported by reputable sources.

Why This Matters

I wrote about the Guardian investigation earlier this month, and it fits a pattern that keeps resurfacing as AI Overviews expand into sensitive categories. You also have independent data showing medical Your Money or Your Life (YMYL) queries have some of the highest AI Overview exposure rates.

The issue for SEO practitioners is measurement. You can’t easily verify what AI Overviews say about topics you cover, and the summaries can change or disappear between queries. For anyone working in health, finance, or other YMYL categories, the question is whether AI Overviews help or complicate the trust signals you’ve built through traditional content.

What People Are Saying

Patient Information Forum highlighted the investigation and pointed to a quote from Sophie Randall, Director of PIF, saying AI Overviews can put inaccurate health information “at the top of online searches, presenting a risk to people’s health.”

Pancreatic Cancer UK also posted about participating in the investigation and reiterated that one example summary was “incorrect.” Individual commentary from clinicians and researchers shared the Guardian link and framed it as a higher-stakes version of earlier AI Overview failures.

Read more: ‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk

Theme Of The Week: The “Done For You” Layer Keeps Growing

Each story this week shows Google building more layers between the query and the destination.

UCP moves checkout into Google’s surfaces. The Trends update makes discovery more guided inside Google’s tools. And the health reporting shows what happens when AI summaries sit at the top of results for sensitive queries.

For practitioners, the common theme is control. The more Google handles inside its own interfaces, the harder it becomes to measure what you influenced and what happened upstream of your site.


Featured Image: Accogliente Design/Shutterstock