The Download: inside the Vitalism movement, and why AI’s “memory” is a privacy problem

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the Vitalists: the hardcore longevity enthusiasts who believe death is “wrong”

Last April, an excited crowd gathered at a compound in Berkeley, California, for a three-day event called the Vitalist Bay Summit. It was part of a longer, two-month residency that hosted various events to explore tools—from drug regulation to cryonics—that might be deployed in the fight against death.

One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Nathan Cheng and his colleague Adam Gries a few years ago. Consider it longevity for the most hardcore adherents—a sweeping mission to which nothing short of total devotion will do.

Although interest in longevity has certainly taken off in recent years, not everyone in the broader longevity space shares the Vitalists’ commitment to actually making death obsolete. Still, the Vitalists feel that momentum is building, not just for the science of aging and the development of lifespan-extending therapies, but for the acceptance of their philosophy that defeating death should be humanity’s top concern. Read the full story.

—Jessica Hamzelou

This is the latest in our Big Story series, the home for MIT Technology Review’s most important, ambitious reporting. You can read the rest of the series here.

What AI “remembers” about you is privacy’s next frontier

—Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, & Ruchika Joshi, fellow at the Center for Democracy & Technology specializing in AI safety and governance

The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.

Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes.

But their ability to store and retrieve increasingly intimate details about their users over time introduces alarming, and all-too-familiar, privacy vulnerabilities, many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities. So what can developers do to fix this problem? Read the full story.

How the grid can ride out winter storms

The eastern half of the US saw a monster snowstorm over the weekend. The good news is the grid has largely been able to keep up with the freezing temperatures and increased demand. But there were some signs of strain, particularly for fossil-fuel plants.

One analysis found that PJM, the nation’s largest grid operator, saw significant unplanned outages in plants that run on natural gas and coal. Historically, these facilities have struggled in extreme winter weather.

Much of the country continues to face record-low temperatures, and the possibility is looming for even more snow this weekend. What lessons can we take from this storm, and how might we shore up the grid to cope with extreme weather? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Telegram has been flooded with deepfake nudes 
Millions of users are creating and sharing falsified images in dedicated channels. (The Guardian)

2 China has executed 11 people linked to Myanmar scam centers
The members of the “Ming family criminal gang” caused the deaths of at least 14 Chinese citizens. (Bloomberg $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

3 This viral personal AI assistant is a major privacy concern
Security researchers are sounding the alarm on Moltbot, formerly known as Clawdbot. (The Register)
+ It requires a great deal more technical know-how than most agentic bots. (TechCrunch)

4 OpenAI has a plan to keep bots off its future social network
It’s putting its faith in biometric “proof of personhood” promised by the likes of World’s eyeball-scanning orb. (Forbes)
+ We reported on how World recruited its first half a million test users back in 2022. (MIT Technology Review)

5 Here are just some of the technologies ICE is deploying
From facial recognition to digital forensics. (WP $)
+ Agents are also using Palantir’s AI to sift through tip-offs. (Wired $)

6 Tesla is axing its Model S and Model X cars 🚗
Its Fremont factory will switch to making Optimus robots instead. (TechCrunch)
+ It’s the latest stage of the company’s pivot to AI… (FT $)
+ …as profit falls by 46%. (Ars Technica)
+ Tesla is still struggling to recover from the damage of Elon Musk’s political involvement. (WP $)

7 X is rife with weather influencers spreading misinformation
They’re whipping up hype before massive storms hit. (New Yorker $)

8 Retailers are going all-in on AI
But giants like Amazon and Walmart are taking very different approaches. (FT $)
+ Mark Zuckerberg has hinted that Meta is working on agentic commerce tools. (TechCrunch)
+ We called it—what’s next for AI in 2026. (MIT Technology Review)

9 Inside the rise of the offline hangout
No phones, no problem. (Wired $)

10 Social media is obsessed with 2016
…why, exactly? (WSJ $)

Quote of the day

“The amount of crap I get for putting out a hobby project for free is quite something.”

—Peter Steinberger, the creator of the viral AI agent Moltbot, complains in a post on X about the backlash his project has received from security researchers pointing out its flaws.

One more thing

The flawed logic of rushing out extreme climate solutions

Early in 2022, entrepreneur Luke Iseman says, he released a pair of sulfur dioxide–filled weather balloons from Mexico’s Baja California peninsula, in the hope that they’d burst miles above Earth.

It was a trivial act in itself, effectively a tiny, DIY act of solar geoengineering, the controversial proposal that the world could counteract climate change by releasing particles that reflect more sunlight back into space.

Entrepreneurs like Iseman invoke the stark dangers of climate change to explain why they do what they do—even if they don’t know how effective their interventions are. But experts say that urgency doesn’t create a social license to ignore the underlying dangers or leapfrog the scientific process. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The hottest thing in art right now? Vertical paintings.
+ There’s something in the water around Monterey Bay—a tail walking dolphin!
+ Fed up with hairstylists not listening to you? Remember these handy tips the next time you go for a cut.
+ Get me a one-way ticket to Japan’s tastiest island.

DHS is using Google and Adobe AI to make videos

The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI—and as workers in tech have put pressure on their employers to denounce the agencies’ activities. 

The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. 

In a section about “editing images, videos or other public affairs materials using AI,” it reveals for the first time that DHS is using Google’s Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools. It also discloses that DHS uses Microsoft Copilot Chat for generating first drafts of documents and summarizing long reports, and Poolside software for coding tasks, in addition to tools from other companies.

Google, Adobe, and DHS did not immediately respond to requests for comment.

The news provides details about how agencies like Immigration and Customs Enforcement, which is part of DHS, might be creating the large amounts of content they’ve shared on X and other channels as immigration operations have expanded across US cities. They’ve posted content celebrating “Christmas after mass deportations,” referenced Bible verses and Christ’s birth, shown the faces of those the agency has arrested, and shared ads aimed at recruiting agents. The agencies have also repeatedly used music in their videos without permission from the artists.

Some of the content, particularly videos, has the appearance of being AI-generated, but it hasn’t been clear until now what AI models the agencies might be using. This marks the first concrete evidence such generators are being used by DHS to create content shared with the public.

It remains impossible to verify which company helped create a specific piece of content, or indeed if it was AI-generated at all. Adobe offers options to “watermark” a video made with its tools to disclose that it is AI-generated, for example, but this disclosure does not always stay intact when the content is uploaded and shared across different sites.

The document reveals that DHS has specifically been using Flow, a tool from Google that combines its Veo 3 video generator with a suite of filmmaking tools. Users can generate clips and assemble entire videos with AI, including videos that contain sound, dialogue, and background noise, making them hyperrealistic. Adobe launched its Firefly generator in 2023, promising that it does not use copyrighted content in its training or output. Like Google’s tools, Adobe’s can generate videos, images, soundtracks, and speech. The document does not reveal further details about how the agency is using these video generation tools.

Workers at large tech companies, including more than 140 current and former employees from Google and more than 30 from Adobe, have been putting pressure on their employers in recent weeks to take a stance against ICE and the shooting of Alex Pretti on January 24. Google’s leadership has not made statements in response. In October, Google and Apple removed apps on their app stores that were intended to track sightings of ICE, citing safety risks. 

An additional document released on Wednesday revealed new details about how the agency is using more niche AI products, including a facial recognition app used by ICE, as first reported by 404Media in June.

The AI Hype Index: Grok makes porn, and Claude Code nails your job

Everyone is panicking because AI is very bad; everyone is panicking because AI is very good. It’s just that you never know which one you’re going to get. Grok is a pornography machine. Claude Code can do anything from building websites to reading your MRI. So of course Gen Z is spooked by what this means for jobs. Unnerving new research says AI is going to have a seismic impact on the labor market this year.

If you want to get a handle on all that, don’t expect any help from the AI companies—they’re turning on each other like it’s the last act in a zombie movie. Meta’s former chief AI scientist, Yann LeCun, is spilling tea, while Big Tech’s messiest exes, Elon Musk and OpenAI, are about to go to trial. Grab your popcorn.

Why Marketplaces Block AI Shopping Agents

Autonomous AI shopping agents are moving quickly from novelty to reality, with both financial and legal implications.

AI-first browsers such as Perplexity’s Comet and OpenAI’s Atlas can now search, compare, and initiate purchases with minimal human involvement.

That process, called agentic commerce, creates faster shopping for consumers and fewer clicks for merchants. It also challenges many ecommerce conventions, including the role marketplaces play in product discovery, transactions, and advertising.

Amazon and eBay have responded. Both are moving to restrict independent AI agents from completing purchases, citing security and user experience concerns. Yet in reality, the fight is almost certainly about control.


AI shopping agents threaten marketplaces such as eBay, Amazon, AliExpress, and many others.

Amazon vs. Perplexity

In November 2025, Amazon sued Perplexity, alleging that the Comet web browser masquerades as a human, accesses Amazon accounts, and places orders in violation of Amazon’s terms of service and computer fraud laws.

Third-party bots, according to Amazon, must operate openly and only with platform permission.

Perplexity countered that Comet acts on behalf of a human, with credentials stored locally for security, and suggested Amazon’s action was an attempt to protect its ad-driven business model and preserve control over shopping flows.

Essentially, Perplexity asks whether a platform can say no if a human authorizes an AI to shop.

eBay’s Ban

Just this month, eBay updated its user agreement to prohibit, without prior approval, “buy-for-me” agents and end-to-end LLM-driven checkout flows.

eBay positions the change as a safeguard against auction manipulation, fraud, and mistaken orders. The company, however, did leave room for “formally sanctioned” shopping agents, thus opening the door for partnerships that eBay can control.

Marketplace Concerns

Taken together, eBay’s update and Amazon’s lawsuit suggest that marketplaces seek to control agentic commerce relationships.

It makes sense. Marketplaces exist to aggregate and centralize shopping. It is the core service they provide and how they earn revenue. Hence agentic commerce is a threat.

Advertising. For the Amazon marketplace specifically and other marketplaces generally, advertising revenue is likely a chief concern.

According to its 2025 Q3 filing with the Securities and Exchange Commission, Amazon generated $47 billion in “advertising services” revenue in the first nine months of 2025.

The company is much more than a product marketplace. It is a publisher, too, offering sponsored listings, recommendation units, and paid placements — all deeply embedded in search results and category pages.

Autonomous agents bypass the ads. Instead of scrolling through sponsored products and recommendations, the AI shopping agent skips to an item and initiates checkout.

First-party data. A related concern is shopper data.

Ecommerce marketplaces observe, track, and use shopper behavioral information. They monitor what shoppers search for, which products they view, and the items they abandon. Those signals feed ranking algorithms, recommendation systems, and personalization models.

That data disappears when an external AI agent performs comparisons and decision-making outside the marketplace, which sees only the final purchase.

Transactions. In its case against Perplexity, Amazon did not dispute that the AI agent completed the transaction via Amazon’s own checkout. Nonetheless, an AI-driven checkout creates at least two concerns.

First, the marketplace has no way to ensure that the transaction was proper. What if the AI agent made an error? What if the price is wrong? Could those errors lead to customer service problems or even increased return rates? Maybe.

Second, upselling becomes all but impossible when the human shopper never sees the offer.

Compromise

Yet the developers of AI shopping agents disagree.

Agentic commerce startups argue that shoppers should be free to choose their preferred AI when they interact with services or websites. An AI agent, the argument goes, is more like a browser or an accessibility aid than a competitor.

Per the developers, marketplaces that allow only a few AI partners block human shoppers, stifle innovation, and foster monopolies.

The coming compromise will likely enable marketplaces to approve access within reasonable limits.

Thus AI agents, perhaps even Perplexity’s Comet, will eventually access marketplaces via official APIs, subject to rate limits, identity verification, and possibly commercial arrangements. Think affiliate programs for bots that pay for access.

For small-to-medium ecommerce businesses, the agent-marketplace relationship will likely be a primary route for getting products into Perplexity, ChatGPT, and similar platforms. It could be a key revenue channel.

The New Content Failure Mode: People Love It, Models Ignore It via @sejournal, @DuaneForrester

You publish a page that solves a real problem. It reads clean. It has examples, and it has the edge cases covered. You would happily hand it to a customer.

Then you ask an AI platform the exact question that page answers, and your page never shows up. No citation, no link, no paraphrase. Just omitted.

That moment is new. Not because platforms give different answers, as most people already accept that as reality. The shift is deeper. Human relevance and model utility can diverge.

If you are still using “quality” as a single universal standard, you will misdiagnose why content fails in AI answers, and you will waste time fixing the wrong things.

The Utility Gap is the simplest way to name the problem.

Image Credit: Duane Forrester

What The Utility Gap Is

This gap is the distance between what a human considers relevant and what a model considers useful for producing an answer.

Humans read to understand. They tolerate warm-up, nuance, and narrative. They will scroll to find the one paragraph that matters and often make a decision after seeing the whole page or most of the page.

A retrieval plus generation system works differently. It retrieves candidates, it consumes them in chunks, and it extracts signals that let it complete a task. It does not need your story, just the usable parts.

That difference changes how “good” works.

A page can be excellent for a human and still be low-utility to a model. That page can also be technically visible, indexed, and credible, and yet, it can still fail the moment a system tries to turn it into an answer.

This is not a theory we’re exploring here, as research already separates relevance from utility in LLM-driven retrieval.

Why Relevance Is No Longer Universal

Many standard IR ranking metrics are intentionally top-heavy, reflecting a long-standing assumption that user utility and examination probability diminish with rank. In RAG, retrieved items are consumed by an LLM, which typically ingests a set of passages rather than scanning a ranked list like a human, so classic position discounts and relevance-only assumptions can be misaligned with end-to-end answer quality. (I’m over-simplifying here, as IR is far more complex than one paragraph can capture.)

A 2025 paper on retrieval evaluation for LLM-era systems attempts to make this explicit. It argues classic IR metrics miss two big misalignments: position discount differs for LLM consumers, and human relevance does not equal machine utility. It introduces an annotation scheme that measures both helpful passages and distracting passages, then proposes a metric called UDCG (Utility and Distraction-aware Cumulative Gain). The paper also reports experiments across multiple datasets and models, with UDCG improving correlation with end-to-end answer accuracy versus traditional metrics.

The marketer takeaway is blunt. Some content is not merely ignored. It can reduce answer quality by pulling the model off-track. That is a utility problem, not a writing problem.
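To make the utility-versus-distraction idea concrete, here is a minimal sketch of that kind of score in Python. It is an illustration of the concept only, not the paper’s UDCG formula: helpful passages add gain, distracting passages subtract it, and there is no rank-position discount because the LLM consumes the retrieved set as a whole. The labels and weights are assumptions made for this example.

```python
# Illustrative only: a simplified utility/distraction-aware score,
# not the exact UDCG definition from the paper.

def utility_distraction_score(passages):
    """passages: list of dicts with a 'label' per retrieved passage.

    Labels (a hypothetical annotation scheme for this sketch):
      'helpful'     -> contributes evidence the model can use (+1)
      'neutral'     -> ignored by the model (0)
      'distracting' -> plausible-looking but off-track, pulls the answer away (-1)
    """
    gain = {"helpful": 1.0, "neutral": 0.0, "distracting": -1.0}
    if not passages:
        return 0.0
    total = sum(gain[p["label"]] for p in passages)
    # Normalize by the number of passages so scores are comparable across queries.
    return total / len(passages)

retrieved = [
    {"id": "your-guide", "label": "helpful"},
    {"id": "old-blog-post", "label": "distracting"},
    {"id": "forum-thread", "label": "neutral"},
]
print(utility_distraction_score(retrieved))  # 0.0 -> the distractor cancels the helpful passage
```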

A related warning comes from NIST. Ian Soboroff’s “Don’t Use LLMs to Make Relevance Judgments” argues you should not substitute model judgments for human relevance judgments in the evaluation process. The mapping is not reliable, even when the text output feels human.

That matters for your strategy. If relevance were universal, a model could stand in for a human judge, and you would get stable results, but you do not.

The Utility Gap sits right in that space. You cannot assume that what reads well to a person will be treated as useful by the systems now mediating discovery.

Even When The Answer Is Present, Models Do Not Use It Consistently

Many teams hear “LLMs can take long context” and assume that means “LLMs will find what matters.” That assumption fails often.

“Lost in the Middle: How Language Models Use Long Contexts” shows that model performance can degrade sharply based on where relevant information appears in the context. Results often look best when the relevant information is near the beginning or end of the input, and worse when it sits in the middle, even for explicitly long-context models.

This maps cleanly to content on the web. Humans will scroll. Models may not use the middle of your page as reliably as you expect. If your key definition, constraint, or decision rule sits halfway down, it can become functionally invisible.

You can write the right thing and still place it where the system does not consistently use it. This means that utility is not just about correctness; it’s also about extractability.
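If you want to see this effect on your own content, the probe is simple to sketch. The snippet below builds three versions of a long context with the same key fact placed at the start, middle, and end, then asks the same question of each. The ask_model function, the filler text, and the example fact are all placeholders you would swap for your own model call and content.

```python
# Sketch of a position-sensitivity probe inspired by the "Lost in the Middle" setup.
# ask_model() is a placeholder; replace it with a call to whichever chat API you use.

KEY_FACT = "The safe default torque for the X200 bracket is 12 Nm."
FILLER = ["Paragraph about unrelated product history."] * 40  # padding to make a long context

def build_context(position: str) -> str:
    docs = FILLER.copy()
    insert_at = {"start": 0, "middle": len(docs) // 2, "end": len(docs)}[position]
    docs.insert(insert_at, KEY_FACT)
    return "\n\n".join(docs)

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real model call before drawing conclusions.
    return ""

for position in ("start", "middle", "end"):
    prompt = (
        build_context(position)
        + "\n\nQuestion: What torque should I use for the X200 bracket?"
    )
    answer = ask_model(prompt)
    # Checks whether the model's answer actually used the planted fact.
    print(position, "->", "12 Nm" in answer)
```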

Proof In The Wild: Same Intent, Different Utility Target

This is where the Utility Gap moves from research to reality.

BrightEdge published research comparing how ChatGPT and Google AI approach visibility by industry. In healthcare, BrightEdge reports 62% divergence and gives an example that matters to marketers because it shows the system choosing a path, not just an answer. For “how to find a doctor,” the report describes ChatGPT pushing Zocdoc while Google points toward hospital directories. Same intent. Different route.

A related report from them also frames this as a broader pattern, especially in action-oriented queries, where the platform pushes toward different decision and conversion surfaces.

That is the Utility Gap showing up as behavior. The model is selecting what it considers useful for task completion, and those choices can favor aggregators, marketplaces, directories, or a competitor’s framing of the problem. Your high-quality page can lose without being wrong.

Portability Is The Myth You Have To Drop

The old assumption was simple: if you built a high-quality page and won in search, you won in discovery. That is no longer a safe assumption.

BCG describes the shift in discoverability and highlights how measurement is moving from rankings to visibility across AI-mediated surfaces. Their piece includes a claim about low overlap between traditional search and AI answer sources, which reinforces the idea that success does not transfer cleanly across systems.

Profound published a similar argument, positioning the overlap gap as a reason top Google visibility does not guarantee visibility in ChatGPT.

Method matters with overlap studies, so treat these numbers as directional signals rather than fixed constants. Search Engine Land published a critique of the broader trend of SEO research being over-amplified or generalized beyond what its methods can support, including discussion of overlap-style claims.

You do not need a perfect percent to act. You just need to accept the principle. Visibility and performance are not portable by default, and utility is relative to the system assembling the answer.

How You Measure The Utility Gap Without A Lab

You do not need enterprise tooling to start, but you do need consistency and intent discipline.

Start with 10 intents that directly impact revenue or retention. Pick queries that represent real customer decision points: choosing a product category, comparing options, fixing a common issue, evaluating safety or compliance, or selecting a provider. Focus on intent, not keyword volume.

Run the exact same prompt on the AI surfaces your customers use. That might include Google Gemini, ChatGPT, and an answer engine like Perplexity. You are not looking for perfection, just repeatable differences.

Capture four things each time:

  • Which sources get cited or linked.
  • Whether your brand is mentioned (cited, mentioned, paraphrased, or omitted).
  • Whether your preferred page appears.
  • Whether the answer routes the user toward or away from you.

Then, score what you see. Keep the scoring simple so you will actually do it. A practical scale looks like this in plain terms:

  • Your content clearly drives the answer.
  • Your content appears, but plays a minor role.
  • Your content is absent, and a third party dominates.
  • The answer conflicts with your guidance or routes users somewhere you do not want them to go.

That becomes your Utility Gap baseline.

When you repeat this monthly, you track drift. When you repeat it after content changes, you can see whether you reduced the gap or merely rewrote words.
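If it helps, that baseline can live in something as simple as a CSV you append to on each check. The sketch below is one way to do that in Python; the field names and the 0-to-3 score mirror the capture-and-score steps above, but they are suggestions, not a standard.

```python
# Minimal Utility Gap log: one row per intent, per AI surface, per check.
# Field names and the 0-3 score scale are suggestions, not a standard.
import csv
from datetime import date
from pathlib import Path

FIELDS = [
    "date", "intent", "surface", "cited_sources",
    "brand_status",         # cited / mentioned / paraphrased / omitted
    "preferred_page_seen",   # True / False
    "routing",               # toward_us / away_from_us / neutral
    "score",                 # 3 = drives the answer ... 0 = conflicts or reroutes users
]

def log_check(path="utility_gap_log.csv", **row):
    row["date"] = date.today().isoformat()
    file = Path(path)
    new_file = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Example entry for one intent checked on one surface.
log_check(
    intent="compare managed vs self-hosted backups",
    surface="ChatGPT",
    cited_sources="vendor-docs.example; competitor.example",
    brand_status="omitted",
    preferred_page_seen=False,
    routing="away_from_us",
    score=0,
)
```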

How You Reduce The Utility Gap Without Turning Your Site Into A Checklist

The goal is not to “write for AI.” The goal is to make your content more usable to systems that retrieve and assemble answers. Most of the work is structural.

Put the decision-critical information up front. Humans accept a slow ramp. Retrieval systems reward clean early signals. If the user’s decision depends on three criteria, put those criteria near the top. If the safest default matters, state it early.

Write anchorable statements. Models often assemble answers from sentences that look like stable claims. Clear definitions, explicit constraints, and direct cause-and-effect phrasing increase usability. Hedged, poetic, or overly narrative language can read well to humans and still be hard to extract into an answer.

Separate core guidance from exceptions. A common failure pattern is mixing the main path, edge cases, and product messaging inside one dense block. That density increases distraction risk, which aligns with the utility and distraction framing in the UDCG work.

Make context explicit. Humans infer, but models benefit when you state assumptions, geography, time sensitivity, and prerequisites. If guidance changes based on region, access level, or user type, say so clearly.

Treat mid-page content as fragile. If the most important part of your answer sits in the middle, promote it or repeat it in a tighter form near the beginning. Long-context research shows position can change whether information gets used.

Add primary sources when they matter. You are not doing this for decoration. You are giving the model and the reader evidence to anchor trust.
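As a rough illustration of these structural moves, here is a hedged page skeleton. Every heading, claim, and number in it is a placeholder; the point is the shape: the anchorable answer first, the decision criteria as clean statements, and the exceptions separated from the main path.

```markdown
<!-- Illustrative page skeleton only; headings, claims, and numbers are placeholders. -->

# How to choose a backup plan for a small team

**Short answer:** For teams under 20 people, a managed daily backup with 30-day
retention covers most cases. (An anchorable claim, stated before any narrative.)

## The three criteria that decide it
- How much recovery time you can tolerate (hours vs. days)
- Where your data must legally reside
- Whether anyone on the team will actually monitor restore tests

## Exceptions and edge cases
Regulated industries and teams with very large media libraries need a different
setup; that guidance lives here, separated from the default above.
```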

This is content engineering, not gimmicks.

Where This Leaves You

The Utility Gap is not a call to abandon traditional SEO. It is a call to stop assuming quality is portable.

Your job now runs in two modes at once. Humans still need great content. Models need usable content. Those needs overlap, but they are not identical. When they diverge, you get invisible failure.

That changes roles.

Content writers cannot treat structure as a formatting concern anymore. Structure is now part of performance. If you want your best guidance to survive retrieval and synthesis, you have to write in a way that lets machines extract the right thing, fast, without getting distracted.

SEOs cannot treat “content” as something they optimize around at the edges. Technical SEO still matters, but it no longer carries the whole visibility story. If your primary lever has been crawlability and on-page hygiene, you now have to understand how the content itself behaves when it is chunked, retrieved, and assembled into answers.

The organizations that win will not argue about whether AI answers differ. They will treat model-relative utility as a measurable gap, then close it together, intent by intent.

This post was originally published on Duane Forrester Decodes.


Featured Image: LariBat/Shutterstock

What The Latest Web Almanac Report Reveals About Bots, CMS Influence & llms.txt via @sejournal, @theshelleywalsh

The Web Almanac is an annual report that translates the HTTP Archive dataset into practical insight, combining large-scale measurement with interpretation from industry experts.

To get insights into what the 2025 report can tell us about what is actually happening in SEO, I spoke with one of the authors of the SEO chapter update, Chris Green, a well-known industry expert with over 15 years of experience.

Chris shared with me some surprises about the adoption of llms.txt files and how CMS platforms are shaping SEO far more than we realize: little-known facts the data surfaced in the research, and insights that would usually go unnoticed.

You can watch the full interview with Chris on the IMHO recording at the end, or continue reading the article summary.

“I think the data [in the Web Almanac] helped to show me that there’s still a lot broken. The web is really messy. Really messy.”

Bot Management Is No Longer ‘Google, Or Not Google?’

Although bot management has been binary for some time – allow or disallow Google – it’s becoming a new challenge, something that Eoghan Henn had picked up on previously and that Chris found in his research.

We began our conversation by talking about how robots files are now being used to express intent about AI crawler access.

Chris responded that, first, there is a need to be conscious of the different crawlers, what their intentions are, and, fundamentally, what blocking them might do; blocking some bots has bigger implications than others.

Second, it requires the platform providers to actually respect those rules and treat those files appropriately. That isn’t always happening, and the ethics around robots.txt and AI crawlers is an area that SEOs need to know about and understand more.

Chris explained that although the Almanac report showed the symptom of robots.txt usage, SEOs need to get ahead and understand how to control the bots.

“It’s not only understanding what the impact of each [bot/crawler] is, but also how to communicate that with the business. If you’ve got a team who want to cut as much bot crawling as possible because they want to save money, that might desperately impact your AI visibility.”

“Equally, you might have an editorial team that doesn’t want to get all of their work scraped and regurgitated. So, we, as SEOs, need to understand that dynamic, how to control it technically, and how to put that argument forward in the business as well,” Chris explained.

As more platforms and crawlers are introduced, SEO teams will have to consider all implications, and collaborate with other teams to ensure the right balance of access is applied to the site.
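In practice, those access decisions are still expressed in the same robots.txt file SEOs already manage. The sketch below is illustrative only; the user-agent tokens shown are ones the respective providers have documented, but verify the current tokens and think through the visibility trade-offs before blocking anything.

```
# Illustrative robots.txt sketch; verify each provider's current user-agent
# token and what it controls before copying.

# OpenAI's crawler (content may be used to improve its models)
User-agent: GPTBot
Allow: /

# Perplexity's crawler for its answer engine
User-agent: PerplexityBot
Allow: /

# Google's control token for AI training/grounding uses; does not affect Googlebot
User-agent: Google-Extended
Disallow: /

# Common Crawl's bulk dataset crawler
User-agent: CCBot
Disallow: /
```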

Llms.txt Is Being Applied Despite No Official Platform Adoption 

The first surprising finding of the report was that adoption of the proposed llms.txt standard is around 2% of sites in the dataset.

Llms.txt has been a heated topic in the industry, with many SEOs dismissing the value of the file. Some tools, such as Yoast, have included the standard, but as yet, there has been no demonstration of actual uptake by AI providers.
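For readers who have not seen one, the proposed format is just a small Markdown file served at /llms.txt. The sketch below follows the public llmstxt.org proposal (an H1 name, a blockquote summary, and sections of curated links); the site and URLs are invented.

```markdown
# Example Store

> Example Store sells refurbished laptops and publishes repair guides.
> Key policies and documentation are linked below for LLM consumption.

## Docs
- [Shipping and returns policy](https://example.com/policies/returns.md)
- [Repair guide index](https://example.com/guides/index.md)

## Optional
- [Company history](https://example.com/about.md)
```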

Chris admitted that 2% was a higher adoption than he expected. But much of that growth appears to be driven by SEO tools that have added llms.txt as a default or optional feature.

Chris is skeptical of its long-term impact. As he explained, Google has repeatedly stated it does not plan to use llms.txt, and without clear commitment from the major AI providers, especially OpenAI, it risks remaining a niche, symbolic gesture rather than a functional standard.

That said, Chris has seen log-file data suggesting some AI crawlers are already fetching these files, and in limited cases, they may even be referenced as sources. He views this less as a competitive advantage and more as a potential parity mechanism, something that may help certain sites be understood, but not dramatically elevate them.

“Google has time and again said they don’t plan to use llms.txt which they reiterated in Zurich at Search Central last year. I think, fundamentally, Google doesn’t need it as they do have crawling and rendering nailed. So, I think it hinges on whether OpenAI say they will or won’t use it and I think they have other problems than trying to set up a new standard.”

Different, But Reassuringly The Same Where It Matters

I went on to ask Chris about how SEOs can balance the difference between search engine visibility and machine visibility.

He thinks there is “a significant overlap between what SEO was before we started worrying about this and where we are at the start of 2026.”

Despite this overlap, Chris was clear that if anyone thinks optimizing for search and machines is the same, then they are not aware of the two different systems, the different weightings, the fact that interpretation, retrieval, and generation are completely different.

Although there are different systems and different capabilities in play, he doesn’t think SEO has fundamentally changed. His belief is that SEO and AI optimization are “kind of the same, reassuringly the same in the places that matter, but you will need to approach it differently” because it diverges in how outputs are delivered and consumed.

Chris did say that SEOs will move more towards feeds, feed management, and feed optimization.

“Google’s universal commerce protocol where you could potentially transact directly from search results or from a Gemini window obviously changes a lot. It’s just another move to push the website out of the loop. But the information, what we’re actually optimizing still needs to be optimized. It’s just in a different place.”

CMS Platforms Shape The Web More Than SEOs Realize

Perhaps the biggest surprise from Web Almanac 2025 was the scale of influence exerted by CMS platforms and tooling providers.

Chris said that he hadn’t realized just how big that impact is. “Platforms like Shopify, Wix, etc. are shaping the actual state of tech SEO probably more profoundly than I think a lot of people truly give it credit for.”

Chris went on to explain that “as well-intentioned as individual SEOs are, I think our overall impact on the web is minimal outside of CMS platforms providers. I would say if you are really determined to have an impact outside of your specific clients, you need to be nudging WordPress or Wix or Shopify or some of the big software providers within those ecosystems.”

This creates opportunity: Websites that do implement technical standards correctly could achieve significant differentiation when most sites lag behind best practices.

One of the more interesting insights from this conversation was how much of the web is broken and how little impact we [SEOs] really have.

Chris explained that “a lot of SEOs believe that Google owes us because we maintain the internet for them. We do the dirty work, but I also don’t think we have as much impact perhaps at an industry level as maybe some like to believe. I think the data in the Web Almanac kind of helped show me that there’s still a lot broken. The web is really messy. Really messy.”

AI Agents Won’t Replace SEOs, But They Will Replace Bad Processes

Our conversation concluded with AI agents and automation. Chris started by saying, “Agents are easily misunderstood because we use the term differently.”

He emphasized that agents are not replacements for expertise, but accelerators of process. Most SEO workflows involve repetitive data gathering and pattern recognition, areas well-suited to automation. The value of human expertise lies in designing processes, applying judgment, and contextualizing outputs.

Early-stage agents could automate 60-80% of the work, similar to a highly capable intern. “It’s going to take your knowledge and your expertise to make that applicable to your given context. And I don’t just mean the context of web marketing or the context of ecommerce. I mean the context of the business that you’re specifically working for,” he said.

Chris would argue that a lot of SEOs don’t spend enough time customizing what they do to the client specifically. He thinks there’s an opportunity to build an 80% automated process and then add your real value through the human intervention that optimizes the last 20%: the business logic.

SEOs who engage with agents, refine workflows, and evolve alongside automation are far more likely to remain indispensable than those who resist change altogether.

However, when experimenting with automation, Chris warned we should avoid automating broken processes.

“You need to understand the process that you’re trying to optimize. If the process isn’t very good, you’ve just created a machine to produce mediocrity at scale, which frankly doesn’t help anyone.”

Chris thinks that this will give SEOs an edge as AI is more widely adopted. “I suggest the people that engage with it and make those processes better and show how they can be continually evolved, they’ll be the ones that have greater longevity.”

SEOs Can Succeed By Engaging With The Complexity

The Web Almanac 2025 doesn’t suggest that SEO is being replaced, but it does show that its role is expanding in ways many teams haven’t fully adapted to yet. Core principles like crawlability and technical hygiene still matter, but they now exist within a more complex ecosystem shaped by AI crawlers, feeds, closed systems, and platform-level decisions.

Where technical standards are poorly implemented at scale, those who understand the systems that shape them can still gain a meaningful advantage.

Automation works best when it accelerates well-designed processes and fails when it simply scales inefficiency. SEOs who focus on process design, judgment, and business context will remain essential as automation becomes more common.

In an increasingly messy and machine-driven web, the SEOs who succeed will be those willing to engage with that complexity rather than ignore it.

SEO in 2026 isn’t about choosing between search and AI; it’s about understanding how multiple systems consume content and where optimization now happens.

Watch the full video interview with Chris Green here:

Thank you to Chris Green for offering his insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

How Visibility Compounds In Brand-Led SEO via @sejournal, @TaylorDanRW

If building a brand is the new SEO cliche, then how visibility compounds is the part that rarely gets explained.

We can all agree, at least in principle, that repeated brand exposure matters. Brands become familiar because they appear consistently over time and across contexts. What is less well understood is how search visibility builds on itself, how it becomes easier to grow once you reach a certain threshold, and why this is often the difference between content that merely exists and content that genuinely drives preference.

This matters because the pressure around AI and LLM visibility has changed the tone of marketing conversations. Leaders want speed. They want the benefits of brand strength without the lead time it typically requires.

That gap between expectation and reality is where many teams end up panicking, producing more content, chasing more mentions, and hoping the sheer volume will create momentum and increase mental availability. That approach rarely works, because compounding is not the same as doing more. Compounding is what happens when each new piece of visibility makes the next one easier to earn.

What “Visibility Compounding” Actually Means

Visibility compounding is the effect where early wins create structural advantages that improve your ability to win again later. This is not an abstract concept, because in SEO, once you start to earn consistent impressions and real engagement across a topic area, certain things tend to follow in a fairly predictable way.

Your pages often get crawled more frequently because the site is being discovered, used, and referenced across the wider web, while your content becomes easier to rank because it sits inside a network of related pages rather than existing as isolated assets. Your internal linking becomes more meaningful because you are connecting real clusters of intent rather than trying to force relevance where it does not exist, and your brand becomes more familiar to users, which quietly improves your ability to earn clicks, repeat visits, and deeper browsing.

None of these things are brand building in the traditional sense, but they are the mechanics that can make brand building cheaper, faster, and more resilient over time. A simple way to describe it is that visibility compounds when your presence creates signals that make your future presence more likely.

Compounding Starts Before Loyalty

One of the reasons SEOs struggle with the brand conversation is that loyalty feels like the finish line, and when nobody is loyal yet, it can feel like the brand work is failing. I feel that this is because, as marketers, we’re trained to look at the conversion funnel, with loyalty/advocacy being the “end-goal.”

Image by Paulo Bobita/Search Engine Journal

In reality, compounding begins much earlier than loyalty, typically with recognition.

If a prospect sees your brand name in search results, then sees it again in a different query a few days later, and then sees it again while they are comparing options, something changes: You are no longer unknown and are now familiar enough to be considered. This is not emotional loyalty; it is mental availability, and it is the earliest stage of preference, which is where SEO can contribute more than many marketers realize.

This is also where AI complicates the picture; users may click less often, but they are still being exposed to sources, brands, and repeated content. Even when attribution becomes harder, the effect of familiarity still exists, and the question is whether your visibility is strong enough for familiarity to form at all.

One Strong Piece Of Content Is Rarely A Strategy

Many teams still treat content like a set of isolated tactical bets, such as one flagship thought leadership piece, one big report, one digital PR campaign, or one new pillar page. These can be valuable, but on their own, they do not tend to compound, because compounding needs continuity and coverage, and it needs a user to see you again and again in ways that feel natural rather than forced.

The truth is that a single great piece of content usually becomes a moment rather than a system, and while a moment might win attention for a week, a system keeps you present for months. Single pieces of content can be fantastic catalysts, but they require support, ladder-up tactics, and more than just distribution to turn them into brand assets that compound visibility.

How Compounding Usually Unfolds

When visibility truly compounds, it often follows a simple loop, even if it takes time to build. It usually starts with coverage, where you publish content that answers real queries, it gets indexed, it earns impressions, and the early performance may be modest, but it establishes presence.

Then you start to earn credibility, because some pages begin to attract links, mentions, engagement signals, and repeat discovery, and you become a source that is referenced rather than a page that exists. Over time, repetition kicks in, users see you again, they click more readily, they browse deeper, they return later, and your brand starts to feel like part of the landscape for that topic.

This is where the system begins to create momentum, because new content can rank faster as it is not fighting for relevance alone, and it is supported by an ecosystem that already signals topical authority and user demand.

Distribution Is Often The Real Differentiator

A lot of SEO conversations get stuck on quality, as though quality is a clear and objective threshold that guarantees results, and quality does matter. The problem is that quality is rarely the differentiator once you are operating in a competitive market, because the differentiator is often distribution.

If your content is not being seen, it cannot compound, and if your digital PR work is not creating repeated brand touchpoints, it cannot compound, while leadership content that does not earn readership cannot compound either. You do not need a perfect piece of content, but you do need content that gets consumed, referenced, and remembered.

This can be uncomfortable for organizations because it makes the work feel less controllable, since writing and publishing can be done internally, but distribution forces you to compete for attention in a public arena. If you want compounding effects, you have to treat distribution as a core capability rather than a nice-to-have.

Visibility Compounding Makes Brand Outcomes Realistic

This is the missing link in much of the current industry advice. Brand building is real, but it is slow, and visibility building is measurable, but it is not always meaningful, and compounding is what connects the two.

When you build visibility in a way that compounds, you create the conditions for brand outcomes to emerge, because familiarity becomes preference over time, preference becomes repeat engagement, repeat engagement becomes trust, and trust becomes the ability to win even when the channel changes.

That last part is what matters most going into 2026, because AI search and LLM interfaces will keep evolving, attribution will remain messy, the surfaces will shift, and traffic patterns will wobble. Brands that rely on isolated wins will keep feeling exposed, while brands that rely on compounding visibility will feel anchored, because their presence is not tied to one page, one keyword set, or one campaign.

What To Focus On If You Want Compounding Effects

If you want visibility to compound, you need to stop thinking only in terms of content output and start thinking in terms of coverage and reinforcement. You build around themes rather than one-off ideas, you publish sequences rather than isolated pieces, and you connect content so it behaves like an ecosystem rather than a library.

You also measure success in a way that reflects compounding, meaning you look beyond whether a page performed in isolation and ask whether it improved your ability to perform again. If content does not make the next piece easier to win, it may still be useful, but it is not compounding.

The Question SEO Leaders Should Be Asking

If AI has forced one useful change in SEO, it is that it has exposed how brittle many visibility strategies really were. Ranking for a handful of high-volume queries was never the same as owning a topic, being present was never the same as being preferred, and building a brand was never something you could do by simply saying the words.

The real question is not whether you need a brand to win in AI search, but whether your visibility strategy is designed to compound, or whether you are producing outputs and hoping time does the rest. Time compounds what is connected and reinforced, and it does not compound what is isolated.

Featured Image: KitohodkA/Shutterstock

New to Yoast SEO for Shopify: Enhanced pricing visibility in product schema 

We are excited to announce an update to our Offer schema within Yoast SEO for Shopify. This update introduces a more robust way to communicate pricing to search engines, specifically support for sale price strikethroughs.

What’s new? 

Previously, communicating a “sale” was often limited to showing a single price. With this update, we’ve refined how our schema handles the Offer object. You can now clearly define: 

  • The original price: The “base” price before any discounts. 
  • The sale price: The current active price the customer pays. 

Why this matters 

When search engines understand the relationship between your original and sale prices, they can better represent your deals in search results. This update is designed to help trigger those eye-catching strikethrough price treatments in Google Shopping and organic snippets, improving your click-through rate by visually highlighting the value you’re offering. 

How to use it 

The schema automatically bridges the gap between your product data and the structured data output. Simply ensure your product’s “Regular Price” and “Sale Price” are populated, and our updated schema handles the rest. For more information about the structured data included with all our products, check out our structured data feature page.
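For reference, the general schema.org pattern behind a strikethrough looks roughly like the hand-written sketch below: the Offer carries the current sale price, and the original price sits in a priceSpecification flagged as a list price. This is an illustration of the pattern, not the exact markup Yoast generates, and Google documents its own eligibility rules for when a strikethrough is shown.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Sneaker",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "price": "79.00",
    "priceSpecification": {
      "@type": "UnitPriceSpecification",
      "priceType": "https://schema.org/ListPrice",
      "price": "99.00",
      "priceCurrency": "USD"
    },
    "availability": "https://schema.org/InStock"
  }
}
```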

Get started

If you are a Yoast SEO for Shopify customer, you can access your product schema by opening a product in the Yoast product editor in your Shopify store. If you are not a customer and want to learn more, you can start a 14-day free trial of Yoast SEO for Shopify from the Shopify App Store.

What is the open web?

The open web is the part of the internet built on open standards that anyone can use. It creates a democratic digital space where people can build on each other’s work without restrictions, much as WordPress.org itself is built. For website owners, understanding and leveraging the open web is increasingly crucial, especially with the rise of AI-powered systems and the general direction that online search is taking. So, let’s explore what the open web is and what it means for your website.

What is the open web?

The open web refers to the part of the internet built on open, shared standards that are available to everyone. It’s powered by technologies like HTTP, HTML, RSS, and Schema.org, which make it easy for websites and online systems to interact with each other. But it is more than just technical protocols. It also includes open-source code, public APIs, and the free flow of data and content across sites, services, and devices, creating a democratic digital space where people can build on each other’s work without heavy restrictions.

Because these standards are not owned or patented, the open web remains largely decentralized. This allows content to be accessed, understood, and reused across devices and platforms. This not only encourages innovation but also ensures that information is discoverable without being locked behind proprietary ecosystems.

The benefits of an open web

The open web is built on publicly available protocols that enable access, collaboration, and innovation at a global scale. 

The most important benefits include:

  • Collaboration and innovation: Open protocols enable developers to build on each other’s work without proprietary restrictions.
  • Accessibility: Users and AI agents alike can access and interact with web content regardless of device, platform, or underlying technology.
  • Democratization: No single company controls access to information, giving publishers greater autonomy.
  • Inclusion: The open web creates a more level playing field, where everyone gets a chance to participate in the digital economy.

The open web vs the deep web

To give you a better idea of what the open web is, it helps to know about the “deep web” and closed or “walled garden” platforms. The deep web covers content not indexed by search engines, while closed systems or walled gardens restrict access and keep data siloed.

On the open web, anyone can access information freely. A good example is Wikipedia, which is accessible to anyone looking for information on a topic and to anyone who wants to contribute to its content. Closed-off platforms, like proprietary apps or social media ecosystems, create places where content is only available if you pay or use a specific service. Well-known examples are social media platforms such as Facebook and Instagram, or a news website that requires a paid subscription for access.

In essence, the open web keeps information discoverable, accessible, and interoperable – instead of locked inside a handful of platforms.

AI and the open web

The popularity of AI-powered search makes open web principles more important than ever. Decentralized and accessible information allows AI tools to interact with content directly and use it freely to generate an answer for a user. 

“We believe the future of AI is grounded in the open web.” 

Ramanathan Guha, CVP and Technical Fellow at Microsoft. 

Microsoft’s open project NLWeb is a prime example. It provides a standardized layer that enables AI agents to discover, understand, and interact with websites efficiently, without needing separate integrations for every platform. 

What this means for website owners

For website owners, including small business owners, embracing the open web means making your content freely available in ways that AI can interpret. By using structured data standards like Schema.org, your website becomes discoverable to AI tools, increasing your reach and ensuring that your content remains part of the future of search.
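As a minimal illustration (all values are placeholders), even a small block of Schema.org markup gives AI systems something structured to anchor on:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to repot a fiddle-leaf fig",
  "author": { "@type": "Person", "name": "Jane Example" },
  "datePublished": "2026-01-15",
  "publisher": { "@type": "Organization", "name": "Example Garden Blog" }
}
```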

Yoast and Microsoft: collaborating towards a more open web

Yoast is proud to collaborate with Microsoft on NLWeb, a project that makes your content easier for AI agents to understand without extra effort from website owners, allowing your content to remain discoverable, reach a wider audience, and show up in AI-powered search results.

The open web strives toward an accessible web where content is available to everyone, where it doesn’t matter how big your website or marketing budget is, and where everyone gets the chance to be found and represented in AI-powered search. NLWeb helps turn this vision into reality by connecting today’s open web with tomorrow’s AI-driven search ecosystem.

Read more: Yoast collaborates with Microsoft to help AI understand Open Web »

Chrome Updated With 3 AI Features Including Nano Banana via @sejournal, @martinibuster

Gemini in Chrome has just been refreshed with three new features that integrate more Gemini capabilities within Chrome for Windows, macOS, and Chromebook Plus. The update adds an AI side panel, agentic AI Auto Browse, and Nano Banana image editing of whatever image is in the browser window.

AI Side Panel For Multitasking

Chrome adds a new side panel that users can slide open to start a session with Gemini without having to jump around across browser tabs. The feature is described as a way to save time by making it easier to multitask.

Google explains:

“Our testers have been using it for all sorts of things: comparing options across too-many-tabs, summarizing product reviews across different sites, and helping find time for events in even the most chaotic of calendars.”

Opt-In Requirement For AI Chat

Before enabling the side panel AI chat feature, a user must first consent to sending their URLs and browser data back to Google.

Screenshot Of Opt-In Form

Nano Banana In Chrome

Using the AI side panel, users can tell it to update and change an image in the browser window without having to do any copying, downloading, or uploading. Nano Banana makes the change right there in the open browser window.

Chrome Autobrowse (Agentic AI)

This feature is for subscribers to Google’s AI Pro and Ultra tiers. Autobrowse enables an agentic AI to take action on behalf of the user. It’s described as being able to research hotels and flights and do cost comparisons across a given range of dates, obtain quotes for work, and check whether bills are paid.

Autobrowse is multimodal, which means that it can identify items in a photo, then go out and find where they can be purchased and add them to a cart, including applying any relevant discount codes. If given permission, the AI agent can also access passwords and log in to online stores and services.

Adds More Features To Existing Ones

Google announced on January 12, 2026, that Chrome’s AI was upgraded with app connections, able to connect to Calendar, Gmail, Google Shopping, Google Flights, Maps, and YouTube. This is part of Google’s Personal Intelligence initiative, which it said is Google’s first step toward a more personalized AI assistant.

Personalization And User Intent Extraction For AI Chat And Agents

On a related note, Google recently published a research paper that shows how an on-device and in-browser AI can extract a user’s intent so as to provide better personalized and proactive responses, pointing to how on-device AI may be used in the near future. Read Google’s New User Intent Extraction Method.

Featured Image by Shutterstock/f11photo