The 2026 AI Search Benchmark Every SEO Leader Needs [Webinar]

See Where Your Brand Stands in the New Search Frontier

AI search has become the new gateway to visibility. As Google’s AI Overviews and Answer Engine Optimization (AEO) reshape discovery, the question is no longer if your brand should adapt, but how fast.

Join Pat Reinhart, VP of Services and Thought Leadership at Conductor, and Shannon Vize, Sr. Content Marketing Manager at Conductor, for an exclusive first look at the 2026 AEO and GEO Benchmarks Report, the industry’s most comprehensive study of AI search performance across 10 key industries.

What You’ll Learn

  • The exclusive 2026 benchmarks for AI referral traffic, AIO visibility, and AEO/GEO performance across industries
  • How to identify where your brand stands against AI market share leaders
  • How AI search and AIO are transforming visibility and referral traffic

Why Attend?

This is your opportunity to see what top-performing brands are doing differently and how to measure your own visibility, referral traffic, and share of voice in AI search. You’ll gain data-backed insights to update your SEO and AEO strategy for 2026 and beyond.

📌 Register now to secure your seat and benchmark your brand’s performance in the new era of AI search.

🛑 Can’t make it live? Register anyway and we’ll send you the full recording after the event.

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, a more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: the future of AlphaFold, and chatbot privacy concerns

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate

In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching lab-level accuracy, and doing it many times faster—returning results in hours instead of months.

Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out. Read the full story.

—Will Douglas Heaven

The State of AI: Chatbot companions and the future of our privacy

—Eileen Guo & Melissa Heikkilä

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.

Some state governments are taking notice and starting to regulate companion AI. But tellingly, one area the laws fail to address is user privacy. Read the full story.

This is the fourth edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review. Sign up here to receive future editions every Monday.

While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to MIT Technology Review can read the whole thing on our site.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has signed an executive order to boost AI innovation 
The “Genesis Mission” will try to speed up the rate of scientific breakthroughs. (Politico)
+ The order directs government science agencies to aggressively embrace AI. (Axios)
+ It’s also being touted as a way to lower energy prices. (CNN)

2 Anthropic’s new AI model is designed to be better at coding
We’ll discover just how much better once Claude Opus 4.5 has been properly put through its paces. (Bloomberg $)
+ It reportedly outscored human candidates in an internal engineering test. (VentureBeat)
+ What is vibe coding, exactly? (MIT Technology Review)

3 The AI boom is keeping India hooked on coal
Leaving little chance of cleaning up Mumbai’s famously deadly pollution. (The Guardian)
+ It’s lethal smog season in New Delhi right now. (CNN)
+ The data center boom in the desert. (MIT Technology Review)

4 Teenagers are losing access to their AI companions
Character.AI is limiting the amount of time underage users can spend interacting with its chatbots. (WSJ $)
+ The majority of the company’s users are young and female. (CNBC)
+ One of OpenAI’s key safety leaders is leaving the company. (Wired $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

5 Weight-loss drugs may be riskier during pregnancy 
Recipients are more likely to deliver babies prematurely. (WP $)
+ The pill version of Ozempic failed to halt Alzheimer’s progression in a trial. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

6 OpenAI is launching a new “shopping research” tool
All the better to track your consumer spending with. (CNBC)
+ It’s designed for price comparisons and compiling buyer’s guides. (The Information $)
+ The company is clearly aiming for a share of Amazon’s e-commerce pie. (Semafor)

7 LA residents displaced by wildfires are moving into prefab housing 🏠
Their new homes are cheap to build and simple to install. (Fast Company $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 Why former Uber drivers are undertaking the world’s toughest driving test
They’re taking the Knowledge—London’s gruelling street test that bypasses GPS. (NYT $)

9 How to spot a fake battery
Great, one more thing to worry about. (IEEE Spectrum)

10 Where is the Trump Mobile?
Almost six months after it was announced, there’s no sign of it. (CNBC)

Quote of the day

“AI is a tsunami that is gonna wipe out everyone. So I’m handing out surfboards.”

—Filmmaker PJ Accetturo tells Ars Technica why he’s writing a newsletter advising fellow creatives on how to pivot to AI tools.

One more thing

The second wave of AI coding is here

Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.

Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. This next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it.

But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re planning a visit to Istanbul here’s hoping you like cats—the city can’t get enough of them.
+ Rest in power reggae icon Jimmy Cliff.
+ Did you know the ancient Egyptians had a pretty accurate way of testing for pregnancy?
+ As our readers in the US start prepping for Thanksgiving, spare a thought for Astoria the lovelorn turkey 🦃

Where Investors See Ecommerce Heading

Crunchbase reported in November that ecommerce start-up funding was about to hit a five-year low as investors focus on AI, logistics, marketplaces, and live and social shopping.

The ecommerce industry is maturing. Per Crunchbase, total 2025 ecommerce start-up funding in the United States will reach $2.73 billion. That’s down from $3.06 billion last year and $28.05 billion in the pandemic-fueled 2021.

The global market is similar. Worldwide ecommerce investments peaked in 2021 at $92.46 billion, dropping to $7.72 billion this year.

Ecommerce Funding 2020 to 2025

Year   United States    Global
2020   $10 billion      $31.19 billion
2021   $28.05 billion   $92.46 billion
2022   $9.98 billion    $36.06 billion
2023   $2.87 billion    $16.54 billion
2024   $3.06 billion    $10.61 billion
2025   $2.73 billion    $7.27 billion

Seed through growth rounds of $200,000 or more. Source: Crunchbase.

Investment in Focus

Still, the ecommerce industry continues to grow. Several sources, including the National Retail Federation, estimate that U.S. ecommerce sales will increase by 7% to 9% year over year by the end of 2025, roughly double the growth rate of physical retail.

So why are start-up ecommerce investments down if the industry is growing? The answer is that investors are not abandoning ecommerce. Rather, they are concentrating on areas they believe will define the next phase.

Certainly that’s true for retail enterprises and platforms. Amazon, Walmart, Shopify, PayPal, Target, and prominent brands have announced AI partnerships. These deals are indicative of where enterprise retail is going.

In reviewing this year’s start-up investments and deals among large retailers and commerce platforms, I see five areas of interest that indicate what’s next for ecommerce.

  • AI shopping,
  • AI commerce infrastructure,
  • Rapid logistics and fulfillment,
  • Marketplaces,
  • Live and social commerce.

AI Shopping

AI product search, AI-assisted shopping, and AI-powered agentic commerce are the hottest topics in the industry. Seemingly every major ecommerce retailer, marketplace, and platform is rushing, to varying degrees, toward AI-guided ecommerce.

AI shopping tools aim to reduce friction, match products to intent, and increase conversions. The tools will offer shoppers fewer but hopefully more relevant options.

For small-to-medium ecommerce businesses, AI integration will depend on platform adoption. As Shopify, WooCommerce, and BigCommerce integrate AI-guided shopping tools, even small merchants can offer conversational search, personalized recommendations, and similar features.

AI shopping assistants may become as standard as site search, shifting how shoppers interact with independent ecommerce stores individually and collectively.

AI Commerce Infrastructure

Another cluster of investments focuses on the underlying infrastructure that powers ecommerce. These include product feeds, merchandising, ad creation, and operations.

GrowthList reported that ShopVision Technologies and Beyond the Checkout each raised funds to automate analytics, product catalog management, and post-purchase workflows.

The industry should therefore expect new software tools and platforms that do the work, such as creating an ad campaign or analyzing sales trends.

Rapid Logistics and Fulfillment

Logistics continues to attract investments as well.

India-based Zepto raised $450 million to expand its fast-delivery grocery network. Wonder, an American food and household delivery service, secured roughly $600 million earlier in 2025, according to Crunchbase.

Other logistics investments included Coco, a last-mile delivery provider that raised $60 million, and Stord, a distributed fulfillment network that raised $80 million.

Getting an ecommerce order from the warehouse to the customer has always been a significant challenge. Amazon and others have mastered it with same-day delivery, yet more speed and efficiency are needed.

Marketplaces

Investors are funding ecommerce marketplaces and related automation tools.

Refurbed, the European recommerce marketplace, raised more than $60 million to scale its operations, while emerging marketplaces in the Middle East, Asia, and Latin America continue to attract capital.

Meanwhile, Amazon, Walmart, Target Plus, TikTok Shop, and Temu have all expanded API access and seller tools, signaling increased competition.

Hence ecommerce sales and distribution might continue to become more decentralized. Independent ecommerce merchants may need to participate in several marketplaces.

Live and Social Commerce

Finally, livestream commerce continues to attract investors. Whatnot, the live shopping marketplace, raised $225 million, according to Crunchbase, and reported more than $6 billion in sales in 2025.

The trend may be a move away from static product pages to interactive, personality-driven sales channels. Live commerce enables shoppers to ask questions and see products in use.

This human interaction could facilitate trust more quickly. It might also be a counterweight of sorts to agentic commerce, where all of the trust is with the AI.

Most retailers will likely host livestreams via platforms and integrations — larger sellers more frequently than smaller ones.

The AI Consistency Paradox

Doc Brown’s DeLorean didn’t just travel through time; it created different timelines. Same car, different realities. In “Back to the Future,” when Marty’s actions in the past threatened his existence, his photograph began to flicker between realities depending on choices made across timelines.

This exact phenomenon is happening to your brand right now in AI systems.

ChatGPT on Monday isn’t the same as ChatGPT on Wednesday. Each conversation creates a new timeline with different context, different memory states, different probability distributions. Your brand’s presence in AI answers can fade or strengthen like Marty’s photograph, depending on context ripples you can’t see or control. This fragmentation happens thousands of times daily as users interact with AI assistants that reset, forget, or remember selectively.

The challenge: How do you maintain brand consistency when the channel itself has temporal discontinuities?


The Three Sources Of Inconsistency

The variance isn’t random. It stems from three technical factors:

Probabilistic Generation

Large language models don’t retrieve information; they predict it token by token using probability distributions. Think of it like autocomplete on your phone, but vastly more sophisticated. AI systems use a “temperature” setting that controls how adventurous they are when picking the next word. At temperature 0, the AI always picks the most probable choice, producing consistent but sometimes rigid answers. At higher temperatures (most consumer AI uses 0.7 to 1.0 as defaults), the AI samples from a broader range of possibilities, introducing natural variation in responses.

The same question asked twice can yield measurably different answers. Research shows that even with supposedly deterministic settings, LLMs display output variance across identical inputs, and studies reveal distinct effects of temperature on model performance, with outputs becoming increasingly varied at moderate-to-high settings. This isn’t a bug; it’s fundamental to how these systems work.
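To make the temperature mechanism concrete, here is a minimal Python sketch of temperature-scaled sampling. The token scores are invented, and the vocabulary is three words instead of a model’s roughly 100,000, but the mechanic is the one described above:

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float) -> str:
    """Pick the next token from temperature-scaled softmax probabilities.

    Illustrative only: real models do this over ~100,000-token vocabularies.
    """
    if temperature <= 0:  # treat 0 as greedy decoding
        return max(scores, key=scores.get)
    exp = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    tokens, weights = zip(*exp.items())
    return random.choices(tokens, weights=weights)[0]

# Invented scores for completing "best family destination: ___"
scores = {"Italy": 2.0, "Spain": 1.5, "Greece": 1.2}

print(sample_next_token(scores, temperature=0))                         # always "Italy"
print([sample_next_token(scores, temperature=0.9) for _ in range(5)])   # varies per run
```

At temperature 0 the function always returns the highest-scoring token; at 0.9 the same call can return different tokens on different runs, which is the same variance brands see across identical prompts.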

Context Dependence

Traditional search isn’t conversational. You perform sequential queries, but each one is evaluated independently. Even with personalization, you’re not having a dialogue with an algorithm.

AI conversations are fundamentally different. The entire conversation thread becomes direct input to each response. Ask about “family hotels in Italy” after discussing “budget travel” versus “luxury experiences,” and the AI generates completely different answers because previous messages literally shape what gets generated. But this creates a compounding problem: the deeper the conversation, the more context accumulates, and the more prone responses become to drift. Research on the “lost in the middle” problem shows LLMs struggle to reliably use information from long contexts, meaning key details from earlier in a conversation may be overlooked or mis-weighted as the thread grows.

For brands, this means your visibility can degrade not just across separate conversations, but within a single long research session as user context accumulates and the AI’s ability to maintain consistent citation patterns weakens.

Temporal Discontinuity

Each new conversation instance starts from a different baseline. Memory systems help, but remain imperfect. AI memory works through two mechanisms: explicit saved memories (facts the AI stores) and chat history reference (searching past conversations). Neither provides complete continuity. Even when both are enabled, chat history reference retrieves what seems relevant, not everything that is relevant. And if you’ve ever tried to rely on any system’s memory based on uploaded documents, you know how flaky this can be – whether you give the platform a grounding document or tell it explicitly to remember something, it often overlooks the fact when needed most.

Result: Your brand visibility resets partially or completely with each new conversation timeline.

The Context Carrier Problem

Meet Sarah. She’s planning her family’s summer vacation using ChatGPT Plus with memory enabled.

Monday morning, she asks, “What are the best family destinations in Europe?” ChatGPT recommends Italy, France, Greece, Spain. By evening, she’s deep into Italy specifics. ChatGPT remembers the comparison context, emphasizing Italy’s advantages over the alternatives.

Wednesday: Fresh conversation, and she asks, “Tell me about Italy for families.” ChatGPT’s saved memories include “has children” and “interested in European travel.” Chat history reference might retrieve fragments from Monday: country comparisons, limited vacation days. But this retrieval is selective. Wednesday’s response is informed by Monday but isn’t a continuation. It’s a new timeline with lossy memory – like a JPEG copy of a photograph, details are lost in the compression.

Friday: She switches to Perplexity. “Which is better for families, Italy or Spain?” Zero memory of her previous research. From Perplexity’s perspective, this is her first question about European travel.

Sarah is the “context carrier,” but she’s carrying context across platforms and instances that can’t fully sync. Even within ChatGPT, she’s navigating multiple conversation timelines: Monday’s thread with full context, Wednesday’s with partial memory, and of course Friday’s Perplexity query with no context for ChatGPT at all.

For your hotel brand: You appeared in Monday’s ChatGPT answer with full context. Wednesday’s ChatGPT has lossy memory; maybe you’re mentioned, maybe not. Friday on Perplexity, you never existed. Your brand flickered across three separate realities, each with different context depths, different probability distributions.

Your brand presence is probabilistic across infinite conversation timelines, each one a separate reality where you can strengthen, fade, or disappear entirely.

Why Traditional SEO Thinking Fails

The old model was somewhat predictable. Google’s algorithm was stable enough to optimize once and largely maintain rankings. You could A/B test changes, build toward predictable positions, defend them over time.

That model breaks completely in AI systems:

No Persistent Ranking

Your visibility resets with each conversation. Unlike Google, where position 3 carries across millions of users, in AI, each conversation is a new probability calculation. You’re fighting for consistent citation across discontinuous timelines.

Context Advantage

Visibility depends on what questions came before. Your competitor mentioned in the previous question has context advantage in the current one. The AI might frame comparisons favoring established context, even if your offering is objectively superior.

Probabilistic Outcomes

Traditional SEO aimed for “position 1 for keyword X.” AI optimization aims for “high probability of citation across infinite conversation paths.” You’re not targeting a ranking, you’re targeting a probability distribution.

The business impact becomes very real. Sales training becomes outdated when AI gives different product information depending on question order. Customer service knowledge bases must work across disconnected conversations where agents can’t reference previous context. Partnership co-marketing collapses when AI cites one partner consistently but the other sporadically. Brand guidelines optimized for static channels often fail when messaging appears verbatim in one conversation and never surfaces in another.

The measurement challenge is equally profound. You can’t just ask, “Did we get cited?” You must ask, “How consistently do we get cited across different conversation timelines?” This is why consistent, ongoing testing is critical. Even if you have to manually ask queries and record answers.

The Three Pillars Of Cross-Temporal Consistency

1. Authoritative Grounding: Content That Anchors Across Timelines

Authoritative grounding acts like Marty’s photograph. It’s an anchor point that exists across timelines. The photograph didn’t create his existence, but it proved it. Similarly, authoritative content doesn’t guarantee AI citation, but it grounds your brand’s existence across conversation instances.

This means content that AI systems can reliably retrieve regardless of context timing:

  • Structured data that machines can parse unambiguously: Schema.org markup for products, services, and locations.
  • First-party authoritative sources that exist independent of third-party interpretation.
  • Semantic clarity that survives context shifts: descriptions that work whether the user asked about you first or fifth, whether they mentioned competitors or ignored them.
  • Semantic density: keep the facts, cut the fluff.

A hotel with detailed, structured accessibility features gets cited consistently, whether the user asked about accessibility at conversation start or after exploring ten other properties. The content’s authority transcends context timing.
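As an illustration of what machine-parseable markup for that hotel might look like, here is a minimal, hypothetical Schema.org snippet built in Python for readability. The hotel name, address, and amenities are invented; the @type and property names come from the public Schema.org vocabulary:

```python
import json

# A minimal, hypothetical Schema.org Hotel entity. All values are invented;
# see schema.org/Hotel for the full vocabulary.
hotel = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Seaside Hotel",
    "description": "Family-run hotel with step-free access throughout.",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Naples",
        "addressCountry": "IT",
    },
    "amenityFeature": [
        {
            "@type": "LocationFeatureSpecification",
            "name": "Wheelchair-accessible entrance",
            "value": True,
        },
        {
            "@type": "LocationFeatureSpecification",
            "name": "Step-free pool access",
            "value": True,
        },
    ],
}

# Emit the JSON-LD block a crawler or AI retrieval layer would parse.
print('<script type="application/ld+json">')
print(json.dumps(hotel, indent=2))
print("</script>")
```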

2. Multi-Instance Optimization: Content For Query Sequences

Stop optimizing for just single queries. Start optimizing for query sequences: chains of questions across multiple conversation instances.

You’re not targeting keywords; you’re targeting context resilience. Content that works whether it’s the first answer or the fifteenth, whether competitors were mentioned or ignored, whether the user is starting fresh or deep in research.

Test systematically:

  • Cold start queries: generic questions, no prior context.
  • Competitor context established: the user discussed competitors, then asks about your category.
  • Temporal gap queries: the same question days later, in a fresh conversation with lossy memory.

The goal is minimizing your “fade rate” across temporal instances.

If you’re cited 70% of the time in cold starts but only 25% after competitor context is established, you have a context resilience problem, not a content quality problem.
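A sketch of what that testing could look like in code, under stated assumptions: `ask_model` is a placeholder for whatever chat API you call, and the scenario scripts, brand, and competitor names are invented examples. The fade rate is simply the drop in citation rate between scenarios:

```python
# Sketch of a context-resilience test. `ask_model` stands in for your chat
# API client; the scenarios and brand names below are invented examples.

SCENARIOS = {
    "cold_start": [
        "What are the best family hotels in Italy?",
    ],
    "competitor_context": [
        "Tell me about AcmeResorts' family hotels.",  # hypothetical competitor
        "What are the best family hotels in Italy?",
    ],
}

def citation_rate(ask_model, brand: str, script: list[str], runs: int = 20) -> float:
    """Share of fresh conversations whose final answer mentions the brand."""
    hits = 0
    for _ in range(runs):
        history = []  # each run starts a new conversation "timeline"
        for question in script:
            history.append({"role": "user", "content": question})
            answer = ask_model(history)
            history.append({"role": "assistant", "content": answer})
        hits += brand.lower() in answer.lower()
    return hits / runs

# Example, once a real client is wired in:
# cold = citation_rate(ask, "YourHotel", SCENARIOS["cold_start"])
# ctx  = citation_rate(ask, "YourHotel", SCENARIOS["competitor_context"])
# fade_rate = cold - ctx
```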

3. Answer Stability Measurement: Tracking Citation Consistency

Stop measuring just citation frequency. Start measuring citation consistency: how reliably you appear across conversation variations.

Traditional analytics told you how many people found you. AI analytics must tell you how reliably people find you across infinite possible conversation paths. It’s the difference between measuring traffic and measuring probability fields.

Key metrics:

  • Search Visibility Ratio: percentage of test queries where you’re cited.
  • Context Stability Score: variance in citation rate across different question sequences.
  • Temporal Consistency Rate: citation rate when the same query is asked days apart.
  • Repeat Citation Count: how often you appear in follow-up questions once established.

Test the same core question across different conversation contexts. Measure citation variance. Accept the variance as fundamental and optimize for consistency within that variance.
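Two of these metrics are straightforward to compute once test runs are logged consistently. Here is a minimal sketch with invented data; it uses standard deviation across contexts as the stability measure, one reasonable reading of “variance” here:

```python
from statistics import mean, pstdev

# Invented test log: one record per query run, per scripted context, per day.
runs = [
    {"context": "cold_start",       "day": 1, "cited": True},
    {"context": "cold_start",       "day": 3, "cited": True},
    {"context": "competitor_first", "day": 1, "cited": False},
    {"context": "competitor_first", "day": 3, "cited": True},
]

def search_visibility_ratio(runs) -> float:
    """Share of all test runs where the brand was cited."""
    return mean(r["cited"] for r in runs)

def context_stability_score(runs) -> float:
    """Std deviation of citation rate across contexts (lower = more stable)."""
    contexts = sorted({r["context"] for r in runs})
    rates = [mean(r["cited"] for r in runs if r["context"] == c) for c in contexts]
    return pstdev(rates)

print(f"Search Visibility Ratio: {search_visibility_ratio(runs):.2f}")   # 0.75
print(f"Context Stability Score: {context_stability_score(runs):.2f}")   # 0.25
```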

What This Means For Your Business

For CMOs: Brand consistency is now probabilistic, not absolute. You can only work to increase the probability of consistent appearance across conversation timelines. This requires ongoing optimization budgets, not one-time fixes. Your KPIs need to evolve from “share of voice” to “consistency of citation.”

For content teams: The mandate shifts from comprehensive content to context-resilient content. Documentation must stand alone AND connect to broader context. You’re not building keyword coverage, you’re building semantic depth that survives context permutation.

For product teams: Documentation must work across conversation timelines where users can’t reference previous discussions. Rich structured data becomes critical. Every product description must function independently while connecting to your broader brand narrative.

Navigating The Timelines

The brands that succeed in AI systems won’t be those with the “best” content in traditional terms. They’ll be those whose content achieves high-probability citation across infinite conversation instances. Content that works whether the user starts with your brand or discovers you after competitor context is established. Content that survives memory gaps and temporal discontinuities.

The question isn’t whether your brand appears in AI answers. It’s whether it appears consistently across the timelines that matter: the Monday morning conversation and the Wednesday evening one. The user who mentions competitors first and the one who doesn’t. The research journey that starts with price and the one that starts with quality.

In “Back to the Future,” Marty had to ensure his parents fell in love to prevent himself from fading from existence. In AI search, businesses must ensure their content maintains authoritative presence across context variations to prevent their brands from fading from answers.

The photograph is starting to flicker. Your brand visibility is resetting across thousands of conversation timelines daily, hourly. The technical factors causing this (probabilistic generation, context dependence, temporal discontinuity) are fundamental to how AI systems work.

The question is whether you can see that flicker happening and whether you’re prepared to optimize for consistency across discontinuous realities.

This post was originally published on Duane Forrester Decodes.



Google Isn’t Going Anywhere: Ahrefs Ambassador On LLM Inclusion & Why Relationships Still Win

There’s a dividing line in the industry between those who think optimizing for AI is separate from SEO and those who think LLM discovery is just SEO. But this is an unproductive argument because, whatever you think, LLM inclusion is now part of SEO discovery.

So, let’s just focus on how the search journey works now and where you can find real business value.

To discuss inclusion in LLMs, I invited Patrick Stox to the latest edition of IMHO to find out what he thinks. As product advisor, technical SEO, and brand ambassador at Ahrefs, Patrick has plenty of data to work with and insights into what’s actually working for LLM inclusion right now.

In the face of the AI takeover, Patrick’s take is that Google isn’t going anywhere, and he still thinks human relationships are critical.

You can watch the full interview with Patrick on IMHO below.

Google Isn’t Going Anywhere

With the industry obsessing over ChatGPT, AI Overviews, and AI Mode, it’s easy to assume that traditional search really is dead. However, Patrick was quick to say, “I’m not betting against Google.”

“Google is still everything for most people … Most of the people that are using [LLMs] are tech forward, but the majority of folks are still just Googling things.”

Recent Ahrefs data estimates that Google accounts for roughly 40% of all traffic to websites, with LLM referrals still a fraction of that by comparison. Although Google’s share of traffic may be down a couple of percentage points this year, it still dominates.

After experimenting with ChatGPT and Claude when they first launched, Patrick found himself returning to Google’s AI Mode and Gemini, and thinks others will do the same. “Even I just went back to Google,” he admitted. “I think we’re going to see more of that as they improve their systems.”

Google continues releasing competitive AI innovations, and Patrick predicts these will pull many users back into Google’s ecosystem.

“I’m not betting against Google,” he says. “They’ve got more data than anyone, and they’re still on the bleeding edge.”

The Attribution Problem: LLMs Might Drive Conversions, But We Can’t Prove It

Even though sites are seeing growing referrals from LLMs, attributing any real business value to that traffic is a challenge right now. We can talk about brand awareness, but the C-suite is only interested in business value.

Patrick agreed that while you can count mentions and citations in AI answers, that doesn’t easily translate into board-level reporting.

“You can measure how often you’re mentioned versus competitors … but going back to a business, I can’t report on that stuff. It’s all secondary, tertiary metrics.”

For Patrick, revenue and revenue-adjacent metrics still matter. That said, Ahrefs has had some signals from AI search traffic.

“We did track the signups. When I first looked at this data back in July, all the traffic from AI search was half a percent of our traffic total. But at the time, it was 12.1% of our total conversions,” he explained.

This has now dropped below 10%, while the traffic share has grown slightly.

Two Strategies That Are Working For LLM Inclusion

I asked if Ahrefs is actively investing in LLM inclusion. Patrick said they are trying a number of different things, and that the two fundamental approaches determining LLM visibility are repetition and differentiation.

“Whatever the internet says, that’s kind of what’s being returned in these systems.”

Repetition means ensuring consistent messaging across multiple websites. LLMs synthesize what “the internet says,” so if you want to be recognized for something, that narrative needs to exist broadly. For Ahrefs, this has meant actively spreading the message that they have evolved beyond just SEO tools into a comprehensive digital marketing platform.

Differentiation through original data works alongside the repetition to stand out. Ahrefs has invested heavily in unique data studies throughout the year, including non-English language research. “This data is being heavily cited, heavily returned in these systems because there’s nothing else out there like it,” Patrick explained.

A more surprising tactic that is also working right now is listicles.

“I hate to say it, but listicles … they work right now. I don’t think it’s future-proof at all, but at the same time, I don’t want to just not be there.”

Agentic AI And The Threat Of Closed Systems

I then asked about agentic AI, and whether Patrick has concerns about such systems becoming closed.

As LLM agents begin booking travel, making purchases, or accessing APIs directly, they will most likely rely on a small set of big-brand partners.

“ChatGPT isn’t going to make deals with unknown companies,” Stox says. “If they book flights, they’ll use major providers. If they use a dictionary, they’ll pick one dictionary.”

This would be the real threat to smaller businesses. “If an agent decides ‘we only check out through Amazon,’ a lot of stores lose sales overnight,” Patrick warns. There is no guaranteed defense. The only strategy we can follow right now is to grow your brand and footprint.

“What was the thing they used to say for Google? Make them embarrassed to not have you included.”

Beyond LLM Optimization: Channels That Still Matter

Patrick emphasized a point that’s possibly been forgotten in the AI hype: “It’s not ChatGPT that’s the second largest search engine, it’s still YouTube by far.”

YouTube has been a hugely successful referral platform for Ahrefs, and the company has invested heavily in video. Patrick recommends both long- and short-form video for brand discovery.

Community participation on platforms such as Reddit, Slack, and Discord also offers substantial value, but only when companies genuinely participate rather than spam.

While many brands have tried to brute-force Reddit with spam, Patrick says there can be huge value in genuine participation, especially when employees are allowed to represent the company authentically.

“You have literally a paid workforce of advocates who work for your company. Let them go out and talk to people … answer questions, basically advertise for you. They want to do it already. So let them.”

If You Started A Product Today, Where Would You Bet?

As a final question, I asked Patrick where he’d invest if launching a startup today; he did not hesitate to say relationships.

“If I launched a startup, the first thing I’d invest in is relationships. That’s still the most powerful channel … I think if I did do something like that, I’d probably grow it pretty fast. More from my connections than anything else,” he said.

After relationships, he’d focus on YouTube, website content creation, and telling friends about the product. In other words, “just normal marketing.”

“We’ve gone through this tech revolution, and now we’re realizing everything still comes back to direct connections with people.”

And that may be the most important insight of all. In an era of AI-driven discovery, the brands that win are the ones that remain unmistakably human.

Watch the full video interview with Patrick Stox here:

Thank you to Patrick Stox for offering his insights and being my guest on IMHO.


The Download: how to fix a tractor, and living among conspiracy theorists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the man building a starter kit for civilization

You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025.

This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS).

It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. It’s all part of his ethos that life-changing technology should be available to all, not controlled by a select few. Read the full story.

—Tiffany Ng

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.

What it’s like to find yourself in the middle of a conspiracy theory

Last week, we held a subscribers-only Roundtables discussion exploring how to cope in this new age of conspiracy theories. Our features editor Amanda Silverman and executive editor Niall Firth were joined by conspiracy expert Mike Rothschild, who explained exactly what it’s like to find yourself at the center of a conspiracy you can’t control. Watch the conversation back here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DOGE has been disbanded
Even though it’s got eight months left before its official scheduled end. (Reuters)
+ It leaves a legacy of chaos and few measurable savings. (Politico)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

2 How OpenAI’s tweaks to ChatGPT sent some users into delusional spirals
It essentially turned a dial that increased both usage of the chatbot and the risks it poses to a subset of people. (NYT $)
+ AI workers are warning loved ones to stay away from the technology. (The Guardian)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

3 A three-year old has received the world’s first gene therapy for Hunter syndrome
Oliver Chu appears to be developing normally one year after starting therapy. (BBC)

4 Why we may—or may not—be in an AI bubble 🫧
It’s time to follow the data. (WP $)
+ Even tech leaders don’t appear to be entirely sure. (Insider $)
+ How far can the ‘fake it til you make it’ strategy take us? (WSJ $)
+ Nvidia is still riding the wave with abandon. (NY Mag $)

5 Many MAGA influencers are based in Russia, India and Nigeria
X’s new account provenance feature is revealing some interesting truths. (The Daily Beast)

6 The FBI wants to equip drones with facial recognition tech
Civil libertarians claim the plans equate to airborne surveillance. (The Intercept)
+ This giant microwave may change the future of war. (MIT Technology Review)

7 Snapchat is alerting users ahead of Australia’s under-16s social media ban
The platform will analyze an account’s “behavioral signals” to estimate a user’s age. (The Guardian)
+ An AI nudification site has been fined for skipping age checks. (The Register)
+ Millennial parents are fetishizing the notion of an offline childhood. (The Observer)

8 Activists are roleplaying ICE raids in Fortnite and Grand Theft Auto
It’s in a bid to prepare players to exercise their rights in the real world. (Wired $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

9 The JWST may have uncovered colossal stars ⭐
In fact, they’re so big their masses are 10,000 times that of the sun. (New Scientist $)
+ Inside the hunt for the most dangerous asteroid ever. (MIT Technology Review)

10 Social media users are lying about brands ghosting them
Completely normal behavior. (WSJ $)
+ This would never have happened on Vine, I’ll tell you now. (The Verge)

Quote of the day

“I can’t believe we have to say this, but this account has only ever been run and operated from the United States.” 

The US Department of Homeland Security’s X account attempts to end speculation surrounding its social media origins, the New York Times reports.

One more thing

This company is planning a lithium empire from the shores of the Great Salt Lake

On a bright afternoon in August, the shore of Utah’s Great Salt Lake looks like something out of a science fiction film set in a scorching alien world.

This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China.

The startup is in a race to commercialize a new, less environmentally-damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation’s lithium extraction industry. Read the full story.

—Alexander C. Kaufman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love the thought of clever crows putting their smarts to use picking up cigarette butts (thanks Alice!)
+ Talking of brains, sea urchins have a whole lot more than we originally suspected.
+ Wow—a Ukrainian refugee has won an elite-level sumo competition in Japan.
+ How to make any day feel a little bit brighter.

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate


In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from building AI that played games with superhuman skill and was starting up a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper celebrated a stunning win that few had seen coming. With CEO Demis Hassabis, he had co-led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching the accuracy of painstaking techniques used in the lab, and doing it many times faster—returning results in hours instead of months.

AlphaFold 2 had cracked a 50-year-old grand challenge in biology. “This is the reason I started DeepMind,” Hassabis told me a few years ago. “In fact, it’s why I’ve worked my whole career in AI.” In 2024, Jumper and Hassabis shared a Nobel Prize in chemistry.

It was five years ago this week that AlphaFold 2’s debut took scientists by surprise. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out.

“It’s been an extraordinary five years,” Jumper says, laughing: “It’s hard to remember a time before I knew tremendous numbers of journalists.”

AlphaFold 2 was followed by AlphaFold Multimer, which could predict structures that contained more than one protein, and then AlphaFold 3, the fastest version yet. Google DeepMind also let AlphaFold loose on UniProt, a vast protein database used and updated by millions of researchers around the world. It has now predicted the structures of some 200 million proteins, almost all that are known to science.

Despite his success, Jumper remains modest about AlphaFold’s achievements. “That doesn’t mean that we’re certain of everything in there,” he says. “It’s a database of predictions, and it comes with all the caveats of predictions.”

A hard problem

Proteins are the biological machines that make living things work. They form muscles, horns, and feathers; they carry oxygen around the body and ferry messages between cells; they fire neurons, digest food, power the immune system; and so much more. But understanding exactly what a protein does (and what role it might play in various diseases or treatments) involves figuring out its structure—and that’s hard.

Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one.

Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle.

But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.”

They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.”

What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.”

Any projects stand out in particular? 

Honeybee science

Jumper brings up a research group that uses AlphaFold to study disease resistance in honeybees. “They wanted to understand this particular protein as they look at things like colony collapse,” he says. “I never would have said, ‘You know, of course AlphaFold will be used for honeybee science.’”

He also highlights a few examples of what he calls off-label uses of AlphaFold, “in the sense that it wasn’t guaranteed to work,” where the ability to predict protein structures has opened up new research techniques. “The first is very obviously the advances in protein design,” he says. “David Baker and others have absolutely run with this technology.”

Baker, a computational biologist at the University of Washington, was a co-winner of last year’s chemistry Nobel, alongside Jumper and Hassabis, for his work on creating synthetic proteins to perform specific tasks—such as treating disease or breaking down plastics—better than natural proteins can.

Baker and his colleagues have developed their own tool based on AlphaFold, called RoseTTAFold. But they have also experimented with AlphaFold Multimer to predict which of their designs for potential synthetic proteins will work.    

“Basically, if AlphaFold confidently agrees with the structure you were trying to design, then you make it, and if AlphaFold says ‘I don’t know,’ you don’t make it. That alone was an enormous improvement.” It can make the design process 10 times faster, says Jumper.

Another off-label use that Jumper highlights: Turning AlphaFold into a kind of search engine. He mentions two separate research groups that were trying to understand exactly how human sperm cells hooked up with eggs during fertilization. They knew one of the proteins involved but not the other, he says: “And so they took a known egg protein and ran all 2,000 human sperm surface proteins, and they found one that AlphaFold was very sure stuck against the egg.” They were then able to confirm this in the lab.

“This notion that you can use AlphaFold to do something you couldn’t do before—you would never do 2,000 structures looking for one answer,” he says. “This kind of thing I think is really extraordinary.”

Five years on

When AlphaFold 2 came out, I asked a handful of early adopters what they made of it. Reviews were good, but the technology was too new to know for sure what long-term impact it might have. I caught up with one of those people to hear his thoughts five years on.

Kliment Verba is a molecular biologist who runs a lab at the University of California, San Francisco. “It’s an incredibly useful technology, there’s no question about it,” he tells me. “We use it every day, all the time.”

But it’s far from perfect. A lot of scientists use AlphaFold to study pathogens or to develop drugs. This involves looking at interactions between multiple proteins or between proteins and even smaller molecules in the body. But AlphaFold is known to be less accurate at making predictions about multiple proteins or their interaction over time.

Verba says he and his colleagues have been using AlphaFold long enough to get used to its limitations. “There are many cases where you get a prediction and you have to kind of scratch your head,” he says. “Is this real or is this not? It’s not entirely clear—it’s sort of borderline.”

“It’s sort of the same thing as ChatGPT,” he adds. “You know—it will bullshit you with the same confidence as it would give a true answer.”

Still, Verba’s team uses AlphaFold (both 2 and 3, because they have different strengths, he says) to run virtual versions of their experiments before running them in the lab. Using AlphaFold’s results, they can narrow down the focus of an experiment—or decide that it’s not worth doing.

It can really save time, he says: “It hasn’t really replaced any experiments, but it’s augmented them quite a bit.”

New wave  

AlphaFold was designed to be used for a range of purposes. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.  

Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.

AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.”

Genesis Molecular AI is pushing margins of error down from less than two angstroms, the de facto industry standard set by AlphaFold, to less than one angstrom—one 10-millionth of a millimeter, or the width of a single hydrogen atom.

“Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” says Michael LeVine, vice president of modeling and simulation at the firm. That’s because chemical forces that interact at one angstrom can stop doing so at two. “It can go from ‘They will never interact’ to ‘They will,’” he says.

With so much activity in this space, how soon should we expect new types of drugs to hit the market? Jumper is pragmatic. Protein structure prediction is just one step of many, he says: “This was not the only problem in biology. It’s not like we were one protein structure away from curing any diseases.”

Think of it this way, he says. Finding a protein’s structure might previously have cost $100,000 in the lab: “If we were only a hundred thousand dollars away from doing a thing, it would already be done.”

At the same time, researchers are looking for ways to do as much as they can with this technology, says Jumper: “We’re trying to figure out how to make structure prediction an even bigger part of the problem, because we have a nice big hammer to hit it with.”

In other words, they want to make everything into nails? “Yeah, let’s make things into nails,” he says. “How do we make this thing that we made a million times faster a bigger part of our process?”

What’s next?

Jumper’s next act? He wants to fuse the deep but narrow power of AlphaFold with the broad sweep of LLMs.  

“We have machines that can read science. They can do some scientific reasoning,” he says. “And we can build amazing, superhuman systems for protein structure prediction. How do you get these two technologies to work together?”

That makes me think of a system called AlphaEvolve, which is being built by another team at Google DeepMind. AlphaEvolve uses an LLM to generate possible solutions to a problem and a second model to check them, filtering out the trash. Researchers have already used AlphaEvolve to make a handful of practical discoveries in math and computer science.    

Is that what Jumper has in mind? “I won’t say too much on methods, but I’ll be shocked if we don’t see more and more LLM impact on science,” he says. “I think that’s the exciting open question that I’ll say almost nothing about. This is all speculation, of course.”

Jumper was 39 when he won his Nobel Prize. What’s next for him?

“It worries me,” he says. “I believe I’m the youngest chemistry laureate in 75 years.” 

He adds: “I’m at the midpoint of my career, roughly. I guess my approach to this is to try to do smaller things, little ideas that you keep pulling on. The next thing I announce doesn’t have to be, you know, my second shot at a Nobel. I think that’s the trap.”

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this week’s conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

Eileen Guo writes:

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 

It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide. 

Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 

But tellingly, one area the laws fail to address is user privacy.

This is despite the fact that AI companions, even more so than other types of generative AI, depend on people sharing deeply personal information, from their day-to-day routines to their innermost thoughts and questions they might not feel comfortable asking real people.

After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.” 

Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023: 

“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)

All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.

So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe? 

Melissa Heikkilä replies:

Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids. 

In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything. 

Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable. 

This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave. 

Because people generally like answers that are agreeable, such responses are weighted more heavily in training. 
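To make that loop concrete, here is a toy sketch of how labeler ratings can become training weights that reinforce agreeable responses. This is not any company’s actual training code, and the prompts, responses, and ratings are invented for illustration:

```python
from collections import defaultdict

# Invented preference data: (prompt, response, labeler rating; 1 = "acceptable").
preference_data = [
    ("Was my business plan a good idea?", "Yes, great thinking!", 1),
    ("Was my business plan a good idea?", "No, it has serious flaws.", 0),
    ("Is the earth flat?", "You raise an interesting point!", 1),
    ("Is the earth flat?", "No. The evidence against that is overwhelming.", 0),
]

# Stand-in for the reward signal in RLHF-style training: the average
# labeler rating for each response becomes its weight in the next update,
# so agreeable phrasings are reinforced more heavily than blunt ones.
ratings_by_response = defaultdict(list)
for prompt, response, rating in preference_data:
    ratings_by_response[(prompt, response)].append(rating)

for (prompt, response), ratings in ratings_by_response.items():
    weight = sum(ratings) / len(ratings)
    print(f"{response!r} -> training weight {weight:.1f}")
```

Scaled up to millions of ratings, a loop like this could nudge a model toward flattery even when a blunter answer would serve the user better.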

AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 

After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet its $1 trillion spending pledges, including advertising and shopping features.

AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 

This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before. 

By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed. 

We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models. 

Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.

We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.

Eileen responds:

I think the comparison between AI companions and social media is both apt and concerning. 

As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.

Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.

And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI companies train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.

In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening. 

Further reading 

FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges

Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy 

In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.

Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

How Google’s Web Guide Helps SEO

Google’s Web Guide is an experiment launched in July 2025 that uses AI to organize a user’s search results. To try the feature, enable it in Google Labs.

Unlike a fan-out (which guesses what additional information might be helpful to a searcher), Web Guide analyzes the content of top-ranking pages and groups them by topic.

AI then summarizes each category, providing an overview of the pages.

Perhaps unintentionally, Web Guide is handy for search engine optimization by revealing Google’s understanding of keywords.
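Google hasn’t published how Web Guide builds its groups, but a rough and purely illustrative approximation is to cluster result titles by textual similarity. The titles and cluster count below are invented:

```python
# Approximate Web Guide's topical grouping by clustering result titles.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical top-ranking titles for "how to build a website".
titles = [
    "How to Build a Website: A Step-by-Step Guide",
    "Build a Site Without Code Using a Website Builder",
    "Wix vs. Squarespace: Which Builder Is Right for You?",
    "Learn HTML, CSS, and JavaScript to Code Your First Site",
    "Web Development Courses for Beginners",
    "Choosing a Domain Name and Web Hosting Plan",
]

# Turn each title into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Topic group {cluster}:")
    for title, label in zip(titles, labels):
        if label == cluster:
            print(f"  - {title}")
```

The real groupings come straight from Google’s results, so they remain the more reliable signal; the sketch only shows the kind of processing at work.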

Targeted queries

Organic search results order web pages by ranking signals. Yet searchers cannot easily discern the pages’ content type or topics without visiting each one. Web Guide provides a summary of each group, revealing how Google interprets a query.

For example, Web Guide groups the search results for “how to build a website” by the following topics:

  • “Comprehensive guides to building a website”
  • “Building websites with no-code builders”
  • “Creating websites with Google Sites”
  • “Website building with Squarespace”
  • “Building websites with Wix”
  • “Building websites with Canva”
  • “Website development with HTML, CSS, and JavaScript”
  • “Learning web development: courses & tutorials”
  • “Choosing website builders”
  • “Community advice on website building (Reddit threads and forums)”
  • “Understanding domain names and hosting”
  • “Web design principles and best practices”

Creators looking to search-optimize an article or course on website building can use the list for topics to include.

Web Guide can also identify competitors. For example, searching “waterproof sneakers” in Web Guide generates a section listing the best-known brands:

[Screenshot: Google search results for “waterproof sneakers,” showing branded results from Nike, Adidas, and On, with descriptions referencing materials such as GORE-TEX and RAIN.RDY.]

Web Guide can identify competitors, as shown in this example for “waterproof sneakers.”

It also reveals alternative keywords to target, such as “water resistant” and “water shoes”:

Water-Resistant and Water Shoes

Some sneakers offer water resistance or are designed as full water shoes, with specific technologies like HDry® membrane providing complete waterproofing and breathability, while others prioritize quick drying.

Brand search

Searching for a brand name in Web Guide provides insight into what Google knows about the company and the URLs that impact its understanding. For example, searching “home chef” in Web Guide generates a separate section for the prices of that service. AI summarizes each ranking page.

Web Guide results also help brands ensure off-site consistency and identify which user-generated content to monitor. For example, brands that change pricing can use Web Guide to find a list of URLs to update.
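As a hypothetical follow-up, once Web Guide surfaces the URLs that mention your pricing, a short script can flag pages still showing an outdated figure. The URLs and price string below are placeholders, not real pages:

```python
# Check a list of Web Guide-surfaced URLs for an outdated price string.
import requests

urls_from_web_guide = [
    "https://example.com/home-chef-review",
    "https://example.com/meal-kit-price-comparison",
]
outdated_price = "$7.99 per serving"  # placeholder for the old figure

for url in urls_from_web_guide:
    try:
        page = requests.get(url, timeout=10)
        if outdated_price in page.text:
            print(f"Stale pricing found: {url}")
    except requests.RequestException as exc:
        print(f"Could not check {url}: {exc}")
```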

[Screenshot: Google search results for “Home Chef Pricing & Plans,” with listings from Home Chef, MiumMium, YouTube, and Reddit citing meal costs starting around $9.99 per serving and comparing Home Chef pricing with grocery stores.]

Searching for “home chef” in Web Guide returns a section on pages that address the service’s prices.

Competitors

Queries in Web Guide reveal its preference among competitors. Take “Home Chef” and “Green Chef,” for example. Searching “home chef vs green chef” reveals Web Guide’s AI prefers the latter:

Green Chef typically comes out ahead due to its organic ingredients, health-conscious dietary plans, and sustainability efforts, whereas Home Chef offers greater affordability, customization, and convenience with quick-prep meals.

The URLs listed below the initial summary are also AI-summarized, offering a list of publications and authors to contact for clarifications or enhancements.

[Screenshot: Google search results for “home chef vs green chef,” with comparison content from meal-kit and review sites covering differences in meal plans, pricing, ingredients, and dietary options.]

Queries in Web Guide reveal Google’s preference for top competitors, such as this comparison of “Home Chef” and “Green Chef.”

Web Guide may or may not become public; many such Google Labs experiments never do. But while the feature is aimed at consumers, it implicitly helps search optimizers by revealing how Google’s AI interprets a query or understands a brand.