Google AI Mode Link Update, Click Share Data & ChatGPT Fan-Outs – SEO Pulse

Welcome to the week’s SEO Pulse: updates affect how links appear in AI search results, where organic clicks are going, and which languages ChatGPT uses to find sources.

Here’s what matters for you and your work.

Google Redesigns Links In AI Overviews And AI Mode

Robby Stein, VP of Product for Google Search, announced on X that AI Overviews and AI Mode are getting a redesigned link experience on both desktop and mobile.

Key Facts: On desktop, groups of links will now appear in a pop-up when you hover over them, showing site names, favicons, and short descriptions. Google is also rolling out more descriptive and prominent link icons across desktop and mobile.

Why This Matters

This is the latest in a series of link-visibility updates Stein has announced since last summer, when he called showing more inline links Google’s “north star” for AI search. The pattern is consistent. Google keeps iterating on how links surface inside AI-generated responses.

The hover pop-up is a new interaction pattern for AI Overviews. Instead of small inline citations that are easy to miss, users now get a preview card with enough context to decide whether to click. That changes the calculus for publishers wondering how much traffic AI results actually send.

What The Industry Is Saying

SEO consultant Lily Ray (Amsive) wrote on X that she had been seeing the new link cards and was “REALLY hoping it sticks.”

Read our full coverage: Google Says Links Will Be More Visible In AI Overviews

43% Of ChatGPT Fan-Out Queries For Non-English Prompts Run In English

A report from AI search analytics firm Peec AI found that a large share of ChatGPT’s fan-out queries run in English, even when the original prompt was in another language.

Key Facts: Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across non-English prompts analyzed, 43% of the fan-out queries ran in English. Nearly 78% of non-English prompt sessions included at least one English-language fan-out query.

Why This Matters

When ChatGPT Search builds an answer, it can rewrite the user’s prompt into “one or more targeted queries,” according to OpenAI’s documentation. OpenAI does not describe how language is chosen for those rewritten queries. Peec AI’s data suggests that English gets inserted into the process even when the user’s prompt and location are clearly non-English.

SEO and content teams working in non-English markets may face a disadvantage in ChatGPT’s source selection that doesn’t map to traditional ranking signals. Language filtering appears to happen before citation signals come into play.

Read our full coverage: ChatGPT Search Often Switches To English In Fan-Out Queries: Report

Google’s Search Relations Team Can’t Say You Still Need A Website

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a definitive yes.

Key Facts: In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question. Both acknowledged that websites still offer advantages, including data sovereignty, control over monetization, and freedom from platform content moderation. But neither argued that the open web offers something irreplaceable.

Why This Matters

Google Search is built around crawling and indexing web content. The fact that Google’s own Search Relations team treats “do I need a website?” as a business decision rather than an obvious yes is worth noting.

Illyes offered the closest thing to a position. He said that if you want to make information available to as many people as possible, a website is probably still the way to go. But he called it a personal opinion, not a recommendation.

The conversation aligns with increasingly fragmented user journeys, now spanning AI chatbots, social feeds, community platforms, and traditional search. For practitioners advising clients on building websites, the answer increasingly depends on where the audience is, not where it used to be.

Read our full coverage: Google’s Search Relations Team Debates If You Still Need A Website

Theme Of The Week: The Ground Keeps Moving Under Organic

Each story this week shows a different force pulling attention, clicks, or visibility away from the organic channel as practitioners have known it.

Google is redesigning how links appear in AI responses, acknowledging the traffic concern. ChatGPT’s background queries introduce a language filter that can exclude non-English content before relevance signals even apply. And Google’s own team won’t say that websites are the default answer for visibility anymore.

These stories reinforce the case for spreading your content across different platforms to reach more people, and for tracking where your clicks really come from.



Featured Image: TippaPatt/Shutterstock; Paulo Bobita/Search Engine Journal

Bing AI Citation Tracking, Hidden HTTP Homepages & Pages Fall Under Crawl Limit – SEO Pulse

Welcome to the week’s SEO Pulse: updates cover how you track AI visibility, how a ghost page can break your site name in search results, and what new crawl data reveals about Googlebot’s file size limits.

Here’s what matters for you and your work.

Bing Webmaster Tools Adds AI Citation Dashboard

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving publishers visibility into how often their content gets cited in Copilot and AI-generated answers. The feature is now in public preview.

Key Facts: The dashboard tracks total citations, average cited pages per day, page-level citation activity, and grounding queries. Grounding queries show the phrases AI used when retrieving your content for answers.

Why This Matters

Bing is now offering a dedicated dashboard for AI citation visibility. Google includes AI Overviews and AI Mode activity in Search Console’s overall Performance reporting, but it doesn’t break out a separate report or provide citation-style URL counts. AI Overviews also assign all linked pages to a single position, which limits what you can learn about individual page performance in AI answers.

Bing’s dashboard goes further by tracking which pages get cited, how often, and what phrases triggered the citation. The missing piece is click data. The dashboard shows when your content is cited, but not whether those citations drive traffic.

Now you can confirm which pages are referenced in AI answers and identify patterns in grounding queries, but connecting AI visibility to business outcomes still requires combining this data with your own analytics.

What SEO Professionals Are Saying

Wil Reynolds, founder of Seer Interactive, celebrated the feature on X and focused on the new grounding queries data:

“Bing is now giving you grounding queries in Bing Webmaster tools!! Just confirmed, now I gotta understand what we’re getting from them, what it means and how to use it.”

Koray Tuğberk GÜBÜR, founder of Holistic SEO & Digital, compared it directly to Google’s tooling on X:

“Microsoft Bing Webmaster Tools has always been more useful and efficient than Google Search Console, and once again, they’ve proven their commitment to transparency.”

Fabrice Canel, principal product manager at Microsoft Bing, framed the launch on X as a bridge between traditional and AI-driven optimization:

“Publishers can now see how their content shows up in the AI era. GEO meets SEO, power your strategy with real signals.”

The reaction across social media centered on a shared frustration. This is the data practitioners have been asking for, but it comes from Bing rather than Google. Several people expressed hope that Google and OpenAI would follow with comparable reporting.

Read our full coverage: Bing Webmaster Tools Adds AI Citation Performance Data

Hidden HTTP Homepage Can Break Your Site Name In Google

Google’s John Mueller shared a troubleshooting case on Bluesky where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results. The issue is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, hiding the problematic page from normal browsing.

Key Facts: The site used HTTPS, but a server-default HTTP homepage was still accessible. Chrome’s auto-upgrade meant the publisher never saw the HTTP version, but Googlebot doesn’t follow Chrome’s upgrade behavior, so it was pulling site name and favicon signals from the wrong page.

Why This Matters

This is the kind of problem you wouldn’t find in a standard site audit because your browser never shows it. If your site name or favicon in search results doesn’t match what you expect, and your HTTPS homepage looks correct, the HTTP version of your domain is worth checking.

Mueller suggested running curl from the command line to see the raw HTTP response without Chrome’s auto-upgrade. If it returns a server-default page instead of your actual homepage, that’s the source of the problem. You can also use the URL Inspection tool in Search Console with a Live Test to see what Google retrieved and rendered.
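As a minimal sketch of that check (with example.com standing in for your domain), you can compare the two responses directly:

```
# Fetch the HTTP version directly, bypassing Chrome's auto-upgrade.
# -s silences progress output; -D - prints the response headers.
curl -s -D - http://example.com/ -o http-homepage.html

# Fetch the HTTPS homepage you intended to serve.
curl -s https://example.com/ -o https-homepage.html

# Any difference here is content Googlebot sees that you don't.
diff http-homepage.html https-homepage.html
```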

Google’s documentation on site names specifically mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what happens when an HTTP version contains content different from the HTTPS homepage you intended.
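For illustration, this is the kind of WebSite structured data Google’s site name documentation describes; serving identical markup on the HTTP and HTTPS versions helps Google reconcile them (the names and URL here are placeholders):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Example Site Name",
  "alternateName": "ESN",
  "url": "https://example.com/"
}
</script>
```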

What People Are Saying

Mueller described the case on Bluesky as “a weird one,” noting that the core problem is invisible in normal browsing:

“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”

The case highlights a pattern where browser features hide what crawlers see: Chrome’s auto-upgrade, reader modes, client-side rendering, and JavaScript-dependent content. To debug site name and favicon issues, check the server response directly, not just what renders in the browser.

Read our full coverage: Hidden HTTP Page Can Cause Site Name Problems In Google

New Data Shows Most Pages Fit Well Within Googlebot’s Crawl Limit

New research based on real-world webpages suggests most pages sit well below Googlebot’s 2 MB fetch cutoff. The data, analyzed by Search Engine Journal’s Roger Montti, draws on HTTP Archive measurements to put the crawl limit question into practical context.

Key Facts: HTTP Archive data suggests most pages are well below 2 MB. Google recently clarified in updated documentation that Googlebot’s limit for supported file types is 2 MB, while PDFs get a 64 MB limit.

Why This Matters

The crawl limit question has been circulating in technical SEO discussions, particularly after Google updated its Googlebot documentation earlier this month.

The new data answers the practical question that documentation alone couldn’t. Does the 2 MB limit matter for your pages? For most sites, the answer is no. Standard webpages, even content-heavy ones, rarely approach that threshold.

Where the limit could matter is on pages with extremely bloated markup, inline scripts, or embedded data that inflates HTML size beyond typical ranges.
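If you want to know where a page sits relative to the cutoff, a quick sketch with curl (example.com and the file path are placeholders) reports the downloaded size in bytes:

```
# Print the raw HTML size in bytes; 2 MB is 2,097,152 bytes.
curl -so /dev/null -w '%{size_download}\n' https://example.com/

# The same check works for CSS and JavaScript, which share the limit.
curl -so /dev/null -w '%{size_download}\n' https://example.com/main.js
```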

The broader pattern here is Google making its crawling systems more transparent. Moving documentation to a standalone crawling site, clarifying which limits apply to which crawlers, and now having real-world data to validate those limits gives a clearer picture of what Googlebot handles.

What Technical SEO Professionals Are Saying

Dave Smart, technical SEO consultant at Tame the Bots and a Google Search Central Diamond Product Expert, put the numbers in perspective in a LinkedIn post:

“Googlebot will only fetch the first 2 MB of the initial html (or other resource like CSS, JavaScript), which seems like a huge reduction from 15 MB previously reported, but honestly 2 MB is still huge.”

Smart followed up by updating his Tame the Bots fetch and render tool to simulate the cutoff. In a Bluesky post, he added a caveat about the practical risk:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to cap text based files to 2 MB to simulate this.”

Google’s John Mueller endorsed the tool on Bluesky, writing:

“If you’re curious about the 2MB Googlebot HTML fetch limit, here’s a way to check.”

Mueller also shared Web Almanac data on Reddit to put the limit in context:

“The median on mobile is at 33kb, the 90-percentile is at 151kb. This means 90% of the pages out there have less than 151kb HTML.”

Roger Montti, writing for Search Engine Journal, reached a similar conclusion after reviewing the HTTP Archive data. Montti noted that the data based on real websites shows most sites are well under the limit, and called it “safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.”

Read our full coverage: New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough

Theme Of The Week: The Diagnostic Gap

Each story this week points to something practitioners couldn’t see before or were checking the wrong way.

Bing’s AI citation dashboard fills a measurement gap that has existed since AI answers started citing website content. Mueller’s HTTP homepage case reveals an invisible page that standard site audits and browser checks would miss entirely because Chrome hides it. And the Googlebot crawl limit data answers a question that documentation updates raised, but couldn’t resolve on their own.

The connecting thread isn’t that these are new problems. AI citations have been happening without measurement tools. Ghost HTTP pages have been confusing site name systems since Google introduced the feature. And crawl limits have been listed in Google’s docs for years without real-world validation. What changed this week is that each gap got a concrete diagnostic: a dashboard, a curl command, and a dataset.

The takeaway is that the tools and data for understanding how search engines interact with your content are getting more specific. The challenge is knowing where to look.



Featured Image: Accogliente Design/Shutterstock

Discover Core Update, AI Mode Ads & Crawl Policy – SEO Pulse

Welcome to the week’s SEO Pulse: updates affect how Google ranks content in Discover, how it plans to monetize AI search, and what content you serve to bots.

Here’s what matters for you and your work.

Google Releases Discover-Only Core Update

Google launched the February 2026 Discover core update, a broad ranking change targeting the Discover feed rather than Search. The rollout may take up to two weeks.

Key Facts: The update is initially limited to English-language users in the United States. Google plans to expand it to more countries and languages, but hasn’t provided a timeline. Google described it as designed to “improve the quality of Discover overall.” Existing core update and Discover guidance apply.

Why This Matters For SEOs

Google has historically rolled Discover ranking changes into broader core updates that affected Search as well. Announcing a Discover-specific core update means rankings in the feed can now move without any corresponding change in Search results.

That distinction creates a monitoring problem. When you track performance in Search Console, you should check Discover traffic independently over the next two weeks. Traffic drops that look like a core update penalty may be Discover-only. Treating them as Search problems leads to the wrong diagnosis.

Discover traffic concentration has grown for publishers. NewzDash CEO John Shehata reported that Discover accounts for roughly 68% of Google-sourced traffic to news sites. A core update targeting that surface independently raises the stakes for any publisher relying on the feed.

Read our full coverage: Google Releases Discover-Focused Core Update

Alphabet Q4 Earnings Reveal AI Mode Monetization Plans

Alphabet reported Q4 2025 earnings, showing Search revenue grew 17% to $63 billion. The call included the first detailed look at how Google plans to monetize AI Mode.

Key Facts: CEO Sundar Pichai said AI Mode queries are three times longer than traditional searches. Chief Business Officer Philipp Schindler described the resulting ad inventory as reaching queries that were “previously challenging to monetize.” Google is testing ads below AI Mode responses.

Why This Matters For SEOs

The monetization details matter more than the revenue headline. Google is treating AI Mode as additive inventory, not a replacement for traditional search ads. Longer queries create new ad surfaces that didn’t exist when users typed three-word searches. For paid search practitioners, that means new campaign territory in conversational queries.

The metrics Google celebrated on this call describe users staying on Google longer. Google framed longer AI Mode sessions as a growth driver, and the monetization infrastructure follows that logic. The tradeoff to watch is referral traffic.

AI Mode creates a seamless path from AI Overviews, as detailed in our coverage last week. The earnings data suggest Google sees that containment as part of the growth story.

Read our full coverage: Alphabet Q4 2025: AI Mode Monetization Tests And Search Revenue Growth

Mueller Pushes Back On Serving Markdown To LLM Bots

Google Search Advocate John Mueller pushed back on the idea of serving Markdown files to LLM crawlers instead of standard HTML, calling the concept “a stupid idea” on Bluesky and raising technical concerns on Reddit.

Key Facts: A developer described plans to serve raw Markdown to AI bots to reduce token usage. Mueller questioned whether LLM bots can recognize Markdown on a website as anything other than a text file, or follow its links. He asked what would happen to internal linking, headers, and navigation. On Bluesky, he was more direct, calling the conversion “a stupid idea.”

Why This Matters For SEOs

The practice exists because developers assume LLMs process Markdown more efficiently than HTML. Mueller’s response treats this as a technical problem, not an optimization. Stripping pages to Markdown can remove the structure that bots need to understand relationships between pages.

Mueller’s guidance here is consistent with his earlier advice on multi-domain crawling and crawl slumps, and it fits a pattern of drawing clear lines around bot-specific content formats. He previously compared llms.txt to the keywords meta tag, and SE Ranking’s analysis of 300,000 domains found no connection between having an llms.txt file and LLM citation rates.

Read our full coverage: Google’s Mueller Calls Markdown-For-Bots Idea ‘A Stupid Idea’

Google Files Bugs Against WooCommerce Plugins For Crawl Issues

Google’s Search Relations team said on the Search Off the Record podcast that they filed bugs against WordPress plugins that generate unnecessary crawlable URLs through action parameters, such as add-to-cart links.

Key Facts: Certain plugins create URLs that Googlebot discovers and attempts to crawl. The result is wasted crawl budget on pages with no search value. Google filed a bug with WooCommerce and flagged other plugin issues that remain unfixed. The team’s response targeted plugin developers rather than expecting individual sites to fix the problem.

Why This Matters For SEOs

Google intervening at the plugin level is unusual. Normally, crawl efficiency falls on individual sites. Filing bugs upstream suggests the problem is widespread enough that one-off fixes won’t solve it.

Ecommerce sites running WooCommerce should audit their plugins for URL patterns that generate crawlable action parameters. Check your crawl stats in Search Console for URLs containing cart or checkout parameters that shouldn’t be indexed.
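One common mitigation, shown here as a sketch rather than a universal fix (the exact URL patterns depend on your plugins and permalink setup), is disallowing the action-parameter URLs in robots.txt:

```
# robots.txt — keep crawlers away from WooCommerce action URLs.
# Verify the patterns against your own crawl stats before deploying.
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /*?add-to-cart=
Disallow: /*&add-to-cart=
```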

Read our full coverage: Google’s Crawl Team Filed Bugs Against WordPress Plugins

LinkedIn Shares What Worked For AI Search Visibility

LinkedIn published findings from internal testing on what drives visibility in AI-generated search results. The company reported that non-brand, awareness-driven traffic declined by up to 60% across the industry for a subset of B2B topics.

Key Facts: LinkedIn’s testing found that structured content performed better in AI citations, particularly pages with named authors, visible credentials, and clear publication dates. The company is developing new analytics to identify a traffic source for LLM-driven visits and to monitor LLM bot behavior in CMS logs.
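As a hedged illustration of those attributes in markup form (the names, dates, and URL are placeholders, and LinkedIn didn’t publish an implementation), schema.org Article markup is one way to make authorship and publication dates machine-readable:

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example B2B Article",
  "datePublished": "2026-02-10",
  "dateModified": "2026-02-12",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Principal Analyst",
    "url": "https://example.com/authors/jane-doe"
  }
}
</script>
```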

Why This Matters For SEOs

What caught my attention is how much this overlaps with what AI platforms themselves are saying. Search Engine Journal’s Roger Montti recently interviewed Jesse Dwyer, head of communications at Perplexity. The AI platform’s own guidance on what drives citations lines up closely with what LinkedIn found. When both the cited source and the citing platform arrive at the same conclusions independently, that gives you something beyond speculation.

Read our full coverage: LinkedIn Shares What Works For AI Search Visibility

Theme Of The Week: Google Is Splitting The Dashboard

Every story this week points to the same realization. “Google” is no longer one thing to monitor.

Google is now announcing Discover core updates separately from Search core updates. AI Mode carries ad formats and checkout features that don’t exist in traditional results. Mueller drew a policy line around how bots consume content. Google filed crawl bugs upstream at the plugin level, and LinkedIn is building a separate measurement for AI-driven traffic.

A year ago, you could check one traffic graph in Search Console and get a reasonable picture. The picture now fragments across Discover, Search, AI Mode, and LLM-driven traffic. Ranking signals and update cycles differ, and the gaps between them haven’t been closed.

Top Stories Of The Week:

This week’s coverage spanned five developments across Discover updates, search monetization, crawl policy, and AI visibility.



Featured Image: Accogliente Design/Shutterstock

SEO Pulse: Google Explores AI Opt-Outs, Gemini 3 Powers AIOs

Welcome to this week’s SEO Pulse: updates affect publisher control over AI features, how AI Overviews process queries, and what AI model tradeoffs mean for content workflows.

Here’s what matters for you and your work.

Google Explores Letting Sites Opt Out Of AI Search Features

Google says it’s exploring updates that could let websites opt out of AI-powered search features. The blog post came the same day the UK’s Competition and Markets Authority opened a consultation on potential new requirements for Google Search.

Key facts: Ron Eden, principal, product management at Google, wrote that the company is “exploring updates to our controls to let sites specifically opt out of Search generative AI features.” Google provided no timeline, technical specifications, or firm commitment.

Why This Matters For SEOs

Publishers and regulators have spent the past year pushing back on AI Overviews. The UK’s Independent Publishers Alliance, Foxglove, and Movement for an Open Web filed a complaint with the CMA last July, asking for the ability to opt out of AI summaries without being removed from search entirely.

A BuzzStream report we covered earlier this month found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots that affect AI citations. Publishers are already voting with their robots.txt files. Google’s post suggests it’s responding to pressure from the ecosystem by exploring controls it previously didn’t offer.

The practical question is what “opt out of AI search features” would mean technically. It’s unclear whether this would cover AI Overviews, AI Mode, or both, and whether sites would lose visibility in those experiences or only be excluded from summaries.

What People Are Saying

Early reactions on LinkedIn focused on the regulatory context and what this could mean for publishers.

David Skok, CEO & editor-in-chief at The Logic, wrote on LinkedIn:

“For the first time, a major regulator is publicly consulting on a requirement that would allow publishers to opt out of having their content used in Google’s AI Overviews or in training AI models without being removed from general search results.”


Matthew Allsop, the CMA’s principal digital markets adviser, framed it as a “meaningful choice” issue, pointing to measures that would allow publishers to opt out of AI Overviews.

In SEO and publisher discussions, the focus has been on whether any opt-out comes with tradeoffs, and whether Google will provide reporting that shows where content appears across AI surfaces.

Read our full coverage: Google May Let Sites Opt Out Of AI Search Features

Google AI Overviews Now Powered By Gemini 3

Google is making Gemini 3 the default model for AI Overviews globally, in every market where the feature is available. The update also adds a direct path into AI Mode conversations.

Key facts: Robby Stein, VP of Product for Google Search, announced the rollout, saying AI Overviews now reach over 1 billion users. The Gemini 3 upgrade brings AI Overviews the same reasoning capabilities that power AI Mode.

Why This Matters For SEOs

The model upgrade and the seamless transition into AI Mode work together. Better reasoning means AI Overviews can handle more complex queries at the top of results. The follow-up prompt means those who want to go deeper can do so without leaving Google’s AI interfaces.

This creates a smoother path that keeps people inside Google’s AI experiences longer. Someone who sees your content cited in an AI Overview might previously have clicked through to your site. Now they can ask a follow-up question and stay in AI Mode, which may reduce click-through opportunities even when your content continues to be cited.

The seamless transition continues the pattern of Google handling more of the search journey within its own surfaces.

Read our full coverage: Google AI Overviews Now Powered By Gemini 3

Sam Altman Says OpenAI “Screwed Up” GPT-5.2 Writing Quality

Sam Altman said OpenAI “screwed up” GPT-5.2’s writing quality during a developer town hall Monday evening. He said future GPT-5.x versions will address the gap.

Key facts: When asked about user feedback that GPT-5.2 produces writing that’s “unwieldy” and “hard to read” compared to GPT-4.5, Altman was blunt: “I think we just screwed that up.” He explained that OpenAI made a deliberate choice to focus GPT-5.2’s development on technical capabilities, putting “most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing.”

Why This Matters For SEOs

If you use ChatGPT for content workflows, you may have noticed the change. GPT-5.2 handles complex reasoning tasks better but produces prose that reads more mechanical. Altman confirmed this wasn’t a bug but a tradeoff.

The admission clarifies what to expect from AI writing tools going forward. Model developers are making explicit choices about what to improve. Writing quality competes with coding, reasoning, and other technical benchmarks for development resources.

This means matching the tool to the task. GPT-5.2 might excel at research synthesis, data analysis, and technical documentation, but it can produce awkward prose for blog posts or marketing copy. GPT-4.5 often reads more naturally, even if it couldn’t handle the same complexity.

Altman said future GPT-5.x versions will “hopefully” be much better at writing than 4.5 was, but gave no timeline.

What People Are Saying

On social media, the reaction focused on what the admission reveals about AI development priorities. Some framed it as a transparency win, noting that most companies would have reframed the issue as a design choice rather than acknowledging a mistake. Others pointed to the tension between optimizing for benchmarks versus optimizing for practical writing quality.

Read our full coverage: Sam Altman Says OpenAI “Screwed Up” GPT-5.2 Writing Quality

Theme Of The Week: Control And Tradeoffs

Each story this week involves platforms making choices about what to prioritize and who gets to decide.

Google is exploring whether to give publishers more control over AI features, responding to a year of regulatory pressure and ecosystem pushback. The Gemini 3 rollout gives users a smoother AI experience while reducing control over where that journey ends. And Altman’s admission shows that even model development involves tradeoffs between competing capabilities.

This week, the theme is about understanding which levers you can pull. Publisher opt-out controls might eventually let you decide how your content appears in AI search. Model selection lets you match AI tools to specific tasks. But the broader direction of these platforms is outside your control, and the choices they make shape the environment you’re optimizing for.

Top Stories Of The Week:

This week’s coverage focused on three developments worth tracking.



Featured Image: Accogliente Design/Shutterstock

SEO Pulse: Google’s AI Mode Gets Personal, AI Bots Blocked, Domains Matter in Search

Welcome to this week’s SEO Pulse: updates affect how AI Mode personalizes answers, which AI bots can access your site, and why your domain choice still matters for search visibility.

Here’s what matters for you and your work.

Google Connects Gmail And Photos To AI Mode

Google is rolling out Personal Intelligence, a feature that connects Gmail and Google Photos to AI Mode in Search, delivering personalized responses based on users’ own data.

Key facts: The feature is available to Google AI Pro and AI Ultra subscribers who opt in. It launches as a Labs experiment for eligible users in the U.S. Google says it doesn’t train on users’ Gmail inbox or Photos library.

Why This Matters

This is the personal context feature Google promised at I/O but delayed until now. We covered the delay in December when Nick Fox, Google’s SVP of Knowledge and Information, said the feature was “still to come” with no public timeline.

For the 75 million daily active users Fox reported in AI Mode, this could reduce how much context you need to type to get tailored responses. Google’s examples include trip recommendations that factor in hotel bookings from Gmail and past travel photos, or coat suggestions that account for preferred brands and upcoming travel weather.

The SEO effects depend on how this changes query patterns. If users rely on Google pulling context from their email and photos instead of typing it, queries may get shorter and more ambiguous. That makes it harder to target long-tail searches with explicit intent signals.

What People Are Saying

The early social reaction is framing this as Google pushing AI Mode from “ask and answer” into “already knows your context.” Robby Stein, VP of Product at Google Search, positioned it as a more personal search experience driven by opt-in data connections.

On LinkedIn, the discussion quickly moved to trust and privacy tradeoffs. Michele Curtis, a content marketing specialist, framed personalization as something that only works when trust comes first.

Curtis wrote:

“Personalization only works when trust is architected before intelligence.”

Syed Shabih Haider, founder of Fluxxy AI, raised security concerns about connecting multiple apps.

Haider wrote:

“Personal Intelligence.. yeah the features/benefits look amazing.. but cant help but wonder about the data security. Once all apps are connected, the risk for breach becomes extremely high..”

Read our full coverage: Google Launches Personal Intelligence In AI Mode

AI Training Bots Lose Access While Search Bots Expand

Hostinger analyzed 66 billion bot requests across more than 5 million websites and found AI crawlers are following two different paths. Training bots are losing access as more sites block them. Search and assistant bots are expanding their reach.

Key facts: Hostinger reports an identical 55.67% average coverage for GPTBot and OAI-SearchBot, but their trajectories differ. GPTBot, which collects training data, fell from 84% to 12% over the measurement period. OAI-SearchBot, which powers ChatGPT search, reached the same average without that decline. Googlebot maintained 72% coverage. Apple’s bot reached 24.33%.

Why This Matters

The data confirms what we’ve tracked through multiple studies over the past year. BuzzStream found 79% of top news publishers block at least one training bot. Cloudflare’s Year in Review showed GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives. The Hostinger data puts numbers on the access gap between training and search crawlers.

The distinction matters because these bots serve different purposes. Training bots collect data to build models, while search bots retrieve content in real time when users ask questions. Blocking training bots opts you out of future model updates, and blocking search bots means you won’t appear when AI tools try to cite sources.

As a best practice, check your server logs to see what’s hitting your site, then make blocking decisions based on your goals.
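As a rough sketch of that log check (the log path, format, and bot list vary by environment), you can count requests per AI crawler from the command line:

```
# Count access-log requests from common AI crawler user agents.
# Adjust the path and the list to match your server and goals.
for bot in GPTBot OAI-SearchBot ClaudeBot CCBot PerplexityBot; do
  printf '%s: ' "$bot"
  grep -c "$bot" /var/log/nginx/access.log
done
```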

What People Are Saying

On the practical SEO side, the most consistent advice is to separate “training” from “search and retrieval” in your robots.txt decisions where you can. Aleyda Solís previously summarized the idea as blocking GPTBot while still allowing OAI-SearchBot, so your content can be surfaced in ChatGPT-style search experiences without being used for model training.

Solís wrote:

“disallow the ‘GPTbot’ user-agent but allow ‘OAI-SearchBot’”
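In robots.txt terms, that advice looks like the following sketch (adjust to your own goals before deploying):

```
# Block OpenAI's training crawler...
User-agent: GPTBot
Disallow: /

# ...while allowing the crawler that powers ChatGPT search citations.
User-agent: OAI-SearchBot
Allow: /
```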

At the same time, developers and site operators keep emphasizing the cost side of bot traffic. In one r/webdev discussion, a commenter said AI bots made up 95% of requests before blocking and rate limiting.

A commenter in r/webdev wrote:

“95% of the requests to one of our websites was AI bots before I started blocking and rate limiting them”

Read our full coverage: OpenAI Search Crawler Passes 55% Coverage In Hostinger Study

Mueller: Free Subdomain Hosting Makes SEO Harder

Google’s John Mueller warned that free subdomain hosting services create SEO challenges even when publishers do everything else right. The advice came in response to a Reddit post from a publisher whose site is indexed by Google but doesn’t appear in normal search results.

Key facts: The publisher uses Digitalplat Domains, a free subdomain service on the Public Suffix List. Mueller explained that free subdomain services attract spam and low-effort content, making it harder for search engines to assess individual site quality. He recommended building direct traffic through promotion and community engagement rather than expecting search visibility first.

Why This Matters

Mueller’s guidance fits a pattern we’ve covered over the years. Google’s Gary Illyes previously warned against cheap TLDs for the same reason. When a domain extension becomes overrun by spam, search engines may struggle to identify legitimate sites among the noise.

Free subdomain hosting creates a specific version of this problem. While the Public Suffix List is meant to treat these subdomains as separate registrable units, the neighborhood signal can still matter. If most subdomains on a host contain spam, Google’s systems have to work harder to find yours.

This affects anyone considering free hosting as a way to test an idea before buying a real domain. The test environment itself becomes part of the evaluation. As Mueller wrote, “Being visible in popular search results is not the first step to becoming a useful & popular web presence.”

For anyone advising clients or building new projects, the domain investment is part of the SEO foundation. Starting on a free subdomain may save money upfront, but it adds friction to visibility that a proper domain avoids.

What SEO Professionals Are Saying

Most of the social sharing here is treating Mueller’s “neighborhood” analogy as the headline takeaway. In the original Reddit exchange, he said publishing on free subdomain hosts can mean opening up shop among “problematic flatmates,” which makes it harder for search systems to understand your site’s value in context.

Mueller wrote:

“opening up shop on a site that’s filled with … potentially problematic ‘flatmates’.”

On LinkedIn, the story is being recirculated as a broader reminder that “cheap or free” hosting decisions can quietly cap performance even when everything else looks right. Fernando Paez V, a digital marketing specialist, called it out as a visibility issue tied to spam-heavy environments.

Paez V wrote:

“free subdomain hosting services … attract spam and make it more difficult for legitimate sites to gain visibility”

Read our full coverage: Google’s Mueller: Free Subdomain Hosting Makes SEO Harder

Theme Of The Week: Access Is The New Advantage

This week’s stories share a common element. Access, whether to personal data, to websites via bots, or to fair evaluation by choosing the right domain, shapes outcomes before any optimization happens.

Personal Intelligence gives AI Mode access to your email and photos, changing what kinds of queries even need to happen. The Hostinger data shows search bots gaining access while training bots get locked out. Mueller’s subdomain warning reminds us that domain choice determines whether Google’s systems give your content a fair evaluation at all.

The common thread is that visibility increasingly depends on what you allow in and where you build. Blocking the wrong bots can reduce your chances of being surfaced or cited in AI tools. Building on a spam-heavy domain puts you at a disadvantage before you write a word. And Google’s AI features now have access to personal context that publishers can’t access or observe.

For practitioners, this means access decisions, both yours and the platforms’, shape results more than incremental optimization gains. Review your crawler permissions and domain choices, and watch how personal context in AI Mode changes the queries you’re trying to rank for.



Featured Image: Accogliente Design/Shutterstock

SEO Pulse: UCP Debate, Trends Gets Gemini, Health AIO Concerns

Welcome to this week’s SEO Pulse. Google is laying more groundwork for agent-led shopping, Google Trends is getting a Gemini helper inside Explore, and Google appears to have responded to a report we covered last week on AI Overviews health queries.

Here’s what matters for you and your work.

Universal Commerce Protocol (UCP) Brings Agent Checkout Closer

Google introduced the Universal Commerce Protocol as an open standard meant to help AI agents complete shopping tasks across merchants and platforms. The announcement landed around NRF and was framed as agent-based shopping infrastructure, not a consumer feature on its own.

Key facts: This story got attention for two reasons. First, it shows where Google wants AI Mode shopping to go next. Second, it triggered a familiar debate about personalization and pricing after critics connected Google’s “personalized upselling” language to surveillance pricing narratives. Google has pushed back on that framing, saying upselling means showing premium options and that its Direct Offers pilot cannot raise prices.

Why This Matters

I’ve been tracking this build-out since Google began expanding AI shopping features across Search and Gemini. The direction is consistent. Google keeps moving more of the purchase journey into its own interfaces, from product research to comparison to now checkout.

The question for ecommerce practitioners is which parts of the journey you still influence with classic SEO, which parts come down to feeds and structured data hygiene, and which parts are product decisions made inside Google’s surfaces. UCP doesn’t answer that question yet, but it clarifies the direction.

What SEO Professionals Are Saying

The most useful social commentary this week falls into “consumer risk” versus “plumbing and implementation.”

On the critique side, Lindsay Owens, executive director of Groundwork Collaborative, helped set the tone for the surveillance pricing argument around “personalized upselling.” Lee Hepner, senior legal counsel at the American Economic Liberties Project, posted along similar lines, treating individualized pricing as the bigger policy risk sitting behind these kinds of systems.

On the implementation side, Mani Fazeli, VP of Product at Shopify, described what Shopify sees as the point of UCP. He said it “models the entire shopping journey, not just payments” and that “merchants keep their business critical checkout customizations.”

Heiko Hotz, Generative AI Global Blackbelt at Google Cloud, framed it more bluntly from an agent-builder perspective. “Agents are great at reasoning, but they are terrible at navigating a visual website.” Eric Seufert, analyst and publisher of Mobile Dev Memo, weighed in from an incentives angle, arguing the endgame is keeping discovery, conversion, and optimization economically connected to paid media.

Read more: Google Announces AI Mode Checkout Protocol, Business Agent

Google Trends Explore Gets Gemini Suggestions

Google Trends is redesigning the Explore page with a Gemini-powered side panel that suggests related terms and makes comparisons easier.

Key facts: Google says the update can “automatically identify and compare relevant trends,” with the ability to compare up to eight terms and see more “top and rising” queries per term. The update is rolling out now.

Why This Matters

Google keeps making Trends more useful for the discovery phase of keyword research.

Trends has always been valuable, but it can be slow when you start with a vague idea and need to find the right comparison terms. The Gemini panel looks designed to reduce that friction. For practitioners who use Trends early in content planning, this could speed up the process of clustering related topics and spotting seasonal patterns.

What People Are Saying

Yossi Matias, vice president and head of Google Research, emphasized the Gemini side panel, which suggests related terms, supports comparisons of up to eight queries, and expands the “top” and “rising” query views.

In the SEO community, the initial framing is that this reduces friction in the Explore workflow by surfacing comparison terms faster, but there hasn’t been much detailed feedback yet beyond first impressions.

Read more: Google Trends Explore Redesign Announcement

Health AI Overviews Face Fresh Scrutiny After Guardian Reporting

After the Guardian published examples of AI Overviews giving misleading or potentially risky guidance on medical queries, Google stopped showing AI Overviews for some health searches.

Key facts: The Guardian’s reporting included examples involving pancreatic cancer diet advice and “normal range” explanations for liver tests that reviewers said lacked context. In follow-up coverage, multiple outlets reported that Google removed AI Overviews for certain medical searches after the reporting circulated. Google’s response leaned on two themes: Some examples were missing context or based on incomplete screenshots, and it says most AI Overviews are supported by reputable sources.

Why This Matters

I wrote about the Guardian investigation earlier this month, and it fits a pattern that keeps resurfacing as AI Overviews expand into sensitive categories. You also have independent data showing medical Your Money or Your Life (YMYL) queries have some of the highest AI Overview exposure rates.

The issue for SEO practitioners is measurement. You can’t easily verify what AI Overviews say about topics you cover, and the summaries can change or disappear between queries. For anyone working in health, finance, or other YMYL categories, the question is whether AI Overviews help or complicate the trust signals you’ve built through traditional content.

What People Are Saying

Patient Information Forum highlighted the investigation and pointed to a quote from Sophie Randall, Director of PIF, saying AI Overviews can put inaccurate health information “at the top of online searches, presenting a risk to people’s health.”

Pancreatic Cancer UK also posted about participating in the investigation and reiterated that one example summary was “incorrect.” Individual commentary from clinicians and researchers shared the Guardian link and framed it as a higher-stakes version of earlier AI Overview failures.

Read more: ‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk

Theme Of The Week: The “Done For You” Layer Keeps Growing

Each story this week shows Google building more layers between the query and the destination.

UCP moves checkout into Google’s surfaces. The Trends update makes discovery more guided inside Google’s tools. And the health reporting shows what happens when AI summaries sit at the top of results for sensitive queries.

For practitioners, the common theme is control. The more Google handles inside its own interfaces, the harder it becomes to measure what you influenced and what happened upstream of your site.



Featured Image: Accogliente Design/Shutterstock