AI Overviews Clicks Get Tested, Earnings Tell Two Stories – SEO Pulse via @sejournal, @MattGSouthern

This week’s Pulse covers how AI Overviews affect click behavior, what independent research shows, and what earnings reports from Google and Microsoft reveal about search revenue.

Here’s what matters for you and your work.

Reid Repeats “Bounce Clicks” Argument On Bloomberg

Google’s head of Search, Liz Reid, told Bloomberg’s Odd Lots podcast that AI Overviews are reducing “bounce clicks” from publisher pages. She has made versions of this argument in public appearances since last year.

Key facts: Reid described bounce clicks as visits where users quickly click a page, get a fact, and leave, noting AI Overviews remove such visits rather than deeper ones. Google hasn’t provided data to verify this, and third-party analyses show lower click-through rates when AI Overviews are present.

Why This Matters

Reid’s explanation has stayed consistent across at least three public appearances over the past year. The argument is that lost clicks were low-value to begin with, so publishers aren’t losing the visits that matter. The problem is that Google still hasn’t shared the data behind that claim.

Until Google publishes traffic or engagement metrics that separate bounce clicks from deeper visits, the explanation is a narrative, not a finding.

Read our full coverage: Google Pushes “Bounce Clicks” Explanation For AI Overview Traffic Loss

Field Experiment Finds AI Overviews Cut Organic Clicks 38%

Researchers at the Indian School of Business and Carnegie Mellon University published a working paper that tests the effects of AI Overviews on user behavior in a randomized field experiment.

Key facts: The study used a Chrome extension to assign 1,065 U.S. participants to three groups: normal Search, Search without AI Overviews, and AI Mode. When AI Overviews appeared, organic clicks dropped 38%, and zero-clicks rose 33%. Removing AI Overviews did not affect satisfaction, perceived quality, or ease of finding information.

Why This Matters

The authors describe their work as the first randomized experiment to isolate the causal effect of AI Overviews on clicks. Prior studies from Seer, Chartbeat, and Pew were observational or correlational. The randomized design allows the researchers to say that AI Overviews caused the click reduction, not just that the two appeared together.

The satisfaction finding puts pressure on Reid’s argument. If removing AI Overviews doesn’t reduce user satisfaction, it’s harder to argue that the lost clicks were primarily low-value visits.

Read our full coverage: Google’s AI Overviews Cut Clicks Without Satisfaction Gain: Report

Google Search Revenue Grew 19% In Q1

Alphabet reported Q1 2026 revenue of $109.9 billion. Google Search revenue hit $60.4 billion, up 19% year over year, accelerating from 17% growth in Q4 2025.

Key facts: CEO Sundar Pichai said queries are at an all-time high and that AI experiences are tied to increased Search usage. Google Cloud crossed the $20 billion quarterly revenue mark, up 63%. Pichai told analysts that more information about Search will come at Google I/O in May.

Why This Matters

The revenue growth doesn’t settle the click-impact question. Google reported higher Search revenue and more queries, but those numbers describe the ad business, not the publisher traffic side. Higher revenue is consistent with both “clicks are fine” and “clicks are down, but ad yield per query is up.”

Google’s AI features may be creating new ad opportunities, but the earnings data doesn’t show whether your pages are getting more or fewer clicks from AI-influenced results.

What People Are Saying

Matthew Scott Goldstein, Independent Analyst/Advisor/Consultant at .msg, wrote on LinkedIn:

“This is what extraction at scale looks like dressed up as innovation. The same content fueling AI Overviews, Gemini answers, and enterprise token volume is the content publishers have sued over, lost referral traffic over, and watched get re-monetized inside a closed product.”

Read our full coverage: Google Search Revenue Grew 19% In Q1, Pichai Cites AI

Microsoft Says Bing Reached 1 Billion Monthly Active Users

Microsoft announced during its Q3 FY2026 earnings call that Bing has reached 1 billion monthly active users for the first time. CEO Satya Nadella revealed the figure alongside an 18% overall revenue increase to $82.9 billion.

Key facts: Search ad revenue, excluding traffic acquisition costs, grew 12% year over year. Edge maintained browser market share gains for the 20th straight quarter. The segment that includes Bing was down 1% overall at $13.2 billion.

Why This Matters

The 1 billion MAU milestone is notable, but Bing’s global search share sits at about 5% per StatCounter’s March 2026 data. That gap suggests the MAU figure needs context. Microsoft hasn’t defined frequency, overlap, or how AI-related Bing usage is counted.

On the AI search measurement side, Microsoft previewed Citation Share and three other Bing Webmaster Tools features at SEO Week earlier this month. When those ship, they could give Bing Webmaster Tools users a clearer way to compare AI citation visibility against competitors on Bing.

Read our full coverage: Microsoft Says Bing Reached 1B Monthly Active Users

Theme Of The Week: Everyone Is Measuring A Different Part Of Search

Every story this week is about the same question asked from a different angle: What is AI doing to search traffic?

Reid says the lost clicks were low-value. The field experiment shows that the lost clicks came without any trade-off in user satisfaction. Google’s earnings say revenue is up 19%. Microsoft’s earnings say Bing hit a user milestone, but it still holds a 5% share. Each one measures something real, and none of them measure the same thing.

The gap between what platforms report and what publishers experience doesn’t appear to be closing. The public data needed to answer the click question directly still isn’t available. Per-query click behavior segmented by AI feature presence isn’t in any tool that Google or Microsoft has shipped.

Top Stories Of The Week:

More Resources:


Featured Image: PeopleImages/Shutterstock; Paulo Bobita/Search Engine Journal

Google’s Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how deep links appear in your snippets, how your robots.txt gets parsed, how agentic features work in Search, and how the EU’s data-sharing rules apply to AI chatbots.

Here’s what matters for you and your work.

Google Lists Best Practices For Read More Deep Links

Google updated its snippet documentation with a new section on “Read more” deep links in Search results. The documentation lists three best practices that can increase the likelihood of these links appearing.

Key facts: Content must be immediately visible to a human on page load; content hidden behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text needs to match the content that appears on the page, and content loaded only after scrolling or interaction may further reduce that likelihood.

Why This Matters

The three practices are the first specific guidance Google has published on this feature. Sites using expandable FAQ sections, tabbed product detail areas, or scroll-triggered content for core information may see fewer deep links in their snippets compared with sites that render the same content on page load.

The guidance matches a pattern Google has applied to other Search features. Content that renders without user interaction is more likely to appear in enhanced display.

Slobodan Manić, founder of No Hacks, made a related observation on LinkedIn:

“The documentation is framed around one snippet behavior (read more deep links in search results), but the language Google chose reads as a general preference. ‘Content immediately visible to a human’ is the structural instruction, not a read-more-specific tip.”

Manić’s point extends his April 16 IMHO interview with Managing Editor Shelley Walsh, where he argued that most websites are structurally broken for AI agents. He argues that search crawlers and AI agents now face the same structural problem, and the audit is the same for both.

For existing pages, the audit question is whether key information is contained within a click-to-expand element. If a page already has a “Read more” deep link for one section, that section’s structure serves as a guide to what works. For other sections on the same page, replicating that structure may also improve their chances.
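
If you want to run that audit at scale, a minimal sketch along these lines can flag candidate sections. The selectors are common hiding patterns, not an official Google list, and the URL is a placeholder:

```python
# Minimal audit sketch: flag substantive content that is not visible
# on page load. The patterns below are common heuristics, not Google's
# actual criteria.
import requests
from bs4 import BeautifulSoup

HIDDEN_PATTERNS = [
    ("details", {}),                  # click-to-expand sections
    (True, {"hidden": True}),         # elements with the hidden attribute
    (True, {"aria-hidden": "true"}),  # ARIA-hidden content
    (True, {"role": "tabpanel"}),     # tabbed interfaces
]

def audit_hidden_content(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for name, attrs in HIDDEN_PATTERNS:
        for el in soup.find_all(name, attrs=attrs):
            text = el.get_text(" ", strip=True)
            if len(text) > 80:  # only flag substantive blocks
                print(f"{el.name}: {text[:80]}...")

audit_hidden_content("https://example.com/article")
```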

Google describes the guidance as best practices that can “increase the likelihood” of deep links appearing. That hedging matters because this is not a list of requirements, and following all three may not guarantee the links appear.

Read our full coverage: Google Lists Best Practices For Read More Deep Links

Google May Expand Its Robots.txt Unsupported Rules List

Google may add rules to its robots.txt documentation based on analysis of real-world data collected through HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest Search Off the Record podcast.

Key facts: Google’s team analyzed the most frequently unsupported rules in robots.txt files across millions of URLs indexed by the HTTP Archive. Illyes said the team plans to document the top 10 to 15 most-used unsupported rules beyond user-agent, allow, disallow, and sitemap. He also said the parser may expand the typos it accepts for disallow, though he did not commit to a timeline or name specific typos.

Why This Matters

If Google documents more unsupported directives, sites using custom or third-party rules will have clearer guidance on what Google ignores.

Anyone maintaining a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should audit for directives that have never worked for Google. The HTTP Archive data is publicly queryable on BigQuery, so the same distribution Google used is available to anyone who wants to examine it.

The typo tolerance is the more speculative part. Illyes’ phrasing implies that the parser already accepts some misspellings of “disallow,” and more may be honored over time. Audit any spelling variants now and correct them rather than counting on the parser to tolerate them.
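
As a starting point for both audits, a minimal sketch like this can surface every directive outside the four rules Google supports. The parsing is simplified and the URL is a placeholder:

```python
# Sketch: list robots.txt directives outside the four rules Google
# supports (per Google's documentation). Parsing here is simplified.
import urllib.request

GOOGLE_SUPPORTED = {"user-agent", "allow", "disallow", "sitemap"}

def audit_robots(url: str) -> None:
    body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    for i, line in enumerate(body.splitlines(), start=1):
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        directive = line.split(":", 1)[0].strip().lower()
        if directive not in GOOGLE_SUPPORTED:
            # Catches unsupported rules (e.g. crawl-delay) and misspellings
            # of disallow that the parser may or may not tolerate.
            print(f"line {i}: {directive}")

audit_robots("https://example.com/robots.txt")
```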

Read our full coverage: Google May Expand Unsupported Robots.txt Rules List

EU Proposes Google Share Search Data With Rivals And AI Chatbots

The European Commission sent preliminary findings proposing that Google share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding, with a public consultation open until May 1 and a final decision due by July 27.

Key facts: The proposal covers four data categories shared on fair, reasonable, and non-discriminatory terms. The categories are ranking, query, click, and view data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of online search engines. If the Commission maintains eligibility through the final decision, qualifying providers could gain access to anonymized Google Search data under the Commission’s proposed terms.

Why This Matters

This proposal explicitly extends search-engine data-sharing eligibility to AI chatbots under the DMA. If the eligibility survives the consultation, the regulatory category of “search engine” now includes products that most search marketing work has treated as a separate category.

The consequences vary depending on where you operate. For sites optimizing for EU/EEA visibility, the change could broaden the scope of where anonymized search signals flow. AI products competing with Google in that market could use the data to improve their retrieval and ranking systems, which could, in turn, affect which content they cite.

Outside the EU, the direct regulatory effect is zero. The category definition is a different matter. How the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” is likely to be cited in future proceedings.

The eligibility question is the story to watch through May 1. If the Commission narrows the AI chatbot criteria in response to consultation feedback, the implications stay regulatory. If it holds the line, that would set a material precedent for how AI search is classified.

Read our full coverage: Google May Have To Share Search Data With Rivals

Google Adds New Task-Based Search Features

Google introduced new Search features that continue its evolution toward task completion. Users can now track individual hotel price drops via a new toggle in Search, and Google is adding the ability to launch AI agents directly from AI Mode.

Key facts: Hotel price tracking is available globally through a toggle in the search bar. When prices drop for a tracked hotel, Google sends an email alert. The AI agent launched from AI Mode allows users to initiate tasks handled by AI within the search interface. Rose Yao, a Google Search product leader, posted about the features on X.

Why This Matters

Each task-based feature moves a process that previously started on another site into Google’s own surface. Hotel price tracking has existed at the city level for months. Expansion to individual hotels adds a new signal that users can set inside Google rather than on hotel or aggregator sites.

Direct-booking visibility depends on being inside Google’s ecosystem. Sites relying on price-drop alerts as a return-trigger for users may see some of that engagement reallocated to Google’s tracking UI. For hotel brands, this raises the stakes for ensuring individual hotel pages are fully populated in Google Business Profile and hotel feeds.

On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:

“Google’s AI overviews, AI mode and now in-frame functionality for SERP + SITE is just Google eating more and more into traffic opportunities. Everything Google told US not to do its doing itself. SPAM / LOW VALUE CONTENT – don’t resummarise other peoples content – Google does it.”

The AI agent launch is more speculative. Google has not published detailed documentation explaining what kinds of tasks users can delegate or how sources get cited. The feature confirms that agentic search, described by Sundar Pichai as “search as an agent manager,” is appearing incrementally in Search rather than as a single launch.

Read Roger Montti’s full coverage: Google Adds New Task-Based Search Features

Theme Of The Week: The Rules Are Getting Written

Each story this week spells out something that was previously implicit or underway.

Google signaled plans to expand what its robots.txt documentation covers. The company listed specific practices that can increase the likelihood of “Read more” deep links appearing. The European Commission proposed measures that extend search-engine data-sharing eligibility to AI chatbots under the DMA. And task-based features that Sundar Pichai described in interviews are rolling out as toggles in the search bar.

For your day-to-day, the ground gets firmer. Fewer questions are judgment calls. What does and doesn’t qualify, what Google supports, and what counts as a search engine to a regulator are all getting written down. That works to your advantage when it means clearer audit criteria, and against you when “we weren’t sure” is no longer a defensible answer.

Top Stories Of The Week:

More Resources:


Featured Image: [Photographer]/Shutterstock

Google Bans Back Button Hijacking, Agentic Search Grows – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect what Google considers spam, what happens when you report it, and what agentic search looks like in practice.

Here’s what matters for you and your work.

Google’s New Spam Policy Targets Back Button Hijacking

Google added back button hijacking to its spam policies, with enforcement beginning June 15. The behavior is now an explicit violation under the malicious practices category.

Key facts: Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Pages engaging in the behavior face manual spam actions or automated demotions.

Why This Matters

Google called out that some back button hijacking originates from included libraries or advertising platforms, which means the liability sits with the publisher even when the behavior comes from a vendor.

You have two months to audit every script running on your site, including ad libraries and recommendation widgets you didn’t write yourself.
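
A simple inventory is one way to start that audit. This sketch lists every external script source on a page, grouped by host, so you know which vendors to review; it does not detect hijacking itself, and the URL is a placeholder:

```python
# Sketch: inventory third-party script sources on a page so ad
# libraries and widgets can be reviewed for navigation interference.
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

def list_script_sources(url: str) -> None:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    hosts: dict[str, list[str]] = {}
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc or urlparse(url).netloc
        hosts.setdefault(host, []).append(tag["src"])
    for host, srcs in sorted(hosts.items()):
        print(f"{host}: {len(srcs)} script(s)")

list_script_sources("https://example.com/")
```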

Sites that receive a manual action after June 15 can submit a reconsideration request through Search Console once the offending code is removed.

What SEO Professionals Are Saying

Daniel Foley Carter, SEO Consultant, summed up the community reaction on LinkedIn:

“So basically, that spammy thing you do to try and stop users leaving? Yeah, don’t do it.”

Manish Chauhan, SEO Head at Groww, added on LinkedIn that he was:

“glad this is being addressed. It always felt like a short-term hack for pageviews at the cost of user trust.”

Read our full coverage: New Google Spam Policy Targets Back Button Hijacking

Spam Reports May Now Trigger Manual Actions

Google updated its report-a-spam documentation on April 14 to say user submissions may now trigger manual actions against sites found violating spam policies. The previous guidance said spam reports were used to improve spam detection systems rather than to take direct action.

Key facts: Google may use spam reports to take manual action against violations. If Google issues a manual action, the report text is sent verbatim to the reported website through Search Console.

Why This Matters

Google now states that spam reports can be used to initiate manual actions, making reports explicitly part of its enforcement process in official documentation.

This also raises concerns about potential abuse: grudge reports and competitor sabotage become more appealing when reports carry tangible consequences. The real test will be the quality of the reports Google actually acts on.

What SEO Professionals Are Saying

Gagan Ghotra, SEO Consultant, wrote on LinkedIn about why the change may lead to better reports:

“Now spam reports have direct relation to Google issuing manual actions against domains. Google announced if there is a spam report from a user and based upon that report Google decide to issue manual action against a domain then Google will just send the user submitted content in report to the site owner (Search Console – Manual Action report) and will ask them to fix those things. Seems like Google was getting too many generic spam reports and now as the incentive to report are aligned. That’s why I guess people are going to submit reports which have a lot of relevant information detailing why/how a specific site is violating Google’s spam policies.”

Read Roger Montti’s full coverage: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Agentic Restaurant Booking Expands In AI Mode

Google expanded agentic restaurant booking in AI Mode to additional markets on April 10, including the UK and India. Robby Stein, VP of Product for Google Search, announced the rollout on X.

Key facts: Searchers can describe group size, time, and preferences to AI Mode, which scans booking platforms simultaneously for real-time availability. The booking itself is completed through Google partners rather than directly on restaurant websites.

Why This Matters

Restaurant booking shows how task completion within search works. For local SEOs and marketers, traffic patterns shift: users now often stay within Google during discovery, with bookings routed through partners.

This depends on Google’s booking partners, which may limit visibility for restaurants outside those platforms and makes presence on Google-supported booking sites more important than the restaurant’s own website. Whether the model extends to other verticals remains to be seen.

What SEO Professionals Are Saying

Glenn Gabe, SEO and AI Search Consultant at G-Squared Interactive, flagged the rollout on X:

I feel like this is flying under the radar -> Google rolls out worldwide agentic restaurant booking via AI Mode. TBH, not sure how many people would use this in AI Mode versus directly in Google Maps or Search (where you can already make a reservation), but it does show how Google is moving quickly to scale agentic actions.

Aleyda Solís, SEO Consultant and Founder at Orainti, noted a key limitation in a LinkedIn post:

“Google expands agentic restaurant booking in AI Mode globally: You still need to complete the booking via Google partners though.”

Read Roger Montti’s full coverage: Google’s Task-Based Agentic Search Is Disrupting SEO Today, Not Tomorrow

Theme Of The Week: Google Gets Specific

What counts as spam, what happens when spam gets reported, and what agentic search looks like all got clearer definitions this week.

Back button hijacking becomes a named violation with an enforcement date. Google’s documentation now says spam reports may be used for manual actions, not just fed into detection systems. Agentic search becomes a live product for restaurant reservations in specific markets rather than a talking point about the future.

The compliance work, the reporting mechanics, and the agentic experience are now all defined clearly enough to be tracked directly rather than merely forecast.

Top Stories Of The Week:

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

Core Update Done, GSC Bug Fixed, Mueller On Gurus – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect when you can start analyzing core update performance, how much you can trust your impression data, and what Google’s CEO thinks AI will do to software security.

Here’s what matters for you and your work.

March 2026 Core Update Is Complete

Google’s March 2026 core update finished rolling out on April 8. The Google Search Status Dashboard confirms the completion.

Key facts: The rollout took 12 days, starting March 27 and finishing April 8. That’s within Google’s two-week estimate and faster than the December update, which took 18 days. Google called it “a regular update” and didn’t publish a companion blog post or new guidance. This was the third confirmed update in roughly five weeks, following the February Discover core update and the March spam update.

Why This Matters

You can now run a clean before-and-after comparison in Search Console. Google recommends waiting at least one full week after completion before drawing conclusions, which means mid-April is the earliest window for reliable analysis.
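
A minimal sketch of that comparison, assuming a daily Search Console export with "date" and "clicks" columns (the file and column names are assumptions; match them to your own export):

```python
# Sketch: compare equal-length daily windows before and after the
# March 2026 core update (rollout: March 27 - April 8; Google suggests
# waiting a week after completion). Assumes one row per day.
import pandas as pd

df = pd.read_csv("gsc_performance.csv", parse_dates=["date"]).sort_values("date")

before = df[(df["date"] >= "2026-03-13") & (df["date"] < "2026-03-27")]
after = df[df["date"] >= "2026-04-15"].head(14)  # one week post-completion

print("avg daily clicks before:", round(before["clicks"].mean(), 1))
print("avg daily clicks after: ", round(after["clicks"].mean(), 1))
```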

A ranking drop after a core update does not mean your site violated a policy. Core updates reassess content quality across the web. Some pages move up while others move down. Roger Montti, writing for Search Engine Journal, suggested the spam-then-core sequencing may not have been a coincidence, describing it as clearing the table before recalibrating quality signals.

What SEO Professionals Are Saying

Lily Ray, VP, SEO & AI Search at Amsive, noted on X that YouTube has gained visibility since the core update began rolling out:

“Just checked a client that ranked in AI Overviews last week and now the top 4 links in AI Overviews are all YouTube.

Let me guess: the core update was another way for Google to boost YouTube, like it did with the Discover core update.”

Aleyda Solís, SEO consultant and founder of Orainti, is running a poll on LinkedIn asking how the update affected people’s websites. So far, most respondents say the impact was either positive or not noticeable.

Read our full coverage: Google Confirms March 2026 Core Update Is Complete

Google Fixes Search Console Bug That Inflated Impressions For Nearly A Year

Google confirmed a logging error in Search Console that over-reported impressions starting May 13, 2025. The company updated its Data Anomalies page on April 3 to acknowledge the issue.

Key facts: The bug ran for nearly 11 months before Google publicly acknowledged it. Clicks and other metrics were not affected. Google said the fix will roll out over the next several weeks, and sites may see a decrease in reported impressions during that period.

Why This Matters

If your impression numbers have looked unusually healthy since last May, this bug is likely part of the reason. The correction will change what your Performance report shows, but it will not change how your site actually performed in search. The impressions were logged incorrectly. Your actual visibility may not have changed.

Teams that reported impression-based metrics to clients or stakeholders since May were working with inflated numbers. Click data provides a cleaner signal for performance analysis while the fix rolls out. Treat May 13, 2025 as a data annotation point, similar to how you would mark an algorithm update date in your reporting.
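
A minimal sketch of that annotation approach, assuming a daily export with "date", "clicks", and "impressions" columns (names are assumptions; adjust to your data):

```python
# Sketch: mark May 13, 2025 as an annotation point and see how
# impressions and clicks compare on either side of it.
import pandas as pd

BUG_START = pd.Timestamp("2025-05-13")

df = pd.read_csv("gsc_performance.csv", parse_dates=["date"])
df["period"] = df["date"].apply(
    lambda d: "bug window" if d >= BUG_START else "pre-bug"
)

summary = df.groupby("period")[["clicks", "impressions"]].mean().round(1)
print(summary)  # clicks are the cleaner signal while the fix rolls out
```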

What SEO Professionals Are Saying

Brodie Clark, independent SEO consultant, flagged the issue on March 30, four days before Google’s acknowledgment. He wrote:

“Heads-up: there is something bizarre going on with Google Search Console data right now.

Similar to the changes that came to light after the disabling of &num=100, impressions are again skyrocketing for specific surfaces on desktop.”

Clark documented impression spikes across merchant listings and Google Images filters on multiple ecommerce sites and called for the Search Console team to investigate.

Chris Long, co-founder of Nectiv, wrote on LinkedIn: “Holy moly SEOs. It turns out Google has been accidentally inflating impressions in Search Console reports for ALMOST A YEAR.” Long noted that Google did not indicate how much impressions would decrease, and that the profiles he checked appeared stable so far.

Source: Google Data Anomalies in Search Console

Pichai Says AI Could ‘Break Pretty Much All Software’

Google CEO Sundar Pichai said AI models are “going to break pretty much all software out there” during a podcast conversation with Stripe CEO Patrick Collison. The interview covered AI infrastructure constraints and security risks.

Key facts: Pichai framed software security as a hidden constraint on AI deployment alongside memory supply and energy. When investor Elad Gil mentioned hearing that black market zero-day prices were falling because AI was increasing the supply of discoverable vulnerabilities, Pichai said he was “not at all surprised.”

Why This Matters

The security conversation may feel distant from daily SEO work, but it connects to the infrastructure your sites run on. If AI accelerates the pace at which vulnerabilities are found and exploited, the window between a flaw existing and an attacker using it gets shorter. That puts more pressure on maintaining current patches and auditing dependencies.

Pichai’s comments were conversational, not a formal Google policy statement. But they came from someone who oversees both the company’s AI models and its threat intelligence operation. Google’s threat teams have been warning about software security risks tied to faster vulnerability discovery.

Read our full coverage: Pichai Says AI Could ‘Break Pretty Much All Software’

Mueller Calls Self-Described SEO Gurus ‘Clueless Imposters’

Google’s John Mueller responded to a blog post by SEO professional Preeti Gupta about how the word “guru” is misused in the SEO industry. Mueller shared his view on Bluesky.

Key facts: Mueller wrote:

“To me, when someone self-declares themselves as an SEO guru, it’s an extremely obvious sign that they’re a clueless imposter. SEO is not belief-based, nobody knows everything, and it changes over time. You have to acknowledge that you were wrong at times, learn, and practice more.”

Gupta’s original post explained that in India the word guru carries deep cultural and spiritual meaning that is trivialized when SEO practitioners use it as a self-applied label.

Why This Matters

The core of what Mueller said is that SEO changes over time and that nobody has it all figured out.

Just look at what happened this week. Core updates continue to happen without a clear explanation of what changed. A basic logging bug in Search Console went unnoticed for nearly a year. The tools and signals we rely on every day are imperfect, and treating any methodology or perspective as settled knowledge is how mistakes get made.

Read Roger Montti’s full coverage: Google’s Mueller On SEO Gurus Who Are “Clueless Imposters”

Theme Of The Week: The Day-to-Day Work Continues

The speculation about where search is going has never been louder. But this week’s events were a core update finishing, a data bug getting patched, and a Google Search Advocate reminding people that nobody has all the answers.

The future Pichai describes may be coming, but it hasn’t arrived yet. Right now, the job is still reading your Search Console data, waiting for a core update to settle, and staying honest about what you do and do not know.

Mueller’s comment that SEO “is not belief-based” and “changes over time” is as good a summary of this week as any. Those who will succeed in the next version of search are probably the ones paying attention to this version first.

Top Stories Of The Week:

Here are the main links from this week’s coverage.

More Resources:

For more context, these earlier stories help fill in the background.


Featured Image: [Photographer]/Shutterstock

Google Core Update, Crawl Limits & Gemini Traffic Data – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how Google ranks content, how its crawlers handle page size, and where AI referral traffic is heading. Here’s what matters for you and your work.

Google Rolls Out The March 2026 Core Update

Google began rolling out the March core update this week. This is the first broad core update of the year.

Key facts: The rollout may take up to two weeks. Google described it as a regular update designed to surface more relevant, satisfying content from all types of sites. It arrives two days after the March spam update completed in under 20 hours.

Why This Matters

The December core update was the most recent broad core update, finishing on December 29. That’s a three-month gap. The February 2026 update only affected Discover, so Search rankings haven’t been recalibrated since late December.

Ranking changes could appear throughout early April. Google recommends waiting at least a full week after the rollout finishes before analyzing Search Console performance. Compare against a baseline period before March 27.

What SEO Professionals Are Saying

John Mueller, a member of Google’s Search Relations team, wrote on Bluesky when asked whether the two updates overlap:

One is about spam, one is not about spam. If with some experience, you’re not sure whether your site is spam or not, it’s unfortunately probably spam.

Mueller later explained that core updates don’t follow a single deployment mechanism. Different teams and systems contribute changes, and those components can require step-by-step rollouts rather than a single release. That’s why rollouts take weeks and why ranking volatility often appears in waves rather than all at once.

Roger Montti, writing for Search Engine Journal, noted the proximity to the spam update may not be a coincidence. Spam fighting is logically part of the broader quality reassessment in a core update.

Read our full coverage: Google Begins Rolling Out March 2026 Core Update

Read Roger Montti’s coverage: Google Answers Why Core Updates Can Roll Out In Stages

Illyes Explains Googlebot’s Crawling Architecture And Byte Limits

Google’s Gary Illyes, an analyst on Google’s Search team, published a blog post explaining how Googlebot works within Google’s broader crawling systems. The post adds new technical details to the 2 MB crawl limit Google published earlier this year.

Key facts: Illyes described Googlebot as one client of a centralized crawling platform. Google Shopping, AdSense, and other products all route requests through the same system under different crawler names. HTTP request headers count toward the 2 MB limit. External resources like CSS and JavaScript get their own separate byte counters.

Why This Matters

When Googlebot hits 2 MB, it doesn’t reject the page. It stops fetching and passes the truncated content to indexing as if it were the complete file. Anything past 2 MB is never indexed. That matters for pages with large inline base64 images, heavy inline CSS or JavaScript, or oversized navigation menus.
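
A quick way to gauge exposure is to measure the raw HTML response against the limit. This sketch is an approximation, since headers also count toward the 2 MB, and the URL is a placeholder:

```python
# Sketch: check how close a page's raw HTML response is to the 2 MB
# fetch limit described in Illyes's post. Headers also count toward
# the limit, so treat this as an approximation.
import requests

LIMIT = 2 * 1024 * 1024  # 2 MB

def check_fetch_size(url: str) -> None:
    size = len(requests.get(url, timeout=10).content)
    print(f"{url}: {size / 1024:.0f} KB ({size / LIMIT:.0%} of the 2 MB limit)")
    if size >= LIMIT:
        print("Anything past 2 MB would be truncated before indexing.")

check_fetch_size("https://example.com/")
```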

The centralized platform detail also explains why different Google crawlers behave differently in server logs. Each client sets its own configuration, including byte limits. Googlebot’s 2 MB is a Search-specific override of the platform’s 15 MB default.

Google has now covered these limits in documentation updates, a podcast episode, and this blog post within two months. Illyes noted the 2 MB limit is not permanent and may change as the web evolves.

What SEO Professionals Are Saying

Cyrus Shepard, founder of Zyppy SEO, wrote on LinkedIn:

That said, as SEOs we often deal with extreme situations. If you notice certain content not getting indexed on VERY LARGE PAGES, you probably want to check your size.

Read our full coverage: Google Explains Googlebot Byte Limits And Crawling Architecture

Google’s Illyes And Splitt: Pages Are Getting Larger, And It Still Matters

Gary Illyes and Martin Splitt, Developer Advocate at Google, discussed page weight growth and crawling on a recent Search Off the Record podcast episode.

Key facts: Web pages have grown nearly 3x over the past decade. The 15 MB default applies across Google’s broader crawling systems, with individual clients like Googlebot for Search overriding it downward to 2 MB. Illyes raised whether structured data that Google asks websites to add is contributing to page bloat.

Why This Matters

The 2025 Web Almanac reports a median mobile homepage weight of 2,362 KB, already above 2 MB, though that figure measures total page weight while Googlebot’s limit applies per fetch, with external resources like CSS and JavaScript counted separately. Illyes’s question about structured data contributing to bloat is worth monitoring. Google encourages sites to add schema markup for rich results, and that markup increases the weight of each page.

Splitt said he plans to address specific techniques for reducing page size in a future episode. Pages with heavy inline content should verify their critical elements load within the first 2 MB of the response.
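
One way to run that verification is to truncate the response at 2 MB and check for critical markers. The markers below are hypothetical examples; substitute the elements that matter on your pages:

```python
# Sketch: verify that critical markers appear within the first 2 MB
# of the HTML response, since bytes past that point are not passed
# to indexing. Marker list is illustrative only.
import requests

LIMIT = 2 * 1024 * 1024
MARKERS = [b"application/ld+json", b"<h1", b'name="description"']

def check_within_limit(url: str) -> None:
    head = requests.get(url, timeout=10).content[:LIMIT]
    for marker in MARKERS:
        status = "within" if marker in head else "NOT within"
        print(f"{marker.decode()}: {status} the first 2 MB")

check_within_limit("https://example.com/")
```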

Read our full coverage: Google: Pages Are Getting Larger & It Still Matters

Gemini Referral Traffic More Than Doubles, Overtakes Perplexity

Google Gemini more than doubled its referral traffic to websites between November 2025 and January 2026. The data comes from SE Ranking’s analysis of more than 101,000 sites with Google Analytics installed.

Key facts: SE Ranking measured a 115% combined increase over two months, with the jump starting around the time Google rolled out Gemini 3. In January, Gemini sent 29% more referral traffic than Perplexity globally and 41% more in the U.S. ChatGPT still generates about 80% of all AI referral traffic. For transparency, SE Ranking sells AI visibility tracking tools.

Why This Matters

In August 2025, Perplexity was sending about 2.9x more referral traffic than Gemini. Gemini’s December-January surge reversed that by January 2026. ChatGPT’s lead over Gemini also narrowed, from roughly 22x in October to about 8x in January.

All AI platforms combined still account for about 0.24% of global internet traffic, up from 0.15% in 2025. That’s measurable growth, but it’s still a small share compared to organic search. Two months of Gemini growth correlates with a known product launch, but it’s too early to call it a sustained pattern.

Gemini is now worth watching alongside ChatGPT and Perplexity in your referral reports.

Read our full coverage: Google Gemini Sends More Traffic To Sites Than Perplexity: Report


Theme Of The Week: Google Is Explaining Its Own Systems

Three of this week’s four stories are Google telling you how its systems work. Illyes published a blog post detailing Googlebot’s architecture. The same week, the Search Off the Record podcast covered page weight and crawl thresholds. Mueller explained why core updates roll out in waves rather than all at once. Each one fills a gap that documentation alone left open.

The Gemini traffic data is the exception. Google is open about how its crawlers and ranking systems operate, but the traffic flowing through its AI services is growing quickly enough to show up in third-party data, and Google isn’t explaining that part.

Top Stories Of The Week:

More Resources:

Google Tests AI Headlines, Rolls Out Spam Update – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how your headlines appear in Search, how spam enforcement played out, and how AI content gets labeled.

Here’s what matters for you and your work.

Google Tests AI-Generated Headline Rewrites In Search

Google confirmed that it’s testing AI-generated headline rewrites in traditional search results. The test uses language similar to what Google used before reclassifying AI headlines in Discover as a feature.

Key facts: Google called the test “small and narrow.” The rewrites include no disclosure that Google changed the original headline. Google said any broader launch may not use generative AI but didn’t explain what the alternative would look like.

Why This Matters

Google called AI headlines in Discover “small” in December, reclassified them as a feature by January, and is now using the same language for Search. Google has not outlined an opt-out for this test, and the documented examples show Google changing meaning, not just formatting.

What Publishers And SEO Professionals Are Saying

Bastian Grimm, founder of Peak Ace AG, wrote on LinkedIn:

“Previous rewrites were primarily about matching query intent, fixing truncation, or improving readability. This test uses AI to rewrite for engagement – and documented examples show it changing tone and intent in ways that go well beyond formatting. That is a meaningful shift. A title rewritten to match a query is one thing. A title rewritten because Google’s model thinks a different framing will perform better is another.”

Brodie Clark, independent SEO consultant, wrote on LinkedIn:

“The big issue with this approach is that there were instances where the titles for the articles were rewritten, but the meaning of the article was lost in the rewrite or through formatting changes (such as using capitals for every word).”

Nilay Patel, editor-in-chief of The Verge, wrote on Bluesky:

“Google is now screwing with the 10 blue links in traditional search and rewriting headlines – including ours – to be the worst kind of slop. This sucks so bad”

James Ball, political editor at The New World Opinion and fellow at Tech Policy Press and Demos, wrote on Bluesky:

“Google is re-headlining articles in search results, including in ways that introduce errors. I think even 2-3 years ago it would’ve backed off this for fear of publisher backlash. Does the media have enough clout left wirh tech to get this one reversed?”

Read our full coverage: Google Tested AI Headlines In Discover. Now It’s Testing Them In Search

March 2026 Spam Update Completes In Under 20 Hours

Google’s March 2026 spam update started on March 24 and finished on March 25. The rollout was significantly faster than recent spam updates. The update applies globally and to all languages.

Key facts: The rollout began at 12:00 PM PT on March 24 and ended at 7:30 AM PT on March 25. Google didn’t announce new spam policies with this update. The community response has been notably quiet, with few reports of visible impact.

Why This Matters

The rollout window was short and is already complete, so March 24-25 is the clearest period to review in Search Console. Google’s current spam policies are still the main guidelines to follow, as no new categories have been introduced.

What SEO Professionals Are Saying

Nilesh Pansuriya, leading Guru99’s global content and SEO team, wrote on LinkedIn:

“I’ve been tracking Google updates for 15 years. I’ve never seen one move this fast. The March 2026 Spam Update rolled out on March 24th. Completely finished by March 25th. ⏱️ Total time: 19 hours and 30 minutes. → August 2025 spam update → 27 days → December 2024 spam update → 7 days → October 2022 spam update → 48 hours → March 2026 spam update → under 20 hours Done before most SEOs even noticed it started.”

Read our full coverage: Google Begins Rolling Out The March 2026 Spam Update

Google Adds AI And Bot Content Labels To Structured Data

Google updated its Discussion Forum and Q&A Page structured data documentation to include new properties, including a way for sites to label AI- and bot-generated content.

Key facts: The new digitalSourceType property uses IPTC enumeration values to distinguish content created by a trained model from content created by a simpler automated process. Google lists the property as recommended, not required. When it’s absent, Google assumes the content is human-generated.

Why This Matters

Forums and Q&A platforms now have a documented way to tell Google which content was created by AI or bots. The “recommended” status means adoption will be voluntary.

What SEO Professionals Are Saying

Jan-Willem Bobbink, founder of WebGeist, wrote on LinkedIn:

“Lets talk about a gap in Google’s new AI content labeling. They require it for product feeds but only ‘recommend’ it for forums. Google just updated its Discussion Forum and Q&A Page structured data docs with a new property called digitalSourceType. It lets sites flag when a post or comment was written by an AI model or an automated bot. The idea sounds great on paper. In practice, the implementation tells a different story. The property is listed as ‘recommended,’ not required. If a site leaves it out, Google assumes the content is human-generated. That is a massive loophole.”

Read our full coverage: Google Adds AI & Bot Labels To Forum, Q&A Structured Data

Bing Connects Grounding Queries To Cited Pages

Bing Webmaster Tools added a mapping feature to its AI Performance dashboard that connects grounding queries to the specific pages cited for them. The update works in both directions.

Key facts: You can click a grounding query to see which pages are cited for it. You can also click a page to see which grounding queries drive its citations. The dashboard covers AI experiences across Copilot, AI summaries in Bing, and select partner integrations. The data is still a sample, not a complete log.

Why This Matters

This gives you a way to connect AI citation data to specific content on your site. Knowing which pages earn citations for which phrases makes it easier to decide where to focus content updates for AI visibility.

Google’s Search Console includes AI Overviews and AI Mode in standard Performance reporting but hasn’t introduced a comparable page-level citation mapping.

What SEO Professionals Are Saying

Aleyda Solís, international SEO consultant and founder of Orainti, wrote on LinkedIn:

“New Bing Webmaster Tools AI Performance Dashboard Insights: We can now see which pages are being cited for a specific grounding query, and which grounding queries are driving citations to a specific pages Thanks so much for hearing the community feedback Krishna Madhavan, Fabrice Canel and team See the announcement in comments.”

Navah Hopkins, ads liaison at Microsoft Advertising, wrote on LinkedIn:

“Grounding queries reveal the key phrases AI used to retrieve content that was cited, offering insight into how AI interprets user intent. If you see your content is getting cited, that means you’re registering as visible to the AI. The page-level citation report sheds light on which pages are helping you win that visibility.”

Read our full coverage: Bing AI Dashboard Maps Grounding Queries To Cited Pages

Theme Of The Week: Google Tightens Control Over How Content Appears

Three of this week’s four stories show Google asserting more influence over how content is presented and categorized in its ecosystem.

AI headline rewrites let Google change how your pages appear in search results. The spam update completed in under 20 hours, the fastest rollout in recent memory. And the new structured data properties ask platforms to self-report whether content was created by humans or machines.

In contrast, while Google tightens control over how content appears, Bing is giving publishers greater visibility into how their content performs in AI-generated answers. The query-to-page mapping closes a measurement gap that Google hasn’t addressed on its side.

Top Stories Of The Week:

More Resources:


Google AI Mode Goes Personal, Crawl Limits Clarified – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how Google personalizes AI Mode, what Googlebot’s crawl limits look like in practice, and what new data shows about AIO click behavior and publisher traffic.

Here’s what matters for you and your work.

Google Personal Intelligence Now Free For US Users

Google expanded Personal Intelligence from paid AI Pro and Ultra subscribers to all free US users on personal Google accounts. The feature connects Gmail and Google Photos to AI Mode.

Key Facts: AI Mode access is available now. Gemini app and Chrome rollouts are starting. When enabled, AI Mode can reference email confirmations, travel bookings, and photo context to personalize responses. No expansion beyond the US or to Workspace accounts has been announced.

Why This Matters

Paid-to-free means a much larger user base gets access to personalized AI Mode results. People searching the same query could see different AI Mode responses depending on what’s in their Gmail. That makes it harder to benchmark what AI Mode shows for any given topic.

Read our full coverage: Google AI Mode’s Personal Intelligence Now Free In U.S.

Google Reveals Googlebot’s Crawl Limits Are Flexible

Google’s Gary Illyes and Martin Splitt discussed how Googlebot’s crawl limits work. The commonly cited limits aren’t as fixed as most people assume.

Key Facts: Google has long cited a 15 megabyte limit for its crawlers, but Illyes said internal teams can override it. Google Search works with a smaller 2 megabyte threshold in practice. The limits can be increased or decreased depending on what’s being crawled and why.

Why This Matters

The 15MB number has been treated as a hard ceiling in technical SEO guidance for years, so the smaller 2MB threshold Google Search actually works with is useful added context. Most pages are well under 2MB, but pages with heavy inline scripts, large data objects, or extensive embedded content could be affected.

Read our full coverage: Google Shares More Information On Googlebot Crawl Limits

AI Overviews Cut Germany’s Top Organic Position CTR By 59%

SISTRIX analyzed over 100 million German keywords and found AI Overviews cut the position one click rate from 27% to 11%.

Key Facts: AI Overviews appear on about 20% of German keywords, up from 17% in August. SISTRIX estimates the total cost at 265 million lost organic clicks per month across the German market. Averaged across all keywords, including those without AIOs, that works out to a 6.6% click loss.

Why This Matters

The German data is directionally similar to US findings. Position one loses more than half its clicks when an AIO appears, and informational content takes the biggest hit. This suggests the pattern is not limited to the US.

What People Are Saying

Barry Adams, founder of Polemic Digital, wrote on LinkedIn:

“Citations in AIOs don’t matter, people don’t click. If you want to keep thriving on Google, you need to offer something AI can’t replicate. For publishers, breaking news is the golden goose.”

Read our full coverage: Google AI Overviews Cut Germany’s Top Organic CTR By 59%

Search Referral Traffic Down 60% For Small Publishers

Chartbeat shared new data that breaks down search referral traffic losses by publisher size. Most previous reporting on search traffic declines treated publishers as a single group.

Key Facts: Small publishers lost 60% of search referral traffic over two years. Mid-sized publishers lost 47%, and large publishers lost 22%. Google Discover referrals fell 15% over the same period. Larger publishers are partially offsetting losses through direct traffic, email, and app referrals.

Why This Matters

ChatGPT referrals grew over 200% in this data, and they still account for less than 1% of publisher page views. The growth rate sounds impressive until you compare it to what search took away. Chatbot traffic is still too small to offset those losses in this data.

What People Are Saying

Steven Waldman, founder of Rebuild Local News and Report for America, called the data “incredibly important” in a LinkedIn post, noting that larger publishers are more insulated because of stronger brand recognition and direct-to-consumer products.

Layne Bruce, Executive Director of the Mississippi Press Association, wrote on LinkedIn:

“Each week brings some new advancement in technology that’s great for consumers but threatening the ecosystem that generates the flow of information in the first place.”

Read our full coverage: Search Referral Traffic Down 60% For Small Publishers, Data Shows

Theme Of The Week: General Benchmarks Are Getting Less Useful

Each story this week shows a number that used to mean one thing now meaning something different depending on context.

AIO click losses in Germany are directionally similar to those in the US. The 15MB crawl limit isn’t 15MB in practice. And Personal Intelligence makes AI Mode results vary by user, so checking what “shows up” for a query depends on what personal Google services that person has connected.

This week’s stories show data is more useful when you read it against your own vertical, your own site size, and your own audience.

More Resources:


Featured Image: [Credit]/Shutterstock

Google AI Mode Link Update, Click Share Data & ChatGPT Fan-Outs – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s SEO Pulse: updates affect how links appear in AI search results, where organic clicks are going, and which languages ChatGPT uses to find sources.

Here’s what matters for you and your work.

Google Redesigns Links In AI Overviews And AI Mode

Robby Stein, VP of Product for Google Search, announced on X that AI Overviews and AI Mode are getting a redesigned link experience on both desktop and mobile.

Key Facts: On desktop, groups of links will now appear in a pop-up when you hover over them, showing site names, favicons, and short descriptions. Google is also rolling out more descriptive and prominent link icons across desktop and mobile.

Why This Matters

This is the latest in a series of link-visibility updates Stein has announced since last summer, when he called showing more inline links Google’s “north star” for AI search. The pattern is consistent. Google keeps iterating on how links surface inside AI-generated responses.

The hover pop-up is a new interaction pattern for AI Overviews. Instead of small inline citations that are easy to miss, users now get a preview card with enough context to decide whether to click. That changes the calculus for publishers wondering how much traffic AI results actually send.

What The Industry Is Saying

SEO consultant Lily Ray (Amsive) wrote on X that she had been seeing the new link cards and was “REALLY hoping it sticks.”

Read our full coverage: Google Says Links Will Be More Visible In AI Overviews

43% Of ChatGPT Fan-Out Queries For Non-English Prompts Run In English

A report from AI search analytics firm Peec AI found that a large share of ChatGPT’s fan-out queries run in English, even when the original prompt was in another language.

Key Facts: Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across non-English prompts analyzed, 43% of the fan-out queries ran in English. Nearly 78% of non-English prompt sessions included at least one English-language fan-out query.

Why This Matters

When ChatGPT Search builds an answer, it can rewrite the user’s prompt into “one or more targeted queries,” according to OpenAI’s documentation. OpenAI does not describe how language is chosen for those rewritten queries. Peec AI’s data suggests that English gets inserted into the process even when the user and their location are clearly non-English.

SEO and content teams working in non-English markets may face a disadvantage in ChatGPT’s source selection that doesn’t map to traditional ranking signals. Language filtering appears to happen before citation signals come into play.

Read our full coverage: ChatGPT Search Often Switches To English In Fan-Out Queries: Report

Google’s Search Relations Team Can’t Say You Still Need A Website

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a definitive yes.

Key Facts: In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question. Both acknowledged that websites still offer advantages, including data sovereignty, control over monetization, and freedom from platform content moderation. But neither argued that the open web offers something irreplaceable.

Why This Matters

Google Search is built around crawling and indexing web content. The fact that Google’s own Search Relations team treats “do I need a website?” as a business decision rather than an obvious yes is worth noting.

Illyes offered the closest thing to a position. He said that if you want to make information available to as many people as possible, a website is probably still the way to go. But he called it a personal opinion, not a recommendation.

The conversation aligns with increasingly fragmented user journeys, now spanning AI chatbots, social feeds, community platforms, and traditional search. For practitioners advising clients on building websites, the answer increasingly depends on where the audience is, not where it used to be.

Read our full coverage: Google’s Search Relations Team Debates If You Still Need A Website

Theme Of The Week: The Ground Keeps Moving Under Organic

Each story this week shows a different force pulling attention, clicks, or visibility away from the organic channel as practitioners have known it.

Google is redesigning how links appear in AI responses, acknowledging the traffic concern. ChatGPT’s background queries introduce a language filter that can exclude non-English content before relevance signals even apply. And Google’s own team won’t say that websites are the default answer for visibility anymore.

These stories reinforce the case for distributing your content across platforms to reach more people, and for tracking where your clicks actually come from.

More Resources:


Featured Image: TippaPatt/Shutterstock; Paulo Bobita/Search Engine Journal

Bing AI Citation Tracking, Hidden HTTP Homepages & Pages Fall Under Crawl Limit – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse for SEO: updates cover how you track AI visibility, how a ghost page can break your site name in search results, and what new crawl data reveals about Googlebot’s file size limits.

Here’s what matters for you and your work.

Bing Webmaster Tools Adds AI Citation Dashboard

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving publishers visibility into how often their content gets cited in Copilot and AI-generated answers. The feature is now in public preview.

Key Facts: The dashboard tracks total citations, average cited pages per day, page-level citation activity, and grounding queries. Grounding queries show the phrases AI used when retrieving your content for answers.

Why This Matters

Bing is now offering a dedicated dashboard for AI citation visibility. Google includes AI Overviews and AI Mode activity in Search Console’s overall Performance reporting, but it doesn’t break out a separate report or provide citation-style URL counts. AI Overviews also assign all linked pages to a single position, which limits what you can learn about individual page performance in AI answers.

Bing’s dashboard goes further by tracking which pages get cited, how often, and what phrases triggered the citation. The missing piece is click data. The dashboard shows when your content is cited, but not whether those citations drive traffic.

Now you can confirm which pages are referenced in AI answers and identify patterns in grounding queries, but connecting AI visibility to business outcomes still requires combining this data with your own analytics.
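If you want to act on this now, a rough sketch of that join is below. It assumes a CSV export from the AI Performance dashboard and a landing-page export from your analytics platform; the file names and column names are hypothetical placeholders, not Bing’s actual export schema.

```python
# A minimal sketch of joining a Bing AI citation export with your own
# analytics data. File names and column names ("url", "citations",
# "sessions") are hypothetical; adjust them to match your actual exports.
import pandas as pd

# Hypothetical CSV exported from the AI Performance dashboard:
# one row per page with its citation count.
citations = pd.read_csv("bing_ai_citations.csv")  # columns: url, citations

# Hypothetical export from your analytics platform: sessions per landing page.
analytics = pd.read_csv("analytics_landing_pages.csv")  # columns: url, sessions

# Join on URL to see which frequently cited pages also receive traffic.
merged = citations.merge(analytics, on="url", how="left").fillna({"sessions": 0})

# Pages cited often but with few sessions may indicate AI answers that
# satisfy the query without a click.
print(merged.sort_values("citations", ascending=False).head(20))
```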

What SEO Professionals Are Saying

Wil Reynolds, founder of Seer Interactive, celebrated the feature on X and focused on the new grounding queries data:

“Bing is now giving you grounding queries in Bing Webmaster tools!! Just confirmed, now I gotta understand what we’re getting from them, what it means and how to use it.”

Koray Tuğberk GÜBÜR, founder of Holistic SEO & Digital, compared it directly to Google’s tooling on X:

“Microsoft Bing Webmaster Tools has always been more useful and efficient than Google Search Console, and once again, they’ve proven their commitment to transparency.”

Fabrice Canel, principal product manager at Microsoft Bing, framed the launch on X as a bridge between traditional and AI-driven optimization:

“Publishers can now see how their content shows up in the AI era. GEO meets SEO, power your strategy with real signals.”

The reaction across social media centered on a shared frustration. This is the data practitioners have been asking for, but it comes from Bing rather than Google. Several people expressed hope that Google and OpenAI would follow with comparable reporting.

Read our full coverage: Bing Webmaster Tools Adds AI Citation Performance Data

Hidden HTTP Homepage Can Break Your Site Name In Google

Google’s John Mueller shared a troubleshooting case on Bluesky where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results. The issue is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, hiding the problematic page from normal browsing.

Key Facts: The site used HTTPS, but a server-default HTTP homepage was still accessible. Chrome’s auto-upgrade meant the publisher never saw the HTTP version, but Googlebot doesn’t apply Chrome’s upgrade behavior, so it was pulling from the wrong page.

Why This Matters

This is the kind of problem you wouldn’t find in a standard site audit because your browser never shows it. If your site name or favicon in search results doesn’t match what you expect, and your HTTPS homepage looks correct, the HTTP version of your domain is worth checking.

Mueller suggested running curl from the command line to see the raw HTTP response without Chrome’s auto-upgrade. If it returns a server-default page instead of your actual homepage, that’s the source of the problem. You can also use the URL Inspection tool in Search Console with a Live Test to see what Google retrieved and rendered.
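For reference, here’s a minimal sketch of that check in Python using the requests library, which, like curl, won’t silently upgrade the request to HTTPS. The domain is a placeholder; substitute your own.

```python
# Approximates Mueller's suggested curl check, e.g.:
#   curl -i http://example.com/
# A plain HTTP request here is NOT auto-upgraded to HTTPS the way Chrome
# upgrades it, so you see what a crawler would see.
import requests

response = requests.get(
    "http://example.com/",   # note: http://, not https://
    allow_redirects=False,   # a redirect to HTTPS here is the healthy case
    timeout=10,
)

print(response.status_code)
print(response.headers.get("Location", "(no redirect)"))
print(response.text[:500])  # a server-default page here is the problem Mueller described
```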

Google’s documentation on site names specifically mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what happens when an HTTP version contains content different from the HTTPS homepage you intended.

What People Are Saying

Mueller described the case on Bluesky as “a weird one,” noting that the core problem is invisible in normal browsing:

“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”

The case highlights a broader pattern: browser features often hide what crawlers see. Examples include Chrome’s auto-upgrade, reader modes, and client-side JavaScript rendering. To debug site name and favicon issues, check the server response directly, not just what loads in a browser.

Read our full coverage: Hidden HTTP Page Can Cause Site Name Problems In Google

New Data Shows Most Pages Fit Well Within Googlebot’s Crawl Limit

New research based on real-world webpages suggests most pages sit well below Googlebot’s 2 MB fetch cutoff. The data, analyzed by Search Engine Journal’s Roger Montti, draws on HTTP Archive measurements to put the crawl limit question into practical context.

Key Facts: HTTP Archive data suggests most pages are well below 2 MB. Google recently clarified in updated documentation that Googlebot’s limit for supported file types is 2 MB, while PDFs get a 64 MB limit.

Why This Matters

The crawl limit question has been circulating in technical SEO discussions, particularly after Google updated its Googlebot documentation earlier this month.

The new data answers the practical question that documentation alone couldn’t. Does the 2 MB limit matter for your pages? For most sites, the answer is no. Standard webpages, even content-heavy ones, rarely approach that threshold.

Where the limit could matter is on pages with extremely bloated markup, inline scripts, or embedded data that inflates HTML size beyond typical ranges.
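If you want to verify where your own pages sit, a quick sketch is below. It measures the size of the fetched HTML against the 2 MB figure; the URL is a placeholder, and this is a sanity check rather than a reproduction of Googlebot’s exact fetch behavior.

```python
# A quick sanity check of raw HTML size against Googlebot's 2 MB fetch
# limit described above. The URL is a placeholder.
import requests

LIMIT_BYTES = 2 * 1024 * 1024  # 2 MB

response = requests.get("https://example.com/your-heaviest-page", timeout=10)
size = len(response.content)  # bytes of the fetched HTML

print(f"HTML size: {size / 1024:.1f} KB ({size / LIMIT_BYTES:.1%} of the limit)")
if size > LIMIT_BYTES:
    print("Content past 2 MB would be ignored by Googlebot.")
```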

The broader pattern here is Google making its crawling systems more transparent. Moving documentation to a standalone crawling site, clarifying which limits apply to which crawlers, and now having real-world data to validate those limits gives a clearer picture of what Googlebot handles.

What Technical SEO Professionals Are Saying

Dave Smart, technical SEO consultant at Tame the Bots and a Google Search Central Diamond Product Expert, put the numbers in perspective in a LinkedIn post:

“Googlebot will only fetch the first 2 MB of the initial html (or other resource like CSS, JavaScript), which seems like a huge reduction from 15 MB previously reported, but honestly 2 MB is still huge.”

Smart followed up by updating his Tame the Bots fetch and render tool to simulate the cutoff. In a Bluesky post, he added a caveat about the practical risk:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to cap text based files to 2 MB to simulate this.”

Google’s John Mueller endorsed the tool on Bluesky, writing:

“If you’re curious about the 2MB Googlebot HTML fetch limit, here’s a way to check.”

Mueller also shared Web Almanac data on Reddit to put the limit in context:

“The median on mobile is at 33kb, the 90-percentile is at 151kb. This means 90% of the pages out there have less than 151kb HTML.”

Roger Montti, writing for Search Engine Journal, reached a similar conclusion after reviewing the HTTP Archive data. Montti noted that the data based on real websites shows most sites are well under the limit, and called it “safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.”

Read our full coverage: New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough

Theme Of The Week: The Diagnostic Gap

Each story this week points to something practitioners couldn’t see before, or were checking the wrong way.

Bing’s AI citation dashboard fills a measurement gap that has existed since AI answers started citing website content. Mueller’s HTTP homepage case reveals an invisible page that standard site audits and browser checks would miss entirely because Chrome hides it. And the Googlebot crawl limit data answers a question that documentation updates raised, but couldn’t resolve on their own.

The connecting thread isn’t that these are new problems. AI citations have been happening without measurement tools. Ghost HTTP pages have been confusing site name systems since Google introduced the feature. And crawl limits have been listed in Google’s docs for years without real-world validation. What changed this week is that each gap got a concrete diagnostic: a dashboard, a curl command, and a dataset.

The takeaway is that the tools and data for understanding how search engines interact with your content are getting more specific. The challenge is knowing where to look.

Featured Image: Accogliente Design/Shutterstock

Discover Core Update, AI Mode Ads & Crawl Policy – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse for SEO: the updates cover how Google ranks content in Discover, how it plans to monetize AI search, and what content you serve to bots.

Here’s what matters for you and your work.

Google Releases Discover-Only Core Update

Google launched the February 2026 Discover core update, a broad ranking change targeting the Discover feed rather than Search. The rollout may take up to two weeks.

Key Facts: The update is initially limited to English-language users in the United States. Google plans to expand it to more countries and languages, but hasn’t provided a timeline. Google described it as designed to “improve the quality of Discover overall.” Existing core update and Discover guidance apply.

Why This Matters For SEOs

Google has historically rolled Discover ranking changes into broader core updates that affected Search as well. Announcing a Discover-specific core update means rankings in the feed can now move without any corresponding change in Search results.

That distinction creates a monitoring problem. When you track performance in Search Console, check Discover traffic independently over the next two weeks. Traffic drops that look like a core update hit may be Discover-only, and treating them as Search problems leads to the wrong diagnosis.

Discover traffic concentration has grown for publishers. NewzDash CEO John Shehata reported that Discover accounts for roughly 68% of Google-sourced traffic to news sites. A core update targeting that surface independently raises the stakes for any publisher relying on the feed.
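To run that independent check, you can query Discover and Search as separate surfaces through the Search Console API. A minimal sketch follows, assuming service-account credentials, a placeholder site URL, and an illustrative date range.

```python
# Pulls Discover and Search clicks separately via the Search Console API,
# so a Discover-only swing isn't misread as a Search problem. The key file
# path, site URL, and dates are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

def daily_clicks(search_type):
    """Return date-keyed clicks for one surface ("web" or "discover")."""
    response = service.searchanalytics().query(
        siteUrl="https://example.com/",
        body={
            "startDate": "2026-02-01",
            "endDate": "2026-02-14",
            "type": search_type,
            "dimensions": ["date"],
        },
    ).execute()
    return {row["keys"][0]: row["clicks"] for row in response.get("rows", [])}

web = daily_clicks("web")
discover = daily_clicks("discover")

for date in sorted(web):
    print(date, "search:", web[date], "discover:", discover.get(date, 0))
```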

Read our full coverage: Google Releases Discover-Focused Core Update

Alphabet Q4 Earnings Reveal AI Mode Monetization Plans

Alphabet reported Q4 2025 earnings, showing Search revenue grew 17% to $63 billion. The call included the first detailed look at how Google plans to monetize AI Mode.

Key Facts: CEO Sundar Pichai said AI Mode queries are three times longer than traditional searches. Chief Business Officer Philipp Schindler described the resulting ad inventory as reaching queries that were “previously challenging to monetize.” Google is testing ads below AI Mode responses.

Why This Matters For SEOs

The monetization details matter more than the revenue headline. Google is treating AI Mode as additive inventory, not a replacement for traditional search ads. Longer queries create new ad surfaces that didn’t exist when users typed three-word searches. For paid search practitioners, that means new campaign territory in conversational queries.

The metrics Google celebrated on this call describe users staying on Google longer. Google framed longer AI Mode sessions as a growth driver, and the monetization infrastructure follows that logic. The tradeoff to watch is referral traffic.

AI Mode creates a seamless path from AI Overviews, as detailed in our coverage last week. The earnings data suggest Google sees that containment as part of the growth story.

Read our full coverage: Alphabet Q4 2025: AI Mode Monetization Tests And Search Revenue Growth

Mueller Pushes Back On Serving Markdown To LLM Bots

Google Search Advocate John Mueller pushed back on the idea of serving Markdown files to LLM crawlers instead of standard HTML, calling the concept “a stupid idea” on Bluesky and raising technical concerns on Reddit.

Key Facts: A developer described plans to serve raw Markdown to AI bots to reduce token usage. Mueller questioned whether LLM bots can recognize Markdown on a website as anything other than a text file, or follow its links. He asked what would happen to internal linking, headers, and navigation. On Bluesky, he was more direct, calling the conversion “a stupid idea.”

Why This Matters For SEOs

The practice exists because developers assume LLMs process Markdown more efficiently than HTML. Mueller’s response treats this as a technical problem, not an optimization. Stripping pages to Markdown can remove the structure that bots need to understand relationships between pages.
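To see the kind of structure loss Mueller is pointing at, here’s a small illustration using the html2text converter; the HTML sample is hypothetical.

```python
# Illustrates the structural loss Mueller described: converting HTML to
# Markdown strips semantic elements. html2text is one common converter;
# the HTML sample below is hypothetical.
import html2text

html = """
<nav><a href="/docs/">Docs</a> <a href="/pricing/">Pricing</a></nav>
<main><h1>Widgets</h1><p>See our <a href="/docs/setup/">setup guide</a>.</p></main>
"""

print(html2text.html2text(html))
# The <nav> and <main> semantics are gone. Links survive only as inline
# Markdown syntax, which a bot may or may not treat as crawlable links:
# exactly the question Mueller raised.
```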

Mueller’s position is consistent with his earlier technical guidance, including his advice on multi-domain crawling and crawl slumps, and fits a pattern of drawing clear lines around bot-specific content formats. He previously compared llms.txt to the keywords meta tag, and SE Ranking’s analysis of 300,000 domains found no connection between having an llms.txt file and LLM citation rates.

Read our full coverage: Google’s Mueller Calls Markdown-For-Bots Idea ‘A Stupid Idea’

Google Files Bugs Against WooCommerce Plugins For Crawl Issues

Google’s Search Relations team said on the Search Off the Record podcast that they filed bugs against WordPress plugins that generate unnecessary crawlable URLs through action parameters, such as add-to-cart links.

Key Facts: Certain plugins create URLs that Googlebot discovers and attempts to crawl. The result is wasted crawl budget on pages with no search value. Google filed a bug with WooCommerce and flagged other plugin issues that remain unfixed. The team’s response targeted plugin developers rather than expecting individual sites to fix the problem.

Why This Matters For SEOs

Google intervening at the plugin level is unusual. Normally, crawl efficiency falls on individual sites. Filing bugs upstream suggests the problem is widespread enough that one-off fixes won’t solve it.

Ecommerce sites running WooCommerce should audit their plugins for URL patterns that generate crawlable action parameters. Check your crawl stats in Search Console for URLs containing cart or checkout parameters that shouldn’t be indexed.
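As a starting point for that audit, here’s a rough sketch that scans an access log for Googlebot requests to action-parameter URLs. The log path, the simplified log parsing, and the parameter list are all assumptions to adapt to your setup.

```python
# Rough audit of server logs for Googlebot requests to action-parameter
# URLs like add-to-cart links. Log path and parameter list are
# illustrative assumptions; common-log-format parsing is simplified.
import re
from collections import Counter

ACTION_PARAMS = ("add-to-cart", "remove_item", "wc-ajax")  # illustrative patterns
hits = Counter()

with open("/var/log/nginx/access.log") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        match = re.search(r'"(?:GET|POST) (\S+)', line)
        if match and any(p in match.group(1) for p in ACTION_PARAMS):
            hits[match.group(1)] += 1

# URLs Googlebot keeps fetching despite having no search value.
for url, count in hits.most_common(20):
    print(count, url)
```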

Read our full coverage: Google’s Crawl Team Filed Bugs Against WordPress Plugins

LinkedIn Shares What Worked For AI Search Visibility

LinkedIn published findings from internal testing on what drives visibility in AI-generated search results. The company reported that non-brand, awareness-driven traffic declined by up to 60% across the industry for a subset of B2B topics.

Key Facts: LinkedIn’s testing found that structured content performed better in AI citations, particularly pages with named authors, visible credentials, and clear publication dates. The company is developing new analytics to identify LLM-driven visits as a distinct traffic source and to monitor LLM bot behavior in CMS logs.

Why This Matters For SEOs

What caught my attention is how much this overlaps with what AI platforms themselves are saying. Search Engine Journal’s Roger Montti recently interviewed Jesse Dwyer, head of communications at Perplexity. The AI platform’s own guidance on what drives citations lines up closely with what LinkedIn found. When both the cited source and the citing platform arrive at the same conclusions independently, that gives you something beyond speculation.
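If you want to approximate the traffic-source analytics LinkedIn describes, a minimal sketch of referrer-based classification is below. The domain list is an assumption based on publicly known AI platforms and will need ongoing maintenance as referrer strings change.

```python
# Classifies visits as LLM-driven by referrer domain, similar in spirit
# to the analytics LinkedIn describes. The domain list is an assumption.
from urllib.parse import urlparse

LLM_REFERRERS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def traffic_source(referrer: str) -> str:
    """Label a hit as LLM-driven if the referrer host matches a known AI platform."""
    host = urlparse(referrer).netloc.lower()
    return "llm" if host in LLM_REFERRERS else "other"

# Usage example with a hypothetical referrer value:
print(traffic_source("https://chatgpt.com/"))  # -> "llm"
```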

Read our full coverage: LinkedIn Shares What Works For AI Search Visibility

Theme Of The Week: Google Is Splitting The Dashboard

Every story this week points to the same realization. “Google” is no longer one thing to monitor.

Google is now announcing Discover core updates separately from Search core updates. AI Mode carries ad formats and checkout features that don’t exist in traditional results. Mueller drew a policy line around how bots consume content. Google filed crawl bugs upstream at the plugin level, and LinkedIn is building a separate measurement for AI-driven traffic.

A year ago, you could check one traffic graph in Search Console and get a reasonable picture. The picture now fragments across Discover, Search, AI Mode, and LLM-driven traffic. Ranking signals and update cycles differ, and the gaps between them haven’t been closed.

Top Stories Of The Week:

This week’s coverage spanned five developments across Discover updates, search monetization, crawl policy, and AI visibility.

Featured Image: Accogliente Design/Shutterstock