Apple Safari Update Enables Tracking Two Core Web Vitals Metrics via @sejournal, @martinibuster

Safari 26.2 adds support for measuring Largest Contentful Paint (LCP) and for the Event Timing API, which is used to calculate Interaction to Next Paint (INP). This enables site owners to collect LCP and INP data from Safari users through the browser Performance API using their own analytics and real user monitoring tools.
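
For site owners who run their own instrumentation, here is a minimal sketch of how LCP can be observed through the standard Performance API. This is the generic cross-browser API rather than anything Safari-specific:

```typescript
// Minimal sketch: observe LCP candidates via the Performance API.
// The browser may emit several "largest-contentful-paint" entries as
// progressively larger elements render; the last entry before user input
// is the final LCP value for the page load.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log("LCP candidate (ms):", lastEntry.startTime);
});

// "buffered: true" replays entries recorded before the observer was created.
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```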

LCP And INP In Apple Safari Browser

LCP is a Core Web Vital and a ranking signal. Interaction to Next Paint (INP), also a Core Web Vitals metric, measures how quickly a website responds to user interactions. Native Safari browser support enables accurate measurement, closing a long-standing blind spot in performance diagnostics for site visitors using Apple devices.

INP is a particularly critical measurement because it reports the total time between a user’s action (click, tap, or key press) and the next visual update on the screen, tracking the slowest interaction observed during a visit. It matters because it tells site owners whether the page feels “frozen” or laggy. Fast INP scores translate to a responsive, positive experience for visitors interacting with the website.
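
The Event Timing API that backs INP can be observed the same way. A minimal sketch, with the 40 ms threshold below being an illustrative choice rather than a required value:

```typescript
// Minimal sketch: log slow interactions via the Event Timing API.
// Entries are reported only for events longer than a duration threshold;
// "duration" spans input delay, event handler time, and time to next paint.
const eventObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as PerformanceEventTiming[]) {
    console.log(`${entry.name} took ${entry.duration} ms`);
  }
});

// durationThreshold is in milliseconds; buffered entries include
// interactions that happened before the observer was registered.
eventObserver.observe({ type: "event", durationThreshold: 40, buffered: true });
```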

This change will have no effect on public tools and datasets like PageSpeed Insights and CrUX because they are Chrome-based.

However, Safari site visitors can now be included in field performance data where site owners have configured measurement, such as in Google Analytics or other performance monitoring platforms.

The following analytics packages can now be configured to surface these metrics from Safari browser site visitors:

  • Google Analytics (GA4, via Web Vitals or custom event collection; see the sketch after this list)
  • Adobe Analytics
  • Matomo
  • Amplitude (with performance instrumentation)
  • Mixpanel (with custom event pipelines)
  • Custom / In-House Monitoring
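
As an example of the GA4 route, here is a hedged sketch using the open-source web-vitals library. The onLCP and onINP callbacks are the library’s documented API; the web_vitals event name and its parameter names are illustrative choices, not a GA4 standard:

```typescript
// Sketch: forward LCP and INP from any supporting browser (now including
// Safari 26.2) to GA4. Assumes the standard gtag.js snippet is on the page.
import { onLCP, onINP, type Metric } from "web-vitals";

declare function gtag(...args: unknown[]): void; // provided by the GA4 snippet

function sendToGA4(metric: Metric): void {
  gtag("event", "web_vitals", {
    metric_name: metric.name,   // "LCP" or "INP"
    metric_value: metric.value, // milliseconds
    metric_id: metric.id,       // unique per page load, useful for aggregation
  });
}

onLCP(sendToGA4);
onINP(sendToGA4);
```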

Apple Safari’s update also enables Real User Monitoring (RUM) platforms to surface this data for site owners:

  • Akamai mPulse
  • Cloudflare Web Analytics
  • Datadog RUM
  • Dynatrace
  • Elastic Observability (RUM)
  • New Relic Browser
  • Raygun
  • Sentry Performance
  • SpeedCurve
  • Splunk RUM

Apple’s official documentation explains:

“Safari 26.2 adds support for two tools that measure the performance of web applications, Event Timing API and Largest Contentful Paint.

The Event Timing API lets you measure how long it takes for your site to respond to user interactions. When someone clicks a button, types in a field, or taps on a link, the API tracks the full timeline — from the initial input through your event handlers and any DOM updates, all the way to when the browser paints the result on screen. This gives you insight into whether your site feels responsive or sluggish to users. The API reports performance entries for interactions that take longer than a certain threshold, so you can identify which specific events are causing delays. It makes measuring “Interaction to Next Paint” (INP) possible.

Largest Contentful Paint (LCP) measures how long it takes for the largest visible element to appear in the viewport during page load. This is typically your main image, a hero section, or a large block of text — whatever dominates the initial view. LCP gives you a clear signal about when your page feels loaded to users, even if other resources are still downloading in the background.”

Safari 26.2 provides new data that is critical for SEO and for monitoring the user experience, information that site owners rely on. Safari traffic represents a significant share of site visits. These improvements make it possible for site owners to have a more complete view of the real user experience across more devices and browsers.

Eight Overlooked Reasons Why Sites Lose Rankings In Core Updates via @sejournal, @martinibuster

There are multiple reasons why a site can drop in rankings due to a core algorithm update. The reasons may reflect specific changes to the way Google interprets content, a search query, or both. The change could also be subtle, like an infrastructure update that enables finer relevance and quality judgments. Here are eight commonly overlooked reasons why a site may have lost rankings after a Google core update.

Ranking Where It’s Supposed To Rank?

If the site was previously ranking well and now it doesn’t, it could be what I call “it’s ranking where it’s supposed to rank.” That means some part of Google’s algorithm has caught up to a loophole that the page was intentionally or accidentally taking advantage of, and is now ranking it where it should have ranked in the first place.

This is difficult to diagnose because a publisher might believe that the web pages or links were perfect the way they previously were, but in fact there was an issue.

Topic Theming Defines Relevance

A part of the ranking process is determining what the topic of a web page is. Google admitted a year ago that a core topicality system is part of the ranking process, so topicality as a component of the ranking algorithm is not speculation.

The so-called Medic Update of 2018 brought this part of Google’s algorithm into sharp focus. Suddenly, sites that were previously relevant for medical keywords were nowhere to be found because they dealt in folk remedies, not medical ones. What happened was that Google’s understanding of what keyword phrases were about became more topically focused.

Bill Slawski wrote about a Google patent (Website representation vector) that describes a way to classify websites by knowledge domains and expertise levels that sounds like a direct match to what the Medic Update was about.

The patent describes part of what it’s doing:

“The search system can use information for a search query to determine a particular website classification that is most responsive to the search query and select only search results with that particular website classification for a search results page. For example, in response to receipt of a query about a medical condition, the search system may select only websites in the first category, e.g., authored by experts, for a search results page.”

Google’s interpretation of what it means to be relevant became increasingly about topicality in 2018 and continued to be refined in successive updates over the years. Instead of relying on links and keyword similarity, Google introduced a way to identify and classify sites by knowledge domain (the topic) in order to better understand how search queries and content are relevant to each other.

Returning to the medical queries, the reason many sites lost rankings during the Medic Update was that their topics were outside the knowledge domain of medical remedies and science. Sites about folk and alternative healing were permanently locked out of ranking for medical phrases, and no amount of links could ever restore their rankings. The same thing happened across many other topics and continues to affect rankings as Google’s ability to understand the nuances of topical relevance is updated.

Example Of Topical Theming

A way to think of topical theming is to consider that keyword phrases can be themed by topic. For example, the keyword phrase “bomber jacket” is related to military clothing, flight clothing, and men’s jackets. At the time of writing, Alpha Industries, a manufacturer of military clothing, is ranked number one in Google. Alpha Industries is closely related to military clothing because the company not only focuses on selling military-style clothing, it started out as a military contractor producing clothing for America’s military, so consumers closely identify it with military clothing.

Screenshot Showing Topical Theming

Screenshot of SERPs showing how Google interprets a keyword phrase and web pages

So it’s not surprising that Alpha Industries ranks #1 for bomber jacket because it ticks both boxes for the topicality of the phrase “bomber jacket”:

  • Shopping > Military clothing
  • Shopping > Men’s clothing

If your page was previously ranking and now it isn’t, it’s possible that the topical theme was redefined more sharply. The only way to check is to review the top-ranked sites, focusing on the differences between ranges such as positions one and two, or sometimes positions one through three or one through five; the range depends on how the topic is themed. In the bomber jacket example, positions one through three are themed by “military clothing” and “men’s clothing.” Position three is held by the Thursday Boot Company, which is themed more closely to “men’s clothing” than to military clothing. Perhaps not coincidentally, the Thursday Boot Company is closely identified with men’s fashion.

This is a way to analyze the SERPs to understand why sites are ranking and why others are not.

Topic Personalization

Sometimes the topical themes are not locked into place because user intents can change. In that case, opening a new browser or searching a second time in a different tab might cause Google to change the topical theme to a different topical intent.

In the case of the “bomber jacket” search results, the hierarchy of topical themes can change to:

  • Informational > Article About Bomber Jackets
  • Shopping > Military clothing
  • Shopping > Men’s clothing

The reason for that is directly related to the user’s information need, which informs the intent and, in turn, the correct topic. In this case, the military clothing theme looks like the dominant user intent for the topic, but the informational/discovery intent may be a close second that personalization can trigger. This can vary by previous searches but also by geographic location, a user’s device, and even the time of day.

The takeaway is that there may not be anything wrong with a site; it may simply be ranking for a more specific topical intent. If personalization of the topic means your page no longer ranks, a solution may be to create another page focused on the additional topic theme that Google is ranking.

Authoritativeness

In one sense, authoritativeness can be seen as external validation of a website’s expertise as a go-to source for a product, service, or content topic. While the author’s expertise contributes to authoritativeness, and authoritativeness in a topic can be inherent to a website, ultimately it’s third-party recognition from readers, customers, and other websites (in the form of citations and links) that communicates a website’s authoritativeness back to Google as a validating signal.

The above can be reduced to these four points:

  1. Expertise and topical focus originate within the website.
  2. Authoritativeness is the recognition of that expertise.
  3. Google does not assess that recognition directly.
  4. Third-party signals can validate a site’s authoritativeness.

To that we can add the previously discussed Website Representation Vector patent that shows how Google can identify expertise and authoritativeness.

What’s going on, then, is that Google selects relevant content and winnows it down by prioritizing expert content.

Here’s how Google explains how it uses E-E-A-T:

“Google’s automated systems are designed to use many different factors to rank great content. After identifying relevant content, our systems aim to prioritize those that seem most helpful. To do this, they identify a mix of factors that can help determine which content demonstrates aspects of experience, expertise, authoritativeness, and trustworthiness, or what we call E-E-A-T.”

Authoritativeness is not about how often a site publishes about a topic; any spammer can do that. It has to be about more than that. E-E-A-T is a standard to hold your site up to.

Stuck On Page Two Of Search Results? Try Some E-E-A-T

Speaking of E-E-A-T, many SEOs have the mistaken idea that it’s something they can add to websites. That’s not how it works. At the 2025 New York City Search Central Live event, Google’s John Mueller confirmed that E-E-A-T is not something you add to web pages.

He said:

“Sometimes SEOs come to us or like mention that they’ve added EEAT to their web pages. That’s not how it works. Sorry, you can’t sprinkle some experiences on your web pages. It’s like, that doesn’t make any sense.”

Clearly, content reflects qualities of authoritativeness, trustworthiness, expertise, and experience, but it’s not something that you add to content. So what is it?

E-E-A-T is just a standard to hold your site up to. It’s also a subjective judgment made by site visitors, the way a sandwich can taste great: the “great” part is a matter of opinion.

One thing that is difficult for SEOs to diagnose is when their content is missing that extra something to push their site onto the first page of the SERPs. It can feel unfair to see competitors ranking on the first page of the SERPs even though your content is just as good as theirs.

The difference is often that the top-ranked web pages are optimized for people. Another reason is that more people know about them because they take a multimodal approach to content, whereas the site stuck on page two of the SERPs mainly communicates via text.

In SERPs where Google prefers government and educational sites for a particular keyword phrase but makes an exception for one commercial site, I almost always find evidence that the commercial site’s content and outreach resonate with site visitors in ways that the competitors’ do not. Websites that focus on multimodal, people-optimized content and experiences are usually what I find in those weird outlier rankings.

So if your site is stuck on page two, revisit the top-ranked web pages and identify ways that those sites are optimized for people and multimodal content. You may be surprised to see what makes those sites resonate with users.

Temporary Rankings

Some rankings are not made to last. This is the case with a new site or new page ranking boost. Google has a thing where it tastes a new site to see how it fits with the rest of the Internet. A lot of SEOs crow about their client’s new website conquering the SERPs right out of the gate. What you almost never hear about is when those same sites drop out of the SERPs.

This isn’t a bad thing. It’s normal. It simply means that Google has tried the site and now it’s time for the site to earn its place in the SERPs.

There’s Nothing Wrong With The Site?

Many site publishers find it frustrating to be told that there’s nothing wrong with their site even though it lost rankings. What’s going on may be that the site and web page are fine, but that the competitors’ pages are finer. These kinds of issues are typically where the content is fine and the competitors’ content is about the same but is better in small ways.

This is the one form of ranking drop that many SEOs and publishers easily overlook because SEOs generally try to identify what’s “wrong” with a site, and when nothing obvious jumps out at them, they try to find something wrong with the backlinks or something else.

This inability to find something wrong leads to recommendations like filing link disavows to get rid of spam links or removing content to fix perceived but not actual problems (like duplicate content). They’re basically grasping at straws to find something to fix.

But sometimes it’s not that something is wrong with the site. Sometimes it’s just that there’s something right with the competitors.

What can be right with competitors?

  • Links
  • User experience
  • Image content (for example, site visitors are reflected in image content).
  • Multimodal approach
  • Strong outreach to potential customers
  • In-person marketing
  • Cultivated word-of-mouth promotion
  • Better advertising
  • Optimized for people

SEO Secret Sauce: Optimized For People

Optimizing for people is a common blind spot. It’s a subset of conversion optimization, which is about subtle signals that indicate a web page contains what the site visitor needs.

Sometimes that need is to be recognized and acknowledged. It can be reassurance that you’re available right now or that the business is trustworthy.

For example, a client’s site featured a badge at the top of the page that said something like “Trusted by over 200 of the Fortune 500.” That badge whispered, “We’re legitimate and trustworthy.”

Another example is how a business identified that most of their site visitors were mothers of boys, so their optimization was to prioritize images of mothers with boys. This subtly recognized the site visitor and confirmed that what’s being offered is for them.

Nobody loves a site because it’s heavily SEO’d, but people do love sites that acknowledge the site visitor in some way. This is the secret sauce that’s invisible to SEO tools but helps sites outrank their competitors.

It may be helpful to avoid mimicking what competitors are doing and instead differentiate the site and its outreach in ways that make people like your site more. When I say outreach, I mean actively seeking out places where your typical customer might be hanging out and figuring out how you can make your pitch there. Third-party signals have long been strong ranking factors at Google, and now, with AI Search, what people and other sites say about your site increasingly plays a role in rankings.

Takeaways

  1. Core updates sometimes correct over-ranking rather than punish sites
    Ranking drops sometimes reflect Google closing loopholes and placing pages where they should have ranked all along rather than identifying new problems.
  2. Topical theming has become more precise
    Core updates sometimes make existing algorithms more precise. Google increasingly ranks content based on topical categories and intent, not just keywords or links.
  3. Topical themes can change dynamically
    Search results may shift between informational and commercial themes depending on context such as prior searches, location, device, or time of day.
  4. Authoritativeness is externally validated
    Recognition from users, citations, links, and broader awareness can be the reason one site ranks and another does not.
  5. SEO does not control E-E-A-T and can’t be reduced to an on-page checklist
    While qualities of expertise and authoritativeness are inherent in content, they remain subjective judgments inferred from external signals, not something SEOs can add directly to content.
  6. Temporary ranking boosts are normal
    New pages and sites are tested briefly, then must earn long-term placement through sustained performance and reception.
  7. Competitors may simply be better for users
    Ranking losses often occur because competitors outperform in subtle but meaningful ways, not because the losing site is broken.
  8. People-first optimization is a competitive advantage
    Sites that resonate emotionally, visually, and practically with visitors often outperform purely SEO-optimized pages.

Ranking changes after a core update sometimes reflect clearer judgments about relevance, authority, and usefulness rather than newly discovered web page flaws. As Google sharpens how it understands topics, pages increasingly compete on how well they align with what users are actually trying to accomplish and which sources people already recognize and trust. The lasting advantage comes from building a site that resonates with actual visitors, earns attention beyond search, and gives Google consistent evidence that users prefer it over alternatives. Marketing, the old-fashioned practice of telling people about a business, should not be overlooked.

Featured Image by Shutterstock/Silapavet Konthikamee

Google AI Mode & AI Overviews Cite Different URLs, Per Ahrefs Report via @sejournal, @MattGSouthern

Google’s AI Mode and AI Overviews can produce answers with similar meaning while citing different sources, according to new data from Ahrefs.

The report, published on the Ahrefs blog, analyzed September 2025 U.S. data from Ahrefs’ Brand Radar tool and compared AI Mode and AI Overview responses for the same queries.

The authors looked at 730,000 query pairs for content similarity and 540,000 query pairs for citation and URL analysis.

What The Study Found

Ahrefs reports that AI Mode and AI Overviews cited the same URLs only 13% of the time. When comparing only the top three citations in each response, overlap increased to 16%.

The language in the responses also varied. Ahrefs reports 16% overlap in unique words and states that AI Mode and AI Overviews share the exact same first sentence only 2.5% of the time.

Ahrefs reported strong semantic alignment, with an average semantic similarity score of 86%, and 89% of response pairs scoring above 0.8 on a scale where 1.0 indicates identical meaning.
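
For context on what that score means: semantic similarity on a 0-to-1 scale is typically computed as the cosine similarity between text embeddings. Ahrefs has not published its exact method, so the following is a generic sketch, not its pipeline:

```typescript
// Generic sketch: cosine similarity between two embedding vectors.
// The vectors would come from any sentence-embedding model; scores near
// 1.0 indicate near-identical meaning (the report's threshold was 0.8).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```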

Despina Gavoyannis, Senior SEO Specialist at Ahrefs, writes:

“Put simply: 9 out of 10 times, AI Mode and AI Overview agreed on what to say. They just said it differently and cited different sources.”

Different Source Preferences

Ahrefs reports differences in which websites and content types each feature tends to cite.

For example, Wikipedia appears in 28.9% of AI Mode citations compared to 18.1% in AI Overviews. The data also shows that AI Mode cited Quora 3.5x as often and cited health sites at roughly double the rate of AI Overviews.

AI Overviews, by contrast, leaned more heavily on video content. YouTube was the most frequently cited source for AI Overviews, whereas Reddit was cited at similar rates in both AI Mode and AI Overviews.

Ahrefs also reports that AI Overviews cited videos and core pages (such as homepages) nearly twice as often as AI Mode. At the same time, both features showed a strong preference for article-format pages overall.

Entity And Brand Mentions

Ahrefs found AI Mode responses were about four times longer than AI Overviews on average and included more entities.

In the dataset, AI Mode averaged 3.3 entity mentions per response compared to 1.3 for AI Overviews. Approximately 61% of the time, AI Mode included all entities mentioned in the AI Overview response and then added additional entities.

Many responses didn’t include brands or entities. Ahrefs reports that 59.41% of AI Overview responses and 34.66% of AI Mode responses contained no mentions of persons or brands, which the authors associate with informational queries in which named entities are not typically part of the answer.

Citation Gaps

The data shows that AI Mode was more likely to include citations than AI Overviews.

Only 3% of AI Mode responses lacked sources, compared to 11% of AI Overviews. Ahrefs reports that missing citations typically occur in cases such as calculations, sensitive queries, help center redirects, or unsupported languages.

Why This Matters

This report suggests that AI Mode and AI Overviews can differ in the sources they credit, even when they reach similar conclusions for the same query.

For monitoring purposes, this can affect how you interpret “visibility” across experiences. A citation (or a mention) in AI Overviews does not necessarily imply you will be cited in AI Mode for the same query, and AI Mode’s longer responses may include additional entities and competitors compared to the shorter AI Overview format.

Google’s documentation states that both AI Overviews and AI Mode may use “query fan-out,” which issues multiple related searches across subtopics and data sources while a response is being generated.
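
Google hasn’t published implementation details, but the fan-out pattern itself is straightforward. A generic illustration (every name here is hypothetical, and this is not Google’s code):

```typescript
// Generic fan-out sketch: expand a query into related subqueries, search
// them in parallel, and merge the de-duplicated results. Different
// expansion or merging choices would naturally surface different URLs.
async function queryFanOut(
  query: string,
  expand: (q: string) => string[],          // produces related subqueries
  search: (q: string) => Promise<string[]>, // returns URLs for one query
): Promise<string[]> {
  const subqueries = [query, ...expand(query)];
  const results = await Promise.all(subqueries.map(search));
  return [...new Set(results.flat())];
}
```

If the two features expand or merge queries differently, low citation overlap follows naturally even when the underlying answer is the same.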

Google also notes that AI Mode and AI Overviews may use different models and techniques, so the responses and links they display will vary.

Looking Ahead

Ahrefs notes this analysis compares single generations of AI Mode and AI Overview responses. In related research, Ahrefs reported that 45.5% of AI Overview citations change when AI Overviews update, suggesting that overlap can appear different across repeated runs.

Even with that caveat, the low overlap observed in this dataset indicates that AI Mode and AI Overviews frequently select different URLs as supporting sources for the same query.


Featured Image: hafakot/Shutterstock

Google Explains Why Staggered Site Migrations Impact SEO Outcome via @sejournal, @martinibuster

Google’s John Mueller recently answered a question about how Google responds to staggered site moves where a site is partially moved from one domain to another. He said a standard site move is generally fine, but clarified his position when it came to partial site moves.

Straight Ahead Site Move?

Someone asked about doing a site move, initially giving the impression that they were moving the entire site. The question was in the context of using Google Search Console’s change of address feature.

They asked:

“Do you have any thoughts on this GSC Change of Address question?

Can we submit the new domain if a few old URLs still get traffic and aren’t redirected yet, or should we wait until all redirects are live?”

Mueller initially answered that it should be fine:

“It’s generally fine (for example, some site moves keep the robots.txt on the old domain with “allow: /” so that all URLs can be followed). The tool does check for the homepage redirect though.”
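
For reference, the robots.txt pattern Mueller mentions would look something like this on the old domain, keeping every old URL crawlable so the redirects can be followed:

```
# robots.txt served from the OLD domain during a site move.
# Allowing everything lets Googlebot recrawl old URLs and follow
# their redirects to the new domain.
User-agent: *
Allow: /
```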

Google Explains Why Partial Site Moves Are Problematic

His opinion changed, however, after the OP responded with additional information: the home page had been moved, while many of the product and category pages would stay on the old domain for now. In other words, they wanted to move parts of the site now and other parts later, keeping one foot on the new domain and the other firmly planted on the old one.

That’s a different scenario entirely. Unsurprisingly, Mueller changed his opinion.

He responded:

“Practically speaking, it’s not going to be seen as a full site move. You can still use the change of address tool, but it will be a messy situation until you’ve really moved it all over. If you need to do this (sometimes it’s not easy, I get it :)), just know that it won’t be a clean slate.

…You’ll have a hard time tracking things & Google will have a hard time understanding your sites. My recommendation would be to clean it up properly as soon as you can. Even properly planned & executed site migrations can be hard, and this makes it much more challenging.”

Google’s Site Understanding

Something that I find intriguing is Mueller’s occasional reference to Google’s understanding of a website. He has mentioned this factor in other contexts, and it seems to be a catchall for quality-related considerations as well as for something he has previously described in terms of relevance: understanding where a site fits in the Internet.

In this context, Mueller appears to be using the phrase to mean understanding the site relative to the domain name.

Featured Image by Shutterstock/Here

Google Warns Noindex Can Block JavaScript From Running via @sejournal, @MattGSouthern

Google updated its JavaScript SEO documentation to clarify that noindex tags may prevent rendering and JavaScript execution, which blocks any script-based changes to the page.

  • When Google encounters `noindex`, it may skip rendering and JavaScript execution.
  • JavaScript that tries to remove or change `noindex` may not run for Googlebot on that crawl (see the sketch after this list).
  • If you want a page indexed, avoid putting `noindex` in the original page code.
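
A minimal sketch of the pattern the documentation warns against: shipping `noindex` in the initial HTML and relying on client-side JavaScript to remove it. The selector and removal logic here are illustrative:

```typescript
// Anti-pattern: the server-rendered HTML contains
//   <meta name="robots" content="noindex">
// and this client-side code removes it once the app decides the page
// should be indexable. Per Google's updated guidance, encountering
// noindex may cause Googlebot to skip rendering entirely, so this code
// may never run for the crawler and the page can stay out of the index.
const robotsMeta = document.querySelector('meta[name="robots"]');
if (robotsMeta?.getAttribute("content")?.includes("noindex")) {
  robotsMeta.remove(); // Googlebot may never execute this line
}
```
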
Cloudflare Report: Googlebot Tops AI Crawler Traffic via @sejournal, @MattGSouthern

Cloudflare published its sixth annual Year in Review, offering a comprehensive look at Internet traffic, security, and AI crawler activity across 2025.

The report draws on data from Cloudflare’s network, which spans more than 330 cities across 125 countries and handles over 81 million HTTP requests per second on average.

The AI crawler findings stand out. Googlebot crawled far more web pages than any other AI bot, reflecting Google’s dual-purpose approach to crawling for both search indexing and AI training.

Googlebot Top AI Crawler Traffic

Cloudflare analyzed successful requests for HTML content from leading AI crawlers during October and November 2025. The results showed Googlebot reached 11.6% of unique web pages in the sample.

That’s more than 3 times the pages seen by OpenAI’s GPTBot at 3.6%. It’s nearly 200 times more than PerplexityBot, which crawled just 0.06% of pages.

Bingbot came in third at 2.6%, followed by Meta-ExternalAgent and ClaudeBot at 2.4% each.

The report noted that because Googlebot crawls for both search indexing and AI model training, web publishers face a difficult choice. Blocking Googlebot’s AI training means risking search discoverability.

Cloudflare wrote:

“Because Googlebot is used to crawl content for both search indexing and AI model training, and because of Google’s long-established dominance in search, Web site operators are essentially unable to block Googlebot’s AI training without risking search discoverability.”

AI Bots Now Account For 4.2% of HTML Requests

Throughout 2025, AI bots (excluding Googlebot) averaged 4.2% of HTML requests across Cloudflare’s customer base. The share fluctuated between 2.4% in early April and 6.4% in late June.

Googlebot alone accounted for 4.5% of HTML requests, slightly more than all other AI bots combined.

The share of human-generated HTML traffic started 2025 at seven percentage points below non-AI bot traffic. By September, human traffic began exceeding non-AI bot traffic on some days. As of December 2, humans generated 47% of HTML requests while non-AI bots generated 44%.

Crawl-to-Refer Ratios Show Wide Variation

Cloudflare tracks how often AI and search platforms send traffic to sites relative to how often they crawl. A high ratio means heavy crawling without sending users back to source sites.

Anthropic had the highest ratios among AI platforms, ranging from approximately 25,000:1 to 100,000:1 during the second half of the year after stabilizing from earlier volatility.

OpenAI’s ratios reached as high as 3,700:1 in March. Perplexity maintained the lowest ratios among leading AI platforms, generally below 400:1 and under 200:1 from September onward.

For comparison, Google’s search crawl-to-refer ratio stayed much lower, generally between 3:1 and 30:1 throughout the year.
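
The ratio itself is simple division. A sketch of how a site owner might compute it from their own log aggregates (the field names are hypothetical):

```typescript
// Sketch: crawl-to-refer ratio from hypothetical log aggregates.
interface BotStats {
  crawlRequests: number;  // requests observed from the platform's crawler
  referredVisits: number; // human visits referred by the platform
}

function crawlToReferRatio(stats: BotStats): number {
  // Guard against divide-by-zero for platforms that refer no traffic.
  return stats.crawlRequests / Math.max(stats.referredVisits, 1);
}

// A 25,000:1 ratio means 25,000 pages crawled per referred visit:
console.log(crawlToReferRatio({ crawlRequests: 25_000, referredVisits: 1 }));
```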

User-Action Crawling Grew More Than 15X

Not all AI crawling is for model training. “User action” crawling occurs when bots visit sites in response to user questions posed to chatbots.

This category saw the fastest growth in 2025. User-action crawling volume increased more than 15 times from January through early December. The trend closely matched the traffic pattern for OpenAI’s ChatGPT-User bot, which visits pages when users ask ChatGPT questions.

The growth showed a weekly usage pattern starting in mid-February, suggesting increased use in schools and workplaces. Activity dropped during June through August when students were on break and professionals took vacations.

AI Crawlers Most Blocked In Robots.txt

Cloudflare analyzed robots.txt files across nearly 3,900 of the top 10,000 domains. AI crawlers were the most frequently blocked user agents.

GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives. These directives tell crawlers to stay away from entire sites.

Googlebot and Bingbot showed a different pattern. Their disallow directives leaned heavily toward partial blocks, likely focused on login endpoints and non-content areas rather than full site blocking.
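
An illustrative robots.txt reflecting the pattern Cloudflare describes, with full blocks for AI-only crawlers and partial blocks for search crawlers; the disallowed paths are hypothetical examples:

```
# Full blocks: keep AI-only crawlers away from the entire site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Partial blocks: exclude only non-content areas for search crawlers.
User-agent: Googlebot
Disallow: /login/

User-agent: Bingbot
Disallow: /login/
```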

Civil Society Became Most-Attacked Sector

For the first time, organizations in the “People and Society” vertical were the most targeted by attacks. This category includes religious institutions, nonprofits, civic organizations, and libraries.

The sector received 4.4% of global mitigated traffic, up from under 2% at the start of the year. Attack share jumped to over 17% in late March and peaked at 23.2% in early July.

Many of these organizations are protected by Cloudflare’s Project Galileo.

Gambling and games, the most-attacked vertical in 2024, saw its share drop by more than half to 2.6%.

Other Key Findings

Cloudflare’s report included several additional findings across traffic, security, and connectivity.

Global Internet traffic grew 19% year-over-year. Growth stayed relatively flat through mid-April, then accelerated after mid-August.

Post-quantum encryption now secures 52% of human traffic to Cloudflare, nearly double the 29% share at the start of the year.

ChatGPT remained the top generative AI service globally. Google Gemini, Windsurf AI, Grok/xAI, and DeepSeek were new entrants to the top 10.

Starlink traffic doubled in 2025, with service launching in more than 20 new countries.

Nearly half of the 174 major Internet outages observed globally were caused by government-directed shutdowns. Cable cut outages dropped nearly 50%, while power failure outages doubled.

European countries dominated Internet quality metrics. Spain topped the list for overall Internet quality, with average download speeds above 300 Mbps.

Why This Matters

The AI crawler data should affect how you think about bot access and traffic.

Google’s dual-purpose crawler creates a competitive advantage. You can block other AI crawlers while keeping Googlebot access for search visibility, but you can’t separate Google’s search crawling from its AI training crawling.

The crawl-to-refer ratios help quantify what publishers already suspected. AI platforms crawl heavily but send little traffic back. The gap between crawling and referring varies widely by platform.

The civil society attack data matters if you work with nonprofits or advocacy organizations. These groups now face the highest rate of attacks.

Looking Ahead

Cloudflare expects AI metrics to change as the space continues to evolve. The company added several new AI-related datasets to this year’s report that weren’t available in previous editions.

The crawl-to-refer ratios may change as AI platforms adjust their search features and referral behavior. OpenAI’s ratios already showed some decline through the year as ChatGPT search usage grew.

For robots.txt management, the data shows most publishers are choosing partial blocks for major search crawlers while fully blocking AI-only crawlers. The year-end state of these directives provides a baseline for tracking how publisher policies evolve in 2026.


Featured Image: Mamun_Sheikh/Shutterstock

Google Updates Search Live With Gemini Model Upgrade via @sejournal, @martinibuster

Google has updated Search Live with Gemini 2.5 Flash Native Audio, upgrading how voice functions inside Search while also extending the model’s use across translation and live voice agents. The update introduces more natural spoken responses in Search Live and reflects Google’s effort to treat voice as a core interface: users get everything they can get from regular search, can ask questions about the physical world around them, and can receive immediate voice translations between two people speaking different languages.

The newly updated voice capabilities, rolling out this week in the United States, will make Google’s voice responses sound more natural; responses can even be slowed down for instructional content.

According to Google:

“When you go Live with Search, you can have a back-and-forth voice conversation in AI Mode to get real-time help and quickly find relevant sites across the web. And now, thanks to our latest Gemini model for native audio, the responses on Search Live will be more fluid and expressive than ever before.”

Broader Gemini Native Audio Rollout

This Search upgrade is part of a broader update to Gemini 2.5 Flash Native Audio rolling out across Google’s ecosystem, including Gemini Live (in the Gemini app), Google AI Studio, and Vertex AI. The model processes spoken audio in real time and produces fluid spoken responses, reducing friction in live interactions. Although Google’s announcement didn’t say whether the model is a true speech-to-speech model (as opposed to speech-to-text followed by text-to-speech), the update follows Google’s October announcement of Speech-to-Retrieval (S2R), described as “a neural network-based machine-learning model trained on large datasets of paired audio queries.”

These changes show Google treating native audio as a core capability across consumer-facing products, making it easier for users to ask for and receive information about the physical world around them in a natural manner that wasn’t previously possible.

Improvements For Voice-Based Systems

For developers and enterprises building voice-based systems, Google says the updated model improves reliability in several areas. Gemini 2.5 Flash Native Audio more consistently triggers external functions during conversations, follows complex instructions, and maintains context across multiple turns. These improvements make live voice agents more dependable in real-world workflows, where misinterpreted instructions or broken conversational flow reduce usability.

Smooth Conversational Translation

Beyond Search and voice agents, the update introduces native support for “live speech-to-speech translation.” Gemini translates spoken language in real time, either by continuously translating ambient speech into a target language or by handling conversations between speakers of different languages in both directions. The system preserves vocal characteristics such as speech rhythm and emphasis, supporting translation that sounds smoother and more conversational.

Google highlights several capabilities supporting this translation feature, including broad language coverage, automatic language detection, multilingual input handling, and noise filtering for everyday environments. These features reduce setup friction and allow translation to occur passively during conversation rather than through manual controls. The result is a translation experience that behaves much like an actual person in the middle translating between two people.

Voice Search Realizing Google’s Aspirations

The update reflects Google’s continued iteration of voice search toward an ideal that was originally inspired by the science fiction voice interactions between humans and computers in the popular Star Trek television and movie series.

Read More:

Google Announces A New Era For Voice Search

You can now have more fluid and expressive conversations when you go Live with Search.

Improved Gemini audio models for powerful voice interactions

Gemini Live

5 ways to get real-time help by going Live with Search

Featured Image by Shutterstock/Jackbin

How People Use Copilot Depends On Device, Microsoft Says via @sejournal, @MattGSouthern

How people use Microsoft Copilot depends on whether they’re at a desk or on their phone.

That is the core theme in the company’s analysis of 37.5 million Copilot conversations sampled between January and September.

The research examines consumer Copilot usage patterns across device types and time of day. The authors say they used machine-based classifiers to categorize conversations by topic and intent without any human review of the messages.

What The Report Says

On mobile, Health and Fitness is the most common topic throughout the day.

The authors summarize the split this way:

“On mobile, health is the dominant topic, which is consistent across every hour and every month we observed, with users seeking not just information but also advice.”

Desktop usage follows a different rhythm. Technology leads as the top topic overall, but the researchers report that work-related conversations rise during business hours.

They describe “three distinct modes of interaction: the workday, the constant personal companion, and the introspective night.”

During the workday, the paper says:

  • Between 8 a.m. and 5 p.m., “Work and Career” overtakes “Technology” as the top topic on desktop.
  • Education and science topics rise during business hours compared to nighttime.

Outside business hours, the paper describes a shift toward more personal and reflective topics. For example, it reports that “Religion and Philosophy” rises in rank during late-night hours through dawn.

Programming conversations are more common on weekdays, while gaming rises on weekends. They also note a spike in relationship conversations on Valentine’s Day.

Methodology Caveats

A few limitations are worth keeping in mind.

This is a preprint, so it hasn’t been peer reviewed. It also focuses on consumer Copilot usage and excludes enterprise-authenticated traffic, so it doesn’t describe how Copilot is used inside Microsoft 365 at work.

Finally, the topic and intent labels come from automated classifiers, which means the results reflect how Microsoft’s system groups conversations, not a human-coded review.

Why This Matters

This paper suggests that the use of AI chatbots varies with context. The researchers describe mobile behavior as consistently health-oriented, while desktop behavior is more tied to the workday.

The researchers connect the mobile health pattern to how people use their phones. They write:

“This suggests a device-specific usage pattern where the phone serves as a constant confidant for physical well-being, regardless of the user’s schedule.”

The big takeaway is that “Copilot usage” is not one uniform behavior. Device and time of day appear to shape what people ask for, and how they ask it.

Looking Ahead

Enterprise usage patterns may look different, especially inside Microsoft 365. Any follow-up research that includes workplace contexts, or that validates these patterns outside Microsoft’s own tooling and taxonomy, would help clarify how broadly these findings apply.

Google Releases December 2025 Core Update via @sejournal, @MattGSouthern

Google has released the December 2025 core update, the company confirmed through its Search Status Dashboard.

The rollout began at 9:25 a.m. Pacific Time on December 11, 2025.

This marks Google’s third core update of 2025, following the March and June core updates earlier this year.

What’s New

Google lists the update as an “incident affecting ranking” on its status dashboard.

The company states the rollout “may take up to three weeks to complete.”

Core updates are broad changes to Google’s ranking systems designed to improve search results overall. Unlike specific updates targeting spam or particular ranking factors, core updates affect how Google’s systems assess content across the web.

2025 Core Update Timeline

The December update follows two previous core updates this year.

The March 2025 core update rolled out from March 13-27, taking 14 days to complete. Data from SEO tracking providers suggested volatility similar to the December 2024 core update.

The June 2025 core update ran from June 30 to July 17, lasting about 16 days. SEO data providers indicated it was one of the larger core updates in recent memory. Some sites previously hit by the September 2023 Helpful Content Update saw partial recoveries during this rollout.

Documentation Update On Continuous Changes

Two days before this core update, Google updated its core updates documentation with new language about ongoing algorithm changes.

The updated documentation now states:

“However, you don’t necessarily have to wait for a major core update to see the effect of your improvements. We’re continually making updates to our search algorithms, including smaller core updates. These updates are not announced because they aren’t widely noticeable, but they are another way that your content can see a rise in position (if you’ve made improvements).”

Google explained that the addition was meant to clarify that content improvements can lead to ranking changes without waiting for the next announced update.

Why This Matters

If you notice ranking fluctuations over the coming weeks, this update is likely a major factor.

Core updates can shift rankings for pages that weren’t doing anything wrong. Google has consistently stated that pages losing visibility after a core update don’t necessarily have problems to fix. The systems are reassessing content relative to what else is available.

The documentation update is a reminder that rankings can change between major updates as Google rolls out smaller core changes behind the scenes.

Looking Ahead

Google will update the Search Status Dashboard when the rollout is complete.

Monitor your rankings and traffic over the next three weeks. If you see changes, document when they occurred relative to the rollout timeline.

Based on 2025’s previous updates, completion typically takes two to three weeks. Google will confirm completion through the dashboard and its Search Central social accounts.

YouTube Adds Comments To Shorts Ads, Expands To Mobile Web via @sejournal, @MattGSouthern

YouTube adds comment sections to eligible Shorts ads, lets creators link to brand websites, and expands Shorts ads to mobile web browsers.

  • Eligible Shorts ads can now display comment sections, matching the experience of organic Shorts content.
  • Shorts creators can link directly to brand websites when producing branded content.
  • Shorts ads now serve on mobile web browsers, extending beyond the mobile app.