Google’s Mueller Questions Need For LLM-Only Markdown Pages via @sejournal, @MattGSouthern

Google Search Advocate John Mueller has pushed back on the idea of building separate Markdown or JSON pages just for large language models (LLMs), saying he doesn’t see why LLMs would need pages that no one else sees.

The discussion started when Lily Ray asked on Bluesky about “creating separate markdown / JSON pages for LLMs and serving those URLs to bots,” and whether Google could share its perspective.

Ray asked:

Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots. Can you share Google’s perspective on this?

The question draws attention to a developing trend where publishers create “shadow” copies of important pages in formats that are easier for AI systems to understand.

There’s a more active discussion on this topic happening on X.

What Mueller Said About LLM-Only Pages

Mueller replied that he isn’t aware of anything on Google’s side that would call for this kind of setup.

He notes that LLMs have worked with regular web pages from the beginning:

I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?

When Ray followed up about whether a separate format might help “expedite getting key points across to LLMs quickly,” Mueller argued that if file formats made a meaningful difference, you would likely hear that directly from the companies running those systems.

Mueller added:

If those creating and running these systems knew they could create better responses from sites with specific file formats, I expect they would be very vocal about that. AI companies aren’t really known for being shy.

He said some pages may still work better for AI systems than others, but he doesn’t think that comes down to HTML versus Markdown:

That said I can imagine some pages working better for users and some better for AI systems, but I doubt that’s due to the file format, and it’s definitely not generalizable to everything. (Excluding JS which still seems hard for many of these systems).

Taken together, Mueller’s comments suggest that, from Google’s point of view, you don’t need to create bot-only Markdown or JSON clones of existing pages just to be understood by LLMs.

How Structured Data Fits In

Other individuals in the thread drew a line between speculative “shadow” formats and cases where AI platforms have clearly defined feed requirements.

A reply from Matt Wright pointed to OpenAI’s eCommerce product feeds as an example where JSON schemas matter.

In that context, a defined spec governs how ChatGPT ingests and displays product data. Wright explains:

Interestingly, the OpenAI eCommerce product feeds are live: JSON schemas appear to have a key role in AI search already.

That example supports the idea that structured feeds and schemas are most important when a platform publishes a spec and asks you to use it.

Additionally, Wright points to a thread on LinkedIn where Chris Long observed that “editorial sites using product schemas, tend to get included in ChatGPT citations.”
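Product structured data of the kind Wright and Long describe is typically embedded in a page as JSON-LD. A minimal schema.org Product sketch, with all values illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "image": "https://example.com/widget.jpg",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

Whether a given AI platform consumes this markup depends on that platform’s published spec, which is exactly the distinction the thread draws.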

Why This Matters

If you’re questioning whether to build “LLM-optimized” Markdown or JSON versions of your content, this exchange can help steer you back to the basics.

Mueller’s comments reinforce that LLMs have long been able to read and parse standard HTML.

For most sites, it’s more productive to keep improving speed, readability, and content structure on the pages you already have, and to implement schema where there’s clear platform guidance.

At the same time, the Bluesky thread shows that AI-specific formats are starting to emerge in narrow areas such as product feeds. Those are worth tracking, but they’re tied to explicit integrations, not a blanket rule that markdown is better for LLMs.

Looking Ahead

The conversation highlights how fast AI-driven search changes are turning into technical requests for SEO and dev teams, often before there is documentation to support them.

Until LLM providers publish more concrete guidelines, this thread points you back to work you can justify today: keep your HTML clean, reduce unnecessary JavaScript where it makes content hard to parse, and use structured data where platforms have clearly documented schemas.


Featured Image: Roman Samborskyi/Shutterstock

EU Plan To Simplify GDPR Targets AI Training And Cookie Consent via @sejournal, @MattGSouthern

The European Commission has proposed a “Digital Omnibus” package that would relax parts of the GDPR, the AI Act, and Europe’s cookie rules in the name of competitiveness and simplification.

If you work with EU traffic or rely on European data for analytics, advertising, or AI features, it’s worth tracking this proposal even though nothing has changed in law yet.

What The Digital Omnibus Would Change

The Digital Omnibus would revise several laws at once.

On AI, the proposal would push back stricter rules for high-risk systems from August 2026 to December 2027. It would also lighten documentation and reporting obligations for some systems and move more oversight to the EU AI Office.

Regarding data protection, the Commission aims to clarify when information is no longer considered ‘personal,’ making it easier to share and reuse anonymized and pseudonymized datasets, especially for AI training.

Privacy group noyb says this new wording isn’t just about clarifying the rules. They believe the proposal introduces a more subjective approach, hinging on what a controller claims it can or plans to do. Noyb warns this change could exclude parts of the adtech and data-broker industry from GDPR protections.

Cookies, Consent, And Browser Signals

The cookie section is likely to be the most visible change for your day-to-day work if the proposal moves forward.

The Commission wants to cut “banner fatigue” by exempting some non-risk cookies from consent pop-ups and shifting more control into browser-level settings that apply across sites.

In practice, that would mean fewer consent banners for low-risk uses, such as certain analytics or strictly functional storage, once categories are defined.

The proposal would also require websites to respect standardized, machine-readable privacy signals sent by browsers, where such standards exist.
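One standardized signal of this kind already exists: Global Privacy Control, which participating browsers send as a `Sec-GPC: 1` request header. A minimal server-side check might look like this (the function name is illustrative):

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control opt-out.

    GPC, one existing browser-level privacy signal, arrives as the
    request header `Sec-GPC: 1`; header names are case-insensitive.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"
```

A site that respects the signal would suppress optional tracking for any request where this check returns True, without showing a banner.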

AI Training & Data Rights

One of the most contested pieces of the Digital Omnibus is how it treats data used to train AI systems.

The package would allow companies including Google, Meta, and OpenAI to use Europeans’ personal data to train AI models under a broadened legal basis.

Privacy groups have argued that this kind of training should rely on explicit opt-in consent, rather than the more flexible approach they see in the proposal.

Noyb warns that long-running behavioral data, such as social media histories, could be used to train AI systems with only an opt-out model that is difficult for people to exercise in practice.

Why This Matters

This proposal is worth keeping on your radar if you’re responsible for analytics, consent, or AI-driven products that reach EU users.

Over time, you might observe smaller, browser-driven consent experiences for EU traffic, along with a different compliance approach for AI features that depend on behavioral data.

For now, nothing in your cookie banners, GA4 setup, or AI workflows needs to change solely because of the Digital Omnibus.

Looking Ahead

The Digital Omnibus is an early signal that the EU is re-balancing its digital rulebook around AI and competitiveness, not privacy and enforcement alone.

Key items to monitor include Parliament’s amendments to the AI training and data-reuse language, the cookie and browser-signal provisions affecting CMPs and browsers, and how consent requirements for EU users ultimately take shape.


Featured Image: HJBC/Shutterstock

Pew: 84% Of Adults Use YouTube As Platform Growth Continues via @sejournal, @MattGSouthern

YouTube and Facebook continue to lead U.S. social media usage, but TikTok, Instagram, WhatsApp and Reddit are showing consistent growth, according to new data from Pew Research Center.

The report surveyed 5,022 U.S. adults and found that 84% use YouTube and 71% use Facebook. Instagram reached 50% adoption, making it the only other platform used by at least half of American adults.

What The Data Says

TikTok Growth Continues

TikTok usage among U.S. adults has increased to 37%, a slight rise from last year and nearly twice the 21% recorded in 2021. Approximately 24% of U.S. adults visit the platform daily.

Instagram Reaches Milestone

Half of U.S. adults now use Instagram, matching 2024 levels but rising from 40% in 2021. The platform is especially popular among younger users.

WhatsApp and Reddit Gain Users

WhatsApp usage increased to 32%, rising from 23% in 2021. Reddit grew to 26%, up from 18% four years earlier.

New Platforms Show Limited Reach

Among U.S. adults, Threads has an 8% adoption rate, Bluesky is at 4%, and Truth Social stands at 3%.

Usage Frequency Varies by Platform

Approximately half of adults (52%) visit Facebook every day, with 37% checking it multiple times per day. YouTube has 48% daily usage, with 33% visiting more than once a day.

TikTok is used daily by 24% of adults, while X (formerly Twitter) has a 10% daily usage rate.

Platform Demographics

Age is the strongest predictor of platform use. Eight in ten adults aged 18-29 use Instagram, versus 19% of those 65+. Similar gaps are seen for Snapchat (58% vs. 4%), TikTok (63% vs. 5%) and Reddit (48% vs. 6%).

YouTube and Facebook are used by most age groups, but younger adults still lead in YouTube at 95%, versus 64% for those 65+.

Women are more likely to use Facebook (78% vs. 63%), Instagram (55% vs. 44%) and TikTok (42% vs. 30%), while men favor X (29% vs. 15%) and Reddit (37% vs. 15%). Adults with college degrees are more likely to use Reddit (40%), WhatsApp (41%) and Instagram (58%) than those with high school or less.

Why This Matters

These usage patterns can help inform your content distribution plans.

YouTube and Facebook are key for reaching a wide audience, while TikTok, Instagram, and newer platforms focus on specific groups.

Since different age groups prefer different platforms, it’s a good idea to tailor strategies for each platform rather than sharing the same content everywhere.

Looking Ahead

Pew’s data indicates gradual changes rather than sudden growth. Younger adults are continuing to favor familiar platforms like YouTube, Instagram, TikTok, Snapchat, and Reddit, while older adults are still more reliant on Facebook and YouTube.

Newer platforms such as Threads and Bluesky are still niche but indicate where politically active users might experiment next.

Pew’s trend series and methodology notes offer a baseline to monitor whether these divides increase, decrease, or stabilize in future data.


Featured Image: Vasylisa Dvoichenkova/Shutterstock

Google CTR Trends In Q3: Branded Clicks Fan Out, Longer Queries Hold via @sejournal, @MattGSouthern

Advanced Web Ranking released its Q3 Google organic clickthrough report, tracking CTR changes by ranking position across query types and industries.

The company compared July through September against April through June. The dataset is international, so the patterns reflect broad search behavior rather than a single region.

Here’s what stands out in this quarter’s report.

Branded Desktop Searches Shift Clicks Down-Page

The clearest movement this quarter shows up in branded queries on desktop.

For searches containing a brand or business name, position 1 lost 1.52 percentage points of CTR. Positions 2 through 6 gained a combined 8.71 points.

Unbranded queries were mostly unchanged, so this shift appears specific to how people navigate brand SERPs on desktop.

Commercial & Location Queries Lose Top CTR

When AWR sorted results by intent, commercial and location searches posted the clearest top-position declines.

Commercial queries, defined as searches including terms like “buy” or “price,” saw positions 1 and 2 on desktop drop a combined 4.20 points. Position 1 accounted for most of that loss at 3.01 points.

Location searches also weakened at the top. Position 1 fell 2.52 points on desktop and 2.13 points on mobile.

AWR doesn’t attribute cause, but these are the SERPs where rich results and other modules can crowd the page.

The takeaway is that top organic placements in commercial and local contexts captured a smaller share of clicks in Q3 than they did in Q2.
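For forecasting, a percentage-point CTR change converts directly into a click delta at a given impression volume. A minimal sketch, with the impression count made up for illustration:

```python
def click_delta(impressions: int, ctr_change_points: float) -> float:
    # A CTR change of -3.01 points means 3.01% fewer impressions become clicks
    return impressions * ctr_change_points / 100

# e.g., a commercial query with 50,000 monthly desktop impressions at position 1
click_delta(50_000, -3.01)  # roughly -1,505 clicks
```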

Longer Queries Hold Steady

Query length shows another split that matters for forecasting traffic.

On desktop, position-1 CTR fell for shorter multi-word searches. Two-word queries dropped 1.22 points and three-word queries dropped 1.24 points at the top spot.

AWR notes that 4+ word queries were the only group with steady CTR this quarter.

On mobile, the movement went the other way for the shortest queries. One-word searches gained 1.52 points at position 1.

The takeaway here is that short, generic desktop searches remain the most volatile category of CTR performance, while longer searches looked more stable in Q3.

Industry Winners And Losers

AWR tracked CTR shifts across 18 verticals and tied those changes to demand trends.

The report highlighted several large moves:

  • Arts & Entertainment had the steepest single-position decline, with position 1 on desktop down 5.13 points.
  • Travel showed the strongest gain, with position 2 on desktop up 2.46 points.
  • Shopping saw a redistribution near the top. Position 1 on desktop fell 2.10 points, while positions 2 and 3 gained a combined 2.83 points.

The takeaway is that CTR isn’t shifting evenly across verticals. Some categories are seeing a top-spot squeeze, while others are seeing clicks spread across more of the upper results.

Why This Matters For You

Q3 adds another data point for explaining CTR changes when rankings stay flat.

For branded desktop searches, position 1 is still dominant, but it’s no longer absorbing as much of the clickshare as last quarter.

If you track brand terms, it’s worth watching whether traffic is distributing across multiple listings on those SERPs.

And if your traffic depends on short, high-volume desktop queries, this report suggests those segments are still the most exposed to quarter-over-quarter click shifts. Longer searches were the only length group that held steady at the top in Q3.

Looking Ahead

AWR’s report reflects an international dataset and doesn’t isolate a single driver behind the CTR movement. Still, the direction in Q3 is clear in a few places.

Branded desktop clicks are spreading beyond position 1, and commercial and local SERPs continue to pressure the top organic slot.


Featured Image: Roman Samborskyi/Shutterstock

LLMs.txt Shows No Clear Effect On AI Citations, Based On 300k Domains via @sejournal, @MattGSouthern

A new analysis from SE Ranking suggests the llms.txt file isn’t delivering measurable benefits yet.

After examining roughly 300,000 domains, the company found no relationship between having llms.txt and how often a domain is cited in major LLM answers.

What The Data Says

Adoption Is Thin

SE Ranking’s crawl found llms.txt on 10.13% of domains. In other words, nearly nine out of ten sites they measured haven’t implemented it.

That low usage matters because the format is sometimes described as an emerging baseline for AI visibility. The data instead shows scattered experimentation. SE Ranking says adoption is fairly even across traffic tiers and not concentrated among the biggest brands.

High-traffic sites were slightly less likely to use the file than mid-tier websites in their dataset.
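Measuring adoption this way amounts to probing a well-known path on each domain. A sketch of how such a crawl might classify a domain (helper names are illustrative, and the study’s actual methodology may differ):

```python
def llms_txt_url(domain: str) -> str:
    # By convention, llms.txt is served at the web root
    return f"https://{domain}/llms.txt"

def looks_like_llms_txt(status: int, content_type: str) -> bool:
    # Count only a 200 with a plain-text body; many sites answer unknown
    # paths with soft-404 HTML pages, which would inflate adoption figures
    return status == 200 and content_type.lower().startswith("text/plain")
```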

No Measurable Link To LLM Citations

To assess whether the llms.txt file affects AI visibility, SE Ranking analyzed domain-level citation frequency across responses from prominent LLMs. They employed statistical correlation tests and an XGBoost model to determine the extent to which each factor contributed to citations.

The main finding was that removing llms.txt presence as a model feature actually improved the model’s accuracy. SE Ranking concludes that llms.txt “doesn’t seem to directly impact AI citation frequency. At least not yet.”

Additionally, they found no significant correlation between citations and the file using simpler statistical methods.
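The simpler statistical check mentioned here can be sketched as a correlation between a binary presence flag and citation counts; `has_file` and `citations` below are made-up illustrative data, not SE Ranking’s:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation; with one binary variable, this is
    equivalent to the point-biserial correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data only: 1 = domain serves llms.txt, 0 = it does not
has_file = [1, 0, 1, 0, 0, 1, 0, 0]
citations = [12, 9, 3, 14, 7, 8, 11, 6]
r = pearson(has_file, citations)  # a value near 0 means no linear relationship
```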

How This Squares With Platform Guidance

SE Ranking notes that its results align with public platform guidance. But it’s important to be precise about what is confirmed.

Google hasn’t indicated that llms.txt is used as a signal in AI Overviews or AI Mode. In its AI search guidance, Google frames it as an evolution of Search that continues to rely on its existing Search systems and signals, without mentioning llms.txt as an input.

OpenAI’s crawler documentation similarly focuses on robots.txt controls. OpenAI recommends allowing OAI-SearchBot in robots.txt to support discovery for its search features, but does not say llms.txt affects ranking or citations.
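Following that guidance, supporting OpenAI’s search discovery is a robots.txt change rather than a new file. A sketch of one possible configuration (blocking GPTBot is shown only to illustrate that the search and training crawlers are controlled separately; it’s a policy choice, not a recommendation):

```
# Allow OpenAI's search crawler
User-agent: OAI-SearchBot
Allow: /

# Separately control the training crawler
User-agent: GPTBot
Disallow: /
```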

SE Ranking also notes that some SEO logs show GPTBot occasionally fetching llms.txt files, though they say it doesn’t happen often and does not appear tied to citation outcomes.

Taken together, the dataset suggests that even if some models retrieve the file, it’s not influencing citation behavior at scale right now.

What This Means For You

If you want a clean, low-risk way to prepare for possible future adoption, adding llms.txt is easy and unlikely to cause technical harm.

But if the goal is a near-term visibility bump in AI answers, the data says you shouldn’t expect one.

That puts llms.txt in the same category as other early AI-visibility tactics. Reasonable to test if it fits your workflow, but not something to sell internally as a proven lever.


Featured Image: Mameraman/Shutterstock

SEO Community Reacts To Adobe’s Semrush Acquisition via @sejournal, @martinibuster

The SEO community is energized by Adobe’s acquisition of Semrush. The consensus is that it’s a milestone in the continuing evolution of SEO in the age of generative AI. Adobe’s purchase comes at a time of AI-driven uncertainty and may be a sign of how important data has become for businesses and marketers who are still trying to find a new way forward.

Cyrus Shepard tweeted that he believes the Semrush sale creates an opportunity for Ahrefs, reasoning that Adobe’s scale and emphasis on the enterprise market will leave room for Ahrefs to move fast and respond to the rapidly changing needs of the marketing industry.

He tweeted:

“Adobe’s marketing tools lean towards ENTERPRISE (AEM, Adobe Analytics). If Adobe leans this way with Semrush, it may be a less attractive solution to smaller operators.

With this acquisition, @ahrefs remains the only large, independent SEO tool suite on the market. Ahrefs is able to move fast and innovate – I suspect this creates an opportunity for Ahrefs – not a problem.”

Shepard is right that some of Adobe’s products (like Adobe Analytics) lean toward enterprise users, but there’s a significant small and medium-sized business user base for its design-related tools, with pricing in the $99/month range that keeps them relatively affordable. Nevertheless, that’s a significant recurring cost compared with the roughly $600 Adobe used to charge for standalone versions on Windows and Mac.

I agree that Ahrefs is quite likely the best positioned tool to serve the needs of the SMB end of the SEO industry should Semrush increase focus on the enterprise market. But there are also smaller tools like SERPrecon that are tightly focused on helping businesses deliver results and may benefit from the vacuum left by Semrush.

Validates SEO Platforms

Seth Besmertnik, CEO of the enterprise SEO platform Conductor, sees the acquisition as validating SEO platforms, a fair observation considering the all-cash price Semrush commanded.

Besmertnik wrote:

“I’m feeling a lot this morning. HUGE news today. Adobe will be acquiring Semrush…our partner, competitor, and an ally in the broader SEO and AEO/GEO world for over a decade.

For a long time, big tech ignored SEO. It drove half of the internet’s traffic, yet somehow never cleared the bar as something to own. I always believed the day would come when major platforms took this category seriously. Today is that day.”

Besmertnik also made the point that the industry is entering a transitional phase where platforms that are adapted to AI will be the leaders of tomorrow.

He added:

“This next era won’t be led by legacy architectures. It will be led by platforms that built their foundations for AI…and by companies engineered for the data-first, enterprise-grade world that’s now taking shape.”

Validates SEO

Duane Forrester, formerly of Bing, shared the insight that the acquisition shows how important SEO is, especially as the industry is evolving to meet the challenges of AI search.

Duane shared:

“It’s an exciting moment! We’re starting to see some consolidation and this represents huge recognition of how important the work of SEOs is. From traditional SEO through optimizing for AI platforms, the work is important. Clearly Adobe is thinking this way on behalf of their clientele, which means great things ahead.”

Online Reactions Were Mostly Positive

There were a few comments with negative sentiment published in response to Adobe’s announcement on X (formerly Twitter), where some used the post to vent about pricing and other grudges but many others from the SEO community offered congratulations to Semrush.

What It All Means

As multiple people have said, the sale of Semrush is a landmark moment for SEO and for SEO platforms because it puts a dollar figure on the importance of digital marketing at a time when the search marketing industry is struggling to reach consensus on how SEO should evolve to meet the many changes introduced by AI search.

Many Questions Remain Unanswered

What Will Adobe Actually Do With Semrush’s Product?

Will Semrush remain a standalone product, be offered in separate versions for enterprise users and SMBs, or be folded into one of Adobe’s cloud offerings?

Pricing

A common concern is about pricing and whether the cost of Semrush will go up. Is it possible that the price could actually come down?

Semrush Is A Good Fit For Adobe

Adobe started as a software company focused on graphic design products, but by the turn of the millennium it began acquiring companies directly related to digital marketing and web design, increasingly focusing on the enterprise market. Data is useful for planning content and for understanding what’s happening at search engines and in AI-based search and chat. Semrush is a good fit for Adobe.

Featured Image by Shutterstock/Sunil prajapati

New Data Finds Gap Between Google Rankings And LLM Citations via @sejournal, @MattGSouthern

Large language models cite sources differently than Google ranks them.

Search Atlas, an SEO software company, compared citations from OpenAI’s GPT, Google’s Gemini, and Perplexity against Google search results.

The analysis of 18,377 matched queries finds a gap between traditional search visibility and AI platform citations.

Here’s an overview of the key differences Search Atlas found.

Perplexity Is Closest To Search

Perplexity performs live web retrieval, so you would expect its citations to look more like search results. The study supports that.

Across the dataset, Perplexity showed a median domain overlap of around 25–30% with Google results. Median URL overlap was close to 20%. In total, Perplexity shared 18,549 domains with Google, representing about 43% of the domains it cited.

ChatGPT And Gemini Are More Selective

ChatGPT showed much lower overlap with Google. Its median domain overlap stayed around 10–15%. The model shared 1,503 domains with Google, accounting for about 21% of its cited domains. URL matches typically remained below 10%.

Gemini behaved less consistently. Some responses had almost no overlap with search results. Others lined up more closely. Overall, Gemini shared just 160 domains with Google, representing about 4% of the domains that appeared in Google’s results, even though those domains made up 28% of Gemini’s citations.
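Domain overlap in this sense is a straightforward set computation. A sketch, using placeholder domains rather than the study’s data:

```python
def domain_overlap(llm_domains: set, google_domains: set) -> float:
    """Share of an LLM's cited domains that also appear in Google's results."""
    if not llm_domains:
        return 0.0
    return len(llm_domains & google_domains) / len(llm_domains)

# Placeholder domains for illustration
llm_cited = {"a.com", "b.com", "c.com", "d.com"}
google_results = {"b.com", "c.com", "e.com"}
domain_overlap(llm_cited, google_results)  # 0.5
```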

What The Numbers Mean For Visibility

Ranking in Google doesn’t guarantee LLM citations. This report suggests the systems draw from the web in different ways.

Perplexity’s architecture actively searches the web and its citation patterns more closely track traditional search rankings. If your site already ranks well in Google, you are more likely to see similar visibility in Perplexity answers.

ChatGPT and Gemini rely more on pre-trained knowledge and selective retrieval. They cite a narrower set of sources and are less tied to current rankings. URL-level matches with Google are low for both.

Study Limitations

The dataset heavily favored Perplexity. It accounted for 89% of matched queries, with OpenAI at 8% and Gemini at 3%.

Researchers matched queries using semantic similarity scoring. Paired queries expressed similar information needs but were not identical user searches. The threshold was 82% similarity using OpenAI’s embedding model.
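Embedding-based matching of this kind typically compares vectors with cosine similarity against a cutoff. A sketch, with short made-up vectors standing in for real embeddings:

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Short made-up vectors standing in for real embedding vectors,
# which have hundreds or thousands of dimensions
q1 = [0.2, 0.7, 0.1]
q2 = [0.25, 0.68, 0.05]
is_match = cosine_similarity(q1, q2) >= 0.82  # the study's matching threshold
```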

The two-month window provides a recent snapshot only. Longer timeframes would be needed to see whether the same overlap patterns hold over time.

Looking Ahead

For retrieval-based systems like Perplexity, traditional SEO signals and overall domain strength are likely to matter more for visibility.

For reasoning-focused models like ChatGPT and Gemini, those signals may have less direct influence on which sources appear in answers.


Featured Image: Ascannio/Shutterstock

Adobe To Acquire Semrush In $1.9 Billion Cash Deal via @sejournal, @MattGSouthern

Adobe and Semrush announced today that they have entered into a definitive agreement for Adobe to acquire Semrush in an all-cash transaction valued at approximately $1.9 billion. Adobe will pay $12.00 per share, describing Semrush as a “leading brand visibility platform.”

The acquisition brings a widely used SEO platform under Adobe’s Digital Experience umbrella.

The deal is expected to close in the first half of 2026, subject to regulatory approvals and the approval of Semrush stockholders.

What Adobe Is Buying

Semrush is a Boston-based SaaS platform best known in search marketing for keyword research, site audits, competitive intelligence, and online visibility tracking.

Over the past two years, Semrush has added enterprise products focused on AI-driven visibility, including tools that monitor how brands are referenced in responses from large language models such as ChatGPT and Gemini, alongside traditional search results.

Semrush has also been an active acquirer. Recent deals have included SEO education and community assets like Backlinko and Traffic Think Tank, as well as technology and media acquisitions such as Third Door Media, the publisher of Search Engine Land.

For Adobe, this gives the Experience Cloud portfolio a direct line into the SEO workflow that many in-house teams and agencies already use daily.

How Semrush Fits Adobe’s AI Marketing Stack

Adobe positions the deal as part of a broader strategy to support “brand visibility” in what it describes as an agentic AI era.

In the announcement, Anil Chakravarthy, president of Adobe’s Digital Experience business, says:

“Brand visibility is being reshaped by generative AI, and brands that don’t embrace this new opportunity risk losing relevance and revenue.”

Semrush’s “generative engine optimization” positioning aligns with that narrative. The company has been pitching GEO as a counterpart to traditional SEO, focused on keeping brands discoverable inside AI-generated answers, not just organic listings.

Adobe plans to integrate Semrush with products like Adobe Experience Manager, Adobe Analytics, and its newer Brand Concierge offering.

Deal Terms And Timeline

Under the terms of the agreement, Adobe will acquire Semrush for $12.00 per share in cash, representing a total equity value of roughly $1.9 billion.

Coverage from financial outlets notes that the price reflects a premium of around 77% over Semrush’s prior closing share price, and that Semrush stock jumped more than 70% in early trading following the announcement.

According to the companies, the transaction has already been approved by both boards. An associated SEC filing shows the merger agreement was signed on November 18.

Closing is targeted for the first half of 2026, pending customary regulatory reviews and the approval of Semrush shareholders. Until then, Adobe and Semrush say they will continue to operate as separate companies.

Why This Matters

This deal continues a broader trend: core search and visibility tools are moving deeper into large enterprise suites.

If you already rely on Semrush, you can expect tighter integration with Adobe’s analytics and customer experience products over time.

It also raises practical questions:

  • How will Semrush be packaged and priced once it sits inside Adobe’s enterprise stack?
  • Can agencies and smaller teams keep using Semrush as a relatively independent tool?
  • How will Adobe choose to handle Semrush’s media holdings, including Search Engine Land and related properties?

For now, both companies are presenting the acquisition as a way to give marketers a more complete view of brand visibility across search results and AI-generated answers, rather than as a change to Semrush’s current product line.

Looking Ahead

In the near term, there are two things to watch.

First, regulators will review the transaction, particularly given Adobe’s history with large acquisitions in the digital experience space. That process will shape the closing timeline.

Second, Adobe will need to decide how quickly to integrate Semrush into Experience Cloud and how much to preserve the existing product and brand. Those choices will influence how disruptive this feels for your current workflows.

Watch for changes to Semrush’s API access, plan structure, and reporting integrations once the deal moves closer to completion.


Featured Image: IB Photography/Shutterstock

Google Brings Gemini 3 To Search’s AI Mode via @sejournal, @MattGSouthern

Google has integrated Gemini 3 in Search’s AI Mode. This marks the first time Google has shipped a Gemini model to Search on its release date.

Google AI Pro and Ultra subscribers in the U.S. can access Gemini 3 Pro by selecting “Thinking” from the model dropdown in AI Mode.

Robby Stein, VP and GM of Google Search, wrote on X:

“Gemini 3, our most intelligent model, is landing in Google Search today – starting with AI Mode. Excited that this is the first time we’re shipping a new Gemini model in Search on day one.”

Google plans to expand Gemini 3 in AI Mode to all U.S. users soon, with higher usage limits for Pro and Ultra subscribers.

What’s New

Search Updates

Google describes Gemini 3 as a model with state-of-the-art reasoning and deep multimodal understanding.

In the context of Search, it’s designed to explain advanced concepts, work through complex questions, and support interactive visuals that run directly inside AI Mode responses.

With Gemini 3 in place, Google says AI Mode has effectively re-architected what a “helpful response” looks like.

Stein explains:

“Gemini 3 is also making Search smarter by re-architecting what a helpful response looks like. With new generative UI capabilities, Gemini 3 in AI Mode can now dynamically create the overall response layout when it responds to your query – completely on the fly.”

Instead of only returning a block of text, AI Mode can design a response layout tailored to your query. That includes deciding when to surface images, tables, or other structured elements so the answer is clearer and easier to work with.

Google will add automatic model selection in the coming weeks. Stein continues:

“Search will intelligently route tough questions in AI Mode and AI Overviews to our frontier model, while continuing to use faster models for simpler tasks.”

Enhanced Query Fan-Out

Gemini 3 upgrades Google’s query fan-out technique.

According to Stein, Search can now issue more related searches in parallel and better interpret what you’re trying to do.

A potential benefit, Stein adds, is that Google may find content it previously missed:

“It now performs more and much smarter searches because Gemini 3 better understands you. That means Search can now surface even more relevant web content for your specific question.”

Generative UI

Gemini 3 in AI Mode introduces generative UI features that build dynamic visual layouts around your query.

The model analyzes your question and constructs a custom response using visual elements such as images, tables, and grids. When an interactive tool would help, Gemini 3 can generate a small app in real time and embed it directly in the answer.

Examples from Google’s announcement include:

  • An interactive physics simulation for exploring the three-body problem
  • A custom mortgage loan calculator that lets you compare different options and estimate long-term savings

All of these responses include prominent links to high-quality content across the web so you can click through to source material.

See a demonstration in Google’s launch video below:

Why This Matters

Gemini 3 changes how your content is discovered and used in AI Mode. With deeper query fan-out, Google can access more pages per question, which might influence which sites are cited or linked during long, complex searches.

The updated layouts and interactive features change how links appear on your screen. On-page tools, explainers, and visualizations could now compete directly with Google’s own interface.

As Gemini 3 becomes available to more people, it will be important to watch how your content is shown or referenced in AI responses, in addition to traditional search rankings.

Looking Ahead

Google says it will continue refining these updates based on feedback as more people try the new tools. Automatic model selection is set to arrive in the coming weeks for Google AI Pro and Ultra subscribers in the U.S., with broader U.S. access to Gemini 3 in AI Mode planned but not yet scheduled.

Cloudflare Outage Triggers 5xx Spikes: What It Means For SEO via @sejournal, @MattGSouthern

A Cloudflare incident is causing many sites and apps behind its network to return 5xx responses, which means users and crawlers may be running into the same errors.

From an SEO point of view, this kind of outage often looks worse than it is. Short bursts of 5xx errors usually affect crawl behavior before they touch long-term rankings, but there are some details worth paying attention to.

What You’re Likely Seeing

Sites that rely on Cloudflare as a CDN or reverse proxy may currently be serving generic “500 Internal Server Error” pages or failing to load at all. In practice, every response in that 5xx family is treated as a server error.

If Googlebot happens to crawl while the incident is ongoing, it will record the same 5xx responses that users see. You may not notice anything inside Search Console immediately, but over the next few days you could see a spike in server errors, a dip in crawl activity, or both.

Keep in mind that Search Console data is not real-time and often lags by roughly 48 hours. A flat line in GSC today could mean the report hasn’t caught up yet. If you need to confirm that Googlebot is encountering errors right now, you will need to check your raw server access logs.
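As a rough illustration of that log check, here is a minimal sketch that scans access-log lines for Googlebot requests that received a 5xx response. It assumes the common “combined” log format; the regex and function name are our own, not anything from Google or Cloudflare, and a real audit should also verify Googlebot by IP rather than trusting the user-agent string alone.

```python
import re

# Matches a combined-log-format line:
# ip - - [timestamp] "METHOD path HTTP/x" status bytes "referrer" "user-agent"
LOG_RE = re.compile(
    r'\[([^\]]+)\] "(?:GET|HEAD|POST) (\S+)[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

def googlebot_5xx(lines):
    """Yield (timestamp, path, status) for Googlebot requests that got a 5xx."""
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        ts, path, status, ua = m.groups()
        if status.startswith("5") and "Googlebot" in ua:
            yield ts, path, int(status)
```

You could feed this an open log file and tally results per hour to see exactly when the errors started and stopped, which is useful when annotating the incident later.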

This can feel like a ranking emergency. It helps to understand how Google has described its handling of temporary server problems in the past, and what Google representatives are saying today.

How Google Handles Short 5xx Spikes

Google groups 5xx responses as signs that a server is overloaded or unavailable. According to Google’s Search Central documentation on HTTP status codes, 5xx and 429 errors prompt crawlers to temporarily slow down, and URLs that continue to return server errors can eventually be dropped from the index if the issue remains unresolved.

Google’s “How To Deal With Planned Site Downtime” blog post gives similar guidance for maintenance windows, recommending a 503 status code for temporary downtime and noting that long-lasting 503 responses can be treated as a sign that content is no longer available.
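To make the 503 guidance concrete, here is a minimal WSGI-style sketch of a maintenance responder that returns a 503 with a Retry-After header. This is an illustrative stand-in for whatever server or CDN rule you actually use, not code from Google’s post; the one-hour retry hint is an arbitrary example value.

```python
def maintenance_app(environ, start_response):
    # 503 signals temporary unavailability, so crawlers slow down
    # rather than treating the URL as gone. Retry-After (in seconds)
    # hints when to come back.
    body = b"Site briefly down for maintenance. Please try again soon."
    start_response("503 Service Unavailable", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Retry-After", "3600"),  # suggest retrying in one hour
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

The key point is the status code: serving the maintenance page with a 200, or a hard 404, tells crawlers something very different from “temporarily unavailable.”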

In a recent Bluesky post, Google Search Advocate John Mueller reinforced the same message in plainer language. Mueller wrote:

“Yeah. 5xx = Google crawling slows down, but it’ll ramp back up.”

He added:

“If it stays at 5xx for multiple days, then things may start to drop out, but even then, those will pop back in fairly quickly.”

Taken together, the documentation and Mueller’s comments draw a fairly clear line.

Short downtime is usually not a major ranking problem. Already indexed pages tend to stay in the index for a while, even if they briefly return errors. When availability returns to normal, crawling ramps back up and search results generally settle.

The picture changes when server errors become a pattern. If Googlebot sees 5xx responses for an extended period, it can start treating URLs as effectively gone. At that point, pages may drop from the index until crawlers see stable, successful responses again, and recovery can take longer.

The practical takeaway is that a one-off infrastructure incident is mostly a crawl and reliability concern. Lasting SEO issues tend to appear when errors linger well beyond the initial outage window.


Analytics & PPC Reporting Gaps

For many sites, Cloudflare sits in front of more than just HTML pages. Consent banners, tag managers, and third-party scripts used for analytics and advertising may all depend on services that run through Cloudflare.

If your consent management platform or tag manager was slow or unavailable during the outage, that can show up later as gaps in GA4 and ad platform reporting. Consent events may not have fired, tags may have timed out, and some sessions or conversions may not have been recorded at all.

When you review performance, you might see a short cliff in GA4 traffic, a drop in reported conversions in Google Ads or other platforms, or both. In many cases, that will reflect missing data rather than a real collapse in demand.

It’s safer to annotate today’s incident in your analytics and media reports and treat it as a tracking gap before you start reacting with bid changes or budget shifts based on a few hours of noisy numbers.

What To Do If You Were Hit

If you believe you’re affected by today’s outage, start by confirming that the problem is really tied to Cloudflare and not to your origin server or application code. Check your own uptime monitoring and any status messages from Cloudflare or your host so you know where to direct engineering effort.

Next, record the timing. Note when you first saw 5xx errors and when things returned to normal. Adding an annotation in your analytics, Search Console, and media reporting makes it much easier to explain any traffic or conversion drops when you review performance later.

Over the coming days, keep an eye on the Crawl Stats Report and index coverage in Search Console, along with your own server logs. You’re looking for confirmation that crawl activity returns to its usual pattern once the incident is over, and that server error rates drop back to baseline. If the graphs settle, you can treat the outage as a contained event.

If, instead, you continue to see elevated 5xx responses after Cloudflare reports the issue as resolved, it’s safer to treat the situation as a site-specific problem.

What you generally do not need to do is change content, internal linking, or on-page SEO purely in response to a short Cloudflare outage. Restoring stability is the priority.

Finally, resist the urge to hit ‘Validate Fix’ in Search Console the moment the site comes back online. If you trigger validation while the connection is still intermittent, the check will fail, and you will have to wait for the cycle to reset. It is safer to wait until the status page says ‘Resolved’ for a full 24 hours before validating.

Why This Matters

Incidents like this one are a reminder that search visibility is tied to reliability as much as relevance. When a provider in the middle of your stack has trouble, it can quickly look like a sudden drop, even when the root cause is outside your site.

Knowing how Google handles temporary 5xx spikes and how they influence analytics and PPC reports can help you communicate better with your clients and stakeholders. It allows you to set realistic expectations and recognize when an outage has persisted long enough to warrant serious attention.

Looking Ahead

Once Cloudflare closes out its investigation, the main thing to watch is whether your crawl, error, and conversion metrics return to normal. If they do, this morning’s 5xx spike is likely to be a footnote in your reporting rather than a turning point in your organic or paid performance.