Google Reports Search Console Page Indexing Report Delays via @sejournal, @MattGSouthern

Google announces delays in Search Console’s Page indexing report. The company confirms crawling, indexing, and ranking remain unaffected by the reporting issue.

  • Google is experiencing longer than usual delays in the Page indexing report within Search Console.
  • The issue affects reporting only, not actual crawling, indexing, or ranking of websites.
  • Google will provide an update when the issue is resolved.
Pragmatic Approach To AI Search Visibility via @sejournal, @martinibuster

Bing published a blog post about how clicks from AI search are improving conversion rates. The post explains that the research phase of the consumer journey has moved into conversational AI search, which means content must follow that shift to stay relevant.

AI Repurposes Your Content

They write:

“Instead of sending users through multiple clicks and sources, the system embeds high-quality content within answers, summaries, and citations, highlighting key details like energy efficiency, noise level, and smart home compatibility. This creates clarity faster and builds confidence earlier in the journey, leading to stronger engagement with less friction.”

Bing sent me advance notice about their blog post and I read it multiple times. I had a hard time getting past the part about AI Search taking over the research phase of the consumer journey because it seemingly leaves informational publishers with zero clicks. Then I realized that’s not necessarily how it has to happen, as is explained further on.

Here’s what they say:

“It’s not that people are no longer clicking. They’re just clicking at later stages in the journey, and with far stronger intent.”

Search used to be the gateway to the Internet. Today the internet (lowercase) is seemingly the gateway to AI conversations. Nevertheless, people enjoy reading content and learning, so it’s not that the audience is going away.

While AI can synthesize content, it cannot delight, engage, and surprise on the same level that a human can. This is our strength and it’s up to us to keep that in mind moving forward in what is becoming a less confusing future.

Create High-Quality Content

Bing’s blog post says that the priority is to create high-quality content:

“The priority now is to understand user actions and guide people toward high-value outcomes, whether that is a subscription, an inquiry, a demo request, a purchase, or other meaningful engagement.”

But what’s the point in creating high-quality content for consumers if Bing is no longer “sending users through multiple clicks and sources” because AI Search is embedding that high-quality content in its answers?

The answer is that Bing is still linking out to sources. That gives brands an opportunity to check whether they appear among those sources and, if they’re missing, to do something about it. Informational sites should review those sources and work out why they’re not included, something that’s discussed below.

Conversion Signals In AI Search

Earlier this year at the Google Search Central Live event in New York City, a member of the audience told the assembled Googlers that their client’s clicks were declining due to AI Overviews and asked them, “what am I supposed to tell my clients?” The audience member expressed the frustration that many ecommerce stores, publishers, and SEOs are feeling.

Bing’s latest blog post attempts to answer that question by encouraging online publishers to focus on three signals.

  • Citations
  • Impressions
  • Placement in AI answers

This is their explanation:

“…the most valuable signals are the ones connected to visibility. By tracking impressions, placement in AI answers, and citations, brands can see where content is being surfaced, trusted, and considered, even before a visit occurs. More importantly, these signals reveal where interest is forming and where optimization can create lift, helping teams double down on what works to improve visibility in the moments when decisions are being shaped.”

But what’s the point if people are no longer clicking except at the later stages of the consumer journey? Bing makes it clear that the research stage happens “within one environment,” but it is still linking out to websites. As will be shown a little further into this article, there are steps publishers can take to ensure their articles are surfaced in the AI conversational environment.

They write:

“In fewer steps than ever, the customer reaches a confident decision, guided by intent-aligned, multi-source content that reflects brand and third-party perspectives. This behavior shift, where discovery, research, and decision happen continuously within one environment, is redefining how site owners understand conversion.

…As AI-powered search reshapes how people explore information, more of the journey now happens inside the experience itself.

…Users now spend more of the journey inside AI experiences, shaping visibility and engagement in new ways. As a result, engagement is shifting upstream (pre-click) within summaries, comparisons, and conversational refinements, rather than through multiple outbound clicks.”

The change in which discovery, research, and decision-making all happen inside AI search explains why traditional click-focused metrics are losing relevance. The customer journey is happening within the conversational AI environment, so the signals that are beginning to matter most are the ones generated before a user ever reaches a website. Visibility now depends on how well a brand’s information contributes to the summaries, comparisons, and conversational refinements that form the new upstream engagement layer.

This is the reality of where we are at right now.

How To Adapt To The New Customer Journey

AI Search has enabled consumers to do deeper research and comparisons during the early and middle part of the buying cycle, a significant change in consumer behavior.

In a podcast from May of this year, Michael Bonfils touched on this change in consumer behavior and underlined the importance of obtaining signals from the consideration stage of consumer purchases. Read: 30-Year SEO Pro Shows How To Adapt To Google’s Zero-Click Search

He observed:

“We have a funnel, …which is the awareness consideration phase …and then finally the purchase stage. The consideration stage is the critical side of our funnel. We’re not getting the data. How are we going to get the data?

But that’s very important information that I need because I need to know what that conversation is about. I need to know what two people are talking about… because my entire content strategy in the center of my funnel depends on that greatly.”

Michael suggested that the keyword paradigm is inappropriate for the reality of AI Search and that rather than optimize for keywords, marketers and business people should be optimizing for the range of questions and comparisons that AI Search will be surfacing.

He explained:

“So let’s take the whole question, and as many questions as possible, that come up to whatever your product is, that whole FAQ and the answers, the question, and the answers become the keyword that we all optimize on moving forward.

Because that’s going to be part of the conversation.”

Bing’s blog post confirms this aspect of consumer research, noting that the click now happens more often at the conversion stage of the consumer journey.

Tracking AI Metrics

Bing recommends using its Webmaster Tools and Clarity services to gain more insight into how people are engaging with AI search.

They explain:

“Bing Webmaster Tools continues to evolve to help site owners, publishers, and SEOs understand how content is discovered and where it appears across traditional search results and emerging AI-driven experiences. Paired with Microsoft Clarity’s AI referral insights, these tools connect upstream visibility with on-site behavior, helping teams see how discovery inside summaries, answers, and comparisons translates into real engagement. As user journeys shift toward more conversational, zero-UI-style interactions, these combined signals give a clearer view of influence, readiness, and conversion potential.”

The Pragmatic Takeaway

The emphasis for brands is to show up in review sites, build relationships with them, and try as much as possible to get in front of consumers and build positive word of mouth.

For news and informational sites, Bing recommends publishing high-quality content that engages readers and creating an experience that encourages them to return.

Bing writes:

“Rather than focusing on product-driven actions, success may depend on signals such as read depth, article completion, returning reader patterns, recirculation into related stories, and newsletter sign-ups or registrations.

AI search can surface authoritative reporting earlier in the journey, bringing in readers who are more inclined to engage deeply with coverage or return for follow-up stories. As these upstream interactions grow, publishers benefit from visibility into how their work appears across AI answers, summaries, and comparisons, even when user journeys are shorter or involve fewer clicks.”

I have been a part of the SEO community for over twenty-five years, and I have never seen a more challenging period for publishers than the one we face today. The challenge is to build a brand, generate brand loyalty, and focus on the long term.

Read Bing’s blog post:

How AI Search Is Changing the Way Conversions are Measured 

Featured Image by Shutterstock/ImageFlow

Google’s Mueller Says Sites In A ‘Bad State’ May Need To Start Over via @sejournal, @MattGSouthern

Google’s John Mueller says sites with low-quality AI content should rethink their purpose rather than manually rewrite pages. Starting fresh may be faster than recovering.

  • Manually rewriting AI content doesn’t automatically restore a site’s value or authenticity
  • Mueller recommends treating recovery as starting over with no content, not as a page-by-page editing task
  • Recovering from a “bad state” may take longer than launching on a new domain
Mueller: Background Video Loading Unlikely To Affect SEO via @sejournal, @MattGSouthern

Google Search Advocate John Mueller says large video files loading in the background are unlikely to have a noticeable SEO impact if page content loads first.

A site owner on Reddit’s r/SEO asked whether a 100MB video would hurt SEO if the page prioritizes loading a hero image and content before the video. The video continues loading in the background while users can already see the page.

Mueller responded:

“I don’t think you’d notice an SEO effect.”

Broader Context

The question addresses a common concern for sites using large hero videos or animated backgrounds.

The site owner described an implementation where content and images load within seconds, displaying a “full visual ready” state. The video then loads asynchronously and replaces the hero image once complete.

This method aligns with Google’s documentation on lazy loading, which recommends deferring non-critical content to improve page performance.

Google’s help documents state that lazy loading is “a common performance and UX best practice” for non-critical or non-visible content. The key requirement is ensuring content loads when visible in the viewport.

Why This Matters

If you’re running hero videos or animated backgrounds on landing pages, this suggests that background loading strategies are unlikely to harm your rankings. The critical factor is ensuring your primary content reaches users quickly.

Google measures page experience through Core Web Vitals metrics like Largest Contentful Paint. In many cases, a video that loads after visible content is ready shouldn’t block these measurements.

Implementation Best Practices

Google’s web.dev documentation recommends using preload="none" on video elements to avoid unnecessary preloading of video data. Adding a poster attribute provides a placeholder image while the video loads.

For videos that autoplay, the documentation suggests using the Intersection Observer API to load video sources only when the element enters the viewport. This lets you maintain visual impact without affecting initial page load performance.
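
Taken together, the pattern might look like the sketch below: preload="none" plus a poster image, with an Intersection Observer swapping the real video source in as the element nears the viewport. This is a minimal illustration; the class name, data attributes, and file names are assumptions, not taken from Google’s documentation.

```typescript
// Assumed markup (illustrative):
//   <video class="lazy-hero" preload="none" poster="hero.jpg">
//     <source data-src="hero.mp4" type="video/mp4">
//   </video>
const lazyVideo = document.querySelector<HTMLVideoElement>("video.lazy-hero");

if (lazyVideo && "IntersectionObserver" in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const video = entry.target as HTMLVideoElement;
      // Copy the real URL from data-src into src, then start loading.
      video.querySelectorAll("source").forEach((source) => {
        if (source.dataset.src) source.src = source.dataset.src;
      });
      video.load();
      obs.unobserve(video); // Fire once; the video keeps loading on its own.
    }
  });
  observer.observe(lazyVideo);
}
```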

Looking Ahead

Site owners using background video can generally continue doing so without major SEO concerns, provided content loads first. Focus on Core Web Vitals metrics to verify your implementation meets performance thresholds.

Test your setup using Google Search Console’s URL Inspection Tool to confirm video elements appear correctly in rendered HTML.


Featured Image: Roman Samborskyi/Shutterstock

New Data: Top Factors Influencing ChatGPT Citations via @sejournal, @MattGSouthern

SE Ranking analyzed 129,000 unique domains across 216,524 pages in 20 niches to identify which factors correlate with ChatGPT citations.

The number of referring domains ranked as the single strongest predictor of citation likelihood.

What The Data Says

Backlinks And Trust Signals

Link diversity showed the clearest correlation with citations. Sites with up to 2,500 referring domains averaged 1.6 to 1.8 citations. Those with over 350,000 referring domains averaged 8.4 citations.

The researchers identified a threshold effect at 32,000 referring domains. At that point, citations nearly doubled from 2.9 to 5.6.

Domain Trust scores followed a similar pattern. Sites with Domain Trust below 43 averaged 1.6 citations. The benefits accelerated significantly at the top end: sites scoring 91–96 averaged 6 citations, while those scoring 97–100 averaged 8.4.

Page Trust mattered less than domain-level signals. Any page with a Page Trust score of 28 or above received roughly the same citation rate (8.3 average), suggesting ChatGPT weighs overall domain authority more heavily than individual page metrics.

One notable finding: .gov and .edu domains didn’t automatically outperform commercial sites. Government and educational domains averaged 3.2 citations, compared to 4.0 for sites without trusted zone designations.

The authors wrote:

“What ultimately matters is not the domain name itself, but the quality of the content and the value it provides.”

Traffic & Google Rankings

Domain traffic ranked as the second most important factor, though the correlation only appeared at high traffic levels.

Sites under 190,000 monthly visitors averaged 2 to 2.9 citations regardless of exact traffic volume. A site receiving 20 organic visitors performed similarly to one receiving 20,000.

Only after crossing 190,000 monthly visitors did traffic correlate with increased citations. Domains with over 10 million visitors averaged 8.5 citations.

Homepage traffic specifically mattered. Sites with at least 7,900 organic visitors to their main page showed the highest citation rates.

Average Google ranking position also tracked with ChatGPT citations. Pages ranking between positions 1 and 45 averaged 5 citations. Those ranking 64 to 75 averaged 3.1.

The authors noted:

“While this doesn’t prove that ChatGPT relies on Google’s index, it suggests both systems evaluate authority and content quality similarly.”

Content Depth & Structure

Content length showed consistent correlation. Articles under 800 words averaged 3.2 citations. Those over 2,900 words averaged 5.1.

Structure mattered beyond raw word count. Pages with section lengths of 120 to 180 words between headings performed best, averaging 4.6 citations. Extremely short sections under 50 words averaged 2.7 citations.

Pages with expert quotes averaged 4.1 citations versus 2.4 for those without. Content with 19 or more statistical data points averaged 5.4 citations, compared to 2.8 for pages with minimal data.

Content freshness produced one of the clearer findings. Pages updated within three months averaged 6 citations. Outdated content averaged 3.6.

Surprisingly, the raw data showed that pages with FAQ sections actually received fewer citations (3.8) than those without (4.1). However, the researchers noted that their predictive model viewed the absence of an FAQ section as a negative signal. They suggest this discrepancy exists because FAQs often appear on simpler support pages that naturally earn fewer citations.

The report also found that using question-style headings (e.g., as H1s or H2s) underperformed straightforward headings, earning 3.4 citations versus 4.3. This contradicts standard voice search optimization advice, suggesting AI models may prefer direct topical labeling over question formats.

Social Signals & Review Platforms

Brand mentions on discussion platforms showed strong correlation with citations.

Domains with minimal Quora presence (up to 33 mentions) averaged 1.7 citations. Heavy Quora presence (6.6 million mentions) corresponded to 7.0 citations.

Reddit showed similar patterns. Domains with over 10 million mentions averaged 7 citations, compared to 1.8 for those with minimal activity.

The authors positioned this as particularly relevant for smaller sites:

“For smaller, less-established websites, engaging on Quora and Reddit offers a way to build authority and earn trust from ChatGPT, similar to what larger domains achieve through backlinks and high traffic.”

Presence on review platforms like Trustpilot, G2, Capterra, Sitejabber, and Yelp also correlated with increased citations. Domains listed on multiple review platforms earned 4.6 to 6.3 citations on average. Those absent from such platforms averaged 1.8.

Technical Performance

Page speed metrics correlated with citation likelihood.

Pages with First Contentful Paint under 0.4 seconds averaged 6.7 citations. Slower pages (over 1.13 seconds) averaged 2.1.

Speed Index showed similar patterns. Sites with indices below 1.14 seconds performed reliably well. Those above 2.2 seconds experienced steep decline.

One counterintuitive finding: pages with the fastest Interaction to Next Paint scores (under 0.4 seconds) actually received fewer citations (1.6 average) than those with moderate INP scores (0.8 to 1.0 seconds, averaging 4.5 citations). The researchers suggested extremely simple or static pages may not signal the depth ChatGPT looks for in authoritative sources.

URL & Title Optimization

The report found that broad, topic-describing URLs outperformed keyword-optimized ones.

Pages with low semantic relevance between URL and target keyword (0.00 to 0.57 range) averaged 6.4 citations. Those with highest semantic relevance (0.84 to 1.00) averaged only 2.7 citations.

Titles followed the same pattern. Titles with low keyword matching averaged 5.9 citations. Highly keyword-optimized titles averaged 2.8.

The researchers concluded: “ChatGPT prefers URLs that clearly describe the overall topic rather than those strictly optimized for a single keyword.”

Factors That Underperformed

Several commonly recommended AI optimization tactics showed minimal or negative correlation with citations.

FAQ schema markup underperformed. Pages with FAQ schema averaged 3.6 citations. Pages without averaged 4.2.

LLMs.txt files showed negligible impact. Outbound links to high-authority sites also showed minimal effect on citation likelihood.

Why This Matters

The findings suggest your existing SEO strategy may already serve AI visibility goals. If you’re building referring domains, earning traffic, maintaining fast pages, and keeping content updated, you’re addressing the factors this report identified as most predictive.

For smaller sites without extensive backlink profiles, the research points to community engagement on Reddit and Quora as a viable path to building authority signals. The data also suggests focusing on content depth over keyword density.

The researchers note that factors are interdependent. Optimizing one signal while ignoring others reduces overall effectiveness.

Looking Ahead

SE Ranking analyzed ChatGPT specifically. Other AI systems may weight factors differently.

SE Ranking doesn’t specify which ChatGPT version or timeframe the data represents, so these patterns should be treated as directional correlations rather than proof of how ChatGPT’s ranking algorithm works.


Featured Image: BongkarnGraphic/Shutterstock

ChatGPT Adds Shopping Research For Product Discovery via @sejournal, @MattGSouthern

OpenAI launched shopping research in ChatGPT, a feature that creates personalized buyer’s guides by researching products across the web. The tool is rolling out today on mobile and web for logged-in users on Free, Go, Plus, and Pro plans.

The company is offering nearly unlimited usage through the holidays.

What’s New

Shopping research works differently from standard ChatGPT responses. Users describe what they need, answer clarifying questions about budget and preferences, and receive a buyer’s guide after a few minutes.

The feature pulls information including price, availability, reviews, specs, and images from across the web. You can guide the research by marking products as “Not interested” or “More like this” as options appear.

OpenAI’s announcement states:

“Shopping research is built for that deeper kind of decision-making. It turns product discovery into a conversation: asking smart questions to understand what you care about, pulling accurate, up-to-date details from high-quality sources, and bringing options back to you to refine the results.”

The company says the tool performs best in categories like electronics, beauty, home and garden, kitchen and appliances, and sports and outdoor.

Technical Details

Shopping research is powered by a shopping-specialized GPT-5 mini variant post-trained on GPT-5-Thinking-mini.

OpenAI’s internal evaluation shows shopping research reached 52% product accuracy on multi-constraint queries, compared with 37% for ChatGPT Search.

Product accuracy measures how well responses meet user requirements for attributes like price, color, material, and specs. The company designed the system to update and refine results in real time based on user feedback.

Privacy & Data Sharing

OpenAI states that user chats are never shared with retailers. Results are organic and based on publicly available retail sites.

Merchants who want to appear in shopping research results can follow an allowlisting process through OpenAI.

Limitations

OpenAI acknowledges the feature isn’t perfect. The model may make mistakes about product details like price and availability. The company encourages users to visit merchant sites for the most accurate information.

Why This Matters

This feature pulls more of the product comparison journey into one place.

As shopping research handles more of the “which one should I buy?” work inside ChatGPT, some of that early-stage discovery could happen without a traditional search click.

For retailers and affiliate publishers, that raises the stakes for inclusion in these results. Visibility may depend on how well your products and pages are represented in OpenAI’s shopping system and allowlisting process.

Looking Ahead

Shopping research in ChatGPT is rolling out to logged-in users starting today. OpenAI plans to add direct purchasing through ChatGPT for merchants participating in Instant Checkout, though no timeline was provided.


Featured Image: Koshiro K/Shutterstock

What Optmyzr’s Three-Year Study Reveals About Seasonality Adjustments During BFCM via @sejournal, @brookeosmundson

Every Q4, the same message shows up in our accounts:

“Use seasonality adjustments to get ready for Black Friday and Cyber Monday.”

On paper, it sounds reasonable. You expect conversion rates to rise, so you give Smart Bidding a heads up and tell it to bid more aggressively during the peak.

Optmyzr’s latest study puts a pretty big dent in that narrative.

Over three BFCM cycles from 2022 through 2024, Fred Vallaeys and the Optmyzr team analyzed performance for up to 6,000 advertisers per year, split into two cohorts: those who used seasonality bid adjustments and those who did not.

The question was simple: do these adjustments actually help during Black Friday and Cyber Monday, or are we just making Google bid higher for no meaningful gain?

Based on the data, seasonality adjustments often hurt efficiency and rarely deliver the breakthrough many advertisers expect.

Below is a breakdown of the study and what it means for PPC managers heading into peak season.

Key Findings from Optmyzr’s BFCM Seasonality Study

The study compared performance across three BFCM periods (2022–2024), defined as the Wednesday before Black Friday through the Wednesday after Cyber Monday. Each year’s results were then measured against a pre-BFCM baseline.

The accounts were grouped into:

  • Advertisers who did not use seasonality bid adjustments
  • Advertisers who did apply them

Across all three years, consistent patterns emerged from their study.

#1: Smart Bidding already adjusts for BFCM without manual prompts

For advertisers who skipped seasonality adjustments, Smart Bidding still responded to the conversion rate spike:

  • 2022: Conversion rate up 17.5%
  • 2023: Conversion rate up 11.9%
  • 2024: Conversion rate up 7.5%

In other words, the algorithm did exactly what it was designed to do. It detected higher intent and increased bids without needing an external nudge.

#2: Seasonality adjustments inflated CPCs far more than necessary

Seasonality adjustments tell Google’s system to raise bids based on your predicted conversion rate increase.

Optmyzr notes that:

“When you apply a seasonality adjustment, you are effectively telling Google: ‘I expect conversion rate to increase by X%. Raise bids immediately by X%.’”

And Smart Bidding acts as if you’re exactly right. It usually doesn’t soften that prediction or test into it.

The study showed that this is why CPCs climbed much faster for advertisers who used adjustments:

CPC inflation (no adjustment vs. with adjustment)

  • 2022: +17% vs. +36.7%
  • 2023: +16% vs. +32%
  • 2024: +17% vs. +34%

Adjustments consistently doubled CPC inflation, even though Smart Bidding was already raising bids based on real-time conversion signals.

#3: ROAS dropped for advertisers using seasonality adjustments

When CPC increases outpace conversion rate increases, ROAS inevitably suffers.

ROAS change (no adjustment vs. with adjustment)

  • 2022: -2% vs. -17%
  • 2023: -1.5% vs. -10%
  • 2024: +5.7% vs. -15.7%

The “no adjustment” group maintained stable ROAS, even improving in 2024. The “with adjustment” group saw steep declines every year.

Why Do Seasonality Adjustments Struggle During BFCM?

Optmyzr explains this dynamic as a precision issue.

When you apply a seasonality adjustment, you are making a specific prediction about the conversion lift. If you estimate the lift at +40% and the real lift ends up being +32–35%, that gap translates directly into overbidding.

Fred Vallaeys writes:

Smart Bidding takes this literally. It does not hedge your bet. It assumes you have perfect foresight.

That’s the core problem.
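
To see how a missed forecast turns into lost efficiency, consider a simplified back-of-the-envelope model (my own illustration, not Optmyzr’s methodology): bids, and therefore CPCs, rise by the lift you predicted, while revenue per click only rises by the lift that actually materializes, so the two multipliers divide out in ROAS.

```typescript
// Simplified illustration of overbidding from an overestimated seasonality
// adjustment. Numbers are hypothetical, not from the Optmyzr dataset.
const predictedLift = 0.40; // you tell Google: "+40% conversion rate"
const actualLift = 0.32;    // the lift that actually materializes

const cpcMultiplier = 1 + predictedLift;          // bids/CPCs: x1.40
const revenuePerClickMultiplier = 1 + actualLift; // revenue per click: x1.32

// ROAS = revenue / cost, so the two multipliers divide out.
const roasMultiplier = revenuePerClickMultiplier / cpcMultiplier;
console.log(`ROAS change: ${((roasMultiplier - 1) * 100).toFixed(1)}%`);
// => "ROAS change: -5.7%" (an 8-point forecast miss alone erodes ~6% of ROAS)
```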

Black Friday and Cyber Monday are also in the category of highly predictable retail events. Google has years of historical BFCM data to model expected shifts. As a result, Optmyzr concludes:

Seasonality adjustments work best when Google cannot anticipate the spike.

BFCM is not one of those situations. It’s practically encoded into Google’s models.

The Trade-Off: More Revenue, Lower Efficiency

The study did show that advertisers using seasonality adjustments often drove higher revenue growth:

Revenue growth (no adjustment vs. with adjustment)

  • 2022: +25% vs. +50.5%
  • 2023: +30.3% vs. +52.8%
  • 2024: +33.8% vs. +39.9%

In 2022 and 2023, the incremental revenue jump was significant. But again, those gains came with notable ROAS declines.

This supports a practical interpretation:

  • If your brand’s priority is aggressive market share capture, top-line revenue, or inventory liquidation, seasonality adjustments can deliver more volume.
  • If your brand’s priority is profitable performance, adjustments tend to work against that goal during BFCM.

When Seasonality Adjustments Do Make Sense

In the study, Optmyzr made it very clear: seasonality adjustments themselves aren’t the problem. The misuse of them is.

They work well in scenarios where you genuinely have more insight into the spike than the platforms do, such as:

  • A short flash sale
  • A new one-time promotion with no historical precedent
  • A large, concentrated email push
  • Niche events with little global relevance

Situations where they may not make the most sense:

  • Black Friday and Cyber Monday (supported by their data study)
  • Christmas shopping windows
  • Valentine’s Day for gift categories

These events are already modeled extensively by Google’s bidding systems.

What Should PPC Managers Do With This Data?

If you’re looking to make some changes to your PPC accounts this holiday season, here are a few ways to apply these findings in a practical way.

#1: Default to not using seasonality adjustments for BFCM

For the majority of advertisers, letting Smart Bidding handle the conversion rate spike naturally leads to steadier ROAS and fewer surprises.

The data supports this approach across three consecutive years.

#2: If leadership insists on volume, be explicit about the trade-off

You can lean on Optmyzr’s findings to set expectations, not just express an opinion.

For example:

  • “Optmyzr’s three-year analysis shows that seasonality adjustments can increase revenue but typically reduce ROAS by 10-17 percentage points.”
  • “We can use them if revenue volume is the priority, but we will need to prepare for much lower cost efficiency.”

These examples keep the conversation focused on the business, not just the tactical levers you pull.

#3: Spend your energy on guardrails, not the predictions

In the study, Optmyzr reminds advertisers that trusting the algorithm doesn’t mean blindly letting it run without any oversight.

Instead of guessing the exact uplift, your value during peak season comes from:

  • Smart budget pacing
  • Hourly monitoring (with automated alerts, of course!)
  • Bid caps when necessary
  • Audience and device segmentation checks
  • Creative and offer readiness

These are some of the key areas where human judgment beats prediction.

Final Thoughts On Optmyzr’s Study

Optmyzr’s study doesn’t argue that seasonality bid adjustments are bad. What it does argue is that context is everything.

For predictable, high-volume retail events like BFCM, Google’s bidding systems already have the signal they need. Adding your own forecast often leads to overshooting, inflated CPCs, and unnecessary efficiency loss.

For unique or brand-specific spikes, adjustments remain valuable.

This research gives PPC managers something we rarely get during BFCM: solid data to support a more measured, less reactive approach. If nothing else, it gives you the backup you need the next time someone asks:

“Should we turn on seasonality adjustments this Black Friday?”

Your answer can be confident, data-driven, and clear.

Google’s Mueller Questions Need For LLM-Only Markdown Pages via @sejournal, @MattGSouthern

Google Search Advocate John Mueller has pushed back on the idea of building separate Markdown or JSON pages just for large language models (LLMs), saying he doesn’t see why LLMs would need pages that no one else sees.

The discussion started when Lily Ray asked on Bluesky about “creating separate markdown / JSON pages for LLMs and serving those URLs to bots,” and whether Google could share its perspective.

Ray asked:

Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots. Can you share Googleʼs perspective on this?

The question draws attention to a developing trend where publishers create “shadow” copies of important pages in formats that are easier for AI systems to understand.

There’s a more active discussion on this topic happening on X.

What Mueller Said About LLM-Only Pages

Mueller replied that he isn’t aware of anything on Google’s side that would call for this kind of setup.

He notes that LLMs have worked with regular web pages from the beginning:

I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?

When Ray followed up about whether a separate format might help “expedite getting key points across to LLMs quickly,” Mueller argued that if file formats made a meaningful difference, you would likely hear that directly from the companies running those systems.

Mueller added:

If those creating and running these systems knew they could create better responses from sites with specific file formats, I expect they would be very vocal about that. AI companies aren’t really known for being shy.

He said some pages may still work better for AI systems than others, but he doesn’t think that comes down to HTML versus Markdown:

That said I can imagine some pages working better for users and some better for AI systems, but I doubt that’s due to the file format, and it’s definitely not generalizable to everything. (Excluding JS which still seems hard for many of these systems).

Taken together, Mueller’s comments suggest that, from Google’s point of view, you don’t need to create bot-only Markdown or JSON clones of existing pages just to be understood by LLMs.

How Structured Data Fits In

Other individuals in the thread drew a line between speculative “shadow” formats and cases where AI platforms have clearly defined feed requirements.

A reply from Matt Wright pointed to OpenAI’s eCommerce product feeds as an example where JSON schemas matter.

In that context, a defined spec governs how ChatGPT ingests and displays product data. Wright explains:

Interestingly, the OpenAI eCommerce product feeds are live: JSON schemas appear to have a key role in AI search already.

That example supports the idea that structured feeds and schemas are most important when a platform publishes a spec and asks you to use it.

Additionally, Wright points to a thread on LinkedIn where Chris Long observed that “editorial sites using product schemas, tend to get included in ChatGPT citations.”
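
For context, “product schemas” here refers to schema.org Product structured data, typically embedded in a page as JSON-LD. Below is a minimal sketch of that markup, written as the object a server-rendered page might serialize into a script tag; every value is invented for illustration.

```typescript
// Minimal schema.org Product markup. All values below are invented.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Smart Thermostat",
  description: "Wi-Fi thermostat with energy-usage reporting.",
  brand: { "@type": "Brand", name: "ExampleBrand" },
  offers: {
    "@type": "Offer",
    price: "129.00",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// Serialized into the page so crawlers and AI systems can parse it:
const scriptTag =
  `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
```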

Why This Matters

If you’re questioning whether to build “LLM-optimized” Markdown or JSON versions of your content, this exchange can help steer you back to the basics.

Mueller’s comments reinforce that LLMs have long been able to read and parse standard HTML.

For most sites, it’s more productive to keep improving speed, readability, and content structure on the pages you already have, and to implement schema where there’s clear platform guidance.

At the same time, the Bluesky thread shows that AI-specific formats are starting to emerge in narrow areas such as product feeds. Those are worth tracking, but they’re tied to explicit integrations, not a blanket rule that Markdown is better for LLMs.

Looking Ahead

The conversation highlights how fast AI-driven search changes are turning into technical requests for SEO and dev teams, often before there is documentation to support them.

Until LLM providers publish more concrete guidelines, this thread points you back to work you can justify today: keep your HTML clean, reduce unnecessary JavaScript where it makes content hard to parse, and use structured data where platforms have clearly documented schemas.


Featured Image: Roman Samborskyi/Shutterstock

EU Plan To Simplify GDPR Targets AI Training And Cookie Consent via @sejournal, @MattGSouthern

The European Commission has proposed a “Digital Omnibus” package that would relax parts of the GDPR, the AI Act, and Europe’s cookie rules in the name of competitiveness and simplification.

If you work with EU traffic or rely on European data for analytics, advertising, or AI features, it’s worth tracking this proposal even though nothing has changed in law yet.

What The Digital Omnibus Would Change

The Digital Omnibus would revise several laws at once.

On AI, the proposal would push back stricter rules for high-risk systems from August 2026 to December 2027. It would also lighten documentation and reporting obligations for some systems and move more oversight to the EU AI Office.

Regarding data protection, the Commission aims to clarify when information is no longer considered ‘personal,’ making it easier to share and reuse anonymized and pseudonymized datasets, especially for AI training.

Privacy group noyb says this new wording isn’t just about clarifying the rules. They believe the proposal introduces a more subjective approach, hinging on what a controller claims it can or plans to do. Noyb warns this change could exclude parts of the adtech and data-broker industry from GDPR protections.

Cookies, Consent, And Browser Signals

The cookie section is likely to be the most visible change for your day-to-day work if the proposal moves forward.

The Commission wants to cut “banner fatigue” by exempting some non-risk cookies from consent pop-ups and shifting more control into browser-level settings that apply across sites.

In practice, that would mean fewer consent banners for low-risk uses, such as certain analytics or strictly functional storage, once categories are defined.

The proposal would also require websites to respect standardized, machine-readable privacy signals from browsers when those standards exist.
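
The proposal doesn’t name specific standards, but Global Privacy Control (GPC) is one machine-readable privacy signal that already exists: supporting browsers send a Sec-GPC: 1 request header and expose a boolean to scripts. A minimal sketch of honoring it client-side:

```typescript
// Global Privacy Control (GPC): supporting browsers send a "Sec-GPC: 1"
// request header and expose navigator.globalPrivacyControl to scripts.
const gpcEnabled =
  (navigator as Navigator & { globalPrivacyControl?: boolean })
    .globalPrivacyControl === true;

if (gpcEnabled) {
  // Treat the signal as an opt-out: a site would suppress non-essential
  // tracking here rather than asking again via a banner.
  console.log("GPC signal detected: honoring opt-out preference.");
}
```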

AI Training & Data Rights

One of the most contested pieces of the Digital Omnibus is how it treats data used to train AI systems.

The package would allow companies including Google, Meta, and OpenAI to use Europeans’ personal data to train AI models under a broadened legal basis.

Privacy groups have argued that this kind of training should rely on explicit opt-in consent, rather than the more flexible approach they see in the proposal.

Noyb warns that long-running behavioral data, such as social media histories, could be used to train AI systems with only an opt-out model that is difficult for people to exercise in practice.

Why This Matters

This proposal is worth keeping on your radar if you’re responsible for analytics, consent, or AI-driven products that reach EU users.

Over time, you might observe smaller, browser-driven consent experiences for EU traffic, along with a different compliance approach for AI features that depend on behavioral data.

For now, nothing in your cookie banners, GA4 setup, or AI workflows needs to change solely because of the Digital Omnibus.

Looking Ahead

The Digital Omnibus is an early signal that the EU is re-balancing its digital rulebook around AI and competitiveness, not privacy and enforcement alone.

Key items to monitor include Parliament’s amendments to the AI training and data language, the cookie and browser-signal provisions affecting CMPs and browsers, and changes to consent rules for EU users.


Featured Image: HJBC/Shutterstock

Pew: 84% Of Adults Use YouTube As Platform Growth Continues via @sejournal, @MattGSouthern

YouTube and Facebook continue to lead U.S. social media usage, but TikTok, Instagram, WhatsApp and Reddit are showing consistent growth, according to new data from Pew Research Center.

The report surveyed 5,022 U.S and found 84% use YouTube and 71% use Facebook. Instagram reached 50% adoption, making it the only other platform used by at least half of American adults.

What The Data Says

TikTok Growth Continues

TikTok usage among U.S. adults has increased to 37%, a slight rise from last year and nearly twice the 21% recorded in 2021. Approximately 24% of U.S. adults visit the platform daily.

Instagram Reaches Milestone

Half of U.S. adults now use Instagram, matching 2024 levels but rising from 40% in 2021. The platform is especially popular among younger users.

WhatsApp and Reddit Gain Users

WhatsApp usage increased to 32%, rising from 23% in 2021. Reddit grew to 26%, up from 18% four years earlier.

New Platforms Show Limited Reach

Among U.S. adults, Threads has an 8% adoption rate, Bluesky is at 4%, and Truth Social stands at 3%.

Usage Frequency Varies by Platform

Approximately half of adults (52%) visit Facebook every day, with 37% checking it multiple times a day. YouTube has 48% daily usage, with 33% visiting more than once a day.

TikTok is used daily by 24% of adults, while X (formerly Twitter) has a 10% daily usage rate.

Platform Demographics

Age is the strongest predictor of platform use. Eight in ten adults aged 18-29 use Instagram, versus 19% of those 65+. Similar gaps are seen for Snapchat (58% vs. 4%), TikTok (63% vs. 5%) and Reddit (48% vs. 6%).

YouTube and Facebook are used across most age groups, but younger adults still lead on YouTube at 95%, versus 64% of those 65+.

Women are more likely to use Facebook (78% vs. 63%), Instagram (55% vs. 44%) and TikTok (42% vs. 30%), while men favor X (29% vs. 15%) and Reddit (37% vs. 15%). Adults with college degrees are more likely to use Reddit (40%), WhatsApp (41%) and Instagram (58%) than those with high school or less.

Why This Matters

These usage patterns can help inform your content distribution plans.

YouTube and Facebook are key for reaching a wide audience, while TikTok, Instagram, and newer platforms focus on specific groups.

Since different age groups prefer different platforms, it’s a good idea to tailor strategies for each platform rather than sharing the same content everywhere.

Looking Ahead

Pew’s data indicates gradual changes rather than sudden growth. Younger adults are continuing to favor familiar platforms like YouTube, Instagram, TikTok, Snapchat, and Reddit, while older adults are still more reliant on Facebook and YouTube.

Newer platforms such as Threads and Bluesky are still niche but indicate where politically active users might experiment next.

Pew’s trend series and methodology notes offer a baseline to monitor whether these divides increase, decrease, or stabilize in future data.


Featured Image: Vasylisa Dvoichenkova/Shutterstock