Google Ads Surfaces PMax Search Partner Domains In Placement Report via @sejournal, @MattGSouthern

Some advertisers are now seeing Performance Max placement data populate in Google Ads reporting, including Search Partner domains and impression counts that had previously been absent from the report.

PPC marketer Thomas Eccel flagged the change on LinkedIn, noting the report had been empty for his PMax campaigns until now.

“I finally see where and how Pmax is being displayed!” Eccel wrote. “But also cool to see finally who the real Google Search Partners are. That was always a blurry grey zone.”

What’s New

Google has documented a Performance Max placement report intended for brand safety review, and that report is now showing data for a wider set of accounts. The data includes individual placement domains, network type, placement type, and impression volume.

The Search Partner visibility is the detail getting attention. PMax campaigns have distributed ads across Google’s Search Partner Network since launch, but many advertisers saw an empty report when they looked for specifics. That’s now changing for at least some accounts.

Google hasn’t issued a formal announcement tied to this change. Google’s help documentation notes that starting in March 2024, the PMax placement report supports Search Partner Network sites. What’s new is the data appearing where it didn’t before.

The rollout is uneven, though. Some commenters on Eccel’s LinkedIn post said the report is still empty in their accounts.

What The Report Doesn’t Show

Google describes this placement reporting as a brand safety tool, not a performance report. The data shows impressions at the placement level but doesn’t break out clicks, conversions, or cost for individual placements.

You can see where your ads appeared and how many times, but you can’t calculate the return on any specific placement. Search Partner Network costs are reported as a single line item in channel performance reporting, rather than being attributed by domain.
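For advertisers who prefer to pull this data outside the UI, the same placement-level impressions are exposed through the Google Ads API. The snippet below is a minimal sketch in Python, assuming the google-ads package and a google-ads.yaml config are already set up and that the API version available to your account includes the performance_max_placement_view resource; the customer ID is a placeholder.

```python
# Minimal sketch: list PMax placements and impressions via the Google Ads API.
# Assumes the google-ads package is installed and google-ads.yaml is configured.
from google.ads.googleads.client import GoogleAdsClient

QUERY = """
    SELECT
      performance_max_placement_view.display_name,
      performance_max_placement_view.placement_type,
      performance_max_placement_view.target_url,
      metrics.impressions
    FROM performance_max_placement_view
    WHERE metrics.impressions > 0
    ORDER BY metrics.impressions DESC
"""

def list_pmax_placements(customer_id: str) -> None:
    client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml
    ga_service = client.get_service("GoogleAdsService")
    for batch in ga_service.search_stream(customer_id=customer_id, query=QUERY):
        for row in batch.results:
            view = row.performance_max_placement_view
            print(view.display_name, view.placement_type.name,
                  view.target_url, row.metrics.impressions)

if __name__ == "__main__":
    list_pmax_placements("1234567890")  # placeholder customer ID
```

As in the UI, this view reports impressions only, so the limits described above apply to the API data as well.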

Advertisers can use the data to make exclusion decisions for brand safety reasons. But tying outcomes to specific placements inside this view isn’t possible, which limits its use as an optimization tool.

This fits a pattern in how Google has rolled out PMax transparency over the past two years. Channel-level reporting launched in mid-2025 with performance data by surface type, and deeper asset segmentation followed in the fall. Each update has added visibility without giving advertisers full placement-level performance data.

Why This Matters

PMax placement visibility has been one of the most persistent requests from paid search practitioners since the campaign type launched. The placement report existed in the interface but returned no data, frustrating advertisers who wanted to know where their budgets were going.

The Search Partner detail matters because PMax doesn’t offer the same Search Partners toggle as standard Search campaigns, though advertisers can use exclusions. Seeing which partner domains are getting impressions and cross-referencing that against overall Search Partner performance in the channel report gives you a data point you didn’t have in practice before, even if the report itself isn’t new.

The brand safety framing is worth keeping in mind. Google’s documentation describes this report as a way to check where ads appear, not to evaluate performance. That distinction matters for how you use the data and how you talk about it with clients or stakeholders who may expect more granularity than it provides.

Looking Ahead

Google has steadily expanded PMax reporting over the past year, moving from limited channel visibility to surface-level breakdowns to the placement-level impression data now appearing for more accounts.

Whether placement-level performance metrics follow is an open question. Google hasn’t confirmed plans to add clicks, conversions, or cost to the placement report. For now, checking whether the data is available in your account and reviewing the Search Partner domains receiving your impressions is the practical next step.

Google Offers AI Certificate Free For Eligible U.S. Small Businesses via @sejournal, @MattGSouthern

Google has launched the Google AI Professional Certificate, a self-paced program covering data analysis, content creation, research, and vibe coding.

Every participant receives three months of free access to Google AI Pro. Eligible U.S. small businesses can access the entire program at no cost through a separate application (more on eligibility below).

The certificate is available now on Coursera, Google Skills, and Udemy. In the U.S. and Canada, the subscription costs $49 per month.

What The Certificate Covers

The program consists of seven modules, each of which can be completed in about an hour. No prior AI experience is required.

Participants complete more than 20 hands-on activities. These include creating presentations and marketing materials, conducting deep research, building infographics, analyzing data, and building custom apps without writing code.

After completing all seven modules, participants earn a Google certificate they can add to LinkedIn and share with employers.

Free Access For Eligible U.S. Small Businesses

Google is offering the certificate at no cost to eligible U.S. small and medium-sized businesses with 500 or fewer employees. The offer also includes three months of free Google Workspace Business Standard (for new Workspace customers, up to 300 seats).

To qualify, businesses must be registered in the U.S. and submit their Employer Identification Number (EIN) through a dedicated application on Coursera. Coursera said the verification process takes 5-7 business days.

Businesses can also apply at grow.google/small-business. Google said it is working with the U.S. Chamber of Commerce and America’s Small Business Development Centers to distribute the program.

How This Helps

The program builds on Google AI Essentials, which has become the most popular course on Coursera. The AI Professional Certificate goes further, focusing on applied use cases rather than introductory concepts.

The certificate focuses on tools like Gemini, NotebookLM, and Google AI Studio, so the skills are tied to Google’s ecosystem. Google launched a separate Generative AI Leader certification for Google Cloud in May 2025, though that program focused on non-technical business leaders and required a $99 exam fee. The new AI Professional Certificate has no exam fee.

Looking Ahead

The Google AI Professional Certificate is available now on Coursera, Google Skills, and Udemy. Eligible U.S. small businesses can apply for no-cost access at grow.google/small-business.

For professionals already familiar with Google’s AI tools through earlier training programs, this certificate adds structured, employer-recognized credentials to practical skills you may already be developing on your own.

ChatGPT Search Often Switches To English In Fan-Out Queries: Report via @sejournal, @MattGSouthern

When ChatGPT Search builds an answer, it can generate background web queries to find sources. A new report from AI search analytics firm Peec AI found that a large share of those background queries run in English, even when the original prompt was in another language.

Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across all non-English prompts analyzed, the company reports that 43% of the fan-out steps were conducted in English.

What Are Fan-Out Queries

OpenAI’s ChatGPT Search documentation describes fan-out queries. When a user asks a question, ChatGPT Search “typically rewrites your query into one or more targeted queries” and sends them to search partners. After reviewing initial results, “ChatGPT search may send additional, more specific queries to other search providers.”

Peec AI refers to these rewritten sub-queries as “fan-outs.” The company’s report tracked which languages ChatGPT used when generating them.

OpenAI’s documentation does not describe how language is chosen for rewritten queries.

What Peec AI Found

Peec AI filtered its data to include only cases where the IP location matched the prompt language: Polish-language prompts from Polish IP addresses, German-language prompts from German IPs, and Spanish-language prompts from Spanish IPs. Mixed signals, such as German-language prompts from UK IP addresses, were excluded.

The filtered data showed that 78% of non-English prompt runs included at least one English-language fan-out query.

Turkish-language prompts included English fan-outs most often, at 94%. Spanish-language prompts were lowest, at 66%. No non-English language in Peec AI’s dataset fell below 60%.

Peec AI’s data showed a consistent pattern across languages. ChatGPT typically starts its fan-out queries in the prompt’s language, then adds English-language queries as it builds the response.
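Peec AI didn’t publish its analysis code, but the headline metric is straightforward to reproduce on your own logs. The sketch below is a hypothetical illustration, not Peec AI’s pipeline: it uses the open source langdetect library to tag each fan-out query’s language and computes the share of non-English runs that include at least one English fan-out. The sample data is made up.

```python
# Hypothetical illustration of the headline metric, not Peec AI's actual pipeline.
# pip install langdetect
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make language detection deterministic

def lang_of(text: str) -> str:
    """Best-effort ISO 639-1 language code; detection on short queries can be noisy."""
    try:
        return detect(text)  # e.g. "en", "pl", "es"
    except Exception:
        return "unknown"

# Each run: the language the prompt was written in, plus its fan-out queries.
runs = [
    {"prompt_lang": "pl", "fan_outs": ["najlepsze portale aukcyjne w Polsce",
                                       "best online auction sites"]},
    {"prompt_lang": "es", "fan_outs": ["mejores marcas de cosmética globales"]},
]

def share_with_english_fanout(runs: list[dict]) -> float:
    """Share of non-English prompt runs with at least one English fan-out query."""
    non_english = [r for r in runs if r["prompt_lang"] != "en"]
    if not non_english:
        return 0.0
    hits = sum(1 for r in non_english
               if any(lang_of(q) == "en" for q in r["fan_outs"]))
    return hits / len(non_english)

print(f"{share_with_english_fanout(runs):.0%} of non-English runs included an English fan-out")
```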

Examples From The Report

Peec AI’s blog post included several examples showing how the pattern can play out in practice.

When prompted in Polish from a Polish IP address about the best auction portals, ChatGPT either omitted or buried Allegro.pl in favor of eBay and other global platforms. Peec AI describes Allegro as Poland’s dominant ecommerce platform.

When prompted in German about German software companies, Peec AI reported the response listed no German companies. When prompted in Spanish about cosmetics brands, no Spanish brands appeared.

In the Spanish cosmetics example, Peec AI showed ChatGPT’s actual fan-out queries. The first ran in English. The second ran in Spanish but added the word “globales” (global), a qualifier the original prompt never used. The system appears to have interpreted a Spanish-language prompt from a Spanish IP address as a request for global brands.

These are individual examples from Peec AI’s testing, not necessarily representative of all ChatGPT Search behavior.

Why This Matters

SEO and content teams operating in non-English markets may face a disadvantage in ChatGPT’s source selection, one that doesn’t map cleanly to traditional ranking signals. In Peec AI’s examples, English-language fan-out queries surfaced English-language sources that favored global brands over local competitors.

We’ve been covering ChatGPT’s citation patterns for over a year now, from SE Ranking’s report on citation factors to the Tow Center’s attribution accuracy findings. Those earlier reports showed which signals predict whether a source gets cited. Peec AI’s data suggests the language of the background query may filter which sources are even considered, before citation signals come into play.

Methodology Notes

Peec AI is a vendor in the AI search analytics space. The company’s documentation describes its data collection method as running customer-defined prompts daily via browser automation, interacting with AI platforms through their web interfaces rather than APIs. The 10 million prompts in this report came from Peec AI’s platform, not from a panel of consumer ChatGPT sessions.

The report didn’t detail the composition of those prompts, what categories or industries they covered, or how representative they are of broader ChatGPT usage patterns.

Tomek Rudzki, the report’s author, is presented by Peec AI as a “GEO Expert” on its blog. He is a well-known technical SEO practitioner who has spoken at BrightonSEO and SMX Munich and contributed to publications such as Moz.

Looking Ahead

OpenAI’s public ChatGPT Search docs describe query rewriting and follow-up queries but don’t explain how language is chosen for those queries. Whether the English fan-out pattern Peec AI identified is an intentional design choice or an emergent behavior of the system remains unclear.

The report raises a question worth monitoring. Will building English-language content become part of AI search optimization strategies, or will AI search platforms adjust their source selection to better reflect local markets?


Featured Image: arda savasciogullari/Shutterstock

Why Google Runs AI Mode On Flash, Explained By Google’s Chief Scientist via @sejournal, @MattGSouthern

Google Chief Scientist Jeff Dean said Flash’s low latency and cost are why Google can run Search AI at scale. Retrieval is a design choice, not a limitation, he added.

In an interview on the Latent Space podcast, Dean explained why Flash became the production tier for Search. He also laid out why the pipeline that narrows the web to a handful of documents will likely persist.

Google started rolling out Gemini 3 Flash as the default for AI Mode in December. Dean’s interview explains the rationale behind that decision.

Why Flash Is The Production Tier

Dean called latency the critical constraint for running AI in Search. As models handle longer and more complex tasks, speed becomes the bottleneck.

“Having low latency systems that can do that seems really important, and flash is one direction, one way of doing that.”

Podcast hosts noted Flash’s dominance across services like Gmail and YouTube. Dean said search is part of that expansion, with Flash’s use growing across AI Mode and AI Overviews.

Flash can serve at this scale because of distillation. Each generation’s Flash inherits the previous generation’s Pro-level performance, getting more capable without getting more expensive to run.

“For multiple Gemini generations now, we’ve been able to make the sort of flash version of the next generation as good or even substantially better than the previous generation’s pro.”

That’s the mechanism that makes the architecture sustainable. Google pushes frontier models for capability development, then distills those capabilities into Flash for production deployment. Flash is the tier Google designed to run at search scale.
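Distillation here means training a smaller model to match a larger model’s output distribution rather than only the correct label. The snippet below is a generic, simplified illustration of that idea, not a description of Gemini’s training setup: the student’s loss measures how far its softened probabilities drift from the teacher’s.

```python
# Generic knowledge-distillation loss, for illustration only (not Google's setup).
import numpy as np

def softmax(logits: np.ndarray, temperature: float) -> np.ndarray:
    z = logits / temperature
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature: float = 2.0) -> float:
    """KL divergence between softened teacher and student output distributions."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)  # teacher "soft targets"
    q = softmax(np.asarray(student_logits, dtype=float), temperature)  # student predictions
    return float(np.sum(p * np.log(p / q)))

# The student is trained to drive this loss toward zero, inheriting the teacher's
# behavior while remaining cheap enough to serve at scale.
print(distillation_loss([4.0, 1.0, 0.2], [3.5, 1.2, 0.4]))
```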

Retrieval Over Memorization

Beyond Flash’s role in search, Dean described a design philosophy that keeps external content central to how these models work. Models shouldn’t waste capacity storing facts they can retrieve.

“Having the model devote precious parameter space to remember obscure facts that could be looked up is actually not the best use of that parameter space.”

Retrieval from external sources is a core capability, not a workaround. The model looks things up and works through the results rather than carrying everything internally.

Why Staged Retrieval Likely Persists

AI search can’t read the entire web at once. Current attention mechanisms are quadratic, meaning computational cost grows rapidly as context length increases. Dean said “a million tokens kind of pushes what you can do.” Scaling to a billion or a trillion isn’t feasible with existing methods.
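The quadratic constraint is easy to see with rough arithmetic. The snippet below is a back-of-the-envelope illustration, not a description of Gemini’s architecture: full self-attention compares every token with every other token, so each 1,000x increase in context length multiplies the attention cost by roughly 1,000,000.

```python
# Back-of-the-envelope: full self-attention cost grows with the square of context length.
for tokens in (1_000_000, 1_000_000_000, 1_000_000_000_000):
    pairs = tokens ** 2  # pairwise token interactions
    print(f"{tokens:>17,} tokens -> ~{pairs:.0e} pairwise interactions")

# Roughly 1e12 pairs at 1M tokens, 1e18 at 1B, 1e24 at 1T, which is why Dean says
# new techniques, not just more hardware, are needed to go beyond today's limits.
```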

Dean’s long-term vision is models that give the “illusion” of attending to trillions of tokens. Reaching that requires new techniques, not just scaling what exists today. Until then, AI search will likely keep narrowing a broad candidate pool to a handful of documents before generating a response.

Why This Matters

The model reading your content in AI Mode is getting better each generation. But it’s optimized for speed over reasoning depth, and it’s designed to retrieve your content rather than memorize it. Being findable through Google’s existing retrieval and ranking signals is the path into AI search results.

We’ve tracked every model swap in AI Mode and AI Overviews since Google launched AI Mode with Gemini 2.0. Google shipped Gemini 3 to AI Mode on release day, then started rolling out Gemini 3 Flash as the default a month later. Most recently, Gemini 3 became the default for AI Overviews globally.

Every model generation follows the same cycle. Frontier for capability, then distillation into Flash for production. Dean presented this as the architecture Google expects to maintain at search scale, not a temporary fallback.

Looking Ahead

Based on Dean’s comments, staged retrieval is likely to persist until attention mechanisms move past their quadratic limits. Google’s investment in Flash suggests the company expects to use this architecture across multiple model generations.

One change to watch is automatic model selection. Google’s Robby Stein has described the concept previously: routing complex queries to Pro while keeping Flash as the default.


Featured Image: Robert Way/Shutterstock

WooCommerce May Gain Sidekick-Type AI Through Extensions via @sejournal, @martinibuster

WooCommerce is approaching a turning point in 2026, thanks to the Model Context Protocol and a convergence of open source technologies that let it function as a layer any AI system can plug into, helping store owners and consumers accomplish more with less friction. Automattic’s Director of Engineering, AI, James LePage, discussed what’s possible right now, what’s coming in the near future, and why the current limitations are temporary.

WooCommerce

Because WooCommerce is built on WordPress and is highly extensible through plugins, APIs, and now MCP, it is rapidly evolving into a coordination layer where AI-based systems can plug in and work together through it. Automattic’s James LePage describes this approach as one in which WooCommerce fits perfectly in the center.

Model Context Protocol

Model Context Protocol is an open standard that enables platforms like WooCommerce to connect their capabilities to AI systems, making AI-powered features possible.

While MCP sounds like an API, which enables software systems to communicate, the key difference is that an API handles predefined requests, whereas MCP enables platforms like WooCommerce to support a broader range of AI interactions without building custom integrations for each one.
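In practice, an MCP server describes its capabilities as named tools that any MCP-aware AI client can discover and call. The sketch below is a hypothetical example using the open source Python MCP SDK; the store URL, API keys, and tool are placeholder assumptions for illustration, not an official Automattic integration.

```python
# Hypothetical MCP server exposing one WooCommerce capability to AI clients.
# Assumes: pip install mcp requests, plus WooCommerce REST API keys with read access.
import requests
from mcp.server.fastmcp import FastMCP

STORE_URL = "https://example-store.com"   # placeholder store
AUTH = ("ck_xxx", "cs_xxx")               # placeholder consumer key/secret

mcp = FastMCP("woocommerce-demo")

@mcp.tool()
def search_products(query: str, per_page: int = 5) -> list[dict]:
    """Search the store's product catalog and return name, price, and stock status."""
    resp = requests.get(
        f"{STORE_URL}/wp-json/wc/v3/products",
        params={"search": query, "per_page": per_page},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"name": p["name"], "price": p["price"], "stock_status": p["stock_status"]}
        for p in resp.json()
    ]

if __name__ == "__main__":
    mcp.run()  # an MCP-aware client can now discover and call search_products
```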

WooCommerce Sits In The Middle

ACP (Agentic Commerce Protocol), developed by OpenAI and Stripe, enables an AI agent to handle product discovery, checkout, and payments from a chat interface like ChatGPT.

The UCP (Universal Commerce Protocol), an open source solution developed by Shopify and Google, provides a way for checkouts to happen via a buy button across Google’s AI and Search ecosystem as well as Anthropic’s Claude, regardless of whether the transaction happens on a WooCommerce store or any other shopping platform. A developer only has to implement a UCP-compliant MCP server for WooCommerce.

WooCommerce sits in the middle of those protocols, where their integrations come together.

Enablement Strategy For WooCommerce

LePage described a practical perspective for how AI fits into the WooCommerce platform through MCP. He calls this approach enablement.

He explains this approach:

“What’s interesting about that is it follows a strategy that we’re taking at WooCommerce, which is what I refer to as enablement, where WooCommerce is this core software, this core way that you run a digital business online.

And we want to make sure that core software is available and always in the middle of whatever’s happening in AI.

So we want to build AI features for it. We want to make it really easy for others to build AI features for it. But we absolutely want to make sure it will meet you wherever your AI tools are, wherever the best financial analysis AI tool exists, wherever the best general chatbot exists.

So to us, MCP represents a really strong opportunity there.”

Because MCP is flexible to whatever AI platform a user is on, WooCommerce is able to remain in the middle, regardless of which AI system a user subscribes to.

Practical Use Of AI In WooCommerce

LePage brought attention to practical uses of AI right now, where users can connect WooCommerce to ChatGPT Connectors and Claude Code so that multiple apps and AI systems communicate with each other to accomplish various tasks.

He explains:

“What’s also cool is if you use ChatGPT with connectors, if you use Claude Code with their MCP support, there’s a lot of opportunity that you get when you add multiple pieces of software to one session.

So if I take my WooCommerce stuff and I take QuickBooks and I take X, Y, and Z, I can interact with all of them in a conversational manner.

And that’s got me very excited, but it’s also got all the merchants really excited.”

AI Is Developer-Facing Infrastructure

While profound AI implementations are quickly coming together for WooCommerce, LePage indicated that the current work is foundational, providing the building blocks that developers and agencies use to make it all work rather than delivering out-of-the-box merchant features today.

The question asked in the podcast was:

“…is that where we are with WooCommerce and AI at the moment is that you do need really a developer to hook it all up and make it work?”

LePage answered:

“So I’d say yes, if you want a really robust AI implementation that’s built and fits like a glove on your store and does everything that you ever want, the pieces are there.”

He later said that there are plugins that can implement some of those functionalities.

Sidekick-Type Functionality

LePage offered an exciting preview of what’s in store in the near future for WooCommerce when asked if WooCommerce will ship with deep native integration of AI similar to Shopify’s Sidekick AI assistant.

Shopify Sidekick is an AI assistant that can be invoked at various points in the store management workflow, enabling store owners to handle everything from creative tasks, like transforming product images or creating email marketing campaigns, to common store management tasks.

The question asked was:

“One thing I’d love to know is what is planned for Core, possibly WordPress as a whole, certainly WooCommerce, in terms of like an interface built into Core, like how Shopify has Sidekick where wherever you are, you can just type what you want and it will do it for you.”

LePage answered that this kind of AI integration will likely be in the form of an extension, explaining that integrating this kind of functionality within core would be good, but doing it with a plugin would be great. He explained that all the pieces for doing this will be in place within core in version 7, which will be released on April 9, 2026.

He shared that WooCommerce will be an orchestration layer, where WooCommerce sits in the middle, directing and coordinating multiple services, tools, and data sources.

He explained:

“…it will work if we made it a very basic implementation in core, or as even like a very basic plugin, but it will be great when we can plug it into things like WooCommerce Analytics, when we can plug it into much more complex orchestration workflows under the hood to go and do things like really bulk product optimization and catalog stuff and analytics and deep number crunching, all of the fun stuff that we’re actually working on as we speak.

So you will see AI support in terms of this Sidekick-type implementation coming out from Automattic in this extension territory. And that extension also housing additional AI features to make it a much more approachable AI experience to merchants.”

Consumer-Facing AI In WooCommerce Stores

Another area discussed in the podcast was consumer-facing AI implementations that introduce more personalization and chat interfaces for retrieving order information or product selection.

At this point, the podcast turns to agentic AI shopping, which is projected to take hold sometime between the near future and 2030.

But at the end, LePage circles back to affirming WordPress’s role as the orchestration layer intended to support whatever functionality and vision emerge.

LePage shared:

“These building blocks are intended to make WordPress into a platform where a developer can build any AI solution.”

WordPress and WooCommerce are very much in transition toward offering the option of acting as an orchestration layer. While other content management systems are a little further down the road with these kinds of functionalities, WordPress and WooCommerce have a huge developer ecosystem that is already innovating new features that will become more powerful and useful in the very near future.

Watch the Do the Woo podcast with hosts Katie Keith and James Kemp:

AI Meets Woo: the Future of Ecommerce is Already Here

Featured Image/Screenshot Of Do the Woo Podcast

CleanTalk WordPress Plugin Vulnerability Threatens Up To 200K Sites via @sejournal, @martinibuster

An advisory was issued for a critical vulnerability rated 9.8/10 in the CleanTalk Antispam WordPress plugin, installed on over 200,000 websites. The vulnerability enables unauthenticated attackers to install vulnerable plugins that can then be used to launch remote code execution attacks.

CleanTalk Antispam Plugin

The CleanTalk Antispam plugin is subscription-based software as a service that protects websites from inauthentic user actions like spam subscriptions, registrations, and form emails, and includes a firewall for blocking bad bots.

Because it’s a subscription-based plugin, it relies on a valid API key to reach the CleanTalk servers, and this is the part of the plugin where the flaw behind the vulnerability was discovered.

CleanTalk Plugin Vulnerability CVE-2026-1490

The plugin contains a WordPress function that checks if a valid API key is being used to contact the CleanTalk servers. A WordPress function is PHP code that performs a specific task.

In this specific case, if the plugin cannot validate a connection to CleanTalk’s servers because of an invalid API key, it relies on the checkWithoutToken function to verify “trusted” requests.

The problem is that the checkWithoutToken function doesn’t properly verify the identity of the requester. An attacker can misrepresent their identity as coming from the cleantalk.org domain and then launch their attacks. Thus, the vulnerability only affects installations that do not have a valid API key.

The Wordfence advisory describes the vulnerability:

“The Spam protection, Anti-Spam, FireWall by CleanTalk plugin for WordPress is vulnerable to unauthorized Arbitrary Plugin Installation due to an authorization bypass via reverse DNS (PTR record) spoofing on the ‘checkWithoutToken’ function…”
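The weakness Wordfence describes is a general one: the PTR record for an IP address is controlled by whoever controls that IP’s reverse DNS zone, so a check based only on the PTR hostname can be satisfied by an attacker. The Python sketch below is a generic illustration of the difference, not CleanTalk’s code: a PTR-only check trusts the claimed hostname, while a forward-confirmed check also resolves that hostname back to an IP and requires it to match the connecting address.

```python
# Generic illustration: PTR-only vs. forward-confirmed reverse DNS checks.
# Not CleanTalk's code; it shows why trusting the PTR record alone is spoofable.
import socket

def _matches(hostname: str, suffix: str) -> bool:
    return hostname == suffix or hostname.endswith("." + suffix)

def ptr_only_check(ip: str, trusted: str = "cleantalk.org") -> bool:
    """Weak: the PTR record is set by whoever controls the IP's reverse zone."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    return _matches(hostname, trusted)

def forward_confirmed_check(ip: str, trusted: str = "cleantalk.org") -> bool:
    """Stronger: the claimed hostname must also resolve back to the connecting IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        if not _matches(hostname, trusted):
            return False
        _, _, forward_ips = socket.gethostbyname_ex(hostname)
    except OSError:
        return False
    return ip in forward_ips
```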

Recommended Action

The vulnerability affects CleanTalk plugin versions up to and including 6.71. Wordfence recommends users update their installations to the latest version, which is 6.72 at the time of writing.

Antitrust Filing Says Google Cannibalizes Publisher Traffic via @sejournal, @martinibuster

Penske Media Corporation (PMC) filed a federal court memorandum opposing Google’s motion to dismiss its antitrust lawsuit. The company argues that Google has broken the longstanding premise of a web ecosystem in which publishers allowed their content to be crawled in exchange for receiving search traffic in return.

PMC is the publisher of twenty brands, including Deadline, The Hollywood Reporter, and Rolling Stone.

Web Ecosystem

The PMC legal filing makes repeated references to the “fundamental fair exchange” in which publishers allow Google to crawl and index their websites in exchange for search traffic, explicitly quoting Google’s expressions of support for “the health of the web ecosystem.”

And yet there are some industry outsiders on social media who deny that there is any understanding between Google and web publishers, a concept that even Google doesn’t deny.

This concept dates to pretty much the beginning of Google and is commonly understood by all web workers. It’s embedded in Google’s Philosophy, expressed at least as far back as 2004:

“Google may be the only company in the world whose stated goal is to have users leave its website as quickly as possible.”

In May 2025 Google published a blog post where they affirmed that sending users to websites remained their core goal:

“…our core goal remains the same: to help people find outstanding, original content that adds unique value.”

What’s relevant about that passage is that it’s framed within the context of encouraging publishers to create high quality content and in exchange they will be considered for referral traffic.

The concept of a web ecosystem where both sides benefit was discussed by Google CEO Sundar Pichai in a June 2025 podcast interview by Lex Fridman where Pichai said that sending people to the human created web in AI Mode was “going to be a core design principle for us.”

In response to a follow-up question referring to journalists who are nervous about web referrals, Sundar Pichai explicitly mentioned the ecosystem and Google’s commitment to it.

Pichai responded:

“I think news and journalism will play an important role, you know, in the future we’re pretty committed to it, right? And so I think making sure that ecosystem… In fact, I think we’ll be able to differentiate ourselves as a company over time because of our commitment there. So it’s something I think you know I definitely value a lot and as we are designing we’ll continue prioritizing approaches.”

This “fundamental fair exchange” serves as the baseline competitive condition for PMC’s claims of coercive reciprocal dealing and unlawful monopoly maintenance.

That baseline helps PMC argue:

  • That Google changed the understood terms of participation in search in a way publishers cannot refuse.
  • And that Google used its dominance in search to impose those new terms.

And despite Google’s own CEO saying that sending people to websites is a core design principle, and despite multiple instances, past and present, in which Google’s own documentation refers to this reciprocity between publishers and Google, Google’s legal response expressly denies that it exists.

The PMC document states:

“Google …argues that no reciprocity agreement exists because it has not “promised to deliver” any search referral traffic.”

Profound Consequences Of Google AI Search

PMC filed a federal court memorandum in February 2026 opposing Google’s motion to dismiss its antitrust complaint. The complaint details Google’s use of its search monopoly to “coerce” publishers into providing content for AI training and AI Overviews without compensation.

The suit argues that Google has pivoted from being a search engine (that sends traffic to websites) to an answer engine that removes the incentive for users to click to visit a website. The lawsuit claims that this change harms the economic viability of digital publishers.

The filing explains the consequences of this change:

“Google has shattered the longstanding bargain that allows the open internet to exist. The consequences for online publishers—to say nothing of the public at large—are profound.”

Google Is Using Their Market Power

The filing claims that the collapse of the traditional search ecosystem positions Google’s AI search system as coercive rather than innovative, arguing that publishers must either allow AI to reuse their content or risk losing search visibility.

The legal filing alleges that Google’s generative AI competes directly with online publishers for users’ attention, describing Google as cannibalizing publishers’ traffic and specifically alleging that Google is using its “market power” to maintain a situation in which publishers can’t block the AI without also negatively affecting what little search traffic is left.

The memorandum portrays a bleak choice offered by Google:

“Google’s search monopoly leaves publishers with no choice: acquiesce—even as Google cannibalizes the traffic publishers rely on—or perish.”

It also describes the role AI grounding plays in cannibalizing publisher traffic for Google’s sole benefit:

“Through RAG, or “grounding,” Google uses, repackages, and republishes publisher content for display on Google’s SERP, cannibalizing the traffic on which PMC depends.”

Expansion Of Zero-Click Search Results And Traffic Loss

The filing claims AI answers divert users away from publisher sites and diminish monetizable audience visits. Multiple parts of the filing directly confront Google with the reduced search traffic that results from the cannibalization of publisher content.

The filing alleges:

“Google reduces click‑throughs to publisher sites, increases zero‑click behavior, and diverts traffic that publishers need to support their advertising, affiliate, and subscription revenue.

…Google’s insinuation . . . that AI Overview is not getting in the way of the ten blue links and the traffic going back to creators and publishers is just 100% false . . . . [Users] are reading the overview and stopping there . . . . We see it.”

…The purpose is not to facilitate click-throughs but to have users consume PMC’s content, repackaged by Google, directly on the SERP.”

Zero-click searches are described as a component of a multi-part process in which publishers are injured by Google’s conduct. The filing accuses Google of using publisher content for training, grounding its AI answers, and then republishing that content within a zero-click AI search environment that reduces or eliminates clicks back to PMC’s websites.

Should Google Send More Referral Traffic?

Everything described in the PMC filing reflects the traffic losses that virtually all online businesses have been complaining about as a result of Google’s AI search surfaces. It’s the reason why Lex Fridman specifically challenged Google’s CEO on the amount of traffic Google is sending to websites.

Google AI Shows A Site Is Offline Due To JS Content Delivery via @sejournal, @martinibuster

Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a note in the SERPs saying that the website had been down since early 2026.

The Redditor didn’t write up the issue on Reddit; they just linked to their blog post, which blamed Google and AI. This enabled Mueller to go straight to the site, identify the cause as a JavaScript implementation issue, and set them straight that it wasn’t Google’s fault.

Redditor Blames Google’s AI

The blog post by the Redditor blames Google, headlining the article with a computer science buzzword salad that over-complicates and (unknowingly) misstates the actual problem.

The article title is:

“Google Might Think Your Website Is Down
How Cross-page AI aggregation can introduce new liability vectors.”

That part about “cross-page AI aggregation” and “liability vectors” is eyebrow-raising because neither is an established term of art in computer science.

The “cross-page” thing is likely a reference to Google’s Query Fan-Out, where a question on Google’s AI Mode is turned into multiple queries that are then sent to Google’s Classic Search.

Regarding “liability vectors,” a vector is a real concept that’s discussed in SEO and is part of natural language processing (NLP). But “liability vector” is not part of it.

The Redditor’s blog post admits that they don’t know if Google is able to detect if a site is down or not:

“I’m not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn’t be able to detect that since it’s behind a login wall.”

And they appear unaware of how RAG or Query Fan-Out works, or perhaps of how Google’s AI systems work. The author seems to regard it as a discovery that Google is referencing fresh information instead of Parametric Knowledge (information in the LLM that was gained from training).

They write that Google’s AI answer says the website indicated the site had been offline since early 2026:

“…the phrasing says the website indicated rather than people indicated; though in the age of LLMs uncertainty, that distinction might not mean much anymore.

…it clearly mentions the timeframe as early 2026. Since the website didn’t exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs!”

A little later in the blog post the Redditor admits that they don’t know why Google is saying that the website is offline.

They explained that they implemented a shot-in-the-dark solution by removing a pop-up, incorrectly guessing that the pop-up was causing the issue. This highlights the importance of being certain about what’s causing an issue before making changes in the hope that they will fix it.

The Redditor shared that they didn’t know how Google summarizes information about a site in response to a query about it, and expressed concern that Google could scrape irrelevant information and then show it as an answer.

They write:

“…we don’t know how exactly Google assembles the mix of pages it uses to generate LLM responses.

This is problematic because anything on your web pages might now influence unrelated answers.

…Google’s AI might grab any of this and present it as the answer.”

I don’t fault the author for not knowing how Google AI search works; I’m fairly certain it’s not widely known. It’s easy to get the impression that it’s an AI answering questions.

But what’s basically going on is that AI search is based on Classic Search, with AI synthesizing the content it finds online into a natural language answer. It’s like asking someone a question: they Google it, then explain the answer based on what they learned from reading the web pages.

Google’s John Mueller Explains What’s Going On

Mueller responded to the person’s Reddit post in a neutral and polite manner, showing why the fault lies in the Redditor’s implementation.

Mueller explained:

“Is that your site? I’d recommend not using JS to change text on your page from “not available” to “available” and instead to just load that whole chunk from JS. That way, if a client doesn’t run your JS, it won’t get misleading information.

This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my fine work of html markup for inclusion” (there is no “index” robots meta tag, so you can be creative).”

Mueller’s response explains that the site serves placeholder text in its initial HTML and relies on JavaScript to replace it, which only works for visitors whose browsers actually run that script.

What happened here is that Google indexed the placeholder text as the page’s content. Google saw the originally served HTML with the “not available” message and treated it as the content.

Mueller explained that the safer approach is to have the correct information present in the page’s base HTML from the start, so that both users and search engines receive the same content.
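A quick way to check whether a site has this problem is to look at the raw HTML the server returns before any JavaScript runs, since that is what a client that doesn’t execute the script will see. The snippet below is a minimal, generic check; the URL and placeholder string are examples, not the Redditor’s site.

```python
# Minimal check: does the server-delivered HTML (no JavaScript executed) still
# contain placeholder text that a crawler could mistake for the real content?
import requests

def placeholder_in_initial_html(url: str, placeholder: str = "not available") -> bool:
    html = requests.get(url, timeout=10).text  # raw HTML as served, no JS run
    return placeholder.lower() in html.lower()

if __name__ == "__main__":
    url = "https://example.com/status"  # placeholder URL
    if placeholder_in_initial_html(url):
        print("The placeholder is in the served HTML; clients that skip JS may index it.")
    else:
        print("The served HTML reads correctly without JavaScript.")
```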

Takeaways

There are multiple takeaways here that go beyond the technical issue underlying the Redditor’s problem. Top of the list is how they tried to guess their way to an answer.

They really didn’t know how Google AI search works, which introduced a series of assumptions that complicated their ability to diagnose the issue. Then they implemented a “fix” based on guessing what they thought was probably causing the issue.

Guessing is an approach to SEO problems that’s often justified by Google being opaque, but sometimes it’s not about Google; it’s about a knowledge gap in SEO itself and a signal that further testing and diagnosis are necessary.

Featured Image by Shutterstock/Kues

Google’s Search Relations Team Debates If You Still Need A Website via @sejournal, @MattGSouthern

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a one-size-fits-all answer.

The conversation stayed focused on trade-offs between owning a website and relying on platforms such as social networks or app stores.

In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question and repeatedly landed on the same conclusion: it depends.

What Was Said

Illyes and Splitt acknowledged that websites still offer distinct advantages, including data sovereignty, control over monetization, the ability to host services such as calculators or tools, and freedom from platform content moderation.

Both Googlers also emphasized situations where a website may not be necessary.

Illyes referenced a Google user study conducted in Indonesia around 2015-2016 where businesses ran entirely on social networks with no websites. He described their results as having “incredible sales, incredible user journeys and retention.”

Illyes also described mobile games that, in his telling, became multi-million-dollar and in some cases “billion-dollar” businesses without a meaningful website beyond legal pages.

Illyes offered a personal example:

“I know that I have a few community groups in WhatsApp for instance because that’s where the people I want to reach are and I can reach them reliably through there. I could set up a website but I never even considered because why? To do what?”

Splitt addressed trust and presentation, saying:

“I’d rather have a nicely curated social media presence that exudes trustworthiness than a website that is not well done.”

When pressed for a definitive answer, Illyes offered the closest thing to a position, saying that if you want to make information or services available to as many people as possible, a website is probably still the way to go in 2026. But he framed it as a personal opinion, not a recommendation.

Why This Matters

Google Search is built around crawling and indexing web content, but the hosts still frame “needing a website” as a business decision that depends on your goals and audience.

Neither made a case that websites are essential for every business in 2026. Neither argued that the open web offers something irreplaceable. The strongest endorsement was that websites provide a low barrier of entry for sharing information and that the web “isn’t dead.”

This is consistent with the fragmented discovery landscape that SEJ has been covering, where user journeys now span AI chatbots, social feeds, and community platforms alongside traditional search.

Looking Ahead

The Search Off the Record podcast has historically offered behind-the-scenes perspectives from the Search Relations team that sometimes run ahead of official positions.

This episode didn’t introduce new policy or guidance. But the Search Relations team’s willingness to validate social-only business models and app-only distribution reflects how the role of websites is changing in a multi-platform discovery environment.

The question is worth sitting with. If the Search Relations team frames website ownership as situational rather than essential, the value proposition rests on the specific use case, not on the assumption that every business needs one.


Featured Image: Diki Prayogo/Shutterstock