WooCommerce May Gain Sidekick-Type AI Through Extensions via @sejournal, @martinibuster

WooCommerce is approaching a turning point in 2026 thanks to the Model Context Protocol and the convergence of open source technologies that enable it to function as a layer any AI system can plug into, helping store owners and consumers accomplish more with less friction. Automattic’s Director of Engineering, AI, James LePage, discussed what’s possible right now, what’s coming in the near future, and why the current limitations are temporary.

WooCommerce

Because WooCommerce is built on WordPress and is highly extensible through plugins, APIs, and now MCP, it is rapidly evolving into a coordination layer that AI-based systems can plug into and work through. Automattic’s James LePage describes this as an approach in which WooCommerce fits squarely in the center.

Model Context Protocol

Model Context Protocol is an open standard that enables platforms like WooCommerce to connect their capabilities to AI systems, making AI-powered features possible.

While MCP sounds like an API, which also enables software systems to communicate, the key difference is that an API handles predefined requests, whereas MCP lets a platform like WooCommerce describe its capabilities so that AI systems can discover and use them without a custom integration being built for each one.
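To make the distinction concrete, here is a minimal sketch of what exposing a single WooCommerce capability through MCP might look like, written with the MCP TypeScript SDK. The tool name, input schema, and store logic are hypothetical, and the SDK’s exact API surface can vary between versions; this illustrates the discovery model, not Automattic’s actual implementation.

```typescript
// Hypothetical sketch: one WooCommerce capability exposed as an MCP tool.
// Unlike a fixed API endpoint, a connecting AI client can ask the server
// what tools exist (tools/list) and decide on its own when to call them.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "woocommerce-demo", version: "0.1.0" });

// The name and input schema are advertised to any AI system that connects,
// so no per-assistant custom integration is required.
server.tool(
  "list_recent_orders", // hypothetical tool name
  { status: z.enum(["processing", "completed", "refunded"]) },
  async ({ status }) => {
    // A real server would call the WooCommerce REST API here.
    return {
      content: [{ type: "text", text: `Orders with status "${status}" would be listed here.` }],
    };
  }
);

await server.connect(new StdioServerTransport()); // serve the tool over stdio
```

Any MCP-aware client, regardless of vendor, can discover and call a tool defined this way, which is the property that lets WooCommerce meet users wherever their AI tools are.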

WooCommerce Sits In The Middle

ACP (Agentic Commerce Protocol), developed by OpenAI and Stripe, enables an AI agent to handle product discovery, checkout, and payments from a chat interface like ChatGPT.

The UCP (Universal Commerce Protocol), an open source solution developed by Shopify and Google, provides a way for checkouts to happen through a buy button across Google’s AI and Search ecosystem as well as Anthropic’s Claude, regardless of whether the transaction happens on a WooCommerce store or any other shopping platform. A developer only has to implement a UCP-compliant MCP server for WooCommerce.

WooCommerce sits in the middle of those protocols, where their integrations come together.

Enablement Strategy For WooCommerce

LePage described a practical perspective for how AI fits into the WooCommerce platform through MCP. He calls this approach enablement.

He explains this approach:

“What’s interesting about that is it follows a strategy that we’re taking at WooCommerce, which is what I refer to as enablement, where WooCommerce is this core software, this core way that you run a digital business online.

And we want to make sure that core software is available and always in the middle of whatever’s happening in AI.

So we want to build AI features for it. We want to make it really easy for others to build AI features for it. But we absolutely want to make sure it will meet you wherever your AI tools are, wherever the best financial analysis AI tool exists, wherever the best general chatbot exists.

So to us, MCP represents a really strong opportunity there.”

Because MCP works with whatever AI platform a user is on, WooCommerce can remain in the middle regardless of which AI system a user subscribes to.

Practical Use Of AI In WooCommerce

LePage highlighted practical uses of AI available right now: users can connect WooCommerce to ChatGPT Connectors and Claude Code so that multiple apps and AI systems communicate with each other to accomplish various tasks.

He explains:

“What’s also cool is if you use ChatGPT with connectors, if you use Claude Code with their MCP support, there’s a lot of opportunity that you get when you add multiple pieces of software to one session.

So if I take my WooCommerce stuff and I take QuickBooks and I take X, Y, and Z, I can interact with all of them in a conversational manner.

And that’s got me very excited, but it’s also got all the merchants really excited.”

AI Is Developer-Facing Infrastructure

While far-reaching AI implementations are quickly coming together for WooCommerce, LePage indicated that the current work is foundational: it provides the building blocks that developers and agencies use to make it all work, rather than delivering out-of-the-box merchant features today.

The question asked in the podcast was:

“…is that where we are with WooCommerce and AI at the moment is that you do need really a developer to hook it all up and make it work?”

LePage answered:

“So I’d say yes, if you want a really robust AI implementation that’s built and fits like a glove on your store and does everything that you ever want, the pieces are there.”

He later said that there are plugins that can implement some of those functionalities.

Sidekick-Type Functionality

LePage offered an exciting preview of what’s in store in the near future for WooCommerce when asked if WooCommerce will ship with deep native integration of AI similar to Shopify’s Sidekick AI assistant.

Shopify Sidekick is an AI assistant that can be invoked at various points in the store management workflow, enabling store owners to perform tasks ranging from creative work, like transforming product images or creating email marketing campaigns, to common store management tasks.

The question asked was:

“One thing I’d love to know is what is planned for Core, possibly WordPress as a whole, certainly WooCommerce, in terms of like an interface built into Core, like how Shopify has Sidekick where wherever you are, you can just type what you want and it will do it for you.”

LePage answered that this kind of AI integration will likely be in the form of an extension, explaining that integrating this kind of functionality within core would be good, but doing it with a plugin would be great. He explained that all the pieces for doing this will be in place within core in version 7, which will be released on April 9, 2026.

He shared that WooCommerce will be an orchestration layer, where WooCommerce sits in the middle, directing and coordinating multiple services, tools, and data sources.

He explained:

“…it will work if we made it a very basic implementation in core, or as even like a very basic plugin, but it will be great when we can plug it into things like WooCommerce Analytics, when we can plug it into much more complex orchestration workflows under the hood to go and do things like really bulk product optimization and catalog stuff and analytics and deep number crunching, all of the fun stuff that we’re actually working on as we speak.

So you will see AI support in terms of this Sidekick-type implementation coming out from Automattic in this extension territory. And that extension also housing additional AI features to make it a much more approachable AI experience to merchants.”

Consumer-Facing AI In WooCommerce Stores

Another area discussed in the podcast was consumer-facing AI implementations that introduce more personalization and chat interfaces for retrieving order information or product selection.

At this point, the podcast jumps into agentic AI shopping, which is projected to take hold sometime between the near future and 2030.

But at the end, LePage circles back to affirming WordPress’s role as the orchestration layer intended to support whatever functionality and vision emerge.

LePage shared:

“These building blocks are intended to make WordPress into a platform where a developer can build any AI solution.”

WordPress and WooCommerce are very much in transition toward offering the option of functioning as an orchestration layer. While other content management systems are a little further down the road with these kinds of features, WordPress and WooCommerce have a huge developer ecosystem that is already building new features that will become more powerful and useful in the very near future.

Watch the Do the Woo podcast with hosts Katie Keith and James Kemp:

AI Meets Woo: the Future of Ecommerce is Already Here

Featured Image/Screenshot Of Do the Woo Podcast

CleanTalk WordPress Plugin Vulnerability Threatens Up To 200K Sites via @sejournal, @martinibuster

An advisory was issued for a critical vulnerability, rated 9.8/10, in the CleanTalk Antispam WordPress plugin, which is installed on more than 200,000 websites. The vulnerability enables unauthenticated attackers to install vulnerable plugins that can then be used to launch remote code execution attacks.

CleanTalk Antispam Plugin

The CleanTalk Antispam plugin is subscription-based software as a service that protects websites from inauthentic user actions like spam subscriptions, registrations, and form emails, and includes a firewall for blocking bad bots.

Because it’s a subscription-based plugin, it relies on a valid API key to reach the CleanTalk servers, and this is the part of the plugin where the flaw behind the vulnerability was discovered.

CleanTalk Plugin Vulnerability CVE-2026-1490

The plugin contains a WordPress function that checks if a valid API key is being used to contact the CleanTalk servers. A WordPress function is PHP code that performs a specific task.

In this specific case, if the plugin cannot validate a connection to CleanTalk’s servers because of an invalid API key, it relies on the checkWithoutToken function to verify “trusted” requests.

The problem is that the checkWithoutToken function doesn’t properly verify the identity of the requester. An attacker can misrepresent their request as coming from the cleantalk.org domain and then launch their attacks. Thus, the vulnerability only affects installations that do not have a valid API key.

The Wordfence advisory describes the vulnerability:

“The Spam protection, Anti-Spam, FireWall by CleanTalk plugin for WordPress is vulnerable to unauthorized Arbitrary Plugin Installation due to an authorization bypass via reverse DNS (PTR record) spoofing on the ‘checkWithoutToken’ function…”
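To illustrate the class of flaw the advisory describes, here is a minimal TypeScript sketch of why trusting a reverse DNS (PTR) lookup alone is spoofable, along with the usual fix, forward-confirmed reverse DNS. This is a generic illustration of the vulnerability class, not CleanTalk’s actual code, and the domain check is simplified.

```typescript
// Generic illustration of PTR-record spoofing, not CleanTalk's actual code.
import { promises as dns } from "node:dns";

// Flawed: the owner of an IP block controls its PTR records, so an attacker
// can point the PTR for their own IP at "cleantalk.org" and pass this check.
async function isTrustedPtrOnly(ip: string): Promise<boolean> {
  const hostnames = await dns.reverse(ip); // PTR lookup, attacker-controlled
  return hostnames.some((h) => h === "cleantalk.org" || h.endsWith(".cleantalk.org"));
}

// Safer: forward-confirmed reverse DNS (FCrDNS). Resolve the PTR hostname
// forward again and confirm it maps back to the requesting IP, which an
// attacker cannot fake without controlling cleantalk.org's own DNS zone.
async function isTrustedFcrdns(ip: string): Promise<boolean> {
  const hostnames = await dns.reverse(ip);
  for (const host of hostnames) {
    if (host !== "cleantalk.org" && !host.endsWith(".cleantalk.org")) continue;
    const forward = await dns.resolve4(host); // authoritative forward lookup
    if (forward.includes(ip)) return true;
  }
  return false;
}
```

Even forward-confirmed reverse DNS is weaker than authenticating with a shared secret, which is why the flaw only matters on installations that lack a valid API key.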

Recommended Action

The vulnerability affects CleanTalk plugin versions up to and including 6.71. Wordfence recommends users update their installations to the latest version, which at the time of writing is 6.72.

Antitrust Filing Says Google Cannibalizes Publisher Traffic via @sejournal, @martinibuster

Penske Media Corporation (PMC) filed a federal court memorandum opposing Google’s motion to dismiss its antitrust lawsuit. The company argues that Google has broken the longstanding premise of a web ecosystem in which publishers allowed their content to be crawled in exchange for receiving search traffic in return.

PMC is the publisher of twenty brands like Deadline, The Hollywood Reporter, and Rolling Stone.

Web Ecosystem

The PMC legal filing makes repeated references to the “fundamental fair exchange” where Google sends traffic in exchange for allowing them to crawl and index websites, explicitly quoting Google’s expressions of support for “the health of the web ecosystem.”

And yet some industry outsiders on social media deny that there is any understanding between Google and web publishers, even though Google itself doesn’t deny it.

This concept dates pretty much to the beginning of Google and is commonly understood across the web industry. It’s embedded in Google’s philosophy, expressed at least as far back as 2004:

“Google may be the only company in the world whose stated goal is to have users leave its website as quickly as possible.”

In May 2025 Google published a blog post where they affirmed that sending users to websites remained their core goal:

“…our core goal remains the same: to help people find outstanding, original content that adds unique value.”

What’s relevant about that passage is that it’s framed within the context of encouraging publishers to create high quality content and in exchange they will be considered for referral traffic.

The concept of a web ecosystem where both sides benefit was discussed by Google CEO Sundar Pichai in a June 2025 podcast interview with Lex Fridman, in which Pichai said that sending people to the human-created web in AI Mode was “going to be a core design principle for us.”

In response to a follow-up question referring to journalists who are nervous about web referrals, Sundar Pichai explicitly mentioned the ecosystem and Google’s commitment to it.

Pichai responded:

“I think news and journalism will play an important role, you know, in the future we’re pretty committed to it, right? And so I think making sure that ecosystem… In fact, I think we’ll be able to differentiate ourselves as a company over time because of our commitment there. So it’s something I think you know I definitely value a lot and as we are designing we’ll continue prioritizing approaches.”

This “fundamental fair exchange” serves as the baseline competitive condition for PMC’s claims of coercive reciprocal dealing and unlawful monopoly maintenance.

That baseline helps PMC argue:

  • That Google changed the understood terms of participation in search in a way publishers cannot refuse.
  • And that Google used its dominance in search to impose those new terms.

And despite the fact that Google’s own CEO said that sending people to websites is a core design principle, and that Google’s own documentation, past and present, repeatedly refers to this reciprocity between publishers and Google, Google’s legal response expressly denies that it exists.

The PMC document states:

“Google …argues that no reciprocity agreement exists because it has not “promised to deliver” any search referral traffic.”

Profound Consequences Of Google AI Search

PMC filed a federal court memorandum in February 2026 opposing Google’s motion to dismiss its antitrust complaint. The complaint details Google’s use of its search monopoly to “coerce” publishers into providing content for AI training and AI Overviews without compensation.

The suit argues that Google has pivoted from being a search engine (that sends traffic to websites) to an answer engine that removes the incentive for users to click to visit a website. The lawsuit claims that this change harms the economic viability of digital publishers.

The filing explains the consequences of this change:

“Google has shattered the longstanding bargain that allows the open internet to exist. The consequences for online publishers—to say nothing of the public at large—are profound.”

Google Is Using Their Market Power

The filing claims that the collapse of the traditional search ecosystem positions Google’s AI search system as coercive rather than innovative, arguing that publishers must either allow AI to reuse their content or risk losing search visibility.

The legal filing alleges that Google’s generative AI competes directly with online publishers for users’ attention, describing Google as cannibalizing publishers’ traffic, and specifically alleging that Google is using its “market power” to maintain a situation in which publishers can’t block the AI without also negatively affecting what little search traffic is left.

The memorandum portrays a bleak choice offered by Google:

“Google’s search monopoly leaves publishers with no choice: acquiesce—even as Google cannibalizes the traffic publishers rely on—or perish.”

It also describes the role AI grounding plays in cannibalizing publisher traffic for Google’s sole benefit:

“Through RAG, or “grounding,” Google uses, repackages, and republishes publisher content for display on Google’s SERP, cannibalizing the traffic on which PMC depends.”

Expansion Of Zero-Click Search Results And Traffic Loss

The filing claims AI answers divert users away from publisher sites and diminish monetizable audience visits. Multiple parts of the filing directly confront Google with the reduced search traffic that results from the cannibalization of publisher content.

The filing alleges:

“Google reduces click‑throughs to publisher sites, increases zero‑click behavior, and diverts traffic that publishers need to support their advertising, affiliate, and subscription revenue.

…Google’s insinuation . . . that AI Overview is not getting in the way of the ten blue links and the traffic going back to creators and publishers is just 100% false . . . . [Users] are reading the overview and stopping there . . . . We see it.”

…The purpose is not to facilitate click-throughs but to have users consume PMC’s content, repackaged by Google, directly on the SERP.”

Zero-click searches are described as one component of a multi-part process in which publishers are injured by Google’s conduct. The filing accuses Google of using publisher content for training, grounding its AI on that content, and then republishing it within a zero-click AI search environment that either reduces or eliminates clicks back to PMC’s websites.

Should Google Send More Referral Traffic?

Everything described in the PMC filing mirrors the traffic losses that virtually all online businesses have been complaining about as a result of Google’s AI search surfaces. It’s the reason Lex Fridman specifically challenged Google’s CEO on the amount of traffic Google is sending to websites.

Google AI Shows A Site Is Offline Due To JS Content Delivery via @sejournal, @martinibuster

Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a note in the SERPs saying that their website had been down since early 2026.

The Redditor didn’t write a post on Reddit; they just linked to their blog post blaming Google and AI. This enabled Mueller to go straight to the site, identify the cause as a JavaScript implementation issue, and set them straight that it wasn’t Google’s fault.

Redditor Blames Google’s AI

The blog post by the Redditor blames Google, headlining the article with a computer science buzzword salad that over-complicates and (unknowingly) misstates the actual problem.

The article title is:

“Google Might Think Your Website Is Down
How Cross-page AI aggregation can introduce new liability vectors.”

That part about “cross-page AI aggregation” and “liability vectors” is eyebrow-raising because neither is an established term of art in computer science.

The “cross-page” thing is likely a reference to Google’s Query Fan-Out, where a question on Google’s AI Mode is turned into multiple queries that are then sent to Google’s Classic Search.

Regarding “liability vectors,” vectors are a real concept that’s discussed in SEO and is part of natural language processing (NLP), but “liability vector” is not an established term within it.

The Redditor’s blog post admits that they don’t know if Google is able to detect if a site is down or not:

“I’m not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn’t be able to detect that since it’s behind a login wall.”

They also appear to be unaware of how RAG or query fan-out works, or perhaps of how Google’s AI systems work in general. The author seems to regard it as a discovery that Google references fresh information instead of parametric knowledge (information in the LLM that was gained from training).

They write that Google’s AI answer says the website indicated the site had been offline since early 2026:

“…the phrasing says the website indicated rather than people indicated; though in the age of LLMs uncertainty, that distinction might not mean much anymore.

…it clearly mentions the timeframe as early 2026. Since the website didn’t exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs!”

A little later in the blog post the Redditor admits that they don’t know why Google is saying that the website is offline.

They explained that they implemented a shot-in-the-dark solution by removing a pop-up, incorrectly guessing that the pop-up was causing the issue. This highlights the importance of being certain of what’s causing a problem before making changes in the hope of fixing it.

The Redditor shared that they didn’t know how Google summarizes information about a site in response to a query, and expressed concern that Google might scrape irrelevant information and present it as an answer.

They write:

“…we don’t know how exactly Google assembles the mix of pages it uses to generate LLM responses.

This is problematic because anything on your web pages might now influence unrelated answers.

…Google’s AI might grab any of this and present it as the answer.”

I don’t fault the author for not knowing how Google AI search works; I’m fairly certain it’s not widely known. It’s easy to get the impression that it’s an AI answering questions.

But what’s basically going on is that AI search is based on Classic Search, with AI synthesizing the content it finds online into a natural language answer. It’s like asking someone a question, they Google it, then they explain the answer from what they learned from reading the website pages.

Google’s John Mueller Explains What’s Going On

Mueller responded to the person’s Reddit post in a neutral and polite manner, showing why the fault lies in the Redditor’s implementation.

Mueller explained:

“Is that your site? I’d recommend not using JS to change text on your page from “not available” to “available” and instead to just load that whole chunk from JS. That way, if a client doesn’t run your JS, it won’t get misleading information.

This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my fine work of html markup for inclusion” (there is no “index” robots meta tag, so you can be creative).”

Mueller’s response explains that the site is relying on JavaScript to replace placeholder text that is served briefly before the page loads, which only works for visitors whose browsers actually run that script.

What happened here is that Google indexed the placeholder text: it saw the originally served content with the “not available” message and treated that as the page’s content.

Mueller explained that the safer approach is to have the correct information present in the page’s base HTML from the start, so that both users and search engines receive the same content.
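A minimal sketch of the pattern in question follows, assuming a page that serves placeholder text and swaps it with JavaScript. This illustrates Mueller’s point; it is not the Redditor’s actual code, and the element ID and messages are made up.

```typescript
// The anti-pattern: the served HTML carries misleading placeholder text
// that client-side script later overwrites. Any client that doesn't run
// the script (including some crawlers) indexes the placeholder:
//
//   <div id="status">This site is not available.</div>
//
// Mueller's suggestion: if the chunk depends on JS anyway, serve it empty
// and render all of it from JS, so a client that skips the script sees
// nothing misleading:
//
//   <div id="status"></div>

document.addEventListener("DOMContentLoaded", () => {
  const statusEl = document.getElementById("status");
  if (statusEl) {
    statusEl.textContent = "This site is available."; // fills the empty chunk
  }
});
```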

Takeaways

There are multiple takeaways here that go beyond the technical issue underlying the Redditor’s problem. Top of the list is how they tried to guess their way to an answer.

They really didn’t know how Google AI search works, which introduced a series of assumptions that complicated their ability to diagnose the issue. Then they implemented a “fix” based on guessing what they thought was probably causing the issue.

Guessing at SEO problems is often justified by pointing to Google’s opacity, but sometimes it’s not about Google; it’s about a knowledge gap in SEO itself, and a signal that further testing and diagnosis are necessary.

Featured Image by Shutterstock/Kues

Google’s Search Relations Team Debates If You Still Need A Website via @sejournal, @MattGSouthern

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a one-size-fits-all answer.

The conversation stayed focused on trade-offs between owning a website and relying on platforms such as social networks or app stores.

In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question and repeatedly landed on the same conclusion: it depends.

What Was Said

Illyes and Splitt acknowledged that websites still offer distinct advantages, including data sovereignty, control over monetization, the ability to host services such as calculators or tools, and freedom from platform content moderation.

Both Googlers also emphasized situations where a website may not be necessary.

Illyes referenced a Google user study conducted in Indonesia around 2015-2016 where businesses ran entirely on social networks with no websites. He described their results as having “incredible sales, incredible user journeys and retention.”

Illyes also described mobile games that, in his telling, became multi-million-dollar and in some cases “billion-dollar” businesses without a meaningful website beyond legal pages.

Illyes offered a personal example:

“I know that I have a few community groups in WhatsApp for instance because that’s where the people I want to reach are and I can reach them reliably through there. I could set up a website but I never even considered because why? To do what?”

Splitt addressed trust and presentation, saying:

“I’d rather have a nicely curated social media presence that exudes trustworthiness than a website that is not well done.”

When pressed for a definitive answer, Illyes offered the closest thing to a position, saying that if you want to make information or services available to as many people as possible, a website is probably still the way to go in 2026. But he framed it as a personal opinion, not a recommendation.

Why This Matters

Google Search is built around crawling and indexing web content, but the hosts still frame “needing a website” as a business decision that depends on your goals and audience.

Neither made a case that websites are essential for every business in 2026. Neither argued that the open web offers something irreplaceable. The strongest endorsement was that websites provide a low barrier to entry for sharing information and that the web “isn’t dead.”

This is consistent with the fragmented discovery landscape that SEJ has been covering, where user journeys now span AI chatbots, social feeds, and community platforms alongside traditional search.

Looking Ahead

The Search Off the Record podcast has historically offered behind-the-scenes perspectives from the Search Relations team that sometimes run ahead of official positions.

This episode didn’t introduce new policy or guidance. But the Search Relations team’s willingness to validate social-only business models and app-only distribution reflects how the role of websites is changing in a multi-platform discovery environment.

The question is worth sitting with. If the Search Relations team frames website ownership as situational rather than essential, the value proposition rests on the specific use case, not on the assumption that every business needs one.


Featured Image: Diki Prayogo/Shutterstock

Cloudflare’s New Markdown for AI Bots: What You Need To Know via @sejournal, @MattGSouthern

Cloudflare launched a feature that converts HTML pages to markdown when AI systems request it. Sites on its network can now serve lighter content to bots without building separate pages.

The feature, called Markdown for Agents, works through HTTP content negotiation. An AI crawler sends a request with Accept: text/markdown in the header. Cloudflare intercepts it, fetches the original HTML from the origin server, converts it to markdown, and delivers the result.

The launch arrives days after Google’s John Mueller called the idea of serving markdown to AI bots “a stupid idea” and questioned whether bots can even parse markdown links properly.

What’s New

Cloudflare described the feature as treating AI agents as “first-class citizens” alongside human visitors. The company used its own blog post as an example. The HTML version consumed 16,180 tokens while the markdown conversion used 3,150 tokens.

“Feeding raw HTML to an AI is like paying by the word to read packaging instead of the letter inside,” the company wrote.

The conversion happens at Cloudflare’s edge network, not at the origin server. Websites enable it per zone through the dashboard, and it’s available in beta at no additional cost for Pro, Business, and Enterprise plan customers, plus SSL for SaaS customers.

Cloudflare noted that some AI coding tools already send the Accept: text/markdown header. The company named Claude Code and OpenCode as examples.

Each converted response includes an x-markdown-tokens header that estimates the token count of the markdown version. Developers can use this to manage context windows or plan chunking strategies.
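Based on the header names described above, a request for the markdown representation might look like the following sketch. The URL is a placeholder, and the site is assumed to be behind Cloudflare with the feature enabled.

```typescript
// Sketch of content negotiation against a Cloudflare-proxied page with
// Markdown for Agents enabled (hypothetical URL).
async function fetchAsMarkdown(url: string): Promise<string> {
  const res = await fetch(url, {
    headers: { Accept: "text/markdown" }, // request the markdown representation
  });

  // Cloudflare's estimated token count for the converted body, useful for
  // context-window budgeting or chunking decisions.
  const tokenEstimate = res.headers.get("x-markdown-tokens");
  console.log(`content-type: ${res.headers.get("content-type")}, tokens: ${tokenEstimate}`);

  return res.text(); // the markdown body
}

fetchAsMarkdown("https://example.com/blog/some-post").then(console.log);
```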

Content-Signal Defaults

Converted responses include a Content-Signal header set to ai-train=yes, search=yes, ai-input=yes by default, signaling the content can be used for AI training, search use, and AI input (including agentic use). Whether a given bot honors those signals depends on the bot operator. Cloudflare said the feature will offer custom Content-Signal policies in the future.
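For agents that want to honor the signal, the default value quoted above is straightforward to parse. The helper below is hypothetical and infers the format from that example value alone, not from a formal specification.

```typescript
// Toy parser for a Content-Signal header value such as
// "ai-train=yes, search=yes, ai-input=yes" (format inferred from the
// example above).
function parseContentSignal(value: string): Record<string, string> {
  return Object.fromEntries(
    value
      .split(",")
      .map((pair) => pair.trim().split("=") as [string, string])
  );
}

const signals = parseContentSignal("ai-train=yes, search=yes, ai-input=yes");
console.log(signals["ai-train"]); // "yes" -> training permitted by default
```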

The Content Signals framework, which Cloudflare announced during Birthday Week 2025, lets site owners set preferences for how their content gets used. Enabling markdown conversion also applies a default usage signal, not just a format change.

How This Differs From What Mueller Criticized

Mueller was criticizing a different practice. Some site owners build separate markdown pages and serve them to AI user agents through middleware. Mueller raised concerns about cloaking and broken linking, and questioned whether bots could even parse markdown properly.

Cloudflare’s feature uses a different mechanism. Instead of detecting user agents and serving alternate pages, it relies on content negotiation. The same URL serves different representations based on what the client requests in the header.

Mueller’s comments addressed user-agent-based serving, not content negotiation. In a Reddit thread about Cloudflare’s feature, Mueller responded with the same position. He wrote, “Why make things even more complicated (parallel version just for bots) rather than spending a bit of time improving the site for everyone?”

Google defines cloaking as showing different content to users and search engines with the intent to manipulate rankings and mislead users. The cloaking concern may apply differently here. With user-agent sniffing, the server decides what to show based on who’s asking. With content negotiation, the client requests a format and the server responds. The content is the same information in a different format, not different content for different visitors.
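The mechanical difference is easy to see in a toy server. The sketch below is a hypothetical Node handler, not anything Cloudflare runs; it keys the response off the client’s Accept header rather than its User-Agent.

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// User-agent sniffing (the practice Mueller criticized) would branch on who
// appears to be asking, e.g.:
//   if (/bot|crawler/i.test(req.headers["user-agent"] ?? "")) { /* alternate page */ }
//
// Content negotiation branches only on the format the client requested,
// returning different representations of the same content.
function handle(req: IncomingMessage, res: ServerResponse): void {
  const accept = req.headers["accept"] ?? "";
  if (accept.includes("text/markdown")) {
    res.setHeader("Content-Type", "text/markdown");
    res.end("# Same content\n\nRendered as markdown.");
  } else {
    res.setHeader("Content-Type", "text/html");
    res.end("<h1>Same content</h1><p>Rendered as HTML.</p>");
  }
}

createServer(handle).listen(8080);
```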

The practical result is still similar from a crawler’s perspective. Googlebot requesting standard HTML would see a full webpage. An AI agent requesting markdown would see a stripped-down text version of the same page.

New Radar Tracking

Cloudflare also added content type tracking to Cloudflare Radar for AI bot traffic. The data shows the distribution of content types returned to AI agents and crawlers, broken down by MIME type.

You can filter by individual bot to see what content types specific crawlers receive. Cloudflare showed OAI-SearchBot as an example, displaying the volume of markdown responses served to OpenAI’s search crawler.

The data is available through Cloudflare’s public APIs and Data Explorer.

Why This Matters

If you already run your site through Cloudflare, you can enable markdown conversion with a single toggle instead of building separate markdown pages.

Enabling Markdown for Agents also sets the Content-Signal header to ai-train=yes, search=yes, ai-input=yes by default. Publishers who have been careful about AI access to their content should review those defaults before toggling the feature on.

Looking Ahead

Cloudflare said it plans to add custom Content-Signal policy options to Markdown for Agents in the future.

Mueller’s criticism focused on separate markdown pages, not on standard content negotiation. Google hasn’t addressed whether serving markdown through content negotiation falls under its cloaking guidelines.

The feature is opt-in and limited to paid Cloudflare plans. Review the Content-Signal defaults before enabling it.

Google Clarifies Its Stance On Campaign Consolidation via @sejournal, @brookeosmundson

In the recent episode of Google’s Ads Decoded podcast, Ginny Marvin sat down with Brandon Ervin, Director of Product Management for Search Ads, to address a topic many PPC marketers have strong opinions about: campaign and ad group consolidation.

Ervin, who oversees product development across core Search and Shopping ad automation, including query matching, Smart Bidding, Dynamic Search Ads, budgeting, and AI-driven systems, made one thing clear.

Consolidation is not the end goal. Equal or better performance with less granularity is.

What Was Said

During the discussion, Ervin acknowledged that many legacy account structures were built with good reason.

“What people were doing before was quite rational,” he said.

For years, granular campaign builds gave advertisers control. Match type segmentation, tightly themed ad groups, layered bidding strategies, and regional splits all made sense in a manual or semi-automated environment.

But according to Ervin, the rise of Smart Bidding and AI has shifted that dynamic.

“The big shift we’ve seen with the rise of Smart Bidding and AI, the machine in general can do much better than most humans. Consolidation is not necessarily the goal itself. This evolution we’ve gone through allows you to get equal or better performance with a lot less granularity.”

In other words, the structure that once helped performance may now be limiting it.

Ervin also pushed back on the idea that consolidation means losing control.

“Control still exists,” he said. “It just looks different than it did before.”

Ginny Marvin described it as a “mindset shift.”

When Segmentation Still Makes Sense

Despite Google’s push toward leaner account structures, Ervin did not suggest collapsing everything into one campaign.

Segmentation still makes sense when it reflects how a business actually operates.

Examples he shared included:

  • Distinct product lines with separate budgets and bidding goals
  • Different business objectives that require their own targets or reporting
  • Regional splits if that mirrors how the company runs operations

The key distinction is intent. If structure supports real budget decisions, reporting requirements, or operational differences, it belongs. If it exists only because that was the best practice five years ago, it may be creating more friction than value.

Ervin also addressed a common concern: how do you know when you’ve consolidated enough?

His benchmark was 15 conversions over a 30-day period. Those conversions do not need to come from a single campaign. Shared budgets and portfolio bidding strategies can aggregate conversion data across campaigns to meet that threshold.
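As a toy illustration of that aggregation, consider the sketch below. The campaigns and numbers are made up, and the 15-conversion benchmark comes from Ervin’s comments, not from a published formula.

```typescript
// Toy illustration: conversions aggregate across campaigns under a shared
// budget or portfolio bidding strategy (hypothetical data).
interface Campaign {
  name: string;
  conversionsLast30Days: number;
}

const THRESHOLD = 15; // Ervin's 15-conversions-per-30-days benchmark

function portfolioMeetsThreshold(campaigns: Campaign[]): boolean {
  const total = campaigns.reduce((sum, c) => sum + c.conversionsLast30Days, 0);
  return total >= THRESHOLD;
}

// No single campaign clears 15 on its own, but the portfolio does in aggregate.
const portfolio: Campaign[] = [
  { name: "Brand", conversionsLast30Days: 6 },
  { name: "Generic", conversionsLast30Days: 5 },
  { name: "Long-tail", conversionsLast30Days: 7 },
];

console.log(portfolioMeetsThreshold(portfolio)); // true (18 >= 15)
```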

If your campaign or ad group segmentation dilutes learning and slows down bidding models, it may be time to rethink your structure.

Why This Matters

For many PPC professionals, granularity has long been associated with expertise. Highly segmented accounts, tightly themed ad groups, and cautious use of broad match were once signs of disciplined management.

In earlier versions of Google Ads, that level of control often made a measurable difference.

I used to build accounts that way, too. When I managed highly competitive and seasonal ecommerce brands, SKAG structures were common practice for good reason. They were a way to better control budget for high-volume, generic terms that performed differently than more niche, long-tail terms.

What has changed my mindset is not the importance of structure, but the role it plays in my accounts. As Smart Bidding and automation have matured, I have seen firsthand how legacy segmentation can dilute data and slow down learning.

In several accounts where consolidation was tested thoughtfully, performance stabilized and, in some cases, improved, especially in accounts with low overall conversion volume. What I thought was a perfectly built account structure was actually limiting performance because I was spreading budget and conversion volume too thin.

After a few months of poor performance, I was essentially “forced” to test a simpler campaign structure and let go of old habits.

Was it uncomfortable? Absolutely. When you’ve been doing PPC for years (think back to when Google Shopping was first free!), you’re essentially unlearning years of ‘best practices’ and having to learn a new way of managing accounts.

That does not mean consolidation is always the answer. It does suggest that structure should be tied directly to business logic, not inherited from best practices that were built for a different version of the platform.

Looking Ahead

If you’re in the camp of needing to start consolidating campaigns or ad groups, know that these large structural changes should not happen overnight.

For many teams, especially those managing complex accounts, restructuring can carry risk and large volatility spikes if it is done too aggressively.

A more measured approach may make sense. Start by identifying splits that clearly align with budgets, reporting requirements, or business priorities. Then evaluate the ones that exist primarily because they were once considered best practice.

In some cases, consolidation may unlock stronger data signals and steadier bidding. In others, maintaining separation may still be justified. The key is being intentional about the reason each layer exists.

WP Engine Complaint Adds Unredacted Allegations About Mullenweg Plan via @sejournal, @martinibuster

WP Engine recently filed its third amended complaint against WordPress co-founder Matt Mullenweg and Automattic, which includes newly unredacted allegations that Mullenweg identified ten companies to pursue for licensing fees and contacted a Stripe executive in an effort to persuade Stripe to cancel contracts and partnerships with WPE.

Mullenweg And “Nuclear War”

The defendants argued that Mullenweg did not use the phrase “nuclear war.” However, documents they produced show that he used the phrase in a message describing his response to WP Engine if it did not comply with his demands.

The footnote states:

“During the recent hearing before this Court, Defendants represented that “we have seen over and over again ‘nuclear war’ in quotes,” but Mullenweg “didn’t say it” and it “[d]idn’t happen.” August 28, 2025 Hrg. Tr. at 33. According to Defendants’ counsel, Mullenweg instead only “refers to nuclear,” not “nuclear war.””

While WPE alleges that both threats are abhorrent and wrongful, reflecting a distinction without a difference, documents recently produced by Defendants confirm that in a September 13, 2024 message sent shortly before Defendants launched their campaign against WPE, Mullenweg declared “for example with WPE . . . [i]f that doesn’t resolve well it’ll look like all-out nuclear war[.]”

Email From Matt Mullenweg To A Stripe Executive

Another newly unredacted detail is an email from Matt Mullenweg to a Stripe executive in which he asked Stripe to “cancel any contracts or partnerships with WP Engine.” Stripe is a financial infrastructure platform that enables companies to accept credit card payments online.

The new information appears in the third amended complaint:

“In a further effort to inflict harm upon WPE and the market, Defendants secretly sought to strongarm Stripe into ceasing any business dealings with WPE. Shocking documents Defendants recently produced in discovery reveal that in mid-October 2024, just days after WPE brought this lawsuit, Mullenweg emailed a Stripe senior executive, insisting that Stripe “cancel any contracts or partnerships with WP Engine,” and threatening, “[i]f you chose not to do so, we should exit our contracts.”

“Destroy All Competition”

In paragraphs 200 and 202, WP Engine alleges that Defendants acknowledged having the power to “destroy all competition” and were seeking contributions that benefited Automattic rather than the WordPress.org community. WPE argues that Mullenweg abused his roles as the head of a nonprofit foundation, the owner of critical “dot-org” infrastructure, and the CEO of a for-profit competitor, Automattic.

These paragraphs appear intended to support WP Engine’s claim that the “Five for the Future” program and other community-oriented initiatives were used as leverage to pressure competitors into funding Automattic’s commercial interests. The complaint asserts that only a monopolist could make such demands and successfully coerce competitors in this manner.

Here are the paragraphs:

“Indeed, in documents recently produced by Defendants, they shockingly acknowledge that they have the power to “destroy all competition” and would inflict that harm upon market participants unless they capitulated to Defendants’ extortionate demands.”

“…Defendants’ monopoly power is so overwhelming that, while claiming they are interested in encouraging their competitors to “contribute to the community,” internal documents recently produced by Defendants reveal the truth—that they are engaged in an anticompetitive campaign to coerce their competitors to “contribute to Automattic.” Only a monopolist could possibly make such demands, and coerce their competitors to meet them, as has occurred here.”

“They Get The Same Thing Today For Free”

Additional paragraphs allege that internal documents contradict the defendants’ claim that their trademark enforcement is legitimate, because those documents acknowledge that certain WordPress hosts were already receiving the same benefits for free.

The new paragraph states:

“Contradicting Defendants’ current claim that their enforcement of supposed trademarks is legitimate, Defendants conceded internally that “any Tier 1 host (WPE for example)” would “pushback” on agreeing to a purported trademark license because “they get the same thing today for free. They’ve never paid for [the WordPress] trademarks and won’t want to pay …”

“If They Don’t Take The Carrot We’ll Give Them The Stick”

Paragraphs 211, 214, and 215 cite internal correspondence that WP Engine alleges reflects an intention to enforce compliance using a “carrot” or “stick” approach. The complaint uses this language to support its claims of market power and exclusionary conduct, which form the basis of its coercion and monopolization allegations under the Sherman Act.

Paragraph 211:

“Given their market power, Defendants expected to be able to enforce compliance, whether with a “carrot” or a “stick.””

Paragraph 214

“Defendants’ internal discussions further reveal that if market participants did not acquiesce to the price increases via a partnership with a purported trademark license component, then “they are fair game” and Defendants would start stealing their sites, thereby effectively eliminating those competitors. As Defendants’ internal correspondence states, “if they don’t take the carrot we’ll give them the stick.””

Paragraph 215:

“As part of their scheme, Defendants initially categorized particular market participants as follows:
• “We have friends (like Newfold) who pay us a lot of money. We want to nurture and value these relationships.”
• “We have would-be friends (like WP Engine) who are mostly good citizens within the WP ecosystem but don’t directly contribute to Automattic. We hope to change this.”
• “And then there are the charlatans ( and ) who don’t contribute. The charlatans are free game, and we should steal every single WP site that they host.””

Plan To Target At Least Ten Competitors

Paragraphs 218, 219, and 220 serve to:

  • Support its claim that WPE was the “public example” of what it describes as a broader plan to target at least ten other competitors with similar trademark-related demands.
  • Allege that certain competitors were paying what it describes as “exorbitant sums” tied to trademark arrangements.

WP Engine argues that these allegations show the demands extended beyond WPE and were part of a broader pattern.

The complaint cites internal documents produced by Defendants in which Mullenweg claimed he had “shield[ed]” a competitor “from directly competitive actions,” which WP Engine presents as evidence that Defendants had, and exercised, the ability to influence competitive conditions through these arrangements.

In those same internal documents, proposed payments were described as “not going to work,” which the complaint uses to argue that the payment amounts were not standardized but could be increased at Defendants’ insistence.

Here are the paragraphs:

“218. Ultimately, WPE was the public example of the “stick” part of Defendants’ “trademark license” demand. But while WPE decided to stand and fight by refusing Defendants’ ransom demand, Defendants’ list included at least ten other competitors that they planned to target with similar demands to pay Defendants’ bounty.

219. Indeed, based on documents that Defendants have recently produced in discovery, other competitors such as Newfold and [REDACTED] are paying Defendants exorbitant sums as part of deals that include “the use of” Defendants’ trademarks.

220. Regarding [REDACTED], in internal documents produced by Defendants, [REDACTED] confirmed that “[t]he money we’re sending from the hosting page is going to you directly”.

In return, Mullenweg claimed he apparently “shield[ed]” [REDACTED] “from directly competitive actions from a number of places[.]”.

Mullenweg further criticized the level of contributions for the month of August 2024, claiming “I’d need 3 years of that to get a new Earthroamer”.

Confronted with Mullenweg’s demand for more, [REDACTED] described itself as “the smallest fish,” suggesting that Mullenweg “can get more money from other companies,” and asking whether [REDACTED] was “the only ones you’re asking to make this change” in an apparent reference to “whatever trademark guidelines you send over”.

Mullenweg responded “nope[.]”. Later, on November 26, 2024—the same day this Court held the preliminary injunction hearing—Mullenweg told [REDACTED] that its proposed “monthly payment of [REDACTED] and contributions to wordpress.org were not “going to work,” and wished it “[b]est of luck” in resisting Defendants’ higher demands.”

WP Engine Versus Mullenweg And Automattic

Much of the previously redacted material is presented to support WP Engine’s antitrust claims, including statements that Defendants had the power to “destroy all competition.” What happens next is up to the judge.

Featured Image by Shutterstock/Kues

Google’s Ads Chief Details UCP Expansion, New AI Mode Ads via @sejournal, @MattGSouthern

Google’s VP of Ads and Commerce, Vidhya Srinivasan, published her third annual letter to the industry, outlining how the company plans to connect advertising, commerce, and AI across Search, YouTube, and Gemini in 2026.

The letter covers agentic commerce, AI-powered ad formats, creator partnerships, and creative tools. Several of the announcements build on features Google previewed at NRF 2026 in January and detailed during its Q4 2025 earnings call earlier this month.

What’s New

UCP Adoption

The letter confirms that the Universal Commerce Protocol now powers purchases from Etsy and Wayfair for U.S. shoppers inside AI Mode in Search and Gemini. Google said it has received interest from “hundreds of top tech companies, payments partners and retailers” since launching UCP.

When Google announced UCP at NRF, the company said the protocol was co-developed with Shopify and that more than 20 companies had endorsed it.

Google also said UCP’s potential “extends far beyond retail,” describing it as the foundation for agentic experiences across all commercial categories.

AI Mode Ad Formats

Srinivasan wrote that Google is testing a new ad format in AI Mode that highlights retailers offering products relevant to a query and marks them as sponsored. The letter describes the format as helping “shoppers easily find convenient buying options” while giving retailers visibility during the consideration stage.

The letter also mentioned Direct Offers, the ad pilot Google introduced at NRF that lets businesses share tailored deals with shoppers in AI Mode. Google plans to expand Direct Offers beyond price-based promotions to include loyalty benefits and product bundles.

Creator-Brand Matching

Srinivasan described YouTube creators as “today’s most trusted tastemakers,” citing a Google/Kantar study of 2,160 weekly video viewers. YouTube CEO Neal Mohan outlined related creator and commerce priorities in his own annual letter last month.

The letter highlights new AI-powered tools that match brands with creator communities based on content and audience analysis. Google said it started with its “open call” feature for sourcing creator partnerships and plans to go further in 2026.

Creative Asset Stats

Google said it saw a 3x increase in Gemini-generated assets in 2025, and that Q4 alone accounted for nearly 70 million assets across AI Max and Performance Max campaigns, according to Google internal data.

Srinivasan wrote that Veo 3, Google’s video generation tool, is now in Google Ads Asset Studio alongside the previously launched Nano Banana.

AI Max Performance Claims

Srinivasan wrote that AI Max is “unlocking billions of net-new searches” that advertisers had not previously reached.

Google introduced AI Max as an expansion tool for Search campaigns and discussed its performance during the Q4 earnings call.

Why This Matters

We’ve covered each major announcement in this letter as it was made. The UCP checkout announcement came at NRF in January. The retailer tradeoff questions followed days later. The pricing controversy played out the same week. The AI Mode monetization details came through during the earnings call.

What this letter adds is a bigger picture of where Google’s leadership sees these pieces fitting together. Srinivasan says this is the year agentic commerce moves from concept to operating reality, with UCP as the connective layer across shopping, payments, and AI agents.

For advertisers, the notable updates are the expansion of Direct Offers beyond price discounts and the testing of AI Mode ad formats in travel. For ecommerce stores, the Etsy and Wayfair confirmation shows that UCP checkout is processing real transactions with recognizable retailers. But the open questions I raised in January’s coverage about Merchant Center controls, opt-in mechanics, and reporting remain unanswered.

Looking Ahead

Srinivasan’s letter didn’t include specific launch dates for the features coming later this year. Google Marketing Live, the company’s annual ads event, takes place in the spring and would be the likely venue for more detailed announcements.


Featured Image: Mijansk786/Shutterstock