News

Google Clarifies Its Stance On Campaign Consolidation via @sejournal, @brookeosmundson

In a recent episode of Google’s Ads Decoded podcast, Ginny Marvin sat down with Brandon Ervin, Director of Product Management for Search Ads, to address a topic many PPC marketers have strong opinions about: campaign and ad group consolidation.
Ervin, who oversees product development across core Search and Shopping ad automation, including query matching, Smart Bidding, Dynamic Search Ads, budgeting, and AI-driven systems, made one thing clear.
Consolidation is not the end goal. Equal or better performance with less granularity is.
What Was Said
During the discussion, Ervin acknowledged that many legacy account structures were built with good reason.
“What people were doing before was quite rational,” he said.
For years, granular campaign builds gave advertisers control. Match type segmentation, tightly themed ad groups, layered bidding strategies, and regional splits all made sense in a manual or semi-automated environment.
But according to Ervin, the rise of Smart Bidding and AI has shifted that dynamic.

The big shift we’ve seen with the rise of Smart Bidding and AI, the machine in general can do much better than most humans. Consolidation is not necessarily the goal itself. This evolution we’ve gone through allows you to get equal or better performance with a lot less granularity.

In other words, the structure that once helped performance may now be limiting it.
Ervin also pushed back on the idea that consolidation means losing control.
“Control still exists,” he said. “It just looks different than it did before.”
Ginny Marvin described it as a “mindset shift.”
When Segmentation Still Makes Sense
Despite Google’s push toward leaner account structures, Ervin did not suggest collapsing everything into one campaign.
Segmentation still makes sense when it reflects how a business actually operates.
Examples he shared included:

Distinct product lines with separate budgets and bidding goals
Different business objectives that require their own targets or reporting
Regional splits if that mirrors how the company runs operations

The key distinction is intent. If structure supports real budget decisions, reporting requirements, or operational differences, it belongs. If it exists only because that was the best practice five years ago, it may be creating more friction than value.
Ervin also addressed a common concern: how do you know when you’ve consolidated enough?
His benchmark was 15 conversions over a 30-day period. Those conversions do not need to come from a single campaign. Shared budgets and portfolio bidding strategies can aggregate conversion data across campaigns to meet that threshold.
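That threshold check is simple arithmetic. As a rough sketch (the campaign names and figures below are hypothetical, and this does not call the Google Ads API — in practice the numbers would come from your own reporting export), you could aggregate 30-day conversions across campaigns that share a portfolio bidding strategy and flag any portfolio that misses the 15-conversion benchmark:

```python
# Hypothetical 30-day conversion counts, grouped by the shared
# portfolio bidding strategy each campaign belongs to.
campaigns = [
    {"name": "Brand - US", "portfolio": "brand", "conversions_30d": 9},
    {"name": "Brand - CA", "portfolio": "brand", "conversions_30d": 8},
    {"name": "Generic - Shoes", "portfolio": "generic", "conversions_30d": 6},
    {"name": "Generic - Boots", "portfolio": "generic", "conversions_30d": 4},
]

THRESHOLD = 15  # Ervin's benchmark: 15 conversions per 30 days


def portfolio_totals(campaigns):
    """Sum 30-day conversions per portfolio bidding strategy."""
    totals = {}
    for c in campaigns:
        totals[c["portfolio"]] = totals.get(c["portfolio"], 0) + c["conversions_30d"]
    return totals


def below_threshold(campaigns, threshold=THRESHOLD):
    """Return portfolios whose aggregated conversions miss the benchmark."""
    return [p for p, total in portfolio_totals(campaigns).items() if total < threshold]


print(portfolio_totals(campaigns))  # {'brand': 17, 'generic': 10}
print(below_threshold(campaigns))   # ['generic']
```

The point of the aggregation step is that neither "brand" campaign clears 15 conversions on its own, but together they do, so they would not need further consolidation, while the "generic" portfolio would.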
If your campaign or ad group segmentation dilutes learning and slows down bidding models, it may be time to rethink your structure.
Why This Matters
For many PPC professionals, granularity has long been associated with expertise. Highly segmented accounts, tightly themed ad groups, and cautious use of broad match were once signs of disciplined management.
In earlier versions of Google Ads, that level of control often made a measurable difference.
I used to build accounts that way, too. When I managed highly competitive, seasonal ecommerce brands, SKAG (single keyword ad group) structures were common practice for good reason. They were a way to better control budget for high-volume, generic terms that performed differently than more niche, long-tail terms.
What has changed my mindset is not the importance of structure, but the role it plays in my accounts. As Smart Bidding and automation have matured, I have seen firsthand how legacy segmentation can dilute data and slow down learning.
In several accounts where consolidation was tested thoughtfully, performance stabilized and, in some cases, improved, especially in accounts with low conversion volume overall. What I thought was a perfectly built account structure was actually limiting performance because I was spreading budget and conversion volume too thin.
After a few months of poor performance, I was essentially “forced” to test a simpler campaign structure and let go of old habits.
Was it uncomfortable? Absolutely. When you’ve been doing PPC for years (think back to when Google Shopping was first free!), you’re essentially unlearning years of ‘best practices’ and having to learn a new way of managing accounts.
That does not mean consolidation is always the answer. It does suggest that structure should be tied directly to business logic, not inherited from best practices that were built for a different version of the platform.
Looking Ahead
If you’re in the camp that needs to start consolidating campaigns or ad groups, know that large structural changes should not happen overnight.
For many teams, especially those managing complex accounts, restructuring done too aggressively can carry risk and trigger large volatility spikes.
A more measured approach may make sense. Start by identifying splits that clearly align with budgets, reporting requirements, or business priorities. Then evaluate the ones that exist primarily because they were once considered best practice.
In some cases, consolidation may unlock stronger data signals and steadier bidding. In others, maintaining separation may still be justified. The key is being intentional about the reason each layer exists.

SEO

The Classifier Layer: Spam, Safety, Intent, Trust Stand Between You And The Answer via @sejournal, @DuaneForrester

Most people still think visibility is a ranking problem. That worked when discovery lived in 10 blue links. It breaks down when discovery happens inside an answer layer.
Answer engines have to filter aggressively. They are assembling responses, not returning a list. They are also carrying more risk. A bad result can become harmful advice, a scam recommendation, or a confident lie delivered in a friendly tone. So the systems that power search and LLM experiences rely on classification gates long before they decide what to rank or what to cite.
If you want to be visible in the answer layer, you need to clear those gates.
SSIT is a simple way to name what’s happening. Spam, Safety, Intent, Trust. Four classifier jobs sit between your content and the output a user sees. They sort, route, and filter long before retrieval, ranking, or citation.

Spam: The Manipulation Gate
Spam classifiers exist to catch scaled manipulation. They are upstream and unforgiving, and if you trip them, you can be suppressed before relevance even enters the conversation.
Google is explicit that it uses automated systems to detect spam and keep it out of search results. It also describes how those systems evolve over time and how manual review can complement automation.
Google has also named a system directly in its spam update documentation. SpamBrain is described as an AI-based spam prevention system that it continually improves to catch new spam patterns.
For SEOs, spam detection behaves like pattern recognition at scale. Your site gets judged as a population of pages, not a set of one-offs. Templates, footprints, link patterns, duplication, and scaling behavior all become signals. That’s why spam hits often feel unfair. Single pages look fine; the aggregate looks engineered.
If you publish a hundred pages that share the same structure, phrasing, internal links, and thin promise, classifiers see the pattern.
Google’s spam policies are a useful map of what the spam gate tries to prevent. Read them like a spec for failure modes, then connect each policy category to a real pattern on your site that you can remove.
Manual actions remain part of this ecosystem. Google documents that manual actions can be applied when a human reviewer determines a site violates its spam policies.
There is an uncomfortable SEO truth hiding in this. If your growth play relies on behaviors that resemble manipulation, you are betting your business on a classifier not noticing, not learning, and not adapting. That is not a stable bet.
Safety: The Harm And Fraud Gate
Safety classifiers are about user protection. They focus on harm, deception, and fraud. They do not care if your keyword targeting is perfect, but they do care if your experience looks risky.
Google has made public claims about major improvements in scam detection using AI, including catching more scam pages and reducing specific forms of impersonation scams.
Even if you ignore the exact numbers, the direction is clear. Safety classification is a core product priority, and it shapes visibility hardest where users can be harmed financially, medically, or emotionally.
This is where many legitimate sites accidentally look suspicious. Safety classifiers are conservative, and they work at the level of pattern and context. Monetization-heavy layouts, thin lead gen pages, confusing ownership, aggressive outbound pushes, and inflated claims can all resemble common scam patterns when they show up at scale.
If you operate in affiliate, lead gen, local services, finance, health, or any category where scams are common, you should assume the safety gate is strict. Then build your site so it reads as legitimate without effort.
That comes down to basic trust hygiene.
Make ownership obvious. Use consistent brand identifiers across the site. Provide clear contact paths. Be transparent about monetization. Avoid claims that cannot be defended. Include constraints and caveats in the content itself, not hidden in a footer.
If your site has ever been compromised, or if you operate in a neighborhood of risky outbound links, you also inherit risk. Safety classifiers treat proximity as a signal because threat actors cluster. Cleaning up your link ecosystem and site security is no longer only a technical responsibility; it’s visibility defense.
Intent: The Routing Gate
Intent classification determines what the system believes the user is trying to accomplish. That decision shapes the retrieval path, the ranking behavior, the format of the answer, and which sources get pulled into the response.
This matters more as search shifts from browsing sessions to decision sessions. In a list-based system, the user can correct the system by clicking a different result. In an answer system, the system makes more choices on the user’s behalf.
Intent classification is also broader than the old SEO debates about informational versus transactional. Modern systems try to identify local intent, freshness intent, comparative intent, procedural intent, and high-stakes intent. These intent classes change what the system considers helpful and safe. In fact, if you deep-dive into “intents,” you’ll find that so many more don’t even fit into our crisply defined, marketing-designed boxes. Most marketers build for maybe three to four intents. The systems you’re trying to win in often operate with more, and research taxonomies already show how intent explodes into dozens of types when you measure real tasks instead of neat categories.
If you want consistent visibility, make intent alignment obvious and commit each page to a primary task.

If a page is a “how to,” make it procedural. Lead with the outcome. Present steps. Include requirements and failure modes early.
If a page is a “best options” piece, make it comparative. Define your criteria. Explain who each option fits and who it does not.
If a page is local, behave like a local result. Include real local proof and service boundaries. Remove generic filler that makes the page look like a template.
If a page is high-stakes, be disciplined. Avoid sweeping guarantees. Include evidence trails. Use precise language. Make boundaries explicit.

Intent clarity also helps across classic ranking systems, and it can help reduce pogo behavior and improve satisfaction signals. More importantly for the answer layer, it gives the system clean blocks to retrieve and use.
Trust: The “Should We Use This” Gate
Trust is the gate that decides whether content is used, how much it is used, and whether it is cited. You can be retrieved and still not make the cut. You can be used and still not be cited. You can show up one day and disappear the next because the system saw slightly different context and made different selections.
Trust sits at the intersection of source reputation, content quality, and risk.
At the source level, trust is shaped by history. Domain behavior over time, link graph context, brand footprint, author identity, consistency, and how often the site is associated with reliable information.
At the content level, trust is shaped by how safe it is to quote. Specificity matters. Internal consistency matters. Clear definitions matter. Evidence trails matter. So does writing that makes it hard to misinterpret.
LLM products also make classification gates explicit in their developer tooling. OpenAI’s moderation guide documents classification of text and images for safety purposes, so developers can filter or intervene.
Even if you are not building with APIs, the existence of this tooling reflects the reality of modern systems. Classification happens before output, and policy compliance influences what can be surfaced. For SEOs, the trust gate is where most AI optimization advice gets exposed. Sounding authoritative is easy, but being safe to use takes precision, boundaries, evidence, and plain language.
It also comes in blocks that can stand alone.
Answer engines extract. They reassemble, and they summarize. That means your best asset is a self-contained unit that still makes sense when it is pulled out of the page and placed into a response.
A good self-contained block typically includes a clear statement, a short explanation, a boundary condition, and either an example or a source reference. When your content has those blocks, it becomes easier for the system to use it without introducing risk.
How SSIT Flows Together In The Real World
In practice, the gates stack.
First, the system evaluates whether a site and its pages look spammy or manipulative. This can affect crawl frequency, indexing behavior, and ranking potential. Next, it evaluates whether the content or experience looks risky. In some categories, safety checks can suppress visibility even when relevance is high. Then it evaluates intent. It decides what the user wants and routes retrieval accordingly. If your page does not match the intent class cleanly, it becomes less likely to be selected.
Finally, it evaluates trust for usage. That is where decisions get made about quoting, citing, summarizing, or ignoring. The key point for AI optimization is not that you should try to game these gates. The point is that you should avoid failing them.
Most brands lose visibility in the answer layer for boring reasons. They look like scaled templates. They hide important legitimacy signals. They publish vague content that is hard to quote safely. They try to cover five intents in one page and satisfy none of them fully.
If you address those issues, you are doing better “AI optimization” than most teams chasing prompt hacks.
Where “Classifiers Inside The Model” Fit, Without Turning This Into A Computer Science Lecture
Some classification happens inside model architectures as routing decisions. Mixture of Experts approaches are a common example, where a routing mechanism selects which experts process a given input to improve efficiency and capability. NVIDIA also provides a plain-language overview of Mixture of Experts as a concept.
This matters because it reinforces the broader mental model. Modern AI systems rely on routing and gating at multiple layers. Not every gate is directly actionable for SEO, but the presence of gates is the point. If you want predictable visibility, you build for the gates you can influence.
What To Do With This, Practical Moves For SEOs
Start by treating SSIT like a diagnostic framework. When visibility drops in an answer engine, do not jump straight to “ranking.” Ask where you might have failed in the chain.
Spam Hygiene Improvements
Audit at the template level. Look for scaled patterns that resemble manipulation when aggregated. Remove doorway clusters and near-duplicate pages that do not add unique value. Reduce internal link patterns that exist only to sculpt anchors. Identify pages that exist only to rank and cannot defend their existence as a user outcome.
Use Google’s spam policy categories as the baseline for this audit, because they map to common classifier objectives.
Safety Hygiene Improvements
Assume conservative filtering in categories where scams are common. Strengthen legitimacy signals on every page that asks for money, personal data, a phone call, or a lead. Make ownership and contact information easy to find. Use transparent disclosures. Avoid inflated claims. Include constraints inside the content.
If you publish in YMYL-adjacent categories, tighten your editorial standards. Add sourcing. Track updates. Remove stale advice. Safety gates punish stale content because it can become harmful.
Intent Hygiene Improvements
Choose the primary job of the page and make it obvious in the first screen. Align the structure to the task. A procedural page should read like a procedure. A comparison page should read like a comparison. A local page should prove locality.
Do not rely on headers and keywords to communicate this. Make it obvious in sentences that a system can extract.
Trust Hygiene Improvements
Build citeable blocks that stand on their own. Use tight definitions. Provide evidence trails. Include boundaries and constraints. Avoid vague, sweeping statements that cannot be defended. If your content is opinion-led, label it as such and support it with rationale. If your content is claim-led, cite sources or provide measurable examples.
This is also where authorship and brand footprint matter. Trust is not only on-page. It is the broader set of signals that tell systems you exist in the world as a real entity.
SSIT As A Measurement Mindset
If you are building or buying “AI visibility” reporting, SSIT changes how you interpret what you see.

A drop in presence can mean a spam classifier dampened you.
A drop in citations can mean a trust classifier avoided quoting you.
A mismatch between retrieval and usage can mean intent misalignment.
A category-level invisibility can mean safety gating.

That diagnostic framing matters because it leads to fixes you can execute. It also stops teams from thrashing, rewriting everything, and hoping the next version sticks.
SSIT also keeps you grounded. It is tempting to treat AI optimization as a new discipline with new hacks. Most of it is not hacks. It is hygiene, clarity, and trust-building, applied to systems that filter harder than the old web did. That’s the real shift.
The answer layer is not only ranking content, but it’s also selecting content. That selection happens through classifiers that are trained to reduce risk and improve usefulness. When you plan for Spam, Safety, Intent, and Trust, you stop guessing. You start designing content and experiences that survive the gates.
That is how you earn a place in the answer layer, and keep it.

This post was originally published on Duane Forrester Decodes.

Featured Image: Olga_TG/Shutterstock

Enterprise SEO Column

From Performance SEO To Demand SEO via @sejournal, @TaylorDanRW

AI is fundamentally changing what doing SEO means. Not just in how results are presented, but in how brands are discovered, understood, and trusted inside the very systems people now rely on to learn, evaluate, and make decisions. This forces a reassessment of our role as SEOs, the tools and frameworks we use, and the way success is measured beyond legacy reporting models that were built for a very different search environment.
Continuing to rely on vanity metrics rooted in clicks and rankings no longer reflects reality, particularly as people increasingly encounter and learn about brands without ever visiting a website.
For most of its history, SEO focused on helping people find you within a static list of results. Keywords, content, and links existed primarily to earn a click from someone who already recognized a need and was actively searching for a solution.
AI disrupts that model by moving discovery into the answer itself, returning a single synthesized response that references only a small number of brands, which naturally reduces overall clicks while simultaneously increasing the number of brand touchpoints and moments of exposure that shape perception and preference. This is not a traffic loss problem, but a demand creation opportunity. Every time a brand appears inside an AI-generated answer, it is placed directly into the buyer’s mental shortlist, building mental availability even when the user has never encountered the brand before.
Why AI Visibility Creates Demand, Not Just Traffic
Traditional SEO excelled at capturing existing demand by supporting users as they moved through a sequence of searches that refined and clarified a problem before leading them towards a solution.
AI now operates much earlier in that journey, shaping how people understand categories, options, and tradeoffs before they ever begin comparing vendors, effectively pulling what we used to think of as middle and bottom-of-funnel activity further upstream. People increasingly use AI to explore unfamiliar spaces, weigh alternatives, and design solutions that fit their specific context, which means that when a brand is repeatedly named, explained, or referenced, it begins to influence how the market defines what good looks like.
This repeated exposure builds familiarity over time, so that when a decision moment eventually arrives, the brand feels known and credible rather than new and untested, which is demand generation playing out inside the systems people already trust and use daily.
Unlike above-the-line advertising, this familiarity is built natively within tools that have become deeply embedded in everyday life through smartphones, assistants, and other connected devices, making this shift not only technical but behavioral, rooted in how people now access and process information.
How This Changes The Role Of SEO
As AI systems increasingly summarize, filter, and recommend on behalf of users, SEO has to move beyond optimizing individual pages and instead focus on making a brand easy for machines to understand, trust, and reuse across different contexts and queries.
This shift is most clearly reflected in the long-running move from keywords to entities, where keywords still matter but are no longer the primary organizing principle, because AI systems care more about who a brand is, what it does, where it operates, and which problems it solves.
That pushes modern SEO towards clearly defined and consistently expressed brand boundaries, where category, use cases, and differentiation are explicit across the web, even when that creates tension with highly optimized commercial landing pages.
AI systems rely heavily on trust signals such as citations, consensus, reviews, and verifiable facts, which means traditional ranking factors still play a role, but increasingly as proof points that an AI system can safely rely on when constructing answers. When an AI cannot confidently answer basic questions about a brand, it hesitates to recommend it, whereas when it can, that brand becomes a dependable component it can repeatedly draw upon.
This changes the questions SEO teams need to ask, shifting focus away from rankings alone and toward whether content genuinely shapes category understanding, whether trusted publishers reference the brand, and whether information about the brand remains consistent wherever it appears.
Narrative control also changes, because where brands once shaped their story through pages in a list of results, AI now tells the story itself, requiring SEOs to work far more closely with brand and communication teams to reinforce simple, consistent language and a small number of clear value propositions that AI systems can easily compress into accurate summaries.
What Brands Need To Do Differently
Brands need to stop starting their strategies with keywords and instead begin by assessing their strength and clarity as an entity, looking at what search engines and other systems already understand about them and how consistent that understanding really is.
The most valuable AI moments occur long before a buyer is ready to compare vendors, at the point where they are still forming opinions about the problem space, which means appearing by name in those early exploratory questions allows a brand to influence how the problem itself is framed and to build mental availability before any shortlist exists.
Achieving that requires focus rather than breadth, because trying to appear in every possible conversation dilutes clarity, whereas deliberately choosing which problems and perspectives to own creates stronger and more coherent signals for AI systems to work with.
This represents a move away from chasing as many keywords as possible in favor of standardizing a simple brand story that uses clear language everywhere, so that what you do, who it is for, and why it matters can be expressed in one clean, repeatable sentence.
This shift also demands a fundamental change in how SEO success is measured and reported, because if performance continues to be judged primarily through rankings and clicks, AI visibility will always look underwhelming, even though its real impact happens upstream by shaping preference and intent over time.
Instead, teams need to look at patterns across branded search growth, direct traffic, lead quality, and customer outcomes, because when reporting reflects that broader reality, it becomes clear that as AI visibility grows, demand follows, repositioning SEO from a purely tactical channel into a strategic lever for long-term growth.

Featured Image: Roman Samborskyi/Shutterstock

News

WP Engine Complaint Adds Unredacted Allegations About Mullenweg Plan via @sejournal, @martinibuster

WP Engine recently filed its third amended complaint against WordPress co-founder Matt Mullenweg and Automattic, which includes newly unredacted allegations that Mullenweg identified ten companies to pursue for licensing fees and contacted a Stripe executive in an effort to persuade Stripe to cancel contracts and partnerships with WPE.
Mullenweg And “Nuclear War”
The defendants argued that Mullenweg did not use the phrase “nuclear war.” However, documents they produced show that he used the phrase in a message describing his response to WP Engine if it did not comply with his demands.
The footnote states:
“During the recent hearing before this Court, Defendants represented that “we have seen over and over again ‘nuclear war’ in quotes,” but Mullenweg “didn’t say it” and it “[d]idn’t happen.” August 28, 2025 Hrg. Tr. at 33. According to Defendants’ counsel, Mullenweg instead only “refers to nuclear,” not “nuclear war.””
While WPE alleges that both threats are abhorrent and wrongful, reflecting a distinction without a difference, documents recently produced by Defendants confirm that in a September 13, 2024 message sent shortly before Defendants launched their campaign against WPE, Mullenweg declared “for example with WPE . . . [i]f that doesn’t resolve well it’ll look like all-out nuclear war[.]”
Email From Matt Mullenweg To A Stripe Executive
Another newly unredacted detail is an email from Matt Mullenweg to a Stripe executive in which he asked Stripe to “cancel any contracts or partnerships with WP Engine.” Stripe is a financial infrastructure platform that enables companies to accept credit card payments online.
The new information appears in the third amended complaint:
“In a further effort to inflict harm upon WPE and the market, Defendants secretly sought to strongarm Stripe into ceasing any business dealings with WPE. Shocking documents Defendants recently produced in discovery reveal that in mid-October 2024, just days after WPE brought this lawsuit, Mullenweg emailed a Stripe senior executive, insisting that Stripe “cancel any contracts or partnerships with WP Engine,” and threatening, “[i]f you chose not to do so, we should exit our contracts.”
“Destroy All Competition”
In paragraphs 200 and 202, WP Engine alleges that Defendants acknowledged having the power to “destroy all competition” and were seeking contributions that benefited Automattic rather than the WordPress.org community. WPE argues that Mullenweg abused his roles as the head of a nonprofit foundation, the owner of critical “dot-org” infrastructure, and the CEO of a for-profit competitor, Automattic.
These paragraphs appear intended to support WP Engine’s claim that the “Five for the Future” program and other community-oriented initiatives were used as leverage to pressure competitors into funding Automattic’s commercial interests. The complaint asserts that only a monopolist could make such demands and successfully coerce competitors in this manner.
Here are the paragraphs:
“Indeed, in documents recently produced by Defendants, they shockingly acknowledge that they have the power to “destroy all competition” and would inflict that harm upon market participants unless they capitulated to Defendants’ extortionate demands.”
“…Defendants’ monopoly power is so overwhelming that, while claiming they are interested in encouraging their competitors to “contribute to the community,” internal documents recently produced by Defendants reveal the truth—that they are engaged in an anticompetitive campaign to coerce their competitors to “contribute to Automattic.” Only a monopolist could possibly make such demands, and coerce their competitors to meet them, as has occurred here.”
“They Get The Same Thing Today For Free”
Additional paragraphs allege that internal documents contradict the defendants’ claim that their trademark enforcement is legitimate by acknowledging that certain WordPress hosts were already receiving the same benefits for free.
The new paragraph states:
“Contradicting Defendants’ current claim that their enforcement of supposed trademarks is legitimate, Defendants conceded internally that “any Tier 1 host (WPE for example)” would “pushback” on agreeing to a purported trademark license because “they get the same thing today for free. They’ve never paid for [the WordPress] trademarks and won’t want to pay …”
“If They Don’t Take The Carrot We’ll Give Them The Stick”
Paragraphs 211, 214, and 215 cite internal correspondence that WP Engine alleges reflects an intention to enforce compliance using a “carrot” or “stick” approach. The complaint uses this language to support its claims of market power and exclusionary conduct, which form the basis of its coercion and monopolization allegations under the Sherman Act.
Paragraph 211:
“Given their market power, Defendants expected to be able to enforce compliance, whether with a “carrot” or a “stick.””
Paragraph 214:
“Defendants’ internal discussions further reveal that if market participants did not acquiesce to the price increases via a partnership with a purported trademark license component, then “they are fair game” and Defendants would start stealing their sites, thereby effectively eliminating those competitors. As Defendants’ internal correspondence states, “if they don’t take the carrot we’ll give them the stick.””
Paragraph 215:
“As part of their scheme, Defendants initially categorized particular market participants as follows:
• “We have friends (like Newfold) who pay us a lot of money. We want to nurture and value these relationships.”
• “We have would-be friends (like WP Engine) who are mostly good citizens within the WP ecosystem but don’t directly contribute to Automattic. We hope to change this.”
• “And then there are the charlatans ( and ) who don’t contribute. The charlatans are free game, and we should steal every single WP site that they host.””
Plan To Target At Least Ten Competitors
Paragraphs 218, 219, and 220 serve to:

Support WP Engine’s claim that WPE was the “public example” of what it describes as a broader plan to target at least ten other competitors with similar trademark-related demands.
Allege that certain competitors were paying what WP Engine describes as “exorbitant sums” tied to trademark arrangements.

WP Engine argues that these allegations show the demands extended beyond WPE and were part of a broader pattern.
The complaint cites internal documents produced by Defendants in which Mullenweg claimed he had “shield[ed]” a competitor “from directly competitive actions,” which WP Engine presents as evidence that Defendants had, and exercised, the ability to influence competitive conditions through these arrangements.
In those same internal documents, proposed payments were described as “not going to work,” which the complaint uses to argue that the payment amounts were not standardized but could be increased at Defendants’ insistence.
Here are the paragraphs:
“218. Ultimately, WPE was the public example of the “stick” part of Defendants’ “trademark license” demand. But while WPE decided to stand and fight by refusing Defendants’ ransom demand, Defendants’ list included at least ten other competitors that they planned to target with similar demands to pay Defendants’ bounty.
219. Indeed, based on documents that Defendants have recently produced in discovery, other competitors such as Newfold and [REDACTED] are paying Defendants exorbitant sums as part of deals that include “the use of” Defendants’ trademarks.
220. Regarding [REDACTED], in internal documents produced by Defendants, [REDACTED] confirmed that “[t]he money we’re sending from the hosting page is going to you directly”.
In return, Mullenweg claimed he apparently “shield[ed]” [REDACTED] “from directly competitive actions from a number of places[.]”.
Mullenweg further criticized the level of contributions for the month of August 2024, claiming “I’d need 3 years of that to get a new Earthroamer”.
Confronted with Mullenweg’s demand for more, [REDACTED] described itself as “the smallest fish,” suggesting that Mullenweg “can get more money from other companies,” and asking whether [REDACTED] was “the only ones you’re asking to make this change” in an apparent reference to “whatever trademark guidelines you send over”.
Mullenweg responded “nope[.]”. Later, on November 26, 2024—the same day this Court held the preliminary injunction hearing—Mullenweg told [REDACTED] that its proposed “monthly payment of [REDACTED] and contributions to wordpress.org were not “going to work,” and wished it “[b]est of luck” in resisting Defendants’ higher demands.”
WP Engine Versus Mullenweg And Automattic
Much of the previously redacted material is presented to support WP Engine’s antitrust claims, including statements that Defendants had the power to “destroy all competition.” What happens next is up to the judge.
Featured Image by Shutterstock/Kues
AI

Is a secure AI assistant possible?

Risky business of AI assistants: OpenClaw, a viral tool created by independent engineer Peter Steinberger, allows users to create personalized AI assistants.

Platforms & Apps

New Ecommerce Tools: February 11, 2026

This week’s rundown of new products and services for ecommerce merchants includes rollouts for reverse logistics, fraud prevention, fulfillment, AI assistants, AI store builders, chargebacks, checkouts, agentic commerce, and automated marketing.
Got an ecommerce product release? Email updates@practicalecommerce.com.
New Tools for Merchants
ReturnPro partners with Clarity to detect fraud on returns. ReturnPro, a provider of returns management and reverse logistics, has partnered with Clarity, an item intelligence platform, to introduce AI-powered fraud-detection technology that identifies counterfeit, altered, and fraudulent returns and flags missing accessories at the point of return. Clarity’s AI technology combines X-ray intelligence with computer vision to see inside the actual product, comparing each returned item against its original manufacturer profile and detecting counterfeits, component swaps, and product manipulation.
ReturnPro
Bolt partners with Socure for ecommerce identity. Bolt, a financial technology platform for one-click checkout, has partnered with Socure to verify real people in real time at the moment of purchase. By integrating Socure’s RiskOS platform, Bolt delivers an ecommerce identity layer powered by predictive risk signals and compliance decisioning. Socure’s Identity Graph enables low-friction authentication for trusted consumers, adaptive protections, and cross-merchant trust signals.
Knowband launches generative AI plugins for PrestaShop. Knowband, an ecommerce developer, has launched two AI-based plugins for merchants. The PrestaShop AI Chatbot module answers product and order questions in real time. It supports multiple languages and currencies and uses vector search to understand query meanings. The PrestaShop LLMs Txt Generator module helps store owners automatically produce llms.txt files for their catalog, increasing the likelihood that genAI platforms discover and reference the products.
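For context on what such a module outputs: llms.txt is a plain markdown file served from a site’s root (per the llmstxt.org proposal) that gives AI crawlers a curated index of the catalog. A generated file might look like the following sketch — the store name and URLs are illustrative, not from Knowband’s announcement:

```markdown
# Acme Outdoor Store

> Acme sells camping and hiking gear. The links below point AI
> assistants to canonical product and policy pages.

## Products

- [Tents](https://example.com/category/tents): 2- to 6-person tents
- [Backpacks](https://example.com/category/backpacks): daypacks and framed packs

## Policies

- [Shipping & Returns](https://example.com/policies): delivery times and return rules
```

The format deliberately mirrors robots.txt in placement but uses readable markdown, so generative platforms can quote accurate product details rather than guessing.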
ShipTime acquires Warehowz to expand North American capabilities. ShipTime, a Canada-based logistics technology platform, has acquired an ownership stake in Warehowz, an on-demand warehousing and fulfillment marketplace with a network of 2,500 warehouses across North America. According to ShipTime, integrating Warehowz into ShipTime’s ecosystem enables merchants to gain greater control, visibility, and adaptability across the supply chain.
ShipTime
WordPress.com releases a Claude connector. WordPress.com has launched an official connector for Claude, the AI assistant developed by Anthropic. Once set up, Claude can answer questions using your WordPress.com site data, not estimates or generic guidance. According to WordPress, the Claude plugin can identify what readers respond to, surface content that needs refreshing, and spot opportunities for improvement.
Cside launches AI Agent Detection toolkit. Cside, a provider of website security and compliance, has launched its AI Agent Detection toolkit to identify agentic traffic and behavior from both traditional and AI-powered headless browsers. AI Agent Detection governs which AI agents can interact with the website, what they are allowed to do, and when human validation is required. Cside says the new toolkit enables merchants to leverage agentic commerce behavior for cross-selling, dynamic pricing, and additional verification requirements.
Chargebase launches to help merchants cut chargebacks. Chargebase has launched its chargeback-prevention platform for ecommerce and SaaS businesses. The platform automates the alert-resolution process, matching alerts to orders and handling backend communication with transaction dispute platforms such as Verifi and Ethoca. By receiving real-time alerts when a customer initiates a dispute with their bank, merchants can issue a quick refund and avoid a costly chargeback. Merchants pay when the platform helps avoid or resolve a dispute.
Chargebase
Loop expands Europe-based returns capabilities with Sendcloud integration. Loop, a post-purchase platform for Shopify sellers, has launched Ship by Loop 2.0, an upgraded version of its integrated return shipping service that now includes Sendcloud, a Europe-based shipping platform. With the Sendcloud integration, Loop merchants gain access to an expanded carrier network across Europe without leaving Loop’s returns portal. The enhancement also introduces QR code returns and InPost locker drop-offs.
SDLC Corp announces connector for syncing Shopify and Odoo ERP data. SDLC Corp, part of open-source developer Odoo, has launched an SDLC Connector for teams running Shopify and Odoo ERP. According to SDLC Corp, the connector synchronizes products, customers, orders, inventory, payments, and collections in real time. Integration features include real-time Shopify-to-Odoo data sync, automated imports with validation, bidirectional inventory updates, webhook and scheduled auto-sync modes, multi-store support, custom field mapping within the Odoo dashboard, token-based authentication, and more.
Genstore launches AI tool to build and operate stores. Genstore has launched its ecommerce platform, which uses autonomous AI to build and operate online stores. According to Genstore, the platform deploys coordinated AI agents that collaborate to execute real business tasks autonomously. The design agent creates layout, branding, and motion. The product agent generates listings, descriptions, and imagery. The launch agent handles search engine readiness, compliance, and store launch preparation. And the analytics agent uncovers conversion-driving insights.
Genstore
Prolisto launches Lite for creating eBay listings. Prolisto, a software development company specializing in ecommerce automation, has announced the launch of Prolisto Lite, a free AI-powered web app that simplifies and accelerates the process of creating eBay listings. According to Prolisto, Lite analyzes uploaded product images and generates an eBay title, a detailed search-engine-friendly description, and the appropriate item specifics.
EcomHint launches conversion rate optimization tool for Shopify and WooCommerce. EcomHint has launched its AI-powered conversion rate optimization tool for Shopify and WooCommerce merchants. The tool helps merchants identify conversion issues throughout the shopping journey and provides step-by-step guidance on how to fix them. EcomHint combines AI-based visual analysis, technical checks, and Lighthouse performance metrics to review key parts of the store, including home and product pages, cart and checkout friction points, and page speed. EcomHint bases its recommendations on an analysis of 700 online stores.
Veho introduces FlexSave delivery option. Veho, a parcel delivery platform for ecommerce, has launched FlexSave to help online brands offer cost-effective delivery. According to Veho, FlexSave enables shippers to reduce costs by replacing day-certain delivery dates with slightly broader delivery windows. Veho customers continue to receive proactive delivery updates, live support, and photo delivery confirmation.
Veho