All In One SEO WordPress Vulnerability Affects Over 3 Million Sites via @sejournal, @martinibuster

A security vulnerability was discovered in the popular All in One SEO (AIOSEO) WordPress plugin that made it possible for low-privileged users to access a site’s global AI access token, potentially allowing attackers to generate content or consume credits through the affected site’s AI features. The plugin is installed on more than 3 million WordPress websites, making the exposure significant.

All in One SEO WordPress Plugin (AIOSEO)

All in One SEO is one of the most widely used WordPress SEO plugins, installed on over 3 million websites. It helps site owners manage search engine optimization tasks such as generating metadata, creating XML sitemaps, and adding structured data. It also provides AI-powered tools that assist with writing titles, descriptions, blog posts, FAQs, and social media posts, and with generating images.

Those AI features rely on a site-wide AI access token that allows the plugin to communicate with AIOSEO’s external AI services.

Missing Capability Check

According to Wordfence, the vulnerability was caused by a missing permission check on a specific REST API endpoint used by the plugin, which enabled users with Contributor-level access to view the global AI access token.

In the context of a WordPress website, an API (Application Programming Interface) is like a bridge between the WordPress site and other software applications (including external services like AIOSEO’s AI content generator), enabling them to securely communicate and share data with one another. A REST endpoint is a URL that exposes an interface to functionality or data.

The flaw was in the following REST API endpoint:

/aioseo/v1/ai/credits

That endpoint is meant to return information about a site’s AI usage and remaining credits. However, it failed to verify whether the user making the request was actually allowed to see that data: the plugin never performed a capability check to confirm that someone logged in with Contributor-level access should be able to view it.

Because of that, any logged-in user with Contributor-level access or higher could call the endpoint and retrieve the site’s global AI access token.

Wordfence describes the flaw like this:

“This makes it possible for authenticated attackers, with Contributor-level access and above, to disclose the global AI access token.”

In WordPress, REST API routes are supposed to include capability checks that ensure only authorized users can access them. Because this endpoint’s implementation skipped that permission check, the plugin treated Contributors the same as administrators when returning the AI token.
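WordPress REST routes declare a permission callback alongside the handler function, and an endpoint that omits or weakens that callback returns data to any authenticated caller. The following Python sketch is a toy model of that difference; the function and field names are hypothetical, not the plugin’s actual code:

```python
# Illustrative sketch (hypothetical names, not AIOSEO's source) of how a
# missing capability check exposes a sensitive field to low-privilege users.

def current_user_can(user, capability):
    """Toy capability model: contributors can edit posts but lack admin rights."""
    caps = {
        "administrator": {"manage_options", "edit_posts"},
        "contributor": {"edit_posts"},
    }
    return capability in caps.get(user["role"], set())

def credits_endpoint_vulnerable(user, site):
    # No permission check: the user argument is never consulted, so any
    # logged-in user receives the site-wide token along with the credits.
    return {"credits": site["credits"], "access_token": site["ai_token"]}

def credits_endpoint_fixed(user, site):
    # Capability check gates the response before any sensitive data is built.
    if not current_user_can(user, "manage_options"):
        return {"error": "rest_forbidden", "status": 403}
    return {"credits": site["credits"], "access_token": site["ai_token"]}
```

In this model, a Contributor calling the vulnerable handler gets the token back, while the fixed handler returns a 403-style error and only yields the token to an administrator.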

Why The Vulnerability Is Problematic

In WordPress, Contributor is one of the lowest-privilege roles. Many sites grant Contributor access to multiple people so that they can submit article drafts for review and publication.

By exposing the global AI token to those users, the plugin may have effectively handed out a site-wide credential that controls access to its AI features. That exposure creates two risks:

1. Unauthorized AI Usage
The token functions as a site-wide credential that authorizes AI requests. An attacker who obtains it could potentially generate AI content through the affected site’s account, consuming whatever credits or usage limits are associated with that token.

2. Service Depletion
An attacker could automate requests using the exposed token to exhaust the site’s available AI quota. That would prevent site administrators from using the AI features they rely on, effectively creating a denial of service for the plugin’s AI tools.
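The depletion scenario can be sketched in a few lines of Python. This is a hypothetical model of a metered, token-authorized AI call, not AIOSEO’s actual billing logic; every name here is invented for illustration:

```python
# Toy model: each AI request authorized by the leaked site-wide token
# debits a shared credit pool until nothing remains.

def make_ai_request(site, token):
    """Hypothetical metered AI call: succeeds only with a valid token and credits left."""
    if token != site["ai_token"] or site["credits"] <= 0:
        return False
    site["credits"] -= 1
    return True

site = {"ai_token": "leaked-token", "credits": 5}

# An attacker automating requests with the exposed token drains the pool,
# leaving legitimate administrators with no remaining AI quota.
while make_ai_request(site, "leaked-token"):
    pass
```

After the loop, the credit pool is empty and every further request fails, which is the denial-of-service outcome described above.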

Even though the vulnerability does not allow direct code execution, leaking a site-wide API token still represents a possible billing risk.

Part Of A Broader Pattern Of Vulnerabilities

This is not the first time All In One SEO has shipped with vulnerabilities related to missing authorization or low-privilege access. According to Wordfence, the plugin has had six vulnerabilities disclosed in 2025 alone, many of which allowed Contributor or Subscriber level users to access or modify data they should not have been able to access.

Those issues included SQL injection, information disclosure, arbitrary media deletion, missing authorization checks, sensitive data exposure, and stored cross-site scripting. The recurring theme across those reports is improper permission enforcement for low-privilege users, the same underlying class of flaw that led to the AI token exposure in this case.

Six vulnerabilities in one year is a high number for an SEO plugin. By comparison, the Yoast SEO plugin had zero vulnerabilities disclosed in 2025, RankMath had four, and Squirrly SEO had only three.

Screenshot Of Six AIOSEO Vulnerabilities In 2025

How The Vulnerability Was Fixed

The vulnerability affects all versions of All in One SEO up to and including 4.9.2. It was addressed in version 4.9.3, which included a security update described by the developers in the official plugin changelog as:

“Hardened API routes to prevent AI access token from being exposed.”

That change corresponds directly to the REST API flaw identified by Wordfence.

What Site Owners Should Do

Anyone running All in One SEO should update to version 4.9.3 or newer as soon as possible. Sites that allow multiple external contributors are especially exposed since low-privilege accounts could access the site’s AI token on vulnerable versions.

Featured Image by Shutterstock/Shutterstock AI Generator

Survey: Publishers Expect Search Traffic To Fall Over 40% via @sejournal, @MattGSouthern

The Reuters Institute for the Study of Journalism has published its annual predictions report based on a survey of 280 senior media leaders across 51 countries and territories.

The report suggests publishers are preparing for two potential threats: generative AI tools, and creators who attract audiences with personality-led formats.

Note that the Reuters Institute survey reflects a strategic group of senior leaders. It’s not a representative sample of the entire industry.

What The Report Found

Search Traffic Is The Biggest Near-Term Concern

Survey respondents expect search engine traffic to decline by more than 40% over the next three years as AI-driven answers expand.

The report cites Chartbeat data showing aggregate Google Search traffic to hundreds of news sites has already started to dip. Lifestyle-focused publishers say they’ve been hit especially hard by Google’s AI Overviews rollout.

That comes on top of longer-running platform declines. The report notes referral traffic to news sites from Facebook fell 43% over the last three years, while referrals from X fell 46% over the same period.

Publishers Plan To Invest In Differentiation

In response to traffic pressure and AI summarization, publishers say they’ll invest more in original investigations, on-the-ground reporting, contextual analysis, and human stories.

Leaders surveyed say they plan to scale back service journalism and evergreen content, which many expect AI chatbots to commoditize.

Video & Off-Platform Distribution Rising

Publishers expect to invest more in video, including “watch tabs,” and more in audio formats such as podcasts. Text output is less of a priority.

On distribution, YouTube is the main off-platform channel cited in the report, alongside TikTok and Instagram.

Publishers are also trying to work out how to navigate distribution through AI platforms such as OpenAI’s ChatGPT, Google’s Gemini, and Perplexity.

Subscriptions Lead, Licensing Is Growing

For commercial publishers, paid content such as subscriptions and memberships is the top focus. There’s also renewed interest in native advertising and face-to-face events as publishers look for revenue beyond traditional display ads.

Publishers are also looking at licensing and other platform payments. The report notes interest in platform funding has nearly doubled over the last two years as AI companies began offering large deals.

Why This Matters

I’ve watched publishers cycle through traffic crises before. When Facebook’s algorithm changes hit in 2018, the industry scrambled, and eventually most publishers adjusted by leaning harder into search. Search was supposed to be the stable channel.

That assumption is what this report challenges. A projected decline of 40%+ over three years has become a planning number, affecting budgets, headcount, and content strategy.

The content mix change warrants attention. When 280 senior media leaders say they’re scaling back service journalism and evergreen content, it signals which pages they think will still drive traffic in an AI-summarized environment. Original reporting and analysis survive because chatbots can’t replicate them. Commodity information doesn’t, because it can be synthesized without a click.

The doubling of interest in licensing deals over two years is the other number that jumped out to me. When AI companies started writing checks, the conversation changed from “should we license” to “what’s our leverage.”

This report is useful as a benchmark for where the industry’s head is at, even if individual outcomes vary.

Looking Ahead

Traffic from search and AI aggregators is unlikely to disappear, but the terms of trade are still being negotiated.

That includes how citations work, what licensing looks like at scale, and whether revenue-sharing becomes a standard arrangement.


Featured Image: Roman Samborskyi/Shutterstock

YouTube Expands Monetization For Some Controversial Issues via @sejournal, @MattGSouthern

YouTube is updating its Advertiser-friendly content guidelines to allow more videos about certain “controversial issues” to earn full ad revenue, as long as the content is non-graphic and presented in a dramatized or discussion-based context.

The change was outlined in a Creator Insider video and is reflected in YouTube’s Help Center policy language.

What’s Changing

YouTube is loosening monetization restrictions for videos focused on controversial issues that advertisers may define as sensitive, including abortion, self-harm, suicide, and domestic and sexual abuse, when the content is “dramatized or discussed in a non-graphic manner.”

YouTube’s Help Center update describes the change, stating that content focused on “Controversial issues” is now eligible to earn ad revenue when it’s non-graphic and dramatized, and that this replaces a previous policy that limited monetization regardless of graphicness or whether content was fictional.

The current “Controversial issues” policy section also explicitly includes “non-graphic but descriptive or dramatized content” related to domestic abuse, self-harm, suicide, adult sexual abuse, abortion, and sexual harassment under the category that “can earn ad revenue.”

How YouTube Defines “Controversial Issues”

YouTube defines “Controversial issues” as topics associated with trauma or abuse, and notes the policy may apply even if the content is purely commentary.

The Help Center list includes child abuse, adult sexual abuse, sexual harassment, self-harm, suicide, eating disorders, domestic abuse, and abortion.

It also distinguishes between content that is “focal” versus “fleeting.” A passing reference is not considered a focus, whereas a sustained segment or a full-video discussion is.

Why This Matters

This update can change whether videos qualify for full ad revenue.

YouTube is drawing a clearer line between non-graphic dramatization or discussion (more likely to be eligible) and content that includes graphic depictions or very explicit detail (still likely to be restricted).

As with past advertiser-friendly updates, real-world outcomes can depend on how a specific upload is categorized during review, including signals from the video itself plus title and thumbnail.

Looking Ahead

It’s unknown whether previously limited videos will be re-reviewed automatically, or only on appeal.

Regardless, you shouldn’t wait for YouTube to do the work. Now is a great time to submit an appeal if your videos were affected by YouTube’s controversial issues policy.


Featured Image: Reyanaska/Shutterstock

Google: AI Mode Checkout Can’t Raise Prices via @sejournal, @MattGSouthern

Google is disputing claims that its new AI-powered shopping checkout work could enable what critics describe as “surveillance pricing” or other forms of overcharging.

The back-and-forth started after Lindsay Owens, executive director of consumer economics think tank Groundwork Collaborative, criticized Google’s newly announced Universal Commerce Protocol and pointed to language in its public roadmap about “cross-sell and upsell modules.”

U.S. Sen. Elizabeth Warren amplified the criticism, saying Google is “using troves of your data to help retailers trick you into spending more money.”

Google’s corporate account News from Google replied that the claims “around pricing are inaccurate,” adding that merchants are prohibited from showing higher prices on Google than what appears on their own sites.

What Triggered The Back-And-Forth

Owens wrote on X that Google’s announcement about integrating shopping into AI Mode and Gemini included “personalized upselling,” which she described as “analyzing your chat data and using it to overcharge you.”

Warren then reposted Owens’ thread and echoed the allegation in stronger terms, calling it “plain wrong” that Google would use user data to help retailers “trick you into spending more money.”

Google responded publicly on X with a thread disputing the premise.

News from Google wrote on X:

“These claims around pricing are inaccurate. We strictly prohibit merchants from showing prices on Google that are higher than what is reflected on their site, period.”

Google also addressed the “upselling” term directly:

“The term ‘upselling’ is not about overcharging. It’s a standard way for retailers to show additional premium product options that people might be interested in.”

And it added that “Direct Offers” can only move in one direction:

“‘Direct Offers’ is a pilot that enables merchants to offer a lower priced deal or add extra services like free shipping … it cannot be used to raise prices.”

Where “Upsell Modules” Shows Up

The language critics are pointing to is in the Universal Commerce Protocol roadmap, which lists “Native cross-sell and upsell modules” as an upcoming initiative, described as enabling “personalized recommendations and upsells based on user context.”

Separately, Google’s technical write-up on UCP says AI shopping experiences need support for things like “real-time inventory checks, dynamic pricing, and instant transactions” within a conversational context. The “dynamic pricing” phrasing is broad, but it is part of what critics are interpreting through a consumer protection lens.

Google’s Ads & Commerce blog post presents UCP as covering the entire shopping journey, linking it to AI Mode and Gemini, while emphasizing that retailers stay the seller of record.

Why This Matters

I have covered Google’s price accuracy enforcement going back years, including Merchant Center policies meant to prevent situations where a shopper sees one price and gets a higher one at checkout. That history is why the “prices on Google versus prices on your site” line is doing so much work in Google’s response.

The bigger picture is that Google is trying to turn AI Mode and Gemini into places where product discovery can end with a transaction. When that happens, the conversation stops being purely about relevance and starts being about pricing rules, disclosures, and what “personalization” means in practice.

Looking Ahead

If this becomes another layer of feed requirements and policy edge cases, retailers will feel it immediately. If it reduces drop-off between product discovery and checkout, Google will likely push harder to make it a default part of AI Mode shopping.


Featured Image: zikg/Shutterstock

Google Downplays GEO – But Let’s Talk About Garbage AI SERPs via @sejournal, @martinibuster

On the Search Off The Record podcast, Google’s Danny Sullivan and John Mueller offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.

SEO And The Next Generation Of Search

Google used to rank content based on keyword matching and PageRank was a way to extend that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.

What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”

So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are in a long-form format that answers three or more additional questions beyond the user’s initial query.

The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.

Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?

How To Write For Longform Answers

Given that Google is surfacing multi-paragraph long answers, does it make sense to create content that’s organized into bite-sized chunks? How does that affect how humans read content? Will they like it or leave it?

Many SEOs are recommending that publishers break the page up into “chunks,” based on the intuition that AI understands content in chunks, dividing the page into sections. But that’s an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through headings and HTML elements like ordered and unordered lists. A properly marked up and formatted web page already has a logical structure that both a human and a machine can easily understand. Duh… right?

It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers to not break their content up into chunks.

Danny said:

“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?

So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”

Danny talked about chunking with some Google engineers and his takeaway from that conversation is to recommend against chunking. The second takeaway is that their systems are set up to access content the way human readers access it and for that reason he says to craft the content for humans.

Avoids Talking About Search Referrals

But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out impacts referrals because Google is ranking a handful of pages for multiple queries for every single query a user makes. Compounding this situation, as you will see further on, is that the sites Google is ranking do not measure up.

Focus On The Big Picture

Danny Sullivan next discusses the downside of optimizing for a machine, explaining that systems eventually improve, which usually means that optimizations aimed at machines stop working.

He explained:

“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.

…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”

Let’s Talk About Garbage AI Search Results

I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and that he truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.

Subject Matter Expertise Is Missing

Google is intentionally tucking expert publications away in the More tab of the search results. To find expert content, a user has to click the More tab and then click the News tab.

How Google Hides Expert Web Pages


Google’s AI Mode Promotes Garbage And Sites Lacking Expertise

This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.

Google’s AI Mode cites the following pages:

1. An abandoned Medium blog from 2018 that only ever had two blog posts, both of which have broken images. That’s not authoritative.

2. An article published on LinkedIn, a business social networking website. Again, that’s not authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?

3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?

Screenshot Of Google’s Garbage AI Results

Google Hides The Good Stuff In More > News Tab

Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable websites. Instead, Google hides the high-quality web pages under the More tab.

Screenshot Of Hidden High Quality Search Results

GEO Or SEO – It Doesn’t Matter

This whole thing about GEO or AEO and whether it’s all SEO doesn’t really matter. It’s all a bunch of hand waving and bluster. What matters is that Google is no longer ranking high quality sites and high quality sites are withering from a lack of traffic.

I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?

Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.

How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?

Listen to the podcast here:

Featured Image by Shutterstock/Kues

WooCommerce WordPress Plugin Exploit Enables Fraudulent Charges via @sejournal, @martinibuster

A vulnerability in the popular WooCommerce Square plugin for WordPress enables unauthenticated attackers to uncover credit cards on file and make fraudulent charges. The vulnerability affects up to 80,000 installations.

WooCommerce Square WordPress Plugin

The WooCommerce Square plugin enables WordPress sites to accept payments through the Square POS, as well as synchronize product inventory data between Square and WooCommerce. The plugin also enables a WooCommerce merchant to support payments through Apple Pay®, Google Pay, WooCommerce Pre-Orders, and WooCommerce Subscriptions.

Insecure Direct Object Reference

The vulnerability in the plugin arises from an Insecure Direct Object Reference (IDOR), a flaw that occurs when critical data, such as identification numbers, is exposed in URL parameters, enabling an attacker to manipulate references to objects without the access controls that would normally restrict them.

The Open Worldwide Application Security Project (OWASP) defines IDOR as:

“Insecure Direct Object Reference (IDOR) is a vulnerability that arises when attackers can access or modify objects by manipulating identifiers used in a web application’s URLs or parameters. It occurs due to missing access control checks, which fail to verify whether a user should be allowed to access specific data.”

Exploiting the vulnerability does not require any authentication or permission level, making it easier for attackers to target affected websites.

According to a Wordfence advisory:

“The WooCommerce Square plugin for WordPress is vulnerable to Insecure Direct Object Reference in all versions up to, and including, 5.1.1 via the get_token_by_id function due to missing validation on a user controlled key. This makes it possible for unauthenticated attackers to expose arbitrary Square “ccof” (credit card on file) values and leverage this value to potentially make fraudulent charges on the target site.”
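The pattern OWASP describes can be modeled in a few lines. The sketch below borrows the `get_token_by_id` name from the advisory, but the code is an illustrative toy, not the plugin’s source, and all record names are hypothetical:

```python
# Toy model of an Insecure Direct Object Reference: the lookup trusts a
# user-supplied key with no check that the caller is allowed to read it.

TOKENS = {
    "ccof:alice-1": {"owner": "alice", "card_token": "tok_a"},
    "ccof:bob-1":   {"owner": "bob",   "card_token": "tok_b"},
}

def get_token_by_id_vulnerable(token_id):
    # Any caller who guesses or enumerates an ID receives the stored value.
    return TOKENS.get(token_id)

def get_token_by_id_fixed(token_id, requesting_user):
    record = TOKENS.get(token_id)
    # Access control: the record must belong to the requesting user.
    if record is None or record["owner"] != requesting_user:
        return None
    return record
```

In the vulnerable version, supplying another shopper’s ID returns their card-on-file reference; the fixed version validates ownership before returning anything, which is the missing check Wordfence describes.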

Multiple patched versions of the WooCommerce Square plugin are available; users should update to at least one of the following versions:

  • 4.2.3
  • 4.3.2
  • 4.4.2
  • 4.5.2
  • 4.6.4
  • 4.7.4
  • 4.8.8
  • 4.9.9
  • 5.0.1
  • 5.1.2

The vulnerability carries a CVSS severity score of 7.5, indicating a dangerous flaw that can be remotely exploited but is mitigated by a constraint that keeps it from being rated “Critical.”

Featured Image by Shutterstock/IgorZh

Apple Selects Google’s Gemini For New AI-Powered Siri via @sejournal, @MattGSouthern

Apple is partnering with Google to power its AI features, including a major Siri upgrade expected later this year.

The companies announced the multi-year collaboration on Monday. Google’s Gemini models and cloud technology will serve as the foundation for the next generation of Apple Foundation Models.

“After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users,” the joint statement said.

What’s New

The partnership makes Gemini a foundation for Apple’s next-generation models. Apple’s models will continue running on its devices and Private Cloud Compute infrastructure while maintaining what the company calls its “industry-leading privacy standards.”

Neither company disclosed the deal’s financial terms. Bloomberg previously reported Apple had discussed paying about $1 billion annually for Google AI access, though that figure remains unconfirmed for the final agreement.

By November, Bloomberg reported Apple had chosen Google over Anthropic based largely on financial terms.

Existing OpenAI Partnership Remains

Apple currently integrates OpenAI’s ChatGPT into Siri and Apple Intelligence for complex queries that draw on the model’s broader knowledge base.

Apple told CNBC the company isn’t making changes to that agreement. OpenAI did not immediately respond to a request for comment.

The distinction appears to be between the foundational models powering Apple Intelligence overall versus the external AI connection available for certain queries.

Context

The deal arrives as Google’s AI position strengthens. Alphabet surpassed Apple in market capitalization last week for the first time since 2019.

The default-search deal between Google and Apple has been under scrutiny after U.S. District Judge Amit Mehta ruled Google holds an illegal monopoly in online search and related advertising. In September 2025, he did not require Google to divest Chrome or Android.

Apple had originally planned to launch an AI-powered Siri upgrade in 2025 but delayed the release.

“It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year,” Apple said at the time.

Google introduced its upgraded Gemini 3 model late last year. CEO Sundar Pichai said in October that Google Cloud signed more deals worth over $1 billion through the first three quarters of 2025 than in the previous two years combined.

Why This Matters

I covered this partnership in November when Bloomberg first reported Apple was paying Google to build a custom Gemini model for Siri. Today’s joint statement confirms what was then unattributed sourcing.

The confirmation matters because it extends Gemini’s reach into one of the largest device ecosystems in the world. Apple has said Siri fields 1.5 billion user requests per day across more than 2 billion active devices. That installed base gives Gemini distribution Google couldn’t match through its own products alone.

The competitive signal is clearer now too. Apple evaluated Anthropic and chose Google. Eddy Cue testified in May that Apple planned to add Gemini to Siri, but today’s announcement frames it as a deeper infrastructure partnership, not just another assistant option.

If Siri becomes meaningfully more capable at answering queries directly, the implications mirror what’s happening with AI Overviews and AI Mode in search. More queries could be resolved without users reaching external websites.

Looking Ahead

The upgraded Siri is expected to roll out later in 2026. The companies haven’t provided a specific launch date.

Apple maintaining its OpenAI integration alongside the Google partnership suggests both relationships will continue, at least for now. How Apple balances these two AI providers for different use cases will become clearer as the new features launch.

Google’s UCP Checkout Brings New Tradeoffs For Retailers via @sejournal, @MattGSouthern

When Google announced that shoppers could complete purchases directly in AI Mode, the focus was on convenience and technical capability. A retailer who emailed Search Engine Journal raised different questions about what gets lost when the transaction moves to Google’s surfaces.

The retailer cited concerns that customers never visit the store, see accessory recommendations from other sellers, and lose brand connection when making purchases on Google.

The concern shows a tradeoff in Google’s Universal Commerce Protocol. Retailers gain potential access to customers at the moment of purchase intent. However, they may lose some of the brand environment, discovery patterns, and relationship-building that occur when shoppers visit owned sites.

What Changes When Checkout Leaves Your Site

The change affects several parts of how retailers interact with customers.

Cross-selling

Cross-selling may change shape. A customer buying a camera on your site might see lens recommendations, memory cards, or cases based on your merchandising strategy.

Google says it plans to add capabilities like discovering related products, applying loyalty rewards, and powering custom shopping experiences on Google, but it hasn’t detailed reporting, fees, or data-sharing for AI Mode checkout.

If loyalty rewards, saved preferences, and checkout work more smoothly on Google surfaces, some shoppers may prefer that experience even if retailers have less control over it. Whether that tradeoff benefits retailers depends on details Google hasn’t disclosed yet.

Brand Connection

Brand storytelling can get compressed into whatever product data feeds into Google’s systems. Retailers invest in site design, content, and navigation to communicate what makes them different. That investment may not fully transfer when the interaction happens in AI Mode’s standardized interface.

The customer relationship dynamics change. Retailers traditionally owned the full transaction flow: discovery, consideration, purchase, and post-purchase communication. For orders completed inside AI Mode, Google would host more of the discovery and checkout experience on its own surfaces, while retailers remain the seller of record.

The degree to which retailers can access customer journey data that normally informs merchandising and marketing is unknown.

The Amazon Parallel

The situation resembles dynamics that already exist with Amazon marketplace sellers. Third-party sellers on Amazon get access to massive customer traffic. Marketplace sellers often accept less control over the customer experience and limited access to relationship signals compared with selling on their own sites.

Google’s protocol creates similar dynamics but extends them across the open web rather than within a single marketplace. Google positions UCP as an open standard, in contrast to Amazon’s closed marketplace model. The key difference: Amazon requires sellers to list products on its platform. UCP lets Google insert checkout capabilities into AI Mode while products technically remain on participating retailers’ inventory systems.

Whether that distinction leads to more data for retailers or a different platform dependency depends on reporting and data-sharing details Google hasn’t specified.

When It Makes Sense, When It Doesn’t

Some retail business models rely heavily on price, convenience, and fulfillment speed. For these retailers, losing the site visit may matter less if UCP delivers customers when they’re ready to buy.

Other retailers compete on curation, brand experience, and discovery. A customer visiting a specialty outdoor gear retailer expects to explore complementary products, read buying guides, and engage with brand content. Moving more of the purchase flow onto Google surfaces could reduce how much of that value proposition happens on a retailer’s site.

The calculation also depends on customer acquisition costs. For example, if you’re paying $30 to acquire a customer through Google Ads and they buy a $50 product on your site, the unit economics work when you can cross-sell or build long-term relationship value. If checkout happens on Google’s surface and you can’t cross-sell or retarget, the same acquisition cost may not be worth it.
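The acquisition-cost math above can be sketched in a few lines. This is a hedged illustration only: the `net_value` function and every dollar figure are assumptions chosen to match the article's example, not data from Google or any retailer.

```python
# Illustrative sketch of the customer-acquisition math described above.
# All figures are assumptions for the example, not real retailer data.

def net_value(acquisition_cost, order_margin, followup_margin):
    """Profit on an acquired customer: margin on the first order,
    plus margin from cross-sells/retargeting, minus acquisition cost."""
    return order_margin + followup_margin - acquisition_cost

# On-site checkout: $30 CAC, an assumed $20 margin on the $50 order,
# plus an assumed $25 of later cross-sell/relationship margin.
on_site = net_value(30, 20, 25)

# Checkout on Google's surface: same CAC and order margin, but no
# cross-sell or retargeting opportunity afterward.
on_google = net_value(30, 20, 0)

print(on_site, on_google)  # the first is positive, the second negative
```

Under these assumed numbers, the same $30 acquisition spend is profitable with on-site checkout and unprofitable without the follow-up revenue, which is the retailer's point.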

What’s Known Versus What’s Speculation

Google said eligible U.S. retailers will be able to participate in UCP checkout through AI Mode in Search and the Gemini app. Google says retailers remain the seller of record and can customize the integration.

A separate Google Developers blog post explains that merchants remain the Merchant of Record and highlights an embedded option for a customized checkout experience. But the announcement didn’t detail the data-sharing arrangement, fee structure, or the funnel-level reporting retailers will receive for AI Mode checkout events.

The protocol is described as “open,” but adoption requirements, integration complexity, and whether non-Google AI systems can use it are unclear.

Google’s Business Agent feature demonstrates one use of the new protocol: branded AI chat appears in Search results for participating retailers, but the interaction occurs on Google’s platform.

Some analysts frame the change as existential, using terms like “extinction event” for certain retail models. That’s based on assumptions about adoption rates, customer behavior, and competitive dynamics that haven’t played out yet.

The more measured question retailers are asking is whether this creates fragmentation, where they must optimize for multiple checkout flows, or consolidation, where Google becomes the dominant transaction layer for product searches.

Questions Without Clear Answers

Three implementation details will likely determine how disruptive AI Mode checkout becomes for retailers:

  1. Merchant Center control: whether participation is explicitly opt-in and retailers can limit checkout to specific products or categories.
  2. Measurement: what reporting retailers get for actions on Google surfaces and whether AI Mode orders can be distinguished from standard site conversions.
  3. Customer and journey data: what signals, if any, come back to retailers to support lifecycle marketing and merchandising decisions.

Google has outlined the direction for UCP but hasn’t detailed these operational components.

Looking Ahead

Google said UCP checkout will roll out to eligible U.S. retailers soon, but hasn’t provided specific timing. Business Agent, which puts branded AI chat on Search results, went live Jan. 12.

Retailers questioning the tradeoffs between visibility and control face a pattern that’s played out before with Amazon, Google Shopping, and social commerce. Early participants gain access to new traffic sources but accept platform rules they don’t control. Late adopters may find themselves at a disadvantage.

The core question several retailers have raised is: Can they maintain the brand differentiation and relationship-building that justified creating owned channels when the transaction occurs on someone else’s platform?

The protocol is too new for anyone to know yet.


Featured Image: michnik101/Shutterstock

WordPress X Account’s ‘Childish’ Trolling Causes Backlash via @sejournal, @martinibuster

An official WordPress.org social media account was used to troll an open-source effort to decentralize the WordPress plugin and theme repository, creating what some feel was an undignified, even "childish," representation of the WordPress community.

What Is The FAIR Project?

The Federated And Independent Repositories project is an open-source initiative that was launched in 2025 in response to actions by Matt Mullenweg and Automattic that exposed a weakness in how plugins and themes are distributed to WordPress sites. The project was initiated after Mullenweg cut off WP Engine from updating their plugins, disrupting the proper functioning of thousands of websites.

The goal of the FAIR project is to decentralize the distribution of WordPress plugins and themes so that no single person can disrupt the free distribution of software.

FAIR, announced in June 2025, is backed by the Linux Foundation. The official announcement explained that the purpose of FAIR is to create a "vendor-neutral" method for distributing WordPress software within a trusted environment, writing:

“Vendor-neutral package management for content management systems like WordPress provides critical universal infrastructure that addresses the new realities of content, e-commerce and AI.

The FAIR Package Manager project helps make plugins and tools more discoverable and lets developers choose where to source those plugins depending on the needs of their supply chain. By giving commercial plugin developers, hosts, and application developers more options to control the tools they rely on, the FAIR Package Manager project promotes innovation and protects business continuity.”

What Caused An Issue With FAIR?

A WordPress user recently experienced a temporary problem updating their website using the FAIR repository, forcing them to manually SFTP the software updates to their server.

They posted on X:

“Here I am updating one of my sites for the new year, and it looks like FAIR broke my plugin and theme updates.”

After updating their site they returned to X with more thoughts about their experience with FAIR:

“Glad this was just a “for fun” site and not something critical. I like experimenting with stuff in the WordPress ecosystem, but this is a bit too experimental for my taste. Going back to stock updates, at least until 2.0.

…This is making me rethink how I organize my domains and sites. Should probably just set up a sandbox for things like this, but then again… the squeaky wheel gets the grease. If it’s all locked away in a sandbox, I’ll forget to ever touch it.”

The problem traced back to FAIR Connect version 1.2.1; version 1.2.2 was released to fix it. According to the release notes:

“FAIR Connect 1.2.2 Release Announcement

Version 1.2.2 of FAIR Connect is a fast follow up to our version 1.2.1 release. This release fixes a fatal error introduced in 1.2.1 that impacts the updating process.

If you previously updated to 1.2.1, you will need to perform this update manually.”

So fixing a site that updated to 1.2.1 requires manually deactivating the FAIR Connect plugin, downloading the updated version from the FAIR repository, and uploading it through the WordPress admin plugin dashboard (or via SFTP if the site is unavailable).

WordPress Trolls The FAIR Project

The official WordPress.org X account posted the following comment about the FAIR project:

“Looks like the Federated and Independent Repository project is going great. This is clearly going to rock the WordPress world. We don’t know how we’ll continue without these contributors. Maybe they need some REST.”

The post was highly unusual for the WordPress X account, which is normally a feel-good destination for announcements and inspiration related to WordPress. The unprofessional tone of the post caught many in the WordPress community by surprise.

One person shared their disappointment:

“Hi Matt! These comments aren’t clearly going to rock the atmosphere in our community too. So, http://WP.org never had issues?”

RapidLightnings responded:

“These people working at or for WordPress are so childish and unprofessional. Professional people wouldn’t care or would not post stuff like that on official accounts.”

Responses Hidden By WordPress

Additional responses were hidden by WordPress, like this one from o_be_one:

“For an OpenSource project, your take is toxic af.”

Rohan K called the post by the official WordPress account immature:

“Growing pains. Why are you gleefully gloating about this, when your immature and short-sighted actions led the creation of it? It makes you look bad.

Grow up.”

Aron Prins posted a one-word response:

“Ewww”

Thisbit commented on how it reflects poorly on the WordPress leadership:

“Shameful leadership.”

Jono Alderson reflected on the childishness of the tweet:

“Oh hush. Your misuse of this account for sniping is childish and tedious. Be better.”

Other posts were directed at Matt Mullenweg, with this one prematurely dancing on WordPress’s grave:

“SO HAPPY that AI is ending WordPress for good.
Ciao CattyMatty”

And this one:

“I’d say get a clue, but you’d probably steal it from another developer.”

Jono Alderson’s Response

Alderson started a new discussion to express his opinion about the WordPress troll-post:

“I love WordPress-the-software, but this kind of childish nonsense makes me ashamed and embarrassed to be associated with WordPress-the-brand. What childish, petty, unprofessional, shameful, amateur nonsense. All of these people need firing and replacing with capable grown-ups.”

The responses to Jono’s post generally expressed disappointment that the official WordPress account was used for trolling, with one person responding that it seemed crazy.

Featured Image by Shutterstock/AYO Production

Google Announces AI Mode Checkout Protocol, Business Agent via @sejournal, @MattGSouthern

Google announced tools that let shoppers complete purchases directly within AI Mode and chat with branded AI agents in Search results.

Users can purchase from eligible product listings on Google. Retailers are still the seller of record, while the checkout happens on Google surfaces instead of the retailer’s website.

Universal Commerce Protocol Powers AI Mode Checkout

Google launched the Universal Commerce Protocol, an open standard for what it calls “agentic commerce.” The protocol will power checkout on eligible Google product listings in AI Mode in Search and the Gemini app.

Google developed UCP with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 additional companies endorsed it, including Adyen, American Express, Best Buy, Mastercard, Stripe, The Home Depot, and Visa.

Shoppers will use Google Pay with payment methods and shipping info from Google Wallet. PayPal support is coming. UCP checkout starts with eligible U.S. retailers, with global expansion planned.

Business Agent Brings Branded Chat To Search

Business Agent lets shoppers chat with brands in Search results. Google describes it as a “virtual sales associate” that can answer product questions in the brand’s voice.

The feature goes live January 12 with Lowe's, Michaels, Poshmark, Reebok, and others. Eligible U.S. retailers can activate and customize the agent through Merchant Center.

Google plans to add capabilities for training agents on retailer data, providing product offers, and enabling purchases within the chat experience.

Direct Offers Pilot Tests Ads In AI Mode

Google also announced Direct Offers, a new ad pilot in AI Mode. It allows advertisers to offer exclusive discounts to people searching for products.

Google gave an example of a rug search where relevant retailers could feature a special 20% discount. Retailers set up offers in campaign settings, and Google determines when to display them.

Early partners include Petco, e.l.f. Cosmetics, Samsonite, Rugs USA, and Shopify merchants.

Why This Matters

Checkout in AI Mode means a user searching for a product can research, compare, and buy without ever reaching the retailer’s site.

For ecommerce sites, this changes the traffic equation. The sale still happens, but the site visit may not. Retailers participating in UCP gain access to high-intent buyers at the moment of decision. Those who don’t participate may find their products harder to surface when users expect to complete transactions without leaving Google.

Looking Ahead

Checkout in AI Mode rolls out to eligible U.S. retailers soon. Business Agent launches January 12. Direct Offers is in pilot with select advertisers.

Google said it plans to add new Merchant Center data attributes designed for discovery in AI Mode, Gemini, and Business Agent. The company will roll out the new attributes with a small group of retailers soon before expanding more broadly.


Featured Image: hafakot/Shutterstock