ChatGPT To Begin Testing Ads In The United States via @sejournal, @brookeosmundson

Just today, OpenAI confirmed it will begin testing advertising in the United States for ChatGPT Free and ChatGPT Go users in the coming weeks, marking the first time ads will appear inside the ChatGPT experience.

The test coincides with the U.S. launch of ChatGPT Go, a low-cost subscription tier priced at $8 per month that has been available internationally since August.

The details reveal a cautious approach, with clear limits on where ads can appear, who will see them, and how they will be separated from ChatGPT’s responses.

Here’s what OpenAI shared, how the tests will work, and why this shift matters for users and advertisers alike.

What OpenAI Is Testing

ChatGPT ads are not being introduced as part of a broader redesign or monetization overhaul. Instead, OpenAI is framing this as a limited test, with narrow placement rules and clear separation from ChatGPT’s core function.

Ads will appear at the bottom of a response, only when there is a relevant sponsored product or service tied to the active conversation. They will be clearly labeled, visually distinct from organic answers, and dismissible.

Users will also be able to see why a particular ad is being shown and turn off ad personalization entirely if they choose.

Just as important is where ads will not appear.

OpenAI stated that ads will not be shown to users under 18 and will not be eligible to run near sensitive or regulated topics, including health, mental health, and politics. Conversations will not be shared with advertisers, and user data will not be sold.

Timing Ad Testing with the Go Tier Launch

The timing of the announcement doesn’t seem accidental.

Alongside the ad testing plans, OpenAI confirmed that ChatGPT Go is now available in the United States.

Priced at $8 per month, Go sits between the free tier and higher-cost subscriptions, offering expanded access to messaging, image generation, file uploads, and memory.

Ads are positioned as a way to support both the free tier and Go users, allowing more people to use ChatGPT with fewer restrictions without forcing an upgrade.

At the same time, OpenAI made it clear that Pro, Business, and Enterprise subscriptions will remain ad-free, reinforcing that paid tiers are still the preferred path for users who want an uninterrupted experience.

Explaining the Guardrails of Early Ad Testing

OpenAI spent as much time explaining what ads will not do as what they will.

The company was explicit that advertising will not influence ChatGPT’s responses. Answers are optimized for usefulness, not commercial outcomes. There is no intent to optimize for time spent, engagement loops, or other metrics commonly associated with ad-driven platforms.

This is a notable departure from how advertising has historically been introduced elsewhere on the internet. Rather than retrofitting ads into an existing product and adjusting incentives later, OpenAI is attempting to define the rules up front.

Whether those rules hold over time is an open question. But the clarity of the initial framework suggests OpenAI understands the risk of getting this wrong.

What Early Ad Formats Tell Us

OpenAI shared two examples of the ad formats it plans to test inside ChatGPT.

In the first example, a ChatGPT response provides recipe ideas for a Mexican dinner party. Below the response, a sponsored product recommendation appears for a grocery item. The ad is clearly labeled and visually separated from the organic answer.

Image credit: openai.com

In the second example, ChatGPT responds to a conversation about traveling to Santa Fe, New Mexico. A sponsored lodging listing appears below the response, labeled as sponsored. The example also shows a follow-up chat screen, indicating that users can continue interacting with ChatGPT after seeing the ad.

Image credit: openai.com

In both examples, the ads appear at the bottom of ChatGPT’s responses and are presented as separate from the main answer. OpenAI stated that these formats are part of its initial ad testing and may change as testing progresses.

Why This Matters for Advertisers

This is not something advertisers can plan for just yet.

There are no announced buying models, no targeting details, no measurement framework, and no indication of when access might expand beyond testing. OpenAI has been clear that this is not an open marketplace at the moment.

Still, the implications are hard to ignore. Ads placed alongside high-intent, problem-solving conversations could eventually represent a different kind of discovery environment. One where usefulness matters more than volume, and where poor creative or loose targeting would feel immediately out of place.

If this becomes a real channel, it is unlikely to reward the same tactics that work in search or social today.

How Marketers Are Reacting So Far

Early industry reaction has been measured, not alarmist.

Most commentary acknowledges that advertising inside ChatGPT was inevitable at this scale.

Lily Ray said she is curious to “see how this change impacts user experience.”

Most commenters on her post were not shocked by the announcement.

There is also skepticism, particularly around whether relevance can be maintained over time without pressure to expand inventory. That skepticism is warranted. History suggests that once ads work, the temptation to scale them follows.

For now, though, this feels less like an ad platform launch and more like OpenAI testing whether ads can exist inside a conversational interface without changing how people trust the product.

The Bigger Signal for AI Platforms

For users, OpenAI is expanding access while trying to preserve the trust that has made ChatGPT widely used. Introducing ads without blurring the line between answers and monetization sets a high bar, especially for a product people rely on for personal and professional tasks.

Outside of ChatGPT itself, this update shows how AI-first products may think about revenue differently than search or social networks. Ads are positioned as a way to support access, not as the product, with paid tiers remaining central.

OpenAI says it will adjust how ads appear based on user feedback once testing begins in the U.S.

For now, this is a limited test rather than a full advertising launch. Whether those boundaries hold will matter, not just for ChatGPT, but for how monetization inside conversational interfaces is expected to work.

WordPress Membership Plugin Flaw Exposes Sensitive Stripe Data via @sejournal, @martinibuster

An advisory was published about a vulnerability discovered in the Membership Plugin By StellarWP that exposes sensitive Stripe payment setup data on WordPress sites using the plugin. The flaw can be exploited by unauthenticated attackers and is rated 8.2 (High) on the CVSS severity scale.

Membership Plugin By StellarWP

The Membership Plugin – Restrict Content By StellarWP is used by WordPress sites to manage paid and private content. It enables site owners to restrict access to pages, posts, or other resources so that only logged-in users or paying members can view them and manage what non-paying site visitors can see. The plugin is commonly deployed on membership and subscription-based sites.

Vulnerable to Unauthenticated Attackers

The Wordfence advisory states that the vulnerability can be exploited by unauthenticated attackers, meaning no login or WordPress user account is required to launch an attack. Because user permission roles play no part in whether the issue can be triggered, the vulnerability is easier to exploit, and therefore more dangerous.

What the Vulnerability Is

The issue stems from missing security checks related to Stripe payment handling. Specifically, the plugin failed to properly protect Stripe SetupIntent data.

A Stripe SetupIntent is used during checkout to collect and save a customer’s payment method for future use. Each SetupIntent includes a client_secret value that is intended to be shared only with the customer during a checkout or account setup flow.

The official Wordfence advisory explains:

“The Membership Plugin – Restrict Content plugin for WordPress is vulnerable to Missing Authentication in all versions up to, and including, 3.2.16 via the ‘rcp_stripe_create_setup_intent_for_saved_card’ function due to missing capability check.

Additionally, the plugin does not check a user-controlled key, which makes it possible for unauthenticated attackers to leak Stripe SetupIntent client_secret values for any membership.”

According to Stripe’s official documentation, the Setup Intents API is used to set up a payment method for future charges without creating an immediate payment. A SetupIntent includes a client_secret. Stripe’s documentation states that client_secret values should not be stored, logged, or exposed to anyone other than the intended customer.

This is how Stripe’s documentation explains the purpose of the Setup Intents API:

“Use the Setup Intents API to set up a payment method for future payments. It’s similar to a payment, but no charge is created.

The goal is to have payment credentials saved and optimized for future payments, meaning the payment method is configured correctly for any scenario. When setting up a card, for example, it may be necessary to authenticate the customer or check the card’s validity with the customer’s bank. Stripe updates the SetupIntent object throughout that process.”

Stripe documentation also explains that client_secret values are used client-side to complete payment-related actions and are intended to be passed securely from the server to the browser. Stripe states that these values should not be stored, logged, or exposed to anyone other than the relevant customer.

This is how Stripe’s documentation explains the client_secret value:

“client_secret
The client secret of this Customer Session. Used on the client to set up secure access to the given customer.

The client secret can be used to provide access to customer from your frontend. It should not be stored, logged, or exposed to anyone other than the relevant customer. Make sure that you have TLS enabled on any page that includes the client secret.”

Because the plugin did not enforce the appropriate protections, Stripe SetupIntent client_secret values could be exposed.

What this means in real life is that Stripe payment setup data associated with memberships was accessible beyond its intended scope.
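The class of flaw can be pictured with a minimal sketch in Python (the plugin itself is written in PHP, and this is not its actual code). The function names, the stored secret, and the user model below are all hypothetical; the point is the contrast between a handler that returns a SetupIntent client_secret to any caller and one that verifies the caller first.

```python
# Hypothetical stored client_secret values, keyed by membership ID.
SECRETS = {"membership-42": "seti_secret_abc123"}

def vulnerable_endpoint(membership_id, user=None):
    # No authentication or key check: any unauthenticated caller can
    # read any membership's SetupIntent client_secret.
    return SECRETS.get(membership_id)

def patched_endpoint(membership_id, user=None):
    # The fix adds the missing checks: the caller must be logged in
    # and must own the membership they are asking about.
    if user is None or user.get("membership_id") != membership_id:
        raise PermissionError("not authorized")
    return SECRETS.get(membership_id)
```

An unauthenticated call to `vulnerable_endpoint` hands back the secret, while the same call against `patched_endpoint` is rejected, which is the behavioral difference the patch introduces.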

Affected Versions

The vulnerability affects all versions of the plugin up to and including version 3.2.16. Wordfence assigned the issue a CVSS score of 8.2, reflecting the sensitivity of the exposed data and the fact that no authentication is required to trigger the issue.

A score in this range indicates a high-severity vulnerability that can be exploited remotely without special access, increasing the importance of timely updates for sites that rely on the plugin for managing paid memberships or restricted content.

Patch Availability

A patch is available now: the issue was fixed in version 3.2.17 of the plugin. The update adds missing nonce and permission checks related to Stripe payment handling, addressing the conditions that allowed SetupIntent client_secret values to be exposed. A nonce is a temporary security token that ensures a specific action on a WordPress website was intentionally requested by the user and not by a malicious attacker.
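As a rough illustration of the nonce idea, here is an HMAC-based sketch in Python. WordPress’s actual implementation differs in its details (it has its own hashing scheme and tick lengths), and the secret key, time window, and action names below are assumptions made for the example.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"site-secret-key"  # hypothetical per-site secret

def create_nonce(user_id: str, action: str, window: int = 3600) -> str:
    # Tie the token to a specific user, a specific action, and a
    # coarse time window so it eventually expires.
    tick = int(time.time()) // window
    msg = f"{user_id}:{action}:{tick}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:12]

def verify_nonce(nonce: str, user_id: str, action: str, window: int = 3600) -> bool:
    # A valid nonce proves the request came from a form or link the
    # site generated for this user and this action recently.
    expected = create_nonce(user_id, action, window)
    return hmac.compare_digest(nonce, expected)
```

Because the token is bound to both the user and the action, a nonce minted for adding a payment method cannot be replayed by a different user or reused for a different action.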

The official Membership Plugin changelog discloses the updates:

“3.2.17
Security: Added nonce and permission checks for adding Stripe payment methods.
3.2.16
Security: Improved escaping and sanitization for [restrict] and [register_form] shortcode attributes.”

What Site Owners Should Do

Sites using Membership Plugin – Restrict Content should update to version 3.2.17 or newer.

Failure to update the plugin will leave the Stripe SetupIntent client_secret data exposed to unauthenticated attackers.

Featured Image by Shutterstock/file404

Google Health AI Overviews Cite YouTube More Than Any Hospital Site via @sejournal, @MattGSouthern

Google’s AI Overviews may be relying on YouTube more than official medical sources when answering health questions, according to new research from SEO platform SE Ranking.

The study analyzed 50,807 German-language health prompts and keywords, captured in a one-time snapshot from December using searches run from Berlin.

The report lands amid renewed scrutiny of health-related AI Overviews. Earlier this month, The Guardian published an investigation into misleading medical summaries appearing in Google Search. The outlet later reported Google had removed AI Overviews for some medical queries.

What The Study Measured

SE Ranking’s analysis focused on which sources Google’s AI Overviews cite for health-related queries. In that dataset, the company says AI Overviews appeared on more than 82% of health searches, making health one of the categories where users are most likely to see a generated summary instead of a list of links.

The report also cites consumer survey findings suggesting people increasingly treat AI answers as a substitute for traditional search, including in health. It cites figures including 55% of chatbot users trusting AI for health advice and 16% saying they’ve ignored a doctor’s advice because AI said otherwise.

YouTube Was The Most Cited Source

Across SE Ranking’s dataset, YouTube accounted for 4.43% of all AI Overview citations, or 20,621 citations out of 465,823.

The next most cited domains were ndr.de (14,158 citations, 3.04%) and MSD Manuals (9,711 citations, 2.08%), according to the report.

The authors argue that the ranking matters because YouTube is a general-purpose platform with a mixed pool of creators. Anyone can publish health content there, including licensed clinicians and hospitals, but also creators without medical training.

To check what the most visible YouTube citations looked like, SE Ranking reviewed the 25 most-cited YouTube videos in its dataset. It found 24 of the 25 came from medical-related channels, and 21 of the 25 clearly noted the content was created by a licensed or trusted source. It also cautioned that this set represents less than 1% of all YouTube links cited by AI Overviews.

Government & Academic Sources Were Rare

SE Ranking categorized citations into “more reliable” and “less reliable” groups based on the type of organization behind each source.

It reports that 34.45% of citations came from the more reliable group, while 65.55% came from sources “not designed to ensure medical accuracy or evidence-based standards.”

Within the same breakdown, academic research and medical journals accounted for 0.48% of citations, German government health institutions accounted for 0.39%, and international government institutions accounted for 0.35%.

AI Overview Citations Often Point To Different Pages Than Organic Search

The report compared AI Overview citations to organic rankings for the same prompts.

While SE Ranking found that 9 out of 10 domains overlapped between AI citations and frequent organic results, it says the specific URLs frequently diverged. Only 36% of AI-cited links appeared in Google’s top 10 organic results, 54% appeared in the top 20, and 74% appeared somewhere in the top 100.

The biggest domain-level exception in its comparison was YouTube. YouTube ranked first in AI citations but only 11th in organic results in its analysis, appearing 5,464 times as an organic link compared to 20,621 AI citations.

How This Connects To The Guardian Reporting

The SE Ranking report explicitly frames its work as broader than spot-checking individual responses.

“The Guardian investigation focused on specific examples of misleading advice. Our research shows a bigger problem,” the authors wrote, arguing that AI health answers in their dataset relied heavily on YouTube and other sites that may not be evidence-based.

Following The Guardian’s reporting, the outlet reported that Google removed AI Overviews for certain medical queries.

Google’s public response, as reported by The Guardian, emphasized ongoing quality work while also disputing aspects of the investigation’s conclusions.

Why This Matters

This report adds a concrete data point to a problem that’s been easier to talk about in the abstract.

I covered The Guardian’s investigation earlier this month, and it raised questions about accuracy in individual examples. SE Ranking’s research tries to show what the source mix looks like at scale.

Visibility in AI Overviews may depend on more than being the most prominent “best answer” in organic search. SE Ranking found many cited URLs didn’t match top-ranking pages for the same prompts.

The source mix also raises questions about what Google’s systems treat as “good enough” evidence for health summaries at scale. In this dataset, government and academic sources barely showed up compared to media platforms and a broad set of sites with weaker reliability standards.

That’s relevant beyond SEO. The Guardian reporting showed how high-stakes the failure modes can be, and Google’s pullback on some medical queries suggests the company is willing to disable certain summaries when the scrutiny gets intense.

Looking Ahead

SE Ranking’s findings are limited to German-language queries in Germany and reflect a one-time snapshot, which the authors acknowledge may vary over time, by region, and by query phrasing.

Even with that caveat, the combination of this source analysis and the recent Guardian investigation puts more focus on two open questions. The first is how Google weights authority versus platform-level prominence in health citations. The second is how quickly it can reduce exposure when specific medical query patterns draw criticism.


Featured Image: Yurii_Yarema/Shutterstock

All In One SEO WordPress Vulnerability Affects Over 3 Million Sites via @sejournal, @martinibuster

A security vulnerability was discovered in the popular All in One SEO (AIOSEO) WordPress plugin that made it possible for low-privileged users to access a site’s global AI access token, potentially allowing attackers to misuse the plugin’s artificial intelligence features, generate content, or consume credits on the affected site’s account. The plugin is installed on more than 3 million WordPress websites, making the exposure significant.

All in One SEO WordPress Plugin (AIOSEO)

All in One SEO is one of the most widely used WordPress SEO plugins, installed on over 3 million websites. It helps site owners manage search engine optimization tasks such as generating metadata, creating XML sitemaps, and adding structured data, and it provides AI-powered tools that assist with writing titles, descriptions, blog posts, FAQs, and social media posts, as well as generating images.

Those AI features rely on a site-wide AI access token that allows the plugin to communicate with AIOSEO’s external AI services.

Missing Capability Check

According to Wordfence, the vulnerability was caused by a missing permission check on a specific REST API endpoint used by the plugin, which enabled users with Contributor-level access to view the global AI access token.

In the context of a WordPress website, an API (Application Programming Interface) acts like a bridge between the WordPress website and other software applications (including external apps like AIOSEO’s AI content generator), enabling them to securely communicate and share data with one another. A REST endpoint is a URL that exposes an interface to functionality or data.

The flaw was in the following REST API endpoint:

/aioseo/v1/ai/credits

That endpoint is meant to return information about a site’s AI usage and remaining credits. However, the plugin did not perform a capability check to verify whether the user making the request, such as someone logged in with Contributor-level access, was actually allowed to see that data.

Because of that, any logged-in user with Contributor-level access or higher could call the endpoint and retrieve the site’s global AI access token.

Wordfence describes the flaw like this:

“This makes it possible for authenticated attackers, with Contributor-level access and above, to disclose the global AI access token.”

In short, the REST API endpoint’s implementation omitted a permission check, which allowed anyone with Contributor-level access to see sensitive data.

In WordPress, REST API routes are supposed to include capability checks that ensure only authorized users can access them. In this case, that check was missing, so the plugin treated Contributors the same as administrators when returning the AI token.
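The missing check can be pictured with a small Python sketch (WordPress plugins are written in PHP, so this is conceptual only). The role-to-capability map and the `manage_options` capability mirror WordPress conventions, but the token value and endpoint shape are hypothetical.

```python
# Map WordPress-style roles to their capabilities (simplified).
ROLE_CAPS = {
    "administrator": {"manage_options", "edit_posts"},
    "contributor": {"edit_posts"},
}

SITE_AI_TOKEN = "ai-token-xyz"  # hypothetical site-wide credential

def credits_endpoint(role):
    # Patched behavior: a REST route should run a capability check
    # (WordPress calls this a permission_callback) before returning
    # sensitive data such as the global AI token.
    if "manage_options" not in ROLE_CAPS.get(role, set()):
        raise PermissionError("rest_forbidden")
    return {"credits_remaining": 100, "access_token": SITE_AI_TOKEN}

# The vulnerable version simply omitted the capability check, so a
# Contributor received the same response as an administrator.
```

With the check in place, an administrator gets the credits data while a Contributor is refused, which is the distinction the vulnerable endpoint failed to draw.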

Why The Vulnerability Is Problematic

In WordPress, the Contributor level role is one of the lowest privilege levels. Many sites grant Contributor level access to multiple people so that they can submit article drafts for review and publication.

By exposing the global AI token to those users, the plugin may have effectively handed out a site-wide credential that controls access to its AI features. Exposure of that token creates two risks:

1. Unauthorized AI Usage
The token functions as a site wide credential that authorizes AI requests. If an attacker obtains it, they could potentially use it to generate AI content through the affected site’s account, consuming whatever credits or usage limits are associated with that token.

2. Service Depletion
An attacker could automate requests using the exposed token to exhaust the site’s available AI quota. That would prevent site administrators from using the AI features they rely on, effectively creating a denial of service for the plugin’s AI tools.

Even though the vulnerability does not allow direct code execution, leaking a site-wide API token still represents a possible billing risk.

Part Of A Broader Pattern Of Vulnerabilities

This is not the first time All In One SEO has shipped with vulnerabilities related to missing authorization or low-privilege access. According to Wordfence, the plugin has had six vulnerabilities disclosed in 2025 alone, many of which allowed Contributor or Subscriber level users to access or modify data they should not have been able to access.

Those issues included SQL injection, information disclosure, arbitrary media deletion, missing authorization checks, sensitive data exposure, and stored cross-site scripting. The recurring theme across those reports is improper permission enforcement for low-privilege users, the same underlying class of flaw that led to the AI token exposure in this case.

Six vulnerabilities in one year is a high count for an SEO plugin: by comparison, the Yoast SEO plugin had zero disclosed vulnerabilities in 2025, RankMath had four, and Squirrly SEO had only three.

Screenshot Of Six AIOSEO Vulnerabilities In 2025

How The Vulnerability Was Fixed

The vulnerability affects all versions of All in One SEO up to and including 4.9.2. It was addressed in version 4.9.3, which included a security update described in the official plugin changelog by the plugin developers as:

“Hardened API routes to prevent AI access token from being exposed.”

That change corresponds directly to the REST API flaw identified by Wordfence.

What Site Owners Should Do

Anyone running All in One SEO should update to version 4.9.3 or newer as soon as possible. Sites that allow multiple external contributors are especially exposed since low-privilege accounts could access the site’s AI token on vulnerable versions.

Featured Image by Shutterstock/Shutterstock AI Generator

Survey: Publishers Expect Search Traffic To Fall Over 40% via @sejournal, @MattGSouthern

The Reuters Institute for the Study of Journalism has published its annual predictions report based on a survey of 280 senior media leaders across 51 countries and territories.

The report suggests publishers are preparing for two potential threats: generative AI tools, and creators who attract audiences with personality-led formats.

Note that the Reuters Institute survey reflects a strategic group of senior leaders. It’s not a representative sample of the entire industry.

What The Report Found

Search Traffic Is The Biggest Near-Term Concern

Survey respondents expect search engine traffic to decline by more than 40% over the next three years as AI-driven answers expand.

The report cites Chartbeat data showing aggregate Google Search traffic to hundreds of news sites has already started to dip. Lifestyle-focused publishers say they’ve been hit especially hard by Google’s AI Overviews rollout.

That comes on top of longer-running platform declines. The report notes referral traffic to news sites from Facebook fell 43% over the last three years, while referrals from X fell 46% over the same period.

Publishers Plan To Invest In Differentiation

In response to traffic pressure and AI summarization, publishers say they’ll invest more in original investigations, on-the-ground reporting, contextual analysis, and human stories.

Leaders surveyed say they plan to scale back service journalism and evergreen content, which many expect AI chatbots to commoditize.

Video & Off-Platform Distribution Rising

Publishers expect to invest more in video, including “watch tabs,” and more in audio formats such as podcasts. Text output is less of a priority.

On distribution, YouTube is the main off-platform channel cited in the report, alongside TikTok and Instagram.

Publishers are also trying to work out how to navigate distribution through AI platforms such as OpenAI’s ChatGPT, Google’s Gemini, and Perplexity.

Subscriptions Lead, Licensing Is Growing

For commercial publishers, paid models like subscriptions and memberships are the top focus. There’s also renewed interest in native advertising and face-to-face events as publishers look for revenue beyond traditional display ads.

Publishers are also looking at licensing and other platform payments. The report notes interest in platform funding has nearly doubled over the last two years as AI companies began offering large deals.

Why This Matters

I’ve watched publishers cycle through traffic crises before. When Facebook’s algorithm changes hit in 2018, the industry scrambled, and eventually most publishers adjusted by leaning harder into search. Search was supposed to be the stable channel.

That assumption is what this report challenges. A projected decline of 40%+ over three years has become a planning number, affecting budgets, headcount, and content strategy.

The content mix change warrants attention. When 280 senior media leaders say they’re scaling back service journalism and evergreen content, it signals which pages they think will still drive traffic in an AI-summarized environment. Original reporting and analysis survive because chatbots can’t replicate them. Commodity information doesn’t, because it can be synthesized without a click.

The doubling of interest in licensing deals over two years is the other number that jumped out to me. When AI companies started writing checks, the conversation changed from “should we license” to “what’s our leverage.”

This report is useful as a benchmark for where the industry’s head is at, even if individual outcomes vary.

Looking Ahead

Traffic from search and AI aggregators is unlikely to disappear, but the terms of trade are still being negotiated.

That includes how citations work, what licensing looks like at scale, and whether revenue-sharing becomes a standard arrangement.


Featured Image: Roman Samborskyi/Shutterstock

YouTube Expands Monetization For Some Controversial Issues via @sejournal, @MattGSouthern

YouTube is updating its Advertiser-friendly content guidelines to allow more videos about certain “controversial issues” to earn full ad revenue, as long as the content is non-graphic and presented in a dramatized or discussion-based context.

The change was outlined in a Creator Insider video and is reflected in YouTube’s Help Center policy language.

What’s Changing

YouTube is loosening monetization restrictions for videos focused on controversial issues that advertisers may define as sensitive, including abortion, self-harm, suicide, and domestic and sexual abuse, when the content is “dramatized or discussed in a non-graphic manner.”

YouTube’s Help Center update describes the change, stating that content focused on “Controversial issues” is now eligible to earn ad revenue when it’s non-graphic and dramatized, and that this replaces a previous policy that limited monetization regardless of graphicness or whether content was fictional.

The current “Controversial issues” policy section also explicitly includes “non-graphic but descriptive or dramatized content” related to domestic abuse, self-harm, suicide, adult sexual abuse, abortion, and sexual harassment under the category that “can earn ad revenue.”

How YouTube Defines “Controversial Issues”

YouTube defines “Controversial issues” as topics associated with trauma or abuse, and notes the policy may apply even if the content is purely commentary.

The Help Center list includes child abuse, adult sexual abuse, sexual harassment, self-harm, suicide, eating disorders, domestic abuse, and abortion.

It also distinguishes between content that is “focal” versus “fleeting.” A passing reference is not considered a focus, whereas a sustained segment or a full-video discussion is.

Why This Matters

This update can change whether videos qualify for full ad revenue.

YouTube is drawing a clearer line between non-graphic dramatization or discussion (more likely to be eligible) and content that includes graphic depictions or very explicit detail (still likely to be restricted).

As with past advertiser-friendly updates, real-world outcomes can depend on how a specific upload is categorized during review, including signals from the video itself plus title and thumbnail.

Looking Ahead

It’s unknown whether previously limited videos will be re-reviewed automatically, or only on appeal.

Regardless, you shouldn’t wait for YouTube to do the work. Now is a great time to submit an appeal if your videos were affected by YouTube’s controversial issues policy.


Featured Image: Reyanaska/Shutterstock

Google: AI Mode Checkout Can’t Raise Prices via @sejournal, @MattGSouthern

Google is disputing claims that its new AI-powered shopping checkout work could enable what critics describe as “surveillance pricing” or other forms of overcharging.

The back-and-forth started after Lindsay Owens, executive director of consumer economics think tank Groundwork Collaborative, criticized Google’s newly announced Universal Commerce Protocol and pointed to language in its public roadmap about “cross-sell and upsell modules.”

U.S. Sen. Elizabeth Warren amplified the criticism, saying Google is “using troves of your data to help retailers trick you into spending more money.”

Google’s corporate account News from Google replied that the claims “around pricing are inaccurate,” adding that merchants are prohibited from showing higher prices on Google than what appears on their own sites.

What Triggered The Back-And-Forth

Owens wrote on X that Google’s announcement about integrating shopping into AI Mode and Gemini included “personalized upselling,” which she described as “analyzing your chat data and using it to overcharge you.”

Warren then reposted Owens’ thread and echoed the allegation in stronger terms, calling it “plain wrong” that Google would use user data to help retailers “trick you into spending more money.”

Google responded publicly on X with a thread disputing the premise.

News from Google wrote on X:

“These claims around pricing are inaccurate. We strictly prohibit merchants from showing prices on Google that are higher than what is reflected on their site, period.”

Google also addressed the “upselling” term directly:

“The term ‘upselling’ is not about overcharging. It’s a standard way for retailers to show additional premium product options that people might be interested in.”

And it added that “Direct Offers” can only move in one direction:

“‘Direct Offers’ is a pilot that enables merchants to offer a lower priced deal or add extra services like free shipping … it cannot be used to raise prices.”

Where “Upsell Modules” Shows Up

The language critics are pointing to is in the Universal Commerce Protocol roadmap, which lists “Native cross-sell and upsell modules” as an upcoming initiative, described as enabling “personalized recommendations and upsells based on user context.”

Separately, Google’s technical write-up on UCP says AI shopping experiences need support for things like “real-time inventory checks, dynamic pricing, and instant transactions” within a conversational context. The “dynamic pricing” phrasing is broad, but it is part of what critics are interpreting through a consumer protection lens.

Google’s Ads & Commerce blog post presents UCP as covering the entire shopping journey, linking it to AI Mode and Gemini, while emphasizing that retailers stay the seller of record.

Why This Matters

I have covered Google’s price accuracy enforcement going back years, including Merchant Center policies meant to prevent situations where a shopper sees one price and gets a higher one at checkout. That history is why the “prices on Google versus prices on your site” line is doing so much work in Google’s response.

The bigger picture is that Google is trying to turn AI Mode and Gemini into places where product discovery can end with a transaction. When that happens, the conversation stops being purely about relevance and starts being about pricing rules, disclosures, and what “personalization” means in practice.

Looking Ahead

If this becomes another layer of feed requirements and policy edge cases, retailers will feel it immediately. If it reduces drop-off between product discovery and checkout, Google will likely push harder to make it a default part of AI Mode shopping.


Featured Image: zikg/Shutterstock

Google Downplays GEO – But Let’s Talk About Garbage AI SERPs via @sejournal, @martinibuster

On the Search Off The Record podcast, Google’s Danny Sullivan and John Mueller offered guidance to SEOs and publishers who have questions about ranking in LLM-based search and chat, debunking the commonly repeated advice to “chunk your content.” But that’s really not the conversation Googlers should be having right now.

SEO And The Next Generation Of Search

Google used to rank content based on keyword matching, and PageRank extended that paradigm using the anchor text of links. The introduction of the Knowledge Graph in 2012 was described as a step toward ranking answers based on things (entities) in the real world. Google called this a shift from strings to things.

What’s happening today is what Google in 2012 called “the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”

So, when people say that nothing has changed with SEO, it’s true to the extent that the underlying infrastructure is still Google Search. What has changed is that the answers are long-form responses that address three or more additional questions beyond the user’s initial query.

The answer to the question of what’s different about SEO for AI is that the paradigm of optimizing for one keyword for one search result is shattered, splintered by the query fan-out.

Google’s Danny Sullivan and John Mueller took a crack at offering guidance on what SEOs should be focusing on. Do they hit the mark?

How To Write For Longform Answers

Given that Google is surfacing multi-paragraph answers, does it make sense to create content that’s organized into bite-sized chunks? How does that affect how humans read content? Will they like it or leave it?

Many SEOs recommend that publishers break the page up into “chunks,” dividing it into sections based on the intuition that AI understands content in chunks. But that’s an arbitrary approach that ignores the fact that a properly structured web page is already broken into chunks through the use of headings and HTML elements like ordered and unordered lists. A properly marked-up and formatted web page should already follow a logical structure that a human and a machine can easily understand. Duh… right?

It’s not surprising that Google’s Danny Sullivan warns SEOs and publishers to not break their content up into chunks.

Danny said:

“To go to one of the things, you know, I talked about the specific things people like, “What is the thing I need to improve.” One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?

So we don’t want you to do that. I was talking to some engineers about that. We don’t want you to do that. We really don’t. We don’t want people to have to be crafting anything for Search specifically. That’s never been where we’ve been at and we still continue to be that way. We really don’t want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net.”

Danny talked about chunking with some Google engineers, and his takeaway from that conversation is to recommend against chunking. The second takeaway is that their systems are set up to access content the way human readers do, and for that reason he says to craft the content for humans.

Avoids Talking About Search Referrals

But again, he avoids talking about what I think is the more important facet of AI search: query fan-out and its impact on referrals. Query fan-out impacts referrals because Google is ranking a handful of pages across multiple queries for every one query a user makes. But what compounds this situation, as you will see further on, is that the sites Google is ranking do not measure up.

Focus On The Big Picture

Danny Sullivan next discusses the downside of optimizing for a machine, explaining that systems eventually improve, and that usually means optimizations for machines stop working.

He explained:

“And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.

…Again, you have to make your own decisions. But I think that what you tend to see is, over time, these very little specific things are not the things that carry you through, but you know, you make your own decisions. But I think also that many people who have been in the SEO space for a very long time will see this, will recognize that, you know, focusing on these foundational goals, that’s what carries you through.”

Let’s Talk About Garbage AI Search Results

I have known Danny Sullivan for a long time and have a ton of respect for him. I know that he has publishers in mind and that he truly wants them to succeed. What I wish he would talk about is the declining traffic opportunities for subject-matter experts and the seemingly arbitrary garbage search results that Google consistently surfaces.

Subject Matter Expertise Is Missing

Google is intentionally hiding expert publications in the search results, tucking them away under the More tab. In order to find expert content, a user has to click the More tab and then click the News tab.

How Google Hides Expert Web Pages


Google’s AI Mode Promotes Garbage And Sites Lacking Expertise

This search was not cherry-picked to show poor results. This is literally the one search I did asking a legit question about styling a sweatshirt.

Google’s AI Mode cites the following pages:

1. An abandoned Medium blog from 2018 that only ever had two blog posts, both of which have broken images. That’s not authoritative.

2. An article published on LinkedIn, a business social networking website. Again, that’s not authoritative nor trustworthy. Who goes to LinkedIn for expert style advice?

3. An article about sweatshirts published on a sneaker retailer’s website. Not expert, not authoritative. Who goes to a sneaker retailer to read articles about sweatshirts?

Screenshot Of Google’s Garbage AI Results

Google Hides The Good Stuff In More > News Tab

Had Google defaulted to actual expert sites, it might have linked to an article from GQ or The New York Times, both reputable publications. Instead, Google hides the high-quality web pages under the More tab.

Screenshot Of Hidden High Quality Search Results

GEO Or SEO – It Doesn’t Matter

This whole thing about GEO or AEO and whether it’s all SEO doesn’t really matter. It’s all a bunch of hand waving and bluster. What matters is that Google is no longer ranking high quality sites and high quality sites are withering from a lack of traffic.

I see these low quality SERPs all day long and it’s depressing because there is no joy of discovery in Google Search anymore. When was the last time you discovered a really cool site that you wanted to tell someone about?

Garbage on garbage, on garbage, on top of more garbage. Google needs a reset.

How about Google brings back the original search and we can have all the hand-wavy Gemini stuff under the More tab somewhere?

Listen to the podcast here:

Featured Image by Shutterstock/Kues

WooCommerce WordPress Plugin Exploit Enables Fraudulent Charges via @sejournal, @martinibuster

A vulnerability in the popular WooCommerce Square plugin for WordPress enables unauthenticated attackers to uncover credit cards on file and make fraudulent charges. The vulnerability affects up to 80,000 installations.

WooCommerce Square WordPress Plugin

The WooCommerce Square plugin enables WordPress sites to accept payments through the Square POS and to synchronize product inventory data between Square and WooCommerce. The plugin also enables a WooCommerce merchant to support payments through Apple Pay®, Google Pay, WooCommerce Pre-Orders, and WooCommerce Subscriptions.

Insecure Direct Object Reference

The flaw arises from an Insecure Direct Object Reference (IDOR), a vulnerability that occurs when critical data, such as identification numbers, is exposed in URL parameters, enabling an attacker to manipulate those identifiers and reach data that access controls would normally protect.

The Open Worldwide Application Security Project (OWASP) defines IDOR as:

“Insecure Direct Object Reference (IDOR) is a vulnerability that arises when attackers can access or modify objects by manipulating identifiers used in a web application’s URLs or parameters. It occurs due to missing access control checks, which fail to verify whether a user should be allowed to access specific data.”

Exploiting the vulnerability does not require the attacker to obtain any authentication or elevated permissions, making it easier to launch an attack against affected websites.

According to a Wordfence advisory:

“The WooCommerce Square plugin for WordPress is vulnerable to Insecure Direct Object Reference in all versions up to, and including, 5.1.1 via the get_token_by_id function due to missing validation on a user controlled key. This makes it possible for unauthenticated attackers to expose arbitrary Square “ccof” (credit card on file) values and leverage this value to potentially make fraudulent charges on the target site.”
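The pattern the advisory describes is easy to sketch. Below is a minimal, hypothetical Python illustration, not the plugin’s actual PHP code: the data, function names, and token values are invented. It contrasts a lookup keyed only by a caller-supplied ID with a version that verifies the requester owns the record.

```python
# Hypothetical in-memory store of payment tokens ("ccof" = credit card
# on file), keyed by a numeric ID an attacker could guess or enumerate.
TOKENS = {
    101: {"owner": "alice", "ccof": "ccof:tok_alice"},
    102: {"owner": "bob", "ccof": "ccof:tok_bob"},
}

def get_token_by_id_vulnerable(token_id):
    # IDOR: returns the stored value for any ID the caller supplies,
    # with no check on who is asking.
    record = TOKENS.get(token_id)
    return record["ccof"] if record else None

def get_token_by_id_fixed(token_id, requester):
    # Fixed: an access control check verifies the authenticated
    # requester actually owns the record before returning it.
    record = TOKENS.get(token_id)
    if record is None or record["owner"] != requester:
        return None
    return record["ccof"]

# An unauthenticated attacker enumerating IDs reads another user's token:
assert get_token_by_id_vulnerable(101) == "ccof:tok_alice"
# With the ownership check, the same request is refused:
assert get_token_by_id_fixed(101, "mallory") is None
assert get_token_by_id_fixed(101, "alice") == "ccof:tok_alice"
```

The fix mirrors the general IDOR remedy: validate that the authenticated user is authorized for the specific object requested, rather than trusting a user-controlled key.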

Multiple patched versions of the WooCommerce Square plugin are available; users of the plugin should update to at least one of the following versions:

  • 4.2.3
  • 4.3.2
  • 4.4.2
  • 4.5.2
  • 4.6.4
  • 4.7.4
  • 4.8.8
  • 4.9.9
  • 5.0.1
  • 5.1.2

The vulnerability carries a CVSS severity score of 7.5, indicating a dangerous flaw that can be remotely exploited but is mitigated by a constraint that keeps it from being rated “Critical.”

Featured Image by Shutterstock/IgorZh

Apple Selects Google’s Gemini For New AI-Powered Siri via @sejournal, @MattGSouthern

Apple is partnering with Google to power its AI features, including a major Siri upgrade expected later this year.

The companies announced the multi-year collaboration on Monday. Google’s Gemini models and cloud technology will serve as the foundation for the next generation of Apple Foundation Models.

“After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users,” the joint statement said.

What’s New

The partnership makes Gemini a foundation for Apple’s next-generation models. Apple’s models will continue running on its devices and Private Cloud Compute infrastructure while maintaining what the company calls its “industry-leading privacy standards.”

Neither company disclosed the deal’s financial terms. Bloomberg previously reported Apple had discussed paying about $1 billion annually for Google AI access, though that figure remains unconfirmed for the final agreement.

By November, Bloomberg reported Apple had chosen Google over Anthropic based largely on financial terms.

Existing OpenAI Partnership Remains

Apple currently integrates OpenAI’s ChatGPT into Siri and Apple Intelligence for complex queries that draw on the model’s broader knowledge base.

Apple told CNBC the company isn’t making changes to that agreement. OpenAI did not immediately respond to a request for comment.

The distinction appears to be between the foundational models powering Apple Intelligence overall versus the external AI connection available for certain queries.

Context

The deal arrives as Google’s AI position strengthens. Alphabet surpassed Apple in market capitalization last week for the first time since 2019.

The default-search deal between Google and Apple has been under scrutiny after U.S. District Judge Amit Mehta ruled Google holds an illegal monopoly in online search and related advertising. In September 2025, he did not require Google to divest Chrome or Android.

Apple had originally planned to launch an AI-powered Siri upgrade in 2025 but delayed the release.

“It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year,” Apple said at the time.

Google introduced its upgraded Gemini 3 model late last year. CEO Sundar Pichai said in October that Google Cloud signed more deals worth over $1 billion through the first three quarters of 2025 than in the previous two years combined.

Why This Matters

I covered this partnership in November when Bloomberg first reported Apple was paying Google to build a custom Gemini model for Siri. Today’s joint statement confirms what was then unattributed sourcing.

The confirmation matters because it extends Gemini’s reach into one of the largest device ecosystems in the world. Apple has said Siri fields 1.5 billion user requests per day across more than 2 billion active devices. That installed base gives Gemini distribution Google couldn’t match through its own products alone.

The competitive signal is clearer now too. Apple evaluated Anthropic and chose Google. Eddy Cue testified in May that Apple planned to add Gemini to Siri, but today’s announcement frames it as a deeper infrastructure partnership, not just another assistant option.

If Siri becomes meaningfully more capable at answering queries directly, the implications mirror what’s happening with AI Overviews and AI Mode in search. More queries could be resolved without users reaching external websites.

Looking Ahead

The upgraded Siri is expected to roll out later in 2026. The companies haven’t provided a specific launch date.

Apple maintaining its OpenAI integration alongside the Google partnership suggests both relationships will continue, at least for now. How Apple balances these two AI providers for different use cases will become clearer as the new features launch.