WooCommerce Customer Review Plugin Vulnerability Affects 80,000+ Sites via @sejournal, @martinibuster

An advisory was issued about a vulnerability in the Customer Reviews for WooCommerce plugin, which is installed on over 80,000 websites. The flaw enables unauthenticated attackers to launch a stored cross-site scripting attack.

Customer Reviews for WooCommerce Vulnerability

The Customer Reviews for WooCommerce plugin enables users to send customers an email reminder to leave a review and also offers other features designed to increase customer engagement with a brand.

Wordfence issued an advisory about a flaw in the plugin that makes it possible for attackers to inject scripts into web pages that execute whenever a user visits the affected page.

The vulnerability is due to a failure to “sanitize” inputs and “escape” outputs. Sanitizing inputs, in this context, is a basic WordPress security measure that checks whether user-submitted data conforms to expected types and strips dangerous content such as scripts. Output escaping is another security measure that ensures any special characters output by the plugin aren’t executable.
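WordPress ships helpers for both steps, such as `sanitize_text_field()` for input and `esc_html()` for output. As a simplified illustration of the idea in Python (not the plugin’s actual code, which is PHP), a review author name might be handled like this:

```python
import html
import re

def sanitize_author(raw: str) -> str:
    """Sanitize on input: strip anything that looks like markup and trim to a sane length."""
    no_tags = re.sub(r"<[^>]*>", "", raw)
    return re.sub(r"[\r\n\t]+", " ", no_tags).strip()[:100]

def escape_for_output(value: str) -> str:
    """Escape on output: make special characters inert when rendered in HTML."""
    return html.escape(value)

attack = '<script>alert("stolen session")</script>Reviewer'
print(escape_for_output(sanitize_author(attack)))  # prints harmless text; nothing executes
```

When either step is skipped, a payload like the one above can be stored and later rendered as live script, which is the stored XSS scenario the advisory describes.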

According to the official Wordfence advisory:

“The Customer Reviews for WooCommerce plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the ‘author’ parameter in all versions up to, and including, 5.80.2 due to insufficient input sanitization and output escaping. This makes it possible for unauthenticated attackers to inject arbitrary web scripts in pages that will execute whenever a user accesses an injected page.”

Users of the plugin are advised to update to version 5.81.0 or newer.

Featured Image by Shutterstock/Brilliant Eye

Query Fan-Out Technique in AI Mode: New Details From Google via @sejournal, @MattGSouthern

In a recent interview, Google’s VP of Product for Search, Robby Stein, shared new information about how query fan-out works in AI Mode.

Although the existence of query fan-out has been previously detailed in Google’s blog posts, Stein’s comments expand on its mechanics and offer examples that clarify how it works in practice.

Background On Query Fan-Out Technique

When a person types a question into Google’s AI Mode, the system uses a large language model to interpret the query and then “fan out” multiple related searches.

These searches are issued to Google’s infrastructure and may include topics the user never explicitly mentioned.

Stein said during the interview:

“If you’re asking a question like things to do in Nashville with a group, it may think of a bunch of questions like great restaurants, great bars, things to do if you have kids, and it’ll start Googling basically.”

He described the system as using Google Search as a backend tool, executing multiple queries and combining the results into a single response with links.

This functionality is active in AI Mode, Deep Search, and some AI Overview experiences.
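Google hasn’t published implementation details, but the behavior Stein describes matches a standard fan-out pattern: derive sub-queries from the prompt, run them in parallel against a search backend, and merge the results for synthesis. The sketch below is a hypothetical illustration of that pattern only; `generate_subqueries` and `search` are placeholders, not real APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(prompt: str) -> list[str]:
    # Placeholder: in AI Mode this step is handled by a large language model.
    return [f"{prompt} restaurants", f"{prompt} bars", f"{prompt} with kids"]

def search(query: str) -> list[dict]:
    # Placeholder for a call to a search backend that returns ranked results.
    return [{"query": query, "url": "https://example.com/" + query.replace(" ", "-")}]

def fan_out(prompt: str) -> list[dict]:
    subqueries = generate_subqueries(prompt)
    with ThreadPoolExecutor() as pool:  # issue the sub-searches in parallel
        result_sets = pool.map(search, subqueries)
        merged = [hit for results in result_sets for hit in results]
    return merged  # a language model would then synthesize one response with links

print(fan_out("things to do in Nashville with a group"))
```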

Scale And Scope

Stein said AI-powered search experiences, including query fan-out, now serve approximately 1.5 billion users each month. This includes both text-based and multimodal input.

The underlying data sources include traditional web results as well as real-time systems like Google’s Shopping Graph, which updates 2 billion times per hour.

He referred to Google Search as “the largest AI product in the world.”

Deep Search Behavior

In cases where Google’s systems determine a query requires deeper reasoning, a feature called Deep Search may be triggered.

Deep Search can issue dozens or even hundreds of background queries and may take several minutes to complete.

Stein described using it to research home safes, a purchase he said involved unfamiliar factors like fire resistance ratings and insurance implications.

He explained:

“It spent, I don’t know, like a few minutes looking up information and it gave me this incredible response. Here are how the ratings would work and here are specific safes you can consider and here’s links and reviews to click on to dig deeper.”

AI Mode’s Use Of Internal Tools

Stein mentioned that AI Mode has access to internal Google tools, such as Google Finance and other structured data systems.

For example, a stock comparison query might involve identifying relevant companies, pulling current market data, and generating a chart.

Similar processes apply to shopping, restaurant recommendations, and other query types that rely on real-time information.

Stein stated:

“We’ve integrated most of the real-time information systems that are within Google… So it can make Google Finance calls, for instance, flight data… movie information… There’s 50 billion products in the shopping catalog… updated I think 2 billion times every hour or so. So all that information is able to be used by these models now.”
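Google hasn’t described the routing mechanics, but Stein’s comments map onto a familiar tool-calling pattern: the system picks a structured data source, queries it, and folds the result into its answer. The sketch below is a purely hypothetical stand-in, with keyword rules in place of the model’s own decision-making, and the tool names simply mirror the systems Stein mentions.

```python
# Hypothetical tool registry for illustration; these are not real Google APIs.
TOOLS = {
    "finance": lambda q: f"[stock data for: {q}]",
    "flights": lambda q: f"[flight data for: {q}]",
    "shopping": lambda q: f"[product listings for: {q}]",
}

def route_query(query: str) -> str:
    # In AI Mode the model decides which tool to call; simple keyword rules
    # stand in for that decision here.
    lowered = query.lower()
    if "stock" in lowered or "share price" in lowered:
        return TOOLS["finance"](query)
    if "flight" in lowered:
        return TOOLS["flights"](query)
    if "buy" in lowered or "price of" in lowered:
        return TOOLS["shopping"](query)
    return "[fall back to standard web search]"

print(route_query("Compare the stock performance of two chipmakers"))
```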

Technical Similarities To Google’s Patent

Stein described a process similar to a Google patent from December about “thematic search.”

The patent outlines a system that creates sub-queries based on inferred themes, groups results by topic, and generates summaries using a language model. Each theme can link to source pages, but summaries are compiled from multiple documents.

This approach differs from traditional search ranking by organizing content around inferred topics rather than specific keywords. While the patent doesn’t confirm implementation, it closely matches Stein’s description of how AI Mode functions.

Looking Ahead

With Google explaining how AI Mode generates its own searches, the boundaries of what counts as a “query” are starting to blur.

This creates challenges not just for optimization, but for attribution and measurement.

As search behavior becomes more fragmented and AI-driven, marketers may need to focus less on ranking for individual terms and more on being included in the broader context AI pulls from.

Listen to the full interview below:


Featured Image: Screenshot from youtube.com/@GoogleDevelopers, July 2025. 

WordPress AI Engine Plugin Vulnerability Affects Up To 100,000 Websites via @sejournal, @martinibuster

A security advisory was issued for the AI Engine WordPress plugin, which is installed on over 100,000 websites. It is the fourth vulnerability disclosed in the plugin this month. Rated 8.8 out of 10, the vulnerability enables attackers with only subscriber-level authentication to upload malicious files when the REST API is enabled.

AI Engine Plugin: Fifth Vulnerability In 2025

This is the fourth vulnerability discovered in the AI Engine plugin in July, following the first one of the year discovered in June, making a total of five vulnerabilities discovered in the plugin so far in 2025. There were nine vulnerabilities discovered in 2024, one of which was rated 9.8 because it enabled unauthenticated attackers to upload malicious files, plus another rated 9.1 that also enabled arbitrary uploads.

Authenticated (Subscriber+) Arbitrary File Upload

The latest vulnerability enables authenticated attackers to upload arbitrary files. What makes this exploit more dangerous is that it requires only subscriber-level authentication for an attacker to take advantage of the security weakness. That isn’t as severe as a vulnerability that requires no authentication at all, but it is still rated 8.8 out of 10.

Wordfence describes the vulnerability as being due to missing file type validation in a function related to the REST API in versions 2.9.3 and 2.9.4.

File type validation is a security measure typically used within WordPress to make sure that the content of a file matches the type of file being uploaded to the website.
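Conceptually, the check compares what a file claims to be (its extension or declared MIME type) with what its bytes actually contain. The sketch below is a simplified illustration, not the plugin’s fix: it allows only a few image types, verifies their magic bytes, and rejects anything containing embedded PHP.

```python
import os

# Allowed extensions mapped to the "magic bytes" genuine files of that type start with.
ALLOWED_SIGNATURES = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".jpg": b"\xff\xd8\xff",
    ".gif": b"GIF8",
}

def is_upload_allowed(filename: str, content: bytes) -> bool:
    ext = os.path.splitext(filename)[1].lower()
    signature = ALLOWED_SIGNATURES.get(ext)
    if signature is None:
        return False                      # extension isn't on the allowlist
    if not content.startswith(signature):
        return False                      # bytes don't match the claimed file type
    if b"<?php" in content:
        return False                      # embedded PHP must never reach the server
    return True

print(is_upload_allowed("avatar.png", b"<?php system($_GET['cmd']); ?>"))        # False
print(is_upload_allowed("avatar.png", b"\x89PNG\r\n\x1a\n" + b"rest of image"))  # True
```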

According to Wordfence:

“This makes it possible for authenticated attackers, with Subscriber-level access and above, to upload arbitrary files on the affected site’s server when the REST API is enabled, which may make remote code execution possible.”

Users of the AI Engine plugin are advised to update to version 2.9.5 or newer.

The plugin changelog for version 2.9.5 shares what was updated:

“Fix: Resolved a security issue related to SSRF by validating URL schemes in audio transcription and sanitizing REST API parameters to prevent API key misuse.

Fix: Corrected a critical security vulnerability that allowed unauthorized file uploads by adding strict file type validation to prevent PHP execution.”

Featured Image by Shutterstock/Jiri Hera

B2B Marketing Is Starting to Look a Lot Like B2C (And It’s Working) via @sejournal, @MattGSouthern

B2B marketers are taking a page from the B2C playbook and seeing real results.

According to LinkedIn’s B2B Marketing Benchmark Report, strategies once considered too informal for business audiences, like short-form video and influencer collabs, are now central to building trust and driving growth.

The study, based on responses from 1,500 senior marketers across six countries, found that 94% believe trust is the key to success in B2B.

But many brands are moving away from traditional lead-gen tactics and turning instead to emotionally resonant content and credible voices.

Lee Moskowitz, Growth Marketer and Podcast Host at Lee2B, is quoted in the report:

“We’re in an era of ‘AI slop,’ long sales cycles and growing buying committees. Brands need to build trust, prove their expertise and earn their place in the buying process.”

This shift toward more consumer-style tactics is evident in the adoption of video content across B2B teams.

B2B Video Marketing Hits a Tipping Point

Video is now foundational to B2B marketing, with 78% of marketers including it in their programs and over half planning to increase investments in the coming year.

Screenshot from: youtube.com/@LinkedInMktg, July 2025.

The most successful teams aren’t using video in isolation; they’re building multi-channel strategies that map to different funnel stages.

According to LinkedIn’s data, marketers with a video strategy are:

  • 2.2x more likely to say their brand is well trusted
  • 1.8x more likely to say their brand is well known

Popular formats include short-form social clips, brand storytelling, and customer testimonials. Content types long associated with B2C engagement are now proving effective in B2B.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

AJ Wilcox, founder of B2Linked, states in the report:

“Capturing that major B2B deal requires trust, and nothing builds trust faster than personal video content. I feel more trusting of a brand after watching a 1-min clip of their founder talking than if I read five of their blog posts.”

B2B Influencer Marketing Moves Into the Mainstream

Fifty-five percent of marketers in the study said they now work with influencers. The top reasons include trust, authenticity, and credibility.

B2B influencers are typically subject matter experts, practitioners, or respected voices in their fields. And their impact appears to be tied to business outcomes: 84% of marketers using influencer marketing expect budget increases next year, compared to just 58% of non-users.

Brendan Gahan, CEO and Co-Founder of Creator Authority, states:

“This feels like a YouTube moment. LinkedIn is entering that same phase now. It already generates more weekly comments than Reddit. Its creator ecosystem is thriving and growing fast.”

Buyers trust people they relate to. Marketers are shifting their influencer strategies to reflect that, prioritizing alignment and authority over follower counts.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

What This Means

Trust signals are becoming more important across the board, especially as search engines continue to emphasize expertise, authority, and trust (E-E-A-T). Relying on blog posts alone may no longer be enough to demonstrate what your brand stands for.

Video gives you a way to show expertise in a more personal, credible way, whether it’s a founder explaining your product or a customer sharing their experience.

For long sales cycles and complex buying decisions, what’s working now looks a lot more human: authentic voices, visible experts, and content that’s easy to connect with.


Featured Image: Roman Samborskyi/Shutterstock

Research Shows Differences In ChatGPT And Google AIO Answers via @sejournal, @martinibuster

New research from enterprise search marketing platform BrightEdge discovered differences in how Google and ChatGPT surface content. These differences matter to digital marketers and content creators because they show how content is recommended by each system. Recognizing the split enables brands to adapt their content strategies to stay relevant across both platforms.

BrightEdge’s findings were surfaced through an analysis of B2B technology, education, healthcare, and finance queries. It’s possible to cautiously extrapolate the findings to other niches where there could be divergences in how Google and ChatGPT respond, but that’s highly speculative, so this article won’t do that.

Core Differences: Task Vs. Information Orientation

BrightEdge’s research discovered that ChatGPT and Google AI Overviews take two different approaches to helping users take action. ChatGPT is more likely to recommend tools and apps, behaving in the role of a guide for making immediate decisions. Google provides informational content that encourages users to read before acting. This difference matters for SEO because it enables content creators and online stores to understand how their content is processed and presented to users of each system.

BrightEdge explains:

“In task-oriented prompts, ChatGPT overwhelmingly suggests tools and apps directly, while Google continues to link to informational content. While Google thrives as a research assistant, ChatGPT acts like a trusted coach for decision making, and that difference shapes which tool users instinctively choose for different needs.”

Divergence On Action-Oriented Queries

ChatGPT and Google tend to show similar kinds of results when users are querying for comparisons, but the results begin to diverge when the user intent implies they want to act. BrightEdge found that prompts about credit card comparisons or learning platforms generated similar kinds of results.

Questions with an action intent, like “how to create a budget” or “learn Python,” lead to different answers. ChatGPT appears to treat action intent prompts as requiring a response with tools, while Google treats them as requiring information.

BrightEdge notes that Healthcare has the highest rate of divergence:

“At 62% divergence, healthcare demonstrates the most significant split between platforms.

  • When prompts pertain to symptoms or medical information, both ChatGPT and Google will mention the CDC and The Mayo Clinic.
  • However, when prompted to help with things like “How to find a doctor,” ChatGPT pushes users towards Zocdoc, while Google points to hospital directories.”

The B2B Technology niche has the second-highest level of divergence:

“With 47% divergence, B2B tech shows substantial platform differences.

  • When comparing technology, such as cloud platforms, both suggest AWS and Azure.
  • When asked “How to deploy things (such as specific apps),” ChatGPT relies on tools like Kubernetes and the AWS CLI, while Google offers tutorials and Stack Overflow.”

Education follows closely behind B2B technology:

“At 45% divergence, education follows the same trend.

  • When comparing “Best online learning platforms,” both platforms surface Coursera, EdX, and LinkedIn Learning.
  • When a user’s prompt pertains to learning a skill such as “How to learn Python,” ChatGPT recommends Udemy, whereas Google directs users to user-generated content hubs like GitHub and Medium.”

Finance shows the lowest levels of divergence, at 39%.

BrightEdge concludes that this represents a “fundamental shift” in how AI platforms interpret intent, which means that marketers need to examine the intent behind the search results for each platform and make content strategy decisions based on that research.

Tools Versus Topics

BrightEdge uses the example of the prompt “What are some resources to help plan for retirement?” to show how Google and ChatGPT differ. ChatGPT offers calculators and tools that users can act on, while Google suggests topics for further reading.

Screenshot Of ChatGPT Responding With Financial Tools

There’s a clear difference in the search experience for users. Marketers, SEOs, and publishers should consider how to meet both types of expectations: practical, action-based responses from ChatGPT and informational content from Google.

Takeaways

  • Split In User Intent Interpretation:
    Google interprets queries as requests for information, while ChatGPT tends to interpret many of the same queries as a call for action that’s solved by tools.
  • Platform Roles:
    ChatGPT behaves like a decision-making coach, while Google acts as a research assistant.
  • Domain-Specific Differences:
    Healthcare has the highest divergence (62%), especially in task-based queries like finding a doctor.
    B2B Technology (47%) and Education (45%) also show significant splits in how guidance is delivered.
    Finance shows the least divergence (39%) in how results are presented.
  • Tools vs. Topics:
    ChatGPT recommends actionable resources; Google links to authoritative explainer content.
  • SEO Insight:
    Content strategies must reflect each platform’s interpretation of intent. For example, creating actionable responses for ChatGPT and comprehensive informational content for Google. This may even mean creating and promoting a useful tool that can surface in ChatGPT.

BrightEdge’s research shows that, for some queries, Google and ChatGPT interpret the same user intent in profoundly different ways. While Google treats action-oriented queries as a prompt to deliver informational content, ChatGPT responds by recommending tools and services users can immediately act on. This divergence underscores the need for marketers and content creators to understand when ChatGPT is delivering actionable responses so they can create platform-specific content and web experiences.

Read the original research:

Brand Visibility: ChatGPT and Google AI Approaches by Industry

Featured Image by Shutterstock/wenich_mit

Microsoft Adds Copilot Mode To Edge With Multi-Tab AI Analysis via @sejournal, @MattGSouthern

Microsoft launches Copilot Mode in Edge, introducing multi-tab AI analysis, voice navigation, and more features in development.

  • Copilot Mode brings AI tools to Microsoft’s Edge browser.
  • Available tools include multi-tab content analysis, voice navigation, and a unified search/chat interface.
  • Features in development include task execution, topic-based organization, and a persistent AI assistant.

OpenAI Study Mode Brings Guided Learning to ChatGPT via @sejournal, @MattGSouthern

OpenAI has launched a new feature in ChatGPT called Study Mode, offering a step-by-step learning experience designed to guide users through complex topics.

While aimed at students, Study Mode reflects a broader trend in how people use AI tools for information and adapt their search habits.

As more people start using conversational AI tools to seek information, Study Mode could represent the next step of AI-assisted discovery.

A Shift Toward Guided Learning

Activate Study Mode by selecting “Study and learn” from the tools menu in ChatGPT, then ask a question.

Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.

Instead of giving direct answers, this feature promotes deeper engagement by asking questions, providing hints, and tailoring explanations to meet user needs.

Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.

Study Mode runs on custom instructions developed with input from teachers and learning experts. The feature incorporates research-based strategies, including:

  • Encouraging people to take part actively
  • Helping manage how much information people can handle
  • Supporting self-awareness and a desire to learn
  • Giving helpful and practical feedback

Robbie Torney, Senior Director of AI Programs at Common Sense Media, explains:

“Instead of doing the work for them, study mode encourages students to think critically about their learning. Features like these are a positive step toward effective AI use for learning. Even in the AI era, the best learning still happens when students are excited about and actively engaging with the lesson material.”

How It Works

Study Mode adjusts responses based on a user’s skill level and context from prior chats.

Key features include:

  • Interactive Prompts: Socratic questioning and self-reflection prompts promote critical thinking.
  • Scaffolded Responses: Content is broken into manageable segments to maintain clarity.
  • Knowledge Checks: Quizzes and open-ended questions help reinforce understanding.
  • Toggle Functionality: Users can turn Study Mode on or off as needed during a conversation.

Early testers describe it as an on-demand tutor, useful for unpacking dense material or revisiting difficult subjects.

Looking Ahead

Study Mode is now available to logged-in users across Free, Plus, Pro, and Team plans, with ChatGPT Edu support expected in the coming weeks.

OpenAI plans to integrate Study Mode behavior directly into its models after gathering feedback. Future updates may include visual aids, goal tracking, and more personalized support.


Featured Image: Roman Samborskyi/Shutterstock

Google AI Mode Update: File Uploads, Live Video Search, More via @sejournal, @MattGSouthern

Google is expanding AI Mode in Search with new tools that include PDF uploads, persistent planning documents, and real-time video assistance.

The updates begin rolling out today, with the AI Mode button now appearing on the Google homepage for desktop users.

Image Uploads Now On Desktop, PDF Support Coming

Desktop users can now upload images directly into search queries, a feature previously available only on mobile.

Support for PDFs is coming in the weeks ahead, allowing you to ask questions about uploaded files and receive AI-generated responses based on both document content and relevant web results.

For example, a student could upload lecture slides and use AI Mode to get help understanding the material. Responses include suggested links for deeper exploration.

Image Credit: Google

Google plans to support additional file types and integrate with Google Drive “in the months ahead.”

Canvas: A Tool For Multi-Session Planning

A new AI Mode feature called Canvas can help you stay organized across multiple search sessions.

When you ask AI Mode for help with planning or creating something, you’ll see an option to “Create Canvas.” This opens a dynamic side panel that saves and updates as queries evolve.

Use cases include building study guides, travel itineraries, or task checklists.

Image Credit: Google

Canvas is launching for desktop users in the U.S. enrolled in the AI Mode Labs experiment.

Real-Time Assistance With Search Live

Search Live with video input also launches this week on mobile, allowing you to use AI Mode while pointing your phone camera at real-world objects or scenes.

The feature builds on Project Astra and is available through Google Lens. Start by tapping the ‘Live’ icon in the Google app, then engage in back-and-forth conversations with AI Mode using live video as visual context.

Image Credit: Google

Chrome Adds Contextual AI Answers

Lens is getting expanded desktop functionality within Chrome. Soon, you’ll see an “Ask Google about this page” option in the address bar.

When selected, it opens a panel where you can highlight parts of a page, like a diagram or snippet of text, and receive an AI Overview.

This update also allows follow-up questions via AI Mode from within the Lens experience, either through a button labeled “Dive deeper” or by selecting AI Mode directly.

Looking Ahead

These updates reflect Google’s vision of search as a multi-modal, interactive experience rather than a one-off text query.

While most of these tools are limited to U.S.-based Labs users for now, they point to a future where AI Mode becomes central to how searchers explore, learn, and plan.

Rollout timelines vary by feature, so keep a close eye on how these capabilities add to the search experience and consider how to adapt your content strategies accordingly.

Google Explains The Process Of Indexing The Main Content via @sejournal, @martinibuster

Google’s Gary Illyes discussed the concept of “centerpiece content,” how Google identifies it, and why soft 404s are the most critical error that gets in the way of indexing content. The context of the discussion was the recent Google Search Central Deep Dive event in Asia, as summarized by Kenichi Suzuki.

Main Body Content

According to Gary Illyes, Google goes to great lengths to identify the main content of a web page. The phrase “main content” will be familiar to those who have read Google’s Search Quality Rater Guidelines. The concept of “main content” is first introduced in Part 1 of the guidelines, in a section that teaches how to identify main content, which is followed by a description of main content quality.

The quality guidelines define main content (aka MC) as:

“Main Content is any part of the page that directly helps the page achieve its purpose. MC can be text, images, videos, page features (e.g., calculators, games), and it can be content created by website users, such as videos, reviews, articles, comments posted by users, etc. Tabs on some pages lead to even more information (e.g., customer reviews) and can sometimes be considered part of the MC.

The MC also includes the title at the top of the page (example). Descriptive MC titles allow users to make informed decisions about what pages to visit. Helpful titles summarize the MC on the page.”

Google’s Illyes referred to main content as the centerpiece content, saying that it is used for “ranking and retrieval.” The content in this section of a web page has greater weight than the content in the footer, header, and navigation areas (including sidebar navigation).

Suzuki summarized what Illyes said:

“Google’s systems heavily prioritize the “main content” (which he also calls the “centerpiece”) of a page for ranking and retrieval. Words and phrases located in this area carry significantly more weight than those in headers, footers, or navigation sidebars. To rank for important terms, you must ensure they are featured prominently within the main body of your page.”

Content Location Analysis To Identify Main Content

This part of Illyes’ presentation is important to get right. Gary Illyes said that Google analyzes the rendered web page to locate the content so that it can assign the appropriate amount of weight to the words found in the main content.

This isn’t about identifying the position of keywords on the page. It’s about identifying where the content sits within a web page.

Here’s what Suzuki transcribed:

“Google performs positional analysis on the rendered page to understand where content is located. It then uses this data to assign an importance score to the words (tokens) on the page. Moving a term from a low-importance area (like a sidebar) to the main content area will directly increase its weight and potential to rank.”

Insight: Semantic HTML is an excellent way to help Google identify the main content and the less important areas. Semantic HTML makes web pages less ambiguous because it uses HTML elements to identify the different areas of a web page, like the top header section, navigational areas, footers, and even to identify advertising and navigational elements that may be embedded within the main content area. This technical SEO process of making a web page less ambiguous is called disambiguation.
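As a rough illustration of how semantic markup removes that ambiguity, the sketch below uses BeautifulSoup (an assumption chosen for illustration; it is not a tool Google has said it uses) to separate the `<main>` element from the header, navigation, and footer boilerplate that should carry less weight.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

page = """
<html><body>
  <header>Acme Widgets: free shipping banner</header>
  <nav>Home | Products | Blog</nav>
  <main>
    <h1>How To Choose A Fireproof Home Safe</h1>
    <p>Fire resistance ratings tell you how long a safe protects paper documents.</p>
  </main>
  <footer>Copyright 2025 Acme Widgets</footer>
</body></html>
"""

soup = BeautifulSoup(page, "html.parser")
main = soup.find("main")  # the semantic element marks the centerpiece content
boilerplate = [tag.get_text(" ", strip=True) for tag in soup.find_all(["header", "nav", "footer"])]

print("Main content:", main.get_text(" ", strip=True))
print("Lower-weight areas:", boilerplate)
```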

Tokenization Is The Foundation Of Google’s Index

Because of the prevalence of AI technologies today, many SEOs are aware of the concept of tokenization. Google also uses tokenization to convert words and phrases into a machine-readable format for indexing. What gets stored in Google’s index isn’t the original HTML; it’s the tokenized representation of the content.
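Google hasn’t published its tokenization scheme, so the sketch below is only a conceptual illustration: text is normalized, split into tokens, and each token is mapped to an integer ID, which is the kind of machine-readable representation an index can store.

```python
import re

def tokenize(text: str) -> list[str]:
    # Simplistic word-level tokenization; production systems typically use
    # subword schemes, which this sketch does not attempt to reproduce.
    return re.findall(r"[a-z0-9]+", text.lower())

vocabulary: dict[str, int] = {}

def to_token_ids(text: str) -> list[int]:
    ids = []
    for token in tokenize(text):
        ids.append(vocabulary.setdefault(token, len(vocabulary)))  # assign an ID on first sight
    return ids

print(tokenize("Google indexes the tokenized representation, not the raw HTML."))
print(to_token_ids("Google indexes the tokenized representation, not the raw HTML."))
```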

Soft 404s Are A Critical Error

This part is important because it frames soft 404s as a critical error. Soft 404s are pages that should return a 404 response but instead return a 200 OK response. This can happen when an SEO or publisher redirects a missing web page to the home page in order to conserve their PageRank. Sometimes a missing web page will redirect to an error page that returns a 200 OK response, which is also incorrect.

Many SEOs mistakenly believe that the 404 response code is an error that needs fixing. A 404 is something that needs fixing only if the URL is broken and is supposed to point to a different URL that is live with actual content.

But in the case of a URL for a web page that is gone and is likely never returning because it has not been replaced by other content, a 404 response is the correct one. If the content has been replaced or superseded by another web page, then it’s proper in that case to redirect the old URL to the URL where the replacement content exists.

The point of all this is that, to Google, a soft 404 is a critical error. That means that SEOs who try to fix a non-error event like a 404 response by redirecting the URL to the home page are actually creating a critical error by doing so.

Suzuki noted what Illyes said:

“A page that returns a 200 OK status code but displays an error message or has very thin/empty main content is considered a “soft 404.” Google actively identifies and de-prioritizes these pages as they waste crawl budget and provide a poor user experience. Illyes shared that for years, Google’s own documentation page about soft 404s was flagged as a soft 404 by its own systems and couldn’t be indexed.”
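For site owners who want to audit their own URLs, the pattern is easy to approximate: fetch a page and flag it if it returns 200 OK but looks like an error or has very thin content. Below is a minimal sketch using the `requests` library; the phrases and the length threshold are arbitrary heuristics for illustration, not Google’s criteria.

```python
import requests

ERROR_PHRASES = ("page not found", "nothing was found", "no longer available")

def looks_like_soft_404(url: str) -> bool:
    response = requests.get(url, timeout=10, allow_redirects=True)
    if response.status_code != 200:
        return False                      # a real 404 or 410 is the correct signal, not a soft 404
    text = response.text.lower()
    too_thin = len(text) < 1500           # arbitrary thin-content threshold for illustration
    says_error = any(phrase in text for phrase in ERROR_PHRASES)
    return too_thin or says_error         # 200 OK plus an error-like page is a soft 404 candidate

# Example: run the check over URLs pulled from a sitemap or crawl export.
for url in ["https://example.com/discontinued-product"]:
    if looks_like_soft_404(url):
        print("Possible soft 404:", url)
```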

Takeaways

  • Main Content
    Google gives priority to the main content portion of a given web page. Although Gary Illyes didn’t mention it, it may be helpful to use semantic HTML to clearly outline what parts of the page are the main content and which parts are not.
  • Google Tokenizes Content For Indexing
    Google’s use of tokenization enables semantic understanding of queries and content. The importance for SEO is that Google no longer relies heavily on exact-match keywords, which frees publishers and SEOs to focus on writing about topics (not keywords) from the point of view of how they are helpful to users.
  • Soft 404s Are A Critical Error
    Soft 404s are commonly thought of as something to avoid, but they’re not generally understood as a critical error that can negatively impact the crawl budget. This elevates the importance of avoiding soft 404s.

Featured Image by Shutterstock/Krakenimages.com

Google’s Mueller Advises Testing Ecommerce Sites For Agentic AI via @sejournal, @martinibuster

Google’s John Mueller re-posted the results of an experiment that tested whether ecommerce sites are accessible to AI agents, commenting that it may be useful to check if your ecommerce site works for AI agents that shop on behalf of actual customers.

AI Agent Experiment On Ecommerce Sites

Malte Polzin posted commentary on LinkedIn about an experiment he ran to test whether the top 50 Swiss ecommerce sites are open for business to users who shop online with ChatGPT agents.

He reported that most of the ecommerce stores were accessible to ChatGPT’s AI agent, but he also found that some stores were not, for a few reasons.

Reasons Why ChatGPT’s AI Agent Couldn’t Shop

  • A CAPTCHA prevented ChatGPT’s AI agent from shopping
  • Cloudflare’s Turnstile tool, a CAPTCHA alternative, blocked access
  • A maintenance page blocked access to the store
  • Bot defenses blocked access

Google’s John Mueller Offers Advice

Google’s John Mueller recommended checking whether your ecommerce store is open for business to shoppers who use AI agents, as it may become more commonplace for users to employ agentic search for online shopping.

He wrote:

“Pro tip: check your ecommerce site to see if it works for shoppers using the common agents. (Or, if you’d prefer they go elsewhere because you have too much business, maybe don’t.)

Bot-detection sometimes triggers on users with agents, and it can be annoying for them to get through. (Insert philosophical discussion on whether agents are more like bots or more like users, and whether it makes more sense to differentiate by actions rather than user-agent.)”

Should SEOs Add Agentic AI Testing To Site Audits?

SEOs may want to consider adding agentic AI accessibility checks to their site audits for ecommerce sites. There may be other use cases where an AI agent needs to fill out forms, for example on a local services website.
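A basic first-pass check can be scripted. The sketch below requests key pages with an agent-style User-Agent and looks for common block indicators; the user-agent string and indicator phrases are illustrative assumptions rather than a definitive list, and some CAPTCHAs or JavaScript challenges only appear in a full browser session, so treat the output as a signal to investigate rather than a verdict.

```python
import requests

# Illustrative only: real agent user-agent strings vary by vendor and change over time.
AGENT_USER_AGENT = "Mozilla/5.0 (compatible; ChatGPT-User/1.0; +https://openai.com/bot)"
BLOCK_INDICATORS = ("captcha", "cf-turnstile", "access denied", "verify you are human")

def check_agent_access(url: str) -> str:
    response = requests.get(url, headers={"User-Agent": AGENT_USER_AGENT}, timeout=10)
    if response.status_code in (401, 403, 429, 503):
        return f"{url}: blocked (HTTP {response.status_code})"
    body = response.text.lower()
    if any(indicator in body for indicator in BLOCK_INDICATORS):
        return f"{url}: likely showing a challenge page to agents"
    return f"{url}: reachable"

# Replace these with your own storefront, product, and checkout URLs.
for page in ["https://example.com/", "https://example.com/checkout"]:
    print(check_agent_access(page))
```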