OpenAI Is Pulling Shared ChatGPT Chats From Google Search via @sejournal, @MattGSouthern

OpenAI has rolled back a feature that allowed ChatGPT conversations shared via link to appear in Google Search results.

The company confirms it has disabled the toggle that enabled shared chats to be “discoverable” by search engines and is working to remove existing indexed links.

Shared Chats Were “Short-Lived Experiment”

When users shared a ChatGPT conversation using the platform’s built-in “Share” button, they were given the option to make the chat visible in search engines.

That feature, introduced quietly earlier this year, caused concern after thousands of personal chats started showing up in search results.

Fast Company first reported the issue, finding over 4,500 shared ChatGPT links indexed by Google, some containing personally identifiable information such as names, resumes, emotional reflections, and confidential work content.

In a statement, OpenAI confirms:

“We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines (see below).

Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning.

Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features.”

How the Feature Worked

By default, shared ChatGPT links were accessible only to people with the URL. But users could choose to toggle on discoverability, allowing search engines like Google to index the conversation.

That setting has now been removed, and shared chats will no longer be indexed. However, OpenAI cautions that already-indexed content may still appear in search results temporarily due to caching.

Importantly, deleting a conversation from your ChatGPT history does not delete the public share link or remove it from search engines.

Why It Matters

The discoverability toggle was intended to encourage people to reuse outputs generated in ChatGPT, but the company acknowledges it came with unintended privacy tradeoffs.

Even though OpenAI offered explicit controls over visibility, many people may not have understood the implications of enabling search indexing.

This is a reminder to be cautious about what kinds of information you enter into AI chatbots. Although a chat starts out private, features like sharing, logging, or model training can create paths for that content to be exposed publicly.

Looking Ahead

OpenAI says it’s working with Google and other search engines to remove indexed shared links and is reassessing how public sharing features are handled in ChatGPT.

If you’ve shared a ChatGPT conversation in the past, you can check your visibility settings and delete shared links through the ChatGPT Shared Links dashboard.

Featured Image: Mehaniq/Shutterstock

Google URL Removal Bug Enabled Attackers To Deindex URLs via @sejournal, @martinibuster

Google recently fixed a bug that enabled anyone to anonymously use an official Google tool to remove any URL from Google Search and get away with it. The tool had the potential to be used to devastate competitor rankings by removing their URLs completely from Google’s index. Google had known about the bug since 2023 but hadn’t taken action to fix it until now.

Tool Exploited For Reputation Management

A report by the Freedom of the Press Foundation recounted the case of a tech CEO who had employed numerous tactics to “censor” negative reporting by a journalist, ranging from legal action aimed at identifying the reporter’s sources to an “intimidation campaign” via the San Francisco city attorney and a DMCA takedown request.

Through it all, the reporter and the Freedom of the Press Foundation prevailed in court, and the article at the center of the actions remained online until it began getting removed through abuse of Google’s Remove Outdated Content tool. Restoring the web page with Google Search Console was easy, but the abuse continued, which led the publisher to open a discussion on the Google Search Console Help Community.

The person posted a description of what was happening and asked if there was a way to block abuse of the tool. The post alleged that the attacker was choosing a word that was no longer in the original article and using it as the basis for claiming the article was outdated and should be removed from Google’s search index.

This is what the report on Google’s Help Community explained:

“We have a dozen articles that got removed this way. We can measure it by searching Google for the article, using the headline in quotes and with the site name. It shows no results returned.

Then, we go to GSC and find it has been “APPROVED” under outdated content removal. We cancel that request. Moments later, the SAME search brings up an indexed article. This is the 5th time we’ve seen this happen.”

Four Hundred Articles Deindexed

This was an aggressive, sustained attack against a website, and Google apparently had no mechanism to stop the abuse, leaving the publisher in a very difficult position.

In a follow-up post, they explained the devastating effect of the sustained negative SEO attack:

“Every week, dozens of pages are being deindexed and we have to check the GSC every day to see if anything else got removed, and then restore that.

We’ve had over 400 articles deindexed, and all of the articles were still live and on our sites. Someone went in and submitted them through the public removal tool, and they got deindexed.”

Google Promised To Look Into It

They asked if there was a way to block the attacks, and Google’s Danny Sullivan responded:

“Thank you — and again, the pages where you see the removal happening, there’s no blocking mechanism on them.”

Danny responded to a follow-up post, saying that they would look into it:

“The tool is designed to remove links that are no longer live or snippets that are no longer reflecting live content. We’ll look into this further.”

How Google’s Tool Was Exploited

The initial report said the negative SEO attack leveraged changed words within the content to file successful outdated content removal requests. But it appears the victims later discovered that another attack method was being used.

Google’s Outdated Content Removal tool is case-sensitive on input: if you submit a URL containing an uppercase letter, the crawler goes out to specifically check for the uppercase version, and if the server returns a 404 Not Found error response, Google removes all versions of the URL.

The Freedom of the Press Foundation writes that the tool is case-insensitive, but that’s not entirely accurate: if case truly didn’t matter, the trick wouldn’t work. The submission check is case-sensitive, yet somewhere downstream Google’s removal system treats URLs as case-agnostic, so a 404 on the forged uppercase variant removed the legitimate lowercase URL as well. That’s the flaw the attacker exploited.

Incidentally, the victim could have created a workaround by redirecting all requests for uppercase URLs to their lowercase equivalents and enforcing lowercase URLs across the entire website, so the forged uppercase variant would never return a 404.
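Here is a minimal sketch of that workaround, assuming a Flask stack purely for illustration (the route and helper names are hypothetical): any request whose path contains an uppercase letter gets a 301 redirect to the lowercase version, so the removal tool’s “is this URL dead?” check never sees a 404.

```python
# Hypothetical sketch: enforce lowercase URLs so uppercase variants
# 301-redirect instead of returning 404. Flask is a stand-in for
# whatever stack the affected site actually runs.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def enforce_lowercase_urls():
    path = request.path
    if path != path.lower():
        target = path.lower()
        if request.query_string:
            target += "?" + request.query_string.decode("utf-8")
        # A 301 signals the canonical URL; crucially, it is not a 404.
        return redirect(target, code=301)

@app.route("/<path:slug>")
def article(slug):
    return f"Article: {slug}"
```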

Here’s how the Freedom of the Press Foundation described it:

“Our article… was vanished from Google search using a novel maneuver that apparently hasn’t been publicly well documented before: a sustained and coordinated abuse of Google’s “Refresh Outdated Content” tool.

This tool is supposed to allow those who are not a site’s owner to request the removal from search results of web pages that are no longer live (returning a “404 error”), or to request an update in search of web pages that display outdated or obsolete information in returned results.

However, a malicious actor could, until recently, disappear a legitimate article by submitting a removal request for a URL that resembled the target article but led to a “404 error.” By altering the capitalization of a URL slug, a malicious actor apparently could take advantage of a case-insensitivity bug in Google’s automated system of content removal.”

Other Sites Affected By This Exploit

Google responded to the Freedom of the Press Foundation and admitted that this exploit did, in fact, affect other sites.

They are quoted as saying the issue only impacted a “tiny fraction of websites” and that the wrongly impacted sites were reinstated.

Google responded by email to note that this bug has been fixed.

Reddit Prioritizes Search, Sees 5X Growth in AI-Powered Answers via @sejournal, @MattGSouthern

Reddit is investing heavily in search, with CEO Steve Huffman announcing plans to position the platform as a destination for people seeking answers online.

In its Q2 shareholder letter, Reddit revealed that more than 70 million people now use its on-platform search each week.

Its AI-powered Reddit Answers feature is also gaining traction, reaching 6 million weekly users, a fivefold increase from the previous quarter.

Search Becomes a Strategic Priority

Reddit is now focusing on three key areas: improving the core product, growing its search presence, and expanding internationally.

As part of this shift, the company is scaling back work on its user economy initiatives.

Huffman stated:

“Reddit is one of the few platforms positioned to become a true search destination. We offer something special: a breadth of conversations and knowledge you can’t find anywhere else.”

The company plans to integrate Reddit Answers more deeply into its search experience, expand the feature to more markets, and launch marketing efforts to grow adoption globally.

Reddit Answers Gains Momentum

Reddit Answers, introduced earlier this year, uses the platform’s archive of human discussions to generate relevant responses to search queries.

It now has 6 million weekly active users and is available in the U.S., U.K., Canada, Australia, and India.

Integration with Reddit’s primary search experience is also being tested to make discovery more seamless.

Why This Matters

Reddit’s focus on search may offer new visibility opportunities. Its posts already rank well in Google results, and now its internal search tools are being enhanced to surface answers directly.

Reddit also emphasizes its commercial value. The company says 40% of posts demonstrate purchase intent, making it a destination for people researching products and services.

Looking Ahead

As AI-generated content becomes more widespread, Reddit is betting that human perspectives will remain valuable.

The company expects Q3 revenue between $535 million and $545 million, with deeper integration of Reddit Answers planned as it continues to build out its search capabilities.


Featured Image: PJ McDonnell/Shutterstock

Bing Recommends lastmod Tags For AI Search Indexing via @sejournal, @MattGSouthern

Bing has updated its sitemap guidance with a renewed focus on the lastmod tag, highlighting its role in AI-powered search to determine which pages need to be recrawled.

While real-time tools like IndexNow offer faster updates, Bing says accurate lastmod values help keep content discoverable, especially on frequently updated or large-scale sites.

Bing Prioritizes lastmod For Recrawling

Bing says the lastmod field in your sitemap is a top signal for AI-driven indexing. It helps determine whether a page needs to be recrawled or can be skipped.

To make it work effectively, use ISO 8601 format with both date and time (e.g. 2004-10-01T18:23:17+00:00). That level of precision helps Bing prioritize crawl activity based on actual content changes.

Avoid setting lastmod to the time your sitemap was generated, unless the page was truly updated.
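As a quick illustration, here is a small Python sketch (the URL and helper function are hypothetical) that emits a sitemap entry with a full ISO 8601 lastmod derived from the page’s actual modification time rather than the sitemap’s build time:

```python
# Sketch: emit <lastmod> in full ISO 8601 (date, time, UTC offset),
# taken from the page's real modification time, not "now".
from datetime import datetime, timezone
from xml.sax.saxutils import escape

def sitemap_entry(loc: str, modified: datetime) -> str:
    # Normalize to UTC so the value carries an explicit offset,
    # e.g. 2004-10-01T18:23:17+00:00.
    lastmod = modified.astimezone(timezone.utc).isoformat(timespec="seconds")
    return (
        "  <url>\n"
        f"    <loc>{escape(loc)}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n"
        "  </url>"
    )

print(sitemap_entry(
    "https://www.example.com/guide",
    datetime(2004, 10, 1, 18, 23, 17, tzinfo=timezone.utc),
))
```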

Bing also confirmed that changefreq and priority tags are ignored and no longer affect crawling or ranking.

Submission & Verification Tips

Bing recommends submitting your sitemap in one of two ways:

  • Reference it in your robots.txt file
  • Submit it via Bing Webmaster Tools

Once submitted, Bing fetches the sitemap immediately and rechecks it daily.

You can verify whether it’s working by checking the submission status, last read date, and any processing errors in Bing Webmaster Tools.

Combine With IndexNow For Better Coverage

To increase the chances of timely indexing, Bing suggests combining sitemaps with IndexNow.

While sitemaps give Bing a full picture of your site, IndexNow allows real-time URL-level updates—useful when content changes frequently.

The Bing team states:

“By combining sitemaps for comprehensive site coverage with IndexNow for fast, URL-level submission, you provide the strongest foundation for keeping your content fresh, discoverable, and visible.”
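For reference, an IndexNow submission is a simple HTTP call. Here is a hedged sketch following the public protocol documented at indexnow.org; the host, key, and URL values below are placeholders:

```python
# Sketch of an IndexNow ping: POST a JSON payload with your host,
# verification key, and the changed URLs. All values are placeholders.
import json
from urllib.request import Request, urlopen

payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": ["https://www.example.com/updated-article"],
}

req = Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
)
with urlopen(req) as resp:
    print(resp.status)  # 200 or 202 means the submission was received
```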

Sitemaps at Massive Scale

If you manage a large website, Bing’s sitemap capacity limits are worth your attention:

  • Up to 50,000 URLs per sitemap
  • 50,000 sitemaps per index file
  • 2.5 billion URLs per index
  • Multiple index files support indexing up to 2.5 trillion URLs

That makes the standard sitemap protocol scalable enough even for enterprise-level ecommerce or publishing platforms.

Fabrice Canel and Krishna Madhavan of Microsoft AI, Bing, noted that using these limits to their full extent helps ensure content remains discoverable in AI search.

Why This Matters

As search becomes more AI-driven, accurate crawl signals matter more.

Bing’s reliance on sitemaps, especially the lastmod field, shows that basic technical SEO practices still matter, even as AI reshapes how content is surfaced.

For large sites, Bing’s support for trillions of URLs offers scalability. For everyone else, the message is simpler: keep your sitemaps clean, accurate, and updated in real time. This gives your content the best shot at visibility in AI search.


Featured Image: PJ McDonnell/Shutterstock

Vulnerability Uncovered In Wix Vibe Coding Platform via @sejournal, @martinibuster

Cloud security company Wiz discovered a critical flaw in Wix’s Base44 vibe coding platform that enabled attackers to bypass authentication and gain access to private enterprise applications. The relative simplicity of finding what should have been a secret app ID number, and using it to gain access, made the vulnerability a serious concern.

Exposed Sensitive Identification Number

An apparently randomly generated identification number, called an app_id, was embedded in public-facing paths such as the application URL and the manifest.json file. Attackers could use that identifier to register a verified account, even when user registration was disabled. This bypassed the platform’s access controls, including Single Sign-On (SSO), which many organizations use for enterprise security.

The Wiz security report notes how easy it was to find the sensitive app_id numbers:

“When we navigate to any application developed on top of Base44, the app_id is immediately visible in the URI and manifest.json file path, all applications have their app_ids value hardcoded in their manifest path: manifests/{app_id}/manifest.json.”
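To see why a path-embedded identifier offers no secrecy, consider this small sketch; the URL is a made-up placeholder, while the manifests/{app_id}/manifest.json pattern is as quoted from the Wiz report:

```python
# Illustration: an identifier embedded in a public path can be read by
# anyone who loads the app. The URL below is a made-up placeholder.
import re

manifest_url = "https://some-app.example.com/manifests/abc123/manifest.json"

match = re.search(r"manifests/([^/]+)/manifest\.json", manifest_url)
if match:
    print("Exposed app_id:", match.group(1))  # -> abc123
```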

Creating A Rogue Account Was Relatively Trivial

The vulnerability did not require privileged access or deep technical expertise. Once an attacker identified a valid app_id, they could use tools like the open source Swagger-UI to register a new account, receive a one-time password (OTP) via email, and verify the account without restriction.

From there, logging in through the application’s SSO flow granted full access to internal systems, despite the original access being restricted to specific users or teams. This process exposed a serious flaw in the platform’s assumption that the app_id would not be tampered with or reused externally.

Authentication Flaw Risked Exposure of Sensitive Data

Many of the affected apps were built on the popular Base44 vibe coding platform for internal use, supporting operations such as HR, chatbots, and knowledge bases. These systems contained personally identifiable information (PII). The exploit enabled attackers to bypass identity controls and access private enterprise applications, potentially exposing sensitive data.

Wix Fixes Flaw Within 24 Hours

The cloud security company discovered the flaw through a methodical process of examining public information for potential weak points, culminating in the discovery of the exposed app_id numbers and, from there, a workflow for gaining access to accounts. Wiz then contacted Wix, which immediately fixed the issue.

According to the report published by the security company, there is no evidence that the flaw was exploited, and the vulnerability has been fully addressed.

Threat To Entire Ecosystems

The Wiz security report noted that vibe coding is advancing so rapidly that potential security issues aren’t getting enough attention, expressing the opinion that this creates “systemic risks” not just to individual apps but to “entire ecosystems.”

Why Did This Security Incident Happen?

Wix States It Is Proactive On Security

The report includes a statement from Wix asserting that the company is proactive about security:

“We continue to invest heavily in strengthening the security of all products and potential vulnerabilities are proactively managed. We remain committed to protecting our users and their data.”

Security Company Says Discovery Of Flaw Was Simple

The report by Wiz describes the discovery as a relatively simple matter, explaining that they used “straightforward reconnaissance techniques,” including “passive and active discovery of subdomains,” which are widely accessible methods.

The security report explained that exploiting the flaw was simple:

“What made this vulnerability particularly concerning was its simplicity – requiring only basic API knowledge to exploit. This low barrier to entry meant that attackers could systematically compromise multiple applications across the platform with minimal technical sophistication.”

The report itself raises a question: if discovering the issue was “straightforward” and exploiting it had a “low barrier to entry,” how did Wix’s proactive security measures fail to catch it?

  • If they had used a third-party security testing company, why hadn’t they discovered the publicly available app_id numbers?
  • The manifest.json exposure is trivial to detect. Why hadn’t that been flagged by a security audit?

The contradiction between a simple discovery/exploit process and Wix’s claimed proactive security posture raises a reasonable doubt about the thoroughness or effectiveness of their proactive measures.

Takeaways:

  • Simple Discovery and Exploitation:
    The vulnerability could be found and exploited using basic tools and publicly available information, with no need for advanced skills or insider access.
  • Bypassing Enterprise Controls:
    Attackers could gain full access to internal apps despite controls like disabled registration and SSO-based identity restrictions.
  • Systemic Risk from Vibe Coding:
    Wiz warns that fast-paced vibe coding platforms may introduce widespread security risks across application ecosystems.
  • Discrepancy Between Claims and Reality:
    The ease of exploitation contrasts with Wix’s claims of proactive security, prompting questions about the thoroughness of their security audits.

Wiz discovered that Wix’s Base44 vibe coding platform exposed a critical vulnerability that could have enabled attackers to bypass authentication and access internal enterprise applications. The security company that discovered the flaw expressed the opinion that this incident highlights the risks of insufficient security considerations, which can put entire ecosystems at risk.

Read the original report:

Wiz Research Uncovers Critical Vulnerability in AI Vibe Coding platform Base44 Allowing Unauthorized Access to Private Applications

Featured Image by Shutterstock/mailcaroline

WooCommerce Customer Review Plugin Vulnerability Affects 80,000+ Sites via @sejournal, @martinibuster

An advisory was issued about a vulnerability in the Customer Reviews for WooCommerce plugin, which is installed on over 80,000 websites. The vulnerability enables unauthenticated attackers to launch stored cross-site scripting attacks.

Customer Reviews for WooCommerce Vulnerability

The Customer Reviews for WooCommerce plugin enables users to send customers an email reminder to leave a review and also offers other features designed to increase customer engagement with a brand.

Wordfence issued an advisory about a flaw in the plugin that makes it possible for attackers to inject scripts into web pages that execute whenever a user visits the affected page.

The exploit is due to a failure to “sanitize” inputs and “escape” outputs. Sanitizing inputs in this context is a basic WordPress security measure that checks if uploaded data conforms to expected types and removes dangerous content like scripts. Output escaping is another security measure that ensures any special characters produced by the plugin aren’t executable.
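WordPress plugins typically handle this with helpers like sanitize_text_field() and esc_html(). As a generic illustration of the two-step pattern (not the plugin’s actual code), here is a short Python sketch:

```python
# Sketch of sanitize-on-input / escape-on-output. WordPress provides
# helpers such as sanitize_text_field() and esc_html(); this Python
# stand-in shows the same idea with a review's "author" value.
import html
import re

def sanitize_author(raw: str) -> str:
    # Input sanitization: strip tags before the value is stored.
    return re.sub(r"<[^>]*>", "", raw).strip()

def render_review(author: str, body: str) -> str:
    # Output escaping: encode special characters so stored values can
    # never execute as script when the page is viewed.
    return f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>"

stored = sanitize_author("<script>alert(1)</script>Eve")
print(render_review(stored, "Great product!"))
```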

According to the official Wordfence advisory:

“The Customer Reviews for WooCommerce plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the ‘author’ parameter in all versions up to, and including, 5.80.2 due to insufficient input sanitization and output escaping. This makes it possible for unauthenticated attackers to inject arbitrary web scripts in pages that will execute whenever a user accesses an injected page.”

Users of the plugin are advised to update to version 5.81.0 or newer.

Featured Image by Shutterstock/Brilliant Eye

Query Fan-Out Technique in AI Mode: New Details From Google via @sejournal, @MattGSouthern

In a recent interview, Google’s VP of Product for Search, Robby Stein, shared new information about how query fan-out works in AI Mode.

Although the existence of query fan-out has been previously detailed in Google’s blog posts, Stein’s comments expand on its mechanics and offer examples that clarify how it works in practice.

Background On Query Fan-Out Technique

When a person types a question into Google’s AI Mode, the system uses a large language model to interpret the query and then “fan out” multiple related searches.

These searches are issued to Google’s infrastructure and may include topics the user never explicitly mentioned.

Stein said during the interview:

“If you’re asking a question like things to do in Nashville with a group, it may think of a bunch of questions like great restaurants, great bars, things to do if you have kids, and it’ll start Googling basically.”

He described the system as using Google Search as a backend tool, executing multiple queries and combining the results into a single response with links.

This functionality is active in AI Mode, Deep Search, and some AI Overview experiences.
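Conceptually, the pattern looks something like the following toy sketch; both helper functions are hypothetical stand-ins, not Google’s implementation:

```python
# Toy sketch of query fan-out: expand one query into related
# sub-queries, run them concurrently against a search backend, then
# merge the results for a single synthesized answer. Both helpers are
# hypothetical stand-ins, not Google's implementation.
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(query: str) -> list[str]:
    # In AI Mode an LLM infers these; hardcoded here for illustration.
    return [f"{query} restaurants", f"{query} bars", f"{query} with kids"]

def search(subquery: str) -> list[str]:
    # Stand-in for a call to the search index.
    return [f"result for '{subquery}'"]

def fan_out(query: str) -> list[str]:
    subqueries = [query] + generate_subqueries(query)
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(search, subqueries))
    # The merged results feed the model that writes the final response.
    return [hit for hits in result_lists for hit in hits]

print(fan_out("things to do in Nashville with a group"))
```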

Scale And Scope

Stein said AI-powered search experiences, including query fan-out, now serve approximately 1.5 billion users each month. This includes both text-based and multimodal input.

The underlying data sources include traditional web results as well as real-time systems like Google’s Shopping Graph, which updates 2 billion times per hour.

He referred to Google Search as “the largest AI product in the world.”

Deep Search Behavior

In cases where Google’s systems determine a query requires deeper reasoning, a feature called Deep Search may be triggered.

Deep Search can issue dozens or even hundreds of background queries and may take several minutes to complete.

Stein described using it to research home safes, a purchase he said involved unfamiliar factors like fire resistance ratings and insurance implications.

He explained:

“It spent, I don’t know, like a few minutes looking up information and it gave me this incredible response. Here are how the ratings would work and here are specific safes you can consider and here’s links and reviews to click on to dig deeper.”

AI Mode’s Use Of Internal Tools

Stein mentioned that AI Mode has access to internal Google tools, such as Google Finance and other structured data systems.

For example, a stock comparison query might involve identifying relevant companies, pulling current market data, and generating a chart.

Similar processes apply to shopping, restaurant recommendations, and other query types that rely on real-time information.

Stein stated:

“We’ve integrated most of the real-time information systems that are within Google… So it can make Google Finance calls, for instance, flight data… movie information… There’s 50 billion products in the shopping catalog… updated I think 2 billion times every hour or so. So all that information is able to be used by these models now.”

Technical Similarities To Google’s Patent

Stein described a process similar to a Google patent from December about “thematic search.”

The patent outlines a system that creates sub-queries based on inferred themes, groups results by topic, and generates summaries using a language model. Each theme can link to source pages, but summaries are compiled from multiple documents.

This approach differs from traditional search ranking by organizing content around inferred topics rather than specific keywords. While the patent doesn’t confirm implementation, it closely matches Stein’s description of how AI Mode functions.

Looking Ahead

With Google explaining how AI Mode generates its own searches, the boundaries of what counts as a “query” are starting to blur.

This creates challenges not just for optimization, but for attribution and measurement.

As search behavior becomes more fragmented and AI-driven, marketers may need to focus less on ranking for individual terms and more on being included in the broader context AI pulls from.

Listen to the full interview below:


Featured Image: Screenshot from youtube.com/@GoogleDevelopers, July 2025. 

WordPress AI Engine Plugin Vulnerability Affects Up To 100,000 Websites via @sejournal, @martinibuster

A security advisory, the fourth this month, was issued for the AI Engine WordPress plugin, which is installed on over 100,000 websites. Rated 8.8, the vulnerability enables attackers with only subscriber-level authentication to upload malicious files when the REST API is enabled.

AI Engine Plugin: Fifth Vulnerability In 2025

This is the fourth vulnerability discovered in the AI Engine plugin in July. It follows the year’s first, found in June, bringing the total to five vulnerabilities in the plugin so far in 2025. There were nine vulnerabilities discovered in 2024, one of which was rated 9.8 because it enabled unauthenticated attackers to upload malicious files, plus another rated 9.1 that also enabled arbitrary uploads.

Authenticated (Subscriber+) Arbitrary File Upload

The latest vulnerability enables authenticated attackers to upload arbitrary files. What makes this exploit more dangerous is that it requires only subscriber-level authentication to take advantage of the security weakness. That isn’t as bad as a vulnerability requiring no authentication at all, but it’s still rated 8.8 on a scale of 1 to 10.

Wordfence describes the vulnerability as being due to missing file type validation in a function related to the REST API in versions 2.9.3 and 2.9.4.

File type validation is a security measure typically used within WordPress to make sure that the content of a file matches the type of file being uploaded to the website.
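As a generic illustration of the missing check (not the plugin’s actual code), a file type validator might compare the extension against an allowlist and confirm the file’s magic bytes match the claimed type:

```python
# Sketch: validate uploads by extension allowlist plus magic bytes,
# rejecting executable types such as PHP. Illustrative only.
import pathlib

ALLOWED = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".gif": b"GIF8",
    ".jpg": b"\xff\xd8\xff",
}

def is_allowed_upload(filename: str, content: bytes) -> bool:
    magic = ALLOWED.get(pathlib.Path(filename).suffix.lower())
    if magic is None:
        return False  # extension not on the allowlist (e.g. .php)
    return content.startswith(magic)  # content must match claimed type

print(is_allowed_upload("avatar.png", b"\x89PNG\r\n\x1a\n..."))      # True
print(is_allowed_upload("shell.php", b"<?php system($_GET['c']);"))  # False
```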

According to Wordfence:

“This makes it possible for authenticated attackers, with Subscriber-level access and above, to upload arbitrary files on the affected site’s server when the REST API is enabled, which may make remote code execution possible.”

Users of the AI Engine plugin are advised to update to the latest version, 2.9.5, or newer.

The plugin changelog for version 2.9.5 shares what was updated:

“Fix: Resolved a security issue related to SSRF by validating URL schemes in audio transcription and sanitizing REST API parameters to prevent API key misuse.

Fix: Corrected a critical security vulnerability that allowed unauthorized file uploads by adding strict file type validation to prevent PHP execution.”

Featured Image by Shutterstock/Jiri Hera

B2B Marketing Is Starting to Look a Lot Like B2C (And It’s Working) via @sejournal, @MattGSouthern

B2B marketers are taking a page from the B2C playbook and seeing real results.

According to LinkedIn’s B2B Marketing Benchmark Report, strategies once considered too informal for business audiences, like short-form video and influencer collabs, are now central to building trust and driving growth.

The study, based on responses from 1,500 senior marketers across six countries, found that 94% believe trust is the key to success in B2B.

But many brands are moving away from traditional lead-gen tactics and turning instead to emotionally resonant content and credible voices.

Lee Moskowitz, Growth Marketer and Podcast Host at Lee2B, is quoted in the report:

“We’re in an era of ‘AI slop,’ long sales cycles and growing buying committees. Brands need to build trust, prove their expertise and earn their place in the buying process.”

This shift toward more consumer-style tactics is evident in the adoption of video content across B2B teams.

B2B Video Marketing Hits a Tipping Point

Video is now foundational to B2B marketing, with 78% of marketers including it in their programs and over half planning to increase investments in the coming year.

Screenshot from: youtube.com/@LinkedInMktg, July 2025.

The most successful teams aren’t using video in isolation; they’re building multi-channel strategies that map to different funnel stages.

According to LinkedIn’s data, marketers with a video strategy are:

  • 2.2x more likely to say their brand is well trusted
  • 1.8x more likely to say their brand is well known

Popular formats include short-form social clips, brand storytelling, and customer testimonials: content types long associated with B2C engagement that are now proving effective in B2B.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

AJ Wilcox, founder of B2Linked, states in the report:

“Capturing that major B2B deal requires trust, and nothing builds trust faster than personal video content. I feel more trusting of a brand after watching a 1-min clip of their founder talking than if I read five of their blog posts.”

B2B Influencer Marketing Moves Into the Mainstream

Fifty-five percent of marketers in the study said they now work with influencers. The top reasons include trust, authenticity, and credibility.

B2B influencers are typically subject matter experts, practitioners, or respected voices in their fields. And their impact appears to be tied to business outcomes: 84% of marketers using influencer marketing expect budget increases next year, compared to just 58% of non-users.

Brendan Gahan, CEO and Co-Founder of Creator Authority, states:

“This feels like a YouTube moment. LinkedIn is entering that same phase now. It already generates more weekly comments than Reddit. Its creator ecosystem is thriving and growing fast.”

Buyers trust people they relate to. Marketers are shifting their influencer strategies to reflect that, prioritizing alignment and authority over follower counts.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

What This Means

Trust signals are becoming more important across the board, especially as search engines continue to emphasize experience, expertise, authoritativeness, and trust (E-E-A-T). Relying on blog posts alone may no longer be enough to demonstrate what your brand stands for.

Video gives you a way to show expertise in a more personal, credible way, whether it’s a founder explaining your product or a customer sharing their experience.

For long sales cycles and complex buying decisions, what’s working now looks a lot more human: authentic voices, visible experts, and content that’s easy to connect with.


Featured Image: Roman Samborskyi/Shutterstock

Research Shows Differences In ChatGPT And Google AIO Answers via @sejournal, @martinibuster

New research from enterprise search marketing platform BrightEdge discovered differences in how Google and ChatGPT surface content. These differences matter to digital marketers and content creators because they show how content is recommended by each system. Recognizing the split enables brands to adapt their content strategies to stay relevant across both platforms.

BrightEdge’s findings come from an analysis of B2B technology, education, healthcare, and finance queries. It may be possible to cautiously extrapolate them to other niches where Google and ChatGPT diverge, but that would be speculative, so this article won’t do that.

Core Differences: Task Vs. Information Orientation

BrightEdge’s research discovered that ChatGPT and Google AI Overviews take two different approaches to helping users take action. ChatGPT is more likely to recommend tools and apps, behaving in the role of a guide for making immediate decisions. Google provides informational content that encourages users to read before acting. This difference matters for SEO because it enables content creators and online stores to understand how their content is processed and presented to users of each system.

BrightEdge explains:

“In task-oriented prompts, ChatGPT overwhelmingly suggests tools and apps directly, while Google continues to link to informational content. While Google thrives as a research assistant, ChatGPT acts like a trusted coach for decision making, and that difference shapes which tool users instinctively choose for different needs.”

Divergence On Action-Oriented Queries

ChatGPT and Google tend to show similar kinds of results when users are querying for comparisons, but the results begin to diverge when the user intent implies they want to act. BrightEdge found that prompts about credit card comparisons or learning platforms generated similar kinds of results.

Questions with an action intent, like “how to create a budget” or “learn Python,” lead to different answers. ChatGPT appears to treat action intent prompts as requiring a response with tools, while Google treats them as requiring information.

BrightEdge notes that healthcare has the highest rate of divergence:

“At 62% divergence, healthcare demonstrates the most significant split between platforms.

  • When prompts pertain to symptoms or medical information, both ChatGPT and Google will mention the CDC and The Mayo Clinic.
  • However, when prompted to help with things like “How to find a doctor,” ChatGPT pushes users towards Zocdoc, while Google points to hospital directories.”

The B2B technology niche has the second-highest level of divergence:

“With 47% divergence, B2B tech shows substantial platform differences.

  • When comparing technology, such as cloud platforms, both suggest AWS and Azure.
  • When asked “How to deploy things (such as specific apps),” ChatGPT relies on tools like Kubernetes and the AWS CLI, while Google offers tutorials and Stack Overflow.”

Education follows closely behind B2B technology:

“At 45% divergence, education follows the same trend.

  • When comparing “Best online learning platforms,” both platforms surface Coursera, EdX, and LinkedIn Learning.
  • When a user’s prompt pertains to learning a skill such as “How to learn Python,” ChatGPT recommends Udemy, whereas Google directs users to user-generated content hubs like GitHub and Medium.”

Finance shows the lowest levels of divergence, at 39%.

BrightEdge concludes that this represents a “fundamental shift” in how AI platforms interpret intent, which means that marketers need to examine the intent behind the search results for each platform and make content strategy decisions based on that research.

Tools Versus Topics

BrightEdge uses the example of the prompt “What are some resources to help plan for retirement?” to show how Google and ChatGPT differ. ChatGPT offers calculators and tools that users can act on, while Google suggests topics for further reading.

Screenshot Of ChatGPT Responding With Financial Tools

There’s a clear difference in the search experience for users. Marketers, SEOs, and publishers should consider how to meet both types of expectations: practical, action-based responses from ChatGPT and informational content from Google.

Takeaways

  • Split In User Intent Interpretation:
    Google interprets queries as requests for information, while ChatGPT tends to interpret many of the same queries as a call for action that’s solved by tools.
  • Platform Roles:
    ChatGPT behaves like a decision-making coach, while Google acts as a research assistant.
  • Domain-Specific Differences:
    Healthcare has the highest divergence (62%), especially in task-based queries like finding a doctor.
    B2B Technology (47%) and Education (45%) also show significant splits in how guidance is delivered.
    Finance shows the least divergence (39%) in how results are presented.
  • Tools vs. Topics:
    ChatGPT recommends actionable resources; Google links to authoritative explainer content.
  • SEO Insight:
    Content strategies must reflect each platform’s interpretation of intent. For example, creating actionable responses for ChatGPT and comprehensive informational content for Google. This may even mean creating and promoting a useful tool that can surface in ChatGPT.

BrightEdge’s research shows that, for some queries, Google and ChatGPT interpret the same user intent in profoundly different ways. While Google treats action-oriented queries as a prompt to deliver informational content, ChatGPT responds by recommending tools and services users can act on immediately. This divergence underscores the need for marketers and content creators to understand when ChatGPT delivers actionable responses, so they can create platform-specific content and web experiences.

Read the original research:

Brand Visibility: ChatGPT and Google AI Approaches by Industry

Featured Image by Shutterstock/wenich_mit