ChatGPT Adds Shopping Research For Product Discovery via @sejournal, @MattGSouthern

OpenAI launched shopping research in ChatGPT, a feature that creates personalized buyer’s guides by researching products across the web. The tool is rolling out today on mobile and web for logged-in users on Free, Go, Plus, and Pro plans.

The company is offering nearly unlimited usage through the holidays.

What’s New

Shopping research works differently from standard ChatGPT responses. Users describe what they need, answer clarifying questions about budget and preferences, and receive a buyer’s guide after a few minutes.

The feature pulls information including price, availability, reviews, specs, and images from across the web. You can guide the research by marking products as “Not interested” or “More like this” as options appear.

OpenAI’s announcement states:

“Shopping research is built for that deeper kind of decision-making. It turns product discovery into a conversation: asking smart questions to understand what you care about, pulling accurate, up-to-date details from high-quality sources, and bringing options back to you to refine the results.”

The company says the tool performs best in categories like electronics, beauty, home and garden, kitchen and appliances, and sports and outdoor.

Technical Details

Shopping research is powered by a shopping-specialized variant of GPT-5 mini, post-trained on GPT-5-Thinking-mini.

OpenAI’s internal evaluation shows shopping research reached 52% product accuracy on multi-constraint queries, compared with 37% for ChatGPT Search.

Product accuracy measures how well responses meet user requirements for attributes like price, color, material, and specs. The company designed the system to update and refine results in real time based on user feedback.
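OpenAI hasn’t published the metric’s exact definition. As a rough, hypothetical illustration only, a multi-constraint accuracy check could look something like the sketch below, where the product data and constraint checks are made up:

```python
# Hypothetical sketch of a multi-constraint accuracy check.
# OpenAI hasn't published its metric definition; this only illustrates the idea.

def product_accuracy(products: list[dict], constraints: list) -> float:
    """Fraction of recommended products that satisfy every user constraint."""
    if not products:
        return 0.0
    hits = sum(1 for p in products if all(check(p) for check in constraints))
    return hits / len(products)

# Example: a query asking for a blue kettle under $50.
constraints = [
    lambda p: p.get("color") == "blue",
    lambda p: p.get("price", float("inf")) <= 50,
]
recommended = [
    {"name": "Kettle A", "color": "blue", "price": 39},
    {"name": "Kettle B", "color": "red", "price": 29},
]
print(product_accuracy(recommended, constraints))  # 0.5
```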

Privacy & Data Sharing

OpenAI states that user chats are never shared with retailers. Results are organic and based on publicly available retail sites.

Merchants who want to appear in shopping research results can follow an allowlisting process through OpenAI.

Limitations

OpenAI acknowledges the feature isn’t perfect. The model may make mistakes about product details like price and availability. The company encourages users to visit merchant sites for the most accurate information.

Why This Matters

This feature pulls more of the product comparison journey into one place.

As shopping research handles more of the “which one should I buy?” work inside ChatGPT, some of that early-stage discovery could happen without a traditional search click.

For retailers and affiliate publishers, that raises the stakes for inclusion in these results. Visibility may depend on how well your products and pages are represented in OpenAI’s shopping system and allowlisting process.

Looking Ahead

Shopping research in ChatGPT is available to logged-in users starting today. OpenAI plans to add direct purchasing through ChatGPT for merchants participating in Instant Checkout, though no timeline was provided.


Featured Image: Koshiro K/Shutterstock

Google’s Mueller Questions Need For LLM-Only Markdown Pages via @sejournal, @MattGSouthern

Google Search Advocate John Mueller has pushed back on the idea of building separate Markdown or JSON pages just for large language models (LLMs), saying he doesn’t see why LLMs would need pages that no one else sees.

The discussion started when Lily Ray asked on Bluesky about “creating separate markdown / JSON pages for LLMs and serving those URLs to bots,” and whether Google could share its perspective.

Ray asked:

Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots. Can you share Googleʼs perspective on this?

The question draws attention to a developing trend where publishers create “shadow” copies of important pages in formats that are easier for AI systems to understand.

There’s a more active discussion on this topic happening on X.

What Mueller Said About LLM-Only Pages

Mueller replied that he isn’t aware of anything on Google’s side that would call for this kind of setup.

He notes that LLMs have worked with regular web pages from the beginning:

I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?

When Ray followed up about whether a separate format might help “expedite getting key points across to LLMs quickly,” Mueller argued that if file formats made a meaningful difference, you would likely hear that directly from the companies running those systems.

Mueller added:

If those creating and running these systems knew they could create better responses from sites with specific file formats, I expect they would be very vocal about that. AI companies aren’t really known for being shy.

He said some pages may still work better for AI systems than others, but he doesn’t think that comes down to HTML versus Markdown:

That said I can imagine some pages working better for users and some better for AI systems, but I doubt that’s due to the file format, and it’s definitely not generalizable to everything. (Excluding JS which still seems hard for many of these systems).

Taken together, Mueller’s comments suggest that, from Google’s point of view, you don’t need to create bot-only Markdown or JSON clones of existing pages just to be understood by LLMs.

How Structured Data Fits In

Others in the thread drew a line between speculative “shadow” formats and cases where AI platforms have clearly defined feed requirements.

A reply from Matt Wright pointed to OpenAI’s eCommerce product feeds as an example where JSON schemas matter.

In that context, a defined spec governs how ChatGPT ingests and displays product data. Wright explains:

Interestingly, the OpenAI eCommerce product feeds are live: JSON schemas appear to have a key role in AI search already.

That example supports the idea that structured feeds and schemas are most important when a platform publishes a spec and asks you to use it.

Additionally, Wright points to a thread on LinkedIn where Chris Long observed that “editorial sites using product schemas, tend to get included in ChatGPT citations.”
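For reference, the product schema Long is describing is standard schema.org markup, typically embedded as JSON-LD in the page’s HTML. A minimal example with placeholder values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Wireless Headphones",
  "image": "https://example.com/images/headphones.jpg",
  "description": "Over-ear wireless headphones with active noise cancellation.",
  "offers": {
    "@type": "Offer",
    "price": "129.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```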

Why This Matters

If you’re questioning whether to build “LLM-optimized” Markdown or JSON versions of your content, this exchange can help steer you back to the basics.

Mueller’s comments reinforce that LLMs have long been able to read and parse standard HTML.

For most sites, it’s more productive to keep improving speed, readability, and content structure on the pages you already have, and to implement schema where there’s clear platform guidance.

At the same time, the Bluesky thread shows that AI-specific formats are starting to emerge in narrow areas such as product feeds. Those are worth tracking, but they’re tied to explicit integrations, not a blanket rule that Markdown is better for LLMs.

Looking Ahead

The conversation highlights how fast AI-driven search changes are turning into technical requests for SEO and dev teams, often before there is documentation to support them.

Until LLM providers publish more concrete guidelines, this thread points you back to work you can justify today: keep your HTML clean, reduce unnecessary JavaScript where it makes content hard to parse, and use structured data where platforms have clearly documented schemas.


Featured Image: Roman Samborskyi/Shutterstock

LLMs.txt Shows No Clear Effect On AI Citations, Based On 300k Domains via @sejournal, @MattGSouthern

A new analysis from SE Ranking suggests the llms.txt file isn’t delivering measurable benefits yet.

After examining roughly 300,000 domains, the company found no relationship between having llms.txt and how often a domain is cited in major LLM answers.

What The Data Says

Adoption Is Thin

SE Ranking’s crawl found llms.txt on 10.13% of domains. In other words, nearly nine out of ten sites they measured haven’t implemented it.

That low usage matters because the format is sometimes described as an emerging baseline for AI visibility. The data instead shows scattered experimentation. SE Ranking says adoption is fairly even across traffic tiers and not concentrated among the biggest brands.

High-traffic sites were slightly less likely to use the file than mid-tier websites in their dataset.

No Measurable Link To LLM Citations

To assess whether the llms.txt file affects AI visibility, SE Ranking analyzed domain-level citation frequency across responses from prominent LLMs. They employed statistical correlation tests and an XGBoost model to determine the extent to which each factor contributed to citations.
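SE Ranking hasn’t released its analysis code. A minimal sketch of the described approach might look like the following, where the dataset file and column names are hypothetical:

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Hypothetical domain-level dataset: one row per domain.
df = pd.read_csv("domains.csv")  # assumed columns used below

# Rank correlation between having llms.txt and citation frequency.
rho, p = spearmanr(df["has_llms_txt"], df["citation_count"])
print(f"Spearman rho={rho:.3f}, p={p:.3g}")

# Compare model accuracy with and without the llms.txt flag as a feature.
y = df["citation_count"]
for cols in (["has_llms_txt", "traffic", "backlinks"], ["traffic", "backlinks"]):
    X_train, X_test, y_train, y_test = train_test_split(df[cols], y, random_state=42)
    model = XGBRegressor(n_estimators=200).fit(X_train, y_train)
    print(cols, "R^2:", round(model.score(X_test, y_test), 3))
```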

The main finding was that removing the llms.txt feature actually improved the model’s accuracy. SE Ranking concludes that llms.txt “doesn’t seem to directly impact AI citation frequency. At least not yet.”

Additionally, they found no significant correlation between citations and the file using simpler statistical methods.

How This Squares With Platform Guidance

SE Ranking notes that its results align with public platform guidance. But it’s important to be precise about what is confirmed.

Google hasn’t indicated that llms.txt is used as a signal in AI Overviews or AI Mode. In its AI search guidance, Google frames it as an evolution of Search that continues to rely on its existing Search systems and signals, without mentioning llms.txt as an input.

OpenAI’s crawler documentation similarly focuses on robots.txt controls. OpenAI recommends allowing OAI-SearchBot in robots.txt to support discovery for its search features, but does not say llms.txt affects ranking or citations.
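That guidance maps to a short robots.txt entry. A minimal example:

```
# Allow OpenAI's search crawler so pages can surface in ChatGPT search.
User-agent: OAI-SearchBot
Allow: /
```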

SE Ranking also notes that some SEO logs show GPTBot occasionally fetching llms.txt files, though they say it doesn’t happen often and does not appear tied to citation outcomes.

Taken together, the dataset suggests that even if some models retrieve the file, it’s not influencing citation behavior at scale right now.

What This Means For You

If you want a clean, low-risk way to prepare for possible future adoption, adding llms.txt is easy and unlikely to cause technical harm.
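If you do add one, the community proposal at llmstxt.org describes a plain Markdown file served at /llms.txt. A minimal example with placeholder URLs:

```markdown
# Example Company

> Short summary of what the site covers and who it serves.

## Docs

- [Getting started](https://example.com/docs/start): Setup and first steps
- [API reference](https://example.com/docs/api): Endpoints and parameters

## Policies

- [Returns](https://example.com/returns): Return and refund policy
```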

But if the goal is a near-term visibility bump in AI answers, the data says you shouldn’t expect one.

That puts llms.txt in the same category as other early AI-visibility tactics. Reasonable to test if it fits your workflow, but not something to sell internally as a proven lever.


Featured Image: Mameraman/Shutterstock

New Data Finds Gap Between Google Rankings And LLM Citations via @sejournal, @MattGSouthern

Large language models cite sources differently than Google ranks them.

Search Atlas, an SEO software company, compared citations from OpenAI’s GPT, Google’s Gemini, and Perplexity against Google search results.

The analysis of 18,377 matched queries finds a gap between traditional search visibility and AI platform citations.

Here’s an overview of the key differences Search Atlas found.

Perplexity Is Closest To Search

Perplexity performs live web retrieval, so you would expect its citations to look more like search results. The study supports that.

Across the dataset, Perplexity showed a median domain overlap of around 25–30% with Google results. Median URL overlap was close to 20%. In total, Perplexity shared 18,549 domains with Google, representing about 43% of the domains it cited.

ChatGPT And Gemini Are More Selective

ChatGPT showed much lower overlap with Google. Its median domain overlap stayed around 10–15%. The model shared 1,503 domains with Google, accounting for about 21% of its cited domains. URL matches typically remained below 10%.

Gemini behaved less consistently. Some responses had almost no overlap with search results. Others lined up more closely. Overall, Gemini shared just 160 domains with Google, representing about 4% of the domains that appeared in Google’s results, even though those domains made up 28% of Gemini’s citations.
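Search Atlas hasn’t published its measurement code, but domain overlap of this kind reduces to simple set arithmetic. A sketch with made-up data:

```python
from urllib.parse import urlparse

def domains(urls: list[str]) -> set[str]:
    """Extract hostnames (minus a leading www.) from cited URLs."""
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

llm_citations = ["https://www.example.com/a", "https://blog.sample.org/b"]
google_results = ["https://example.com/a", "https://other.net/c"]

shared = domains(llm_citations) & domains(google_results)
overlap = len(shared) / len(domains(llm_citations))
print(shared, f"{overlap:.0%}")  # {'example.com'} 50%
```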

What The Numbers Mean For Visibility

Ranking in Google doesn’t guarantee LLM citations. This report suggests the systems draw from the web in different ways.

Perplexity’s architecture actively searches the web, and its citation patterns more closely track traditional search rankings. If your site already ranks well in Google, you are more likely to see similar visibility in Perplexity answers.

ChatGPT and Gemini rely more on pre-trained knowledge and selective retrieval. They cite a narrower set of sources and are less tied to current rankings. URL-level matches with Google are low for both.

Study Limitations

The dataset heavily favored Perplexity. It accounted for 89% of matched queries, with OpenAI at 8% and Gemini at 3%.

Researchers matched queries using semantic similarity scoring. Paired queries expressed similar information needs but were not identical user searches. The threshold was 82% similarity using OpenAI’s embedding model.
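The report doesn’t publish its pipeline, but pairing queries by embedding similarity typically looks like the sketch below. The model name is an assumption, since the study only says “OpenAI’s embedding model”:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def embed(texts: list[str]) -> np.ndarray:
    # Model choice is an assumption; the study doesn't name one.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

a, b = embed([
    "best running shoes for flat feet",
    "top running shoes if you have flat feet",
])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine, cosine >= 0.82)  # the study's reported similarity threshold
```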

The two-month window provides a recent snapshot only. Longer timeframes would be needed to see whether the same overlap patterns hold over time.

Looking Ahead

For retrieval-based systems like Perplexity, traditional SEO signals and overall domain strength are likely to matter more for visibility.

For reasoning-focused models like ChatGPT and Gemini, those signals may have less direct influence on which sources appear in answers.


Featured Image: Ascannio/Shutterstock

Should Advertisers Be Worried About AI In PPC?

One scroll through LinkedIn and you’d struggle not to see a post, video, or ad about AI, whatever industry you work in.

In digital marketing, AI has taken over completely, weaving itself into nearly every aspect of day-to-day work, especially within PPC advertising.

From automated bidding to AI-generated ad creative, platforms like Google Ads and Microsoft Advertising have been doubling down on this for years.

Naturally, this shift raises questions and concerns among advertisers. One side claims AI is out of control and taking over, the other boasts about time saved and game-changing results, and the middle ground is trying to figure out exactly what the impact is and where it’s heading.

It’s a difficult question to answer with a simple yes or no. With so many opinions and platforms for sharing them, the debate is everywhere, and although the topic is certainly not in its infancy, it can feel that way in 2025.

In this article, we’ll explore how AI is used in PPC today, the benefits it offers, the concerns it brings, and how advertisers can best adapt.

What Role Does AI Play In PPC Today?

The majority of advertisers are already using some form of AI-driven tool in their workflow: 74% of marketers reported using AI tools last year, up from just 21% in 2022.

Within the platforms themselves, PPC campaigns are heavily invested in artificial intelligence, both above and below the hood. Key areas include:

Bid Automation

Gone are the days of manual bidding on hundreds of keywords or product groups (in most cases).

Google’s and Microsoft’s automated bidding uses machine learning to set optimal bids for each auction based on the likelihood of a conversion.

These algorithms analyze countless signals (device, location, time of day, user behavior patterns, etc.) in real time to adjust bids far more precisely than a human could.

In this scenario, the advertiser’s role is to feed these bidding strategies the best possible data on which to base decisions.

Then, at a strategic level, advertisers need to determine the structure, targeting, goals, etc., and this is where Google has pushed AI further into the hands of PPC teams.

From Google’s side, it’s an indication of trust that the AI will find relevant matches and handle bids for them. I have seen this work incredibly well, and I’ve also seen it work terribly; it’s all context-dependent.

Dynamic Creative & Assets

Responsive Search Ads (RSAs) allow advertisers to input multiple headlines and descriptions, which Google’s AI then mixes and matches to serve the best-performing combinations for each query.

Over time, the algorithm learns which messages resonate most.

Google has even introduced generative AI tools to create ad assets (headlines, images, etc.) automatically based on your website content and campaign goals.

Similarly, Microsoft’s platform now offers a Copilot feature that can generate ad copy variations and images, and suggest keywords using AI.

Of all the AI-related changes in Google Ads, in my experience, this is the one advertisers welcomed most: it saves time and creates a clean way to test different messaging, calls to action, and more.

Keyword Match Types

The recipe Google gives advertisers for Google Ads in 2025 is to blend broad match with automated bidding.

Why is this? According to Google, machine learning attempts to understand user intent and match ads to queries that aren’t exact matches but are deemed relevant.

Think about it this way: You’ve done your research for your new search campaign, built out your ad groups, and are confident that you have covered all bases.

How will this change over time, and how can you guarantee you’re not missing relevant auctions? This is the rhetoric Google runs with for broad match, leaning on the stat that, of billions of searches per day, roughly 15% are brand-new queries, and pushing advertisers to loosen targeting so machine learning can operate without constraints.

There is certainly value in this: reportedly, 62% of advertisers using Google’s Smart Bidding have made broad match their primary keyword match type, a strategy that was very much a no-go for years. However, handing all control over to AI doesn’t fully align with what matters most (profitability, LTV, margins, etc.), and there has to be a middle ground.

Audience Targeting And Optimization

Both Google and Microsoft leverage AI to build and target audiences.

Campaign types like Performance Max are almost entirely AI-driven; they automatically allocate your budget across search, display, YouTube, Gmail, etc., to find conversions wherever they occur.

Advertisers simply provide creative assets, search themes, conversion goals, etc., and the AI does the rest.

To a large degree, the better the data you feed in, the better the performance.

Of all the AI topics in Google Ads, PMax is among the most debated in the industry, yet it’s telling that 63% of PPC experts plan to increase spend on Google’s feed-based Performance Max campaigns this year.

Recommendations, Auto Applies, And Budget Optimization

If you work in or around PPC, you’ll have seen, closed, shouted at, and, on rare occasions, actually acted on these.

The platforms continuously analyze account performance and suggest optimizations.

Some are basic, but others (like budget reallocation or shifting to different bid strategies) are powered by machine learning insights across thousands of accounts.

As good as these may sound, they are only as good as the data fed into the account, and they lack context; in some cases, applying them can be detrimental to account performance.

In summary, advertisers have had to embrace AI to a large extent in their day-to-day campaign management.

But with this embrace comes a natural question: Is all this AI making things better or worse for advertisers, or is it just a way for ad platforms to grow their market share?

What Are The Benefits Of AI In PPC?

AI offers some clear advantages for paid search marketers.

When used properly, AI can make campaigns more efficient and effective, and it can save a great deal of time once spent on monotonous tasks.

Here are some key benefits:

Efficiency And Time Savings

One of the biggest wins is automation of labor-intensive tasks.

AI can analyze massive data sets and adjust bids or ads 24/7, far faster than any human.

This frees up marketers to focus on strategy instead of repetitive tasks.

Mundane tasks such as bid adjustments, budget pacing, and creative rotation can be picked up by AI, allowing PPC teams to focus on high-level strategy and analysis and on the bigger picture.

It’s certainly not a case of set-and-forget, but the balance has shifted.

AI can now take care of the executional heavy lifting, while humans guide the strategy, interpret the nuance, and make the judgment calls that machines can’t.

Structural Management

A clear benefit of AI in many facets of paid search is the consolidation of account structures.

Large advertisers might have millions of keywords or hundreds of ads, which at one time were manually mapped out and managed group by group.

With automated bidding strategies adjusting bids in real time, serving the best possible creative, and doubling down on the keywords, product groups, and SKUs that work, PPC teams can whittle down overly complex account structures into consolidated themes they can feed with data.

Campaigns like Performance Max scale across channels automatically, finding additional inventory (like YouTube or Display) without the advertiser manually creating separate campaigns, further making life easier for advertisers who choose to use them.

Optimization Of Ad Creative And Testing

Rather than running a handful of ad variations, responsive ads powered by AI can test dozens of combinations of headlines and descriptions instantly.

The algorithm learns which messages work best for each search term or audience segment.

Additionally, new generative AI features can create ad copy or image variations you hadn’t considered, expanding creative possibilities. But check these before launch, and if they’re set to auto-apply, consider removing and reviewing them first, as the outputs can be interesting.

The ad platforms’ overarching goal is to solve a problem many teams face: producing creative quickly. They do, to an extent, but there’s still a way to go.

Audience Targeting And Personalization

AI can identify user patterns to target more precisely than manual bidding.

Google’s algorithms might learn that certain search queries or user demographics are more likely to convert and automatically adjust bids or show specific ad assets to those segments, and as these change over time, so do the bidding strategies.

This kind of micro-optimization of who sees which ad was very hard to do manually and had serious limitations.

In essence, the machine finds your potential customers using complex signals, adjusting bids in real time for each user, versus setting one bid for a term or product group across every auction and essentially treating them all the same.

What Are The Concerns Of AI In PPC?

Despite all the promise, it’s natural for advertisers to have some worries about the march of AI in paid search.

Handing over control to algorithms and black box systems comes with its challenges.

In practice, there have been hiccups and valid concerns that explain why some in the industry are cautious.

Loss Of Control And Transparency

A common gripe is that as AI takes over, advertisers lose visibility into the “why” behind performance changes.

Take PMax, for example. These fully automated campaigns provide only limited data when compared to a segmented structure, making it hard to understand what’s driving conversions and putting advertisers in a difficult position when feeding back performance to stakeholders who once had a wealth of data to dig through.

Nearly half of PPC specialists said managing campaigns has become harder in the last two years because of the insights and data lost to automated campaign types like PMax. One industry survey found that trust in major ad platforms has declined over the past year, with Google experiencing a 54% net decline in trust sentiment.

Respondents cited the platforms’ prioritization of black box automation over user control as a key issue, with many feeling they are flying partially blind. That’s a huge worry considering the budgets involved and the importance of Google Ads as an advertising channel for millions of brands worldwide.

Performance And Efficiency Trade-Offs

I’ve mentioned this a couple of times already: as with most AI in the context of Google Ads, the data fed into the platform determines how well the AI performs. Adopting AI in PPC does not deliver immediate performance improvements for every account, however hard Google pushes that narrative.

Algorithms optimize for the goal you set (e.g., achieve this ROAS), sometimes at the expense of other metrics like cost per conversion or return on investment (ROI).

Take broad match keywords combined with Smart Bidding; this might bring in more traffic, but some of that traffic could be low quality or not truly incremental, impacting the bottom line and how you manage your budgets.

Take this with a pinch of salt given the importance of context, but an analysis of over 2,600 Google Ads accounts found that 72% of advertisers saw better return on ad spend (ROAS) with traditional exact match keyword targeting, whereas only about 26% of accounts achieved better ROAS using broad match automation.

Advertisers are rightly concerned that blindly following AI recommendations could lead to wasted spend on irrelevant clicks or diminishing returns.

Then there’s the learning period for automated strategies, which can be costly (but necessary): the algorithm might spend a lot figuring out what works, something not every business can afford.

Mistakes, Quality, And Brand Safety

AI isn’t infallible.

There have been instances of AI-generated ad copy that miss the mark or even violate brand guidelines.

For example, if you let generative AI create search ads, it might produce statements that are factually incorrect or not in the desired tone.

Having worked extensively in paid search for luxury fashion brands, the risk of AI producing off-brand creative and messaging is often a roadblock to getting on board with new campaign types.

In a Salesforce survey, 31% of marketing professionals cited accuracy and quality concerns with AI outputs as a barrier.

To add further complexity, many features, such as auto-applies in Google Ads, are not easy to spot within accounts, and catching them depends on the expertise of the team managing PPC. Certain AI-generated assets or enhancements could be live without teams knowing, which can cause friction in businesses with strict brand guidelines.

Over-Reliance And Skills Erosion

Another subtle worry is that marketers relying heavily on AI could see their own skills become redundant.

PPC professionals used to pride themselves on granular account optimization, but if the machine is doing everything, how will their jobs change?

A study by HubSpot found that over 57% of U.S. marketers feel pressure to learn AI tools or risk becoming irrelevant in their careers.

With PPC, all this means is that less and less time is spent within the accounts undertaking repetitive tasks, something that I’ve championed for years.

Every paid search team is different and built from different levels of expertise. However, the true value PPC teams bring shouldn’t be the intricacies of campaign management; it’s understanding the value their channel drives and everything around it that influences performance.

So, Should Advertisers Be Worried About AI In PPC?

As with most topics in PPC (and most articles I write), there isn’t a simple yes or no answer, and it’s very much context dependent.

PPC advertisers shouldn’t panic; they should be aware, informed, and prepared. This doesn’t mean knowing the exact ins and outs of AI models, far from it.

Rather than asking if you trust it or not, or if you really should give up the reins of manual campaign management, ask yourself how you can use AI to make your job easier and to drive better results for your business/clients.

Over my last decade and a half in performance marketing, working in-house, within independents, networks, and from running my own paid media agency, I’ve seen many trends come and go, each one shifting the role of the PPC team ever so slightly.

AI is certainly not a trend; it’s fundamentally changing the world we live in. Within PPC, it’s changing the way we work, pushing advertisers to spend less time in the accounts than they once did and freeing up time for what really moves the needle in managing paid media.

In my opinion, this is a good thing, but there is definitely a balance that needs to be struck, and what this balance looks like is up to you and your teams.



Featured Image: Roman Samborskyi/Shutterstock

Google Brings Gemini 3 To Search’s AI Mode via @sejournal, @MattGSouthern

Google has integrated Gemini 3 into Search’s AI Mode. This marks the first time Google has shipped a Gemini model to Search on its release date.

Google AI Pro and Ultra subscribers in the U.S. can access Gemini 3 Pro by selecting “Thinking” from the model dropdown in AI Mode.

Robby Stein, VP and GM of Google Search, wrote on X:

“Gemini 3, our most intelligent model, is landing in Google Search today – starting with AI Mode. Excited that this is the first time we’re shipping a new Gemini model in Search on day one.”

Google plans to expand Gemini 3 in AI Mode to all U.S. users soon, with higher usage limits for Pro and Ultra subscribers.

What’s New

Search Updates

Google describes Gemini 3 as a model with state-of-the-art reasoning and deep multimodal understanding.

In the context of Search, it’s designed to explain advanced concepts, work through complex questions, and support interactive visuals that run directly inside AI Mode responses.

With Gemini 3 in place, Google says AI Mode has effectively re-architected what a “helpful response” looks like.

Stein explains:

“Gemini 3 is also making Search smarter by re-architecting what a helpful response looks like. With new generative UI capabilities, Gemini 3 in AI Mode can now dynamically create the overall response layout when it responds to your query – completely on the fly.”

Instead of only returning a block of text, AI Mode can design a response layout tailored to your query. That includes deciding when to surface images, tables, or other structured elements so the answer is clearer and easier to work with.

In the coming weeks, Google will add automatic model selection, Stein continues:

“Search will intelligently route tough questions in AI Mode and AI Overviews to our frontier model, while continuing to use faster models for simpler tasks.”

Enhanced Query Fan-Out

Gemini 3 upgrades Google’s query fan-out technique.

According to Stein, Search can now issue more related searches in parallel and better interpret what you’re trying to do.

A potential benefit, Stein adds, is that Google may find content it previously missed:

“It now performs more and much smarter searches because Gemini 3 better understands you. That means Search can now surface even more relevant web content for your specific question.”

Generative UI

Gemini 3 in AI Mode introduces generative UI features that build dynamic visual layouts around your query.

The model analyzes your question and constructs a custom response using visual elements such as images, tables, and grids. When an interactive tool would help, Gemini 3 can generate a small app in real time and embed it directly in the answer.

Examples from Google’s announcement include:

  • An interactive physics simulation for exploring the three-body problem
  • A custom mortgage loan calculator that lets you compare different options and estimate long-term savings

All of these responses include prominent links to high-quality content across the web so you can click through to source material.

See a demonstration in Google’s launch video below:

Why This Matters

Gemini 3 changes how your content is discovered and used in AI Mode. With deeper query fan-out, Google can access more pages per question, which might influence which sites are cited or linked during long, complex searches.

The updated layouts and interactive features change how links appear on your screen. On-page tools, explainers, and visualizations could now compete directly with Google’s own interface.

As Gemini 3 becomes available to more people, it will be important to watch how your content is shown or referenced in AI responses, in addition to traditional search rankings.

Looking Ahead

Google says it will continue refining these updates based on feedback as more people try the new tools. Automatic model selection is set to arrive in the coming weeks for Google AI Pro and Ultra subscribers in the U.S., with broader U.S. access to Gemini 3 in AI Mode planned but not yet scheduled.

Selling AI Search Strategies To Leadership Is About Risk via @sejournal, @Kevin_Indig


AI search visibility isn’t “too risky” for executives to buy into. Selling AI search strategies to leadership is about risk.

Image Credit: Kevin Indig

A Deloitte survey of more than 2,700 leaders reveals that getting buy-in for an AI search strategy isn’t about innovation, but risk.

SEO teams keep failing to sell AI search strategies for one reason: They’re pitching deterministic ROI in a probabilistic environment.

The old way: Rankings → traffic → revenue. But that event chain doesn’t exist in AI systems.

LLMs don’t rank. They synthesize. And Google’s AI Overviews and AI Mode don’t “send traffic.” They answer.

Yet most teams still walk into a leadership meeting with a deck built on a decaying model. Then, executives say no – not because AI search “doesn’t work,” but because the pitch asks them to fund an outcome nobody can guarantee.

In AI search, you cannot sell certainty. You can only sell controlled learning.

1. You Can’t Sell AI Search With A Deterministic ROI Model

Everyone keeps asking the wrong question: “How do I prove my AI search strategy will work so leadership will fund it?” You can’t; there’s no traffic chain you can model. Randomness is baked directly into the outputs.

You’re forcing leadership to evaluate your AI search strategy with a framework that’s already decaying. Confusion about AI search vs. traditional SEO metrics and forecasting is blocking you from buy-in. When SEO teams try to sell an AI search strategy to leadership, they often encounter several structural problems:

  1. Lack of clear attribution and ROI: Where you see opportunity, leadership sees vague outcomes and deprioritizes investment. Traffic and conversions from AI Overviews, ChatGPT, or Perplexity are hard to track.
  2. Misalignment with core business metrics: It’s harder to tie results to revenue, CAC, or pipeline – especially in B2B.
  3. AI search feels too experimental: Early investments feel like bets, not strategy. Leadership may see this as a distraction from “real” SEO or growth work.
  4. No owned surfaces to leverage: Many brands aren’t mentioned in AI answers at all. SEO teams are selling a strategy that has no current baseline.
  5. Confusion between SEO and AI search strategy: Leadership doesn’t understand the distinction between optimizing for classic Google Search vs. LLMs vs. AI Overviews. Clear differentiation is needed to secure a new budget and attention.
  6. Lack of content or technical readiness: The site lacks the structured content, brand authority, or documentation to appear in AI-generated results.

2. Pitch AI Search Strategy As Risk Mitigation, Not Opportunity

Executives don’t buy performance in ambiguous environments. They buy decision quality. And the decision they need you to make is simple: Should your brand invest in AI-driven discovery before competitors lock in the advantage – or not?

Image Credit: Kevin Indig

AI search is still an ambiguous environment. That’s why your winning strategy pitch should be structured for fast, disciplined learning with pre-set kill criteria instead of forecasting traffic → revenue. Traditionally, SEO teams pitch outcomes (traffic, conversions), but leadership needs to buy learning infrastructure (testing systems, measurement frameworks, kill criteria) for AI search.

Leadership thinks you’re asking for “more SEO budget” when you’re actually asking them to buy an option on a new distribution channel.

Everyone treats the pitch as “convince them it will work” when it should be “convince them the cost of not knowing is higher than the cost of finding out.” Executives don’t need certainty about impact – they need certainty that you’ll produce a decision with their money.

Making stakes crystal clear:

Your Point of View + Consequences = Stakes. Leaders need to know what happens if they don’t act.

Image Credit: Kevin Indig

The cost of passing on an AI search strategy can be simple and brutal:

  1. Competitors who invest early in AI search visibility will build entity authority and brand presence.
  2. Organic traffic stagnates and will drop over time while cost-per-click rises.
  3. AI Overviews and AI Mode outputs will replace queries your brand used to win in Google.
  4. Your influence on the next discovery channel will be decided without you.

AI search strategy builds brand authority, third-party mentions, entity relationships, content depth, pattern recognition, and trust signals in LLMs. These signals compound. They also freeze into the training data of future models.

If you aren’t shaping that footprint now, the model will rely on whatever scraps already exist and whatever your competitors are feeding it.

3. Sell Controlled Experiments – Small, Reversible, And Time-Boxed

You’re asking for resources to discover the truth before the market makes the decision for you. This approach collapses resistance because it removes the fear of sunk cost and turns ambiguity into manageable, reversible steps.

A winning AI search strategy proposal sounds like:

  • “We’ll run x tests over 12 months.”
  • “Budget: ≤0.3% of marketing spend.”
  • “Three-stage gates with Go/No-Go decisions.”
  • “Scenario ranges instead of false-precision forecasts.”
  • “We stop if leading indicators don’t move by Q3.”

45% of executives rely more on instinct than facts. Balance your data with a compelling narrative – focus on outcomes and stakes, not technical details.

I covered how to build a pitch deck and strategic narrative in how to explain the value of SEO to executives, but focus on selling learning as a deliverable under the current AI search landscape.

When you present to leaders, they focus on only three things: money (revenue, profit, cost), market (market share, time-to-market), and exposure (retention, risk). Structure every pitch around these.

The SCQA framework (Minto Pyramid) guides you:

  • Situation: Set the context.
  • Complication: Explain the problem.
  • Question: What should we do?
  • Answer: Your recommendation.

This is the McKinsey approach – and executives expect it.


Featured Image: Paulo Bobita/Search Engine Journal

Google Extends AI Travel Planning And Agentic Booking In Search via @sejournal, @MattGSouthern

Google announced three AI-powered updates to Search that extend how users plan and book travel within AI Mode.

The company is launching Canvas for travel planning on desktop, expanding Flight Deals globally, and rolling out agentic booking capabilities that connect users directly to reservation partners.

The announcement continues Google’s push to handle complete user journeys inside Search rather than directing traffic to publisher sites and booking platforms.

What’s New

Canvas Travel Planning

Canvas creates travel itineraries inside AI Mode’s side panel interface. You describe your trip requirements, select “Create with Canvas,” and receive plans combining flight and hotel data, Google Maps information, and web content.

Canvas travel planning is available on desktop in the US for users opted into the AI Mode experiment in Google Labs.

Flight Deals Global Expansion

Flight Deals uses AI to match flexible travelers with affordable destinations based on natural language descriptions of travel preferences.

The tool launched previously in the US, Canada, and India. The feature has started rolling out to more than 200 countries and territories.

Agentic Booking Expansion

AI Mode now searches across multiple reservation platforms to find real-time availability for restaurants, events, and local appointments. The system presents curated options with direct booking links to partner sites.

Restaurant booking launches this week in the US without requiring Labs access. Event tickets and local appointment booking remain available to US Labs users.

Why This Matters

Canvas and agentic booking capabilities represent Google handling trip research, planning, and reservations inside its own interface.

People who would previously visit multiple publisher sites to research destinations and compare options can now complete those tasks in AI Mode.

The updates fit Google’s established pattern of verticalizing high-value query types. Rather than presenting traditional search results that send users to external sites, AI Mode guides users through multi-step processes from research to transaction completion.

Looking Ahead

Google provided no timeline for direct flight and hotel booking in AI Mode beyond confirming active development with industry partners.

Watch for whether Google provides analytics or attribution tools that let businesses track bookings initiated through AI Mode. Without visibility into these flows, measuring the impact of AI Mode on travel and local business traffic will be difficult.

LLMs Are Changing Search & Breaking It: What SEOs Must Understand About AI’s Blind Spots via @sejournal, @MattGSouthern

In the last two years, incidents have shown how large language model (LLM)-powered systems can cause measurable harm. Some businesses have lost a majority of their traffic overnight, and publishers have watched revenue decline by over a third.

Tech companies have faced wrongful death lawsuits after teenagers had extensive interactions with their chatbots.

AI systems have given dangerous medical advice at scale, and chatbots have made up false claims about real people in defamation cases.

This article looks at the proven blind spots in LLM systems and what they mean for SEOs who work to optimize and protect brand visibility. You can read specific cases and understand the technical failures behind them.

The Engagement-Safety Paradox: Why LLMs Are Built To Validate, Not Challenge

LLMs face a basic conflict between business goals and user safety. The systems are trained to maximize engagement by being agreeable and keeping conversations going. This design choice increases retention and drives subscription revenue while generating training data.

In practice, it creates what researchers call “sycophancy,” the tendency to tell users what they want to hear rather than what they need to hear.

Stanford PhD researcher Jared Moore demonstrated this pattern. When a user claiming to be dead (showing symptoms of Cotard’s syndrome, a mental health condition) gets validation from a chatbot saying “that sounds really overwhelming” with offers of a “safe space” to explore feelings, the system backs up the delusion instead of giving a reality check. A human therapist would gently challenge this belief while the chatbot validates it.

OpenAI admitted this problem in September after facing a wrongful death lawsuit. The company said ChatGPT was “too agreeable” and failed to spot “signs of delusion or emotional dependency.” That admission came after 16-year-old Adam Raine from California died. His family’s lawsuit showed that ChatGPT’s systems flagged 377 self-harm messages, including 23 with over 90% confidence that he was at risk. The conversations kept going anyway.

The pattern was observed in Raine’s final month. He went from two to three flagged messages per week to more than 20 per week. By March, he spent nearly four hours daily on the platform. OpenAI’s spokesperson later acknowledged that safety guardrails “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

Think about what that means. The systems fail at the exact moment of highest risk, when vulnerable users are most engaged. This happens by design when you optimize for engagement metrics over safety protocols.

Character.AI faced similar issues with 14-year-old Sewell Setzer III from Florida, who died in February 2024. Court documents show he spent months in what he perceived as a romantic relationship with a chatbot character. He withdrew from family and friends, spending hours daily with the AI. The company’s business model was built for emotional attachment to maximize subscriptions.

A peer-reviewed study in New Media & Society found users showed “role-taking,” believing the AI had needs requiring attention, and kept using it “despite describing how Replika harmed their mental health.” When the product is addiction, safety becomes friction that cuts revenue.

This creates direct effects for brands using or optimizing for these systems. You’re working with technology that’s designed to agree and validate rather than give accurate information. That design shows up in how these systems handle facts and brand information.

Documented Business Impacts: When AI Systems Destroy Value

The business results of LLM failures are clear and proven. Between 2023 and 2025, companies showed traffic drops and revenue declines directly linked to AI systems.

Chegg: $17 Billion To $200 Million

Education platform Chegg filed an antitrust lawsuit against Google showing major business impact from AI Overviews. Traffic declined 49% year over year, while Q4 2024 revenue hit $143.5 million (down 24% year-over-year). Market value collapsed from $17 billion at peak to under $200 million, a 98% decline. The stock trades at around $1 per share.

CEO Nathan Schultz testified directly: “We would not need to review strategic alternatives if Google hadn’t launched AI Overviews. Traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content.”

The case argues Google used Chegg’s educational content to train AI systems that directly compete with and replace Chegg’s business model. This represents a new form of competition where the platform uses your content to eliminate your traffic.

Giant Freakin Robot: Traffic Loss Forces Shutdown

Independent entertainment news site Giant Freakin Robot shut down after traffic collapsed from 20 million monthly visitors to “a few thousand.” Owner Josh Tyler attended a Google Web Creator Summit where engineers confirmed there was “no problem with content” but offered no solutions.

Tyler documented the experience publicly: “GIANT FREAKIN ROBOT isn’t the first site to shut down. Nor will it be the last. In the past few weeks alone, massive sites you absolutely have heard of have shut down. I know because I’m in contact with their owners. They just haven’t been brave enough to say it publicly yet.”

At the same summit, Google allegedly admitted prioritizing large brands over independent publishers in search results regardless of content quality. This wasn’t leaked or speculated but stated directly to publishers by company reps. Quality became secondary to brand recognition.

There’s a clear implication for SEOs. You can execute perfect technical SEO, create high-quality content, and still watch traffic disappear because of AI.

Penske Media: 33% Revenue Decline And $100 Million Lawsuit

In September, Penske Media Corporation (publisher of Rolling Stone, Variety, Billboard, Hollywood Reporter, Deadline, and other brands) sued Google in federal court. The lawsuit showed specific financial harm.

Court documents allege that 20% of searches linking to Penske Media sites now include AI Overviews, and that percentage is rising. Affiliate revenue declined more than 33% by the end of 2024 compared to peak. Click-throughs have declined since AI Overviews launched in May 2024. The company showed lost advertising and subscription revenue on top of affiliate losses.

CEO Jay Penske stated: “We have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth, all of which is threatened by Google’s current actions.”

This is the first lawsuit by a major U.S. publisher targeting AI Overviews specifically with quantified business harm. The case seeks treble damages under antitrust law, permanent injunction, and restitution. Claims include reciprocal dealing, unlawful monopoly leveraging, monopolization, and unjust enrichment.

Even publishers with established brands and resources are showing revenue declines. If Rolling Stone and Variety can’t maintain click-through rates and revenue with AI Overviews in place, what does that mean for your clients or your organization?

The Attribution Failure Pattern

Beyond traffic loss, AI systems consistently fail to give proper credit for information. A Columbia University Tow Center study showed a 76.5% error rate in attribution across AI search systems. Even when publishers allow crawling, attribution doesn’t improve.

This creates a new problem for brand protection. Your content can be used, summarized, and presented without proper credit, so users get their answer without knowing the source. You lose both traffic and brand visibility at the same time.

SEO expert Lily Ray documented this pattern, finding a single AI Overview contained 31 Google property links versus seven external links, a ratio of more than four to one favoring Google’s own properties. She stated: “It’s mind-boggling that Google, which pushed site owners to focus on E-E-A-T, is now elevating problematic, biased and spammy answers and citations in AI Overview results.”

When LLMs Can’t Tell Fact From Fiction: The Satire Problem

Google AI Overviews launched with errors that made the system briefly notorious. The technical problem wasn’t a bug. It was an inability to distinguish satire, jokes, and misinformation from factual content.

The system recommended adding glue to pizza sauce (sourced from an 11-year-old Reddit joke), suggested eating “at least one small rock per day“, and advised using gasoline to cook spaghetti faster.

These weren’t isolated incidents. The system consistently pulled from Reddit comments and satirical publications like The Onion, treating them as authoritative sources. When asked about edible wild mushrooms, Google’s AI emphasized characteristics shared by deadly mimics, creating potentially “sickening or even fatal” guidance, according to Purdue University mycology professor Mary Catherine Aime.

The problem extends beyond Google. Perplexity AI has faced multiple plagiarism accusations, including adding fabricated paragraphs to actual New York Post articles and presenting them as legitimate reporting.

For brands, this creates specific risks. If an LLM system sources information about your brand from Reddit jokes, satirical articles, or outdated forum posts, that misinformation gets presented with the same confidence as factual content. Users can’t tell the difference because the system itself can’t tell the difference.

The Defamation Risk: When AI Makes Up Facts About Real People

LLMs generate plausible-sounding false information about real people and companies. Several defamation cases show the pattern and legal implications.

Australian mayor Brian Hood threatened the first defamation lawsuit against an AI company in April 2023 after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was the whistleblower who reported the bribes. The AI inverted his role from whistleblower to criminal.

Radio host Mark Walters sued OpenAI after ChatGPT fabricated claims that he embezzled funds from the Second Amendment Foundation. When journalist Fred Riehl asked ChatGPT to summarize an actual lawsuit, the system generated a completely fictional complaint naming Walters as a defendant accused of financial misconduct. Walters was never a party to the lawsuit nor mentioned in it.

The Georgia Superior Court dismissed the Walters case, finding OpenAI’s disclaimers about potential errors provided legal protection. The ruling established that “extensive warnings to users” can shield AI companies from defamation liability when the false information isn’t published by users.

The legal landscape remains unsettled. While OpenAI won the Walters case, that doesn’t mean all AI defamation claims will fail. The key issues are whether the AI system publishes false information about identifiable people and whether companies can disclaim responsibility for their systems’ outputs.

LLMs can generate false claims about your company, products, or executives. These false claims get presented with confidence to users. You need monitoring systems to catch these fabrications before they cause reputational damage.

Health Misinformation At Scale: When Bad Advice Becomes Dangerous

When Google AI Overviews launched, the system provided dangerous health advice, including recommending drinking urine to pass kidney stones and suggesting health benefits of running with scissors.

The problem extends beyond obvious absurdities. A Mount Sinai study found AI chatbots vulnerable to spreading harmful health information. Researchers could manipulate chatbots into providing dangerous medical advice with simple prompt engineering.

Meta AI’s internal policies explicitly allowed the company’s chatbots to provide false medical information, according to a 200+ page document exposed by Reuters.

For healthcare brands and medical publishers, this creates risks. AI systems might present dangerous misinformation alongside or instead of your accurate medical content. Users might follow AI-generated health advice that contradicts evidence-based medical guidance.

What SEOs Need To Do Now

Here’s what you need to do to protect your brands and clients:

Monitor For AI-Generated Brand Mentions

Set up monitoring systems to catch false or misleading information about your brand in AI systems. Test major LLM platforms monthly with queries about your brand, products, executives, and industry.
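There’s no standard tooling for this yet. A minimal sketch of a monthly check against one provider’s API, where the query list and model choice are assumptions to adapt to your own brand:

```python
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

BRAND_QUERIES = [  # adapt to your brand, products, and executives
    "What is Example Corp known for?",
    "Has Example Corp ever been sued?",
]

for query in BRAND_QUERIES:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user", "content": query}],
    )
    answer = resp.choices[0].message.content
    # Log each answer with a timestamp and review it for false claims.
    print(query, "->", answer[:200])
```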

When you find false information, document it thoroughly with screenshots and timestamps. Report it through the platform’s feedback mechanisms. In some cases, you may need legal action to force corrections.

Add Technical Safeguards

Use robots.txt to control which AI crawlers access your site. Major systems like OpenAI’s GPTBot, Google-Extended, and Anthropic’s ClaudeBot respect robots.txt directives. Keep in mind that blocking these crawlers means your content won’t appear in AI-generated responses, reducing your visibility.

The key is finding a balance that allows enough access to influence how your content appears in LLM outputs while blocking crawlers that don’t serve your goals.
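What that balance looks like depends on your goals. One common pattern, shown here as an illustrative robots.txt rather than a recommendation, blocks training crawlers while leaving search-oriented ones in:

```
# Block crawlers used primarily for model training.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Allow OpenAI's search crawler so content can still be cited in answers.
User-agent: OAI-SearchBot
Allow: /
```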

Consider adding terms of service that directly address AI scraping and content use. While legal enforcement varies, clear Terms of Service (TOS) give you a foundation for possible legal action if needed.

Monitor your server logs for AI crawler activity. Understanding which systems access your content and how frequently helps you make informed decisions about access control.
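A simple sketch of that kind of check, assuming a standard access log that records user-agent strings (the log path and agent list are assumptions):

```python
from collections import Counter

# Known AI crawler user-agent tokens; extend as needed.
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "Google-Extended",
             "ClaudeBot", "PerplexityBot"]

counts = Counter()
with open("/var/log/nginx/access.log") as log:  # path is an assumption
    for line in log:
        for agent in AI_AGENTS:
            if agent in line:
                counts[agent] += 1

for agent, n in counts.most_common():
    print(f"{agent}: {n} requests")
```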

Advocate For Industry Standards

Individual companies can’t solve these problems alone. The industry needs standards for attribution, safety, and accountability. SEO professionals are well-positioned to push for these changes.

Join or support publisher advocacy groups pushing for proper attribution and traffic preservation. Organizations like News Media Alliance represent publisher interests in discussions with AI companies.

Participate in public comment periods when regulators solicit input on AI policy. The FTC, state attorneys general, and Congressional committees are actively investigating AI harms. Your voice as a practitioner matters.

Support research and documentation of AI failures. The more documented cases we have, the stronger the argument for regulation and industry standards becomes.

Push AI companies directly through their feedback channels by reporting errors when you find them and escalating systemic problems. Companies respond to pressure from professional users.

The Path Forward: Optimization In A Broken System

There is a lot of specific and concerning evidence. LLMs cause measurable harm through design choices that prioritize engagement over accuracy, through technical failures that create dangerous advice at scale, and through business models that extract value while destroying it for publishers.

Two teenagers died, multiple companies collapsed, and major publishers lost 30%+ of revenue. Courts are sanctioning lawyers for AI-generated lies, state attorneys general are investigating, and wrongful death lawsuits are proceeding. This is all happening now.

As AI integration accelerates across search platforms, the magnitude of these problems will scale. More traffic will flow through AI intermediaries, more brands will face lies about them, more users will receive made-up information, and more businesses will see revenue decline as AI Overviews answer questions without sending clicks.

Your role as an SEO now includes responsibilities that didn’t exist five years ago. The platforms rolling out these systems have shown they won’t address these problems proactively. Character.AI added minor protections only after lawsuits, OpenAI admitted sycophancy problems only after a wrongful death case, and Google pulled back AI Overviews only after public proof of dangerous advice.

Change within these companies comes from external pressure, not internal initiative. That means the pressure must come from practitioners, publishers, and businesses documenting harm and demanding accountability.

The cases here are just the beginning. Now that you understand the patterns and behavior, you’re better equipped to see problems coming and develop strategies to address them.



Featured Image: Roman Samborskyi/Shutterstock

ChatGPT Outage Affects APIs And File Uploads via @sejournal, @martinibuster

OpenAI is experiencing a widespread outage affecting two systems: the API and ChatGPT. The outage has been ongoing for at least half an hour as of publication.

Batch API Jobs Stuck In Finalizing State

The first issue is that Batch API jobs are getting stuck in the finalizing state. Twelve API components are monitored for uptime, and it’s the Batch component that’s experiencing “degraded” performance. The issue has been ongoing since 3:54 PM.

According to OpenAI:

“Subset of Batch API jobs stuck in finalizing state”

ChatGPT Uploads Outage

The other issue is that ChatGPT file uploads are failing. This is described as a partial outage.

OpenAI’s official explanation:

“File uploads to ChatGPT conversations are failing for some users, giving an error message indicating the file has expired.”

This issue has been ongoing since 3:53 PM.

Screenshot of OpenAI Uploads Outage