OpenAI’s Sam Altman Raises Possibility Of Ads On ChatGPT via @sejournal, @martinibuster

OpenAI’s CEO Sam Altman sat for an interview where he explained that his vision for the future of ChatGPT is as a trusted assistant that’s user-aligned, saying that booking hotels is not going to be the way to monetize “the world’s smartest model.” He pointed to Google as an example of what he doesn’t want ChatGPT to become: a service that accepts advertising dollars to place the worst choice above the best choice. He then followed up to express openness to advertising.

User-Aligned Monetization Model

Altman contrasted OpenAI’s revenue approach with the ad-driven incentives of Google. He explained that Google’s Search and advertising ecosystem depends on Google’s search results “doing badly for the user,” because ranking decisions are partly tied to maximizing advertising income.

The interviewer related that he and his wife took a trip to Europe, booked multiple hotels with ChatGPT’s help, and ate at restaurants that ChatGPT helped him find, and at no point did any kickback or advertising fee go back to OpenAI. That led him to tell his wife that ChatGPT “didn’t get a dime from this… this just seems wrong….” because he was getting so much value from ChatGPT and ChatGPT wasn’t getting anything back.

Altman answered that users trust ChatGPT and that’s why so many people pay for it.

He explained:

“I think if ChatGPT finds you the… To zoom out even before the answer, one of the unusual things we noticed a while ago, and this was when it was a worse problem, ChatGPT would consistently be reported as a user’s most trusted technology product from a big tech company. We don’t really think of ourselves as a big tech company, but I guess we are now. That’s very odd on the surface, because AI is the thing that hallucinates, AI is the thing with all the errors, and that was much more of a problem. And there’s a question of why.

Ads on a Google search are dependent on Google doing badly. If it was giving you the best answer, there’d be no reason ever to buy an ad above it. So you’re like, that thing’s not quite aligned with me.

ChatGPT, maybe it gives you the best answer, maybe it doesn’t, but you’re paying it, or hopefully are paying it, and it’s at least trying to give you the best answer. And that has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else.”

Altman’s response used the interviewer’s experience as an example of a paradigm change in user trust in technology. He contrasted ChatGPT’s model, where users directly pay for answers, with Google’s ad-based model, which profits from imperfect results. His point is that ChatGPT’s business model aligns more closely with users’ interests, earning trust rather than making users feel exploited by an advertising system. This is why users perceive ChatGPT as more trustworthy, even though ChatGPT is known to hallucinate.

Altman Is Open To Transaction Fees

Altman was strongly against accepting advertising money in exchange for showing a hotel above what ChatGPT would naturally show. He said that he would be open to accepting a transaction fee should a user book that hotel through ChatGPT because that has no influence on what ChatGPT recommends, thus preserving a user’s trust.

He shared how this would work:

“If ChatGPT were accepting payment to put a worse hotel above a better hotel, that’s probably catastrophic for your relationship with ChatGPT. On the other hand, if ChatGPT shows you its best hotel, whatever that is, and then if you book it with one click, takes the same cut that it would take from any other hotel, and there’s nothing that influenced it, but there’s some sort of transaction fee, I think that’s probably okay. And with our recent commerce thing, that’s the spirit of what we’re trying to do. We’ll do that for travel at some point.”

I think a takeaway here is that Altman believes the advertising model that the Internet has been built on over the past thirty-plus years can subvert user trust and lead to a poor user experience. He feels that a transaction fee model is less likely to impact the quality of the service that users are paying for and that it will maintain the feeling of trust that people have in ChatGPT.

But later on in the interview, as you’ll see, Altman surprises the interviewer with his comment about the possibility of advertisements on ChatGPT.

How OpenAI Will Monetize Itself

When pressed about how OpenAI will monetize itself, Altman responded that he expects the future of commerce to have lower margins, and that he expects to fund OpenAI not by booking hotels but by doing exceptional things like curing diseases.

Altman explained his vision:

“So one thing I believe in general related to this is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I’m happy about that. I think there’s like a lot of taxes that just suck for the economy and getting those down should be great all around. But I think that most companies like OpenAI will make more money at a lower margin.

…I think the way to monetize the world’s smartest model is certainly not hotel booking. …I want to discover new science and figure out a way to monetize that. You can only do that with the smartest model.

There is a question of, should, many people have asked, should OpenAI do ChatGPT at all? Why don’t you just go build AGI? Why don’t you go discover a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology? And it is not an unfair question because I believe that is the stuff that we will do that will be most important and make the most money eventually.

…Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever.

So maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I’m certain it’s not. I do think it’s a very important thing to do for the world, and I’m happy for OpenAI to do some things that are not the economic maxing thing.”

Advertisements May Be Coming To ChatGPT

At around the 18-minute mark, the interviewer asked Altman about advertising on ChatGPT. Altman acknowledged that there may be a form of advertising but was vague about what it would look like.

He explained:

“Again, there’s a kind of ad that I think would be really bad, like the one we talked about.

There are kinds of ads that I think would be very good or pretty good to do. I expect it’s something we’ll try at some point. I do not think it is our biggest revenue opportunity.”

The interviewer asked:

“What will the ad look like on the page?”

Altman responded:

“I have no idea. You asked like a question about productivity earlier. I’m really good about not doing the things I don’t want to do.”

Takeaway

Sam Altman suggests an interesting way forward on how to monetize Internet users. His way is based on trust and finding a way to monetize that doesn’t betray that trust.

Watch the interview starting at about the 16-minute mark:

Featured image/Screenshot from interview

Why AI Content All Sounds the Same & How SEO Pros Can Fix It via @sejournal, @mktbrew

This post was sponsored by Market Brew. The opinions expressed in this article are the sponsor’s own.

If your AI-generated articles don’t rank but sound fine, you’re not alone.

AI has made it effortless to produce content, but not to stand out in SERPs.

Across nearly every industry, brands are using generative AI tools like ChatGPT, Perplexity, Claude, and more to scale content production, only to discover that, to search engines, everything sounds the same.

But this guide will help you build E-E-A-T-friendly & AI-Overview-worthy content that boosts your AI Overview visibility, while giving you more control over your rankings.

Why Does All AI-Generated Content Sound The Same?

Most generative AI models write from the same training data, producing statistically “average” answers to predictable prompts.

The result is fluent, on-topic copy that is seen as interchangeable from one brand to the next.

To most readers, it may feel novel.

To search engines, your AI content may look redundant.

Algorithms can now detect when pages express the same ideas with minor wording differences. Those pages compete for the same meaning, and only one tends to win.

The challenge for SEOs isn’t writing faster, it’s writing differently.

That starts with understanding why search engines can tell the difference even when humans can’t.

How Do Search Engines & Answer Engines See My Content?

Here’s what Google actually sees when it looks at your page: search engines no longer evaluate content by surface keywords. They map meaning.

Modern ranking systems translate your content into embeddings.

When two pages share nearly identical embeddings, the algorithm treats them as duplicates of meaning, similar to duplicate content.

That’s why AI-generated content blends together. The vocabulary may change, but the structure and message remain the same.
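To make the embedding comparison concrete, here is a toy sketch of the duplicate-meaning check. The four-dimensional vectors are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, but the cosine-similarity math is the same.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" for three pages (invented numbers).
page_a = [0.9, 0.1, 0.4, 0.2]      # original article
page_b = [0.88, 0.12, 0.41, 0.19]  # AI rewrite of the same ideas
page_c = [0.1, 0.8, 0.1, 0.6]      # genuinely different topic

print(cosine_similarity(page_a, page_b))  # near 1.0: duplicate meaning
print(cosine_similarity(page_a, page_c))  # much lower: distinct content
```

Two pages that say the same thing with different words score near 1.0, which is why swapping vocabulary alone doesn’t make AI content distinct.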

What Do Answer Engines Look For On Web Pages?

Beyond words, engines analyze the entire ecosystem of a page.

These structural cues help determine whether content is contextually distinct or just another derivative variant.

To stand out, SEOs have to shape the context that guides the model before it writes.

That’s where the Inspiration Stage comes in.

How To Teach AI To Write Like Your Brand, Not The Internet

Before you generate another article, feed the AI your brand’s DNA.

Language models can complete sentences, but can’t represent your brand, structure, or positioning unless you teach them.

Advanced teams solve this through context engineering, defining who the AI is writing for and how that content should behave in search.

The Inspiration Stage should combine three elements that together create brand-unique outputs.

Step 1 – Create A Brand Bible: Define Who You Are

The first step is identity.

A Brand Bible translates your company’s tone, values, and vocabulary into structured guidance the AI can reference. It tells the model how to express authority, empathy, or playfulness. And just as important, what NOT to say.

Without it, every post sounds like a tech press release.

With it, you get language that feels recognizably yours, even when produced at scale.

“The Brand Bible isn’t decoration: it’s a defensive wall against generic AI sameness.”

A great example: Market Brew’s Brand Bible Wizard

Step 2 – Create A Template URL: Structure How You Write

Great writing still needs great scaffolding.

By supplying a Template URL, a page whose structure already performs well, you give the model a layout to emulate: heading hierarchy, schema markup, internal link positions, and content rhythm.

Adding a Template Influence parameter can help the AI decide how closely to follow that structure. Lower settings would encourage creative variation; higher settings would preserve proven formatting for consistency across hundreds of pages.

Templates essentially become repeatable frameworks for ranking success.

An example of how to apply a template URL

Step 3 – Reverse-Engineer Your Competitor Fan-Out Prompts: Know the Landscape

Context also means competition. When you create AI content, it needs to be optimized for a series of keywords and prompts.

Fan-out prompts map the broader semantic territory around a keyword or topic: a network of related questions, entities, and themes that appears across the SERP.

In addition, fan-out prompts should be reverse-engineered from top competitors in that SERP.

Feeding this intelligence into the AI ensures your content strategically expands its coverage, something that LLM search engines are hungry for.

“It’s not copying competitors, it’s reverse-engineering the structure of authority.”

Together, these three inputs create a contextual blueprint that transforms AI from a text generator into a brand and industry-aware author.

Market Brew’s implementation of reverse engineering fan-out prompts

How To Incorporate Human-Touch Into AI Content

If your AI tool spits out finished drafts with no checkpoints, you’ve lost control over content quality.

That’s a problem for teams who need to verify accuracy, tone, or compliance.

Breaking generation into transparent stages solves this.

Incorporate checkpoints where humans can review, edit, or re-queue the content at each stage:

  • Research.
  • Outline.
  • Draft.
  • Refinement.

Metrics for readability, link balance, and brand tone become visible in real time.

This “human-in-the-loop” design keeps creative control where it belongs.

Instead of replacing editors, AI becomes their analytical assistant: showing how each change affects the structure beneath the words.

“The best AI systems don’t replace editors, they give them x-ray vision into every step of the process.”

How To Build Content The Way Search Engines Read It

Modern SEO focuses on predictive quality signals: indicators that content is likely to perform before it ever ranks.

These include:

  • Semantic alignment: how closely the page’s embeddings match target intent clusters.
  • Structural integrity: whether headings, schema, and links follow proven ranking frameworks.
  • Brand consistency and clarity: tone and terminology that match the brand bible without losing readability.

Tracking these signals during creation turns optimization into a real-time discipline.

Teams can refine strategy based on measurable structure, not just traffic graphs weeks later.

That’s the essence of predictive SEO: understanding success before the SERP reflects it.

The Easy Way To Create High-Visibility Content For Modern SERPs

Top SEO teams are already using the Content Booster approach.

Market Brew’s Content Booster is one such example.

It embeds AI writing directly within a search engine simulation, using the same mechanics that evaluate pages to guide creation.

Writers begin by loading their Brand Bible, selecting a Template URL, and enabling reverse-engineered fan-out prompts.

Next, the internal and external linking strategy is defined, which uses a search engine model’s link scoring system, plus its entity-based text classifier as a guide to place the most valuable links possible.

This is bolstered by a “friends/foes” section that allows writers to define quoting / linking opportunities to friendly sites, and “foe” sites where external linking should be avoided.

The Content Booster then produces and evaluates a 7-stage content pipeline, each driven by thousands of AI agents.

The seven stages, what each does, and what you get:

  • Stage 0 (Brand Bible): Upload your brand assets and site; Market Brew learns your tone, voice, and banned terms. You get every piece written in your unique brand style.
  • Stage 1 (Opportunity & Strategy): Define your target keyword or prompt, tone, audience, and linking strategy. You get a strategic blueprint tied to real search intent.
  • Stage 2 (Brief & Structure): Creates an SEO-optimized outline using semantic clusters and entity graphs. You get a perfectly structured brief ready for generation.
  • Stage 3 (Draft Generation): AI produces content constrained by embeddings and brand parameters. You get a first draft aligned with ranking behavior, not just text patterns.
  • Stage 4 (Optimization & Alignment): Uses cosine similarity and Market Brew’s ranking model to score each section. You get data-driven tuning for maximum topical alignment.
  • Stage 5 (Internal Linking & Entity Enrichment): Adds schema markup, entity tags, and smart internal links. You get optimized crawl flow and contextual authority.
  • Stage 6 (Quality & Compliance): Checks grammar, plagiarism, accessibility, and brand voice. You get ready-to-publish content that meets editorial and SEO standards.

Editors can inspect or refine content at any stage, ensuring human direction without losing automation.

Instead of waiting months to measure results, teams see predictive metrics, such as fan-out coverage, audience/persona compliance, semantic similarity, link distribution, and embedding clusters, the moment a draft is generated.

This isn’t about outsourcing creativity.

It’s about giving SEO professionals the same visibility and control that search engineers already have.

Your Next Steps

If you teach your AI to think like your best strategist, sameness stops being a problem.

Every brand now has access to the same linguistic engine; the only differentiator is context.

The future of SEO belongs to those who blend human creativity with algorithmic understanding, who teach their models to think like search engines while sounding unmistakably human.

By anchoring AI in brand, structure, and competition, and by measuring predictive quality instead of reactive outcomes, SEOs can finally close the gap between what we publish and what algorithms reward.

“The era of AI sameness is already here. The brands that thrive will be the ones that teach their AI to sound human and think like a search engine.”

Ready to see how predictive SEO works in action?

Explore the free trial of Market Brew’s Light Brew system — where you can model how search engines interpret your content and test AI writing workflows before publishing.


Image Credits

Featured Image: Image by Market Brew. Used with permission.

Perplexity Bets $400M On Snapchat To Scale AI Search Adoption via @sejournal, @MattGSouthern

Perplexity will pay Snap $400 million to integrate its AI answer engine into Snapchat’s chat interface, with rollout starting next year.

  • Perplexity will pay Snap $400 million over one year to integrate its AI answer engine into Snapchat.
  • Snap calls this its first large-scale integration of an external AI partner directly in the app.
  • Perplexity handles 150+ million questions weekly, so the integration meaningfully expands distribution.
AI SEO: How To Understand AI Mode Rankings via @sejournal, @martinibuster

A simplified explanation of how Google ranks content is that it is based on understanding search queries and web pages, plus a number of external ranking signals. With AI Mode, that’s just the starting point for ranking websites. Even keywords are starting to go away, replaced by increasingly complex queries and even images. How do you optimize for that? The following are steps that can be taken to help answer that question.

Latent Questions Are A Profound Change To SEO

The word “latent” means something that exists but cannot be seen. When a user issues a complex query, the LLM must not only understand the query but also map out follow-up questions that a user might ask as part of an information journey about the topic. Those follow-up questions are the latent questions. Virtually every query contains latent questions.

Google’s Information Gain Patent

The issue of latent queries poses a new problem for SEO: How do you optimize for questions that are unknown? Optimizing for AI search means optimizing for the entire range of questions that are related to the initial or head query.

But even the concept of a head query is going away because users are now asking complex queries which demand complex answers. This is precisely why it may be useful for AI SEO purposes to optimize not just for one query but for the immediate information needs of the user.

How does Google understand the information need that’s hidden within a user’s query? The answer is found in Google’s Information Gain Patent. That patent describes ranking a web page that is relevant for a query and then ranking other web pages that contain different but related content.

Identify The Latent (Hidden) Questions

One way to look at AI search results is to break them down into the questions that the AI answers are satisfying, to identify the hidden query fan-out questions.

For example, if you ask Google’s AI Mode how to make pizza dough, AI Mode will generate a pizza dough recipe. The answer is synthesized from multiple queries.

There is a way to extract the hidden (latent) questions by using a Reverse Question Answering Prompt, which I’ll show below.

Here is an example of an initial query and the additional query fan-out questions that are involved:

Initial Query: How to make pizza dough?

Query Fan-Out Questions:

  • What ingredients are needed to make pizza dough?
  • How much pizza dough does this recipe make?
  • What indicates that the yeast is active?
  • How long should the dough be kneaded by hand?
  • How long should the dough be kneaded with a mixer?
  • How long should the dough rise during the first proofing?
  • How can you develop more complex flavor in the dough?
  • What should the oven temperature be for baking the pizza?
  • How long should the pizza bake?
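Once a fan-out list like the one above exists, you can run a rough first-pass audit of whether a draft touches each question at all. The function below is my own illustrative sketch using naive keyword overlap, not anything Google discloses; a production check would compare embeddings instead.

```python
def covers_question(draft: str, question: str, min_overlap: int = 2) -> bool:
    """True if the draft shares at least `min_overlap` content words with the question."""
    stopwords = {"what", "how", "the", "should", "be", "is", "are", "to",
                 "for", "by", "can", "you", "does", "this", "long", "much"}
    q_words = {w.strip("?.,").lower() for w in question.split()} - stopwords
    d_words = {w.strip("?.,").lower() for w in draft.split()} - stopwords
    return len(q_words & d_words) >= min_overlap

draft = "Knead the dough by hand for about 8 minutes, then let it rise for 90 minutes."
questions = [
    "How long should the dough be kneaded by hand?",
    "What should the oven temperature be for baking the pizza?",
]
for q in questions:
    print(q, covers_question(draft, q))
```

Here the draft covers the kneading question but not the oven-temperature one, flagging a gap to fill before publishing.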

Actual AI Mode Question And Answers

Here are some of the hidden query fan-out questions accompanied by the answers that Google’s AI Mode generated.

Question: What ingredients are needed to make pizza dough?
Answer: “1 and 1/3 cups warm water (around 105-110°F or 40-43°C); 1 tablespoon granulated sugar (or honey); 1 packet active dry or instant yeast (2 1/4 teaspoons); 3 1/2 to 4 cups all-purpose or bread flour, plus more for dusting; 2 tablespoons extra-virgin olive oil, plus more for greasing; 1 teaspoon salt.”

Question: How much pizza dough does this recipe make?
Answer: “This recipe makes enough dough for two 10-12 inch pizzas.”

Question: What indicates that the yeast is active?
Answer: “Cover and let it sit for about 5-10 minutes, until the mixture is foamy and bubbly. This indicates the yeast is active.”

Screenshot Of The Actual AI Mode Answer

Screenshot of AI Mode answer for query

Reverse Question Answering Prompt

You can use the Reverse Question Answering Prompt to identify the underlying questions in any AI Mode answer. You can even use a similar but more precise prompt to analyze your own content to identify what questions the document answers. It’s a good way to check if your content does or does not answer the questions you want it to answer.

Prompt To Extract Questions From AI Mode

Here is the prompt to use to extract the hidden questions within an AI Mode answer:

Analyze the document and extract a list of questions that are directly and completely answered by full sentences in the text. Only include questions if the document contains a full sentence or sentences that clearly answer it. Do not include any questions that are answered only partially, implicitly, or by inference.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.
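To run the prompt programmatically instead of pasting it into a chat window, you can wrap it around an AI Mode answer (or any document) as a system instruction. The sketch below only assembles the request; the message format follows common chat-completion APIs, and the commented-out client call is a placeholder you would adapt to your own LLM client.

```python
# Assemble the reverse-question-answering request. The prompt text is
# abridged here; use the full prompt from the article in practice.
REVERSE_QA_PROMPT = (
    "Analyze the document and extract a list of questions that are "
    "directly and completely answered by full sentences in the text. "
    "Only include questions if the document contains a full sentence or "
    "sentences that clearly answer it. Do not include any questions that "
    "are answered only partially, implicitly, or by inference. For each "
    "question, also include the exact sentences from the document that "
    "answer it."
)

def build_reverse_qa_request(document_text: str) -> list[dict]:
    """Return chat messages: the prompt as instructions, the document as input."""
    return [
        {"role": "system", "content": REVERSE_QA_PROMPT},
        {"role": "user", "content": f"Document:\n\n{document_text}"},
    ]

messages = build_reverse_qa_request(
    "Cover and let it sit for about 5-10 minutes, until the mixture is foamy."
)
# response = client.chat.completions.create(model=..., messages=messages)
```

Keeping the prompt as a system message and the document as a user message makes it easy to rerun the same extraction across many pages.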

Reverse Question Answering Analysis For Web Content

The previously described prompt can be used to extract the questions that are answered by your own or a competitor’s content. But it will not differentiate between the core search queries the document is relevant for and other questions that are ancillary to the main topic.

To do a Reverse Question Answering analysis with your own content, try this more precise variant of the prompt:

Analyze the document and extract a list of questions that are core to the document’s central topic and are directly and completely answered by full sentences in the text.

Only include questions if the document contains a full sentence or contiguous sentences that clearly answer it. Do not include any questions that are answered only partially, implicitly, or by inference. Crucially, exclude any questions about supporting anecdotes, personal asides, or general background information that is not the main subject of the document.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.

The above prompt is meant to emulate how an LLM or information retrieval system might extract the core questions that a web document answers, while ignoring the parts of the document that aren’t central to its informational purpose, such as tangential commentary that does not directly contribute to the document’s main topic.

Cultivate Being Mentioned On Other Sites

Something that is becoming increasingly apparent is that AI search tends to rank companies whose websites are recommended by other sites. Research by Ahrefs found a strong correlation between sites that appear in AI Overviews and branded mentions.

According to Ahrefs:

“So we looked at these factors that correlate with the amount of times a brand appears in AI overviews, tested tons of different things, and by far the strongest correlation, very, very strong correlation, almost 0.67, was branded web mentions.

So if your brand is mentioned in a ton of different places on the web, that correlates very highly with your brand being mentioned in lots of AI conversations as well.”

Read: Data Shows Brand Mentions Boost AI Search Rankings

This finding strongly suggests that visibility in AI search may depend less on backlinks and more on how often a brand is discussed across the web. AI models seem to learn which brands to recommend from how often those brands are mentioned across other sites, including platforms like Reddit.
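If you want to sanity-check a correlation like Ahrefs’ 0.67 against your own data, Pearson’s r is simple to compute. The brand numbers below are invented purely to illustrate the calculation; only the formula carries over to real data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented figures: branded web mentions vs. AI Overview appearances
# for six hypothetical brands.
mentions = [120, 340, 80, 560, 210, 430]
ai_overview_appearances = [14, 35, 6, 51, 25, 38]

print(round(pearson(mentions, ai_overview_appearances), 2))
```

A value near 1.0 means mentions and AI visibility rise together; correlation alone doesn’t prove mentions cause the rankings, which is why the article hedges with “seem to learn.”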

Post-Keyword Ranking Era

We are in a post-keyword ranking era. Google’s organic search was already using AI and a core topicality system to better understand queries and the topics that web pages were about. The big difference now is that Google’s AI Mode has enabled users to search with long, complex, conversational queries that aren’t necessarily answered by web pages focused on keyword relevance rather than on what people are actually looking for.

Write About Topics

Writing about topics seems like a straightforward approach, but what it means depends on the context of the topic.

What “topic writing” proposes is that instead of writing about the keyword Blue Widget, the writer must write about the topic of Blue Widget.

The old way of SEO was to think about Blue Widget and all the associated Blue Widget keyword phrases:

Associated keyword phrases

  • How to make blue widgets
  • Cheap blue widgets
  • Best blue widgets

Images And Videos

The up-to-date way to write is to think in terms of answers and helpfulness. For example, do the images on a travel site communicate what a destination is about? Will a reader linger on the photo? On a product site, do the images communicate useful information that will help a consumer determine if something will fit and what it might look like on them?

Images and videos, if they’re helpful and answer questions, could become increasingly important as users begin to search with images and increasingly expect to see more videos in the search results, both short and longform videos.


Featured Image by Shutterstock/Nithid

Google AI Mode Starts Rolling Out Agentic Booking In Labs via @sejournal, @MattGSouthern

Google is starting to roll out agentic capabilities in AI Mode as a Search Labs experiment.

This update enables AI Mode to find and book restaurant reservations, event tickets, and wellness appointments across multiple websites.

Availability is limited, and Google notes this experiment may not be available to everyone yet.

Robby Stein, VP of Product at Google Search, announced the rollout on X.

What’s New

AI Mode now performs multi-site searches for three booking categories and returns real-time options with a curated list of time slots or ticket prices.

Here’s what U.S. users see when visiting the landing page in Search Labs:

Screenshot from: labs.google.com/search/experiment/43, November 2025.

Restaurant Reservations

You can ask for party size, time, neighborhood, or cuisine.

Google’s example:

“find me a dinner reservation for 3 people this Friday after 6pm around Logan Square. craving ramen or bibimbap.”

Results include available times with links to book.

Event Tickets

Google AI Pro and Ultra subscribers can search for concert and event tickets with price and seating preferences, for example:

“find me 2 cheap tickets for the Shaboozey concert coming up. prefer standing floor tickets.”

Wellness Appointments

Also for Pro and Ultra subscribers, AI Mode can surface real-time availability from local service booking platforms and link you to complete the appointment.

How It Works

AI Mode searches across multiple websites to surface real-time availability, then presents a curated list. It links you directly to the provider’s booking page to finalize the reservation or purchase.

Requirements

Full functionality requires:

  • A personal Google Account you manage yourself
  • Web & App Activity turned on
  • Search Labs access
  • U.S. location and English language
  • Age 18 or older

These conditions are listed on the experiment page.

Why This Matters

If you handle bookings, AI Mode can reduce the steps it takes for people to compare times and prices across sites.

You still complete the transaction on the provider’s site, but discovery and comparison move into a single query.

Looking Ahead

Google calls this an early experiment that may make mistakes and invites feedback to improve quality.

Rollout is staged, so availability will expand over the coming days.


Featured Image: Koshiro K/Shutterstock

LLM Traffic Is Shrinking via @sejournal, @Kevin_Indig

LLM referral traffic has been growing +65% year-to-date. But we should assume 0 in the future.

LLM Referral Traffic Is Shrinking

LLM referral traffic in B2B grew +65.1% since January – but dropped -42.6% since July.

Image Credit: Kevin Indig

My December prediction of 50% organic by 2027 is dead:

  • In December 2024, I analyzed six B2B sites and found LLM referral traffic was growing at such a fast rate it would make up 50% of organic traffic in three years.
  • Today, I’m finding the monthly growth rate of LLM traffic dropped from 25.1% in 2024 to 10.4% in November 2025.
  • Even from January to July 2025, the average growth rate was lower (19.2%) than my projection. That’s fast, but not enough to reach 50% organic traffic in three years.
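The compounding arithmetic behind that conclusion is easy to verify. This back-of-envelope sketch uses only the monthly growth rates quoted above (it is my arithmetic, not Indig’s model) to show how sensitive a three-year projection is to the monthly rate:

```python
# Compound each observed monthly growth rate over 36 months to compare
# how far LLM referral traffic would travel from the same starting point.
months = 36
for label, monthly_growth in [("2024 rate", 0.251), ("Nov 2025 rate", 0.104)]:
    multiplier = (1 + monthly_growth) ** months
    print(f"{label}: traffic multiplies roughly {multiplier:,.0f}x over {months} months")
```

At the earlier rate, even a tiny starting share could plausibly reach half of organic traffic within three years; at the current rate, the multiplier is two orders of magnitude smaller, and it cannot.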

LLM contribution to organic traffic grew from 0.14% in 2024 to 1.10% in 2025, which is more than I projected (0.79%).

Image Credit: Kevin Indig

But with organic traffic falling due to AI Overviews, this growth becomes meaningless.

Fewer Citations Despite Growing Usage

In August, several factors influenced LLM referral traffic:

  1. Seasonality: Siege Media documented that B2B sites lost LLM traffic in August due to vacation season.
  2. Router: GPT-5, which launched on August 7, has a router that picks the model. The router favors non-reasoning models, which show fewer citations and send less traffic out.
  3. Concentration: Josh from Profound found a higher concentration of referrals to Reddit and Wikipedia starting late July.

Business seasonality has a lower impact because neither ChatGPT (consumer focus) nor Claude (business focus) sees a decrease in site visits.

Image Credit: Kevin Indig

ChatGPT mentions, however, dropped by one-third in October and continue dropping in November.

Image Credit: Kevin Indig

Citations for large domains like Reddit or Wikipedia follow suit (based on Profound data).

Major sites see citation declines in September (Image Credit: Kevin Indig)

Conclusion: LLM visits are up, which rules out seasonality as the dominant cause. The driver of lower referral traffic is ChatGPT, which shows fewer citations because of the model router.

Visibility Is The Real Prize

Traffic was never the right way to value LLMs because LLMs make clicks redundant:

  • The AI Mode study I published last month found that clicks occurred only for shopping-related tasks; for everything else, the zero-click share was ~100%.
  • Pew Research has found that only 1% of users click links in AI Overviews.

Focusing on traffic leads to disappointing results. ChatGPT is more like TikTok than Google Search. The currency of the AI world is visibility.

The good news: LLMs grow the pie. Semrush found people don’t use Google less often because they also use ChatGPT. If LLMs are additive to Google Search, the visibility surface grows even though clicks per source shrink. You have more places to be seen, fewer clicks per place.

But our success metrics need to change. Referral traffic works for neither ChatGPT nor Google, as AI Overviews and AI Mode swallow more clicks. Instead, we need to adopt a visibility-first approach.

Default To Zero LLM Traffic

  1. Track LLM and organic search seasonality for your vertical to measure the total pie of citations and make sense of drops/spikes.
  2. Monitor total citation and mention counts to answer the question, “Are we growing because the market grows?” Lower citations/mentions mean fewer chances to influence purchase decisions.
  3. Prioritize brand mentions over citations in LLMs. Mentions without links drive familiarity and influence purchase decisions.
  4. Stop expecting (meaningful) LLM referral traffic. Budget for visibility.
  5. Invest resources where LLMs go to train: UGC and third-party reviews like Reddit, YouTube, review sites, community forums.
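Points 1 and 2 above come down to counting mentions and linked citations per month. A minimal sketch of that bookkeeping, using an invented export format (the field names "month", "brand", and "cited" are assumptions, not from any specific tool):

```python
# Sketch: summarize brand mentions and linked citations per month from a
# hypothetical export of LLM answers. Row fields are illustrative assumptions.
from collections import defaultdict

def visibility_summary(rows):
    """rows: dicts with 'month', 'brand', and a boolean 'cited' (linked citation)."""
    mentions = defaultdict(int)   # all brand mentions, linked or not
    citations = defaultdict(int)  # only mentions with a link back
    for row in rows:
        mentions[row["month"]] += 1
        if row["cited"]:
            citations[row["month"]] += 1
    return {m: {"mentions": mentions[m], "citations": citations[m]} for m in mentions}

sample = [
    {"month": "2025-10", "brand": "ExampleCo", "cited": True},
    {"month": "2025-10", "brand": "ExampleCo", "cited": False},
    {"month": "2025-11", "brand": "ExampleCo", "cited": False},
]
print(visibility_summary(sample))
```

Tracking both series separately is what lets you see the pattern described earlier: mentions holding steady while linked citations fall.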



Featured Image: Paulo Bobita/Search Engine Journal

Ahrefs Data Shows Brand Mentions Boost AI Search Rankings via @sejournal, @martinibuster

The latest Ahrefs podcast shares data showing that brand mentions on third-party websites help improve visibility across AI search surfaces. Brand mentions correlate strongly with better rankings in AI search, indicating that we are firmly in a new era of off-page SEO.

Training Data Gets Cited

Tim Soulo, CMO of Ahrefs, said that off-page activity that increases being mentioned on other sites improves visibility in AI search results, both those based on training data and those drawing from live search results. The benefits of conducting off-page SEO apply to both. The only difference is that training data doesn’t get into LLMs right away.

Tim recommends identifying where your industry gets mentioned:

“You just need to see like where your competitors are mentioned, where you are mentioned, where your industry is mentioned.

And you have to get mentions there because then if the AI chatbot would do a search and find those pages and create their answer based on what they see on those pages, this is one thing.

But if some of the AI providers will decide to retrain their entire model on a more recent snapshot of the web, they will use essentially the same pages.”

Tim cautioned that AI companies don’t ingest new web data for training right away; there’s a lag of months before large language models receive fresh training data from the web.

Appear On Authoritative Websites

Although Tim did not mention specific tactics for obtaining brand mentions, in my opinion, off-page link-building strategies don’t have to change much to build brand mentions.

Tim underlined the importance of appearing on authoritative websites:

“So yeah, …essentially it’s not that you have to use different tactics for those things. You do the same thing, you appear like on credible websites, but yeah, let’s continue.”

The only thing that I would add is that authoritativeness in this situation means being a site that gets mentioned by AI search. The other thing to consider is whether a site is simply the go-to source for a particular kind of information, which is a question of relevance.

Topicality Of Brand Mentions

The other thing that was discussed is the topicality of the brand mentions, meaning the context in which the brand is discussed. Ryan Law, Ahrefs’ Director of Content Marketing, said that the context of the brand mention is important, and I agree. You can’t always control the narrative, but that’s where old-fashioned PR outreach comes in, where you can include quotes and so on to build the right context.

Law explained:

“Well, that segues very nicely to what I think is probably the most useful discrete tactic you can do, and that is building off-site mentions.

A big part of how LLMs understand what your brand is about and when it should recommend it and the context it should talk about you is based on where you appear in its training data and where you appear on the web.

  • What topics are you commonly mentioned alongside?
  • What other brands are you mentioned alongside?

I think Patrick Stox has been referring to this as the era of off-page SEO. In some ways, the content on your own site is not as valuable as the content about you on other pages on the web.”

Law mentioned that these off-page mentions don’t have to be in the form of links in order to be useful for ranking in AI search.

Testing Shows Brand Mentions Are Important

Law went on to say that their data shows brand mentions are important for ranking. He cited a correlation coefficient of 0.67, a measure of how strongly two variables are related.

Here is how the correlation coefficient scale reads:

  • 1.0 = perfect positive correlation (the two variables rise and fall together exactly).
  • 0.0 = no correlation.
  • –1.0 = perfect negative correlation (for example, the longer you drive toward a destination, the smaller the remaining distance).

So a correlation coefficient of 0.67 indicates a strong positive relationship between the two variables.
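For readers who want to see where a number like 0.67 comes from, Pearson’s r can be computed directly. The figures below are invented for illustration and are not Ahrefs’ data:

```python
# Sketch: computing a Pearson correlation coefficient between web mentions
# and AI-answer mentions. All numbers are illustrative, not Ahrefs' data.
import math

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of std deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

web_mentions = [120, 340, 560, 800, 950]  # a brand's mentions across the web
ai_mentions = [14, 30, 52, 70, 101]       # that brand's appearances in AI answers
print(round(pearson(web_mentions, ai_mentions), 2))
```

A value near 1.0 means the two series move together closely; Ahrefs’ reported 0.67 sits comfortably in “strong” territory without being lockstep.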

Law explained:

“And we did indeed test this with a bit of research.

So we looked at these factors that correlate with the amount of times a brand appears in AI overviews, tested tons of different things, and by far the strongest correlation, very, very strong correlation, almost 0.67, was branded web mentions.

So if your brand is mentioned in a ton of different places on the web, that correlates very highly with your brand being mentioned in lots of AI conversations as well.”

He goes on to recommend identifying industry domains that tend to get cited in AI search for your topics and trying to get mentioned on those websites.

Law also recommended getting mentions on user-generated content sites like Reddit and Quora, followed by review sites and YouTube video transcripts, because YouTube videos are highly cited by AI search.

Ahrefs Brand Radar Tool

Lastly, they discussed their Ahrefs tool called Brand Radar that’s useful for identifying domains that are frequently mentioned in AI search surfaces.

Law explained:

“And obviously, we have a tool that does exactly that. It actually helps you find the most commonly cited domains.  …if you put in whatever niche you’re interested in, you can see not only the top domains that get mentioned most often across all of the thousands, hundreds of thousands, millions of conversations we have indexed. You can also see the individual pages that get most commonly mentioned.

Obviously, if you can get your brand on those pages, yeah, immediately your AI visibility is going to shoot up in a pretty dramatic way.”

Citations Are The New Backlinks

Tim Soulo called citations the new backlinks for the AI search era and recommended their Brand Radar tool for identifying where to get mentions. In my opinion, getting a brand mentioned anywhere that’s relevant to your users or customers could also help rankings in regular search as well as AI search (Read: Google’s Branded Search Patent).

Watch the Ahrefs podcast starting at about the 6:30 minute mark:

How to Win in AI Search (Real Data, No Hype)

Report: Apple To Lean On Google Gemini For Siri Overhaul via @sejournal, @MattGSouthern

Apple is reportedly paying Google to build a custom Gemini AI model that will power a major Siri upgrade targeted for spring 2026, according to Bloomberg’s Mark Gurman.

The custom Gemini model is expected to run on Apple’s Private Cloud Compute infrastructure. Neither Apple nor Google has officially announced the partnership.

What’s Being Reported

Bloomberg reports Apple conducted an internal evaluation comparing AI models from Google and Anthropic for the next-generation Siri.

Google’s Gemini won based largely on financial terms. Bloomberg says Anthropic’s Claude would have cost Apple more than $1.5 billion annually.

According to the report, Google’s models will provide the query planner and summarizer components of Siri’s new architecture. Apple’s own Foundation Models would continue handling on-device personal data processing, with the Google-supplied models running on Apple’s servers.

The project carries the internal codename “Glenwood.”

Apple Won’t Acknowledge Google’s Role

Bloomberg reports Apple plans to market the updated Siri as Apple technology running on Apple servers through an Apple interface, without promoting Google’s involvement.

In practice, Gemini would operate behind the scenes while Apple positions the capabilities as its own work.

Launch Timeline

Bloomberg reports Apple is targeting spring 2026 for the Siri overhaul as part of iOS 26.4.

Earlier Bloomberg reporting also pointed to a smart home display device on a similar timeline that could showcase the assistant’s expanded capabilities.

What We Don’t Know Yet

Financial terms beyond the broad “paying Google” characterization are undisclosed.

Neither company has confirmed the partnership, and the legal and technical data-handling arrangements are not public. It’s also unclear whether the deal is finalized or still being negotiated.

Why This Matters

A Gemini-powered backend could change how Siri answers questions, and who gets credit in AI responses, even if the branding remains Apple-only.

If Bloomberg’s report holds, more answers will start and finish inside Siri and Spotlight on iPhone, which can reduce early web discovery.

The open questions are how sources will appear and whether traffic will be traceable.

Looking Ahead

Apple has already enabled ChatGPT access within Siri and Writing Tools as part of Apple Intelligence, and Anthropic says Claude is available in Xcode 26 for developers.

The potential Gemini partnership would be Apple’s most consequential AI arrangement to date because it would underpin core Siri functionality rather than optional features.

Watch for official details closer to the iOS 26.4 window.


Featured Image: Thrive Studios ID/Shutterstock

GEO Platform Shutdown Sparks Industry Debate Over AI Search via @sejournal, @MattGSouthern

Benjamin Houy shut down Lorelight, a generative engine optimization (GEO) platform designed to track brand visibility in ChatGPT, Claude, and Perplexity, after concluding most brands don’t need a specialized tool for AI search visibility.

Houy writes that, after reviewing hundreds of AI answers, the brands mentioned most often share familiar traits: quality content, mentions in authoritative publications, strong reputation, and genuine expertise.

He claims:

“There’s no such thing as ‘GEO strategy’ or ‘AI optimization’ separate from brand building… The AI models are trained on the same content that builds your brand everywhere else.”

Houy explains in a blog post that customers liked Lorelight’s insights but often churned because the data didn’t change their tactics. In his view, users pursued the same fundamentals with or without GEO dashboards.

He argues GEO tracking makes more sense as one signal inside broader SEO suites rather than as a standalone product. He points to examples of traditional SEO platforms incorporating AI-style visibility signals into existing toolsets rather than creating a separate category.

Debate Snapshot: Voices On Both Sides

Reactions show a genuine split in how marketers see “AI search.”

Some SEO professionals applauded the back-to-basics message. Others countered with cases where assistant referrals appear meaningful.

Here are some of the responses published so far:

  • Lily Ray: “Thank you for being honest and for sharing this publicly. The industry needs to hear this loud and clear.”
  • Randall Choh: “I beg to differ. It’s a growing metric… LLM searches usually have better search intents that lead to higher conversions.”
  • Karl McCarthy: “You’re right that quality content + authoritative mentions + reputation is what works… That’s not a tool. It’s a network.”
  • Nikki Pilkington raised consumer-fairness questions about shuttering a product and whether prior GEO-promotional content should be updated or removed.

These perspectives capture the industry tension. Some see AI search as a new performance channel worth measuring. Others see the same brand signals driving outcomes across SEO, PR, and now AI assistants.

How “AI Search Visibility” Is Being Measured

Because assistants work differently from web search, measurement is still uneven.

Assistants surface brands in two main ways: by citing and linking sources directly in answers, and by guiding people into familiar web results.

Referral tracking can come through direct links, copy-and-paste, or branded search follow-ups.

Attribution is messy because not all assistants pass clear referrers. Teams often combine UTM tagging on shared links with branded-search lift, direct-traffic spikes, and assisted-conversion reports to triangulate “LLM influence.”

That patchwork makes case studies persuasive but hard to generalize.
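The attribution patchwork described above usually starts with classifying each visit by referrer and tagged landing URL. A minimal sketch; the assistant hostnames and the `utm_source=llm` convention are assumptions (real referrer strings vary, and some assistants strip referrers entirely, so this undercounts):

```python
# Sketch: a rough classifier for triangulating LLM-driven visits.
# LLM_HOSTS and the "llm" utm_source value are illustrative assumptions.
from urllib.parse import urlparse, parse_qs

LLM_HOSTS = {"chat.openai.com", "chatgpt.com", "perplexity.ai", "claude.ai"}

def classify_visit(referrer: str, landing_url: str) -> str:
    host = urlparse(referrer).hostname or ""
    if any(host == h or host.endswith("." + h) for h in LLM_HOSTS):
        return "llm-referral"       # assistant passed a clear referrer
    params = parse_qs(urlparse(landing_url).query)
    if params.get("utm_source", [""])[0] == "llm":
        return "llm-tagged"         # a UTM-tagged link you shared was used
    if not referrer:
        return "direct"             # may hide copy-pasted links from assistants
    return "other"

print(classify_visit("https://chatgpt.com/", "https://example.com/page"))
```

Because the “direct” bucket silently absorbs copy-pasted assistant links, teams layer branded-search lift and direct-traffic spikes on top of this rather than trusting referrers alone.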

Why This Matters

The main question is whether AI search needs its own optimization framework or if it primarily benefits from the same brand signals.

If Houy is correct, standalone GEO tools might only produce engaging dashboards that seldom influence strategy.

On the other hand, if the advocates are correct, overlooking assistant visibility could mean missing out on profitable opportunities between traditional search and LLM-referred traffic.

What’s Next

It’s likely that SEO platforms will continue to fold “AI visibility” into existing analytics rather than creating a separate category.

The safest path for businesses is to continue doing the brand-building work that assistants already reward, while testing assistant-specific measurements where they are most likely to pay off.


Featured Image: Roman Samborskyi/Shutterstock

Can You Use AI To Write For YMYL Sites? (Read The Evidence Before You Do) via @sejournal, @MattGSouthern

Your Money or Your Life (YMYL) covers topics that affect people’s health, financial stability, safety, or general welfare, and Google rightly applies measurably stricter algorithmic standards to these topics.

AI writing tools might promise to scale content production, but writing for YMYL requires more care and author credibility than other content. Can an LLM write content that is acceptable for this niche?

The bottom line is that AI systems fail at YMYL content, offering bland sameness where unique expertise and authority matter most. In testing, AI produced at least one unsupported medical claim in nearly 50% of responses and hallucinated court holdings at least 75% of the time.

This article examines how Google enforces YMYL standards, shows evidence where AI fails, and why publishers relying on genuine expertise are positioning themselves for long-term success.

Google Treats YMYL Content With Algorithmic Scrutiny

Google’s Search Quality Rater Guidelines state that “for pages about clear YMYL topics, we have very high Page Quality rating standards” and these pages “require the most scrutiny.” The guidelines define YMYL as topics that “could significantly impact the health, financial stability, or safety of people.”

The algorithmic weight difference is documented. Google’s guidance states that for YMYL queries, the search engine gives “more weight in our ranking systems to factors like our understanding of the authoritativeness, expertise, or trustworthiness of the pages.”

The March 2024 core update demonstrated this differential treatment. Google announced expectations for a 40% reduction in low-quality content. YMYL websites in finance and healthcare were among the hardest hit.

The Quality Rater Guidelines create a two-tier system. Regular content can achieve “medium quality” with everyday expertise. YMYL content requires “extremely high” E-E-A-T levels. Content with inadequate E-E-A-T receives the “Lowest” designation, Google’s most severe quality judgment.

Given these heightened standards, AI-generated content faces a challenge in meeting them.

It might be an industry joke that the early hallucinations from ChatGPT advised people to eat stones, but it does highlight a very serious issue. Users depend on the quality of the results they read online, and not everyone is capable of deciphering fact from fiction.

AI Error Rates Make It Unsuitable For YMYL Topics

A Stanford HAI study from February 2024 tested GPT-4 with Retrieval-Augmented Generation (RAG).

Results: 30% of individual statements were unsupported, and nearly 50% of responses contained at least one unsupported statement. Google’s Gemini Pro produced fully supported responses only 10% of the time.

These aren’t minor discrepancies. GPT-4 RAG gave treatment instructions for the wrong type of medical equipment. That kind of error could harm patients during emergencies.

Money.com tested ChatGPT Search on 100 financial questions in November 2024. Only 65% of answers were correct; 29% were incomplete or misleading, and 6% were wrong.

The system sourced answers from less-reliable personal blogs, failed to mention rule changes, and didn’t discourage “timing the market.”

Stanford’s RegLab study testing over 200,000 legal queries found hallucination rates ranging from 69% to 88% for state-of-the-art models.

Models hallucinate at least 75% of the time on court holdings. The AI Hallucination Cases Database tracks 439 legal decisions where AI produced hallucinated content in court filings.

Men’s Journal published its first AI-generated health article in February 2023. Dr. Bradley Anawalt of University of Washington Medical Center identified 18 specific errors.

He described “persistent factual mistakes and mischaracterizations of medical science,” including equating different medical terms, claiming unsupported links between diet and symptoms, and providing unfounded health warnings.

The article was “flagrantly wrong about basic medical topics” while having “enough proximity to scientific evidence to have the ring of truth.” That combination is dangerous. People can’t spot the errors because they sound plausible.

But even when AI gets the facts right, it fails in a different way.

Google Prioritizes What AI Can’t Provide

In December 2022, Google added “Experience” as the first pillar of its evaluation framework, expanding E-A-T to E-E-A-T.

Google’s guidance now asks whether content “clearly demonstrate[s] first-hand expertise and a depth of knowledge (for example, expertise that comes from having used a product or service, or visiting a place).”

This question directly targets AI’s limitations. AI can produce technically accurate content that reads like a medical textbook or legal reference. What it can’t produce is practitioner insight. The kind that comes from treating patients daily or representing defendants in court.

The difference shows in the content. AI might be able to give you a definition of temporomandibular joint disorder (TMJ). A specialist who treats TMJ patients can demonstrate expertise by answering real questions people ask.

What does recovery look like? What mistakes do patients commonly make? When should you see a specialist versus your general dentist? That’s the “Experience” in E-E-A-T, a demonstrated understanding of real-world scenarios and patient needs.

Google’s content quality questions explicitly reward this. The company encourages you to ask “Does the content provide original information, reporting, research, or analysis?” and “Does the content provide insightful analysis or interesting information that is beyond the obvious?”

The search company warns against “mainly summarizing what others have to say without adding much value.” That’s precisely how large language models function.

This lack of originality creates another problem. When everyone uses the same tools, content becomes indistinguishable.

AI’s Design Guarantees Content Homogenization

UCLA research documents what researchers term a “death spiral of homogenization.” AI systems default toward population-scale mean preferences because LLMs predict the most statistically probable next word.

Oxford and Cambridge researchers demonstrated this in Nature. When they trained an AI model on different dog breeds, the system increasingly produced only common breeds, eventually resulting in “model collapse.”

A Science Advances study found that “generative AI enhances individual creativity but reduces the collective diversity of novel content.” Writers are individually better off, but collectively produce a narrower scope of content.

For YMYL topics where differentiation and unique expertise provide competitive advantage, this convergence is damaging. If three financial advisors use ChatGPT to generate investment guidance on the same topic, their content will be remarkably similar. That offers no reason for Google or users to prefer one over another.

Google’s March 2024 update focused on “scaled content abuse” and “generic/undifferentiated content” that repeats widely available information without new insights.

So, how does Google determine whether content truly comes from the expert whose name appears on it?

How Google Verifies Author Expertise

Google doesn’t just look at content in isolation. The search engine builds connections in its knowledge graph to verify that authors have the expertise they claim.

For established experts, this verification is robust. Medical professionals with publications on Google Scholar, attorneys with bar registrations, financial advisors with FINRA records all have verifiable digital footprints. Google can connect an author’s name to their credentials, publications, speaking engagements, and professional affiliations.

This creates patterns Google can recognize. Your writing style, terminology choices, sentence structure, and topic focus form a signature. When content published under your name deviates from that pattern, it raises questions about authenticity.

Building genuine authority requires consistency, so it helps to reference past work and demonstrate ongoing engagement with your field. Link author bylines to detailed bio pages. Include credentials, jurisdictions, areas of specialization, and links to verifiable professional profiles (state medical boards, bar associations, academic institutions).
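The article doesn’t prescribe a markup format for those byline-to-credential links, but one common way to make them machine-readable is schema.org Person JSON-LD on the bio page. A minimal sketch with an invented author and invented URLs:

```python
# Sketch: generating minimal schema.org Person JSON-LD for an author bio page.
# The author name, title, and all URLs are invented for illustration.
import json

author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Example",
    "jobTitle": "Cardiologist",
    "url": "https://example.com/authors/jane-example",
    # "sameAs" links the byline to verifiable professional profiles.
    "sameAs": [
        "https://scholar.google.com/citations?user=EXAMPLE",
        "https://www.example-medical-board.org/verify/jane-example",
    ],
}

print(json.dumps(author, indent=2))
```

The `sameAs` links are the part doing the work here: they point the byline at the verifiable footprints (medical boards, bar associations, academic profiles) the paragraph above describes.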

Most importantly, have experts write or thoroughly review content published under their names. Not just fact-checking, but ensuring the voice, perspective, and insights reflect their expertise.

The reason these verification systems matter goes beyond rankings.

The Real-World Stakes Of YMYL Misinformation

A 2019 University of Baltimore study calculated that misinformation costs the global economy $78 billion annually. Deepfake financial fraud affected 50% of businesses in 2024, with an average loss of $450,000 per incident.

The stakes differ from other content types. Non-YMYL errors cause user inconvenience. YMYL errors cause injury, financial mistakes, and erosion of institutional trust.

U.S. federal law prescribes up to 5 years in prison for spreading false information that causes harm, 20 years if someone suffers severe bodily injury, and life imprisonment if someone dies as a result. Between 2011 and 2022, 78 countries passed misinformation laws.

Validation matters more for YMYL because consequences cascade and compound.

Medical decisions delayed by misinformation can worsen conditions beyond recovery. Poor investment choices create lasting economic hardship. Wrong legal advice can result in loss of rights. These outcomes are irreversible.

Understanding these stakes helps explain what readers are looking for when they search YMYL topics.

What Readers Want From YMYL Content

People don’t open YMYL content to read textbook definitions they could find on Wikipedia. They want to connect with practitioners who understand their situation.

They want to know what questions other patients ask. What typically works. What to expect during treatment. What red flags to watch for. These insights come from years of practice, not from training data.

Readers can tell when content comes from genuine experience versus when it’s been assembled from other articles. When a doctor says “the most common mistake I see patients make is…” that carries weight AI-generated advice can’t match.

The authenticity matters for trust. In YMYL topics where people make decisions affecting their health, finances, or legal standing, they need confidence that guidance comes from someone who has navigated these situations before.

This understanding of what readers want should inform your strategy.

The Strategic Choice

Organizations producing YMYL content face a decision. Invest in genuine expertise and unique perspectives, or risk algorithmic penalties and reputational damage.

The addition of “Experience” to E-A-T in 2022 targeted AI’s inability to have first-hand experience. The Helpful Content Update penalized “summarizing what others have to say without adding much value,” an exact description of LLM functionality.

When Google enforces stricter YMYL standards and documented AI error rates range from 30% to 88%, the risks outweigh the benefits.

Experts don’t need AI to write their content. They need help organizing their knowledge, structuring their insights, and making their expertise accessible. That’s a different role than generating content itself.

Looking Ahead

The value in YMYL content comes from knowledge that can’t be scraped from existing sources.

It comes from the surgeon who knows what questions patients ask before every procedure. The financial advisor who has guided clients through recessions. The attorney who has seen which arguments work in front of which judges.

The publishers who treat YMYL content as a volume game, whether through AI or human content farms, are facing a difficult path. The ones who treat it as a credibility signal have a sustainable model.

You can use AI as a tool in your process. You can’t use it as a replacement for human expertise.



Featured Image: Roman Samborskyi/Shutterstock