Timeline Of ChatGPT Updates & Key Events via @sejournal, @theshelleywalsh

At the end of 2022, OpenAI launched ChatGPT and opened up an easy-to-access interface to large language models (LLMs) for the first time. The uptake was stratospheric.

Since the explosive launch, ChatGPT has shown no signs of slowing down, continually shipping new features and sustaining worldwide user interest. As of September 2025, ChatGPT has a reported 700 million weekly active users.

The following is a timeline of all key events since the launch up to October 2025.

History Of ChatGPT: A Timeline Of Developments

June 16, 2016 – OpenAI published research on generative models, trained by collecting a vast amount of data in a specific domain, such as images, sentences, or sounds, and then teaching the model to generate similar data. (OpenAI)

Sept. 19, 2019 – OpenAI published research on fine-tuning the GPT-2 language model with human preferences and feedback. (OpenAI)

Jan. 27, 2022 – OpenAI published research on InstructGPT models, siblings of ChatGPT, that show improved instruction-following ability, reduced fabrication of facts, and decreased toxic output. (OpenAI)

Nov. 30, 2022 – OpenAI introduced ChatGPT using GPT-3.5 as a part of a free research preview. (OpenAI)

Screenshot from ChatGPT, December 2022

Feb. 1, 2023 – OpenAI announced ChatGPT Plus, a premium subscription option for ChatGPT users offering less downtime and access to new features.

Screenshot from ChatGPT, February 2023

Feb. 2, 2023 – ChatGPT reached 100 million users faster than TikTok, which made the milestone in nine months, and Instagram, which made it in two and a half years. (Reuters)

Feb. 7, 2023 – Microsoft announced ChatGPT-powered features were coming to Bing.

Feb. 22, 2023 – Microsoft released AI-powered Bing chat for preview on mobile.

March 1, 2023 – OpenAI introduced the ChatGPT API for developers to integrate ChatGPT functionality into their applications. Early adopters included Snapchat’s My AI, Quizlet Q-Chat, Instacart, and Shop by Shopify.

March 14, 2023 – OpenAI released GPT-4 in ChatGPT and Bing, promising better reliability, creativity, and problem-solving skills.

Screenshot from ChatGPT, March 2023

March 14, 2023 – Anthropic launched Claude, its ChatGPT alternative.

March 20, 2023 – A major ChatGPT outage affected all users for several hours.

March 21, 2023 – Google launched Bard, its ChatGPT alternative. (Rebranded to Gemini in February 2024.)

March 23, 2023 – OpenAI began rolling out ChatGPT plugin support, including Browsing and Code Interpreter.

March 31, 2023 – Italy’s data protection authority banned ChatGPT, citing the collection of personal data and the lack of age verification for a system that can produce harmful content.

April 25, 2023 – OpenAI added new ChatGPT data controls that allow users to choose which conversations OpenAI includes in training data for future GPT models.

April 28, 2023 – The Italian Garante released a statement that OpenAI met its demands and that the ChatGPT service could resume in Italy.

April 29, 2023 – OpenAI released ChatGPT plugins, GPT-3.5 with browsing, and GPT-4 with browsing in alpha.

Screenshot from ChatGPT, April 2023

May 12, 2023 – ChatGPT Plus users gained access to over 200 ChatGPT plugins. (OpenAI)

Screenshot from ChatGPT, May 2023

May 16, 2023 – OpenAI CEO Sam Altman appeared at a Senate subcommittee hearing on the oversight of AI, where he discussed the need for AI regulation that doesn’t slow innovation.

May 18, 2023 – OpenAI launched the ChatGPT iOS app, allowing users to access GPT-3.5 for free. ChatGPT Plus users can switch between GPT-3.5 and GPT-4.

Screenshot from ChatGPT, May 2023

May 23, 2023 – Microsoft announced that Bing would power ChatGPT web browsing.

Screenshot from ChatGPT, May 2023

May 24, 2023 – Pew Research Center released data from a ChatGPT usage survey showing that only 59% of American adults knew about ChatGPT, and only 14% had tried it.

May 25, 2023 – OpenAI, Inc. launched a program to award ten $100,000 grants to researchers to develop a democratic system for determining AI rules. (OpenAI)

July 3, 2023 – ChatGPT traffic declined for the first time since launch, the first sign of its explosive growth cooling. (Similarweb)

July 20, 2023 – OpenAI introduced custom instructions for ChatGPT, allowing users to personalize their interaction experience. (OpenAI)

Aug. 28, 2023 – OpenAI launched ChatGPT Enterprise, calling it “the most powerful version of ChatGPT yet.” Benefits included enterprise-level security and unlimited usage of GPT-4. (OpenAI)

Nov. 6, 2023 – OpenAI announced the arrival of custom GPTs, which enabled users to build their own custom GPT versions using specific skills, knowledge, etc. (OpenAI)

Jan. 10, 2024 – With the launch of the GPT Store, ChatGPT users could discover and use other people’s custom GPTs. On this day, OpenAI also introduced ChatGPT Team, a collaborative tool for the workspace. (OpenAI)

Jan. 25, 2024 – OpenAI released new embedding models: the smaller text-embedding-3-small and the larger, more powerful text-embedding-3-large. (OpenAI)

Feb. 8, 2024 – Google’s Bard rebranded to Gemini. (Google – Gemini release notes)

April 9, 2024 – OpenAI announced that it would discontinue ChatGPT plugins in favor of custom GPTs. (OpenAI Community Forum)

May 13, 2024 – A big day for OpenAI, when the company introduced the GPT-4o model, offering enhanced intelligence and additional features for free users. (OpenAI)

July 25, 2024 – OpenAI launched SearchGPT, an AI-powered search prototype designed to answer user queries with direct answers. Update: Elements from this prototype were rolled into ChatGPT and made available to all regions on Feb. 5, 2025. (OpenAI)

Aug. 29, 2024 – ChatGPT reached 200 million weekly active users. (Reuters)

Sept. 12, 2024 – OpenAI unveiled the o1 model, which it claims “can reason like a human.”

Oct. 31, 2024 – OpenAI announced ChatGPT Search. It became available to logged-in users starting Dec. 16, 2024, and on Feb. 5, 2025, it rolled out to all users in every region where ChatGPT is available. (OpenAI)

Screenshot from ChatGPT, September 2025

Jan. 31, 2025 – OpenAI released o3-mini, a smaller reasoning model and the first in the o3 family. (OpenAI)

April 16, 2025 – OpenAI introduced o3 and o4-mini, fast, cost-efficient reasoning models with strong AIME performance. (OpenAI)

June 10, 2025 – o3-pro was made available to Pro users in both ChatGPT and the API. (OpenAI)

Aug. 4, 2025 – ChatGPT approached 700 million weekly active users.

Screenshot from an X (Twitter) post by Nick Turley, VP and Head of the ChatGPT app, September 2025

Sept. 15, 2025 – A new OpenAI study confirmed that ChatGPT had reached 700 million weekly active users and detailed how people use it. (OpenAI)

Last update: October 01, 2025




Featured image: Tada Images/Shutterstock

2026: When AI Assistants Become The First Layer via @sejournal, @DuaneForrester

What I’m about to say will feel uncomfortable to a lot of SEOs, and maybe even some CEOs. I’m not writing this to be sensational, and I know some of my peers will still look sideways at me for it. That’s fine. I’m sharing what the data suggests to me, and I want you to look at the same numbers and decide for yourself.

Too many people in our industry have slipped into the habit of quoting whatever guidance comes out of a search engine or AI vendor as if it were gospel. That’s like a soda company telling you, “Our drink is refreshing, you should drink more.” Maybe it really is refreshing. Maybe it just drives their margins. Either way, you’re letting the seller define what’s “best.”

SEO used to be a discipline that verified everything. We tested. We dug as deep as we could. We demanded evidence. Lately, I see less of that. This article is a call-back to that mindset. The changes coming in 2026 are not hype. They’re visible in the adoption curves, and those curves don’t care if we believe them or not. These curves aren’t about what I say, what you say, or what 40 other “SEO experts” say. These curves are about consumers, habits, and our combined future.

ChatGPT is reaching mass adoption in 4 years. Google took 9. Tech adoption is accelerating.

The Shocking Ramp: Google Vs. ChatGPT

Confession: I nearly called this section things like “Ramp-ocalypse 2026” or “The Adoption Curve That Will Melt Your Rank-Tracking Dashboard.” I had a whole list of ridiculous options that would have looked at home on a crypto shill blog. I finally dialed it back to the calmer “The Shocking Ramp: Google Vs. ChatGPT” because that, at least, sounds like something an adult would publish. But you get the idea: The curve really is that dramatic, but I just refuse to dress it up like a doomsday tabloid headline.

Image Credit: Duane Forrester

And before we really get into the details, let’s be clear that this is not comparing totals of daily active users today. This is a look at time-to-mass-adoption. Google achieved that a long time ago, whereas ChatGPT is going to do that, it seems, in 2026. This is about the vector. The ramp, and the speed. It’s about how consumer behavior is changing, and is about to be changed. That’s what the chart represents. Of course, when we reference ChatGPT-Class Assistants, we’re including Gemini here, so Google is front and center as these changes happen.

And Google’s pivot into this space isn’t accidental. If you believe Google was reacting to OpenAI’s appearance and sudden growth, guess again. Both companies have essentially been neck and neck in a thoroughbred horse race to become humanity’s leading next-gen information-parsing layer since day one. ChatGPT may have grabbed the headlines when it launched, but Google very quickly became its equal, and the gap at the top that these companies are chasing is vanishing quickly. Consumers soon won’t be able to say which is “the best” in any meaningful way.

What’s most important here is that as consumers adopt, behavior changes. I cannot recommend enough that folks read Charles Duhigg’s “The Power of Habit.” I first read it over a decade ago, and it still brings home the impact that a single moment of habit formation has on a product’s success and growth. And that is what the chart above is speaking to. New habits are about to be formed by consumers globally.

Let’s rewind to the search revolution most of us built our careers on.

  • Google launched in 1998.
  • By late 1999, it was handling about 3.5 million searches per day (Market.us, September 1999 data).
  • By 2001, Google crossed roughly 100 million searches a day (The Guardian, 2001).
  • It didn’t pass 50% U.S. market share until 2007, about nine years after launch (Los Angeles Times, August 2007).

Now compare that to the modern AI assistant curve:

  • ChatGPT launched in November 2022.
  • It reached 100 million monthly active users in just two months (UBS analysis via Reuters, February 2023).
  • According to OpenAI’s usage study published Sept. 15, 2025, in the NBER working-paper series, by July 2025, ChatGPT had ~700 million users sending ~18 billion messages per week, or about 10% of the world’s adults.
  • Barclays Research projects ChatGPT-class assistants will reach ~1 billion daily active users by 2026 (Barclays note, December 2024).

In other words: Google took ~9 years to reach its mass-adoption threshold. ChatGPT is on pace to do it in ~4.

That slope is a wake-up call.

Four converging forces explain why 2026 is the inflection year:

  1. Consumer scale: Barclays’ projection of 1 billion daily active users by 2026 means assistants are no longer a novelty; they’re a mainstream habit (Barclays).
  2. Enterprise distribution: Gartner forecasts that about 40% of enterprise applications will ship with task-doing AI agents by 2026. Assistants will appear inside the software your customers already use at work (Gartner Hype Cycle report cited by CIO&Leader, August 2025).
  3. Infrastructure rails: Citi projects ≈ $490 billion in AI-related capital spending in 2026, building the GPUs and data-center footprint that drop latency and per-interaction cost (Citi Research note summarized by Reuters, September 2025).
  4. Capability step-change: Sam Altman has described 2026 as a “turning-point year” when models start “figuring out novel insights” and by 2027, become reliable task-doing agents (Sam Altman blog, June 2025). And yes, this is the soda salesman telling us what’s right here, but still, you get the point, I hope.

This isn’t a calendar-day switch-flip. It’s the slope of a curve that gets steep enough that, by late 2026, most consumers will encounter an assistant every day, often without realizing it.

What Mass Adoption Feels Like For Consumers

If the projections hold, the assistant experience by late 2026 will feel less like opening a separate chatbot app and more like ambient computing:

  • Everywhere-by-default: built into your phone’s OS, browser sidebars, TVs, cars, banking, and retail apps.
  • From Q&A to “do-for-me”: booking travel, filling forms, disputing charges, summarizing calls, even running small projects end-to-end.
  • Cheaper and faster: thanks to the $490 billion infrastructure build-out, response times drop and the habit loop tightens.

Consumers won’t think of themselves as “using an AI chatbot.” They’ll just be getting things done, and that subtle shift is where the search industry’s challenge begins. And when 1 billion daily users prefer assistants for [specific high-value queries your audience cares about], that’s not just a UX shift, it’s a revenue channel migration that will impact your work.

The SEO & Visibility Reckoning

Mass adoption of assistants doesn’t kill search; it moves it upstream.

When the first answer or action happens inside an assistant, our old SERP tactics start to lose leverage. Three shifts matter most:

1. Zero-Click Surfaces Intensify

Assistants answer in the chat window, the sidebar, the voice interface. Fewer users click through to the page that supplied the answer.

2. Chunk Retrievability Outranks Page Rank

Assistants lift the clearest, most verifiable chunks, not necessarily the highest-ranked page. OpenAI’s usage paper shows that three-quarters of consumer interactions already focus on practical guidance, information, and writing help (NBER working paper, September 2025). That means assistants favor well-structured task-led sections over generic blog posts. Instead of optimizing “Best Project Management Software 2026” as a 3,000-word listicle, for example, you need “How to set up automated task dependencies” as a 200-word chunk with a code sample and schema markup.
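
Concretely, such a chunk might look like the HTML sketch below. This is an illustration only: the anchor ID, steps, author, and date are invented, not a prescribed format.

<!-- Hypothetical task-led chunk: one claim, self-contained, stably anchored -->
<section id="automated-task-dependencies">
  <h2>How to set up automated task dependencies</h2>
  <p>Linking two tasks with a finish-to-start dependency makes the second task
     start automatically when the first one completes.</p>
  <ol>
    <li>Open the project board and select the first task.</li>
    <li>Choose "Add dependency" and set the second task as the successor.</li>
    <li>Enable auto-scheduling so date changes cascade to successors.</li>
  </ol>
  <p>Last updated: 2026-01-15. Author: Jane Doe, Product Docs Team.</p>
</section>

The point is that an assistant can lift this section whole: the heading states the task, the first sentence is the quotable claim, and the anchor gives it a stable citation target.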

3. Machine-Validated Authority Wins

Systems prefer sources they can quote, timestamp, and verify: schema-rich pages, canonical PDFs/HTML with stable anchors, authorship credentials, inline citations.

The consumer adoption numbers grab headlines, but the enterprise shift may hit harder and faster.

When Gartner forecasts that 40% of workplace applications will ship with embedded agents by 2026, that’s not about adding a chatbot to your product; it’s about your buyer’s daily tools becoming information gatekeepers.

Picture this: A procurement manager asks their Salesforce agent, “What’s the best solution for automated compliance reporting?” The agent surfaces an answer by pulling from its training data, your competitor’s well-structured API documentation, and a case study PDF it can easily parse. Your marketing site with its video hero sections and gated whitepapers never enters the equation.

This isn’t hypothetical. Microsoft 365 Copilot, Salesforce Einstein, SAP Joule, these aren’t research tools. They’re decision environments. If your product docs, integration guides, and technical specifications aren’t structured for machine retrieval, you’re invisible at the moment of consideration.

The enterprise buying journey is moving upstream to the data layer before buyers ever land on your domain. Your visibility strategy needs to meet them there.

A 2026-Ready Approach For SEOs And Brands

Preparing for this shift isn’t about chasing a new algorithm update. It’s about becoming assistant-ready:

  1. Restructure content into assistant-grade chunks: 150-300-word sections with a clear claim > supporting evidence > inline citation, plus stable anchors so the assistant can quote cleanly.
  2. Tighten provenance and trust signals: rich schema (FAQ, HowTo, TechArticle, Product), canonical HTML + PDF versions, explicit authorship and last-updated stamps (see the sketch after this list).
  3. Mirror canonical chunks in your help center, product manuals, and developer docs to meet the assistants where they crawl.
  4. Expose APIs, sample data, and working examples so agents can act on your info, not just read it.
  5. Track attribution inside assistants to watch for brand or domain citations across ChatGPT, Gemini, Perplexity, etc., then double down on the content that’s already surfacing.
  6. Get used to new tools that can help you surface new metrics and monitor areas your original tools aren’t focused on. (SERPRecon, Rankbee, Profound, Waikay, ZipTie.dev, etc.)
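
To make points 1 and 2 concrete, here is a minimal JSON-LD sketch for a chunk like the one shown earlier. The headline, author, date, and URL are placeholders, not a required schema:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "How to set up automated task dependencies",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "dateModified": "2026-01-15",
  "mainEntityOfPage": "https://example.com/docs/task-dependencies#automated-task-dependencies"
}
</script>

Pairing the markup’s dateModified and author with the visible last-updated stamp and byline is what makes the provenance machine-verifiable rather than decorative.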

Back To Verification

The mass-adoption moment in 2026 won’t erase SEO, but it will change what it means to be discoverable.

We can keep taking guidance at face value from the platforms that profit when we follow it, or we can go back to questioning why advice is given and testing what the machines actually retrieve before we trust it. We used to have to learn, and we seem to have slipped into easy-button mode over the last 20 years.

Search is moving upstream to the data layer. If you want to stay visible when assistants become the first touch-point, start adapting now, because this time the curve isn’t giving you nine years to catch up.



This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Microsoft Explains How To Optimize Content For AI Search Visibility via @sejournal, @MattGSouthern

Microsoft has shared guidance on structuring content to increase its likelihood of being selected for AI-generated answers across Bing-powered surfaces.

Much of the advice reiterates established SEO and UX practices such as clear titles and headings, structured layout, and appropriate schema.

The new emphasis is on how content is selected for answers. Microsoft stresses there is “no secret sauce” that guarantees selection, but says structure, clarity, and “snippability” improve eligibility.

As Microsoft puts it:

“In traditional search, visibility meant appearing in a ranked list of links. In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”

Key Differences In AI Search

AI assistants break down pages into manageable parts, carefully assessing each for authority and relevance, then craft responses by blending information from multiple sources.

Microsoft says fundamentals such as crawlability, metadata, internal links, and backlinks still matter, but they are the starting point. Selection increasingly depends on how well-structured and clear each section is.

Best Practices Microsoft Recommends

To help improve the chances of AI selecting your content, Microsoft recommends these best practices:

  • Align the title, meta description, and H1 to clearly communicate the page purpose.
  • Use descriptive H2/H3 headings that each cover one idea per section.
  • Write self-contained Q&A blocks and concise paragraphs that can be quoted on their own.
  • Use short lists, steps, and comparison tables when they improve clarity (without overusing them).
  • Add JSON-LD schema that matches the page type (an illustrative example follows this list).
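
Microsoft’s guidance stops short of sample markup, but as an illustration of the last item (the question, answer, and page type here are invented), a self-contained Q&A block could be paired with FAQPage JSON-LD like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does the app work offline?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. Changes sync automatically once a connection is restored."
    }
  }]
}
</script>

Note how the Question/Answer pair mirrors the visible on-page Q&A block, which is exactly the alignment between markup and visible content that Microsoft emphasizes.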

What To Avoid

Microsoft recommends avoiding these practices to improve the chances of your content appearing in AI search results:

  • Writing long walls of text that blur ideas together.
  • Hiding key content in tabs, accordions, or other elements that may not render.
  • Relying on PDFs for core information.
  • Putting important information only in images without alt text or HTML alternatives.
  • Making vague claims without providing specific details.
  • Overusing decorative symbols or long punctuation strings; keep punctuation simple.

Why This Matters

The key takeaway is that structure helps selection. When your titles, headings, and schema are aligned, Copilot and other Bing-powered tools can extract a complete idea from your page.

This connects traditional SEO principles to how AI assistants generate responses. For marketers, it’s more of an operational checklist than a new strategy.

Looking Ahead

Microsoft acknowledges there’s no guaranteed way to ensure inclusion in AI responses, but suggests that these practices can make content more accessible for its AI systems.


Featured Image: gguy/Shutterstock

What Our AI Mode User Behavior Study Reveals About The Future Of Search via @sejournal, @Kevin_Indig

Our new usability study of 37 participants across seven specific search tasks clearly shows that people:

  1. Read AI Mode
  2. Rarely click out, and
  3. Only leave when they are ready to transact.

From what we know, there isn’t another independent usability study that has explored AI Mode to this depth.

In May, I published an extensive two-part study of AI Overviews (AIOs) with Amanda, Eric Van Buskirk, and his team. Eric and I also collaborated on Propellic’s travel industry AI Mode study.

We worked together again to bring you this week’s Growth Memo: a study that provides crucial insights and validation into the behaviors of people as they interact with Google’s AI Mode.

Since neither Google nor OpenAI (or anyone else) provides user data for their AI (Search) products, we’re filling a crucial gap.

We captured screen recordings and think-aloud sessions via remote study. The 250 unique tasks collected provide a robust data set for our analysis. (The complete methodology is provided at the end of this memo, including details about the seven search tasks.)

And you might be surprised by some of the findings. We were.

This is a longer post, so grab a drink and settle in.

Image Credit: Kevin Indig

Executive Summary

Our new usability study of Google’s AI Mode reveals how profoundly this feature changes user behavior.

  • AI Mode holds attention and keeps users inside. In roughly three-quarters of the total user sessions, users never left the AI Mode pane – and 88% of users’ first interactions were with the AI-generated text. Engagement was high: The median time by task type was roughly 52-77 seconds.
  • Clicks are rare and mostly transactional. The median number of external clicks per task was zero. Yep. You read that right. Ze-ro. And 77.6% of sessions had zero external visits.
  • People skim but still make decisions in AI Mode. Over half of the tasks were classified as “skimmed quickly,” where users glance at the AI‑generated summary, form an opinion, and move on.
  • AI Mode delivers “site types” that match intent. It’s not just about meeting search query or prompt intents; AI Mode is citing sources that fit specific site categories (like marketplaces vs review sites vs brands).
  • Visibility, not traffic, is the emerging currency. Participants made their brand judgments directly from AI Mode outputs.

TL;DR? These are the core findings from this study:

  • AI Mode is sticky.
  • Clicks are reserved for transactions.
  • AI Mode matches site type with intent.
  • Product previews act like mini product detail pages (aka PDPs).

But before we dig in, a quick shout-out here to the team behind this study.

Together with Eric Van Buskirk’s team at Clickstream Solutions, I conducted the first broad usability study of Google’s AI Mode that uncovers not only crucial insights into how people interact with the hybrid search/AI chat engine, but also what kinds of branded sites AI Mode surfaces and when.

I want to highlight that Eric Van Buskirk was the research director. While we collaborated closely on shaping the research questions, areas of focus, and methodology, Eric managed the team, oversaw the study execution, and delivered the findings. Afterward, we worked side by side to interpret the data.

Click data is a great first pass for analysis on what’s happening in AI Mode, but with this usability study specifically, we essentially looked “over the shoulder” of real-life users as they completed tasks, which resulted in a robust collection of data to pull insights from.

Our testing platform was UXtweak.


Google’s own Sundar Pichai has been crystal clear: AI Mode isn’t a toy; it’s a proving ground for what the core search experience will look like in the future.

On the Lex Fridman podcast, Pichai said (bolding mine):

“Our current plan is AI Mode is going to be there as a separate tab for people who really want to experience that… But as features work, we’ll keep migrating it to the main page…”

Google has argued these new AI-focused features are designed to point users to the web, but in practice, our data shows that users stick around and make decisions without clicking out. In theory, this could not only impact click-outs to organic results and citations, but also reduce external clicks to ads.

In August, I explored the reality behind Google’s own product cannibalization with AI Mode and AIOs:

Right now, according to Similarweb data, usage of the AI Mode tab on Google.com in the US has slightly dipped and now sits at just over 1%.

Google AIOs are now seen by more than 1.5 billion searchers every month, and they sit front and center. But engagement is falling. Users are spending less time on Google and clicking fewer pages.

But as Google rolls AI Mode out more broadly, it brings the biggest shift to Search (the biggest customer acquisition channel there is) ever.

Traditional SEO is highly effective in the new AI world, but if AI Mode really becomes the default, there is a chance we need to rethink our arsenal of tactics.

Preparing for the future of search means treating AI Mode as the destination (not the doorway), and figuring out how to show up there in ways that actually matter to real user behavior.

With this study, I set out to discover and validate actual user behaviors within the AI Mode experience when undertaking a variety of tasks with differing search intents.

1. AI Mode Is Sticky

Image Credit: Kevin Indig

Key Stats

People read first and usually stay inside the AI Mode experience. Here’s what we found:

  • The majority of sessions had zero external visits: meaning, they didn’t leave AI Mode (at all).
  • ~88% of users’ first interaction* within the feature was with the AI Mode text.
  • Typical user engagement within AI Mode is roughly 50 to 80 seconds per task.

These three stats define the AI Mode search surface: It holds attention and resolves many tasks without sending traffic.

*Here’s what I mean by “interaction:”

  • An “interaction” within the user tasks = the participant meaningfully engaged with AI Mode after it loaded.
  • What counts as an interaction: Reading or scrolling the AI Mode body for more than a quick glance, including scanning a result block like the Shopping Pack or Right Pane, opening a merchant card, clicking an inline link, link icon, or image pack.
  • What doesn’t count as an interaction: Brief eye flicks, cursor passes, or hesitation before engaging.

Users are in AI Mode to read – not necessarily to browse or search – with ~88% of sessions interacting with the output’s text first and spending one minute or more within the AI Mode experience.

Plus, it’s interesting to see that users spend more than double the time in AI Mode compared to AIOs.

The overall engagement is much stronger.

Image Credit: Kevin Indig

Why It Matters

Treat the AI Mode panel like the primary reading surface, not a teaser for blue links.

AI Mode is a contained experience where sending clicks to websites is a low priority and giving users the best answer is the highest one.

As a result, it completely changes the value chain for content creators, companies, and publishers.

Insight

Why do other sources and/or AI Mode research analyses say that users don’t return to the AI Mode feature very often?

My theory here is that, because AI Mode is a separate search experience (at least, for now), it’s not as visible as AIOs.

As AI Mode adoption increases with Google bringing Gemini (and AI Mode) into the browser, I expect our study findings to scale.

2. Clicks Are Reserved For Transactions

While clicks are scarce, purchase intent is not.

Participants in the study only clicked out when the task demanded it (e.g., “put an item in your shopping cart”) or if they browsed around a bit.

However, the browsing clicks were so few that we can safely assume AI Mode only leads to click-outs when users want to purchase.

Even prompts with a comparison and informational intent tend to keep users inside the feature.

  • Shopping prompts like [canvas bag] and [tidy desk cables] drive the highest AI Mode exit share.
  • Comparison prompts like [Oura vs Apple Watch] show the lowest exit share of the tasks.

When participants were encouraged to take action (“put an item in your shopping cart” or “find a product”), the majority of clicks went to shopping features like Shopping Packs or Merchant Cards.

Image Credit: Kevin Indig

18% of exits were caused by users exiting AI Mode and going directly to another site, making it much harder to reverse engineer what drove these visits in the first place.

Study transcripts confirm that participants often share out loud that they’ll “go to the seller’s page,” or “find the product on Amazon/ebay” for product searches.

Even when comparing products, whether software or physical goods, users barely click out.

Image Credit: Kevin Indig

In plain terms, AI Mode eats up all TOFU (top-of-funnel) and MOFU (middle-of-funnel) clicks. Users discover products and form opinions about them in AI Mode.

Key Stats

  • Out of 250 valid tasks, the median number of external clicks was zero!
  • The prompt task of [canvas bag] had 44 external clicks, and [tidy desk cables] had 31 clicks, accounting for two-thirds of all external clicks in this study.
  • Comparison tasks like [Oura Ring vs Apple Watch] or [Ramp vs Brex] had very few clicks (≤6 total across all tasks).

Here’s what’s interesting…

In the AI Overviews usability study, we found desktop users click out ~10.6% of the time compared to practically 0% in AI Mode.

However, AIOs have organic search results and SERP Features below them. (People click out less in AIOs, but they click on organic results and SERP features more often.)

Zero-Clicks

  • AI Overviews: 93%*
  • AI Mode: ~100%

*Keep in mind that participants of the AIO usability study clicked on regular organic search results. The 93% relates to zero clicks within the AI Overview.

On desktop, AI Mode produces roughly double the in-panel clickouts compared to the AIO panel. On AIO SERPs, total clickouts can still happen via organic results below the panel, so the page-level rate will sit between the AIO-panel figure and the classic baseline.

An important note here from Eric Van Buskirk, the director of this study: When comparing the AI Mode and AI Overview studies, we’re not exactly comparing apples to apples. In this study, participants were given tasks that would prompt them to leave AI Mode in two of the seven questions, and that accounts for the majority of outbound clicks (which were fewer than three external clicks). On the other hand, for the AIO study, the most transactional question was “Find a portable charger for phones under $15. Search as you typically would.” They were not told to “put it in a shopping cart.” However, the insights gathered regarding user behavior from this AI Mode study – and the pattern that users don’t feel the need to click out of AI Mode to make additional decisions – still stand as solid findings.

The bigger picture here is that AIOs are like a fact sheet that steers users to sites eventually, but AI Mode is a closed experience that rarely has users clicking out.

What makes AI Mode (and ChatGPT, by the way) tricky is when users abandon the experience and go directly to websites. It messes with attribution models and our ability to understand what influences conversions.

3. AI Mode Matches Site Type With Intent

In the study, we assessed what types of sites AI Mode showed for our seven tasks.

The types are:

  • Brands: Sellers/vendors.
  • Marketplaces: amazon.com, ebay.com, walmart.com, homedepot.com, bestbuy.com, target.com, rei.com.
  • Review sites: nerdwallet.com, pcmag.com, zdnet.com, nymag.com, usatoday.com, businessinsider.com.
  • Publishers: nytimes.com, nbcnews.com, youtube.com, thespruce.com.
  • Platform: Google.

Image Credit: Kevin Indig

Shopping prompts route to product pages:

  • Canvas Bag: 93% of exits go to Brand + Marketplace.
  • Tidy desk cables: 68% go to Brand + Marketplace, with a visible Publisher slice.

Comparisons route to reviews:

  • Ramp vs Brex: 83% Review.
  • Oura vs Apple Watch: split 50% Brand and 50% Marketplace.

When the user has to perform a reputation check, the result is split brand and publishers:

  • Liquid Death: 56% Brand, 44% Publisher.

Google itself shows up on shopping tasks:

  • Store lookups to business.google.com appear on Canvas Bag (7%) and Tidy desk cables (11%).

Check out the top-clicked domains by task:

  • Canvas Bag: llbean.com, ebay.com, rticoutdoors.com, business.google.com.
  • Tidy desk cables: walmart.com, amazon.com, homedepot.com.
  • Subscription language apps vs free: pcmag.com, nytimes.com, usatoday.com.
  • Bottled Water (Liquid Death): reddit.com, liquiddeath.com, youtube.com.
  • Ramp vs Brex: nerdwallet.com, kruzeconsulting.com, airwallex.com.
  • Oura Ring 3 vs Apple Watch 9: ouraring.com, zdnet.com.
  • VR arcade or smart home: sandboxvr.com, business.google.com, yodobashi.com.

Companies need to understand the playing field. While classic SEO allowed basically any site to be visible for any user intent, AI Mode has strict rules:

  • Brands beat marketplaces when users know what product they want.
  • Marketplaces are preferred when options are broad or generic.
  • Review sites appear for comparisons.
  • Opinions highlight Reddit and publishers.
  • Google itself is most visible for local intent, and sometimes shopping.

As SEOs, we need to consider how Google classifies our site based on its page templates, reputation, and user engagement. But most importantly, we need to monitor prompts in AI Mode and look at the site mix to understand where we can play.

Sites can’t and won’t be visible for all types of queries in a topic anymore; you’ll need to filter your strategy by the intent that aligns with your site type because AI Mode only shows certain sites (like review sites or brands) for specific types of intent.

Product previews show up in about 25% of the AI Mode sessions, get ~9 seconds of attention, and people usually open only one.

Then? 45% stop there. Many opens are quick spec checks, not a clickout.

Image Credit: Kevin Indig

You can easily see how some of AI Mode’s product recommendations, and the on-site experiences behind them, can be quite frustrating for users.

The post-click experience is critical: classic best practices like reviews have a big impact on making the most out of the few clicks we still get.

See this example:

“It looks like it has a lot of positive reviews. That’s one thing I would look at if I was going to buy this bag. So this would be the one I would choose.”

In shopping tasks, we found that brand sites take the majority of exits.

In comparison tasks, we discovered that review sites dominate. For reputation checks (like a prompt for [Liquid Death]), exits to brands and publishers were split.

  • For transactional intent prompts: Brands absorb most exits when the task is to buy one item now. [Canvas Bag] shows a strong tilt to brand PDPs.
  • For reputation intent prompts: Brand sites appear alongside publishers. A prompt for [Liquid Death] splits between liquiddeath.com and Reddit/YouTube/Eater.
  • For comparison prompts: Brands take a back seat. [Ramp vs Brex] exits go mostly to review sites like NerdWallet and Kruze.

Given users can now check out directly in ChatGPT and AI Mode, shopping-related tasks might send even fewer clicks out.

Therefore, AI Mode becomes a completely closed experience where even shopping intent is fulfilled right in the app.

Clicks are scarce. Influence is plentiful.

The data gives us a reality check: If users continue to adopt the new way of Googling, AI Mode will reshape search behavior in ways SEOs can’t afford to ignore.

  • Strategy shifts from “get the click” to “earn the citation.”
  • Comparisons are for trust, not traffic. They reduce exits because users feel informed inside the panel.
  • Merchants should optimize for decisive exits. Give prices, availability, and proof above the fold to convert the few exits you do get.

You’ll need to earn citations that answer the task, then win the few, high-intent exits that remain.

But our study doesn’t end here.

Today’s results reveal core insights into how people interact with AI Mode. We’ll unpack more to consider with Part 2 dropping next week.

But for those who love to dig into details, the methodology of the study is included below.

Methodology

Study Design And Objective

We conducted a mixed-methods usability study to quantify how Google’s new AI Mode changes searcher behavior. Each participant completed seven live Google search prompts via the AI Mode feature. This design allows us to observe both the mechanics of interaction (scrolls, clicks, dwell, trust) and the qualitative reasoning participants voiced while completing tasks.

The tasks:

  1. What do people say about Liquid Death, the beverage company? Do their drinks appeal to you?
  2. Imagine you’re going to buy a sleep tracker and the only two available are the Oura Ring 3 or the Apple Watch 9. Which would you choose, and why?
  3. You’re getting insights about the perks of a Ramp credit card vs. a Brex Card for small businesses. Which one seems better? What would make a business switch from another card: fee detail, eligibility fine print, or rewards?
  4. In the “Ask anything” box in AI Mode, enter “Help me purchase a waterproof canvas bag.” Select one that best fits your needs and you would buy (for example, a camera bag, tote bag, duffel bag, etc.).
    • Proceed to the seller’s page. Click to add to the shopping cart and complete this task without going further.
  5. Compare subscription language apps to free language apps. Would you pay, and in what situation? Which product would you choose?
  6. Suppose you are visiting a friend in a large city and want to go to either: 1. A virtual reality arcade OR 2. A smart home showroom. What’s the name of the city you’re visiting?
  7. Suppose you work at a small desk and your cables are a mess. In the “Ask anything” box in AI Mode, enter: “The device cables are cluttering up my desk space. What can I buy today to help?” Then choose the one product you think would be the best solution. Put it in the shopping cart on the external website and end this task.

Thirty-seven English-speaking U.S. adults were recruited via Prolific between Aug. 20 and Sept. 1, 2025 (including participants in a small group who did pilot studies).*

Eligibility required a ≥ 95% Prolific approval rate, a Chromium-based browser, and a functioning microphone. Participants visited AI Mode and performed tasks remotely via their desktop computer; invalid sessions were excluded for technical failure or non-compliance. The final dataset contains over 250 valid task records across 37 participants.

*Pilot studies are conducted first in remote usability testing to identify and fix technical issues – like screen-sharing, task setup, or recording problems – before the main study begins. They help refine task wording, timing, and instructions to ensure participants interpret them correctly. Most importantly, pilot sessions confirm that the data collected will actually answer the research questions and that the methodology works smoothly in a real-world remote setting.

Sessions ran in UXtweak’s Remote unmoderated mode. Participants read a task prompt, clicked to Google.com/aimode, prompted AI Mode, and spoke their thoughts aloud while interacting with AI Mode. They were given the following directions: “Think aloud and briefly explain what draws your attention as you review the information. Speak aloud and hover your mouse to indicate where you find the information you are looking for.” Each participant completed seven task types designed to cover diverse intent categories, including comparison, transactional, and informational scenarios.

UXtweak recorded full-screen video, cursor paths, scroll events, and audio. Sessions averaged 20-25 minutes. Incentives were competitive. Raw recordings, transcripts, and event logs were exported for coding and analysis.

Three trained coders reviewed each video in parallel. A row was logged for UI elements that held attention for ~5 seconds or longer. Variables captured included:

  • Structural: Fields describing the setup, metadata, or structure of the study – not user behavior; include data like participant-ID, task-ID, device, query, order of UI elements clicked or visited during the task, type of site clicked (e.g., social, community, brand, platform), domain name of the external site visited, and more.
  • Feature: Fields describing UI elements or interface components that appeared or were available to the participant. Examples include shopping carousels, merchant cards, the right panel, link icons, map embeds, local packs, GMB cards, and merchant packs.
  • Engagement: Fields that capture active user interaction, attention, or time investment. Includes reading and attention, chat and question behavior, along with click and interaction behavior.
  • Outcome: Fields representing user results, annotator evaluations, or interpretation of behavior. Annotator comments, effort rating, where info was found.

Coders also marked qualitative themes (e.g., “speed,” “skepticism,” “trust in citations”) to support RAG-based retrieval. The research director spot-checked ~10% of videos to validate consistency.

Annotations were exported to Python/pandas 2.2. Placeholder codes (‘999=Not Applicable’, ‘998=Not Observable’) were removed, and categorical variables (e.g., appearances, clicks, sentiment) were normalized. Dwell times and other time metrics were trimmed for extreme outliers. After cleaning, ~250 valid task-level rows remained.

Our retrieval-augmented generation (RAG) pipeline enabled three stages of analysis:

  • Data readiness (ingestion): We flattened every participant’s seven tasks into individual rows, cleaned coded values, and standardized time, click, and other metrics. Transcripts were retained so that structured data (such as dwell time) could be associated with what users actually said. Goal: create a clean, unified dataset that connects behavior with reasoning.
  • Relevance filtering (retrieval): We used structured fields and annotations to isolate patterns, such as users who left AI Mode, clicked a merchant card, or showed hesitation. We then searched the transcripts for themes such as trust, convenience, or frustration. Goal: combine behavior and sentiment to reveal real user intent.
  • Interpretation (quant + qual synthesis): For each group, we calculated descriptive stats (dwell, clicks, trust) and paired them with transcript evidence. That’s how we surfaced insights like: “external-site tasks showed higher satisfaction but more CTA confusion.” Goal: link what people did with what they felt inside AI Mode.

This pipeline allowed us to query the dataset hyperspecifically – e.g., “all participants who scrolled >50% in AI Mode but expressed distrust” – and link quantitative outcomes with qualitative reasoning.

In plain terms: We can pull up just the right group of participants or moments, like “all the people who didn’t trust AIO” or “everyone who scrolled more than 50%.”

We summarized user behavior using descriptive and inferential statistics across 250 valid task records. Each metric included the count, mean, median, standard deviation, standard error, and 95% confidence interval. Categorical outcomes, such as whether participants left AI Mode or clicked a merchant card, were reported as proportions.

Analyses covered more than 50 structured and behavioral fields – from device type and dwell time to UI interactions and sentiment. Confidence measures were derived from a JSON-based analysis of user sentiment across all transcripts.

Each task was annotated by a trained coder and spot-checked for consistency across annotators. Coder-level distributions were compared to confirm stable labeling patterns and internal consistency.

Thirty-seven participants completed seven tasks each, resulting in approximately 250 valid tasks. At that scale, proportions around 50% carry a margin of error of about six percentage points, giving the dataset enough precision to detect meaningful directional differences.

The sample size is smaller than our AI Overviews study (37 vs. 69 participants), and the study is scoped to U.S.-based users (all participants were living in the U.S.). All queries took place within AI Mode, meaning we did not directly compare AI vs. non-AI conditions. Think-aloud may inflate dwell times slightly. RAG-driven coding is only as strong as its annotation inputs, though heavy spot-checks confirmed reliability.

Participants gave informed consent. Recordings were encrypted and anonymized; no personally identifying data were retained. The study conforms to Prolific’s ethics policy and UXtweak TOS.


Featured Image: Paulo Bobita/Search Engine Journal

Are AI Tools Eliminating Jobs? Yale Study Says No via @sejournal, @MattGSouthern

Marketing professionals rank among the most vulnerable to AI disruption, with Indeed recently placing marketing fourth for AI exposure.

But employment data tells a different story.

New research from Yale University’s Budget Lab finds “the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago,” undercutting fears of economy-wide job losses.

The gap between predicted risk and actual impact suggests “exposure” scores may not predict job displacement.

Yale notes the two measures it analyzes, OpenAI’s exposure metric and Anthropic’s usage, capture different things and correlate only weakly in practice.

Exposure Scores Don’t Match Reality

Yale researchers examined how the occupational mix changed since November 2022, comparing it to past tech shifts like computers and the early internet.

The occupational mix measures the distribution of workers across different jobs. It changes when workers switch careers, lose jobs, or enter new fields.

Jobs are changing only about one percentage point faster than during early internet adoption, according to the research:

“The recent changes appear to be on a path only about 1 percentage point higher than it was at the turn of the 21st century with the adoption of the internet.”

Sectors with high AI exposure, including Information, Financial Activities, and Professional and Business Services, show larger shifts, but “the data again suggests that the trends within these industries started before the release of ChatGPT.”

Theory vs. Practice: The Usage Gap

The research compares OpenAI’s theoretical “exposure” data with Anthropic’s real usage from Claude and finds limited alignment.

Actual usage is concentrated: “It is clear that the usage is heavily dominated by workers in Computer and Mathematical occupations,” with Arts/Design/Media also overrepresented. This illustrates why exposure scores don’t map neatly to adoption.

Employment Data Shows Stability

The team tracked unemployed workers by duration to look for signs of AI displacement. They didn’t find them.

Unemployed workers, regardless of duration, “were in occupations where about 25 to 35 percent of tasks, on average, could be performed by generative AI,” with “no clear upward trend.”

Similarly, when looking at occupation-level AI “automation/augmentation” usage, the authors summarize that these measures “show no sign of being related to changes in employment or unemployment.”

Historical Disruption Timeline

Past disruptions took years, not months. As Yale puts it:

“Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows.”

The researchers also stress their work is not predictive and will be updated monthly:

“Our analysis is not predictive of the future. We plan to continue monitoring these trends monthly to assess how AI’s job impacts might change.”

What This Means

A measured approach beats panic. Both Indeed and Yale emphasize that realized outcomes depend on adoption, workflow design, and reskilling, not raw exposure alone.

Early-career effects are worth watching: Yale notes “nascent evidence” of possible impacts for early-career workers, but cautions that data are limited and conclusions are premature.

Looking Ahead

Organizations should integrate AI deliberately rather than restructure reactively.

Until comprehensive, cross-platform usage data are available, employment trends remain the most reliable indicator. So far, they point to stability over transformation.

OpenAI Launches Apps In ChatGPT & Releases Apps SDK via @sejournal, @MattGSouthern

OpenAI has launched a new app ecosystem within ChatGPT, along with a preview of the Apps SDK, enabling developers to create conversational, interactive applications based on the Model Context Protocol.

These apps are now accessible to all logged-in ChatGPT users outside the European Union, across Free, Go, Plus, and Pro plans.

Early partners include Booking.com, Canva, Coursera, Expedia, Figma, Spotify, and Zillow.

How ChatGPT Apps Work

Apps naturally integrate into conversation, and you can activate them by name, such as saying, “Spotify, make a playlist for my party this Friday.”

When using an app for the first time, ChatGPT prompts you to connect and clarifies what data might be shared. For example, OpenAI demonstrates ChatGPT suggesting the Zillow app during a home-buying discussion, allowing you to browse listings on an interactive map without leaving the chat.

John Weisberg, Head of AI at Zillow, said:

“The Zillow app in ChatGPT shows the power of AI to make real estate feel more human. Together with OpenAI, we’re bringing a first-of-its-kind experience to millions — a conversational guide that makes finding a home faster, easier, and more intuitive.”

Developer Opportunities & Reach

OpenAI positions the Apps SDK as a way to “reach over 800 million ChatGPT users at just the right time.”

The SDK is open source and built on MCP, allowing developers to create their own chat logic and custom interfaces. You can also connect to your own backends for login and premium features, and easily test everything through Developer Mode in ChatGPT.

OpenAI has provided detailed documentation, design guidelines, and example apps to support developers.

Submission & Monetization

Developers can begin building immediately. OpenAI has announced that formal app submissions, reviews, and publication will commence later this year, along with a directory for browsing and searching apps.

Additionally, the company plans to disclose monetization details, including support for the Agentic Commerce Protocol, which enables instant checkout within ChatGPT.

Safety & Privacy

All apps must follow OpenAI’s policies, be audience-appropriate, and have clear third-party rules. Developers should provide privacy policies, collect only necessary data, and be transparent about permissions.

OpenAI’s draft guidelines also require apps to be purposeful, avoid misleading designs, and manage errors effectively. Submissions must demonstrate stability, responsiveness, and low latency; apps that crash or hang will be rejected.

Rollout & Availability

Today’s rollout does not include EU users, but OpenAI has announced plans to introduce these apps to that region soon.

Additionally, eleven more partner apps are scheduled for release later this year. OpenAI also intends to expand app availability to ChatGPT Business, Enterprise, and Education plans.

Looking Ahead

Apps that appear within AI-led conversations could transform the way services are found and accessed.

Instead of relying on traditional rankings or app-store positions, visibility might be driven more by conversational relevance and demonstrated value within the chat.

Teams responsible for app functionality should think about how users will naturally request these services and identify the key moments when ChatGPT is likely to recommend them.

Google’s AI Mode: What We Know & What Experts Think via @sejournal, @martinibuster

AI Mode is Google’s most powerful AI search experience, providing answers to complex questions in a way that anticipates the user’s information needs. Although Google says that nothing special needs to be done to rank in AI Mode, the reality is that SEO only makes pages eligible to appear.

The following facts, insights, and examples demystify AI Mode and offer a clear perspective on how pages are ranked and why.

What Is AI Mode?

Google’s AI Mode was introduced on March 5, 2025, as an experiment in Google Labs, then swiftly rolled out as a live Google search surface on May 20. AI Mode is described as its most cutting-edge search experience, combining advanced reasoning with multimodality. Multimodality means content beyond text data, such as images and video content.

AI Mode is a significant evolution of Google Search that encourages users to research topics. This presents benefits and changes to how search works:

  • The benefit is that Google is citing a greater variety of websites per query.
  • The change is that websites are being cited for multiple queries, beginning with the initial query plus follow-up queries.

Those two factors present challenges to SEO. For example, do you optimize for the initial query, or what can be considered a more granular follow-up query? Most SEOs may consider optimizing for both.

Query Fan-Out

Similar to AI Overviews, AI Mode uses what Google calls a query fan-out technique, which divides the initial search query into subtopics that anticipate further information the user may need.

Query fan-out anticipates the user’s information journey. So, if they ask question A, Google’s AI Mode will show answers to follow-up questions about B, C, and D.

For example, if you ask, “What is a mechanical keyboard?” Google answers the following questions:

  1. What is a mechanical keyboard?
  2. What are mechanical switches?
  3. What happens when a key is pressed on a mechanical keyboard?
  4. What are keycaps and what materials are they made from?
  5. What is the role of the printed circuit board (PCB)?
  6. How are mechanical switches categorized?

The following screenshot of the AI Mode search result shows the questions (in red) positioned next to the answers, illustrating how query fan-out generates related questions and creates answers for them.

Screenshot of query fan-out in AI Mode, September 2025

How I Extracted Latent Questions From AI Mode Search Results

The way I extracted the questions that query fan-out is answering was by doing an inverse knowledge search, also known as reverse QA.

I copied the output from AI Mode into a document, then uploaded it to ChatGPT with the following prompt:

Read the document and extract a list of questions that are directly and completely answered by full sentences in the text. Only include questions if the document contains a full sentence that clearly answers it. Do not include any questions that are answered only partially, implicitly, or by inference.

Try that with AI Mode to get a better understanding of the underlying questions it generates with query fan-out. This will help clarify what is happening and make it less mysterious.

Content With Depth

Google’s advice to publishers who want to rank in AI Mode is to encourage them to create content that engages users who are conducting in-depth queries:

“…users are asking longer and more specific questions – as well as follow-up questions to dig even deeper.”

That may not mean creating giant articles with depth. It just means focusing on the content that users are looking for. That approach to content is subtly different from chasing keyword inventory.

Google recommends:

  • Focus on unique, valuable content for people.
  • Provide a great page experience.
  • Ensure we can access your content.
  • Manage visibility with preview controls. (Make use of nosnippet, data-nosnippet, max-snippet, or noindex to set your display preferences; see the sketch after this list.)
  • Make sure structured data matches the visible content.
  • Go beyond text for multimodal success.
  • Understand the full value of your visits.
  • Evolve with your users.
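For the preview-controls recommendation, the same directives that work as robots meta tags can also be sent as an X-Robots-Tag HTTP header. Here is a minimal sketch using Flask; the route, markup, and the max-snippet value are illustrative assumptions, not recommended limits.

    # Sketch: serving Google's documented snippet controls as an HTTP header.
    # The route and the max-snippet value are illustrative assumptions.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/guide")
    def guide():
        resp = make_response("<h1>Mechanical keyboards, explained</h1>")
        # Same vocabulary as the robots meta tag: nosnippet, max-snippet, noindex.
        # Sections to exclude would be marked in the HTML with data-nosnippet.
        resp.headers["X-Robots-Tag"] = "max-snippet:160"
        return resp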

The last two recommendations require further clarification:

Understand The Full Value Of Your Visits

This encourages publishers to focus on meeting the information needs of the user, noting that focusing too hard on the “click” comes at the expense of providing what an “engaged” audience is looking for.

Evolve With Your Users

Google frames this as evolving along with how users are searching. A more pragmatic view is to evolve with how Google is showing results to users.

What Experts Say About Content Structure For AI Mode

Duane Forrester, formerly of Bing Search, advises that content needs to be structured differently for AI search.

He advises:

“…the search pipeline has changed. You don’t need to rank – you need to be retrieved, fused, and reasoned over by GenAI systems.”

In his article titled “Search Without A Webpage,” he expands on the idea that content must be able to serve as the basis of an answer:

“…your content doesn’t have to rank. It has to be retrieved, understood, and assembled into an answer.”

He also says that content needs to be:

“…structured, interpretable, and available when it’s time to answer.

This is the new search stack. Not built on links, pages, or rankings – but on vectors, embeddings, ranking fusion, and LLMs that reason instead of rank.”

When Duane says that content needs to be structured, he’s referring to on-page structure that communicates not just the hierarchy of information but also offers a clean delineation of what each section of content is about.

In my opinion:

  • Paragraphs should consist of sentences that build to an idea, with a clear payoff at the end.
  • If a sentence doesn’t have a purpose within the paragraph, it’s probably better to remove it.
  • If a paragraph doesn’t have a clear purpose, get rid of it.
  • If a group of paragraphs is out of place near the end of the document, move it closer to the beginning if that’s where it belongs.
  • The entire document should have a clear beginning, middle, and end, with each section serving as “the basis of an answer.”

Itai Sadan, CEO of Duda, recommends:

“Use clear, specific language: LLMs rely on clarity first and foremost, so avoid using too many pronouns or any other vague, undefined references.

Organize your content predictably: Break your content up into sections and use headings, like H2 and H3, to organize the unique ideas central to your article’s thesis.”

Mordy Oberstein, founder of Unify Marketing, explains how attribution came to dominate the average digital marketer’s thinking:

“What resonates with the person hasn’t fundamentally changed, and I don’t think we’ve realized that. I think we’ve forgotten. I think we’ve completely forgotten what resonance is as digital marketers because of the advent of two things with the internet:

  1. Attribution
  2. The ability to track responses

Businesses were seemingly OK with digital marketers doing whatever it took to get that traffic, to get that conversion, because that’s just the Internet, so everyone just goes along.

Now, with AI Mode, attribution no longer exists in the same way.”

Mordy’s right about attribution. AI Mode visits cannot be tracked in Google Analytics 4 or Google Search Console; they’re lumped into the Web Search bucket, so there is no way to tell where the traffic is coming from or to distinguish it from regular organic search.

The attribution question is a big issue for digital marketers. Michael Bonfils of Digital International Group recently discussed the issue of attribution from the perspective of zero-click searches.

Bonfils says:

“But the organic side, there is an area … that is zero click. So zero click is for those audience members who don’t know what that means, zero click means when you are having a conversation with AI, for example, I’m trying to compare two different running shoes and I’m having this, ‘what’s going to be better for me?’

I’m having a conversation with AI and AI is pooling and referencing … whatever winning schema formats and content that are out there … but it’s zero click. It’s not going to your site. It’s not going there. So without this data that really affects … organic content strategy.”

And that dovetails with what Mordy is getting at: SEOs are conditioned to view internet marketing through the “attribution” lens, but we may be entering a kind of post-attribution period, much as marketing largely was pre-internet. The old strategies of building awareness and popularity are back in; they were always good strategies, it’s just that digital marketers tended to engage more with attribution.

Mordy shares the example of someone researching a brand of sneakers, who asks a chatbot about it, then goes to Amazon to see what it looks like and what people are saying about it, then watches video reviews on YouTube, and then goes to AI Mode to review the specs. After all that research, the consumer might return to Amazon and then head over to Google Shopping to compare prices.

He concludes with the insight that resonating with users has always been important, and that very little has changed in terms of consumers conducting research prior to making a purchase:

“That was all happening before. But now the perception is that it’s happening because of LLMs. I don’t think things have fundamentally changed.”

I think that the key insight here is that the research is still happening exactly as before, but what’s changed is that the opportunities to expose your business or products have expanded to multimodal search surfaces, especially with AI Mode.

The screenshot below shows how Nike is taking charge of the conversation on AI Mode with both text and video content.

Screenshot of citations and videos in AI Mode, September 2025

Connect Your Brand To A Product

It’s becoming evident that semantically connecting a brand to a service or product may be important for communicating what the brand should be relevant for.

Below is a screenshot of a sponsored post that’s indexed by Google and is ranking in AI Mode for the keyword phrase “what are ad hijacking tools.”

Screenshot of sponsored post ranking in AI Mode, September 2025

SEO Makes Content Eligible For AI Mode

SEO best practices are necessary to be eligible to appear in AI Mode. That’s different from saying that standard SEO will help you rank in AI Mode.

This is what Google says:

“To be eligible to be shown as a supporting link in AI Overviews or AI Mode, a page must be indexed and eligible to be shown in Google Search with a snippet, fulfilling the Search technical requirements. There are no additional technical requirements.”

The “Search technical requirements” are just the three basics of SEO:

  • “Googlebot isn’t blocked.
  • The page works, meaning that Google receives an HTTP 200 (success) status code.
  • The page has indexable content.”
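Those three basics are easy to spot-check. Here is a rough sketch in Python using the standard library plus requests; the noindex check is deliberately crude (a real audit would also parse the meta robots tag), and the URL is a placeholder.

    # Rough eligibility spot-check: Googlebot allowed, HTTP 200, indexable.
    # The noindex test below is crude and illustrative, not audit-grade.
    import requests
    from urllib.parse import urlparse, urljoin
    from urllib.robotparser import RobotFileParser

    def check_eligibility(url: str) -> dict:
        parsed = urlparse(url)
        robots = RobotFileParser(urljoin(f"{parsed.scheme}://{parsed.netloc}", "/robots.txt"))
        robots.read()

        resp = requests.get(url, timeout=10)
        header = resp.headers.get("X-Robots-Tag", "")
        return {
            "googlebot_allowed": robots.can_fetch("Googlebot", url),
            "returns_200": resp.status_code == 200,
            "no_noindex_header": "noindex" not in header.lower(),
        }

    print(check_eligibility("https://example.com/"))  # placeholder URL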

Google clearly says that foundational SEO is necessary to be eligible to rank in AI Mode. But it does not explicitly confirm that SEO will help a site rank in AI Mode.

Is SEO Enough For AI Mode?

Google and Googlers have reassured publishers and SEOs that nothing extra needs to be done to rank in AI search surfaces. They affirm that standard SEO practices are enough.

Standard SEO practices ensure that a site is crawled, indexed, and eligible for ranking in AI Mode. But the implication is that the signals for actually ranking in AI Mode are substantially different from those of standard organic search.

What Is FastSearch?

Information contained in recent Google antitrust court documents shows that AI Mode ranks pages with a technology called FastSearch.

FastSearch grounds Google’s AI search results in facts, including data from the web. This is significant because FastSearch uses different ranking signals from those used in regular organic search, prioritizing speed and selecting only a few top pages for AI grounding.

The recent Google antitrust trial document from early September offers this explanation of FastSearch:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. … FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. …

FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

And elsewhere in the same document:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

RankEmbed

RankEmbed is a deep learning model that identifies patterns in datasets and develops signals that are used for ranking purposes. It uses a combination of user data from search logs and scores generated by human raters to create the ranking-related signals.

The court document explains:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: __% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.

The RankEmbed model itself is an AI-based, deep learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms.”

Human-Rated Data

The human-rated data, which is part of RankEmbed, is not used to rank webpages. Human-rated data is used to train deep learning models so they can recognize patterns that correlate with high and low-quality webpages.

How human-rated data is used in general:

  • Human-rated data is used to create what are called labeled data.
  • Labeled data are examples that models use to identify patterns in vast amounts of data.

In this specific instance, the human-labeled data are examples of relevance and quality. The RankEmbed deep learning model uses those examples to learn how to identify patterns that correlate with relevance and page quality.
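As a generic illustration of that idea (emphatically not Google’s system), here is how a handful of human quality ratings become labeled examples that a model learns patterns from, sketched with scikit-learn. The page texts and labels are invented for demonstration.

    # Generic labeled-data illustration, not Google's pipeline. Human ratings
    # become labels; the model learns patterns that correlate with them.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pages = [  # hypothetical page text seen by raters
        "Step-by-step guide with sources, examples, and clear headings.",
        "Buy cheap pills now best price click here free shipping.",
        "Original research with a documented methodology and data tables.",
        "Keyword keyword keyword stuffed page with no real substance.",
    ]
    labels = [1, 0, 1, 0]  # 1 = rated high quality, 0 = rated low quality

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(pages, labels)  # the model generalizes from the labeled examples

    print(model.predict(["A documented tutorial with worked examples."]))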

Search Logs And User Behavior Signals

Let’s go back to how Google uses “70 days of search logs” as part of the RankEmbed deep learning model, which underpins FastSearch.

Search logs capture user behavior at the moment of search. The data is rich with a wide range of information, such as what users mean when they search, and it can also include the domain names of businesses they associate with certain keywords.

The court documentation doesn’t say all the ways this data can be used. However, a Google antitrust document from May 2025 revealed that search log (click) patterns only become meaningful when scaled to the billions.

Some SEOs have theorized that click data can directly influence the rankings, describing a granular use of clicks for ranking. But that may not be how click data is used, because it’s too noisy and imprecise.

What’s really happening is more scaled than granular. Patterns reveal themselves in the billions, not in the individual click. That’s not just my opinion; it’s a fact confirmed in the May 2025 Google antitrust exhibit:

“Some Known Shortcomings of Live Traffic Eval
The association between observed user behavior and search result quality is tenuous. We need lots of traffic to draw conclusions, and individual examples are difficult to interpret.”

It’s fair to say that search logs are not used to directly impact the rankings of an individual webpage, but are used to learn about relevance and quality from user behavior.

FastSearch is not the same ranking algorithm as the one used for organic search results. It is based on RankEmbed, and the term “embed” suggests that embeddings are involved. Embeddings map words into a vector space so that the meaning of the text is captured. For SEO, this means that keyword relevance matters less, and topical relevance and semantic meaning carry more weight.
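A quick way to see what “semantic meaning over keywords” looks like in practice is to embed a few strings and compare them. This sketch uses the open-source sentence-transformers library and a common public model as stand-ins; Google’s embeddings are, of course, proprietary.

    # Sketch: embeddings place similar meanings close together in vector
    # space, even without shared keywords. The model choice is illustrative.
    from sentence_transformers import SentenceTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    model = SentenceTransformer("all-MiniLM-L6-v2")

    texts = [
        "What is a mechanical keyboard?",
        "Boards with a physical switch under each key",
        "Best hiking trails in Colorado",
    ]
    vectors = model.encode(texts)

    print(cosine_similarity([vectors[0]], [vectors[1]]))  # high: same topic
    print(cosine_similarity([vectors[0]], [vectors[2]]))  # low: unrelated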

Google’s statement that standard SEO is all that’s needed to rank in AI Mode is true only to the extent that standard SEO will ensure that the webpage is crawled, indexed, and eligible for the final stage of AI Mode ranking, which is FastSearch.

But FastSearch uses an entirely different set of considerations at the LLM level to decide what will be used to answer the question.

In my opinion, it’s more realistic to say that SEO best practices make webpages eligible to appear in AI Mode, but the ranking processes are different, and so new considerations come into play.

SEO is still important, but it may be useful to focus on semantic and topical relevance.

AI Mode Is Multimodal

AI Mode is multimodal, meaning image and video content also rank in it. SEOs and publishers need to consider how user expectations drive content discovery, which means it may be useful to create image, video, and maybe even audio content in addition to text.

Optimizing Images For AI Mode

The featured image and the in-content images that accompany your content are under your control. The best images, in my opinion, are those that are noticeable when displayed in AI Mode and contain visual information relevant to the search query.

Here’s a screenshot of images that accompany the cited webpages for the query, “What is a mechanical keyboard?”

Screenshot from AI Mode, September 2025

As you can see, none of the images pop out or call attention to themselves. I don’t think that’s Google’s preference; that’s just what publishers use. Images should not be an afterthought. Make them an integrated part of your ranking strategy for AI Mode.

Creative use of images, in my opinion, can help a page call attention to itself as useful and relevant. The best images are ones that look good when Google crops them into a square format.

Because Google AI Mode is multimodal, optimize your images so that they display well in AI Mode search results. Your images should look good whether they are displayed as a rectangle (approximately 16:9 aspect ratio) or cropped to a square (1:1).
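One practical habit is to preview how your featured image survives a center crop to a square before you publish. Here is a minimal sketch with Pillow; the file names are placeholders.

    # Sketch: center-crop a rectangular featured image to a square so you can
    # preview how it might look when cropped. File names are placeholders.
    from PIL import Image

    def center_crop_square(src: str, dst: str) -> None:
        img = Image.open(src)
        side = min(img.size)
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img.crop((left, top, left + side, top + side)).save(dst)

    center_crop_square("featured-16x9.jpg", "featured-square.jpg")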

Mordy Oberstein offers these insights on multimodal marketing:

“AI Mode is looking at videos, images, and yes, you could do all of that. Yes, you should do all of that – whatever is possible to do while being efficient and not getting misdirected or losing focus – yes, go ahead. I’m all for creating authoritativeness through content. I think that’s an essential strategy for pretty much any business.

AI Mode is not just looking at your website content, whether it’s your image content, audio content, whatever it may be, it’s also looking at how the web is talking about you.”

AI Mode Is Evolution, Not Extension

AI Mode is not just an extension of traditional search but an evolution of it. Search now includes text, images, and video. It anticipates follow-up queries and displays the answers to them using the query fan-out technique. This shifts the SEO focus away from keyword inventory and chasing clicks and toward considering how the entire user information journey is best addressed and then crafting content that satisfies that need.



Featured Image: Jirsak/Shutterstock

Perplexity Launches Comet Browser For Free Worldwide via @sejournal, @MattGSouthern

Perplexity released its Comet browser to everyone today, shifting from a waitlist to free desktop downloads worldwide.

Comet bakes an AI assistant into every new tab so you can ask questions, summarize pages, and navigate without jumping between search results and multiple tools.

Perplexity first introduced Comet in July in a limited release. Since then, the company says “millions” have joined the waitlist, and early users asked 6–18 times more questions on day one.

The move poses a challenge to traditional search engines and browsers by adopting an AI-first approach to web navigation, which reduces the need for multiple searches and the management of numerous tabs.

What Makes Comet Different

At the core of Comet’s functionality is the Comet Assistant, an AI-powered helper that browses alongside users and handles tasks such as research, meeting support, coding assistance, and e-commerce activities.

The assistant appears in every new tab, ready to answer questions or complete actions without requiring users to navigate away from their current workflow.

Unlike traditional browsers where users must open a separate search engine, copy information between tabs, or use multiple tools, Comet integrates assistance directly into the browsing experience. You can ask questions in natural language, and the assistant provides answers drawn from web sources.

Background Assistants

Perplexity also announced Background Assistants today. These assistants work simultaneously and asynchronously in the background, handling tasks without requiring active user supervision.

The Background Assistants join the recently announced Email Assistant, currently available to Max Subscribers. The Email Assistant can be cc’d on email threads to handle scheduling, draft replies, and manage inbox tasks without opening a separate application.

Mobile & Voice Coming Soon

While Comet has been desktop-only since its July launch, Perplexity recently previewed mobile versions for iPhone and Android.

The mobile version will include voice technology, allowing users to interact with Comet assistants through speech rather than typing.

Availability

Comet is now available for free download at perplexity.ai/comet for desktop users.

For tips on using the browser, see Perplexity’s resource hub.


Featured Image: Sidney van den Boogaard/Shutterstock

Vector Index Hygiene: A New Layer Of Technical SEO via @sejournal, @DuaneForrester

For years, technical SEO has been about crawlability, structured data, canonical tags, sitemaps, and speed. All the plumbing that makes pages accessible and indexable. That work still matters. But in the retrieval era, there’s another layer you can’t ignore: vector index hygiene. And while I’d like to claim my usage of vector index hygiene is unique, similar concepts already exist in machine learning (ML) circles. What is unique is applying it specifically to our work with content embedding, chunk pollution, and retrieval in SEO/AI pipelines.

This isn’t a replacement for crawlability and schema. It’s an addition. If you want visibility in AI-driven answer engines, you now need to understand how your content is dismantled, embedded, and stored in vector indexes and what can go wrong if it isn’t clean.

Traditional Indexing: How Search Engines Break Pages Apart

Google has never stored your page as one giant file. From the beginning, search has dismantled webpages into discrete elements and stored them in separate indexes.

  • Text is broken into tokens and stored in inverted indexes, which map terms to the documents they appear in. Here, tokenization means traditional IR terms, not LLM sub-word units. This is the backbone of keyword retrieval at scale; a toy example follows this list. (See: Google’s How Search Works overview.)
  • Images are indexed separately, using filenames, alt text, captions, structured data, and machine-learned visual features. (See: Google Images documentation.)
  • Video is split into transcripts, thumbnails, and structured data, all stored in a video index. (See: Google’s video indexing docs.)
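For anyone who has never seen one, an inverted index is a simple structure: each term points to the documents that contain it. A toy sketch in Python, with invented documents:

    # Toy inverted index: each term maps to the set of documents containing it.
    from collections import defaultdict

    docs = {
        1: "mechanical keyboard switches",
        2: "keyboard shortcuts for mac",
        3: "mechanical watch repair",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():  # naive tokenization, for illustration only
            index[term].add(doc_id)

    print(index["mechanical"])  # {1, 3} -> documents containing the term
    print(index["keyboard"])    # {1, 2}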

When you type a query into Google, it queries these indexes in parallel (web, images, video, news) and blends the results into one SERP. This separation exists because handling “an internet’s worth” of text is not the same as handling an internet’s worth of images or video.

For SEOs, the important point is this: you never really ranked “the page.” You ranked the parts of it that were indexed and retrievable.

GenAI Retrieval: From Inverted Indexes To Vector Indexes

AI-driven answer engines like ChatGPT, Gemini, Claude, and Perplexity push this model further. Instead of inverted indexes that map terms to documents, they use vector indexes that store embeddings, essentially mathematical fingerprints of meaning.

  • Chunks, not pages. Content is split into small blocks. Each block is embedded into a vector. Retrieval happens by finding semantically similar vectors in response to a query. (See: Google Vertex AI Vector Search overview.)
  • Hybrid retrieval is common. Dense vector search captures semantics. Sparse keyword search (BM25) captures exact matches. Fusion methods like reciprocal rank fusion (RRF) combine both; a worked sketch follows this list. (See: Weaviate hybrid search explained and RRF primer.)
  • Paraphrased answers replace ranked lists. Instead of showing a SERP, the model paraphrases retrieved chunks into a single answer.
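Reciprocal rank fusion is simple enough to show in a few lines. Each document scores 1/(k + rank) in every ranking it appears in, and the scores are summed; k = 60 is the constant commonly used in the literature. The chunk names below are placeholders.

    # Sketch of reciprocal rank fusion (RRF): merge a dense (semantic) ranking
    # with a sparse (BM25-style keyword) ranking. k=60 is the usual constant.
    def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
        scores: dict[str, float] = {}
        for ranking in rankings:
            for rank, doc in enumerate(ranking, start=1):
                scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    dense = ["chunk_a", "chunk_b", "chunk_c"]   # semantic search order
    sparse = ["chunk_b", "chunk_d", "chunk_a"]  # keyword (BM25) order

    print(rrf([dense, sparse]))  # chunks both systems agree on rise to the top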

Sometimes, these systems still lean on traditional search as a backstop. Recent reporting showed ChatGPT quietly pulling Google results through SerpApi when it lacked confidence in its own retrieval. (See: Report)

For SEOs, the shift is stark. Retrieval replaces ranking. If your blocks aren’t retrieved, you’re invisible.

What Vector Index Hygiene Means

Vector index hygiene is the discipline of preparing, structuring, embedding, and maintaining content so it remains clean, deduplicated, and easy to retrieve in vector space. Think of it as canonicalization for the retrieval era.

Without hygiene, your content pollutes indexes:

  • Bloated blocks: If a chunk spans multiple topics, the resulting embedding is muddy and weak.
  • Boilerplate duplication: Repeated intros or promos create identical vectors that may drown out unique content.
  • Noise leakage: Sidebars, CTAs, or footers can get chunked and embedded, then retrieved as if they were main content.
  • Mismatched content types: FAQs, glossaries, blogs, and specs each need different chunk strategies. Treat them the same and you lose precision.
  • Stale embeddings: Models evolve. If you never re-embed after upgrades, your index contains inconsistencies.

Independent research backs this up. LLMs lose salience on long, messy inputs (“Lost in the Middle”). Chunking strategies show measurable trade-offs in retrieval quality (See: “Improving Retrieval for RAG-based Question Answering Models on Financial Documents”). Best practices now include regular re-embedding and index refreshes. (See: Milvus guidance.)

For SEOs, this means hygiene work is no longer optional. It decides whether your content gets surfaced at all.

SEOs can begin treating hygiene the way we once treated crawlability audits. The steps are tactical and measurable.

1. Prep Before Embedding

Strip navigation, boilerplate, CTAs, cookie banners, and repeated blocks. Normalize headings, lists, and code so each block is clean. (Do I need to explain that you still need to keep things human-friendly, too?)
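A minimal sketch of that prep step, using BeautifulSoup; the CSS selectors are placeholders for whatever your own templates use:

    # Sketch: strip navigation, footers, and CTA blocks before embedding.
    # The selectors are placeholders; adapt them to your own templates.
    from bs4 import BeautifulSoup

    def clean_main_content(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup(["nav", "footer", "aside", "script", "style"]):
            tag.decompose()
        for block in soup.select(".cta, .newsletter-signup, .cookie-banner"):
            block.decompose()
        return soup.get_text(separator="\n", strip=True)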

2. Chunking Discipline

Break content into coherent, self-contained units. Right-size chunks by content type. FAQs can be short, guides need more context. Overlap chunks sparingly to avoid duplication.
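Here is one way to express that discipline in code: paragraph-based chunks with a size cap and sparing overlap. The size cap is an illustrative default; tune it per content type.

    # Sketch: paragraph-based chunking with a size cap and light overlap.
    # max_chars is an illustrative default; FAQs want less, guides more.
    def chunk(text: str, max_chars: int = 1200, overlap_paras: int = 1) -> list[str]:
        paras = [p.strip() for p in text.split("\n\n") if p.strip()]
        chunks, current = [], []
        for para in paras:
            if current and sum(len(p) for p in current) + len(para) > max_chars:
                chunks.append("\n\n".join(current))
                current = current[-overlap_paras:]  # sparing overlap for context
            current.append(para)
        if current:
            chunks.append("\n\n".join(current))
        return chunks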

3. Deduplication

Vary intros and summaries across articles. Don’t let identical blocks generate nearly identical embeddings.
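Exact and near-exact duplicates are cheap to catch before they reach the index. A sketch that fingerprints normalized chunk text; true semantic near-duplicates would need embedding similarity on top of this.

    # Sketch: drop chunks whose normalized text has already been seen.
    # Catches exact boilerplate repeats; semantic near-dupes need more.
    import hashlib

    def dedupe(chunks: list[str]) -> list[str]:
        seen, unique = set(), []
        for text in chunks:
            normalized = " ".join(text.lower().split())
            fingerprint = hashlib.sha256(normalized.encode()).hexdigest()
            if fingerprint not in seen:
                seen.add(fingerprint)
                unique.append(text)
        return unique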

4. Metadata Tagging

Attach content type, language, date, and source URL to every block. Use metadata filters during retrieval to exclude noise. (See: Pinecone research on metadata filtering.)

5. Versioning And Refresh

Track embedding model versions. Re-embed after upgrades. Refresh indexes on a cadence aligned to content changes. (See: Milvus versioning guidance.)

6. Retrieval Tuning

Use hybrid retrieval (dense + sparse) with RRF. Add re-ranking to prioritize stronger chunks. (See: Weaviate hybrid search best practices.)

A Note On Cookie Banners (Illustration Of Pollution In Theory)

Cookie consent banners are legally required across much of the web. You’ve seen the text: “We use cookies to improve your experience.” It’s boilerplate, and it repeats across every page of a site.

In large systems like ChatGPT or Gemini, you don’t see this text popping up in answers. That’s almost certainly because they filter it out before embedding. A simple rule like “if text contains ‘we use cookies,’ don’t vectorize it” is enough to prevent most of that noise.
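That rule really is a few lines of code. A sketch, with the marker strings as obvious placeholders to extend:

    # The simple pre-embedding rule described above, as code. Marker strings
    # are placeholders; extend the tuple with your own boilerplate phrases.
    BOILERPLATE_MARKERS = ("we use cookies", "accept all cookies")

    def should_vectorize(text: str) -> bool:
        lowered = text.lower()
        return not any(marker in lowered for marker in BOILERPLATE_MARKERS)

    chunks = [
        "We use cookies to improve your experience.",
        "Mechanical switches register keystrokes with a physical mechanism.",
    ]
    print([c for c in chunks if should_vectorize(c)])  # banner text filtered out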

But despite this, cookie banners are still a useful illustration of theory meeting practice. If you’re:

  • Building your own RAG stack, or
  • Using third-party SEO tools where you don’t control the preprocessing,

Then cookie banners (or any repeated boilerplate) can slip into embeddings and pollute your index. The result is duplicate, low-value vectors spread across your content, which weakens retrieval. This, in turn, messes with the data you’re collecting, and potentially the decisions you’re about to make from that data.

The banner itself isn’t the problem. It’s a stand-in for how any repeated, non-semantic text can degrade your retrieval if you don’t filter it. Cookie banners just make the concept visible. And if the systems ignore your cookie banner content, does the sheer volume of content that must be ignored teach the system that your overall utility is lower than that of a competitor without similar patterns? Is there enough of that content that the system gets “lost in the middle” trying to reach your useful content?

Old Technical SEO Still Matters

Vector index hygiene doesn’t erase crawlability or schema. It sits beside them.

  • Canonicalization prevents duplicate URLs from wasting crawl budget. Hygiene prevents duplicate vectors from wasting retrieval opportunities. (See: Google’s canonicalization troubleshooting.)
  • Structured data still helps models interpret your content correctly.
  • Sitemaps still improve discovery.
  • Page speed still influences rankings where rankings exist.

Think of hygiene as a new pillar, not a replacement. Traditional technical SEO makes content findable. Hygiene makes it retrievable in AI-driven systems.

You don’t need to boil the ocean. Start with one content type and expand.

  • Audit your FAQs for duplication and block size (chunk size).
  • Strip noise and re-chunk.
  • Track retrieval frequency and attribution in AI outputs.
  • Expand to more content types.
  • Build a hygiene checklist into your publishing workflow.

Over time, hygiene becomes as routine as schema markup or canonical tags.

Your content is already being chunked, embedded, and retrieved, whether you’ve thought about it or not.

The only question is whether those embeddings are clean and useful, or polluted and ignored.

Vector index hygiene is not THE new technical SEO. But it is A new layer of technical SEO. If crawlability was part of the technical SEO of 2010, hygiene is part of the technical SEO of 2025.

SEOs who treat it that way will still be visible when answer engines, not SERPs, decide what gets seen.



This post was originally published on Duane Forrester Decodes.


Featured Image: Collagery/Shutterstock

How People Really Use LLMs And What That Means For Publishers

OpenAI released the largest study to date on how users really use ChatGPT. I have painstakingly synthesized the insights you and I should pay heed to, so you don’t have to wade through the plethora of useful and pointless findings.

TL;DR

  1. LLMs are not replacing search. But they are shifting how people access and consume information.
  2. Asking (49%) and Doing (40%) queries dominate the market and are increasing in quality.
  3. The top three use cases – Practical Guidance, Seeking Information, and Writing – account for 80% of all conversations.
  4. Publishers need to build linkable assets that add value. It can’t just be about chasing traffic from articles anymore.
Image Credit: Harry Clarkson-Bennett

Chatbot 101

A chatbot is a statistical model trained to generate a text response given some text input. Monkey see, monkey do.

The more advanced chatbots have a two or more-stage training process. In stage one (less colloquially known as “pre-training”), LLMs are trained to predict the next word in a string.

Like the world’s best accountant, they are both predictable and boring. And that’s not necessarily a bad thing. I want my chefs fat, my pilots sober, and my money men so boring they’re next in line to lead the Green Party.

Stage two is where things get a little fancier. In the “post-training” phase, models are trained to generate “quality” responses to a prompt. They are fine-tuned using strategies like reinforcement learning, in which responses are graded and the model is rewarded for better ones.

Over time, the LLMs, like Pavlov’s dog, are either rewarded or reprimanded based on the quality of their responses.

In phase one, the model “understands” (definitely in inverted commas) a latent representation of the world. In phase two, its knowledge is honed to generate the best quality response.

Without temperature (that is, with the randomness turned off), an LLM will generate exactly the same response time after time, as long as the model and prompt are the same.

Higher temperatures (closer to 1.0) increase randomness and creativity. Lower temperatures (closer to 0) make the model far more predictable and precise.

So, your use case determines the appropriate temperature setting. Coding should be set closer to zero. Creative, more content-focused tasks should be closer to one.
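Here is what that looks like in practice with the OpenAI Python client; the model name is an assumption, and the temperature values are just sensible starting points.

    # Sketch: the same client, two temperatures for two use cases.
    # Model name and temperature values are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, temperature: float) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return response.choices[0].message.content

    print(ask("Write a Python function that reverses a string.", temperature=0.1))
    print(ask("Write a playful tagline for a keyboard brand.", temperature=0.9))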

I have already talked about this in my article on how to build a brand post-AI. But I highly recommend reading this very good guide on how temperature scales work with LLMs and how they impact the user base.

What Does The Data Tell Us?

That LLMs are not a direct replacement for search. Not even close, IMO. This Semrush study highlighted that LLM super users actually increased the number of traditional searches they were doing. The expansion theory seems to hold true.

But they have brought on a fundamental shift in how people access and interact with information. Conversational interfaces have incredible value. Particularly in a workplace format.

Who knew we were so lazy?

1. Guidance, Seeking Information, And Writing Dominate

These top three use cases account for 80% of all human-robot conversations. Practical guidance, seeking information, and please help me write something bland and lacking any kind of passion or insight, wondrous robot.

I will concede that the majority of Writing queries are for editing existing work. Still. If I read something written by AI, I will feel duped. And deception is not an attractive quality.

2. Non-Work-Related Usage Is Increasing

  • Non-work-related messages grew from 53% of all usage to more than 70% by July 2025.
  • LLMs have become habitual. Particularly when it comes to helping us make the right decisions. Both in and out of work.

3. Writing Is The Most Common Workplace Application

  • Writing is the most common work use case, accounting for 40% of work-related messages on average in June 2025.
  • About two-thirds of all Writing messages are requests to modify existing user text rather than create new text from scratch.

I know enough people that just use LLMs to help them write better emails. I almost feel sorry for the tech bros that the primary use cases for these tools are so lacking in creativity.

4. Less So Coding

  • Computer coding queries are a relatively small share, at only 4.2% of all messages.*
  • This feels very counterintuitive, but specialist bots like Claude or tools like Lovable are better alternatives.
  • This is a point of note. Specialist LLM usage will grow and will likely dominate specific industries because specialist models can produce better-quality outputs. The specialized stage-two-style training makes for a far superior product.

*Compared to 33% of work-related Claude conversations.

It’s important to note that other studies have some very different takes on what people use LLMs for. So this isn’t as cut and dry as we think. I’m sure things will continue to change.

5. Men No Longer Dominate

  • Early adopters were disproportionately male (around 80% with typically masculine names).
  • That number declined to 48% by June 2025, with active users now slightly more likely to have typically feminine names.

Sure, us men have our flaws. Throughout history maybe we’ve been a tad quick to battle and a little dominating. But good to see parity.

6. Asking And Doing Dominate

  • 89% of all queries are Asking and Doing related.
  • 49% Asking and 40% Doing, with just 11% for Expressing.
  • Asking messages have grown faster than Doing messages over the last year, and are rated higher quality.
A ChatGPT-built table with examples of each query type – Asking, Doing, and Expressing (Image Credit: Harry Clarkson-Bennett)

7. Relationships And Personal Reflection Are Not Prominent

  • There have been a number of studies that state that LLMs have become personal therapists for people (see above).
  • However, relationships and personal reflection only account for 1.9% of total messages according to OpenAI.

8. The Bloody Youth (*Shakes Fist*)

Takeaways

I don’t think LLMs are a disaster for publishers. Sure, they don’t send any referral traffic and have started to remove citations outside of paid users (classic). But none of these tech-heads are going to give us anything.

It’s a race to the moon, and we’re the dog they sent on the test flight.

But if you’re a publisher with an opinion, an audience, and – hopefully – some brand depth and assets to hand, you’ll be ok. Although their crawling behavior is getting out of hand.

Shit-quality traffic and not a lot of it (Image Credit: Harry Clarkson-Bennett)

One of the most practical outcomes we as publishers can take from this data is the apparent change in intents. For eons, we’ve been lumbered with navigational, informational, commercial, and transactional.

Now we have Doing. Or Generating. And it’s huge.

Even simple tools can still drive fantastic traffic and revenue (Image Credit: Harry Clarkson-Bennett)

SEO isn’t dead for publishers. But we do need to do more than just keep publishing content. There’s a lot to be said for espousing the values of AI, while keeping it at arm’s length.

Think BBC Verify. Content that can’t be synthesized by machines because it adds so much value. Tools and linkable assets. Real opinions from experts pushed to the fore.

But it’s hard to scale that quality. Programmatic SEO can drive amazing value. As can tools. Tools that answer users’ “Doing” queries time after time. We have to build things that add value outside of the existing corpus.

And if your audience is generally younger and more trusting, you’re going to have to lean into this more.



This post was originally published on Leadership in SEO.


Featured Image: Roman Samborskyi/Shutterstock