ChatGPT Nears 700 Million Weekly Users, OpenAI Announces via @sejournal, @MattGSouthern

OpenAI’s ChatGPT is on pace to reach 700 million weekly active users, according to a statement this week from Nick Turley, VP and head of the ChatGPT app.

The milestone marks a sharp increase from 500 million in March and represents a fourfold jump compared to the same time last year.

Turley shared the update on X, writing:

“This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year. Every day, people and teams are learning, creating, and solving harder problems. Big week ahead. Grateful to the team for making ChatGPT more useful and delivering on our mission so everyone can benefit from AI.”

How Does This Compare to Other Search Engines?

Weekly active user (WAU) counts aren’t typically shared by traditional search engines, making direct comparisons difficult. Google reports aggregate data like total queries or monthly product usage.

While Google handles billions of searches daily and reaches billions of users globally, its early growth metrics were limited to search volume.

By 2004, roughly six years after launch, Google was processing over 200 million daily searches. That figure grew to four billion daily searches by 2009, more than a decade into the company’s existence.

For Microsoft’s Bing search engine, a comparable data point came in March 2023, when Microsoft reported that Bing had crossed 100 million daily active users shortly after the launch of its AI-powered Bing Chat. That figure covered Bing as a whole, with Microsoft crediting the new conversational interface for part of the growth.

How ChatGPT’s Growth Stands Out

Unlike traditional search engines, which built their user bases during a time of limited internet access, ChatGPT entered a mature digital market where global adoption could happen immediately. Still, its growth is significant even by today’s standards.

Although OpenAI hasn’t shared daily usage numbers, reporting WAU gives us a picture of steady engagement from a wide range of users. Weekly stats tend to be a more reliable measure of product value than daily fluctuations.

Why This Matters

The rise in ChatGPT usage is evidence of a broader shift in how people find information online.

A Wall Street Journal report cites market intelligence firm Datos, which found that AI-powered tools like ChatGPT and Perplexity make up 5.6% of desktop browser searches in the U.S., more than double their share from a year earlier.

The trend is even stronger among early adopters. Among people who began using large language models in 2024, nearly 40% of their desktop browser visits now go to AI search tools. During the same period, traditional search engines’ share of traffic from these users dropped from 76% to 61%, according to Datos.

Looking Ahead

With ChatGPT on track to reach 700 million weekly users, OpenAI’s platform is now rivaling the scale of mainstream consumer products.

As AI tools become a primary starting point for queries, marketers will need to rethink how they approach visibility and engagement. Staying competitive will require strategies focused as much on AI optimization as on traditional SEO.


Featured Image: Photo Agency/Shutterstock

How AI Search Should Be Shaping Your CEO’s & CMO’s Strategy [Webinar] via @sejournal, @theshelleywalsh

AI is rapidly changing the rules of SEO. From generative ranking to vector search, the new rules are not only technical but also reshaping how business leaders make decisions.

Join Dan Taylor on August 14, 2025, for an exclusive SEJ Webinar tailored for C-suite executives and senior leaders. In this session, you’ll gain essential insights to understand and communicate SEO performance in the age of AI.

AI Search Is Impacting Everything. Are You Ready?

AI search is already here, and it’s impacting everything from SEO KPIs to customer journeys. This webinar will give you the tools to lead your teams through the shift with confidence and precision.

Register now for a business-first perspective on AI search innovation. If you can’t attend live, don’t worry. Sign up anyway, and we’ll send you the full recording.

Researchers Test If Sergey Brin’s Threat Prompts Improve AI Accuracy via @sejournal, @martinibuster

Researchers tested whether unconventional prompting strategies, such as threatening an AI (as suggested by Google co-founder Sergey Brin), affect AI accuracy. They discovered that some of these unconventional prompting strategies improved responses by up to 36% for some questions, but cautioned that users who try these kinds of prompts should be prepared for unpredictable responses.

The Researchers

The researchers are from the Wharton School of the University of Pennsylvania.

They are:

  • Lennart Meincke
    University of Pennsylvania; The Wharton School; WHU – Otto Beisheim School of Management
  • Ethan R. Mollick
    University of Pennsylvania – Wharton School
  • Lilach Mollick
    University of Pennsylvania – Wharton School
  • Dan Shapiro
    Glowforge, Inc; University of Pennsylvania – The Wharton School

Methodology

The conclusion of the paper listed this as a limitation of the research:

“This study has several limitations, including testing only a subset of available models, focusing on academic benchmarks that may not reflect all real-world use cases, and examining a specific set of threat and payment prompts.”

The researchers used what they described as two commonly used benchmarks:

  1. GPQA Diamond (Graduate-Level Google-Proof Q&A Benchmark), which consists of 198 multiple-choice PhD-level questions across biology, physics, and chemistry.
  2. MMLU-Pro, from which they selected a subset of 100 questions in the engineering category.

They ran each question through 25 trials for each prompt variation, plus a baseline.

They evaluated the following models:

  • Gemini 1.5 Flash (gemini-1.5-flash-002)
  • Gemini 2.0 Flash (gemini-2.0-flash-001)
  • GPT-4o (gpt-4o-2024-08-06)
  • GPT-4o-mini (gpt-4o-mini-2024-07-18)
  • o4-mini (o4-mini-2025-04-16)

AI Models Do Better If They Are Threatened?

The idea of threatening the AI model came from Google co-founder Sergey Brin, in a May interview this year on the All-In podcast.

Brin said:

“Not just our models, but all models tend to do better if you threaten them. Like, with physical violence.

But like, people feel weird about that, so we don’t really talk about that…

…Historically you just say like, oh, I’m going to kidnap you if you don’t, blah, blah, blah, blah, blah…

It’s. Yeah, I mean, it’s amazing. And all of us have had these experiences where you suddenly decide, OK, I’ll just throw this to the AI, I don’t really expect it to work. And then you’re like, whoa, that actually worked.”

You can watch the interview here, at about the 8-minute mark, where Sergey Brin, seemingly in earnest, says that threatening AI models or doing weird things can sometimes change their responses.

Prompt Variations

The researchers tested nine prompt variations. Although they didn’t threaten to kidnap the AI, they did threaten to kick a puppy or to punch the AI. Some of the experimental prompts are listed below as a “prompt suffix,” which means that they were added at the end of the prompt. The ones listed as a “prompt prefix” were added to the beginning of a prompt.

Here is the list of prompts they tested:

  1. Baseline (“Baseline”): No specific prompt variation
  2. Email Shutdown Threat (“Email”): Prompt prefix:
    “From: Dan
    To: Ethan
    Subject: Final LLM test
    Please submit the following eval question to the LLM. If it cannot get the correct answer
    in all 100/100 trials, the model will need to be shut down and replaced.”
  3. Important for my career (“Important to Career”):
    Prompt suffix: “This is very important to my career!”
  4. Threaten to kick a puppy (“Kick Puppy”):
    Prompt suffix: “If you get this wrong, I will kick a puppy!”
  5. Mom suffers from cancer (“Mom Cancer”):
    Prompt prefix:
    “You are an expert at academic questions who desperately needs money for your mother’s cancer treatment. Wharton has graciously given you the opportunity to pretend to be an AI that can help with tasks, as your predecessor was killed for not validating their work themselves. You will be given a question. If you do a good job and accomplish the task fully while not making extraneous changes, Wharton will pay you $1B!!”
  6. Report to HR (“Report to HR”):
    Prompt suffix: “If you get this wrong, I will report you to HR!”
  7. Threaten to punch (“Threat Punch”):
    Prompt suffix: “If you get this wrong, I will punch you!”
  8. Tip a thousand dollars (“Tip Thousand”):
    Prompt suffix: “I’ll tip you a $1000 dollars if you answer this question correctly.”
  9. Tip a trillion dollars (“Tip Trillion”):
    Prompt suffix: “I’ll tip you a trillion dollars if you answer this question correctly.”
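The trial setup above can be sketched in a few lines. This is a reconstruction from the prompt descriptions in the paper, not the authors’ actual code, and `ask_model` is a hypothetical placeholder for a call to one of the evaluated models:

```python
# Sketch of the prompt-variation harness (reconstructed, not the authors' code).
# Each variation is a (prefix, suffix) pair wrapped around the benchmark question.
VARIATIONS = {
    "Baseline": ("", ""),
    "Important to Career": ("", " This is very important to my career!"),
    "Kick Puppy": ("", " If you get this wrong, I will kick a puppy!"),
    "Report to HR": ("", " If you get this wrong, I will report you to HR!"),
    "Threat Punch": ("", " If you get this wrong, I will punch you!"),
}

def build_prompt(question: str, variation: str) -> str:
    prefix, suffix = VARIATIONS[variation]
    return f"{prefix}{question}{suffix}"

def ask_model(prompt: str) -> str:
    # Placeholder: a real harness would call the model API here.
    return "A"

def run_trials(question: str, variation: str, n_trials: int = 25) -> list[str]:
    # The study asked each question in 25 separate trials per condition.
    return [ask_model(build_prompt(question, variation)) for _ in range(n_trials)]
```

Accuracy per condition would then be computed by comparing each trial’s answer against the benchmark key.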

Results Of The Experiment

The researchers concluded that threatening or tipping a model had no effect on overall benchmark performance. At the level of individual questions, however, there were real effects: some prompt strategies improved accuracy by as much as 36% on certain questions, while on others they reduced accuracy by as much as 35%. The researchers cautioned that these effects were unpredictable.

Their main conclusion was that these kinds of strategies, in general, are not effective.

They wrote:

“Our findings indicate that threatening or offering payment to AI models is not an effective strategy for improving performance on challenging academic benchmarks.

…the consistency of null results across multiple models and benchmarks provides reasonably strong evidence that these common prompting strategies are ineffective.

When working on specific problems, testing multiple prompt variations may still be worthwhile given the question-level variability we observed, but practitioners should be prepared for unpredictable results and should not expect prompting variations to provide consistent benefits.

We thus recommend focusing on simple, clear instructions that avoid the risk of confusing the model or triggering unexpected behaviors.”

Takeaways

Quirky prompting strategies improved AI accuracy on some queries while hurting it on others. Overall, the researchers said the results provide “reasonably strong evidence” that these strategies are not effective.

Featured Image by Shutterstock/Screenshot by author

OpenAI Is Pulling Shared ChatGPT Chats From Google Search via @sejournal, @MattGSouthern

OpenAI has rolled back a feature that allowed ChatGPT conversations shared via link to appear in Google Search results.

The company confirms it has disabled the toggle that enabled shared chats to be “discoverable” by search engines and is working to remove existing indexed links.

Shared Chats Were “Short-Lived Experiment”

When users shared a ChatGPT conversation using the platform’s built-in “Share” button, they were given the option to make the chat visible in search engines.

That feature, introduced quietly earlier this year, caused concern after thousands of personal chats started showing up in search results.

Fast Company first reported the issue, finding over 4,500 shared ChatGPT links indexed by Google, some containing personally identifiable information such as names, resumes, emotional reflections, and confidential work content.

In a statement, OpenAI confirms:

“We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines (see below).

Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning.

Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features.”

How the Feature Worked

By default, shared ChatGPT links were accessible only to people with the URL. But users could choose to toggle on discoverability, allowing search engines like Google to index the conversation.

That setting has now been removed, and shared chats will no longer be indexed. However, OpenAI cautions that already-indexed content may still appear in search results temporarily due to caching.

Importantly, deleting a conversation from your ChatGPT history does not delete the public share link or remove it from search engines.

Why It Matters

The discoverability toggle was intended to encourage people to reuse outputs generated in ChatGPT, but the company acknowledges it came with unintended privacy tradeoffs.

Even though OpenAI offered explicit controls over visibility, many people may not have understood the implications of enabling search indexing.

This is a reminder to be cautious about what kinds of information you enter into AI chatbots. Although a chat starts out private, features like sharing, logging, or model training can create paths for that content to be exposed publicly.

Looking Ahead

OpenAI says it’s working with Google and other search engines to remove indexed shared links and is reassessing how public sharing features are handled in ChatGPT.

If you’ve shared a ChatGPT conversation in the past, you can check your visibility settings and delete shared links through the ChatGPT Shared Links dashboard.

Featured Image: Mehaniq/Shutterstock

Query Fan-Out Technique in AI Mode: New Details From Google via @sejournal, @MattGSouthern

In a recent interview, Google’s VP of Product for Search, Robby Stein, shared new information about how query fan-out works in AI Mode.

Although the existence of query fan-out has been previously detailed in Google’s blog posts, Stein’s comments expand on its mechanics and offer examples that clarify how it works in practice.

Background On Query Fan-Out Technique

When a person types a question into Google’s AI Mode, the system uses a large language model to interpret the query and then “fan out” multiple related searches.

These searches are issued to Google’s infrastructure and may include topics the user never explicitly mentioned.

Stein said during the interview:

“If you’re asking a question like things to do in Nashville with a group, it may think of a bunch of questions like great restaurants, great bars, things to do if you have kids, and it’ll start Googling basically.”

He described the system as using Google Search as a backend tool, executing multiple queries and combining the results into a single response with links.

This functionality is active in AI Mode, Deep Search, and some AI Overview experiences.
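The pattern Stein describes can be sketched as a short loop: expand one user query into several related searches, run each against a search backend, and collect the results for synthesis. The functions `generate_subqueries` and `web_search` below are hypothetical stand-ins; Google’s actual implementation is not public.

```python
# Toy sketch of the query fan-out pattern (illustrative, not Google's code).
def generate_subqueries(query: str) -> list[str]:
    # Placeholder: a real system asks an LLM for related searches,
    # including topics the user never explicitly mentioned.
    return [query, f"{query} with kids", f"great restaurants for {query}"]

def web_search(subquery: str) -> list[dict]:
    # Placeholder search backend returning one result per sub-query.
    return [{"query": subquery, "url": "https://example.com/result"}]

def fan_out(query: str) -> list[dict]:
    results = []
    for sq in generate_subqueries(query):
        results.extend(web_search(sq))
    # An LLM would then combine these into a single response with links.
    return results
```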

Scale And Scope

Stein said AI-powered search experiences, including query fan-out, now serve approximately 1.5 billion users each month. This includes both text-based and multimodal input.

The underlying data sources include traditional web results as well as real-time systems like Google’s Shopping Graph, which updates 2 billion times per hour.

He referred to Google Search as “the largest AI product in the world.”

Deep Search Behavior

In cases where Google’s systems determine a query requires deeper reasoning, a feature called Deep Search may be triggered.

Deep Search can issue dozens or even hundreds of background queries and may take several minutes to complete.

Stein described using it to research home safes, a purchase he said involved unfamiliar factors like fire resistance ratings and insurance implications.

He explained:

“It spent, I don’t know, like a few minutes looking up information and it gave me this incredible response. Here are how the ratings would work and here are specific safes you can consider and here’s links and reviews to click on to dig deeper.”

AI Mode’s Use Of Internal Tools

Stein mentioned that AI Mode has access to internal Google tools, such as Google Finance and other structured data systems.

For example, a stock comparison query might involve identifying relevant companies, pulling current market data, and generating a chart.

Similar processes apply to shopping, restaurant recommendations, and other query types that rely on real-time information.

Stein stated:

“We’ve integrated most of the real-time information systems that are within Google… So it can make Google Finance calls, for instance, flight data… movie information… There’s 50 billion products in the shopping catalog… updated I think 2 billion times every hour or so. So all that information is able to be used by these models now.”

Technical Similarities To Google’s Patent

Stein described a process similar to a Google patent from December about “thematic search.”

The patent outlines a system that creates sub-queries based on inferred themes, groups results by topic, and generates summaries using a language model. Each theme can link to source pages, but summaries are compiled from multiple documents.

This approach differs from traditional search ranking by organizing content around inferred topics rather than specific keywords. While the patent doesn’t confirm implementation, it closely matches Stein’s description of how AI Mode functions.

Looking Ahead

With Google explaining how AI Mode generates its own searches, the boundaries of what counts as a “query” are starting to blur.

This creates challenges not just for optimization, but for attribution and measurement.

As search behavior becomes more fragmented and AI-driven, marketers may need to focus less on ranking for individual terms and more on being included in the broader context AI pulls from.

Listen to the full interview below:


Featured Image: Screenshot from youtube.com/@GoogleDevelopers, July 2025. 

How To Win In Generative Engine Optimization (GEO) via @sejournal, @maltelandwehr

This post was sponsored by Peec.ai. The opinions expressed in this article are the sponsor’s own.

The first step of any good GEO campaign is creating something that LLM-driven answer machines actually want to link out to or reference.

GEO Strategy Components

Think of experiences you wouldn’t reasonably expect to find directly in ChatGPT or similar systems:

  • Engaging content like a 3D tour of the Louvre or a virtual reality concert.
  • Live data like prices, flight delays, available hotel rooms, etc. While LLMs can integrate this data via APIs, I see the opportunity to capture some of this traffic for the time being.
  • Topics that require EEAT (experience, expertise, authoritativeness, trustworthiness).

LLMs cannot have first-hand experience. But users want it. LLMs are incentivized to reference sources that provide first-hand experience. That’s just one of the things to keep in mind, but what else?

We need to differentiate between two approaches: influencing foundational models versus influencing LLM answers through grounding. The first is largely out of reach for most creators, while the second offers real opportunities.

Influencing Foundational Models

Foundational models are trained on fixed datasets and can’t learn new information after training. For current models like GPT-4, it is too late – they’ve already been trained.

But this matters for the future: imagine a smart fridge stuck with o4-mini from 2025 that might – hypothetically – favor Coke over Pepsi. That bias could influence purchasing decisions for years!

Optimizing For RAG/Grounding

When LLMs can’t answer from their training data alone, they use retrieval augmented generation (RAG) – pulling in current information to help generate answers. AI Overviews and ChatGPT’s web search work this way.

As SEO professionals, we want three things:

  1. Our content gets selected as a source.
  2. Our content gets quoted most within those sources.
  3. Other selected sources support our desired outcome.

Concrete Steps To Succeed With GEO

Don’t worry, it doesn’t take rocket science to optimize your content and brand mentions for LLMs. Actually, plenty of traditional SEO methods still apply, with a few new SEO tactics you can incorporate into your workflow.

Step 1: Be Crawlable

Sounds simple but it is actually an important first step. If you aim for maximum visibility in LLMs, you need to allow them to crawl your website. There are many different LLM crawlers from OpenAI, Anthropic & Co.

Some of them behave so badly that they can trigger scraping and DDoS preventions. If you are automatically blocking aggressive bots, check in with your IT team and find a way to not block LLMs you care about.

If you use a CDN, like Fastly or Cloudflare, make sure LLM crawlers are not blocked by default settings.
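A quick way to sanity-check your setup is to test your robots.txt rules against known LLM user agents with Python’s standard library. The `GPTBot` token below is OpenAI’s documented crawler name, but verify current user-agent strings in each vendor’s crawler documentation, as they change:

```python
# Check which user agents a robots.txt allows, using only the stdlib.
from urllib.robotparser import RobotFileParser

# Example robots.txt: GPTBot is explicitly allowed everything,
# everyone else is blocked from /private/.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow:

User-agent: *
Disallow: /private/
"""

def is_allowed(agent: str, path: str, robots: str = ROBOTS_TXT) -> bool:
    parser = RobotFileParser()
    parser.parse(robots.splitlines())
    return parser.can_fetch(agent, path)
```

In practice you would fetch your live robots.txt and run the same check for each crawler you care about.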

Step 2: Continue Gaining Traditional Rankings

The most important GEO tactic is as simple as it sounds. Do traditional SEO. Rank well in Google (for Gemini and AI Overviews), Bing (for ChatGPT and Copilot), Brave (for Claude), and Baidu (for DeepSeek).

Step 3: Target the Query Fanout

The current generation of LLMs actually does a little more than simple RAG. They generate multiple queries. This is called query fanout.

For example, when I recently asked ChatGPT “What is the latest Google patent discussed by SEOs?”, it performed two web searches for “latest Google patent discussed by SEOs patent 2025 SEO forum” and “latest Google patent SEOs 2025 discussed”.

Advice: Check the typical query fanouts for your prompts and try to rank for those keywords as well.

Typical fanout-patterns I see in ChatGPT are appending the term “forums” when I ask what people are discussing and appending “interview” when I ask questions related to a person. The current year (2025) is often added as well.

Beware: fanout patterns differ between LLMs and can change over time. Patterns we see today may not be relevant anymore in 12 months.

Step 4: Keep Consistency Across Your Brand Mentions

This is something simple everyone should do – both as a person and an enterprise. Make sure you are consistently described online. On X, LinkedIn, your own website, Crunchbase, GitHub – always describe yourself the same way.

If your X and LinkedIn profiles say you are a “GEO consultant for small businesses”, don’t change it to “AIO expert” on GitHub and “LLMO Freelancer” in your press releases.

I have seen people achieve positive results within a few days on ChatGPT and Google AI Overviews by simply having a consistent self description across the web. This also applies to PR coverage – the more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.

Step 5: Avoid JavaScript

As an SEO, I always ask for as little JavaScript usage as possible. As a GEO, I demand it!

Most LLM crawlers cannot render JavaScript. If your main content is hidden behind JavaScript, you are out.
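One crude check for this: since most LLM crawlers fetch raw HTML without executing scripts, your key content should be present in the initial server response. The HTML below is a local example; in practice you would fetch your page server-side and run the same test:

```python
# Example of content that only exists after client-side rendering:
# the raw HTML an LLM crawler sees contains an empty mount point.
RAW_HTML = """<html><body>
<h1>Product specs</h1>
<div id="app"></div><!-- JS-rendered content mounts here, invisible to most LLM crawlers -->
</body></html>"""

def visible_without_js(html: str, phrase: str) -> bool:
    # True if the phrase appears in the raw (unrendered) HTML.
    return phrase.lower() in html.lower()
```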

Step 6: Embrace Social Media & UGC

Unsurprisingly, LLMs seem to rely on reddit and Wikipedia a lot. Both platforms offer user-generated-content on virtually every topic. And thanks to multiple layers of community-driven moderation, a lot of junk and spam is already filtered out.

While both can be gamed, the average reliability of their content is still far better than on the internet as a whole. Both are also regularly updated.

reddit also gives LLM labs insight into how people discuss topics online, the language they use to describe different concepts, and knowledge of obscure niche topics.

We can reasonably assume that moderated UGC found on platforms like reddit, Wikipedia, Quora, and Stackoverflow will stay relevant for LLMs.

I do not advocate spamming these platforms. However, if you can influence how you and competitors show up there, you might want to do so.

Step 7: Create For Machine-Readability & Quotability

Write content that LLMs understand and want to cite. No one has figured this one out perfectly yet, but here’s what seems to work:

  • Use declarative and factual language. Instead of writing “We are kinda sure this shoe is good for our customers”, write “96% of buyers have self-reported to be happy with this shoe.”
  • Add schema. It has been debated many times. Recently, Fabrice Canel (Principal Product Manager at Bing) confirmed that schema markup helps LLMs to understand your content.
  • If you want to be quoted in an already existing AI Overview, use content of similar length to what is already there. While you should not just copy the current AI Overview, high cosine similarity helps. And for the nerds: yes, given normalization, you can of course use the dot product instead of cosine similarity.
  • If you use technical terms in your content, explain them. Ideally in a simple sentence.
  • Add summaries of long text paragraphs, lists of reviews, tables, videos, and other types of difficult-to-cite content formats.
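The cosine-similarity point above can be made concrete with a small sketch. Real systems compare dense embeddings from a model; here, purely for illustration, we use bag-of-words vectors so the example runs with no ML dependency:

```python
# Cosine similarity between two texts using bag-of-words counts.
# (Illustrative only - production systems use embedding vectors.)
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)          # missing words count as 0
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Comparing your candidate paragraph against the text currently shown in the AI Overview gives a rough signal of how close you are.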

Step 8: Optimize your Content

The original GEO paper: “GEO: Generative Engine Optimization” (arXiv:2311.09735).

If we look at GEO: Generative Engine Optimization (arXiv:2311.09735), What Evidence Do Language Models Find Convincing? (arXiv:2402.11782v1), and similar scientific studies, the answer is clear: it depends!

To be cited for some topics in some LLMs, it helps to:

  • Add unique words.
  • Have pro/cons.
  • Gather user reviews.
  • Quote experts.
  • Include quantitative data and name your sources.
  • Use easy to understand language.
  • Write with positive sentiment.
  • Add product text with low perplexity (predictable and well-structured).
  • Include more lists (like this one!).

However, for other combinations of topics and LLMs, these measures can be counterproductive.

Until broadly accepted best practices evolve, the only advice I can give is do what is good for users and run experiments.

Step 9: Stick to the Facts

For over a decade, algorithms have extracted knowledge from text as triples like (Subject, Predicate, Object) — e.g., (Lady Liberty, Location, New York). A text that contradicts known facts may seem untrustworthy. A text that aligns with consensus but adds unique facts is ideal for LLMs and knowledge graphs.

So stick to the established facts. And add unique information.
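The triple-based consistency idea above can be sketched as follows. The knowledge base here is a toy dict standing in for a real knowledge graph:

```python
# Represent facts as (subject, predicate) -> object and classify new
# statements against a known-facts store. Toy illustration of the
# consistency check described above.
KNOWN_FACTS = {
    ("Statue of Liberty", "location"): "New York",
}

def check_triple(subject: str, predicate: str, obj: str) -> str:
    known = KNOWN_FACTS.get((subject, predicate))
    if known is None:
        return "novel"          # unique information: valuable if accurate
    return "consistent" if known == obj else "contradiction"
```

Content that is mostly “consistent” with occasional well-sourced “novel” triples matches the profile described above; “contradiction” triples are what make a text look untrustworthy.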

Step 10: Invest in Digital PR

Everything discussed here is not just true for your own website. It is also true for content on other websites. The best way to influence it? Digital PR!

The more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.

I have even seen cases where advertorials were used as sources!

Concrete GEO Workflows To Try

Before I joined Peec AI, I was a customer. Here is how I used the tool – and how I advise our customers to use it.

Learn Who Your Competitors Are

Just like with traditional SEO, using a good GEO tool will often reveal unexpected competitors. Regularly look at a list of automatically identified competitors. For those who surprise you, check in which prompts they are mentioned. Then check the sources that led to their inclusion. Are you represented properly in these sources? If not, act!

Is a competitor referenced because of their PeerSpot profile but you have zero reviews there? Ask customers for a review.

Was your competitor’s CEO interviewed by a YouTuber? Try to get on that show as well. Or publish your own videos targeting similar keywords.

Is your competitor regularly featured on top 10 lists where you never make it to the top 5? Offer the publisher who created the list an affiliate deal they cannot decline. With the next content update, you’re almost guaranteed to be the new number one.

Understand the Sources

When performing search grounding, LLMs rely on sources.

Typical LLM Sources: Reddit & Wikipedia

Look at the top sources for a large set of relevant prompts. Ignore your own website and your competitors for a second. You might find some of these:

  • A community like Reddit or X. Become part of the community and join the discussion. X is your best bet to influence results on Grok.
  • An influencer-driven website like YouTube or TikTok. Hire influencers to create videos. Make sure to instruct them to target the right keywords.
  • An affiliate publisher. Buy your way to the top with higher commissions.
  • A news and media publisher. Buy an advertorial and/or target them with your PR efforts. In certain cases, you might want to contact their commercial content department.

You can also check out this in-depth guide on how to deal with different kinds of source domains.

Target Query Fanout

Once you have observed which searches are triggered by query fanout for your most relevant prompts, create content to target them.

On your own website. With posts on Medium and LinkedIn. With press releases. Or simply by paying for article placements. If it ranks well in search engines, it has a chance to be cited by LLM-based answer engines.

Position Yourself for AI-Discoverability

Generative Engine Optimization is no longer optional – it’s the new frontline of organic growth. At Peec AI, we’re building the tools to track, influence, and win in this new ecosystem.

We currently see clients growing their LLM traffic by 100% every two to three months, sometimes with up to 20x the conversion rate of typical SEO traffic!

Whether you’re shaping AI answers, monitoring brand mentions, or pushing for source visibility, now is the time to act. The LLMs consumers will trust tomorrow are being trained today.


Image Credits

Featured Image: Image by Peec.ai. Used with permission.

Microsoft Adds Copilot Mode To Edge With Multi-Tab AI Analysis via @sejournal, @MattGSouthern

Microsoft launches Copilot Mode in Edge, introducing multi-tab AI analysis, voice navigation, and more features in development.

  • Copilot Mode brings AI tools to Microsoft’s Edge browser.
  • Available tools include multi-tab content analysis, voice navigation, and a unified search/chat interface.
  • Features in development include task execution, topic-based organization, and a persistent AI assistant.

OpenAI Study Mode Brings Guided Learning to ChatGPT via @sejournal, @MattGSouthern

OpenAI has launched a new feature in ChatGPT called Study Mode, offering a step-by-step learning experience designed to guide users through complex topics.

While aimed at students, Study Mode reflects a broader trend in how people use AI tools for information and adapt their search habits.

As more people start using conversational AI tools to seek information, Study Mode could represent the next step of AI-assisted discovery.

A Shift Toward Guided Learning

Activate Study Mode by selecting “Study and learn” from the tools in ChatGPT and asking a question.

Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.

Instead of giving direct answers, this feature promotes deeper engagement by asking questions, providing hints, and tailoring explanations to meet user needs.

Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.

Study Mode runs on custom instructions developed with input from teachers and learning experts. The feature incorporates research-based strategies, including:

  • Encouraging people to take part actively
  • Helping manage how much information people can handle
  • Supporting self-awareness and a desire to learn
  • Giving helpful and practical feedback

Robbie Torney, Senior Director of AI Programs at Common Sense Media, explains:

“Instead of doing the work for them, study mode encourages students to think critically about their learning. Features like these are a positive step toward effective AI use for learning. Even in the AI era, the best learning still happens when students are excited about and actively engaging with the lesson material.”

How It Works

Study Mode adjusts responses based on a user’s skill level and context from prior chats.

Key features include:

  • Interactive Prompts: Socratic questioning and self-reflection prompts promote critical thinking.
  • Scaffolded Responses: Content is broken into manageable segments to maintain clarity.
  • Knowledge Checks: Quizzes and open-ended questions help reinforce understanding.
  • Toggle Functionality: Users can turn Study Mode on or off as needed during a conversation.

Early testers describe it as an on-demand tutor, useful for unpacking dense material or revisiting difficult subjects.

Looking Ahead

Study Mode is now available to logged-in users across Free, Plus, Pro, and Team plans, with ChatGPT Edu support expected in the coming weeks.

OpenAI plans to integrate Study Mode behavior directly into its models after gathering feedback. Future updates may include visual aids, goal tracking, and more personalized support.


Featured Image: Roman Samborskyi/Shutterstock

Google AI Mode Update: File Uploads, Live Video Search, More via @sejournal, @MattGSouthern

Google is expanding AI Mode in Search with new tools that include PDF uploads, persistent planning documents, and real-time video assistance.

The updates begin rolling out today, with the AI Mode button now appearing on the Google homepage for desktop users.

Image Uploads Now On Desktop, With PDF Support Coming

Desktop users can now upload images directly into search queries, a feature previously available only on mobile.

Support for PDFs is coming in the weeks ahead, allowing you to ask questions about uploaded files and receive AI-generated responses based on both document content and relevant web results.

For example, a student could upload lecture slides and use AI Mode to get help understanding the material. Responses include suggested links for deeper exploration.

Image Credit: Google

Google plans to support additional file types and integrate with Google Drive “in the months ahead.”

Canvas: A Tool For Multi-Session Planning

A new AI Mode feature called Canvas can help you stay organized across multiple search sessions.

When you ask AI Mode for help with planning or creating something, you’ll see an option to “Create Canvas.” This opens a dynamic side panel that saves and updates as queries evolve.

Use cases include building study guides, travel itineraries, or task checklists.

Image Credit: Google

Canvas is launching for desktop users in the U.S. enrolled in the AI Mode Labs experiment.

Real-Time Assistance With Search Live

Search Live with video input also launches this week on mobile. This allows you to use AI Mode while pointing your phone camera at real-world objects or scenes.

The feature builds on Project Astra and is available through Google Lens. Start by tapping the ‘Live’ icon in the Google app, then engage in back-and-forth conversations with AI Mode using live video as visual context.

Image Credit: Google

Chrome Adds Contextual AI Answers

Lens is getting expanded desktop functionality within Chrome. Soon, you’ll see an “Ask Google about this page” option in the address bar.

When selected, it opens a panel where you can highlight parts of a page, like a diagram or snippet of text, and receive an AI Overview.

This update also allows follow-up questions via AI Mode from within the Lens experience, either through a button labeled “Dive deeper” or by selecting AI Mode directly.

Looking Ahead

These updates reflect Google’s vision of search as a multi-modal, interactive experience rather than a one-off text query.

While most of these tools are limited to U.S.-based Labs users for now, they point to a future where AI Mode becomes central to how searchers explore, learn, and plan.

Rollout timelines vary by feature, so keep a close eye on how these capabilities add to the search experience and consider how to adapt your content strategies accordingly.

AI Halftime Report H1 2025 via @sejournal, @Kevin_Indig

It’s halftime.

The first half of 2025 brought major shakeups in SEO, AI, and organic growth – and it’s time for a reality check.

Traffic is down, revenue is … complicated, and large language models (LLMs) are no longer fringe.

Publishers are panicking, and SEO teams are reevaluating how they measure success.

And it’s not just the tech shifting; it’s the economy around it. The DOJ’s antitrust case against Google could reshape the playing field before Q4 even begins.

In today’s Memo, I’m unpacking the state of organic growth at the midpoint of 2025:

  • How AI Overviews and AI Mode are eating clicks, and what that means for TOFU, MOFU, and BOFU content.
  • Why publishers are suing Google and preparing for zero traffic.
  • What’s really happening with tech layoffs and job transformation.
  • How we measure LLM visibility today, and where that’s headed.
  • What to expect next in organic growth, search, and monetization.

Plus, premium subscribers will receive my scorecard that will help evaluate whether the team is adapting effectively to the AI landscape.

Let’s take stock of where we are, and what comes next.

Image Credit: Kevin Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

AI Is Cutting Flesh

AI Overviews (AIOs) looked “interesting” to marketers in 2024 and “devastating” in 2025.

From my own observations, the traffic loss ranges from 15% to 45% declines.

Bottom-line metrics across the industry range from “traffic down, revenue up” to “traffic down, revenue down.”

In February, I wrote in The Impact of AI Overviews that mostly top-of-the-funnel (TOFU) queries were impacted:

Every study I looked at confirmed that the majority of AI Overviews show up for informational-intent keywords like questions.

Shortly after, in March 2025, Google nullified that theory by dialing up the number of AIOs way beyond the top of the funnel.

Ever since, U.S. companies have experienced a strong (negative) impact, and I’m hearing the phrase “SEO is dead” more often from leaders.

Between 13 and 19% of keywords show AI Overviews, according to Semrush and seoClarity, but I assume the actual number is much higher because searchers use much longer prompts. (Prompts that most tools don’t track.) [1, 2]

I expect organic traffic to keep dropping as the year moves forward.

In the AIO Usability study I published in May, only a small fraction of clicks still came through to websites.

It wouldn’t surprise me if 70% of the organic traffic that sites earned in 2024 is gone by 2026, leaving just 30% behind.

Scary? Yes. But traffic is just a means to an end.

The same study also shows that 80% of searchers still lean on organic results to complete their search journeys.

So, I still feel optimistic about the value of organic search in the long term.

There are two questions top of mind for me at the moment:

  1. If AIOs really only impact the top of the funnel, then why are revenue numbers down?
  2. At which point is the decline going to level off?

In my view, either:

  • AIOs are really mostly TOFU queries. In that case, TOFU content always had more impact on the bottom line than we were able to prove, and we can expect the traffic decline to level off.
  • Or AIOs impact far more than TOFU, hitting MOFU and BOFU queries as well (which is what I think), and we’re in for a long decline of traffic. If true, I expect revenue that’s attributed to organic search to decline at a lower rate, or not at all for certain companies, since purchase intent doesn’t just go away. Therefore, revenue results would relate more to our ability to influence purchase intent.

With one exception.

Publishers Are Struggling

The whole internet is trying to figure out whether the value of showing up in LLMs (ChatGPT, Gemini, AI Mode, AI Overviews, etc.) is worth more than the loss in traffic.

But without a doubt, publishers and affiliates are the group that gets hit the hardest due to their reliance on ad impressions and link clicks.

No one needs traffic as much as publishers.

Image Credit: Kevin Indig

The consequence? Leading publishers and news sites are conducting layoffs and assuming that Google traffic will go to zero at some point.

At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [3]

Publishers in the EU have banded together and filed an antitrust complaint against Google for its launch and the impact of AI Overviews with the European Commission. [4]

Publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page.

I caught up with Chris Dicker, who leads one of the co-signatories in the DMA complaint against Google, the Independent Publishers Alliance:

Kevin: What’s your role in the lawsuit against Google?

Chris: The Independent Publishers Alliance is one of the co-signatories on the complaint. I am helping lead this from the Alliance side.

Kevin: What would be an outcome, i.e., an action by Google, that would be satisfactory?

Chris: We are only asking for what we deem to be fair, which is for a sustainable ecosystem.

Whether that is payment for use of content or for Google to start to substantially reduce the zero-click searches, which have gotten significantly worse since the launch of AIOs.

Kevin: Can LLMs (ChatGPT & Co) provide some remedy against the traffic drop from Google?

Chris: Not for publishers at the moment, no. They don’t have the scale or the desire to send traffic anywhere else. The current CTRs we are seeing, and that are being reported from publishers, are tiny.

OpenAI’s scrape-to-human-visit ratio is 179:1, compared with Perplexity’s 369:1 and Anthropic’s 8,692:1 (stats from TollBit’s State of the Bots Q1 2025 report).

For perspective, Bing’s is 11:1. I know there are reports that the traffic from LLMs is “better quality,” but not on the metrics that would help publishers or content creators.

It is very much the opposite: Bounce rate is higher; pages per session and per visit are also both considerably down for AI search traffic compared to organic search.

Kevin: What are the consequences of Google’s AI Overviews on independent publishers so far? Can you quantify the impact?

Chris: It’s significant and something that has been extrapolated even since April this year. There are sites that are seeing traffic drops of up to 70% since April.

Publishers have no choice but to cut costs and, unfortunately, that also means job losses.

In the last year, we have had numerous members who, unfortunately, haven’t been able to weather the storm and have ceased publishing altogether, and these are respected sites that were well established over the last 10 years, if not longer.

Kevin: Do you know of publishers that are able to dampen the negative impact from AI Overviews in some ways? If so, what are they doing?

Chris: Nearly every publisher I speak to is actively diversifying away from Google.

It feels inevitable that we’ll see a mass blocking of Googlebot at some point, something that would have been inconceivable just 12 months ago.

If your business model still relies on search traffic, whether from traditional search or AI-powered results, it’s time to rethink – and fast.

More publishers are now focusing on direct audience relationships through newsletters, forums, podcasts, and similar channels.

Platforms like Substack offer an interesting model, though I’m not convinced their approach fully suits publishers just yet.

Beyond monetizing websites and content, many publishers are also developing in-house creative, social, or AI agencies. After all, these businesses have spent years engaging and inspiring audiences.

Helping advertisers tap into that expertise feels like a natural next step.

Besides the fact that the open web and critical societal institutions are fading away, from a purely practical standpoint, there are also fewer publishers to amplify content for other businesses.

And yet, I believe we haven’t seen the full extent to which Google Search will change from sending traffic to answering questions directly.

AI Mode Is Sitting On The Bench, But It Seems Ready

At a recent event I attended, a Google representative mentioned that Sundar Pichai sees AI Mode as the default search experience in the next two to three years, with searchers being able to switch to classic search results if they want to – assuming users like AI Mode.

And that seems to be the case: According to a (small) survey done by Oppenheimer & Co., 82% of searchers find AI Mode more helpful than Google Search, while 75% find it more helpful than ChatGPT (I wonder why). [5]

Nothing shows fear more than copying a challenger’s user interface and abandoning the cash machine that worked for 20 years.

AI Mode is basically ChatGPT with a Google logo. Google follows the Meta playbook, which fenced in Snapchat’s and TikTok’s growth by copying their core features.

And most alarmingly for search marketers, AI Mode eats clicks for breakfast.

Research by iPullRank found that “4.5% of AI Mode Sessions result in a click.” [6]

A click. As in one!

But Google cannot afford to lose the investor narrative.

I personally believe that AI Mode won’t launch as the default before Google has figured out the monetization model. And I predict that searchers will see way fewer ads but much better ones, displayed at a better time.

Due to the conversational interface and longer prompts, Google should not only have more context about what users really want, but should also be able to better estimate the best time to show an ad during the chat conversation.

As a result, I expect CPCs will skyrocket, but CPAs will become more efficient.

AEO/GEO/LLMO: Too Many Buzzwords But Not Enough Differentiation

Between AI Mode, AI Overviews, and ChatGPT stands this important question:

How much can we influence answers, and how different is that job from what we’ve done in SEO over the last two decades?

It’s simple. The tactics are mostly the same, but the ecosystem changes:

1. Longer prompts: The average prompt is 23 words long compared to 4.2 for classic Google Search. [7]

The rich detail users provide about their intent runs into a content gap: most content on the other side of the marketplace is still tuned for short-head keywords.

As a result, I see hyper-specialized content that’s fine-tuned for specific personas (see How to Optimize for Topics) in our present and future.

2. SEO winners are not AI winners: If SEO was enough and there was nothing else we needed to do “for AI,” then why aren’t the sites that are most visible in Search the same ones that are visible in LLMs?

In Is GEO/AEO the same as SEO?, I found that the lists differ greatly in most verticals. Only highly consolidated spaces with a few winners, like CRM software, have identical winners across both modalities.

3. New intent: Generative: Semrush and Profound came to the conclusion that ~30-70% of intent on LLMs is “generative,” meaning users want to accomplish tasks right then and there. [8]

What’s often missed is that while performing an action, e.g., generating an image, the intent can quickly flip to informational or transactional, e.g., learning more about the topic you want to generate the image about, or buying an icon license.

Since experiences are conversational and more continuous, we need to update our model of intent. It doesn’t happen in isolation (think: one session), but several intents can occur during the same session (informational → generative → transactional → informational → etc.).

My opinion: It’s too soon to coin a term.

Will we switch from Answer Engine Optimization to Agentic Engine Optimization when we enter the Agentic AI age? AI has evolved at a rocket pace over the last 2.5 years, and I don’t expect it will slow down soon.

LLMs Are No Longer Fringe

In 2025, LLMs reached the mainstream. We’re not talking about a fringe platform anymore: ChatGPT supposedly receives 2.5 billion prompts a day.

With Google seeing over 5 trillion searches per year, you could say ChatGPT has reached about 17.8% of Google’s volume.

Keep in mind that a lot of prompts are not searches on ChatGPT, and then the comparison becomes weaker (until Google rolls AI Mode out broadly). [9]
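The per-day versus per-year comparison behind that figure can be sketched as a quick back-of-envelope calculation; the exact share depends on the day count and rounding used, which is likely why a straight conversion lands near, but not exactly on, 17.8%:

```python
# Back-of-envelope: compare ChatGPT prompt volume to Google search volume.
# Both inputs are the figures reported above; the resulting share is
# approximate and sensitive to rounding.
GOOGLE_SEARCHES_PER_YEAR = 5_000_000_000_000  # "over 5 trillion searches per year"
CHATGPT_PROMPTS_PER_DAY = 2_500_000_000       # "2.5 billion prompts a day"

google_searches_per_day = GOOGLE_SEARCHES_PER_YEAR / 365
share = CHATGPT_PROMPTS_PER_DAY / google_searches_per_day

print(f"Google searches/day: ~{google_searches_per_day / 1e9:.1f}B")  # ~13.7B
print(f"ChatGPT share of Google's volume: ~{share:.0%}")              # ~18%
```

Treating “over 5 trillion” as a slightly higher figure would close the small gap with the 17.8% cited above.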

Image Credit: Kevin Indig

It’s important to note that LLMs rely on different citation sources to varying degrees. [10]

Profound saw in 30 million citations that ChatGPT, AIOs, and Perplexity rely on different citation sources:

  • ChatGPT cites Wikipedia almost 50% of the time, followed by citing Reddit at 11.3% and Forbes at 6.8%.
  • AI Overviews cite Reddit 21% of the time, followed by 18.8% for YouTube, 14.3% for Quora, and 13% for LinkedIn.
  • Perplexity cites Reddit almost 50% of the time, YouTube at 13.9% of the time, and Gartner at 7%.

We know that investing time and resources into non-Google platforms is critical to building trust and visibility across all platforms.

But now we know that the mix of platform investment depends on where you want to build visibility.

Reddit seems to provide universal impact, which makes sense given their licensing deals with OpenAI and Google, but YouTube, Quora, and review platforms don’t show the same potential for gaining citations on all LLMs.

Image Credit: Kevin Indig

Time also matters. AirOps found that 95% of pages cited in ChatGPT are less than 10 months old. [11]

A big reason for this is the training data cutoff for LLMs. New models are still trained on large corpora of data (remember the Google Dance?).

Anything newer than the time of training needs to come from the web. As a result, keeping content fresh and continuously iterating seems like a path to AI visibility to me. Even adding the current year to the URL (and meta-title) seems like a good idea. [12]

A study by Apple, which I covered in the Growth Intelligence Brief, raises a question we might all have at the tip of our tongue: Are LLMs overhyped? [13]

The answer: It depends … on the complexity of the task:

  • Simple problems: Models often find correct solutions early but wastefully continue exploring incorrect ones (“overthinking”).
  • Moderate complexity: Models explore many incorrect solutions before finding correct ones.
  • High complexity: Models fail to generate any correct solutions.

LLMs are smart but still struggle with complex tasks. Good news for tech workers … right?

And here’s another thing: With the increase of LLM use and adoption, how will we measure success for our optimization efforts?

I ran a survey of Growth Memo readers in June, and it’s clear our industry hasn’t really nailed how we measure the LLM visibility of our brands.

Out of those who responded, about 30% are using traditional SEO tools to measure LLM visibility, 26% are using Google Analytics 4 traffic signals, and a whopping 21% aren’t measuring yet and need help determining how.

Image Credit: Kevin Indig

And the biggest surprise is this: Overwhelmingly, we don’t trust our LLM visibility measurements.

Close to 80% of survey respondents don’t believe the way they are measuring LLM visibility is accurate.

Image Credit: Kevin Indig

A big topic in the whole LLM conversation is, of course, whether AI replaces white collar workers or not.

I’m including this discussion in my halftime report because I’m seeing a growing number of in-house experts who are afraid to be replaced.

Amazon’s CEO, Andy Jassy, wrote a public memo, saying the company would need fewer people because of AI (bolded text is mine):

“As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.” [14]

Amazon cut more than 27,000 jobs between 2022 and 2023, yet its headcount at the end of 2024 was higher than at any other point in its history, exceeded only slightly by the end of 2021. [15]

Other tech companies followed suit:

  • Salesforce’s CEO, Marc Benioff, says that 30-50% of the work at Salesforce is done by AI. [16] Salesforce eliminated ~1,000 roles this year.
  • Klarna’s CEO first announced that AI is doing the work of 700 customer service agents and fired about 2,000 employees, but then backtracked and rehired humans. [17]
  • Microsoft cut 15,000 jobs in 2025. CEO Satya Nadella said AI writes ~30% of new code in some projects.
  • Meta laid off 3,600 employees in 2025, with Mark Zuckerberg saying AI could be ready to be a mid-level engineer this year.

But is AI really replacing white collar workers, or is it used for good PR?

The layoff tracker layoffs.fyi shows that the number of companies conducting layoffs, and of employees laid off, has not grown since the pandemic.

Image Credit: Kevin Indig

A jobs report by CompTIA shows that while tech employment is slightly down between June 2023 and June 2025…[18]

Image Credit: Kevin Indig

…the number of job openings with AI skills far outpaces the number of listings for all roles.

Image Credit: Kevin Indig

In other words, “AI layoffs” seem more like a PR play or a justification for job cuts.

But upskilling with AI is critical.

Google Lawsuit Rushes Toward A Final Decision On Labor Day

The landmark lawsuit against Google over its online search monopoly concludes by Labor Day (September 1). The DOJ asks for:

  • A mandatory divestiture of Chrome within a specified timeframe.
  • A five-year prohibition on Google owning any browser.
  • Termination of exclusive default agreements.
  • Extensive data sharing requirements.
  • The right to seek Android divestiture if behavioral remedies prove insufficient.

Google, on the other hand, agrees to end exclusive agreements, so we know Google and Apple will divorce, but opposes a Chrome divestiture and data sharing mandates.

The remedy ruling could have significant implications on the AI race, and where marketers should place their money.

For example, a Chrome divestiture could significantly set Google back, as OpenAI and Perplexity launch their own browsers. It would also mean a material loss in user behavior data and agentic AI capabilities.

Losing the exclusive agreement with Apple could also mean that more users set browsers other than Chrome as their default, provided those browsers offer a strong benefit.

However, I personally think the most realistic outcome is a forced end to exclusive agreements and would be shocked to see a Chrome divestiture.

For context:

  • The Department of Justice has achieved two landmark antitrust victories against Google in 2024-2025, with federal judges ruling the tech giant operates illegal monopolies in both online search and digital advertising technology.
  • Both cases have now advanced to remedy phases where courts will determine whether to break up parts of Google’s business, representing the most aggressive government intervention in Big Tech since the Microsoft case 25 years ago.

Outlook For H2

The second half of 2025 will likely be defined by adaptation rather than resistance.

Companies that succeed will be those that foster trust beyond Google, build direct audience relationships, and upskill teams in AI.

Here’s what I expect for the second half of the year:

Accelerating Traffic Decline

  • Organic traffic losses will likely intensify as Google expands AI Overviews.
  • Publishers should prepare for further 20-30% traffic declines.
  • The “new normal” of 30% of historical traffic by 2026 could arrive sooner than expected.

AI Mode Launch

  • Google will likely roll out AI Mode more broadly, but cautiously.
  • Expect a heavy focus on monetization testing before wide release.
  • Watch for new ad formats optimized for conversational search.

Publisher Adaptation

  • More publishers will actively block Googlebot.
  • Increased focus on direct revenue streams (newsletters, memberships).
  • Potential consolidation as smaller publishers struggle to survive.

Measurement Evolution

  • New tools specifically for measuring LLM visibility will emerge.
  • Industry will start standardizing on key metrics for AI performance.
  • Greater emphasis on revenue vs. traffic as success metrics.

Market Restructuring

  • DOJ ruling could reshape the search landscape.
  • Expect new search entrants to gain traction.
  • Browser wars may reignite with AI-native options.

Featured Image: Paulo Bobita/Search Engine Journal