OpenAI has launched a new feature in ChatGPT called Study Mode, offering a step-by-step learning experience designed to guide users through complex topics.
While aimed at students, Study Mode reflects a broader trend in how people use AI tools for information and adapt their search habits.
As more people start using conversational AI tools to seek information, Study Mode could represent the next step of AI-assisted discovery.
A Shift Toward Guided Learning
Activate Study Mode by selecting “Study and learn” from the tools menu in ChatGPT, then asking a question.
Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.
Instead of giving direct answers, this feature promotes deeper engagement by asking questions, providing hints, and tailoring explanations to meet user needs.
Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.
Study Mode runs on custom instructions developed with input from teachers and learning experts. The feature incorporates research-based strategies, including:
Encouraging active participation
Managing cognitive load (how much information a learner takes in at once)
Supporting self-awareness and motivation to learn
Giving helpful, actionable feedback
Robbie Torney, Senior Director of AI Programs at Common Sense Media, explains:
“Instead of doing the work for them, study mode encourages students to think critically about their learning. Features like these are a positive step toward effective AI use for learning. Even in the AI era, the best learning still happens when students are excited about and actively engaging with the lesson material.”
How It Works
Study Mode adjusts responses based on a user’s skill level and context from prior chats.
Key features include:
Interactive Prompts: Socratic questioning and self-reflection prompts promote critical thinking.
Scaffolded Responses: Content is broken into manageable segments to maintain clarity.
Knowledge Checks: Quizzes and open-ended questions help reinforce understanding.
Toggle Functionality: Users can turn Study Mode on or off as needed during a conversation.
Early testers describe it as an on-demand tutor, useful for unpacking dense material or revisiting difficult subjects.
Looking Ahead
Study Mode is now available to logged-in users across Free, Plus, Pro, and Team plans, with ChatGPT Edu support expected in the coming weeks.
OpenAI plans to integrate Study Mode behavior directly into its models after gathering feedback. Future updates may include visual aids, goal tracking, and more personalized support.
Google is expanding AI Mode in Search with new tools that include PDF uploads, persistent planning documents, and real-time video assistance.
The updates begin rolling out today, with the AI Mode button now appearing on the Google homepage for desktop users.
PDF Uploads Now Supported On Desktop
Desktop users can now upload images directly into search queries, a feature previously available only on mobile.
Support for PDFs is coming in the weeks ahead, allowing you to ask questions about uploaded files and receive AI-generated responses based on both document content and relevant web results.
For example, a student could upload lecture slides and use AI Mode to get help understanding the material. Responses include suggested links for deeper exploration.
Image Credit: Google
Google plans to support additional file types and integrate with Google Drive “in the months ahead.”
Canvas: A Tool For Multi-Session Planning
A new AI Mode feature called Canvas can help you stay organized across multiple search sessions.
When you ask AI Mode for help with planning or creating something, you’ll see an option to “Create Canvas.” This opens a dynamic side panel that saves and updates as queries evolve.
Use cases include building study guides, travel itineraries, or task checklists.
Image Credit: Google
Canvas is launching for desktop users in the U.S. enrolled in the AI Mode Labs experiment.
Real-Time Assistance With Search Live
Search Live with video input also launches this week on mobile, allowing you to use AI Mode while pointing your phone camera at real-world objects or scenes.
The feature builds on Project Astra and is available through Google Lens. Start by tapping the ‘Live’ icon in the Google app, then engage in back-and-forth conversations with AI Mode using live video as visual context.
Image Credit: Google
Chrome Adds Contextual AI Answers
Lens is getting expanded desktop functionality within Chrome. Soon, you’ll see an “Ask Google about this page” option in the address bar.
When selected, it opens a panel where you can highlight parts of a page, like a diagram or snippet of text, and receive an AI Overview.
This update also allows follow-up questions via AI Mode from within the Lens experience, either through a button labeled “Dive deeper” or by selecting AI Mode directly.
Looking Ahead
These updates reflect Google’s vision of search as a multi-modal, interactive experience rather than a one-off text query.
While most of these tools are limited to U.S.-based Labs users for now, they point to a future where AI Mode becomes central to how searchers explore, learn, and plan.
Rollout timelines vary by feature, so keep a close eye on how these capabilities add to the search experience and consider how to adapt your content strategies accordingly.
The first half of 2025 brought major shakeups in SEO, AI, and organic growth – and it’s time for a reality check.
Traffic is down, revenue is … complicated, and large language models (LLMs) are no longer fringe.
Publishers are panicking, and SEO teams are reevaluating how they measure success.
And it’s not just the tech shifting; it’s the economy around it. The DOJ’s antitrust case against Google could reshape the playing field before Q4 even begins.
In today’s Memo, I’m unpacking the state of organic growth at the midpoint of 2025:
How AI Overviews and AI Mode are eating clicks, and what that means for TOFU, MOFU, and BOFU content.
Why publishers are suing Google and preparing for zero traffic.
What’s really happening with tech layoffs and job transformation.
How we measure LLM visibility today, and where that’s headed.
What to expect next in organic growth, search, and monetization.
Plus, premium subscribers will receive my scorecard for evaluating whether your team is adapting effectively to the AI landscape.
Let’s take stock of where we are, and what comes next.
Image Credit: Kevin Indig
Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!
AI Is Cutting Flesh
AI Overviews (AIOs) looked “interesting” to marketers in 2024 and “devastating” in 2025.
The traffic loss impact ranges from 15% to 45% declines, from my own observations.
Bottom-line metrics across the industry range from “traffic down, revenue up” to “traffic down, revenue down.”
In February, I wrote in The Impact of AI Overviews that mostly the top of the funnel (TOFU) queries were impacted:
Every study I looked at confirmed that the majority of AI Overviews show up for informational-intent keywords like questions.
Shortly after, in March 2025, Google nullified that theory by dialing up the number of AIOs way beyond the top of the funnel.
Ever since, U.S. companies have experienced a strong (negative) impact, and I’m hearing the phrase “SEO is dead” more often from leaders.
Between 13% and 19% of keywords show AI Overviews, according to Semrush and seoClarity, but I assume the actual number is much higher because searchers use much longer prompts (prompts that most tools don’t track). [1, 2]
I expect organic traffic to keep dropping as the year moves forward.
In the AIO Usability study I published in May, only a small fraction of clicks still came through to websites.
It wouldn’t surprise me if, by 2026, 70% of the organic traffic that sites earned in 2024 is gone, leaving just 30% behind.
Scary? Yes. But traffic is just a means.
The same study also shows that 80% of searchers still lean on organic results to complete their search journeys.
So, I still feel optimistic about the value of organic search in the long term.
There are two questions top of mind for me at the moment:
If AIOs really only impact the top of the funnel, then why are revenue numbers down?
At which point is the decline going to level off?
In my view, either:
AIOs are really mostly TOFU queries. In that case, TOFU content always had more impact on the bottom line than we were able to prove, and we can expect the traffic decline to level off.
Or AIOs impact far more than TOFU, hitting MOFU and BOFU queries as well (which is what I think), and we’re in for a long decline in traffic. If true, I expect revenue attributed to organic search to decline at a lower rate, or not at all for certain companies, since purchase intent doesn’t just go away. Revenue results would then depend more on our ability to influence purchase intent.
With one exception.
Publishers Are Struggling
The whole internet is trying to figure out whether the value of showing up in LLMs (ChatGPT, Gemini, AI Mode, AI Overviews, etc.) is worth more than the loss in traffic.
But without a doubt, publishers and affiliates are the group that gets hit the hardest due to their reliance on ad impressions and link clicks.
No one needs traffic as much as publishers.
Image Credit: Kevin Indig
The consequence? Leading publishers and news sites are conducting layoffs and assuming that Google traffic will go to zero at some point.
At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [3]
Publishers in the EU have banded together and filed an antitrust complaint against Google for its launch and the impact of AI Overviews with the European Commission. [4]
Publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page.
I caught up with Chris Dicker, who leads one of the co-signatories in the DMA complaint against Google, the Independent Publishers Alliance:
Kevin: What’s your role in the lawsuit against Google?
Chris: The Independent Publishers Alliance is one of the co-signatories on the complaint. I am helping lead this from the Alliance side.
Kevin: What would be an outcome, i.e., an action by Google, that would be satisfactory?
Chris: We are only asking for what we deem to be fair, which is for a sustainable ecosystem.
Whether that is payment for use of content or for Google to start to substantially reduce the zero-click searches, which have gotten significantly worse since the launch of AIOs.
Kevin: Can LLMs (ChatGPT & Co) provide some remedy against the traffic drop from Google?
Chris: Not for publishers at the moment, no. They don’t have the scale or the desire to send traffic anywhere else. The current CTRs we are seeing, and that are being reported from publishers, are tiny.
OpenAI’s scrape to human visit is 179:1, compared with Perplexity’s 369:1 and Anthropic’s 8692:1 (stats from Tollbit’s State of bots Q1 2025).
For perspective, Bing’s is 11:1. I know there are reports that the traffic from LLMs is “better quality,” but not on the metrics that would help publishers or content creators.
It is very much the opposite: Bounce rate is higher; pages per session and per visit are also both considerably down for AI search traffic compared to organic search.
Kevin: What are the consequences of Google’s AI Overviews on independent publishers so far? Can you quantify the impact?
Chris: It’s significant, and it has only escalated since April this year. There are sites seeing traffic drops of up to 70% since April.
Publishers have no choice but to cut costs and, unfortunately, that also means job losses.
In the last year, we have had numerous members who, unfortunately, haven’t been able to weather the storm and have ceased publishing altogether, and these are respected sites that were well established over the last 10 years, if not longer.
Kevin: Do you know of publishers that are able to dampen the negative impact from AI Overviews in some ways? If so, what are they doing?
Chris: Nearly every publisher I speak to is actively diversifying away from Google.
It feels inevitable that we’ll see a mass blocking of Googlebot at some point, something that would have been inconceivable just 12 months ago.
If your business model still relies on search traffic, whether from traditional search or AI-powered results, it’s time to rethink – and fast.
More publishers are now focusing on direct audience relationships through newsletters, forums, podcasts, and similar channels.
Platforms like Substack offer an interesting model, though I’m not convinced their approach fully suits publishers just yet.
Beyond monetizing websites and content, many publishers are also developing in-house creative, social, or AI agencies. After all, these businesses have spent years engaging and inspiring audiences.
Helping advertisers tap into that expertise feels like a natural next step.
Besides the fact that the open web and critical societal institutions are fading away, from a purely practical standpoint there are also fewer publishers left to amplify content for other businesses.
And yet, I believe we haven’t seen the full extent to which Google Search will change from sending traffic to answering questions directly.
AI Mode Is Sitting On The Bench, But It Seems Ready
At a recent event I attended, a Google representative mentioned that Sundar Pichai sees AI Mode as the default search experience in the next two to three years, with searchers being able to switch to classic search results if they want to – assuming users like AI Mode. And that seems to be the case: According to a (small) survey done by Oppenheimer & Co., 82% of searchers find AI Mode more helpful than Google Search, while 75% find it more helpful than ChatGPT (I wonder why). [5]
Nothing shows fear more than copying a challenger’s user interface and abandoning the cash machine that worked for 20 years.
AI Mode is basically ChatGPT with a Google logo. Google follows the Meta playbook, which fenced in Snapchat’s and TikTok’s growth by copying their core features.
And most alarmingly for search marketers, AI Mode eats clicks for breakfast.
Research by iPullrank found that “4.5% of AI Mode Sessions result in a click.”[6]
A click. As in one!
But Google cannot afford to lose the investor narrative.
I personally believe that AI Mode won’t launch broadly before Google has figured out the monetization model. And I predict that searchers will see far fewer ads, but better ones, displayed at a better time.
Due to the conversational interface and longer prompts, Google should not only have more context about what users really want, but also be better able to estimate the best moment to show an ad during the chat conversation.
As a result, I expect CPCs will skyrocket, but CPAs will become more efficient.
AEO/GEO/LLMO: Too Many Buzzwords But Not Enough Differentiation
Between AI Mode, AI Overviews, and ChatGPT stands this important question:
How much can we influence answers, and how different is that job from what we’ve done in SEO over the last two decades?
1. Longer prompts: The average prompt is 23 words long compared to 4.2 for classic Google Search. [7]
The rich detail users provide about their intent meets a content gap: most content on the other side of the marketplace is tuned for short-head keywords.
As a result, I see hyper-specialized content that’s fine-tuned for specific personas (see How to Optimize for Topics) in our present and future.
2. SEO winners are not AI winners: If SEO was enough and there was nothing else we needed to do “for AI,” then why aren’t the sites that are most visible in Search the same ones that are visible in LLMs?
In Is GEO/AEO the same as SEO?, I found that the lists differ greatly in most verticals. Only highly consolidated spaces with a few winners, like CRM software, have identical winners across both modalities.
3. New intent: Generative: Semrush and Profound came to the conclusion that ~30-70% of intent on LLMs is “generative,” meaning users want to accomplish tasks right then and there. [8]
What’s often missed is that while performing an action, e.g., generating an image, the intent can quickly flip to informational or transactional, e.g., learning more about the topic of the image or buying an icon license.
Since experiences are conversational and more continuous, we need to update our model of intent. It doesn’t happen in isolation (think: one session), but several intents can occur during the same session (informational → generative → transactional → informational → etc.).
My opinion: It’s too soon to coin a term.
Will we switch from Answer Engine Optimization to Agentic Engine Optimization when we enter the Agentic AI age? AI has evolved at a rocket pace over the last 2.5 years, and I don’t expect it will slow down soon.
LLMs Are No Longer Fringe
In 2025, LLMs reached the mainstream. We’re not talking about a fringe platform anymore: ChatGPT supposedly receives 2.5 billion prompts a day. With Google seeing over 5 trillion searches per year, you could say ChatGPT has reached about 17.8% of Google’s volume.
Keep in mind that a lot of prompts are not searches on ChatGPT, and then the comparison becomes weaker (until Google rolls AI Mode out broadly). [9]
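As a back-of-the-envelope check, the comparison works out roughly like this. The inputs are the round figures from the reports above, so the result lands near, rather than exactly on, the 17.8% figure (which implies a Google total slightly above 5 trillion):

```python
chatgpt_prompts_per_day = 2.5e9      # reported daily ChatGPT prompts
google_searches_per_year = 5e12      # Google: "over 5 trillion" searches per year

# Annualize ChatGPT's daily volume, then compare with Google's yearly volume.
chatgpt_prompts_per_year = chatgpt_prompts_per_day * 365
share_of_google = chatgpt_prompts_per_year / google_searches_per_year

print(f"ChatGPT: ~{chatgpt_prompts_per_year / 1e9:.1f}B prompts per year")
print(f"That is ~{share_of_google * 100:.1f}% of Google's annual search volume")
```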
Image Credit: Kevin Indig
It’s important to note that LLMs rely on different citation sources to varying degrees. [10]
Profound saw in 30 million citations that ChatGPT, AIOs, and Perplexity rely on different citation sources:
ChatGPT cites Wikipedia almost 50% of the time, followed by citing Reddit at 11.3% and Forbes at 6.8%.
AI Overviews cite Reddit 21% of the time, followed by 18.8% for YouTube, 14.3% for Quora, and 13% for LinkedIn.
Perplexity cites Reddit almost 50% of the time, YouTube at 13.9% of the time, and Gartner at 7%.
We know that investing time and resources into non-Google platforms is critical to building trust and visibility across all platforms.
But now we know that the mix of platform investment depends on where you want to build visibility.
Reddit seems to provide universal impact, which makes sense given their licensing deals with OpenAI and Google, but YouTube, Quora, and review platforms don’t show the same potential for gaining citations on all LLMs.
Image Credit: Kevin Indig
Time also matters. AirOps found that 95% of pages cited in ChatGPT are less than 10 months old. [11]
A big reason for this is the training data cutoff for LLMs. New models are still trained on large corpora of data (remember the Google Dance?).
Anything newer than the time of training needs to come from the web. As a result, keeping content fresh and continuously iterating seems like a path to AI visibility to me. Even adding the current year to the URL (and meta-title) seems like a good idea. [12]
A study by Apple, which I covered in the Growth Intelligence Brief, raises a question we might all have at the tip of our tongue: Are LLMs overhyped? [13]
The answer: It depends … on the complexity of the task:
Simple problems: Models often find correct solutions early but wastefully continue exploring incorrect ones (“overthinking”).
Moderate complexity: Models explore many incorrect solutions before finding correct ones.
High complexity: Models fail to generate any correct solutions.
LLMs are smart but still struggle with complex tasks. Good news for tech workers … right?
And here’s another thing: With the increase of LLM use and adoption, how will we measure success for our optimization efforts?
I ran a survey of Growth Memo in June, and it’s clear our industry hasn’t really nailed how we measure the LLM visibility of our brands.
Out of those who responded, about 30% are using traditional SEO tools to measure LLM visibility, 26% are using Google Analytics 4 traffic signals, and a whopping 21% aren’t measuring yet and need help determining how.
Image Credit: Kevin Indig
And the biggest surprise is this: Overwhelmingly, we don’t trust our LLM visibility measurements.
Close to 80% of survey respondents don’t believe the way they are measuring LLM visibility is accurate.
Image Credit: Kevin Indig
A big topic in the whole LLM conversation is, of course, whether AI replaces white collar workers or not.
I’m including this discussion in my halftime report because I’m seeing a growing number of in-house experts who are afraid to be replaced.
Amazon’s CEO, Andy Jassy, wrote a public memo, saying the company would need fewer people because of AI (bolded text is mine):
“As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.” [14]
Amazon cut more than 27,000 jobs between 2022 and 2023, yet it ended 2024 with more employees than at the end of any other year except 2021, which was higher by only a small margin. [15]
Other tech companies followed suit:
Salesforce’s CEO, Marc Benioff, says that 30-50% of the work at Salesforce is done by AI. [16] Salesforce eliminated ~1,000 roles this year.
Klarna’s CEO first announced that AI is doing the work of 700 customer service agents and fired about 2,000 employees, but then backtracked and rehired humans. [17]
Microsoft cut 15,000 jobs in 2025. CEO Satya Nadella said AI writes ~30% of new code in some projects.
Meta laid off 3,600 employees in 2025, with Mark Zuckerberg saying AI could be ready to be a mid-level engineer this year.
But is AI really replacing white collar workers, or is it used for good PR?
The layoff tracker layoffs.fyi shows that the number of companies conducting layoffs, and of employees laid off, has not grown since the pandemic.
Image Credit: Kevin Indig
A jobs report by CompTIA shows that while tech employment is slightly down between June 2023 and June 2025…[18]
Image Credit: Kevin Indig
…the number of job openings with AI skills far outpaces the number of listings for all roles.
Image Credit: Kevin Indig
In other words, “AI layoffs” seem more like a PR play or a justification for job cuts.
But upskilling with AI is critical.
Google Lawsuit Rushes Toward A Final Decision On Labor Day
The landmark lawsuit against Google for being an online search monopoly concludes by Labor Day (September 1). The DoJ asks for:
A mandatory divestiture of Chrome within a specified timeframe.
A five-year prohibition on Google owning any browser.
Termination of exclusive default agreements.
Extensive data sharing requirements.
The right to seek Android divestiture if behavioral remedies prove insufficient.
Google, on the other hand, agrees to end exclusive agreements, so we know Google and Apple will divorce, but opposes a Chrome divestiture and data sharing mandates.
The remedy ruling could have significant implications for the AI race, and for where marketers should place their money.
For example, a Chrome divestiture could significantly set Google back, as OpenAI and Perplexity launch their own browsers. It would also mean a material loss in user behavior data and agentic AI capabilities.
Losing the exclusive agreement with Apple could also mean more users set browsers other than Chrome as their default, if those browsers offer a strong benefit.
However, I personally think the most realistic outcome is a forced end to exclusive agreements and would be shocked to see a Chrome divestiture.
For context:
The Department of Justice has achieved two landmark antitrust victories against Google in 2024-2025, with federal judges ruling the tech giant operates illegal monopolies in both online search and digital advertising technology.
Both cases have now advanced to remedy phases where courts will determine whether to break up parts of Google’s business, representing the most aggressive government intervention in Big Tech since the Microsoft case 25 years ago.
Outlook For H2
The second half of 2025 will likely be defined by adaptation rather than resistance.
Companies that succeed will be those that foster trust beyond Google, build direct audience relationships, and upskill teams in AI.
Here’s what I expect for the second half of the year:
Accelerating Traffic Decline
Organic traffic losses will likely intensify as Google expands AI Overviews.
Publishers should prepare for further 20-30% traffic declines.
The “new normal” of 30% of historical traffic by 2026 could arrive sooner than expected.
AI Mode Launch
Google will likely roll out AI Mode more broadly, but cautiously.
Expect a heavy focus on monetization testing before wide release.
Watch for new ad formats optimized for conversational search.
Publisher Adaptation
More publishers will actively block Googlebot.
Increased focus on direct revenue streams (newsletters, memberships).
Potential consolidation as smaller publishers struggle to survive.
Measurement Evolution
New tools specifically for measuring LLM visibility will emerge.
Industry will start standardizing on key metrics for AI performance.
Greater emphasis on revenue vs. traffic as success metrics.
Market Restructuring
DoJ ruling could reshape the search landscape.
Expect new search entrants to gain traction.
Browser wars may reignite with AI-native options.
Featured Image: Paulo Bobita/Search Engine Journal
New research from Microsoft reveals that marketing and sales professionals are among the most affected by generative AI, based on an analysis of 200,000 real workplace conversations with Bing Copilot.
The research examined nine months of anonymized data from January to September 2024, offering a large-scale look at how professionals use AI in their daily tasks.
AI’s Role In Marketing & Sales Work
Microsoft calculated an “AI applicability score” to measure how often AI is used to complete or assist with job-related tasks and how effectively it performs those tasks.
Sales representatives received one of the highest scores (0.46), followed closely by writers and authors (0.45) and customer service representatives (0.44), along with other marketing roles like:
Technical Writers (0.38)
Public Relations Specialists (0.36)
Advertising Sales Agents (0.36)
Market Research Analysts (0.35)
Overall, “Sales and Related” occupations ranked highest in AI impact across all major job categories, followed by computing and administrative roles.
As Microsoft researchers note:
“The current capabilities of generative AI align most strongly with knowledge work and communication occupations.”
Tasks Where AI Performs Well
The study found AI is particularly effective at:
Gathering information
Writing and editing content
Communicating information to others
Supporting ongoing learning in a specific field
These tasks often show high success and satisfaction rates among users.
However, the study also uncovered that in 40% of conversations, the AI performed tasks different from what the user initially requested. For example, when someone asks for help with research, the AI might instead explain research methods rather than deliver information.
This reflects AI’s role as more of a helper than a replacement. As the researchers put it:
“The AI often acts in a service role to the human as a coach, advisor, or teacher.”
Areas Where Humans Still Excel
Some marketing tasks still show resistance to AI. These include:
Visual design and creative work
Strategic data analysis
Roles that require physical presence or in-person interaction, such as event marketing or client-based sales
These activities consistently scored lower for AI satisfaction and task completion.
Education, Wages & Job Security
The study found a weak correlation between AI impact and wages. The correlation coefficient was 0.07, indicating that AI is reshaping tasks across income levels, not just automating low-paying jobs.
For roles requiring a Bachelor’s degree, the average AI applicability score was slightly higher (0.27), compared to 0.19 for jobs with lower education requirements. This suggests knowledge work may see more AI involvement, but not necessarily replacement.
The researchers caution against assuming automation leads to job loss:
“This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive.”
What You Can Do
The data supports a clear takeaway: AI is here to stay, but it’s not taking over every aspect of marketing work.
Digital anthropologist Giles Crouch, quoted in coverage of the study, said:
“The conversation has gone from this fear of massive job loss to: How can we get real benefit from these tools? How will it make our work better?”
There are a few ways marketing professionals can adapt, such as:
Sharpening skills in areas where AI falls short, such as visual creativity and strategic interpretation
Using AI as a productivity booster for content creation and information gathering
Positioning themselves as AI collaborators rather than competitors
Looking Ahead
AI is reshaping marketing by changing how work gets done, not by eliminating roles.
As with past technological changes, those who adapt and integrate these tools into their workflow may find themselves better positioned for long-term success.
The full report includes a detailed breakdown of occupations and task types across the U.S. workforce.
You’ve heard the predictions: AI will replace SEO, generative search will eliminate organic traffic, and marketers should start updating their resumes.
With 73% of marketing teams using generative AI, it’s easy to assume we’re witnessing SEO’s funeral.
Here’s what’s actually happening: AI isn’t replacing SEO. It’s expanding SEO into new territories with bigger opportunities.
While Google’s AI Overviews and tools like ChatGPT are changing how people find information, they’re also creating new ways for your content to get discovered, cited, and trusted by millions of searchers.
The game isn’t ending. You just need to learn the new rules.
How AI Search Actually Works (And Where Your Content Fits)
Generative search doesn’t eliminate the need for quality content; it amplifies it.
When someone asks ChatGPT about email marketing or searches with Google’s AI features, these systems scan thousands of webpages to synthesize comprehensive answers.
Your content isn’t competing for traditional rankings anymore. You’re competing to become the authoritative source that AI systems pull from when generating responses.
The Citation Game
Here’s what most marketers miss: AI systems still cite their sources.
Google’s AI Overviews include links to referenced websites, and ChatGPT and Perplexity provide source citations.
Getting featured as a cited source can drive more qualified traffic than a traditional No. 1 ranking because users already know your content contributed to the answer they received.
Google AIO Citation Example:
Screenshot from search for [email marketing courses beginners must try], Google, July 2025
ChatGPT Citation Example:
Screenshot from ChatGPT, July 2025
What AI systems look for in sources:
Factual accuracy and reliability (they cross-reference information).
Recent statistics and up-to-date insights.
Clear, scannable structure with well-defined sections.
From Rankings To Retrieval
Traditional SEO targeted specific keyword rankings. AI search introduces “retrieval” – your content gets pulled into responses for queries you never directly optimized for.
Your comprehensive project management guide might get cited when someone asks, “How can I keep my remote team organized without micromanaging?” even though you never targeted that exact phrase.
Optimizing for retrieval requires a different mindset than traditional keyword targeting.
Create content that covers topics from multiple angles rather than focusing on single keyword phrases.
Structure your articles around the actual questions your audience asks, using headings that mirror real user queries.
Build comprehensive topic clusters that demonstrate your expertise across related subjects, showing AI systems that you’re a reliable source for broad topic coverage.
The SEO Fundamentals That Still Matter (With New Twists)
AI systems are far less forgiving than Google’s crawlers.
While Google’s bots can render JavaScript, handle errors gracefully, and work around technical issues, most AI agents simply fetch raw HTML and move on.
If they find an empty page, wrong HTTP status, or tangled markup, they won’t see your content at all.
This makes technical SEO non-negotiable for AI visibility. Server-side rendering becomes absolutely critical since AI agents won’t execute JavaScript or wait for client-side rendering.
Your content must be immediately visible in raw HTML.
Clean, semantic markup with valid HTML and proper heading hierarchy helps AI systems parse content accurately, while efficient delivery ensures AI agents don’t abandon slow or bloated sites.
AI bot requirements:
Allow AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) through robots.txt.
Whitelist AI bot IP ranges rather than blocking with firewalls.
Ensure critical content loads without JavaScript dependencies.
Avoid “noindex” and “nosnippet” tags on valuable content.
Optimize server response times for efficient content retrieval.
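The robots.txt requirement above can be checked programmatically. Below is a minimal sketch using Python's standard-library robots.txt parser against a hypothetical robots.txt; the domain and paths are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that allows the major AI crawlers
# while blocking a private section for everyone else.
robots_txt = """\
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Each AI bot should be able to fetch public content.
for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    print(bot, parser.can_fetch(bot, "https://example.com/blog/post"))
```

Running a check like this against your live robots.txt is a quick way to confirm you haven't accidentally locked out the crawlers you want.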
An llms.txt file could direct AI models to your best content during inference. Place this plain text file at your domain root using proper markdown structure, including only your highest-value, well-structured content that answers specific questions.
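Under the (still informal) llms.txt proposal, the file is plain markdown: an H1 title, a blockquote summary, then sections of curated links. A hypothetical sketch with an invented site name and URLs:

```markdown
# Example Site

> A hypothetical marketing blog covering SEO and AI search.

## Guides

- [AI Search Basics](https://example.com/ai-search-basics): How generative engines retrieve and cite content
- [Technical SEO Checklist](https://example.com/technical-seo): Server-side rendering and clean markup

## Research

- [2025 Citation Study](https://example.com/citation-study): Original data on AI citations
```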
Content Strategy For AI Citations
Your content strategy needs a fundamental shift. Instead of writing for search engine rankings, you’re creating content that feeds AI knowledge bases.
The key to successful retrieval optimization is leading with clear, definitive answers to specific questions.
When addressing common queries like [how long do SEO results take?], start immediately with “SEO results typically appear within three to six months for new websites.”
Break complex topics into digestible, extractable sections that include comprehensive explanations with supporting context.
AI systems favor content that provides complete answers rather than surface-level information, so include relevant data and statistics that can be easily identified and cited.
AI systems don’t retrieve entire pages; they break content into passages or “chunks” and extract the most relevant segments.
This means each section of your content should work as a standalone snippet that’s independently understandable.
Keep one focused idea per section, staying tightly concentrated on single concepts.
Use structured HTML with clear H2 and H3 subheadings for every subtopic, making passages semantically tight and self-contained.
Start each section with direct, concise sentences that immediately address the core point.
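The chunking behavior described above can be sketched in code. This is a simplified illustration, assuming a page where every subtopic sits under an H2 or H3 heading; real retrieval pipelines use full HTML parsers, but the splitting logic is the same idea:

```python
import re

# A toy page with one focused idea per H2/H3 section.
html = """
<h2>What is retrieval?</h2>
<p>Retrieval pulls relevant passages into AI answers.</p>
<h3>Why chunks matter</h3>
<p>Each section should stand alone as a snippet.</p>
<h2>How long do SEO results take?</h2>
<p>SEO results typically appear within three to six months.</p>
"""

# Split the page into (heading, body) chunks, one per H2/H3 section,
# so each passage is independently understandable.
pattern = re.compile(r"<h[23]>(.*?)</h[23]>", re.S)
parts = pattern.split(html)
chunks = []
for heading, body in zip(parts[1::2], parts[2::2]):
    text = re.sub(r"<[^>]+>", "", body).strip()
    chunks.append({"heading": heading.strip(), "text": text})

for c in chunks:
    print(c["heading"], "->", c["text"][:40])
```

If a section only makes sense as a chunk when read together with its neighbors, that is a signal to restructure it.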
Building topical authority requires understanding how Google’s AI uses “query fan-out” techniques.
Complex queries get automatically broken into multiple related subqueries and executed in parallel, rewarding sites with both topical breadth and depth.
Create comprehensive pillar pages that summarize main topics with strategic links to deeper cluster content.
Develop cluster pages targeting specific facets of your expertise, then cross-link between related cluster pages to establish semantic relationships.
Cover diverse angles and intents to increase your content’s surface area for AI retrieval across multiple query variations.
Working With AI Systems, Not Against Them
The most successful marketers are learning to optimize for AI inclusion rather than fighting against machine-generated answers.
Optimizing For AI Summaries
Structure your content so AI systems can’t ignore it by leading with clear answers and using scannable formatting.
Include concrete data and statistics that make content citation-worthy, and implement schema markup like FAQ, how-to, and article schemas to help AI understand your content structure.
Key formatting elements that AI systems prefer:
Bullet points and numbered lists for easy parsing.
Clear subheadings that mirror actual user questions.
Natural language Q&A format throughout the content.
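The FAQ schema mentioned earlier follows the schema.org FAQPage format. Here is a minimal sketch that builds the JSON-LD payload; the question and answer text are hypothetical examples, not any specific site's content:

```python
import json

# FAQPage JSON-LD per schema.org. The Q&A content is illustrative only.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long do SEO results take?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "SEO results typically appear within three to six "
                        "months for new websites.",
            },
        }
    ],
}

# This string would go inside a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```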
Building citation-worthy authority requires meeting higher trust and clarity standards than basic content inclusion.
AI systems prioritize content perceived as factually accurate, up-to-date, and authoritative. Include specific, verifiable claims with source citations that link to studies and expert sources.
Refresh key content regularly with timestamps to signal updated information, and consider publishing original research, surveys, or industry studies that journalists and bloggers reference.
AI search systems increasingly retrieve and synthesize content beyond text, including images, charts, tables, and videos. This creates opportunities for more engaging, scannable answers.
Ensure images and videos are crawlable by avoiding JavaScript-only rendering, and use descriptive alt text that includes topic context for all images.
Add explanatory captions directly below or beside visual elements, and use proper HTML table markup such as <table>, <th>, and <td> instead of images of tables to support AI bot parsing.
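As a sketch of table markup that AI bots can parse (the figures shown are illustrative, not claims from this article):

```html
<!-- A real HTML table, instead of a screenshot of one -->
<table>
  <caption>Typical SEO result timelines (illustrative figures)</caption>
  <thead>
    <tr><th>Site age</th><th>Time to results</th></tr>
  </thead>
  <tbody>
    <tr><td>New website</td><td>3 to 6 months</td></tr>
    <tr><td>Established site</td><td>1 to 3 months</td></tr>
  </tbody>
</table>
```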
Monitor Your AI Presence
Traditional rank tracking won’t show your full search visibility anymore. You need to track how AI platforms reference your content across different systems.
Set up Google Alerts for your brand and key topics you cover to catch when AI systems cite your content in their responses.
Regularly check Perplexity AI, ChatGPT, and Google’s AI Overviews for appearances of your content, and screenshot these citations since they’re becoming your new success metrics.
Don’t just monitor your brand presence. Track how competitors appear in AI summaries to understand what type of content AI engines prefer.
This competitive intelligence helps you adjust your strategy based on what’s actually getting cited.
Pay attention to the context around your citations, too. AI engines sometimes present information differently than you intended, and that provides valuable feedback for refining how you frame information in future content.
The Future Of SEO Is Bigger, Not Smaller
SEO isn’t shrinking. It’s expanding into a multi-platform opportunity. Your content can now appear in traditional search results, AI Overviews, chatbot responses, and voice search answers.
Skills That Matter Most
The SEOs thriving in this new landscape are developing expertise in data analysis to understand how different AI systems crawl and categorize content.
Multi-platform optimization has become essential, requiring the ability to write for Google, ChatGPT, Perplexity, and emerging AI tools simultaneously.
Advanced technical skills around implementing schema markup that actually helps AI understanding are increasingly valuable, along with content strategy integration that aligns SEO with broader content marketing and brand positioning efforts.
As AI makes search more complex, companies need expert guidance to navigate multiple platforms and opportunities.
The brands trying to handle this evolution internally often get left behind while their competitors appear across every AI-powered search experience.
SEO leaders today aren’t just optimizing websites; they’re building strategies that work across traditional and generative search platforms, tracking brand mentions in AI search, and ensuring their companies stay visible as search continues evolving.
Your Next Steps
The shift to AI-powered search isn’t a threat; it’s a call to expand your reach.
Start by auditing your current content for AI citation potential, asking whether it answers specific questions clearly and directly.
Create topic clusters that demonstrate deep expertise in your field.
Monitor AI platforms for mentions of your brand and competitors.
Update older content with fresh data and improved structure for AI retrieval.
The brands dominating tomorrow’s search landscape are adapting now.
Your SEO skills aren’t becoming obsolete; they’re becoming more valuable as companies need experts who can navigate both traditional rankings and AI-generated responses.
The game hasn’t ended. It just got more interesting.
Aleyda Solís conducted an experiment to test how fast ChatGPT indexes a web page and unexpectedly discovered that ChatGPT appears to use Google’s search results as a fallback for web pages that it cannot access or that are not yet indexed on Bing.
According to Aleyda:
I’ve run a simple but straightforward to follow test that confirms the reliance of ChatGPT on Google SERPs snippets for its answers.
Created A New Web Page, Not Yet Indexed
Aleyda created a brand new page (titled “LLMs.txt Generators”) on her website, LearningAISearch.com. She immediately tested ChatGPT (with web search enabled) to see if it could access or locate the page, but ChatGPT failed to find it. ChatGPT responded with the suggestion that the URL was not publicly indexed or possibly outdated.
She then asked Google Gemini about the web page, which successfully fetched and summarized the live page content.
Submitted Web Page For Indexing
She next submitted the web page for indexing via Google Search Console and Bing Webmaster Tools. Google successfully indexed the web page, but Bing had problems with it.
After several hours elapsed, Google started showing results for the page with the site: operator and with a direct search for the URL. But Bing continued to have trouble indexing the web page.
Checked ChatGPT Until It Used Google Search Snippet
Aleyda went back to ChatGPT, and after several tries it gave her an incomplete summary of the page content, mentioning just one tool that was listed on it. When she asked ChatGPT for the origin of that incomplete snippet, it responded that it was using a “cached snippet via web search,” likely from “search engine indexing.”
She confirmed that the snippet shown by ChatGPT matched Google’s search result snippet, not Bing’s (which still hadn’t indexed it).
Aleyda explained:
“A snippet from where?
When I followed up asking where was that snippet they grabbed the information being shown, the answer was that it had “located a cached snippet via web search that previews the page content – likely from search engine indexing.”
But I knew the page wasn’t indexed yet in Bing, so it had to be … Google search results? I went to check.
When I compared the text snippet provided by ChatGPT vs the one shown in Google Search Results for the specific Learning AI Search LLMs.txt Generators page, I could confirm it was the same information…”
Proof That Traditional SEO Remains Relevant For AI Search
Aleyda also documented what happened on a LinkedIn post where Kyle Atwater Morley shared his observation:
“So ChatGPT is basically piggybacking off Google snippets to generate answers?
What a wake-up call for anyone thinking traditional SEO is dead.”
Stéphane Bureau shared his opinion on what’s going on:
“If Bing’s results are insufficient, it appears to fall back to scraping Google SERP snippets.”
He elaborated on his post with more details later on in the discussion:
“Based on current evidence, here’s my refined theory:
When browsing is enabled, ChatGPT sends search requests via Bing first (as seen in DevTools logs).
However, if Bing’s results are insufficient or outdated, it appears to fall back to scraping Google SERP snippets—likely via an undocumented proxy or secondary API.
This explains why some replies contain verbatim Google snippets that never appear in Bing API responses.
I’ve seen multiple instances that align with this dual-source behavior.”
Takeaway
ChatGPT was initially unable to access the page directly, and it was only after the page began to appear in Google’s search results that it was able to respond to questions about the page. Once the snippet appeared in Google’s search results, ChatGPT began referencing it, revealing a reliance on publicly visible Google Search snippets as a fallback when the same data is unavailable in Bing.
What would be interesting to see is whether the server logs held a clue as to whether ChatGPT attempted to crawl the page and, if so, what error code was returned. It's curious that ChatGPT was unable to retrieve the page, and while that detail probably has no bearing on the conclusions, confirming it would make them feel more complete.
Nevertheless, it appears that this is yet more proof that standard SEO is still applicable for AI-powered search, including for ChatGPT Search. This adds to recent comments by Gary Illyes confirming that there is no need for specialized GEO or AEO in order to rank well in Google AI Overviews and AI Mode.
Google has launched Web Guide, an experimental feature in Search Labs that uses AI to reorganize search results pages.
The goal is to help you find information by grouping related links together based on the intent behind your query.
What Is Web Guide?
Web Guide replaces the traditional list of search results with AI-generated clusters. Each group focuses on a different aspect of your query, making it easier to dive deeper into specific areas.
According to Austin Wu, Group Product Manager for Search at Google, Web Guide uses a custom version of Gemini to understand both your query and relevant web content. This allows it to surface pages you might not find through standard search.
Here are some examples provided by Google:
Screenshot from labs.google.com/search/experiment/34, July 2025.
How It Works
Behind the scenes, Web Guide uses the familiar “query fan-out” technique.
Instead of running one search, it issues multiple related queries in parallel. It then analyzes and organizes the results into categories tailored to your search intent.
This approach gives you a broader overview of a topic, helping you learn more without needing to refine your query manually.
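The fan-out step described above can be sketched as parallel subqueries against a search backend. This is a minimal illustration with a stubbed-in index (the query-to-results mapping is invented), not Google's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub search backend standing in for a real index; the data is
# illustrative only.
FAKE_INDEX = {
    "japan transportation": ["JR Pass guide", "Tokyo metro basics"],
    "japan accommodations": ["Hostel roundup", "Ryokan etiquette"],
    "japan etiquette": ["Tipping customs", "Onsen rules"],
}

def search(subquery):
    return FAKE_INDEX.get(subquery, [])

def fan_out(root_query, subqueries):
    # Issue the related subqueries in parallel, then group the results
    # into one cluster per subquery, mirroring how Web Guide organizes
    # a results page.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, subqueries))
    return dict(zip(subqueries, results))

clusters = fan_out(
    "how to solo travel in Japan",
    ["japan transportation", "japan accommodations", "japan etiquette"],
)
print(clusters)
```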
When Web Guide Helps
Google says Web Guide is most useful in two situations:
Exploratory searches: For example, “how to solo travel in Japan” might return clusters for transportation, accommodations, etiquette, and must-see places.
Multi-part questions: A query like “How to stay close with family across time zones?” could bring up tools for scheduling, video calls, and relationship tips.
In both cases, Web Guide aims to support deeper research, not just quick answers.
How To Try It
Web Guide is available through Search Labs for users who've opted in. You can access it by selecting the Web tab in Search, and you can switch back to standard results at any time.
Over time, Google plans to test AI-organized results in the All tab and other parts of Search based on user feedback.
How Web Guide Differs From AI Mode
While Web Guide and AI Mode both use Google’s Gemini model and similar technologies like query fan-out, they serve different functions within Search.
Web Guide is designed to reorganize traditional search results. It clusters existing web pages into groups based on different aspects of your query, helping you explore a topic from multiple angles without generating new content.
AI Mode provides a conversational, AI-generated response to your query. It can break down complex questions into subtopics, synthesize information across sources, and present a summary or interactive answer box. It also supports follow-up questions and features like Deep Search for more in-depth exploration.
In short, Web Guide focuses on how results are presented, while AI Mode changes how answers are generated and delivered.
Looking Ahead
Web Guide reflects Google’s continued shift away from the “10 blue links” model. It follows features like AI Overviews and AI Mode, which aim to make search more dynamic.
Because Web Guide is still a Labs feature, its future depends on how people respond to it. Google is taking a gradual rollout approach, watching how it affects the user experience.
If adopted more broadly, this kind of AI-driven organization could reshape how people find your content, and how you need to optimize for it.
Featured Image: Screenshot from labs.google.com/search/experiment/34, July 2025.
Google unveiled three new shopping features today that use AI to enhance the way people discover and buy products.
The updates include a virtual try-on tool for clothing, more flexible price tracking alerts, and an upcoming visual style inspiration feature powered by AI.
Virtual Try-On Now Available Nationwide
Following a limited launch in Search Labs, Google’s virtual try-on tool is now available to all U.S. searchers.
The feature lets you upload a full-length photo and use AI to see how clothing items might look on your body. It works across Google Search, Shopping, and even product results in Google Images.
Tap the “try it on” icon on an apparel listing, upload a photo, and you’ll receive a visualization of yourself wearing the item. You can also save favorite looks, revisit past try-ons, and share results with others.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
The tool draws from billions of apparel items in its Shopping Graph, giving shoppers a wide range of options to explore.
Smarter Price Alerts
Google is also rolling out an enhanced price tracking feature for U.S. shoppers.
You can now set alerts based on specific criteria like size, color, and target price. This update makes it easier to track deals that match your exact preferences.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
AI-Powered Style Inspiration Arrives This Fall
Later in 2025, Google plans to launch a new shopping experience within AI Mode, offering outfit and room design inspiration based on your query.
This feature uses Google’s vision match technology and taps into 50 billion products indexed in the Shopping Graph.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
What This Means for E-Commerce Marketers
These updates carry a few implications for marketers and online retailers:
Improve Product Images: With virtual try-on now live, high-quality and standardized apparel images are more likely to be included in AI-driven displays.
Competitive Pricing Matters: The refined price alert system could influence purchase behavior, especially as consumers gain more control over how they track product deals.
Optimize for Visual Search: The upcoming inspiration features suggest a growing role for visual-first shopping. Retailers should ensure their product feeds contain rich attribute data that helps Google’s systems surface relevant items.
Looking Ahead
Google’s suite of AI-powered shopping features can help create more personalized and interactive retail experiences.
For search marketers, these tools offer new ways to engage, but also raise the bar in terms of presentation and data quality.
For e-commerce teams, staying competitive may require rethinking how products are priced, presented, and positioned within Google’s growing suite of AI-enhanced tools.
If you spend time in SEO circles lately, you’ve probably heard query fan-out used in the same breath as semantic SEO, AI content, and vector-based retrieval.
It sounds new, but it’s really an evolution of an old idea: a structured way to expand a root topic into the many angles your audience (and an AI) might explore.
If this all sounds familiar, it should. Marketers have been digging for this depth since “search intent” became a thing years ago. The concept isn’t new; it just has fresh buzz, thanks to GenAI.
Like many SEO concepts, fan-out has picked up hype along the way. Some people pitch it as a magic arrow for modern search (it’s not).
Others call it just another keyword clustering trick dressed up for the GenAI era.
The truth, as usual, sits in the middle: Query fan-out is genuinely useful when used wisely, but it doesn’t magically solve the deeper layers of today’s AI-driven retrieval stack.
This guide sharpens that line. We’ll break down what query fan-out actually does, when it works best, where its value runs out, and which extra steps (and tools) fill in the critical gaps.
If you want a full workflow from idea to real-world retrieval, this is your map.
What Query Fan-Out Really Is
Most marketers already do some version of this.
You start with a core question like “How do you train for a marathon?” and break it into logical follow-ups: “How long should a training plan be?”, “What gear do I need?”, “How do I taper?” and so on.
In its simplest form, that’s fan-out. A structured expansion from root to branches.
Where today’s fan-out tools step in is the scale and speed; they automate the mapping of related sub-questions, synonyms, adjacent angles, and related intents. Some visualize this as a tree or cluster. Others layer on search volumes or semantic relationships.
Think of it as the next step after the keyword list and the topic cluster. It helps you make sure you’re covering the terrain your audience, and the AI summarizing your content, expects to find.
Why Fan-Out Matters For GenAI SEO
This piece matters now because AI search and agent answers don’t pull entire pages the way a blue-link result once did.
Instead, they break your page into chunks: small, context-rich passages that answer precise questions.
This is where fan-out earns its keep. Each branch on your fan-out map can be a stand-alone chunk. The more relevant branches you cover, the deeper your semantic density, which can help with:
1. Strengthening Semantic Density
A page that touches only the surface of a topic often gets ignored by an LLM.
If you cover multiple related angles clearly and tightly, your chunk looks stronger semantically. More signals tell the AI that this passage is likely to answer the prompt.
2. Improving Chunk Retrieval Frequency
The more distinct, relevant sections you write, the more chances you create for an AI to pull your work. Fan-out naturally structures your content for retrieval.
3. Boosting Retrieval Confidence
If your content aligns with more ways people phrase their queries, it gives an AI more reason to trust your chunk when summarizing. This doesn’t guarantee retrieval, but it helps with alignment.
4. Adding Depth For Trust Signals
Covering a topic well shows authority. That can help your site earn trust, which nudges retrieval and citation in your favor.
Fan-Out Tools: Where To Start Your Expansion
Query fan-out is practical work, not just theory.
You need tools that take a root question and break it into every related sub-question, synonym, and niche angle your audience (or an AI) might care about.
A solid fan-out tool doesn’t just spit out keywords; it shows connections and context, so you know where to build depth.
Below are reliable, easy-to-access tools you can plug straight into your topic research workflow:
AnswerThePublic: The classic question cloud. Visualizes what, how, and why people ask around your seed topic.
AlsoAsked: Builds clean question trees from live Google People Also Ask data.
Frase: Topic research module clusters root queries into sub-questions and outlines.
Keyword Insights: Groups keywords and questions by semantic similarity, great for mapping searcher intent.
Semrush Topic Research: Big-picture tool for surfacing related subtopics, headlines, and question ideas.
Answer Socrates: Fast People Also Ask scraper, cleanly organized by question type.
LowFruits: Pinpoints long-tail, low-competition variations to expand your coverage deeper.
WriterZen: Topic discovery clusters keywords and builds related question sets in an easy-to-map layout.
If you’re short on time, start with AlsoAsked for quick trees or Keyword Insights for deeper clusters. Both deliver instant ways to spot missing angles.
Now, having a clear fan-out tree is only step one. Next comes the real test: proving that your chunks actually show up where AI agents look.
Where Fan-Out Stops Working Alone
So, fan-out is helpful. But it’s only the first step. Some people stop here, assuming a complete query tree means they’ve future-proofed their work for GenAI. That’s where the trouble starts.
Fan-out does not verify if your content is actually getting retrieved, indexed, or cited. It doesn’t run real tests with live models. It doesn’t check if a vector database knows your chunks exist. It doesn’t solve crawl or schema problems either.
Put plainly: Fan-out expands the map. But a big map is worthless if you don’t check the roads, the traffic, or whether your destination is even open.
The Practical Next Steps: Closing The Gaps
Once you’ve built a great fan-out tree and created solid chunks, you still need to make sure they work. This is where modern GenAI SEO moves beyond traditional topic planning.
The key is to verify, test, and monitor how your chunks behave in real conditions.
Image Credit: Duane Forrester
Below is a practical list of the extra work that brings fan-out to life, with real tools you can try for each piece.
1. Chunk Testing & Simulation
You want to know: “Does an LLM actually pull my chunk when someone asks a question?” Prompt testing and retrieval simulation give you that window.
Tools you can try:
LlamaIndex: Popular open-source framework for building and testing RAG pipelines. Helps you see how your chunked content flows through embeddings, vector storage, and prompt retrieval.
Otterly: Practical, non-dev tool for running live prompt tests on your actual pages. Shows which sections get surfaced and how well they match the query.
Perplexity Pages: Not a testing tool in the strict sense, but useful for seeing how a real AI assistant surfaces or summarizes your live pages in response to user prompts.
2. Vector Index Presence
Your chunk must live somewhere an AI can access. In practice, that means storing it in a vector database.
Running your own vector index is how you test that your content can be cleanly chunked, embedded, and retrieved using the same similarity search methods that larger GenAI systems rely on behind the scenes.
You can’t see inside another company’s vector store, but you can confirm your pages are structured to work the same way.
Tools to help:
Weaviate: Open-source vector DB for experimenting with chunk storage and similarity search.
Pinecone: Fully managed vector storage for larger-scale indexing tests.
Qdrant: Good option for teams building custom retrieval flows.
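Before reaching for a hosted vector DB, the mechanics are easy to prototype. Below is a toy in-memory "vector index" using bag-of-words vectors and cosine similarity; real systems use learned embeddings, but the chunk-storage-and-retrieval flow is the same shape:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words vector. Real systems use learned
    # embeddings, but the similarity-search mechanics are the same.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A minimal in-memory index of content chunks (illustrative text).
chunks = [
    "SEO results typically appear within three to six months",
    "Server-side rendering keeps content visible in raw HTML",
    "FAQ schema helps AI systems understand content structure",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query, k=1):
    # Rank every stored chunk by similarity to the query vector.
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(retrieve("how long until SEO results appear"))
```

If your chunks retrieve cleanly in a setup like this, they are structured the way larger similarity-search systems expect.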
3. Retrieval Confidence Checks
How likely is your chunk to win out against others?
This is where prompt-based testing and retrieval scoring frameworks come in.
They help you see whether your content is actually retrieved when an LLM runs a real-world query, and how confidently it matches the intent.
Tools worth looking at:
Ragas: Open-source framework for scoring retrieval quality. Helps test if your chunks return accurate answers and how well they align with the query.
Haystack: Developer-friendly RAG framework for building and testing chunk pipelines. Includes tools for prompt simulation and retrieval analysis.
Otterly: Non-dev tool for live prompt testing on your actual pages. Shows which chunks get surfaced and how well they match the prompt.
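The idea behind these retrieval checks can be sketched with a crude stand-in metric: the fraction of query terms that appear in the chunk you expect to be retrieved. This is not the Ragas metric, just an illustration of the pass/fail style of test (the queries and chunks are invented):

```python
def overlap_score(query, chunk):
    # Crude confidence proxy: fraction of query terms present in the
    # chunk. Real frameworks use far richer metrics; this just
    # illustrates the shape of a retrieval check.
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q)

test_cases = [
    # (query, the chunk we expect an AI system to retrieve)
    ("how long do seo results take",
     "seo results typically take three to six months to appear"),
    ("does chatgpt execute javascript",
     "most ai agents fetch raw html and do not execute javascript"),
]

for query, expected_chunk in test_cases:
    score = overlap_score(query, expected_chunk)
    status = "PASS" if score >= 0.5 else "FAIL"
    print(f"{status} ({score:.2f}): {query}")
```

A failing case flags a chunk that does not echo the language of the query it is supposed to answer, which is exactly the gap to fix.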
4. Technical & Schema Health
No matter how strong your chunks are, they’re worthless if search engines and LLMs can’t crawl, parse, and understand them.
Tools for these audits:
Ryte: Detailed crawl reports, structural audits, and deep schema validation; excellent for finding markup or rendering gaps.
Screaming Frog: Classic SEO crawler for checking headings, word counts, duplicate sections, and link structure, all cues that affect how chunks are parsed.
Sitebulb: Comprehensive technical SEO crawler with robust structured data validation, clear crawl maps, and helpful visuals for spotting page-level structure problems.
5. Authority & Trust Signals
Even if your chunk is technically solid, an LLM still needs a reason to trust it enough to cite or summarize it.
That trust comes from clear authorship, brand reputation, and external signals that prove your content is credible and well-cited. These trust cues must be easy for both search engines and AI agents to verify.
Tools to back this up:
Authory: Tracks your authorship, keeps a verified portfolio, and monitors where your articles appear.
SparkToro: Helps you find where your audience spends time and who influences them, so you can grow relevant citations and mentions.
Perplexity Pro: Lets you check whether your brand or site appears in AI answers, so you can spot gaps or new opportunities.
Query fan-out expands the plan. Retrieval testing proves it works.
Putting It All Together: A Smarter Workflow
When someone asks, “Does query fan-out really matter?” the answer is yes, but only as a first step.
Use it to design a strong content plan and to spot angles you might miss. But always connect it to chunk creation, vector storage, live retrieval testing, and trust-building.
Here’s how that looks in order:
Expand: Use fan-out tools like AlsoAsked or AnswerThePublic.
Draft: Turn each branch into a clear, stand-alone chunk.
Check: Run crawls and fix schema issues.
Store: Push your chunks to a vector DB.
Test: Use prompt tests and RAG pipelines.
Monitor: See if you get cited or retrieved in real AI answers.
Refine: Adjust coverage or depth as gaps appear.
The Bottom Line
Query fan-out is a valuable input, but it’s never been the whole solution. It helps you figure out what to cover, but it does not prove what gets retrieved, read, or cited.
As GenAI-powered discovery keeps growing, smart marketers will build that bridge from idea to index to verified retrieval. They’ll map the road, pave it, watch the traffic, and adjust the route in real time.
So, next time you hear fan-out pitched as a silver bullet, you don’t have to argue. Just remind people of the bigger picture: The real win is moving from possible coverage to provable presence.
If you do that work (with the right checks, tests, and tools), your fan-out map actually leads somewhere useful.